I admit it’s probably still too early to be giving tutorials on how to use DirectCanvas. I’m sure it’ll be outdated by the time the weekend is over! But I do think it is far enough along to show how to fire up the library and render something.
The Pixel Shader Sample
The Boiler Plate
“Oh nos! Boiler plate?! I thought this crap was supposed to be easy to use! I hate you.”
There’s no getting around it. You have to write a few lines of code before you get started, but I promise it’s nothing much to worry about.
So far we’ve had to create our DirectCanvasFactory. As it stands now, all resources created with a single instance of a DirectCanvasFactory can only be used with other resources created from that same factory.
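As a rough sketch (the parameterless constructor here is an assumption — check the sample source for the exact signature), spinning up the factory looks something like this:

```csharp
// The factory owns all DirectCanvas resources. Anything created from
// this instance can only be mixed with resources from this same instance.
var factory = new DirectCanvasFactory();
```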
We’ve also created a WindowsFormsPresenter. A Presenter is a special DrawingLayer made to present rendered content to various UI technologies — in this sample, it happens to be Windows Forms. If we were to render to WPF instead, it would look like this:
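A sketch of the Windows Forms case — the constructor arguments are guesses, and the WPF variant is only a placeholder name since this sample shows WindowsFormsPresenter alone:

```csharp
// Windows Forms: tie the presenter to the factory and a host control.
// (Hypothetical signature -- consult the sample for the real one.)
var presenter = new WindowsFormsPresenter(factory, myForm);

// For WPF, you would swap in the WPF-flavored presenter class instead;
// the rest of the drawing code stays the same.
```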
When we are done drawing, presenting the rendering to whatever technology the Presenter is tied to is as simple as this:
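The Present call is named later in this post; the receiver variable here is just the presenter we created above:

```csharp
// Push the finished frame out to the screen (Windows Forms in this sample).
presenter.Present();
```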
Now what? I want to draw something!
So now on to the juicy stuff! This is where the PixelShaderSource.cs code comes into play. If you look at the constructor, you will notice it takes a DrawingLayer parameter.
The presenter we created earlier inherits from DrawingLayer, so we had no problem passing it to our PixelShaderScene class. Reading the second line of code, you may be wondering, “What is this InitializeResources method?”
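A minimal sketch of that constructor, assuming the field name m_mainLayer used later in this post:

```csharp
public class PixelShaderScene
{
    private readonly DrawingLayer m_mainLayer;

    // The presenter inherits from DrawingLayer, so it can be passed
    // straight in as the layer we will ultimately render to.
    public PixelShaderScene(DrawingLayer mainLayer)
    {
        m_mainLayer = mainLayer;
        InitializeResources();
    }
}
```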
The InitializeResources method sets up the resources we wish to use for rendering; in this case, they are all meant to exist for the life of the application. Here we actually create two drawing layers: one is a place to render our initial drawing, and the other holds an image we load from the file system.
One other important thing to notice in this code snippet is the RippleEffect. This class uses the extensible shader API in DirectCanvas. The original shader code came from a CodePlex project and needed only slight modification when ported over (a process that took about five minutes in all).
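A sketch of what InitializeResources might look like — the factory method names, the image-layer field name, and the file path are all stand-ins rather than the library’s actual API:

```csharp
private DrawingLayer m_tempLayer;    // scratch target for our initial drawing
private DrawingLayer m_imageLayer;   // holds an image loaded from disk
private RippleEffect m_rippleEffect; // built on DirectCanvas' shader API

private void InitializeResources()
{
    // Two long-lived layers: one to draw into, one filled from a file.
    // (CreateDrawingLayer/CreateDrawingLayerFromFile are guessed names.)
    m_tempLayer  = m_factory.CreateDrawingLayer(width, height);
    m_imageLayer = m_factory.CreateDrawingLayerFromFile("image.jpg");

    // The ripple shader ported from the CodePlex project.
    m_rippleEffect = new RippleEffect();
}
```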
Cool, we created ‘resources’. But you never showed me how to draw!
In the same class, you will find a method called Render. It is called from a simple timer in the main application; how the method gets invoked isn’t important, so I won’t say anything more about it!
You will notice that we have a Begin/EndDraw pattern on our DrawingLayer. This not only mirrors the underlying graphics libraries, it is also a necessary evil for a high-performance API: BeginDraw sets up any state or resources needed for drawing, while EndDraw typically flushes the queued drawing commands, doing its best to batch the work for the GPU. The DrawingLayer also has BeginCompose/EndCompose methods, but I’ll save those for another tutorial.
The DrawLayer method called in between Begin/EndDraw draws its contents into the given target rectangle. This ensures our image always fills m_tempLayer entirely, even warping it if it has to.
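The pattern might look like the following sketch; the DrawLayer argument list and the rectangle type are assumptions on my part:

```csharp
m_tempLayer.BeginDraw();   // set up state/resources needed for drawing

// Stretch the image layer to fill m_tempLayer's full bounds,
// warping the aspect ratio if it has to.
m_tempLayer.DrawLayer(m_imageLayer,
    new RectangleF(0, 0, m_tempLayer.Width, m_tempLayer.Height));

m_tempLayer.EndDraw();     // flush and batch the queued commands to the GPU
```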
Also notice the ApplyEffect method and how you must specify an output layer. You always need an output layer! This is a restriction of how the GPU works: effects are applied by rendering a copy of the source to a destination. In this case the output is our m_mainLayer, which also happens to be the presenter we initialized in our boilerplate. That means a Present just needs to be called…and BAP! GPU ripples!
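The final step can be sketched like this — the receiver and argument order of ApplyEffect are guesses, so check the sample for the real signature:

```csharp
// Effects render a copy of the source into a destination, so an
// output layer is mandatory. Here the output is m_mainLayer, which
// is also our presenter.
m_tempLayer.ApplyEffect(m_rippleEffect, m_mainLayer);

// m_mainLayer is the presenter, so presenting is all that's left.
presenter.Present();
```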