Non-Realistic Rendering Sample
This sample shows how to implement stylized non-photorealistic rendering techniques such as cartoon shading, edge detection, and a pencil sketch effect.
Sample Overview
The sample provides three specialized rendering techniques.
- Toon shading displays models using a banded lighting technique. Rather than the usual smooth gradients from light to dark, it uses just three discrete levels of brightness, with sudden transitions where the object goes from light into shadow.
- Edge detection adds black lines around the silhouette of the model.
- The sketch postprocess adds a pencil stroke pattern over the top of the scene.
Many different graphical effects can be achieved by combining these techniques in various ways. For instance, a cartoon effect is created by using toon shading and edge detection together, while a pencil sketch effect combines edge detection with the sketch postprocess.
Sample Controls
This sample uses the following keyboard and gamepad controls.
Action | Keyboard control | Gamepad control
---|---|---
Change the display settings | A | A
Exit the sample | ESC or ALT+F4 | BACK
How the Sample Works
Toon Shading
Toon shading is implemented by the ToonPixelShader function in the CartoonEffect.fx file. This takes in a smoothly varying light amount that was computed by the vertex shader, and uses a series of if...else statements to quantize it into three discrete levels of brightness.
```hlsl
if (input.LightAmount > ToonThresholds[0])
    light = ToonBrightnessLevels[0];
else if (input.LightAmount > ToonThresholds[1])
    light = ToonBrightnessLevels[1];
else
    light = ToonBrightnessLevels[2];
```
For comparison, the LambertPixelShader function (which is used by a different technique in the same effect file) takes in the light amount from the same vertex shader, but instead uses this to compute a traditional smoothly varying light value. Both shaders then multiply their light amount with a color from the model texture lookup.
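The contrast between the two shaders can be sketched outside HLSL as well. The following illustrative Python snippet mirrors the quantization logic described above; the threshold and brightness values are hypothetical, not the sample's actual effect parameters.

```python
# Hypothetical effect parameters, standing in for the ToonThresholds
# and ToonBrightnessLevels arrays in CartoonEffect.fx.
TOON_THRESHOLDS = [0.95, 0.5]            # band boundaries
TOON_BRIGHTNESS_LEVELS = [1.0, 0.65, 0.35]

def toon_light(light_amount: float) -> float:
    """Quantize a smooth light amount into three discrete bands."""
    if light_amount > TOON_THRESHOLDS[0]:
        return TOON_BRIGHTNESS_LEVELS[0]
    elif light_amount > TOON_THRESHOLDS[1]:
        return TOON_BRIGHTNESS_LEVELS[1]
    return TOON_BRIGHTNESS_LEVELS[2]

def lambert_light(light_amount: float) -> float:
    """Conventional smooth shading: clamp the light amount to [0, 1]."""
    return max(0.0, min(1.0, light_amount))
```

Feeding a smoothly increasing light amount through `toon_light` produces only three output values with sudden jumps at the thresholds, while `lambert_light` passes the gradient through unchanged.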
The following figure shows the difference between conventional Lambert shading and the quantized toon shading effect.

Edge Detection
Edge detection is implemented as a post-processing effect using a two-dimensional gradient filter. First, the model is drawn into a custom render target by using the NormalDepth technique from the CartoonEffect.fx file. Instead of outputting colors, this writes the surface normal into the red, green, and blue channels of the output color, and the depth into the alpha channel.
The main scene is then drawn into a second render target by using either the Lambert or Toon lighting shaders as described.
Finally, a full-screen sprite is used to draw both custom render targets onto the back buffer by using the Postprocess.fx effect to apply the edge detection filter.
For each pixel on the screen, this filter makes four lookups into the render target containing normal and depth information: one slightly to the top left of the current position, one to the top right, one to the bottom left, and one to the bottom right. It then compares the values obtained from these lookups to determine whether the normal or depth values are changing rapidly at this location. A dramatic change indicates an edge, while if the normal and depth are similar in all directions, the pixel must lie inside a flat area of the scene.
A threshold is used to reject very small changes in the normal or depth, which would otherwise cause false positive edges to be detected wherever the model contained a slight curve.
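The filter logic can be sketched as follows. This is an illustrative Python version of the diagonal-sampling scheme described above, not the sample's actual shader: `buffer[y][x]` stands in for the normal/depth render target as `(nx, ny, nz, depth)` tuples, and the threshold values are hypothetical.

```python
# Hypothetical rejection thresholds for normal and depth changes.
NORMAL_THRESHOLD = 0.5
DEPTH_THRESHOLD = 0.1

def is_edge(buffer, x, y, offset=1):
    """Diagonal gradient filter over a normal/depth buffer."""
    # Four lookups around the current pixel: top-left, top-right,
    # bottom-left, bottom-right.
    tl = buffer[y - offset][x - offset]
    tr = buffer[y - offset][x + offset]
    bl = buffer[y + offset][x - offset]
    br = buffer[y + offset][x + offset]

    # Compare diagonally opposite samples; large differences mean the
    # normal or depth is changing rapidly at this pixel.
    normal_delta = sum(abs(tl[i] - br[i]) + abs(tr[i] - bl[i])
                       for i in range(3))
    depth_delta = abs(tl[3] - br[3]) + abs(tr[3] - bl[3])

    # The thresholds reject small changes, so gently curved surfaces
    # do not produce false positive edges.
    return normal_delta > NORMAL_THRESHOLD or depth_delta > DEPTH_THRESHOLD
```

On a region where all four samples share the same normal and depth, both deltas are zero and no edge is reported; a sharp change in the normal between opposite samples pushes the delta past the threshold.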
This next figure shows the contents of the normal and depth render target (which are actually stored together in a single image, using the alpha channel to hold the depth values), along with the resulting edge information.

Once the edge data has been computed, this is combined with the color from the main scene render target, adding black lines around the silhouette of the object.
It would also be possible to not bother rendering normals and depth to a special render target, and just run the edge detection filter directly over the main scene image instead. That can work well if your objects contain mostly flat colors, but it doesn't yield great results for textured models because it will incorrectly pick up an edge wherever the texture changes color. Doing the edge detection using normal and depth information gives higher-quality results, because it depends only on the shape of the objects rather than on how they have been textured.
The depth edge detection works best if the camera near and far clip planes are set as close as possible to the models in the scene. If the near clip is very small, or the far clip is very distant, there may not be enough precision to produce good results from this (although the normal data will still be able to provide useful edge information).
Pencil Sketch
The pencil sketch effect is implemented as a post-process by using the same Postprocess.fx effect that also provides the edge detection filter.
It works by doing a lookup into a texture containing a pre-drawn pencil stroke pattern, and then combining this with the color from the main scene in such a way that there are many strokes where the scene is dark, and fewer where it is light. The resulting value can be used directly to produce a monochrome sketch effect, or multiplied with the original scene color to create colored pencil strokes.
There are three different pencil stroke patterns, each aligned in a different direction. These are combined into the red, green, and blue channels of a single texture so the shader can look up all three in a single operation.
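The channel packing can be sketched like this. The following illustrative Python snippet builds a tiny stand-in for such a texture; the stroke generators and spacing are entirely hypothetical, and the channel-to-direction assignment follows the description later in this section (red on the top-left to bottom-right diagonal, blue horizontal), with the green assignment assumed.

```python
def pack_stroke_texture(size=8):
    """Pack three stroke directions into the R, G, B channels of one texture."""
    texture = []
    for y in range(size):
        row = []
        for x in range(size):
            # Red: strokes along the top-left to bottom-right diagonal
            # (constant x - y along each stroke).
            r = 1.0 if (x - y) % 4 == 0 else 0.0
            # Green (assumed): the opposite diagonal (constant x + y).
            g = 1.0 if (x + y) % 4 == 0 else 0.0
            # Blue: horizontal strokes (constant y).
            b = 1.0 if y % 4 == 0 else 0.0
            row.append((r, g, b))
        texture.append(row)
    return texture
```

A single lookup into `texture[y][x]` then returns all three stroke intensities at once, which is the point of packing them into one texture.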

Each stroke pattern is keyed off a different color channel of the input scene color, so the direction of the output strokes depends on the color of the input. You can see this most clearly if you select the monochrome "Pencil" settings in the sample. Notice how the blue background is shaded along the diagonal from upper left to lower right, while the orange spaceship contains mostly horizontal strokes, and the black cockpit cover is crosshatched simultaneously along both diagonals.
You may notice that the red channel of the stroke texture is aligned along the top left to bottom right diagonal, but in the final rendering it is the blue background that picks up this stroke direction. Likewise, the orange spaceship is picking up the horizontal stroke direction, which comes from the blue channel of the input stroke texture. Why don't these color channels match up?
In fact they do match up, but in a subtractive color space.

In computer graphics, we normally represent colors in an additive format: we start out with zero and add amounts of red, green, and blue to create whatever color we like (if we add lots of all three, we eventually end up with white). This is how computer monitors and televisions display images, so it makes a lot of sense when dealing with computer images. In the physical world, however, painters work with subtractive color, which is the exact opposite. A painter starts out with a blank white canvas, and then paints colored pigments over the top of it. Each pigment absorbs some amount of color, subtracting it from the incoming light, until eventually the result is black.

This difference between additive and subtractive color spaces can cause a lot of confusion when computer people talk to real-world artists. A programmer will tell you that the three primary colors are red, green, and blue, while an artist will insist they are red, yellow, and blue. The artist is inaccurate, but not as far off as it might initially seem: the true subtractive primaries are magenta, yellow, and cyan, and artists are simplifying by ignoring the small differences between magenta/red and cyan/blue. Magenta, yellow, and cyan are the exact opposites of the additive primary colors red, green, and blue. For example, if you start with white and subtract cyan, you get the same result as if you started with black and added red.
Pencil sketching is a physical process involving colored ink on white paper, so to get convincing results we need to calculate it in a subtractive color space. We do this by subtracting our additive input colors from 1 (white) at the start of the computation, and applying the same conversion at the end to turn the subtractive result back into additive format. Because the sketch effect is applied in a subtractive color space, everything comes out inverted from what you might expect. In subtractive color, for example, the sky is not really blue, but "white minus red." Thus, it is the red channel of the stroke texture that picks up the need to darken that region of the screen.
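The round trip between color spaces is simple to show. This illustrative Python sketch converts to subtractive space, darkens by a stroke amount, and converts back; the additive darkening rule here is a hypothetical stand-in for the shader's combine step.

```python
def to_subtractive(rgb):
    """Convert between additive and subtractive space (the mapping is
    its own inverse: subtract each channel from 1, i.e. white)."""
    return tuple(1.0 - c for c in rgb)

def apply_sketch_subtractive(scene_rgb, stroke_rgb):
    """Darken the scene by a stroke amount, computed in subtractive space."""
    sub = to_subtractive(scene_rgb)        # e.g. blue sky -> large red component
    darkened = tuple(min(1.0, s + k)       # hypothetical darkening rule:
                     for s, k in zip(sub, stroke_rgb))  # add pigment per channel
    return to_subtractive(darkened)        # back to additive format
```

Note the example from the text: starting from white and subtracting cyan `(0, 1, 1)` lands on red `(1, 0, 0)`, exactly as if red had been added to black.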
The pencil stroke pattern is animated by the Game.Update method, which periodically shifts it sideways by a random offset. This is a hack to emulate the way real hand-drawn animations tend to be displayed at very low frame rates, typically as little as 10 frames per second (fps) or 15 fps. It would look terrible if we slowed all of our rendering down to such a low speed (nobody wants to play a game at 10 fps!) but it also looks bad if we animate the stroke texture at the same high frame rate as the rest of our drawing, because it flickers too quickly to see the pattern and ends up just looking like random noise. The compromise is to render the scene at as high a frame rate as possible, but to only update the texture animation at a lower speed, thus preserving some of the appearance of a hand-drawn animation even though we are rendering at 60 fps.
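The update-at-a-lower-rate idea can be sketched as follows. This is an illustrative Python version of the scheme described above, not the sample's Game.Update code; the 0.1-second interval (about 10 fps) and the jitter range are assumptions.

```python
import random

class SketchAnimator:
    """Jitters a stroke-texture offset at a lower rate than rendering."""

    UPDATE_INTERVAL = 0.1  # seconds between jitters (~10 fps), an assumption

    def __init__(self):
        self.time_since_jitter = 0.0
        self.offset = (0.0, 0.0)

    def update(self, elapsed_seconds):
        """Called every rendered frame (e.g. 60 fps), but only shifts the
        stroke pattern once the interval has elapsed."""
        self.time_since_jitter += elapsed_seconds
        if self.time_since_jitter >= self.UPDATE_INTERVAL:
            self.time_since_jitter = 0.0
            # Shift the stroke texture sideways by a random offset.
            self.offset = (random.random(), random.random())
```

The renderer keeps drawing at full speed with whatever `offset` is current, so the scene stays smooth while the stroke pattern flickers at the slower, hand-drawn-looking rate.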