Cartoon Painter


In short: paint just a specific nodepath with cartoon shading and inking. Good for creating interesting scenes where only a few objects look cartoonized.


  • Apply cartoon shading and inking only to some of your scene objects.
  • Change the step function to customize the number of dark/bright regions and their relative contrast.
  • Enable the camera spotlight effect to have the shading’s directional light follow the camera movements.
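The step-function idea above can be sketched in plain Python (the real work happens in the Cg shader; toon_factor and the cut/level values below are my own illustration, not the package’s defaults):

```python
def toon_factor(intensity, cuts=(0.8, 0.4), levels=(1.0, 0.7, 0.4)):
    """Quantize a diffuse intensity (0..1) into discrete shading bands.

    cuts are descending thresholds; levels holds the brightness of each
    band, one more entry than cuts. Moving the cuts changes the size of
    the dark/bright regions; changing the levels changes their contrast.
    """
    for cut, level in zip(cuts, levels):
        if intensity >= cut:
            return level
    return levels[-1]
```

With these example values a surface facing the light (intensity 0.9) gets the full band 1.0, a glancing surface (0.5) gets 0.7, and everything darker falls into the 0.4 band.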

Cartoon Painter can selectively apply cartoon shading and inking to specific nodepaths of your scene. To achieve this effect it creates two extra display regions that the cartoonized nodepaths are drawn into. These regions are connected to two scenes called toon_render (for the objects with toon shading) and inking_render2d (for the objects’ black outlines).

These regions have to be drawn before the main scene in render. You can adjust the ordering by setting the sort parameter in the constructor. The sort parameter is the sort value for the inking_render2d region; the toon_render region gets (sort - 1) and is therefore drawn just before it.
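A sketch of what that region setup might look like (make_toon_regions and the default sort of -2 are my own assumptions; win.makeDisplayRegion() and DisplayRegion.setSort()/setCamera() are the standard Panda3D calls):

```python
def make_toon_regions(win, toon_cam, ink_cam, sort=-2):
    """Create the two extra display regions in front of the main one.

    Regions render in ascending sort order, so with the main region at
    its default sort of 0, toon shading (sort - 1) is drawn first, then
    the ink outlines (sort), then the main scene on top.
    """
    toon_dr = win.makeDisplayRegion()
    toon_dr.setSort(sort - 1)
    toon_dr.setCamera(toon_cam)

    ink_dr = win.makeDisplayRegion()
    ink_dr.setSort(sort)
    ink_dr.setCamera(ink_cam)
    return toon_dr, ink_dr
```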

You can cartoon paint a model even when it’s deeply nested in the node hierarchy of your main scene. When you paint a nodepath you actually create an instance of it under toon_render. The original nodepath is stashed, but it still retains its parent hierarchy under render and therefore its global transform. The CartoonPainter takes care of synchronizing the position and rotation of the instances under toon_render.

Cartoon Painter comes as a Python package containing a CartoonPainter class and a few shaders taken from the Panda cartoon shading tutorial. The file shading.sha is a modified version of lightingGen.sha. It works not only with the model’s vertex colors but also with its flat color (if any). It takes three extra shader inputs to control the step function.

Comments and constructive criticism are all welcome, especially about the shaders, since I’m a noob at them. Enjoy and make the most out of it!


Known Limitations:
*Cartoon shading does not apply to transparent models.
*Cartoon shading does not apply to textured models; it works only for vertex-colored and flat-colored models.
*Ink outlines are not affected by fog.

Known Bugs:
*A glgsg error is printed when you exit your Panda script: (glGraphicsBuffer_src.cxx, line 1020: GL error 1282). Perhaps the normals buffer has to be destroyed before exiting?
*On my old machine with an Intel integrated graphics card, the CartoonPainter crashes my Panda script.

Thanks to Kwasi Mensah for making the cartoon shading advanced tutorial and to David for helping on display regions and tex buffers.

Those dear shaders
Selective cartoon shading
Porting the Cartoon Painter Python sample to C++

Nice, I’ll have a look at it.

Edit: I have the GL error 1282 in Windows as well regardless of program used. No errors in Linux.


I still haven’t got a clue about the GL error 1282. Perhaps the Panda team could shed light on it?



We have tended to be a little bit lax in our investigations of error messages at shutdown. There are a few known issues where Panda doesn’t quite clean up all buffers and graphics objects in the right order, and therefore generates a few (harmless) GL error messages on the way out.

We should be more aggressive about tracking these down and cleaning them up, but so far it hasn’t been high on our priority list. My apologies. As always, patches are welcome. :slight_smile:



No big deal, David, I’m sure there is a good reason to have it low in your priority list.

I have another issue with the cartoon painter. On my old machine I got this error at startup:

:display:wgldisplay(error): Could not share texture contexts between wglGraphicsStateGuardians.

I have an old Intel integrated graphics card and, sure enough, it uses the RAM as video memory. How can I get rid of this error? Ideally I’d like the cartoon painter to be disabled if the video card doesn’t support shaders or texture buffers.
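One way to do that fallback check (cartoon_supported is a hypothetical helper; getSupportsBasicShaders() is a real GraphicsStateGuardian method, so once the window is open you could pass it base.win.getGsg(), and additionally treat a None return from makeTextureBuffer() as “unsupported”):

```python
def cartoon_supported(gsg):
    # Disable the painter gracefully when the card reports no shader
    # support (or when we have no GSG at all).
    if gsg is None:
        return False
    try:
        return bool(gsg.getSupportsBasicShaders())
    except AttributeError:
        return False
```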


Possibly it has something to do with this:


That error message means that you created a new graphics context, and your graphics driver cannot share textures between graphics contexts. (Most modern drivers can do this, but some older drivers cannot.)

The solution is to use the same graphics context for all of your windows and buffers. It’s usually a good idea to do this anyway. The graphics context is encapsulated in the GraphicsStateGuardian (GSG) object. Try passing your window’s GSG as the gsg parameter to any calls that create an offscreen buffer.



Which function should I pass it to? makeTextureBuffer doesn’t seem to have a gsg parameter…



Right, makeTextureBuffer() doesn’t need this parameter, because it’s implicit in this case (it takes it from the object you called it on). So if you’re only using makeTextureBuffer() to create your offscreen buffer, you should already be good.

Hmm, unless you are calling makeTextureBuffer() before the first frame has rendered and before the window is actually open, in which case it won’t have a GSG yet. Is that the case? Try calling base.graphicsEngine.openWindows() before you call makeTextureBuffer().
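That ordering advice can be wrapped in a small guard (ensure_gsg is a made-up helper; getGsg() and openWindows() are the real calls on base.win and base.graphicsEngine):

```python
def ensure_gsg(engine, win):
    # Force the window open so it has a GraphicsStateGuardian before
    # win.makeTextureBuffer() is called; mirrors the advice above.
    if win.getGsg() is None:
        engine.openWindows()
    return win.getGsg()
```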



I tried calling base.graphicsEngine.openWindows() before creating any texture buffer or display region but I still get the same error.



Hmm, what version of Panda are you using?


I’m using Panda 1.7.2 on Windows XP. Video card: Mobile Intel 945GM Express… it’s a motherboard-integrated video card… essentially crap.



Any new thought about :display:wgldisplay(error): Could not share texture contexts between wglGraphicsStateGuardians?



I’m a little puzzled by this, actually. I have an old computer with an early Intel integrated card that I was going to dust off to see if I can reproduce this error, but I haven’t had a chance to do that yet.



Hello, here are my comments, hope you don’t mind.

To get textured models and transparency you can do this:

uniform sampler2D k_diffuse : TEXUNIT0, //in shader inputs, custom texture
uniform sampler2D tex_0, //in shader inputs, first texture found in .egg


float4 albedo = tex2D(k_diffuse, l_texcoord.xy);
o_color = albedo * factor;
o_color.a = albedo.a;

You can get better lighting results if you do all your light calculations in the pixel shader instead of the vertex shader.

What you are doing currently is more of a rim light effect than real toon shading with correct light position and self-shadows. If you want “correct” lighting and self-shadows, you need to do all your light calculations in world or view space instead of model space. That way you won’t need to resynchronize the painted instances with their original nodepaths. Also, if you want a directional light, you don’t need the vertex position in the light calculation. In the current code your light behaves like a point light.

Here is the code I use for toon shading calculations; it’s very simple:

float3 N = normalize(mul(float3x3(trans_model_to_world), l_normal)); 
float3 L = normalize((wspos_A - wspos_B));

float intensity = dot(L, N);
float factor = 1.0;
if ( intensity < 0.1 ) factor = 0.2;

float4 albedo = tex2D(k_diffuse, l_texcoord.xy);
o_color = albedo * factor;
o_color.a = albedo.a;

A is the light position, B is the origin of the world.
l_normal is the interpolated normals from the vertex shader.

For a rim light effect, you can do this :

float4 rim = pow( 1 - dot( N, normalize(wspos_camera) ), 2 );

It’s like calculating the opposite of the specular component; the rim effect is a function of the camera position.
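The rim term in plain Python, to show what the numbers do (rim_term is illustrative; the shader line above approximates the view direction with the normalized camera position):

```python
def rim_term(normal, view_dir, power=2.0):
    # Rim weight is highest where the normal is perpendicular to the
    # view direction (silhouette edges) and zero where it faces us.
    # Both vectors are assumed to be unit length.
    ndotv = sum(n * v for n, v in zip(normal, view_dir))
    return max(0.0, 1.0 - ndotv) ** power
```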


The current limitation here is that the thickness of the ink lines is a function of camera distance, and I find it quite complex to use a separate normal buffer for it. What I do is use 2 offscreen buffers: one for the lit models, one for bigger, black-colored duplicated models. The problem is you have to use 2 models to get this, but you get more possibilities with the ink shading; for example, you can have a simpler or rounder ink outline by using the same model with fewer polygons.
You then composite the 2 buffers in a compositing shader; the main problem is you’ll have to calculate the depth manually, I think.

FXAA and others

Also, I think for toon shading you need to smooth out edges in some cases; you can use a last pass for FXAA, the code is in my deferred shading thread. The only problem with FXAA is you can’t use basic shaders anymore.

I also think the separate normal buffer calculation could be used in a deferred shading system, so you could get deferred toon shading with tens of lights!

rim light:

transparency + texture:

note: the model here doesn’t look good (with toon shading) and some normals on the wings are wrong, but you can see the effect.


Manou those are great comments! I’ll dive into them after the weekend.

About the ink shading… I like the effect done through the normals buffer because it adds a few black lines to the inside of the model too. Anyhow, I’m open to new suggestions since I don’t know much about shaders.

By the way, do you know a way to get cartoon inking and fog working at the same time? When the fog is bright you can see the outlines even from far away.

What is FXAA and deferred shading?

Thanks for the contribution, great job dude!!



Of course cartoon inking is a filter, so it’s applied on top of the fog. The solution would be to use a filter-based fog; that would even allow you to have custom shaders affected by it. There is one here, but it doesn’t work 100% correctly: … 48&start=0

If you guys could fix it that would be great, maybe it could even become part of CommonFilters one day.


FXAA is a post-process shader that smooths edges and high-frequency details to avoid aliasing.

Deferred shading can allow you to have hundreds of lights in realtime. In the fixed pipeline you are limited in the number of lights, because you have to render the whole model for each light; in deferred shading only the lit parts of the models are calculated.