preventing objects from self-shadowing

Shadows in my game are mostly pre-baked into the static models, but I would like to cast some shadows from dynamic objects onto the ground.
My idea is to use a shadow map similar to the shadow samples for nearby objects, and just stick a card underneath distant objects.
I need objects to have one of two behaviours:

  • those which cast shadows, but do not receive
  • those which receive shadows, but do not cast
Presently my shadow map is working, and the dynamic objects are not self-shadowing since they use a different shader with no shadows (this is good).
The problem is with the level, since it is casting shadows onto itself and this is not desired.
Wondering if anyone had a similar situation and already sorted this out?

Just make sure that two-sided rendering isn’t enabled. For objects that really need two-sided rendering, use setDepthOffset(1).

As for preventing objects from casting shadows at all - you can use camera masks for that. As for preventing them from receiving shadows - you can just disable the light for that node, for instance by calling setLightOff.
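To make the camera-mask idea concrete, here is a plain-Python sketch of the test the cull pass effectively performs: a node is drawn by a camera only if its draw mask and the camera's mask share a bit. (In Panda3D the real calls are `camera.node().setCameraMask(...)` and `nodePath.hide(mask)`; the bit names and function below are purely illustrative.)

```python
SHADOW_CASTERS = 1 << 0  # bit given to the shadow camera (illustrative)
MAIN_VIEW = 1 << 1       # bit given to the main camera (illustrative)

def is_drawn(camera_mask, node_mask):
    """A node is rendered by a camera only if their masks overlap."""
    return (camera_mask & node_mask) != 0

# A dynamic object casts shadows and is visible in the main view:
panda_mask = SHADOW_CASTERS | MAIN_VIEW
# The level is hidden from the shadow camera, so it cannot cast onto
# itself, but it still renders (and receives shadows) in the main view:
level_mask = MAIN_VIEW
```

Receiving happens in the main pass when the surface samples the depth map, so hiding a node from the shadow camera does not stop it from receiving.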

Forgot to mention, I am not using the shader generator, since I have custom shaders.
Maybe this will help to illustrate: the panda should cast onto all of the planes, but the planes should not cast onto each other. I can’t remove/mask the planes out of the depth texture since they need to be there to receive shadows, is that right?

from pandac.PandaModules import *
import direct.directbase.DirectStart
from direct.showbase.DirectObject import DirectObject
from direct.actor import Actor

class World(DirectObject):
    def __init__(self):
        self.accept('escape', __import__('sys').exit)
        # shadowmap
        depthmap = Texture()
        buffer = base.win.makeTextureBuffer('depthmap', 1024, 1024, depthmap)
        self.light = base.makeCamera(buffer)
        self.light.node().getLens().setNearFar(0.1, 100)
        self.light.setPos(-150, -150, 150)
        self.light.setScale(3.0) # scale according to scene size
        # shaders
        self.shadow_shader = Shader.load('assets/shaders/')
        #self.basic_shader = Shader.load('assets/shaders/')
        render.setShaderInput('light', self.light)
        render.setShaderInput('depthmap', depthmap)
        #render.setShaderInput('cutout', cutout)
        # scene
        render.setTransparency(TransparencyAttrib.MNone, 1)
        for i in range(3):
            cm = CardMaker('floor')
            cm.setFrame(-1, 1, -1, 1)
            floor = render.attachNewNode(cm.generate())
            floor.setP(-90)  # lay the card flat so it acts as ground
            floor.setScale(i * 10 + 5)
        self.model = Actor.Actor('panda-model', {'walk': 'panda-walk4'})
        self.model.reparentTo(render)
        self.model.setScale(0.005)  # panda-model is built in very large units
        self.model.loop('walk')

World()
run()



void vshader(float4 vtx_position : POSITION,
             float2 vtx_texcoord0: TEXCOORD0,
             float2 vtx_texcoord1: TEXCOORD1,
             uniform float4x4 trans_model_to_clip_of_light,
             uniform float4x4 mat_modelproj,
             out float4 l_position : POSITION,
             out float2 l_texcoord0 : TEXCOORD0,
             out float2 l_texcoord1 : TEXCOORD1,
             out float4 l_texcoord2 : TEXCOORD2)
{
  float bias_value = -0.000001;
  float4x4 bias_matrix = {
    0.5f, 0.0f, 0.0f, 0.5f,
    0.0f, 0.5f, 0.0f, 0.5f,
    0.0f, 0.0f, 0.5f, 0.5f + bias_value,
    0.0f, 0.0f, 0.0f, 1.0f};

  l_position = mul(mat_modelproj, vtx_position);
  float4x4 tex_matrix = mul(bias_matrix, trans_model_to_clip_of_light); // transformation to the light's clip space
  l_texcoord0 = vtx_texcoord0;
  l_texcoord1 = vtx_texcoord1;
  l_texcoord2 = mul(tex_matrix, vtx_position);
}
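The bias matrix in the vertex shader remaps the light's clip-space range [-1, 1] into the depth map's texture range [0, 1], with a small offset added on the depth axis to fight acne. The same per-component mapping in plain Python, just to illustrate the arithmetic:

```python
BIAS = -0.000001  # same depth offset as bias_value in the shader

def bias(clip, depth_bias=0.0):
    """Map a clip-space coordinate in [-1, 1] to texture space [0, 1]."""
    return 0.5 * clip + 0.5 + depth_bias
```

The corners of the light frustum (clip -1 and +1) land exactly on the edges of the depth map (0 and 1), and the stored depth is nudged by `BIAS` so a surface does not shadow itself at equal depths.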

void fshader(float2 l_texcoord0: TEXCOORD0,
             float2 l_texcoord1: TEXCOORD1,
             float4 l_texcoord2: TEXCOORD2,
             uniform sampler2D tex_0: TEXUNIT0,
             uniform sampler2D tex_1: TEXUNIT1,
             uniform sampler2D k_depthmap,
             out float4 o_color: COLOR)
{
  float4 base0 = tex2D(tex_0, l_texcoord0);
  float4 base1 = tex2D(tex_1, l_texcoord1);
  float3 shadow_uv = / l_texcoord2.w;
  float delta = 0.06;
  if(shadow_uv.x > delta && shadow_uv.x < 1.0 - delta && shadow_uv.y > delta && shadow_uv.y < 1.0 - delta) {
    float shade1 = tex2Dproj(k_depthmap, float4(l_texcoord2.x + delta, l_texcoord2.y, l_texcoord2.z, l_texcoord2.w)).x;
    float shade2 = tex2Dproj(k_depthmap, float4(l_texcoord2.x + delta, l_texcoord2.y + delta, l_texcoord2.z, l_texcoord2.w)).x;
    float shade3 = tex2Dproj(k_depthmap, float4(l_texcoord2.x + delta, l_texcoord2.y - delta, l_texcoord2.z, l_texcoord2.w)).x;
    float shade4 = tex2Dproj(k_depthmap, float4(l_texcoord2.x, l_texcoord2.y + delta, l_texcoord2.z, l_texcoord2.w)).x;
    float shade5 = tex2Dproj(k_depthmap, l_texcoord2).x;
    float shade6 = tex2Dproj(k_depthmap, float4(l_texcoord2.x, l_texcoord2.y - delta, l_texcoord2.z, l_texcoord2.w)).x;
    float shade7 = tex2Dproj(k_depthmap, float4(l_texcoord2.x - delta, l_texcoord2.y, l_texcoord2.z, l_texcoord2.w)).x;
    float shade8 = tex2Dproj(k_depthmap, float4(l_texcoord2.x - delta, l_texcoord2.y + delta, l_texcoord2.z, l_texcoord2.w)).x;
    float shade9 = tex2Dproj(k_depthmap, float4(l_texcoord2.x - delta, l_texcoord2.y - delta, l_texcoord2.z, l_texcoord2.w)).x;
    float shade = (shade1 + shade2 + shade3 + shade4 + shade5 + shade6 + shade7 + shade8 + shade9) / 9.0;
    o_color = base0 * shade;
  } else {
    o_color = base0;
  }
  o_color.a = base1.a;
}
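The nine offset taps in the fragment shader are a 3×3 percentage-closer filter: each tap is a binary in-shadow comparison, and averaging them softens the shadow edge. The same arithmetic in plain Python, on a toy depth map (the dict-based lookup is illustrative only; the real comparison happens in the texture hardware):

```python
def pcf_3x3(depth_map, u, v, fragment_depth):
    """Average nine shadow-comparison taps around texel (u, v).

    depth_map maps integer (u, v) texels to stored depths; a tap is
    lit (1.0) when the fragment is not behind the stored depth.
    """
    total = 0.0
    for du in (-1, 0, 1):
        for dv in (-1, 0, 1):
            stored = depth_map.get((u + du, v + dv), 1.0)
            total += 1.0 if fragment_depth <= stored else 0.0
    return total / 9.0
```

A fragment whose neighbourhood is half occluded comes out at an intermediate grey, which is exactly the soft edge the nine `shade` samples produce.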

Try this:

self.light.setInitialState(RenderState.make(CullFaceAttrib.makeReverse(), ColorWriteAttrib.make(ColorWriteAttrib.COff)))

It enables reverse culling, which should resolve most of the self-shadowing issues, and it disables colour write, which should be a very significant speed boost.

Thanks, that is much better. Once I separate the ground geometry from the rest of the world I shouldn’t be getting anything casting on the ground except characters.

I’m wondering if the shadowmap in Pirates is done in a similar way, it appears to be exactly what I am trying to achieve. Is it similar to the implementation in direct/src/showbase/

One last bit of problem:
I render multiple views in buffers for split screen play. It is not practical to render a massive depth map covering the whole world. How can I ensure that each view will get a separate depth map, since using setShaderInput will link the “light” and depthmap to a nodepath, not a specific buffer/camera.
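One way to keep the pairing straight is to treat each split-screen view as its own bundle of (light camera, depth map), instead of pushing a single shader input onto `render`. A plain-Python sketch of that bookkeeping (the class and names are illustrative, not Panda3D API; in the engine the `light` and `depthmap` stand-ins would be the per-view `base.makeCamera(buffer)` node and the buffer's `Texture`):

```python
class ShadowView:
    """Bundle one visible view with its own light camera and depth map."""
    def __init__(self, name):
        self.name = name
        self.light = name + '-light-camera'  # stand-in for the light's camera node
        self.depthmap = name + '-depthmap'   # stand-in for the buffer's depth texture

views = [ShadowView('player1'), ShadowView('player2')]
# each view's shadow shader should sample only its own depth map
inputs = {v.name: (v.light, v.depthmap) for v in views}
```

The remaining engine-side question is then only where to attach each pair so it applies per camera rather than per scene graph.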

Oh one more thing… is it possible to re-use the frustum culling information from my regular camera for the depthmap? No point rendering a shadow for something that is behind the camera. I realize it is possible something outside the camera frustum might have otherwise cast a shadow into the view, but that is a compromise I can accept for performance gains.

I found setViewFrustum in CullTraverser, which might work for my last question. But it would be best if I could just share the culling information.

If you’re talking about re-using the time spent in the Cull traversal, you won’t be able to do that unless both scenes have exactly the same state. This is because most of the time spent in Cull is actually the time required to compute the net state for each object and group them all by state. The time required to actually cull the objects to the viewing frustum is negligible by comparison.

Since the shadow pass is generally a different state than the visible pass, you can’t share the cull traversal between the two.

To answer your question in general, though, Panda will automatically share the cull traversal between two different DisplayRegions if they both use the same Camera object. This is particularly useful for, e.g., stereo displays, with a left and a right view that have the same state and are almost from the same position.


My goal was to gain performance by limiting which objects were rendered in the shadow pass. I figured if I am not rendering a character because it is culled then no need to render a shadow for that character either.
I’ve done:


It does indeed appear to be working, though I haven’t yet tested if this gives any performance gain over doing:


The last bit of this puzzle seems to be assigning the shader inputs in a way that each visible pass will be properly paired with its shadow pass.

Ah, I don’t think either of those approaches will give the effect you’re looking for. I’m not sure if there’s any way to achieve that effect in Panda currently.


I worked out the shader inputs problem using setInitialState on the camera. Thanks all for the help.