God Ray Shader - clip to screen space?

Hello all,
I’m coding a God Ray shader in Panda3D Cg, but I’m having trouble getting the screen-space position of a light. My shader has the following code:

//The vertex shader
void vshader(uniform float4x4 mat_modelproj,
             uniform float4 cspos_light,
             ...
             out float4 l_slp){
    
    ...
    l_slp = cspos_light;
    ...
}

//And the fragment shader
void fshader(...
             in float4 l_slp,
             out float4 o_color : COLOR){
    
    float3 l_screen = l_slp.xyz / l_slp.w;
    //l_screen.xy should now be the location of the light on the screen
    //in 2D - which I can use to give the God Rays a proper direction
    ...
    o_color = ...
}

And I set the shader input like this in Python:

plight = PointLight('plight')
plight.setColor(VBase4(1,0.9,1,1))
plnp = render.attachNewNode(plight)
plnp.setPos(0,0,4)
render.setLight(plnp)
...
card.setShader(postProcShader)
card.setShaderInput("light", plnp)

The shader creates the God Ray effect, but the rays are always in the wrong direction. Is the above method not how you get the screen-space co-ords? I can make the rays stream from a hard-coded screen-space location by disregarding l_screen.xy and using, for example, float2(0.5,0.5) - but I need to stream the rays from the light’s screen-space position.

Thanks for the support - this effect looks absolutely awesome (even in my simple development environment with 3 models) - and I really want to share it, but I need to get it working properly first.

Update: The info here is what I based my screen-space calcs on.
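
In the meantime, a workaround I’m considering is doing the projection on the CPU instead and feeding the 2D result into the shader as a plain input each frame. A minimal sketch, assuming base.cam/base.camLens are the viewing camera (“lightpos”/k_lightpos are just made-up names):

from pandac.PandaModules import Point2, Vec4

def updateLightScreenPos(task):
    # Express the light's position in the camera's coordinate space
    p = base.cam.getRelativePoint(render, plnp.getPos(render))
    screen = Point2()
    if base.camLens.project(p, screen):
        # project() fills screen with coordinates in -1..1; remap to 0..1 texture space
        card.setShaderInput("lightpos", Vec4(
            (screen.getX() + 1.0) * 0.5, (screen.getY() + 1.0) * 0.5, 0, 0))
    return task.cont

taskMgr.add(updateLightScreenPos, "update-light-screen-pos")

The fragment shader would then read the position from a uniform float4 k_lightpos instead of computing it from l_slp.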

Hmm, there seems to be something wrong with cspos. It’s better to calculate it yourself; this works better for me:

        card.setShaderInput("cnode", base.cam)

and:

             uniform float4x4 trans_model_to_clip_of_cnode,

             uniform float4 mspos_light,
...
    l_slp = mul(trans_model_to_clip_of_cnode, mspos_light);
...
    float2 l_screen = l_slp.xy; // NOTE: don't divide by l_slp.w !
    l_screen += 1.0f;
    l_screen *= 0.5f;

(EDIT: I’m not entirely sure the +1.0f and *0.5f are really needed.)
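
For reference, the Python-side inputs this version expects are the same two nodes as above:

card.setShaderInput("cnode", base.cam)  # supplies trans_model_to_clip_of_cnode
card.setShaderInput("light", plnp)      # supplies mspos_light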

You should also normalize the deltaTexCoord, otherwise the density varies with distance:

half2 deltaTexCoord = normalize(l_my - l_screen);

That works great for me :) It looks awesome, excellent work. I can’t wait to include this effect in Panda.

This is awesome, man.
pro-rsoft.com/screens/volumetric-lighting.png
Thanks so much for sending the code; I’ve integrated it into Panda’s shader generator now. Excellent work, thanks! (I might just consider adding some blur though.)

Hey pro-rsoft, it’s great you got it working!
At my end, the rays are still facing the wrong direction, though: if you move behind the models, looking at the light, there is nothing, while standing between the models and the light (looking toward the models) the rays are visible.

Is that all you changed?

sigh - I was hoping to see this at my end this morning :S - Your screenshot looks great though…

Maybe you could email me back your source code?

EDIT: Arggggg, I reeeally want to see this in action

EDIT: As soon as I get a copy working I’ll do up a small teaser for everyone to see while they’re waiting for 1.6 :P

Sure thing. You’ve got mail.

EDIT: to anyone who wants to see what’s become of this:
discourse.panda3d.org/viewtopic.php?t=5801

In case anyone knows a solution to this problem:
I want to render the scene twice: once as the normal render, and once with all objects black except for the sun. How should I go about doing that? Should I work with initial states and tag states?

That’s exactly what those features are for. Create two DisplayRegions and two Cameras. The first Camera will be normal, and the second Camera will have an initial state set on it to make everything black, except for the sun, which will have a tag state set on it to make it whatever color you want it to be.
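
Roughly something like this, as a sketch (the buffer size, the “god-ray” tag name and the sun’s tag-state color are all just placeholders):

from pandac.PandaModules import NodePath, Vec4

# Second camera renders the occlusion pass into an offscreen buffer
buf = base.win.makeTextureBuffer("occlusion", 512, 512)
buf.setClearColor(Vec4(0, 0, 0, 1))
cam2 = base.makeCamera(buf)
cam2.node().setLens(base.camLens)

# Initial state: render everything black, with a high override priority
black = NodePath("black")
black.setColor(0, 0, 0, 1, 10000)
black.setShaderOff(10000)
cam2.node().setInitialState(black.getState())

# Tag state: nodes tagged "god-ray" = "sun" keep a state of their own instead
sunState = NodePath("sun-state")
sunState.setColor(1, 1, 0.9, 1, 10000)
cam2.node().setTagStateKey("god-ray")
cam2.node().setTagState("sun", sunState.getState())

# ...and on the sun model itself:
# sunModel.setTag("god-ray", "sun")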

David

I’m still looking for a clean way to integrate this. Right now, this works:

if (configuration.has_key("VolumetricLighting")):
    if self.vlbuffer == None:
        self.vlbuffer = base.win.makeTextureBuffer("VolumetricLighting",
            base.win.getXSize() / 2, base.win.getYSize() / 2)
        self.vlbuffer.setClearColor(Vec4(0, 0, 0, 0))
        cam = base.makeCamera(self.vlbuffer)
        cam.node().getLens().setFov(90)
        b = NodePath("aaa")
        b.setColor(0, 0, 0, 1, 10000)
        b.setColorScale(0, 0, 0, 1, 10000)
        b.setShaderOff(10000)
        b.setMaterialOff(10000)
        cam.node().setInitialState(b.getState())
        caster = configuration["VolumetricLighting"].caster
        caster.setState(caster.getState().adjustAllPriorities(20000))
    self.textures["vlbuffer"] = self.vlbuffer.getTexture()

This just makes a second camera with an initial state of “everything-black” and a high override value. “caster” is the sun; it should be rendered exactly as the user specifies, which is why I’m adjusting its priorities to be higher than the “everything-black” state.

Any ideas on how I can make this cleaner? (Preferably without messing with people’s scene graphs.)
I thought maybe I could do this by using an extra output render target for the shader generator, but that would only allow a few lights to cast god rays.

Instead of storing the caster’s modified state back on the caster itself, why not store it on the camera, under a tag state key, and require the user to set a particular tag on the caster?
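
In code, that would be roughly something like this (the tag name “vl-caster” is just illustrative):

# On the volumetric-lighting camera:
cam.node().setTagStateKey("vl-caster")
cam.node().setTagState("on", caster.getState().adjustAllPriorities(20000))

# ...and the user tags the caster:
# caster.setTag("vl-caster", "on")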

That still leaves the possibility that the user might want to modify the caster’s state later, which would fail. Maybe we need a new feature on Camera, something like a “default” tag state, which would be applied to every GeomNode that lacked a matching tag in its ancestry. Then you could put the render-everything-black state on this default tag state, instead of on the initial state, and then you wouldn’t have to undo the high override for the caster. I dunno, though; that’s a little bit tricky to implement properly, and it would add a bit of overhead during the cull traversal, even if it is not used.

David

What if the user then changes the state on the caster (e.g. changes the sun color)? Would I then need to check every frame whether the state has changed and re-set it as the tag state?
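
I.e., roughly something like this every frame (a sketch, reusing the names from above)?

def updateCasterState(task):
    # Copy the caster's current state back into the camera's tag state
    cam.node().setTagState("on", caster.getState().adjustAllPriorities(20000))
    return task.cont

taskMgr.add(updateCasterState, "update-caster-state")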

Right, that’s the problem I was talking about in the second paragraph. It’s actually worse than that, though, because you really need the net state of the caster (it might be inheriting state from above), not just the state on the caster itself. Plus, the caster might be a parent of lots of sub-nodes, each of which has its own state.
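
Copying the net state instead would at least pick up the inherited part, roughly:

cam.node().setTagState("on", caster.getNetState().adjustAllPriorities(20000))

but that still wouldn’t handle sub-nodes that carry their own states.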

David