Proposal to integrate Shadow Mapping

Well, you could just call setShaderAuto on the nodes you don’t want to have shadows but still be lit, or setLightOff on the nodes you don’t want to have lighting.
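A minimal sketch of those two per-node switches (the model and node names are hypothetical, but setShaderAuto and setLightOff are the standard NodePath calls):

from direct.directbase import DirectStart

scene = loader.loadModel("models/environment")   # hypothetical model
scene.reparentTo(render)
scene.setShaderAuto()        # shader generator applies to this subtree, so shadow mapping would too

unlit = scene.find("**/sky")                     # hypothetical child that should stay unlit
if not unlit.isEmpty():
    unlit.setLightOff()                          # excluded from lighting entirely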

Oh, another question: would it have to be one buffer per light per GSG, or can we make it one buffer per engine or pipe? I imagine if there are multiple GSG’s on the same scene (e.g. if someone set up some entirely unrelated buffer but supplied gsg=0, so it made a second GSG), we could just use one buffer? Or is that not possible?

The buffer is intricately associated with its GSG, so there must be a different set of buffers for each different GSG.

David

Ah, fair enough. More things popped into my mind:

(1) Buffer size. IMHO there needs to be a way to specify the size of the buffer, per light. A global everything-illuminating sun will require a larger buffer than, say, a tiny flashlight carried by someone 50m away. So, we could add xsize and ysize parameters to the setShadowCaster method, possibly with a default size of 512x512 (a rough sketch of that call follows after point (3)). Or, we could add a setShadowBufferSize method, but that would require a mechanism to resize the buffers after they have been created (is that possible at all?)

(2) If we want to add the per-light code to both Spotlight and DirectionalLight (which isn’t much, but it could grow later if we add more parameters like shadow softness), we will end up with duplicate code. Should we add a ShadowLight class, which inherits from Camera? Both Spotlight and DirectionalLight would then inherit from ShadowLight.

(3) PointLight. Should we handle those? It would be possible using a cube map buffer (ha, that would force me to add cubemap support to the SG), but I guess since we can’t inherit from Camera six times, it wouldn’t quite work with the current approach. Well, I guess people won’t want to enable such an expensive shadow caster, which requires 6x more renders, anyway.
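To make point (1) concrete, here is a rough sketch of how the proposed call might look (the exact signature is of course still open for discussion):

from pandac.PandaModules import DirectionalLight, Spotlight

sun = DirectionalLight("sun")
sun.setShadowCaster(True, 2048, 2048)    # scene-wide sun: give it a big buffer

flashlight = Spotlight("flashlight")
flashlight.setShadowCaster(True)         # falls back to the proposed 512x512 default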

I believe PointLight is a very cool light. I can’t imagine any medieval-style setting without candles, torches or campfires… It would be a pity to have no shadows with it. The developer can always find some workaround, but in the end it will be more of a performance drain than having true PointLight shadows (even cube-mapped ones)…
PS: I remember I saw something interesting on cost-effective PointLight shadows on the internet. I’ll try to search.

Hmm, I think the most common ways to do point light shadows are either to use cube maps, or to cleverly use multiple spotlights with a big FOV.
(Those two approaches are actually more similar than you might think.)
Let me know if you find more ideas.
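For illustration, a rough sketch of the spotlight-based workaround (an entirely hypothetical setup: six 90-degree spotlights parented to one point, together covering all directions):

from direct.directbase import DirectStart
from pandac.PandaModules import Spotlight, PerspectiveLens, NodePath

fakePoint = NodePath("fake-point-light")
fakePoint.reparentTo(render)
fakePoint.setPos(0, 0, 5)

# Six orientations (heading, pitch) that together cover the whole sphere.
for i, (h, p) in enumerate([(0, 0), (90, 0), (180, 0), (270, 0), (0, 90), (0, -90)]):
    slight = Spotlight("face-%d" % i)
    lens = PerspectiveLens()
    lens.setFov(90, 90)
    slight.setLens(lens)
    slnp = fakePoint.attachNewNode(slight)
    slnp.setHpr(h, p, 0)
    render.setLight(slnp)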

Yes, now I remember. It was about optimization, but of a different kind. Somewhere it was suggested to use one cube map for four point lights (i.e. storing one light’s map in the red channel, another in the green, etc.). Sorry for the confusion.
Anyway, I would miss point light shadows even though they are so costly… In some situations (in many adventure games, RPGs and so on) you can’t live without a point light…

EDIT: Found something interesting about omni-directional lights (2 passes for a point light instead of 6 passes): http://www.mpi-inf.mpg.de/~tannen/papers/cgi_02.pdf Maybe it helps a little…
EDIT2: Example of the above: http://graphicsrunner.blogspot.com/2008/07/dual-paraboloid-shadow-maps.html
Example of a more advanced version of the above: http://graphicsrunner.blogspot.com/2008/07/dual-paraboloid-variance-shadow-mapping.html

Sounds very interesting. I’ll keep that in mind for later - right now, I will focus on getting the spot and directional lights working.

But I agree that point lights are very important and should be supported if there is a good way to do so.

  1. Specifying the buffer size at power-up seems reasonable. I wouldn’t mind having a way to change your mind about buffer size, too. There’s no way to resize a buffer after the fact, but you can of course destroy the old buffer and create a new one when needed (a rough sketch of that follows after point 2). The ShaderGenerator could have the necessary logic to do this.

  2. Hmm, crazy inheritance chains get messy. On the other hand, having the necessary shadow interfaces on a common base class does make certain things easier. But if we are going to eventually support PointLights anyway, maybe that common class is just Light. (Only AmbientLight wouldn’t have any use for the shadow interfaces, but I see no real harm in leaving the interfaces there anyway.)
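Picking up point 1, a minimal sketch of the destroy-and-recreate approach at the user level (the helper and its arguments are hypothetical; the ShaderGenerator would do the equivalent internally):

from direct.directbase import DirectStart
from pandac.PandaModules import *

def recreateShadowBuffer(oldBuffer, xsize, ysize):
    # There is no true resize, so tear the old buffer down and make a new one.
    engine = base.graphicsEngine
    if oldBuffer is not None:
        engine.removeWindow(oldBuffer)
    fbProp = FrameBufferProperties()
    fbProp.setDepthBits(1)
    return engine.makeOutput(base.pipe, "shadow-buffer", -10, fbProp,
                             WindowProperties.size(xsize, ysize),
                             GraphicsPipe.BFRefuseWindow,
                             base.win.getGsg(), base.win)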

David

  1. Ah, I’ll just make it destroy the buffer when the size gets changed. The SG will automatically recreate the buffer, then.

  2. Yeah, but I think we’re making it a little bit too crazy when even an AmbientLight will have shadow settings, a camera, and a lens.

Maybe. Solving this with a ShadowLight class is not so bad, though it does mean we’ll also need a ShadowLightNode class. But maybe the ShadowLightNode class simply replaces the LightLensNode class anyway, and we end up with about the same complexity as we have now.

David

Hey, I just had another idea - we can just add the shadow functionality to LightLensNode, and make the latter derive from Camera. Then, we can make DirectionalLight inherit from LightLensNode as well. How 'bout that?

Sounds perfect!

David

While the releases are still compiling (curse ztriangle with its recursive mass-including), I’ve started to implement this (I’m making great progress) but I ran into some more things.

If we make one ShaderGenerator per GSG, what becomes the function of the ShaderGeneratorBase now? Could we remove it or does it still serve a purpose?

Also, the buffer and display-region sort values: would they just be 0, or do we need a way to set the sort as well?

How should the shadowed scene be handled? You can call NP.setLight on multiple nodePaths, but you can’t call camera.setScene on multiple scenes, right?

Hmm, you’re right, we might be able to axe ShaderGeneratorBase. Let’s keep it around for now, though, until we’re sure we don’t need it. It doesn’t really do any harm, other than code complexity.

We’ll certainly need to be able to set the sort values. It’s important that the shadow buffers get rendered before the main scene tries to consult them. I guess we’ll need an interface on ShadowLight (or LightLensNode, or whatever) to specify this; but the defaults should be chosen to be sensible for a standard setup, e.g. -10 or so.
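At the user level, the ordering might look something like this (the buffer variable is hypothetical; the values just follow the -10 suggestion above):

shadowBuffer.setSort(-10)              # negative sort: rendered before the main window (sort 0)
dr = shadowBuffer.makeDisplayRegion()
dr.setSort(0)                          # only region in this buffer, so its sort hardly matters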

You can only call camera.setScene() once, but I don’t think you need to have a camera render multiple scenes. setScene() only changes what the camera is looking at, and isn’t related to the DisplayRegion(s) that reference the camera. You can call dr.addCamera(cam) multiple times for a given camera, to put the same camera in as many different DisplayRegions as you need.
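A tiny sketch of that, using DisplayRegion.setCamera here to attach one camera to two regions (the sizes and names are arbitrary):

from direct.directbase import DirectStart
from pandac.PandaModules import Camera

cam = render.attachNewNode(Camera("shared-cam"))
drLeft = base.win.makeDisplayRegion(0, 0.5, 0, 1)    # left half of the window
drRight = base.win.makeDisplayRegion(0.5, 1, 0, 1)   # right half of the window
drLeft.setCamera(cam)
drRight.setCamera(cam)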

David

Would that sort value be for the buffer, for the display region, or both? Or would they share the same sort?

Also, how do I const-cast from a CPT to a PT? In glGraphicsStateGuardian_src.cxx, in the function set_state_and_transform, I need to set a data member on the render state (containing the generated shader), but I can’t seem to do a simple const_cast since it’s a CPT and not an ordinary const pointer.

The offscreen buffer is only going to have one DisplayRegion, so the sort value on that DisplayRegion doesn’t matter and can be zero. It’s just the buffer’s sort value that matters, to order it relative to the main window. Is there another DisplayRegion on the main window that post-processes the shadow results? If so, this one would need to have a sort value to order it relative to the main DisplayRegion.

You can cast a CPT to a PT by first converting it to a normal pointer with the p() method. But I don’t think you should be modifying the RenderState, since this pointer will be shared by multiple different GSG’s. Instead, can we move that data member to the GSG itself?

David

Well, the generated shader is different for each RenderState. That’s what the code used to do - RenderState::get_generated_shader checked whether there was already a generated shader for that state; if not, it used the global ShaderGenerator to generate one, and then returned a ShaderAttrib.
Since the ShaderGenerator is now stored in the GSG and not global anymore, I have to move this code to glGSG_src.cxx.

Bleah. This means we need to cache this pointer for each unique pairing of RenderState and GSG, which calls for a lookup table: either a pmap of GSG’s in the RenderState, or a pmap of RenderStates in the GSG. Both have pros and cons. Maybe a pmap of GSG’s in the RenderState is the better of the two.

But, I still think this is the right thing to do. Having a ShaderGenerator be different for different GSG’s allows a lot of things we can’t do now, for instance, synthesize GLSL/HLSL, or synthesize a lower-level shader according to the capabilities of the GSG.

David

I’ve basically finished writing the code (I’ve temporarily used the const cast) but am hitting another wall. I can’t seem to create a buffer without specifying a “host”. Translated into Python, this fails for me:

from pandac.PandaModules import *
from direct.directbase import DirectStart

fbProp = FrameBufferProperties()
fbProp.setDepthBits(1)
assert base.graphicsEngine.makeOutput(base.pipe, "aaa",
      -10, fbProp, WindowProperties.size(512, 512),
      GraphicsPipe.BFRefuseWindow, base.win.getGsg()) != None

When I specify base.win as the “host”, it works.
What exactly is this “host” needed for, and how should I fix this? Could I just grab a random window from the GraphicsEngine?
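For reference, the same call with base.win passed as the host argument, which is the variant that works for me:

buf = base.graphicsEngine.makeOutput(base.pipe, "aaa",
      -10, fbProp, WindowProperties.size(512, 512),
      GraphicsPipe.BFRefuseWindow, base.win.getGsg(), base.win)
assert buf is not None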