Proposal to integrate Shadow Mapping

The host is used when creating a ParasiteBuffer. I think FBOs also require a host window, because the FBO is designed to layer onto the existing framebuffer.

I think it is legitimate to require a host; maybe we should pass this pointer in at the beginning, in addition to (or in lieu of) the GraphicsPipe.

David

Yeah - but where should we pass it? The ShaderGenerator is constructed by the GSG, not by the user. We’d need to put that somewhere in ShowBase, where the GSG is created, and store it in the GSG itself.

Foo. Well, at the time the GSG creates the ShaderGenerator, it actually has enough information to figure out which window it is currently drawing. This can be accessed via _scene_setup->get_display_region()->get_window(). But does this mean that we should have a different ShaderGenerator for each window, too? Bleah.

How about we keep the model of one ShaderGenerator per GSG, and don’t store a host within the ShaderGenerator? Instead, we pass a host at the time we call get_generated_shader(), so each buffer can be created with whatever host is first used to render the lighted scene.
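
Something along these lines, just a sketch; the exact signature and member names are still up for discussion:

    // Sketch only: pass the host window down at shader-generation time,
    // instead of storing it in the ShaderGenerator.  The GSG already knows
    // which window it is currently drawing via the current SceneSetup.
    // 'state' is the RenderState we are generating a shader for.
    GraphicsOutput *host =
      _scene_setup->get_display_region()->get_window();
    CPT(ShaderAttrib) shader_attrib =
      _shader_generator->get_generated_shader(state, host);  // hypothetical extra parameter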

David

Ah, ok. That sounds like a fair solution.

Meanwhile, I discovered I could temporarily work around it by using BF_require_window, or by using _gsg->get_engine()->get_window(0) as the host.
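
Roughly like this (abbreviated sketch, not my exact code; the framebuffer properties and flags are illustrative):

    // Abbreviated sketch of the temporary workaround: use the first window
    // the engine knows about as the host for the shadow buffer.
    FrameBufferProperties fbp = FrameBufferProperties::get_default();
    fbp.set_depth_bits(1);
    GraphicsOutput *host = _gsg->get_engine()->get_window(0);
    PT(GraphicsOutput) sbuffer = _gsg->get_engine()->make_output(
      _gsg->get_pipe(), light->get_name(), -10, fbp,
      WindowProperties::size(512, 512),
      GraphicsPipe::BF_refuse_window, _gsg, host);
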
I ran into a segfault here:

GraphicsEngine::draw_bins (this=0x26456d8, wlist=@0x26457d8, 
    current_thread=0x2757728) at graphicsEngine.cxx:1420
1420	    if (win->is_active() && win->get_gsg()->is_active()) {

Meaning ‘win’ or its GSG points to something invalid or NULL. Did I forget an important step? When I reconstructed the same code in Python, I didn’t get any crash.
EDIT: Adding a cerr statement shows that “win” always points to 0x21.

Sounds like you ran into trouble because you added a new buffer, thus modifying wlist, during the draw traversal (while we are iterating through wlist). This is not valid, particularly not with a vector, since adding a new element to the vector could invalidate all the iterators currently open on the vector.

What a mess, huh? Hmm, the simplest way to work around this is to use an integer counter in draw_bins() to iterate through wlist (0 to size), instead of a vector iterator. Not sure if this is the most elegant way, but it will do for now. :)
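
For instance (illustrative, not the exact source):

    // Index-based loop: indexing by position survives wlist growing during
    // the traversal, whereas iterators would be invalidated if adding a new
    // buffer makes the underlying vector reallocate.
    for (size_t i = 0; i < wlist.size(); ++i) {
      GraphicsOutput *win = wlist[i];
      if (win->is_active() && win->get_gsg()->is_active()) {
        // ... cull and draw this window as before ...
      }
    }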

David

Works perfectly, thanks.

I’m nearly there, but I seem to be getting this error:

:display:gsg:glgsg(error): GL texture creation failed for spotlight : invalid operation
:display:gsg:glgsg(error): Could not load spotlight
:display:gsg:glgsg(error): GL texture creation failed for spotlight : invalid operation
:display:gsg:glgsg(error): Could not load spotlight

This is the code that creates it:

    // Create the depth texture and have the shadow buffer render its
    // depth-stencil plane into it (bound directly as an FBO attachment
    // if possible, otherwise copied after rendering).
    PT(Texture) tex = new Texture(light->get_name());
    sbuffer->add_render_texture(tex,
               GraphicsOutput::RTM_bind_or_copy,
               DrawableRegion::RTP_depth_stencil);
    // If we support the shadow filter, enable it; FT_shadow turns on
    // hardware depth comparison when the texture is sampled.
    if (_use_shadow_filter) {
      tex->set_minfilter(Texture::FT_shadow);
      tex->set_magfilter(Texture::FT_shadow);
    }

EDIT: with a sort value of -10, the error shows up just twice, while with a sort value of 10, it shows up a lot more times.

That’s the sort of error message you get when you try to render a texture that has no associated image: e.g., a texture that has not yet been rendered to. So it’s not surprising that you get it the first frame, since you haven’t yet rendered the texture. Maybe as a workaround, you can preload the texture with a 1x1 white image or something.

David

Hmm, sounds rather hacky. Isn’t there a way to make sure the texture is rendered into before being passed to OpenGL? Otherwise I’d need to use PNMImage in the shader generator. Or, I could manually fill in the ram image data.

By the way, I’ve got it working. Here’s a screenshot showing five shadow-casting point lights at the same time:

It’s not finished yet, though; it contains some workarounds here and there, and there are still issues (e.g. when disabling shader generation and then enabling it again).

(Also: a light doesn’t have a near and far, does it? Currently, the user needs to set them himself if the defaults aren’t right for his scene. But I guess that’s inevitable, since tweaking near/far is so important for good depth buffer precision.)

Well, in this case it clearly isn’t, because the texture doesn’t get rendered into until its buffer gets rendered, and you’ve only just created the buffer. So for the very first frame, you’re putting an undefined texture in the scene.

Granted, we could do a better job of detecting that you’ve just put an undefined texture in the scene, and filter it out automatically before it gets enabled. The fact that it doesn’t do this is arguably a bug in Panda. It is, however, surprisingly difficult to fix; presumably it will be fixed one day.

That’s what I meant when I suggested putting in a 1x1 white texture. You could fill in the ram image data with the four bytes necessary to define a 1x1 texture.
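
For instance, something like this (illustrative sketch):

    // Give the texture a defined 1x1 white ram image before the first
    // frame, so it isn't undefined when the scene first references it.
    // The buffer is expected to reconfigure it once it is rendered into.
    PT(Texture) tex = new Texture(light->get_name());
    tex->setup_2d_texture(1, 1, Texture::T_unsigned_byte, Texture::F_rgba);
    PTA_uchar image = tex->make_ram_image();
    image[0] = image[1] = image[2] = image[3] = 0xff;  // one white RGBA texel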

Awesome! Looks great!

I guess it does now. A LensNode, of course, does provide a near and far; so our shadow lights also have this concept now.
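
So tuning it is just a matter of something like this (illustrative sketch; names arbitrary):

    // A shadow-casting light is a LensNode, so its depth range can be
    // tuned through its lens for better shadow-map precision.
    PT(Spotlight) slight = new Spotlight("slight");
    slight->get_lens()->set_near_far(1.0f, 100.0f);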

David

pro-rsoft, I’ve just been struck by a thought… Do you use the problematic ARB_shadow extension in your code (which we talked about in another thread)? If so, does it mean that shadows will be unusable on ATI hardware?

Four? I only need grayscale for the depth buffer, right?
By the way, if I increase the component width of the texture, will a higher-precision depth buffer also be created?

No. I’ve added some checks that make sure COMPARE_R_TO_TEXTURE and sampler2DShadow/shadow2DProj aren’t used when get_supports_shadow_filter() returns false (which also returns false on your ATI card). Instead, it uses the older method of comparing the depth value against the light-vertex distance.

Well, OK, if you wanted to be miserly about it. :)

Hmm, I don’t think it works that way. You have to request a higher-precision depth buffer in the FrameBufferProperties when you create the offscreen buffer in the first place.

David

Ah, okay. And set_depth_bits(1) means: get as many bits as you can, right?
Does it matter how I fill in my texture? Will it automatically be reconfigured to match the buffer once Panda is actually ready to fill it in?
Or would I need to check how many depth bits the buffer has, and set the component_width based on that?

set_depth_bits(1) technically means: get as many bits as you can, consistent with all the rest of the framebuffer requests. If you really want a big depth buffer, sacrificing stencil if necessary, you should explicitly request it.

The texture is supposed to reconfigure itself to match the buffer.
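
If you want to be explicit about it, something like this when you build the buffer request (illustrative values):

    // Explicitly request a deep depth buffer for the shadow buffer,
    // sacrificing stencil if necessary, instead of relying on
    // set_depth_bits(1) to pick something up by default.
    FrameBufferProperties fbp;
    fbp.set_depth_bits(24);   // or 32, if the hardware offers it
    fbp.set_stencil_bits(0);  // a shadow map doesn't need stencil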

David

I’m still getting the error… Here’s my code (just after creating the buffer):

    // Attempt to preload the texture with a defined 1x1 image so the
    // first frame doesn't bind an undefined texture.
    tex = new Texture(light->get_name());
    tex->setup_2d_texture(1, 1, Texture::T_float, Texture::F_depth_stencil);
    tex->make_ram_image();
    sbuffer->add_render_texture(tex, GraphicsOutput::RTM_bind_or_copy,
                                DrawableRegion::RTP_depth_stencil);

Isn’t make_ram_image supposed to fill in the image with 0s?

Yeah, hmm, what happens if you specify T_unsigned_byte and F_rgba for the texture properties instead? (The texture should be reset to the correct depth properties when it is first copied.)

David

Hmm, doesn’t work.

:pgraph: Generating shader for render state 0x7f7b17abde68
:display:gsg:glgsg(error): GL texture creation failed for spotlight : invalid operation
:display:gsg:glgsg(error): Could not load spotlight
:display:gsg:glgsg(error): GL texture creation failed for spotlight2 : invalid operation
:display:gsg:glgsg(error): Could not load spotlight2
:display:gsg:glgsg(error): GL texture creation failed for redSpotLight : invalid operation
:display:gsg:glgsg(error): Could not load redSpotLight
:pgraph: Generating shader for render state 0x7f7b17ab8250
:pgraph: Generating shader for render state 0x7f7b17aba160
:pgraph: Generating shader for render state 0x7f7b17ab7f48
:display:gsg:glgsg(error): GL texture creation failed for spotlight2 : invalid operation
:display:gsg:glgsg(error): Could not load spotlight2
:display:gsg:glgsg(error): GL texture creation failed for redSpotLight : invalid operation
:display:gsg:glgsg(error): Could not load redSpotLight

The first time, it complains about all lights; the second time, about all lights except the first one.

Besides that, there are two more issues bothering me.

  • I’ve tried to make LightLensNode read from and write to .bam correctly, but I failed: I get a crash when I try to pview my .bam file. This is my code:
    codepad.org/SGVWZzfs
    Am I doing something terribly wrong there? I really have no idea how the .bam loading/writing works beyond what I’ve seen in other files.

  • Storing the generated shader attrib: where would we do that? Currently I store it on the RenderState (by doing a const cast), since the generated shader is the same across different GSGs, but you mentioned that you’d prefer to store them in a table on the RenderState or GSG. How would we prevent memory leaks in that case?

One detail is that your parameters may be evaluated in the wrong order. C++ doesn’t guarantee the order in which function arguments are evaluated; in practice it’s usually right-to-left, so your values will be pulled out of the bam file in the wrong order. You should save them in temporary variables first, and then pass them into the function. Not sure if this is causing your crash or not, but it would mean that set_shadow_caster() would be called with meaningless values.
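
In other words, something like this in your fillin method (the field names here are illustrative; read whatever fields you actually wrote out):

    // Pull the values out of the datagram into temporaries first, so the
    // read order is explicit, instead of e.g.
    //   set_shadow_caster(scan.get_bool(), scan.get_int32(), scan.get_int32());
    // where the argument evaluation order is unspecified.
    bool caster = scan.get_bool();
    int buffer_x = scan.get_int32();
    int buffer_y = scan.get_int32();
    set_shadow_caster(caster, buffer_x, buffer_y);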

Actually, I do prefer simplicity over complexity, so storing them directly on the RenderState is preferable as long as it works. If we have to go to a table, then yeah, it gets more complicated and we’d have to come up with a way to prevent leaks, either by providing a cache-clearing mechanism, or by using weak ref-count pointers or some similar system that can automatically remove the pointers when they destruct. Either way it gets messy. Honestly, I haven’t spent much time thinking about it.

David

Thanks, I’ve fixed that but it still gives the same crash. Traceback:
pastebin.ubuntu.com/164836/