offscreen buffers and render textures

Do you typically use a separate buffer for each texture or can you use the same buffer and choose which texture to use on the fly? Still trying to get my head around this stuff.

Does each buffer mean I’m doing an extra render pass? So fewer is better? Is this correct?

Say for instance I want one camera to capture a front view and another to view behind. Or can I use the same cam/buffer, change its orientation, and render the rear view into another texture? Does this even make a difference, since it would still require another render pass? Hmm, I’m just confusing myself more…

I probably have the idea all wrong so any help in understanding would be appreciated!

Forgive my noobism,
Tim

A buffer is basically any image that is being rendered/loaded/cached. Some buffers do get rendered directly to the screen, but that need not be the general case. The window buffer in Panda is by default filled with the image rendered by the primary camera. Depending on how you set it up, a buffer may also contain other ‘bitplanes’ - typically invisible ‘images’ of per-pixel metadata: depth, masking properties, etc.
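To make that concrete, here’s a minimal sketch of setting up an offscreen buffer with a color texture plus a depth bitplane in Panda (typed from memory and untested, so double-check the names against the API reference):

from direct.showbase.ShowBase import ShowBase
from panda3d.core import Texture, GraphicsOutput

base = ShowBase()

# An offscreen buffer whose color output is captured into a texture.
buf = base.win.makeTextureBuffer("offscreen-buf", 512, 512)

# Attach a second bitplane: a depth texture alongside the color one.
depth_tex = Texture("depth")
buf.addRenderTexture(depth_tex, GraphicsOutput.RTMBindOrCopy, GraphicsOutput.RTPDepth)

# A camera that renders the main scene graph into this buffer each frame.
buf_cam = base.makeCamera(buf)

# The color texture can now be applied to geometry like any other texture.
color_tex = buf.getTexture()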

Any time you refill a buffer by doing math on a particular source (e.g. a camera, or certain blending/postprocessing operations), that’s a render pass. You may be able to set up your render pipeline so a buffer is processed back into itself to save on buffer allocation, but that won’t cut down on render passes so long as the processing still needs to be done.

You can probably change a buffer’s source on the fly, by associating it with a different camera, loading a different file into it, etc., but apart from old graphics cards short on texture memory, that may not gain you much over setting up multiple cameras with independent buffers and just disabling the cameras you aren’t using.
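Roughly, both options look like this (untested sketch; buf, cam_a, and cam_b are placeholder handles):

# Option 1: one buffer, swap which camera feeds it.
# A buffer draws whatever its display region's camera sees, so keep a handle
# to the region and retarget it on the fly.
dr = buf.makeDisplayRegion()
dr.setCamera(cam_a)            # cam_a, cam_b: NodePaths wrapping Camera nodes
# ... later ...
dr.setCamera(cam_b)            # same buffer, different view

# Option 2: several cameras/buffers, just switch off the idle ones.
buf.setActive(False)           # skip rendering this buffer each frame
cam_b.node().setActive(False)  # or disable the camera itself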

For your situation, you could in theory set up one buffer and switch between associating it with 2 different cameras, but I don’t think you could associate it with 2 cameras simultaneously and get 2 independent images out of it, barring some ugly manual futzing with target texture coordinates - in which case you’d have an easier time with 2 independent buffers anyway. And if you’re going to juggle one buffer between two cameras, why not make one camera and just switch its position as needed? Likewise, you could keep one camera and switch the buffer it renders to on the fly. That could be useful if you, say, had 2 objects, one textured with renderTextureA and one with renderTextureB, and only one or the other needed to update at any given time - but that’s the sort of trick I’d only expect to come up as an elegant solution to a specific problem, not as general-case usage.
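If you do want both views live at once, the two-buffer version would look roughly like this (untested; it assumes the setup above, and player and mirror_card are placeholder nodes):

# Two buffers, two cameras: front view and rear view, each into its own texture.
front_buf = base.win.makeTextureBuffer("front-view", 256, 256)
rear_buf = base.win.makeTextureBuffer("rear-view", 256, 256)

front_cam = base.makeCamera(front_buf)
rear_cam = base.makeCamera(rear_buf)

# Park both cameras on the vehicle and spin the rear one 180 degrees.
front_cam.reparentTo(player)
rear_cam.reparentTo(player)
rear_cam.setH(180)

# Use the results like any other textures, e.g. on a rear-view mirror card.
mirror_card.setTexture(rear_buf.getTexture())

Each of those buffers is still its own render pass every frame, so if you only ever need one view at a time, disabling the unused buffer (or camera) is the cheap way out.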

I recently toyed with an application that had 5 cameras rendering to 5 texture buffers, with those 5 buffers passed as textures into a shader which composited them in a particular way to fill a UV space on a card mapped to the final screen output. Other applications use multiple buffers to get different types of specialty data, e.g. rendering a scene from the perspective of a light source, tagging the textures/UVs accessed in the process, and using that data to paint a light-map/shadow-map onto the scene as rendered by a second camera. You can probably find other theory/examples on The Internet.

hi Ninja,

I have had this question in my mind for quite some time. If I have a postprocessing blur filter that requires 10-20 passes (on a scaled-down screen), would it be a problem to create 10-20 buffers correspondingly and still have it run smoothly on old graphics cards?

If I want to reduce the number of buffers, how can I do it in Panda? I don’t know how to do it with FilterManager, or how to code it directly.

I have not used buffers extensively in Panda, and what I have done was mainly for multiple cameras rather than postprocessing. Hence, I can’t quote you any exact code for setting up postprocessing buffers, but from a theoretical perspective I’d think you could get by with as few as 2 buffers, repeatedly rendering from one to the other, or even 1 if you can render directly back into the same buffer, provided you had tight enough control of the render pipeline.
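Something along these lines is what I’d experiment with for the multi-pass chain (untested, typed from theory; the shader name and pass count are made up):

from direct.filter.FilterManager import FilterManager
from panda3d.core import Texture, Shader

manager = FilterManager(base.win, base.cam)
num_passes = 10                       # however many passes your blur needs

# Render the scene into a texture, then chain quad-render passes off it.
scene_tex = Texture()
finalquad = manager.renderSceneInto(colortex=scene_tex)

src = scene_tex
for i in range(num_passes - 1):
    dst = Texture()
    quad = manager.renderQuadInto(colortex=dst, div=2)   # work at half resolution
    quad.setShader(Shader.load("blur_pass.sha"))          # made-up shader name
    quad.setShaderInput("src", src)
    src = dst

# The last pass goes onto the quad that actually reaches the window.
finalquad.setShader(Shader.load("blur_pass.sha"))
finalquad.setShaderInput("src", src)

The 2-buffer ping-pong version would just reuse two Texture objects for src/dst alternately instead of allocating a fresh one per pass - whether Panda is happy with several intermediate buffers sharing a render target is exactly the kind of thing I’d test first.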

Either way, though, I don’t imagine using fewer buffers would give you ~too~ much benefit beyond texture memory footprint. I’m pretty sure fill rate would be your biggest bottleneck, and that’s unaffected by which buffer you’re filling.

Again, I haven’t played with filters in Panda, but if you have any control over creating/setting buffers, one quick experiment would be to plug the same buffer into each filter stage and see how it reacts. As long as postprocessing is applied in a strict read-execute-write order, and each pixel doesn’t care about its neighbors or about values from further back than the previous pass, I see no reason this shouldn’t work. For bloom/glow/motion-blur filters, however, this would break down: blurs, I think, would need surrounding-pixel info, so you couldn’t overwrite your source image right away, and motion blur in particular would need information from multiple frames/passes back. But this is all theory as I see it, and I’ve already been corrected once on my understanding of the Panda graphics pipe.

Alright. I got bored and decided to research this a bit, since I really should know it in some form. The manual page
panda3d.org/manual/index.php/G … ge_Filters
looks to have your answer: yes, it is possible to have the same texture as a shader (filter) input and as the render target. Whether this makes sense, however, still likely depends on what your shader wants to do with the image. Unless an intermediate buffer is implicitly created during the shader pass, if your shader needs access to pixels other than the current fragment, you’ll want to be sure you aren’t overwriting them until after the whole image is processed.
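If I’m reading that right, the experiment would look something like this (untested sketch; the shader names are made up, and whether the driver tolerates reading and writing the same texture in one pass is exactly the caveat above):

from direct.filter.FilterManager import FilterManager
from panda3d.core import Texture, Shader

manager = FilterManager(base.win, base.cam)

tex = Texture()
finalquad = manager.renderSceneInto(colortex=tex)

# An intermediate stage that both reads from tex and renders back into tex.
quad = manager.renderQuadInto(colortex=tex)
quad.setShader(Shader.load("inplace_pass.sha"))   # made-up shader name
quad.setShaderInput("src", tex)

# The final quad samples tex and draws to the window.
finalquad.setShader(Shader.load("final_pass.sha"))
finalquad.setShaderInput("src", tex)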

This still isn’t telling me how Panda determines the order in which to run the stages - if each stage renders to a different texture, the order can be inferred by tracing which textures each shader needs as inputs and which texture it outputs to. If you try to do something tricky like having the camera processed into tex1, processed into tex2, which is processed back into tex1, which is processed into the window… my gut says it should be possible, and that there must be a way of specifying the render order, but that’s lower level than I’ve ever really attempted. The simple experiment is to just declare the shaders in a particular order and see whether that alone is enough.

In any given frame, Panda renders each buffer once, and in the order specified by buffer.setSort(). Thus, if your frame requires multiple render passes, you will need to use a different buffer for each pass.
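For example, with some hypothetical buffer handles (lower sort values render earlier in the frame):

scene_buf.setSort(-3)   # rendered first
blur_buf.setSort(-2)
final_buf.setSort(-1)   # rendered last of the three, before the main window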

David

Well, huh. Then care to explain how the multi-stage filter sample code on the page I linked is working? It gave me a bit of a headache until I traced it out on paper, but it looks like

  1. camera renders to finalquad, implicitly to buffer tex1
  2. tex1 is rendered via stage1.sha into interquad, implicitly to buffer tex2
  3. tex2 is rendered via stage2.sha into finalquad, implicitly to tex1 again, which is displayed in the window.
    Unless, that is, renderSceneInto generates a quad with 2 associated buffers: the tex plugged into the call and the shader target rendered to the screen.
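(For reference, the sample I’m tracing goes roughly like this - reconstructed from memory of the manual page and assuming the usual ShowBase setup, so details may be slightly off:)

from direct.filter.FilterManager import FilterManager
from panda3d.core import Texture, Shader

manager = FilterManager(base.win, base.cam)
tex1 = Texture()
tex2 = Texture()
finalquad = manager.renderSceneInto(colortex=tex1)
interquad = manager.renderQuadInto(colortex=tex2)
interquad.setShader(Shader.load("stage1.sha"))
interquad.setShaderInput("tex1", tex1)
finalquad.setShader(Shader.load("stage2.sha"))
finalquad.setShaderInput("tex2", tex2)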

Unless… the syntax is making this all incredibly counterintuitive, with each render*Into call saying “yeah, render this thing to where it was going to render to, but anything that was going to render to it instead renders to a texture”. Meaning the code reads more like

  1. the camera, which normally renders to the window buffer, is hijacked: the window buffer is now attached to the handle ‘finalquad’, but the camera renders into tex1 instead
  2. a new null buffer is created and simultaneously hijacked, with the buffer object pinned to the handle ‘interquad’ but any rendering redirected to tex2.
    2b. shader1 renders “into interquad”, which really means rendering into tex2, because interquad, which would normally not have any particular render target, is now redirected to tex2
  3. finally, ‘finalquad’ is rendered to explicitly by shader2, and since it’s still the window buffer, the shader2 output goes to the screen.

I… think I got it.
But I’m a little too hungry atm to follow through on what that would mean for the idea I previously had of trying to set up multiple render*Into calls to use the same texture…