2D and 3D Rendering Layers

Hi Guys,

Is there a way I can set the order in which entities on the screen get displayed? For instance, can I make it so that even if actor1 is farther from the camera and is therefore blocked by actor2, actor1 still gets fully rendered over actor2? Is there something similar to ‘rendering layers’ that would allow me to set this? What I have in mind is putting an actor in an environment with many entities (trees, pillars, walls) in which our view of him would be unobstructed regardless of where he is.

My second question is: is there a way I could include the default GUI system in the layers? For example, what if I wanted to display an Actor over a scroll bar or a 2D image?

Thanks! :slight_smile:

Yes, this is certainly possible. Panda gives you lots of power over the rendering order if you want it. However, it is complicated, and it may not give you the visual results that you want. (If your character is drawn in front of a tree, the eye will believe he is standing in front of the tree, even if he obviously isn’t. It can be very confusing.)

If you want to go this route, your primary tools are DisplayRegions and bins. DisplayRegions are the top-level sorting mechanism, and within a DisplayRegion you can control sorting via bins. The depth buffer is also important, because in most normal cases it is the depth buffer that makes objects appear correctly behind one another, regardless of the order in which they are drawn.
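For the common “always draw this actor on top” case, bins plus depth settings are often enough on their own. Here is a minimal sketch, assuming the sample models that ship with Panda3D (swap in your own model paths):

```python
from direct.showbase.ShowBase import ShowBase

base = ShowBase()

# Placeholder models: an environment full of occluders, plus one actor
# that should stay visible no matter what stands in front of it.
environment = base.loader.loadModel("models/environment")
environment.reparentTo(base.render)

actor = base.loader.loadModel("models/panda-model")
actor.reparentTo(base.render)

# Draw the actor in the "fixed" bin, which is rendered after the normal
# scene bins; the second argument is the sort order within that bin.
actor.setBin("fixed", 0)

# Ignore the depth buffer when drawing the actor, so geometry rendered
# earlier (trees, pillars, walls) cannot occlude it, and don't write
# depth either, so the actor doesn't punch holes in later geometry.
actor.setDepthTest(False)
actor.setDepthWrite(False)

base.run()
```

A separate DisplayRegion (with its own camera and a higher sort value) gives you the same effect at a coarser level, and is the tool to reach for if you also want to layer 3D content relative to the GUI.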

I recommend reading the manual page Display Regions and the following few pages, as well as the page How to Control Render Order and Depth Test and Depth Write.

David

As a sidenote question: at the moment an object is rendered, does the framebuffer already contain what was previously rendered in this frame (the objects that are behind it), and can I use that as a texture? I have a refraction shader that uses multiple DisplayRegions, render-to-texture, and a projected shadow depth map for occlusion, but I was wondering whether it could be done more easily.
Basically, in my water shader, I want to get the pixel currently behind the pixel being rendered.

Well, the framebuffer contains the pixels from what has already been rendered, sure, but you can’t use it as a texture until the frame is done. (When you use render-to-texture, the texture is locked while the frame is rendering; this is a hardware requirement.) Of course, if you’re using copy-to-texture instead (as you might fall back to on some drivers), you can render with the texture, but it won’t contain the current pixels until it has been copied, which again happens at the end of the frame.
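For reference, a rough sketch of how the two modes are requested on an offscreen buffer; the buffer name, size, and sort value here are arbitrary:

```python
from direct.showbase.ShowBase import ShowBase
from panda3d.core import GraphicsOutput, Texture

base = ShowBase()

# Offscreen buffer, sorted to render before the main window.
buf = base.win.makeTextureBuffer("refraction-buffer", 512, 512)
buf.setSort(-100)

# makeTextureBuffer() sets up render-to-texture (RTMBindOrCopy) where the
# driver supports it; this texture is bound to the buffer while rendering.
rtt_tex = buf.getTexture()

# Explicitly requesting copy-to-texture instead: the framebuffer contents
# are copied into this texture only once the buffer has finished rendering.
copy_tex = Texture("copy-tex")
buf.addRenderTexture(copy_tex, GraphicsOutput.RTMCopyTexture)

# A camera that draws the scene into the offscreen buffer.
buf_camera = base.makeCamera(buf)

base.run()
```

Either way, the texture reflects the previous pass or frame, not the pixels behind the fragment currently being shaded, which is why multi-pass setups like yours are the usual approach.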

David

Thank you so much for this. I guess I’ll start playing around with display regions.

:slight_smile: