Offscreen buffers don't honour requested depth-bits without a depth render-texture?

I’m not sure whether this is a bug or intended behaviour. It was unexpected to me, so I mention it here, both in case it is a bug and to inform others who might bump into it.

It seems that, when setting up an off-screen buffer, the number of depth-bits specified in the frame-buffer properties has little or no effect unless the buffer is also made to render depth to a texture.

For example, the code below shows no apparent depth-fighting, despite having only sixteen bits specified for depth:

        frameProperties = FrameBufferProperties()
        frameProperties.setMultisamples(multisamples)
        frameProperties.setRgbaBits(8, 8, 8, 0)
        frameProperties.setDepthBits(16)

        windowProperties = base.win.getProperties()

        self.mainBuffer = base.graphicsEngine.makeOutput(base.pipe, "Main Buffer", -1,
                                                  frameProperties, windowProperties,
                                                  GraphicsPipe.BFRefuseWindow,
                                                  base.win.getGsg(), base.win)
        
        # (Further initialisation omitted here)

        self.sceneTexture = Texture()
        self.sceneTexture.setWrapU(Texture.WM_clamp)
        self.sceneTexture.setWrapV(Texture.WM_clamp)

        self.mainBuffer.addRenderTexture(self.sceneTexture,
                                         GraphicsOutput.RTMBindOrCopy)

        self.card.setTexture(self.sceneTexture)

While the code below does produce visible depth-fighting:

        frameProperties = FrameBufferProperties()
        frameProperties.setMultisamples(multisamples)
        frameProperties.setRgbaBits(8, 8, 8, 0)
        frameProperties.setDepthBits(16)

        windowProperties = base.win.getProperties()

        self.mainBuffer = base.graphicsEngine.makeOutput(base.pipe, "Main Buffer", -1,
                                                  frameProperties, windowProperties,
                                                  GraphicsPipe.BFRefuseWindow,
                                                  base.win.getGsg(), base.win)
        
        # (Further initialisation omitted here)

        self.sceneTexture = Texture()
        self.sceneTexture.setWrapU(Texture.WM_clamp)
        self.sceneTexture.setWrapV(Texture.WM_clamp)

        # NB!
        self.depthTexture = Texture()
        self.depthTexture.setWrapU(Texture.WM_clamp)
        self.depthTexture.setWrapV(Texture.WM_clamp)

        self.mainBuffer.addRenderTexture(self.sceneTexture,
                                         GraphicsOutput.RTMBindOrCopy)
        
        # NB!
        self.mainBuffer.addRenderTexture(self.depthTexture,
                                         GraphicsOutput.RTMBindOrCopy,
                                         GraphicsOutput.RTP_depth)

        self.card.setTexture(self.sceneTexture)

Note that the only difference is that the second code-excerpt adds a second texture, and has “self.mainBuffer” render depth to it.

This was observed in Panda3D 1.10 (a home-built version, specifically), running under Ubuntu 16.04 (“Xenial”), I believe.

Is this also the case when you request 24 or 32 depth bits? Sixteen depth bits isn’t very much, and the frame-buffer selection code in the render-to-texture (RTT) case differs from the non-RTT case, so I’m wondering whether in one case it’s simply choosing a different bit depth.

It’s hard to say whether it happens with 24 or 32 bits: there is easily visible depth-fighting with 16 bits, but none when I set either 24 or 32, regardless of whether I have the buffer render depth or not. I also don’t see an obvious way to check the number of depth-bits actually used by an off-screen buffer. (I can retrieve the frame-buffer properties via “GraphicsOutput.getFbProperties()”, but that seems to return the value that I set, even when that value is 16 bits.)

As to using only 16 bits: you’re right, that does turn out to be rather low. I simply hadn’t realised as much because the scene looked fine, presumably thanks to this effect. I’ve since switched to 32 bits. ^^;

That sounds like it confirms my suggestion: that in one scenario it’s simply giving you an upgraded depth buffer. In any case, you should probably just use a 32-bit depth buffer, and it sounds like you’re saying there’s no problem with doing that.
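
For instance, something like this (a minimal sketch against the frame-buffer set-up in your excerpts; the setFloatDepth call is my assumption, for drivers that only expose 32 depth-bits as a floating-point format):

        frameProperties = FrameBufferProperties()
        frameProperties.setRgbaBits(8, 8, 8, 0)
        # Ask for the full 32 depth-bits up front, rather than relying on
        # the driver to silently upgrade a 16-bit request.
        frameProperties.setDepthBits(32)
        # Assumption: some drivers only grant 32 depth-bits as a
        # floating-point format, in which case this is also needed.
        frameProperties.setFloatDepth(True)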

(You can get away with fewer depth-bits if you increase the near distance on your lens, but it’s unlikely to be worth it.)
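
To illustrate (a sketch only, and assuming the default camera lens; the values are arbitrary):

        # Moving the near plane outward concentrates the available depth
        # precision over a smaller range; the values here are only examples.
        base.camLens.setNearFar(1.0, 1000.0)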

You should be able to use getFbProperties, but you may need to put that call in a doMethodLater, since the properties are only updated the first time the buffer is used.
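
For example (a sketch, reusing “self.mainBuffer” from your excerpts; the one-second delay is arbitrary, and just needs to fall after the first rendered frame):

        def checkDepthBits(task):
            # By this point the buffer has been rendered at least once, so
            # the properties reflect what the driver actually granted.
            fbProps = self.mainBuffer.getFbProperties()
            print("Depth-bits actually in use:", fbProps.getDepthBits())
            return task.done

        base.taskMgr.doMethodLater(1.0, checkDepthBits, "checkDepthBits")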

Especially since it looks as though I’ve effectively been using a 32-bit depth buffer all along! XD

Indeed, but I’m using both a first-person view and portal-culling; I really don’t want to increase my near-distance! ^^;

Ah, that makes sense. Thank you for clarifying that. :slight_smile: