Need help understanding depth bits

I have created an example program that creates another window + camera, renders to that window, and copies the depth buffer to numpy. It seems to work, but please tell me if I did something wrong.

However, the thing that confuses me is that I can set fb_prop.setDepthBits() to 1, 16 or 24 and it works, but the depth_texture is still width × height × 4 bytes (see the print() in the code). Why is that?
Also if I set fb_prop.setDepthBits(32) it crashes with:

:display(error): Could not get requested FrameBufferProperties; abandoning window.
  requested: depth_bits=32 color_bits=24 red_bits=8 green_bits=8 blue_bits=8 
  got: depth_bits=24 color_bits=24 red_bits=8 green_bits=8 blue_bits=8 accum_bits=64 force_hardware force_software


from typing import Any

import numpy as np
from direct.showbase.ShowBase import ShowBase
from direct.task import Task
from panda3d.core import (
    Camera,
    FrameBufferProperties,
    GraphicsOutput,
    GraphicsPipe,
    NodePath,
    Texture,
    WindowProperties,
)
class MyApp(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)

        # Load the environment model.
        self.scene = self.loader.loadModel("environment")
        # Reparent the model to render.
        self.scene.reparentTo(self.render)
        # Apply scale and position transforms on the model.
        self.scene.setScale(0.25, 0.25, 0.25)
        # Depth task
        self.taskMgr.add(self.get_depth, "get_depth")

        # Request 8 RGB bits, no alpha bits, and a depth buffer.
        fb_prop = FrameBufferProperties()
        fb_prop.setRgbaBits(8, 8, 8, 0)
        fb_prop.setDepthBits(1)  # <----

        # Create a WindowProperties object set to 512x512 size.
        win_prop = WindowProperties(size=(512, 512), fixed_size=True)

        self.cd_window = self.graphicsEngine.makeOutput(
            pipe=self.pipe,
            name="My Buffer",
            sort=0,
            fb_prop=fb_prop,
            win_prop=win_prop,
            flags=0,  # GraphicsPipe.BF_refuse_window,
        )
        self.cd_region = self.cd_window.makeDisplayRegion()

        self.cd_camera = Camera("color_depth_camera")
        self.cd_camera_np = NodePath(self.cd_camera)
        self.cd_region.setCamera(self.cd_camera_np)

        # View render, as seen by the default camera
        self.cd_camera_np.reparentTo(self.render)

        # Depth texture
        self.depth_texture = Texture()
        self.cd_window.addRenderTexture(
            self.depth_texture, GraphicsOutput.RTMCopyRam, GraphicsOutput.RTPDepth
        )

    def get_depth(self, task: Task) -> Any:
        depth_data = self.depth_texture.getRamImage()
        if len(depth_data) == 0:
            return Task.cont

        size = self.depth_texture.getXSize() * self.depth_texture.getYSize()
        print(len(depth_data) / size)  # >> 4

        depth_image = np.frombuffer(depth_data.get_data(), np.float32)
        depth_image.shape = (
            self.depth_texture.getYSize(),
            self.depth_texture.getXSize(),
        )
        lens = self.cd_camera.getLens()
        world_depth = lens.far * lens.near / (lens.far - (lens.far - lens.near) * depth_image)
        print(world_depth.min(), world_depth.max())
        return Task.cont

app = MyApp()
app.run()
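The linearization formula in get_depth maps a [0, 1] depth-buffer value back to an eye-space distance. A quick standalone numpy check (with assumed near/far values, just for illustration) confirms that it maps 0.0 to the near plane and 1.0 to the far plane:

```python
import numpy as np

# Assumed lens parameters for illustration only.
near, far = 1.0, 1000.0

# Depth-buffer values: 0.0 at the near plane, 1.0 at the far plane.
depth = np.array([0.0, 0.5, 1.0], dtype=np.float32)

# Same formula as in get_depth above.
world_depth = far * near / (far - (far - near) * depth)

print(world_depth[0])   # 1.0    (near plane)
print(world_depth[-1])  # 1000.0 (far plane)
```

Note that the mapping is strongly non-linear: a buffer value of 0.5 lands at roughly 2 world units, not halfway to the far plane.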

Hmm, try to set the texture format directly.


I have checked your code; I can set fb_prop.setDepthBits(32) without any exceptions.

The crash is caused by self.graphicsEngine.makeOutput() returning None so it never reaches the self.depth_texture.setFormat() line.

But I can also set fb_prop.setDepthBits(24) and self.depth_texture.setFormat(Texture.FDepthComponent16) without problems. I would have expected this to cause a crash, but self.depth_texture.getRamImage() still returns float32 depth. It is as if these values have no effect (except fb_prop.setDepthBits(32), which makes it crash).

First, try to set 24 bits.


This should affect the output texture. As for the 32-bit depth, it is possible that these are problems with the hardware or the driver.

If you set:


Does this line

print(len(depth_data) / size)  # >> 4.0

print 3.0 for you? I.e. do you get a 24-bit float? (I don’t think numpy supports 24-bit numbers.)

Actually, this is not a forum for numpy. To find out the bit depth and other texture parameters, just print it out.


Sorry, numpy is not really important here I’m just using it to verify that I get what I expect from Panda3D.

Regardless of what values I set for fb_prop.setDepthBits and self.depth_texture.setFormat, print(self.depth_texture) says:

  2-d, 512 x 512 pixels, each 1 floats, depth_component24, compression off
  sampler wrap(u=repeat, v=repeat, w=repeat, border=0 0 0 1) filter(min=default, mag=default, aniso=0) lod(min=-1000, max=1000, bias=0)  1048576 bytes in ram, compression off

I.e. I always get depth_component24.
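For what it’s worth, the byte count in that printout is consistent with one float32 component per pixel, which also matches the 4.0 ratio from get_depth (a standalone arithmetic check, no Panda3D needed):

```python
# Numbers taken from the texture printout above.
width, height = 512, 512
components_per_pixel = 1  # "each 1 floats"
bytes_per_float32 = 4

ram_bytes = width * height * components_per_pixel * bytes_per_float32
print(ram_bytes)  # 1048576, matching "1048576 bytes in ram"

size = width * height
print(ram_bytes / size)  # 4.0, the ratio printed by get_depth
```

So even with a depth_component24 format on the GPU, the RAM copy is stored as 32-bit floats.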

That’s not why I said it. I meant that I am not very familiar with this library.

I have these parameters.


I get a value of 4

    def get_depth(self, task: Task) -> Any:
        depth_data = self.depth_texture.getRamImage()
        size = self.depth_texture.getXSize() * self.depth_texture.getYSize()
        print(len(depth_data) / size)
        return Task.cont

Please note that I have shortened your code.

Okay, thanks.
Then maybe I have something strange going on with my hardware forcing 24-bit depth, and maybe .getRamImage() converts to 32-bit depth.

The problem is that I get it regardless of the set parameters, hmm.

A couple of things you’re doing wrong:

  • You need to pass a host / gsg. Otherwise you get a very inefficient type of buffer.
  • If you want 32-bit float depth, you also need to do setFloatDepth(True).
  • fixed_size and other window properties are not supported for buffers, just specify a size only.
  • Setting the format yourself using setFormat is not recommended, Panda takes care of setting the right format for you.

If you do all these things right you should be able to get a 32-bit float depth buffer that you can also download as a 32-bit float depth buffer.


Thank you @rdb!

Regarding your advice in order:

  1. I’m a little confused about that: should I use the existing ones, or create new ones? If so, how? I’m reading here but I don’t quite get it: Creating Windows and Buffers — Panda3D Manual
  2. Setting setFloatDepth(True) makes the application crash with:
:display(error): Could not get requested FrameBufferProperties; abandoning window.
  requested: float_depth depth_bits=32 color_bits=24 red_bits=8 green_bits=8 blue_bits=8 
  got: depth_bits=24 color_bits=24 red_bits=8 green_bits=8 blue_bits=8 accum_bits=64 force_hardware force_software 

Maybe my RTX2060 doesn’t like float and/or 32-bit depth? But I can’t see why. I’m on Ubuntu, if that makes a difference.
It works if I don’t show the window (flags=GraphicsPipe.BF_refuse_window), but I would like to see the window.

  3. Okay, I wanted an easy way to turn the visibility of the buffer on and off.

  4. Removed.

Use the existing ones. That means the existing OpenGL context can be reused.

You can’t have a window with 32-bit depth. I suggest using the buffer viewer instead, base.bufferViewer.toggleEnable().

Awesome, thanks!
Can BufferViewer.enable(1) not be called in MyApp.__init__? I need to put it in a task for it to work. But if I do, everything works!