How to create a framebuffer with only one channel?

Hi community,

I have a depth shader that calculates the distance of surrounding objects to the camera. Since the output fits in a single channel, I want to disable the other channels and use, for example, only the red channel. That way, retrieving the depth information from a single-channel image should be faster than retrieving a 4-channel image in which 3 of the channels are redundant.
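
For context, this is roughly what I hope the read-back would look like once the buffer really has only one 8-bit red channel (just a sketch; `get_depth_array_cpu`, `tex`, `width` and `height` are placeholder names for the screenshot texture and the buffer size):

    import numpy as np

    def get_depth_array_cpu(tex, width, height):
        # Hypothetical helper: with a single 8-bit channel, the RAM image is
        # width*height bytes instead of width*height*4, so the copy is smaller.
        data = np.frombuffer(tex.getRamImage().getData(), dtype=np.uint8)
        return data.reshape((height, width))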

I tried setting FrameBufferProperties().set_rgba_bits(8, 0, 0, 0), but it didn’t work. The image returned by the following code still has 3 channels.

    import numpy as np

    def get_rgb_array_cpu(self):
        # Grab the current contents of the buffer's display region as a Texture.
        origin_img = self.buffer.getDisplayRegion(1).getScreenshot()
        # Wrap the raw RAM image in a numpy array and infer the channel count.
        img = np.frombuffer(origin_img.getRamImage().getData(), dtype=np.uint8)
        img = img.reshape((origin_img.getYSize(), origin_img.getXSize(), -1))
        return img
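
As a sanity check (just an illustration using the standard Texture API), the screenshot texture can also report its channel count directly instead of inferring it from the reshape:

    # origin_img is the Texture returned by getScreenshot() above.
    print(origin_img.getNumComponents())  # reports 3 here instead of the expected 1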

It ought to work. How are you creating the buffer?

I create it like this:

    # Request an 8-bit red channel and no green, blue or alpha bits.
    frame_buffer_property = FrameBufferProperties()
    frame_buffer_property.set_rgba_bits(8, 0, 0, 0)
    buffer = base.win.makeTextureBuffer("camera", width, height, fbp=frame_buffer_property)

After creating the buffer, I check its properties via buffer.getFbProperties(). It shows that the r, g and b channels each still have 8 bits.
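
For reference, this is how I print the individual bit counts (a small sketch using the FrameBufferProperties getters):

    fbp = buffer.getFbProperties()
    # r, g and b still come back as 8 bits each, even though I asked for (8, 0, 0, 0).
    print(fbp.getRedBits(), fbp.getGreenBits(), fbp.getBlueBits(), fbp.getAlphaBits())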

Please file an issue on GitHub so that this doesn’t get lost.