texture format keeps changing for makeTextureBuffer RAM image

Hello, I’m trying to use makeTextureBuffer, render to it, then later pull the RAM image out so I can do a few things with the pixels. The issue is that the format of the RAM image at some point switches from FRgba8 to FRgb, which I’m not sure how to interpret.

I create my texture buffer:
texture = Texture()
self.altBuffer = mainWindow.makeTextureBuffer("hello", 256, 256, texture, True)

Later on, I test that the format is still FRgba8:
print(w.altBuffer.getTexture().getFormat(), Texture.FRgba8)
It still is. But after I have called taskMgr.step(), suddenly that print is spitting out 12, which is FRgb.

Now if I call
print w.altBuffer.getTexture().getRamImageSize()

I get 524288, which implies 8 bytes per pixel, instead of the 4 I was expecting.
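The arithmetic behind that conclusion can be checked directly (a minimal sketch; the 256×256 size comes from the makeTextureBuffer call above and 524288 from getRamImageSize()):

```python
# Sanity-check the bytes-per-pixel implied by getRamImageSize().
width, height = 256, 256     # buffer size passed to makeTextureBuffer
ram_image_size = 524288      # value reported by getRamImageSize()

bytes_per_pixel = ram_image_size // (width * height)
print(bytes_per_pixel)  # -> 8, not the 4 expected for FRgba8
```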

So my questions are:

  1. Why is the format changing, and is there anything I can do about that?
  2. If I can’t do anything about it, is there any way I can determine how to interpret the pixel data returned from getRamImage()?


When rendering into a texture, the texture automatically takes on the format of the buffer. That’s what you’re experiencing. What you need to do is control the format of the offscreen buffer. The texture will follow along.

The routine ‘makeTextureBuffer’ is the simple version of buffer creation. There’s another variant, ‘makeOutput’, which is much more complex because it gives you access to the entire range of functionality and in particular allows you to specify format-related details. Several of the sample programs use ‘makeOutput.’

However, buffer creation is a really complex process. Offscreen buffers have several underlying implementations. For example, under OpenGL, there are pbuffers, FBOs, and glCopyTexImage2D. Each of these has different capabilities and limitations. When calling ‘makeOutput,’ you specify a set of constraints on what sort of buffer you want. It then does its best to find some sort of buffer that meets your requirements, subject to the limitations of the driver. Long story short - it’s hard to get precisely one specific format, but you can usually get something close to what you’re looking for.
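A makeOutput-based setup along those lines might look like the following. This is only a sketch: the FrameBufferProperties calls and the sort value are plausible choices, not requirements, and as noted above the driver may still hand back something close to, rather than exactly, what you ask for.

```python
from panda3d.core import (FrameBufferProperties, WindowProperties,
                          GraphicsPipe, GraphicsOutput, Texture)

# Ask for an 8-bits-per-channel RGBA framebuffer explicitly.
fb_props = FrameBufferProperties()
fb_props.setColorBits(24)   # 8 bits each for R, G, B
fb_props.setAlphaBits(8)    # plus 8 bits of alpha

win_props = WindowProperties.size(256, 256)

# 'base' is the usual ShowBase global; a sort value of -2 makes the
# buffer render before the main window.
buf = base.graphicsEngine.makeOutput(
    base.pipe, "offscreen buffer", -2,
    fb_props, win_props,
    GraphicsPipe.BFRefuseWindow,    # we want a buffer, not a window
    base.win.getGsg(), base.win)

# Bind a texture and copy the result to RAM after each render,
# mirroring the 'toRam' flag of makeTextureBuffer.
texture = Texture()
buf.addRenderTexture(texture, GraphicsOutput.RTMCopyRam)
```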

Last observation - the results you’re seeing from getRamImageSize() make it seem like there’s a bug. First, the texture shouldn’t even have a RAM image: it’s a render-to-texture target, so the data was never in RAM, and the function should be returning zero. Second, I don’t know of any existing video card that uses RGBA16 - so 8 bytes per pixel seems crazy.

thanks for the info.

The reason there is a RAM image available for the texture is that the last parameter to makeTextureBuffer() tells the texture to copy itself to RAM each time it has been rendered.

The easy way to interpret the raw texture data, no matter what its format, is to let Panda do it for you. To do this, you can use:

p = PNMImage()
texture.store(p)

And then you can use the PNMImage methods like getRed(x, y), getBlue(x, y), and getGreen(x, y) to examine the pixels at your leisure.

Note that this is not the speediest way to examine the pixels, though. If it’s too slow, and you think you can do better yourself, you can also determine everything you need to know to decode the texture format from the methods on Texture. In particular, tex.getXSize() * tex.getYSize() * tex.getZSize() * tex.getNumComponents() * tex.getComponentWidth() == tex.getRamImageSize().
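Plugging numbers into that identity shows how the pieces fit together (hypothetical values, chosen to match a 256×256 texture with 4 components of 2 bytes each):

```python
# Hypothetical values for a 256x256, 4-component, 2-bytes-per-component texture.
x_size, y_size, z_size = 256, 256, 1
num_components = 4        # e.g. RGBA
component_width = 2       # bytes per component

ram_image_size = x_size * y_size * z_size * num_components * component_width
print(ram_image_size)  # -> 524288, matching the reported getRamImageSize()
```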

In your case, presumably tex.getComponentWidth() is returning 2, or two bytes per component. You can use this knowledge to build your pixel decoder.
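A decoder built on that knowledge might look like this sketch. It runs on synthetic data, and it assumes little-endian 2-byte unsigned components stored in BGRA order, which is what Panda typically uses for RAM images - verify the layout against your own texture before relying on it.

```python
import struct

def get_pixel(ram_image, width, x, y, num_components=4, component_width=2):
    """Decode one pixel from raw texture bytes with 2-byte components.

    Assumes little-endian unsigned shorts; Panda's RAM images are
    typically stored in BGRA component order, so adapt as needed.
    """
    bytes_per_pixel = num_components * component_width
    offset = (y * width + x) * bytes_per_pixel
    return struct.unpack_from("<%dH" % num_components, ram_image, offset)

# Demo on synthetic data: a 2x2 image with 16-bit BGRA components.
pixels = [(0, 0, 65535, 65535),      # red, opaque (B, G, R, A)
          (0, 65535, 0, 65535),      # green, opaque
          (65535, 0, 0, 65535),      # blue, opaque
          (0, 0, 0, 0)]              # transparent black
raw = b"".join(struct.pack("<4H", *p) for p in pixels)

print(get_pixel(raw, 2, 0, 0))  # -> (0, 0, 65535, 65535)
print(get_pixel(raw, 2, 1, 1))  # -> (0, 0, 0, 0)
```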

Of course, if you are writing your pixel decoder in Python, it will probably be slower than the C++-based PNMImage decoder, so you might as well just use that one.