FrameBufferProperties Questions

Hi guys,

I’m a bit confused about how FrameBufferProperties interacts with the off-screen buffer creation process. After trying to dive through the C++ source code, I’m even more confused.

fprops = FrameBufferProperties()

What do fprops.setColorBits, fprops.setAlphaBits, etc. mean? Does ‘bits’ mean ‘enable the bitplane’, or does it refer to the actual number of bits of each bitplane? I.e., does setColorBits(8) mean 8-bit RGB?

What’s the difference between setColorBits and setRgbColor?

Does setAuxRgba( n ) mean ‘enable n auxiliary RGBA buffers’ (i.e., o_color2 … o_colorn)?

What does Hrgba stand for?

My current understanding of OpenGL’s FBO process is that the byte format of the buffer is declared
in the texture declaration, not in the initial declaration of the FBO itself. (Is this correct?) Is this true for Panda as well?

So if I want to use 32-bit RGB, should I declare it like this:

tex = Texture()

fprops = FrameBufferProperties()

buffer = base.graphicsEngine.makeOutput(…
buffer.addRenderTexture( tex, GraphicsOutput.RTMCopyTexture, GraphicsOutput.RTPColor )

or do

fprops.setColorBits(32) #or …Bits(96)?

‘bits’ means the actual number of bits to request, so setColorBits(24) means you want 24-bit RGB color. It doesn’t include alpha; that would be requested separately with setAlphaBits(8). Note that Panda may return a larger number of bits than you actually ask for, so you could use setDepthBits(1), for instance, to mean that you don’t really care how many depth bits you get, as long as it’s more than 0.

setRgbColor() is just a boolean flag that means you want a normal color buffer instead of an indexed color buffer. Indexed color isn’t really supported by Panda anyway, so this is kind of vestigial.

I’m guessing Hrgba stands for high-dynamic-range, but I don’t have any experience with this part of it.

Panda changes the texture properties to match the FrameBufferProperties, so you should set the properties you want on the buffer.

Thanks David, I think I understand it better now. I’m still having problems running the following simplest case, where I request an offscreen depth buffer (and nothing else.)

The initial buffer request is fine:

Buffer properties depth_bits=1

But after a few frames, the offscreen buffer magically upscales itself to a fullsized buffer:

Buffer properties depth_bits=1 color_bits=1 alpha_bits=8 stencil_bits=1

When this happens, I sometimes get data corruption where the offscreen buffer bleeds into the color buffer. To stop the bleeding, I have to explicitly bind a color buffer to the offscreen buffer. (I had this problem when playing with simple shadow code, where all I wanted was the depth buffer.)

This is not a big deal, I can live with having to store a few extra bytes. But the unexpected behaviour is worrying…

import direct.directbase.DirectStart
from panda3d.core import FrameBufferProperties, GraphicsPipe, GraphicsOutput, Texture, WindowProperties

winprops = WindowProperties.size( 1024, 1024 )
fbprops = FrameBufferProperties()

objBuffer = base.graphicsEngine.makeOutput(base.pipe, 'hello', -4, fbprops,
					winprops, GraphicsPipe.BFRefuseWindow)

global n
n = 0
def uptask(task):
	global n
	print "Buffer properties", objBuffer.getFbProperties()
	if n > 5:
		raise Exception('boo')
	n += 1
	return task.cont
taskMgr.add( uptask, 'hello')



Hmm, I’m not sure whether depth-buffer-only buffers are fully supported by Panda. But in any case, it’s true that the FrameBufferProperties structure is only a request; Panda will do its best to honor the request, but the resulting buffer might have other properties than just the one(s) you asked for. It’s your responsibility to examine the FrameBufferProperties of the opened buffer, if necessary, to see what kind of buffer you got.

Note that the buffer is not actually created immediately by makeOutput(). You also have to call base.graphicsEngine.openWindows(), or let a frame go by, before the buffer is actually manifested. At that point you can safely query its properties.

Incidentally, in your code snippet, you do appear to be requesting a full-featured buffer, with color, alpha, and stencil.


Oops… I put down the wrong calls. The settings should all be zero except for the depth parameter.

I can work around getting a fully packed buffer as long as I can inspect it. When FrameBufferProperties returns color_bits = 1, does that mean Panda3D is using the most bits the graphics card can provide? How do I query what that format is exactly?


Oh, hmm. Actually, it should never report color_bits = 1, it’s supposed to report the actual number of color bits determined. This must be a bug. Which operating system are you using?


I’m using Panda3D 1.7.0, WinXP, and the wglGraphicsPipe. I have an Nvidia GTX 260.

The problems seem to exist on version 1.6.2 of Panda as well.

When I try the dx9 pipe (on either version), the buffer creation seems to just fail. For my code above, it says:

c:\panda3d-1.7.0\panda\src\dxgsg9\wdxGraphicsBuffer9.cxx 477
Buffer properties depth_bits=24 color_bits=24 alpha_bits=8 force_hardware=1
:display:gsg:dxgsg9(error): SetRenderTarget at (c:\panda3d-1.7.0\panda\src\dxgsg9\wdxGraphicsBuffer9.cxx:477), hr=D3DERR_INVALIDCALL: Invalid call

For the 1.6.2 version of Panda + dx9 + the Fireflies sample, it says depth textures are not supported. (Fireflies works normally with the OpenGL pipe.)

It looks like people haven’t been trying render-to-buffer methods for dx9 for a while now.