128-bit (D3DFMT_A32B32G32R32F) texture support

I have developed a water shader using the vertex texture fetch technique. The water surface is created by computing a dynamic equation in a fragment shader (fshader). The result of the computation is fed to the vertex shader with VTF.

I believe the default texture format is below 128 bits. The computation result is not very accurate and causes some instability problems.

Now I’ve developed another program for cloth simulation. The technique is basically the same, but while the water simulation only computes the vertex displacement along the vertical axis, the cloth simulation computes the xyz coordinates and stores them in the output texture. The instability issue is magnified and it is very unstable.

Is it possible to create a 128-bit (D3DFMT_A32B32G32R32F) texture in Panda? If so, I believe the instability issue can be solved. Any advice?

How about Texture.FRgba32? This maps to GL_RGBA32F_ARB, which, according to this document, is 32 bits per channel.
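Something like this should request that format on a texture (an untested sketch; the 256x256 size is just an example):

from panda3d.core import Texture

tex = Texture('float-tex')
# Ask for four channels of 32-bit float components.
tex.setup2dTexture(256, 256, Texture.TFloat, Texture.FRgba32)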

I have written this class, mainly copied from FilterManager, and it forms a feedback loop:

from panda3d.core import CardMaker, NodePath, Camera, OrthographicLens, Vec4
# Assumes ShowBase has already been started (provides the global 'base').

class myFilter():
    def __init__(self, SIZE):
        # Offscreen buffer the filter quad renders into.
        self.buffer = base.win.makeTextureBuffer('surface', SIZE, SIZE)
        self.buffer.setClearColor(Vec4(0.5, 0.5, 0.5, 0))
        self.buffer.setSort(-1)

        # Fullscreen quad that carries the filter shader.
        cm = CardMaker("filter-stage-quad")
        cm.setFrameFullscreenQuad()
        #cm.setFrame(0,1,0,1)
        quad = NodePath(cm.generate())
        quad.setDepthTest(0)
        quad.setDepthWrite(0)
        quad.setColor(Vec4(1, 0.5, 0.5, 1))

        # Orthographic camera that sees only the quad.
        quadcamnode = Camera("filter-quad-cam")
        lens = OrthographicLens()
        lens.setFilmSize(2, 2)
        lens.setFilmOffset(0, 0)
        lens.setNearFar(-1000, 1000)
        quadcamnode.setLens(lens)
        quadcam = quad.attachNewNode(quadcamnode)

        self.buffer.getDisplayRegion(0).setCamera(quadcam)
        self.buffer.getDisplayRegion(0).setActive(1)

        self.quad = quad
        self.quadcam = quadcam

    def Destroy(self):
        self.quad.removeNode()
        self.quadcam.removeNode()
        base.graphicsEngine.removeWindow(self.buffer)

filter = myFilter(SIZE)
filter.quad.setShader(watershader)
filter.quad.setShaderInput('src', filter.buffer.getTexture())

Is it correct to change it to this?

filter = myFilter(SIZE)
filter.quad.setShader(watershader)
tex1 = Texture()
tex1.setMinfilter(Texture.FTLinear)
tex1.setFormat(Texture.FRgba32)
filter.buffer.addRenderTexture(tex1, GraphicsOutput.RTMCopyRam)
filter.quad.setShaderInput('src', tex1)

Or is there a better way to do it?

That looks about right, I think, yes.
To be 100% certain, load it into a PNMImage and call getMaxval(). (This requires RTMCopyRam.)

Note that generally you do not want to use RTMCopyRam - it's slow.
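
For instance, assuming the tex1 you added with RTMCopyRam in your snippet above, a rough check would be:

from panda3d.core import PNMImage

# Render a frame so the RTMCopyRam texture actually gets copied to RAM.
base.graphicsEngine.renderFrame()

img = PNMImage()
if tex1.store(img):
    # If the texture really kept more than 8 bits per channel, the
    # maxval reported here should be larger than 255.
    print(img.getMaxval())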

I am a bit confused about the operation. Here is the flow I guessed:

  1. A 128-bit texture is created.
  2. The texture is set as the shader input.
  3. The fshader computes the result and saves it to filter.buffer.
  4. Panda copies the result from filter.buffer to the 128-bit texture.

I wonder: will steps 3 and 4 cause data loss unless filter.buffer is also 128 bits?
And can I directly associate the output of the fshader with the 128-bit texture, instead of going through filter.buffer?

On second thought, you probably need to set the framebuffer requirements a bit higher as well. Since you’re using makeTextureBuffer, which already sets up a color render-texture, use:

tex128 = Texture()
tex128.setFormat(Texture.FRgba32)
fbprops = FrameBufferProperties()
fbprops.setColorBits(96)
fbprops.setAlphaBits(32)
base.win.makeTextureBuffer('SURFACE', SIZE, SIZE, tex128, False, fbprops)

I’m kinda guessing here, so I’m not certain.

Nothing goes through “filter.buffer”; everything goes through its assigned texture, added through addRenderTexture. base.win.makeTextureBuffer already automatically adds an output color texture.
If you bind it with RTPColor, it will map to o_xxx : COLOR0 in the shader; subsequent outputs will be mapped to RTPAuxXX bitplanes.
If you use RTMBindOrCopy, nothing will be copied by Panda at all, which is probably what you want to use.
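
Something like this, roughly (an untested sketch; clearRenderTextures drops the color texture that makeTextureBuffer added automatically, so the FRgba32 texture becomes the only color target):

from panda3d.core import Texture, GraphicsOutput

tex1 = Texture()
tex1.setFormat(Texture.FRgba32)

# Drop the automatically added color texture, then bind the float
# texture to the color plane. With RTMBindOrCopy Panda renders
# straight into the texture (no copy), falling back to a copy only
# if direct binding isn't supported.
filter.buffer.clearRenderTextures()
filter.buffer.addRenderTexture(tex1, GraphicsOutput.RTMBindOrCopy,
                               GraphicsOutput.RTPColor)

# In the Cg shader this plane corresponds to the o_color : COLOR0 output.
filter.quad.setShaderInput('src', tex1)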

I am not able to get the 128-bit texture either way.

fbprops.setColorBits(96)
fbprops.setAlphaBits(32)
base.win.makeTextureBuffer('SURFACE', SIZE, SIZE, tex128, False, fbprops)
=>

:display(error): Could not get requested FrameBufferProperties; abandoning window.
requested: color_bits=96 alpha_bits=32
got: depth_bits=1 color_bits=1 alpha_bits=1 stencil_bits=1 force_hardware=1

:display(error): Could not get requested FrameBufferProperties; abandoning window.
requested: color_bits=96 alpha_bits=32
got: color_bits=32 alpha_bits=8 accum_bits=64 force_hardware=1

Any suggestions?

Hmm, maybe your GPU doesn’t support them?

Maybe try making the main window use 32 bits per channel by putting this in Config.prc:

color-bits 96
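
Or, equivalently, set it from code before the window is opened (a sketch; the alpha-bits line just matches the FrameBufferProperties you requested earlier):

from panda3d.core import loadPrcFileData

# Must run before importing DirectStart / constructing ShowBase,
# so the main window is opened with the deeper framebuffer.
loadPrcFileData('', 'color-bits 96')
loadPrcFileData('', 'alpha-bits 32')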

On the other hand, I tried it here: glxinfo tells me there are framebuffer configurations that support 32 bits per channel, while "notify-level-glxdisplay debug" doesn't show any. I'm going to investigate.

My card is an NVIDIA 9500. I am able to run NVIDIA demos and other demos with 32 bits per channel.

If I put color-bits 96 in Config.prc, the console shows:
FrameBufferProperties available less than requested.

What operating system are you running? If Windows, does changing to pandadx9 make any difference?

I am running Windows XP. If pandadx9 is used, most of the demos with shaders crash. Other demos look very strange and dark in general.

When it starts, it reports:
:display(error): The 'textures_power_2' configuration is set to 'none', meaning
that non-power-of-two texture support is required, but the video
driver I'm trying to use does not support non-power-of-two textures.

But it does not complain about color-bits any more.

So does Panda support 128-bit textures? Has anyone ever used one?