I have developed a water shader using the vertex texture fetch (VTF) technique. The water surface is created by computing a dynamic equation in a fragment shader, and the result of the computation is fed back to the vertex shader via VTF.
I believe the default texture format is below 128 bits. The computation result is not very accurate, which causes some instability problems.
Now I’ve developed another program for cloth simulation. The technique is basically the same, but where the water simulation only computes the vertex displacement along the vertical axis, the cloth simulation computes the full xyz coordinates and stores them in the output texture. The instability issue is magnified, and the simulation is very unstable.
Is it possible to create a 128-bit (D3DFMT_A32B32G32R32F) texture in Panda? If so, would that solve the instability issue? Any advice?
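A minimal sketch of what such a texture would look like on the Panda side, assuming D3DFMT_A32B32G32R32F corresponds to Texture.FRgba32 with float components (the name and size are placeholders):

```python
from panda3d.core import Texture

# 128-bit texture: 4 channels x 32-bit float components
tex = Texture("simdata")
tex.setup2dTexture(256, 256, Texture.TFloat, Texture.FRgba32)
```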
I am a bit confused about the operation. Here is the flow as I guess it:

1. A 128-bit texture is created.
2. The texture is set as the shader input.
3. The fragment shader computes the result and saves it to filter.buffer.
4. Panda copies the result from filter.buffer to the 128-bit texture.
I wonder: will steps 3 and 4 cause data loss if filter.buffer is also 128-bit?
And can I directly associate the output of the fragment shader with the 128-bit texture, instead of going through filter.buffer?
On second thought, you probably need to set the framebuffer requirements a bit higher as well. Since you’re using makeTextureBuffer, which already sets up a color render-texture, use:
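A minimal sketch of that idea, assuming a running ShowBase instance (providing `base`) and a Panda3D build where FrameBufferProperties exposes setFloatColor/setRgbaBits and makeTextureBuffer accepts an fbp argument (buffer name and size are placeholders):

```python
from panda3d.core import FrameBufferProperties

# Start from the default properties and raise the color depth.
fbp = FrameBufferProperties(FrameBufferProperties.getDefault())
fbp.setFloatColor(True)           # request floating-point color channels
fbp.setRgbaBits(32, 32, 32, 32)   # 32 bits per channel = 128 bits total

# Arguments: (name, x_size, y_size, tex, to_ram, fbp)
buf = base.win.makeTextureBuffer("simbuffer", 256, 256, None, False, fbp)
```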
Nothing goes through “filter.buffer” itself, but through its assigned texture, added through addRenderTexture. base.win.makeTextureBuffer already adds an output color texture automatically.
If you bind it with RTPColor, it will map to o_xxx : COLOR0 in the shader; subsequent outputs will be mapped to the RTPAuxXX bitplanes.
If you use RTMBindOrCopy, nothing will be copied by Panda at all, which is probably what you want.
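Putting those together, a minimal sketch of binding your own texture with RTMBindOrCopy and RTPColor, again assuming the usual ShowBase `base` and placeholder names and sizes:

```python
from panda3d.core import Texture, GraphicsOutput

buf = base.win.makeTextureBuffer("simbuffer", 256, 256)
buf.clearRenderTextures()   # drop the color texture makeTextureBuffer added

# Bind our own texture to the color bitplane; with RTMBindOrCopy the GPU
# renders straight into the texture, so Panda never performs a copy.
tex = Texture("simresult")
buf.addRenderTexture(tex, GraphicsOutput.RTMBindOrCopy, GraphicsOutput.RTPColor)
```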
Maybe try making the main window use 32 bits per channel by putting this in your Config.prc:

```
color-bits 96
```
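If editing Config.prc is inconvenient, the same variable can also be set programmatically; a small sketch:

```python
from panda3d.core import loadPrcFileData

# Must run before ShowBase opens the main window.
loadPrcFileData("", "color-bits 96")
```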
On the other hand, I tried it here: glxinfo tells me there are framebuffer configurations that support 32 bits per channel, while “notify-level-glxdisplay debug” shows none. I’m going to investigate.