I have a water surface generation algorithm that stores the surface information in a texture. It uses a fragment shader (fshader) to generate the next texture from the previous one. How can I implement this in Panda? (I've tried to implement it on the Panda side using the PNMImage interface, but it is too slow.)
Is this method feasible, and is it the right way to do it:
Make a card plane,
make an offscreen buffer and texture,
make a camera associated with the offscreen buffer,
set the offscreen texture on the card plane,
set the camera to look at the card plane,
apply the fshader to this card plane.
If this is the way to do it, how can I place the camera so that it covers the card plane exactly, and the output of the fshader feeds back into the offscreen texture precisely?
I believe there should be a better way to do it? Please advise.
No, this approach sounds fine. I only don’t see the need for an fshader - you could use the texprojector for this (see the manual). Then, you could apply a normal map using setNormalMap if you want.
The other camera should have the same pos as the main camera, but then mirrored over the water plane.
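For a flat water plane at a given height, mirroring a position over the plane just flips the height term. A tiny sketch of that math (my own illustration, not code from this thread):

```python
def mirror_over_water(pos, water_z):
    """Reflect a point (x, y, z) across the horizontal plane z = water_z."""
    x, y, z = pos
    return (x, y, 2.0 * water_z - z)
```

For example, `mirror_over_water((3, 4, 5), 0)` gives `(3, 4, -5.0)`. The mirrored camera's pitch would typically also need to be negated.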
Sorry that I don’t understand your advice.
The water surface algorithm is a bit complicated, and I need to compute the vertices' vertical locations and normals either through a program on the CPU side or through a shader program on the GPU. For performance reasons, I need an fshader to compute the surface and store it in a texture.
So why do you say an fshader is not needed? I read about setTexProjector in the manual and it does not seem relevant to my requirement.
OK, then I misunderstood you.
I have tested the concept. It basically works, but has a major problem.
The fshader is rendering a bigger "screen" than the actual size of the offscreen buffer (e.g. the offscreen buffer is 64x64, but the plane the fshader renders to is much bigger). The hardware ends up scaling the texture, which introduces errors into it.
I hope my explanation above is clear.
Or, if possible, can anyone provide a cleaner solution to the problem?
I would like to control the number of pixels generated by the fshader. I want it to exactly match the offscreen texture size, so that I can feed it back precisely in the next render cycle.
Did you apply the fshader to a fullscreen card? The fshader should render as many pixels as are visible on screen.
If you don't want that behavior, don't reparent that quad to the window. Instead, create another buffer with a camera, and reparent the quad into that camera's scene.
I created a small card plane and made a camera, with a small buffer, to look at that card. But it still looks like the fshader has processed more pixels than the buffer I created for the camera.
BTW, how can I apply a shader to the full screen? I would like to try it later (say, a black-and-white shader to make the whole screen look like a b/w TV).
How are you so sure it’s rendering more pixels?
That's what the FilterManager class is designed for (CommonFilters also makes use of this). The manual explains how to use it - it renders the scene onto a quad, to which you can apply a shader that makes everything b/w.
You are right, actually I am not very sure.
I have made a very simple shader to do a test. The fshader will change the brightness of a pixel for each render, and reset it to zero if the brightness is over one.
With the output fed back to the input, I start by setting the initial texture to a picture. If everything is correct, I should still see this picture, with the pixel brightness cycling.
However, the picture gradually becomes deformed. I suspect my camera is not facing the plane properly, as a minor deviation could cause this problem.
Or the fshader is actually processing more pixels than the buffer provides, and the texture is then rescaled to the buffer by Panda afterwards.
…so right now I am stuck here.
Try putting “textures-power-2 none” in your Config.prc and see if that helps - that should tell Panda not to rescale. Alternatively, calling setTexturesPower2(0) on a texture should work.
It might also be a texture filtering issue - try setting the mode to FTNearest.
The technique seems to be working well. I am able to make a highly interactive water surface that can be played with using the mouse.
But I still want to know if there is a lower level interface with the shader in Panda.
In this technique, the fshader cannot return information in the alpha channel, so information can only be returned in RGB. I have the height map returned in the red channel, and the normal is supposed to be returned in the yzw channels. I have found a workaround in this case, but in general, is it possible to use another interface to the shader to get the full result back from the rendered texture? Or is there a way to get back the alpha from the rendered texture?
Hmm, isn’t it possible to set the clear color of the buffer to (0,0,0,0)?
Otherwise, if you want to output more information from the shader, use Multiple Render Targets. The idea is that besides having just one o_color you have multiple output colors. The Fireflies sample shows how to do that.
Thank you for your help. Here is the result:
The clear buffer color trick does not help. I will look into the Fireflies example later.
Looks awesome! Excellent work, clcheung!
clcheung, your video looks awesome!
I have a question: in the video, the boat bumps around as the waves hit it - how are you doing the buoyancy simulation? All in the shader? Is it using ODE, Panda physics, or no physics at all?
Really nice work!
The water simulation is in the fshader; it originated from the Nvidia SDK 9.5 "Vertex Texture Fetch Water" demo.
After the water level and normal are computed in the fshader, I grab them and add my own modifications (such as raining, a boat engine effect, mouse pushing, …). Then the texture is sent to the vshader, which displaces the vertices according to the texture.
To make the boat look like it is floating on the water, I get the water level at the 4 corners of the boat, and then compute the boat's water level and Hpr from those 4 corners. With this add-on, it looks better than the original demo.
I will also put this demo into the next release of demomaster, after adding terrain, refraction, etc.
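The corner-sampling idea could be sketched like this (my reconstruction of the approach, not clcheung's actual code; the corner order and angle conventions are assumptions):

```python
import math

def boat_pose_from_corners(h_fl, h_fr, h_bl, h_br, length, width):
    """Approximate the boat's height, pitch and roll (degrees) from the
    water heights sampled at its four corners: front-left, front-right,
    back-left, back-right."""
    # Vertical position: average of the four corner heights.
    z = (h_fl + h_fr + h_bl + h_br) / 4.0
    # Pitch: front average minus back average over the boat's length.
    pitch = math.degrees(math.atan2(((h_fl + h_fr) - (h_bl + h_br)) / 2.0,
                                    length))
    # Roll: right average minus left average over the boat's width.
    roll = math.degrees(math.atan2(((h_fr + h_br) - (h_fl + h_bl)) / 2.0,
                                   width))
    return z, pitch, roll
```

The resulting z, pitch and roll would then be applied to the boat's NodePath each frame.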
That's cool! I'll have another look at your demomaster when the next release comes out! Thanks!