Hey all,
I’m trying to make a shader that returns a depth pass, ranging from white to black from a camera’s near clip to far clip respectively.
I’m having a hard time understanding the “trans_model_to_clip_of_myCamera” technique. Do I pass this as an input to the vertex shader? How do I extract the Z information once I have the space converted and apply it to the shader’s color?
Would anyone care to explain how to put this to use? Also, am I overcomplicating my approach to getting this depth pass?
Any help would be much appreciated.
First of all, what’s your goal? Is it to create fog?
Yes, trans_model_to_clip_of_myCamera is an input to the vertex shader; it transforms a vertex from model space into the clip space of myCamera.
The depth pass is a grayscale texture, so in the fragment shader you only need to query one of its first three color components:
depth.x (if you prefer xyzw swizzling)
or
depth.r (if you prefer rgba swizzling)
They’re exactly the same.
BTW, it’s black to white (near to far).
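To make that near-to-far ramp concrete, here’s a plain-Python sketch of the math (assuming a standard perspective projection with a 0..1 depth range; the near/far values are arbitrary). The value the depth buffer actually stores is nonlinear in eye-space distance, which is why a raw depth pass often looks almost entirely black or white; remapping linearly between the clip planes gives the smooth black-to-white gradient described above.

```python
# Sketch of depth-buffer math for a perspective projection.
# Assumption: the buffer stores d = (f / (f - n)) * (1 - n / z),
# which maps eye-space distance z = n to 0 and z = f to 1
# (the usual 0..1 depth-range convention).

def stored_depth(z, n, f):
    """Nonlinear value the depth buffer holds for eye-space distance z."""
    return (f / (f - n)) * (1.0 - n / z)

def linearize(d, n, f):
    """Invert stored_depth: recover eye-space distance from a depth sample."""
    return n * f / (f - d * (f - n))

def linear_ramp(z, n, f):
    """The gradient the original poster wants: 0 at near clip, 1 at far clip."""
    return (z - n) / (f - n)

n, f = 1.0, 100.0
for z in (1.0, 10.0, 50.0, 100.0):
    d = stored_depth(z, n, f)
    print(z, round(d, 4), round(linear_ramp(z, n, f), 4))
    assert abs(linearize(d, n, f) - z) < 1e-6  # round-trip check
```

Note how at z = 10 (a tenth of the way to the far plane) the stored value is already above 0.9; that nonlinearity is what linearizing fixes.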
Add this before importing DirectStart to see the depth buffer:
loadPrcFileData( '', 'show-buffers 1' )
Have you read –[THIS]–?
I’m not a Cg expert, so I probably missed something.
My goal is to get a test game running on an auto-stereo monitor that uses rgb and depth to generate a stereoscopic image on the fly. Ultimately, I need to put the rgb image into one buffer, the depth into another, and a third buffer containing a line of pixels that carries a message to the monitor about the stereo settings.
I don’t think I’ll have an issue with putting the buffers together, I just didn’t know how to make the depth buffer. I’d also like to have a lot of control in rendering that buffer so that I can control the stereo, i.e. what’s in front of the screen and what’s behind it.
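Since the final frame has to combine three pieces (the rgb image, the depth image, and the one-line signal for the monitor), here is a tiny pure-Python sketch of that compositing step. The side-by-side rgb/depth layout and the header row on top are illustrative assumptions only; the real packing has to follow whatever spec the monitor documents.

```python
# Hypothetical frame layout: signal row on top, then rgb on the left
# half and depth on the right half of each row. Pixels are plain
# (r, g, b) tuples; a depth sample d is replicated into gray (d, d, d).

def pack_frame(rgb, depth, header_row):
    """Combine rgb, depth, and a signal line into one frame.

    rgb        -- list of rows of (r, g, b) tuples
    depth      -- list of rows of 0-255 ints, same dimensions as rgb
    header_row -- one row of (r, g, b) tuples, as wide as the output
    """
    width = len(rgb[0])
    assert len(header_row) == 2 * width, "header spans the packed width"
    frame = [list(header_row)]
    for rgb_row, d_row in zip(rgb, depth):
        gray = [(d, d, d) for d in d_row]
        frame.append(rgb_row + gray)  # rgb left, depth right
    return frame

rgb = [[(255, 0, 0)] * 4 for _ in range(3)]   # 4x3 solid red image
depth = [[128] * 4 for _ in range(3)]         # flat mid-range depth
header = [(0, 0, 0)] * 8                      # dummy signal line
frame = pack_frame(rgb, depth, header)
print(len(frame), len(frame[0]))              # 4 rows, 8 pixels wide
```

In practice you would do this on the GPU or with the window’s display regions rather than per-pixel in Python; the sketch just pins down the layout.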
OK, now I’ve got the depth pass working! Thank you.
However, I am now trying to put the string of pixels at the top of the frame that tells the monitor to switch into stereoscopic mode, and every technique I employ ends up filtering the image.
I have the unfiltered image as a PNG, and I’m trying to slap it on top of the final render. It needs to be pixel-exact in order to work. I’ve tried using OnscreenImage and the DirectGUI stuff. Is there any way to just put an image straight into a buffer?
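Two things usually have to line up for an unfiltered 1:1 overlay: the texture should use nearest-neighbor filtering so samples aren’t blended, and the card must cover exactly as many screen pixels as the image has texels. Here’s a small pure-Python sketch of the second part, computing a scale and position that pin a w×h image flush to the top edge of the window. The assumptions: render2d-style coordinates spanning -1..1 in both axes, and a card whose geometry spans -1..1 so that `scale` is a half-extent (as with OnscreenImage’s default card); the window size is made up.

```python
# Map a w x h pixel image onto an exactly w x h pixel region of the
# window, flush with the top edge. Assumes a -1..1 coordinate range in
# both axes and a card spanning -1..1, so `scale` is a half-extent.

def exact_overlay(img_w, img_h, win_w, win_h):
    """Return (scale_x, scale_z, pos_x, pos_z) for a 1:1 top overlay."""
    # A half-extent of img_w / win_w covers 2 * (img_w / win_w) of the
    # 2-unit-wide range, i.e. exactly img_w window pixels.
    scale_x = img_w / win_w
    scale_z = img_h / win_h
    pos_x = 0.0              # centered horizontally
    pos_z = 1.0 - scale_z    # top edge of the card sits at z = +1
    return scale_x, scale_z, pos_x, pos_z

# e.g. a 1024x1 signal line on a hypothetical 1024x768 window:
print(exact_overlay(1024, 1, 1024, 768))
```

On the filtering side, setting the texture’s magfilter/minfilter to nearest (Texture.FTNearest in Panda3D) avoids the blur; a parent node with one-unit-per-pixel coordinates, if your Panda version provides one, is another way to sidestep the scale math entirely.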
I’m trying to run Panda on an Autostereo Philips 3D TV, and I ran across this old forum post. Did you ever have any luck getting it working?