Post-processing filter shader: how to calculate the world xyz position of pixels using the depth buffer

Hello everyone,

I am writing a post-processing filter shader (using GLSL) in which I need to calculate the world xyz coordinates of the objects from the pixels. To do that, I fetch the color and depth buffers of the scene. The code for the creation of my filter looks like this:

manager = FilterManager(self.win, self.cam)
color_texture = Texture()
depth_buffer = Texture()
quad = manager.renderSceneInto(colortex=color_texture)
quad = manager.renderSceneInto(depthtex=depth_buffer)
quad.setShader(Shader.load(Shader.SL_GLSL, "vshader.glsl", "fshader.glsl"))
quad.setShaderInput("u_resolution", self.getSize())
quad.setShaderInput("color_texture", color_texture)
quad.setShaderInput("depth_buffer", depth_buffer)

My vertex shader is very simple. (In fact, I would expect not to need a vertex shader at all for a post-processing filter, but without one I get only a black screen.) Here it is:

void main() {
    gl_Position = p3d_ModelViewProjectionMatrix * p3d_Vertex;
}

The magic is meant to happen in the fragment shader, where (among other things) I try to calculate the xyz world coordinates as follows:

void main() {
    vec2 st = gl_FragCoord.xy / u_resolution.xy;
    // ------------- get information from buffers
    float aspect_ratio = u_resolution.y / u_resolution.x;
    vec2 texture_uv = st * aspect_ratio;
    vec4 color_pixel = texture2D(color_texture, texture_uv.xy);
    vec4 depth_pixel = texture2D(depth_buffer, texture_uv.xy);
    vec3 world_pos_object = World_position_from_depth(depth_pixel.x, st.xy);
    // ...
}

The function World_position_from_depth(in float depth, in vec2 st) is as follows:

vec3 World_position_from_depth(in float depth, in vec2 st) {
    vec2 st_ = st * 2.0 - 1.0;  // translate 0s to the center of the image, range [-1,1]
    depth = depth * 2.0 - 1.0;  // do the same for the depth values
    vec4 clipSpacePosition = vec4(st_.x, st_.y, depth, 1.);
    vec4 viewSpacePosition = p3d_ProjectionMatrixInverse * clipSpacePosition;
    viewSpacePosition /= viewSpacePosition.w;  // perspective division
    vec4 worldSpacePosition = p3d_ViewMatrixInverse * viewSpacePosition;
    return worldSpacePosition.xyz;
}

However, when I color my objects according to their distance from the camera eye, the results I get seem wrong. Does anyone have an idea how I can get the world xyz coordinates using the depth buffer?

I think this is wrong; you must create your render buffers in one go:

quad = manager.renderSceneInto(colortex=color_texture, depthtex=depth_buffer)

I don't work much with shaders myself, but there are plenty of examples of this task in the Russian-language (RU) part of the web.


I think that p3d_ViewMatrixInverse (or any other p3d_* matrix) in a post-process shader refers to the quad the texture is displayed on; what you actually need is the matrix of the camera the original scene was rendered with.
You would need to set the camera as a shader input (quad.set_shader_input('camera', base.camera)) and use one of the ‘Cg-style’ shader inputs, I think uniform mat4 trans_view_of_camera_to_world;

It would be nice to have GLSL-style names for these (e.g. cameraViewMatrixInverse).


Many thanks for your useful remarks! After doing a bit more research, I realized that, as @wezu says, in post-processing filters the original perspective camera is indeed removed and a new orthographic camera is used to render the quad. That explains why my p3d_* matrices have been giving me the wrong results.

This detail is mentioned here in the documentation:
https://www.panda3d.org/reference/python/classdirect_1_1filter_1_1FilterManager_1_1FilterManager.html#ab54169f81bfc39974a7558325eb1b220

So in my case, I need to find out how to feed the shader the matrices of the original perspective camera instead of those of the orthographic one. Regarding this, I now have two follow-up thoughts/questions:

  • @wezu suggests that I pass the NodePath of the camera to the shader. My question is then: how can I declare a NodePath in the GLSL shader, and how can I access its matrices? I couldn’t find any relevant example in the manual.

  • An alternative solution could be to feed the shader only the matrices of the perspective camera, inside a task, so that they are updated every frame (see the sketch after this list). Looking into the camera class reference, I have spotted where to find these matrices, on panda3d.core.PerspectiveLens(). (https://www.panda3d.org/reference/python/classpanda3d_1_1core_1_1Lens.html#a52fb21427ee71031dd84d939341ee974)
    There I see the following matrices, and I believe they are what I am looking for:
    -camLens.getLensMat() : Returns the matrix that transforms from a point in front of the lens to a point in space
    -camLens.getProjectionMatInv() : Returns the matrix that transforms from a 2-d point on the film to a 3-d vector in space
    -camLens.getViewMat() : Returns the direction in which the lens is facing.
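
For concreteness, here is a rough sketch of the kind of task I have in mind. The method names are made up for this example, "quad" and "camLens" are the same objects as in my first post (stored as attributes on my ShowBase subclass), and whether these are even the right matrices to pass is exactly what I am unsure about:

from direct.task import Task

def setup_matrix_inputs(self):
    # Start a per-frame task that keeps the shader inputs up to date.
    self.taskMgr.add(self.update_matrix_inputs, "update_matrix_inputs")

def update_matrix_inputs(self, task):
    # Re-send the lens matrices every frame so the shader always sees the
    # current state of the original perspective camera.
    self.quad.setShaderInput("view_matrix", self.camLens.getViewMat())
    self.quad.setShaderInput("projection_matrix_inverse",
                             self.camLens.getProjectionMatInv())
    return Task.cont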

My intuition was to feed into my shader view_matrix = camLens.getViewMat() and projection_matrix_inverse = camLens.getProjectionMatInv(), and then to use these to substitute for the p3d_ViewProjectionMatrixInverse that I need for my calculation.

i.e. instead of: p3d_ViewProjectionMatrixInverse * vec4(uv.x, uv.y, 1., 1.)
to use: view_matrix * projection_matrix_inverse * vec4(uv.x, uv.y, 1., 1.)

However, this once again returns results that are a bit off. So my question boils down to the following:

How can I use these matrices of panda3d.core.PerspectiveLens() to substitute for the p3d_ViewProjectionMatrixInverse in my shader?

I think I’m slowly getting to the bottom of this, many thanks for your time!

If I’m not much mistaken, it’s done like this:

In your Python code:
(Where “self.myNodePath” refers to the NodePath to which the shader has been applied, and “self.sceneCamera” refers to the camera to be passed in.)

self.myNodePath.setShaderInput("myCamera", self.sceneCamera)

Note that the camera is identified to the shader as “myCamera”.

Then, in your GLSL shader:

uniform mat4 trans_clip_of_myCamera_to_world;
// Or something like that--adapt the above if called for to get the matrix that you want.

The “myCamera” part of the above comes from our call to “setShaderInput”: if we’d used another name there, we would use that other name here.

You should find more information on accessing such inputs on this page:
https://www.panda3d.org/manual/?title=Shaders_and_Coordinate_Spaces

You might also find these useful:
https://www.panda3d.org/manual/?title=List_of_Possible_Cg_Shader_Inputs
https://www.panda3d.org/manual/?title=List_of_GLSL_Shader_Inputs

(Note that, while some of the above refer to Cg inputs, those same inputs are available in GLSL too, I believe.)


Thanks a lot for all your answers! I managed to solve my problem with a combination of the ideas above.

Now the function for finding the world coordinates from the depth value looks like this:

vec3 World_position_from_depth(in float depth, in vec2 uv) {
    vec2 st_ = uv * 2.0 - 1.0;  // translate 0s to the center of the image, range [-1,1]
    depth = depth * 2.0 - 1.0;  // do the same for the depth values

    vec4 clipSpacePosition = vec4(st_.x, st_.y, depth, 1.);
    vec4 viewSpacePosition = trans_clip_to_model_of_myCamera * clipSpacePosition;
    viewSpacePosition /= viewSpacePosition.w;  // perspective division
    return viewSpacePosition.xyz;
}
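
For completeness, the Python side of my setup now looks roughly like this (same names as in my first post; note the single renderSceneInto() call, as suggested above, and the extra camera input that backs the trans_..._of_myCamera uniform — I pass self.cam here, but base.camera may work just as well in the default setup):

manager = FilterManager(self.win, self.cam)
color_texture = Texture()
depth_buffer = Texture()
# Create the color and depth textures in a single call.
quad = manager.renderSceneInto(colortex=color_texture, depthtex=depth_buffer)
quad.setShader(Shader.load(Shader.SL_GLSL, "vshader.glsl", "fshader.glsl"))
quad.setShaderInput("u_resolution", self.getSize())
quad.setShaderInput("color_texture", color_texture)
quad.setShaderInput("depth_buffer", depth_buffer)
# Pass the original scene camera under the name "myCamera" so that the
# trans_..._of_myCamera inputs become available in the fragment shader.
quad.setShaderInput("myCamera", self.cam)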

Many thanks!
