Reconstructing position from depth buffer

I’m trying to reconstruct the world-space position from the depth buffer. I’m calculating the matrices like this:

# e.g. (assumed) the window's GSG and the camera's net transform:
gsg = base.win.getGsg()
camTransform = base.camera.getTransform(base.render).getMat()

# transform the coordinate system
cs_inv_transform = Mat4.convert_mat(gsg.get_internal_coordinate_system(), gsg.get_coordinate_system())
viewInverseMat = Mat4(cs_inv_transform * camTransform)
projectionInverseMat = Mat4(base.camLens.getProjectionMatInv())
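One thing worth double-checking here is the order in which the two inverses are later multiplied. For column-vector math, clip = P · V · p, so the combined inverse is V⁻¹ · P⁻¹ (the order flips); Panda3D's own matrices use the row-vector convention, where it is the other way around. A quick NumPy sanity check with made-up, hypothetical stand-in matrices:

```python
import numpy as np

# Hypothetical stand-ins for the view and projection matrices
view = np.eye(4)
view[:3, 3] = [1.0, 2.0, 3.0]            # a simple camera translation
proj = np.diag([1.5, 1.5, -1.002, 0.0])
proj[2, 3] = -0.2002                      # perspective-style terms
proj[3, 2] = -1.0

# Column-vector convention: clip = proj @ view @ point,
# so the combined inverse must flip the order:
vp = proj @ view
inv_vp = np.linalg.inv(view) @ np.linalg.inv(proj)
print(np.allclose(inv_vp @ vp, np.eye(4)))  # True

# Multiplying the inverses without flipping does NOT invert vp:
wrong = np.linalg.inv(proj) @ np.linalg.inv(view)
print(np.allclose(wrong @ vp, np.eye(4)))   # False
```

Which order is correct in the shader depends on whether the matrices arrive transposed, so it's worth testing both if the reconstruction looks skewed.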

And reconstruct the position like this:


// read depth
float z = texture2D(depthSampler, texcoord0).r;
float depthVal = z; // also tried 1.0 / z and z * 2.0 - 1.0

// get window-space
vec2 pos = (texcoord0 - 0.5) * 2.0;
pos.y = 1.0 - pos.y;

// compute the inverse viewProjectionMatrix
mat4 world_viewProjectionMatrixInverse = world_ProjectionInverse * world_ViewInverse;

vec4 projected = vec4(pos.x, pos.y, depthVal, 1.0);
vec4 worldSpacePoint = world_viewProjectionMatrixInverse * projected;

However, my result is wrong.

Any suggestions? :)

Thanks in advance

I can’t find my own notes on how I did this before, but have you looked at the Fireflies sample
among the sample programs? The online documentation gives a pretty detailed walkthrough of
the math.
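To illustrate the math that walkthrough covers, here is a minimal NumPy sketch of the full round trip — the projection layout, an identity view matrix, and depth stored in [0, 1] are all assumptions. Note the z * 2 - 1 remap of the stored depth and the divide by w after multiplying by the inverse:

```python
import numpy as np

def perspective(fov_deg, aspect, near, far):
    # Standard OpenGL-style perspective projection (column-vector convention)
    f = 1.0 / np.tan(np.radians(fov_deg) / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2.0 * far * near / (near - far)
    m[3, 2] = -1.0
    return m

proj = perspective(60.0, 1.0, 0.1, 100.0)
view = np.eye(4)  # camera at the origin looking down -Z, for simplicity

world = np.array([0.3, -0.2, -5.0, 1.0])  # a point in front of the camera

# Forward pass: clip space, then perspective divide -> NDC in [-1, 1]
clip = proj @ view @ world
ndc = clip[:3] / clip[3]

# What the depth buffer stores: NDC z remapped into [0, 1]
depth_buffer_value = ndc[2] * 0.5 + 0.5

# Reconstruction: remap the stored depth back to [-1, 1]...
ndc_again = np.array([ndc[0], ndc[1], depth_buffer_value * 2.0 - 1.0, 1.0])
inv_vp = np.linalg.inv(proj @ view)
h = inv_vp @ ndc_again
reconstructed = h[:3] / h[3]  # ...and divide by w after the inverse multiply

print(np.allclose(reconstructed, world[:3]))  # True
```

Skipping either the depth remap or the final divide by w gives a distorted position, which matches the symptoms described above.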