Depth map to point cloud

Hello!
I’m experimenting with depth maps and have a question.

I have a depth map as a NumPy float32 array with shape 512x512, storing world-space distances, and I would like to project it out into a point cloud. In other words, I want to do a kind of reverse rendering to get a 3D coordinate for each pixel in the depth map. Is this possible, and if so, how?

NumPy is not really relevant here; that is just how I store the data. Even a way to transform a single point in the depth map with respect to the lens (the default PerspectiveLens) would be great!

Thanks!

Perhaps this Stack Overflow thread could be of use to you here: python - How to convert the depth map to 3D point clouds? - Stack Overflow
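
In case it helps, the core of the linked answer boils down to something like this (a minimal sketch, assuming a standard pinhole camera model; the function name and the parameters fx, fy, cx, cy are my own, with fx/fy the focal lengths in pixels and (cx, cy) the principal point):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map into an (H*W, 3) point cloud.

    Assumes a pinhole camera: fx/fy are focal lengths in pixels,
    (cx, cy) is the principal point, and each depth value is the
    distance along the camera's viewing axis.
    """
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]    # per-pixel row (v) and column (u) indices
    z = depth
    x = (u - cx) * z / fx        # offset from the optical axis, in world units
    y = (v - cy) * z / fy
    return np.dstack([x, y, z]).reshape(-1, 3)
```

Each pixel just gets shifted by the principal point and scaled by its depth over the focal length, so the whole 512x512 map converts in one vectorized call.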

Thanks! Do you know what centerX and centerY are?

A guess:

  • centerX = (width - 1) / 2
  • centerY = (height - 1) / 2

Is getFocalLength the focal length I want?
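
If I'm reading the Panda3D docs right, getFocalLength() is expressed in film-size units rather than pixels, so it may be easier to derive the pixel-space focal length from the field of view instead. A sketch, assuming a 512x512 depth map (the variable names are illustrative):

```python
from math import radians, tan
from panda3d.core import PerspectiveLens

lens = PerspectiveLens()    # or base.camLens in a running app
width, height = 512, 512    # depth map resolution

# getFov() returns the (horizontal, vertical) field of view in degrees.
fov = lens.getFov()
fx = (width / 2) / tan(radians(fov[0]) / 2)    # focal length in pixels, x
fy = (height / 2) / tan(radians(fov[1]) / 2)   # focal length in pixels, y

# Principal point: for a symmetric lens, the image center.
cx = (width - 1) / 2
cy = (height - 1) / 2
```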

Perhaps you can adapt this example to your needs.

@Simulan Thanks for the link, I figured it out because of that!
@serega-kkz That looks very interesting as well for my project!
