Hello!
I’m experimenting with depth maps and have a question.
I have a depth map as a NumPy float32 array of shape 512×512, storing world-space distance, and I would like to project it out into a point cloud. In other words, I want to do a kind of reverse rendering that gives me a 3D coordinate for each pixel of the depth map. Is this possible, and if so, how?
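To make it concrete, here is a rough sketch of the kind of back-projection I have in mind. The details are all assumptions on my part: a simple pinhole model, a 40° horizontal FOV (which I believe is the PerspectiveLens default, but I have not verified it), and depth stored as straight-line distance along each pixel's ray:

```python
import numpy as np

H = W = 512
fov_deg = 40.0  # assumed horizontal FOV
focal_px = (W / 2) / np.tan(np.radians(fov_deg) / 2)  # focal length in pixels

depth = np.ones((H, W), dtype=np.float32)  # stand-in for my real depth map

# Pixel coordinates relative to the image center.
u, v = np.meshgrid(np.arange(W) - W / 2 + 0.5,
                   np.arange(H) - H / 2 + 0.5)

# Direction of the ray through each pixel; here the camera looks down +z
# with x right and y up, whereas Panda3D cameras look down +y, so the
# axes would presumably need remapping.
dirs = np.stack([u, -v, np.full_like(u, focal_px)], axis=-1)
dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

points = dirs * depth[..., None]  # (512, 512, 3) camera-space point cloud
```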
To be clear, NumPy itself is not really relevant here; it is just how I store the data. Even just a way to transform a single point of the depth map with regard to the lens (the default PerspectiveLens) would be great!
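For the single-point case, would something along these lines be the right direction? This is only my guess at how Lens.extrude is meant to be used, and I am assuming film coordinates run from -1 to 1:

```python
from panda3d.core import PerspectiveLens, LPoint2, LPoint3

lens = PerspectiveLens()

# My guess: map a pixel (col, row) of the 512x512 map into film
# coordinates, which I assume run from -1 to 1 with +y pointing up.
col, row, depth_value = 256, 256, 5.0  # made-up sample values
film_x = 2.0 * (col + 0.5) / 512 - 1.0
film_y = 1.0 - 2.0 * (row + 0.5) / 512

near_point, far_point = LPoint3(), LPoint3()
if lens.extrude(LPoint2(film_x, film_y), near_point, far_point):
    direction = far_point - near_point
    direction.normalize()
    # Assuming depth is the distance along the ray and the lens sits
    # at the camera-space origin, scale the direction by the depth.
    point = direction * depth_value
    print(point)
```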