Hi, I have a snapshot of a model from a camera view, and I have a render object. In another forum thread, I saw that you can use object picking to pick a point with the mouse and get the 3D point corresponding to it. My problem is different: I obtain the outline of the snapshot somehow, and I want to find the 3D points corresponding to the edge (outline) points, without using the mouse. I can pass in the (x, y) coordinates, but I don't know where to do that. Sorry if this is too trivial; I am a newbie in Panda. Thanks in advance.
I don't think it's trivial, as far as I understand your question. If you explain the context in which this would be used, then maybe we'll find a workaround or a simpler way.
Thanks ninth for the reply. I am working on finding the outline pixels of a model's snapshot as seen by a camera. In other words, suppose I take the model's snapshot from a camera position, then find the outline (edge) of the snapshot using some edge-detection technique. I want to find the 3D points corresponding to all the 2D edge points (i.e. the points on the outline).
I have found a way to do that by simulating mouse clicks, but the problem is that the Picker class I use relies on base.camera, whereas I need to use my own custom camera. Can you please suggest a solution or a workaround? Thanks in advance.
Hmm, ok, I don't have a ready solution, but I can try to give you an idea.
You already have a bunch of points corresponding to the 2D edge, right? If so, then you should test the intersection of a ray, extruded from each 2D point, with a plane parallel to the camera viewport. The plane should be positioned at the center of the needed object's bounding box.
About your own camera: each camera has a lens, and you can use that lens to extrude your ray.
An example of ray-to-plane intersection is here: Super-fast mouse ray collisions with ground plane
P.S. When I asked about context, I meant the purpose for which you want the 3D points corresponding to the 2D points. For example, it may be projection, or selection, or something else. Perhaps that purpose can be achieved another way.
Yes sir, you're right. I am trying to find the reprojection error between the model and an image of the model, using the camera projection matrix. Is there any other way to do that?
But I need the 3D points anyway, to use them for other optimization purposes.
Why not do it the other way around and compare the 2D points?
Another way: if you have the depth buffer of your snapshot, then you can reconstruct the 3D position of each pixel.
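The depth-buffer route comes down to inverting the projection. As a language-agnostic sketch of the math (plain NumPy, not the Panda3D API; `perspective` and `unproject` are illustrative helpers, and an OpenGL-style symmetric frustum is assumed), recovering an eye-space point from its normalized device coordinates plus depth looks like this:

```python
import numpy as np

def perspective(fov_deg, aspect, near, far):
    """Standard symmetric perspective projection matrix."""
    f = 1.0 / np.tan(np.radians(fov_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def unproject(x_ndc, y_ndc, depth_ndc, proj):
    """Map a point in normalized device coords (each in [-1, 1]) back
    to eye space: apply the inverse projection, then divide by w."""
    clip = np.array([x_ndc, y_ndc, depth_ndc, 1.0])
    eye = np.linalg.inv(proj) @ clip
    return eye[:3] / eye[3]

proj = perspective(60.0, 1.0, 1.0, 100.0)
# Round trip: project an eye-space point, then unproject it.
p_eye = np.array([1.0, 2.0, -10.0, 1.0])
clip = proj @ p_eye
ndc = clip[:3] / clip[3]
recovered = unproject(*ndc, proj)
```

One caveat if you try this with a real depth buffer: the stored values are typically in [0, 1] and must be remapped to [-1, 1] before unprojecting.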
Hi, going by the definition of getProjectionMatInv():
Can I use this to find the 3D location of a point? If so, why does the documentation say 3-d "vector"? And how can I use it to find the location, given that it returns a 4x4 matrix (Mat4)? That is, how do I apply a 4x4 matrix to a 2x1 image point to get a 3x1 3D location? Thanks.
It’s a vector because there is no one 3D point corresponding to a particular 2D point. There is an infinite number of them.
You need more information than just the 2D point to determine a 3D point. If you know the distance, you can use Lens.extrudeDepth(), which uses the inverse projection matrix to do this.
If you instead have geometry to project your point onto, you should look at the solutions presented in this thread:
Thanks @rdb for the reference. I looked up extrudeDepth(). Apparently it only needs the 2D point on the lens and no other parameters. I assume the 2D point on the lens is the same as the point on the image, i.e. I can simply pass the (x, y) of the snapshot point to the function, right? Thanks again.
I didn't know about extrudeDepth; this function appeared in 1.9, as far as I can see?
Anyway, if this function uses the depth of the current camera, then you can get unexpected results if you use 2D points from another projection, as I understand it.
sud, it takes a 2D point and a depth value. I’m not sure if you understand that you can’t convert a 2D point to a 3D point without more information. A 2D point on screen corresponds to an infinite line extending outward from the camera in 3D space, rather than a single point.
I guess I don’t really understand your use case - what’s a “snapshot” in your terminology? Perhaps you can show us a screenshot or a diagram showing what you are attempting to do?