From my understanding, in real-world cameras the extrinsic matrix of the camera is the “transformation matrix” that defines its XYZ position and rotation, and the intrinsic matrix is the “projection matrix” that maps 3D space to 2D pixel space.
In 3D graphics there’s the projection matrix that the vertex shader uses to map view space into clip space, and on to pixel coordinates (intrinsic?), and the position+rotation+scale of the camera that defines its transformation matrix (I’m guessing extrinsic?)
Question is, are these assumptions correct, and if so, how do I set them in Panda3D?
I found MatrixLens.setUserMat() and base.make_camera(), but don’t really know what to do from here.
Thanks
UPDATE:
I got the matrices wrong according to this article:
It turns out that the extrinsic matrix corresponds to the modelview matrix, and the intrinsic matrix to the projection matrix.
So what’s the difference between all the matrices?
To set a custom projection matrix, you can create a new MatrixLens, apply the matrix using the method you found, and then replace the existing lens on the camera using base.cam.node().setLens(lens).
To set the “view” matrix (the full modelview matrix also includes the transformation of the model currently being rendered), you simply set the transformation of the camera in 3D space, using base.camera.setMat(render, mat)
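To make the relationship concrete, here is a minimal plain-Python sketch (no Panda3D required; all names here are illustrative, not Panda3D API) showing that the view transform is simply the inverse of the camera’s world-space transform:

```python
# A sketch of the idea above: placing the camera in the world defines a
# transform, and the "view" part of the modelview matrix is its inverse.

def translation(tx, ty, tz):
    """Build a 4x4 row-major translation matrix."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def mat_vec(m, v):
    """Multiply a 4x4 matrix by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# Camera placed at (10, 5, 2) in world space (no rotation, for simplicity):
camera_world = translation(10, 5, 2)

# The view matrix is the inverse of that transform:
view = translation(-10, -5, -2)

# A world-space point sitting at the camera's position maps to the origin
# of camera space, as expected:
print(mat_vec(view, [10, 5, 2, 1]))  # -> [0, 0, 0, 1]
```

With rotation in the mix the inverse is no longer just a negated translation, which is why it’s easiest to let Panda3D derive the view transform from the camera’s node transform for you.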
Hunting around, it looks like camera.getMat() returns the position/orientation transformation matrix for the camera, but I’m having trouble finding the call to get the lens’s projection matrix.
I want to compute the 2D (window) coordinate of some 3D world point.
Ultimately I want to draw a “2D” icon (probably a card?) centered on top of a real-world 3D point. I’ve done this the hard way in past OpenCV projects; maybe Panda3D already provides things to compute this and make it easy?
# Given a 3D point stored in "point3d", and a camera NodePath called "myCamera"...
# (Note: Python identifiers can't start with a digit, hence the names.)
# First, convert the 3D point into the camera's coordinate space:
point3dInCameraSpace = myCamera.getRelativePoint(render, point3d)
# Next, construct a 2D point in which to store the end result:
point2d = Point2()  # This will be filled in by the method below.
# Finally, get the camera's lens and use it to project the point!
lens = myCamera.node().getLens()
lens.project(point3dInCameraSpace, point2d)
# "point2d" now holds the projected point. (Note that "project" returns
# False if the point falls outside the lens's frustum.)
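For intuition, the “project” step is where the intrinsic/projection matrix discussed earlier actually gets applied. Here is a rough plain-Python sketch of the underlying math (illustrative only; Panda3D’s lens also accounts for film size, offsets, and near/far planes):

```python
# A toy perspective projection: map a camera-space point (x right,
# y forward, z up, as in Panda3D's default convention) to normalized
# 2D coordinates in [-1, 1].  Illustrative only -- not Panda3D API.
import math

def project_point(cam_space_point, fov_degrees=60.0):
    """Return normalized (x, y) screen coordinates, or None if the
    point is behind (or exactly at) the camera plane."""
    x, y, z = cam_space_point
    if y <= 0:
        return None
    # Focal-length factor derived from the field of view:
    f = 1.0 / math.tan(math.radians(fov_degrees) / 2.0)
    # Perspective divide by the forward distance:
    return (f * x / y, f * z / y)

# A point straight ahead of the camera lands at the screen centre:
print(project_point((0.0, 10.0, 0.0)))  # -> (0.0, 0.0)
```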
Thanks, I tried to follow your suggestions, but the converted3DPoint I’m getting doesn’t make any sense. I’m calling camera.setPos() and camera.setHpr(), but getRelativePoint() is returning giant numbers even when I give it something close by … it’s as if the result isn’t relative to the camera, but relative to the global coordinate system, which the camera is moving within too. I’m still pretty green with Panda3D, so I’m sure I’m confused about something (or, more likely, multiple things). Visually, all my other stuff is working correctly, so it’s not as though I’ve completely convoluted my entire coordinate system and am only seeing grey; I’m seeing all the stuff where I expect it.
Well, what I’ve posted above does assume that your initial 3D point is in the coordinate space of “render”, i.e. in “pure world-space”, and not relative to some other node. Could it be that said assumption is incorrect?
Otherwise, could you post your current code, please? It might be easier to spot the issue by looking at that.
Quick follow-up: good news, it looks like I have it working! As I was trying to distill things down to a snippet, I saw what I was doing wrong. My coordinates were coming in NED (north, east, down) and I hadn’t remapped them into the OpenGL coordinate system correctly, so once I accounted for that, everything started to make sense. Thanks again!
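For anyone hitting the same issue, here is a sketch of the kind of remapping involved (illustrative only; Panda3D’s default frame is X-right/east, Y-forward/north, Z-up, and whether this exact mapping is correct for you depends on your own heading conventions):

```python
# One possible NED (north, east, down) to Panda3D Z-up remapping.
# Treat this as a starting point, not gospel -- verify against your
# own axis conventions.

def ned_to_panda(north, east, down):
    """Convert a NED coordinate triple to a Z-up (east, north, up) frame."""
    return (east, north, -down)

# 100 m north of the origin and 50 m above it:
print(ned_to_panda(100.0, 0.0, -50.0))  # -> (0.0, 100.0, 50.0)
```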
I have a follow up question. As I add more components to my scene, I notice that the solution I have is not 100% correct. The error seems to change when the camera rolls or the aspect ratio of the screen changes. I am drawing a 3d object in the scene (simple rectangle to represent a runway) and then attempting to compute the 2d screen coordinates of a point on the 3d object (the target touch down point) and these are not lining up visually. When the camera rolls, the solution error changes and changing the aspect ratio of the window also seems to do weird things to the 2d point.
Does getRelativePoint() consider the orientation of the camera, or only the position? Is there more to do if the camera can roll and pitch freely? It’s possible I’m screwing something up at a fundamental level, but the error changes when the camera rolls, so it seems plausible that I’m missing a step or there is something more to consider?
How are you determining the validity of the 2D point? Are you placing a marker on the screen somewhere…? If so, then I wonder whether the issue doesn’t lie with how you’re applying that step.
Of course, it is possible that something’s going wrong in your call to “getRelativePoint”: perhaps your particular setup has some specific conditions that call for unusual handling, or it may even be that a bug has slipped into Panda’s code.
But I think it more likely that the problem lies with the handling of the 2D point, myself.
Could you perhaps share some relevant code, and/or some screenshots showing the error that you’re seeing, please?
I’m doing this little side project on my work time, so I need to be a little careful about sharing code, but in the meantime I think I found the answer. base.camLens.project() returns a 2D point in normalized screen coordinates, but I need to multiply the x value (the first component of the 2D vector) by base.getAspectRatio() in order for the 2D result to stay “stuck” to the proper 3D location as the camera moves around.
Or, more accurately (if I’m not much mistaken), for that position to be accurate relative to the “aspect2d” node; the coordinates produced by the “project” method should already be perfectly accurate relative to the “render2d” node.
(Indeed, I believe that you should also be able to make the adjustment that you found by multiplying your coordinates instead by the result of “aspect2d.getSx(render2d)”, at least for a window that’s wider than it is tall.)
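To make the conversion concrete, here is a minimal plain-Python sketch of that aspect correction (the function name is mine, not a Panda3D API; as noted above, it assumes a landscape window, where x is the scaled axis):

```python
# Convert normalized render2d coordinates (both axes in [-1, 1]) to
# aspect2d coordinates by scaling x with the window's aspect ratio.
# For a portrait window it would be y that needs scaling instead,
# which is what aspect2d.getSx(render2d) accounts for automatically.

def render2d_to_aspect2d(x, y, aspect_ratio):
    """Landscape-window-only sketch of the render2d -> aspect2d mapping."""
    return (x * aspect_ratio, y)

# Halfway to the right edge of a 16:9 window:
print(render2d_to_aspect2d(0.5, 0.0, 16.0 / 9.0))  # roughly (0.889, 0.0)
```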