Camera's intrinsic and extrinsic matrices

From my understanding, in real-world cameras the extrinsic matrix is the “transformation matrix” that defines the camera’s xyz position and rotation, and the intrinsic matrix is the “projection matrix” that maps 3D space to 2D pixel space.

In 3D graphics there’s the projection matrix that the vertex shader uses to project camera space to clip space (and ultimately pixel coordinates) (intrinsic?), and the position+rotation+scale of the camera that defines the transformation matrix (I’m guessing extrinsic?)

Question is: are these assumptions correct, and if so, how do I set them in Panda3D?
I found MatrixLens.setUserMat() and base.make_camera(), but I don’t really know what to do from here.

Thanks

UPDATE:
I got the matrices the wrong way around, according to this article:

The extrinsic matrix corresponds to the modelview matrix, and the intrinsic matrix is the projection matrix.

So what’s the difference between all the matrices?

My understanding is this, in short and perhaps a little broadly:

  • The model matrix provides the transformation from the local space of a model (the space in which its vertices are defined) to world space.
  • The view matrix provides the transformation from world space to camera-relative space.
  • The projection matrix provides the transformation from camera space to screen space.

Matrices like “ModelView”, then, are simply combinations of the above.
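
For what it’s worth, here’s a minimal sketch of how those fit together in Panda3D, which uses row-vector convention (so matrices compose left to right); it assumes a running ShowBase app and a NodePath called “model” attached to render:

model_mat = model.getMat(render)       # model space -> world space
view_mat = render.getMat(base.camera)  # world space -> camera space
modelview = model_mat * view_mat       # model space -> camera space
# (Equivalently, model.getMat(base.camera) should produce the same matrix.)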

See the following manual page for more:
https://docs.panda3d.org/1.10/python/programming/shaders/coordinate-spaces

To set a custom projection matrix, you can create a new MatrixLens, apply the matrix using the method you found, and then replace the existing lens on the camera using base.cam.node().setLens(lens).
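
Something like this, as a sketch (assuming that you’ve already built your 4x4 projection matrix as an LMatrix4 named “projMat”):

from panda3d.core import MatrixLens

lens = MatrixLens()
lens.setUserMat(projMat)       # install the custom projection matrix
base.cam.node().setLens(lens)  # replace the default lens on the camera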

To set the “view” matrix (the modelview matrix includes the transformation of the model currently being rendered) you simply set the transformation of the camera in 3D space, using base.camera.setMat(render, mat)
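
In code, that might look like this (a sketch, with “camWorldMat” standing in for whatever world-space transform you want the camera to have):

base.camera.setMat(render, camWorldMat)  # place and orient the camera in world space
viewMat = render.getMat(base.camera)     # the resulting world-to-camera ("view") matrix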

Hunting around, it looks like camera.getMat() returns the pos/orientation transformation matrix for the camera. But I’m having trouble finding the call to get the lens projection matrix.

I want to compute the 2d (window) coordinate of some 3d world point.

Ultimately I want to draw a “2d” icon (probably a card?) centered on top of a real-world 3d point. I’ve done this the hard way in past OpenCV projects; maybe Panda3D already provides things to compute this and make it easy?

Thanks in advance,

Curt.

I think that this would be done via a call to “getViewMat” on the relevant lens. Like so:

matrix = myLens.getViewMat()

(This per the manual.)

[edit] Ah, sorry, that’s of course the view matrix, not the projection matrix. It seems that there’s a separate method, getProjectionMat, for that. [/edit]
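
That is, if I’m reading the API reference correctly:

matrix = myLens.getProjectionMat()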

It should, I think, be possible to do this via a call to the “project” method of the “Lens” class. Something like this:

# Given a 3D world-space point named "point3d", and a camera-NodePath called "myCamera"...
from panda3d.core import Point2

# First, convert the 3D point into the camera's space
convertedPoint3d = myCamera.getRelativePoint(render, point3d)

# Next, construct a 2D Point in which to store the end-result
point2d = Point2() # This will be filled in by the method, below:

# Finally, get the camera's lens and use it to project the point!
# (Note that "project" returns False if the point falls outside the lens's field of view.)
lens = myCamera.node().getLens()
lens.project(convertedPoint3d, point2d)

# The 2D point should now be in the variable named "point2d"!

Thanks, I tried to follow your suggestions, but the convertedPoint3d I’m getting doesn’t make any sense. I’m calling camera.setPos() and camera.setHpr(), but getRelativePoint() is returning giant numbers when I give it something close by … it’s as though the result isn’t relative to the camera, but relative to the global coordinate system that the camera itself is moving within. I’m still pretty green with Panda3D, so I’m sure that I’m confused about something (or, more likely, multiple things). Visually, all my other stuff is working correctly, so it’s not as though I’ve completely convoluted my entire coordinate system and am only seeing grey … I’m seeing all the stuff where I expect it.

Well, what I’ve posted above does assume that your initial 3D point is in the coordinate space of “render”–i.e. is in “pure world-space”, and not relative to some other node. Could it be that said assumption is incorrect?

Otherwise, could you post your current code, please? It might be easier to spot the issue by looking at that.

Hi Thaumaturge,

Quick follow-up … it looks like I have it working; good news! As I was trying to distill things down to a snippet, I saw what I was doing wrong: my coordinates were coming in as NED (north, east, down), and I hadn’t remapped them into the OpenGL coordinate system correctly. Once I accounted for that, everything started to make sense. Thanks again!
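
(For reference, a minimal sketch of one such remapping, assuming Panda3D’s default right-handed, Z-up frame with +X east and +Y north; the exact mapping depends on your own conventions:)

from panda3d.core import Point3

def ned_to_panda(north, east, down):
    # NED: x = north, y = east, z = down
    # Panda3D default: x = east, y = north, z = up
    return Point3(east, north, -down)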

Ah, that does make some sense! Well, I’m glad that you found the problem, and that you got the feature working! :)

Hi Thaumaturge,

I have a follow up question. As I add more components to my scene, I notice that the solution I have is not 100% correct. The error seems to change when the camera rolls or the aspect ratio of the screen changes. I am drawing a 3d object in the scene (simple rectangle to represent a runway) and then attempting to compute the 2d screen coordinates of a point on the 3d object (the target touch down point) and these are not lining up visually. When the camera rolls, the solution error changes and changing the aspect ratio of the window also seems to do weird things to the 2d point.

Does getRelativePoint() consider the orientation of the camera, or only the position? Is there more to do if the camera can roll and pitch freely? It’s possible that I’m screwing something up at a fundamental level, but the error changes when the camera rolls, so it seems plausible that I’m missing a step, or that there’s something more to consider.

Thanks in advance,

Curt.

Ah, interesting…

It should indeed take into account orientation.

How are you determining the validity of the 2D point? Are you placing a marker on the screen somewhere…? If so, then I wonder whether the issue doesn’t lie with how you’re applying that step.

Of course, it is possible that something’s going wrong in your call to “getRelativePoint”–perhaps for example your particular setup has some specific conditions that call for unusual handling, or it may even be that a bug has slipped into Panda’s code.

But I think it more likely that the problem lies with the handling of the 2D point, myself.

Could you perhaps share some relevant code, and/or some screenshots showing the error that you’re seeing, please?

I’m doing this little side project on my work time, so I need to be a little careful about sharing code, but in the meantime I think I found the answer. base.camLens.project() returns a 2d point in normalized screen coordinates, but I need to multiply the x (the first component of the 2d vector) by base.getAspectRatio() in order for the 2d result to be “stuck” to the proper 3d location as the camera moves around.
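
(Putting the pieces together, something like this sketch, with “point3d” standing in for the world-space Point3 in question and “icon” for a hypothetical NodePath parented to aspect2d:)

from panda3d.core import Point2

point2d = Point2()
pointInCamSpace = base.cam.getRelativePoint(render, point3d)
if base.camLens.project(pointInCamSpace, point2d):
    # project() yields render2d coordinates in [-1, 1]; scale x by the
    # aspect ratio to position a node ("icon", here) that lives under aspect2d.
    icon.setPos(point2d.x * base.getAspectRatio(), 0, point2d.y)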

Or more accurately, if I’m not much mistaken, for that position to be accurate to the “aspect2d” node; the coordinates produced by the “project” method should be perfectly accurate to the “render2d” node.

(Indeed, I believe that you should also be able to make the adjustment that you found by multiplying your coordinates instead by the result of “render2d.getSx(aspect2d)”, at least for a window that’s wider than it is tall.)

In any case, I’m glad that you found a solution! :)
