How could this be done in Panda3D? What function call, if any, can I make to extract the modelview matrix? I know that there is one for the projection matrix in Panda3D.

The “modelview matrix” in OpenGL describes the placement of the world relative to the viewpoint. In Panda3D, you control this instead by moving the camera object, base.camera. It is a more abstract, higher-level concept than OpenGL’s modelview matrix.

So, as Hypnos says, the answer to your question depends on what, precisely, you have in mind.

Without going into too much detail, our application requires us to figure out a homography between different coordinate systems (for example, between a camera and a projector). The homography is essentially just a matrix that we decompose into its respective modelview and projection matrices. Once this is done, we simply set them using the calls I mentioned previously in this post.

We have it working just fine in OpenGL, but we would like to do this in Panda3D.

I think the camera.getMat() call might be what I am looking for.

If I want to set the modelView matrix in Panda, would I just do a camera.setMat() call?

In OpenGL, Y is up and Z points out of the screen (the eye looks down -Z), whereas in Panda3D it’s almost as though you rotate the world -90 degrees about the X axis: Z is up, and Y points into the screen.

We are dealing with different coordinate systems here, so how would I take that into account when calling setMat()? Would I have to move my row and column vectors around in my matrix to compensate for the slightly different coordinate system?

Panda will automatically compute the net transform between any two nodes. node.getMat(other) returns the net transform of node, as seen from other: that is, the accumulation of transforms from the root to node, composed with the inverse of the accumulation of transforms from root to other.

If you are not taking advantage of a nested scene graph and all of your nodes are attached to the root, then node.getMat(other) is equivalent to node.getMat() * invert(other.getMat()).

In Panda’s model, a camera is an entity that can be positioned in the scene to represent the viewpoint. Thus, the transform assigned to the camera is actually the inverse of the modelview matrix, since the modelview matrix is defined as that which transforms the scene. Equivalently, the modelview matrix is the relative transform of the scene as seen from the camera, so it can therefore be retrieved by:

scene.getMat(base.camera)

If you want to apply a particular modelview matrix, you need to apply the inverse matrix to the camera. If you insist on thinking in terms of OpenGL’s matrices, this could be:

base.camera.setMat(inverse(modelview))

but this is only valid if you have no deep scene graph. If you do have nested nodes, and you want to set the camera to a particular modelview matrix relative to one of the nodes, it would be:

base.camera.setMat(node, inverse(modelview))

The whole design of Panda is, of course, intended to spare the user from having to think in terms of modelview matrices, though these low-level operations are still possible.

Oh, yes, the coordinate system thing. If you want to express your modelview matrix in OpenGL’s Y-up coordinate system, you will need to rotate the matrix before you apply it. It is, just as you have observed, a 90-degree rotation; and this can trivially be done by applying another matrix.

In principle, though, there’s (almost) nothing in OpenGL that is intrinsically Y-up. Y-up is just a convention. OpenGL just receives matrices, and you can load a Z-up projection matrix as easily as loading a Y-up matrix, and it works just fine. (There’s one exception we’ve found: the sphere mapping equation built into OpenGL is intrinsically Y-up, curse them.)