[HACKED] matrix representation and projections

i’m writing a projector calibration routine for displaying on physical 3d models, like a lamp or a mannequin. the procedure is the dual of the gold standard camera calibration algorithm outlined in multiple view geometry by hartley and zisserman.

the brief outline is that a number of points on the physical model are registered with their locations in image space (the coordinate plane of the projector’s lcd, say) and from this correspondence, a 3x4 matrix can be derived. this matrix, the ‘camera matrix’, takes homogeneous points in 3d all the way to homogeneous coordinates in 2d, effectively transforming from model space to window space. going all the way to window space is a no-no in current graphics apis, i think. usually we only go as far as clip space and then ask the hardware to do everything else. i can derive a transformation to clip space by transforming the image-space points to clip space before solving the correspondence.
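
as a sketch of what i mean (a minimal direct linear transform solve in numpy; the function name and the lack of hartley-style normalization are mine, not the actual routine):

import numpy as np

def solve_camera_matrix(world_pts, image_pts):
    # world_pts: (n,3) model-space points; image_pts: (n,2) projector-space points.
    # builds the standard DLT system: each correspondence x ~ P*X contributes
    # two independent linear rows via x cross (P*X) = 0. needs n >= 6 well-spread points.
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        Xh = [X, Y, Z, 1.0]
        rows.append([0.0] * 4 + [-c for c in Xh] + [v * c for c in Xh])
        rows.append(Xh + [0.0] * 4 + [-u * c for c in Xh])
    A = np.asarray(rows)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)  # null-space direction, reshaped into the 3x4 camera matrix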

as far as i can tell, the matrix ‘works’. that is, if i reproject the world-space points through the matrix, the image points returned are accurate to 3 decimal places.

the matrix can be decomposed to give a translation vector and rotation matrix representing the extrinsic parameters of the camera. the rotation matrix can be unwound to determine the camera’s hpr and this is nice. if i apply(1) these extrinsic parameters to base.camera, and display a virtual model of the object that i’ve used to calibrate, things are in pretty good shape. everything seems perfect except the world is offset by a few pixels in each direction.

ok, fine. the camera matrix is aware of this. the way we found the rotation matrix was by taking the first three columns of the camera matrix and performing RQ decomposition via Givens rotations. the matrix Q is the rotation matrix, and the matrix R is a right (upper) triangular matrix holding the camera’s intrinsic parameters – its projection matrix. the vector (R[0,2],R[1,2]) is the principal point. typically, this is the center of the image ((width/2,height/2) for corner origins or (0,0) for center origin). my matrix has non-zero values in these entries, and they correspond to the offsets i’m seeing when i use the standard PerspectiveLens’s projection matrix. but i don’t know how to tell panda to use this offset(1).
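
for concreteness, here’s the decomposition step as i understand it, sketched with scipy’s rq() standing in for the givens version (P is the 3x4 camera matrix; the variable names are mine):

import numpy as np
from scipy.linalg import rq

def decompose(P):
    R, Q = rq(P[:, :3])              # P[:, :3] == R @ Q; R upper triangular, Q orthonormal
    D = np.diag(np.sign(np.diag(R)))
    R, Q = R @ D, D @ Q              # D @ D == I; fix signs so the diagonal of R is positive
    t = np.linalg.solve(R, P[:, 3])  # translation, from P == R @ [Q | t]
    # (the camera's world position is -Q.T @ t)
    return R / R[2, 2], Q, t         # normalize so R[2,2] == 1; (R[0,2], R[1,2]) is the principal point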

(1) the big problem here is that i’m having a very hard time coping with panda’s coordinate system/transformation matrices:

my derivation of the translation vector went just right – (x/w,y/w,z/w) == camera.getPos() == camera.getMat().getRow(3). but why is the translation vector in the last row instead of the last column? oh. because Mat4.xform() computes v * M (a row vector times the matrix) rather than M * v^T. ok.
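
a two-line sanity check of that convention (using the panda3d.core import path):

from panda3d.core import Mat4, Vec3, Vec4

m = Mat4.translateMat(Vec3(10, 20, 30))
print(m.getRow(3))                 # the translation lives in row 3: (10, 20, 30, 1)
print(m.xform(Vec4(0, 0, 0, 1)))   # xform computes v * M, so the origin lands at (10, 20, 30, 1)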

the rotation matrix was more problematic. to get from my matrix Q to the upper 3x3 of camera.getMat() i had to 1) swap rows 1 and 2, 2) transpose, and 3) scale by -1. the transpose is consistent with my experience with the translation vector. i assume the row (now column) swapping has to do with the difference between y-up and z-up coordinate spaces? and i lost my will before exploring the negation.

there’s a similar problem with the projection matrix. camera.getChild(0).node().getLens().getProjectionMat() has rows 1 and 2 swapped from the perspective of opengl or direct3d projection matrices. i assume that only the ratio of the focal lengths is important, right? not the focal length values themselves? also, my projection matrix, R, is 3x3, and i’m not sure how to promote it to a 4x4 homogeneous matrix. also, obviously, it contains parameters that are typically not part of a projection matrix (this offset vector, for instance), and i’m not sure if i should try to incorporate them in the virtual camera’s projection matrix or not. i’m not interested in the camera’s near and far planes. would turning the depth test off disable clipping? finally, the principal point is actually a homogeneous point consisting of the third column of the projection matrix, so those offsets are only appropriate after dividing by R[2,2].
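
for reference, here’s the one construction i know of for promoting a 3x3 intrinsic matrix to 4x4 (not panda-specific; written in the column-vector M @ v convention, so it would still need the transpose/row-swap gymnastics above, and near/far are free choices):

import numpy as np

def k_to_proj4(K, near, far):
    # keep K's x, y, and w rows; synthesize a depth row so that z_clip/w runs
    # from 0 at z_eye == near to 1 at z_eye == far (assumes K[2] == (0, 0, 1))
    P = np.zeros((4, 4))
    P[0, :3] = K[0]
    P[1, :3] = K[1]
    P[3, :3] = K[2]
    P[2, 2] = far / (far - near)
    P[2, 3] = -far * near / (far - near)
    return P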

basically, nothing i’ve done has given me the results i really want. here’s what i’ve tried:

-manually setting the pos and hpr of the camera (as described above). this got me all the way to the offset problem
-moving the camera to the origin and setting a MatrixLens’s userMat with my camera matrix as follows:

from panda3d.core import Mat4, Vec3, Vec4, MatrixLens

camera.setPos(Vec3(0))
camera.setHpr(Vec3(0))
lensMat = Mat4()
lensMat.setRow(0, camMat.getRow(0))
lensMat.setRow(1, camMat.getRow(1))
lensMat.setRow(2, Vec4(0, 0, 0, 1))  # e4 as the z row ('wbuffer'); Mat4() starts uninitialized, so don't read a row before setting it
lensMat.setRow(3, camMat.getRow(2))  # the w-producing row of the 3x4 matrix
newLens = MatrixLens()
newLens.setUserMat(lensMat)
camera.getChild(0).node().setLens(newLens)

this failed in a big way when my matrix was transforming to window space. i have not tried this trick again with a camera matrix that transforms to clip space.
-passing the camera matrix into a shader program and using it in place of the modelviewprojection matrix (i replace the model matrix as well as the view and projection because i’m assuming the object that i’m projecting on is at render’s origin). this didn’t work, but i’m not confident that i passed the parameter appropriately. i’ve tried:

uniform float4x4 k_camera_mat

with

from panda3d.core import NodePath

myCam = NodePath('cam')
myCam.setMat(lensMat)  # or myCam.setMat(camMat)
render.setShaderInput('camera_mat', myCam)

i didn’t try to use a trans_x_to_y_z parameter because i think that’ll take me back to the problems i was having before with cameras.
-lastly, i spent a bunch of time in interactive mode trying to multiply transforms (the camera NodePath transform, the projection matrix, etc) together to get a transform that performs the same action as the camera matrix, which has failed. here’s a fundamental question: if Ax = a and Bx = b where a and b are equivalent modulo perspective division, what is the relationship between A and B?
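
(answering my own question, i think: if the equivalence holds for every x, not just the calibration points, and A is invertible, then B * A^-1 sends every vector to a multiple of itself, which forces B = s*A for a single scalar s. if only x/w and y/w have to agree, then only the x, y, and w rows of A and B need match up to a common scale, and the z row is completely free – which turns out to matter later in this thread.)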

my experiments have not been exhaustive, though they have exhausted me. the next thing i will try is to return to manually setting the camera NodePath’s pos and hpr and then feeding the offset vector from the intrinsic matrix R into a vertex program that will adjust the vertex coordinates uniformly by that offset after the standard modelviewprojection transform. i’m hopeful that this hack will get me good results for the time being, but i’d really like to know what the right way to deal with this data is so i can do things in The Right Way.

jeremy

Seems to me like The Right Way would be to load the desired matrix into a MatrixLens, since that’s what it’s designed for. I’m curious to know in what big way this failed. (I’m not 100% sure that the MatrixLens works properly with shaders, though, since I’m not sure anyone’s ever tried that.)

From your description, I think the piece you are missing is Panda’s coordinate system transforms.

The logic that Panda uses to convert its various transforms into OpenGL is the following.

To set up the scene, compute:

C = Mat4.convertMat(CSYupRight, lens.getCoordinateSystem())
P = lens.getProjectionMat()
Load the matrix C * P as GL_PROJECTION.

To render a given GeomNode, compute:

S = Mat4.convertMat(gsg.getCoordinateSystem(), CSYupRight)
V = the net composition of transforms to the camera node.
N = the net composition of transforms to the GeomNode.
M = N * V^-1
Load the matrix M * S as GL_MODELVIEW.
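
In Python terms, roughly (a sketch against the default ShowBase setup; geomNodePath stands in for whichever NodePath holds your GeomNode):

from panda3d.core import Mat4, CSYupRight

lens = base.cam.node().getLens()
C = Mat4.convertMat(CSYupRight, lens.getCoordinateSystem())
P = lens.getProjectionMat()
glProjection = C * P       # what gets loaded as GL_PROJECTION

S = Mat4.convertMat(base.win.getGsg().getCoordinateSystem(), CSYupRight)
V = base.cam.getNetTransform().getMat()
N = geomNodePath.getNetTransform().getMat()
Vinv = Mat4()
Vinv.invertFrom(V)
glModelview = N * Vinv * S  # what gets loaded as GL_MODELVIEW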

The default value for both lens.getCoordinateSystem() and gsg.getCoordinateSystem() is the global user coordinate system setting, which is set in the Config.prc file as the variable “coordinate-system”, and defaults to “zup-right” or CSZupRight. Thus, you are getting burned by two different 90-degree rotations, one in the modelview matrix and one in the projection matrix.

One way to avoid both of these is to set “coordinate-system yup-right” in your Config.prc file. Assuming that your matrix is designed for a right-handed Y-up coordinate system, of course. Note that fiddling with the global coordinate system settings may break certain modules that were strictly designed with a Z-up coordinate system in mind, like DirectGui.

You can see the matrices that are actually being loaded to OpenGL by setting:

notify-level-display spam

in your Config.prc file. This will also print a lot of other noise, but if your scene is very simple it shouldn’t be too overwhelming.

David

the two different rotations are in opposite directions, though, right? so their product is identity in the multiplication M * S * C * P.

so if we set

mvp = M * S * C * P
myMvp = Mat4.identMat() * S * C * myProj  # explicit identity for the (absent) modelview; Mat4() alone is uninitialized
knownGood = mvp.xform(x)
attempt = myMvp.xform(x)

i find that the x/w and y/w coordinates of knownGood and attempt match. the z coordinates do not match.

something interesting happened when i did this. when rendering with the default camera, this is the frame breakdown:

:display(spam): begin_frame(render): osxGraphicsWindow window1 0x842e04
:display(spam): clear (): osxGraphicsWindow window1 0x842e04
:display:gsg(spam): Setting GSG state to 0x15df234:
  0 attribs:
    SimpleHashMap (0 entries): [ ]
    SimpleHashMap (0 entries): [ ]
:display:gsg:glgsg(spam): glEnable(GL_RESCALE_NORMAL)
:display:gsg:glgsg(spam): glDisable(GL_NORMALIZE)
:display(spam): do_issue_light: 0x16f6ffb4:2
  LightAttrib:identity
:display:gsg:glgsg(spam): glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT|GL_STENCIL_BUFFER_BIT|)
:display(spam): Drawing window window1
:display:gsg:glgsg(spam): glMatrixMode(GL_PROJECTION): [ 2.79904 0 0 0 ] [ 0 3.73205 0 0 ] [ 0 0 -1.002 -1 ] [ 0 0 -2.002 0 ]
:display:gsg(spam): Setting GSG state to 0x15df660:
  2 attribs:
    CullFaceAttrib:cull_clockwise
    RescaleNormalAttrib:auto
    SimpleHashMap (0 entries): [ ]
    SimpleHashMap (0 entries): [ ]
:display:gsg:glgsg(spam): glLoadMatrix(GL_MODELVIEW): [ 0.83205 0.412021 -0.371391 0 ] [ -0.5547 0.618031 -0.557086 0 ] [ 0 0.669534 0.742781 0 ] [ -2.6226e-06 0 -53.8517 1 ]
:display:gsg:glgsg(spam): glDisable(GL_NORMALIZE)
:display:gsg:glgsg(spam): glDisable(GL_RESCALE_NORMAL)
:display:gsg:glgsg(spam): begin_draw_primitives: card 4 rows: [ vertex(3f) normal(3f) texcoord(2f) ]
:display:gsg:glgsg(spam): draw_tristrips: GeomTristrips, 1, 4
:display:gsg:glgsg(spam): glMatrixMode(GL_PROJECTION): [ 1 0 0 0 ] [ 0 1 0 0 ] [ 0 0 -0.001 0 ] [ 0 0 0 1 ]
:display:gsg(spam): Setting GSG state to 0x15df234:
  0 attribs:
    SimpleHashMap (0 entries): [ ]
    SimpleHashMap (0 entries): [ ]
:display:gsg:glgsg(spam): glDisable(GL_NORMALIZE)
:display:gsg:glgsg(spam): glDisable(GL_RESCALE_NORMAL)
:display(spam): do_issue_light: 0x16f6ffb4:2
  LightAttrib:identity
:display:gsg:glgsg(spam): glClear(GL_DEPTH_BUFFER_BIT|)
:display:gsg:glgsg(spam): glMatrixMode(GL_PROJECTION): [ 1 0 0 0 ] [ 0 1 0 0 ] [ 0 0 -0.001 0 ] [ 0 0 0 1 ]
:display(spam): end_frame(render): osxGraphicsWindow window1 0x842e04

but when i create the new lens, apply it to the camera and zero the camera’s pos and hpr, the frame output seems to override my matrix with the old projection matrix:

:display(spam): begin_frame(render): osxGraphicsWindow window1 0x842e04
:display(spam): clear(): osxGraphicsWindow window1 0x842e04
:display:gsg(spam): Setting GSG state to 0x15df234:
  0 attribs:
    SimpleHashMap (0 entries): [ ]
    SimpleHashMap (0 entries): [ ]
:display:gsg:glgsg(spam): glEnable(GL_RESCALE_NORMAL)
:display:gsg:glgsg(spam): glDisable(GL_NORMALIZE)
:display(spam): do_issue_light: 0x16f6ffb4:2
  LightAttrib:identity
:display:gsg:glgsg(spam): glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT|GL_STENCIL_BUFFER_BIT|)
:display(spam): Drawing window window1
:display:gsg:glgsg(spam): glMatrixMode(GL_PROJECTION): [ -0.0258709 -0.0172256 0 -0.00427873 ] [ -2.8952e-05 -0.0277582 0 0.00859958 ] [ -0.0172504 0.0257217 0 0.00608695 ] [ 0.000507549 -0.000195238 1 -0.60216 ]
:display:gsg:glgsg(spam): glMatrixMode(GL_PROJECTION): [ 2.79904 0 0 0 ] [ 0 3.73205 0 0 ] [ 0 0 -1.002 -1 ] [ 0 0 -2.002 0 ]
:display:gsg(spam): Setting GSG state to 0x15df660:
  2 attribs:
    CullFaceAttrib:cull_clockwise
    RescaleNormalAttrib:auto
    SimpleHashMap (0 entries): [ ]
    SimpleHashMap (0 entries): [ ]
:display:gsg:glgsg(spam): glLoadMatrix(GL_MODELVIEW): [ 1 0 0 0 ] [ 0 0 -1 0 ] [ 0 1 0 0 ] [ 0 0 0 1 ]
:display:gsg:glgsg(spam): glDisable(GL_NORMALIZE)
:display:gsg:glgsg(spam): glDisable(GL_RESCALE_NORMAL)
:display:gsg:glgsg(spam): glDisable(GL_NORMALIZE)
:display:gsg:glgsg(spam): glDisable(GL_RESCALE_NORMAL)
:display(spam): do_issue_light: 0x16f6ffb4:2
  LightAttrib:identity
:display:gsg:glgsg(spam): begin_draw_primitives: card 4 rows: [ vertex(3f) normal(3f) texcoord(2f) ]
:display:gsg:glgsg(spam): draw_tristrips: GeomTristrips, 1, 4
:display:gsg:glgsg(spam): glMatrixMode(GL_PROJECTION): [ 1 0 0 0 ] [ 0 1 0 0 ] [ 0 0 -0.001 0 ] [ 0 0 0 1 ]
:display:gsg(spam): Setting GSG state to 0x15df234:
  0 attribs:
    SimpleHashMap (0 entries): [ ]
    SimpleHashMap (0 entries): [ ]
:display:gsg:glgsg(spam): glDisable(GL_NORMALIZE)
:display:gsg:glgsg(spam): glDisable(GL_RESCALE_NORMAL)
:display(spam): do_issue_light: 0x16f6ffb4:2
  LightAttrib:identity
:display:gsg:glgsg(spam): glClear(GL_DEPTH_BUFFER_BIT|)
:display:gsg:glgsg(spam): glMatrixMode(GL_PROJECTION): [ 1 0 0 0 ] [ 0 1 0 0 ] [ 0 0 -0.001 0 ] [ 0 0 0 1 ]
:display(spam): end_frame(render): osxGraphicsWindow window1 0x842e04

sorry, i couldn’t figure out how to add emphasis inside a code block. obviously the most important lines are the glMatrixMode() calls, but i didn’t know if there’d be more in there that was important, so i left everything.

i haven’t found the old projection matrix lying around in the scene graph, but maybe i’m not looking in the right place?

oh! but i did find Lens.setFilmOffset(). if i manually set all the other parameters on the camera from the decomposed camera matrix, i ought to be able to use this function to correct for the offset issue i mentioned in the OP.
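
something like this, i think (a sketch, assuming the intrinsic matrix K is normalized so K[2][2] == 1 and uses corner-origin pixel coordinates; the signs and the y direction probably need flipping to match panda’s film conventions):

from panda3d.core import PerspectiveLens

def lens_from_intrinsics(K, w, h):
    # K: 3x3 intrinsics from the decomposition; w, h: projector resolution in pixels
    lens = PerspectiveLens()
    lens.setFilmSize(w, h)                           # film units == pixels
    lens.setFocalLength(K[0][0])                     # fx, in the same pixel units
    cx, cy = K[0][2], K[1][2]                        # principal point
    lens.setFilmOffset(cx - w / 2.0, cy - h / 2.0)   # shift the film center off the axis
    return lens

base.cam.node().setLens(lens_from_intrinsics(K, 1024, 768))

(non-square pixels, i.e. fx != fy, would additionally need the film size ratio adjusted.)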

jeremy

Yes, of course; the coordinate system transform multiplies out by the time you get to clip coordinates. But the point is that if you are working in eye coordinates, which doesn’t involve the entire production, then you still have a coordinate system transform.

Is myProj designed with a z-up coordinate system or a y-up coordinate system in mind? Panda’s is a z-up projection matrix, in the default configuration. If yours is a y-up projection matrix, it will obviously transform the z coordinates differently.

Each GL_PROJECTION call represents the beginning of a DisplayRegion. Since I see one call to GL_PROJECTION (presumably your matrix) immediately followed by a different call to GL_PROJECTION, it suggests to me that you have added your lens to a new DisplayRegion, while leaving the original one still in the scene. Furthermore, there doesn’t appear to be any geometry in your new DisplayRegion (nothing is drawn between these two GL_PROJECTION calls).

David

hmm… i am working in view space, since i’m manipulating the lens’s projection matrix, but the matrix that’s replacing said projection is the model-view-projection matrix. everything else in the equation (the geometry’s transform, the camera’s transform, etc.) is identity, so i’d expect to be able to set the projection matrix to something that’s been verified to transform the points appropriately and call it a day. for some reason it’s not working.

it shouldn’t really matter if i’m working in view space or model space, etc if all the other transforms are identity, right?

myProj is derived from a y-up righthanded coordinate system, so i expect that, yes, it is also y-up righthanded. the reason the z-values are not as expected is that the camera matrix i derived is 3x4, not 4x4. to make this fit in the panda/opengl world, i inserted e4 between the second and third rows of the matrix before transposing (see code snip in OP), so the transform takes w to z. i shouldn’t have to care about this, because there’s only one object being rendered in this scenario – the virtual model of the physical display surface.

hmm… this is something i’ve been fearing. i’m not sure what the best practice is for alternating between cameras in the same display region of the same window. is the best thing to render each camera’s view to texture and just swap textures on a render2d card, or is there a nice way to do this via setActive(), setScene(), setCamera(), etc? the manual seems to deal mostly with permanently adding views rather than alternating between them.

this worked, btw. so “everything’s working”, but i’m still interested in figuring out what i’m doing wrong and why i can’t just setUserMat to resolve this.

is it a problem for the camera and the geometry to be in the same place (the origin) if the projection matrix performs a translation to make the geometry visible? in what reference frame is the z-coordinate of the clipping space point computed?

would it be helpful if i posted some numbers?

jeremy

Everything except for the coordinate-system transform. Have you tried it with setting “coordinate-system yup-right”?

I admit I didn’t quite follow your math in the OP. I’ll take your word for it that you know what you’re doing, and that this manipulation of the matrix is valid and produces a valid 4x4 matrix. I’ve never seen a 3x4 matrix used as a projection matrix before, and I don’t understand how it would work. Normally, every cell is meaningful in a 4x4 projection matrix. But the hardcore matrix math has never been my strongest skill.

Ugh, don’t render to texture, that’s just silly. If you want to be able to keep both cameras available and switch back and forth at runtime, just use one DisplayRegion and call setCamera() on it when you want to switch. Or, use two DisplayRegions, and keep one of them inactive at all times with setActive(). Don’t use setScene(), that’s a largely deprecated function, and doesn’t do anything useful anyway except cause confusion.
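
For instance (calCam being a hypothetical NodePath for your calibrated camera):

dr = base.camNode.getDisplayRegion(0)   # the DisplayRegion the default camera renders into
dr.setCamera(calCam)                    # switch to the calibrated camera...
dr.setCamera(base.cam)                  # ...and back again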

Sounds OK, but you might want to disable culling temporarily, just to prove that it’s not causing a problem. You can do this with “view-frustum-cull 0” in your Config.prc.

Not sure I understand this question. The clip space vertices are the result of the modelview matrix and projection matrix, applied to the object vertices. The near plane is at 0 in clip space, and the far plane is at 1.

David

but the two coordinate-system transforms together are identity, right? am i skipping something important by just combining all the matrices together?

i have not changed the coordinate system in the .prc file because of your warnings re: DirectGUI, etc. i lied a little when i said that the virtual model was the only thing being rendered. there are also gui widgets, etc being rendered on another screen. can different outputs have different coordinate spaces? i assume the answer is yes, but it requires setting the coordinate system in code rather than the .prc file.

the 4x4 projection matrix generates a tuple (x,y,z,w) representing a 3d point in homogeneous coordinates. after the perspective divide, we have (x/w,y/w,z/w,1) and hereafter we ignore w. since the window is 2d, we only really need x/w and y/w. we only use z/w for the depth test. my 3x4 matrix produces a 2d homogeneous coordinate (x,y,w), which is everything we need to draw pixels in the window.

yes! ok, i think this is what i’ve been trying to ask for by saying “can’t i just turn off the depth test in order to not care about the generated z-coordinates?” i will try it.

are you saying that any vertex v with v.z<0 or v.z>1 in clip space doesn’t get rendered? to me that says the clipping z-coordinate is relative to the eye’s reference frame (which is not to say the panda camera’s reference frame).

jeremy

I don’t know. Maybe not, but can you be completely sure? As a rule of thumb, I try not to rule out any possibility without testing it first, especially when it’s easy to test. Can’t you set the coordinate system to y-up just to see what happens, even if it temporarily breaks your gui?

OK. And you are confident that your 4x4 matrix produces (x,y,z,w), where x, y, and w are the same as that produced by your 3x4 matrix?

Also, here is where the coordinate system transform can bite you: Panda will transform the (x, y, z, w) your matrix produces into (x, z, -y, w). Panda automatically produces a z-up projection matrix when it is in z-up mode, and a y-up projection matrix when it is in y-up mode, so that the resulting clip space (after the coordinate system transform) is always y-up. But if you are running Panda in z-up mode and feeding it a y-up projection matrix, your resulting clip space will be off by 90 degrees.
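
You can verify that mapping directly:

from panda3d.core import Mat4, Vec4, CSZupRight, CSYupRight

S = Mat4.convertMat(CSZupRight, CSYupRight)
print(S.xform(Vec4(1, 2, 3, 1)))   # prints (1, 3, -2, 1): (x, y, z, w) -> (x, z, -y, w)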

This is not the depth test. This is view-frustum culling. There are three different parts of the system that can cause geometry not to be rendered:

(1) The depth test. This is the comparison of each pixel’s z value (in clip space) with the z-buffer. You can easily turn this off with render.setDepthTest(False).

(2) View-frustum culling. This is Panda’s omission of geometry that appears to be completely outside of the view frustum. Such geometry is not transformed or sent to the graphics card for rendering. If Panda gets an incorrect idea of the view frustum’s bounding volume for some reason, this can incorrectly cull geometry that should be visible.

(3) The clip planes. There are six of these, around the unit cube in clip space. Normally, the side planes are not an issue, because they correspond to the edges of the screen anyway, but the near and the far clip planes can cause you grief. You can’t disable this clipping, but you can design your projection transform so that the near and far planes are arbitrarily far apart.
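
To rule each of these in or out quickly (the config line can also go in Config.prc; loadPrcFileData must run before ShowBase starts):

from panda3d.core import loadPrcFileData
loadPrcFileData('', 'view-frustum-cull 0')   # (2) disable view-frustum culling

# then, at runtime:
render.setDepthTest(False)    # (1) disable the depth test
render.setDepthWrite(False)   # and stop writing depth, for good measure
# (3) cannot be disabled; push near and far apart in your projection transform instead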

Yes, this is standard OpenGL behavior. Strictly, any pixel not in the range 0 < v.z < 1 doesn’t get rendered. Since the clip space is a linear transform from the camera’s reference frame, the clipping z-coordinate is relative both to the eye’s reference frame and to the camera’s reference frame.

David

no, but i’m confident that x/w and y/w are the same as those produced by the 3x4 matrix.

i’ve gotten this to ‘work’ now, but it’s a very ad hoc method that got me here. first i decided not to alter the z coordinate at all, via lensMat.setCol(2, Vec4(0, 0, 1, 0)), but nothing rendered. i’m assuming everything was outside the clipping volume.

next i used lensMat.setCell(2, 2, lensMat.getCell(2, 3)), which gave good results up to a point. it appeared to me that vertices with y-coordinate beyond some bound were being clipped out.

so finally, i chose a (semi-arbitrary) small number, which seems to have solved everything: lensMat.setCell(2, 2, -0.05).

obviously this merits a more rigorous analysis, but i don’t have time to investigate just now. hopefully this will be of some help.

jeremy