I have a C++ program that has my coordinates in an OpenGL ModelView matrix and I’m streaming those values over a socket to Panda (Python) in the form of an array. Index values 0 to 3 are row 1, 4 to 7 are row 2, etc.
I’m doing all my Panda programming in Python:
A) I don’t know how to create the 4x4 LMatrix4f type from my array. Is this right?
C) I’m not sure of the appropriate transform to bring this into the Panda coordinate system. All I want to do is place a box onscreen then position and rotate it based on the matrix values.
You’ll just have to get your coordinate frames right, which can be tricky.
If your application is using the default modelview coordinates, it’s probably right-handed Y-up. Panda is right-handed Z-up.
Keep in mind that in OpenGL, the modelview matrix doesn’t really “move” the model; it more or less moves the camera to make the model appear like it’s moved. If that sentence is confusing, don’t worry about it.
Basically there are three ways to take an OpenGL matrix and shove it inside Panda. The way you’re doing it seems syntactically correct.
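For reference, here is the array-to-matrix step as a minimal plain-Python sketch (a stand-in, since the actual Panda3D calls are noted in comments): indices 0–3 are row 1, 4–7 are row 2, and so on.

```python
# Sketch (plain-Python stand-in): unpack a flat 16-float array, where
# indices 0-3 are row 1, 4-7 are row 2, etc., into four rows.  In Panda3D
# the same data should go straight in, e.g. myMatrix = Mat4(*data) or
# myMatrix.set(*data), since Mat4's 16-value form also fills row by row.
def rows_from_flat(data):
    assert len(data) == 16
    return [data[i * 4:(i + 1) * 4] for i in range(4)]

rows = rows_from_flat(list(range(16)))
# rows[1] now holds indices 4..7, i.e. row 2 of the matrix
```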
First Method (easiest, least flexible):
If you want to just shove the modelview directly into the system, the simplest way is to put it into the view matrix of the lens node of your camera:
Lens.setViewMat()
If you’re using the default camera, this is
base.lens.setViewMat()
However, please note this will move the camera NOT the object. This will work fine as long as you are ONLY displaying that object.
Second Method (easier):
You can also use NodePath.setMat() with the inverse of the modelview matrix, and this will also work. Panda can invert a matrix for you.
Third Method:
You can also use a shader to do this. Pass the matrix in using the new shader inputs system and construct the proper model-view-projection matrix in your vertex shader.
All of these methods may require you to pitch the camera -90 degrees so that you switch coordinate frames.
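To make the frame change concrete, here is a minimal sketch of the mapping that the -90 pitch accomplishes. The point mapping itself follows from the axis definitions; the exact Panda3D call spelling in the comment is an assumption.

```python
# Sketch of the frame change a -90 degree camera pitch performs:
# right-handed Y-up (OpenGL default) -> right-handed Z-up (Panda default).
# A Y-up point (x, y, z) lands at (x, -z, y) in Z-up: "up" (0,1,0)
# becomes (0,0,1), and "forward" (0,0,-1) becomes (0,1,0).
def yup_to_zup(p):
    x, y, z = p
    return (x, -z, y)

# Panda can build the equivalent 4x4 for you; I believe the call is
# Mat4.convertMat(CSYupRight, CSZupRight) (spelling is an assumption).
```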
In my application I’m going to end up with two separate objects that need to interact with each other and determine collisions. There will be one stationary object and then the object that we’re talking about which has the ModelView coordinates.
You stated three suggestions above: lens, nodepath, and shader. Which one of these will be the best for my application?
You should probably invert the transform and multiply it with the inverse of the model view matrix from Panda3D, and set that as the object’s local transformation.
Note that you can set Panda3D’s coordinate system to be the same as OpenGL’s, this will make your life easier. You can do so using the “coordinate-system” configuration variable.
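For example, the variable would go in your Config.prc; the exact value spelling here is an assumption (Panda accepts a few variants such as y-up-right / yup-right):

```
# Config.prc -- switch Panda to OpenGL's default frame
coordinate-system y-up-right
```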
rdb, can you fill in the blanks here or correct where I’m wrong? I’m not sure how to get the “model view matrix from Panda3D” to invert it.
myMatrix = Mat4()
myMatrix.set(data[0], data[1], ..., data[15])
myMatrix.invertInPlace()
modelViewMatrix = ? #how do i get the model view?
modelViewMatrix.invertInPlace()
myMatrix = myMatrix * modelViewMatrix
myNode.setMat(myMatrix)
Wait, it’s probably that, but you need to invert the matrix first - because you’re setting the transformation on the object in the camera’s coordinate space, not the camera’s transformation in the world’s coordinate space.
Note that this assumes your coordinate system is set to y-up-right, otherwise you need to convert first (which is easy, just a multiplication by CoordinateSystem::convert_mat(a, b) or so.)
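For intuition about the inversion step, here is a plain-Python sketch of the common rigid case (my own illustration, not Panda code); in practice Panda’s Mat4.invertInPlace() handles the general case.

```python
# Sketch: inverting a rigid transform (rotation + translation, no scale),
# stored Panda-style as four rows with the translation in row 3
# (row-vector convention, v' = v * M).  The inverse is [R^T, -t * R^T].
def invert_rigid(m):
    R_t = [[m[j][i] for j in range(3)] for i in range(3)]  # transpose of R
    t = m[3][:3]
    new_t = [-sum(t[k] * R_t[k][i] for k in range(3)) for i in range(3)]
    return [R_t[0] + [0.0], R_t[1] + [0.0], R_t[2] + [0.0], new_t + [1.0]]

def apply_point(m, p):
    # Transform a point with a row-vector 4x4: v' = v * M.
    v = list(p) + [1.0]
    return tuple(sum(v[k] * m[k][i] for k in range(4)) for i in range(3))
```

Round-tripping a point through the transform and its inverse returns the original point.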
I’m close to figuring this out, but my axes are all switched. (My data is coming in from an external ARToolKit library.)
myMatrix = Mat4()
myMatrix.set(data[0], data[1], ..., data[15])
myMatrix.invertInPlace()
C = Mat4.convertMat(CSYupRight, CSZupRight)
myMatrix = C * myMatrix
myNode.setMat(base.cam, myMatrix)
Is it correct to go from Y-up to Z-up, and is multiplying it like this correct?
Also, when I use setMat(base.cam, myMatrix) my model is so zoomed in I can’t see it. I try to zoom out with the mouse, but since I’m continually calling setMat from my ARToolKit feed, it keeps zooming back in.
And lastly, I’m not getting the position and scale coordinates from my matrix in my above code.
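On reading position and scale out of the matrix, here is a minimal plain-Python sketch (my own illustration; assumes the matrix has no shear). In Panda itself you could read the translation row directly (e.g. mat.getRow3(3)) or wrap the matrix in a TransformState to get a full decomposition.

```python
import math

# Sketch: read position and per-axis scale from a Panda-style 4x4
# (row vectors, translation in row 3).  Assumes no shear: scale is then
# just the length of each of the first three rows.
def pos_and_scale(m):
    pos = tuple(m[3][:3])
    scale = tuple(math.sqrt(sum(c * c for c in row[:3])) for row in m[:3])
    return pos, scale
```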
Thanks rdb. I’ve looked at that code previously and can’t make heads or tails of it.
Also, I’m using the professional / licensed version of ARToolKit and need some of the extra features in their code that is not in Panda’s implementation.
It looks like the code is swapping the Y & Z columns in the matrix; I’ve tried that, but it didn’t work for me.
For some reason no matter how I switch these around, only 1 axis operates correctly.
By switching Y & Z and then negating Z I’m able to rotate the object around Y correctly, but the others are wrong. When I tried to swap X & Z I get the same result.
Strange.
Just so we’re on the same page here, the matrix in both OpenGL and Panda is (where t is translation)?
After Y&Z are swapped, the Panda / AR code sends the matrix to decompose_matrix and then unwind_zup_rotation_new_hpr, do I need to mimic this code?
static void
unwind_zup_rotation_new_hpr(FLOATNAME(LMatrix3) &mat, FLOATNAME(LVecBase3) &hpr) {
  TAU_PROFILE("void unwind_zup_rotation_new_hpr(LMatrix3 &, LVecBase3 &)", " ", TAU_USER);
  typedef FLOATNAME(LMatrix3) Matrix;

  // Extract the axes from the matrix.
  FLOATNAME(LVector3) x, y, z;
  mat.get_row(x, 0);
  mat.get_row(y, 1);
  mat.get_row(z, 2);

  // Project Y into the XY plane.
  FLOATNAME(LVector2) xy(y[0], y[1]);
  xy = normalize(xy);

  // Compute the rotation about the +Z (up) axis.  This is yaw, or
  // "heading".
  FLOATTYPE heading = -rad_2_deg((FLOATTYPE)catan2(xy[0], xy[1]));

  // Unwind the heading, and continue.
  Matrix rot_z;
  rot_z.set_rotate_mat_normaxis(-heading, FLOATNAME(LVector3)(0.0f, 0.0f, 1.0f),
                                CS_zup_right);
  x = x * rot_z;
  y = y * rot_z;
  z = z * rot_z;

  // Project the rotated Y into the YZ plane.
  FLOATNAME(LVector2) yz(y[1], y[2]);
  yz = normalize(yz);

  // Compute the rotation about the +X (right) axis.  This is pitch.
  FLOATTYPE pitch = rad_2_deg((FLOATTYPE)catan2(yz[1], yz[0]));

  // Unwind the pitch.
  Matrix rot_x;
  rot_x.set_rotate_mat_normaxis(-pitch, FLOATNAME(LVector3)(1.0f, 0.0f, 0.0f),
                                CS_zup_right);
  x = x * rot_x;
  y = y * rot_x;
  z = z * rot_x;

  // Project X into the XZ plane.
  FLOATNAME(LVector2) xz(x[0], x[2]);
  xz = normalize(xz);

  // Compute the rotation about the -Y (back) axis.  This is roll.
  FLOATTYPE roll = -rad_2_deg((FLOATTYPE)catan2(xz[1], xz[0]));

  // Unwind the roll from the axes, and continue.
  Matrix rot_y;
  rot_y.set_rotate_mat_normaxis(-roll, FLOATNAME(LVector3)(0.0f, 1.0f, 0.0f),
                                CS_zup_right);
  x = x * rot_y;
  y = y * rot_y;
  z = z * rot_y;

  // Reset the matrix to reflect the unwinding.
  mat.set_row(0, x);
  mat.set_row(1, y);
  mat.set_row(2, z);

  // Return the three rotation components.
  hpr[0] = heading;
  hpr[1] = pitch;
  hpr[2] = roll;
}
Mathematically that unwind code backs out the necessary rotations from the matrix given.
I’m not familiar with the Pro ARToolkit but if I recall the regular version, it gives a modelview and projection matrix along with the background plate.
Since all the math is given to you with both matrices, the easiest way to do this without getting into any hairy matrix math is probably to use a shader. That way, there is NO axis rotation confusion.
Projection and view matrices each have a coordinate system, and all of these must be rectified for it to work the way you’re trying it. Take it from someone who has done a lot of this type of work: it’s really tricky getting everything right.
If you use shaders it will just become a straight-up multiply, and I believe (I may be wrong on this) you won’t even have to convert column-major to row-major if you use PTALMatrix4f in the new snapshot builds.
A simple vertex shader will really help you out here.