# mesh matrices and textures

Right, sorry, I keep forgetting that you’re constructing this animation using low-level operations directly, avoiding all of the egg structures. This makes it much less intuitive. The NodePath matrix is composed with the joint matrix. But I wasn’t referring to the NodePath matrix, which is presumably remaining constant when you play the animation. I meant that the original matrix that is assigned to the joint (the rest matrix) isn’t matching the matrix that is assigned to the joint when you play the animation (the animation matrix).

I think you are misunderstanding what yToZUpMat() does. It doesn’t modify the matrix you call it on. It only returns a new matrix, a 90-degree rotation, that you can multiply by any given matrix to make the intended conversion.
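To illustrate with plain Python (not actual Panda code, just the same idea under Panda's row-vector convention, where a point transforms as ``p' = p * M``):

```python
# Plain-Python sketch of what a static helper like Mat4.yToZUpMat()
# conceptually does: it builds and returns a fixed 90-degree rotation.
# It never modifies the matrix you happen to call it on.

def y_to_z_up_mat():
    # Maps y-up coordinates (x, y, z) to z-up coordinates (x, -z, y).
    return [[1, 0, 0, 0],
            [0, 0, 1, 0],
            [0, -1, 0, 0],
            [0, 0, 0, 1]]

def mat_mul(a, b):
    # 4x4 row-major multiply (row vectors compose as v' = v * M).
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

mat = [[2, 0, 0, 0],   # some existing transform (here: uniform scale 2)
       [0, 2, 0, 0],
       [0, 0, 2, 0],
       [0, 0, 0, 1]]

y_to_z_up_mat()                             # returns the rotation; `mat` is unchanged
converted = mat_mul(mat, y_to_z_up_mat())   # this multiply is the actual conversion
```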

David

Sorry, can you explain this in other words? I think I know what you mean, but I'm not 100% sure what you mean by "composed."


Yes, it doesn’t… should it?

Actually, I thought it applied the rotation and returned the new matrix:

``mat = mat.yToZUpMat()``

Are you saying this is the right way of doing it?

``mat = mat * mat.yToZUpMat()``

It doesn’t seem like it.
Hm, it wasn’t like this in the Blender API. I think the function there returned a new matrix with the rotation applied.

I’ll summarize the issue:

1. The eyeballs have mesh matrices. If those are not applied, the eyes appear at (0,0,0), which is near his feet.
2. If I assign the mesh matrices, the eyes are positioned correctly in rest pose (image 1).
3. If I assign it an animation (AnimationNode), then all the vertices except the eyes are in the correct place. So I don’t think I have issues with the joint matrices.
My guess is that Panda doesn’t assign the mesh/joint matrices in the order I need (NodePath (Geom), then joint) or something.

Compose is another word for multiply.

If you don’t expect the eyes to move (very much) when you apply the animation, then you shouldn’t expect the joint matrix to change (very much) when you do so either. If the joint matrix is changing greatly (particularly if the bottom row, the position vector, changes greatly) when you apply the animation, then the eyes will leap off the head.
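For example (plain Python, just to illustrate the convention; since points transform as ``p' = p * M``, the bottom row of the 4x4 matrix is the position):

```python
# Sketch of why a changed bottom row makes the eyes leap: the fourth
# row of a 4x4 matrix, in the row-vector convention, is the translation.

def xform_point(p, m):
    x, y, z = p
    v = [x, y, z, 1]                       # homogeneous point
    return [sum(v[k] * m[k][j] for k in range(4)) for j in range(4)][:3]

rest = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 5, 1]]                      # joint rests 5 units up (z = 5)

anim = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 1]]                      # animation puts the joint at the origin

eye = (0, 0, 0)                            # eye vertex, in joint space
print(xform_point(eye, rest))              # [0, 0, 5]: on the head
print(xform_point(eye, anim))              # [0, 0, 0]: leaps down to the feet
```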

That’s not what it does.

That’s the right way to apply a Y-to-Z-up conversion to a matrix, in general. I can’t say whether it’s the right thing to do in your particular context, because I don’t know anything about your context. You could also try multiplying in the other order, e.g. ``mat = Mat4.yToZUpMat() * mat``.
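For what it’s worth, the two orders genuinely produce different transforms, since matrix multiplication is not commutative; a plain-Python sketch:

```python
# The two suggested multiplication orders applied to a matrix that
# carries a translation: the results differ.

def mat_mul(a, b):
    # 4x4 row-major multiply (row vectors: v' = v * M).
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

C = [[1, 0, 0, 0],      # y-up -> z-up rotation: (x, y, z) -> (x, -z, y)
     [0, 0, 1, 0],
     [0, -1, 0, 0],
     [0, 0, 0, 1]]

T = [[1, 0, 0, 0],      # translation by (0, 3, 0)
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 3, 0, 1]]

print(mat_mul(T, C)[3])   # [0, 0, 3, 1]: the offset gets rotated too
print(mat_mul(C, T)[3])   # [0, 3, 0, 1]: the offset stays as-is
```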

Sure, every API is different.

David

Yes, it doesn’t move too much, only as much as the head. It’s just not in the correct place, as you can see in the image.

Okay.
Please tell me what else I should mention.
EDIT: Not much luck with multiplying in opposite order.

But it is in the correct place before you bind the animation, so that means your initial joint matrix is correct. Because it moves to the incorrect place after you bind the animation, it means that your animation joint matrix is incorrect. Compare these two matrices to get some clue where it’s going wrong.
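For example, a quick throwaway diff (plain Python lists here, standing in for the two matrices you already have; with real Mat4s you could read individual cells via ``getCell(row, col)``):

```python
# Print the cell-by-cell difference between the rest matrix and the
# animation matrix so a bad row jumps out at a glance.  restMat and
# animMat below are made-up example values, not your actual data.

def mat_diff(a, b):
    return [[round(a[i][j] - b[i][j], 4) for j in range(4)] for i in range(4)]

restMat = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 5, 1]]
animMat = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

for row in mat_diff(restMat, animMat):
    print(row)
# A nonzero bottom row means the two disagree about the joint's position.
```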

This is a bit like asking me to tell you where the obscure bug in your program is. I don’t know; transforms are complex. You have to debug problems with transforms the same way you debug problems with code: try to break it down to smaller and smaller pieces until you can isolate the part that’s doing the wrong thing. For this to be successful, of course, you do have to understand what each part should be doing; but you’ve said that you have a weak understanding of transform matrices, so I’m not sure how much more help I can be. My suggestion is either to study this subject further and improve your understanding of transform matrices, or just punt and (as I suggested earlier) simply put a single 90-degree rotation on the top node, and do nothing else.

David

The eyeball joint matrices work the same way as the ones for the rest of the body joints. Those are correct. The difference here is that the eyeballs also have matrices on their NodePath.

No, I just mean is there any info I should give which I haven’t yet.

Those are two different problems I have. Sorry, I forgot to mention that the issue when converting from y-up to z-up was fixed once you explained how yToZUpMat() worked:

``mat = mat * mat.yToZUpMat()``

The pillow on the floor in the image is correct now when converting matrices to the z-up system.

The only problem I have now is getting mesh matrices to work with joint matrices correctly (the eyeball example).

OK. But I also understand you to say that the eyes were in the correct position until you applied the animation. If so, that means that all the transforms were correct until you applied the animation, at which point at least one of the transforms was incorrect. The only thing that changes when you apply animation is the joint matrix, and this receives the transform matrix from the animation. Thus, it follows that the animation supplied the wrong transform matrix, replacing the originally correct matrix on the joint.

If you had told me that it was wrong from the beginning, even before you applied the animation, then I would come to a different conclusion. But the fact that applying the animation changes it from correct to incorrect leads me to point the finger at the animation matrix and nowhere else.

David

I see what you mean. Still, it would be weird if, of all 95 joints, only those two had wrong animation matrices: they are read with the same function, and, most importantly, the eyes appear normal in the original deprecated game we are porting, which uses the same file (the original is closed-source, by the way, so we have to reinvent everything).

I will try to print out the actual matrices and try to spot any problem, but I have the feeling there’s something more complicated going on here, specifically in the way the original game and Panda multiply mesh and joint matrices.

BTW, do you also convert joint matrices from y-up to z-up? It seems like rotating the mesh matrices does the trick and rotating the joint matrices the same way screws up the model when animated, kind of like when you forget to invert your matrices.

I certainly believe the problem is related to the matrix on the NodePath; but I’m just suggesting a place to start looking. Once you discover what, precisely, is different between the animation matrix and the original joint matrix, it may become more clear in what way things are going wrong. For instance, perhaps your animation tables are meant to replace the original NodePath matrix, instead of only the joint matrix. But still you’ll have to understand precisely what’s going wrong before you can understand how to fix it.

Who are you referring to with “you” here? Panda does no implicit coordinate system conversion, except when loading egg files; but you’re avoiding the egg file path. So all the conversion is entirely up to you. You could choose to convert joint matrices, but then you’d have to convert all of the animation matrices to match, and that gets complicated–I’ve already advised just putting the whole model under a single rotation and calling it done.
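A sketch of that single-rotation approach (plain Python; in Panda the root rotation would be something like ``topNode.setMat(Mat4.yToZUpMat())``, but the arithmetic below is what matters):

```python
# Leave every mesh/joint matrix in y-up, and compose the y-up -> z-up
# rotation exactly once, at the root of the model.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def xform_point(p, m):
    v = [p[0], p[1], p[2], 1]
    return [sum(v[k] * m[k][j] for k in range(4)) for j in range(4)][:3]

C = [[1, 0, 0, 0],                       # y-up -> z-up: (x, y, z) -> (x, -z, y)
     [0, 0, 1, 0],
     [0, -1, 0, 0],
     [0, 0, 0, 1]]

joint_yup = [[1, 0, 0, 0],               # untouched y-up joint matrix:
             [0, 1, 0, 0],               # head sits at y = 5 ("up" in y-up)
             [0, 0, 1, 0],
             [0, 5, 0, 1]]

p = (1, 0, 0)                            # a vertex in y-up joint space
world = xform_point(p, mat_mul(joint_yup, C))   # root rotation on the outside
print(world)                             # [1, 0, 5]: "up" has become +z
```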

David

I tried ignoring the NodePath matrix completely. If the animation tables (matrices) were meant to replace it, then you mean doing this would probably put the eyes in their correct place when binding an animation, right?

I mean me, or anyone who would want to do this. When you (someone) convert from one coordinate system to another, do you convert both the mesh matrices and the joint matrices? Because if you convert the mesh matrices, the Geoms are already rotated; as the mesh matrices are multiplied by the joint matrices, rotating the joint matrices as well would be a double rotation.
It also seems like rotating the joint matrices the same way “destroys” the animation, with a similar effect to when you forget to invert your joint matrices.
Bear with me, I’m trying to get this working and these manual rotations might have something to do with it, I know you have already suggested to just rotate the NodePath.
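To check my double-rotation reasoning numerically (plain Python; C is the y-up to z-up rotation, and I’m assuming vertices get converted as ``v * C``, so the full conversion of a matrix in the middle of the chain would be the conjugation ``C⁻¹ * M * C``, not a plain right-multiply):

```python
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

C     = [[1, 0, 0, 0], [0, 0, 1, 0], [0, -1, 0, 0], [0, 0, 0, 1]]   # y-up -> z-up
C_inv = [[1, 0, 0, 0], [0, 0, -1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]   # z-up -> y-up

M = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 5, 0, 1]]        # a y-up joint matrix

proper = mat_mul(C_inv, mat_mul(M, C))   # conjugation: rotations cancel inside the chain
naive  = mat_mul(M, C)                   # plain right-multiply: leftover rotation baked in

print(proper[3])   # [0, 0, 5, 1]: translation converted, orientation untouched
print(proper[1])   # [0, 1, 0, 0]: the 3x3 part is still the identity
print(naive[1])    # [0, 0, 1, 0]: extra 90-degree twist, the "double rotation"
```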