Tricky bone rotation issues

We have a children’s interactive, part of a larger project, that is meant to mirror a person’s motion on a variety of models. Major caveat: this was originally coded by a subcontractor, but they couldn’t get the bone rotations worked out in time, and although I am a programmer by trade, I’m a database guy, and this is all a bit beyond my ken. I’ve stripped out hundreds of lines of code, rewritten much of the rest, and gotten it to a somewhat workable state, but it feels like someone with more experience than me could probably overcome the remaining issue with ease.

The basic pipeline starts with x, y, z coords coming in from mediapipe. They are just screen coords relative to the mocap image, and they are translated into Panda coords using rdb’s amazingly handy camera extrude trick. So far, so good. We’re only trying to move 2 arm bones and 2 leg bones per side.
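In case it’s useful, the extrude mapping is roughly along these lines (heavily simplified; the fixed depth here is just a stand-in for what the real code does):

    from panda3d.core import Point2, Point3

    def mediapipe_to_panda(base, mp_x, mp_y, depth=5.0):
        # MediaPipe gives normalized image coords in 0..1 with the origin at the
        # top-left; the lens wants -1..1 film coords with +y up, so remap and flip.
        film_x = mp_x * 2.0 - 1.0
        film_y = 1.0 - mp_y * 2.0

        # Extrude the 2D film point into a ray in camera space...
        near_point = Point3()
        far_point = Point3()
        base.camLens.extrude(Point2(film_x, film_y), near_point, far_point)

        # ...then pick a point a fixed distance along that ray and express it
        # in world (render) space.
        cam_space = near_point + (far_point - near_point).normalized() * depth
        return base.render.getRelativePoint(base.cam, cam_space)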

Then, each frame, I create a pair of dummy nodes for the 2 keypoints that should track the head and tail of a given bone. For example, left_forearm is calculated from the relative offsets between left_elbow and left_wrist by having the left_elbow dummy look at the left_wrist dummy (this helps avoid issues around body position relative to the mocap camera, etc.). I can then apply that rotation to the controlled joint, and it will happily track just fine, UNTIL it hits a perpendicular, at which point the lookAt rotation jumps from, say, (h=90, p=89) to (h=-90, p=89) rather than (h=90, p=91). As an example, if a user holds their arm in a classic bicep flex, then as the angle between their forearm and upper arm closes from >90 to <90 degrees, the whole forearm contorts under the upper arm.

            # Per bone, inside the per-frame update:
            joint = self.skeleton_joints.get(bone_name)

            # Place the 'base' dummy at the bone's head keypoint (e.g. left_elbow).
            pos_base = raw_keypoints[raw_pair['base']]
            dummy = self.dummy_nodes[bone_name]
            dummy_base = dummy['base']
            dummy_base.setPos(pos_base)

            # Place the 'target' dummy at the bone's tail keypoint (e.g. left_wrist)
            # and aim the base dummy at it.
            pos_target = raw_keypoints[raw_pair['target']]
            dummy_target = dummy['target']
            dummy_target.setPos(pos_target)
            dummy_base.lookAt(dummy_target)

            # Copy the resulting rotation onto the controlled joint.
            joint.setQuat(dummy_base.getQuat())

I understand that in some sense those are equivalent rotations, but the resulting motion of the actual bone is not correct. If I hard-code a rotation of (h=90, p=91), it looks great. I’m currently trying to force this by sniffing the resulting rotation, but it’s a little janky, and not always easy to parameterize post hoc for each bone.

I’ve spent a ton of time crawling this incredibly civilized and useful forum and have tried everything that I could find to correct this issue; there was a ton of useful info on dealing with absolute vs. relative rotations for parented joints, etc. I get similar results whether applying the rotation via hpr or quats, and no amount of late-night monkeying around in Panda or Blender has been able to resolve this issue. At this point, we’re a week late on delivering this particular interactive, and other projects are piling up while I’m still fighting a losing battle on this bone rotation.

If anyone more experienced can offer some guidance or a silver bullet that is obvious to them, but not to me, I would be hugely appreciative. This is also paid client work, and I would be much happier to pay a bounty for a solution than to keep banging my head on this unfamiliar wall :slight_smile: Any code, model files, or screenshots totally available if it helps. Thank you!

Greetings, and welcome to the forum! I hope that you find your time here to be positive! :slight_smile:

Hmm… This does look like a tricky issue!

What happens if you apply the pitch first (i.e. via a call to “setP” instead of “setHpr”), and then separately apply the heading?
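That is, something along these lines (untested):

    # Take the lookAt result apart and apply its components individually:
    hpr = dummy_base.getHpr()
    joint.setP(hpr[1])  # pitch first
    joint.setH(hpr[0])  # then heading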

Otherwise, perhaps it might help to, on each update, compare the new rotation with the previous rotation, and then if they’re found to be greatly different, to adjust the new rotation mathematically?

(After all, that’s arguably the problem: I imagine that the orientation produced by “lookAt” is technically accurate, but inconsistent with the previous frame’s orientation, and with human anatomy.)
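Something like the following, perhaps, as a starting point? (Just a sketch; the threshold is a guess, and it presumes that you keep the previous frame’s quaternion around for each joint.)

    import math

    def reject_flip(prev_quat, new_quat, max_degrees=90.0):
        # q and -q describe the same orientation, so compare via the absolute dot
        # product; otherwise a harmless sign flip would register as a huge jump.
        dot = min(1.0, abs(new_quat.dot(prev_quat)))
        angle = math.degrees(2.0 * math.acos(dot))
        # If the change is implausibly large for a single frame, keep the previous
        # rotation (or adjust the new one in some cleverer way) instead of snapping.
        return prev_quat if angle > max_degrees else new_quat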

Interesting thought! Just gave it a shot, and no luck. Comparing previous rotations was something I was thinking about trying. We already store the starting rotations so that we can reset when there is no one in frame of the mocap camera; I wonder if I can tell from those initial rotations where the crossover points are so that I can catch them later, or if I need to buffer up recent rotations to catch the flip that way. I’ll experiment some today focusing on one or two joints to see how that goes. :slight_smile:

I’m very appreciative of the suggestions!!

What does the roll look like in these situations?
Are these supposed to behave as ball joints or hinge joints?
Can you give me an example of some input positions for the nodes so I can play with the issue on my end?

Some general suggestions without knowing more: lookAt takes a separate up-vector argument, which defaults to (0, 0, 1) and can be used to disambiguate certain cases; you could perhaps use that to preserve the existing up vector. When the vector between the joints aligns with the given up vector, the result will also be ambiguous. There is also headsUp, which is like lookAt but has a strong preference for maintaining the existing up vector.
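For example, something along these lines (untested, off the top of my head):

    from panda3d.core import Point3, Vec3

    # Reuse the dummy's current up axis (expressed in its parent's space) as the
    # up hint, rather than letting lookAt fall back to the default (0, 0, 1):
    up_hint = dummy_base.getParent().getRelativeVector(dummy_base, Vec3(0, 0, 1))
    dummy_base.lookAt(dummy_target, Point3(0, 0, 0), up_hint)

    # Or use headsUp, which keeps the given up direction and only points the
    # forward axis at the target as closely as it can:
    dummy_base.headsUp(dummy_target, Point3(0, 0, 0), Vec3(0, 1, 0))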

Hey rdb, thank you for chiming in :slight_smile:

I was finally able to work out a solution. I reduced the complexity of the armature and oriented all of the bones in one direction in Blender to cut down the number of variables (I was later able to undo this), and from there sorted out the translation of the rotation angles calculated from the mocap in absolute space into local rotations for each joint.

        orientation_quat = Quat()
        # Set the dummy quat to face the same direction as the Panda3D camera.
        orientation_quat.setHpr((180, 0, 0))

        down_quat = Quat()
        # All bones look straight down from their head, at the base keypoint,
        # toward their tail, at the target keypoint.
        down_quat.setHpr((0, 90, 0))

orientation_quat gets applied to dummy_base, lookAt takes dummy_base as its first arg, and that gets me a correct angle in world space.

down_quat is multiplied with dummy_base’s resulting quat, and then optionally multiplied with the inverse of its parent bone’s quat, if it has a parent.
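In code, the combination step looks roughly like this (simplified from the real thing; parent_world_quat is just my shorthand for the parent bone’s stored world-space rotation, and the multiplication order may need flipping depending on how the armature is oriented):

    def combine_bone_quat(dummy_base, down_quat, parent_world_quat=None):
        # down_quat folds in the rest pose: every bone points straight down from
        # its head (base keypoint) toward its tail (target keypoint).
        world_quat = down_quat * dummy_base.getQuat()

        # Parented joints want local rotations, so strip off the parent bone's
        # world-space rotation (conjugate == inverse for a unit quaternion).
        if parent_world_quat is not None:
            world_quat = world_quat * parent_world_quat.conjugate()

        return world_quat

The result is what ends up going into joint.setQuat().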

I did try adding the vector arg to lookAt at some point, but I think that without correctly orienting the dummy it was giving me weird, unusable results. It’s totally possible that I’m going about this in a needlessly roundabout way, or that some aspect of the orientation of my Blender exports is causing trouble, but as I said, I’m a database guy, so I don’t have a great intuition for the asset pipeline.

Everything is working nicely now, and I’m pretty impressed with the performance we’re able to achieve, particularly with the mocap running, and considering that we’re ill-equipped to do low-level optimization.
