Controlling a model's joint movement with realtime hand movement using MediaPipe

I want to control the hands of a 3D model with the movement of my own hand, using MediaPipe.

MediaPipe detects hand-landmark positions in realtime and outputs x, y, z coordinates for each landmark, normalized to values between 0 and 1.

I want to use those x, y, z values to control the movement of the model's hand or body in a realistic way. I'm able to control the model, but not with realistic movement.
Is it possible to do this? To sync Panda3D's coordinates with MediaPipe's coordinates?
If there's a way to do that, please tell me. I think we could create some amazing projects with this.

I would imagine that the thing to do would be to figure out what world-space ranges those normalised coordinates map to, and then scale them appropriately before applying them to your in-game nodes.
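As a rough illustration of that remapping, something like the following might serve as a starting point. The ranges and the axis handling here are assumptions to tune for your own model: MediaPipe's y grows downward in image space, while Panda3D is Z-up with Y as depth, so the sketch flips y and routes MediaPipe's z into Panda's Y.

```python
def remap_landmark(x, y, z,
                   x_range=(-1.0, 1.0),
                   y_range=(-1.0, 1.0),
                   z_range=(-0.5, 0.5)):
    """Map MediaPipe's normalized [0, 1] landmark coordinates into a
    chosen world-space box.  The ranges are placeholders -- tune them
    to the reach of your model's skeleton."""
    def lerp(t, lo, hi):
        return lo + t * (hi - lo)
    # MediaPipe: x right, y DOWN (image space), z roughly depth.
    # Panda3D:   X right, Y forward (depth), Z UP.
    return (lerp(x, *x_range),        # Panda X from image x
            lerp(z, *z_range),        # Panda Y (depth) from MediaPipe z
            lerp(1.0 - y, *y_range))  # Panda Z (up) from flipped image y
```

You would then feed the returned tuple to `setPos` on whatever node tracks that landmark.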

There are two potential caveats that I see, however:

  1. I forget whether joints handled by “controlJoint” operate in the coordinate space of their parent-nodes; I would guess that it is so. If so, you might have to convert your coordinates for that, perhaps by using relative transformations.

  2. Depending on how your joints are set up, it might not be feasible to translate them; it might only be feasible to rotate and scale them. This might call for either a more-complex system that employs something along the lines of inverse kinematics, or changing your joint hierarchy. (The former being more likely to produce good results, I think.)
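To make the inverse-kinematics idea in point 2 concrete, here is a minimal analytic two-bone IK in a plane (think shoulder, elbow, wrist), using the law of cosines. It's only a sketch under assumed conventions (the elbow angle is the second bone's rotation relative to the first); a real IK library handles 3D chains, constraints, and more joints.

```python
import math

def two_bone_ik(l1, l2, dx, dy):
    """Analytic two-bone IK in a plane.  Given bone lengths l1, l2 and a
    target (dx, dy) relative to the chain root, return (shoulder, elbow)
    angles in radians.  The tip lands at:
        (l1*cos(s) + l2*cos(s + e),  l1*sin(s) + l2*sin(s + e))
    Unreachable targets are clamped to full extension."""
    dist = min(math.hypot(dx, dy), l1 + l2)
    # Law of cosines gives the inner elbow angle; pi minus it is the bend.
    cos_inner = (l1 * l1 + l2 * l2 - dist * dist) / (2.0 * l1 * l2)
    elbow = math.pi - math.acos(max(-1.0, min(1.0, cos_inner)))
    # Shoulder: aim at the target, then back off by the wedge the bent
    # elbow introduces between the first bone and the target line.
    cos_off = (l1 * l1 + dist * dist - l2 * l2) / (2.0 * l1 * dist)
    shoulder = math.atan2(dy, dx) - math.acos(max(-1.0, min(1.0, cos_off)))
    return shoulder, elbow
```

The two returned angles are what you would convert to degrees and apply as rotations on the shoulder and elbow joints via `controlJoint`.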


The only major problem is finding the right ratio between these two coordinate systems. I've tried many ways to find it; some work a little, rotating some parts of the model, but nothing produces realistic movement.

Here's what I'm thinking: MediaPipe gives only x, y, z values. If we could somehow find the h, p, r values for each point as well, we'd have all six values needed for any action of the 3D model.
Then, by standing in the same ideal pose as the 3D model, we could compare the x, y, z and h, p, r values from MediaPipe and the model at each point, find the constant between them, and maybe sync the two coordinate systems.
But how do we get h, p, r values from just x, y, z values?


Well, that may depend on just what's being measured. It may be as simple as a bit of trigonometry, or it may be that there isn't enough information in just a location to reliably and accurately infer orientation as well. I don't know offhand, I'm afraid.
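For what it's worth, the "bit of trigonometry" for a single bone could look like this: take the direction from one landmark to the next (e.g. wrist to knuckle) and derive heading and pitch from it. This assumes Panda3D's default convention (+Y forward, Z up); note that roll genuinely cannot be recovered from a single direction vector, which may be part of why position-only data feels insufficient.

```python
import math

def hpr_from_direction(dx, dy, dz):
    """Heading and pitch (degrees, Panda3D convention: +Y forward, Z up)
    for a node looking along the direction (dx, dy, dz).  Roll is
    underdetermined by a single direction, so it stays 0."""
    heading = math.degrees(math.atan2(-dx, dy))
    pitch = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return heading, pitch, 0.0
```

You'd compute the direction as the difference of two remapped landmark positions and apply the result with `setHpr` on the controlled joint (or, for a pair of points, `lookAt` does essentially this for you).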

You can build a tree of dummy nodes and add some visualization lines to them. Then you can easily rotate and scale the whole tree to match your character model. The next step would be implementing inverse kinematics for solving the bone rotations.
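A sketch of the data side of that dummy-node tree, for MediaPipe's 21 hand landmarks. The parent map below is one common way to read MediaPipe's hand topology as a bone tree (wrist as root, each finger a chain); treat it as an assumption and check it against the landmark diagram. In Panda3D you would attach one NodePath per landmark under its parent's NodePath and draw the visualization lines with `LineSegs`.

```python
# Parent index for each of MediaPipe's 21 hand landmarks, treating the
# wrist (0) as root.  Fingers: thumb 1-4, index 5-8, middle 9-12,
# ring 13-16, pinky 17-20.  (Assumed topology -- verify against the
# MediaPipe hand-landmark diagram.)
HAND_PARENTS = [-1,
                0, 1, 2, 3,       # thumb
                0, 5, 6, 7,       # index
                0, 9, 10, 11,     # middle
                0, 13, 14, 15,    # ring
                0, 17, 18, 19]    # pinky

def local_offsets(points):
    """points: list of 21 (x, y, z) tuples in a common space.
    Returns each landmark's offset relative to its parent -- i.e. the
    local translation you would give the corresponding dummy node."""
    out = []
    for i, parent in enumerate(HAND_PARENTS):
        if parent < 0:
            out.append(points[i])          # root keeps its absolute position
        else:
            px, py, pz = points[parent]
            x, y, z = points[i]
            out.append((x - px, y - py, z - pz))
    return out
```

Because each node stores only its offset from its parent, rotating or scaling the root NodePath then transforms the whole tree at once, which is exactly what makes matching it to the character model easy.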

I did something similar:
Gamepad tracking: DualShock4 Motion Sensors Demo for Panda3D - YouTube
Hand tracking using Leap Motion device: Leap Motion Demo for Panda3D - YouTube
Face tracking using OpenSeeFace library: OpenSeeFace Demo for Panda3D - YouTube
Improved hand tracking using Leap Motion device and IK library: Testing IK for Panda3D, custom VTuber software prototype - YouTube

Then I have combined everything together: [VTuberWebcam] Custom VTuber software prototype - gamepad tracking - YouTube

Panda3D is missing an IK feature, so I used this library: GitHub - TheComet/ik: Minimal Inverse Kinematics library
There is also another library, which I haven't tried yet: GitHub - Germanunkol/CCD-IK-Panda3D: Simple Cyclic Coordinate Descent Inverse Kinematics Implementation for Panda3D

This MediaPipe project looks so cool. It's a shame that I hadn't heard about it before. I'm going to play around with it too.


Thanks, you did something really cool. I want to do the same thing you did, but with MediaPipe and Panda3D. I will definitely follow your steps.