Advice on advanced/flexible skeletal animation


I have learnt how skeletal animation works in Panda, and I’ve got good results in terms of applying an animation to a skeletal humanoid character, seeing it walk from one place to another, using intervals and so on, blending between animations, etc. I’ve also seen how individual joints can be controlled.

But I would like some advice on what sort of approach to take to create a more flexible system of character movement/positioning. For example, when a character is sitting on a chair, their legs should be in a different position from, say, when sitting on a bed. The rest of the animation might be the same (e.g. breathing), but their legs must be bent differently, e.g. at the knees. Should one then create two separate animation files, one for when the man is on a bed and one for when he is in a chair? How about if you want to get off the bed and walk? Again, this would be a different animation/sequence from getting up from a chair and walking, so it would need to be different again. And further issues follow, e.g. how to deal with chairs/beds of different heights/sizes?

I think what I am getting at is a sense of how one might develop more high-level approaches to animation/control: presumably most games at the moment use some kind of large state machine, so characters 'remember' what position they are in and then use a different set of animations for a given behaviour in a given state.

I appreciate that one needs to draw the line at a given level of abstraction/complexity, somewhere between these two extremes:

  • Having a dumb pre-determined sequence of animation files loaded according to specific circumstances, with some lookup table of hundreds of possibilities to account for different scenarios (e.g. man on bed + wounded + breathing fast => load bed_wounded_fastbreathing.egg), versus
  • Some highly complex programmed model of the human brain that, via some long chain of reasoning and motor-control simulation, animates each joint individually to produce the overall effect.
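The state-machine idea mentioned above can be sketched quite compactly: each posture state has a looping idle clip, and each transition names the one-shot clip that connects two states (so getting up from a bed and getting up from a chair are just two different transition clips out of two different states). All state and clip names below are invented for illustration; this is only a minimal sketch of the pattern, not Panda3D API code.

```python
# Minimal animation state machine: states map to looping idle clips,
# and (state, event) pairs map to a one-shot transition clip plus the
# next state. All names are hypothetical.

class AnimStateMachine:
    def __init__(self, start):
        self.state = start
        self.transitions = {}   # (state, event) -> (next_state, clip)
        self.loops = {}         # state -> looping idle clip

    def add_loop(self, state, clip):
        self.loops[state] = clip

    def add_transition(self, state, event, next_state, clip):
        self.transitions[(state, event)] = (next_state, clip)

    def fire(self, event):
        """Return the one-shot transition clip and the new idle loop."""
        next_state, clip = self.transitions[(self.state, event)]
        self.state = next_state
        return clip, self.loops[next_state]

fsm = AnimStateMachine("sit_bed")
fsm.add_loop("sit_bed", "idle_sit_bed")
fsm.add_loop("sit_chair", "idle_sit_chair")
fsm.add_loop("stand", "idle_stand")
fsm.add_transition("sit_bed", "get_up", "stand", "rise_from_bed")
fsm.add_transition("sit_chair", "get_up", "stand", "rise_from_chair")

print(fsm.fire("get_up"))   # ('rise_from_bed', 'idle_stand')
```

The table of (state, event) pairs grows with the number of scenarios, which is exactly the combinatorial problem described above; the IK and blending ideas later in the thread are ways of keeping that table small.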

Any thoughts/advice/links would be hugely appreciated.


Hi there. I am also working on animations in Panda3D and want very advanced animation: blending, with many variations made by combining two or more animations, plus an IK solver to force the animation to adapt to an event/object in the game. That way you only need to create one 'sit' animation and it will adapt to the height of the seat and to where your feet meet the ground, or to walking up an incline. To get such an effect you would need something like an IK solver, which I think is also used for physics.
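The core of the 'one sit animation adapts to any seat' idea is a two-bone IK solve for the leg. Given thigh and shin lengths and the straight-line distance from hip to foot target, the law of cosines gives the knee bend directly. This is only a sketch of that one calculation (not a full solver, and all lengths below are made-up values); a real solver would also pick the knee's bend plane and write the result onto the joints.

```python
import math

# Two-bone IK for a knee: from thigh length, shin length, and the
# hip-to-foot distance, compute the interior knee angle with the law
# of cosines. 180 degrees = fully straight leg.

def knee_angle(thigh, shin, hip_to_foot):
    """Interior knee angle (radians) so the leg reaches the foot target."""
    # Clamp the distance so an out-of-reach target just straightens
    # the leg instead of raising a math domain error.
    d = max(abs(thigh - shin), min(thigh + shin, hip_to_foot))
    cos_knee = (thigh**2 + shin**2 - d**2) / (2 * thigh * shin)
    return math.acos(max(-1.0, min(1.0, cos_knee)))

# A taller seat puts the hip further from the foot, so the knee
# straightens (larger interior angle); a low seat bends it more.
high_seat = knee_angle(0.45, 0.43, 0.80)
low_seat  = knee_angle(0.45, 0.43, 0.50)
print(math.degrees(high_seat), math.degrees(low_seat))
```

Feeding the resulting angle into a controlled knee joint each frame (e.g. via controlJoint()) is one way to let a single authored sit clip cope with different seat heights.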

I am currently working towards this but have only just started yesterday.

There was a tech paper about realtime improvisational animation based on already-existing animation (e.g. mocap data). I'm pretty sure it was a CMU paper but I can't find it. I think it's pretty dated now, maybe around 1996 or '97, but the animations themselves were quite nice for being semi-generic. I'll let you know if I find that thing again.

In the meantime, any other strategies one can suggest?

Not really. Most of the realtime motion-synthesis approaches involve segmentation of already-existing animation samples, some basic IK, and a really messy-looking state machine (transitions between states in particular are a nasty thing).
The best thing is to try googling for "realtime motion synthesis"; there are literally dozens of different approaches and implementations.

Hmm, what about sub-part/multi-part animations? That would at least decrease the number of animation files needed.
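Sub-part animation does cut the combinatorics a lot: one clip drives the legs while another drives the torso, so 'sit on chair' x 'breathe fast' stops being a dedicated file. (Panda3D's Actor has support along these lines, e.g. makeSubpart(), if I remember right.) Here is a plain-Python sketch of the idea, with joint poses as angle dictionaries and a joint mask deciding which clip wins; all joint and clip names are invented.

```python
# Sub-part blending sketch: two full-body clips, and a joint mask that
# says which joints the overlay clip should drive. Everything else
# comes from the base clip.

def combine(base_clip, overlay_clip, overlay_joints):
    """Take each joint's pose from overlay_clip if listed, else base_clip."""
    pose = dict(base_clip)
    for joint in overlay_joints:
        pose[joint] = overlay_clip[joint]
    return pose

# Per-joint angles for one frame of each (hypothetical) clip.
breathe   = {"spine": 5.0, "head": 0.0, "hip_l": 0.0,  "knee_l": 0.0}
sit_chair = {"spine": 0.0, "head": 0.0, "hip_l": 90.0, "knee_l": 80.0}

# Legs from the sit clip, torso from the breathing clip.
frame = combine(breathe, sit_chair, ["hip_l", "knee_l"])
print(frame)  # {'spine': 5.0, 'head': 0.0, 'hip_l': 90.0, 'knee_l': 80.0}
```

With N torso clips and M leg clips this gives N x M combinations from N + M files, which is exactly the reduction being asked about.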

Well well, it seems that what I have been getting at has been until recently a near impossible problem, but not so with the work of this company:

Check out the video. That is amazing, really. No doubt that kit is very expensive, however, and a little overkill for humble old me.
I'm wondering, then: what's a good second-best option that won't cost me $100,000s?

I am not sure if this approach is relevant. I am also new to panda, modeling and animation.

I developed some software in Python to emulate a walking robot and a robot arm with computer vision.

The walking robot simulator:
It uses a programmable scripting language to control a number of motor joints, and uses ODE for physics simulation.

The end result is a walking robot:

The chess arm robot:
It is mainly IK plus OpenCV image processing.

In a simulated world like Panda3D, I think the above approach is applicable. A control mechanism (like ODE) is needed, along with some IK code and scripts to control the model joints. It can be a pre-generated animation or a realtime one, depending on how dynamic your environment is.

I think a programmable script like the one used in the walking robot could be developed to do such a job, rather than an FSM.

Other control mechanisms could replace ODE; ODE is too slow for realtime applications.

Again, my experience with animation is zero, so I'm not sure whether this is the way to do it.

Thanks for this, but I’m not sure how relevant it is.

Certainly I like the idea of Behaviors that influence the underlying animation. I could imagine a Behavior for breathing running concurrently with, say, one for sitting comfortably in a chair.
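One simple way to make concurrent Behaviors compose is to treat each one as a layer that contributes per-joint angle offsets on top of a base pose, and sum the layers each frame. (Panda3D's own animation blending works at the clip level; this sketch is the same idea expressed directly on joint angles, with all names and numbers invented.)

```python
import math

# Layered-behaviour sketch: a base pose (e.g. sitting) plus per-joint
# offsets contributed by each active behaviour (e.g. a breathing
# oscillation on the spine), summed each frame.

def breathing(t):
    """Small spine oscillation; returns per-joint angle offsets."""
    return {"spine": 2.0 * math.sin(2 * math.pi * t)}

def apply_layers(base_pose, behaviours, t):
    pose = dict(base_pose)
    for behaviour in behaviours:
        for joint, offset in behaviour(t).items():
            pose[joint] = pose.get(joint, 0.0) + offset
    return pose

sitting = {"spine": 10.0, "knee_l": 80.0}
pose = apply_layers(sitting, [breathing], t=0.25)
print(pose)  # spine = 10 + 2*sin(pi/2) = 12.0; knee_l unchanged
```

Because each behaviour only touches the joints it cares about, breathing and sitting never conflict, and adding a third concurrent behaviour is just another entry in the list.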

The issue, I suppose, is how to combine/control/manage Behaviors. It seems that the DMS engine I posted earlier takes it to one extreme of simulating the underlying AI and biomechanics. What I would like to aim for would be more like an FSM that still uses underlying keyframe animation files.

There should also be room for controlling joints, though, and this brings me to a specific question: I am trying to get my human to move his head to look at the player (camera). The problem is that calling lookAt() on a joint controlled with actor.controlJoint() makes the head look the wrong way (as if the HPR values are the wrong way round or something). I gather that the transform on the controlled joint is a local one (not sure if that is relevant). The tutorial example isn't relevant here because it doesn't do quite the same thing. Can someone suggest a remedy/how I should be doing it? I appreciate that pointing the single head joint isn't exactly a 'high-level' way of doing it either; presumably the shoulders etc. should turn a bit if necessary.


You can exposeJoint() the joint above your head, and attach the head’s controlJoint() to that node. That way, the controlJoint transform will inherit its correct local transform from the scene graph, and lookAt() will compute the correct relative transform.
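Concretely, assuming an already-loaded Actor whose rig has joints named 'Neck' and 'Head' (placeholders; use whatever your model actually calls them), the pattern looks roughly like this. This is a sketch only and won't run standalone, since it needs a model and a running ShowBase; some rigs also need an extra heading/pitch offset if the head joint's forward axis isn't +Y.

```
# 'Neck'/'Head' are placeholder joint names for your rig.
neck = actor.exposeJoint(None, 'modelRoot', 'Neck')   # node that follows the neck joint
head = actor.controlJoint(None, 'modelRoot', 'Head')  # node whose transform drives the head joint

# Parent the controlled head node under the exposed neck node, so the
# head's transform is interpreted in the neck's coordinate space and
# lookAt() computes the correct relative rotation.
head.reparentTo(neck)
head.lookAt(base.camera)
```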