Physics simulation for character hair

Hi all,

The following is as much an is-there-interest-in-this-kind-of-feature question as a general posting.

I’m working on a character builder for an upcoming game project, and I’m also interested in doing physics-based procedural animation. Which brings us to the topic: physics-based procedural animation of a character’s hair.

When I started this project about a year ago, relatively extensive Googling showed that while Panda supports both Bullet and ODE, neither of them is connected to Panda’s logic in a way that makes procedural animation of a character’s hair or clothing easy.

The important property is that the logic needs to be able to use the physics engine only for some subset of the joints of a model (which is generally a multipart Actor for maximum runtime customizability), and the simulation must observe certain constraints (e.g. the length of each hair segment must not change).

Hence, having some background in classical mechanics, at this point I felt it was simpler to code a custom physics simulation and connect that to the procedural animation features of Panda3D than to try to adapt the existing integration to either Bullet or ODE to perform the calculations.

Hence, I have a working prototype:

Left: initial state from model file. Right: final state.

The simulation is initialized from the joint hierarchy in the model file, enabling the creation of different hairstyles in Blender:

However, the current prototype runs slowly, and is missing a couple of features that I still need to add to make a final usable version.

It turns out that NumPy is not very well suited for this particular task, as the chains of joints representing the hair segments have wildly differing lengths - or at least, it is very difficult to keep the calculation properly vectorized if chains of differing lengths are needed in the same simulation. (If multiple arrays are used, NumPy’s efficiency gain is lost.) Also, the calculation requires relatively many function calls, which are slow in Python.
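For illustration, one standard workaround (not taken from the prototype; all names here are made up) is to pad every chain to the length of the longest one and mask out the unused slots. This keeps the arithmetic vectorized, but as noted above, the shorter chains then pay for the longest one anyway:

```python
import numpy as np

def make_padded_chains(chain_lengths, dim=3):
    """Pack chains of differing lengths into one (n_chains, max_len, dim) array.

    Returns a zeroed position array and a boolean mask marking the real points.
    """
    n = len(chain_lengths)
    max_len = max(chain_lengths)
    positions = np.zeros((n, max_len, dim))
    mask = np.zeros((n, max_len), dtype=bool)
    for i, length in enumerate(chain_lengths):
        mask[i, :length] = True
    return positions, mask

positions, mask = make_padded_chains([3, 7, 5])
# A vectorized update touches every slot; the mask zeroes out the padding,
# so the work done on padded slots is simply wasted.
gravity = np.array([0.0, 0.0, -9.81])
velocity = np.zeros_like(positions)
dt = 1.0 / 60.0
velocity += gravity * dt * mask[..., None]
```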

Therefore, if I’m going to keep the custom code approach, the next logical step would be to move the slow stuff into a C or C++ extension module.

The question to the community is: does anyone else want this functionality, and especially, is there interest to include it in Panda if I implement a “procedural hair animation” module as a Panda C extension?

Specifically, the module would consist of a small physics code to perform the actual simulation, and auxiliary code to read in a model and animate a user-definable subset of its joints using the physics simulation.

The physics code simulates chains consisting of rigid rods, connected via ball-joint bending springs (with configurable stiffness). It will be possible to apply external forces such as gravity, air resistance, and the fictitious forces arising from inertial effects (when the simulation runs in a noninertial coordinate frame, such as the local coordinates of a character’s head). Some very basic collision shapes will be implemented, to prevent hair intersecting a character’s head or body. Collision detection with any external objects in the scene is NOT planned.
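As a rough illustration of the kind of model described (a hypothetical sketch, not the actual module code), a single timestep for one chain might combine gravity, a bending "spring" pushing each segment back toward its neutral direction with configurable stiffness, velocity damping, and finally a projection that restores the fixed segment lengths:

```python
import numpy as np

def step_chain(pos, vel, rest_dirs, seg_len, stiffness, damping, gravity, dt):
    """One timestep for a single hair chain (hedged sketch).

    pos, vel: (n, 3) arrays; pos[0] is the fixed root point.
    rest_dirs: (n-1, 3) unit vectors, each segment's neutral direction.
    """
    n = len(pos)
    force = np.tile(gravity, (n, 1)).astype(float)
    # bending "spring": push each segment back toward its neutral direction
    for i in range(1, n):
        cur_dir = (pos[i] - pos[i - 1]) / seg_len
        force[i] += stiffness * (rest_dirs[i - 1] - cur_dir)
    vel = damping * (vel + force * dt)   # damping models joint friction
    vel[0] = 0.0                         # the root point is pinned
    pos = pos + vel * dt
    # constraint: restore each segment to its fixed length, root to tip
    for i in range(1, n):
        d = pos[i] - pos[i - 1]
        pos[i] = pos[i - 1] + d * (seg_len / np.linalg.norm(d))
    return pos, vel
```

With gravity on, a chain that starts out horizontal droops while every segment keeps its exact length.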

I might add support for branching chains in a later version; this would be useful for simulating the motions of trees subjected to wind. Combined with some small additional work in the fractal tree generator in the Panda examples, this might be useful for procedurally generated game level backgrounds.

As part of the system, there will be a thin logic layer that reads in a model, takes control of a subset of its joints, and then updates those joints procedurally based on the simulation. The neutral state of the joints can be initialized from the model (so the joint is not necessarily straight in the orientation where it applies zero force).

Note that the current Python-with-NumPy prototype already has most of this; the only parts missing are fictitious forces (inertial effects), collision detection and branching chains.

Mostly, the remaining work is a matter of implementing a more efficient (and slightly expanded) version in C/C++.

At least personally, I think a module like this would speed up game development, specifically making it much simpler to create characters with long, freely flowing hair.

If it will run at a decent speed and be included (at some point) in an official Panda release - then yes, I would be interested in a thing like this.
If it could be written in a way that allows it to be used for something like a cape or skirt, then I would be interested in it even more.

Speed will probably not be an issue once I get the simulation ported to C. We’ll see :slight_smile:

Skirts and capes are on the expected to-do list. They are a bit trickier than hair for two reasons. First, they need a different kind of physics primitive, as they are two-dimensional.

The question of how to create a suitable physics model for this case requires some thought. If the rod lengths are fixed in both directions, the range of possible motions “locks” in an undesirable way. One possibility is a discrete approximation of an elastic sheet, built out of point masses and springs. I’ll have to think about this.

Secondly, the two-dimensional nature makes them trickier to configure in the modelling package than hair, because the horizontal connections between neighboring joint chains cannot be represented in the armature. It might be possible to encode the horizontal connections via a naming scheme for the joints, but that feels hacky.
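To make the naming-scheme idea concrete, here is a hypothetical sketch (the `<prefix>_r<row>_c<col>` convention and every name in it are invented for illustration) that recovers the horizontal neighbor links from the joint names alone:

```python
import re

# Hypothetical naming convention (invented for illustration):
# "<prefix>_r<row>_c<col>", e.g. "cape_r0_c3".
JOINT_NAME = re.compile(r"^(?P<prefix>\w+)_r(?P<row>\d+)_c(?P<col>\d+)$")

def horizontal_links(joint_names):
    """Recover horizontally adjacent joint pairs from the naming scheme."""
    grid = {}
    for name in joint_names:
        m = JOINT_NAME.match(name)
        if m:
            grid[(int(m.group("row")), int(m.group("col")))] = name
    links = []
    for (row, col), name in sorted(grid.items()):
        right = grid.get((row, col + 1))
        if right is not None:
            links.append((name, right))
    return links

links = horizontal_links(["cape_r0_c0", "cape_r0_c1", "cape_r1_c0", "cape_r1_c1"])
# → [("cape_r0_c0", "cape_r0_c1"), ("cape_r1_c0", "cape_r1_c1")]
```

It does feel hacky, as said - but it would survive a round trip through the armature without any extra file formats.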

Anyway, I’ll probably experiment with this, as I’m also interested in having support for physics-enabled clothing.

Hi all,

I needed a break from postprocessing, so I decided I’d look into re-implementing my hair physics simulator in C++.

At this point I think I need some help with Panda’s C++/Python integration. I understand Panda includes a tool called interrogate that will generate Python bindings for a C++ module, but I’m having trouble finding documentation (or even relatively recent forum posts - I suspect posts from 2006 or 2007 such as “Cracking open Python objects in C++”, “[Solved] Mixing C++/Python, accessing the engine in parallel” or “Interrogate questions” may be out of date by now :slight_smile: ).

Basically, I want to make a C++ extension for Panda that takes in a multipart actor and some extra information, takes control of a subset of the actor’s joints, and then runs a custom physics simulation to animate those joints procedurally (while letting everything else in the actor be animated normally).

I think a custom simulation is the way to go for two reasons: getting the Bullet or ODE features to take control of a subset of an actor’s joints seems painful, and at least to me it seems that Bullet’s softbody rope does not have support for bending rigidity (nor is it easy to extract its shape to apply it to a sequence of joints). (For example, sewing thread has almost no bending rigidity, while a guitar string does. In a character’s hair, it seems to me that e.g. braids should be more resistant to bending than free bunches of hair. Hence, user-settable bending rigidity for artistic control.)

As stated earlier in this thread, I have already made a working prototype in Python, but the physics simulation needs C++ for speed. I’m familiar with both languages, but aside from some simple experiments with Cython, I have never integrated the two.

So, I have some questions:

  1. Where do I start if I want to make a C++ extension for Panda that plugs into the makepanda build system? Are there examples? (I’d like to build this in a way that it can be included into Panda if the quality is sufficient.)

  2. What is the preferred way for passing Python objects and strings from Python to a C++ Panda extension? I think I saw a simple example somewhere last year (and can’t find it now that I’d need it!), but it was passing only an integer, whereas I would need to pass more complex data types, like Python lists (of strings) and Panda objects such as Actor.

  3. The simulation needs to update the controlled joints at each frame. What is the preferred way of implementing update tasks in C++ extensions?

  4. The final version of the simulation will need motion information in order to implement fictitious forces arising from non-inertial motion of the Actor.

Is there a preferred, general way to extract this information in Panda, other than using finite differences based on the last known position(s)? The simulation needs the local acceleration at each joint that is generated by the combined effect of non-inertial rigid-body motion (i.e. acceleration and rotation) and any prerecorded animation that may be simultaneously playing on the Actor (affecting the hair parent joint or its ancestors). Linear velocity may also be useful for a crude simulation of air resistance based on the local velocity plus some random noise.

Finite differences should work, but will cause a one-frame lag in angular velocity and a two-frame lag in linear acceleration, even when inaccurate first-order backward differences are used. Higher accuracy would require even more points and thus more lag.

Thus, if the motion information is somehow directly available, it would be better to use that in order to avoid the lag. But it depends on how game models are usually animated and moved - if there is no velocity and acceleration information, then position-based finite differences are the only option. (A hybrid that calculates velocity from position and acceleration from velocity is possible - and works better if the framerate varies - but this does not remove the problem.)

As usual, any help would be appreciated :slight_smile:

The easiest would probably be to check one of the existing extension modules for Panda3D, like “rocket”, search the build scripts for the string “rocket”, and copy the relevant instructions. If you want to try your own hand at playing with interrogate, looking at how makepanda invokes interrogate would be a good start.

There’s also a page on Interrogate at the manual, though it’s a bit limited.

You can’t pass Python objects to a C++ extension. Actor inherits from NodePath, so if you pass an Actor to C++, the C++ code will only see NodePath. As for strings, interrogate automatically writes code to map those to C++ strings.

The task manager is C++, so that’s possible, but you might also consider subclassing PandaNode and overriding something like cull_callback (called when the node is being considered for rendering), or transform_changed, or something of the sort - or even hooking directly into Panda’s joint animation system. You can just add a task, but working in C++ gives you opportunities to work much closer to the metal.

Panda doesn’t really store this information, but it does store the last frame’s transform for fluid collision checking (though using it would require you to use nodePath.setFluidPos instead of nodePath.setPos). You could possibly make use of that. Otherwise, you’re on your own, and you’ll have to use a traversal that runs each frame, or a transform_changed hook, or something like that (though keep in mind transform_changed only gets called when a node’s own position changes, not that of its parents).


Now that you mention it, I think I’ve actually read that earlier. It covered the basics pretty well, but didn’t delve into the details on how to pass parameters containing more complex data types.

(The simple example was probably somewhere in the skel/ directory, as the manual page on interrogate mentions a sample C++ extension there. That must be where I picked it up earlier.)

Ah, I see now I was confusing C-based Python extensions (involving PyObject, which allows accessing Python objects in C code) with Panda’s C++ extensions.

I was also incorrectly assuming that pretty much everything in Panda (except a few select things such as the postprocessor) is implemented on the C++ level. It seems the first task will be to check which of the classes needed for this are C++ and which are not.

The simulator does not strictly need Actor. What the code really needs to do, technically, is to traverse the children of a user-specified joint that is contained in a user-specified part of the multipart actor. The Python implementation does this by traversing the PartBundles contained in an Actor, and when it finds the correct bundle, traversing CharacterJoints. I suppose I can set up something similar in C++ once I figure out which classes are available (for which I suppose the API reference is useful).

(It would otherwise be fine to leave the whole thing in Python, but the problem is that this computation does not vectorize well, so the low-level physics needs for loops. Cython would be another option for this, but introducing a dependency on Cython may lead to packaging problems considering the distribution of projects using this simulator. Also, I think there is a potentially important efficiency gain if the function calls to update the joints are performed in C++, as there can be a large number of hair segments in the scene. I can’t see a better way to do this than to move the whole simulator into C++.)

Ah, that’s a good point.

The task manager may indeed be too high-level to be the appropriate abstraction for this. I currently don’t know the architecture of Panda well enough to make an informed choice from the other options. Is there any particular approach you would prefer in this situation? (If not, I can look at all of them, but deciding which one is best may take some time.)

EDIT: now I think I understood the PandaNode suggestion: subclass that, make cull_callback() timestep the simulation and post the updated transforms to the affected joints, and plonk this HairPhysicsNode (or whatever) into the scene graph (setting it to sort early in the cull traversal) just as if it was a ComputeNode. This seems very elegant, and should minimize the amount of additional code needed in the simulator. I think I’ll try this solution first.

I see. Thanks.

I think in effect, that would do the exact same thing as the solution with custom finite differencing, the only difference (no pun intended) being where the information about the previous position is stored. Maybe the most robust and also easiest option is to use traversal at each frame, so that the simulation works the same regardless of whether the character is being moved using setPos() or setFluidPos(). The simulator must in any case compute some derived quantities from the position history, so it is logical to extract and store the positions locally.

Anything in direct.* is a Python class. Anything in panda3d.* is a C++ class.

I’m not really sure what the best approach is, sorry. But I’ve thought a tiny bit about a vaguely similar problem in the past (IK), so maybe sharing some thoughts on one possible approach might give you some ideas.
As you may know, the way characters work in Panda is that each Actor is really a wrapper around the C++ class PartBundleNode (or rather, a derived class called Character) which stores a bunch of PartBundles, each of which is the root of a hierarchy of MovingPart objects - a base class representing, well, a moving part. A common example of a MovingPart is a CharacterJoint.

(The way this actually links up to the geometry is that each vertex stores an index into a table of VertexTransform objects stored on the Geom. VertexTransform is an abstract base class that represents a particular source of transformations that may be applied to a vertex. One particular implementation of VertexTransform is a JointVertexTransform, which takes its transformation from the CharacterJoint in the PartGroup hierarchy.)

Now there is also AnimBundleNode and AnimBundle, which have a very similar hierarchy, but represent a bunch of animations. Instead of MovingParts this hierarchy stores AnimChannels, which provide a virtual interface for querying the position and orientation of the channel for a given frame. The most common type of AnimChannel is of course the one that takes its transform from a predetermined table, as defined by an animation file.

Now, when you bind an animation, Panda walks through an AnimBundle hierarchy and the corresponding PartBundle hierarchy, and connects each MovingPart to the appropriate AnimChannel. The AnimChannel provides the source for the local transformation matrix of the joint in the hierarchy.

The way the Actor.controlJoint function works is that it creates a different type of AnimChannel that doesn’t take its value from an animation table, but instead reads out the position of a node in the scene graph. It assigns this as a special “forced channel” onto the MovingPart, so that it overrides any bound animation.

Knowing all this, it sounds like it might be an idea to try implementing your own type of AnimChannel, one that takes its value from whatever fancy algorithm you have there. This means you don’t really have to worry about creating a task that updates all the joints - instead, you just have to create your hierarchy and bind it once, and Panda’s animation system would be “asking” your channels (by means of a virtual method call) to update their transform appropriately whenever it needs the next animation frame. You would basically be providing local transformations, and Panda would automatically calculate the net transformation based on the joint hierarchy.

This isn’t really speaking from experience, though, these are just ideas, and it could be that Panda’s AnimChannel abstraction is a poor fit for what you’re trying to do here.

Ah, that simple? Thanks!

Fair enough. After all, the hair simulator was my idea :stuck_out_tongue:

FWIW, some technical details (concerning the interaction with controlJoint) are described in one of my earlier posts, “global coordinates of a joint”.

It would be nice to post the code in order to explain this whole thing more clearly, but I’m not yet satisfied with its quality - the current implementation is missing some important features. Furthermore, the code resides inside an unmaintainable mess of miscellaneous experiments, so I would first need to spend some time to make a minimal working example containing just the hair simulation before it’s readable to anyone but me.

(As for the missing features: currently, the external force is a global constant vector; this must change to support fictitious forces. Also, upon closer thought I have noticed that there is currently no mechanism for specifying a custom rest position for any except the first element in the chain - the neutral position of the chain is always a straight line in the direction pointed by the first segment. This needs to change to make it possible to simulate trees, too. Finally, the chain abstraction must go; the simulation needs to be able to support an arbitrary tree topology.)

Thanks for the interesting and detailed technical overview. This should help a lot.

With that information, I think there are two options: either the PandaNode subclass approach, utilizing the existing controlJoint mechanism, or the AnimChannel approach. It seems an AnimChannel might be marginally faster, and in any case cleaner, as it would avoid one extra layer of logic (namely controlJoint and its dummy nodes).

I imagine that in this approach, I would modify Actor to add a method controlJointTreeByPhysics (or something more compactly/sensibly named) that would take over the control of the specified joint and all its children, and make their animation follow the physics simulation. This would be implemented in a PhysicsAnimChannel (or something), which would set the joint transformations based on the simulation state (in a forced mode, overwriting animation).

But where would the simulation code go? It is something that needs to be run once per frame per controlled joint tree, since the joints in the tree interact.

There must also be some way to set simulation parameters - some of these are per-simulation (gravity vector, timestep length), but others are per-joint (stiffness, damping). In the PandaNode approach, the node provides a natural container for all such data, but at the moment I don’t see a logical place for this data in the AnimChannel approach.

(Background: the simulation needs the global positions of all the mass points representing the joint origins to compute the next timestep; though “global” may here be a misnomer since it is easier to work in the coordinate system of the character, or even that of the hair parent joint. I haven’t considered whether a purely local approach is possible.

An important property of hair simulation is that the “bone lengths” (in the Blender sense) stay constant. Working in rotation space (using quaternions to represent orientations) would be one possibility (very elegant since the choice of space automatically enforces the constraint), but I think the forces affecting the mass points are easier to model in a Cartesian xyz frame. This is the approach I have used. The Cartesian approach needs the positions of the neighboring nodes (joints) to enforce the length constraints. Then there is also a second constraint, requiring that the correction to the position, applied by the length constraint, does not change the kinetic energy.)
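A minimal sketch of those two constraints (my own illustration; the prototype’s actual implementation may differ in detail): project the point back onto the rod length, then remove the radial component of the velocity and rescale the remainder so the kinetic energy is unchanged:

```python
import numpy as np

def enforce_length(p_prev, p, v, seg_len):
    """Project a point back onto the rod, keeping kinetic energy (sketch).

    p_prev: the inboard endpoint of the rod; p, v: the outboard point's
    position and velocity after the unconstrained update.
    """
    d = p - p_prev
    d_hat = d / np.linalg.norm(d)
    p_new = p_prev + d_hat * seg_len            # length constraint
    speed = np.linalg.norm(v)                   # kinetic energy ~ speed**2
    v_tan = v - np.dot(v, d_hat) * d_hat        # drop the radial component
    norm = np.linalg.norm(v_tan)
    if norm > 1e-12:
        v_new = v_tan * (speed / norm)          # same speed, tangential direction
    else:
        v_new = v_tan                           # degenerate: purely radial motion
    return p_new, v_new
```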

Something to keep in mind for future development is how to handle collisions. Not many games do hair collisions well or at all, so here is a chance to shine. (Hair? Shine? Get it? :stuck_out_tongue: ) One possibility - probably enough for a first version - is to code some very limited collision support into the simulation itself (to handle the most egregious cases such as hair intersecting the character’s head). This is however not an ideal solution.

It would be better to interface with Panda’s existing collision system, but then the simulation needs to be able to talk with that. Currently, I have no idea where to even start looking - and maybe it is best left for later anyway to keep the project from ballooning out of hand. But, it would be ideal to take into account the possibility to add this later, in order not to do any critical design mistakes.

It occurs to me now that the same mechanism could also be used for ragdolls (running the simulation on the model root joint), but doing ragdolls properly would require support for constraints on allowed rotations.

Setting up such constraints manually is painful to say the least - for this to be usable, the constraints would need to be exported from the modeling package. At least Blender supports bone constraints. Right now I have no idea whether YABEE can export them, or if indeed the egg file format supports that or not (I suspect not?).

(I don’t really need ragdolls right now, but it would be a natural extension of the hair simulation system.)

Based on your description, it actually sounds like a pretty good fit, provided that we can find a logical place for the simulation code, which needs to work, not per-joint, but per-controlled-joint-tree. This seems to me the cleanest of the ideas so far.

It doesn’t sound to me like the two things you mention are mutually exclusive. You could have an encompassing class that inherits from PandaNode (directly or indirectly), which binds the appropriate AnimChannels to the joints, just as we have both Character as well as CharacterJoint.

Since you’re working in C++ and don’t have controlJoint, keep in mind that there is no escaping from using AnimChannel with that approach either. As I said before, the way controlJoint works is by assigning an AnimChannelMatrixDynamic to the joint, which takes its input from an assigned dummy node. The only difference in what I’m proposing would be that you assign your own version of AnimChannel that computes its value from your algorithm whenever Panda asks for it, rather than having to use a task to assign values to dummy nodes each frame. So whether you make a custom PandaNode implementation or not, you’re going to end up assigning an AnimChannel of some sort to the joint.

Now, it gets a bit tricky if (as you say) you need the global coordinates of the parent joints. As far as I know, an AnimChannel does not have access to the transform of the parent joint - it just delivers a local transform. You could of course use the exposeJoint mechanism, which on the C++ side would have you set a flag on MovingPart to update an external node position when the transform changes. It sounds a bit unnecessarily expensive to be relying on the scene graph for this. One idea might be that you just replace the joint itself with your own MovingPart derivative (i.e. inheriting from CharacterJoint), which does have access to this information.

Or maybe we should just extend the AnimChannel mechanism so that it is passed the matrix of the parent joint so that the AnimChannel might take that into account when calculating its matrix. I don’t think the AnimChannel mechanism was designed to be used in this way, but maybe this would be the neatest way.

Hmm, right.

Aaah, controlJoint() is defined in Actor! Good point.

Ok. Thanks for the clarification.

The algorithm needs not only the transform of the parent joint, but also the global transform, because it must know the local direction of the gravity vector (which is defined in global coordinates). The parent joint’s transform is needed for enforcing the bone length constraint.

The simulation is based on treating the joints (including the terminator) as the endpoints of rigid rods. One endpoint of the whole hair chain (or tree) is considered fixed. The other end (the “leaf” end) is free. All points except the fixed one are subjected to a Newtonian physics simulation under some constraints.

The physics simulation updates the positions of these points at each frame, and the joint transforms are updated by converting this position data into orientation data, using lookAt(). The conversion runs from the root of the tree toward the leaf level. The position difference between two successive points in the simulation basically gives the +y axis of the joint. The initial version tracked only the y axis and used a hack for locking the joint’s roll, but in the final version I intend to track also an auxiliary axis (z) representing the local “up” vector, genuinely producing a unique orientation. The neutral orientation of a joint can be encoded as a position (with points representing the “y” and “z” axes) in its parent joint’s coordinate system.
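The position-to-orientation conversion described above can be sketched in plain NumPy (a hypothetical stand-in for what lookAt() computes, assuming a right-handed frame with +y as the forward axis):

```python
import numpy as np

def frame_from_y_and_up(y_dir, up_hint):
    """Unique joint orientation from a +y axis and an "up" vector (sketch).

    Returns a 3x3 rotation matrix whose columns are the joint's x, y, z axes,
    roughly mirroring what lookAt() computes from a forward and an up vector.
    """
    y = y_dir / np.linalg.norm(y_dir)
    x = np.cross(y, up_hint)      # side axis, perpendicular to forward and up
    x = x / np.linalg.norm(x)
    z = np.cross(x, y)            # recomputed "up", orthogonal to both
    return np.column_stack([x, y, z])
```

Here y_dir would be the position difference between two successive simulation points, and up_hint the tracked auxiliary axis; the Gram-Schmidt-style recomputation of z keeps the frame orthonormal even if the tracked axes drift slightly.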

I’ve been thinking a bit about what you said. You’re right in that the scene graph sounds expensive, but on the other hand, it seems to me that it could be the correct abstraction for coordinate system conversion between the local cartesian frames (as in coordinate frame in physics) of arbitrary objects (joints, nodes and the scene root).

As I think I might have mentioned, I’m planning to split the motion into two parts: run the simulation (which has its own data structures and is in principle completely independent of the joints) in the local coordinate system of the hair root joint (to which all the first segments of the hair chains are attached), and then account for the motion of this hair root joint (w.r.t. the global scene coordinates) by introducing fictitious forces. The fictitious forces formally convert the moving coordinate frame into a stationary one, where the usual equations of motion apply.

Tracking the motion of the hair root joint directly in global coordinates automatically accounts for the combined effect of any rigid-body motion of the character, and any animations that move (or rotate) the head - the trick is to notice that although the character deforms, the head can be treated as a rigid body.

The gravity vector must be converted from global coordinates to hair root joint coordinates so that it can be applied in the simulation. And in order to compute the fictitious forces, the linear acceleration, the rotation axis, and the rotation speed (angular velocity) of the hair root joint must be determined in the global scene coordinates.
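For reference, the standard fictitious acceleration in a moving, rotating frame combines a translational term, an Euler term, a centrifugal term, and a Coriolis term; a sketch of the textbook formula (not the module’s code):

```python
import numpy as np

def fictitious_acceleration(a_frame, omega, alpha, r, v_local):
    """Fictitious acceleration felt in a moving, rotating frame (sketch).

    a_frame: linear acceleration of the frame origin (global coordinates)
    omega, alpha: angular velocity and angular acceleration of the frame
    r, v_local: position and velocity of the mass point in the moving frame
    """
    translational = -a_frame
    euler = -np.cross(alpha, r)
    centrifugal = -np.cross(omega, np.cross(omega, r))
    coriolis = -2.0 * np.cross(omega, v_local)
    return translational + euler + centrifugal + coriolis
```

For a point at rest on the x axis of a frame spinning about z, this reduces to the familiar outward centrifugal term omega**2 * r.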

The acceleration can be computed by backward-differencing the position to produce velocity information, and then backward-differencing this velocity to get the acceleration. During the first two frames (as in rendered frame in 3D graphics) this of course produces nonsense, because the previous velocity is not yet initialized, but it is easy to catch this special case, and just pretend that the acceleration is always zero at the start. Games always render many more than three frames :stuck_out_tongue: so this is not a problem in practice.
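The backward-differencing scheme with the "pretend zero at the start" guard might be sketched like this (an illustration, not the actual prototype code); acceleration stays zero for the first two updates, exactly as described:

```python
import numpy as np

class AccelTracker:
    """Backward-difference velocity and acceleration with an init guard (sketch).

    Reports zero velocity/acceleration until enough previous samples exist;
    the one/two-frame lag is inherent to the approach.
    """
    def __init__(self):
        self.prev_pos = None
        self.prev_vel = None

    def update(self, pos, dt):
        pos = np.asarray(pos, dtype=float)
        vel = np.zeros(3)
        acc = np.zeros(3)
        if self.prev_pos is not None:
            vel = (pos - self.prev_pos) / dt
            if self.prev_vel is not None:
                acc = (vel - self.prev_vel) / dt
            self.prev_vel = vel
        self.prev_pos = pos
        return vel, acc
```

A nice property: for uniformly accelerated motion the nested backward differences recover the acceleration exactly (from the third frame on), since the truncation errors cancel.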

As for the rotation axis and angular velocity, I made some (Python-based) experiments comparing the orientation at successive frames (as in rendered frame) using Panda’s quaternion system and the scene graph, and I think I got the necessary information. Basically:

# setup
import math

hairRootNode = XXX  # this is an exposed joint

prevNode = PandaNode("HairRootPreviousTransformStorage")
prevNP = NodePath(prevNode)

state = {"initDone": False}  # mutable, so the task function can update it

# ...more code goes here...

def normalizeAngleDeg(angle):
    """Normalize an angle (given in degrees) to [-180, 180)."""
    result = angle
    while result <= -180.0:
        result += 360.0
    while result > 180.0:
        result -= 360.0
    return result

def rotationTask(task):
    # initialize prev if not initialized yet
    if not state["initDone"]:
        prevNP.setTransform( hairRootNode.getTransform( other=render ) )
        state["initDone"] = True
        return task.cont

    # difference in orientation w.r.t. previous frame
    Q = hairRootNode.getQuat( other=prevNP )

    # axis of rotation (unit vector)
    r_local = Q.getAxisNormalized()

    # rotation increment (degrees), effectively theta = omega*dt;
    # without normalization, we may get e.g. 357 degrees per frame, whereas we want the equivalent -3.
    theta = normalizeAngleDeg( Q.getAngle() )

    # convert to scene global coordinates
    r_global = render.getRelativeVector( other=hairRootNode, vec=r_local )
    x0 = hairRootNode.getPos( other=render )

    # for debug visualization, it is possible to use something like this (given the appropriate definitions):
    scaleMult = 10.0  # exaggerate for easier visibility
    vertex3 = GeomVertexWriter(vdata3, 'vertex')
    halfvec = (r_global/2.0) * math.radians(theta) * scaleMult
    vertex3.setData3f( x0 - halfvec )
    vertex3.setData3f( x0 + halfvec )

    # at the end of the update task, update prev:
    prevNP.setTransform( hairRootNode.getTransform( other=render ) )

    return task.cont

taskMgr.add(rotationTask, 'MyRotationTask', sort=0)

The variables x0, r_global, and theta encode all the necessary information about the motion of the head in global coordinates.

This should always work, because any rotation in 3D space can be expressed as a single rotation (by Euler’s rotation theorem), and thus regardless of the specific rigid-body motion, the axis/angle representation always exists. (Strictly speaking, one point must be held fixed in the rigid-body motion for Euler’s theorem to hold, but this is abstracted away, because when we look only at the orientation, any linear motion of the origin is effectively discarded. Thus the origin can be considered a fixed point.)

This sounds elegant. But as mentioned, the algorithm needs more than just the parent joint. If it is possible to get the whole chain of parents this way (all the way to the scene root), it could work… but I’m not sure if that would be elegant anymore, or if it’s better just to use the scene graph.

Yes, maybe it is easiest to use the scene graph to mirror the joint hierarchy, as you indicated, even if it seems less elegant.

I was trying to make a cloth-cape simulation using Bullet softbodies but I’m not very good with Bullet and/or Bullet is not very good at it (there are some Bullet cloth sim clips on youtube, but I’m not sure if they are realtime or prerendered).

The only thing that did work was the ‘cheat’ solution: I attached a rope with a weight to the character’s hips. As the character moved and stopped, the weight would swing back and forth; I would then send the delta movement of the weight (how far it moved from its rest pose) to a shader and move some of the vertices (those that I hand-painted a certain color) according to that movement.
After I updated to 1.9 (dev) my code stopped working, and I haven’t yet tried to fix it.

If I was to make this simulation more accurate, I’d use a few ropes - like 4 (front, back, left and right of the head), maybe one more for the top - but I don’t think it should be a rope, and moreover I’m not sure if using ropes is a good idea anyway; a hinge or ball constraint could be much better (faster?). If there should be collisions, then only with the head (represented as a sphere), maybe also the neck (cylinder/capsule) and shoulders (another capsule?) - I think anything more is too much for games (unless you are aiming beyond next-next-gen).

Ah, ok. A nice simple solution, though.

Now that you’ve described it, I think Dragon’s Dogma behaves as if it does something like this :stuck_out_tongue:

I see.

My approach is somewhere between: I have a discrete mathematical model built out of point masses, massless rigid (inextensible, straight) rods and bending springs. This produces a sort of a “discretized rope”, where straight segments are connected by ball-constrained joints.

The joints are programmed to push toward their neutral position with a configurable strength, creating bending rigidity. This causes the hair to bend in a natural fashion when subjected to gravity; without bending rigidity, the equilibrium position would point straight down. There is also friction in the joints, modeled as simple velocity damping, so that the motion gradually dies away instead of oscillating indefinitely as a purely elastic system would (leaving aside numerical error in the time integration, which typically introduces additional dissipation).
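To make the model concrete, here is a minimal, non-vectorized sketch of the same kind of “discretized rope”: Verlet integration with velocity damping, plus iterative constraint projection to keep the rod lengths fixed. Bending springs are omitted for brevity (so this toy version just settles hanging straight down), and all names are made up for illustration:

```python
import math

def simulate_chain(points, steps=200, dt=1.0 / 60.0,
                   gravity=(0.0, 0.0, -9.81), damping=0.1, iters=8):
    """Toy chain of point masses joined by inextensible rods.
    points[0] is the pinned root (in the real simulation it would be
    driven kinematically by the head joint)."""
    pts = [list(p) for p in points]
    prev = [list(p) for p in points]
    rest = [math.dist(points[i], points[i + 1]) for i in range(len(points) - 1)]
    for _ in range(steps):
        # Verlet step; the damping factor stands in for joint friction
        for i in range(1, len(pts)):
            p = pts[i]
            vel = [(p[k] - prev[i][k]) * (1.0 - damping) for k in range(3)]
            prev[i] = list(p)
            for k in range(3):
                p[k] = p[k] + vel[k] + gravity[k] * dt * dt
        # project the inextensibility constraints (rod lengths fixed)
        for _ in range(iters):
            for i in range(len(rest)):
                a, b = pts[i], pts[i + 1]
                d = [b[k] - a[k] for k in range(3)]
                length = math.sqrt(sum(c * c for c in d)) or 1e-12
                corr = (length - rest[i]) / length
                if i == 0:
                    for k in range(3):      # root is pinned; move only b
                        b[k] -= d[k] * corr
                else:
                    for k in range(3):      # split the correction between a and b
                        a[k] += 0.5 * d[k] * corr
                        b[k] -= 0.5 * d[k] * corr
    return pts
```

Starting a three-point chain out horizontal, it swings down under gravity with the segment lengths preserved, which is exactly the “length of each hair segment must not change” constraint from the opening post.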

What I’m essentially wondering is whether there is an efficient way to tell Bullet to do something similar (using ball constraints and something?), or whether it is better to manually implement a small custom physics code for this special purpose, as I’ve been doing so far.

I’ve been thinking about this. I agree that the head is essential. It is also easy to model, as you suggest, as a sphere. A spherical head approximation should be enough for anime-style characters.

If the hair parent joint in the character model is placed exactly at the center of the head, then it is possible to automatically determine an appropriate radius for the collision sphere by taking the smallest (or average, or largest) distance from this joint to any hair chain root joint. These distances can be computed by Panda - the head center must be an exposed joint anyway in order to connect the hair and body submodels of the multipart actor, and by some selective NodePath parenting trickery, it is possible to read the default global transformation of controlled joints, too.
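As a sketch (assuming the joint positions have already been read out of Panda into plain tuples; the helper name is made up):

```python
import math

def head_sphere_radius(head_center, hair_roots, mode="smallest"):
    """Pick a collision-sphere radius from the distances between the
    head-center joint and the hair chain root joints."""
    dists = [math.dist(head_center, r) for r in hair_roots]
    if mode == "smallest":
        return min(dists)
    if mode == "largest":
        return max(dists)
    return sum(dists) / len(dists)      # "average" mode
```

The “smallest” mode is the conservative choice: the sphere is then guaranteed not to poke out past any hair root.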

For more realistic models (such as those from MakeHuman), I’m thinking an ellipsoid could be a good match for the head. It’s almost as simple and cheap to detect collisions with (as a sphere), and it allows for oblong shapes. It might be possible to automatically determine best fit x/y/z axis lengths by solving some simple optimization problem (given the positions of the head center and the hair chain roots), but I haven’t thought about this in any detail yet. (Maybe least-squares fit the sum of distances from each hair chain root to the surface of the collision solid, parametrized by the axis lengths?)

As for the rest, what is required I think depends on the hairstyle. Ideally, I would like to have some characters with e.g. really long braids, which requires collision detection at least with the arms and the torso in addition to neck and shoulders. Maybe with the legs, too, for completeness - but here this already runs into a problem, because skirts present a special case. Capes, too.

On the technical side, spheres, ellipsoids and capsules should be sufficient as collision shapes. Sphere or ellipsoid for the head, and capsules for everything else. Capsules are nice in that they are not much more complicated to check collisions with than spheres, and they avoid the complicated handling of the end faces of cylinders (see e.g. …).
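For illustration, the sphere-vs-capsule test reduces to a clamped point-to-segment distance, which is why capsules cost barely more than spheres and need no end-face special cases. A hypothetical sketch:

```python
import math

def sphere_capsule_hit(center, radius, cap_a, cap_b, cap_r):
    """True if a sphere overlaps a capsule (the segment cap_a-cap_b
    swept with radius cap_r)."""
    ab = [cap_b[k] - cap_a[k] for k in range(3)]
    ac = [center[k] - cap_a[k] for k in range(3)]
    ab2 = sum(c * c for c in ab)
    # parameter of the closest point on the segment, clamped to [0, 1]
    t = 0.0 if ab2 == 0.0 else max(0.0, min(1.0,
        sum(ac[k] * ab[k] for k in range(3)) / ab2))
    closest = [cap_a[k] + t * ab[k] for k in range(3)]
    return math.dist(center, closest) <= radius + cap_r
```

With cap_r set to zero the same routine doubles as a sphere-vs-segment test, and with cap_a == cap_b it degenerates to the plain sphere-vs-sphere check.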

I think the speed is probably not a concern, given a few assumptions. Namely: this must be done in C++; we should keep the collision shapes simple by design; use only a handful of them per character; and keep collisions local to each character. Also, ignore collisions between hair segments; almost all of the time, the motion is such that these won’t occur. Then each joint needs to do only a small constant number of collision checks (against the targets on the body side), making this scale as O(n) in the number of hair segments per character.

The physics simulation already requires some math, so it shouldn’t be much heavier (in the relative sense) if it runs a few collision checks. This could of course also be made configurable. It may also be possible to automatically switch some checks off (e.g. torso and legs), if the hair is determined to be short enough (by summing segment lengths in the chains).

Some speed optimization should be possible by assuming that for the purposes of collisions, the mass points in the simulation can be represented by spheres. The sizes of these collision spheres can be automatically approximated as the average of the half-distances to the next point in each direction along the chain. (I.e. for point n, look at the distances to points n-1 and n+1, divide each by 2, and average the results.) The sphere shape is advantageous here, because it is the cheapest one to check collisions with, and there may be potentially dozens of hair segments per character in more complex hairstyles.
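The half-distance averaging could be sketched like this (hypothetical helper; endpoints only have one neighbor, so they use that single half-distance):

```python
import math

def point_collision_radii(chain):
    """Approximate a collision-sphere radius for each mass point as the
    average of the half-distances to its neighbors along the chain."""
    n = len(chain)
    radii = []
    for i in range(n):
        halves = []
        if i > 0:
            halves.append(math.dist(chain[i], chain[i - 1]) / 2.0)
        if i < n - 1:
            halves.append(math.dist(chain[i], chain[i + 1]) / 2.0)
        radii.append(sum(halves) / len(halves))
    return radii
```

The resulting spheres roughly tile the chain, so neighboring collision spheres just about touch when the segments are of equal length.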

The most difficult part, I think, is setting up the collision geometry on the body side automatically. It is difficult to determine the appropriate placement and size of the collision solids without any external help, because the joints themselves are just transforms, and thus have no “bone thickness” information.

This would require using some information from the actual character mesh - e.g. for a capsule, find the endpoints of the axis, and to determine the radius, compute the average (or smallest, or largest) distance of the vertices affected by the corresponding joint, as measured in the direction perpendicular to the axis of the capsule. I’m pretty sure Panda has the necessary infrastructure for this, but the technical details may become a bit hairy (no pun intended). :slight_smile:
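The radius-estimation step could be sketched as follows (assuming the positions of the vertices influenced by the joint have already been extracted into plain tuples; all names are illustrative):

```python
import math

def capsule_radius_from_vertices(axis_a, axis_b, vertices, mode="average"):
    """Estimate a capsule radius for a bone: measure each vertex's
    distance perpendicular to the bone axis, then aggregate."""
    ab = [axis_b[k] - axis_a[k] for k in range(3)]
    ab_len = math.sqrt(sum(c * c for c in ab)) or 1e-12
    u = [c / ab_len for c in ab]                      # unit axis direction
    dists = []
    for v in vertices:
        av = [v[k] - axis_a[k] for k in range(3)]
        t = sum(av[k] * u[k] for k in range(3))       # along-axis component
        perp = [av[k] - t * u[k] for k in range(3)]   # perpendicular remainder
        dists.append(math.sqrt(sum(c * c for c in perp)))
    if mode == "smallest":
        return min(dists)
    if mode == "largest":
        return max(dists)
    return sum(dists) / len(dists)
```

“Largest” gives a capsule the hair can never tunnel into; “average” gives a tighter fit at the cost of letting the odd stray vertex poke through.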

(This is of course ignoring vertex morphs. Based on my experiments, it seems Panda applies joint (bone) transforms on the CPU, but vertex morphs are applied on the GPU. In any case, their effects on vertex positions cannot be retrieved by just examining the deformed mesh geometry as Panda sees it.)

I haven’t cared much about which gen I’m aiming at - basically, for now I’m just solving interesting technical problems and slowly building my character creator while at it. The project will take some time so it’s better to plan ahead :stuck_out_tongue:

I know it’s been almost 7 years now, but was there any further progress made?
Is there any repository or similar where others could pick up this work?

I am very much interested in this.

Or is there a working solution already?

Here’s another example.