Generic conversion model -> bullet body


I’m desperately trying to obtain this, without success so far. My goal is to design a model in an external tool and use it as a visual + physical object inside Panda3D, without having to manually recreate all its topology in code.

This seems like a common need to me, so maybe I’m missing something already existing.

Here’s what I tried, for example:

    def model_to_bullet_node(self, model_np):
        body = BulletRigidBodyNode('BodyNode')
        for geom_np in model_np.findAllMatches('**/+GeomNode'):
            print 'geomnode'
            geomNode = geom_np.node()
            ts = model_np.getTransform()
            for geom in geomNode.getGeoms():
                geom = geom.decompose()
                #mesh = BulletTriangleMesh()
                #shape = BulletTriangleMeshShape(mesh, dynamic=False)
                geom = geom.makePoints()
                shape = BulletConvexHullShape()
                shape.addGeom(geom)
                body.addShape(shape, ts)
        return body

I get various results, ranging from the program seemingly hanging to garbage polygons.

For a start, I’m OK if the model has to be convex, or if a convex hull is produced out of it. I don’t want to get into the trouble of decomposing it if I can’t get the simple version to work.

Any hint would be welcome.

I’m not very familiar with Bullet physics, but do you mean to make a new BulletConvexHullShape object for every geom in the model? Would it not make more sense to make a single “shape = BulletConvexHullShape()” above the loop, and then simply add each geom to that one shape object (with “shape.addGeom(geom)”)?

Again I don’t know what you’re intending (or what shape your model is), but it doesn’t seem intuitive to create a completely new BulletConvexHullShape for every geom in the model.

Hi, this is the method I use to do this. My nomenclature is a bit different, but it should be intelligible. You’d probably want to change "model.findAllMatches('**/=collide')" to your +GeomNode one, since you don’t have your collision geometry tagged.

    def readCollisionDataFromModel(self, model, deletegeom=False):
        taggedgeoms = model.findAllMatches('**/=collide')
        if not taggedgeoms:
            print "model did not contain any collision geometry"
        transform = model.getTransform()
        node = BulletRigidBodyNode('mesh')
        for geom in taggedgeoms:
            mesh = BulletTriangleMesh()
            for geom2 in geom.node().getGeoms():
                mesh.addGeom(geom2)
            shape = BulletTriangleMeshShape(mesh, dynamic=False)
            node.addShape(shape, transform)
            if deletegeom: geom.hide()
        return node

Thanks for the answers.

@cslos77 : To me it wouldn’t seem “not intuitive” to do so, but maybe (I’m not familiar either):

  • it could be less efficient. but I don’t care about efficiency before I get something correct
  • it could allow handling of non convex models (provided they were decomposed beforehand)

Anyway for my first tests, the model only has one geom, and reworking the loop gives the same results.

With triangle mesh/shapes, I obtain something that looks similar to my model (according to bullet debug draw), but is rotated, and scaled in one direction, and probably as a consequence, not colliding (or maybe it’s because the model is not convex).

With convex hull, if I don’t call makePoints(), the debug draw shows something like a correct convex hull for my model (although with many excessive points), but still rotated/scaled (and this time strangely, more or less colliding).
Calling makePoints() takes forever with my latest model (which is not so big).

I should mention I’m working on Linux. Could some parts of the engine be notably missing or badly implemented there?

Edit: also, my model has been created in blender, exported as .x, and converted with x2egg, if that matters.

“the right shape but rotated and scaled wrong” leads me to believe that something wonky is going on with transform matrices.

If your individual geom has a transform on it inside the egg file, then your code won’t pick up on that. It’s only correcting for the transform on the model as a whole and the vertices relative to the geom’s local origin. The difference between the geom’s local origin and the model’s origin isn’t accounted for.
Put this line inside your loop:

    for geom_np in model_np.findAllMatches('**/+GeomNode'):
        print geom_np.getTransform()
If these don’t come back as T:(identity), this is probably the issue. You’re not accounting for this transform when you tell Bullet what the geom looks like.

If this is the issue, I suggest you simply make sure that every object in Blender starts at the origin, has no rotations and no scale on it. You should move things into place in edit mode, not object mode.

Edit: Or you could use ts = geom_np.getTransform(render) instead of ts = model_np.getTransform() when you give the transform to addShape. This will get the transform of the geom relative to the global coordinates, which is what Bullet wants. You would need to recalculate it for every geom_np.
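To see concretely why the composed transform matters, here is a plain-Python sketch (no Panda3D involved; `mat_mul`, `apply_point`, and the example matrices are ad-hoc illustrations, not engine API). A transform on the geom node that the model-level transform doesn’t include gives a different net result:

```python
# Pure-Python sketch: the transform handed to Bullet must be the *composed*
# one (model transform times geom transform), not the model's alone.

def mat_mul(a, b):
    """Row-major 4x4 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply_point(m, p):
    """Apply a row-major 4x4 matrix to a 3D point (w = 1)."""
    v = list(p) + [1.0]
    out = [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]
    return tuple(out[:3])

# Model node: translate by (5, 0, 0).
model_t = [[1, 0, 0, 5],
           [0, 1, 0, 0],
           [0, 0, 1, 0],
           [0, 0, 0, 1]]

# Geom node inside the model: swap the Y and Z axes
# (the kind of matrix a .x exporter leaves behind).
geom_t = [[1, 0, 0, 0],
          [0, 0, 1, 0],
          [0, 1, 0, 0],
          [0, 0, 0, 1]]

# Net transform of the geom relative to the world, parent-then-child.
net = mat_mul(model_t, geom_t)

p = (0.0, 1.0, 0.0)
print(apply_point(net, p))      # (5.0, 0.0, 1.0): swapped, then translated
print(apply_point(model_t, p))  # (5.0, 1.0, 0.0): model transform alone misses the swap
```

If only `model_t` is passed to `addShape`, the vertices end up where the second line says, not the first, which matches the "rotated and scaled wrong" symptom.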

You might want to have a look at the source code of the following methods:

  void add_shapes_from_collision_solids(CollisionNode *cnode);

  NodePathCollection from_collision_solids(NodePath &np, bool clear=false);

A few hints:

  • Scale is bad when it comes to physics. Bullet does support scale, but only to a limited degree and at the cost of performance. So do yourself a favour and get rid of any scale within your modelling application (look at the exported egg -> does it have transforms with scale?)

  • Convex meshes should be used for dynamic objects, and they should have as few vertices as possible. Reuse them wherever possible, i.e. have multiple shapes with the same mesh.

  • Triangle meshes should be used for static objects, and can have lots of vertices/triangles. If you have multiple geoms with few vertices merge them to one huge triangle mesh.

Indeed it has some non-identity transform inside.
However, after loading the model, in the code I see T:(identity) both at model level and at geomnode level (do the geoms themselves carry another transform under another name?).

If this is necessary, I would very much prefer to do it in the code at loading time, rather than put constraints on modelling. Is there some way to “apply” a transform to all vertices somehow, and remove the transform itself?

It is simpler to do this in Blender. There is no “constraint” placed on modelling by using edit transforms instead of object transforms. They have exactly the same effect on where things show up, but they store the information in different places. Applying transforms in edit mode instead of object mode is simply the cleaner way of doing things: object mode is for arranging complete objects into a scene, edit mode is for defining what those objects look like.

If you want to apply an object transform to the vertices, you can select the object in Object mode and press Ctrl+A. A little menu will come up asking what to apply. Do it for Location, Rotation, and Scale. Your object will then have an identity transform, and your vertices will be moved by the correct amount so that they stay in the same place.
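What “apply” does can be sketched in a few lines of plain Python (a toy illustration, not Blender’s API; `transform` and the object values are made up): the object transform is multiplied into every vertex, then reset to identity, and world positions stay the same.

```python
# Toy sketch of Blender's "Apply Location/Rotation/Scale" (Ctrl+A):
# bake the object transform into the vertices, then reset it to identity.

def transform(scale, offset, v):
    """Per-axis scale followed by a translation."""
    return tuple(s * c + o for s, c, o in zip(scale, v, offset))

obj_scale = (2.0, 2.0, 2.0)
obj_offset = (1.0, 0.0, 0.0)
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]

world_before = [transform(obj_scale, obj_offset, v) for v in verts]

# "Apply": move the object transform into the vertex data...
verts = world_before
obj_scale, obj_offset = (1.0, 1.0, 1.0), (0.0, 0.0, 0.0)

# ...and world positions are unchanged under the new identity transform.
world_after = [transform(obj_scale, obj_offset, v) for v in verts]
assert world_after == world_before
```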

Edit: unfortunate wording.

I still see this as a constraint, because it limits the range of possible edits :slight_smile: Now with your “apply” trick in Blender it is less constraining (only one more operation at export time, and I guess it could be automated). Using this solves the problem for scaling, but there is still a rotation (axis reordering, I would say). Maybe the X exporter enforces this:

    <Group> Root {
      <Transform> {
        <Matrix4> {
          1 0 0 0
          0 0 1 0
          0 1 0 0
          0 0 0 1
        }
      }
      <Group> Cube {
        <Transform> {
          <Matrix4> {
            1 0 0 0
            0 1 0 0
            0 0 1 0
            0 0 0 1
          }
        }
      }
    }
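That Root matrix is (in its upper-left 3x3 part) exactly a Y/Z axis swap. A small pure-Python check (independent of Panda3D; `apply3` and `mul3` are ad-hoc helpers) shows what it does to a point, that it is its own inverse, and that it flips handedness:

```python
# The 3x3 part of the <Matrix4> on Root above: a Y/Z swap.
M = [[1, 0, 0],
     [0, 0, 1],
     [0, 1, 0]]

def apply3(m, p):
    """Apply a 3x3 matrix to a point."""
    return tuple(sum(m[i][k] * p[k] for k in range(3)) for i in range(3))

def mul3(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

print(apply3(M, (1, 2, 3)))   # (1, 3, 2): Y and Z swapped

# Applying it twice is the identity: the swap is its own inverse.
assert mul3(M, M) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# Its determinant is -1: it converts between left- and right-handed
# coordinate systems (y-up .x files vs. Panda's z-up right-handed default).
det = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
       - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
       + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
assert det == -1
```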

I would love being able to find where panda stores this transformation. Not only would it allow me to solve my problem, but I would also progress in understanding the engine.

Every exporter should add a line to the egg file that specifies the coordinate system used when exporting. For example:

<CoordinateSystem> { Z-up } 

If there is no such line then Panda3D will assume a default.

But that would explain, at most, differences between what I see in my editor and what I get in Panda3D. This is not what worries me (or not yet: maybe I have such differences, but they’re not important at this point).

What worries me is differences from loading a single egg file, between the visual model that Panda3D interprets out of it, and the bullet model.

For the record, any transform including scale or rotation can be applied to a model at runtime, and baked onto the vertices, with:

    model_np.flattenLight()
But, yeah, it’s probably best to understand your modeling package well enough not to require this runtime step.


Calling flattenLight() on the model before using it to create the bullet shapes seems to do the job!

This is good news but I still can’t understand where the transform was hiding: all I could see in my code were identities.

After loading the model you have a NodePath object. Please call ls() on this object and post the output.

ls() returns None (before and after calling flatten).

It’s not the return value that’s desired, I believe, but rather the console output that ls() should produce.

Haha sorry I didn’t realize this was causing an output :slight_smile:
So here it is. Before flatten:

ModelRoot myplane.egg
    PandaNode Root T:m(hpr 0 90 0 scale 1 1 -1)
      GeomNode Cube (1 geoms: S:(ColorAttrib MaterialAttrib))

After flatten:

ModelRoot myplane.egg
    PandaNode Root
      GeomNode Cube (1 geoms: S:(ColorAttrib MaterialAttrib))

Seems like the transform is in this “Root” node. So maybe the “findAllMatches” approach is a bit too brutal, and I should do a proper tree traversal instead? How should I do that?

That’s a coordinate-system transform. It’s the kind of thing that Panda has to introduce automatically whenever you load a model that’s not encoded in the same coordinate system you’re using at runtime.

In particular, that’s the transform from a y-up left-handed coordinate system (which is used by all .x files) into Panda’s default z-up right-handed coordinate system.


Ok. I thought I could handle it just like any transform (flattenLight() seems to be able to handle it).

I came up with something that is supposed to do the same as flattenLight(), more or less:

    def model_to_bullet_node(self, model_np):

        def add_shapes(body_container, nodepath, higher_transform):
            node = nodepath.node()
            local_transform = nodepath.getTransform()
            global_transform = higher_transform.compose(local_transform)
            if type(node) == GeomNode:
                shape = BulletConvexHullShape()
                for geom in node.getGeoms():
                    geom = geom.decompose()
                    shape.addGeom(geom)
                body_container.addShape(shape, global_transform)
            for child in nodepath.getChildren():
                add_shapes(body_container, child, global_transform)

        body = BulletRigidBodyNode('BodyNode')
        add_shapes(body, model_np, TransformState.makeIdentity())
        return body

But the result is not the same (some axis is flipped). If you guys can bear with me a little more: what am I doing wrong here?
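The traversal pattern itself can be sanity-checked with plain matrices (a pure-Python sketch, no Panda3D; `collect`, `mat_mul`, and the tiny tree are illustrative): walking a tree while composing parent-then-local transforms does hand each leaf the root’s transform, so if the flip persists, the composition order or the transforms being composed are the place to look.

```python
# Pure-Python sketch of the add_shapes traversal: walk a node tree,
# composing each node's local matrix onto the accumulated parent matrix.

def mat_mul(a, b):
    """Row-major 4x4 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

IDENTITY = [[float(i == j) for j in range(4)] for i in range(4)]

# A node is a tuple: (local_matrix, payload_or_None, children).
def collect(node, parent_mat, out):
    local, payload, children = node
    net = mat_mul(parent_mat, local)   # parent-then-local, like compose()
    if payload is not None:
        out.append((payload, net))
    for child in children:
        collect(child, net, out)

# Root swaps Y/Z (the .x coordinate transform); the leaf carries a "geom".
swap_yz = [[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]
tree = (swap_yz, None, [(IDENTITY, 'cube-geom', [])])

shapes = []
collect(tree, IDENTITY, shapes)
assert shapes == [('cube-geom', swap_yz)]   # the leaf inherits the root's swap
```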

Have you tried exporting with Yabee instead of .x to .egg? Yabee writes the files as z-up eggs in one step. It’s probably easier that way.

You can find it here: … 41&start=0