I’m desperately trying to achieve this, without success so far. My goal is to design a model in an external tool and have it usable as a visual + physical object inside Panda3D, without having to manually recreate all its topology in code.
This seems like a common need to me, so maybe I’m missing something already existing.
Here’s what I tried, for example:
```python
def model_to_bullet_node(self, model_np):
    body = BulletRigidBodyNode('BodyNode')
    for geom_np in model_np.findAllMatches('**/+GeomNode'):
        print('geomnode')
        geomNode = geom_np.node()
        ts = model_np.getTransform()
        for geom in geomNode.getGeoms():
            geom = geom.decompose()
            #mesh = BulletTriangleMesh()
            #mesh.addGeom(geom)
            #shape = BulletTriangleMeshShape(mesh, dynamic=False)
            geom = geom.makePoints()
            shape = BulletConvexHullShape()
            shape.addGeom(geom)
            body.addShape(shape, ts)
    return body
```
I get various results, from a program seemingly hanging, to garbage polygons.
For a start, I’m OK if the model has to be convex, or if a convex hull is produced out of it. I don’t want to get into the trouble of decomposing it, if I can’t get the simple version to work.
I’m not very familiar with Bullet physics, but do you mean to make a new BulletConvexHullShape object for every geom in the model? Wouldn’t it make more sense to make just one “shape = BulletConvexHullShape()”, put it above the loop, and then simply add each geom to that single shape object (with “shape.addGeom(geom)”)?
Again, I don’t know what you’re intending (or what shape your model is), but it doesn’t seem intuitive to create a completely new BulletConvexHullShape for every geom in the model.
Hi, this is the method I use to do this. My nomenclature is a bit different, but it should be intelligible. You’d probably want to change “model.findAllMatches('**/=collide')” to your +GeomNode one, since you don’t have your collision geometry tagged.
```python
def readCollisionDataFromModel(self, model, deletegeom=False):
    taggedgeoms = model.findAllMatches('**/=collide')
    if not taggedgeoms:
        print('model did not contain any collision geometry')
        return
    transform = model.getTransform()
    node = BulletRigidBodyNode('mesh')
    for geom in taggedgeoms:
        mesh = BulletTriangleMesh()
        for geom2 in geom.node().getGeoms():
            mesh.addGeom(geom2)
        shape = BulletTriangleMeshShape(mesh, dynamic=False)
        node.addShape(shape, transform)
        if deletegeom:
            geom.hide()
    self.bulletworld.attachRigidBody(node)
```
@cslos77 : To me it wouldn’t seem “not intuitive” to do so, but maybe (I’m not familiar either):

- it could be less efficient, but I don’t care about efficiency before I get something correct;
- it could allow handling of non-convex models (provided they were decomposed beforehand).

Anyway, for my first tests the model only has one geom, and reworking the loop gives the same results.
With triangle mesh/shapes, I obtain something that looks similar to my model (according to bullet debug draw), but is rotated, and scaled in one direction, and probably as a consequence, not colliding (or maybe it’s because the model is not convex).
With convex hull, if I don’t call makePoints(), the debug draw shows something like a correct convex hull for my model (although with many excessive points), but still rotated/scaled (and this time strangely, more or less colliding).
Calling makePoints() takes forever with my latest model (which is not so big).
I should mention I’m working on Linux. Could some parts of the engine be notably missing or badly implemented there?
Edit: also, my model has been created in blender, exported as .x, and converted with x2egg, if that matters.
“the right shape but rotated and scaled wrong” leads me to believe that something wonky is going on with transform matrices.
If your individual geom has a transform on it inside the egg file, then your code won’t pick up on that. It’s only correcting for the transform on the model as a whole and the vertices relative to the geom’s local origin. The difference between the geom’s local origin and the model’s origin isn’t accounted for.
Put in the line

```python
print(geom_np.getTransform())
```

after

```python
for geom_np in model_np.findAllMatches('**/+GeomNode'):
```
If these don’t come back as T:(identity), this is probably the issue. You’re not accounting for this transform when you tell Bullet what the geom looks like.
If this is the issue, I suggest you simply make sure that every object in Blender starts at the origin, has no rotations and no scale on it. You should move things into place in edit mode, not object mode.
Edit: Or you could use ts = geom_np.getTransform(render) instead of ts = model_np.getTransform() when you give the transform to addShape. This will get the transform of the geom relative to the global coordinates, which is what Bullet wants. You would need to recalculate it for every geom_np.
Scale is bad when it comes to physics. Bullet does support scale, but only to a limited degree and at the cost of performance. So do yourself a favour and get rid of any scale within your modelling application (look at the exported egg: does it have transforms with scale?).
Convex meshes should be used for dynamic objects, and they should have as few vertices as possible. Reuse them wherever possible, i.e. have multiple shapes with the same mesh.
Triangle meshes should be used for static objects, and can have lots of vertices/triangles. If you have multiple geoms with few vertices, merge them into one huge triangle mesh.
Indeed it has some non-identity transform inside.
However, after loading the model, in the code I see T:(identity) both at the model level and at the geomnode level (do the geoms themselves carry another transform under another name?).
If this is necessary, I would very much prefer to do it in code at loading time, rather than put constraints on the modelling side. Is there some way to “apply” a transform to all the vertices somehow, and remove the transform itself?
It is simpler to do this in Blender. There is no “constraint” set on modelling by using edit-mode transforms instead of object transforms. They have the exact same effect on where things show up, but they store the information in different places. Applying transforms in edit mode instead of object mode is a much simpler way of doing things. Object mode is for arranging complete objects into a scene; edit mode is for defining what those objects look like.
If you want to apply an object transform to vertices, you can select the object in Object mode and press ctrl+A. A little menu will come up with what to apply. Do it for both Location and Rotation and Scale. Your object will now have an identity transform, and your vertices will be moved by the correct amount to make them stay in the same place.
I still see this as a constraint, because it limits the range of possible edits. Now with your “apply” trick in Blender it is less constraining (only one more operation at export time, and I guess it could be automated). Using this, the scaling problem is solved, but there is still a rotation (an axis reordering, I would say). Maybe the X exporter enforces this:
I would love being able to find where panda stores this transformation. Not only would it allow me to solve my problem, but I would also progress in understanding the engine.
But that would explain, at most, differences between what I see in my editor and what I get in Panda3D. This is not what worries me (or not yet: maybe I have such differences, but they’re not important at this point).
What worries me is differences from loading a single egg file, between the visual model that Panda3D interprets out of it, and the bullet model.
Seems like the transform is in this “Root” node. So maybe the “findAllMatches” approach is a bit too brutal, and I should do a proper tree traversal instead? How should I do that?
That’s a coordinate-system transform. It’s the kind of thing that Panda has to introduce automatically whenever you load a model that’s not encoded in the same coordinate system you’re using at runtime.
In particular, that’s the transform from a y-up left-handed coordinate system (which is used by all .x files) into Panda’s default z-up right-handed coordinate system.