COLLADA importer

Thanks a lot! How is bone support? My team uses bones to lay out portions of stuff for UT3, and we wonder if we might use that setup for 2aw. We'll test bones and tell you if something breaks.

Joints should basically export fine, although the animations are not working well yet.

Well, I’ve tried out your model and it looks fine to me - can you post some screens showing how it is supposed to look?
(Also, the textures are missing.)

Did you change anything with the importer?

All the turrets are located inside the model at (0, 0, 0); you have to turn on wireframe to see them. They should be smaller and located on the little turret platforms on the sides, top, and bottom, not at (0, 0, 0) inside the model.

I see the turrets. I still don’t understand – getPos(render) on the turrets definitely doesn’t return 0, 0, 0 at all.
Do you have a reference .egg file that actually is exported correctly, so I can compare the files?

I’ve managed to fix the bug. It was a really stupid bug: I just had to replace all <Group> nodes by <Instance> nodes. The turrets show up fine now. (Or is it too ugly to do it this way?)
I’ve uploaded it on the same URL.

Another question: COLLADA files support having instances of other nodes. Reading the EggSyntax file, it says this regarding instances of other nodes (using the {{}} stuff):

A special syntax of <Instance> entries does actually create shared geometry in the scene graph.  However, please note that this special syntax is not supported by Panda3D at this time.  It is documented here against the day that it will be supported.

Has this changed? I ask because that document is quite old, so it might be outdated. If it’s implemented, I can translate it directly from COLLADA. If not, would I then just duplicate the referenced node myself if multiple instances occur in the file? (The latter is what I’m doing atm.)

That’s acceptable, but because you have to do this, it means you have violated one of egg’s principal tenets: the vertices should be stored in global coordinates, not local to their transforms. It’s a weird convention, because virtually every other 3-D model format stores vertices in local coordinates, but it does have some nice advantages (for instance, you can easily hand-edit an egg file to adjust the transforms, or take them out altogether, without disturbing the vertices).

Fortunately, the egg library makes it easy to follow the egg convention, and apply the appropriate conversions to your vertices, from local to global, as you add them to the vertex pool. Just multiply them by egg_node->get_node_to_vertex().
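As a toy illustration of that conversion, here is a sketch with a hand-rolled matrix type standing in for Panda3D's LMatrix4d, so it needs nothing outside the standard library. One caveat: the egg library itself uses the row-vector convention (vertex * matrix), while this sketch uses column vectors, so the composition order is mirrored.

```cpp
#include <array>

// A minimal row-major 4x4 matrix -- a stand-in for LMatrix4d, just to
// make the local-to-global conversion concrete.
struct Mat4 {
    std::array<double, 16> m{};
    static Mat4 identity() {
        Mat4 r;
        for (int i = 0; i < 4; ++i) r.m[i * 4 + i] = 1.0;
        return r;
    }
    static Mat4 translate(double x, double y, double z) {
        Mat4 r = identity();
        r.m[3] = x; r.m[7] = y; r.m[11] = z;
        return r;
    }
    Mat4 operator*(const Mat4 &o) const {
        Mat4 r;
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                for (int k = 0; k < 4; ++k)
                    r.m[i * 4 + j] += m[i * 4 + k] * o.m[k * 4 + j];
        return r;
    }
    std::array<double, 3> xform_point(double x, double y, double z) const {
        double v[4] = {x, y, z, 1.0}, out[4] = {0, 0, 0, 0};
        for (int i = 0; i < 4; ++i)
            for (int k = 0; k < 4; ++k)
                out[i] += m[i * 4 + k] * v[k];
        return {out[0], out[1], out[2]};
    }
};

// The net node-to-global matrix is the composition of every transform
// from the root down to the node; in the real egg library this is the
// role played by egg_node->get_node_to_vertex().
inline Mat4 node_to_global(const Mat4 &parent_net, const Mat4 &local) {
    return parent_net * local;
}
```

A vertex given in a node's local space is then converted by multiplying it through `node_to_global(...)` before it is added to the vertex pool.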

It hadn’t changed before yesterday, but when I saw your post, I thought, gee, I’ve been meaning to make that change for a while, and I don’t think it’s very hard. So I did. So, as of today, you can use the syntax as described in the document to store true shared instances.


Awesome! You rock, man! Thanks a lot! :slight_smile:

EDIT: hmm, this is going to be a little bit tricky. I have my function like this:

PT(EggGroup) DAEToEggConverter::process_node(const FCDSceneNode* node) {

Inside the function, the EggGroup is created, transforms are applied, and the group is returned. In the function that calls this one, I do the reparenting. Got any ideas?

The only solution I can see is to rework the function to do the parenting first. The node’s parentage is critical information for determining the proper coordinate space of its vertices, so you can’t really assign vertices until after the node is in the right place.

Or, you can continue to keep all the groups as <Instance>s. That will work fine, though it’s not standard egg. It might also give you trouble when it comes time to make animations work properly.
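The parent-first rework can be sketched as follows, with stand-in types in place of FCDSceneNode and EggGroup and a toy one-dimensional "transform"; all names here are illustrative, not the converter's actual code. The point is the shape of the recursion: the group is attached to its parent before any geometry is processed, so the net transform is already known when vertices need converting to global space.

```cpp
#include <memory>
#include <string>
#include <vector>

// Stand-in for FCDSceneNode: a name, a toy local "transform", children.
struct SceneNode {
    std::string name;
    double local_x = 0;
    std::vector<SceneNode> children;
};

// Stand-in for EggGroup: accumulates the net transform from the root.
struct GroupStub {
    std::string name;
    double net_x = 0;
    std::vector<std::unique_ptr<GroupStub>> children;
};

// Parent-first shape of process_node: create the group, attach it to its
// parent, THEN process geometry and recurse into children.
inline GroupStub *process_node(const SceneNode &node, GroupStub *parent) {
    auto group = std::make_unique<GroupStub>();
    group->name = node.name;
    group->net_x = parent->net_x + node.local_x;  // net transform known here
    GroupStub *raw = group.get();
    parent->children.push_back(std::move(group));
    // ...the real converter would add vertices here, multiplying each by
    // the node-to-global transform before putting them in the pool...
    for (const SceneNode &child : node.children)
        process_node(child, raw);
    return raw;
}
```

Instead of returning a dangling group for the caller to reparent, the caller passes the parent in, and the function can compute the proper coordinate space immediately.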


Thanks. I have some more questions though:

  • Are transforms inside <Group> elements also global, or are they relative to the parent node?
  • What about <Joint> transforms: are they global or relative?

Also, do you think this would be worth including in Panda3D, once it’s stable? It’s just two directories that go into pandatool, a few lines in ptloader, and of course references to FCollada in the ppremake files.

<Group> and <Joint> transforms are both relative to their parent. It’s only vertices that are given in global space. I don’t really know why; it’s lost in the mists of time.

I’d love to have this become an official part of Panda.


Ah, okay. Are the transforms in an EggXfmSAnim also relative to the parent, or global, like the vertices?
Currently I’m just copying those from collada without any conversion, but that might explain why my animations look like this. :slight_smile:

Also, is it possible to apply ColorBlendAttribs per-primitive instead of per-group? Collada stores them in the materials, which are applied per primitive.

Transforms in an EggXfmSAnim are relative to their parent. Transporter accidents like you’re getting are really common when first writing an egg converter, though. I’d start debugging it by converting just the rest pose as a single-frame animation. In the rest pose, the transform in the EggXfmSAnim should exactly match the transform in the joint. If that doesn’t come out looking like the model in its rest pose, then it must be that your joint and anim transforms don’t exactly match. If it works properly, then try a slightly modified pose (move the arms a bit or something), so that the rest transforms in the joints are no longer exactly the same transforms as in the animation frame, and see what happens.

Sorry, the color blend can only be applied per group. You’ll have to subdivide your mesh into collections of polygons with the same color blend applied.
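That subdivision can be sketched as a simple partition of the mesh's polygons by blend mode; the `Polygon` record and the mode names below are made up for illustration, standing in for whatever the converter reads out of the COLLADA material.

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical polygon record, tagged with the color-blend mode of the
// material it uses (taken from the COLLADA material in the converter).
struct Polygon {
    int id;
    std::string blend_mode;
};

// Partition the polygons into one bucket per blend mode; each bucket can
// then be emitted as its own egg group carrying that blend attribute.
inline std::map<std::string, std::vector<Polygon>>
split_by_blend(const std::vector<Polygon> &polys) {
    std::map<std::string, std::vector<Polygon>> buckets;
    for (const Polygon &p : polys)
        buckets[p.blend_mode].push_back(p);
    return buckets;
}
```

Each resulting bucket becomes a group with a single color-blend setting, which sidesteps the per-group restriction.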


Thanks for the useful advice. However, exporting with the bind pose as the anim transform doesn’t look much different. I’m guessing I did something wrong with the vertex references or something.
Sorry if this is a noob question, but what exactly is this bind-pose matrix? It’s a relative transform that transforms the joints to the original position, right?
FCollada provides both a joint transform and an inverse bind-pose matrix; should I somehow transform one with the other?

The transform that appears in the <Joint> entry is the initial transform of each joint, when the actor has not yet had any animations bound to it. This also defines the transform space of the vertices.

When an animation is playing, this initial transform is replaced with the transform that appears in the current frame of the animation. (There are other animation systems in which the animation modifies the initial rest transform. Not so with Panda; here it completely replaces the initial rest transform.)

Thus, if you correctly export a one-frame animation of the initial rest pose, then it will have the following two properties:
(1) The transform defined within each <Xfm$Anim_S$> will describe the exact same transform as the one defined within the corresponding <Joint>.

(2) The model with the animation playing on it will look unchanged from the model without the animation playing on it.

If (2) is not the case, then it follows that (1) is also not the case. Therefore, you should take a look at your egg files and convince yourself that they are, indeed, different. (You can compose the components of the transform in the <Xfm$Anim_S$> to compare it to the <Joint> matrix, of course, or decompose the matrix and compare it to the components, to help you determine what went wrong.) Then you should examine your inputs and see what you need to do to make them be the same.

I don’t know what is meant by FCollada’s joint transform and inverse bind-pose matrix.


Ah, thanks. I did some more research and found out that COLLADA actually provides three matrices for the joint: one is the initial joint transform, another is the global skin bind matrix, and a third is the per-joint inverse bind-pose matrix.
I also found out that the world-space joint transformation can be calculated through:

initialWorldspaceJointTransform * bindPoseInverse * skinBindMatrix

How do I convert this global transform into joint space? Sorry, I suck at matrices and transformations.

I found another curiosity though:
As you can see, part of the model does show up, which leads me to believe that it’s actually the vertex influences that are wrong.


Does:

initialWorldspaceJointTransform * bindPoseInverse * skinBindMatrix

really compute the worldspace joint transform for each frame? Then perhaps:

initialRelativeJointTransform * bindPoseInverse * skinBindMatrix

computes the relative joint transform for each frame. But it’s probably not that easy, because you probably need to account for the transforms that are animating above it. So what you really need is:

currentNetRelativeJointTransform * inverse(currentNetWorldspaceJointTransform) * above calculation

for each frame.

Still, this is all a little beside the point. If the vertices are coming out moved at all with a single-frame animation, it means that the <Xfm$Anim_S$> transform does not exactly match the <Joint> transform. This is true regardless of the vertex influences. Whether the vertex influences are wrong or right, if the transforms exactly match, the vertices won’t move from the original rest position, because you’re replacing transform A with transform A. Since they are moving, it follows that the transforms don’t exactly match.

If Collada insists on giving you worldspace transforms instead of relative transforms, you can hack around that by flattening out the hierarchy and moving every joint to the root of the hierarchy. That makes the relative transform the same thing as the worldspace transform. If that works, then you just need to figure out how to convert worldspace to relative transforms.
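The worldspace-to-relative conversion can be sketched with translation-only transforms, where composing is addition and inverting is negation; the same identities hold with full 4x4 matrices, with matrix multiply and matrix inverse in place of + and -.

```cpp
// Translation-only "transform": just an offset along one axis, to keep
// the algebra visible.
struct Xform { double x; };

inline Xform compose(Xform a, Xform b) { return {a.x + b.x}; }
inline Xform inverse(Xform a) { return {-a.x}; }

// worldspace -> relative: undo the parent's net worldspace transform,
// i.e. relative = inverse(parent_world) * child_world.
inline Xform world_to_relative(Xform parent_world, Xform child_world) {
    return compose(inverse(parent_world), child_world);
}
```

Note that the flattening trick from the text falls out of this directly: a joint reparented to the root has an identity parent transform, so its relative transform equals its worldspace transform.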


Thanks! I got a step further.
COLLADA indeed gives the bind-pose matrices in world-space coordinates. But I found out that the actual per-frame animation matrices are in joint-space coordinates! So I exported one with the matrix of the first frame, and it looks like this:
Looks like the first frame is set to the bind pose this time, or something?

However, the rest of the matrices still don’t look right yet:
But at least they are not as much out of proportion as they were before.

That’s looking better all right!

Hmm, the next step might be to try to export the same thing using a different static pose. If that works properly, then try mixing and matching: apply the animation for pose B onto the model for pose A, and vice-versa. They should apply correctly, and after binding, should move the model from the rest pose to the anim pose you have applied. If that doesn’t work, then I think the vertex influences must be wrong.


I haven’t entirely figured out the problem yet, but I did add it to CVS. I don’t think I broke anything (ppremake will only build it if you have HAVE_FCOLLADA set), but I did make some changes to central files like Global.pp and Package.pp, so let me know if anything breaks.

Hi pro_rsoft!

I’m currently trying to incorporate your COLLADA importer into our pipeline.
Since I need only basic stuff (that is: no animation), I figured it would be OK.
(FYI, I need:

  • Scene graph with transforms.
  • Basic geometry, including normals, uvcoords, binormals, tangents, etc.
  • Multitexturing.

Problem: it looks nice on some meshes, but fails on others. By “fail” I mean that the converter works without errors, and the egg file seems correct, but once loaded in Panda, no polygons are displayed.

After some tests, it seems that it fails with meshes created with the COLLADA export plug-in for Maya (Maya 2008, ColladaMaya v3.05B).

I don’t know if it’s related to the bug you already spoke of (with group or joint).

I’m still using Panda3D 1.5.2, with the compiled version that you provided here.

I can give you some of the meshes that don’t work, if you need them.

Anyway, thanks for your work; it will really help us!