mesh matrices and textures

Hello.
We have mesh matrices in the 3d format we use. The format holds a few meshes, each with its own textures and matrix. The issue is that each mesh can have multiple textures, so in Panda I need to create more than one geom per mesh to be able to assign all the textures. This is where the problem starts: when I create multiple geoms instead of one, their center points are not the same, and the matrices don't transform them as they should. How would you solve this issue?

I think you need to apply multiple textures to a single geom, rather than making multiple geoms. Try looking through the manual for ‘multitexturing’ and TextureStage.

You shouldn't need duplicate geometry in order to create multiple texture effects, unless I have misunderstood something about your question.
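A minimal multitexturing sketch, in case it helps (this assumes the usual ShowBase globals, and the file names and the model NodePath are just placeholders):

```python
from panda3d.core import TextureStage

# Layer a second texture on top of the base texture of the same model.
base_stage = TextureStage.getDefault()
detail_stage = TextureStage("detail")
detail_stage.setMode(TextureStage.MModulate)  # multiply with the layer below

model.setTexture(base_stage, loader.loadTexture("base.png"))
model.setTexture(detail_stage, loader.loadTexture("detail.png"))
```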

Good luck!

Upon a second reading, maybe you are trying to apply a single texture layer to a geom, but that single layer is being built from many sub-textures. I'm still not entirely sure I understand exactly what you are asking to do, but if this sounds more like the situation you are describing, you could try to use your UV coords cleverly and still apply all the textures to only a single geom. (What 3d platform are you using? You may be making extra work by not using an egg export tool…)

Alternatively, you could parent all of your geoms to a single GeomNode and apply your transformations to that node instead, but this will not allow for as much freedom in the possible transforms. Again, I can't speak to your exact use.

Hm, I think you still don’t understand the issue.

My format allows holding multiple meshes in a single file. Each mesh can have multiple textures and a transform matrix. When I say multiple textures, I mean some of its vertices are assigned to one texture and others to another.
Now, how do I represent this data in Panda? In Panda, geometry is stored in a data object called a Geom. Correct me if I'm wrong, but Geoms can only have one texture assigned to them, so I can't map the meshes of the 3d format 1:1 to Geoms. The only solution seems to be to break the mesh into multiple Geoms, according to the vertices. But then you can't properly assign the matrices anymore: if you break the mesh down into two, their pivot or center points won't be the same as the original mesh's, so the transform matrix won't transform them properly.
I hope you understand, I’m trying to be as clear as I can.

That’s not correct. 3D models in Panda can have up to 4 (or up to 8) textures assigned.

panda3d.org/manual/index.php … troduction

You are talking about multitexturing: assigning textures to all faces of the same Geom on top of each other. In this case you can have 2-8 texture “layers”, depending on your GPU (not Panda).
What I’m talking about here is, again, having different textures for different faces of your Geom (Mesh in 3d format), and that I think is not possible.

I think it is. All you need is multiple sets of texture coordinates, one per texture, and the right texture wrap mode, probably WMBorderColor. You should set the texture coordinates in such a way that only those faces that are supposed to be covered by a specific texture stay within the [0, 1] UV range. It should work this way.

Obviously, you are bound by the texture count limit here, which would be especially painful if you want anything more than a diffuse texture.
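If it helps, here's roughly what that could look like; just a sketch, assuming the vertex data already carries one named UV set per texture (I'm calling them "uv0", "uv1", … here), the decal combine mode as one possible way to stack the layers, and the usual ShowBase loader global:

```python
from panda3d.core import Texture, TextureStage, VBase4

def apply_per_face_textures(nodepath, texture_paths):
    # One TextureStage per texture. Faces whose UVs for that set fall outside
    # [0, 1] sample the fully transparent border color instead of the texture.
    for i, path in enumerate(texture_paths):
        tex = loader.loadTexture(path)
        tex.setWrapU(Texture.WMBorderColor)
        tex.setWrapV(Texture.WMBorderColor)
        tex.setBorderColor(VBase4(0, 0, 0, 0))

        stage = TextureStage("layer-%d" % i)
        stage.setMode(TextureStage.MDecal)   # later layers cover earlier ones where opaque
        stage.setTexcoordName("uv%d" % i)    # assumed per-texture UV set names
        nodepath.setTexture(stage, tex)
```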

About dividing the model into multiple parts, though: I don't think anyone could give you any real advice without knowing what format you actually use (I assume it's custom) and how it works. However, looking at it from the Panda level, you can simply parent all the parts to an empty node placed wherever you expect the center to be, relative to the whole model, and apply all transforms to that empty node. That way, it should effectively behave like one object.
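In code that could be as simple as something like this (a sketch; render, the list of per-texture GeomNodes, and mesh_matrix stand in for whatever your loader produces):

```python
# Gather the separately built parts under one empty node and transform only
# that node, so the parts behave as a single object.
mesh_root = render.attachNewNode("mesh-root")
for part_geom_node in part_geom_nodes:   # one GeomNode per texture
    mesh_root.attachNewNode(part_geom_node)

mesh_root.setMat(mesh_matrix)            # the mesh's transform matrix from the file
```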

That would be a very inefficient approach, really a hack, and like you said, it could use up the allowed texture count and would prevent you from using those textures for other things, like shadows, normal maps, projected textures, etc.

I already said how my format works: each mesh (a geom in Panda) has a transform matrix, the same kind of transform matrix used in Panda, your 3d modeller, or anything 3d related. The only difference is the orientation, which Panda (and most libraries dealing with 3d) can convert.

Yes, I thought about this and it could work; I was wondering if there was a cleaner way to achieve it, though. I'd rather not end up with an empty parent node with the actual geometry nodes attached to it.

I think the answers so far are your best options. A quick summary:

  1. Pre-composite all your textures into a single texture. Use your UV coords to make the correct sub-portion of your composite texture show on each geom (see the sketch after this list).

(And, just as a note, this will actually load faster than loading all your textures individually. There is no performance hit for using your UV coords this way.)

  2. Break your geoms apart and parent them to another node which you will operate on. (Cleaner than you seem to think; hierarchy is basically for this.)

Option 2 will have worse performance than 1, but you can probably get around that by flattening everything after loading.
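For option 1, the UV adjustment is just arithmetic done at conversion time; a tiny sketch, assuming the composite texture is a simple grid of equally sized tiles:

```python
def to_atlas_uv(u, v, tile_index, n_cols, n_rows):
    """Remap a UV coordinate from one original texture into its tile of a grid atlas."""
    col = tile_index % n_cols
    row = tile_index // n_cols
    return (col + u) / n_cols, (row + v) / n_rows
```

For option 2, the "flattening" mentioned above would typically be a call to NodePath.flattenStrong() on the parent node after loading, which bakes the transforms into the vertices and merges the children back together.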

Good luck!

The 1st solution won't work because we have 600 models already, and that's just the official stuff. We would need to reconvert all of them, modify the tools which export them so they consolidate those textures themselves, and also tell people to edit their own models to work with the new code. Even if we did all that, it still wouldn't work, because for some models we would need to scale the images down to fit them into a huge 4096x4096 texture, which is the limit for some hardware.

The 2nd solution was already mentioned by coppertop, and I've already replied to that. No need to repost things.

I agree that multitexture isn’t really the answer here. There’s no reason you can’t have a different Geom for each different texture you want to apply.

But I don’t understand the original problem. What does the Geom have to do with the transform’s center point? If by “center point” you mean origin, or (0, 0, 0) point, of course all of your Geoms can certainly share the same origin. This is just a matter of how you define the vertices. In any case, the transform is a property of the GeomNode, not of the Geom; and you can certainly attach all of your Geoms to the same GeomNode, and then there is absolutely no doubt about them all sharing a common transform.
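Something along these lines, in other words (a sketch; geom_a/geom_b, the textures, mesh_matrix, and render stand in for whatever your loader produces):

```python
from panda3d.core import GeomNode, RenderState, TextureAttrib

# Several Geoms, each carrying its own texture via its RenderState, all in one GeomNode.
geom_node = GeomNode("mesh")
geom_node.addGeom(geom_a, RenderState.make(TextureAttrib.make(tex_a)))
geom_node.addGeom(geom_b, RenderState.make(TextureAttrib.make(tex_b)))

np = render.attachNewNode(geom_node)
np.setMat(mesh_matrix)   # one transform, shared by every Geom in the node
```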

David

You mean what the transform has to do with the Geom's center point (origin)? Well, correct me if I'm wrong, but I think the center point is the first vertex you create, unless Panda recalculates it from the bounding box or something. But in either case, the origin of a single Geom, or the origins of the Geoms you create by using the start and end vertex indices of the "Mesh" which are assigned to different textures, won't be the same as the original mesh's. (A Mesh is like a Geom in the 3d format, but it can have more than one texture assigned, by specifying a start and end vertex index for each.) So if you assign the Mesh's matrix from the 3d format to both (or all) of those Geoms, they won't be transformed by the transform matrix as they should, because they have different origins now.

The first vertex has nothing to do with the origin. The origin is always at (0, 0, 0), by definition. The first vertex could be (1, 2, 3), or (3, 0, 1), or (-100, -200, -400), or really anything at all. Whatever numbers you use for the first vertex (or any other vertex) defines the position of the vertex relative to the (0, 0, 0) origin, but doesn’t change the origin itself.

David

I must be confusing it with something else then.
In that case it shouldn't matter how many Geoms I generate: if I assign them all to a single GeomNode and apply the matrix to it, it should look the same as making a single Geom and applying the matrix, right?
There must be something wrong with the model I’m testing then.
Just in case, Character Geoms (which are animated with joint matrices) can have matrices assigned too, right?

Right, that’s what I was trying to explain above.

Yes, any node can have a transform; but usually you wouldn’t have both a transform on the GeomNode in addition to a joint matrix animating the vertices, because the two transforms can interoperate in confusing ways. And there wouldn’t be any reason to do this anyway–if you’re already computing the joint transform, why would you then want an additional transform to be inherited from the scene graph?

David

Interpolate in confusing ways?
Well, the format doesn't seem to distinguish between skinned meshes and static ones. If the number of bones is set to 0, that section is just empty. In other words, you can have a matrix for any mesh.
I tested most of the 3d models, and levels seem to use them often. I could only find one case of animated meshes also having mesh matrices: a character's eyeballs. So it's quite noticeable…

I don’t really understand what you are referring to; the egg loader won’t create internal transforms when it loads a Character model, unless you configure it to, and then only on nodes that don’t themselves contain dynamic geometry.

Still, as I said, it technically works to do this. It’s just weird and potentially confusing. Why would you want to further transform dynamic geometry, instead of simply doing all of the required transforms in the vertices themselves?

But maybe we’re talking about two different things, since I don’t fully understand what you are referring to. Is there a problem that you’re trying to solve?

David

Well, I honestly don't know why it's like this. Like I said, I'm not the one who made the 3d format specs or the 3d models, and I don't have the chance to redesign the format without breaking support for hundreds of assets made by the original devs and users.

What I meant was that I think the egg format in theory allows this as well, correct me if I'm wrong. From that I guessed that Panda supports it, which you confirmed later.

Anyway, I’m having some issues doing this.

First problem: some Characters have matrices for their eyeballs, for some reason. If you don't assign the matrices, the eyeballs are positioned at (0,0,0) (at the character's feet).
I wrote the code for assigning the matrices, and as you can see in the image below, they're positioned correctly now, but they're still out of place when animating.

Any ideas?

Second problem: I want to convert from the y-up system of the 3d format to Panda's z-up system, but the matrices seem to get messed up for some reason. I don't know much about matrices, so there could be something obvious I'm missing.

Well, of course I can’t tell from looking at the pictures what you’re doing wrong. But I agree these are common kinds of transform errors.

To understand the eyes problem, you need to understand that the animation table defines a transform matrix for each frame of the animation (it just defines it componentwise, as rotation, position, scale, etc. separately, but taken together all the components define a matrix). The animation model also defines a matrix for the rest frame of the animation–this is the original transform that is applied to the vertices. When the animation is bound, each frame, the matrix from the animation table replaces the rest frame matrix. If the eyes are moving dramatically as soon as the animation is applied, it means the animation table matrix is very different from the rest frame matrix.

As to your coordinate system transform problem, well, it does appear that you’re not applying the correct transform. This should normally be just a 90-degree rotation, and you would apply it only to the very top of the hierarchy (because all the child nodes would then inherit it automatically). If you wanted to get fancy, you could apply a 90-degree rotation and also a 90-degree counter-rotation to each node of the hierarchy, but I wouldn’t recommend doing it that way unless you really understand transforms. Just apply a single 90-degree rotation to the top node.

Or don’t even attempt to convert the coordinate system, and rotate the model 90 degrees after you load it.

Or run Panda in y-up mode with the config variable “coordinate-system y-up”.
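A rough sketch of those last two suggestions (assuming "model" is the top NodePath of the loaded hierarchy):

```python
from panda3d.core import Mat4, CSYupRight, CSZupRight, loadPrcFileData

# Option A: apply a single y-up -> z-up conversion at the top of the hierarchy.
conversion = Mat4.convertMat(CSYupRight, CSZupRight)
model.setMat(model.getMat() * conversion)

# Option B: skip the conversion entirely and run Panda itself in y-up mode.
# (This must be set before ShowBase is created.)
loadPrcFileData("", "coordinate-system y-up")
```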

David

I’m not sure what all this means.
There are two matrices: mesh (NodePath) matrix and joint matrix. Is the joint matrix (animation table) applied on top of the NodePath matrix? Or does it replace the former?
I tried to simply call NodePath.flattenLight() before assigning animation, but then the animation wouldn’t play.

The Geoms in the file each have their matrix. There is no hierarchy for the Geoms in the file. I just call yToZUpMat() on the matrices before assigning them to the GeomNode NodePaths.

I'm not sure if any of these is a good idea, as we also want to allow people to use Panda's egg format together with the game's old in-house format, so I would want both loadModel and our function to produce the same output.