font (StaticTextFont) from image?

Well, I simply thought I could get rid of an extra dependency. PIL is a nice library, but this would be the only thing we’d be using it for.

Do you mean store() will convert it to RGB data? I don’t mind the extra work, but it sounds like it will take longer to finish and will take up more memory if that’s how it works.

store() will save it into a PNMImage. It will take a bit longer, sure, because there’s an additional step; but it might not be much longer. The memory usage shouldn’t be an issue, because you can allow each Texture to be deleted immediately.
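For reference, the round trip might look something like this (a minimal sketch; the placeholder size and data stand in for your real glyph data):

from panda3d.core import PNMImage, Texture

tex = Texture()
tex.setup2dTexture(32, 32, Texture.TUnsignedByte, Texture.FLuminance)
tex.setRamImage(b'\x00' * (32 * 32))  # placeholder 32x32 grayscale data

img = PNMImage()
tex.store(img)       # the extra step: copy the RAM image into a PNMImage
tex.clearRamImage()  # the Texture's copy can then be dropped immediately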

David

I mean, store() will convert it to RGB, right? So the resulting image(s) will take up more space?

PNMImage supports greyscale images. So no, not necessarily, as far as I’m aware.

Oh, so I can’t use PNMImage that way not because PNMImage doesn’t support one-channel data, but only because it can’t read such raw data?
Sounds like it wouldn’t be too hard to add support for this…

It supports grayscale images, but only by ignoring two of the three channels. Still, that’s only a minor detail; the PNMImage class won’t be kept around anyway once it’s loaded into a Texture, so I don’t see how this is an issue for memory usage one way or the other.

You’re right, it wouldn’t be difficult to add this feature. Patches, as always, are welcome.

David

Well, my first impression was that PNMImage doesn’t support one-channel images, so saving the RGB PNMImage to a Texture would cause the texture to waste some space too. Never mind, then.

I think the problem now is performance.
We convert the image data to a Texture, then a PNMImage, then a Texture again.

I wish I could contribute a patch myself but C++ is really not my language.

Before I print some dumped mesh and joint matrix data here, along with their multiplication and the actual result in Panda, I want to ask this question:
Why can’t we just bake the mesh matrices with flattenLight()? It seems that when I use flattenLight(), my model isn’t animated at all. Is there something more that flattenLight() does which isn’t mentioned in the API docs? Maybe it removes vertex weights? If so, is there another way to bake the mesh matrix without removing that data?

Normally, if you create a Character node and put all of the animated geometry below it (which is the way an animated character is supposed to be set up), flattenLight() should do the right thing, which is to not attempt to bake any matrices into vertices below the Character node. This happens because the Character node is set to stop further flattenLight() operations.

Of course, if you call flattenLight() directly on a GeomNode below the Character node, then the Character can’t interfere, and it will flatten the vertices anyway, and damage the animation.
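In Python terms, the difference is roughly this (a sketch; the model and animation names are placeholders):

from direct.actor.Actor import Actor

actor = Actor('model.egg', {'walk': 'model-walk.egg'})  # placeholder names

# Safe: flattenLight() stops at the Character node, so the animated
# vertices below it are left alone.
actor.flattenLight()

# Unsafe: calling it on a GeomNode below the Character bypasses that
# guard, bakes the transform into the vertices, and breaks the animation.
geom_np = actor.find('**/+GeomNode')
geom_np.flattenLight()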

It cannot do otherwise. The animation is intrinsically bound up with the transform. If flattenLight() applied a matrix transform to the vertices and did not remove the vertex weights, then the next time the animation adjusted a frame it would apply completely the wrong transform, because the transform space has changed but the animation hasn’t.

If what you want, though, is to pre-bake the vertices with the node transform while preserving their vertex weights, well, why don’t you just create the vertices that way in the first place, since you’re the one creating the vertices?
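For instance, when filling in the vertex table (a hedged sketch; the identity matrix and the two positions stand in for the data your loader reads from the file):

from panda3d.core import (Geom, GeomVertexData, GeomVertexFormat,
                          GeomVertexWriter, LMatrix4, Point3)

mesh_mat = LMatrix4.identMat()         # stand-in for the file's mesh matrix
raw_vertices = [(0, 0, 0), (1, 0, 0)]  # stand-in for positions from the file

vdata = GeomVertexData('eye', GeomVertexFormat.getV3(), Geom.UHStatic)
vwriter = GeomVertexWriter(vdata, 'vertex')
for x, y, z in raw_vertices:
    # Bake the mesh matrix into each vertex up front; the vertex weights
    # (in the real format) stay untouched for the Character to animate.
    vwriter.addData3f(mesh_mat.xformPoint(Point3(x, y, z)))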

David

This part got me confused.
I do call flattenLight() below the Character node. What do you mean by animation here? Will joints be removed (joint matrices “baked”) by flattenLight()? Because if it does not remove that data, then I don’t get why I see no animations, not even wrong ones.

I guess that’s an option, though not a preferred one because of the format’s structure.

The animation matrices are not stored in the GeomVertexData; they’re stored in the AnimBundles which are loaded and applied separately. So there’s no way that flattenLight() can apply the transform to both the vertices and to all of the associated AnimBundles that might be asked to play on those vertices.

So there’s no point in asking flattenLight() to preserve the vertex weights after applying a transform. Thus, it doesn’t bother, and lets the vertex weights fall away. Because of this, animation stops doing anything after a flattenLight() operation.

If we extended flattenLight() to do otherwise, to go out of its way to preserve the vertex weights after applying a transform, then the animation wouldn’t stop after a flattenLight() operation, but it would just become nonsense.

David

If I understood all this correctly, then not in our unusual case.
I will check what’s wrong with the matrices, as you suggested before, then…

Right, it might actually repair the animation in your case. But flattenLight() is not meant to be a repair tool; it’s meant to be used on geometry which is already correct, and it changes that geometry only in ways that make it render more efficiently without affecting its visual appearance. If flattenLight() behaved as you suggest, then it would violate that rule.

I think it’s better to create the geometry correctly in the first place.

David

I see.
I would still like to save your new solution for last. The thing is, like I said, the file is structured in such a way that the vertex data and everything needed to generate the mesh are at the beginning of the file, and the matrices come at the end. Sure, I could store that data in memory and create the mesh later, when I reach the matrix part; I would just need to restructure the code (although I’m also lazy when rewriting stuff, hehe). But the fact that these matrices come after that data, and that they exist at all, makes me wonder whether they might actually be used (modified) somewhere I probably missed. So if I could manage to get Panda to use the mesh/joint/animation matrices together like in the original game, that would be ideal.

I dumped the matrices for the right eyeball:

Mesh matrix:
0.959547 0.281549 0 0
0 0 1 0
0.281549 -0.959547 0 0
0.335318 -0.524948 15.3362 1

Bone matrix:
0 0 1.11781 0
0 -1.11781 0 0
1.11781 0 0 0
0.325036 15.2523 0.400683 1

Animation matrix frame 0:
-0.530577 -0.711086 0.679961 0
0.375792 -0.860429 -0.606584 0
0.909269 -0.0593265 0.647466 0
3.5971 10.948 -2.93094 1

Hm, now how can I know what ‘actual’ matrix Panda generates in the end? This data comes from the 3D files.

EDIT: Hm, I’ve been posting in the wrong thread for some time…

Huh, that’s baffling. I can’t guess how those three matrices are supposed to work together in your original animation package. You might have to do some more research to try to figure out what is really meant here.

See, your “mesh matrix” shows an offset of 15 in Z, while the “bone matrix” has an offset of 15 in Y. Curiously similar but different, almost as if there were a Y-to-Z conversion performed on one of these but not the other? Also, the rotation of the mesh matrix is way different from the rotation of the bone matrix. Are these really intended to be composed together to produce the resulting matrix? But the composition of these two matrices is something radically different from any of these, so that seems unlikely.

And the animation matrix is completely different from both the mesh matrix and the bone matrix, as well as from the composition of the two. Plus, it’s the only matrix that has a significant translation in X. Huh? How is the animation matrix meant to affect the original two matrices: does it get composed into them, or does it replace one or the other, or both? Whatever happens, it seems that the result will be in a widely different location from the original position.
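If it helps, the composition is easy to check directly in Panda; here’s a quick sketch with the dumped values (remember Panda’s row-vector convention, where the left matrix is applied first):

from panda3d.core import LMatrix4

mesh_mat = LMatrix4(0.959547, 0.281549, 0, 0,
                    0, 0, 1, 0,
                    0.281549, -0.959547, 0, 0,
                    0.335318, -0.524948, 15.3362, 1)
bone_mat = LMatrix4(0, 0, 1.11781, 0,
                    0, -1.11781, 0, 0,
                    1.11781, 0, 0, 0,
                    0.325036, 15.2523, 0.400683, 1)
# Row-vector convention: v * (mesh_mat * bone_mat) applies mesh_mat first.
print(mesh_mat * bone_mat)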

So I can’t guess how the animation is supposed to work in a way that won’t move the eyeballs out of their sockets. If you know how it’s supposed to work, then we can figure out how to make Panda do it the same way.

David

Oh, oops. Forgot to remove the y-to-z-up code for the mesh matrix (that doesn’t fix it though), sorry.

The mesh matrix puts the eyeball in the eye socket; the joint and animation matrices do their usual job.

The joint matrices are inverted and assigned to a CharacterJoint object. The animation matrices are decomposed into pos, rot, and scale data, and put into PTAFloat arrays which are used to generate an AnimBundleNode. The AnimBundleNode is assigned to a Character object which has these character joints. It works fine until the mesh matrices come into play.
As the mesh matrix in this example puts the eyeball in the eye socket, I would guess they are multiplied.
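For illustration, one way to get such a pos/rot/scale decomposition in Panda (a sketch using the dumped animation matrix; not necessarily the exact calls my loader uses):

from panda3d.core import LMatrix4, TransformState

anim_mat = LMatrix4(-0.530577, -0.711086, 0.679961, 0,
                    0.375792, -0.860429, -0.606584, 0,
                    0.909269, -0.0593265, 0.647466, 0,
                    3.5971, 10.948, -2.93094, 1)
ts = TransformState.makeMat(anim_mat)
# The components that would be stuffed into the PTAFloat tables:
print(ts.getPos(), ts.getHpr(), ts.getScale())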

This animation moves the player a few units to the left along the x axis; that’s okay. Also, the character is sitting, so the eye location will end up a bit lower.

What I get right now, when animating, is his eyeballs floating a few units above his head and maybe 8 units behind his back, and also far from each other.

If you ignore the mesh matrix, the eyes end up somewhere in the ground behind the player.

Uh, I don’t seem to get the font loader working correctly. When I create an egg font from the file the same way, it works and I can see the letters; but when I create a StaticTextFont object directly, I either get a character-code error from Panda3D when using it to render text, or I don’t see any letters and get no error message.
I made a small example with an example file, the file’s specification (it’s a very simple file format), and the loader code.

Like I said, if I use the egg classes to generate an egg from the file, everything seems fine, so I probably do something wrong with Panda’s corresponding classes.
The texture is created correctly; you can see that by dumping the texture to a file.

Here is the archive:
megaupload.com/?d=8ZW3C6HS

I’m sorry for the long code; I couldn’t make it any simpler.

I added the line “print font” to your code, and it displayed the following text:

Number of glyphs is: 6947
Glyph size (in pixels): 32
StaticTextFont bin_font; 6929 characters available in font:
  92  162  163  167  168  172  176  177  180  182  215  247  913  914  915  916 

Followed by thousands of specific character codes. None of these character codes are the Unicode characters for the letters in “test”, so it’s not surprising that it can’t render that string (and it gives you error messages about missing characters).
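A quick way to see the mismatch:

print([ord(c) for c in 'test'])  # [116, 101, 115, 116]
# none of these codes appear in the list the font reports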

When I give it the string u’\x5c\xa2\xa3\xa7’ instead, which is the first four character codes mentioned, it doesn’t complain but I don’t see anything. This tells me that the individual glyphs you fed it are invisible, or their vertices are not within the range (0, 1).

So then I noticed that your vertex coordinates are Y-up instead of Panda’s default Z-up: addData3f(0,1,0) instead of addData3f(0,0,1). In a Z-up coordinate system, the Y coordinate should be 0, and the vertical coordinate is in Z. I corrected this, and now I can see the glyphs! (Your characters were just being viewed edge-on before.)

But there’s only one glyph. It looks like your offset value is also incorrect: it’s setting the offset in Y instead of X. I changed self.vwriter2.addData3f(0,1,0) to self.vwriter2.addData3f(1,0,0), and now I can see four glyphs.
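Putting both fixes together, the writes look roughly like this (a sketch; only the two corrected calls are from your actual code, and the quad corners are paraphrased):

# Vertical coordinate goes in Z (Panda is Z-up by default); Y stays 0:
self.vwriter.addData3f(0, 0, 0)   # bottom-left of the glyph quad
self.vwriter.addData3f(1, 0, 0)   # bottom-right
self.vwriter.addData3f(1, 0, 1)   # top-right
self.vwriter.addData3f(0, 0, 1)   # top-left (was addData3f(0, 1, 0))

# The advance offset moves the next glyph along X, not Y:
self.vwriter2.addData3f(1, 0, 0)  # (was addData3f(0, 1, 0))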

The texture is white on black, whereas most fonts usually use white on transparent. Perhaps you meant to use FAlpha instead of FLuminance, and “A” instead of “G”, when you set up the texture. This also, of course, requires enabling transparency, with a call to fontNP.setTransparency(TransparencyAttrib.MAlpha).
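In code, something like this (a sketch; glyphsize, datachunk, and fontNP as in your loader):

from panda3d.core import Texture, TransparencyAttrib

texture = Texture()
texture.setup2dTexture(glyphsize, glyphsize,
                       Texture.TUnsignedByte, Texture.FAlpha)  # not FLuminance
texture.setRamImageAs(datachunk, "A")  # not "G"
# ...and transparency must be enabled on the font's NodePath:
fontNP.setTransparency(TransparencyAttrib.MAlpha)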

Now I see white symbols on a transparent background.

I’ll leave it up to you to figure out what the character code mapping is supposed to be; I guess you’ve already solved this problem correctly in the .egg file case.

David

Weird, those are the vertex positions I used in my bin2egg script, except that I positioned the offset vertex correctly there. I also used the same color mode in that script, and it seemed fine.

Anyway, it looks right now, except that the UVs are wrong.

BTW, I tried to do the same with PNMImage, and I can’t get black to be rendered as transparent.

self.fontimage = PNMImage(pixelsize, pixelsize, 1)
...
datachunk = fileobject.read(1 * self.glyphsize * self.glyphsize)
image = PNMImage(self.glyphsize, self.glyphsize, 1)
# PNMImage can't read() raw grayscale data, so we create a Texture
# object instead and then convert it to a PNMImage with store()
texture = Texture()
texture.setup2dTexture(self.glyphsize, self.glyphsize, Texture.TUnsignedByte, Texture.FAlpha)
texture.setRamImageAs(datachunk, "A")
texture.store(image)
...
self.fontimage.copySubImage(image, self.xindex * self.glyphsize, self.yindex * self.glyphsize)

For historical reasons, the default coordinate system for an egg file, if you don’t specify otherwise, is Y-up. The egg loader automatically converts the egg’s coordinates to Z-up on loading.
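If you generate the egg with the egg classes, you can also declare Z-up explicitly, so that no conversion is needed on load (a small sketch):

from panda3d.egg import EggData
from panda3d.core import CSZupRight

egg = EggData()
egg.setCoordinateSystem(CSZupRight)  # writes <CoordinateSystem> { Z-Up }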

Did you remember to add the line:

fontNP.setTransparency(TransparencyAttrib.MAlpha)

?

David