font (StaticFont) from image?

You can examine the nodes that you get back from loadEggData() for specific details, but I can answer your questions from the high level.

You will need a different Geom for each letter. Each Geom will contain a GeomTriangles with two triangles in it (or a GeomTristrips with a two-triangle strip, but the GeomTriangles is a little bit simpler to create). All the Geoms can share the same GeomVertexData.

You will also need another Geom for each letter to hold the GeomPoints object, which contains a single point. You will want to use a different GeomVertexData to hold all of the points, to avoid polluting the main one with useless data.

You can add both of these Geoms to the same GeomNode, and name the GeomNode for the character code.

Then attach all of these GeomNodes to the same parent node.

David

Hm, OK. I have another question. The addVertices() method of GeomTriangles takes only index numbers, not tuples or vertex objects, but that’s not a problem. What I don’t get is how a GeomTriangles knows which GeomVertexData to read the vertex data from. You suggested having a separate GeomVertexData for the GeomPoints, but I don’t see a way to tell GeomTriangles which GeomVertexData to use. It seems like you specify which GeomVertexData to use when creating the Geom, but that’s after adding the vertices to the GeomTriangles. Or do GeomTriangles just store the indices all along?

Also, I’m having trouble feeding the grayscale PIL image object to Panda. I’ll probably switch to PNMImage later, but just curious:

fontimage = Image.new("L", (pixelsize, pixelsize))

# fill the texture...

width = fontimage.size[0]
height = fontimage.size[1]

data = fontimage.tostring()
tex = Texture()
tex.setup2dTexture(width, height, Texture.TUnsignedByte, Texture.FLuminance)
tex.setRamImageAs(data, "Luminance")

fontNP = NodePath(fontnode) # fontnode is a PandaNode which holds the GeomNodes
fontNP.setTexture(tex)
font = StaticTextFont(fontNP.node())

I get this again, so I’m probably not using the right mode.

Assertion failed: image.size() == (size_t)(_component_width * format.size() * imgsize) at line 872 of c:\buildslave\dev_sdk_win32\build\panda3d\panda\src\gobj\texture.cxx
Traceback (most recent call last):
  File "loadfont.py", line 793, in <module>
    font = loadBinFont('font.bin')
  File "loadbin.py", line 7185, in loadBinFont
    tex.setRamImageAs(data, "Luminance")
AssertionError: image.size() == (size_t)(_component_width * format.size() * imgsize) at line 872 of c:\buildslave\dev_sdk_win32\build\panda3d\panda\src\gobj\texture.cxx

As described in the manual, the Geom object contains a pointer to the GeomTriangles and also the associated GeomVertexData. So it’s the Geom that associates the two.

The second parameter to Texture.setRamImageAs() is a string with one letter for each channel in your image data. You are passing the string “Luminance”, which has nine letters, but you don’t have nine channels. Use the string “G” to indicate a one-channel grayscale image.

David

I seem to get a black texture when using PNMImage instead of PIL.

Here are the two versions (simplified), hope you can spot the error:

PIL

def readGlyph():
	# read from the font file
	
	image = Image.frombuffer("L", (glyphsize, glyphsize), data, "raw", "L", 0, 1)
	fontimage.paste(image, (xindex * glyphsize, yindex * glyphsize))

fontimage = Image.new("L", (pixelsize, pixelsize))

# readGlyph() for each glyph in the file

data = fontimage.tostring()
tex = Texture()
tex.setup2dTexture(width, height, Texture.TUnsignedByte, Texture.FLuminance)
tex.setRamImageAs(data, "G")

PNMImage

def readGlyph():
	# read from the font file
	
	image = PNMImage(glyphsize, glyphsize, 1)
	image.read(data)
	fontimage.copySubImage(image, xindex * glyphsize, yindex * glyphsize)

fontimage = PNMImage(pixelsize, pixelsize, 1)

# readGlyph() for each glyph in the file
tex = Texture()
tex.load(fontimage)

Thanks.

I don’t see anything obviously wrong. Try using PNMImage.write() at strategic places to write out the image-in-progress, and see if you can discover at what point it goes wrong.

David

I tried saving the individual glyph images - no warning message, no output.
write()-ing the final big PNG gives a black image.

No output? Are you sure you are actually calling readGlyph?

David

Yes, a simple print statement inside the function shows that it is called the required number of times.

And image.write() and fontimage.write() from within that function generates no output at all?

Like I said, fontimage.write() generates a blank (black) PNG; image.write() generates nothing.

I don’t understand what you mean when you say “nothing”. A transparent image? A zero-length file? Or no file at all? Because I assure you that PNMImage.write() will generate some output. If it doesn’t write a file at all, it must at least write an error message to the console. If this isn’t happening, then it follows that you aren’t actually calling PNMImage.write(), and we need to figure out why.

David

I mean no file. I’m pretty sure the function that calls write() is being called.
Let me test it one more time…

Okay I tested again. Nope, no file and no message.

image = PNMImage(glyphsize, glyphsize, 1)
image.read(datachunk)
print 'I was here'
image.write('dump.png')
print 'I was here as well'

in readGlyph(), where “datachunk” is the pixel data.

And I can’t believe I forgot to mention that I get this warning message for every glyph:

Non utf-8 byte in string: 0xff, string is 'C:\Users\Me\Desktop\
.
pz'

which is caused by the line

datachunk = file.read(1* glyphsize * glyphsize)

which is very weird, as I don’t get this when using corresponding PIL functions.

Ah, you’re right, my mistake: if you haven’t successfully read the image, then image.write() will in fact quietly do nothing at all. (But it will return False instead of True. I bet it’s returning False in your case.)

So it follows that you haven’t successfully read the glyph image, which is also confirmed by your bizarre error message.

It seems that the file object you’re calling file.read() on is trying to decode the data as utf-8, as if you had told it the data was utf-8 encoded when you opened it. That’s a mistake; make sure you are opening a binary, unencoded data stream, not a text stream of any kind.

Or maybe something’s going weird with the filename you’re trying to open? It’s strange that it appears to report the filename itself as the error. How are you creating this file object?

David

Yeah, you’re right. It returns False.

Which is weird because it works when using PIL in the same code.

It’s opened in “rb” mode.

filepath = "font.bin"
file = open(filepath, 'rb')

Ah, I think the problem is here:

image.read(datachunk) 

The parameter to image.read() is the filename you should open and read, not the string data to process. Since you’re feeding it the string data, it’s trying to open that data as if it were a filename, and of course it’s failing.

If you want to pass string data to image.read(), you need to wrap it in a StringStream object first.

David

You’re right.
It seems it needs more than that:

:pnmimage(error): Cannot determine type of image file .
Currently supported image types:
  SGI RGB                         .rgb, .rgba, .sgi
  Targa                           .tga
  Raw binary RGB                  .img
  SoftImage                       .pic, .soft
  Windows BMP                     .bmp
  NetPBM-style PBM/PGM/PPM/PNM    .pbm, .pgm, .ppm, .pnm
  JPEG                            .jpg, .jpeg
  PNG                             .png
  TIFF                            .tiff, .tif

And what format is the image data? Is it one of the supported formats? Most of these image formats include a magic number that allows Panda to identify the image format from the data stream directly, but it is also possible to tell it explicitly what format the data is to be received in (the second parameter to read() would be a string like ‘png’ or ‘tiff’ or ‘jpg’).

But it must be one of these supported formats. The whole point of using the PNMImage approach is that it can decode a supported image format. If you have raw RGB data instead, you should just shove it directly into the texture and avoid the PNMImage approach.

David

The data has a single byte for every pixel, because there’s one channel (it’s grayscale).

Well, how can you generate a single texture from smaller glyph image data? In PIL it’s done with

image.paste(smallerimage, (x, y))

With PNMImage I can use

image.copySubImage(smallerimage, x, y)

Or are you saying to have a few thousand tiny (32x32) textures, one for each glyph?
(If you mean adding the data pixel by pixel, that’s too slow in Python.)

In that case, PNMImage is probably not the tool for you. It just wasn’t designed to be used in this way. You can read raw RGB data with the “img” file type (though it’s a bit clumsy to specify the size of the image), but this file type assumes triples of R, G, B, and not a single-byte grayscale image. There’s no easy way to read raw grayscale data directly into a PNMImage.

You can continue to use PIL; or you can use some other tool that can read raw grayscale data, including the Texture class itself. (I suppose you could even create a temporary Texture object for each glyph, stuff the raw grayscale data into it, and then use Texture.store() to copy it into a PNMImage, so you could collect multiple glyphs into a single large image. But this seems pretty convoluted.)

David