OK. Let me clarify: it is certainly possible to subclass C++ classes in Python; Actor and DirectGui are both examples of this being done. However, when you do this, you can thereafter only use the resulting class in Python; the C++ side of it knows nothing about its new Pythonic nature. This is why, for instance, when you use render.find() to retrieve an Actor from the scene graph, you only get back a NodePath, not the actual Actor object.
So if you subclassed PNMWriter, you couldn’t use your new Python class with PNMImage, which is itself a C++ class.
Now, PNMImage and friends do have some potentially useful tools for drawing (PNMPainter and so on). They’re fairly limited, but they work for what they do; PIL probably has a much richer suite. If you really wanted to use PNMImage on principle (though I’d recommend you get over your reluctance to combining redundant libraries), the right way to do it would be to use Texture.load() to copy the image into a Texture every frame. It’s “slow” by hardware rendering standards, but it’s not any slower than any other copy-the-pixels technique would be. (For instance, a PNMWriter that did memory DC writes would just do exactly the same thing.)
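Here’s a minimal sketch of that approach (the drawing calls themselves are just illustrative):

from panda3d.core import PNMImage, PNMPainter, PNMBrush, Texture, LColor

image = PNMImage(256, 256)
painter = PNMPainter(image)
painter.setPen(PNMBrush.makeSpot(LColor(1, 0, 0, 1), 3, True))   # a small red fuzzy spot
painter.drawPoint(128, 128)

tex = Texture('map')
tex.load(image)   # copies the PNMImage pixels into the Texture

You would redraw into the PNMImage and call tex.load() again whenever the map changes.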
Note that you could certainly integrate with PIL and use its library directly as well. If PIL gives you the image data in the form of a string, as you describe in the OP, then you could do something like this:
from panda3d.core import PTAUchar

pt = PTAUchar()
pt.setData(myImageData)   # myImageData: the raw pixel string from PIL
tex.setRamImage(pt)
to copy that image data into a Texture for rendering. (Note that you have to set up the texture first with tex.setup2dTexture(xsize, ysize, type, format) to tell the Texture what kind of image data you’re giving it.)
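Putting the pieces together, a full sketch might look like this (xsize, ysize, myImageData, and the RGB format are placeholders for whatever your PIL image actually is):

from panda3d.core import Texture, PTAUchar, CardMaker

tex = Texture('pil-image')
tex.setup2dTexture(xsize, ysize, Texture.TUnsignedByte, Texture.FRgb)

pt = PTAUchar()
# Note: Panda stores ram images in BGR order, rows bottom-up, so you
# may need to convert on the PIL side.
pt.setData(myImageData)
tex.setRamImage(pt)

cm = CardMaker('map-card')
cm.setFrame(-1, 1, -1, 1)
card = aspect2d.attachNewNode(cm.generate())
card.setTexture(tex)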
This technique, again, will be “slow” by hardware rendering standards, because you’re doing all of the work of generating the image on your CPU, and not taking advantage of your hardware-accelerated rendering capabilities at all. But in practice, the performance may be acceptable. This is, after all, basically how we play AVI files as texture images; it can be done, and sometimes it’s the only way to achieve a particular effect.
But what are you drawing on your map? Circles and squares and dots and stuff? However you are contemplating placing circles and squares and dots on an image, you can use that same logic to place a model of a circle, square, or dot on a scene graph.
I would generally recommend using a completely separate scene graph for your map, rather than having a “zoomed out” view of the same scene graph. You could do it with a zoomed out view, though, if that is more appropriate (for instance, if you want your map to have similar detail to that which you see in the main screen).
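As a rough sketch of the separate-scene-graph approach (mapRender, shipX, shipY, mapScale, and the dot model are all made-up names):

from panda3d.core import NodePath

mapRender = NodePath('mapRender')   # the map's own scene graph root
marker = loader.loadModel('dot')    # hypothetical model of a dot
marker.reparentTo(mapRender)
marker.setPos(shipX * mapScale, shipY * mapScale, 0)

Whatever logic would have told you where to stamp a dot on the image tells you where to setPos() the marker instead; a camera for this graph is set up in the DisplayRegion example below.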
In render2d, the default coordinate system is (x, 0, z), where the Y coordinate is 0 or unimportant, and the Z coordinate controls the vertical position on the screen. Note that this is just a 90 degree rotation from (x, y, 0), so if you prefer to make Z be your 0 coordinate instead of Y, you can just put the whole thing under a 90-degree rotation node. Or, if you are putting it in its own DisplayRegion, you can set up that camera according to your own preferences.
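The rotation trick, concretely, is just something like:

mapRoot = aspect2d.attachNewNode('mapRoot')
mapRoot.setP(90)   # the children's XY plane now maps onto the screen (Y becomes screen-vertical)

Anything you parent to mapRoot can then be positioned with ordinary (x, y, 0) coordinates.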
You can certainly use multiple DisplayRegions on one screen for drawing side-by-side or picture-in-picture views. This is the primary reason we have the DisplayRegion class in the first place.
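For instance, a picture-in-picture map in the upper-right corner might be set up like this (the region bounds and film size are arbitrary, and mapRender is the map graph from the sketch above):

from panda3d.core import Camera, OrthographicLens

dr = base.win.makeDisplayRegion(0.7, 1.0, 0.7, 1.0)   # left, right, bottom, top
lens = OrthographicLens()
lens.setFilmSize(100, 100)                    # how much of the map is visible
mapCamera = Camera('mapCam')
mapCamera.setLens(lens)
mapCameraNP = mapRender.attachNewNode(mapCamera)
mapCameraNP.setPosHpr(0, 0, 50, 0, -90, 0)    # hover above the map, looking straight down
dr.setCamera(mapCameraNP)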
In Pirates, the compass view is achieved by parenting little squares to a flat model of a compass background, which is in turn parented to aspect2d. The whole node then gets rotated around according to the direction you’re facing. Similar tricks are used by the minimaps in sample code posted on these forums.
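In rough code, the compass trick looks something like this (the model name is made up, and the sign of the rotation may need flipping for your conventions):

from direct.task import Task

compass = loader.loadModel('compass_background')   # hypothetical flat model
compass.reparentTo(aspect2d)
compass.setScale(0.2)
compass.setPos(1.0, 0, -0.8)

def updateCompass(task):
    compass.setR(-base.camera.getH())   # counter-rotate against the camera heading
    return Task.cont

taskMgr.add(updateCompass, 'updateCompass')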
David