Designing an RTS Scene Editor using Panda

Hi all,

     I am a new user of Panda3D. From reading other posts on this forum (or at least those I was able to access), I have learnt that PNMImage is a rather slow solution for real-time 2D graphics. I have also learnt that PIL - which offers a quick way to edit images on the fly - lacks an interface with Panda. I hence have a few questions:
  1. From the way I look at it, it would not have been that hard to provide interfacing code with PIL. PIL images have a very important method called tostring, which exposes the image data as a string for easy access by external programs. I wonder why there is no interface in PNMImage to support reading from such a string? Surely someone could implement it? (I would, but I sadly have major problems getting C++ compilers to behave.)

  2. I am planning to implement an RTS map editor in Panda. The terrain-editing component requires the ability to render changes to terrain geometry on the fly - that is to say, the user must be able to pull and push terrain blocks easily and see the changes reflected in the editing window. In addition, the window will show a 2D minimap overlay, which will be updated as the user amends the terrain or adds/removes models from it. There are a few sub-questions from this:

    a) Just to confirm, do the Terrain Blocks of a GeoMipTerrain use the same GeomVertexData, or do they use separate GeomVertexData?

    b) Can PNMImage handle the requirements of minimap updating, or is there a [pythonic] workaround for the speed of PNMImage’s pixel changing functions?

    c) How do you use PNMPainter? It is just one of the many classes that remain sadly undocumented.

Regards,
The newbie arix

OK, I'll answer some of your questions:

  1. There is an interface with PIL (and pygame); I'll dig it up, but it's rather slow.

  2. Yeah, that should not be too hard.

a) No; the structure of GeoMipTerrain is complex.

b) PNMImage? Not really; it will probably be too slow.

c) PNMPainter - I would not use it for real-time updating, and for offline work there is PIL.

Observations: You want to draw a 2D-texture minimap; that is not a very good idea. Better is to draw a 3D representation of the world, but flat - that is, draw everything in 3D rather than in 2D. With 2D rendering you would have to send the entire texture over every frame; with 3D rendering you just send the positions of your units and other stuff down the pipe. Much less data, and the GPU is geared for that.

If I were to do a minimap like that, I would just use the standard view, but upon zooming out I would turn all the units into icons. Then just have the minimap be the same view as the real view, only located in the corner and with its camera looking down on everything.
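
Something like this, roughly (untested; the region fractions and camera position are made up):

mapCam = base.makeCamera(base.win, displayRegion = (0.75, 1.0, 0.75, 1.0))
mapCam.reparentTo(render)
mapCam.setPos(0, 0, 300)   # hover high above the battlefield
mapCam.setHpr(0, -90, 0)   # look straight down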

I highly recommend looking at Spring, the open-source RTS. I feel it's the best RTS ever made - it has been my favorite game for over 2 years now. You might want to steal some of its approaches, or maybe your RTS is better done as a Spring mod. Spring is open source, so you can even add features you like. Porting Spring to Panda3D is also a great idea I have toyed around with - just my $.02.

  1. Treeform means something like this (note that it's still slow):

# assumes: from pandac.PandaModules import PTAUchar, Texture
# grab pygame's framebuffer as a raw RGBA string
ram = pygame.image.tostring(pygame.display.get_surface(), "RGBA", True)
x = PTAUchar()
x.setData(ram)
# hand the raw bytes to the Panda texture (CMOff = no compression)
yourPanda3DTexture.setRamImage(x, Texture.CMOff)

a) They use separate GeomVertexDatas, one per chunk. If you find me a way to make them share one without requiring a full update, I'll use that.

b) That really depends on how you design everything to work. It's very hard to tell before you've actually tried it.

c) Yeah, pity it's undocumented. I planned on documenting it a few months ago, but I never really had time for it.
Well, it's quite easy to use, actually; just dig a bit in the API. It's kind of like this:
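
(A rough sketch from memory - untested, so check the API reference for exact signatures:)

from pandac.PandaModules import PNMImage, PNMPainter, PNMBrush, Vec4

img = PNMImage(256, 256)
painter = PNMPainter(img)
# the pen draws points, lines and outlines; the fill brush fills interiors
painter.setPen(PNMBrush.makeSpot(Vec4(1, 0, 0, 1), 2, True))
painter.setFill(PNMBrush.makeSpot(Vec4(0, 1, 0, 1), 4, True))
painter.drawPoint(5, 5)
painter.drawLine(0, 0, 100, 100)
painter.drawRectangle(10, 10, 50, 50)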

Just keep in mind that PNMBrush has static constructors.

Treeform,

  1. PIL and PyGame? Ohh, I read pro-rsoft's post. I thought I had read somewhere in the forums that setRamImage was only available in C++.

2a) Not very explanatory. By the way, is there a (design) limit to the size of GeoMipTerrain HeightFields?

2b) Ya, k.

2c) I'm looking for real-time updating.

Observations:

  1. Even if you modify just one pixel, you have to send the whole thing over?

  2. Aren’t icons 2D? Meaning that they would be … err … textures? Anyway, could you embed a viewport in a DirectGui Widget? And generally, are DirectGui Widgets subclassable from Python?

  3. Perhaps I might help you try and port it. But we would have to port from C++ to Python too, right? Or is there a Python binding for TA Spring?

By the way, I am looking through the TA Spring source for the minimap, and I notice the use of the function glVertex2f. Is that a geometry-drawing function, or a texturing function similar to glTexCoord2f?

Rgds,
Arix

P.S.: Actually, I just realized that PyOpenGL could be used to port over TA Spring easily. But we would need pySDL/pygame, as well as PyLua.

pro-rsoft,

  1. Really separate? So there are lots of duplicate vertices in a GeoMipTerrain mesh, then? If I translated a single block upwards, would I get a floating block?

  2. Hmm…

  3. Well, about 90% seems undocumented, in the reference pages at least. The manual covers a few functions, but not all.

Rgds,
Arix

P.S.: A question: why are PNMWriter and PNMReader not exposed to Python?

(1) Not really many duplicate vertices, but yeah, there are some. The poly count is not affected by this fact, though. And yeah, you could translate a single block upwards. But I don't recommend doing that; I don't know if the GeoMipTerrain will like it.

(3) It's the Python classes that are undocumented, unfortunately indeed. I doubt the 90% figure, though.

(PS) PNMWriter and PNMReader are internal classes that handle the loading and writing of a PNMImage. They are not intended to be used separately. Though if you have a good reason to want Python wrappers for them, it should be just a matter of adding one line to the build script.

  1. Well, I need the terrain to be deformable on a real-time basis for the user.

  2. The online documentation (under the “Reference” link) seems to show C++, rather than Python, code. “::” is the C++ scope-resolution operator, right?

  3. Well, creating a memory DC for PNMImage with a PNMWriter-derived class.

Rgds,
Arix

(1) Yeah, the terrain remains deformable in real time. After you have deformed it, you can either manually call update_block() on the blocks you have updated (see the API page for more details), or regenerate the entire terrain using generate() after every paint action.
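
In the regenerate-everything approach, a paint action might look roughly like this (a sketch; "terrain" is your GeoMipTerrain and "heightImg" the PNMImage heightfield, both made-up names):

def paintHeight(x, y, delta):
  # nudge one heightfield pixel up or down, clamped to the [0, 1] range
  g = heightImg.getGray(x, y)
  heightImg.setGray(x, y, min(max(g + delta, 0.0), 1.0))
  terrain.setHeightfield(heightImg)
  terrain.generate()   # brute-force rebuild of the whole terrain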

(2) Since there are Python wrappers, use the Python operator “.” instead :wink: Remember that function names in C++ are written like set_collide_mask, while in Python you need to use setCollideMask instead.

(3) If you really think it's useful, request it from the Panda3D devs. I think they will be happy to add that one line to the build script.

Aside - What happened to Treeform?

  1. I see an update() function, but not an update_block() function.

  2. I see.

  3. Who are the devs?

Rgds,
Arix

What happened to treeform? I have no clue.

(1) Ah, that's right, I remember. I made it a private function because, if one used it to update the blocks to a different level than they already are, you would get odd gaps between the blocks. That's why I disabled it; it would only cause complications. But now that I think about it more, if you set bruteforce to true it doesn't matter, because the level is then equal across all blocks.
So, you could either regenerate the entire terrain, or I could expose that function to Python. Your pick. I don't think I could get the change in before 1.5.3, though, since the release maintainer has already tagged the stuff for 1.5.3.
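
For the bruteforce route, the setup would look something like this (a sketch; the names and the heightfield filename are made up):

# assumes: from pandac.PandaModules import GeoMipTerrain, Filename
terrain = GeoMipTerrain('editorTerrain')
terrain.setBruteforce(True)   # uniform LOD, so no cracks between updated blocks
terrain.setHeightfield(Filename('height.png'))
terrain.getRoot().reparentTo(render)
terrain.generate()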

(3) These. The two most active men-in-charge on the forums are drwr and Josh Yelon.

As pro-rsoft suggests, I would be happy to expose PNMWriter to Python, but it’s hard to imagine how it could be useful to you there. The purpose of PNMWriter is to stream the PNMImage into a particular graphics file format, like JPG or TIFF or something. It’s entirely C++ code. Even if we exposed it, all you could do is call the existing C++ code–you wouldn’t be able to subclass it from Python to make a new kind of writer class. That’s not the way the C++/Python interface layer works.

Let’s take a step back and ask what exactly you are trying to achieve. For the purposes of presenting the user with a minimap, the most practical, most efficient, and also incidentally simplest and easiest (!) method is to assemble simple geometry into a scene graph and then render that scene graph. This is what your graphics hardware is good at, and it’s what Panda is good at; it doesn’t make sense to do anything different when the scene graph solution is so easy. You can do this either with a DisplayRegion, an offscreen texture, or simply by parenting your minimap geometry into render2d.

David

- That's what I meant, but drwr said it better!

David,

Thanks for your prompt reply.

  1. I guess we could provide accessor classes - sort of wrappers for the wrappers. As they would be pure Python classes, they could be subclassed. My request was to have a new memory format created, something like a wxMemoryDC. I find it ironic that PNMPainter is exposed to Python, yet it is essentially redundant if all PNMImage can do is load and save files to the hard disk, which in any case makes amending images quite slow. Yes, pro-rsoft mentions PIL and PyGame; but to be frank, PyGame and PIL offer a lot of other image-manipulation tools which are really not needed in a game. (Plus, PyGame even includes a custom graphics engine. Why one should need two engines in a single game is truly a puzzle.) Since basic 2D image manipulation - drawing and colouring of simple graphics - is already available in the well-documented PNMImage set, I figure only a few tweaks to the accessor interface are required; see the sketch below.
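
For instance, something like this (a hypothetical pure-Python accessor, purely to illustrate the idea; assumes PNMImage is imported from pandac.PandaModules):

class ImageCanvas:
  # wraps a PNMImage; being pure Python, this can be freely subclassed
  def __init__(self, xsize, ysize):
    self.img = PNMImage(xsize, ysize)
  def setPixel(self, x, y, r, g, b):
    self.img.setXel(x, y, r, g, b)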

ASIDE - Does that mean that DirectGui elements also cannot be subclassed except through C++?

  2. Since each map is going to be different, I suppose that means creating a vertex for each pixel to be rendered in 2D? Or creating a grid and resizing it? What I wish to clarify about this solution is whether you/treeform are referring to creating a whole new set of geometry for the minimap, or to rendering the main map from a different (i.e. more “zoomed-out”) angle? And what kind of coordinates would I use for a node that is parented to aspect2d? I checked the source code of TA Spring as pro-rsoft and treeform suggested, and found out that they use the GL function glVertex2f to draw pixels for the minimap; this function apparently assumes a z-coordinate of 0 for the vertex drawn. Does the aspect2d render tree function in the same way?

And what is the default pixel length between (0,0,0) and (1,1,1)? To know how far to zoom out, I would have to know that, right?

Incidentally, I thought the DisplayRegion was more of a viewport kind of thing, something you use for split-screening? ASIDE - Is it possible to render two or more scenes in the same window on different DisplayRegions? Something like a left-side 3D scene and a right-side 2D (GUI) scene?

Incidentally too, is this the kind of method one would use to create a widget such as the compass in PotC Online?

ASIDE - By the way, something else for the wishlist might be software pixel shaders, like those available in 3D-modelling programs. Haha :D

Regards,
Arix

P.S.: Is there a compatibility issue between this forum and Firefox? I am having problems browsing through search results: I can only access the first page; if I click on the next-page link, I get “No search results matching your query were found.” And that is when I supposedly have 1000+ results.

I think C++ classes actually can be subclassed through Python; that's what classes like Actor do, right? Actor inherits from NodePath.

Oh, DirectGui classes are Python classes. PGui is C++, however.

About software pixel shaders - you're kidding, right? :wink:
Oh, and I use Google for browsing the forums; I just add “site:panda3d.org” to the search query.

OK. Let me clarify: it is certainly possible to subclass C++ classes in Python; Actor and DirectGui are both examples of this being done. However, when you do this, you can thereafter only use the resulting class in Python; the C++ side of it knows nothing about its new Pythonic nature. This is why, for instance, when you use render.find() to retrieve an Actor from the scene graph, you only get back a NodePath, not the actual Actor object.
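
A quick illustration (the model filenames here are hypothetical):

from direct.actor.Actor import Actor

a = Actor('panda.egg', {'walk': 'panda-walk.egg'})
a.reparentTo(render)
np = render.find('**/+Character')
# np is a plain NodePath wrapping the underlying Character node; the
# Actor instance and its Python-side methods cannot be recovered from it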

So if you subclassed PNMWriter, you couldn’t use your new Python class with PNMImage, which is itself a C++ class.

Now: PNMImage etc. do have some possibly useful tools for drawing (PNMPainter, etc.). They’re pretty limited, but they work for what they do. PIL probably has a much better suite. If you really wanted to use PNMImage on principle (but I’d recommend you get over your reluctance to combine redundant code libraries :slight_smile: ), the right way to do this would be to use Texture.load() to copy the image into a Texture every frame. It’s “slow” by hardware rendering standards, but it’s not any slower than any other copy-pixel technique would be. (For instance, a PNMWriter that did memory DC writes would just do exactly the same thing.)
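
In code, that copy-on-change approach is just this (a sketch; "minimapImage" and "card" are placeholder names for your PNMImage and your minimap geometry):

tex = Texture()
tex.load(minimapImage)    # copy the PNMImage into the texture
card.setTexture(tex, 1)   # apply it to whatever geometry shows the minimap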

Note that you could certainly integrate with PIL and use its library directly also. If PIL gives you the image data in the form of a string, as you describe in the OP, then you could do something like this:

pt = PTAUchar()
pt.setData(myImageData)   # myImageData is the string from PIL's tostring()
tex.setRamImage(pt)

to copy that image data into a Texture for rendering. (Note that you have to set up the texture first with tex.setup2dTexture(xsize, ysize, type, format) to tell the Texture what kind of image data you’re giving it.)
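
For example, for a 256 x 256 RGBA image of unsigned bytes (the sizes here are made up):

tex = Texture()
tex.setup2dTexture(256, 256, Texture.TUnsignedByte, Texture.FRgba)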

This technique, again, will be “slow” by hardware rendering standards, because you’re doing all of this work to generate the image on your CPU, and not taking advantage of your hardware-accelerated rendering capabilities at all. But in practice, it may be acceptable performance. This is, after all, basically how we play AVI files as texture images; it can be done, and sometimes it’s the only way to achieve a particular effect.

But what are you drawing on your map? Circles and squares and dots and stuff? However you are contemplating placing circles and squares and dots on an image, you can use that same logic to place a model of a circle, square, or dot on a scene graph.

I would generally recommend using a completely separate scene graph for your map, rather than having a “zoomed out” view of the same scene graph. You could do it with a zoomed out view, though, if that is more appropriate (for instance, if you want your map to have similar detail to that which you see in the main screen).

In render2d, the default coordinate system is (x, 0, z), where the Y coordinate is 0 or unimportant, and the Z coordinate controls the vertical position on the screen. Note that this is just a 90 degree rotation from (x, y, 0), so if you prefer to make Z be your 0 coordinate instead of Y, you can just put the whole thing under a 90-degree rotation node. Or, if you are putting it in its own DisplayRegion, you can set up that camera according to your own preferences.
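
A sketch of that rotation trick ("mapRoot" and "blip" are made-up names):

mapRoot = aspect2d.attachNewNode('mapRoot')
mapRoot.setP(90)   # children laid out in (x, y, 0) now read as (x, 0, z) on screen
blip.reparentTo(mapRoot)
blip.setPos(0.2, 0.5, 0)   # the y value becomes the vertical screen position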

You can certainly use multiple DisplayRegions on one screen for drawing side-by-side or picture-in-picture views. This is the primary reason we have the DisplayRegion class in the first place.
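
Setting one up is short (a sketch; "mapRender", the separate scene graph for the map, is a made-up name):

# assumes: from pandac.PandaModules import Camera, NodePath
dr = base.win.makeDisplayRegion(0.0, 0.3, 0.0, 0.3)   # left, right, bottom, top fractions
dr.setSort(20)                                        # draw on top of the main region
mapCamera = NodePath(Camera('mapCamera'))
mapCamera.reparentTo(mapRender)
dr.setCamera(mapCamera)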

In Pirates, the compass view is achieved by parenting little squares to a flat model of a compass background, and this is in turn parented to aspect2d. The whole node then gets rotated around according to which direction you’re facing. There are similar tricks used by minimaps drawn in sample code available on these forums.

David

drwr,

  1. But in my subclass, I can still call superclass functions? So essentially, I can still have a proper GUI widget with only a change in appearance? That is, if I inherit from an existing DirectGUI class, my new class can still access the behaviours of the superclass? As long as that is possible, that is okay.

ASIDE - How is DirectFrame's setGeom method implemented? Is it a formula for setting the actual geometry of the frame, or for setting geometry parented to the frame?

  2. OK.

  3. Well, it would be nice to have Panda self-sufficient in the graphics department. More pragmatically, having those functions available means that extra libraries don't have to be packaged with an installer, say.

  4. I thought Texture::setRamImage was only available in the C++ variant?

  5. Well, since it is a minimap, mostly dots really. At most, possibly very tiny squares (see reply to para 10). I don't see the use of circles for this except for special effects or emphasis. (I shall refer back to TA Spring and AoE3 for clarification on the best forms of geometry to use.) My reservation is over whether texturing simple dots requires the presence of a hardware pixel shader, which my NVIDIA graphics card doesn't support.

  6. Yep, I guess. Similar detail rather defeats the purpose of a minimap.

  7. Nah, it would be simpler just to use (x, 0, z) for rendering points. By the way, is there a good tutorial on using DisplayRegions?

  8. I see. So each DisplayRegion has its own aspect2d and render nodes?

  9. Interesting. How large in pixels is the default measuring unit in Panda? That is, how large is a 1.0 x 1.0 x 1.0 cube when rendered using base.cam on default settings? And does the rendered size change depending on the program used to make the model? (i.e. if I used Maya instead of Blender, would the rendered size of the cube change?)

Rgds,
Arix

  1. Yes, you can call up to the superclass across the Python/C++ division. The only restriction is that the C++ side can't call down across this division.

Your aside: DirectFrame.setGeom() parents the geom you give it to its stateNodePath[0], which is the geometry parented to the frame but within a slightly-hidden scene graph internal to the frame object. Not sure what you might be referring to by the “actual geometry of the frame”, since this is a bit of a philosophical question. (There is a “relief” geometry which is created by default unless you set relief = None; some people might think of this as the “actual” frame geometry. But really, there is no geometry that is truly intrinsic to the DirectFrame itself.)

  3. All right, a fine point. It is nice if Panda gives you everything you need without having to rely on third-party libraries.

  4. Nope, this is published and accessible to Python. There was a time, years ago, when it was not. Maybe you got this idea from reading some very old forum posts.

  5. You can render a single pixel with a GeomPoints primitive. This is classic old-school stuff; no fancy shader engine required.

  6. There is a trolley game in Toontown called the vine game. In this game there is a playfield in the top portion of the screen, and a map showing the entire level in the bottom portion. It is implemented via two different cameras viewing the same scene, with the map view just pulled back sufficiently. The programmer took advantage of LODs and camera masks to customize the map view to the level of detail that he wanted. So it is possible, and sometimes appropriate, to do it this way; but you have to go out of your way to make it work.

  7. This depends on the way you want to lay out the map, of course. Remember to take advantage of the scene graph's power: you don't really have to do any kind of x/y calculation at all; the scene graph can do it all for you. For instance, you could just do something like this:

for opponent in self.getOpponentList():
  blip = self.makePointGeometry()
  blip.reparentTo(self.map)
  relPos = opponent.getPos(self.player)   # opponent's position relative to the player
  blip.setPos(relPos)

And you have created a bunch of blips, one for each opponent, placed in the same coordinate space on your map as the actual opponent is relative to your player. Then you can scale and rotate the map to put it in the appropriate place on aspect2d. No need to fuss with x, y, z coordinates at all. (Obviously, you need a bit more than this–for instance, you might need to prune opponents that are so far away they fall off the map–but this is just an illustration.)
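
Incidentally, a makePointGeometry() helper such as the one used above might be built something like this (a sketch using the GeomVertexData interfaces; untested):

# assumes: from pandac.PandaModules import GeomVertexFormat, GeomVertexData,
#          GeomVertexWriter, GeomPoints, Geom, GeomNode, NodePath
def makePointGeometry(self):
  vdata = GeomVertexData('blip', GeomVertexFormat.getV3c4(), Geom.UHStatic)
  vertex = GeomVertexWriter(vdata, 'vertex')
  color = GeomVertexWriter(vdata, 'color')
  vertex.addData3f(0, 0, 0)
  color.addData4f(1, 1, 1, 1)   # a white dot; tint per team as needed
  points = GeomPoints(Geom.UHStatic)
  points.addVertex(0)
  geom = Geom(vdata)
  geom.addPrimitive(points)
  node = GeomNode('blip')
  node.addGeom(geom)
  return NodePath(node)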

  8. No; but each DisplayRegion has its own camera, and that camera might be a 3-D camera or a 2-D camera, and it might view any scene graph of your choosing - render or anything else. In fact, the standard render and render2d scene graphs are drawn with two different DisplayRegions, layered on top of each other.

  9. Don’t think in terms of pixels. Pixels are the standard unit of measure for 2-D engines, but are largely meaningless for 3-D engines. In the render scene graph, your units are 3-D units, which have little relation to pixels (objects use fewer pixels when they are farther away, for instance). Your 3-D units are whatever you want to call them: feet, centimeters, miles; it doesn’t make a difference, as long as you are consistent. In the render2d scene graph, your units are relative to the size of the window, which is also only incidentally related to pixels.

If using Blender vs. Maya makes a difference in the size of the unit, it would be mainly due to the conversion program that converted the model into egg form. I know little about Blender, but in the case of Maya, you have to specify the kind of units that you want to use within Maya, and if you don't specify otherwise to maya2egg, it will be converted to cm on output (Maya converts everything to cm internally). This scaling is due to Maya, though, and has little to do with Panda, which doesn't impose its own real-world unit name on your 3-D units.

We’ve been doing a lot of hand-waving and theoretical discussion. I encourage you to try some of this out–get your hands dirty in the engine and see what it can do. A lot of this will become a lot more clear through experimentation.

David

David,

  1. OK.

  2. As in, is the setGeom() method the equivalent of the Paint() or Draw() methods in 2D GUI APIs like wxPython?

  3. Exactly.

  4. Noted.

  5. Yep, Ok. I read the man-page already.

  6. I think I am going to try to download Toontown, if that is possible. Apart from that, LODs are implemented using different models, right?

  7. The layout of the online reference needs improvement. Overloaded methods should be documented once for each variant; otherwise it looks quite deceptive - and cramped.

And we need blips for terrain too.

You are still thinking in terms of 2-D engines. There really isn't an equivalent of replacing Paint() - that's not the way DirectGui works. But if I were to step into your metaphor to answer your question, I would have to answer no: setGeom() does not replace all the geometry that corresponds to the DirectFrame. It only adds additional, optional geometry of your choosing. However, it might be the case that your additional geometry is the only geometry associated with the frame, in which case it does completely define the frame's appearance.

Yes, that’s one way to think of them. Or you can think of them as one model with multiple different forms within it.

I certainly agree that it needs help. This is one area that a community Python programmer volunteer could assist in, since the code that generates this is all Python.

However, I do think it is necessary to document all variants of an overloaded method, since frequently the different variants have different behaviors, and each needs to be documented. But it is true that the current system is needlessly confusing. It would help if the function prototype were shown again for each overload variant.

David

David,
  8. Does this method work if I am talking about creating blips for terrain blocks? Or alternatively, is there a way to create a viewport for the entire terrain that updates itself whenever it needs to be redrawn, other than render-to-texture? (Remember: the post is about creating a scene editor, not just a game.) For the actual game itself, RTT would be fine, but would it be just as good for a scene editor?

  9. Oh, so you can overlay DisplayRegions? Does a camera scale or clip its contents if the DisplayRegion is smaller than the geometry being rendered?

  10. Never mind; I read some material on 3D-programming theory about the relativity of units to the camera, and I saw the forum post that says 1 Blender unit = 1 Panda unit. But is the Z-up model scaled to size, or does it need to be resized? And is there any way to import EGGs?

  11. Certainly. But I just wanted to get the groundwork right before putting it into practice, so that I don't get blasted by forummers later for not “reading up on background” enough.

Anyway, thanks to all for the help!

Rgds,
Arix