Potentially silly questions herein

I still haven’t actually gotten too far into figuring out Python syntax; I’ve mainly been reading up on 3D engine theory and such. I have a few thoughts for the game I want to develop, but since I have no idea whether my assumptions about the engine are correct, I’m posting here for feedback. Mind you, my area has always been 2D.

The engine performs frustum culling, but if part of a solid piece of geometry is visible (as in one whole object in the 3D editor), then the whole geometry is rendered regardless of the parts hidden outside the frustum? Does this mean that if someone were modelling a chunk of a city, they would be wise to, say, cut the ground up into several pieces and make those their own separate objects within the scene, then do the same for buildings, plants, etcetera? Otherwise, if someone sees any part of the city chunk, all of it gets rendered? (My 3D artist assures me this is how it works, but I’d rather double-check.)

Now, assuming one wants to portray a large outdoor cityscape, my idea was to break the city up into chunks, each of which becomes its own scene/model (broken up as above for culling). That way, if the far plane is set so it never crosses more than one chunk boundary at a time, the player would only need 9 of these loaded at once, as opposed to one huge city-sized model. Is that approach viable?
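If the chunks live on a regular grid, working out which 9 need to be resident is simple integer arithmetic. A minimal sketch in Python (the chunk size and function name are made up for illustration):

```python
CHUNK_SIZE = 100.0  # world units per chunk edge; hypothetical value

def chunks_to_load(player_x, player_y):
    """Return the 3x3 block of chunk grid coordinates around the player."""
    cx = int(player_x // CHUNK_SIZE)
    cy = int(player_y // CHUNK_SIZE)
    return {(cx + dx, cy + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)}

# Standing anywhere inside chunk (0, 0), nine chunks are resident:
assert len(chunks_to_load(25.0, 25.0)) == 9
```

As the player crosses a chunk boundary, the set changes by three chunks at a time: load the new ones, unload the ones that dropped out.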

Then, ideally, I’d like for the chunks to be lightmapped. My 3D artist recommends Gile[s], but we’re not sure whether it’s up to the task or whether its output is usable with Panda3D in any way. Otherwise this might become rather problematic. Any suggestions would be nice. ^^;

And one last question from the 3D artist: does the scene editor handle importing instance files too? I.e., you have a bunch of lampposts that are all the same, so you make one lamppost model and, instead of putting it into the map multiple times, just have the engine load that model into the map multiple times, the difference being that it’s only in RAM once.

Anyway, thanks in advance for any answers anyone might have. ^^;

I think the engine finds a way around the first thing (render-wise; it’s still in memory), but I’m not sure.

About the last thing: there is a function called loadModelCopy(), which prevents you from having to reload the same model. I think that is what you need for that.

It’s possible for more than one geom node to exist in a single model. That way, while every geom node that appears on screen gets fully rendered, not all of the nodes in a model have to be rendered. Arranging groups in your egg file is how you optimize models for use in Panda, so you won’t have to render out the whole environment when only looking at a piece of it.
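As a sketch of what that looks like in the egg file (group names hypothetical; polygon data omitted), splitting the ground into separately cullable pieces just means giving each piece its own group:

```
<Group> ground_north {
  // polygons for the north half of the ground
}
<Group> ground_south {
  // polygons for the south half of the ground
}
```

Each top-level group becomes its own cullable node when Panda loads the file.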

By lightmapping, I am assuming you mean baking lights into the model/environment. This is possible in Panda. The first thing that comes to mind is to paint the lights into the textures applied to your environment. This can be done in any digital imaging software (Photoshop, etc.). Another way is to apply the lights in your 3D modelling software and bake the resulting color into the vertices; look at the modelling software’s documentation on how to do this. Using vertices might give less detail in the shadows on low-poly models, but it makes it easier to update/change textures while keeping the same shading.
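As a rough illustration of where baked vertex color ends up, an egg vertex can carry an <RGBA> entry alongside its coordinates (values hypothetical):

```
<VertexPool> wall {
  <Vertex> 0 {
    0 0 0
    <RGBA> { 0.35 0.3 0.25 1 }  // baked-in shadow colour for this vertex
  }
}
```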
If you mean dynamic lightmapping, you might need a Cg shader for that. I couldn’t help you much there, but version 1.1 might be able to support that feature.

Not sure about the scene editor actually importing instances, but as stated, loadModelCopy() loads a copy of a model without allocating extra memory. There is also instanceUnderNode(), which creates an instance of a node with its own NodePath, so you would be able to perform different transformations on those instances.

The above comments about view-frustum culling and arranging the nodes within your egg file to optimize it are right on the money, but let me add a few points. First, if it’s not immediately clear, there is generally a one-to-one correspondence between nodes or objects in your modeling package of choice and nodes within the egg file. So to optimize your city scene properly, it should be modeled as a collection of different objects, maybe one for each building, rather than as one big mesh.

Then you should group the buildings together spatially, so that all of the buildings on a city block, for instance, are collected under a common group node, and you have a different group node for each city block. Then you might collect the block group nodes under another level of grouping, for instance, a single node that contains all of the city blocks NW of center, another that contains all of the city blocks SW of center, and so on. You want a hierarchical grouping like this to make the frustum culling as optimal as possible, since if you’re standing in the center and looking SE, Panda can tell right away that you won’t be able to see any of the buildings under the NW group, and can drop the whole group out without further consideration.
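As a sketch, that hierarchy maps directly onto nested <Group> entries in the egg file (names hypothetical; geometry omitted):

```
<Group> city {
  <Group> blocks_NW {
    <Group> block_01 {
      <Group> building_01 {
        // polygons for this building
      }
    }
  }
  <Group> blocks_SE {
    // more block groups here
  }
}
```

If the camera can’t see the bounding volume of blocks_NW, everything underneath it is culled in a single test.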

Even if you don’t do this hierarchical grouping, simply using different nodes (objects, polysets, meshes, whatever they’re called in your modeling package) to model the different things in your world is all you really need to do to enable effective view-frustum culling. But there’s a trade-off: the more different objects there are in the world, the more work it is to render them when they are onscreen.

Finding the ideal balance point is difficult, because it depends on your graphics card. In general, the higher-end graphics cards perform better with more polygons per mesh and fewer nodes, while the lower-end cards perform better with lots of separate little meshes that can be individually culled.

At the end of the day, though, usually the modeling convenience is the more important factor in the subdivision decision; and usually that works well enough. It depends on how important it is to you to squeeze every last millisecond out of the frame time.

As for instancing, there are several ways to activate instancing from Python calls. loader.loadModelOnceUnder() is one; you can also use nodePath.instanceTo(). The call loader.loadModelCopy() actually generates a new copy, duplicating the object in memory, not a shared instance. There’s not presently any way to automatically load up a scene that includes instancing, to my knowledge (although I’m not sure what capabilities the CMU Scene Editor has).

But instancing is overrated. There’s usually no really good reason to instance static objects like, say, your lampposts, even though all of them might be identical. On certain platforms, particularly consoles like the PlayStation and the PS2, instancing can be a very important optimization, but on a PC it buys you very little, and it’s usually not worth the nuisance factor. Go ahead and duplicate multiple copies of your lampposts.