Understanding culling better...

I need a little help understanding the frustum culling method.

I believe I read that in Panda3D, you don’t really want to take a terrain and then reparent everything else to it (trees, rocks, buildings) because then if one piece is visible it all will be.

I have all the artwork for a game complete. I just need to find the right way to sew it all up now.

Here is what I was planning on doing and then you can tell me if the idea is sound or not…

Everything I have is a separate model, and in some cases (structures) it is built of several different pieces, like an inside and an outside.

I was planning on loading each model into the scene graph (terrain, trees, structures, etc.), maybe just breaking up the large terrain mesh into yet more separate models.

Is this a good approach?

What I don’t really get is this… Say a particular section of my terrain is in view of the camera and all objects are loaded. You are looking at a structure (picture an igloo) made of two separate eggs (inside/outside), but the structure entrance is not straight ahead; it’s turned off to the right, maybe, so you really can’t see inside. Will Panda still draw the inside? I guess I’m asking: do the shell polygons occlude everything inside so it isn’t drawn, or will the inside only be drawn once the entranceway of the house becomes visible to the camera?

Sorry folks, this is a hard concept to understand.

Steve

It is a bit of a difficult concept.

First, let me clarify something: simply parenting a model to another model does not make the two models into a single cullable unit. So you can parent all of your rocks, trees, etc. to the terrain, and each of them can still be culled individually. In fact, you usually want to structure your scene graph with some depth to it, so that objects that are physically near each other are parented to the same node together–this makes the culling process faster, since if all of the objects under a common node are outside of the culling volume, Panda doesn’t have to look at the individual objects. On the other hand, you don’t want to go too crazy with this, and create thousands of useless nodes, which can make culling more expensive (since Panda would then have to traverse all of those nodes every frame).
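For instance, a grouping like this (just a sketch; the quadrant names and model files are placeholders, not anything from your project):

```python
from direct.showbase.ShowBase import ShowBase

base = ShowBase()

# One group node per region; objects that are physically near each other
# share a parent, so a single bounding-volume test can reject the whole group.
quadrants = {}
for name in ('NE', 'NW', 'SE', 'SW'):
    quadrants[name] = base.render.attachNewNode('quadrant-' + name)

terrain_ne = base.loader.loadModel('terrain_ne.egg')   # placeholder file names
terrain_ne.reparentTo(quadrants['NE'])

tree = base.loader.loadModel('tree.egg')
tree.setPos(120, 80, 0)
tree.reparentTo(quadrants['NE'])   # parented by location, not by object type
```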

The smallest cullable unit in Panda is a Geom, which is a part of a GeomNode. So as long as your scene is broken up into multiple different Geoms, each Geom could be culled as a unit or drawn as a unit. (Of course, there is also a drawing penalty per Geom–each Geom corresponds more-or-less with a single DrawPrimitive call to the graphics card, which is a relatively expensive call. So you have to balance having lots of Geoms to make your culling effective, vs. having relatively few Geoms to streamline the rendering overhead.)
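If you want to see how that balance is working out in your own scene, NodePath.analyze() prints the node, Geom, and vertex counts for a subtree. For instance (assuming a running ShowBase; the model name is a placeholder):

```python
structure = base.loader.loadModel('igloo.egg')   # placeholder model
structure.reparentTo(base.render)
structure.analyze()   # prints node, Geom, and vertex counts to the console
```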

If you parent several pieces of geometry to a common node, and then call flattenStrong() on that common node, Panda will try to combine all of the Geoms under that node into as few Geoms as possible. (It will help if you also use egg-palettize to reduce the number of individual textures used by this group, since each different texture will necessarily have to be in a different Geom.) Finding the right size of these node groups is part of the art of optimizing your scene, and the right balance point depends on the level of hardware you are targeting–in general, the better graphics cards prefer to have fewer Geoms with more vertices per Geom, while the older graphics cards prefer to have more Geoms with fewer vertices per Geom. But if a particular object will always be either wholly onscreen or wholly offscreen, it is always a good idea to combine this object into as few Geoms as possible (ideally one).
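A rough sketch of that, building on the quadrant nodes from the earlier snippet (clearModelNodes() removes the per-model flags that would otherwise stop flattening from crossing the boundaries of separately loaded files):

```python
group = quadrants['NE']     # group node from the earlier sketch
group.clearModelNodes()     # let flattening cross the boundaries of loaded models
group.flattenStrong()       # merge Geoms that share the same render state
group.analyze()             # verify how many Geoms are left
```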

Now. The culling algorithm that Panda employs is called view-frustum culling, which means simply that any object that is not physically within the viewing frustum is culled, and any object which is at least partially within the viewing frustum is drawn. That is to say, if it is in front of the camera, it will be drawn, even if it is behind a wall.

So the interiors of your buildings will be drawn, even if you can’t see through the door from where you’re standing–in fact, even if the door happens to be closed.

It is theoretically possible to arrange it so that the interiors will be culled in a smarter fashion. Cell-portal visibility, for instance, would be perfect for this. This is an algorithm in which you define certain “cells” of space–for instance, the interior of a building would be a cell–and “portals” through which you can look into the cells–e.g. the door is a portal. If you set this up properly, Panda will know that the only way you can see the interior is by looking through the door, and it can cull the parts of the interior that aren’t directly visible through the door. But cell-portal visibility is a bit tricky to set up properly, and its implementation in Panda is still a little rough.
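If you do want to experiment with it anyway, the setup looks roughly like this. Heavy hedging here: the cell layout, portal coordinates, and the igloo_interior NodePath are all invented for illustration, and since the implementation is still rough, the details may need adjusting:

```python
from panda3d.core import PortalNode, Point3, loadPrcFileData

loadPrcFileData('', 'allow-portal-cull true')   # portal culling is off by default

outside = base.render.attachNewNode('outside-cell')   # the open world
inside = base.render.attachNewNode('inside-cell')     # the building interior
igloo_interior.reparentTo(inside)                     # igloo_interior: hypothetical NodePath

# A rectangular portal where the doorway is; the vertices should outline
# the opening in the portal's coordinate space.
door = PortalNode('door')
door.addVertex(Point3(-1, 0, 0))
door.addVertex(Point3(1, 0, 0))
door.addVertex(Point3(1, 0, 2))
door.addVertex(Point3(-1, 0, 2))
door.setCellIn(inside)     # the cell you see through the portal
door.setCellOut(outside)   # the cell the portal lives in
outside.attachNewNode(door)
```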

There exist other kinds of visibility algorithms that attempt to make more automatic determinations of what is occluded or not, without requiring you to structure your scene into cells and portals. We have been experimenting with adding some of these algorithms to Panda, with varying degrees of success. Nothing’s ready for prime time yet.

You can also do other tricks, such as to put the interior under an LOD node, so that the actual interior will be visible only if you happen to be standing very near the door; if you are a bit further away, you can have the LOD swap in a lower-level model, which is really just a painted flat inside the building that looks just like the contents of your interior.
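A sketch of that trick with an LODNode; the distances, file names, and the igloo_shell NodePath are made up for illustration:

```python
from panda3d.core import LODNode

lod = LODNode('interior-lod')
lod_np = igloo_shell.attachNewNode(lod)   # igloo_shell: hypothetical exterior NodePath

# Children must be added in the same order as their switches.
lod.addSwitch(25.0, 0.0)                  # real interior from 0 to 25 units away
real_interior = base.loader.loadModel('igloo_inside.egg')
real_interior.reparentTo(lod_np)

lod.addSwitch(10000.0, 25.0)              # painted flat beyond 25 units
painted_flat = base.loader.loadModel('igloo_inside_flat.egg')
painted_flat.reparentTo(lod_np)
```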

And, of course, if your doors are the sort that might be open or closed, you should have the application hide (or remove) the interior when the door is closed.
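That part is just ordinary application logic, something along these lines (the door_is_open flag and interior NodePath are hypothetical):

```python
def update_interior(door_is_open, interior):
    if door_is_open:
        interior.unstash()   # back into the draw (and collision) traversals
    else:
        interior.stash()     # skipped by traversals entirely while the door is shut
```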

David

Thanks David,

Well, thinking about this further after your explanation… Picture a valley that only has one entry and one exit point, with steep walls all around. Within that valley, of course, are trees, some buildings, etc… How about dividing up the valley into quadrants and reparenting the trees and structures all to their particular quadrant, and just loading it all into the scene graph? The actual terrain sections will be separate terrain.egg models. I would then not even load the interiors of the structures. Maybe just have it so you need to click on a particular object, and then it will place you into that object and drop the rest (the outside). This is what I was thinking about doing to begin with, then found that post that suggested otherwise.

We talked about portals a while back. I still re-visit that post from time to time to see if I yet understand it :)

Steve

Sure, that sounds like a fine approach.

Incidentally, you can view the effectiveness of Panda’s culling by using the call base.oobeCull().

This is a variant on base.oobe(), the oddly-named function which stands for “out of body experience”. Calling this function takes the camera out of the normal application control and puts it in trackball mode. You can roll around in trackball mode and view the scene from different angles.

base.oobeCull() works similarly, except that it culls the scene as if the camera were still in its original position, while drawing the scene from the point of view of your camera’s new position. So now you can view the scene from your “out of body” placement, and walk around, and you can see things popping in and out of view as your view frustum moves around the world.
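If it helps, you can bind it to a key so you can flip it on and off while playing (the key choice here is arbitrary):

```python
base.accept('f8', base.oobeCull)   # each press toggles the out-of-body cull view
```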

David

I’ll try base.oobeCull(). That is one feature I like about Panda3D. It actually lets you visualize stuff like the collision detection etc…

I’ll see what happens.

Thanks,

Steve

If anyone hasn’t tried base.oobeCull() yet in their own environment, they should. It’s a neat effect and really lets you play god, so to speak.

I did this with Roaming Ralph… you can watch as objects like the rocks go past the camera and then are culled from the environment.

Pretty neat!

Most of what you have said applies to a first-person perspective. How would you structure a scene for a third-person perspective, i.e. a real-time strategy overhead view, assuming the following criteria:

Say you have a map like the one found in Evil Genius.
There are both outdoor and indoor parts.
You can zoom in or out of the scene.
Pieces of the map can be tunneled out and removed in an endless variety of shapes.
Buildings can be placed that allow units to go in or out of them.

The really tricky part I don’t get is how to handle the construction of the map that can be changed, i.e. the rock that can be tunneled out. Would you make this part of the map in larger pieces and then replace the Geom with a new one when a piece of it was tunneled out? How would you handle the ability to select just a piece of one of these larger Geoms, since the scene graph would see the Geom as one large piece? Consider the following:

A Geom for a 4x4 block:
XXXX
XXXX
XXXX
XXXX
Now, if a player wants to dig out a portion of this, say:
XXOX
XOOX
XOOX
XXXX
Would you recalculate all the vertices and then replace the node with a new one to indicate the change? How would the culling work in this case?

You could do that approach, sure–sounds like a fine idea.
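For what it’s worth, here is a minimal sketch of that rebuild-and-replace idea, generating one flat quad per remaining cell and swapping in a new GeomNode for the chunk (the grid encoding, the build_chunk_geom helper, and old_chunk_np are all invented for illustration, and a real version would also write normals and texture coordinates):

```python
from panda3d.core import (Geom, GeomNode, GeomTriangles, GeomVertexData,
                          GeomVertexFormat, GeomVertexWriter)

def build_chunk_geom(grid, cell_size=1.0):
    """Emit a flat quad for every remaining 'X' cell; 'O' cells are dug out."""
    vdata = GeomVertexData('chunk', GeomVertexFormat.getV3(), Geom.UHStatic)
    writer = GeomVertexWriter(vdata, 'vertex')
    prim = GeomTriangles(Geom.UHStatic)

    vi = 0
    for row, line in enumerate(grid):
        for col, cell in enumerate(line):
            if cell != 'X':
                continue
            x, y = col * cell_size, row * cell_size
            for dx, dy in ((0, 0), (1, 0), (1, 1), (0, 1)):
                writer.addData3f(x + dx * cell_size, y + dy * cell_size, 0)
            prim.addVertices(vi, vi + 1, vi + 2)   # two triangles per quad
            prim.addVertices(vi, vi + 2, vi + 3)
            vi += 4

    geom = Geom(vdata)
    geom.addPrimitive(prim)
    node = GeomNode('chunk')
    node.addGeom(geom)
    return node

# When the player digs, rebuild the chunk and swap it into the scene graph:
new_grid = ['XXOX',
            'XOOX',
            'XOOX',
            'XXXX']
old_chunk_np.removeNode()   # old_chunk_np: hypothetical NodePath for the old chunk
chunk_np = base.render.attachNewNode(build_chunk_geom(new_grid))
```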

That has little to do with culling, though. Culling is performed at the object level, not at the vertex level, so your rock will either be entirely drawn, or none of it drawn–regardless of how many vertices it has.

Structuring your scene with an eye towards hierarchical culling is always a good idea, regardless of your point of view. Of course, if your camera is always going to be positioned so that it views the entire scene, then view-frustum culling isn’t going to help you, and you’ll just have to concentrate on keeping your scene simple enough that it can render very quickly, even if it is always visible.

David