Environment Size: How to enhance culling?

Currently I have a model with 64k vertices, but only about 1k of them would be displayed at any one time, depending on the location in the environment. I presume Panda tests each poly to determine whether it can be culled?

Is there a way I can group a section of the model to help with culling?

Also, one very noob question: I am currently loading a single texture that contains lots of smaller textures packed together. I set up the UV coords on the verts, which works OK, but is there a way I can repeat just one region of that large texture between verts, or will I need to split the large texture up into separate smaller textures to do this?

Thanks in advance.

Panda performs culling at the Geom level. A Geom is the smallest unit of renderable geometry, a collection of polygons that are all rendered in the same state and all attached to the same GeomNode. So if any part of a Geom is within the viewing frustum, Panda will draw the whole Geom; otherwise, it will draw none of it.

As a developer, this means you need to think about nodes (or groups, in your modeling package). Many modeling packages have a concept of a polyset. In general, a polyset will be converted to a single Geom, so think of your polysets as atomic pieces.
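
If you want to see how your polysets ended up as Geoms after conversion, a minimal sketch along these lines can help (the model filename here is just a placeholder):

```python
from direct.showbase.ShowBase import ShowBase

base = ShowBase()

# Load the environment model ("my_world" is a placeholder name).
world = base.loader.loadModel("my_world")
world.reparentTo(base.render)

# Print a breakdown of nodes, Geoms, and vertices under this node,
# which shows how many separately cullable pieces the model contains.
world.analyze()
```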

You should structure your model so that polygons that are logically part of one object are in the same polyset. For instance, you might have a polyset that is one crate. If you have other crates in your scene, they should be their own polysets.

Beyond that level of atomic grouping, you should collect things that are near each other into common groups. For instance, if you have a cluster of crates on one end of the world, and another cluster of crates on the other end of the world, those should be two different groups. Then each of those two groups might be children of a common parent group, that collects together other large groups of objects in the world. In general, for optimal culling, the children of each group should represent collections of objects that are spatially distributed within the group.
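
In code terms, the idea looks roughly like this (a rough sketch; the group names, model name, and positions are made up for illustration):

```python
from direct.showbase.ShowBase import ShowBase

base = ShowBase()

# Two spatial clusters, each under its own group node, so Panda can
# cull an entire cluster with a single bounding-volume test.
east_crates = base.render.attachNewNode("east_crates")
west_crates = base.render.attachNewNode("west_crates")

for i in range(4):
    crate = base.loader.loadModel("crate")   # placeholder model name
    crate.reparentTo(east_crates)
    crate.setPos(100 + i * 3, 50, 0)         # clustered at one end of the world

for i in range(4):
    crate = base.loader.loadModel("crate")
    crate.reparentTo(west_crates)
    crate.setPos(-100 - i * 3, 50, 0)        # clustered at the other end
```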

You can run the function base.oobeCull() to put your camera in a special mode where you are viewing everything from a third-person point of view (it’s an “Out Of Body Experience”, hence the name), but the scene is still culled as if the camera were where it used to be. (There’s also base.oobe(), which puts your camera in this out-of-body mode without playing games with the culling.) With base.oobeCull() enabled, you can look around the scene to make sure that things that are supposed to be behind your camera aren’t being drawn. Use the trackball controls (like in pview) to control the camera in this mode. This mode is a toggle; use base.oobeCull() to return to normal mode.
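
For instance, a small sketch assuming a running ShowBase app (the key binding and model name are arbitrary):

```python
from direct.showbase.ShowBase import ShowBase

base = ShowBase()
scene = base.loader.loadModel("environment")  # placeholder model
scene.reparentTo(base.render)

# Press "o" to toggle the out-of-body view; culling is still computed
# from the original camera position, so over-drawn geometry stands out.
base.accept("o", base.oobeCull)

base.run()
```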

David

Thanks David, that's what I was thinking.

Should be fun to try to break this model down a touch during my conversion process to egg, as it's not done by a modelling program as such :frowning:

I presume that in the conversion from egg to bam (or perhaps on load), Panda works out the global boundaries of the model for you?

I just realized I forgot to answer this.

If I understand you correctly, you will indeed need to split out at least the repeating texture, unless you want to mirror the UVs on your model back and forth across the subtexture UV ranges. Or, if the texture only repeats once or twice, you could stamp out multiple copies side-by-side in your large texture and run the UVs across those copies. But to really repeat a texture, it needs to be its own texture.
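
As a sketch of the split-texture approach (the file names and repeat counts are placeholders), once the repeating pattern is its own texture file you can set it to wrap and then tile it by scaling the UVs:

```python
from direct.showbase.ShowBase import ShowBase
from panda3d.core import Texture, TextureStage

base = ShowBase()

model = base.loader.loadModel("wall")        # placeholder model
model.reparentTo(base.render)

# The repeating pattern, split out into its own image file.
tex = base.loader.loadTexture("brick.png")   # placeholder texture
tex.setWrapU(Texture.WM_repeat)
tex.setWrapV(Texture.WM_repeat)

model.setTexture(tex, 1)
# Tile the texture 4 times across the model's existing 0..1 UV range.
model.setTexScale(TextureStage.getDefault(), 4, 4)
```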

Right, and the boundaries of each node as well. This happens at load time, and also on-the-fly as you move pieces of your model around.
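
If you want to inspect those computed bounds yourself, a quick sketch (placeholder model name again):

```python
from direct.showbase.ShowBase import ShowBase

base = ShowBase()

model = base.loader.loadModel("my_world")   # placeholder model
model.reparentTo(base.render)

# Panda computed this bounding volume itself at load time.
print(model.getBounds())

# Draw the bounding volume in the 3-D view for visual inspection.
model.showBounds()
```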

David

Thought so. I will split the textures.

Great. Now that I understand how this works, I should hopefully be able to break up the environment into tidy chunks and help with the culling.

Thanks again for your help and quick responses.