Arbitrary bounding volumes for culling

Do objects use their bounds to cull each other, or is only the camera used by default? If they do, how can I disable this?
Can I pass an arbitrary bounding volume (BoundingSphere, BoundingHexahedron, etc.) to the culling system to be used for culling geoms in the scene?
I’m looking to reduce the amount of geometry that is not actually “visible” but is still making it through to the draw stage.

Panda automatically computes a bounding volume for all nodes, and uses this to cull against the view frustum. The automatic bounding volume is not necessarily the smallest possible bounding volume, but it’s chosen as a good compromise between tightness and ease of computation. You can replace the bounding volume on a given node with a volume of your own specifications.

You can also reveal the bounding volumes: nodePath.showBounds() shows the bounding volume for a particular node. If you call it on a GeomNode, it also shows the bounding volume for each of the Geoms within the GeomNode, which are what all of the parent bounding volumes are indirectly based on.

You can also show the effectiveness of Panda’s bounding-volume culling with base.oobeCull(). This is like base.oobe(), the “out-of-body experience” that lets you view the scene from a point of view outside the normal camera’s, except that it leaves culling behaving as if the point of view were still in the original position. You can see objects pop in and out at the edges of the view frustum. If too much is being drawn, it may mean your bounding volumes are too large, or that you have too many objects consolidated into one.
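For convenience you can bind this to a key while testing (a tiny sketch, assuming the usual ShowBase globals):

```python
# Each press of F8 toggles the out-of-body culling view on or off.
base.accept("f8", base.oobeCull)
```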

If you really want to experiment with setting your own bounding volumes, you can use nodePath.node().setBounds(myBoundingVolume). Usually myBoundingVolume is a BoundingSphere that you construct.
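For example, a minimal sketch (the model path is just a placeholder; substitute your own geometry, center, and radius):

```python
from direct.showbase.ShowBase import ShowBase
from panda3d.core import BoundingSphere, Point3

base = ShowBase()

model = base.loader.loadModel("environment")  # placeholder model path
model.reparentTo(base.render)

# Replace the automatically computed volume with an explicit sphere.
model.node().setBounds(BoundingSphere(Point3(0, 0, 0), 10.0))

# Optionally stop Panda from descending into the children's bounds
# once this volume has passed the cull test.
model.node().setFinal(True)

# Reveal the volume we just assigned.
model.showBounds()

base.run()
```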

David

There is also the config variable bounds-type.
Description:
Specify the type of bounding volume that is created automatically by Panda to enclose geometry. Use ‘sphere’ or ‘box’, or use ‘best’ to let Panda decide which is most appropriate.
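You can put the line in your Config.prc, or set it from code before the geometry is created (a small sketch):

```python
from panda3d.core import loadPrcFileData

# Equivalent to putting the line "bounds-type box" in Config.prc.
# Valid values are 'sphere', 'box', or 'best'.
loadPrcFileData("", "bounds-type box")
```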

Sorry I should have been more precise.
I’m not so much concerned with the objects outside the camera’s view (this is working fine), but with objects that should be occluded due to being behind something. For example in a city setting most of the objects in the camera view don’t need to be drawn because they are behind a large building.

That’s called occlusion culling. It is not something Panda supports automatically in the general case. There is a specific occlusion-culling algorithm called cell-portal visibility that Panda does support; this requires some by-hand setup work on your part, so try searching the forums for it.

Beyond that, you will have to write the appropriate algorithms into your application, typically based on a priori knowledge about your scene (for instance, one common approach is based on a table lookup: when my avatar is standing here, it means I can see these things).
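A minimal sketch of that table-lookup idea might look like this (the zone names, the visibility table, and the zone_nodes dictionary of NodePaths are all invented for illustration; you would build them from your own level data):

```python
# Maps the zone the avatar is standing in to the names of the objects
# that can possibly be seen from that zone (a hand-built visibility table).
visibility_table = {
    "plaza": {"plaza_geom", "fountain", "city_hall"},
    "alley": {"alley_geom", "dumpsters", "city_hall"},
}

def apply_visibility(current_zone, zone_nodes):
    """Show only the NodePaths listed for the avatar's current zone."""
    visible = visibility_table.get(current_zone, set())
    for name, np in zone_nodes.items():
        if name in visible:
            np.show()
        else:
            np.hide()
```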

David

teedee, i don’t think occlusion culling is supported in panda3d.

would be a cool feature to add…

Yeah. Though it should be noted that occlusion culling is hard to do well in general. There exist general-purpose occlusion-culling algorithms, but they usually have severe limitations, or they don’t give good performance. You’re almost always better off using a priori knowledge anyway.

Of course, that can be a lot of work, and if there were something you could just “turn on” to make it happen, even if it were less than optimal, that would be swell. It may happen one day. Patches are welcome. :wink:

David

Ah I see. A system I used previously allowed quads to be placed in the world for the purpose of occlusion culling.
I will likely implement something similar, testing only against expensive objects such as skinned models and the like.
I render the scene from multiple camera views, so I’ll have to use masks to hide the objects on only certain cameras.
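As a sketch of the per-camera masking part (base.cam is the default camera; the second camera and the occluded model are placeholders for your own nodes):

```python
from panda3d.core import BitMask32

MAIN_MASK = BitMask32.bit(0)
SECOND_MASK = BitMask32.bit(1)

# Give each camera its own draw mask.
base.cam.node().setCameraMask(MAIN_MASK)
second_cam_np.node().setCameraMask(SECOND_MASK)  # placeholder second camera

# An object judged to be occluded for the main view only:
# hide it from the main camera but keep it visible to the other one.
occluded_model.hide(MAIN_MASK)
occluded_model.show(SECOND_MASK)
```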

You could do raycasting (colliding rays against objects) to detect visibility, and to speed it up you could test against something like bounding volumes placed inside the objects (so they are probably quite small).
This obviously doesn't work well with transparent objects, so those should be ignored.
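A rough sketch of that ray test, assuming the usual ShowBase globals and that your occluder geometry has been given a matching into-collide mask (OCCLUDER_MASK and candidate_np are placeholder names):

```python
from panda3d.core import (BitMask32, CollisionHandlerQueue, CollisionNode,
                          CollisionRay, CollisionTraverser, Vec3)

OCCLUDER_MASK = BitMask32.bit(2)  # placeholder; assign this mask to occluder geometry

traverser = CollisionTraverser()
queue = CollisionHandlerQueue()

# A single ray parented to the camera; it is re-aimed at each candidate object.
ray = CollisionRay()
ray_node = CollisionNode("visibility-ray")
ray_node.addSolid(ray)
ray_node.setFromCollideMask(OCCLUDER_MASK)
ray_node.setIntoCollideMask(BitMask32.allOff())
ray_np = base.camera.attachNewNode(ray_node)
traverser.addCollider(ray_np, queue)

def is_probably_visible(candidate_np):
    """Crude test: is any occluder surface between the camera and the
    candidate's origin?  Only the origin is tested, so it is approximate."""
    to_candidate = Vec3(candidate_np.getPos(base.camera))
    distance = to_candidate.length()
    ray.setOrigin(0, 0, 0)
    ray.setDirection(to_candidate)
    traverser.traverse(base.render)
    for i in range(queue.getNumEntries()):
        hit_point = queue.getEntry(i).getSurfacePoint(base.camera)
        if hit_point.length() < distance:
            return False  # an occluder surface lies in front of the candidate
    return True
```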