Disable unloading of objects out of view

Dear fellow Panda3d users,

I'm having an issue with models that seem to unload once they're out of view for a longer period of time.

When I pan my camera over different parts of the scene, I notice that when I pan models out of view for a longer time and then pan back, I get a hiccup, implying the models are being loaded again. When I pan back more quickly there is no hiccup.

I figure Panda3D unloads models as default behavior for games where you walk through a scene and don't return to the same place.

Is there a way to configure this behavior in Panda3D? It would help if I could control what is loaded and what is not.
Or any other solution for this problem, of course…?

All help is appreciated


It would surprise me if Panda would unload models automatically; could this not simply be the result of culling? When the camera frustum intersects the bounding volume of a complex model, it will start rendering it and perhaps this causes the sudden drop in frame rate you’re noticing?

You could try to set an OmniBoundingVolume on the model to see if the frame rate remains constant then.


Thanks for your reply, that is indeed what's happening. When I set the OmniBoundingVolume I have no hiccups anymore; however, my fps drops by 10-15 frames. I guess a way to improve this is making LOD models and further reducing the number of nodes. Any other tips are welcome!

Oh, I encountered exactly the same problem as you, man

It could be the case that Panda is uploading the models to the GPU as they come into view. You can force Panda to upload the assets ahead of time (during a loading screen, for example) by using something like:
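The original snippet here is missing; given the later mentions of prepareScene in this thread, it was presumably along these lines (a sketch, assuming a running ShowBase app):

```python
# Assumes a running ShowBase app with models already parented under render.
def preload(render, win):
    # Upload the textures and vertex buffers for everything under `render`
    # to the GPU now, instead of lazily when each model first becomes visible.
    render.prepareScene(win.getGsg())

# e.g. call once while a loading screen is shown:
# preload(base.render, base.win)
```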


Thanks for that tip, I use that already and it makes a difference but does not fix it all.

I have tried LODNodes but they didn't give me the expected results. If I just make my models disappear at a certain zoom level using LOD, I only gain 2 fps. I guess my bottleneck is still the number of nodes, so I have to find ways to reduce it. I know about the rigid body combiner and flattenStrong() etc…

Any other tips are welcome about handling many nodes which are possibly out of view but likely to return.

It could be helpful to use PStats to analyse what is causing the lag.

Are you using the shader generator? There was a bug preventing prepareScene/premungeScene from working with automatic shaders, but it’s fixed in the latest development version.

Normally Panda doesn’t unload objects which are out of view. This would only happen if the GPU is starved of resources.

Yes, I am using render.setShaderAuto(). I am on 1.9.4, so I think I will see some improvement in the next release version.

I am also using PStats and I think I have too many Geoms. I'm making an airline manager game, and in this view the scene consists of an airport with aircraft on it. With 35 aircraft I have about 900-1000 Geoms; this seems too many, and unnecessary as well. Plus, I would like to have 100 aircraft in my scene, which would be too much even for a high-end gfx card.

I am having a specific problem now I could use help with.

Those aircraft are composed out of different models to get variation between aircraft types:

  • The fuselage and tail are loaded from one main model file
  • If the aircraft type has variations, the wings and engines are loaded from a different model file
  • A couple of other nodes, like the nose wheel and lights, which are animated

With the first two, loading the basic model (fuselage/tail/wings/engines), I have this issue:

I compose objects like this:

aircraft_model = aircraft.attachNewNode('aircraft_model')

ac_mainfile = loader.loadModel("models/aircraft/" + model_filename)

if variation_in_types:
    wing_engine_mainfile = loader.loadModel("models/aircraft/" + wing_engine_filename)
else:
    wing_engine_mainfile = ac_mainfile




This is the result of the .analyze():
2 total nodes (including 0 instances); 0 LODNodes.
0 transforms; 0% of nodes have some render attribute.
1 Geoms, with 1 GeomVertexDatas and 1 GeomVertexFormats, appear on 1 GeomNodes.
1631 vertices, 1631 normals, 0 colors, 1631 texture coordinates.
GeomVertexData arrays occupy 90K memory.
GeomPrimitive arrays occupy 11K memory.
1779 triangles:
0 of these are on 0 tristrips.
1779 of these are independent triangles.
0 textures, estimated minimum 0K texture memory required.

2 total nodes (including 0 instances); 0 LODNodes.
0 transforms; 0% of nodes have some render attribute.
3 Geoms, with 1 GeomVertexDatas and 1 GeomVertexFormats, appear on 1 GeomNodes.
2979 vertices, 2979 normals, 0 colors, 14895 texture coordinates.
GeomVertexData arrays occupy 536K memory.
GeomPrimitive arrays occupy 19K memory.
3213 triangles:
0 of these are on 0 tristrips.
3213 of these are independent triangles.
0 textures, estimated minimum 0K texture memory required.

The thing is that both aircraft have 1 GeomNode, but they have a different number of Geoms. The first aircraft in this example is loaded from 1 file, the second from two. Some aircraft even go up to 5 Geoms…
Does anybody know how to solve this?

I also have a question about the 3rd part, the animated objects. I think I could add them to a separate rigid body combiner node, because they are all low-poly objects. If, just to test, I put my aircraft entirely in a rigid body combiner, my animation time in PStats dominates the frame time. I figured this could be because the entire aircraft is animated, so Panda has to recalculate the animation for the entire aircraft model (4000 polys instead of 50) every frame. If I'm drawing the right conclusion, is it possible to make the rigid body combiner calculate animations relative to the aircraft instead of relative to render?
If not I could keep the aircraft model and the rigid body combiner separate, but I would like to know how it works.

If you are using the shader generator, you may indeed gain a benefit from using 1.10. Besides the fact that prepareScene now affects the shader generator, I've done a bunch of work on improving the performance of applications that use the shader generator, especially when a great number of render states is involved. I'd be happy to hear about your experiences.

Panda can’t combine two Geoms together when they have different render states. The render state includes properties like colour, texture and material. So for most effective flattening, parts should share these properties as much as possible. That your model reduces to 3 Geoms probably reflects that there are three uniquely different render states active on parts of your model.

A common way to further reduce this is that instead of having separate textures for each part, you have one big texture, and you use UV mapping to assign each part to a particular area in that texture. This is a lot easier to do during authoring than at runtime, which is why flattenStrong() does not attempt to do this.

Please note that the rigid body combiner has no benefits over flattenStrong() when applied to static parts. It is only useful for systems that have a lot of nodes that move independently, and it does not perform very well for larger meshes, although it may be worth a try if alternatives fail.

What exactly do you mean by “animated objects”? If you are using skeletal animation, you may get a benefit from using hardware accelerated animation, which is new in 1.10. There may also be a benefit for the rigid body combiner, but I’ve never tested that.


Thanks for your response again.

Are there any downsides to using 1.10 other than the obvious one that some bugs might be present? I will try this soon anyway.

For the flattenStrong part, that is the odd thing: those objects are supposed to have the same textures/materials/colors. I think I will check whether it helps to reset all those properties on the object before flattening. It could also be that my mate accidentally included a texture reference or applied a wrong material, but if I look at a couple of eggs this only happens rarely. The weird thing, if you ask me, is that more Geoms appear specifically when a model is loaded from 2 or more eggs.

About the animated objects: those are mainly the lights and the nose gear of the aircraft. Lights are a square plane model (I might as well have used a texture card) with a transparent texture on it. Some of those lights flash; a LerpFunc takes care of that. I now use show() and hide() to flash them, which will not work in the rigid body combiner, but I could scale them as well to make them disappear and re-appear. To get the transparency I use light.setTransparency(TransparencyAttrib.MAlpha); I can't get this to work with the rigid body combiner, maybe this is not possible?

My other view consists of a globe with more than 1000 low-poly objects (cities) on it, which light up when a texture is applied on mouse roll-over; the objects have to stay separate for that. To do that I'm using a rigid body combiner per field of cities. The fields behind the globe don't have to be rendered, and the number of fields in view is only 20 nodes or something. That works fine!

So for the airport I figured I'd put all those light meshes I've animated on the aircraft (all in LerpFuncs) into either one rigid body combiner, or one rigid body combiner per aircraft. This is quite interesting for me, as it might save up to 400 nodes on a busy airport.

Any tips to do this more efficiently are welcome!

The only downside I can think of is that there is no reliable deployment pipeline yet for 1.10. A lot of progress has been made on the new system, deploy-ng, but some parts of it are still untested or unimplemented. But if you need to ship a game tomorrow using pdeploy, 1.9 might be a better choice.

Otherwise, I certainly recommend upgrading to 1.10. If you find any downsides, let me know and we'll fix them as soon as possible. :)

Ah, by default, the loader inserts a ModelNode at the top, which prevents flattening multiple eggs together. You can prevent this by calling clearModelNodes() before your call to flattenStrong().

As for lighting up a part of the model, you could do this by using a mask texture mapped to the model that is white where the lights are and black where they are not, and assigning it to the whole model using the TextureStage.MBlend mode. Then you can modulate the value of that texture at runtime using TextureStage.setColor. This may or may not produce a convincing effect for your needs. (You could reuse that same mask texture as a glow map for postprocessing glow effects.)

Another approach might be to implement lights using point sprites, by creating a GeomVertexData yourself, where each vertex represents the location of an individual light. This is certainly efficient and can be very convincing if you have a lot of small lights, but it is more complex to implement.