freeing actors/models/textures

Hi - Thanks in advance for any helpful comments on the following:

I have an actor - I create it, use it, and then need to free the memory it is using (I’m hitting the 2-gigabyte process maximum on Windows… and no, I don’t have a memory leak). So I call delete() and removeNode() and track down all of the handles to it. The total active memory for the process doesn’t decrease, so I assume that I have missed some references. I finally import sys and use the getrefcount() function to make sure I have all references handled. It returns 2. My code looks like this (roughly):


print sys.getrefcount( actor )

actor = None

My understanding of sys.getrefcount() is that a return value of 2 means that by the end of my function, actor will have 0 references to it. This should trigger garbage collection (and yes, I forced collection about 100 times after this to make sure). Maybe I am confused on this - BUT if I am not confused on this, here are my questions.
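As a sanity check outside Panda, a bare CPython snippet shows the off-by-one I’m relying on: getrefcount() counts the temporary reference created by passing the object in as an argument, so a lone handle reports as 2.

```python
import sys

obj = object()
# getrefcount() counts the temporary reference created by passing
# obj in as an argument, so a single live handle reports as 2.
count = sys.getrefcount(obj)
print(count)  # 2 in CPython: 'obj' itself plus the argument temporary
```
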

Should I expect my memory usage to decrease after unloading an actor or will the texturepool/modelpool hold on to the memory it has taken already, and simply re-use it? (I can’t find any evidence of this being the case, but I am stumped. )

Thanks again.

Also gc.get_referrers(actor) gives me just 1 item -> the actor handle being passed to it.

I am anticipating that there will be separate problems with textures, animations, and the model, but so far none of these are freeing themselves.

there is the command:


which removes models from memory. Not sure if animations or actors need special handling.

Sorry I forgot to put that in my first post! I also use that command after all the others, as was suggested in another forum post. =(

Panda 1.3.2 FYI

a tiny bit more info:

sys.getrefcount(actor) returns 2. ( the handle I have, and the handle sys is holding??? )

gc.get_referrers(actor) gives me a 1 item list containing an object of type ‘frame’

then I garbage collect and check the gc.garbage list, which is empty. Memory usage stays constant.

My understanding of ‘frame’ is that it is the traceback routine.

I then sit for another couple of minutes and watch more iterations of the garbage collection occur, but memory usage is still constant.
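For reference, this is what I’d expect from the gc module on an ordinary reference cycle (a plain-CPython sketch, nothing Panda-specific): collect() reclaims the cycle and gc.garbage stays empty, which matches what I’m seeing on my end.

```python
import gc

class Node:
    pass

a = Node()
b = Node()
a.other = b
b.other = a          # a reference cycle that refcounting alone can't free
del a, b

found = gc.collect() # number of unreachable objects the collector found
print(found)         # at least 2: the two Node instances (plus their dicts)
print(gc.garbage)    # [] - nothing uncollectable
```
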

Anyone have any ideas? Thanks!

also now digging into the ModelPool and individually releasing the model, and every animation of the model’s, from the pool, then calling garbage collection on the pool and in Python. I get back 800 KB now, out of the 20 megs that are grabbed when I allocate the actor and its animations + textures. I am pretty certain the 800 KB is the model and not the animations - and that it only happens when I do the garbage collection in the ModelPool, not from the release command.

One last bit of info -

When I call garbageCollect() on the ModelPool it collects 47 items, which is exactly the number of animations + the model that I have just unloaded with the loader and released directly from the pool. I still don’t get any memory back from this.

Is delete() equivalent to destroy() in 1.3?

I was just looking at the code in my version.

for delete I have:

def delete(self):
    self.Actor_deleted = 1

All I could figure out from this was that cleanup() was a good fallback, so I started using that directly in my code - no change.

How big is the actor and its animations? Is it really a big enough chunk of those 2 GB that you would notice if it went away? What sort of objects are they?

It sounds like you’re doing everything right with regard to the actor and its animations, at least. You might need to clear the geom cache if its vertex table is large (GeomCacheManager.getGlobalPtr().flush()). It wouldn’t hurt to release all graphics objects too (the gsg’s releaseAll()). I’m not sure if both of these methods existed back in 1.3.2.

Panda does provide tools for researching memory usage in the C++ space, similar to Python’s gc tools; but it imposes a runtime overhead so it’s compiled out by default. This means you need to be able to build your own Panda in order to turn them on. It might be worth upgrading to the latest version of Panda anyway; 1.3.2 is pretty old.


Hi David - thanks for the response!

This is a trial run on a single actor; the final intent is to be able to free actors that will not be used again in the course of gameplay. That will free hundreds of megs of space… eventually (hopefully).

Will try the geomcache and gsg tricks - not holding my breath on them though +(

And finally I would LOVE to upgrade from 1.3.2, but I am quite sure I don’t have the time! =) Unless it is 100% backwards compatible? haha. Back with more in a second.

Also note that some of Panda’s memory usage is of the allocate-once-and-reuse-forever policy. This means that it never gets returned to the system, but can still be reused by Panda. So a better test of whether you have successfully freed the actor is to load a new actor that is the same size or smaller, and see if your memory usage increases at all.
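A plain-Python sketch of the same idea (tracemalloc here is just a stand-in for watching process memory; no Panda involved): freeing a big object and then allocating one of the same size doesn’t raise the peak, because the freed space gets reused.

```python
import tracemalloc

tracemalloc.start()
first = [0] * 1_000_000                          # stand-in for the first actor
first_peak = tracemalloc.get_traced_memory()[1]  # (current, peak) -> peak
del first                                        # "unload" it
second = [0] * 1_000_000                         # same-size load afterwards
second_peak = tracemalloc.get_traced_memory()[1]
tracemalloc.stop()
# the peak does not grow: the second allocation reused the freed space
print(second_peak <= first_peak * 1.1)  # True
```
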


GeomCacheManager flush - unavailable in 1.3.2
gsg releaseAll() - functions, but does not free any memory.

Panda Memory allocation:

I had thought this might be the case, and was hoping that it would be. I will run my test again, but when I first tried unloading an actor like this, then loading another actor, memory growth seemed equivalent to simply loading the second actor. Will double check.

Have you also dumped the TexturePool? That might be the biggest part of your memory.

Try setting:

geom-cache-size 0
transform-cache 0
state-cache 0
preload-textures 0
keep-texture-ram 0

in your Config.prc.

If you were using 1.6.2, you could also set:

max-independent-vertex-data 5242880
max-resident-vertex-data 52428800

to limit total memory usage by vertices to 55MB, for instance (5MB for independent vertex data plus 50MB resident).


Testing this out:

loading a modelA regularly = +13 megs ram usage
loading my test model, unloading it then loading modelA = +8 megs of ram usage

test model was +20 megs of usage

difference of 15 megs unaccounted for - loading modelA only came in 5 megs cheaper than normal.

I am less certain that my textures are being entirely hunted down and destroyed - so maybe that accounts for some of the 15. That’s my next spot to look, I guess.

I am garbagecollecting the texturepool, but I am not explicitly calling loader.unloadtexture and texturepool.releasetexture yet

I think those are both on my list for further work, as I do both of those for the anims and model.

Explicitly unloading a particular texture from the TexturePool before garbageCollect() would unload it actually risks consuming more memory, because it means the Texture object is still in memory somewhere, but you are breaking the pointer in the TexturePool. This means if you subsequently load a model that references the same texture on disk, it will then have to create a new Texture object.
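The hazard can be sketched with an ordinary dict standing in for the TexturePool’s path-to-Texture mapping (the cache, the load_texture function, and the “grass.png” path are all made up for illustration):

```python
# A dict standing in for the TexturePool's path -> Texture mapping.
cache = {}

def load_texture(path):
    # return the pooled object, or create and pool a new one
    if path not in cache:
        cache[path] = object()       # stands in for a loaded Texture
    return cache[path]

tex1 = load_texture("grass.png")
cache.pop("grass.png")               # "releaseTexture" while tex1 is still alive
tex2 = load_texture("grass.png")     # cache miss: a second copy now exists
print(tex1 is tex2)                  # False - two Textures in memory at once
```
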

You can test whether textures are your problem at all by putting:

textures-header-only 1

in your Config.prc file, which disables the loading of textures in the first place.



Ok - more questions and random tidbits!

Running with these settings:

geom-cache-size 0
transform-cache 0
state-cache 0
preload-textures 0
keep-texture-ram 0

instantly knocks me into a memory leak and I crash. So I can’t test this.

My unload process: unload every anim via actor.unloadAnims() + modelpool.releasemodel(); unload the actor model via actor.delete(), actor.removeNode(), loader.unloadmodel() and modelpool.releasemodel(); release every texture associated with the actor via loader.unloadtexture() and texturepool.releaseTexture(); then call garbageCollect() on both the ModelPool and the TexturePool, and garbage-collect in Python. (Seems too exhaustive, I know.) Then I check the Python refcount on the items, the gc.get_referrers() list, and the gc.garbage list. Everything looks like it should be released. (More than once.)

Actual memory usage:

Load TestActor + anims + textures = +20 megs usage
perform unload process described above = -800Kbytes usage
Reload same TestActor = +8 megs usage ( either models or textures are STILL held in the pool )

Load TestActor + anims + textures = +20 megs usage
perform unload process described above = -800Kbytes usage
Load NewActor = +10 megs of ram

Load NewActor = +13 megs of ram.

I don’t know if any of this helps… Additionally, I’ve tried the lengthy unload process above in many different configurations, most of them less redundant than as described. The only time I ever see any memory returned is when I call garbageCollect() on the ModelPool. That seems to be where the 800 KB comes from.

What! I don’t believe this. If anything, those variables should reduce apparent leaking, by disabling caches. Can you test it with some subset of these variables to determine which of them is the one that causes you to crash?

Are you loading egg files or bam files? Bam files are much better for memory analysis. Although the egg loader doesn’t leak to my knowledge, it does allocate and free lots of tiny objects during its processing, which can contribute to memory fragmentation - and that in turn limits the system’s ability to recover freed memory.

I really think you should try upgrading to 1.6.2. It’s not that different, API-wise, from 1.3. If nothing else, you could try these memory tests with that version and determine whether the memory management is better.