In these test cases I am loading egg files, though they are bam files in final deployment.
Upgrading to 1.6.2 is, at this point, not an option. I can’t really explain better, or I would.
I am still pretty sure the remaining memory is texture memory.
I have been failing to get a handle to the textures applied to the model. Calling findAllTextures() on a variety of NodePaths returns an empty collection (including actor.getGeomNode(), which is where I thought I should find them, since my textures are loaded by the egg file).
I will do some more testing on those Config.prc variables. What I suspect is happening is that they are interacting with some of the fancy render settings I have set up. In fact, I'm almost positive it will be the geom cache, but let me get back to you on that.
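If the geom cache is the suspect, it can be ruled in or out from Config.prc. A sketch: geom-cache-size is the relevant variable (I believe it defaults to 5000 entries), and setting it to zero disables the cache entirely for the duration of the test.

```text
# Config.prc -- disable the munged-geom cache for the memory test
geom-cache-size 0
```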
I think you should make these tests with bam files.
Also, you should try disabling textures for these memory tests, as I suggested above, to prove whether the culprit is indeed texture memory or whether it is something else. Speculation is fruitless, especially when proof is so easy to achieve.
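Concretely, the suggestion above is the textures-header-only variable. A Config.prc sketch: with this set, the loader reads only each texture's header and substitutes a tiny stand-in, so texture memory drops to almost nothing.

```text
# Config.prc -- load texture headers only; each texture becomes a tiny stand-in
textures-header-only 1
```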
The scene loads without showing anything, which made me assume it was working, but the memory usage is identical to loading with textures enabled. A fresh load with or without that variable in the config file uses 131 MB of RAM. The test cases were otherwise the same, except that I couldn't see anything.
I’m fairly certain it is both textures and models that are the problem. What I can’t tell is whether the 5 megabytes I get back when I load another model is reclaimed from the TexturePool or the ModelPool, or some mix of both.
I will re-run everything with BAM files.
Any ideas for grabbing the texture objects from the actor?
I don’t understand why you couldn’t see anything with textures-header-only. It should have loaded your models normally, with a 1x1 blue texture in place of every original texture. So, you should have seen a blue model. Are you doing some funny rendering or shader tricks that would cause it not to render in this case?
I also don’t understand why findAllTextures() isn’t returning anything, unless you really have no textures on your model. Is it actually loading your textures? Does the model appear textured onscreen?
If you are seeing the exact same memory bloat with and without textures-header-only (assuming that variable is working at all), it sure does seem that textures have nothing to do with the problem after all.
The next thing I would try is to ditch the Actor class and just use loader.loadModel(), to make sure the Actor isn’t doing anything funny. You can loader.loadModel() all of the animation files as well. That way you can also narrow the problem down between the model and the animation files.
Finally, I suggest trying to run loader.loadModel() in a loop. Load the model, then release it thoroughly, and repeat, say 10,000 times. Does the memory usage keep growing at a constant rate, or does it cap out? The purpose here is to determine if you have found a genuine memory leak, or if there are just caches filling up that we haven’t found yet.
With textures-header-only the whole screen was blue. I am pretty sure that is because I have a full-screen card that is usually mostly transparent and almost always near the front of the scene, though it could be a variety of other things as well.
The findAllTextures() call troubles me as well. The models are definitely textured: I can scroll the textures, cycle them through an animated texture loop, and so on. I think I need to dig into the TextureStage/Texture definitions some more and see whether I am just confused about something. The textures are there and functioning correctly; I just can’t get a handle to them easily.
I had assumed that with textures-header-only 1 the overall memory usage for the app would be lower on first start. It’s not, which makes me worry about this test. Maybe my assumption is wrong, however. I do see everything as blue, as you described.
I am pretty sure the Actor class is already just being handed files from the loader. This is definitely the case for the model; I will have to double-check the animations.
from panda3d.core import ModelPool, TexturePool

for x in range(50):
    model = loader.loadModel(self._mesh.getPath())
    loader.unloadModel(self._mesh.getPath())
    print('Collecting:', ModelPool.garbageCollect(), 'models from the pool.')
    print('Collecting:', TexturePool.garbageCollect(), 'textures from the pool.')
This adds 50 megabytes of memory usage, so I think I’m looking at my problem. The same loop exercising textures showed no such growth. I am going to have to leave it here.
I’m not quite sure I understand what the conclusion is. Where did the 50 megabytes come from? Something in loadModel()? Does it make a difference whether you enable textures or not? Does the particular model you use make a difference? Does it lose 100 megabytes if you load your model 100 times?
Sorry, I was a little vague there. There wasn’t much of a conclusion. =( The problem looks to be much bigger than I have the time for. I may get to come back to it, but at this point I think I have to cut my losses on this work.