This problem is, in a sense, a continuation of what was reported a while ago in this thread: [url]Problem with rapid memory build-up[/url]. I am using OnscreenImage to display successive images loaded from disk, one image at a time. Only one OnscreenImage object is in use, and I change images by calling its setImage() method, to which I pass a string with the full path to the image. With unnecessary details omitted, the example looks like this:
imageObject = OnscreenImage()
while True:
    for i in range(720):
        imgFileName = imagePath[i]  # a string with the full path
        imageObject.setImage(imgFileName)
        taskMgr.step()
Such a fragment produces a ramp-like accumulation of memory used by the Python process (up to 2.5 GB in my case), after which it plateaus. The amount of memory used corresponds to what would be required to hold all 720 images at once. So it appears that imageObject.setImage(), while allocating memory for the new image, does not release the memory that was allocated for the previous one.
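My working theory (just a guess on my part) is that some layer is caching each texture by its file path, so every distinct image stays resident no matter what I do with my own references. In pure Python, the pattern I suspect would look like this toy sketch (hypothetical names, not the actual Panda3D API):

```python
# Toy model of a filename-keyed cache: each distinct path stays
# resident until explicitly evicted, so memory grows with the
# number of unique images loaded, not with the number in use.
class ToyTexturePool:
    def __init__(self):
        self._cache = {}  # path -> fake "texture" payload

    def load(self, path):
        # Return the cached payload if present; otherwise "load" it.
        if path not in self._cache:
            self._cache[path] = bytearray(1024)  # stand-in for pixel data
        return self._cache[path]

    def resident_count(self):
        return len(self._cache)

pool = ToyTexturePool()
for i in range(720):
    tex = pool.load("frame_%03d.png" % i)
    tex = None  # dropping my reference does not shrink the cache

print(pool.resident_count())  # all 720 payloads are still resident
```

If something like this is going on inside the engine, it would explain why the plateau matches the total size of all the images.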
After reading the thread mentioned above, I tried all the advice that helped to solve it:
- To pass the image path, I used the Filename class instead of a plain string
- I use a distinct Texture object to pass as the image to OnscreenImage
- I create the OnscreenImage object anew for each new image
- Before loading a new image, I run a batch of cleanup (perhaps sometimes redundant) statements, trying to release anything that might still be in use
The accumulated solution looks like this (again, with unnecessary details omitted):
while True:
    for i in range(720):
        imgFileName = Filename(imagePath[i])  # Filename instead of a plain string
        tex = loader.loadTexture(imgFileName)
        imageObject = OnscreenImage(image=tex)
        taskMgr.step()
        imageObject.clearTexture()
        imageObject.removeNode()
        imageObject = None
        tex = None
        imgFileName = None
Despite all those precautions, memory still keeps climbing up to 2.5 GB. So I'm stuck.
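One thing I have not tried yet: if the culprit is a path-keyed cache inside the engine (my guess; perhaps something like TexturePool.releaseTexture(), though I haven't confirmed that's the right call), then dropping my own references would never be enough; the entry would have to be evicted from the cache itself. Continuing with a toy pure-Python analogy (hypothetical names, not the Panda3D API), the missing step would look like:

```python
# Same toy filename-keyed cache, but with an explicit eviction step.
class ToyTexturePool:
    def __init__(self):
        self._cache = {}  # path -> fake "texture" payload

    def load(self, path):
        if path not in self._cache:
            self._cache[path] = bytearray(1024)  # stand-in for pixel data
        return self._cache[path]

    def release(self, path):
        # Explicit eviction: without this, the payload stays resident forever.
        self._cache.pop(path, None)

    def resident_count(self):
        return len(self._cache)

pool = ToyTexturePool()
previous = None
for i in range(720):
    path = "frame_%03d.png" % i
    tex = pool.load(path)
    if previous is not None:
        pool.release(previous)  # evict the frame we just replaced
    previous = path

print(pool.resident_count())  # only the current frame stays resident
```

If someone can confirm whether Panda3D's texture loading works this way, and what the correct release call is, that would probably settle it.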
I am still pretty new to Panda3D (I installed it only a few days ago), so I am clearly missing something, perhaps quite simple, that an expert's eye can catch easily. Please help if you see what's going on here, and thanks in advance.