That’s quite a complex program; I find it difficult to follow everything it is doing. For the purposes of testing for leaks, wouldn’t it generally be better to use a much simpler program, for instance, one that runs a simple loop of loading an object (or 100 objects) and then freeing the previous object (or 100 objects)?
In your case, I tracked down the memory leak to the fact that your SpEfs_BldSplsh class stores a pointer to a task that it spawns, as self.DwnDriftBld. This task, of course, in turn keeps a pointer to the class method, self.DrifDwnCtrl, which in turn keeps a pointer to self, causing a circular reference, and therefore creating an object that can never be freed. The solution is to add “del self.DrifDwnCtrl” in the destroy() method for this class.
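The pattern is easy to reproduce outside Panda. Here is a minimal pure-Python sketch (the class and attribute names are illustrative stand-ins, not the actual code from the program above) showing how storing a bound method on self creates a cycle, and how deleting that attribute in destroy() lets plain reference counting reclaim the object:

```python
import gc
import weakref

class Splash:
    """Illustrative stand-in for a class that stores one of its own bound methods."""
    def __init__(self):
        # Storing a bound method on self creates a cycle:
        # self -> self.ctrl -> bound method -> self
        self.ctrl = self.drift_ctrl

    def drift_ctrl(self):
        pass

    def destroy(self):
        # Breaking the cycle lets reference counting free the object.
        del self.ctrl

gc.disable()  # rule out the cycle collector, so we see pure refcounting behavior

# Without destroy(): the cycle keeps the object alive after the last outside reference dies.
s = Splash()
ref = weakref.ref(s)
del s
leaked = ref() is not None   # the cycle is still holding the object

# With destroy(): the cycle is broken, so refcounting frees it immediately (in CPython).
s2 = Splash()
ref2 = weakref.ref(s2)
s2.destroy()
del s2
freed = ref2() is None

gc.enable()
```

The same reasoning applies to any attribute that (directly or through a task) leads back to self.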
Note, by the way, that all of your objects inherit from ShowBase. You should never do that. There should always be exactly one instance of ShowBase in your application, never more than one. Each time you instantiate a new ShowBase, you’re reinitializing many low-level things within Panda, and no doubt causing all sorts of terrible things to happen. In fact, importing DirectStart is sufficient to create a single, global ShowBase. Since you are already importing DirectStart, there’s no need to inherit from ShowBase at all. Inherit from DirectObject instead; that’s probably what you meant to do.
Still, since you never upcall to the base class __init__() method, you never actually instantiate any of these other ShowBases, so that’s probably not causing you any real harm in this particular example.
No, there isn’t a need to use ShowBase if you use DirectStart. However, you’ll find a large percentage of the Panda community (myself included) prefer to use ShowBase instead of DirectStart, since ShowBase encourages you to pass your ShowBase instance around manually (as a parameter), instead of using the global “base” variable DirectStart creates. This leads to cleaner, more robust, and more readable code.
Yeah, it’s a nuisance. It’s really a problem with Python more than Panda, though, but that doesn’t make it less of a problem.
Still, it’s not just any pointer that causes this problem; it’s only circular pointers, and you learn to watch out for things like that. Storing a pointer to your own task is a circular pointer. Using self.accept() to call your own class method is a circular pointer. But most other things aren’t. So if you create any circular pointers, you have to delete them explicitly in your destroy() method; you don’t have to delete everything else, though.
And there are tools to help you track these things down. Python has the gc module, and objects that it can’t delete are supposed to be placed in gc.garbage, for you to inspect later. Also, Panda has the MemoryUsage class, which can tell you which Panda objects are accumulating; that can also give you a clue (though it’s usually not as helpful as knowing which Python objects are holding them, which gc provides).
No. From time to time you will make calls into the global ShowBase instance, which is called base. For instance, you might want to reference base.win to do something with the default window. But you will never need to invoke the ShowBase class by name, and there’s no reason to import ShowBase.
Like in the program where you solved the memory leak: if I wanted to count the number of all those objects being created within so many seconds, to get a total for that amount of time, would that be possible with the P3D engine?
As I said, you can use the Python gc module to find some kinds of memory leaks. Use gc.collect(), and then check gc.garbage. If Python has detected any leaks, they will be listed in gc.garbage.
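For example, here is a minimal sketch of that workflow. One caveat: in modern Python 3, cyclic garbage is collected even when it has a __del__ method, so gc.garbage normally stays empty unless you set the DEBUG_SAVEALL flag (in older Python 2, uncollectable cycles landed there automatically):

```python
import gc

class Leaky:
    """Illustrative class that deliberately forms a reference cycle."""
    def __init__(self):
        self.me = self   # self-reference: refcounting alone can never free this

gc.collect()                     # flush any pre-existing unreachable objects
gc.set_debug(gc.DEBUG_SAVEALL)   # route everything the collector finds into gc.garbage

obj = Leaky()
del obj                          # now unreachable, but the cycle blocks refcounting

found = gc.collect()             # returns the number of unreachable objects found
suspects = [o for o in gc.garbage if isinstance(o, Leaky)]

gc.set_debug(0)                  # restore normal collector behavior
gc.garbage.clear()
```

Inspecting the objects in gc.garbage (their types, attributes, and referrers via gc.get_referrers()) is usually enough to identify which class is holding the leak.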
That doesn’t work for all kinds of memory leaks, though. I tracked yours down with the MemoryUsage class. This is a bit tricky to use, but the gist of it is:

1. Set “track-memory-usage 1” in your Config.prc file (this will slow things down a whole lot).
2. Load up your app with python -i, then break into it with control-C.
3. Type MemoryUsage.freeze() to remove all previously-allocated objects from the list and show only objects allocated from this point on.
4. Type run() to let the application run for a while.
5. From time to time, break into it again with control-C and type MemoryUsage.showCurrentTypes() to list the objects allocated since the call to freeze(), by type. Look for upward trends.

In your case, I saw that PythonTask was increasing without limit, which told me that something holding a pointer to a task object was leaking. (MemoryUsage doesn’t track the Python objects, only the Panda objects, so it couldn’t tell me that it was a SpEfs_BldSplsh object that was holding the task object; I had to figure that out myself.)
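For reference, the Config.prc change is a single line (where your prc file lives varies by installation):

```
# Enable Panda's per-type memory tracking (expect a significant slowdown).
track-memory-usage 1
```

Remember to remove it again when you’re done hunting the leak, since it affects performance across the whole application.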
With a bit more effort, you can find out more details about the leaked Panda objects. For instance, I was able to look at each of the task objects and see that they were all named “DriftBldsp”, which is what pointed me at the SpEfs_BldSplsh class. To do this, you need to call MemoryUsage.getPointersOfType(). I leave the rest of this interface as an exercise; it’s documented in the API ref.
Also note that simply adding “track-memory-usage 1” to your Config.prc will allow Panda to report the distribution of allocated objects by type in PStats, when you drill into the system memory category. That may be an easier way to visually see leaks by object type.
No, it stops the task from running, but the task object you have created persists until you explicitly “del self.ATask”, or you simply reassign it like “self.ATask = None”.
It has to persist until you reassign it or del it, because that’s the way Python works. You have assigned self.ATask to a task object. That member will remain assigned to a task object until you assign it to something else.
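A minimal pure-Python sketch of that behavior (Task and Holder here are illustrative stand-ins for a Panda task object and your class, not real Panda API; the immediate-free behavior relies on CPython’s reference counting):

```python
import weakref

class Task:
    """Stand-in for a task object (illustrative, not Panda's Task)."""
    pass

class Holder:
    def __init__(self):
        self.ATask = Task()   # mimics storing a task: self.ATask = taskMgr.add(...)

h = Holder()
ref = weakref.ref(h.ATask)

# Stopping the task elsewhere doesn't free the object:
# the attribute still refers to it, so it persists.
alive_before = ref() is not None

h.ATask = None                # or: del h.ATask
alive_after = ref() is not None   # last reference dropped; the object is gone
```

Either reassignment or del works; the point is simply that the member keeps the object alive until you change what it refers to.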