How to use AsyncTask

I’m trying to subclass AsyncTask to play with threaded tasks:

import math
from direct.task import Task
from pandac.PandaModules import AsyncTask, AsyncTaskManager

class MyTask(AsyncTask):
    def __init__(self):
        AsyncTask.__init__(self)

    # Waste of cycles
    def func(self):
        glop = range(0, 1700)
        res = 1
        for i in glop:
            for j in glop:
                res += math.sqrt(j)
            res = math.sqrt(res)
        return Task.cont

    # I'm not sure which one I should override
    def doTask(self):
        return self.func()

    def do_task(self):
        return self.func()

tm = AsyncTaskManager('test', 1)
t = MyTask()

and I get this odd error:

>>> tm.add(t)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: AsyncTaskManager::add() argument 1 must be AsyncTask, not AsyncTask

I’ve looked at asyncTaskManager.cxx, which subclasses AsyncTask, but I don’t understand how I should translate that into Python.

Sorry, the AsyncTask class is intended for C++ use, not Python. If you want to write threaded tasks in Python, you’ll have to write your own task manager for this. Fortunately Python makes this relatively easy.


OK, so I’ll go on with Python’s thread and threading modules.
Thanks for answering!

Actually, I wasn’t aware that Python threads can’t run in parallel (because of the Global Interpreter Lock).
So if I run a heavy function in a separate Python thread (e.g. with thread.start_new_thread) on a dual-core CPU, it won’t run any faster.
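A quick experiment makes this visible (a minimal sketch in modern Python; the exact timings depend on your machine, but the threaded version of a CPU-bound loop is typically no faster than the serial one):

```python
import math
import threading
import time

def burn(n):
    # CPU-bound work: pure Python arithmetic, no I/O, so the GIL
    # is held for essentially the whole loop
    res = 0.0
    for j in range(n):
        res += math.sqrt(j)
    return res

N = 200000

# serial: the two chunks run one after the other
t0 = time.perf_counter()
serial = [burn(N), burn(N)]
serial_time = time.perf_counter() - t0

# threaded: the two chunks run in two threads
results = [None, None]

def worker(i):
    results[i] = burn(N)

t0 = time.perf_counter()
threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded_time = time.perf_counter() - t0

# Only one thread can execute Python bytecode at a time, so
# threaded_time is usually no better than serial_time here.
```

The results are identical either way; only the wall-clock time fails to improve.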

What if I run the task in an AsyncTask in C++?

Actually, I have a special collision scenario: a single sphere colliding with a static, small, quite detailed ground. I need the detail on the ground for specific physics effects, and the point is that the CPU limits the speed a lot.
I thought I could give a try to running the traverse() function in a separate thread, considering that:

  • the small ground is a single static Geom and there is only one sphere to test
  • the sphere may not move “too fast”, i.e. I can assume, for collision purposes, that it hasn’t moved for up to 3 frames

If the ground is not modified by anything, is there a need to lock access to it?

It’s quite frustrating to see the framerate limited by a CPU running at only 50%.

Python supports multi-threading, but most applications and/or engines are not set up for it, so you have to run your own threads, making sure that they don’t interfere with the main thread. This is probably quite tough to do with Panda.

If your framerate really is limited by collision detection, check out this thread, which talks about improving collision performance (about halfway down):

Right. Note that going fully parallel is not always the best way to go fast. Most algorithms are fundamentally single-threaded, and trying to parallelize them can overly complicate things or lead to incorrect behavior. Sometimes it even ends up running more slowly than the single-threaded version.

Multiprogramming is hard. Things that are intuitive and easy in a single thread suddenly become counterintuitive and mysterious in parallel threads. It’s very easy to accidentally write race conditions or deadlocks when you write for multiple threads, and very hard to debug these sorts of things when they do happen (and they will happen).
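As a tiny illustration in plain Python (nothing Panda-specific): the unlocked increment below does a read-modify-write that another thread can interrupt, so it can silently lose updates, while the locked version cannot.

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        tmp = counter   # read ...
        tmp += 1        # ... modify ...
        counter = tmp   # ... write: another thread may have updated
                        # counter in between, and its update is lost

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:      # the lock makes the read-modify-write atomic
            counter += 1

def run(target, n=100000, nthreads=2):
    global counter
    counter = 0
    threads = [threading.Thread(target=target, args=(n,))
               for _ in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

# run(unsafe_increment) may return anything up to 200000; the race is
# intermittent, which is exactly what makes it so hard to debug.
# run(safe_increment) always returns 200000.
```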

All that being said, we are currently working on a fully multithread-capable version of Panda. Ideally, we’d like to be able to run the graphics rendering fully in parallel with the Python application. If we are successful, then the application programmer can still write a simple, single-threaded Python application, and watch with satisfaction as it runs at 100% utilization on both CPU cores.

But that lofty goal may be some ways off still. In the meantime, the current version of Panda as distributed on the website is not compiled to be thread-safe, so you should not attempt to make any calls to Panda in two different threads (for instance, you should not run collisions in one thread while you are moving nodes in another thread), or you will certainly crash eventually.

It is possible to download and build your own custom version of Panda that will be thread-safe. It’s just a matter of turning on the appropriate flag when you build. This will, of course, add a bit of additional runtime overhead to manage the mutexes, and it will make everything run a little slower, but it will also allow you to perform multiple interactions with Panda in separate and parallel threads.
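For the curious, in the ppremake-based build this is a define you set in your Config.pp before building. The exact name below is from memory, so double-check it against the Config.pp in your own source tree:

```text
// in Config.pp, before running ppremake:
#define HAVE_THREADS 1
```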

One easy application for this, for instance, is to load models asynchronously, so that you can continue rendering while you are loading models in the background.
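The general shape of that pattern, outside of Panda, looks something like this (a sketch using Python’s threading and queue modules; load_model here is a hypothetical stand-in for an expensive disk load, not a real Panda call):

```python
import queue
import threading
import time

def load_model(path):
    # hypothetical stand-in for an expensive disk load
    time.sleep(0.1)
    return "model:" + path

done = queue.Queue()

def background_load(path):
    # runs in the worker thread; hands the result back via the queue
    done.put(load_model(path))

threading.Thread(target=background_load,
                 args=("environment.egg",), daemon=True).start()

# the main thread keeps "rendering" while the load runs
frames = 0
while done.empty():
    frames += 1          # one iteration stands in for one rendered frame
    time.sleep(0.01)

model = done.get()       # pick up the finished model on the main thread
```

The key point is that only the worker thread touches the loader, and the main thread only ever sees the completed result.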

It is also theoretically possible to run collisions in a separate thread. Of course, how your application deals with collisions that happen asynchronously to your movement commands is another can of worms altogether. Probably you don’t really want to run collisions asynchronously, since that would introduce really weird artifacts.


Thanks David,

If I may quote you:

(read in the thread Arkaein linked, thanks!)

I think this is the real problem, and I’ll try to fix it.

You talk about running the graphics rendering and the Python application in parallel. If I remember my DirectX experience well, the vertex buffer and index buffer are filled or modified and then uploaded to the graphics card memory. I think that is done asynchronously, because I had to lock the buffers when accessing them in write mode (so I guess the renderer is accessing them in parallel). It’s the same with render-to-texture (i.e., downloading data from the graphics card memory): you have to lock all the graphics resources.

If I understand correctly, the current release of Panda (1.3.2) makes the application wait during such transfers?
I know it currently performs the mathematical transformations after the application tasks (sequentially, not in parallel), and I have understood that you are working on making that parallel; I’m only wondering about the data transfers.
I don’t know how it works with OpenGL, and I admit that the sequence of gl* function calls still intrigues me.

About multiprogramming, you are certainly right. The problem with games is that you don’t do anything else on your computer while playing, so a multi-core CPU isn’t really useful there, although it is when you work with classical desktop apps.
The industry seems to be taking the multi-core route, and I’m wondering how to take advantage of it.

About loading in the background, it seems to be already implemented in PandaLoader; am I wrong?

PS: I would like to encourage you as much as I can in your work, and to thank you for all the detailed answers you give on the forum.

OK, let me clarify a bit: Panda does work in parallel with your graphics card operations, such as data transfer, as it stands today. Panda will issue a bunch of drawing commands to the graphics card, and then go on to begin the next frame, while the graphics card is still working on the drawing commands we sent it last frame.

I’m talking about parallelizing CPU-based operations. That is, the actual issuing of graphics commands is currently single-threaded with your application: your Python application does some stuff, and then it calls base.graphicsEngine.renderFrame() (this happens in the igloop task), and while Panda is processing the frame and issuing commands to the graphics card, your Python application is waiting. This is why you see only 50% utilization on a dual-core machine. (You don’t see the statistics on what your graphics card is doing while your Python application is running).
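In other words, the frame loop is shaped roughly like this (illustrative names only, not Panda’s real internals):

```python
# app logic and rendering run back to back in one thread,
# so neither ever overlaps the other
calls = []

def run_app_logic(frame):
    # your Python game logic for this frame
    calls.append(("app", frame))

def render_frame(frame):
    # stands in for base.graphicsEngine.renderFrame(); while this
    # runs, the Python application above is simply waiting
    calls.append(("render", frame))

for frame in range(3):
    run_app_logic(frame)
    render_frame(frame)

# calls strictly alternates app/render: only one of the two runs
# at any moment, which is why one core sits idle
```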

PandaLoader has an interface to load models in the background, but it will not actually load them this way unless Panda has been compiled with thread support enabled.


OK, thank you, that’s clear to me now.