TaskChains

Hey, I just have a few questions on the new task chain setup. Mainly, I want to understand how they work. If I understand right, do task chains run under the task manager to make a new thread, or are they a new task manager altogether?

Would they actually help load things faster by loading two things at once?

I’m still a little lost on what causes deadlocking. Is it because a variable is in use, or because the CPU has to spend more time running the code?

And will this make use of multiple cores?

A task chain is a list of tasks. Prior to 1.6.0, the task manager contained a single list of tasks. Now, the task manager actually contains a set of task chains, and each of the chains is itself a list of tasks. There is one “default” task chain which is the same list of tasks that the task manager has always contained. If you don’t specify a task chain when you add a task to the task manager, it gets added to the default task chain. So if you don’t do anything specific to task chains, all of your tasks are on the default task chain.
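For example, the basic pattern looks something like this (just a sketch, assuming the standard taskMgr.add() and setupTaskChain() calls; the chain name “myChain” and the task function are placeholders):

```python
from direct.showbase.ShowBase import ShowBase
from direct.task import Task

base = ShowBase()

def myTask(task):
    # Does one frame's worth of work, then asks to be run again next frame.
    return Task.cont

# No taskChain argument: this task goes on the default task chain.
base.taskMgr.add(myTask, "taskOnDefaultChain")

# Declare a separate chain (no thread parameters yet), then add a task to it.
base.taskMgr.setupTaskChain("myChain")
base.taskMgr.add(myTask, "taskOnMyChain", taskChain="myChain")

base.run()
```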

Task chains are not necessarily related to threads. If you create a new task chain and don’t give it any thread-related parameters, it will run in the main thread with the default task chain, which means that all of the tasks in the default task chain will run, and then all of the tasks in your new task chain will run. So it doesn’t really buy you very much in this case.

The magic is if you specify that a task chain should be in its own thread, with numThreads = 1 on the setupTaskChain() call. This means that all of the tasks on this chain will be handled in parallel with the default task chain. This is something new that’s not been readily possible before. It’s also really dangerous and easy to write code that goes horribly awry.
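Roughly like this (again just a sketch; “backgroundChain” and the task body are placeholders):

```python
from direct.showbase.ShowBase import ShowBase
from direct.task import Task

base = ShowBase()

# Give this chain its own thread: its tasks now run in parallel
# with the tasks on the default task chain.
base.taskMgr.setupTaskChain("backgroundChain", numThreads=1)

def backgroundWork(task):
    # Anything touched here is also visible to code on the main thread,
    # so shared data needs to be protected (see below).
    return Task.done

base.taskMgr.add(backgroundWork, "backgroundWork", taskChain="backgroundChain")

base.run()
```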

The two things that can go wrong with multiple threads (this is called “multiprogramming”) are race conditions and deadlocks. A race condition is code that behaves randomly according to which of two threads happens to run first, and it is really hard to discover and even harder to debug. You protect against race conditions by using synchronization primitives like mutexes, condition variables, and semaphores. However, when you use these things, you introduce the possibility of a deadlock, which means you have multiple threads all waiting for each other to finish, and nothing happens.
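As a generic illustration (plain Python threading here, nothing Panda3D-specific), this is what protecting shared data with a mutex looks like:

```python
import threading

counter = 0
counter_lock = threading.Lock()

def increment_many(n):
    global counter
    for _ in range(n):
        # Without the lock, "read, add one, write back" from two threads
        # can interleave and lose updates: that's a race condition.
        with counter_lock:
            counter += 1

threads = [threading.Thread(target=increment_many, args=(100000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # Always 200000 with the lock; without it, increments can be lost.
```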

I cannot give you instruction on the proper use of multiprogramming techniques in a forum post. There is plenty of literature on the internet, but it is a deep topic to explore.

Python as a language is inherently single-threaded, which means that none of this will take advantage of multiple cores to run CPU-bound code in parallel. This means your threaded code will not run faster overall, and it may actually run slower. However, it may still be advantageous to do this, to smooth out long chugs over several frames–a smooth 30 fps is better than a choppy 60 fps. Also, I/O-bound processes (like loading models) can truly run in parallel, to a certain degree.

However, if all you want is to load models in the background, it is probably better to use the callback parameter to loader.loadModel(), which will handle all the messy work of loading a model in the background and calling a function when it’s done, without you having to deal with the complexities of multiprogramming.
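Something along these lines (a sketch assuming the callback parameter described above; the model path and function name are placeholders):

```python
from direct.showbase.ShowBase import ShowBase

base = ShowBase()

def onModelLoaded(model):
    # Called once the background load finishes.
    if model:
        model.reparentTo(base.render)

# Passing callback= makes the load happen in the background; the call
# returns immediately and onModelLoaded runs when the model is ready.
# "path/to/model.egg" is just a placeholder path.
base.loader.loadModel("path/to/model.egg", callback=onModelLoaded)

base.run()
```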

Thank you drwr. That does help me understand how you all have it set up now.

Yes, I know deadlocking is a deep topic, and it seems like I ran into it a lot whenever I had to guess at how someone else had written their code. :slight_smile:

By “loading” I meant everything from models to code blocks to math. Sorry, I was a little vague there.

I hope I can actually push this to its limit then ^^.

I’ll look up those methods for dealing with deadlocking, but why not look at how networks deal with packet collisions as a model for running a thread safely, giving the “main” thread priority over the one asking for information from a given block of code? That in turn would deal with race conditions, since the one asking for the information would go into a “pause mode” while the thread with priority over that block of code has its way. (I think Windows 7 did it that way.)