Panda3D Threading and the Python Global Interpreter Lock

Does using panda3d circumvent the issue with python parallelization and the GIL?
Oh, this is helpful. NM

Panda3D does not change how Python itself works, and as has long been the case, you need C++ for true parallelism. It is also worth noting that the post you linked is about rendering, which is not what you seem to think it is.

It depends on what you mean.

You cannot use Panda3D to somehow execute two pieces of Python code simultaneously. The GIL prevents two threads from interpreting Python bytecode at the same time.

However, Panda3D is mostly implemented in C++, so most Panda3D calls that take a significant amount of time (such as model loading or rendering) will release the GIL. That means that during a loader.loadModel or graphicsEngine.renderFrame call in one thread, the interpreter can execute code in another Python thread simultaneously.
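This effect can be demonstrated with the standard library alone: `time.sleep()` releases the GIL while blocked, just as Panda3D's long-running C++ calls do, so a second Python thread keeps making progress. The `worker` and `progress` names below are illustrative, not from Panda3D.

```python
import threading
import time

progress = []

def worker():
    # Pure-Python loop; it can only run while the GIL is free.
    for i in range(5):
        progress.append(i)
        time.sleep(0.01)

t = threading.Thread(target=worker)
t.start()
# This sleep releases the GIL, standing in for a call like
# loader.loadModel() or graphicsEngine.renderFrame().
time.sleep(0.1)
t.join()
print(progress)  # the worker ran during the "blocking" call
```

If the main thread had instead spun in a pure-Python busy loop, the worker would have had to compete for the GIL rather than running freely.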

Also, asynchronous operations in Panda3D (such as asynchronous model loading, or the multithreaded render pipeline) run on a dedicated C++ thread, and never affect the Python interpreter, because they do not execute Python code (with some rare exceptions, like draw callbacks). Many of Panda3D’s C++ threads can run simultaneously since they do not suffer from the GIL, and they do not block Python from executing another thread in the meantime.


Thanks for the response, guys…

I did mean it in the sense where the app is running a loop and has already started another thread (using Panda3D's direct.stdpy), let's say a socket receiver… Then what you're saying is those are still sharing the same core…


Maybe I want to take another look at the connection writers and readers…

Thanks for your time.

from direct.showbase.ShowBase import ShowBase
from direct.stdpy import threading

class Server(threading.Thread):
    def run(self):
        # socket receive loop
        ...

class Main(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)
        self.server = Server()
        self.server.start()
Well, maybe it won’t be so bad. In a server application, a lot of time is spent waiting for I/O, which generally causes the GIL to be released. You won’t be able to serve multiple requests at once, but if you can serve each request quickly enough for that not to matter, then that’s OK.

Many web servers are written in Node.js for the same reason, despite Node.js being single-threaded as well.
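The pattern described above can be sketched with the standard library alone: blocking socket calls (`accept`, `recv`, `send`) release the GIL while waiting, so a threaded server that handles each request quickly rarely suffers from GIL contention. This is a minimal stdlib echo server, not Panda3D's networking API; the `handle` and `serve` names are illustrative.

```python
import socket
import threading

def handle(conn):
    with conn:
        data = conn.recv(1024)  # GIL released while blocked on I/O
        conn.sendall(data)      # short CPU burst, then done

def serve(sock):
    while True:
        try:
            conn, _ = sock.accept()  # GIL released while waiting
        except OSError:
            break  # socket closed, shut down
        threading.Thread(target=handle, args=(conn,)).start()

server = socket.socket()
server.bind(("127.0.0.1", 0))  # OS picks a free port
server.listen()
port = server.getsockname()[1]
threading.Thread(target=serve, args=(server,), daemon=True).start()

# Client round-trip to show one request being served.
with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(b"ping")
    reply = c.recv(1024)
server.close()
print(reply)
```

Each handler spends almost all its time blocked on I/O with the GIL released; only the brief bookkeeping between `recv` and `sendall` holds it.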

Ok! Thanks for your feedback. I will probably keep going forward and see if it matters. I was reading through the Panda3D docs and noticed two things…

Do the connection managers (writer/reader) work asynchronously by running the network code in C++ and only acquiring the GIL to push messages into the connection? Or is it about the same?

The distributed networking utilities seem really convenient as well. They might help. I was going to take a look at the details this week.

Thanks again!