Instead of just threading, maybe add in processes too?

Hey everyone, I know threading is a big deal, but it’s not the answer to actually speeding anything up. It only helps with “smoothing” things out, which can be both good and bad.

But instead of using threads, why don’t we just move the work to separate processes altogether?
docs.python.org/library/multiprocessing.html

As you can see here, the new package would allow Panda3D to be fully multi-core capable. This would help both with threading and with putting unused cores to work smoothing things out. Unlike threads, added cores should scale more linearly, instead of stretching one core out to do two things.

What do you all think?

Side note: I did try adding the new package, but it throws a weird bug where it just starts up another copy of itself.

There is no problem with using multiprocessing in Panda. I don’t think it conflicts with Panda’s threading system.

I could’ve been using it wrong, but every time it would start up another game process every few seconds. After about 3 seconds I could come back to 4-6 “games” running at once, with more coming.
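Looking back, one plausible cause (just a guess; I’d have to check my code) is a missing if __name__ == '__main__': guard. On Windows, multiprocessing starts each child by launching a fresh interpreter that re-imports your main module, so any unguarded top-level startup code, like launching the game itself, runs again in every child. A minimal sketch of the guarded pattern:

import os
from multiprocessing import Process

def worker():
	print 'worker running in process', os.getpid()

# Without this guard, each spawned child re-imports this module and
# would re-run any top-level startup code, spawning game after game.
if __name__ == '__main__':
	p = Process(target=worker)
	p.start()
	p.join()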

I thought multiprocessing is still limited by the GIL?

No, the whole point of multiprocessing is that it spawns a separate process in its own address space, and thus is not limited by the GIL. You can have Python processes running in true parallel on multiple cores.

Multiprocessing does have its own limitations, though. You have to arrange communication via sockets or shared memory. Communication can become a bottleneck. Not all algorithms are suited to multiprocessing.
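For example, here is a quick sketch (untested, with a made-up computation) of passing a result back through a multiprocessing.Queue:

from multiprocessing import Process, Queue

def worker(q):
	# Runs in its own process, so this loop really does use another core.
	total = sum(i * i for i in xrange(1000000))
	q.put(total)

if __name__ == '__main__':
	q = Queue()
	p = Process(target=worker, args=(q,))
	p.start()
	print 'result from child:', q.get()  # blocks until the child sends it
	p.join()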

Sounds like you were probably using it wrong. :slight_smile: I haven’t experimented with Python’s multiprocessing library myself, so I can’t give specific advice.

David

I think I got the multiprocessing library working with Panda. As David pointed out, the multiprocessing lib forks the process, so each process will potentially hold its own copy of Panda if you’re not careful. (On my machine, being not careful means a complete lockup.)

Thus the trick is to start the child process before any Panda libs are loaded.

Although I’m confused why loading a Panda construct (e.g. Vec3) in the forked process, after it has safely launched, causes my computer to lock up. So it seems that, like with threading, one cannot safely use Panda variables in the second process/thread.

I’m guessing this is because loading any of Panda’s modules loads a giant shared DLL which becomes ‘greedy’ across all process space. Thus, each process is not really loading a unique Panda module with its own refcounts, which causes resource conflicts.

import os
import time
from multiprocessing import Process

def sleeper(name, seconds):
	# Child process; it is started before any Panda libs are imported.
	print 'child: starting child process with id:', os.getpid()
	print 'child: running for %s seconds' % seconds
	end = time.time() + seconds
	while time.time() < end:
		print 'hello', time.time()
		time.sleep(0.5)
	print 'child: done, exiting'

if __name__ == '__main__':
	print 'in parent process (id %s)' % os.getpid()
	# Start the child BEFORE importing Panda, so the child never holds
	# a copy of the Panda libraries.
	p = Process(target=sleeper, args=('bob', 5))
	p.start()

	# Only now load Panda in the parent process.
	import direct.directbase.DirectStart
	print 'in parent process after child process start'
	np = loader.loadModel('smiley')
	np.reparentTo(render)
	base.cam.setPos(-15, -15, 15)
	base.cam.lookAt(0, 0, 0)
	run()

That would be really strange. It should load the code into a common shared memory pool, but the data should be unique to each process, since that’s what’s meant by a process.

On the other hand, I don’t have any other bright ideas about why this would cause a crash. :frowning:

David

I don’t think it’s a huge problem not to be able to load render-related objects like Vec3 in the other process. (Although it is weird.)

I think most people would use the second process for something like physics or AI, which should have its own internal representation anyway.
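Something like this rough sketch is what I have in mind (untested; the “physics” is a made-up placeholder): the child keeps all of its own state and only ships plain tuples back through a Queue, and a Panda task on the parent side drains the queue without blocking:

import time
from multiprocessing import Process, Queue
from Queue import Empty

def physics_worker(q):
	# Placeholder simulation: the child owns its own state and never
	# touches Panda, so it sends only plain tuples back to the parent.
	x = 0.0
	while True:
		x += 0.1
		q.put((x, 0.0, 0.0))
		time.sleep(0.05)

if __name__ == '__main__':
	q = Queue()
	p = Process(target=physics_worker, args=(q,))
	p.daemon = True  # don't let the worker outlive the Panda process
	p.start()

	# As above, Panda is imported only after the child is running.
	import direct.directbase.DirectStart
	base.disableMouse()
	base.cam.setPos(0, -30, 0)
	smiley = loader.loadModel('smiley')
	smiley.reparentTo(render)

	def poll(task):
		# Drain whatever the worker has produced so far, without blocking.
		try:
			while True:
				x, y, z = q.get_nowait()
				smiley.setPos(x, y, z)
		except Empty:
			pass
		return task.cont

	taskMgr.add(poll, 'poll-physics')
	run()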