Problem with multiprocessing

Hello, I have a question: is it possible to load an object in one process and render it in another process?

This is my code:

import multiprocessing as m
import pickle
from panda3d.core import *
from direct.showbase.ShowBase import ShowBase

class Store:
    pass
 
class Shareable:
    def __init__(self, size = 2**10):
        object.__setattr__(self, 'store', m.Array('B', size))
        o = Store() # This object will hold all shared values
        s = pickle.dumps(o)
        store(object.__getattribute__(self, 'store'), s)
 
    def __getattr__(self, name):
        s = load(object.__getattribute__(self, 'store'))
        o = pickle.loads(s)
        return getattr(o, name)
 
    def __setattr__(self, name, value):
        s = load(object.__getattribute__(self, 'store'))
        o = pickle.loads(s)
        setattr(o, name, value)
        s = pickle.dumps(o)
        store(object.__getattribute__(self, 'store'), s)
 
def store(arr, s):
    # Copy the pickled bytes into the shared array
    for i, ch in enumerate(s):
        arr[i] = ch
 
def load(arr):
    # Read the shared array back out as bytes
    return bytes(arr[:])
 
 
# the objects or structures to share; here is an example
 
class Foo(Shareable, ShowBase):
    def __init__(self):
        super().__init__()

    def CreateObject(self):
        self.m = loader.loadModel("model.bam")


if __name__ == '__main__':
    s = Foo()

    p = m.Process(target=s.CreateObject, args=())
    p.start()
    p.join()
    s.m.reparentTo(render)
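For what it's worth, the pickle-through-a-shared-`Array` pattern in the code above does work for plain Python data; the problem is specific to Panda3D nodes. Here is a minimal, Panda3D-free sketch of that pattern (the `Store` class and `worker` function are just illustrative stand-ins):

```python
import multiprocessing as mp
import pickle

def store(arr, s):
    # Copy the pickled bytes into the shared array
    for i, ch in enumerate(s):
        arr[i] = ch

def load(arr):
    # Read the shared array back out as bytes; pickle.loads
    # ignores the trailing zero padding after the STOP opcode
    return bytes(arr[:])

class Store:
    pass

def worker(arr):
    # Runs in the child process: unpickle, mutate, pickle back
    o = pickle.loads(load(arr))
    o.answer = 42
    store(arr, pickle.dumps(o))

if __name__ == '__main__':
    arr = mp.Array('B', 2 ** 10)
    store(arr, pickle.dumps(Store()))

    p = mp.Process(target=worker, args=(arr,))
    p.start()
    p.join()

    o = pickle.loads(load(arr))
    print(o.answer)  # 42
```

A plain attribute survives the round trip because pickle can serialise it cheaply; a loaded model would have to be re-serialised to a .bam stream on every access, which defeats the purpose.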

No. Theoretically it might be possible to get it to work, but it wouldn’t be worth the effort. Getting data from one process to another requires serialising the object to .bam, but since your model was in .bam to begin with, it would be faster to load it in the process you render with. (pickling a Panda model works by serialising it to a .bam stream in memory.)

However, you can load a model in another thread, which would have the same effect. Asynchronous loading in Panda3D is not affected by the GIL since model loading is done entirely in C++. Panda3D exposes quite a simple interface for this - just pass a callback argument to the loadModel function. Search for “asynchronous loading”, or look in the Fireflies sample program to see how it is done there.
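The shape of that thread-plus-callback pattern can be sketched without Panda3D. Here `load_model_blocking` and `load_async` are hypothetical stand-ins for `loader.loadModel` and its callback form, not real Panda3D API:

```python
import threading
import queue
import time

def load_model_blocking(path):
    # Stand-in for a slow, C-level model load
    time.sleep(0.1)
    return f"<model {path}>"

def load_async(path, callback):
    # Run the slow load in a background thread, then hand the
    # result to the callback. (Panda3D schedules the real callback
    # on the main thread via its task manager.)
    def run():
        callback(load_model_blocking(path))
    t = threading.Thread(target=run, daemon=True)
    t.start()
    return t

if __name__ == '__main__':
    done = queue.Queue()
    load_async("model.bam", done.put)
    # The main loop keeps running; collect the model when ready
    print(done.get(timeout=5))
```

The main thread stays free to render frames while the load happens in the background, which is exactly what the callback argument to `loadModel` gives you.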

I would like to code a multicore AI.

If I move Ralph (from the roaming-ralph sample) in a thread, would it be blocked by the GIL or not?