You aren’t actually bound to use anything other than a scene graph, which can start at an arbitrary
NodePath you define.
For the physics part, you can control how much gets simulated by calling
bullet_world.do_physics(deltatime, ...), where deltatime is whatever time step you want to advance per tick/frame, so you’re not bound to run in real time. Add a camera to your scene, render it to an offscreen buffer, and access that buffer directly through a
memoryview, as stated earlier.
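To illustrate the "not bound to real time" point, here is a minimal sketch of a fixed-timestep loop. StubWorld is a stand-in I made up for panda3d.bullet.BulletWorld (so this runs without Panda3D installed); the real call would be bullet_world.do_physics(dt, max_substeps, substep_size):

```python
# Fixed-timestep stepping, decoupled from wall-clock time.
# StubWorld is a placeholder for panda3d.bullet.BulletWorld; its
# do_physics() mirrors the shape of the real method's arguments.

class StubWorld:
    def __init__(self):
        self.elapsed = 0.0  # simulated seconds so far

    def do_physics(self, dt, max_substeps=1, substep_size=1 / 60):
        # The real BulletWorld advances the simulation by dt,
        # internally split into at most max_substeps substeps.
        self.elapsed += dt

DT = 1.0 / 60.0  # simulate at a fixed 60 Hz regardless of render speed
world = StubWorld()
for step in range(120):  # two simulated seconds, in as little real time as it takes
    world.do_physics(DT)

print(world.elapsed)  # ~2.0 simulated seconds
```

Since you drive the loop yourself, you can run thousands of simulated seconds per wall-clock second, which is exactly what you want for training.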
I don’t think you need the task manager, since you can run the simulation in sequential steps such as:
do_physics -> memoryview of buffer -> DQN stuff -> repeat
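With the stages stubbed out, that loop could look roughly like this. All three helper names are placeholders, not real API: step_physics stands in for bullet_world.do_physics(...), grab_frame for reading the offscreen buffer through a memoryview, and dqn_update for your network update:

```python
# Sequential training loop: no task manager, just explicit calls in order.
# step_physics / grab_frame / dqn_update are hypothetical stand-ins.

def step_physics(state, dt):
    # placeholder for bullet_world.do_physics(dt)
    return state + dt

def grab_frame(state):
    # placeholder for reading the offscreen buffer as a memoryview
    return memoryview(bytes([int(state * 255) % 256] * 4))

def dqn_update(frame):
    # placeholder for the actual DQN update on the frame data
    return sum(frame)

state, dt = 0.0, 1 / 60
for _ in range(3):
    state = step_physics(state, dt)   # advance the simulation
    frame = grab_frame(state)         # observe the rendered frame
    loss = dqn_update(frame)          # learn from the observation
```

The point is just that each iteration is one fully deterministic step you control, rather than something scheduled by the task manager.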
To save and restore the scene graph, I believe you could write the scene graph (NodePath) to
.bam format every step or so (rather wasteful; otherwise, store the position and orientation of every NodePath recursively)… Though I think there are people on this forum more versed than me who could steer you in the right direction.
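If writing a .bam each step is too wasteful, the recursive alternative could be sketched like this. TinyNode is a stand-in class so the example runs without Panda3D; the method names mirror real NodePath methods (get_pos, get_hpr, set_pos_hpr, get_children):

```python
# Recursive transform snapshot/restore, as a lighter alternative to
# dumping the whole graph to .bam each step.  TinyNode is a stand-in
# for panda3d.core.NodePath.

class TinyNode:
    def __init__(self, name, pos=(0, 0, 0), hpr=(0, 0, 0)):
        self.name = name
        self.pos, self.hpr = tuple(pos), tuple(hpr)
        self.children = []

    def get_pos(self): return self.pos
    def get_hpr(self): return self.hpr
    def set_pos_hpr(self, pos, hpr):
        self.pos, self.hpr = tuple(pos), tuple(hpr)
    def get_children(self): return self.children

def snapshot(node):
    """Record pos/hpr of a node and all of its descendants."""
    return (node.get_pos(), node.get_hpr(),
            [snapshot(c) for c in node.get_children()])

def restore(node, snap):
    """Walk the same tree and put every transform back."""
    pos, hpr, child_snaps = snap
    node.set_pos_hpr(pos, hpr)
    for child, cs in zip(node.get_children(), child_snaps):
        restore(child, cs)

root = TinyNode("root")
root.children.append(TinyNode("ball", pos=(1, 2, 3)))
saved = snapshot(root)
root.children[0].set_pos_hpr((9, 9, 9), (45, 0, 0))  # physics moves things
restore(root, saved)
print(root.children[0].get_pos())  # back to (1, 2, 3)
```

Note this only captures transforms; if bodies have velocities you'd also need to store and reset those on the Bullet side.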
This part of the manual might also be of interest:
Also, there are Discord and IRC channels with many helpful people who can help you navigate to the best solution.
As a side note: regarding the frame-buffer data you plan to feed into the DQN, it might be useful to use a depth buffer instead of a color image, since that gives your AI distance information for free, in addition to grayscale contours.
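One caveat with raw depth values: they are non-linear. As a rough sketch, assuming a standard OpenGL-style perspective projection with depth stored in [0, 1] (do verify this against your actual lens/depth-range settings in Panda3D), they can be converted back to metric eye-space distances like so:

```python
def linearize_depth(d, near, far):
    """Convert a normalized depth-buffer value d in [0, 1] back to
    eye-space distance, assuming a standard OpenGL perspective
    projection (assumption: check your actual lens settings)."""
    z_ndc = d * 2.0 - 1.0  # map [0, 1] -> NDC [-1, 1]
    return (2.0 * near * far) / (far + near - z_ndc * (far - near))

near, far = 0.1, 100.0
print(linearize_depth(0.0, near, far))  # -> 0.1 (near plane)
print(linearize_depth(1.0, near, far))  # -> 100.0 (far plane)
```

Whether you linearize or not, most of the depth precision sits near the camera, which is usually where it matters for obstacle avoidance anyway.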