Hi everyone,
I am trying to use an offscreen buffer to export a camera image of my scene, alter it externally, read it back in, and display it somewhere on screen in the next frame.
This is what I have so far:
framebuffer = self.world.win.makeTextureBuffer("", 800, 600, None, True)  # tex=None, to_ram=True
camera = self.world.makeCamera(framebuffer, scene=self.world.render)
camera.reparentTo(self.world.render)
tex = framebuffer.getTexture()
image = tex.getRamImageAs("RGB")
I had to set to_ram=True to get it running, so now it is a bit slow (just as the documentation warned me beforehand). When set to False, however, my buffer is empty.
Then I tried to understand GraphicsStateGuardians, since having multiple graphics contexts could be the reason for the lack of performance, as I read in the documentation. So I tried GraphicsEngine's makeBuffer() function, but I can't quite get it to work as I wish:
framebuffer = base.graphicsEngine.makeBuffer(base.win, "", 1, 800, 600)
camera = self.world.makeCamera(framebuffer, scene=self.world.render)
camera.reparentTo(self.world.render)
tex = Texture()
base.win.addRenderTexture(tex, GraphicsOutput.RTMCopyTexture, GraphicsOutput.RTPColor)
# texture is empty
When I use GraphicsOutput.RTMCopyRam it kind of works, but I guess that does not help with performance. Also, this now captures the whole window rather than just the desired camera view. Is that a consequence of having only one GSG, which would basically mean there is only one view of the scene? I don't think I understand this part well yet.
Does anyone know how to proceed here, or whether any performance improvement is possible at all?
Thanks in advance to everyone who tries to give me some smart hints!