RenderToTexture: getScreenshot vs. getTexture

Hi all,

I have a scene that’s rendered to an offscreen buffer and later used as a texture. The buffer is created this way:

self.customTextureWindow = base.graphicsEngine.makeOutput(
    base.pipe, 'textBuffer', sort, props, winprops,
    GraphicsPipe.BF_resizeable)
self.customTextureWindow.addRenderTexture(tex=some_texture, mode=GraphicsOutput.RTMBindOrCopy, bitplane=GraphicsOutput.RTPColor)
self.customTextureCanvasCamera = base.makeCamera(self.customTextureWindow)

lens = OrthographicLens()
self.customTextureCanvas = NodePath("My Scene")

An orthographic lens is used, since the generated texture is going to consist of flat coloured planes. To add content to the scene:

gotProcGeom = self.customTextureCanvas.attachNewNode(flatPlanesNode)
self.customTextureWindow.setSize(greatestX, greatestY)
lens.set_film_size(int(greatestX), int(greatestY))
lens.set_film_offset(greatestX/2.0, -greatestY/2.0)
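As an aside, the film-size/offset arithmetic above can be sketched as a plain-Python helper (a hypothetical function, not part of the code in question): the orthographic film spans the full content size, and offsetting by half the size in +X and -Y shifts the film so the scene origin lands at the top-left of the rendered texture.

```python
def film_settings(greatest_x, greatest_y):
    """Return (film_size, film_offset) matching the setup above:
    film spans the content, origin mapped to the top-left corner."""
    film_size = (int(greatest_x), int(greatest_y))
    film_offset = (greatest_x / 2.0, -greatest_y / 2.0)
    return film_size, film_offset
```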

Now, when fetching the texture with getScreenshot, everything works well: the UVs of the model align properly with the returned texture:

textGot = self.customTextureWindow.getScreenshot()
self.activePieceModel.setTexture(texStageUse, textGot, 1)

That works well, except that I first have to force the frame to be rendered, which causes a flicker. Using getTexture circumvents this problem, except that the UVs do not align correctly with the returned texture:

textGot = self.customTextureWindow.getTexture()
self.activePieceModel.setTexture(texStageUse, textGot, 1)

So the actual question is: how would I cause getTexture to return the exact same texture that getScreenshot does? While getScreenshot does get the job done, getTexture is noticeably faster, at least on my machine, since base.graphicsEngine.renderFrame() need not be invoked when using it.

Thanks in advance.

To simplify: is it possible to get the same texture from getTexture() that one would get from getScreenshot()? getScreenshot() returns whatever the camera in the offscreen buffer is seeing. Is there any way to make getTexture() do the same, and return whatever the camera in the offscreen buffer is seeing? I don't mean to be pushy in the least; I just thought I'd simplify it to the bare bones in case anyone knows of a way to achieve this.


Looking into it a bit more, the texture returned by getTexture() is always automatically scaled to a power of 2, but this is not the case for the texture returned by getScreenshot(). Further, attempting to manually set the size of the returned scaled texture doesn’t fix the UV-alignment issue. Therefore, it seems for my case, getScreenshot() is the viable option.
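For reference, the automatic power-of-two sizing described above can be reproduced with a next-power-of-two calculation. This is a plain-Python sketch of the arithmetic (not Panda3D API), illustrating why UVs computed against the requested size no longer line up with the texture that comes back:

```python
def next_power_of_two(n):
    """Smallest power of two >= n, for n >= 1."""
    p = 1
    while p < n:
        p *= 2
    return p

# e.g. a 300x200 buffer comes back as a 512x256 texture, so UVs
# computed for a 300x200 image are stretched or shifted.
```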


This should help.

some_texture = Texture()

getScreenshot is meant for one-time snapshots; it is also probably more expensive than getTexture, which (in the case of RTMBindOrCopy) returns the texture that is directly being rendered into, as opposed to a one-time copy.

Thanks. Yeah, I noticed that it is slower, which is why I had wanted to use getTexture. The problem is that the textures I generate don't always have power-of-two dimensions, and where I forced them to a non-power-of-two size, the results were skewed. I therefore opted for getScreenshot, because it more easily fulfils this condition: non-skewed, non-power-of-two textures taken from whatever the camera is viewing in the offscreen buffer.