Generating a texture with render to texture

What is the correct way to render a single frame to an offscreen buffer and use it as a texture? I will need to do this several times, each pass building on the last.

I have tried a lot of different things, and I keep having issues.


i use base.screenshot() and then just load the texture from file.

1 - it's simple.
2 - i get intermediate steps saved to files.

screenshot gets you a PNMImage.

If you need it every frame, look at the Teapot On TV sample program.

Well, first, I’m using a hidden buffer, so it would be different, but also that provides a PNMImage, which is hard to get into a texture (the only way I see is saving a temp file and loading it, or copying all the pixels over to a texture RAM image).

pro-rsoft, I actually started this project by modifying that example. It however seems to update the texture every frame. I want to use the texture every frame, yes, but I don’t want to recompute it (it does not change over time). I tried various methods of copying the texture and throwing away the buffer, setting the buffer to oneShot, and so on. I always had the issue that I had to wait for a frame to happen, then copy the texture and destroy the buffer (which tended to make my texture copy gray). The only way I found to do this was to have the task manager call a method back after some amount of time (hopefully after the next frame, but after 1 second to be sure). Even doing this, the second time I tried to do it (using the texture from the first pass) never seemed to work.

Is there a way to actually use one shot buffers to make textures and run code when it finishes?

You can copy a PNMImage to a Texture with:

tex = Texture()
tex.load(pnmImage)

which would work fine. This is the same thing as the oneShot trick if you also specify toRam = True on the makeTextureBuffer() call.

Either of these techniques copies the texture to RAM and then back to the graphics memory, which is fine. If you don’t want to take the time for the extra copy back and forth, you can avoid the copy to RAM and use the oneShot trick exactly as you have been–but in this case, as you have discovered, you cannot destroy the buffer until you are done with the texture (the two objects share the same graphics memory). That’s OK, though; it won’t be rendering every frame any more, since you specified oneShot.


I cleaned up my code to clarify the issue:
In my init:

        #Wait some random amount of time for the textures to be done.
        #It should be a long time in case the computer is slow or busy.
        def makePlanetTex(self,texModel):
        taskMgr.doMethodLater(2,makePlanetTex,'wait for texture!',(self,texModel))
    def getTex(self,model,shader):
        #we get a handle to the default window
        #we now get a buffer that's going to hold the texture of our new scene
        altBuffer=mainWindow.makeTextureBuffer("texBuffer", 512, 512)
        #now we have to set up a new scene graph to render this scene
        altRender=NodePath("new render")

        #this takes care of setting up the camera properly
        from pandac.PandaModules import OrthographicLens
        oLens = OrthographicLens()
        oLens.setFilmSize(1, 1)

        myNode = model

        tex = Texture()
        from pandac.PandaModules import GraphicsBuffer
        altBuffer.addRenderTexture(tex, GraphicsBuffer.RTMTriggeredCopyTexture)
        #stop the texture from updating itself after some time
        def stopUpdating(self,altBuff):
        taskMgr.doMethodLater(1,stopUpdating,'stop updating tex',(self,altBuffer))
        return tex

Please note the two waits. The wait in getTex actually runs twice, so that’s three. I would rather have no arbitrary waits. Also, if I set the buffer to oneShot, I can’t seem to get a texture from it, so it updates every frame during the wait in getTex. If I shorten the waits a lot, the first texture pass does not make it in.

Is there some way to texture my model without waiting a whole bunch and hoping the user’s computer is fast enough for my specified wait times?

Waiting one frame is sufficient. You can force a frame to render immediately by calling base.graphicsEngine.renderFrame().

Why are you using RTMTriggeredCopyTexture? Just use RTMCopyTexture, and then you don’t have to call triggerCopy, and you can also call setOneShot() immediately. Using TriggeredCopyTexture means to wait until you call triggerCopy to grab the texture image, but oneShot means to render only one frame and then stop rendering, so these two features are incompatible with each other (because by the time you call triggerCopy, it has already stopped rendering).

Better yet, since you have created altBuffer with a call to makeTextureBuffer(), which has already set up a texture with an appropriate RTM mode, just throw away everything beginning at tex = Texture(). It’s creating a redundant and wasteful Texture object. Instead, just return altBuffer.getTexture().


I started with something like that, but it did not work. I tried it again with no success.

If I call setOneShot(True), it causes my texture to go gray. Apparently inactive buffers do not keep their data (they turn gray), and this is why I was making an extra copy. Without the copy, I had no way of stopping the rendering of the texture every frame without losing the texture.

Making a copy of the texture without waiting seems to always result in white, regardless of calling base.graphicsEngine.renderFrame() first.

Thus I was forced to wait, then make a copy. To get the copy to go into the texture my function had already returned, I used triggerCopy. This of course required whatever called the function to wait after it got the texture back, which is bad.

Oh, and if I:


I get a NULL texture because the buffer is already gone.

Well, after lots of poking around, I discovered that this:

	return tex

works on my new notebook computer, but not on my older desktop (see the issues described above) (screw you, intel GMA 950!). Good thing I bought the laptop yesterday. The main reason I even tried this was that, when testing my laptop’s graphics, I noticed the resulting texture was different (1/4 size, centered) and figured something was wrong with Panda running on my desktop’s low-end graphics. My guess is the vertex shaders, plus some other issue with the parasite buffer.

Well, I have some work to do to get it looking right on my notebook (the way it looked on the desktop), but at least I solved the waiting issue (and dropped support for low-end graphics).