Render to texture pipeline optimization

Hi,
I’m setting up a simple render pipeline to perform feedback accumulation followed by a final shader pass. This is what I have now (“->” means “render to”):

  • main scene -> scene card
  • accumulation scene (accumulation card + scene card blended) -> accumulation card
  • output scene (main scene + accumulation card) -> output card

I could do what the output scene does directly in render2d, but I need an additional pass for a shader I’m planning to add.

Is this pipeline well optimized? I ask because the last pass to the output buffer halves the framerate, and I’m wondering what takes the time in it.
I’ve read in the manual:

When the texture is not kept in RAM, what does the transfer consist of?
Just some curiosity :)

# Scene
self.knot = loader.loadModel(MODELS_PATH + 'knot.bam')
self.knot.reparentTo(render)


# Render scene
self.sceneBuff = base.win.makeTextureBuffer('SceneBuffer', 256, 256)
self.sceneBuff.setSort(-1)
self.sceneBuff.setClearColor(Vec4(0,0,0,1))
self.sceneCard = self.sceneBuff.getTextureCard()
self.sceneCam = base.makeCamera(self.sceneBuff, aspectRatio=base.getAspectRatio())

# Disable the window's default camera and render the main scene into the buffer instead
base.cam.node().setActive(False)
base.cam = self.sceneCam


# Accumulation scene
self.accBuff = base.win.makeTextureBuffer('AccumulationBuffer', 256, 256)
self.accBuff.setSort(-2)
self.accBuff.setClearColor(Vec4(0,0,0,1))
self.accBuff.clearDeleteFlag()
self.accCam = base.makeCamera2d(self.accBuff)
self.accScene = NodePath("Accumulation scene")
self.accCam.node().setScene(self.accScene)

self.accCard = self.accBuff.getTextureCard()
# A second card showing the same texture, used later in the output scene
self.accOutCard = self.accBuff.getTextureCard()


# Add the 2 cards in the accumulation scene
self.sceneCard.reparentTo(self.accScene)
self.accCard.reparentTo(self.accScene)

self.sceneCard.setY(2)
self.sceneCard.setTransparency(1)
self.accCard.setY(1)
self.accCard.setTransparency(1)
self.accCard.setAttrib(ColorBlendAttrib.make(ColorBlendAttrib.MAdd))



# Output buffer
self.outBuff = base.win.makeTextureBuffer('OutBuffer', 800, 600)
self.outBuff.setSort(-3)
self.outBuff.setClearColor(Vec4(0,0,0,1))
self.outScene = NodePath('Output scene')
self.outCam = base.makeCamera2d(self.outBuff)
self.outCam.node().setScene(self.outScene)
self.outCard = self.outBuff.getTextureCard()
self.outCard.reparentTo(render2d)



# Add the result of accumulation to output
self.accOutCard.reparentTo(self.outScene)
self.accOutCard.setTransparency(1)
self.accOutCard.setColor(1,1,1,1)
self.accOutCard.setAttrib(ColorBlendAttrib.make(ColorBlendAttrib.MAdd))
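As a side note on the setSort() calls above: Panda3D renders buffers in ascending sort order, and the main window defaults to sort 0, so the per-frame order of these passes can be sketched like this (plain Python, just an illustration of the sort values, not a Panda3D API):

```python
# Buffers render in ascending sort order each frame; the main window
# defaults to sort 0, so all offscreen passes run before the window.
buffers = {'OutBuffer': -3, 'AccumulationBuffer': -2,
           'SceneBuffer': -1, 'main window': 0}
order = sorted(buffers, key=buffers.get)
print(order)  # -> ['OutBuffer', 'AccumulationBuffer', 'SceneBuffer', 'main window']
```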

Result:


My first guess: your outputBuff size is not a power of two. Depending on your graphics hardware and driver combination, the driver may be performing an implicit scaling operation internally to suit the hardware’s capabilities.

Try making this a power of two and see if it improves your performance.
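If it helps to check which sizes are safe, here is a small hypothetical helper (plain Python, not part of Panda3D) for rounding a dimension up to the next power of two:

```python
def next_power_of_two(n):
    """Round n (>= 1) up to the nearest power of two."""
    p = 1
    while p < n:
        p <<= 1
    return p

# An 800x600 buffer would need a 1024x1024 texture on hardware
# that only supports power-of-two sizes.
print(next_power_of_two(800), next_power_of_two(600))  # -> 1024 1024
```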

David

Ok, thanks for the reply.
I knew that using a non-power-of-two texture is bad, but I would like the final texture to fully cover the window. If I try 1024x1024, the framerate falls from 120 to 30 in GL mode, and from 180 to 130 with DX9.

I remember from DX9 that if you create a WxH texture, it will automatically choose the nearest power of two, but it won’t scale the texture when rendering; it will use only the rectangle you are interested in.

I thought makeTextureBuffer did the same. So what is the usual way to apply a pixel shader to the whole window?
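To illustrate the behaviour I mean (a hypothetical sketch, not actual DX9 or Panda3D API): the image sits in the lower-left corner of a padded power-of-two texture, and only that UV subrectangle is ever sampled:

```python
def used_uv_extent(width, height):
    """Fraction of a padded power-of-two texture actually covered
    by a width x height image (illustrative only)."""
    def pot(n):
        p = 1
        while p < n:
            p <<= 1
        return p
    return width / pot(width), height / pot(height)

# An 800x600 render stored in a 1024x1024 texture covers only
# the (0, 0)-(0.78125, 0.5859375) UV region.
print(used_uv_extent(800, 600))  # -> (0.78125, 0.5859375)
```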


Edit:
Strange: changing the output texture size from 512x512 to 513x513 does not affect the FPS.

Hmm, drat. Well, it was just a guess.

You could try putting:

prefer-parasite-buffer 0

in your Config.prc, to see if this makes a difference too.

David

Please excuse me for answering so slowly:
Yes, it makes a difference; in GL mode the FPS falls from 120 to 50 when prefer-parasite-buffer is set to 0.

Actually, the performance is not so bad; I’m just wondering if this is the best way to do it, especially in the case of a fullscreen pixel shader.
I’m also wondering whether a transfer still occurs if my hardware supports rendering to an offscreen buffer.

Thanks for answering !