render to multiple buffers

How do you make one camera render to multiple TextureBuffers?

Create multiple TextureBuffers and set the same camera on each one.

But why would you want to do that? You’d end up with multiple TextureBuffers that all looked the same.


I’m trying to implement ThomasEgi’s Shaderless Bloom Effect.

It involves rendering the scene to texture at full resolution and add-blending the same scene at a lower resolution.

Rather than re-scaling the texture every frame I thought it would be easier to render to a second, lower resolution buffer.

Great, how do you set a camera on a buffer?

You can examine what ShowBase.makeCamera() is doing, but in a nutshell, do this instead of calling makeCamera:

dr = buffer.makeDisplayRegion()
dr.setCamera(cam)


what’s “cam”?

It looks like it’s easier to use two cameras parented to a common node than to rewrite ShowBase.makeCamera().

cam is the NodePath representing the camera you’ve already got for the first buffer: for example, the default camera for the main window, or the result of calling base.makeCamera() on the first buffer.

You don’t need to rewrite base.makeCamera() for the second buffer. If you’ve already got a Camera, you just need the two lines I pasted above. I just pointed you at base.makeCamera() so you could see what it is doing in the normal case of creating a Camera from scratch, though most of the lines of code in that method are dedicated to handling the esoteric optional parameters you might or might not have supplied. If you take out all of the if statements, you’re left with not much more than the two lines I pasted above, which is the core functionality of that method.

Of course, you can certainly call base.makeCamera() for both buffers, and then parent both cameras to the same node. That works too, and it’s easy to understand how it works. The advantage to actually using the same camera twice, rather than having two different cameras pointed in the same direction, is that Panda will realize that you’re using the same camera twice, and can share some of the rendering effort (specifically, the cull traversal) when drawing the second buffer, saving a few milliseconds on your overall frame time.


ahm… actually i can’t really contribute to the solution of your current problem, but since you’re working on the thing… i experimented around a little.

for performance:
from what i found out, it’s best to render the actual scene only once. that’s the normal image you would see. rendering a single flat quad with a texture should be faster than rendering a scene with lots of triangles twice (even at low resolution).

for quality:
… a single image won’t be much of a problem, but if the image starts to move you’ll get some ugly stuff… to prevent that, it’s best to render a small image with high antialiasing (or well scaled down, perhaps with linear interpolation) and scale it up step by step.
the process would be:
render the normal image, keep it somewhere in a buffer.
render a small image, then render THIS (the small image) at double resolution; double the resolution again, and so on until you reach half (or full?) of the resolution of the “normal” image.

(optional for tuning)
[add/multiply with fixed colors to shift the color range of the image]

last thing to do is add the bloom-image to the normal image.
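the steps above can be sketched in plain Python, just to show the shape of the algorithm. this is not Panda3D API: grayscale images are nested lists of 0-255 values, and the darken factor is an illustrative parameter, not something from the thread's code.

```python
# Sketch of the progressive-upscale bloom described above (pure Python,
# no Panda3D; a real implementation would use render-to-texture buffers).

def downsample(img):
    """Halve resolution by averaging each 2x2 block (a crude box filter)."""
    h, w = len(img), len(img[0])
    return [[(img[2 * y][2 * x] + img[2 * y][2 * x + 1] +
              img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) // 4
             for x in range(w // 2)] for y in range(h // 2)]

def upsample(img):
    """Double resolution by pixel duplication (one 'step' of the scale-up)."""
    out = []
    for row in img:
        doubled = [v for v in row for _ in (0, 1)]
        out.append(doubled)
        out.append(list(doubled))
    return out

def bloom(normal, steps, darken=0.35):
    """Render small, upscale step by step, darken, then add to the normal image."""
    small = normal
    for _ in range(steps):          # shrink to the low-resolution "bloom" image
        small = downsample(small)
    for _ in range(steps):          # grow it back one doubling at a time
        small = upsample(small)
    return [[min(255, n + int(b * (1 - darken)))   # add-blend, clamped at white
             for n, b in zip(nrow, brow)]
            for nrow, brow in zip(normal, small)]
```

the step-by-step doubling is the point: each pass smears the low-resolution image a little, which is what keeps the final bloom from jumping when the scene moves.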

in the end you might end up with 3, 4, 5 or even more buffers, but it should still be fine. looks like rendering a single triangle is very fast =)

if you scale up the image step by step, the final bloom image will look much smoother than scaling directly from a few hundred to over 1k resolution. antialiasing should improve quality, too.
perhaps some kind of texture filtering could help, too, when downscaling.

well it’s up to you how you do it =)
don’t forget to tell us about your progress!

thomas e

Here’s my test code:

from pandac.PandaModules import *
import direct.directbase.DirectStart
#Make the quad
DECK = CardMaker('Quad')#to generate the quad with
card = render2d.attachNewNode(DECK.generate())
card.setTransparency(1)#allow texture alpha
#get buffers that will hold the textures of the new scene
bBuffer1 = base.win.makeTextureBuffer("bloom", 2**9, 2**9)
bBuffer2 = base.win.makeTextureBuffer("bloom", 2**8, 2**8)
#new scene graph
bRender = NodePath("bloom render")
#camera setup: one camera shared by both buffers
bCamera = base.makeCamera(bBuffer1)
bCamera.reparentTo(bRender)
dr = bBuffer2.makeDisplayRegion()
dr.setCamera(bCamera)
#setup render to texture: base texture plus add-blended bloom
tsBase = TextureStage('ts1')
tsBloom = TextureStage('ts2')
tsBloom.setMode(TextureStage.MAdd)
card.setTexture(tsBase, bBuffer1.getTexture())
card.setTexture(tsBloom, bBuffer2.getTexture())
#Render something
environ = loader.loadModel("models/environment")
environ.reparentTo(bRender)
run()

I can’t move the new camera with the mouse, so I don’t know. Does anybody know how to enable drive mode or something on a custom camera?

try to rotate the environment like the carousel in the tutorial/example. should be enough to see if the bloom image “jumps”.
i’m currently compiling panda again to make it run on my updated system. once it’s finished i’ll give your code a try and see what and how things can be improved =).

one thing… maybe… if you don’t darken your bloom image before adding it to the original image, it will get terribly bright. even a middle gray would be white in the final image… try to darken the bloom image with TextureStage.CSConstantColorScale, by setting up an additional render buffer and making the card semi-transparent against a black background, or perhaps by loading a separate file as the image’s alpha with setAlphaFilename… well, there might be even more ways to darken it =) … from my experience it’s good to darken the bloom image by 30-40% before adding it to the real image.
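a quick back-of-the-envelope check shows why the darkening matters. this is plain Python arithmetic on 8-bit channel values, not Panda3D API; the add_blend helper and the 0.35 factor are just for illustration.

```python
# Why the bloom layer needs darkening before add-blending:
# 8-bit channels clamp at 255, so mid-gray + mid-gray already blows out to white.

def add_blend(base, bloom, darken=0.0):
    """Add-blend one channel, darkening the bloom term by the given fraction."""
    return min(255, base + int(bloom * (1 - darken)))

mid_gray = 128
print(add_blend(mid_gray, mid_gray))                # 255: clamps to pure white
print(add_blend(mid_gray, mid_gray, darken=0.35))   # 211: brighter, but detail survives
```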

all statements above are just advice ^^ no need to follow them

I added the line

bCamera.hprInterval(20, Vec3(-360, 0, 0)).loop()

before run()
with bBuffer2 resolution at 2**8, motion doesn’t look too bad, but 2**7 looks weird. And less is worse. It appears from your zelda post that it should be at something more like 2**4 to get a good bloom.

I hadn’t gotten around to that. drwr, is there a way to darken a texture directly, or do I need to make another camera?

hm… as far as i can think, the only 2 solutions would be to really scale the original image down using some linear or trilinear texture filtering (if this is actually possible when scaling down), or some other kind of texture filtering… or use antialiasing.
i guess there has to be some kind of texture filtering to help scale an image down. all you need to do then would be to scale it up, step by step. if you do it all in one step you’ll get what you have now… a jumpy, not all too smooth bloom effect. (haven’t tested your code yet… panda is still compiling^^)

but i’m glad you tried it out. even more so that it has the potential to work :stuck_out_tongue:

keep up the work!