FilterManager, how to do multistage?

I followed the code mentioned in: … ge_Filters

        self.manager = FilterManager(base.win, base.cam)
        tex1 = Texture()
        tex2 = Texture()
        finalquad = self.manager.renderSceneInto(colortex=tex1)
        interquad = self.manager.renderQuadInto(colortex=tex2)
        interquad.setShaderInput('vTexelSize', 1.0 / 800, 1.0 / 600, 0, 0)
        interquad.setShaderInput("tex0", tex1)
        finalquad.setShaderInput("tex0", tex2)

It should apply a sharpen filter to the screen, and then remove the red channel from the result. But the two stages do not appear to be chained - what is the correct way to do it?

finalquad.setShaderInput("tex0", tex2)

This makes the texture available as k_tex0, not tex0 – make sure you are referencing k_tex0 in your shader code, otherwise it will use the colour texture instead (in this case, the original colour texture, undoing the results of the sharpening shader).
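For reference, this is how the final quad's Cg fragment shader would pick up that input via the k_ prefix. A minimal sketch only - the parameter names here are assumed, not taken from your code:

```
void fshader(float2 l_texcoord0 : TEXCOORD0,
             uniform sampler2D k_tex0,
             out float4 o_color : COLOR)
{
    // k_tex0 is the texture bound with setShaderInput("tex0", ...)
    // Sample the sharpened result and drop the red channel
    float4 c = tex2D(k_tex0, l_texcoord0);
    o_color = float4(0.0, c.g, c.b, c.a);
}
```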

I do recommend renaming tex0 to something less confusing, like “src” (which is what CommonFilters uses).

I see, it is working now ! Thank you !

Since every renderQuadInto call creates an off-screen buffer, I am wondering if it is possible to reduce this resource requirement when I have a long chain of stages. Is it possible to rewrite part of it to use just a few off-screen buffers for this daisy-chained effect?

Hmm, you could use MRT to have multiple outputs per buffer, but I’m not sure whether that will actually be faster than having a chain of buffers. Since there’s only one quad in each buffer, I don’t expect it to be too slow - is it?

I have implemented about 5-10 2D filters, following Ogre’s demos.

I tried to chain them up and it works fine on my card (a 9500GT). Would it be an issue to run on older cards with that many off-screen buffers created?

I don’t think it’s an issue. If you can show me a better way to do it, I’d love to implement it. But you really do need a chain, I think, since each filter depends on the previous one (this is why you can’t use MRT as well).
Does OGRE use a chain of filters like this as well?

I don’t know how Ogre is implemented internally.

Since I don’t know the details, I just imagine that it could use 2-3 buffers for a chain (from a normal programming point of view). Of course, I have not considered any limitations in Panda’s architecture.

Hmm, I don’t see how such an architecture would work - maybe something with swapping buffers each frame or so?
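To make the buffer-swapping idea concrete: it is usually called ping-pong rendering - with just two textures, each stage reads from one and writes into the other, alternating every stage (a stage can never read and write the same texture, so two is the minimum). Here is a sketch of the index bookkeeping only, in plain Python; this is a hypothetical illustration, not something FilterManager exposes:

```python
def pingpong_schedule(num_stages):
    """Return a (read, write) buffer index pair for each filter stage,
    alternating between two buffers (0 and 1)."""
    schedule = []
    for stage in range(num_stages):
        read = stage % 2         # buffer this stage samples from
        write = (stage + 1) % 2  # buffer this stage renders into
        schedule.append((read, write))
    return schedule

# Five chained filters would need only two intermediate buffers:
print(pingpong_schedule(5))
# [(0, 1), (1, 0), (0, 1), (1, 0), (0, 1)]
```

Whether this would actually help in Panda would depend on being able to rebind a buffer’s render target between stages within a frame, which the chain-of-buffers approach avoids.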

Maybe my fundamental concept is wrong. Anyway, it works fine on my card even with a long chain. Thank you for the help.