MRT: auxiliary texture's background is purple??

I’m completely lost here. All the examples I’ve found seem to be for Cg,
and what I’ve set up doesn’t seem to work at all.

As usual, I don’t quite understand how Panda works.

I want to add a second texture that my first shader can render to. If I am not mistaken,
a simple gl_FragColor[1] would address the additional output. Though when I try this,
it only outputs everything in green.

I’ve taken the createOffscreenBuffer from the web, but I don’t think it’s actually useful.
I feel like things are overly complicated.

Anyhow.

    def createOffscreenBuffer(self, sort, xsize, ysize, auxrgba=False, engine=None):
        winprops = WindowProperties.size(xsize, ysize)
        props = FrameBufferProperties()
        props.setRgbColor(1)
        props.setAlphaBits(1)
        props.setDepthBits(1)

        if auxrgba:
            props.setAuxRgba(1)

        if engine:
            # Note: this branch ignores props and winprops entirely.
            return engine.makeBuffer(base.win.getGsg(), "offscreen buff", sort, xsize, ysize)

        return base.graphicsEngine.makeOutput(base.pipe, "offscreenBuffer", sort, props, winprops,
            GraphicsPipe.BFRefuseWindow, base.win.getGsg(), base.win)


        FirstPassBuffer = CardMaker("FirstPass")  
        FirstPass = NodePath(FirstPassBuffer.generate())
        FirstPass.reparentTo(render)
        FirstPass.setShader(Shader.load(Shader.SL_GLSL, vertex="v.glsl", fragment="f.glsl"))  
        FirstPass.attachNewNode(squareGN)

# the relevant part.
        FirstPassPickingBuffer = createOffscreenBuffer(2, 1600,1000, True)
        FirstPassPickingTexture = Texture()
        FirstPassPickingBuffer.addRenderTexture(FirstPassPickingTexture, GraphicsOutput.RTMBindOrCopy,
            GraphicsOutput.RTPColor)


        from direct.filter.FilterManager import FilterManager
        SecondPass = FilterManager(base.win, base.cam)
        FirstPassOutput = Texture()
        
        FirstPassOutput.setAnisotropicDegree(16)        
        FirstPassOutput.setMagfilter(Texture.FTLinear)
        FirstPassOutput.setMinfilter(Texture.FTLinearMipmapLinear)

        SecondPassOutput = SecondPass.renderSceneInto(colortex=FirstPassOutput)        
        SecondPassOutput.setShader(Shader.load(Shader.SL_GLSL, vertex="v2.glsl", fragment="f2.glsl"))
        SecondPassOutput.setShaderInput("bla", FirstPassOutput)
        SecondPassOutput.reparentTo(render2d)

        base.setFrameRateMeter(True)        

What this piece of code does is render something and then put it into a texture,
using the FilterManager, so I can run a second set of shaders over the output. That works flawlessly,
although I’m still confused about why it’s “SecondPass.renderSceneInto” instead of
“FirstPass.renderSceneInto”, which would actually make more sense. But whatever.

What’s important is “the relevant part”. I’ve gathered that this is the part that makes the texture,
which Panda then magically adds somehow; no idea how, as usual, because it’s all hidden.

So … how does it work?
What am I doing wrong?
How do I write to the texture?

Thanks!

If the scene is rendered with a shader writing to gl_FragColor[0] and gl_FragColor[1], use this to get both rendered textures into a post-process filter:

manager = FilterManager(base.win, base.cam)
colorTex = Texture()
auxTex = Texture()
quad = manager.renderSceneInto(colortex=colorTex, auxtex=auxTex)
quad.setShader(Shader.load(Shader.SL_GLSL, "v.glsl", "f.glsl"))
quad.setShaderInput("colorTex", colorTex)
quad.setShaderInput("auxTex", auxTex)

Hold it!

I already have the primary output given to a second set of shaders.
I’m missing how I can use a second render target in the first shader.

This reads as if all I had to do to get a second render target for the first shader …
… is to add the stuff regarding auxTex. Yet wherever I look on the web,
it’s far more complicated than that. (which is what confuses the hell out of me)

I’ll try this. It looks logical. Almost too logical. Thanks!

If I add a “gl_FragColor[1] = vec4(1.0)” in the first fragment shader,
again all it does is render everything in green.

I think we need to see a bigger part of the code, or you need to describe what you want to do, not just how you want to do it.

I think I figured it out. The mistake was using gl_FragColor instead of gl_FragData.
gl_FragColor only addresses the primary output and cannot be used with multiple render targets; gl_FragData[n] can.
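For reference, a minimal old-style (GLSL 1.20) fragment shader writing to both targets might look like the sketch below; the values are placeholders, and in modern GLSL you would instead declare explicit `out` variables with `layout(location = n)` qualifiers:

```glsl
#version 120

void main() {
    // Target 0: the primary color output (colortex from renderSceneInto)
    gl_FragData[0] = vec4(1.0, 0.0, 0.0, 1.0);
    // Target 1: the auxiliary output (auxtex); note gl_FragData, not gl_FragColor
    gl_FragData[1] = vec4(1.0);
}
```

A shader must use either gl_FragColor or gl_FragData, never both in the same program.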

Now what’s left is the reason why the background colour of auxTex is purple.

Thank you for your help! :slight_smile:

Okay, I understand that I do not render to the whole texture,
and thus get whatever background colour was put into it at creation.

Yet there seems to be no way of changing it, unless I manually draw a black quad… no.

Using base.win.setClearColor does not help. setClearActive does not help either.

How do I get the auxiliary texture to have a black background? o_O
Why the hell is it PURPLE anyway?

        base.win.setClearColor(VBase4(0,0,0,0))
        base.win.setClearActive(DrawableRegion.RTPAuxRgba0, True)        
        base.win.setClearValue(DrawableRegion.RTPAuxRgba0,VBase4(0,0,0,0))

It was just a guess. These three do absolutely nothing to the texture.

So instead I tried to cheat and use makeCopy to simply copy the primary one.
As I expected that didn’t work. I’m still searching for the value to set the clear color…

Why is the primary texture black, yet the second one purple??

From direct/filter/FilterManager.py:

        if (auxtex0):
            buffer.setClearActive(GraphicsOutput.RTPAuxRgba0, 1)
            buffer.setClearValue(GraphicsOutput.RTPAuxRgba0, (0.5, 0.5, 1.0, 0.0))

I think the assumption is that you’re going to use it to store normal vectors, since that color happens to be the color of an up-facing normal.

You would need to do something like this to override it:

# Replace 1 with the proper buffer index for this pass
manager.buffers[1].setClearValue(GraphicsOutput.RTPAuxRgba0, (0, 0, 0, 1))

Though in the vast majority of cases the shader will override the value anyway, so why bother?
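As a side note on that default: normal buffers conventionally remap vectors from [-1, 1] into color range [0, 1] via n * 0.5 + 0.5, so decoding FilterManager’s (0.5, 0.5, 1.0) clear color recovers the straight-up normal (0, 0, 1), which is why the “empty” background looks lavender/purple. A quick sanity check in plain Python (no Panda3D needed):

```python
def color_to_normal(r, g, b):
    """Undo the usual n * 0.5 + 0.5 packing used for normal buffers."""
    return (r * 2.0 - 1.0, g * 2.0 - 1.0, b * 2.0 - 1.0)

# FilterManager's default aux clear color decodes to the up-facing normal.
print(color_to_normal(0.5, 0.5, 1.0))  # (0.0, 0.0, 1.0)
```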

Hey again!

It’s not actually for normals, it’s for easy picking. I’ve tried setClearActive etc,
but it didn’t yield any result. Will try again, maybe I did something wrong.
I can work around this anyway, but I’d be more happy if it wasn’t the way it is now.

And in my case there’s not necessarily the whole screen being written to,
which means that the “background color” might at some point cause issues,
even when I work around it.

The usual picking of 3D objects via ray collision would be a nightmare,
because, as far as I know, Panda will test ray intersection against all objects.
That’s a no-go, even if I reduce it to just the objects visible on screen.

So, instead, I’d like to have a 32-bit single-channel texture as a second render target,
where I conveniently store the number of the object and then read the output texture back.
It’s much, much cheaper performance-wise and only costs me VRAM, which I don’t care much about.
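If a true 32-bit integer target turns out to be awkward, one common fallback is a plain 8-bit RGBA target with the object ID split across the four channels. A pure-Python sketch of the packing scheme the shader and the readback would have to agree on (the function names are just illustrative):

```python
def id_to_rgba(obj_id):
    """Split a 32-bit object ID across four 8-bit channels (as 0..1 floats)."""
    return tuple(((obj_id >> shift) & 0xFF) / 255.0 for shift in (0, 8, 16, 24))

def rgba_to_id(rgba):
    """Reassemble the object ID from the four channel values."""
    return sum(round(c * 255.0) << shift for c, shift in zip(rgba, (0, 8, 16, 24)))

# Round-trips exactly for any ID that fits in 32 bits.
assert rgba_to_id(id_to_rgba(123456789)) == 123456789
```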

Now, though, I fail at creating a 32-bit single-channel texture (still have to try a depth texture, though),
and I fail at reading the texture back, although the buffer (using Panda 1.10.0) doesn’t give me an error.

For 1.9.0 you wrote that there are various new formats for 32-bit textures,
yet I don’t know where to find them.

FirstPassOffScreen.setup2dTexture(1600,1000,Texture.T_int,Texture.F_r32i)

This doesn’t give me errors, though I have no idea whether it actually works as expected.
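Assuming the F_r32i setup works and the RAM image comes back as a flat buffer of little-endian 32-bit ints, reading the ID under a pixel is just an offset calculation plus struct.unpack. A stdlib-only sketch (the row layout and endianness are assumptions; Panda may store texel rows bottom-up):

```python
import struct

def read_r32i(ram_image, x, y, width):
    """Read one signed 32-bit texel from a flat single-channel int32 image."""
    offset = (y * width + x) * 4  # 4 bytes per R32i texel
    return struct.unpack_from("<i", ram_image, offset)[0]

# Tiny 2x1 fake "texture": texel (0,0) holds 7, texel (1,0) holds 42.
fake = struct.pack("<ii", 7, 42)
print(read_r32i(fake, 1, 0, 2))  # 42
```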

Interestingly enough, printing out the object tells me it’s exactly what I wished for.
Though when I click and print it again in a task, it has four channels instead of one.

Hmhmhm.

I’ve managed to get TexturePeeker to work on the first output, but not on the second render target.
get_ram_image yields None; modify_ram_image, however, does send the first output to RAM,
at least a print tells me that it’s in RAM and takes 6.4 MB at 1600x1000.

Yet a peeker lookup yields 0,0,0,0. Everywhere.

Why can’t I just have the OpenGL IDs and create my own PBO? :confused:

I’ve got some code doing something like what you want (I use it to get an x/y pos, but the idea is more or less the same).

Setup:

self.pixel = VBase4()
self.pickingTex = Texture("picking_texture")
props = FrameBufferProperties()
props.setRgbaBits(16, 16, 0, 0)  # you may want (32, 0, 0, 0) instead
props.setSrgbColor(False)
pickingBuffer = base.win.makeTextureBuffer("picking_buffer", 1, 1, self.pickingTex, to_ram=True, fbp=props)
pickingBuffer.setClearColor(VBase4())
pickingBuffer.setSort(10)
self.pickingPeeker = self.pickingTex.peek()
self.pickingCam = base.makeCamera(pickingBuffer)
node = self.pickingCam.node()
lens = node.getLens()
lens.setNear(32.0)
lens.setFar(2**16)
lens.setFov(2.0)
cull_bounds = lens.makeBounds()
lens.setFov(0.4)
node.setCullBounds(cull_bounds)
# node.showFrustum()
state_np = NodePath("picking_state")
state_np.setShader(Shader.load(Shader.SL_GLSL, "pick_v.glsl", "pick_f.glsl"), 1)
state_np.setShaderInput("some_input", 3.14)
node.setInitialState(state_np.getState())

Getting the data from the buffer:

if base.mouseWatcherNode.hasMouse():
    mpos = base.mouseWatcherNode.getMouse()
    pos3d = Point3()
    nearPoint = Point3()
    farPoint = Point3()
    base.camLens.extrude(mpos, nearPoint, farPoint)
    self.pickingCam.lookAt(farPoint)

    if not self.pickingPeeker:
        self.pickingPeeker = self.pickingTex.peek()
    else: 
        self.pickingPeeker.lookup(self.pixel, .5, .5) 

After that self.pixel has the pixel under the mouse pointer.

There is also a full working example of ‘gl picking’ here on the forum:
www.panda3d.org/forums/viewtopic.php?p=92815#p92815

And you’re sure Panda will let my shader fill the “picking texture” ?

Looks seriously complicated. It seems to re-render the area around the pixel,
but I’m wondering whether it’s compatible … I’ll test it. Thanks!

To create a float MRT, you have to use setAuxFloat on the FrameBufferProperties object when opening the buffer. The FilterManager API may not offer this level of flexibility; I’m not sure.

Thanks, you two! Double thanks to wezu,
because the answer was so verbose that I got better insight into how making textures works.

makeTextureBuffer … haha, I had no idea. -.-

Though I still wish this were closer to OpenGL in how things are named,
so it would be easier to find information about how to do some things.

Yes, I agree. We have plans to improve Panda’s render-to-texture API.