Read stencil buffer to texture

I have two questions about the stencil buffer. If I understand correctly, the stencil buffer is an array. How can I view it for debugging?
I use RenderDoc, but when I tried it with an example I only saw a black screen.

My second question: is it possible to write to the framebuffer directly in the form of a stencil mask? I would just like to use a shader to generate the mask.

The stencil buffer is not an array; it is just an 8-bit unsigned integer value per pixel. To enable it, add this to Config.prc:

framebuffer-stencil true

Panda3D writes nothing to the stencil buffer by default. You need to apply a StencilAttrib to your geometry in order to write anything to it.

I don’t believe it’s possible to write to the stencil buffer from a fragment shader; it’s a mask that is either on or off for an object’s fragments. However, you can probably use discard; in your shader to avoid writing to the stencil buffer.
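For example, a minimal fragment shader along these lines (a sketch, assuming a texture with an alpha channel is bound; p3d_Texture0 is Panda's standard GLSL input for the first texture stage) would skip the stencil write for cut-out fragments:

```python
# GLSL fragment shader sketch, stored as a Python string for use with
# Shader.make. Discarded fragments write nothing at all: no color,
# no depth, and no stencil value.
DISCARD_FRAG = """
#version 120
uniform sampler2D p3d_Texture0;
varying vec2 texcoord;

void main() {
    vec4 color = texture2D(p3d_Texture0, texcoord);
    if (color.a < 0.5) {
        discard;  // this fragment leaves the stencil buffer untouched
    }
    gl_FragColor = color;
}
"""
```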

I think the buffer viewer should support debugging of stencil buffers. If this doesn’t work correctly, we should fix that.

The word "framebuffer" seems to have misled me. It looks like a bitmask for collisions, only in this case for rendering. I think I will implement a stencil-based shader instead. This should be faster because, as I understand it, the Panda stencil test runs on the processor.

It actually works. The thing is, the values 0/255 and 1/255 used in the example are indistinguishable to the eye as colors.

Sorry for digging up an old topic, but I’m trying to figure out how to pass the stencil buffer as a texture to a shader. As I understand it, you succeeded (only everything had very low values and was very dark, which is still manageable). However, I also tried to visualize the stencil buffer, so I added the following to the code of the given example (Stencil Attribute — Panda3D Manual):

from direct.filter.FilterManager import FilterManager
from panda3d.core import Texture

manager = FilterManager(base.win, base.cam)
color_tex = Texture()
depth_tex = Texture()
aux_tex = Texture()
quad = manager.renderSceneInto(colortex=color_tex, depthtex=depth_tex, auxtex=aux_tex)

Additionally, of course, before:

base = ShowBase()

I called:

loadPrcFileData("", "framebuffer-stencil #t")
loadPrcFileData("", "show-buffers #t")

so that the stencil buffer is generated at all, and to enable buffer visualization.

The problem is that my screen effect is:

As you can see, the first two buffers repeat the color texture, and the third shows the depth texture (the panda model is quite far away, so it blends into the far plane, but if you bring it closer, it darkens). Okay, but where can I find the stencil buffer?

Oh, everything is pinkish, but that’s only because I haven’t attached any shader. It doesn’t make any difference, though, because I should see something in the buffers anyway.

I’m doing this through a shader input; a sampler2D should work.

Hmm, I think you can modify this example for stencil mask generation.


I think that Panda does not currently allow capturing the stencil buffer using render-to-texture. It’s a bit of a mess. Normally, a combined depth-stencil texture is used if you have both a depth and a stencil buffer. But OpenGL only allows shaders to access one of those things, and by default it’s the depth texture (and Panda doesn’t allow changing this). This limitation also applies to the buffer viewer.

Since OpenGL 4.4, with texture views, it is possible to access both the stencil and depth aspects of a depth-stencil texture at the same time, but this is not currently implemented in Panda. Please file an issue on GitHub if you would like this feature, but note that this will only work on recent hardware. I am not sure how we would expose this feature, we would need some way to tell setTexture/setShaderInput that you’re interested in the stencil bit and not the depth bit.

If you don’t need to read the depth buffer, I believe it is possible to bind a texture to the stencil slot only, but this is also not implemented, since it is extremely rare not to use a depth buffer. However, depending on your use case, we can see what we are able to change in Panda to accommodate this. Maybe there is a way to bind separate stencil and depth textures after all.

Generally, however, it is better to use MRT (i.e. aux targets), with the limitation that they do not have fixed-function testing modes.


Thank you @serega-kkz and @rdb for your answers. However, I must admit that I’m a bit confused because @serega-kkz says it’s possible, while @rdb (and being a Panda3D developer) writes that it’s not possible.

@rdb, sending a feature request won’t benefit me - I work on macOS and the highest supported OpenGL is 4.1.

So maybe I will ask differently: what is the easiest way to find out from the fragment shader (which is a full-screen filter) which object a given pixel of the original texture belongs to? I wrote about it in this thread:

Generally speaking, I am writing my own volumetric lighting filter. In a scene, as you know, some objects cast light and some don’t. Moreover, I have ideas for various interactions between volumetric light and rendered objects (scattering on pixels of floating dust made of particles, or a glow on the edge of a surface where a streak of volumetric light falls). This means I need to encode at least a few categories of objects. For now, since I don’t use the alpha channel in my scene anyway, I thought of using it for this, which actually works so far. I just give different objects different alpha values, and then read them in the shader as the .a component of a vec4. It may be stupid, but it works. Of course, only as long as I don’t want to turn on and use actual transparency.
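For what it’s worth, the alpha-encoding idea above can be sketched as a fragment-shader snippet (the category thresholds and the scene_tex input name are hypothetical; the scene texture is assumed to be bound via setShaderInput):

```python
# GLSL sketch of decoding object categories from the alpha channel
# in a full-screen filter shader. Thresholds are illustrative only.
CATEGORY_FRAG = """
#version 120
uniform sampler2D scene_tex;  // full-screen scene texture
varying vec2 texcoord;

void main() {
    vec4 scene = texture2D(scene_tex, texcoord);
    // Recover the object category that was encoded as alpha.
    if (scene.a > 0.9) {
        // category: light-casting object
        gl_FragColor = vec4(scene.rgb, 1.0);
    } else if (scene.a > 0.4) {
        // category: dust particles that should scatter volumetric light
        gl_FragColor = vec4(scene.rgb * 0.8, 1.0);
    } else {
        // category: everything else
        gl_FragColor = vec4(scene.rgb * 0.5, 1.0);
    }
}
"""
```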

But maybe there are some better methods to solve my problem?

Using the alpha channel sounds perfectly acceptable.

Otherwise, I would advise using MRT for this (multiple render targets). Instead of the stencil buffer, you use an auxiliary render target to which your fragment shader writes this extra information. You can then bind a texture to this aux target using addRenderTexture, with RTMCopyRam mode to indicate that Panda should copy it to RAM for inspection.

To do this you need to call setAuxRgba(1) on the FrameBufferProperties object to request a single aux buffer, and use the addRenderTexture call to bind it to a texture. In a GLSL 120 shader, you can write to gl_FragData[1]. In a GLSL 330 shader, you need to declare the output separately as layout(location=1) out vec4 yourvariablenamehere;.

Let me know if you need more information.


Thank you. I’m all the more glad that my idea with the alpha channel didn’t turn out to be so stupid. :slight_smile:

As for MRT and whether the information you provided is complete… I don’t know. Let’s make a deal: I’ll come back here and check it out when I feel like turning on transparency, since the current solution with the alpha channel just works…

Accessing the stencil buffer in a shader doesn’t make much sense, since the stencil test runs before the fragment shader output is written. In other words, it is a utility function of the graphics library during rendering.

For example, if you disable writing to the depth buffer for an object, then in later rendering you will not be able to use that data, even if you have access to the depth buffer, because the data simply does not exist.

What I meant was generating stencil-like masks through an additional buffer and then passing them to the shader through a sampler2D. Notably, in Panda3D this can even be done without a shader: it is enough to use a camera with a tag state to create an additional render pass.

Sorry for misleading.