Combining Depth and More in a Shader?

In a shader, is there a way either to write a custom value to the associated depth-texture (that is, as specified via “depthtex” in a call to “renderSceneInto”), or to retrieve the depth-value of the current fragment?

To explain, I’m working on a post-process effect. The shader for this effect relies primarily on a comparison of depth-values. However, I also want it to take into account some additional data that I intend to render.

In its most basic form, this isn’t all that difficult: I associate a texture with “auxtex” in my call to “renderSceneInto”, render the additional data into “color2”, bind the resultant texture to my final scene-quad, and then read a pixel from the texture.
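
In GLSL terms, that first pass might look something like the following (a rough sketch: “extraData” and “texcoord” are just illustrative names, and I’m assuming that draw buffer 1 corresponds to the aux target):

#version 330

uniform sampler2D p3d_Texture0;
uniform float extraData; // illustrative: the additional value being rendered
in vec2 texcoord;

// Draw buffer 0 is the main colour target; draw buffer 1 is the aux target.
layout(location = 0) out vec4 o_color;
layout(location = 1) out vec4 o_aux;

void main() {
    o_color = texture(p3d_Texture0, texcoord);
    o_aux = vec4(extraData, 0.0, 0.0, 1.0);
}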

However, the post-process effect in question already involves reading multiple depth-values, and I find myself wanting to likewise access multiple points in my additional data.

But I’m hesitant to employ too many pixel-reads. At the same time, it seems to me that the depth-texture only really needs a single colour-channel. As a result, I’m hoping to combine the additional data with the depth-data in a single texture; that way, my depth-reads would inherently provide my additional data, meaning that there would be no increase in the number of pixel-reads involved.

However, I’m not sure how to go about this, or whether it’s feasible at all. Hence my question here!

You can write to the depth buffer by writing to gl_FragDepth. There are some caveats to this: for example, any depth testing that you may have enabled will now take place after the fragment shader runs instead of before it, which may be slightly less efficient, because the fragment shader may run for fragments that would previously have been discarded by the depth test.
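
A minimal sketch of the mechanism (the colour output name is arbitrary):

#version 330

out vec4 o_color;

void main() {
    // This replaces the depth value that would otherwise be written
    // automatically; it must lie in the [0, 1] window-space depth range.
    gl_FragDepth = 0.5;
    o_color = vec4(1.0);
}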

You can also add additional auxiliary bitplanes to write to; you’re not limited to just a single auxtex.

Hmm… I see.

But if I write to gl_FragDepth, from where do I get the current depth-value (in order to include it alongside my data)? Or will it already be filled in by the time I write to it, allowing me to simply replace, say, the green channel with my data?

Indeed, but I don’t think that I need more right now; as yet I have only two pieces of data (depth and one other) that are important to me.

Sorry, I only now understand that you’re asking for other channels of data to be combined into the depth texture without affecting the depth value itself.

A depth texture isn’t like a regular RGBA texture; it is stored in a special format that we cannot deviate from. The only case in which it is possible to combine some other type of data into the depth texture is via the use of a depth-stencil texture. The limitations of using a depth-stencil texture for this purpose are, however:

  • It is limited to a single 8-bit unsigned integer.
  • There’s no gl_FragStencil to write to as far as I know (perhaps some extensions offer it); you are limited to using StencilAttrib to decide what is written to the stencil buffer. That makes it usable for, e.g., a small object-type number or an effect mask, but nothing shader-defined.
  • Sampling the stencil part of a depth-stencil texture in a shader requires OpenGL 4.4, and it still requires two separate samplers (a rough sketch follows this list).
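
For reference, the sampling side of that OpenGL 4.4 route might look roughly like this (untested; “depthTex” and “stencilTex” are illustrative names, and the stencil sampler needs DEPTH_STENCIL_TEXTURE_MODE set to GL_STENCIL_INDEX on the GL side):

#version 440

// Two samplers over the same depth-stencil data: one reads the depth
// component, the other reads the stencil component as an unsigned integer.
uniform sampler2D depthTex;
uniform usampler2D stencilTex;
in vec2 texcoord;
out vec4 o_color;

void main() {
    float depth = texture(depthTex, texcoord).r;
    uint stencil = texture(stencilTex, texcoord).r;
    o_color = vec4(vec3(depth), float(stencil) / 255.0); // illustrative use
}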

An easier approach might be to not sample from the depth buffer at all, but rather to write the depth value into one of the channels of your aux texture, so that you only have to sample the aux texture.
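
On the post-process side, a single read then yields both values. A rough sketch, assuming the depth was packed into the red channel and the additional data into the green channel of a texture bound as “auxTex” (both names illustrative):

#version 330

uniform sampler2D auxTex;
in vec2 texcoord;
out vec4 o_color;

void main() {
    // One texture read provides both the depth and the additional data.
    vec2 data = texture(auxTex, texcoord).rg;
    float depth = data.r;
    float extra = data.g;
    o_color = vec4(depth, extra, 0.0, 1.0); // illustrative use of the two values
}

Note that a default 8-bit aux buffer gives only very coarse depth, so a higher-precision (e.g. floating-point) aux texture would likely be wanted for this.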

An alternative, if you happen to already be sampling colour values, is to write it to the alpha channel of the colour buffer instead, since the alpha channel isn’t used after alpha blending. The glow filter that ships with Panda3D does this.

I see; thank you for the information! :slight_smile:

This sounds like exactly the thing for my case; it even has the advantage, I do believe, of allowing me to drop the dedicated depth-texture entirely.

My main question, then, is how I access the depth in the shader; but I recall now that I should have that available in the transformed vertex (i.e., the position after the model-view-projection matrix has been applied).

I believe that answers my question, then. Thank you! :slight_smile:

gl_FragCoord.z

Would it not also be available as something like the following?

// In the vertex shader:
gl_Position = p3d_ModelViewProjectionMatrix * p3d_Vertex;
float depth = gl_Position.z / gl_Position.w;

(I’m not sure offhand whether it would be the z- or the y-coordinate that would be used, and I’m not at my code right now to try it, but the basic idea should be much the same.)

Yes, that’s the idea, though using gl_FragCoord saves you from having to pass the value to the fragment shader yourself. (Note that gl_FragCoord.z lies in the [0, 1] range, whereas the post-divide z lies in [-1, 1], so the two differ by a remap.)
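
Putting it together, the first pass could then be as simple as this (a rough sketch; the texture input and output names are illustrative):

#version 330

uniform sampler2D p3d_Texture0;
in vec2 texcoord;
out vec4 o_color;

void main() {
    vec3 rgb = texture(p3d_Texture0, texcoord).rgb;
    // The otherwise-unused alpha channel carries the fragment's
    // window-space depth into the post-process pass.
    o_color = vec4(rgb, gl_FragCoord.z);
}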

Ah, fair enough. That is more convenient then, so thank you! :slight_smile: