Screen Space Local Reflections v2

Local reflections in screen space. This is a minimal example; it can be extended, for example, with a reflection-intensity mask, blur, antialiasing, etc.
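The core per-pixel step is just reflecting the view ray about the G-buffer normal before marching it through the depth buffer. As a hedged sketch of that math in plain Python (vectors as tuples; names are illustrative, not taken from the shader source):

```python
def reflect(incident, normal):
    # r = i - 2 * dot(n, i) * n, with `normal` assumed to be unit length
    d = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2.0 * d * n for i, n in zip(incident, normal))

# A ray looking straight down at an upward-facing surface bounces straight up:
print(reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0)))  # -> (0.0, 0.0, 1.0)
```

The shader does the same thing with Cg's built-in reflect(), then steps the resulting ray through screen space.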

Pros: does not require the creation of additional cameras. You can reuse a G-buffer already created for other purposes (such as deferred lighting). Performance is almost independent of scene complexity. Works with autoshaders.

Cons: multiple artifacts - you need to mask them somehow (blur is one option). Reflects only what is visible on screen. A “voracious” raytrace algorithm.

I have included two versions of the shader in the source code. They differ slightly in the raytrace loop. The difference becomes noticeable as the camera moves farther away.


Source (updated to v2)

This is awesome!

Thanks )

BTW: I found a strange bug with this example on Intel onboard video: if I disable “show-buffers”, it does not work correctly (using the dev version of Panda). On ATI it works as expected.
Has anyone had the same problem?

Looks interesting, is there a (simple) way to control what gets reflected into what?

You can make a mask like in the glow filter, i.e. render the scene with special textures that mark where, and how strongly, reflections should appear on the surface. Then you can use this mask in mix.sha, or better in ssr.sha, to control the reflections.
I can make an example if you wish.
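To illustrate the idea, here is a hedged Python model of the per-pixel math such a masked mix would perform (`mix_masked`, `mask`, etc. are illustrative names, not from the actual shaders):

```python
def mix_masked(albedo, refl, refl_alpha, mask):
    # Blend the reflection into the base color, scaled by a per-pixel
    # mask value (0 = no reflection, 1 = full reflection strength).
    a = refl_alpha * mask
    return tuple(b * (1.0 - a) + r * a for b, r in zip(albedo, refl))

# With mask = 0 the surface is untouched; raise the mask to fade the
# reflection in:
print(mix_masked((0.5, 0.5, 0.5), (1.0, 0.0, 0.0), 0.8, 0.0))  # -> (0.5, 0.5, 0.5)
```

In the shader the mask would simply be one more texture sampled at the same screen coordinate.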

If it would work with “gloss maps” then that would be in “omg/wow” territory for me :mrgreen:

Sure, that’s relatively straightforward. Just render the gloss information into a separate render target (or the alpha channel of the framebuffer), and use it in the post-processing shader to attenuate the intensity of the effect.

The shader generator currently doesn’t support writing gloss information to an auxiliary render target, so you’d have to apply your own shader, add the relevant functionality to AuxBitplaneAttrib and ShaderGenerator (which is fairly straightforward), or simply use glow maps for the purpose and add an AuxBitplaneAttrib set to ABO_glow on “render”. (You then apply the gloss map to the glow slot, but in the shader you really use it as a gloss map, which should work fine as long as you don’t use setBloom.)

This is one of the most awesome things I’ve seen recently. No, seriously. I have a new plan for tomorrow – plow through this code and understand how it works, because OMG. Sure, there are some artifacts, but… wow :smiley:

And I can’t believe this was achieved with less than 250 lines of Cg and Python. You, sir, are amazing.

Can you say anything about how expensive it is and how it scales? Any idea how it would perform in less tech-demo sized scenes?

I want to note that it’s not my idea; I used an existing technique described in several sources. I just ported it to Panda :slight_smile:
Actually, the technique is not too complex, so the code is rather short. Of course, I was able to reduce it by using Panda’s autoshaders to build the geometry buffer.
The biggest problems were caused by the transformations between different spaces; as you can see, the reflections are still not fully correct for this reason.
As for expensiveness: the shader uses raytracing, and each pixel can require up to X texture reads, where X is the trace cycle count. By default X = 40. The original technique uses full raytracing (from the point to the screen edge), but I was restricted by Cg, which unrolls loops and does not work with dynamic or large cycle counts.
I think we can improve performance if we use a mask to control where we need reflections.
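A rough Python model of that bounded march (illustrative names; the depth buffer is stood in for by a callable) shows why the per-pixel cost is capped by the step budget rather than by scene complexity:

```python
def trace(uv, step_uv, ray_depth, step_depth, depth_lookup, max_steps=40):
    # March a fixed number of steps through screen space; each step costs
    # one depth-texture read, so max_steps bounds the per-pixel cost.
    reads = 0
    for _ in range(max_steps):
        uv = (uv[0] + step_uv[0], uv[1] + step_uv[1])
        ray_depth += step_depth
        if not (0.0 <= uv[0] <= 1.0 and 0.0 <= uv[1] <= 1.0):
            return None, reads   # ray left the screen: nothing to reflect
        reads += 1
        if depth_lookup(uv) <= ray_depth:
            return uv, reads     # ray passed behind a surface: hit
    return None, reads           # budget exhausted without a hit

# A flat "depth buffer" at depth 0.5, with the ray descending 0.125 per
# step: it hits after 4 reads.
hit, reads = trace((0.2, 0.2), (0.01, 0.0), 0.0, 0.125, lambda uv: 0.5)
print(reads)  # -> 4
```

A fixed max_steps also keeps the loop unrollable, which is exactly the Cg restriction mentioned above.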

I don’t quite understand the question, can you explain?

Sure, I was just asking about how it would scale to larger scenes, more complex environments. Since it’s screen space, I don’t think that overhead would be too big, though.

I’m aware of that you didn’t invent the algorithm, but implementing it is enough to be awesome :smiley:. Correct me if I’m wrong, but isn’t this technique (or a variant of it?) used in CryEngine? They also have another interesting tech in there – light propagation volumes. Basically also reflections, but diffuse. Have you thought of giving that a spin?

Anyway, this can be a great way to complement environment mapping, or even replace it altogether. Good job once again :smiley:.

Ah, yes, the technique is mostly independent of scene complexity because it works with the already rendered image.

You’re absolutely right :slight_smile:

I read about this, but I decided that it has an inefficient ratio of expensiveness to showiness to complexity to portability, and I’m not very well versed in the details.

The technique has serious limitations: we can’t reflect anything outside of screen space, or the back side of an object, so it can’t fully replace environment mapping or traditional reflections with additional cameras. But if we don’t need accurate reflections, it works quite well.

Yay! Yes! I did it! My brain was burned, but huh… I did it ) Now I have more or less geometrically correct reflections!


Source coming soon.

Awesome! Great job! :slight_smile:

Thanks )
I’ve updated first message in topic with new source code.

This is great. I get a better framerate with shader 1.
Is it possible to prevent a model from being reflected? For example, if I don’t want the teapot to be reflected.

It’s only a basic example that shows the technique, so by default, no, but it’s not too hard to change it for your needs. Here I changed some files to use myNodePath.setShaderInput(‘rintensity’, intensity_value) for reflection intensity control.

It’s not optimized and not the best solution, but it shows how you can change it.

There’s a bug with the masked version when you resize the window - maybe it’s there in the previous version too and I’ve missed it. Anyway, I think it’s a Panda bug with the FilterManager, the same as when using bloom:

Here’s a screen:

There’s some sort of after-image at the top, the mask is not in the place of the teapot, and it all gets weirder the more I resize the window.

Yes, the same thing happens without a mask. It seems that after a buffer resize Panda does not update the texpad_ variables.
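For context on what goes stale: the texpad_ inputs remap the fullscreen quad’s [-1, 1] coordinates into the sub-region of the (padded, power-of-two) texture that the scene actually occupies, roughly as in this Python sketch (illustrative names; texpad is half the occupied fraction per axis):

```python
def texpad_uv(pos_xz, texpad_xy):
    # uv = pos * texpad + texpad, as in the vertex shaders in this thread
    return tuple(p * t + t for p, t in zip(pos_xz, texpad_xy))

# When the scene fills the whole texture, texpad is (0.5, 0.5) and the
# quad corners land exactly on the UV corners:
print(texpad_uv((-1.0, -1.0), (0.5, 0.5)))  # -> (0.0, 0.0)
```

Resizing the window changes the occupied fraction, so stale texpad values would shift and squash the sampled region, which would match the artifacts described.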

I’ve been toying with the shaders, and if I add some blur (well, a lot really) I can get a decent reflection with just 10 rays/steps. With such a setting I got a 25-100% framerate increase, and it looks like this:

If anyone is interested I added the blur in the mix shader like this:

void vshader(
    float4 vtx_position : POSITION,
    float2 vtx_texcoord0 : TEXCOORD0,
    out float4 l_position : POSITION,
    out float2 l_texcoord0 : TEXCOORD0,
    out float2 l_texcoord1 : TEXCOORD1,
    uniform float4 texpad_albedo,
    uniform float4 texpad_reflection,
    uniform float4x4 mat_modelproj)
{
    l_position = mul(mat_modelproj, vtx_position);
    l_texcoord0 = vtx_position.xz * texpad_albedo.xy + texpad_albedo.xy;
    l_texcoord1 = vtx_position.xz * texpad_reflection.xy + texpad_reflection.xy;
}

void fshader(float2 l_texcoord0 : TEXCOORD0,
             float2 l_texcoord1 : TEXCOORD1,
             out float4 o_color : COLOR,
             uniform sampler2D albedo : TEXUNIT0,
             uniform sampler2D reflection : TEXUNIT1)
{
    float4 A = tex2D(albedo, l_texcoord0);
    // Hardcoded fast gaussian blur: center tap + 12 Poisson-disk offsets
    float2 samples[12] = {
        float2(-0.326212, -0.405805),
        float2(-0.840144, -0.073580),
        float2(-0.695914,  0.457137),
        float2(-0.203345,  0.620716),
        float2( 0.962340, -0.194983),
        float2( 0.473434, -0.480026),
        float2( 0.519456,  0.767022),
        float2( 0.185461, -0.893124),
        float2( 0.507431,  0.064425),
        float2( 0.896420,  0.412458),
        float2(-0.321940, -0.932615),
        float2(-0.791559, -0.597705)
    };
    float4 R = tex2D(reflection, l_texcoord1); // center tap
    for (int i = 0; i < 12; i++)
        R += tex2D(reflection, l_texcoord1 + samples[i] * 0.01);
    R /= 13; // average of center + 12 offset taps
    o_color = float4(A.rgb * (1 - R.a) + R.rgb * R.a, A.a);
}


Of course I’m not the kind of person who knows how to write a proper blur function; the code is from ninth’s lens flare shader, so all the fame and glory is his (and John Chapman’s) to claim.

Sorry for the double post, but again I find myself doing something wrong and in need of help.

I’m trying to connect this with a deferred renderer, but something is wrong. I see all that I should in the buffer viewer, but there’s no effect on screen.

My code so far:

from panda3d.core import *
from direct.filter.FilterManager import *
import random

class DeferredRenderer():
    def __init__(self, base, scene_mask=1, light_mask=2, SSLR=False):
        # Camera setup 
        self.light_cam = base.makeCamera(base.win)
        self.light_cam.reparentTo(base.camera)
        self.light_cam.node().getLens().setFar(5000.0)
        self.scene_mask = BitMask32(scene_mask)
        self.light_mask = BitMask32(light_mask)

        self.light_cam.node().getDisplayRegion(0).setClearColor(Vec4(.0, .0, .0, 1))
        #Buffers creation 
        self.manager = FilterManager(base.win, base.cam)
        self.depth = Texture()
        self.albedo = Texture()
        self.normal = Texture()        
        final_quad=self.manager.renderSceneInto(colortex = self.albedo,
                                                depthtex = self.depth,
                                                auxtex = self.normal,
                                                auxbits = AuxBitplaneAttrib.ABOAuxNormal)
        if SSLR:            
            ssr_filter, ssr_quad = self.make_filter_buffer(self.manager.buffers[0],
                                        'ssr_buffer', 1, 'ssr_zfar.sha',
                                        texture = None,
                                        inputs = [("albedo", self.albedo),
                                                  ("depth", self.depth),
                                                  ("normal", self.normal)])
            final_quad.setShaderInput("albedo", self.albedo)
            final_quad.setShaderInput("reflection", ssr_filter.getTexture())
            final_quad.setShaderInput("reflection", ssr_filter.getTexture())
        #light list
        self.lights = []
        #flickering lights
        self.flicker = {}
        #geometry list
        self.geometery = []
        taskMgr.doMethodLater(.1, self.update, 'renderer_update')
    def make_filter_buffer(self, srcbuffer, name, sort, prog, texture = None, inputs = None):
        filterBuffer = base.win.makeTextureBuffer(name, 512, 512)
        filterBuffer.setSort(sort)
        filterCamera = base.makeCamera2d(filterBuffer)
        blurScene = NodePath("Filter scene %i" % sort)
        filterCamera.node().setScene(blurScene)
        shader = loader.loadShader(prog)
        card = srcbuffer.getTextureCard()
        if texture:
            card.setTexture(texture)
        card.reparentTo(blurScene)
        card.setShader(shader)
        if inputs:
            for name, val in inputs:
                card.setShaderInput(name, val)
        return filterBuffer, card
    def update(self, task):
        for light in self.flicker:
            self.doFlicker(light)
        return task.again
    def addGeometry(self, node):
        if isinstance(node, basestring):
            node = loader.loadModel(node)
        node.reparentTo(render)
        self.geometery.append(node)
        #return the index of the node to remove it later
        return self.geometery.index(self.geometery[-1])
    def removeGeometry(self, geometryID):
        if self.geometery[geometryID]:
            self.geometery[geometryID].removeNode()
    def addLight(self, color, model="volume/sphere", pos=None, radius=1.0, diffuse=Vec3(1,1,1), specular=Vec3(.8,.8,.8), attenuation=Vec3(0.1,0.2,0.003)):
        #light geometry
        if isinstance(model, basestring):
            model = loader.loadModel(model)
        self.lights.append(model)
        self.lights[-1].reparentTo(render)
        self.lights[-1].setScale(radius)
        if pos:
            self.lights[-1].setPos(pos)
        #self.lights[-1].setAttrib(CullFaceAttrib.make(CullFaceAttrib.MCullClockwise)) #??
        self.lights[-1].setAttrib(ColorBlendAttrib.make(ColorBlendAttrib.MAdd, ColorBlendAttrib.OOne, ColorBlendAttrib.OOne))
        #light shader
        self.lights[-1].setShaderInput("albedo", self.albedo)
        self.lights[-1].setShaderInput("depth", self.depth)
        self.lights[-1].setShaderInput("normal", self.normal)
        self.lights[-1].setShaderInput("Kd", diffuse)
        self.lights[-1].setShaderInput("light_radius", radius)
        #return the index of the light to remove it later
        return self.lights.index(self.lights[-1])
    def removeLight(self, lightID):
        if self.lights[lightID]:
            self.lights[lightID].removeNode()
    def setFlicker(self, lightID, min_attenuation, max_attenuation):
        self.flicker[lightID]=[min_attenuation, max_attenuation]
    def doFlicker(self, lightID):
        const=random.uniform(self.flicker[lightID][0][0], self.flicker[lightID][1][0])
        lin=random.uniform(self.flicker[lightID][0][1], self.flicker[lightID][1][1])
        quad=random.uniform(self.flicker[lightID][0][2], self.flicker[lightID][1][2])
        self.lights[lightID].setShaderInput("att_params",(const, lin, quad))