Screen Space Soft Shadows Sample

Screen Space Soft Shadows Sample - that’s a lot of Ss :smiley:

This is (yet another) attempt at making softer-looking shadows. I wouldn’t call it a perfect or even a good example, but maybe someone will find some use for it or make it better.
Version 1.9 is needed, and you also need to move the camera (zoom out) when the demo starts if you want to see anything at all.
s5.zip (41.8 KB)

We need more minerals soft shadows )

//GLSL
#version 110
#define PI 3.14159265

uniform sampler2D depth_map;

varying vec4 lightclip;
varying vec4 shadowCoord;

// Note: 'bias' and 'lightclip' are declared but unused in this pass;
// a hardcoded bias of 0.001 is used in the comparison below.
uniform float bias;
uniform float shadow_pixel;

void main()
    {
    // Perspective divide to get shadow map coordinates (.q is the same as .w).
    vec4 shadowUV = shadowCoord / shadowCoord.q;
    float cur_depth = texture2D(depth_map, shadowUV.xy).r;
    // Receiver-blocker depth difference: scales the sampling radius,
    // so the penumbra widens with distance from the occluder.
    float diff = (shadowUV.z - cur_depth) * 6.0;

    // Take 12 taps arranged in two rings of 6, rotating by PI/1.6 each step.
    float shade = 0.0;
    float h, w;
    float dl = PI / 1.6;
    float l = 0.0;
    for (int j = 1; j < 3; j++)
        for (int i = 0; i < 6; i++)
            {
            h = sin(l) * shadow_pixel * 15.0;
            w = cos(l) * shadow_pixel * 15.0;
            cur_depth = texture2D(depth_map, shadowUV.xy + vec2(h, w) * diff * float(j)).r;
            shade += float(cur_depth < shadowUV.z - 0.001);
            l += dl;
            }
    // Average the taps and remap to the final shadow intensity:
    // fully lit stays near 0.96, fully shadowed bottoms out at 0.6.
    shade /= 12.0;
    shade = clamp(shade, 0.1, 1.0);
    shade = 1.0 - shade * 0.4;

    gl_FragColor = vec4(shade, shade, shade, 1.0);
    }


Ah! Proper shadow filtering, nice. [lie]I was just going to look up and test PCF and VSM implementations.[/lie]

What I think will make the shadows look better is a different blurring algorithm. I don’t think the one-pass, hardcoded fast gaussian blur I used is best suited for this.

Nice!

And indeed a lot of Ss :slight_smile:

Thanks for the code!

I agree. In my opinion that particular blur implementation does not work very well when the input has sharp edges: offset copies of the edges will show up in the output, because it uses so few samples. In LensFlare it looks decent, but for most other applications I think more samples are needed.

You could use a two-pass blur like the BlurSharpen filter does. At least I got good results in the LocalReflection filter when I replaced the hardcoded single-pass fast gaussian blur with the two-pass approach from BlurSharpen. (LocalReflection actually includes both blur modes so you can compare - single pass uses the fast hardcoded gaussian, while two-pass uses the algorithm from BlurSharpen.)

Granted, the current BlurSharpen filter uses a rectangular kernel (kernel as in convolution kernel in signal processing), which may or may not do what you want. If you specifically want a gaussian, it wouldn’t be too hard to modify BlurSharpen into an efficient two-pass gaussian. This basically only requires rewriting the blur kernels (kernel as in computational kernel, i.e. the shader) - the control logic can be kept as it is.

The basic idea of the two-pass blur is to notice that the 2D blur kernel is separable: it is the product of two 1D blur kernels (along the x and y directions), i.e. K(x,y) = K1(x)K2(y). So you get the same result if you apply two 1D blurs in succession (first blur along x, then blur the result along y, or the other way around - the ordering doesn’t matter). This reduces the number of taps required from N^2 (sample every pixel in the stencil) to 2N (sample along two lines), where N is the blur radius* in pixels. By further utilizing the “free” (hardware-based) linear interpolation of GPUs, sampling at cleverly chosen points between pixels and weighting the results appropriately, you can reduce this further to around N taps (see the sketch below).

(* Strictly speaking, the side-length of the square-shaped stencil.)
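To make this concrete, here is a minimal sketch of what the horizontal 1D pass could look like in GLSL, using the linear-sampling offsets and weights derived in the rastergrid article linked below. The uniform names (tex, pixel_size) are made up for this sketch, and it assumes the vertex shader writes gl_TexCoord[0]; the vertical pass is identical with the offset applied along y.

//GLSL - sketch of the horizontal pass of a two-pass gaussian blur
#version 110
uniform sampler2D tex;      // input image to be blurred (hypothetical name)
uniform float pixel_size;   // 1.0 / texture width (1.0 / height for the y pass)

void main()
    {
    vec2 uv = gl_TexCoord[0].xy;
    // 5 linear taps approximate a 9-tap gaussian thanks to hardware
    // bilinear filtering; offsets/weights from the rastergrid article
    // (weights sum to ~1.0).
    vec4 sum = texture2D(tex, uv) * 0.2270270270;
    sum += texture2D(tex, uv + vec2(pixel_size * 1.3846153846, 0.0)) * 0.3162162162;
    sum += texture2D(tex, uv - vec2(pixel_size * 1.3846153846, 0.0)) * 0.3162162162;
    sum += texture2D(tex, uv + vec2(pixel_size * 3.2307692308, 0.0)) * 0.0702702703;
    sum += texture2D(tex, uv - vec2(pixel_size * 3.2307692308, 0.0)) * 0.0702702703;
    gl_FragColor = sum;
    }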

On top of this, depending on the application, you may be able to save a further ~75% of run time in the blur passes by first downsampling the blur input to quarter resolution (half width, half height). As a further bonus, this doubles your blur radius (as measured in screen real estate) at the same kernel size (as measured in pixels).

Since we are applying a blur, there shouldn’t be any sharp edges in the output anyway, so the lower resolution might not be noticeable in the result. When you composite onto the final image, the quarter-resolution blur texture is automatically interpolated bilinearly by the GPU back to full resolution.

If you choose to use this downsampling approach, you may find it useful to do the downsampling in a separate pass of its own (making this a three-pass algorithm): it greatly simplifies the blur shaders, because it lets you assume a 1:1 mapping between input and output pixels in the actual blur passes. This extra pass is almost required if you want to use the linear interpolation technique while keeping the shader code as simple as possible. A sketch of such a pass follows.
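For example, the downsampling pass could be as simple as this (again the uniform names are hypothetical, and the render target is assumed to be half the width and height of the source):

//GLSL - sketch of a separate downsampling pass (averages a 2x2 source block)
#version 110
uniform sampler2D tex;      // full-resolution input (hypothetical name)
uniform vec2 pixel_size;    // 1.0 / source resolution

void main()
    {
    vec2 uv = gl_TexCoord[0].xy;
    // The four half-pixel offsets land exactly on the centers of the
    // 2x2 source texels covered by this destination pixel.
    vec4 sum = texture2D(tex, uv + pixel_size * vec2(-0.5, -0.5));
    sum += texture2D(tex, uv + pixel_size * vec2( 0.5, -0.5));
    sum += texture2D(tex, uv + pixel_size * vec2(-0.5,  0.5));
    sum += texture2D(tex, uv + pixel_size * vec2( 0.5,  0.5));
    gl_FragColor = sum * 0.25;
    }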

The two-pass technique with GPU-based interpolation is explained in detail in this article http://rastergrid.com/blog/2010/09/efficient-gaussian-blur-with-linear-sampling/, which I think ninth linked way back in another thread. You can get some more ideas from this comparison of fast gaussian blur algorithms http://blog.ivank.net/fastest-gaussian-blur.html, although this second article focuses on a CPU-based implementation.

I’ve been meaning to do this (I’d like to have the gaussian blur kernel as an option), but I thought I needed a break from postprocessing, so for a change I’m currently looking into making a faster hair physics simulator. :slight_smile:

(On which note, I think you have been experimenting with something Bullet-based along the same lines? Maybe we could combine forces, before I go reinventing the square wheel. I have in mind one way of making a fast(er) hair simulator that I’ve been discussing with rdb, but it still involves custom physics code.

Using Bullet would have the advantage of having a pre-made collision system available, so that I wouldn’t have to duplicate one of the messiest parts of Newtonian physics, namely contact mechanics. It involves a lot of tricky geometric considerations and not much actual physics - at least when considering the simplest variants with Coulomb friction and the perfect slip model often used in games.)

I’ve also had the idea to downsample the buffer, and that works just fine. But I’ve bumped into another problem with the blur - not all the edges should be blurred:


The marked edge is where the shadow is occluded by some geometry; this should not be blurred, but I can’t find a (cheap) way to keep this edge sharp.

Also, the screenshot shows how the shadows look when doing a 2x blur in a downsampled buffer (the shadows have not been pre-filtered).

I’ve also got a more ‘real life’ implementation, where the shadows are rendered into an aux render target; it’s somewhere in my editor :mrgreen:

My hair/cloth sim is very simple, I’ll write about it in your topic, just to keep the forum a wee bit less messed up :wink:

Hmm.

The case of a shadow occluded by geometry in front of it looks like it could be solved by adapting the blur width based on examining a small neighborhood in the depth buffer of the main camera.

The shader examining the depth buffer could then write a blur radius texture, which would be used as an auxiliary input to a variable-width blur shader.
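Sketching the idea (completely untested, and all the names here are hypothetical), the depth-analysis pass could look something along these lines:

//GLSL - hypothetical depth-analysis pass: writes a per-pixel blur radius
// that shrinks to zero near depth discontinuities in the main camera's
// depth buffer, so the blur won't bleed across occluder edges.
#version 110
uniform sampler2D scene_depth;  // main camera depth buffer (hypothetical name)
uniform vec2 pixel_size;        // 1.0 / resolution

void main()
    {
    vec2 uv = gl_TexCoord[0].xy;
    float center = texture2D(scene_depth, uv).r;
    // Largest depth difference in a small cross-shaped neighborhood.
    float d = 0.0;
    d = max(d, abs(texture2D(scene_depth, uv + vec2(pixel_size.x, 0.0)).r - center));
    d = max(d, abs(texture2D(scene_depth, uv - vec2(pixel_size.x, 0.0)).r - center));
    d = max(d, abs(texture2D(scene_depth, uv + vec2(0.0, pixel_size.y)).r - center));
    d = max(d, abs(texture2D(scene_depth, uv - vec2(0.0, pixel_size.y)).r - center));
    // Full radius on smooth surfaces, none across strong edges;
    // the scale factor 100.0 is an arbitrary guess to be tuned.
    gl_FragColor = vec4(clamp(1.0 - d * 100.0, 0.0, 1.0));
    }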

Efficient variable-width blur is the trickiest part. The first thing that comes to mind is the tridiagonal heat equation solver in the depth-of-field filter by Kass et al. (http://graphics.pixar.com/library/DepthOfField/paper.pdf, linked earlier in the thread on CommonFilters). But it is a bit complicated to implement. I might get around to it eventually, though :slight_smile:
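For reference, the brute-force version of such a variable-width blur pass - which is what the heat equation approach would replace - might look something like this (again naive, untested, and with hypothetical names):

//GLSL - naive variable-width 1D blur driven by the radius texture above;
// one tap per stencil pixel, i.e. the slow version.
#version 110
#define MAX_RADIUS 8
uniform sampler2D tex;          // blur input (hypothetical name)
uniform sampler2D radius_map;   // output of the depth-analysis pass
uniform float pixel_size;       // 1.0 / texture width (x pass)

void main()
    {
    vec2 uv = gl_TexCoord[0].xy;
    float radius = texture2D(radius_map, uv).r * float(MAX_RADIUS);
    vec4 sum = vec4(0.0);
    float weight_sum = 0.0;
    for (int i = -MAX_RADIUS; i <= MAX_RADIUS; i++)
        {
        // Box weight: taps beyond the local radius contribute nothing.
        float w = float(abs(float(i)) <= radius);
        sum += texture2D(tex, uv + vec2(pixel_size * float(i), 0.0)) * w;
        weight_sum += w;
        }
    // The center tap always has weight 1, so weight_sum is never zero.
    gl_FragColor = sum / weight_sum;
    }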

Of course this is just an idea, obviously I haven’t tested it :slight_smile:

Nice soft edges. I think it looks pretty good.

A familiar feeling :stuck_out_tongue:

Yes, good point :slight_smile: