When showing objects that represent light-sources (lamps, burning torches, etc.), it’s not uncommon, I think, to place a sprite depicting a circular “glow” at the location of the light-source, to give the impression of light-bloom. However, if this sprite is a simple billboarded quad, and if the light-source is placed close to a wall, the sprite can end up clipping into the wall, resulting in an unsightly sharp edge to the “glow”.
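For concreteness, the sort of setup that I have in mind is roughly the following (a minimal sketch; the texture name and position are just illustrative):

```python
from direct.showbase.ShowBase import ShowBase
from panda3d.core import CardMaker, TransparencyAttrib

base = ShowBase()

# A simple textured quad placed at the location of the light-source
cm = CardMaker("glow")
cm.setFrame(-1, 1, -1, 1)

glow = base.render.attachNewNode(cm.generate())
glow.setTexture(base.loader.loadTexture("glow.png"))  # the painted "glow" texture
glow.setTransparency(TransparencyAttrib.MAlpha)
glow.setDepthWrite(False)  # the glow shouldn't occlude anything itself
glow.setBillboardPointEye()  # rotate the quad to always face the camera
glow.setPos(5, 10, 2)  # wherever the light-source sits
```

If a wall passes close behind (or through) that quad, the depth test clips it, producing the hard edge described above.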
Are there any ways within the bounds of out-of-the-box Panda functionality and the automatic shader-generator (i.e. without using custom shaders) to prevent that clipping?
I could simply set a huge depth-offset on the quad depicting the “glow”–but I fear that this could result in the light being left unclipped by geometry that should occlude it.
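(Something like the following, presumably–reusing the `glow` quad from the sketch above:)

```python
# Bias the quad towards the camera in the depth buffer, so that the
# nearby wall no longer wins the depth test against it.
glow.setDepthOffset(4)  # how large a bias is needed would take experimentation
```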
The problem could be easily solved if the entire quad could be given a single depth, being the depth of the quad’s origin-point–but I’m not aware of a way to do that without custom shaders.
I suppose that I could use ray-casts to every light source, and use the results of those casts to show or hide each individual glow–but that seems clunky.
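(If I were to go that route, I imagine it would look something like this–a rough sketch, assuming that the walls are collidable and that the glow quads themselves are masked out of the traversal:)

```python
from panda3d.core import (CollisionTraverser, CollisionHandlerQueue,
                          CollisionNode, CollisionSegment)

traverser = CollisionTraverser()
queue = CollisionHandlerQueue()

# A segment from the camera to the light; if it hits anything along the
# way, the light-source is occluded and its glow should be hidden.
segment = CollisionSegment(0, 0, 0, 0, 0, 1)
seg_node = CollisionNode("glow-visibility")
seg_node.addSolid(segment)
seg_np = base.render.attachNewNode(seg_node)
traverser.addCollider(seg_np, queue)

def update_glow_visibility(task):
    segment.setPointA(base.camera.getPos(base.render))
    segment.setPointB(glow.getPos(base.render))
    traverser.traverse(base.render)
    if queue.getNumEntries() > 0:
        glow.hide()
    else:
        glow.show()
    return task.cont

base.taskMgr.add(update_glow_visibility, "update-glow-visibility")
```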
The depth offset trick came to mind first. You could also try implementing the effect using multiple staggered planes at varying depths, each rendered more faintly; this means more planes clipping the wall, but each of them contributes less, possibly resulting in a smoother transition.
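Something roughly like this, say (an untested sketch, reusing the `glow` quad from your snippet above):

```python
# Several fainter copies of the glow at increasing depth offsets: each
# copy is clipped by the wall at a slightly different point, breaking
# the single hard edge up into a series of much weaker ones.
num_layers = 4
glow.hide()  # the layered copies replace the single quad
for i in range(num_layers):
    layer = glow.copyTo(base.render)
    layer.show()
    layer.setDepthOffset(i * 2)
    layer.setAlphaScale(1.0 / num_layers)
```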
The alternative is to implement the glow entirely via post-processing filters. This is possible by rendering only the light source (without a quad) as a sphere (or any desired shape), with an all-white glow map assigned, and configuring the post-processing bloom filter to trigger only on the alpha channel of the framebuffer (which is where the shader generator can be configured to write the glow map).
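In rough outline, using the standard CommonFilters bloom (the model and texture names here are placeholders, and the parameter values are guesses):

```python
from direct.filter.CommonFilters import CommonFilters
from panda3d.core import TextureStage

base.render.setShaderAuto()  # the shader generator writes glow maps into alpha

# The light-source itself: any shape, with an all-white glow map applied
light = base.loader.loadModel("sphere")  # placeholder model
light.reparentTo(base.render)
ts = TextureStage("glow")
ts.setMode(TextureStage.MGlow)
light.setTexture(ts, base.loader.loadTexture("white.png"))  # placeholder texture

# blend=(0, 0, 0, 1) makes the bloom trigger on the alpha channel alone
filters = CommonFilters(base.win, base.cam)
filters.setBloom(blend=(0, 0, 0, 1), mintrigger=0.5, intensity=3.0, size="large")
```

Since the bloom is computed from the rendered framebuffer, it spreads around only whatever part of the light-source is actually visible, so there is no hard clip edge.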
I’m not sure what you mean by giving the entire quad a single depth. Is this not already the case, due to it being billboarded?
An alternative to all this is to implement soft particles:
It is tempting. I suppose that I could experiment with it, and see just how large an offset would be called for…
Hmm… That’s an interesting idea. It does mean more overdraw, but perhaps not too much. It seems like something to think about, at least.
Interesting. If I’m imagining this correctly, I don’t think that it’ll have quite the effect that I’m looking for–my “glows” have a painted texture, rather than being smooth gradients–but it seems like a neat idea.
Ah, you’re quite right. I think that I had that more or less the wrong way around: what I’m imagining calls not for giving the whole quad a single depth, but rather for having the entire quad be clipped based on a single depth-test, done at the centre of the quad.
That way, if the centre is visible, the whole quad is visible, while if the centre is hidden, the whole quad is hidden.
(Another likely-infeasible-out-of-the-box idea might be to determine the depth of the billboard as if it weren’t billboarded–as if it were parallel to the surface behind–and then (somehow) rotate it to face the camera while still using the unrotated depth information…)
That’s a really neat result!
However, this is all for a side-project, and I really want to stick to out-of-the-box shaders in it.
Thank you for all of the suggestions and feedback!
I’m afraid this kind of thing is impossible because GPUs are highly parallel things: many fragments are being processed at the same time, and being written to the output buffer at the same time. This works because the pipeline for a particular fragment only needs to deal with its own little part of the framebuffer. If one fragment started depending on the depth value somewhere else in the framebuffer, then this would introduce a dependency, breaking this parallelisation. So, graphics APIs don’t allow this.
It’s still possible to do this sort of thing if you were to use an early-Z rendering technique: first render the depth buffer, copy it to a texture, and then render the color buffer, which can now have random access to the depth buffer. Or, you render everything in the first pass, and all your glow in a second pass. But that would require custom shaders, and isn’t actually simpler than implementing soft particles.
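For what it’s worth, the depth pre-pass half of that is straightforward to set up (a rough sketch; the custom shader that would actually sample the depth texture is omitted, since that’s precisely the part that goes beyond the out-of-the-box tools):

```python
from panda3d.core import Texture, GraphicsOutput

# Render the scene's depth into an offscreen buffer before the main pass
depth_tex = Texture()
depth_buffer = base.win.makeTextureBuffer("depth-pass", 1024, 1024)
depth_buffer.addRenderTexture(depth_tex, GraphicsOutput.RTMBindOrCopy,
                              GraphicsOutput.RTPDepth)
depth_buffer.setSort(-100)  # draw this buffer before the main window

# A second camera for the depth pass; makeCamera parents it under the
# main camera rig, so it follows the main view
depth_cam = base.makeCamera(depth_buffer)

# A glow shader in the main pass could then sample depth_tex to fade or
# discard fragments near the wall, which is the custom-shader part that
# soft particles also require.
```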