CommonFilters - some new filters, and the future

Attached is a version with the new filters included. Introducing (in alphabetical order) CartoonInkThick, CartoonInkThin, Desaturation, LensDistortion and LensFlare.

Here are some test screenshots of radioactive cartoon animals (bad framerate because the GPU is only an Intel Ironlake Mobile):



The two new cartoon ink filters are intended to replace the old one in new scripts; the old one is still provided for backward compatibility. The new filters are based on two different algorithms, and produce different-looking results, so both are included. (The screenshots here were made using CartoonInkThin.)

It turns out that the last posted version of the framework was sufficient for all of the new filters. Of the filters so far, the most demanding regarding framework features was VolumetricLighting with its “source” parameter.

So, only the final cleanup remains before the overhaul is done, unless any new points are brought up in review.

The current status is that I’ve updated some of the docstrings in the framework, but not yet all of them. Also, I haven’t yet decided on the final names for setup() (both variants), make() and synthesize().

I think I will make the final VolumetricLighting into a CompoundFilter to make it easier to use; what is currently called VolumetricLighting will become VolumetricLightingCompositor or some such.

EDIT: fixed a small input handling bug in the code. New version attached.
CommonFilters190_withnewfilters.zip (215 KB)

Here’s a screenshot using CartoonInkThick, for comparison:


And here’s CartoonInkThin again, but now with cubic ink strength profile:


EDIT: Here are the strength profile curves:


Here the x range 0…1 represents those score values that pass voting. The value 0 is votingThreshold, and 1 means that all pixels in the stencil agree.
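For illustration, the profile mapping could be sketched like this (a minimal CPU sketch; the function and parameter names are illustrative, not the filter’s actual API):

```python
# Hypothetical sketch of mapping a voting score to ink strength.
# Scores below voting_threshold get no ink; the passing range is
# normalized to x in 0..1 and then shaped by the chosen profile.
def ink_strength(score, voting_threshold=0.5, profile="linear"):
    x = (score - voting_threshold) / (1.0 - voting_threshold)
    x = max(0.0, min(1.0, x))  # clamp to the plotted 0..1 range
    if profile == "cubic":
        return x * x * x  # cubic profile: softer onset near the threshold
    return x  # linear profile
```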

FXAA antialiasing filter added. New version attached.

Besides smoothing jaggy edges in geometry (and doing it fast, with only 6 additional texture lookups per pixel), FXAA is actually pretty good for cartoon models - it smooths the light/dark boundaries and material boundaries inside the screen real estate occupied by the character. Here’s a screenshot with Panda’s default (thresholding, non-antialiased) light ramping + FXAA:


The earlier screenshots used my custom light/dark transition smoothing that works in lighting space; FXAA, in contrast, is a screen-space spatial filter.
CommonFilters190_withantialias.zip (218 KB)

Now with authentic lo-fi home movie look. FilmNoise implemented, Vignetting split off from LensDistortion into its own filter:


Or to combine a high-end cartoon look with the lo-fi home movie, add AmbientOcclusion to the mix (this is the point where my Ironlake Mobile really starts to beg for mercy - but it runs!):


Note that this is still using cartoon shading (look at the sharp but antialiased light/dark boundaries); the only ‘realistic’ shading components are AmbientOcclusion and VolumetricLighting.

The RNG currently used in FilmNoise sucks (see https://stackoverflow.com/questions/4200224/random-noise-functions-for-glsl); a better one (with a Panda-compatible license) is available from https://github.com/ashima/webgl-noise, but that will have to wait until the move to GLSL.

FilmNoise required a new framework feature: defining additional Cg functions for use in the compositing shader. Currently, to use this feature, return a longer tuple from your synthesize() implementation; FilterStage dumps any extra return values into the shader source code as-is. Name mangling (to uniqify the helper function names) must be done by each filter. A working example is available in the FilmNoise filter.
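As a rough illustration of the tuple convention, something along these lines (the class and the helper body are hypothetical; only the “extra tuple elements become shader source” behavior is from the framework):

```python
# Illustrative sketch of a filter that returns an extra Cg helper
# function from synthesize(). FilterStage dumps the extra tuple
# elements into the shader source as-is.
class FilmNoiseLike:
    def synthesize(self, stage_name):
        # Name mangling is the filter's responsibility: embed the
        # stage name so helper names stay unique across instances.
        helper_name = "noiseRand_%s" % stage_name
        helper_src = (
            "float %s(float2 uv)\n"
            "{\n"
            "  return frac(sin(dot(uv, float2(12.9898, 78.233))) * 43758.5453);\n"
            "}\n"
        ) % helper_name
        code = "pixcolor.rgb += noiseStrength * (%s(l_texcoord) - 0.5);" % helper_name
        return (code, helper_src)
```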

I have finally figured out what to do with the function names: setup(), make() and synthesize() behave the way events do, so I think they need to become onAttachPipeline()/onAttachStage(), onCompileInternalStages() and onSynthesizeCompositor(), respectively. Similarly, setdown() needs to become onDetachPipeline()/onDetachStage().

As for the interface methods, attach() will become attachPipeline()/attachStage(), detach() will be split into detachPipeline()/detachStage(), and synthesizeFragmentShader() will be renamed synthesizeCompositor().

Maybe similar renaming needs to be applied to the pair resetConfiguration()/reset(); these should become reset()/onReset(), respectively. And update() should be onUpdate() (in Filter, where it is called like an event handler).

The rest of the method names are probably good enough as they are.

Another version coming in the next few days.
CommonFilters190_withlofi.zip (225 KB)

…for large values of “few”. New version attached.

Method renaming complete. For those playing at home (list format: old = new, like an assignment):

  • attach() = attachPipeline(), attachStage()
  • detach() = detachPipeline(), detachStage()
  • setup() = onAttachPipeline(), onAttachStage()
  • setdown() = onDetachPipeline(), onDetachStage()
  • make() = onCompileInternalStages()
  • synthesize() = onSynthesizeCompositor()
  • synthesizeFragmentShader() = synthesizeCompositor()
  • reset() = onReset()
  • resetConfiguration() = reset()
  • update() (in Filter only) = onUpdate()

New features / improvements

  • Cutout: new filter providing configurable cutouts for effects such as black bars, a sniper scope, or seeing through walls around the character in classic Fallout style (the last one of course requires two scene graphs). Rectangle and ellipse shapes, smoothable boundary, inversion option (mask away inside or outside of shape), RGB / A / RGBA blending modes, and configurable mask color.
  • Scanlines: added traveling bright/dark band artifact simulation. Useful also on its own with zero scanlining strength.
  • BlurSharpen: differently sized blurs are now available, using a similar downscaling strategy as in Bloom. Additionally, new filter-blursmall kernel added for extra small blur (17 pixels using 5 texture lookups).
  • BlurSharpen: performance optimization: filter-blurx and filter-blury now need only 4 texture lookups each, while retaining the original kernel size of 7 pixels. The optimization requires the input and output textures to have the same dimensions; to provide this, the internal stage setup logic in BlurSharpen.py has been simplified to work the same way as in Bloom.
  • VolumetricLighting re-implemented using CompoundFilter, containing instances of Bloom and VolumetricLightingCompositor (which was previously called VolumetricLighting).
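The “7-pixel kernel in 4 lookups” figure comes from the standard linear-sampling trick: with bilinear filtering enabled, a single lookup placed between two texels returns a weighted average of both, so adjacent taps can be merged in pairs. A sketch of the offset/weight computation (not the shipped shader code; the kernel here is an illustrative binomial):

```python
# Merge adjacent kernel taps in pairs, left to right. Each pair
# becomes one bilinear lookup at the weight-averaged position
# between the two texels; a leftover tap stays a plain lookup.
def merge_taps(positions, weights):
    offsets, merged = [], []
    i = 0
    while i < len(weights):
        if i + 1 < len(weights):
            w = weights[i] + weights[i + 1]
            offsets.append((positions[i] * weights[i]
                            + positions[i + 1] * weights[i + 1]) / w)
            merged.append(w)
            i += 2
        else:
            offsets.append(float(positions[i]))
            merged.append(weights[i])
            i += 1
    return offsets, merged
```

Since the merged offsets are fractional in *source* texel units, the trick only works when the shader knows the exact texel size it is sampling, which is why the optimization requires the input and output textures to have the same dimensions.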

Bugfixes (for bugs already present in 1.8.1)

  • Bloom: large bloom was, at the same settings, 4 times as bright as other sizes, since filter-down4 was missing a division. This fix makes filter-down4 a generally useful downscaler for other filters, too.
  • BlurSharpen: y size of the blur was twice the x size due to incorrect use of “div” parameter when calling FilterManager. The comments in BlurSharpen.py have been updated to reflect the correct usage, and to document more clearly what is being done. Now the blur has the same radius in both the x and y directions.
  • SSAO: same blur size bug in the blur phase as in BlurSharpen.
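To make the Bloom fix concrete, a sketch of what filter-down4 must do (illustrative, not the actual shader code): it takes four samples per output pixel, and without the division the result is brighter by the tap count - exactly the 4x brightness the large bloom exhibited.

```python
# Sketch of the filter-down4 fix: average the four samples instead
# of just summing them; the missing division made large bloom
# 4 times as bright as the other sizes.
def down4(samples):
    assert len(samples) == 4
    return sum(samples) / 4.0  # the previously missing division
```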

Small technical change: texture “bloom3” is now “bloomOutput”. Because of the “texture borrowing” feature, it is recommended to give reusable internal textures meaningful names, since the names are exported to the FilterStage level.

Sensible naming makes the names stable in the sense that if the number of internal stages in some filter changes later, any filters borrowing some particular texture from it will still get that same texture (i.e. they won’t get something entirely different or even a runtime error).

The postprocessing filter system is pretty much complete now, except for the upcoming integration of ninth’s SSLR filter.
CommonFilters190_method_name_cleanup.zip (390 KB)

Thank you so much for your work! I’ll take a look at it later (in the process of moving right now). I plan on making a beta release of 1.9 somewhere in February, so I’ll be integrating these things somewhere around that time.

Ok!

Concerning the cartoon shader improvements, beside this new postprocessing framework (that also happens to contain the new inkers) there are also the light ramping changes that allow smoothing of light/dark boundaries in lighting space.

If possible, I’d like to get this feature too into 1.9. Should I prepare a patch containing only the light ramping stuff? It’s basically some relatively simple changes to panda/src/pgraph/lightRampAttrib.* and panda/src/pgraphnodes/shaderGenerator.cxx.

Now that FXAA is available, the smoothed light ramping is no longer absolutely critical to have, but the different approach produces a different visual style, so I think it would be a nice option.

(I should probably make a graphical comparison table of the different cartoon shading feature combinations for the manual.)

You can always send a patch, but I can give no promises about what I can and cannot get into 1.9 at this point. I can’t afford delaying it, and there’s significant risk associated with adding in new features closely before a new release.

Fair enough. I know :slight_smile:

I’ll prepare the patch - even if it doesn’t make it into 1.9, if you think it’s ok to include in some future release, I’m not that particular about the timing.

Patch prepared, tested, and posted at https://bugs.launchpad.net/panda3d/+bug/1221546.

This patch adds two new cartoon shading features to light ramping:

  • Quantization for the specular component.
  • Smoothing (by linear interpolation) of light/dark boundaries in lighting space.

The patch also resolves https://bugs.launchpad.net/panda3d/+bug/497297, where the possibility to quantize the specular component was requested.

It may also interact with the solution of https://bugs.launchpad.net/panda3d/+bug/1219422 (“Shader generator errors out with some material property combinations”); my originally proposed solution and the solution currently in git (in https://github.com/panda3d/panda3d/blob/master/panda/src/pgraphnodes/shaderGenerator.cxx) are different enough that some care may be required to avoid a regression.

Here’s a new version, now with the local reflections (SSLR) filter included.

The original filter is by ninth (see the Screen Space Local Reflections v2 thread); the onepass blur is adapted from wezu’s suggestion in the same thread, and the twopass blur is new.

The version integrated here is based on ssr_base.sha in SSLR_v2.zip posted by ninth in the linked thread. At least on my new Radeon R9 290 it produces better results than ssr_zfar.sha. There is some banding (visible by switching the blurring off), which I think could be due to a depth buffer precision issue.

The screenshots were rendered with the following filters enabled: FXAA, CartoonInkThin (it’s a cartoon teapot), SSLR, SSAO, Bloom, Desaturation (with hue bandpass), Vignetting, FilmNoise, and Cutout (smoothed, partly translucent black bars). The test scene is by ninth; only the direction and intensity of the directional light has been changed.




EDIT: added missing setLocalReflection()/delLocalReflection() to CommonFilters.py (old API).
CommonFilters190_with_SSLR.zip (418 KB)

…and since the text in the window states “lens distortion”, here’s one more.


Ergh. I noticed another mistake.

In useGlowMap mode (for using the glow map as a reflectivity map), I was using the glow map value from the reflected fragment, not from the fragment that originally bounced the ray.

Attached is a version with the bug fixed.
CommonFilters190_with_SSLR_fixed.zip (418 KB)

If you make more fixes, could you give them as patches on top of the last zip? I’m paying by the megabyte at the moment.

Sure.

I’m planning to test one more idea for obtaining good-quality cartoon inking more cheaply (first pass of CartoonInkThick + FXAA), but besides that, and barring the discovery of any more bugs, I think the current version of the framework has everything that I want to include in it for 1.9. At least unless ninth has something cool up his sleeve that I’ve missed when searching the forum.

Ah, except for one technical detail - it occurs to me that during development, I’ve used FilterManager.py from 1.8.1. I haven’t modified it, so this one file in the zip should be replaced with the latest version from git. Judging by the git diff (from https://github.com/panda3d/panda3d/commits/master/direct/src/filter/FilterManager.py ), this shouldn’t require any changes to the other modules.

I have modified the various shader (.sha) files slightly, so the zip contains the correct versions.

I think you missed heat haze/stealth field from ninth:
www.panda3d.org/forums/viewtopic.php?f=8&t=15285

I have a simple ‘posterize-pixelize’ shader that I used for pyweek if you want another effect:
pyweek.org/e/wez/

There were also some interesting effects in Demomaster (like turning the scene into ascii-art).

And it may be beyond the scope of common filters, but maybe deferred lighting could fit somewhere in there?

True. Thank you! *adds to TODO list*

(Actually, I was kind of wondering how to do that. I guess I’ll read ninth’s implementation. :slight_smile: )

Oo, nice, simple, and produces a distinct potentially useful visual style. This could also be a good addition.

While I’m aware of Demomaster, at a quick glance it seemed quite monolithic, making it difficult to quickly extract different effects without first studying the framework in detail.

The standalone filters based on FilterManager were nice in that it was possible to quickly study everything relevant and understand how the implementation works.

Yeah, it’s probably beyond the scope, at least for now. I’ve read on the forums that TobiasSpringer has been working on a deferred rendering pipeline, but I’m not currently familiar with it.

The default pipeline already does everything I need at the moment, with the exception of volumetric rendering (smoke / clouds based on actual 3D voxel textures - more than as a visual effect, I’ll eventually need this for a scientific visualization). I’ve been considering making some experiments in this direction, but after 1.9. Also depth of field (based on the diffusion equation; will require some changes to FilterManager and potentially a lot of head-scratching) is probably more interesting to attempt first.

The retro-look filters have been added.

I split this into three filters:

  • Scanlines has a new option enableTint, which enables CRT pixel matrix simulation. It uses a column-aligned stripe pattern, generated in the shader based on pixel coordinates, so it is always pixel-perfect regardless of render target dimensions. The pixel matrix is independent of the darkening of alternating lines; try for example strength=0.2 and enableTint=True to obtain a 3x2 pattern similar to that of crt.png.
  • New filter: Pixelization (in MiscFilters.py). This filter pixelates (subsamples) the image, producing a mosaic. This is placed in its own filter, because the pixelization requires a read into the color texture at a location other than the pixel being processed. (This implies that the filter must begin a new render pass, to ensure it has access to the latest pixel values. In the terminology of the framework, such a filter is “non-mergeable”.)
  • New filter: Posterization (in MiscFilters.py). Quantization and gamma settings available as runtime parameters, just like in the original.
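For the curious, the core of the two new effects can be sketched on the CPU like this (the real filters run as fragment shaders; the function and parameter names here are just for the sketch, not the filters’ actual API):

```python
# Pixelization: each block x block cell takes the value of its
# top-left pixel, producing the mosaic look. This is why the filter
# is non-mergeable: it reads the color texture at a location other
# than the pixel being processed.
def pixelize(image, block):
    h, w = len(image), len(image[0])
    return [[image[(y // block) * block][(x // block) * block]
             for x in range(w)] for y in range(h)]

# Posterization: apply a gamma curve, then quantize a [0,1] channel
# value to the given number of levels.
def posterize(value, levels, gamma=1.0):
    v = value ** gamma
    return min(int(v * levels), levels - 1) / (levels - 1)
```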

Also a small bugfix: the GammaAdjust filter was missing a default value for gamma in its initialization code.

Results shown here; code attached to separate post below.

The CRT pixel matrix simulation was surprisingly effective - it produces a screendoor-like effect, and characteristic color bleed in small details (see the thin parts of the wings for an example).




Latest version of code attached. This contains the retro-look filters, and the initialization bugfix to GammaAdjust.

Attached are both a full zipfile, and a patch against the contents of CommonFilters190_with_SSLR_fixed.zip.

As for the heat haze and stealth field filters, I will need to take a closer look. On my setup, the stealth field example seems to generate errors some of the time. I haven’t yet determined the cause.
retro.patch (13.5 KB)
CommonFilters190_with_retro.zip (423 KB)

I quickly looked at the source of both. The shaders themselves are very simple, but the effects require nontrivial setup outside the postprocessor.

I’m somewhat divided on whether this type of filter is suitable for inclusion in CommonFilters. For most CommonFilters, it is enough to simply switch them on and maybe tweak the parameters.

The problem is that the heat haze and stealth field effects require setting up two cameras in a very specific way. This is something that is difficult to automate in a natural way in the postprocessing framework, and is maybe also something that does not logically belong to the job description of a postprocessing pipeline.

VolumetricLighting (which has been in CommonFilters since 1.8.x) used to require an extra camera, too, but with the addition of the “source” parameter and the internal bloom preprocessor, this requirement was lifted. In the new version, the only setup needed is to specify a caster NodePath, and choose whether the bloom preprocessor triggers on rgb or on the glow map.

Of course, this solution amounts to a change in the algorithm used to produce the effect, which in the particular case of VolumetricLighting happened to allow elimination of the extra camera. (Technically, this is because the bloom effect can generate a “volumetric lighting glow map” similar to (and maybe even better than) what was originally extracted from the second render.) Thus this approach is not universally applicable.

Then there is the case of depth-of-field, which I’m planning to add at some point. It is also something that - in the case of a large aperture combined with thin objects in the near-field out-of-focus region (where the radius of the circle of confusion skyrockets; see section 28.3 in http://http.developer.nvidia.com/GPUGems3/gpugems3_ch28.html) - cannot be done as a pure postprocess on a single render. As is discussed in the paper describing the currently most promising real-time DoF algorithm ( http://graphics.pixar.com/library/DepthOfField/paper.pdf ), it requires two separate renders to be able to handle the translucency (due to rays going fully around the object) in the near field. Thus an extra camera will be unavoidable.

I will have to think about a general approach for this category of filters, since it seems several useful effects exist that require this. In the meantime, it could be interesting to re-implement ninth’s examples using the new framework - one of the important points about the framework being that it should be easy to define custom filters that plug in to the pipeline (which enables them to trivially work together with any filters defined in CommonFilters). Such cases could be useful as tutorial code examples for 1.9.