CommonFilters - some new filters, and the future

Thank you so much for your work! I’ll take a look at it later (in the process of moving right now). I plan on making a beta release of 1.9 somewhere in February, so I’ll be integrating these things somewhere around that time.

Ok!

Concerning the cartoon shader improvements, besides this new postprocessing framework (which also happens to contain the new inkers), there are also the light ramping changes that allow smoothing of light/dark boundaries in lighting space.

If possible, I’d like to get this feature too into 1.9. Should I prepare a patch containing only the light ramping stuff? It’s basically some relatively simple changes to panda/src/pgraph/lightRampAttrib.* and panda/src/pgraphnodes/shaderGenerator.cxx.

With FXAA available, smoothed light ramping is no longer absolutely critical to have, but as a different approach it produces a distinct visual style, so I think it would be a nice option.

(I should probably make a graphical comparison table of the different cartoon shading feature combinations for the manual.)

You can always send a patch, but I can't make any promises about what will and won't get into 1.9 at this point. I can't afford to delay it, and there's significant risk in adding new features shortly before a release.

Fair enough. I know :slight_smile:

I’ll prepare the patch - even if it doesn’t make it into 1.9, if you think it’s ok to include in some future release, I’m not that particular about the timing.

Patch prepared, tested, and posted at https://bugs.launchpad.net/panda3d/+bug/1221546.

This patch adds two new cartoon shading features to light ramping:

  • Quantization for the specular component.
  • Smoothing (by linear interpolation) of light/dark boundaries in lighting space.

The patch also resolves https://bugs.launchpad.net/panda3d/+bug/497297, which requested the ability to quantize the specular component.

It may also interact with the fix for https://bugs.launchpad.net/panda3d/+bug/1219422 (“Shader generator errors out with some material property combinations”); my originally proposed solution and the one currently in git (in https://github.com/panda3d/panda3d/blob/master/panda/src/pgraphnodes/shaderGenerator.cxx) differ enough that some care may be required to avoid a regression.

Here’s a new version, now with the local reflections (SSLR) filter included.

The original filter is by ninth (see the thread “Screen Space Local Reflections v2”); the one-pass blur is adapted from wezu’s suggestion in the same thread, and the two-pass blur is new.

The version integrated here is based on ssr_base.sha in SSLR_v2.zip posted by ninth in the linked thread. At least on my new Radeon R9 290 it produces better results than ssr_zfar.sha. There is some banding (visible by switching the blurring off), which I think could be due to a depth buffer precision issue.

The screenshots were rendered with the following filters enabled: FXAA, CartoonInkThin (it’s a cartoon teapot), SSLR, SSAO, Bloom, Desaturation (with hue bandpass), Vignetting, FilmNoise, and Cutout (smoothed, partly translucent black bars). The test scene is by ninth; only the direction and intensity of the directional light have been changed.




EDIT: added missing setLocalReflection()/delLocalReflection() to CommonFilters.py (old API).
CommonFilters190_with_SSLR.zip (418 KB)

…and since the text in the window states “lens distortion”, here’s one more.


Ergh. I noticed another mistake.

In useGlowMap mode (for using the glow map as a reflectivity map), I was using the glow map value from the reflected fragment, not from the fragment that originally bounced the ray.

Attached is a version with the bug fixed.
CommonFilters190_with_SSLR_fixed.zip (418 KB)

If you make more fixes, could you give them as patches on top of the last zip? I’m paying by the megabyte at the moment.

Sure.

I’m planning to test one more idea on obtaining good-quality cartoon inking more cheaply (first pass of CartoonInkThick + FXAA), but besides that, and barring the discovery of any more bugs, I think the current version of the framework has everything that I want to include in it for 1.9. At least, unless ninth has something cool up his sleeve that I missed while searching the forum.

Ah, except for one technical detail - it occurs to me that during development, I’ve used FilterManager.py from 1.8.1. I haven’t modified it, so this one file in the zip should be replaced with the latest version from git. Judging by the git diff (from https://github.com/panda3d/panda3d/commits/master/direct/src/filter/FilterManager.py ), this shouldn’t require any changes to the other modules.

I have modified the various shader (.sha) files slightly, so the zip contains the correct versions.

I think you missed the heat haze/stealth field from ninth:
www.panda3d.org/forums/viewtopic.php?f=8&t=15285

I have a simple ‘posterize-pixelize’ shader that I used for pyweek if you want another effect:
pyweek.org/e/wez/

There were also some interesting effects in Demomaster (like turning the scene into ASCII art).

And maybe it’s beyond the scope of CommonFilters, but could deferred lighting fit somewhere in there?

True. Thank you! *adds to TODO list*

(Actually, I was kind of wondering how to do that. I guess I’ll read ninth’s implementation. :slight_smile: )

Oo, nice and simple, and it produces a distinct, potentially useful visual style. This could also be a good addition.

While I’m aware of Demomaster, at a quick glance it seemed quite monolithic, making it difficult to quickly extract different effects without first studying the framework in detail.

The standalone filters based on FilterManager were nice in that it was possible to quickly study everything relevant and understand how the implementation works.

Yeah, it’s probably beyond the scope, at least for now. I’ve read on the forums that TobiasSpringer has been working on a deferred rendering pipeline, but I’m not currently familiar with it.

The default pipeline already does everything I need at the moment, with the exception of volumetric rendering (smoke / clouds based on actual 3D voxel textures - more than as a visual effect, I’ll eventually need this for a scientific visualization). I’ve been considering making some experiments in this direction, but after 1.9. Also depth of field (based on the diffusion equation; will require some changes to FilterManager and potentially a lot of head-scratching) is probably more interesting to attempt first.

The retro-look filters have been added.

I split this into three filters:

  • Scanlines has a new option enableTint, which enables CRT pixel matrix simulation. It uses a column-aligned stripe pattern, generated in the shader from pixel coordinates, so it is always pixel-perfect regardless of render target dimensions. The pixel matrix is independent of the darkening of alternating lines; try for example strength=0.2 and enableTint=True to obtain a 3x2 pattern similar to that of crt.png.
  • New filter: Pixelization (in MiscFilters.py). This filter pixelates (subsamples) the image, producing a mosaic. This is placed in its own filter, because the pixelization requires a read into the color texture at a location other than the pixel being processed. (This implies that the filter must begin a new render pass, to ensure it has access to the latest pixel values. In the terminology of the framework, such a filter is “non-mergeable”.)
  • New filter: Posterization (in MiscFilters.py). Quantization and gamma settings available as runtime parameters, just like in the original.
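
Here is a quick usage sketch via the old-style CommonFilters API; treat the setter and parameter names as illustrative, not a final API reference:

```python
from direct.showbase.ShowBase import ShowBase
from direct.filter.CommonFilters import CommonFilters

base = ShowBase()
filters = CommonFilters(base.win, base.cam)
# CRT look: darken alternating lines slightly and enable the pixel matrix tint.
filters.setScanlines(strength=0.2, enableTint=True)
# Mosaic: one output "pixel" per 4x4 block of screen pixels (name is a guess).
filters.setPixelization(blockSize=4)
# Posterization with quantization and gamma as runtime parameters.
filters.setPosterization(levels=4, gamma=2.2)
```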

Also a small bugfix: the GammaAdjust filter was missing a default value for gamma in its initialization code.

Results shown here; code attached to separate post below.

The CRT pixel matrix simulation was surprisingly effective - it produces a screendoor-like effect, and characteristic color bleed in small details (see the thin parts of the wings for an example).




Latest version of code attached. This contains the retro-look filters, and the initialization bugfix to GammaAdjust.

Attached are both a full zipfile and a patch against the contents of CommonFilters190_with_SSLR_fixed.zip.

As for the heat haze and stealth field filters, I will need to take a closer look. On my setup, the stealth field example seems to generate errors some of the time. I haven’t yet determined the cause.
retro.patch (13.5 KB)
CommonFilters190_with_retro.zip (423 KB)

I quickly looked at the source of both. The shaders themselves are very simple, but the effects require nontrivial setup outside the postprocessor.

I’m somewhat divided on whether filters of this type are suitable for inclusion in CommonFilters. For most CommonFilters, it is enough to simply switch them on and maybe tweak the parameters.

The problem is that the heat haze and stealth field effects require setting up two cameras in a very specific way. This is difficult to automate in a natural way in the postprocessing framework, and arguably does not belong in the job description of a postprocessing pipeline anyway.

VolumetricLighting (which has been in CommonFilters since 1.8.x) used to require an extra camera, too, but with the addition of the “source” parameter and the internal bloom preprocessor, this requirement was lifted. In the new version, the only setup needed is to specify a caster NodePath, and choose whether the bloom preprocessor triggers on rgb or on the glow map.
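
Enabling it in the new version looks roughly like this (a sketch; treat the exact parameter names and the "rgb"/"glow" values as illustrative):

```python
# No extra camera needed: just the caster NodePath and the trigger mode
# for the internal bloom preprocessor (rgb or the glow map).
filters.setVolumetricLighting(caster=sunNodePath, source="glow")
```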

Of course, this solution amounts to a change in the algorithm used to produce the effect, which in the particular case of VolumetricLighting happened to allow elimination of the extra camera. (Technically, this is because the bloom effect can generate a “volumetric lighting glow map” similar to (and maybe even better than) what was originally extracted from the second render.) Thus this approach is not universally applicable.

Then there is the case of depth-of-field, which I’m planning to add at some point. It is also something that - in the case of a large aperture combined with thin objects in the near-field out-of-focus region (where the radius of the circle of confusion skyrockets; see section 28.3 in http://http.developer.nvidia.com/GPUGems3/gpugems3_ch28.html) - cannot be done as a pure postprocess on a single render. As is discussed in the paper describing the currently most promising real-time DoF algorithm ( http://graphics.pixar.com/library/DepthOfField/paper.pdf ), it requires two separate renders to be able to handle the translucency (due to rays going fully around the object) in the near field. Thus an extra camera will be unavoidable.

I will have to think about a general approach for this category of filters, since it seems several useful effects exist that require this. In the meantime, it could be interesting to re-implement ninth’s examples using the new framework - one of the important points about the framework being that it should be easy to define custom filters that plug in to the pipeline (which enables them to trivially work together with any filters defined in CommonFilters). Such cases could be useful as tutorial code examples for 1.9.

I didn’t look at your code (yet), it’s been a while since I looked at ninth’s code, and I’m far from an expert at writing shaders, so this post may be nonsense.

Isn’t another camera (and/or scenegraph) needed for the cutouts that you wrote about earlier?

For the heat haze one could just ‘mark’ part of the screen (with stencil? mask? map? Z-fail?) as affected by the heat and distort that area with a noise normal map just like in many popular water shaders (or some clever sin/cos function), or am I missing some important step?
For the stealth field it should be more or less the same, but using the eye normal of the hidden model, not just noise.

I would expect a framework for making custom effects to be capable of effects like these. I don’t mean that this effect should be included and run out of the box with some makeMyGameLookAwesome(True) call, but it would be nice if the framework made it at least no harder than it is now. Setting up another camera and scene ain’t that hard, and it’s OK to do it manually if it’s needed for some custom effect.

Anyway, good job on the rest; your ‘old monitor’ shader looks much better than my hack with a texture pattern :wink:

The main issue with distortion effects is that the distortion texture is in both cases animated and changes from frame to frame. So you need to render the scene without the distorting objects to get the image behind them, and then render the needed objects separately with special textures or special shaders to get the distortion image, plus a depth texture for comparison.

B.t.w., how about a bokeh filter?

Mm, in the use case of making walls translucent around the character, yes.

I’d forgotten about this case. My main goal for Cutout was to make a filter that can be used to generate animated black bars. This is very useful e.g. for entering/exiting a pause screen: the bars can start at the top/bottom of the screen, growing and fading in (to partial translucency, covering some area at the top/bottom edges) when pause is activated, and the reverse (shrink and fade out) when it is deactivated. Aside from Cutout itself, the rest is a matter of writing a suitable LerpFunc or an update task that dynamically updates the boundingBox and strength parameters.
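
As a sketch of what I mean (the setCutout() call below stands in for however the parameters are actually updated):

```python
from direct.interval.LerpInterval import LerpFunc

def update_bars(t):
    # Grow the bottom bar to 10% of screen height while fading it in.
    filters.setCutout(boundingBox=(0.0, 1.0, 0.0, 0.1 * t), strength=0.7 * t)

# Entering pause: t goes 0 -> 1; play the reverse when leaving pause.
LerpFunc(update_bars, fromData=0.0, toData=1.0, duration=0.5).start()
```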

The “see through walls” feature was something someone once requested on the forum, so I made the cutout filter general enough to make also that possible. But after the filter was done, I promptly forgot about this use case :stuck_out_tongue:

The heat distribution in the example, if I understood correctly, is created by a particle system. This procedurally generates a dynamic distortion texture, by rendering the particle effect using the other camera. It also produces a depth texture for the heat particles, which is used for occlusion testing in the heat shader (the distortion becomes zero in fragments where the heat particles are behind objects in the main scene).
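
Roughly, the external setup amounts to something like this (a sketch using standard offscreen-buffer calls; heat_particles_root is assumed to be a scene graph containing only the heat particles):

```python
# Render the heat particles into an offscreen buffer. The resulting color
# texture serves as the dynamic distortion map; a depth texture can be
# attached similarly for the occlusion test in the heat shader.
buf = base.win.makeTextureBuffer("heat-distortion", 512, 512)
heat_cam = base.makeCamera(buf)
heat_cam.node().setScene(heat_particles_root)
distortion_tex = buf.getTexture()
```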

It is true that it is possible to include just the shader in CommonFilters, but without the logic to create the heat texture (and its corresponding depth texture), it is not very useful. At the very least, it requires comprehensive documentation on how to set up the needed additional logic - which I think is most natural to provide in the form of a tutorial example program, as in ninth’s original.

The aesthetic question is whether it is cleaner to have such a “partial” implementation in CommonFilters (providing the rest of the code in a tutorial), or to package all of the components into a tutorial (in which case the effect will be missing from CommonFilters, making it more likely that users incorrectly conclude that this effect is not possible).

Yes, that is correct.

The extra setup here is needed because the stealth effect needs to see another copy of the scene with the hidden object removed, so that it knows what goes behind it. This is pretty simple to set up externally, but it cannot be done in the filter.

(It seems ninth ninja’d me on this! Just when I was about to hit submit… :slight_smile: )

Hmm. Now that you mention it, isn’t that what many CommonFilters are? :stuck_out_tongue:

(Particularly SSAO, Bloom, and now SSLR and maybe the antialiaser.)

I agree, it shouldn’t be harder than it is now.

The code implementing a custom filter may look slightly different, but that’s because it needs to tell FilterPipeline (the new API for CommonFilters) how the filter interacts with the other filters, so that FilterPipeline can make them play nicely together.

Unfortunately this does make the filter implementation slightly longer, but it provides a new advantage: any custom filter can work together with any of the filters that already exist in CommonFilters - which was not possible before (short of hacking them into CommonFilters, making it collapse into an unmaintainable mess in the process).

One of the aims in providing lots of ready-made example filters with the new CommonFilters is to document the features of the new API (i.e. show practical examples of how to use it to make a filter that requires feature X).
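
Purely as a flavor of what I mean (every name below is a placeholder, not the actual FilterPipeline API):

```python
# A hypothetical custom filter plugging into the pipeline; class, attribute
# and method names are illustrative only.
class SepiaFilter(Filter):
    sort = 50         # preferred position relative to other filters
    mergeable = True  # reads only the current pixel, so it can share a pass

    def getFragmentCode(self):
        # Sepia tone: luminance, then a warm tint (Cg fragment snippet).
        return "o_color.rgb = dot(o_color.rgb, float3(0.3, 0.59, 0.11)) * float3(1.2, 1.0, 0.8);"

pipeline.addFilter(SepiaFilter())
```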

Thanks. The method is actually simple: use texpix (size of one pixel in texture coordinate units) to calculate the pixel position (x,y) of the current fragment, and then just index into an array of tint colors with x mod 3. The only complication is that the most reliable, but ancient arbfp1 profile does not support indexing into arrays with arbitrary ints, so I had to use a switch statement instead. The darkening of alternate lines works similarly (using y).
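
In plain Python, the per-fragment logic is essentially the following (the tint colors are placeholders; the real version is a Cg fragment shader with a switch statement instead of the list index):

```python
TINTS = [(1.0, 0.7, 0.7), (0.7, 1.0, 0.7), (0.7, 0.7, 1.0)]  # placeholder colors

def crt_pixel(color, x, y, strength):
    # Column-aligned stripe pattern: the tint depends only on x mod 3.
    tint = TINTS[x % 3]
    color = tuple(c * t for c, t in zip(color, tint))
    # Independently, darken every second line by the scanline strength.
    if y % 2 == 1:
        color = tuple(c * (1.0 - strength) for c in color)
    return color
```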

And thanks for the idea and the colors - extracted from crt.png in your pyweek entry :slight_smile:

Ninja’d!

Yes, exactly :slight_smile:

Do you mean depth-of-field (DoF) or something else?

I think extreme bokeh “light spots” (where one spot can have a diameter of over 100 pixels) are better represented by additively blended sprites (which have the extra advantage of easily specifying the iris shape), but for small or moderate amounts, I’d go for DoF (assuming a circular iris for simplicity).

I’m planning to implement DoF at some point, but probably after 1.9, and I can’t make any promises at the moment. The approach described in the paper I linked above (by Kass et al. from Pixar and UC Davis, maybe from 2006-2007) currently seems the most promising real-time DoF technique. It requires a tridiagonal linear equation system solver (an algorithm for this is also explained in the paper) - overall it’s somewhat complicated, but should be doable.
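
For reference, the serial form of such a solver (the Thomas algorithm) is short; the paper’s GPU formulation is different, but this is the underlying idea:

```python
def solve_tridiagonal(a, b, c, d):
    """Solve a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i]; a[0], c[-1] unused."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```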

Yes, I mean DoF with Bokeh.
Just as info: I’ve seen several implementations of this effect in the net. For example, GLSL on Blender Game Engine:
blenderartists.org/forum/showthr … 8update%29