CommonFilters - some new filters, and the future

These are some good ideas. I have some comments.

This makes sense.

I don’t see why. That seems unnecessarily restrictive. People should be able to create their own instances of the individual Filter classes, inherit from them, etc. Part of the flexibility that this overhaul would offer is that it would allow people to customise CommonFilters with their own filters by creating their own Filter class and adding it.

I imagine that CommonFilters can store a list of filters with a sort value each that would determine in which order they get applied in the final compositing stage…

I imagine methods like setVolumetricLighting() to become simple stubs that call something like addFilter(VolumetricLightingFilter(*args, **kwargs)).
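
For example, roughly like this (just a sketch to illustrate the idea; the class and method names are the ones used in this discussion and are not final):

class VolumetricLightingFilter(object):      # stand-in for the real filter class
    sort = 20                                # example default sort value

    def __init__(self, *args, **kwargs):
        pass

class CommonFilters(object):
    def __init__(self):
        self.filters = []

    def addFilter(self, filt):
        self.filters.append(filt)
        self.filters.sort(key=lambda f: f.sort)   # apply in ascending sort order

    def setVolumetricLighting(self, *args, **kwargs):
        # The legacy setter becomes a thin stub over the new Filter object.
        self.addFilter(VolumetricLightingFilter(*args, **kwargs))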

Makes sense. By a “single stage”, do you mean a part of the final compositing filter, or an individual render pass (like the blur-x and blur-y passes of the blur filter)? These are two separate concepts, though I suppose they don’t strictly need to be.

Ah, so CommonFiltersCore represents a single filter pass? (Can we call it something like FilterPass or FilterStage then?) Can we let the Filter class set up and manage its stages rather than CommonFilters? It does sound like something that would be managed by each filter, although I suppose some filters might share a filter pass (e.g. blur and bloom may share an initial blur pass). Hmm.

I don’t understand what you mean by “capturing a buffer”, could you please explain that? You can already use FilterManager with a buffer, if that’s what you meant, but I don’t quite understand the necessity of that.

Could the user achieve the same thing by subclassing Filter and adding this Filter to the same CommonFilters object?

Then I think that FilterStage would be a far more representative term, don’t you think? :wink:

One thing I don’t quite understand - is a stage a render pass by itself, or a stage in the final compositing shader?

Not all stages need an input color texture. SSAO, for instance, does not.

I think FilterConfig is obsoleted by the new Filter design, since each Filter can just take all of its properties in the constructor via keyword arguments, and have properties or setters that invalidate the shader when they are modified. Depending on the property, each setter of a particular Filter could either update a shader input or mark the shader as needing to be regenerated.
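
As a quick sketch of what I mean (all names here are purely illustrative, e.g. there is no kernelSize parameter today):

class BlurSharpenFilter(object):     # illustrative only
    def __init__(self):
        self._amount = 0.0
        self._kernel_size = 5
        self.needs_compile = False

    @property
    def amount(self):
        """Blend factor between the blurred and the original image."""
        return self._amount

    @amount.setter
    def amount(self, value):
        # A plain numeric parameter: only a shader input needs updating
        # (the actual call that updates the input is omitted here).
        self._amount = value

    @property
    def kernelSize(self):
        """Blur kernel size in pixels; baked into the generated shader code."""
        return self._kernel_size

    @kernelSize.setter
    def kernelSize(self, value):
        # A code generation parameter: mark the shader for regeneration.
        self._kernel_size = value
        self.needs_compile = True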

I think that each filter could possibly be a Cg function with the arguments it needs passed to it for better organisation.

You could have a filter stage that’s added by default with a negative sort value with its only purpose being to set o_color, which is always applied first.

I agree that this probably belongs in the individual Filter classes.

I think HalfPixelShift should be a global setting in CommonFilters and not a filter at all.

I think at this point it would help to hack up some pseudo-code that shows how the systems work together, perhaps with an example filter, while skipping over the details. It would help to get a good overview and help me to understand your design better.

Thanks for the comments! Some responses below.

Maybe I should explain what I was trying to achieve. :slight_smile:

The idea was that it should be easy to learn to use the CommonFilters system by reading the API documentation. At least I have learned a lot about Panda by searching the API docs.

If all the modules are placed in the same directory as CommonFilters itself, there will be lots of modules in the same place, and finding the interesting one becomes difficult.

I agree that flexibility is desirable.

Adding it where? In their local copy of the Panda source tree?

Hmm, this would make it easier to contribute new filters to Panda, which is nice.

Yes, that is part of the solution. But there are two separate issues here:

First is where to store the sort values. If I understood correctly, we seem to agree that this information belongs in the Filter subclasses.

Secondly, there are some filter combinations that cannot be applied in a single pass. BlurSharpen and anything else is one such combination - the blur will not see the processing from the other filters applied during the same pass.

Yes, something like that.

Thanks for asking (I’m sometimes very informal about terminology). By “stage of pipeline”, I meant a render pass.

But that doesn’t capture the idea strictly, either. From the viewpoint of the pipeline, the important things to look at are the input textures needed by each filter.

Filters that share the same input textures (down to what should be in the pixels), and respect previous modifications to o_color in their fshader code, can work in the same pass. I think it’s a potentially important performance optimization to let them do so, so that enabling lots of filters does not necessarily imply lots of render passes.

Some filters may have internal render passes (such as blur), but to the pipeline this is irrelevant. Blur works, in a sense, as a single unit that takes in a colour texture, and outputs a blurred version. The input colour texture is the input to that pass in the pipeline where the blur filter has been set.

If the aim is to blur everything that is on the screen, the blur filter must come at a later render pass in the pipeline, so that it can use the postprocessed image as its input.

My proposal was that the core synthesizes code for a single “pipeline render pass”, so that the pipeline setup can occur in a higher layer (creating several, differently configured instances of the core).

Yes, we can change the name to something sensible :slight_smile:

Any internal stages (passes) (e.g. blur-x and blur-y) are indeed meant to be handled by each subclass of Filter.

About sharing passes in general, I agree. That is the reason to have a code generator that combines applicable filters into a single pass in the pipeline.

About blur and bloom specifically, I think they belong to different passes, because the effects they reproduce happen at different stages in the image-forming process.

I would like to set up the ordering of the filters as follows:

  • full-scene antialiasing (if added later)
  • CartoonInk, to simulate a completely drawn cel
  • optical effects in the scene itself (local reflection (if added later), ambient occlusion, volumetric lighting in that order)
  • optical effects in the lens system (bloom, lens flare)
  • film or detector effects (tinting, desaturation, colour inversion)
  • computer-based postprocessing (blur)
  • display device (scanlines)
  • debug helpers (ViewGlow)
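
As a sketch, these categories could map to default sort values roughly as follows (the names and numbers below are only illustrative):

# Illustrative only - possible default sort values for the categories above.
FILTER_ORDER = [
    ("FullSceneAntialiasing",  0),
    ("CartoonInk",            10),
    ("SceneOptics",           20),   # SSLR, SSAO, VolumetricLighting
    ("LensOptics",            30),   # bloom, lens flare
    ("Detector",              40),   # tinting, desaturation, colour inversion
    ("Postprocess",           50),   # blur
    ("Display",               60),   # scanlines
    ("Debug",                 70),   # ViewGlow
]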

Keep in mind that e.g. chromatic aberration in the lens should occur regardless of whether the result is recorded on colour or monochrome film.

Also note that these categories might not be exhaustive, might not correspond directly to render passes, and in some cases it can be unclear which category a given filter belongs to. For example, I tend to think of blur as a computer-generated postprocessing effect (requiring a complete “photograph” as input), but it could also represent the camera being out of focus, in which case it would come earlier in the pipeline (but definitely after CartoonInk and scene optical effects). I’m not sure what to do about such cases.

(Bloom, likewise, may be considered as a lens effect (the isotropic component of glare), or as a detector effect (CCD saturation). Maybe it is more appropriate to think of it as a lens effect.)

Finally, note that currently, only lens flare supports chromatic aberration. I think I’ll add full-screen chromatic aberration and vignetting to my to-do list, to approach a system that can simulate lens imperfections.

There are two use cases I’m thinking of.

First is daisy-chaining custom filters with CommonFilters. People sometimes use FilterManager to set up custom shaders, but the problem is that if you do that, it is not easy to apply CommonFilters on top of the result (or conversely, to apply your own shaders on top of what is produced by CommonFilters). When you apply either of these, you lose the camera, and can no longer easily set up the other one to continue where the first left off.

For a thought experiment, consider the original lens flare code by ninth (attached in Lens flare postprocess filter), and how you would go about applying CommonFilters to the same scene either before or after the lens flare. If I haven’t missed anything, currently it is not trivial to do this.

The second case is a scene with two render buffers doing different things, which are both postprocessed using CommonFilters, then rendered onto a quad (using a custom shader to combine them), and then the final quad is postprocessed using CommonFilters. There is a code example in my experiment on VolumetricLighting with cartoon-shaded objects: [Sample program] God rays with cartoon-shaded objects which probably explains better what I mean.

The thing is that at least in 1.8.1, setting up the combine step is overly complicated:

from panda3d.core import CardMaker, NodePath, Texture

# Manually create a 2D camera and a fullscreen quad to composite into.
quadscene = NodePath("filter-quad-scene")
quadcamera = base.makeCamera2d(base.win, sort=7)
quadcamera.reparentTo(quadscene)
cm = CardMaker("filter-quad-card")
cm.setFrameFullscreenQuad()
self.quadNodePath = NodePath(cm.generate())
finaltex = Texture()
self.quadNodePath.setTexture(finaltex)
self.quadNodePath.reparentTo(quadcamera)

…when compared to the case where the original scene render does not need any postprocessing:

from direct.filter import FilterManager
from panda3d.core import Texture

# FilterManager creates the fullscreen quad and the quad camera for us.
manager = FilterManager.FilterManager(base.win, base.cam)
scenetex = Texture()
self.quadNodePath = manager.renderSceneInto(colortex=scenetex)

If you have a camera, it is just one line to call FilterManager to set up the render-into-quad, but if you don’t (because CommonFilters took it), you need to do more API acrobatics to create one and set up the render-into-quad manually.

EDIT: Also, then FilterManager (or CommonFilters when it calls FilterManager internally) goes on to obsolete the manually created quad and camera, creating another quad and another camera. It would be nice to avoid the unnecessary duplication. I don’t know if it affects performance, but at least it would make for a cleaner design.

Then, in both cases, we set up the combining shader

self.quadNodePath.setShader(Shader.make(SHADER_ADDITIVE_BLEND))
self.quadNodePath.setShaderInput("txcolor", scenetex)
self.quadNodePath.setShaderInput("txvl", vltex)
self.quadNodePath.setShaderInput("strength", 1.0)

and finally postprocess

self.finalfilters = CommonFilters(base.win, quadcamera)
self.finalfilters.setBlurSharpen()  # or whatever

though here, now that I think of it, I’m not sure how to get the quad camera in the case where FilterManager internally creates it.

In summary, what I’m trying to say is that I think these kinds of use cases need to be more convenient to set up :slight_smile:

Maybe.

The difficulty in that approach is that the user needs to understand the internals of CommonFilters in order to set up the pipeline pass number and the sort-within-pass priority correctly, so that CommonFilters inserts the shader at the desired step in the process. In particular, the user must know which pipeline pass the shader can be inserted into (so that it won’t erase postprocessing by other filters; consider the blur case).

In addition, the user-defined shader must then respect the limitation that within the same pipeline pass, each fshader snippet must respect any previous changes to o_color. I think it is error-prone to require that of arbitrary user code, and in particular it makes it harder to just experiment with shaders copied from the internet.

Also, the user then needs to conform to the Filter API. If the user wants to contribute to CommonFilters, that is the way to go. But for quick experiments and custom in-house shaders, I think FilterManager and daisy-chaining would be much easier to use, as then any valid shader can be used and there are no special conventions or APIs to follow.

Maybe :wink:

As mentioned above, I was speaking of a render pass (but with the caveats mentioned).

For the code of the different filters in the compositing shader, I used the term “snippet”, as I didn’t have anything better in mind :slight_smile:

Good point.

That is another way to do it. May be cleaner.

Does this bring overhead? Or does the compiler inline them?

Also - while I’m not planning to go that route now - Cg is no longer being maintained, so is it ok to continue using it, or should we switch completely to GLSL at some point?

That’s one way of applying the default.

But how likely is the default to be wrong, i.e. do we need to take this case into account?

EDIT: Aaaa! Now I think I understand. If the default is wrong, then override this default filter stage somehow? E.g. sort=-1 means the output colour initialization stage, and if a stage with that sort value is provided by the user, that one is used, but if not, then the default one is used.

Ok.

Ok. I’ll put together an example.

Here’s a more concrete proposal. It’s about 90% Python, with 10% pseudocode in comments.

It’s in one file for now to ease browsing - I’ll split it into modules in the actual implementation. I zipped the .py because the forum does not allow posting .py files.

Currently this contains a Filter interface, a couple of simple example filters trying to cover as much of Filter API use cases as possible, and a work-in-progress FilterStage.

FilterPipeline and CommonFilters are currently covered just by a few lines of comments.

Comments welcome.
filterinterface_proposal.zip (8.56 KB)

Wow, that’s quite a bit more than some simple pseudo-code. :stuck_out_tongue: Thanks.
It looks great to me! A few minor comments.

Instead of getNeededTextures, I would suggest that there is instead a setup() method in which the Filter classes can call registerInputTexture() or something of the sort. The advantage of this is that we can later extend which things are stored about a texture input by adding keyword arguments to that method, without having to change the behaviour in all existing Filter implementations. It seems a bit cleaner as well. The same goes for getCustomParameters.
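
Something like this, for instance (just a sketch; the exact method names and keyword arguments are of course open, and SSAO is chosen only as an example):

class SsaoFilter(Filter):        # sketch only
    def setup(self):
        # Register what this filter needs; the pipeline stores the metadata.
        self.registerInputTexture("depth")
        self.registerInputTexture("aux")     # fragment normals
        # Later, keyword arguments can be added here without touching
        # existing Filter implementations, e.g. marking a texture as
        # internal-only or padded.
        self.registerCustomParameter("numsamples", default=16)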

getNeedGlow seems a bit specific. Can we instead store a bitmask of AuxBitplaneAttrib flags?

I’m not quite sure I understand this stage idea. Is the “stage” string one of a number of fixed built-in stages? Are the different stages hard-coded? Can you explain to me in simple terms what exact purpose the stage concept serves?

I’m not sure if all of those methods need getters - it seems that some of them can simply remain public members, like sort and needs_compile. I think sort can be a member with a filter-specific default value, but that can be changed by the user.

I think the strange inspection logic in setFilter has to go. We should keep it simple by either allowing someone to add a filter of a certain type more than once (even if that doesn’t make sense), or raising an error, or removing the old one entirely.

Just FYI, cmp= in sort() is deprecated and no longer supported in Python 3. Instead, you should do this:

self.filters.sort(key=lambda f: f.sort)

where Filter stores a self.sort value.

I think there is no reason to keep CommonFilters an old-style class. Perhaps CommonFilters should inherit from FilterPipeline?

I think more clearly when actually coding :stuck_out_tongue:

Thanks for the comments!

Ah, this indeed sounds more extensible. Let’s do that.

Yes, why not.

The other day, I was actually thinking that SSLR will need gloss map support from the main render stage, and this information needs to be somehow rendered from the material properties into a fullscreen texture… so, a general mechanism sounds good :slight_smile:

In this initial design, yes and yes, but the idea is that it is easy to add more (when coding new filters) if needed.

I’m not completely satisfied by this solution, but I haven’t yet figured out a better alternative which does not involve unnecessary bureaucracy at call time.

In short, the stage concept is a general solution to the problem of blur erasing the output of other postprocessing filters that are applied before it.

Observe that the simplest solution of applying blur first does not do what is desired, because then the scene itself will be blurred, but all postprocessing (e.g. cartoon ink) will remain sharp.

The expected result is that blur should apply to pretty much everything rendered before lens imperfections (or alternatively, to pretty much everything except scanlines, if blur is interpreted as a computer-based postprocess).

As for the why and how:

As you know, a fragment shader is basically an embarrassingly parallel computation kernel, i.e. it must run independently for each pixel (technically, fragment). All the threads get the same input texture, and they cannot communicate with each other while computing. The only way to pass information between pixels is to split the computation into several render passes, with each pass rendering the information to be communicated into an intermediate texture, which is then used as input in the next pass.

The problem is that with such a strictly local approach, some algorithms are inherently unable to play along with others - they absolutely require up-to-date information also from the neighbouring pixels.

Blur is a prime example of this. Blurring requires access to the colour of the neighbouring pixels as well as the pixel being processed, and this colour information must be fully up to date, to avoid erasing the output of other postprocessing algorithms that are being applied.

I’m not mathematically sure that blur is the only one that needs this, and also, several postprocessing algorithms (for example, the approximate depth-of-field postprocess described in http.developer.nvidia.com/GPUGem … _ch28.html) require blurring as a component anyway. Thus, a general solution seems appropriate.

The property that determines whether another stage is needed is the following: if a filter needs to access its input texture at locations other than the pixel being rendered, and it must preserve the output of previous postprocessing operations at those locations as well, then it needs a new stage. This sounds a lot like blur, but dealing with mathematics has taught me to remain cautious about making such statements :slight_smile:

(For example, it could be that some algorithm needs to read the colour texture at the neighbouring pixels just to make decisions, instead of blurring that colour information into the current pixel.)
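
In code, the idea is simply that such a filter declares it cannot be merged into a shared render pass - a rough sketch (the attribute name is not final):

class BlurSharpenFilter(Filter):     # sketch only
    # Blur reads the colour of neighbouring pixels, and those neighbours
    # must already contain the output of all previously applied filters,
    # so this filter cannot share a render pass with the filters before it.
    isMergeable = False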

One more note about stages - I’m thinking of adding automatic stage consolidation, i.e. the pipeline would only create as many stages as are absolutely needed. For example, if blur is not enabled, there is usually no reason for the post-blur filters to have their own stage.

More about this later.

Ok. May be cleaner.

On this note, I’ve played around with the idea of making the filter parameters into Python properties. This would have a couple of advantages.

First, we can get rid of boilerplate argument-reading code in the derived classes. The Filter base class constructor can automatically populate any properties (that are defined in the derived class) from kwargs, and raise an exception if the user is trying to set a parameter that does not exist for that filter (preventing typos). This requires only the standard Python convention that the derived class calls super(self.__class__, self).__init__(**kwargs) in its __init__.

Secondly, as a bonus, this allows for automatically extracting parameter names - by simply runtime-inspecting the available properties - and human-readable descriptions (from the property getter docstrings).
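
A minimal sketch of the base class mechanics I have in mind (the actual implementation will differ in its details):

class Filter(object):
    def __init__(self, **kwargs):
        cls = self.__class__
        for name, value in kwargs.items():
            # Accept only names that are defined as properties in the subclass.
            if not isinstance(getattr(cls, name, None), property):
                raise AttributeError("%s has no parameter '%s'"
                                     % (cls.__name__, name))
            setattr(self, name, value)   # invokes the property setter

    @classmethod
    def getParameterNames(cls):
        return [name for name in dir(cls)
                if isinstance(getattr(cls, name), property)]

    @classmethod
    def getParameterInfo(cls):
        # Human-readable descriptions from the property getter docstrings.
        return dict((name, getattr(cls, name).__doc__)
                    for name in cls.getParameterNames())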

That sounds good. Let’s do that.

Maybe stage should be user-changeable, too. (Referring here to the fact that for some filters (e.g. blur), the interpretation of what the filter is trying to simulate affects which stage it should go into.)

Ok.

The only purpose here was to support the old API, which has monolithic setThisAndThatFilter() methods that are supposed to update the current configuration.

If this can be done in some smarter way, then I’m all for eliminating the strange inspection logic :slight_smile:

Ok. Personally I’m pretty particular about Python 2.x (because of line_profiler, which is essential for optimizing scientific computing code), but I agree that Panda shouldn’t be. :slight_smile:

I’ll change this to use the forward-compatible approach.

Maybe. This way, it could simply add a backward-compatible API on top of FilterPipeline, while all of the functionality of the new FilterPipeline API would remain directly accessible. That sounds nice.

I’ll have to think about this part in some more detail.

In the meantime while I’m working on the new CommonFilters architecture, here are screenshots from one more upcoming filter: lens distortion.

The filter supports barrel/pincushion distortion, chromatic aberration and vignetting. Optionally, the barrel/pincushion distortion can also radially blur the image to simulate a low-quality lens.

This filter will be available once the architecture changes are done.

EDIT: updated the attached code. Code generation and HalfPixelShift are now done.
EDIT2: fixed bug in code enabling HalfPixelShift and some erroneous comments. Attachment updated.
EDIT3: fixed some comments and asserts.
EDIT4: update task mechanism added; it is now a registrable for each individual Filter. ScanlinesFilter provides an example.
EDIT5: the attachment in this post is the last version before the module split; it is now obsolete. See the later post, including code that has been split into modules.

A first version of the CommonFilters re-architecture is almost complete.

I still need to split the code into modules and add imports, and port most of the existing filters (including my new inker) over to the new architecture, but the infrastructure should now be in place.

I expect to get to the testing phase in a day or two.

Some highlights:

  • Multi-passing with automatic render pass generation based on filter properties. Filters are assigned to logical stages (corresponding to steps in the simulated image-forming process), and the pipeline figures out dynamically how many render passes to create and which stages to assign to each. This allows e.g. blur to see cartoon outlines, opening up new possibilities.
  • Allows mixing filters provided with Panda and custom filters in the same pipeline, as long as the custom filters are coded to the new Filter API (which is the same API the internal filters use). The API aims to be as simple as possible. This also makes it easier to contribute new filters to Panda.
  • Filters may define internal render passes, allowing filters with internal multi-pass processing. (This is just to say that the new architecture keeps this feature!)
  • Highly automated. Create run-time and compile-time filter properties with one-liners (or nearly; most of the length comes from the docstring). Assign a value to a filter property at run-time, and the necessary magic happens for the new value to take effect, whether the property represents a shader input or something affecting code generation.
  • Runtime-inspectable; filters have methods to extract parameter names, or parameter names and their docstrings. You can also get the current generated shader source code from each FilterStage by just reading a property.
  • Object-oriented architecture using new-style Python classes. Using inheritance it is possible to create specialized versions of filters (to some degree).
  • Exception-based error handling with descriptive error messages to ease debugging.

Comments would be appreciated :slight_smile:
filterinterface.zip (48.7 KB)

Code generation and HalfPixelShift done. Previous post edited to match the new version; the attachment contains the latest code.

Success!

The code is now split into modules and it runs! :slight_smile:

It writes shaders that look like this:

//Cg
//
//Cg profile arbvp1 arbfp1

// FilterPipeline generated shader for render pass:
//   [LensFocus]
//
// Enabled filters (in this order):
//   StageInitializationFilter
//   BlurSharpenFilter

void vshader( float4 vtx_position : POSITION,
              out float4 l_position : POSITION,
              out float2 l_texcoord : TEXCOORD0,
              uniform float4x4 mat_modelproj )
{
    l_position = mul(mat_modelproj, vtx_position);
    l_texcoord = (vtx_position.xz * float2(0.5, 0.5)) + float2(0.5, 0.5);
}

// initialize pixcolor
float4 initializeFilterStage( uniform sampler2D k_txcolor,
                              float2 l_texcoord,
                              float4 pixcolor )
{
    pixcolor = tex2D(k_txcolor, l_texcoord.xy);
    return pixcolor;
}

// Blur/sharpen blend pass
float4 blurSharpenFilter( uniform sampler2D k_txblur1,
                          float2 l_texcoord,
                          uniform float k_blur_amount,
                          float4 pixcolor )
{
    pixcolor = lerp(tex2D(k_txblur1, l_texcoord.xy), pixcolor, k_blur_amount.x);
    return pixcolor;
}

void fshader( float2 l_texcoord : TEXCOORD0,
              uniform sampler2D k_txcolor,
              uniform sampler2D k_txblur1,
              uniform float k_blur_amount,
              out float4 o_color : COLOR )
{
    float4 pixcolor = float4(0.0, 0.0, 0.0, 0.0);
    
    // initialize pixcolor
    pixcolor = initializeFilterStage( k_txcolor,
                                      l_texcoord,
                                      pixcolor );

    // Blur/sharpen blend pass
    pixcolor = blurSharpenFilter( k_txblur1,
                                  l_texcoord,
                                  k_blur_amount,
                                  pixcolor );

    o_color = pixcolor;
}

The texcoord handler is based on the latest version in CVS, but it now handles texpad and texpix separately (to cover the case where HalfPixelShift is enabled for non-padded textures; in this case the vshader needs texpix but no texpad).

This source code was retrieved from the framework by:

for stage in mypipeline.stages:
    print stage.shaderSourceCode

In the Panda spirit, you can ls() the FilterPipeline to print a description:

FilterPipeline instance at 0x7f9c4a655f50: <active>, 1 render pass, 1 filter total
  Scene textures: ['color']
  Render pass 1/1:
    FilterStage instance '[LensFocus]' at 0x7f9c3a3cac50: <2 filters>
      Textures registered to compositing shader: ["blur1 (reg. by ['BlurSharpenFilter'])", "color (reg. by ['StageInitializationFilter'])"]
      Custom inputs registered to compositing shader: ["float k_blur_amount (reg. by ['BlurSharpenFilter'])"]
        StageInitializationFilter instance at 0x7f9c3a3cac90
            isMergeable: None
            sort: -1
            stageName: None
        BlurSharpenFilter instance at 0x7f9c3a3caad0; 2 internal render passes
          Internal textures: ['blur0', 'blur1']
            amount: 0.0
            isMergeable: False
            sort: 0
            stageName: LensFocus

(If it looks like the framework can’t count, rest assured it can - the discrepancy in the filter count is because StageInitializationFilter is not a proper filter in the pipeline, but something that is inserted internally at the beginning of each stage. Hence the pipeline sees only one filter, while the stage sees two.)

The legacy API is a drop-in replacement for CommonFilters - the calling code for this test was:

from CommonFilters190.CommonFilters import CommonFilters

self.filters = CommonFilters(base.win, base.cam)
filterok = self.filters.setBlurSharpen()

(The nonstandard path for the import is because these are experimental files that are not yet in the Panda tree. It will change to the usual “from direct.filter.CommonFilters import CommonFilters” once everything is done - so existing scripts shouldn’t even notice that anything has changed.)

Now, I only need to port all the existing filters to this framework, and then I can send it in for review :slight_smile:

Latest sources attached. There shouldn’t be any more upcoming major changes to the framework itself. What will change is that I’ll add more Filter modules and update the legacy API (CommonFilters) to support them.
CommonFilters190_initial_working_version.zip (112 KB)

Excellent work! I’ll try to find some time for this soon; sorry that I’ve not been giving it as much attention as it deserves, I’ve been absolutely swamped. :frowning:

Impressive! I’m no expert in shader-usage, nor with CommonFilters, but at a glance that looks both elegant and useful. :slight_smile:

I suppose that’s par for the course when a large new release is coming up :slight_smile:

In the meantime, I can proceed with porting the filters (both the old and the new ones). I’ll post a new version on the weekend.

There are a couple of things for which specifically I’d like comments - since this subsystem is pretty big, it might be easier to spell them out here:

  • Legacy API and new filters, and new options for old filters (CartoonInk)? Should the legacy API support them (so that legacy scripts require only minimal changes to use the new filters), or should they remain exclusive to the new API? (From the user’s perspective, the new API even simplifies the calling code, since it is now possible to change parameter values selectively instead of always sending the full “christmas tree” of parameters; but switching is more work for the user.)
  • Should FilterManager be modified to support partial cleanups, or is the current solution (that almost always rebuilds the whole pipeline) actually simpler? I’d like to have a system that rebuilds only the changed parts - hierarchical change tracking is already there, so for most cases this would be simple if only FilterManager allowed it. (But there are exotic cases, e.g. changing VolumetricLighting’s “source” parameter, or toggling the depth-enabled flag of the new CartoonInk. These affect which textures are required - and if the scene texture or aux bits requirements change, then it’s off to a pipeline rebuild anyway, because FilterManager.renderSceneInto() must be re-applied with the changed options.)
  • I’m trying to keep this as hack-free as possible, but there are some borderline cases. I’d like to prioritize power and ease of use - but if you see something that looks like a hack and have an idea how to achieve the same thing cleanly, I’d like to know :slight_smile:

By the way, I got rid of the inspection logic in setFilter() - now the logic to apply recompiles only when necessary resides in the property setters, where I think it belongs. FilterPipeline still has a setFilter() that either adds or reconfigures a filter (depending on whether it already exists in the pipeline), but now its implementation is much simpler.
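
Roughly like this (a simplified sketch of the idea only, not the code in the attachment; getParameterNames() stands for whatever the parameter introspection ends up being called):

class FilterPipeline(object):                    # simplified sketch
    def __init__(self):
        self.filters = []

    def addFilter(self, filt):
        self.filters.append(filt)
        self.filters.sort(key=lambda f: f.sort)

    def setFilter(self, filt):
        # If a filter of this type is already in the pipeline, copy the new
        # parameter values onto the existing instance; its property setters
        # then decide by themselves whether a recompile is needed.
        for existing in self.filters:
            if type(existing) is type(filt):
                for name in filt.getParameterNames():
                    setattr(existing, name, getattr(filt, name))
                return existing
        self.addFilter(filt)
        return filt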

Ah, and for now, only one instance per filter type is supported in any given pipeline instance - supporting multiple instances of the same filter in the same pipeline is a bit tricky (details in the code comments and README; in short, this requires some kind of name mangling).

Thanks :slight_smile:

I think the idea of the CommonFilters system was very good - having certain postprocessing operations available that can be simply switched on and configured, in any combination. I find it similar in spirit to the main shader generator: 99% of the time, there is no need to write custom shaders.

The aim of the new FilterPipeline framework is something similar for postprocessing filters. What it adds to the postprocessing system is maintainability (keeping code complexity in check as more filters are added) and extensibility (so that people in the community can write their own filters that plug into the pipeline). Also, the automatic render pass generator significantly improves how well the different filters play together.

From a user perspective, the interesting part will be new filters. As the first step, I’ll be adding the ones I’ve already coded, i.e. desaturation, scanlines, and lens distortion.

I’ve also been eyeing ninth’s SSLR (local reflection) implementation, which is very cool (and he’s ok with including it in Panda). SSLR may require changes to other parts of Panda to supply a fullscreen gloss map texture to the postprocessing subsystem, but it should be possible to do. We already have SSAO, so SSLR would be a nice addition to support more high-end visual effects out of the box :slight_smile:

I also bumped into an independent implementation of an early FXAA (Fast approximate antialiasing) version that was released as “feel free to use in your own projects” ( horde3d.org/wiki/index.php5?titl … que_-_FXAA ), so I think I’ll be adding that, too. The fshader is just a screenful of code, so it’s very simple. It would be a nice alternative for smoothing both light/dark transitions and object edges in cartoon-shaded scenes, as FXAA is basically an intelligent anisotropic blur filter. It may also be useful as a very cheap filter for general fullscreen antialiasing on low-end hardware.

There’s also SMAA (Enhanced subpixel morphological antialiasing), which is available under a free license, but its implementation is much more complex, and I haven’t yet investigated whether it’s possible to integrate into the new pipeline. See github.com/iryoku/smaa

I’d also very much like to add a depth-of-field (DoF) filter. While no perfect solution is possible using current hardware, the algorithm explained in GPU Gems 3 is pretty good for a relatively cheap realtime filter. See the article, which also contains a nice overview of possible techniques and references to papers discussing them: http.developer.nvidia.com/GPUGem … _ch28.html

Then, there is something that could be improved in the existing filters - for example, I think blur would look more natural using a gaussian kernel. Also, it doesn’t yet use the hardware linear interpolation to save GPU cycles, so the current implementation is not optimally efficient. This was covered in a link posted earlier, rastergrid.com/blog/2010/09/effi … -sampling/

And further, the blur kernel size could be made configurable, to adjust the radius of the effect. For a small blur, it is possible to sample 17 pixels using just five texture lookups in a single pass - and that’s including the center pixel. A diagram of the stencil can be found in the DoF article linked above, in subsection 28.5.2 - it’s pretty obvious after you’ve seen it once.
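
To make the linear-interpolation trick concrete, here is a small sketch of the arithmetic (not code from the attachment): two neighbouring gaussian taps can be replaced by a single bilinear lookup placed at their weighted midpoint, halving the number of off-centre texture fetches.

import math

def gaussianWeights(radius, sigma):
    # Discrete, normalized 1D gaussian kernel; w[0] is the centre tap.
    w = [math.exp(-0.5 * (float(i) / sigma) ** 2) for i in range(radius + 1)]
    total = w[0] + 2.0 * sum(w[1:])
    return [x / total for x in w]

def mergeForLinearSampling(weights):
    # Combine each pair of neighbouring taps into one bilinear lookup:
    # offset = weighted average of the two offsets, weight = sum of the two.
    # (Assumes an even radius for simplicity.)
    taps = [(0.0, weights[0])]
    for i in range(1, len(weights) - 1, 2):
        w = weights[i] + weights[i + 1]
        off = (i * weights[i] + (i + 1) * weights[i + 1]) / w
        taps.append((off, w))
    return taps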

So, there’s still a lot of work to do :slight_smile:

Added most of the existing filters and fixed some bugs (notably, buffer creation order, and a corner case in the logical stage merging logic (old version failed if all stages were non-mergeable)). Bloom and AmbientOcclusion turned out to need some new features, too (the “paramproc” argument in Filter.makeRTParam(), and “internalOnly” scene textures, respectively).

New version attached.

Hopefully, I’ll get CartoonInk (the old one) and VolumetricLighting done tomorrow; then this is ready for adding in the new filters. (Though I do need to comb through all the comments and docstrings to make sure everything is up to date.)
CommonFilters190_more_filters_and_bugfixes.zip (136 KB)

Just wanted to post to thank you for doing this.

Well, those filters are not in yet, but I did make the code generator a bit cleaner, eliminating some corner cases where unneeded variables were passed to the filter functions. (E.g. ScanlinesFilter needs only texpix for the colour texture, not the actual colour texture (except for the current pixel in the “pixcolor” variable); and StageInitializationFilter does not need the “pixcolor” argument, because it initializes pixcolor.)

Also, now the code handling the registerInputTexture() metadata should be easier to read. Updated version attached.

Finally, one question, mainly to rdb - how is the “source” parameter in VolumetricLighting (in CVS) supposed to work? I’m tempted to leave that parameter out, since if I understand the code in CVS correctly, its current implementation will only work if the referred texture happens to be one of those already passed to the compositing fshader for other reasons.

Also, related to this, look at line 170 of CommonFilters.py in CVS - I think it should be needtex.add(…), like the others, instead of needtex[…] = True?

Providing a general mechanism for supplying any user-given texture to the compositing fshader is something I haven’t even considered yet, since none of the other filters have happened to need that. Might be possible to do by adding more options to registerInputTexture(), if needed.

But I’m not sure if doing that would solve volumetric lighting. If I’ve understood correctly based on my reading of GPU Gems 3, there are only two ways to produce correct results from this algorithm - a) use a stencil buffer; or b) render a black copy of the scene (with only the light source stand-in sprites rendered bright), invoke VolumetricLighting on that, render another copy of the scene normally, and finally blend those additively to get the final result.

Input on this particular issue would be appreciated.
CommonFilters190_codegen_cleanup.zip (140 KB)

The source parameter selects an input texture to use for the radial blur. Before that, it used the regular scene color texture, which means that it affected all surfaces, which was kind of useless. By allowing people to set it to “aux” they can indicate which objects should be affected by VL using glow maps, or more usefully, they can set it to the result of the bloom pass, so that it is affected by an additional pre-blur.

This feature makes the volumetric lighting filter not completely useless, so it would be good to have it.

Ok.

So it seems I’ve understood the basic idea correctly. But there are some details I’m not clear on:

Setting “source” only affects the texture arguments of the compositing shader - so even if the user passes “aux”, no aux bits will be set. To me this doesn’t seem correct.

This also implies that “source” only supports textures that are already available, i.e. either scene textures, or textures generated by another filter. Is this intentional, or should arbitrary user-generated textures be supported?

As for the glow map, I think the “aux” texture is unrelated to that? For example, the bloom filter sets the ABOGlow aux bit, and then uses the alpha channel of the scene color texture to read the glow map. ViewGlow works the same way.

To my knowledge, in CommonFilters the “aux” texture is only used for fragment normals (with setting the ABOAuxNormal aux bit). The aux texture should probably be kept reserved for that purpose, too, to keep different features orthogonal (so that any combination of filters will work).

Is the intention to use the same glow map for both Bloom and VolumetricLighting, or should those be independent?

Your suggestion to use the output from the bloom pass as the VolumetricLighting source sounds very good. I’d never thought of that. It gives a nice thresholded “black render” much more cheaply than rendering the scene for a second time. Thanks. I’ll document this in VolumetricLightingFilter.

But wait - this implies that the object blocking the light must not trigger bloom in any pixels that lie in its interior (in the 2D sense), and this condition fails if the model has any glowing parts. Consider the Glow-Filter sample that comes with Panda - what if someone wants to add a volumetric light source behind that model?

So maybe the bloom preprocessing for VolumetricLighting needs a separate glow map, independent of regular bloom? But how would that be passed from the scene, through the main renderer, to CommonFilters? As to this, I’m completely lost.

EDIT: it would be possible, at this point, to make it use the regular glow map (or at the user’s option, even just an RGB blend; bloom also has this option), and add VL support for models with glow maps at some later time.

We will need a general mechanism to pass additional aux maps later, anyway - SSLR requires a gloss map, which is currently not supported as an aux target by the main shader generator ( panda3d.org/reference/devel … Attrib.php ).

(On that note, the documentation of AuxBitplaneAttrib needs some attention - ABOAuxNormal uses all three RGB components, contrary to what the documentation currently states (for the code, see panda/src/pgraphnodes/shaderGenerator.cxx, and search for “_out_aux_normal”). Also, the doc mentions something about “ABO_aux_modelz”, which is not there in the enum - maybe a leftover from an old release?)

The multi-pass system in any case makes implementing the bloom preprocess a bit more complex: VolumetricLighting simulates scattering in the air, so it should be rendered before any lens effects. This way, if we apply e.g. barrel distortion during a lens effects pass, the “god rays” will bend as they should, as they are already part of the color texture when the lens effects render pass begins.

The problem is that, at least in my opinion, (the regular) bloom and lens flare should go into the same render pass (with each other), so that they see the scene as it appears before any “unintended” reflections occur in the lens. But I think lens flares should not bend due to the scene being distorted by the lens (especially, the “ghosts” should lie on a straight line connecting each bright pixel to the center of the picture) - which implies the bloom/lens flare render pass needs to come after lens distortion.

Thus, VolumetricLighting and bloom end up in different render passes (preventing communication) - and furthermore, bloom has not yet been rendered at the point where VolumetricLighting needs to render itself.

It seems VolumetricLighting needs its own internal bloom filter, used as a preprocessor. But Filters cannot currently contain other Filters. I originally thought about it, but decided that it would make the system more complicated than it needs to be. Until now :stuck_out_tongue:

I’ll need to investigate whether it’s simpler to add that feature, or to duplicate the functionality of the bloom filter in the volumetric lighting filter, as this is the only case in the foreseeable future that would need this feature.

Also, it’s not clear whether it’s conceptually more correct to make the VolumetricLighting preprocessor implementation separate (enabling customizations to make it work better for this specific purpose), or whether it should just use the bloom filter (sharing possible bugfixes and new features in the future - but possibly also causing unintended breakage when the behaviour of the bloom filter changes).

Ok. I’ll include it :slight_smile:

I’ve now mostly understood what needs to be done to support it - but I would appreciate any comments that could help to fill in the details.

There’s no point to supporting user-generated textures for that.

The “aux” texture can be related to glow if you set the ABOAuxGlow bit instead of ABOGlow. Not sure if the existing system uses it. If we want it to consistently contain normals, that’s fine. I was simply using it as example.

It makes sense to use the glow map for both bloom and volumetric lighting. I agree that this is a sort of hacky and limited solution, but I don’t have a better alternative right now without needlessly complicating things. And yeah, it sort of fails when the bloom filter is used for glow, but that only covers a small number of cases. One solution for that would be if there could be two bloom filters at a time, but that may not fit the architecture you had in mind.

I will be eventually (post-1.9) overhauling the way that aux bitplanes are handled in Panda. Let’s not worry about it right now.

Yeah, what you describe makes it increasingly clear to me that the list of filter stages should be more variable. Perhaps a filter marks another filter’s output as a “dependency” and the system knows automatically to put it in a previous stage? Or stages are determined by sort value - filters with the same sort share a stage? I think the ideal solution is some sort of DAG, but only if that doesn’t needlessly complicate things right now.

Thanks :slight_smile:

Just trying to make Panda even more powerful out of the box (in a way that will be useful in my own project). Nice to see this is generating interest.