CommonFilters - some new filters, and the future

EDIT: updated the attached code. Code generation and HalfPixelShift are now done.
EDIT2: fixed bug in code enabling HalfPixelShift and some erroneous comments. Attachment updated.
EDIT3: fixed some comments and asserts.
EDIT4: an update task mechanism has been added; it can now be registered by each individual Filter. ScanlinesFilter provides an example.
EDIT5: the attachment in this post is the last version before the module split, and is now obsolete. See the later post below for the code split into modules.

A first version of the CommonFilters re-architecture is almost complete.

I still need to split the code into modules and add imports, and port most of the existing filters (including my new inker) over to the new architecture, but the infrastructure should now be in place.

I expect to get to the testing phase in a day or two.

Some highlights:

  • Multi-passing with automatic render pass generation based on filter properties. Filters are assigned to logical stages (corresponding to steps in the simulated image-forming process), and the pipeline figures out dynamically how many render passes to create and which stages to assign to each. This allows e.g. blur to see cartoon outlines, opening up new possibilities.
  • Allows mixing filters provided with Panda and custom filters in the same pipeline, as long as the custom filters are coded to the new Filter API (which is the same API the internal filters use). The API aims to be as simple as possible. This also makes it easier to contribute new filters to Panda.
  • Filters may define internal render passes, allowing filters with internal multi-pass processing. (This is just to say that the new architecture keeps this feature!)
  • Highly automated. Create run-time and compile-time filter properties with one-liners (or nearly; most of the length comes from the docstring). Assign a value to a filter property at run-time, and the necessary magic happens for the new value to take effect, whether the property represents a shader input or something affecting code generation.
  • Runtime-inspectable; filters have methods to extract parameter names, or parameter names and their docstrings. You can also get the current generated shader source code from each FilterStage by just reading a property.
  • Object-oriented architecture using new-style Python objects. Using inheritance it is possible to create specialized versions of filters (to some degree).
  • Exception-based error handling with descriptive error messages to ease debugging.

Comments would be appreciated :slight_smile:
filterinterface.zip (48.7 KB)

Code generation and HalfPixelShift done. Previous post edited to match the new version; the attachment contains the latest code.

Success!

The code is now split into modules and it runs! :slight_smile:

It writes shaders that look like this:

//Cg
//
//Cg profile arbvp1 arbfp1

// FilterPipeline generated shader for render pass:
//   [LensFocus]
//
// Enabled filters (in this order):
//   StageInitializationFilter
//   BlurSharpenFilter

void vshader( float4 vtx_position : POSITION,
              out float4 l_position : POSITION,
              out float2 l_texcoord : TEXCOORD0,
              uniform float4x4 mat_modelproj )
{
    l_position = mul(mat_modelproj, vtx_position);
    l_texcoord = (vtx_position.xz * float2(0.5, 0.5)) + float2(0.5, 0.5);
}

// initialize pixcolor
float4 initializeFilterStage( uniform sampler2D k_txcolor,
                              float2 l_texcoord,
                              float4 pixcolor )
{
    pixcolor = tex2D(k_txcolor, l_texcoord.xy);
    return pixcolor;
}

// Blur/sharpen blend pass
float4 blurSharpenFilter( uniform sampler2D k_txblur1,
                          float2 l_texcoord,
                          uniform float k_blur_amount,
                          float4 pixcolor )
{
    pixcolor = lerp(tex2D(k_txblur1, l_texcoord.xy), pixcolor, k_blur_amount.x);
    return pixcolor;
}

void fshader( float2 l_texcoord : TEXCOORD0,
              uniform sampler2D k_txcolor,
              uniform sampler2D k_txblur1,
              uniform float k_blur_amount,
              out float4 o_color : COLOR )
{
    float4 pixcolor = float4(0.0, 0.0, 0.0, 0.0);
    
    // initialize pixcolor
    pixcolor = initializeFilterStage( k_txcolor,
                                      l_texcoord,
                                      pixcolor );

    // Blur/sharpen blend pass
    pixcolor = blurSharpenFilter( k_txblur1,
                                  l_texcoord,
                                  k_blur_amount,
                                  pixcolor );

    o_color = pixcolor;
}

The texcoord handler is based on the latest version in CVS, but it now handles texpad and texpix separately (to cover the case where HalfPixelShift is enabled for non-padded textures; in this case the vshader needs texpix but no texpad).

This source code was retrieved from the framework by:

for stage in mypipeline.stages:
    print stage.shaderSourceCode

In the Panda spirit, you can ls() the FilterPipeline to print a description:

FilterPipeline instance at 0x7f9c4a655f50: <active>, 1 render pass, 1 filter total
  Scene textures: ['color']
  Render pass 1/1:
    FilterStage instance '[LensFocus]' at 0x7f9c3a3cac50: <2 filters>
      Textures registered to compositing shader: ["blur1 (reg. by ['BlurSharpenFilter'])", "color (reg. by ['StageInitializationFilter'])"]
      Custom inputs registered to compositing shader: ["float k_blur_amount (reg. by ['BlurSharpenFilter'])"]
        StageInitializationFilter instance at 0x7f9c3a3cac90
            isMergeable: None
            sort: -1
            stageName: None
        BlurSharpenFilter instance at 0x7f9c3a3caad0; 2 internal render passes
          Internal textures: ['blur0', 'blur1']
            amount: 0.0
            isMergeable: False
            sort: 0
            stageName: LensFocus

(If it looks like the framework can’t count, rest assured it can - the discrepancy in the filter count is because StageInitializationFilter is not a proper filter in the pipeline, but something that is inserted internally at the beginning of each stage. Hence the pipeline sees only one filter, while the stage sees two.)

The legacy API is a drop-in replacement for CommonFilters - the calling code for this test was:

from CommonFilters190.CommonFilters import CommonFilters

self.filters = CommonFilters(base.win, base.cam)
filterok = self.filters.setBlurSharpen()

(The nonstandard path for the import is because these are experimental files that are not yet in the Panda tree. It will change to the usual “from direct.filter.CommonFilters import CommonFilters” once everything is done - so existing scripts shouldn’t even notice that anything has changed.)

Now, I only need to port all the existing filters to this framework, and then I can send it in for review :slight_smile:

Latest sources attached. There shouldn’t be any more upcoming major changes to the framework itself. What will change is that I’ll add more Filter modules and update the legacy API (CommonFilters) to support them.
CommonFilters190_initial_working_version.zip (112 KB)

Excellent work! I’ll try to find some time for this soon; sorry that I’ve not been giving it as much attention as it deserves, I’ve been absolutely swamped. :frowning:

Impressive! I’m no expert in shader-usage, nor with CommonFilters, but at a glance that looks both elegant and useful. :slight_smile:

I suppose that’s par for the course when a large new release is coming up :slight_smile:

In the meantime, I can proceed with porting the filters (both the old and the new ones). I’ll post a new version on the weekend.

There are a couple of things on which I’d specifically like comments - since this subsystem is pretty big, it might be easier to spell them out here:

  • Legacy API and new filters, and new options for old filters (CartoonInk)? Support them there (so that legacy scripts require only minimal changes to add support for new filters), or leave them exclusive to the new API (from the user’s perspective, the new API even allows simplifying the calling code, since it is now possible to change parameter values selectively instead of always sending a Christmas tree; but this is more work for the user)?
  • Should FilterManager be modified to support partial cleanups, or is the current solution (that almost always rebuilds the whole pipeline) actually simpler? I’d like to have a system that rebuilds only the changed parts - hierarchical change tracking is already there, so for most cases this would be simple if only FilterManager allowed it. (But there are exotic cases, e.g. changing VolumetricLighting’s “source” parameter, or toggling the depth-enabled flag of the new CartoonInk. These affect which textures are required - and if the scene texture or aux bits requirements change, then it’s off to a pipeline rebuild anyway, because FilterManager.renderSceneInto() must be re-applied with the changed options.)
  • I’m trying to keep this as hack-free as possible, but there are some borderline cases. I’d like to prioritize power and ease of use - but if you see something that looks like a hack and have an idea how to achieve the same thing cleanly, I’d like to know :slight_smile:

By the way, I got rid of the inspection logic in setFilter() - now the logic to apply recompiles only when necessary resides in the property setters, where I think it belongs. FilterPipeline still has a setFilter() that either adds or reconfigures a filter (depending on whether it already exists in the pipeline), but now its implementation is much simpler.

Ah, and for now, only one instance per filter type is supported in any given pipeline instance - supporting multiple instances of the same filter in the same pipeline is a bit tricky (details in the code comments and README; in short, this requires some kind of name mangling).

Thanks :slight_smile:

I think the idea of the CommonFilters system was very good - having certain postprocessing operations available that can be simply switched on and configured, in any combination. I find it similar in spirit to the main shader generator: 99% of the time, there is no need to write custom shaders.

The aim of the new FilterPipeline framework is something similar for postprocessing filters. What it adds to the postprocessing system is maintainability (keeping code complexity in check as more filters are added) and extensibility (so that people in the community can write their own filters that plug in to the pipeline). Also, the automatic render pass generator significantly improves how well the different filters play together.

From a user perspective, the interesting part will be new filters. As the first step, I’ll be adding the ones I’ve already coded, i.e. desaturation, scanlines, and lens distortion.

I’ve also been eyeing ninth’s SSLR (local reflection) implementation, which is very cool (and he’s OK with including it in Panda). The latter may require changes to other parts of Panda to supply a fullscreen gloss map texture to the postprocessing subsystem, but it should be possible to do. We already have SSAO, so SSLR would be a nice addition to support more high-end visual effects out of the box :slight_smile:

I also bumped into an independent implementation of an early FXAA (Fast approximate antialiasing) version that was “feel free to use in your own projects” ( horde3d.org/wiki/index.php5?titl … que_-_FXAA ), so I think I’ll be adding that, too. The fshader is just a screenful of code, so it’s very simple. It would be a nice alternative for smoothing both light/dark transitions and object edges in cartoon-shaded scenes, as FXAA is basically an intelligent anisotropic blur filter. It may also be useful as a very cheap filter for general fullscreen antialiasing on low-end hardware.

There’s also SMAA (Enhanced subpixel morphological antialiasing), which is available under a free license, but its implementation is much more complex, and I haven’t yet investigated whether it’s possible to integrate into the new pipeline. See github.com/iryoku/smaa

I’d also very much like to add a depth-of-field (DoF) filter. While no perfect solution is possible using current hardware, the algorithm explained in GPU Gems 3 is pretty good for a relatively cheap realtime filter. See the article, which also contains a nice overview of possible techniques and references to papers discussing them: http.developer.nvidia.com/GPUGem … _ch28.html

Then, there is something that could be improved in the existing filters - for example, I think blur would look more natural using a Gaussian kernel. Also, it doesn’t yet use hardware linear interpolation to save GPU cycles, so the current implementation is not optimally efficient. This was covered in a link posted earlier, rastergrid.com/blog/2010/09/effi … -sampling/
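To illustrate the linear-sampling trick, here is a minimal sketch in plain Python (not part of the attached code) of how the per-tap weights and offsets could be precomputed so that each bilinear lookup covers two texels:

import math

def gaussianWeights(radius, sigma):
    # Discrete Gaussian weights for taps 0..radius, normalized over the full kernel.
    raw = [math.exp(-(i * i) / (2.0 * sigma * sigma)) for i in range(radius + 1)]
    norm = raw[0] + 2.0 * sum(raw[1:])
    return [w / norm for w in raw]

def linearSamplingTaps(radius, sigma):
    # Merge adjacent texel pairs into single bilinear taps: (offset in texels, weight).
    w = gaussianWeights(radius, sigma)
    taps = [(0.0, w[0])]  # the center texel is sampled exactly
    for i in range(1, radius + 1, 2):
        if i + 1 <= radius:
            weight = w[i] + w[i + 1]
            offset = (i * w[i] + (i + 1) * w[i + 1]) / weight
        else:
            weight, offset = w[i], float(i)  # odd texel left over at the kernel edge
        taps.append((offset, weight))
    return taps

# Example: a 9-texel kernel (radius 4) needs only 1 + 2*2 = 5 lookups per direction.
print(linearSamplingTaps(4, 2.0))

(In the fshader, the non-center taps are then sampled symmetrically on both sides of the current pixel.)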

And further, the blur kernel size could be made configurable, to adjust the radius of the effect. For a small blur, it is possible to sample 17 pixels using just five texture lookups in a single pass - and that’s including the center pixel. A diagram of the stencil can be found in the DoF article linked above, in subsection 28.5.2 - it’s pretty obvious after you’ve seen it once.

So, there’s still a lot of work to do :slight_smile:

Added most of the existing filters and fixed some bugs (notably buffer creation order, and a corner case in the logical stage merging logic: the old version failed if all stages were non-mergeable). Bloom and AmbientOcclusion turned out to need some new features, too (the “paramproc” argument in Filter.makeRTParam(), and “internalOnly” scene textures, respectively).

New version attached.

Hopefully, I’ll get CartoonInk (the old one) and VolumetricLighting done tomorrow; then this is ready for adding in the new filters. (Though I do need to comb through all the comments and docstrings to make sure everything is up to date.)
CommonFilters190_more_filters_and_bugfixes.zip (136 KB)

Just wanted to post to thank you for doing this.

Well, those filters are not in yet, but I did make the code generator a bit cleaner, eliminating some corner cases where unneeded variables were passed to the filter functions. (E.g. ScanlinesFilter needs only texpix for the colour texture, not the actual colour texture itself (beyond the current pixel in the “pixcolor” variable); and StageInitializationFilter does not need the “pixcolor” argument, because it initializes pixcolor.)

Also, now the code handling the registerInputTexture() metadata should be easier to read. Updated version attached.

Finally, one question, mainly to rdb - how is the “source” parameter in VolumetricLighting (in CVS) supposed to work? I’m tempted to leave that parameter out, since if I understand the code in CVS correctly, its current implementation will only work if the referred texture happens to be one of those already passed to the compositing fshader for other reasons.

Also, related to this, look at line 170 of CommonFilters.py in CVS - I think it should be needtex.add(…), like the others, instead of needtex[…] = True?

Providing a general mechanism for supplying any user-given texture to the compositing fshader is something I haven’t even considered yet, since none of the other filters have happened to need that. Might be possible to do by adding more options to registerInputTexture(), if needed.

But I’m not sure if doing that would solve volumetric lighting. If I’ve understood correctly based on my reading of GPU Gems 3, there are only two ways to produce correct results from this algorithm - a) use a stencil buffer; or b) render a black copy of the scene (with only the light source stand-in sprites rendered bright), invoke VolumetricLighting on that, render another copy of the scene normally, and finally blend those additively to get the final result.

Input on this particular issue would be appreciated.
CommonFilters190_codegen_cleanup.zip (140 KB)

The source parameter selects an input texture to use for the radial blur. Before that, it used the regular scene color texture, which means that it affected all surfaces, which was kind of useless. By allowing people to set it to “aux” they can indicate which objects should be affected by VL using glow maps, or more usefully, they can set it to the result of the bloom pass, so that it is affected by an additional pre-blur.

This feature makes the volumetric lighting filter not completely useless, so it would be good to have it.

Ok.

So it seems I’ve understood the basic idea correctly. But there are some details I’m not clear on:

Setting “source” only affects the texture arguments of the compositing shader - so even if the user passes “aux”, no aux bits will be set. To me this doesn’t seem correct.

This also implies that “source” only supports textures that are already available, i.e. either scene textures, or textures generated by another filter. Is this intentional, or should arbitrary user-generated textures be supported?

As for the glow map, I think the “aux” texture is unrelated to that? For example, the bloom filter sets the ABOGlow aux bit, and then uses the alpha channel of the scene color texture to read the glow map. ViewGlow works the same way.

To my knowledge, in CommonFilters the “aux” texture is only used for fragment normals (with setting the ABOAuxNormal aux bit). The aux texture should probably be kept reserved for that purpose, too, to keep different features orthogonal (so that any combination of filters will work).

Is the intention to use the same glow map for both Bloom and VolumetricLighting, or should those be independent?

Your suggestion to use the output from the bloom pass as the VolumetricLighting source sounds very good. I’d never thought of that. It gives a nice thresholded “black render” much more cheaply than rendering the scene for a second time. Thanks. I’ll document this in VolumetricLightingFilter.

But wait - this implies that the object blocking the light must not trigger the bloom in any pixels that lie in its interior (in the 2D sense), but this condition fails if the model has any glowing parts. Consider the Glow-Filter sample that comes with Panda - what if someone wants to add a volumetric light source behind that model?

So maybe the bloom preprocessing for VolumetricLighting needs a separate glow map, independent of regular bloom? But how would that be passed from the scene, through the main renderer, to CommonFilters? As to this, I’m completely lost.

EDIT: it would be possible, at this point, to make it use the regular glow map (or at the user’s option, even just an RGB blend; bloom also has this option), and add VL support for models with glow maps at some later time.

We will need a general mechanism to pass additional aux maps later, anyway - SSLR requires a gloss map, which is currently not supported as an aux target by the main shader generator ( panda3d.org/reference/devel … Attrib.php ).

(On that note, the documentation of AuxBitplaneAttrib needs some attention - ABOAuxNormal uses all three RGB components, contrary to what the documentation currently states (for the code, see panda/src/pgraphnodes/shaderGenerator.cxx, and search for “_out_aux_normal”). Also, the doc mentions something about “ABO_aux_modelz”, which is not in the enum - maybe a leftover from an old release?)

The multi-pass system in any case makes implementing the bloom preprocess a bit more complex: VolumetricLighting simulates scattering in the air, so it should be rendered before any lens effects. This way, if we apply e.g. barrel distortion during a lens effects pass, the “god rays” will bend as they should, as they are already part of the color texture when the lens effects render pass begins.

The problem is that, at least in my opinion, (the regular) bloom and lens flare should go into the same render pass (with each other), so that they see the scene as it appears before any “unintended” reflections occur in the lens. But I think lens flares should not bend due to the scene being distorted by the lens (especially, the “ghosts” should lie on a straight line connecting each bright pixel to the center of the picture) - which implies the bloom/lens flare render pass needs to come after lens distortion.

Thus, VolumetricLighting and bloom end up in different render passes (preventing communication) - and furthermore, bloom has not yet been rendered at the point where VolumetricLighting needs to render itself.

It seems VolumetricLighting needs its own internal bloom filter, used as a preprocessor. But Filters cannot currently contain other Filters. I originally thought about it, but decided that it would make the system more complicated than it needs to be. Until now :stuck_out_tongue:

I’ll need to investigate whether it’s simpler to add that feature, or to duplicate the functionality of the bloom filter in the volumetric lighting filter, as this is the only case in the foreseeable future that would need this feature.

Also, it’s not clear whether it’s conceptually more correct to make the VolumetricLighting preprocessor implementation separate (enabling customizations to make it work better for this specific purpose), or whether it should just use the bloom filter (sharing possible bugfixes and new features in the future - but possibly also causing unintended breakage when the behaviour of the bloom filter changes).

Ok. I’ll include it :slight_smile:

I’ve now mostly understood what needs to be done to support it - but I would appreciate any comments that could help to fill in the details.

There’s no point to supporting user-generated textures for that.

The “aux” texture can be related to glow if you set the ABOAuxGlow bit instead of ABOGlow. Not sure if the existing system uses it. If we want it to consistently contain normals, that’s fine. I was simply using it as example.

It makes sense to use the glow map for both bloom and volumetric lighting. I agree that this is a sort of hacky and limited solution, but I don’t have a better alternative right now without needlessly complicating things. And yeah, it sort of fails when the bloom filter is used for glow, but that only covers a small number of cases. One solution for that would be if there could be two bloom filters at a time, but that may not fit the architecture you had in mind.

I will be eventually (post-1.9) overhauling the way that aux bitplanes are handled in Panda. Let’s not worry about it right now.

Yeah, what you describe makes it increasingly clear to me that the list of filter stages should be more variable. Perhaps a filter marks another filter’s output as a “dependency” and the system knows automatically to put it in a previous stage? Or stages are determined by sort value - filters with the same sort share a stage? I think the ideal solution is some sort of DAG, but only if that doesn’t needlessly complicate things right now.

Thanks :slight_smile:

Just trying to make Panda even more powerful out of the box (in a way that will be useful in my own project). Nice to see this is generating interest.

Ok! This makes things simpler :slight_smile:

Ah, true, thanks for the correction.

(Indeed, now that I look at it, it says so right in the documentation for AuxBitplaneAttrib. Been a while since I read that. By the way, parts of that particular documentation page are out of date - see the edit in my previous post for details. Would be nice to fix that for 1.9.0, as only minor edits are needed.)

As you’ve no doubt noticed, CommonFilters doesn’t use ABOAuxGlow. I have no idea, either, whether it’s used in some other part of the system. Sounds like a useful option to have, though.

In CommonFilters it may be more elegant to use ABOGlow, because the glow information fits into the otherwise unused alpha channel of the primary color texture (at least I believe it’s otherwise unused - please correct if I’m wrong). This way, we only need to enable an additional render target in the main renderer when the requested aux data would not otherwise fit (namely, when fragment normals are needed; this requires three values per pixel).

Ok. Let’s go with this solution at this point, then.

It’s nontrivial, but possible. I was saving the “multiple instances per filter type” feature for post-1.9.0, but it looks to be the simplest solution here, so I might as well add it now.

The complicated part is automatically mangling the names of custom shader input variables and the corresponding shader inputs (observe that two copies of the same filter may participate in the same compositing shader - allowing this is the simplest solution to account for arbitrary merges of logical stages). This sounds like it needs a FilterPipeline-level name registry, and some more dynamic logic in the setter generated by makeRTParam() in Filter.py. I’ll see what I can do.

It also requires some co-operation from authors of new Filters (self.getMangledName(“myShaderInput”)… though “mangled” sounds bad even if technically correct - maybe getInstanceName() or getUniqueName(), or something?), but not much more so than supporting the flexible texcoord system that is already included in the current code.

This also requires two more new features:

  • The first bloom filter should not participate in the compositing shader, but only produce an internal texture to be used by VolumetricLighting. Thus, we need a Filter-level flag that controls whether the Filter instance “participates” in the compositing shader (of the FilterStage it happens to reside in).
  • Texture communication between Filters in the same FilterStage. VolumetricLighting needs to get the texture that was rendered by the last internal stage of the bloom filter. This needs a FilterStage-level function that can request the texture reference from the other filter.

These shouldn’t be too difficult to do - I’ll try it, and get back on this.

By the way, requiring the user to enable both filters explicitly in the VolumetricLighting/Bloom case sounds too manual for my tastes - but it might be the simplest solution at this point. For now, CommonFilters can automate that setup in its old-style API function. I’ll look into automating this part properly later.

Ah! Looking forward to that :slight_smile:

Sorry, but I’m not sure I follow.

I think dependency tracking on the Filter level wouldn’t work very well. Unless I misunderstood, this would require each Filter to declare a dependency on all those that must not be placed after it, which is not maintainable as new filter types are implemented and inserted at arbitrary points in the sequence.

A sane default global ordering for the filters exists (assuming one filter of each type - if more, we can assume the user knows what he’s doing, and provide an override). I think this can be modelled by declaring a dependency on some guaranteed-stable common ground - e.g. a “logical stage” - which is what the current version of the architecture already does. (It does this a bit differently, but it should be logically equivalent to this. The filters declare themselves as “belonging” to a named logical stage, and it is implied that each logical stage depends on the one before it in the STAGES list.)

In my opinion, a key idea for keeping this relatively simple is that logical stages do not need to correspond to render passes directly. A Filter can declare whether its logical stage requirement is absolute (necessitating the creation of a new render pass if not already created for its stageName), or whether it allows “falling through” to an earlier render pass (this is declared in the “isMergeable” property of Filter). FilterPipeline can then read this information as it examines the enabled filters, and set up the appropriate render passes (each of which contains one or more logical stages) and assign the appropriate filters to each.

The current design already uses a two-level sort order approach - the outer level (the logical stages) is controlled by “stageName”, while the inner level (ordering within a stage) is controlled by “sort”. I think it is important to have an explicit ordering for filters also within each stage, because the compositing operations usually do not commute (i.e. they cannot be reordered without changing the result).

When logical stages undergo a merge, the final sort value for a filter in the merged FilterStage is computed from the number of logical stage “bumps” that were applied to that filter, and the filter’s “sort” value.

If we use such a sort order based approach for the stages, I think the stage level needs to be controlled by an ordered list of strings with sane defaults. This is to support human-readability in the source code (especially that of individual Filters; stageName = “LensOptics” is clearer than stage = 5), in the generated shaders (FilterStage automatically creates a comment describing which named logical stages are being composited in that render pass), and in run-time error messages (which may include the FilterStage name). Also, with an explicit list, it is possible to insert new stages between existing ones without leaving gaps in the original numbering, without using float values, and without requiring updates to the implementations of existing filters when a new stage is added.

I do see now that the system can be improved by making the STAGES list an instance variable of FilterPipeline (knownStages?), instead of a module-level variable in Filter. I’ll fix this.

Note that it is also already possible to assign a Filter instance to a non-default stage at run-time. Simply assign a new value to the filter’s stageName property (or provide it while creating the instance) - FilterPipeline then plops the filter into that stage and reconfigures itself automatically. If you want to change both stageName and sort simultaneously (for an existing instance), the Filter class contains an updateConfigurationFromKeywords() method that updates several parameters atomically without invoking intermediate reconfigures. (The constructor also uses it internally.)
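For example, moving an existing filter instance to another logical stage at run-time could look like this (a usage sketch only; the names match the current attachment, but the exact signatures may still change):

# Move the filter to another logical stage; the pipeline reconfigures itself.
myfilter.stageName = "LensOptics"

# Change the stage and the in-stage ordering atomically, with a single reconfigure:
myfilter.updateConfigurationFromKeywords(stageName="LensOptics", sort=10)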

DAG?

OK, well, let’s try not to over-engineer things. I can look into it later when you hand off your code to me; maybe I can think of a different hack that would work equally. Making stages a list of strings is fine as well.

I had a look at your code earlier and while it’s good in general, there are a few things that I think may be needlessly complicated and would present a challenge to maintain in the long term.

A good sign that you may be over-engineering is when you find you’re re-implementing basic container types, dynamically synthesising properties and creating a reflection API for your APIs. :wink:

A few remarks/questions:

  1. The CT/RT parameter system. The makeParam functions have a very non-intuitive interface. I think we agreed on a simple property-based system, with getters and setters - even if that’s a little bit more code for the filters to do, it is far easier to understand when reading the code. I don’t think we need a reflection system for parameters.
  2. Why are there setup(“pipeline”), setup(“stage”), make() and synthesize() that do different things? What’s the difference between these four methods?
  3. A single method that takes one of two values as argument and does two completely different things based thereupon should really be two separate methods.
  4. What’s a batch update? I don’t recall having discussed this before.

It seems to me that you were trying to implement efficient reconfiguring of existing filters with your parameter system, to support the old CommonFilters-style approach of updating a filter based on the filter type and new parameters. I think that we shouldn’t bend over backwards to support that. We can put hacks in our new CommonFilters to support the old approach, and we don’t even have to be very efficient about it - even replacing the filter class with a new one entirely would suffice as far as I’m concerned.

I don’t mean for this to sound harsh, but keep in mind that you’re coding for other developers as well as users, which means that sometimes an API has to be a bit less convenient if it cuts down the implementation complexity and “magicness” of the code.

Overall, though, this is great stuff, good work. :slight_smile: I think it’d be great if you could get it to a state where it basically works and where the code is understandable; I can make any final adjustments as necessary and make sure it works with everything and commit it. No rush, though, I’m likely to be tied up for the next few days.

Ok. I’ll keep it as a list of strings, then, but move it into an instance property of FilterPipeline.

About alternatives working equally - in case of major changes, I think we might need to define some example cases and desired results for them to make sure both alternatives really do the same thing. I think this is pretty complicated to get right.

Ok. Then we should get rid of those parts, or improve them. This is exactly the kind of thing why review is needed.

And here I thought that a reflection API is a hallmark of a powerful system :stuck_out_tongue:

Seriously, though… reflection is actually one of the reasons I like Python. But maybe this postprocessing filter subsystem is too specific to need that feature. So let’s scratch it.

The goal here was to provide a way for user code to get a list of available parameters from an arbitrary filter - which could be useful e.g. for a live experimentation GUI for postprocessing filters (which wouldn’t need to know anything about the specific filters available). But maybe not so useful for typical applications, which know at design time which filters they will need.

It was also a way to provide some way to semi-easily access the documentation at runtime. The need to do this goes away if we return to manually defined properties with getters and setters (which provide automatically extractable docstrings, which are even better).

About the named tuple issue - that’s definitely a hack. Not my best day coding :stuck_out_tongue:

So, I’m open for better suggestions. Maybe: generate a class using the standard collections.namedtuple container (which basically acts like a code generator - that in itself at least I find very pythonic), and then inherit from the result returned, adding the combineWith() method? Or just provide a module-level function in Filter.py that accepts two or more TextureInputMetadatas (which can be standard named tuples) and combines them? Or something else?
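For concreteness, the first alternative might look roughly like this (a sketch only; the field names here are invented for illustration and are not the real TextureInputMetadata fields):

from collections import namedtuple

_TextureInputMetadataBase = namedtuple("TextureInputMetadata",
                                       ["name", "needsTexpad", "needsTexpix"])

class TextureInputMetadata(_TextureInputMetadataBase):
    __slots__ = ()
    def combineWith(self, other):
        # Merge two registrations of the same texture by OR-ing the flags.
        assert self.name == other.name
        return TextureInputMetadata(self.name,
                                    self.needsTexpad or other.needsTexpad,
                                    self.needsTexpix or other.needsTexpix)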

I ended up with this hack because I wanted this to stay in line with the object-oriented nature of the rest of the subsystem. I feel it would be fairly illogical for combineTextureInputMetaData() to be a module-level function, while practically everything else (if we disregard make*Param() which are to be removed) is an instance method of some relevant object.

The important part here is to keep the flag merge logic with the definition of TextureInputMetadata (which is where it logically belongs) - not implement it at the site where it is used. Note that there are two actual use sites; at least currently, both FilterStage and FilterPipeline need to merge the flags. Maybe the functionality could be added to both classes via a mix-in, but that starts to sound like over-engineering to me (and still is not the logically best place for the functionality).

Ok. Let’s get rid of CT/RT/UT parameters. This might not have been the best possible abstraction :stuck_out_tongue:

The aims of implementing the parameters this way were OnceAndOnlyOnce, and making the filter definitions shorter. But you’re right that “more readily understandable” beats “shorter” - i.e. the length should be measured not in lines of code, but in the time it takes for a programmer to parse the implementation :slight_smile:

With a manual implementation… the compile-time parameters have to duplicate only one or two lines of code (one check for the existence of the relevant object, another calling its reconfigure()), but the logic to send shader inputs is too lengthy and error-prone to require filter authors to repeat it in every setter.

Thus: returning closer to the original solution, maybe the Filter class should provide methods for these repetitive tasks, and these methods would be called (manually) from the property setters?

That way, it’s not that many lines per property at the definition site, and actually some of the code is simplified (e.g. the shader inputs in AmbientOcclusionFilter, combining values from several filter parameters into the same input). Also, it becomes easier to support “hybrid type” parameters that affect both a run-time value and require some code generation (“numsamples” in AmbientOcclusionFilter).
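Concretely, I mean something like this (a minimal sketch; the helper names setCompositingShaderInput() and requestReconfigure() are placeholders for the repetitive tasks the Filter base class would provide, MyFilter is a made-up filter, and the stub base class below just stands in for the real one):

class Filter(object):
    # Stub standing in for the real base class from the attachment.
    def setCompositingShaderInput(self, name, value):
        pass  # would route the value to the stage's compositing shader
    def requestReconfigure(self):
        pass  # would trigger a shader rebuild if attached to a pipeline

class MyFilter(Filter):
    def __init__(self):
        self._strength = 1.0
        self._numSamples = 16

    @property
    def strength(self):
        """Effect strength (run-time parameter; no recompile needed)."""
        return self._strength

    @strength.setter
    def strength(self, value):
        self._strength = value
        self.setCompositingShaderInput("strength", value)

    @property
    def numSamples(self):
        """Sample count (compile-time parameter; changing it regenerates code)."""
        return self._numSamples

    @numSamples.setter
    def numSamples(self, value):
        self._numSamples = value
        self.requestReconfigure()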

Good catch. This is an instance of me thinking in the human equivalent of objects defined dynamically at run-time :stuck_out_tongue:

Better names would be welcome - if indeed all four are even needed. As for how it currently works:

A filter requires two setup phases:

  • Parameter registration, which needs to run when the filter is attached to a pipeline. This is needed so that the pipeline can gather the registration information; especially, it needs to know the required input textures and aux bits from all enabled filters, so that it can call FilterManager.renderSceneInto() with the correct options. I think it is simpler if all registration information passes through FilterPipeline, so that it can extract what it needs, and then pass a copy to FilterStage (which can then also extract what it needs). Keep in mind that FilterStages are intended to be, in a sense, temporary objects that are destroyed and instantiated each time the pipeline reconfigures.
  • Creation of internal textures and internal stages. This also sets up the connections between the internal stages created. This needs to run when the filter is attached to a FilterStage. This is to ensure correct creation order (and thus render order) for the buffers requested from FilterManager.

Similarly, there are two different situations where a filter needs to generate code:

  • The make() method loads or generates shaders for the filter’s internal stages. It is intended to run when a Filter-level compile-time parameter changes. This means that the parameter affects only the code generation of the shaders assigned to internal stages - which implies that the FilterStage and FilterPipeline do not need to be reconfigured.
  • The synthesize() method returns the filter’s contribution to the compositing shader (of the FilterStage the filter instance resides in). It is called from FilterStage.reconfigure() (which in turn is called when a FilterStage-level or FilterPipeline-level compile-time parameter changes).
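Put together, the current shape of a filter is roughly this (a skeleton of the structure just described, not a complete filter; the bodies are omitted):

class MyFilter(object):  # derives from Filter in the real code
    def setup(self, phase):
        if phase == "pipeline":
            # Parameter registration: declare the needed scene textures and
            # aux bits so the pipeline can call renderSceneInto() correctly.
            pass
        elif phase == "stage":
            # Create internal textures and internal render passes, in order.
            pass

    def make(self):
        # (Re)generate shaders for the internal passes; runs when a
        # Filter-level compile-time parameter changes.
        pass

    def synthesize(self):
        # Return this filter's contribution to the compositing shader of the
        # FilterStage it resides in; called from FilterStage.reconfigure().
        return ""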

Good catch. Let’s do that.

This is because in the last version we discussed in detail, I hadn’t yet run into the problem for which this is aiming to be a solution.

Maybe I should explain what I was trying to do, and you can then tell me how to do it properly :wink:

So here goes:

I think an important drawback of simple property-based reconfigure automation is that every property change (excluding run-time parameters) triggers a reconfigure. A reconfigure is an expensive operation - it must tear down and then fully re-create the pipeline - so, at least in real-time use, I think unnecessary reconfigures should be avoided. It would be much more efficient if, in applicable cases, one could update several parameters (a “batch” of parameters) at once, at the cost of a single reconfigure instead of running one reconfigure for each changed parameter. Hence, batch update.

This is especially a problem with stageName and sort, which may require such an “atomic” update to ensure that no unintended sort conflicts occur during an unnecessary intermediate invocation of reconfigure. Each (stageName, sort) pair must be unique at all times across all Filter instances in the same pipeline.

What’s your position on this? Any recommendations what to do?

This was part of the reason, but see also above.

Now that the configuration lives in the Filter instances, I see no reason to destroy and re-create them at each pipeline reconfigure - I think it is cleaner if they are persistent across pipeline reconfigures (when the new API is used).

Also, I think it would be useful to be able to update several parameters at once (without unnecessary intermediate reconfigures) when using the new API, too.

The filter type as a way to identify filter instances obviously must go when support for multiple instances of the same filter type in the same pipeline is added. But I didn’t yet have time to spare much thought as to what would be the appropriate replacement.

Maybe the cleanest way is to let the caller create/destroy the instances that the application wants, and then add them to the pipeline using FilterPipeline.addFilterInstance(). Then, there could also be a FilterPipeline.delFilterInstance() that removes a filter from the pipeline by its object reference.

Ok.

If the review is fair and keeps to the matter at hand (as yours does), there is just bad code and it needs to be fixed :slight_smile:

Ok. Thanks :slight_smile:

I’ll also be busy for the next couple of days, but I will return to this as soon as I can. It’s also in my own project’s interest to get this done :slight_smile:

Beside completing the framework into an acceptable state, I would also like to contribute some filters to go with it in 1.9.0 - starting with ninth’s lens flare, and adding my improved cartoon inker, and the new desaturation, scanlines and lens distortion filters.

(They might require minor additions to the framework - lens distortion, at least, will need texpad in the fshader to compute the relative pixel position on screen correctly in case of padded textures. Unless there is a better solution I missed.)

But Python already has a reflection system that allows doing this; people can use the __dict__ of a filter and extract the docstrings of its properties if they need to. If need be, we could let each filter define a static list of property strings that people can use to find out which properties are actually settable at runtime.
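For example, a minimal sketch using only standard reflection:

def describeParameters(filt):
    # List the properties of an arbitrary filter and their docstrings using
    # plain Python reflection (direct class attributes only; walk the MRO if
    # inherited properties should be included too).
    params = {}
    for name, attr in vars(type(filt)).items():
        if isinstance(attr, property):
            params[name] = attr.__doc__
    return params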

For now, though, I think this is a non-essential feature, and I don’t think this should be a target for 1.9. We can always think about such a thing later without breaking backward compatibility.

What do these metadata classes do, exactly?

There should be two places to set a filter’s properties: (1) at the Filter itself, which doesn’t need to check for its own existence, and (2) via the CommonFilters interface. In the latter case, we don’t really need to worry about making a clean and efficient solution.

I’m not sure what you mean by a “hybrid type” parameter. If something requires code generation, there’s no point in setting it via setShaderInput.

We can solve this problem elegantly by deferring the reconfigure. Instead of having the setters on a Filter call reconfigure() whenever a property is changed, it should call self.markDirty(), which sets some flags or adds itself to a set needing configure. Then, at the end of a frame (a task with a high sort value?), we go through the list of filters needing to be reconfigured and reconfigure them as appropriate.
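Something along these lines, for example (a sketch; the names and the task sort value are only illustrative, and taskMgr is ShowBase's global task manager):

class FilterPipeline(object):
    def __init__(self):
        self._dirtyFilters = set()
        # End-of-frame task; the sort value just needs to place it after the
        # application's own tasks.
        taskMgr.add(self._reconfigureDirty, "FilterPipeline-reconfigure", sort=51)

    def markDirty(self, filt):
        # Called from the filters' property setters instead of reconfigure().
        self._dirtyFilters.add(filt)

    def _reconfigureDirty(self, task):
        # One reconfigure per dirty filter per frame, no matter how many
        # properties were assigned to it during that frame.
        for filt in self._dirtyFilters:
            filt.reconfigure()
        self._dirtyFilters.clear()
        return task.cont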

I think the new approach should be to let the user deal with it. He creates a filter, and is himself responsible for only adding that filter instance once (or face the consequences), without having FilterPipeline second-guess a decision to add a same-type filter twice.

As for CommonFilters, I think we can just do it like this:

class CommonFilters(object):
    def __init__(self):
        self.volumetricLighting = None

    def setVolumetricLighting(self, setting1):
        if self.volumetricLighting is None:
            self.volumetricLighting = VolumetricLighting()
            self.addFilterInstance(self.volumetricLighting)
        self.volumetricLighting.setting1 = setting1

    def delVolumetricLighting(self):
        if self.volumetricLighting is not None:
            self.removeFilter(self.volumetricLighting)
            self.volumetricLighting = None

That’s cool. Let’s first focus on getting the framework done before we add all the different filters, though we can of course still regard those filters as a case study to see how flexible the framework truly is. :slight_smile:

Let me know when the framework is done, maybe I can contribute some filters :slight_smile:

A small suggestion, if it’s not already in: some filters like saturate/desaturate don’t need a separate pass. If they could be merged together, you could save render targets, which I believe is more efficient on the GPU.

Ah! Thinking of anything specific? :slight_smile:

I’ll post about it here when the framework is done.

It’s already in :slight_smile:

When implementing a new filter type, it is possible to tell the system whether that filter requires a new render pass or not. Then, at run-time, the system will create the optimal number of render passes automatically, depending on the filters enabled.

So, for example, if you had one filter requiring a new render pass in the middle of the pipeline, and you disable it (removing it from the pipeline), the pipeline will notice it does not need two render passes any more, and will now create only one render pass when it reconfigures itself.
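For illustration (the filter class names here are placeholders, and addFilterInstance()/delFilterInstance() are the methods proposed earlier in this thread):

desat = DesaturationFilter()        # mergeable: can share an existing render pass
distort = LensDistortionFilter()    # requires a render pass of its own

pipeline.addFilterInstance(desat)
pipeline.addFilterInstance(distort)   # the pipeline now builds two render passes

pipeline.delFilterInstance(distort)   # the next reconfigure drops back to one pass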

(Note, however, that in more complex filters, this concerns only the compositing operations. For example, SSAO does three internal render passes and bloom does four. Typically, of course, such filters will also require beginning a new render pass, so that the input textures to the internal passes will show any earlier modifications.)