CommonFilters uses FilterManager under the hood. FilterManager replaces the display region with one that renders a fullscreen quad, creates an offscreen buffer for the scene to be rendered into, and reassigns the main camera to render into that buffer instead.
So when you create your other FilterManager/CommonFilters, it can’t find a display region on the window matching the main camera. You either need to point it at the camera that the other object uses to render its fullscreen quad, or at the offscreen buffer that the other FilterManager/CommonFilters created.
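For reference, a minimal sketch of the usual FilterManager setup (the shader filenames here are placeholders):

```python
from direct.showbase.ShowBase import ShowBase
from direct.filter.FilterManager import FilterManager
from panda3d.core import Texture, Shader

base = ShowBase()

# Redirects base.cam into an offscreen buffer and puts a fullscreen
# quad into the display region that base.cam used to occupy.
manager = FilterManager(base.win, base.cam)
scene_tex = Texture()
quad = manager.renderSceneInto(colortex=scene_tex)

# The quad carries the post-processing shader. After this point, a
# second FilterManager pointed at (base.win, base.cam) fails, because
# base.cam no longer drives a display region on the window.
quad.setShader(Shader.load(Shader.SL_GLSL, "post.vert", "post.frag"))
quad.setShaderInput("tex", scene_tex)

base.run()
```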
Thank you for your answer.
I would prefer CommonFilters first, then FilterManager, because I want to use the built-in setVolumetricLighting() and possibly setBloom(), and then apply some of my own filters (vignetting, radial blur and the like). I tried to find the appropriate base.win and base.cam somewhere in the CommonFilters object, but I couldn’t (I’m still not sure how it works).
Alternatively, if putting CommonFilters first is a big problem, then the other way around, with FilterManager first, might be acceptable, although the effect would probably be worse.
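Roughly what I am trying (assuming a running ShowBase app; sun_np stands in for my light’s NodePath, and the last line is where it breaks):

```python
from direct.filter.CommonFilters import CommonFilters
from direct.filter.FilterManager import FilterManager

# Built-in effects first...
filters = CommonFilters(base.win, base.cam)
filters.setBloom()
filters.setVolumetricLighting(caster=sun_np)  # sun_np: the caster NodePath

# ...then my own filters. This fails, since base.cam no longer has a
# display region on base.win at this point.
manager = FilterManager(base.win, base.cam)
```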
I will be grateful for any hints.
With what you’re doing, it may be worth diving into the source code of CommonFilters and FilterManager to see what they do, or even just ripping out the parts of CommonFilters that you need and reimplementing them on top of your own FilterManager.
That said, it should be possible to chain two FilterManager objects (CommonFilters uses FilterManager under the hood; use .manager to access it). However, the second object will not be able to access the original scene’s depth texture, etc., because it will just be capturing the output color that results from the first one’s compositing process. Maybe it is possible to interleave the two objects’ operations in complex ways, but if I were you, I would really just save myself the trouble, copy CommonFilters.py into my own codebase, and modify it to add my own effects.
If you wanted to try to chain them, you can set one up to use the other one’s internal buffer (manager.buffers) with the default camera, or you can do the reverse by using the regular window but with the camera object that the other FilterManager assigned to it (base.win.display_regions[0].camera, or maybe it is region 1, not sure).
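Untested, but I mean something along these lines (assuming filters is your CommonFilters instance; pick one of the two options):

```python
from direct.filter.FilterManager import FilterManager

# Option A: attach the second manager to the first one's internal
# buffer, where base.cam still drives a display region.
manager2 = FilterManager(filters.manager.buffers[0], base.cam)

# Option B: attach it to the window instead, using the quad camera the
# first manager installed there (the region index may be different in
# your setup).
quad_cam = base.win.display_regions[1].camera
manager2 = FilterManager(base.win, quad_cam)
```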
Is that not already the case in this scenario, however? After all, the original post has the separate FilterManager calling “renderSceneInto” alongside usage of CommonFilters. Or does FilterManager have some clever trick of re-using render-results across FilterManager instances…?
In the case of FilterManager chaining, the “scene” rendered by renderSceneInto will be the fullscreen quad that is being rendered by the previous manager, containing the output image.
That’s why I said that the second manager couldn’t access the depth buffers, etc. of the original scene, but I just realised that that’s wrong: of course you can just have the second FilterManager access the .textures dictionary of the CommonFilters instance.
The first FilterManager you set up renders its final quad, with the compositing shader and everything applied to it, into the window. You need to instead capture this final render result into a texture so you can use it as input for one or more of your subsequent filters. This is what the renderSceneInto of your second manager accomplishes: it redirects the output of whatever “scene” it is given (which may in fact be the final filter pass of the first manager) into a texture buffer, and then replaces the original scene (i.e. the quad rendered to the main window) with a quad showing the final result of the second manager in its place. But renderQuadInto doesn’t do this first step. It just creates a new pass with a fullscreen quad, disconnected from everything.
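For example (assuming filters is the CommonFilters instance and quad2 is a quad created by the second manager; the key names are my reading of CommonFilters.py, and “depth” is only present if some active filter requested a depth texture):

```python
# Pull the original scene's textures out of CommonFilters' dict and
# feed them into a pass of the second manager.
scene_color = filters.textures["color"]
scene_depth = filters.textures.get("depth")  # None unless a filter needed depth

quad2.setShaderInput("scenetex", scene_color)
if scene_depth is not None:
    quad2.setShaderInput("depthtex", scene_depth)
```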
One way of thinking about these methods is that renderSceneInto sets up the input and output of the filter chain (so the scene pass and the final pass), whereas renderQuadInto is used to set up intermediate passes.
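In sketch form (shader names are placeholders):

```python
from direct.filter.FilterManager import FilterManager
from panda3d.core import Texture, Shader

manager = FilterManager(base.win, base.cam)
scene_tex = Texture()  # filled by the scene pass
blur_tex = Texture()   # filled by the intermediate pass

final_quad = manager.renderSceneInto(colortex=scene_tex)  # input + output of the chain
blur_quad = manager.renderQuadInto(colortex=blur_tex)     # intermediate pass

# The intermediate pass reads the scene...
blur_quad.setShader(Shader.load(Shader.SL_GLSL, "blur.vert", "blur.frag"))
blur_quad.setShaderInput("tex", scene_tex)

# ...and the final pass reads the intermediate result into the window.
final_quad.setShader(Shader.load(Shader.SL_GLSL, "final.vert", "final.frag"))
final_quad.setShaderInput("tex", blur_tex)
```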
I think that I understand you correctly, and that that’s exactly what I had in mind: CommonFilters calls “renderSceneInto”, setting things up, then the developer calls “renderQuadInto” on the FilterManager within CommonFilters to perform subsequent filtering. Note that miklesz indicated above that they wanted “CommonFilters first, then FilterManager”.
But it’s possible that I’m misunderstanding; for some reason, I’ve always had trouble fully keeping the flow of FilterManager in mind…
That’s why I decided to abandon CommonFilters and just create my own versions of these filters, this time in GLSL #version 410.
So, thank you again for your support. Let the thread stay; I hope someone else will find it useful. I already have most of the filters I need rewritten for the newer OpenGL (actually, I could even offer them to the community, but I’m just learning shaders, so my implementations are probably still quite amateur, although they work so-so).
One thing that is still a little bit annoying is that FilterManager depends on the camera. This means all the filters have to be regenerated on every camera change. I don’t know whether or not this could be addressed by a different design in the near future…
Having an easy-to-use, customizable and stackable post-processing “framework” is one thing that could be interesting for P3D 11.
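Right now the only way I see is a full rebuild on every switch, something like this (a sketch; post.vert/post.frag stand for whatever compositing shader is in use, and FilterManager’s cleanup() restores the original display region):

```python
from direct.filter.FilterManager import FilterManager
from panda3d.core import Texture, Shader

def rebuild_filters(manager, new_cam):
    """Tear down the old chain and rebuild it against the new camera."""
    if manager is not None:
        manager.cleanup()  # undo the old manager's window/buffer changes
    manager = FilterManager(base.win, new_cam)
    tex = Texture()
    quad = manager.renderSceneInto(colortex=tex)
    quad.setShader(Shader.load(Shader.SL_GLSL, "post.vert", "post.frag"))
    quad.setShaderInput("tex", tex)
    return manager
```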