I didn’t look at your code (yet), it’s been a while since I looked at ninth’s code, and I’m next to last when it comes to writing shaders, so this post may be nonsense.
Isn’t another camera (and/or scenegraph) needed for the cutouts that you wrote about earlier?
For the heat haze one could just ‘mark’ part of the screen (with stencil? mask? map? Z-fail?) as affected by the heat and distort that area with a noise normal map just like in many popular water shaders (or some clever sin/cos function), or am I missing some important step?
For the stealth field it should be more or less the same, but using the eye normal of the hidden model, not just noise.
I would expect a framework for making custom effects to be capable of producing effects like these. I don’t want to say that this effect should be included and run out of the box with some makeMyGameLookAwesome(True) call, but it would be nice if the framework could make it at least no harder than it is now. Setting up another camera and scene ain’t that hard, and it’s OK to do it manually if it’s needed for some custom effect.
Anyway, good job on the rest; your ‘old monitor’ shader looks much better than my hack with a texture pattern.
The main issue with distortion effects is that the distortion texture is, in both cases, animated and changes from frame to frame. So you need to render the scene without the distorting objects to get the image behind them, and then render the needed objects separately with special textures or special shaders to get the distortion image, plus a depth texture for the comparison.
Mm, in the use case of making walls translucent around the character, yes.
I’d forgotten about this case. My main goal for Cutout was to make a filter that can be used to generate animated black bars. This is very useful e.g. for entering/exiting a pause screen: the bars can start at the top/bottom of the screen, growing and fading in (to partial translucency, covering some area at the top/bottom edges) when pause is activated, and the reverse (shrink and fade out) when it is deactivated. Aside from Cutout itself, the rest is a matter of writing a suitable LerpFunc or an update task that dynamically updates the boundingBox and strength parameters.
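For illustration, here is a rough sketch of such an update task. The parameter names (shape, boundingBox, smoothingRadius, strength) follow the setCutout() call used later in this thread; I’m assuming that calling setCutout() again each frame reconfigures the filter, and that boundingBox is (left, right, bottom, top) in screen coordinates with the area outside the box darkened:

    from direct.task.Task import Task

    BAR_TIME = 0.4   # seconds for the bars to finish growing

    def animatePauseBars(task):
        # "filters" is the existing CommonFilters/FilterPipeline instance.
        t = min(task.time / BAR_TIME, 1.0)
        barHeight = 0.15 * t                 # each bar covers 15% of the screen when fully grown
        filters.setCutout(shape="rectangle",
                          boundingBox=(0.0, 1.0, barHeight, 1.0 - barHeight),
                          smoothingRadius=0.01,
                          strength=0.75 * t)  # fade in to partial translucency
        return Task.cont if t < 1.0 else Task.done

    taskMgr.add(animatePauseBars, "pause-bars")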
The “see through walls” feature was something someone once requested on the forum, so I made the cutout filter general enough to make that possible as well. But after the filter was done, I promptly forgot about this use case.
The heat distribution in the example, if I understood correctly, is created by a particle system. This procedurally generates a dynamic distortion texture, by rendering the particle effect using the other camera. It also produces a depth texture for the heat particles, which is used for occlusion testing in the heat shader (the distortion becomes zero in fragments where the heat particles are behind objects in the main scene).
It is true that it is possible to include just the shader in CommonFilters, but without the logic to create the heat texture (and its corresponding depth texture), it is not very useful. At the very least, it requires comprehensive documentation on how to set up the needed additional logic - which I think is most natural to provide in the form of a tutorial example program, as in ninth’s original.
The aesthetic question is whether it is cleaner to have such a “partial” implementation in CommonFilters (providing the rest of the code in a tutorial), or to package all of the components into a tutorial (in which case the effect will be missing from CommonFilters, making it more likely that users incorrectly conclude that this effect is not possible).
Yes, that is correct.
The extra setup here is needed because the stealth effect needs to see another copy of the scene with the hidden object removed, so that it knows what goes behind it. This is pretty simple to set up externally, but it cannot be done in the filter.
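To give an idea of that external setup, here is a minimal sketch using standard Panda3D camera masks; stealthObject stands for whatever node should be hidden, and the resulting texture would be fed to the stealth-field shader (this is just the idea, not the exact code from ninth’s example):

    from panda3d.core import BitMask32

    HIDDEN_MASK = BitMask32.bit(5)    # an arbitrary bit reserved for this effect

    # Offscreen buffer holding "the scene as seen without the stealth object".
    buf = base.win.makeTextureBuffer("behind-stealth",
                                     base.win.getXSize(), base.win.getYSize())
    bgCamera = base.makeCamera(buf, lens=base.camLens)
    bgCamera.reparentTo(base.cam)     # follow the main camera exactly

    # The background camera only draws bit 5; hiding that bit on the stealth
    # object removes it from this camera while the main camera still sees it.
    bgCamera.node().setCameraMask(HIDDEN_MASK)
    stealthObject.hide(HIDDEN_MASK)

    behindTex = buf.getTexture()      # sample this in the stealth-field shader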
(It seems ninth ninja’d me on this! Just when I was about to hit submit… )
Hmm. Now that you mention it, isn’t that what many CommonFilters are?
(Particularly SSAO, Bloom, and now SSLR and maybe the antialiaser.)
I agree, it shouldn’t be harder than it is now.
The code implementing a custom filter may look slightly different, but that’s because it needs to tell FilterPipeline (the new API for CommonFilters) how the filter interacts with the other filters, so that FilterPipeline can make them play nicely together.
Unfortunately this does make the filter implementation slightly longer, but it provides a new advantage: any custom filter can work together with any of the filters that already exist in CommonFilters - which was not possible before (short of hacking them into CommonFilters, making it collapse into an unmaintainable mess in the process).
One of the aims in providing lots of ready-made example filters with the new CommonFilters is to document the features of the new API (i.e. show practical examples of how to use it to make a filter that requires feature X).
Thanks. The method is actually simple: use texpix (size of one pixel in texture coordinate units) to calculate the pixel position (x,y) of the current fragment, and then just index into an array of tint colors with x mod 3. The only complication is that the most reliable, but ancient arbfp1 profile does not support indexing into arrays with arbitrary ints, so I had to use a switch statement instead. The darkening of alternate lines works similarly (using y).
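As a plain-Python restatement of that logic (the names here are mine, not the shader’s), it amounts to something like the following; in the actual Cg code the final lookup is the switch statement mentioned above, because arbfp1 cannot index an array with an arbitrary int:

    def crtTint(u, texpixX, tints):
        """u: texture x coordinate of the fragment, texpixX: width of one pixel
        in texture coordinate units, tints: the three phosphor tint colors."""
        x = int(u / texpixX)       # pixel column of the current fragment
        phase = x % 3
        if phase == 0:             # in the shader: a switch/if-chain instead of tints[phase]
            return tints[0]
        elif phase == 1:
            return tints[1]
        else:
            return tints[2]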
And thanks for the idea and the colors - extracted from crt.png in your pyweek entry
Do you mean depth-of-field (DoF) or something else?
I think extreme bokeh “light spots” (where one spot can have a diameter of over 100 pixels) are better represented by additively blended sprites (which gives the extra advantage of easily specifying the iris shape), but for small or moderate amounts, I’d go for DoF (assuming a circular iris for simplicity).
I’m planning to implement DoF at some point, but probably after 1.9, and I can’t make any promises at the moment. The approach described in the paper I linked above (by Kass et al. from Pixar and UC Davis, maybe from 2006-2007) currently seems the most promising real-time DoF technique. It requires a tridiagonal linear equation system solver (an algorithm for this is also explained in the paper) - overall it’s somewhat complicated, but should be doable.
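Just to illustrate what a “tridiagonal solve” means here (the paper describes its own GPU-friendly formulation; this is only the classic serial Thomas algorithm on the CPU, included for reference):

    def solveTridiagonal(a, b, c, d):
        """Solve a tridiagonal system: a = sub-diagonal (a[0] unused),
        b = main diagonal, c = super-diagonal (c[-1] unused), d = right-hand side."""
        n = len(d)
        cp = [0.0] * n
        dp = [0.0] * n
        cp[0] = c[0] / b[0]
        dp[0] = d[0] / b[0]
        for i in range(1, n):                       # forward elimination
            denom = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / denom if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
        x = [0.0] * n
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):              # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x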
Yes, I mean DoF with Bokeh.
Just as info: I’ve seen several implementations of this effect in the net. For example, GLSL on Blender Game Engine: blenderartists.org/forum/showthr … 8update%29
This filter looks very useful as a source of ideas. Especially the blobs generated by bright light sources look attractive. It’s a good point by the author of that filter that the blobs make bokeh look distinctive, and without them it looks just like a regular blur. So, maybe we should queue up a bloom preprocessor for DoF, too
The code seems surprisingly simple. There must be some corner cases that the algorithm can’t do correctly…?
Hmm. Looking at the first example render, there seems to be a slight “fringe” on top of the head, and at the “balcony” (or whatever it is - it’s out of focus ) at the left. At the edge of the “balcony”, the yellow color bleeds where it shouldn’t, and at the top of the head, the blurring stops about two pixels before it reaches the head.
If I read the code correctly, it seems the blur size is approximated using the CoC (circle of confusion) at the fragment being rendered (which is easily available in a shader), instead of spreading each fragment by its own CoC (which would be physically correct). This might explain the artifacts. Bleeding of out-of-focus background objects onto in-focus midground objects (like at the balcony in the picture) is a typical issue of many realtime DoF algorithms.
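In pseudo-Python, the approximation I’m describing would look roughly like this (a toy gather kernel on plain arrays; the real shader of course works on textures, and bounds handling is omitted):

    def approxDofGather(color, coc, x, y):
        """Blur whose radius is taken from the CoC of the *output* pixel -
        cheap, but not the physically correct 'spread each source by its own CoC'."""
        r = int(coc[y][x])
        if r == 0:
            return color[y][x]
        acc = [0.0, 0.0, 0.0]
        n = 0
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                px = color[y + dy][x + dx]      # no bounds checking; illustration only
                acc = [a + c for a, c in zip(acc, px)]
                n += 1
        return [a / n for a in acc]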
Of course, classically it is thought that performing the CoC-based spreading correctly would require a scatter type of computation, instead of a gather, which shaders do. To some extent, it is possible to emulate scatter as gather, but this is typically very inefficient.
[size=70]Scientific computing terminology. Roughly speaking, a scatter computation answers the question “where does this data go?” (with possibly multiple target locations updated by one data item), and a gather answers “what data goes here?” (from possibly multiple source locations). From a parallel computing viewpoint, scatter is a disaster, because it requires write locking to ensure data integrity (so that all updates to the same data item are recorded correctly).
(It is well known that as the number of tasks increases, locking of data structures quickly becomes a bottleneck. For proper scalability, lock-free approaches are required.)
Gather is efficient, because with the additional rule that the computation kernel (shader) cannot modify its input, read locking is not needed (no race conditions). Because each gather task writes only to its own target data item, write synchronization is not required, either, and all the gather tasks can proceed in parallel.
This I think is the underlying reason for using the gather model for shaders, aside from the other useful property that if one wants to exactly fill some pixels, it is best to approach the problem from the viewpoint of “what goes into this pixel” (rather than “where does this data go” and hope that the data set hits all the pixels).[/size]
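A toy example of the terminology, for the curious (made-up data, nothing to do with the actual shaders):

    src = [3.0, 1.0, 4.0, 1.0, 5.0]
    target = [0, 0, 2, 1, 2]            # where each input item wants to go

    # Scatter: outputs are shared between writers, so parallel execution
    # would need write locking.
    outScatter = [0.0, 0.0, 0.0]
    for i, value in enumerate(src):
        outScatter[target[i]] += value

    # Gather: each output is written by exactly one task; only reads are shared.
    outGather = [sum(v for i, v in enumerate(src) if target[i] == j)
                 for j in range(3)]

    assert outScatter == outGather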
The main new idea in the approach of Kass et al. is precisely that they recast the CoC scatter problem in a new light (so to speak). The diffusion equation models the spreading of heat in a continuous medium. The heat conductivity coefficient (which may be a function of space coordinates) represents the local diffusivity at each point - which is a lot like the local CoC for each fragment.
The unknown quantity to be computed is the temperature field - or in this case, the pixel color (independently for R, G and B channels). Solving the diffusion equation exploits the physics/mathematics of heat diffusion to perform a scatter computation, while requiring only gather operations.
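As a toy 1D illustration of that idea (a single explicit diffusion step on one color channel, not the implicit tridiagonal solve the paper actually uses; limiting the flux by the smaller of the two neighboring CoC values is my own simplification):

    def diffuseStep(color, coc, dt=0.2):
        """One explicit 1D diffusion step where the per-pixel CoC plays the role
        of heat conductivity; an in-focus pixel (coc == 0) acts as an insulator."""
        n = len(color)
        new = list(color)
        for i in range(1, n - 1):
            kLeft = min(coc[i], coc[i - 1])
            kRight = min(coc[i], coc[i + 1])
            new[i] = color[i] + dt * (kLeft * (color[i - 1] - color[i]) +
                                      kRight * (color[i + 1] - color[i]))
        return new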
There have been other approaches to solve this, but at least according to Kass et al., there have always been limitations, either with computational efficiency, or with the ability of the algorithm to perform variable width blur. (Another useful look at the history of different approaches to realtime DoF is given in the GPU Gems article I linked in a previous post.)
(The other problem in DoF is the translucency of thin objects in the out-of-focus foreground, which requires an extra camera.)
In conclusion, thanks for the link! I’ll see what I can do
EDIT: Oops, looks like this post happens to coincide with your post saying that you wanted to take a break from postprocessing. Sorry about that.
I’ve been working on integrating this. The design seems great; I think you’ve done an outstanding job on this, and there aren’t any major issues I can see with your implementation. (I’ve had to make some changes, but they were mostly style points.)
I’m still doing work on this, but there are a few things I still wanted to discuss with you.
One idea I had was to reduce the complexity of onSynthesizeCompositor() a little bit. Particularly, my idea was to allow people to do this for the final compositing shader:
MY_CODE = """
const uniform float4 %(input:myShaderInput)s;
float %(func:myHelper)s() {
return %(input:myShaderInput)s * tex2D();
}
// Main function, as called by the compositing shader
void %(func:main)s(inout float4 pixcolor) {
pixColor.rgb *= %(func:myHelper)s();
}
"""
class MyFilter(Filter):
compositingTemplateCg = MY_CODE
I figured, if they’re going to have to sprinkle %()s markers in their shader source anyway because of name mangling, why not let the system handle the mangling automatically? We would need to create a class implementing __getitem__ (i.e. mimicking a dict) and pass it to the right side of the % operator, and that __getitem__ would perform different mangling depending on whether the key has a func: prefix, input:, tex:, or whatever.
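A very rough sketch of what I mean (the class name and the exact mangling schemes here are made up; only the mechanism matters):

    class ManglingNamespace(object):
        """Passed as the right-hand side of %, mangles names based on the key prefix."""
        def __init__(self, filterName):
            self.filterName = filterName

        def __getitem__(self, key):
            kind, _, name = key.partition(":")
            if kind == "func":
                return "%s_%s" % (name, self.filterName)      # "main" gets the same treatment
            elif kind == "input":
                return "k_%s_%s" % (self.filterName, name)
            elif kind == "tex":
                return "tex_%s_%s" % (self.filterName, name)
            raise KeyError(key)

    # MY_CODE % ManglingNamespace("MyFilter") would then yield the mangled source.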
I’m not convinced that this is better, though, since the source code still looks ugly, and it looks all the more magical. What are your thoughts on this?
Also, I’m still not really convinced that we should have a predetermined set of stage strings that people place their shaders into. I can see the general model, but I think it’s very quickly going to be inadequate. I haven’t finished fully wrapping my head around the code yet, but what are your thoughts on possibly doing away with the stage name sorting and just using user-specified sorting entirely? The user probably has some sort of idea on what order he wants the things to be applied in.
(Note that I’m not asking you to change anything on your side at this point, since I’ve already made various changes to the source code on my computer. I’m willing to make any changes that might be necessary.)
Finally, and this is more of a thought than a question: these extra filters you added are really great, and some of them should most definitely be part of Panda. I’m just not sure if they’re all general-purpose enough to be precanned into Panda. That’s okay, though, since that’s the reason we made this system modular - so that these filters can be made separately from Panda and people can just drop them into their project. We could even have a sample program that includes various interesting filters.
Filters like FXAA definitely have general use, though, and it’d be great to include them and promote them as a feature of Panda3D. There’s clearly a line that should be drawn at some point; I’m just not really sure yet where.
No problem. It just means I’m taking a break from adding new features. Discussion is welcome
As for new features, I think the depth-of-field algorithm of Kass et al. requires some changes to FilterManager. And at least currently, I think I’ll later want to look at solving PDEs using shaders. Given the raw compute power of current GPUs, it looks very interesting even for 3D (voxels), at least with algorithms that are able to take advantage of coalesced memory accesses.
The current version of the postprocessing framework is probably enough for 1.9.0, so this is a good point to pause and think. It’s also refreshing to switch to something else for a while, so that was my primary intention.
(A small correction: on second thought, what I said about the circles of confusion in the previous post might be nonsense. I’ve thought about it some more and now I believe the local diffusivity will affect how much the neighboring pixels will bleed into the current pixel - i.e. it works the wrong way around, just like the simpler implementation. But importantly, the algorithm does what the authors claim: it prevents any color bleed into sharp in-focus midground objects, because pixels with zero circles of confusion act as insulators.
I haven’t yet been able to wrap my head around how to mix this approach and the light blobs (characteristic to bokeh) from the simpler filter posted on blenderartists - if that is even possible. Would be nice to get both features into the same filter. But currently this requires more thinking.)
Anyway, on to the primary topic:
Thanks!
The discussions and your earlier review helped a lot
I think it sounds good, but I’m not sure if that proves anything
It seems conceptually clearer to let the system, as you suggest, handle the name mangling. Also, the prefixes seem like a particularly self-documenting way to tell the system what to do.
But I think you are correct in saying that it appears a bit magical. At least it needs to be documented well, if we go this route. (An immediate question, seeing code like that, is “what prefixes are available and where are they documented?”)
However, my overall impression is that this looks clearer than my original solution.
I think a two-level sorting system in one form or another is necessary, because there are some filters that must begin a new render pass (so that they see the fully up-to-date color texture as their input), and there needs to be some mechanism to tell the system this.
I’m not saying it needs to be stage names, specifically, but to me that solution seems intuitive (which is why I did it that way).
I agree that a default list of stage names quickly becomes inadequate - in fact, I think the current list is already inadequate for adding depth-of-field
There are two important points here. The first is that the multipass compositing process itself is an important new feature, fixing cases like the combination of CartoonInk and BlurSharpen that previously did not work (the blur erased the cartoon outlines, because it did not see them). I’m not sure if anyone actually filed a bug about this, but I think I mentioned this back in 2013
The other is that the outer sort level is an abstraction separate from individual filters. This cannot be solved by a “start new logical stage” flag in Filter, because sometimes two filters that would normally each begin a new logical stage can be included in the same logical stage - because they need the same version of the scene color texture!
LensFlare and Bloom are a practical example of this, when these filters are used to simulate lens glare. Each of them should get the image just before any glare is applied as the input. Thus, if only one of them is enabled, that filter should begin a new render pass, but if both are enabled, only the one that happens to be placed first should begin a new render pass.
This to me suggests that there must be some abstraction, separate from individual filters, that controls the versioning of the scene color texture during the postprocessing sequence. Hence, logical stages.
“Logical stage” and “render pass” are separate concepts, because in many cases - but not always - it is possible to concatenate several logical stages into the same actual render pass. See FilterPipeline._createFilterStages() for the logic that implements this.
Concerning this aspect of the system, the most important thing to keep in mind is that, if the compositing fragment shader of a filter needs to access input pixels other than the pixel currently being processed, then that filter must obtain an up-to-date scene color texture as input to the compositing shader. Such a filter is termed “non-mergeable” (this is indicated by its isMergeable flag having the value False).
At first glance, it would appear non-mergeability means that the filter must begin a new render pass (and hence it must have a sort value of zero), but the technical definition actually is that the filter must get the same version of the scene color texture as other filters that are assigned to the same logical stage (and any sort value is allowed).
“Logical stage” boundaries are basically a formalization of points in the postprocessing sequence where an updated color texture becomes available, and hence a new render pass begins. However, the system reserves the right to optimize by suppressing the creation of a new render pass, if none of the filters (that are currently enabled) in the later logical stage are non-mergeable. Even several render passes will be merged when possible - this is fully automatic, controlled by the value of the isMergeable flag of each of the enabled filters.
This is precisely the origin of the term: mergeable filters, if no non-mergeable filter blocks this from occurring, are concatenated (merged!) to the end of the previous logical stage. The final sort value of merged filters is determined by the combination of the original logical stage and the filter’s sort value. This preserves the global ordering.
A mergeable filter must respect certain limitations: it must not access input pixels (in the scene color texture) other than the one being processed, and it must respect any previous modifications to pixcolor. The latter requirement means that the filter must in a sense add in its effect, instead of completely overwriting previous processing. (Obviously, a black bars filter may completely overwrite some pixels - that does not make the filter non-mergeable, because it does not blindly overwrite the whole image.)
Filters which inherently must overwrite pixcolor (due to algorithmic details) must be declared non-mergeable, and in addition must have their sort value set to zero. (AntialiasFXAA, BlurSharpen, LensDistortion and Pixelization are examples of this.)
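To make this concrete, a hypothetical declaration of such a filter might look as follows (the attribute spellings follow the discussion above, but treat them as an assumption rather than the final API):

    class MyWholeScreenWarp(Filter):
        # Reads neighbouring pixels of the scene color texture, so it cannot
        # be merged into the previous logical stage's render pass...
        isMergeable = False
        # ...and since it overwrites pixcolor instead of adding to it,
        # it must come first in its logical stage.
        sort = 0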
Aside from that, I think it is somewhat tricky to get the render order correct, so I would prefer to err on the side of caution, and provide a sensible default ordering.
This makes the system much easier to use, because one can simply enable effects without being concerned about their ordering. If an advanced user wants to do something exotic, the system allows overriding everything precisely for this reason.
One specific example of this trickiness is that volumetric lighting must be applied before lens distortion, because the volumetric scattering occurs in the scene - so if the lens distorts the image it receives, those light rays should appear bent. However, lens flare must be applied after that, so that the ghosts will be placed on straight lines - these lines must not be bent. The lens may also apply some chromatic distortion - so, if there is a desaturation filter, it should come after any lens effects. And if a CRT display is being simulated, Scanlines should be placed near the end of the filter sequence, after almost everything else. Except GammaAdjust, which needs to come last.
As another example, I think it is not immediately self-evident (unless you have spent some time thinking specifically about things like this) that the sequence FXAA-CartoonInk-LocalReflection-AmbientOcclusion-VolumetricLighting must be in that exact order. Reflections are important to do before AO, so that the result of AO won’t get garbled by reflection, which does not account for AO. VolumetricLighting must come after AO, because it is implemented as a 2D radial blur which has no concept of depth (although it is physically true that VL occurs in the volume, while AO occurs in the corners at the far boundary of the volume, and hence VL is always in front. In this particular case, this happens to suggest the same render order as the argument based on implementation details, but I see no reason why this property would generally hold; generally, one must know the implementation details).
The default ordering is full of considerations like this. I’m not saying that the stage name system is perfect, or actually any good, but I think we need something of equivalent functionality that automatically does something similar, in order to spare the users the trouble of studying the whole filter sequence, and repeating the considerations that have led to the current default ordering (and these exact starting points of new logical stages).
It would be theoretically possible to just explain everything in the manual and leave the ordering fully up to the users (with some mechanism of indicating desired logical stages (names, numbers, whatever), and which logical stage each filter maps to), but I think the problem is fairly complicated, and hence this runs a significant risk of getting the solution wrong in a significant fraction of user software. Basically, only technical types, and of them only those with enough time to study the whole postprocessing sequence, would be able to use the system correctly. I think an important part of the appeal of CommonFilters was that it was simple to use - part of the goal here is to have this “feature” carry over to the new FilterPipeline.
(Note that this is separate from the two-level sorting, which enables versioning the scene color texture - that feature is needed regardless of whether we specify a default order or not, and what kind of mechanism we use to implement a default order if we do specify one.)
Maybe I’m overly pessimistic?
Ok. Thanks
Let me put it this way: there’s no KitchenSink filter in the package simply because so far I haven’t been able to figure out what the users would expect it to do.
Seriously, though, I realize that we cannot provide everything by default. Nor would we want to - for example, Audacity or GIMP can look pretty intimidating once you install the plugin packs, because there is too much choice.
I think that of the new filters, at least FXAA, LocalReflection, LensFlare, Desaturation and Vignetting are generally useful. Vignetting is sort of trivial to do, but many games use the effect (and noise) to achieve an authentic film look, so it would be nice to have out of the box, just to eliminate one practical hurdle.
As for the new cartoon inkers, my motivation behind this whole thing was that I found the old inker lacking. It did not have any antialiasing, so I think the antialiased lines are an improvement - and they bring the render quality slightly closer to that of the Blender inker (although not all the way there). I think these are generally useful for the segment of the user base that uses the cartoon shading features.
FilmNoise is currently useless, as it really needs a better RNG. It works on some GPUs, but looks awful on others. We can have a better RNG the moment we move the system over to GLSL, though.
LensDistortion is pretty specific. The barrel effect is sometimes useful, but maybe not very often. Also, RGB-based three-component chromatic distortion does not look good beyond very small amounts if there are sharp edges in the scene. Maybe this is the least useful of the new filters?
Cutout and Scanlines are corner cases, but personally I would prefer to err on the side of including them. Sure, similar effects could be quickly hacked together by users that need them, but these try to go beyond the bare essentials - they are configurable precisely to cover different use cases, and to provide a standard implementation that has a reasonable feature set.
There is also a technical reason behind the large number of examples: the existing filters illustrate how to use different features of the API better than plain documentation could. This would need better documentation on which filters demonstrate which features, though.
But that said, I agree on the reasoning behind the modularity. Pluggability of custom filters is one of the major new features of this system.
A sample program sounds nice. Maybe we could move the less generally useful filters into a set of samples, as you suggest.
I’m also tempted to rewrite the heat haze and stealth field examples using this new system. Once I get around to it, and unless wezu or ninth does it first
I will rewrite the heat haze/stealth field/distortion for the new system, but after 1.9 is official. I want it to be universal and usable for all sorts of distortion, be it an explosion shockwave, refracting glass, water, bullet trails, force fields, predator camo or any ‘magic’ effects… and because it will do all of that, it will most likely be suitable only for my dubious purposes.
As for what to include as part of the SDK - there are already some very specific things in the ‘old’ common filters (and some more stuff in the direct dir - a demo or two for mirrors and shadows, parts of a level editor, IIRC). If that stays, then all the new filters should ship with the SDK. The logical alternative would be not to include any filters at all and ship them only in the samples.
Extract both this and CommonFilters190_with_retro.zip into the same directory.
In the terminal, go to that directory, and run python -m SSLRExample
The code should be fairly well commented; if there is something that requires explanation, post here and let me know. SSLRExample.zip (626 KB) SSLRExample_code_only.zip (3.19 KB)
I am digging up this old thread as I am looking for a good solution to project volumetric light.
The Volumetric Lighting filter built into Panda3D’s Common Image Filters behaves somewhat strangely, producing only a faded copy of the illuminated object, as in the screenshot below:
In the example above, I used code based on this post:
Meanwhile, I’m looking for something like this:
In this thread, I noticed a lot of work done with the new filters, including filters producing effects similar to what I am looking for. For example:
(and several more similar versions)
It seems there was even a concept of integrating new filters with the main Panda3D code, but as I understand it, this idea is dead?
I tried to download the latest version along with SSLRExample:
First of all, it looks like this code was still written for Python 2.7 (judging by the old print statements, for example), but I dealt with that and updated it. Still, after launching it, I get a lot of warnings/errors (I quote all of this at the end of my post) and the visual effect is very disappointing, as in the screenshot below:
One thing: I’m working on macOS and I know there are some shader restrictions with macOS (though generally getSupportsBasicShaders() shows me True).
Can anyone help me run this new filter code? Alternatively, is there any other solution to achieve the volumetric light effect I am looking for?
/Users/miklesz/PycharmProjects/Demo2023/venv/bin/python /Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/SSLRExample.py
Using deprecated DirectStart interface.
Known pipe types:
CocoaGraphicsPipe
(all display modules loaded.)
Warning: pandac.PandaModules is deprecated, import from panda3d.core instead
:shader(warning): Shader::make() now requires an explicit shader language. Assuming Cg.
:shader(warning): Shader::make() now requires an explicit shader language. Assuming Cg.
:shader(warning): Shader::make() now requires an explicit shader language. Assuming Cg.
:shader(warning): Shader::make() now requires an explicit shader language. Assuming Cg.
:shader(warning): Shader::make() now requires an explicit shader language. Assuming Cg.
Caught exception while setting filter; details follow.
Traceback (most recent call last):
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/SSLRExample.py", line 243, in <module>
t = SSLRExample()
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/SSLRExample.py", line 118, in __init__
filterok = self.filters.setAmbientOcclusion()
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/CommonFilters.py", line 76, in compatibilityGlue
args[0].reconfigure() # args[0] = self (CommonFilters)
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/FilterPipeline.py", line 950, in reconfigure
self.stages[-1].reconfigure()
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/FilterStage.py", line 635, in reconfigure
f.attachStage( filterStage=self )
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/Filter.py", line 849, in attachStage
self.onAttachStage()
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/AmbientOcclusion.py", line 146, in onAttachStage
self.interQuads.append(self.pipeline.manager.renderQuadInto(colortex=self.textures["ssao1"], div=2, align=2))
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/FilterManager.py", line 246, in renderQuadInto
buffer = self.createBuffer("filter-stage", winx, winy, texgroup, depthbits)
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/FilterManager.py", line 278, in createBuffer
winprops.setSize(xsize, ysize)
TypeError: 'float' object cannot be interpreted as an integer
Continuing.
:shader(warning): Shader::make() now requires an explicit shader language. Assuming Cg.
Caught exception while setting filter; details follow.
Traceback (most recent call last):
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/SSLRExample.py", line 243, in <module>
t = SSLRExample()
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/SSLRExample.py", line 119, in __init__
filterok = self.filters.setBloom(size="large")
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/CommonFilters.py", line 76, in compatibilityGlue
args[0].reconfigure() # args[0] = self (CommonFilters)
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/FilterPipeline.py", line 922, in reconfigure
stage.reconfigure()
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/FilterStage.py", line 635, in reconfigure
f.attachStage( filterStage=self )
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/Filter.py", line 849, in attachStage
self.onAttachStage()
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/AmbientOcclusion.py", line 146, in onAttachStage
self.interQuads.append(self.pipeline.manager.renderQuadInto(colortex=self.textures["ssao1"], div=2, align=2))
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/FilterManager.py", line 246, in renderQuadInto
buffer = self.createBuffer("filter-stage", winx, winy, texgroup, depthbits)
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/FilterManager.py", line 278, in createBuffer
winprops.setSize(xsize, ysize)
TypeError: 'float' object cannot be interpreted as an integer
Continuing.
:shader(warning): Shader::make() now requires an explicit shader language. Assuming Cg.
Caught exception while setting filter; details follow.
Traceback (most recent call last):
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/SSLRExample.py", line 243, in <module>
t = SSLRExample()
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/SSLRExample.py", line 120, in __init__
filterok = self.filters.setCutout(shape="rectangle", boundingBox=(-0.1, 1.1, 0.1, 0.9), smoothingRadius=0.01, strength=0.75)
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/CommonFilters.py", line 76, in compatibilityGlue
args[0].reconfigure() # args[0] = self (CommonFilters)
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/FilterPipeline.py", line 922, in reconfigure
stage.reconfigure()
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/FilterStage.py", line 635, in reconfigure
f.attachStage( filterStage=self )
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/Filter.py", line 849, in attachStage
self.onAttachStage()
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/AmbientOcclusion.py", line 146, in onAttachStage
self.interQuads.append(self.pipeline.manager.renderQuadInto(colortex=self.textures["ssao1"], div=2, align=2))
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/FilterManager.py", line 246, in renderQuadInto
buffer = self.createBuffer("filter-stage", winx, winy, texgroup, depthbits)
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/FilterManager.py", line 278, in createBuffer
winprops.setSize(xsize, ysize)
TypeError: 'float' object cannot be interpreted as an integer
Continuing.
:shader(warning): Shader::make() now requires an explicit shader language. Assuming Cg.
Caught exception while setting filter; details follow.
Traceback (most recent call last):
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/SSLRExample.py", line 243, in <module>
t = SSLRExample()
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/SSLRExample.py", line 152, in __init__
filterok = self.filters.setDesaturation(mode="bandpass", tintColor=finalTintColor, strength=0.95)
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/CommonFilters.py", line 76, in compatibilityGlue
args[0].reconfigure() # args[0] = self (CommonFilters)
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/FilterPipeline.py", line 922, in reconfigure
stage.reconfigure()
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/FilterStage.py", line 635, in reconfigure
f.attachStage( filterStage=self )
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/Filter.py", line 849, in attachStage
self.onAttachStage()
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/LocalReflection.py", line 564, in onAttachStage
self.interQuads.append(self.pipeline.manager.renderQuadInto(colortex=self.textures["sslr1"], div=2, align=2))
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/FilterManager.py", line 246, in renderQuadInto
buffer = self.createBuffer("filter-stage", winx, winy, texgroup, depthbits)
File "/Users/miklesz/PycharmProjects/Demo2023/CommonFilters190/FilterManager.py", line 278, in createBuffer
winprops.setSize(xsize, ysize)
TypeError: 'float' object cannot be interpreted as an integer
Continuing.
Process finished with exit code 0
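For what it’s worth, all of the tracebacks above end at the same call, and under Python 3 the usual cause of this TypeError is float division somewhere upstream of createBuffer() - presumably the div=2 downscaling seen in renderQuadInto(). A likely (though untested here) remedy is to cast the sizes to int inside FilterManager.createBuffer:

    winprops.setSize(int(xsize), int(ysize))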