Filters break multisamples

It seems like using filters completely disables multisampling. I’ve tested on multiple GPUs, different OSes and a few versions of Panda, so it is likely a bug inside Panda, not a driver bug.
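For reference, here’s a minimal sketch of what I’m testing with (the prc values are just the usual way to request multisample antialiasing, and the model is the stock sample one):

from panda3d.core import loadPrcFileData, AntialiasAttrib
# Request a multisampled framebuffer before the window opens
loadPrcFileData('', 'framebuffer-multisample 1')
loadPrcFileData('', 'multisamples 4')

import direct.directbase.DirectStart
from direct.filter.CommonFilters import CommonFilters

panda = loader.loadModel('panda')
panda.reparentTo(render)
render.setAntialias(AntialiasAttrib.MMultisample)

# Edges are smooth until the filter is enabled:
filters = CommonFilters(base.win, base.cam)
filters.setCartoonInk()

run()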

I was going with a cartoon style for this project, so I would like to use this effect, but not being able to use multisampling is a price I might not be able to pay. Especially with cartoon inking, you get pixels that just pop in and out during animation.

I’m not sure, but depending on the bug, a custom shader might do no better.

On a side note, can you guys add an extra argument to the constructor to set the color of the inking?

Yeah, I’m aware of the multisampling bug. It happens with any postprocessing filter, and it’s due to the fact that the buffer that the main scene is being rendered into doesn’t have multisamples. I’m looking into it.

I’ve just implemented coloured cartoon inking; you can now pass a “color” argument to setCartoonInk, the default value being (0, 0, 0, 1).
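A quick usage sketch of the new argument as described above (the red colour is just an example):

filters = CommonFilters(base.win, base.cam)
# Pass an RGBA tuple; omitting it keeps the old black default
filters.setCartoonInk(color=(1, 0, 0, 1))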

OK. Looking forward to it.

Wow, that was fast.

I may be asking too much, but can you make it possible to tell the cartoon inking shader to respect the distance from the camera (thinner lines as you move away)? It could be another boolean argument, so it wouldn’t break old code.

Another thing that always seemed odd to me is the bloom filter taking a string argument for the size of the bloom instead of a float. You could allow passing a float instead of a string and just check the type of the argument in the module. Again, this wouldn’t break any old code.

Anyway, thanks.

Here’s what I have gathered about the issue. FilterManager doesn’t pass the multisample settings along to the buffer, but when I change that, the buffer fails to open.
I found out that that’s because Panda refuses an FBO because of the requested multisamples, refuses a pbuffer because it’s not resizable, and then refuses a ParasiteBuffer because it doesn’t support auxiliary colour targets.
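For the curious, the change amounts to something like this when creating the buffer (a sketch, not FilterManager’s actual code; and as noted, the resulting buffer then fails to open):

from panda3d.core import FrameBufferProperties, WindowProperties, GraphicsPipe

fbprops = FrameBufferProperties()
# Forward the host window's multisample count instead of leaving it at zero
fbprops.setMultisamples(base.win.getFbProperties().getMultisamples())
winprops = WindowProperties.size(base.win.getXSize(), base.win.getYSize())
buf = base.graphicsEngine.makeOutput(
    base.pipe, 'filter-buffer', -1,
    fbprops, winprops, GraphicsPipe.BFRefuseWindow,
    base.win.getGsg(), base.win)
# buf comes back as None when every buffer type is refused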

Now, there appears to be some experimental code in the codebase that should add support for multisample FBOs. I can’t seem to get it to work correctly, though.

Pbuffers can theoretically be used with multisampling, but not when rendering directly into a texture; the data needs to be copied into a texture instead. However, pbuffers don’t seem to work right on OSX.
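In Panda terms, that’s the difference between the two render-to-texture modes (a sketch; assumes buf is an already-created buffer):

from panda3d.core import Texture, GraphicsOutput

tex = Texture()
# RTMBindOrCopy renders straight into the texture, which multisampling
# can't do; RTMCopyTexture resolves the framebuffer and copies it into
# the texture each frame instead.
buf.addRenderTexture(tex, GraphicsOutput.RTMCopyTexture)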

Now, using a ParasiteBuffer for postprocessing effects is probably not even such a bad idea, especially because it shares the properties of the host window, which is what we’ll be using the output for anyway. I can correctly use a ParasiteBuffer with multisampling and the bloom filter, which doesn’t require an auxiliary colour target.

It might work just fine if we give the main window an auxiliary colour target, so that the ParasiteBuffer will be able to take advantage of it. But the FilterManager can’t do that, because it doesn’t control the creation of the main window.

Maybe the easier solution (until we fix the buffer problems) would be to implement a cartoon filter that uses the depth buffer instead of the normals to calculate the results.
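Getting the depth buffer into a texture is the easy part; the edge-detect shader itself is what would need writing. Roughly (a sketch, shader omitted):

from panda3d.core import Texture
from direct.filter.FilterManager import FilterManager

manager = FilterManager(base.win, base.cam)
colorTex = Texture()
depthTex = Texture()
# Render the scene into textures; 'quad' is the fullscreen card
# that a depth-based edge-detect shader would be applied to
quad = manager.renderSceneInto(colortex=colorTex, depthtex=depthTex)
quad.setShaderInput('depth', depthTex)
# quad.setShader(...) goes here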

Because of how it works, a floating-point argument wouldn’t make a lot of sense. When you choose larger sizes, the code starts adding more render passes and different behaviour to make the effect more optimal and to reduce the artifacts that appear when upscaling it.
I agree that a string was a bad choice for something that should have been an integer or something of the sort.
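For reference, the existing interface looks like this (the size values are the current presets, not new behaviour):

filters = CommonFilters(base.win, base.cam)
# size picks a preset pipeline ("small", "medium" or "large");
# larger presets add extra passes rather than scaling a float
filters.setBloom(size='medium')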

I don’t know why Panda doesn’t do these things. Maybe someone else who is familiar with the source and the low-level stuff could help you. I wish you luck.

Sorry, is this a reply to how to make the line thickness depend on the camera distance? If so, maybe someone else will write a new shader, I’m not a shader programmer. I thought the current filter used the depth buffer already.

Then you could also allow passing an integer instead of the current string argument.

BTW, I’m getting this message when using the volumetric lighting filter.

:display(error): Could not get requested FrameBufferProperties; abandoning window.
  requested: depth_bits=1 color_bits=32 alpha_bits=8 back_buffers=1 force_hardware=1
  got: depth_bits=24 color_bits=32 alpha_bits=8 accum_bits=64 force_hardware=1

Buildbot Panda, Windows 7, 64-bit.

Any update on this? It sounded like rdb had found a solution with pbuffers.

No, I’ve been occupied with other things and am too swamped right now to attempt an overhaul of any buffer code.

Pbuffers aren’t resizable, which is a problem because the main window is able to resize, and the buffer should follow the size of the main window.

You might be able to get it to work using a ParasiteBuffer though, with a hack or two. (Note that pbuffer != ParasiteBuffer.)

I don’t know how pbuffers/parasite buffers work, but maybe it wouldn’t be too evil to just delete the buffer and create a new one when the window is resized?

Anyway, really waiting for this one :smiley:

I don’t think this is caused by the same issue: volumetric lighting doesn’t seem to use any smoothing the way it does in other games.

import direct.directbase.DirectStart
from panda3d.core import *
from direct.filter.CommonFilters import *

base.setBackgroundColor(0, 0, 0, 1)

filters = CommonFilters(base.win, base.cam)

# Something to occlude the light source
panda = loader.loadModel('panda')
panda.reparentTo(render)
panda.setPos(0, -4, -8)

# A flat yellow sphere to serve as the light caster
sun = loader.loadModel('smiley')
sun.reparentTo(render)
sun.setTextureOff(1)
sun.setColorScale(1, 1, 0, 1)

filters.setVolumetricLighting(caster=sun, decay=0.8, density=1.0, numsamples=32)

run()

And this is printed in the console:

:display(error): Could not get requested FrameBufferProperties; abandoning window.
  requested: depth_bits=1 color_bits=32 alpha_bits=8 back_buffers=1 force_hardware=1
  got: depth_bits=24 color_bits=32 alpha_bits=8 accum_bits=64 force_hardware=1

Using more samples is too expensive and is still bad at closer angles.

You can apply the effect in multiple passes; this will produce a much better, smoother result.

Can you assure me this is expected behaviour? I thought such a shader smooths the result itself. I’ve never needed multipass rendering, and I’ve heard it’s not so good in Panda anyway. But what would be the difference between doing that and just increasing the samples? I think it would be just as slow.

It’s the expected behaviour of any implementation of this effect. I’m not sure how you imagine said “smoothing” would occur besides with more samples or more passes.

It won’t be as slow. If you use two passes of 8 samples each, then the result will look roughly the same as if you used one pass with 64 samples, while only being as expensive as one pass with 16 samples.
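The arithmetic behind that, as a toy illustration (treat the numbers as a rule of thumb; the exact visual result depends on the decay and density settings):

# Each pass smears the previous pass's output, so quality compounds
# multiplicatively while cost only adds up.
samples_per_pass = 8
passes = 2
effective_samples = samples_per_pass ** passes  # 8 ** 2 = 64
total_cost = samples_per_pass * passes          # 8 * 2 = 16 lookups per pixel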

Hm, I’ve never used multipass, so I don’t know.
Well, if it’s not a limitation of Panda’s shader, then I have no complaints.

I’m not a shader programmer, but I assumed something like a motion blur with an image transform for a perspective effect could work.

The technique already uses a zoom-blur effect; you just have to use enough samples for good quality if you use a single pass.

It’s been a while since my original post.
I didn’t really get the situation: are there no ideas how to fix the issue, or simply no time currently? I hope it’s not the former; I still hope to be able to use filters properly.

Which issue are you referring to exactly?

The original issue, topic title.

Then no, I haven’t gotten around to looking into it. It’s likely a complicated issue that will take a significant amount of work to fix.

In the meantime, you can use the workaround of enabling auxiliary render targets on the main window and applying a fix to FilterManager to set the framebuffer properties appropriately.

Alternatively, some people on the forums have pointed toward antialiasing methods implemented as a postprocessing filter; FXAA, I believe.

scratches head

If that’s also a filter, wouldn’t that be like using a program that removes JPEG artifacts and then saving the resulting image as a JPEG again?

Kind of, except it means you can disable multisampling, which may be a significant performance boost (dunno how expensive FXAA is).