Testing shadows from a point light source

Today I decided to test how shadows work with a point light source combined with a relief (normal) map, but I ran into artifacts.

plight.setShadowCaster(True, 512, 512)

This is all I added.
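For reference, a minimal setup around that line might look something like the following; the model path and light position are placeholders rather than details from the original scene:

from direct.showbase.ShowBase import ShowBase
from panda3d.core import PointLight

base = ShowBase()

# Placeholder model; the original test used a relief-mapped column.
scene = base.loader.loadModel("scene.bam")
scene.reparentTo(base.render)

plight = PointLight("plight")
plnp = base.render.attachNewNode(plight)
plnp.setPos(4, 4, 4)
base.render.setLight(plnp)

# The one added line: render depth into a 512x512 shadow map
# (a cube map, since this is a point light).
plight.setShadowCaster(True, 512, 512)

# The auto-shader is needed for per-pixel lighting and shadows.
base.render.setShaderAuto()

base.run()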

I forgot to add: is this a problem on my end, or does the Panda3D shader system (version 1.10.0) simply work this way?


Hmm, interesting. It only seems to happen when the light hits the surface at a very shallow, grazing angle.

I think the exaggerated normal mapping might be at fault here, since it seems to happen on the back of the column surface. Indeed, removing the normal map from the model removes the artifacts as well.

At a guess, it may be that it doesn’t show up without normal mapping because in that case the standard shading hides it: with normal mapping, areas that would normally be dark may become light, and so be visibly affected by shadows.

I think that this is a form of “shadow acne”. If it isn’t already implemented, this article recommends a slope-based bias to reduce this particular artefact. However, I don’t know how current that technique is; perhaps a better way has subsequently been found. Furthermore, I haven’t tried it myself, and so am not in a position to recommend for or against it. (It’s also entirely possible that I’m very mistaken in my diagnosis, of course.)
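For illustration, the usual formulation of such a slope-scaled bias looks something like the following; in practice it would live in the fragment shader, and is written in Python here only to show the arithmetic (the constants are arbitrary):

def shadow_bias(n_dot_l, min_bias=0.0005, slope_bias=0.005):
    # n_dot_l is the cosine of the angle between the surface normal
    # and the light direction; near zero (grazing light) the bias
    # grows toward its maximum, which is where acne is worst.
    n_dot_l = max(0.0, min(1.0, n_dot_l))
    return max(slope_bias * (1.0 - n_dot_l), min_bias)

# The biased comparison a shadow shader would then perform:
# lit = (fragment_depth - shadow_bias(n_dot_l)) <= stored_depth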

An easy way to solve this is to clamp off the shadows based on the surface normal:
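Something along these lines, written in Python to show the logic rather than as actual shader code:

def shadow_factor(shadow_sample, n_dot_l):
    # If the geometric surface normal faces away from the light,
    # force full shadow, regardless of what the shadow map (and
    # the normal map) would otherwise say.
    if n_dot_l <= 0.0:
        return 0.0          # fully in shadow
    return shadow_sample    # otherwise trust the shadow-map result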

However, this leaves a nasty-looking vertical line at the cut-off point (though it could possibly be turned into a smooth transition). But this is arguably partially because of the really exaggerated normal mapping in this case.

Is there a soft-shadows option?
Does Panda3D support cascaded shadow maps?

Neither is supported out of the box, but soft shadows are fairly easy to add, and I’ve been thinking for a while about cascaded shadow maps. Nobody has put in a feature request for either yet, though, so please feel free!

EDIT: Looks like wezu added a feature request for soft shadows.
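As an aside, “soft shadows” in this context usually means percentage-closer filtering (PCF): averaging the shadow test over neighbouring shadow-map texels. A plain-Python sketch of the idea (a real implementation would do this in the fragment shader, and the sampling window here is the simplest possible):

def pcf_shadow(depth_map, u, v, fragment_depth, radius=1):
    # depth_map is a 2D array of stored depths; (u, v) is the texel
    # under the current fragment (indices assumed to be in range).
    # Averaging the binary test over a (2*radius+1)^2 window gives
    # a soft edge instead of a hard one.
    lit = 0
    taps = 0
    for dv in range(-radius, radius + 1):
        for du in range(-radius, radius + 1):
            if fragment_depth <= depth_map[v + dv][u + du]:
                lit += 1
            taps += 1
    return lit / taps  # 0.0 = fully shadowed, 1.0 = fully lit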

I have a radical opinion: I see no future for setShaderAuto(). Instead, it would be better to organize a multi-pass renderer, so that users can easily port OpenGL examples to Panda. This is possible at the moment, but something like a DisplayRegion manager is lacking. It would work much like an FSM: creating a DisplayRegion with, say, the name stage1 would make a stage1 function available, which would be called at its turn. It would also be nice to have a stack for the queue, so that instead of using setBin() one could change the stack directly; for example, placing the GUI render at the beginning of the queue so that it is drawn behind everything else.

We have plans to replace setShaderAuto with a new set of shader generators for 1.11.

What you are suggesting sounds more like a replacement for FilterManager than for the shader generator, but it is not a bad idea. I think Technologicat made a system like that in the past, and tobspr also made such a system part of the RenderPipeline. I agree that something better than what Panda currently offers here would be a very important feature.
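For reference, this is roughly how a multi-pass chain looks with the existing FilterManager, assuming a running ShowBase instance; the shader filenames are placeholders:

from direct.filter.FilterManager import FilterManager
from panda3d.core import Texture, Shader

# Redirect the normal scene render into an offscreen texture...
manager = FilterManager(base.win, base.cam)
scene_tex = Texture()
quad = manager.renderSceneInto(colortex=scene_tex)

# ...then run a post-process pass over it on a full-screen quad.
quad.setShader(Shader.load(Shader.SL_GLSL, "post.vert", "post.frag"))
quad.setShaderInput("tex", scene_tex)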

We have internally discussed plans to create a new graph-based multi-pass render system in the future. This would allow you to define render passes and the relationships between them via a graph, where each node in the graph is a render pass with a certain set of settings (e.g. stencil buffers and such). I’ve hinted at such a system in this blog post.
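Purely as a sketch of the idea, and not any actual or planned Panda3D API, such a graph might be declared along these lines (every name below is hypothetical):

class RenderGraph:
    # Toy stand-in for a hypothetical graph-based pass manager.
    def __init__(self):
        self.passes = {}
        self.edges = []

    def add_pass(self, name, **settings):
        self.passes[name] = settings  # e.g. stencil/depth settings
        return name

    def connect(self, src, dst):
        self.edges.append((src, dst))  # dst consumes src's output

graph = RenderGraph()
shadow = graph.add_pass("shadow", depth=True)
scene = graph.add_pass("scene", stencil=True)
blur = graph.add_pass("blur")
graph.connect(shadow, scene)  # the scene pass reads the shadow map
graph.connect(scene, blur)    # the blur pass reads the scene colour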

Yes, that is essentially what I meant. I would add the wish that it work like the layer system in editors such as GIMP or Photoshop; that would make the principle of operation easy for many users to understand.

As for the shader generator, I still do not understand why it is necessary, when the shaders could be shipped in their original source form and updated by the user as needed.

I think that the idea is that it’s useful for those developers who aren’t familiar with shaders. I can very much imagine a solo developer who is familiar with Python and either capable as a 3D artist or using third-party assets, but who is unfamiliar with shaders. For such a person, it could be handy to just switch on automatic generation and let Panda handle the shader-stuff.

I meant the shader sources in the folder that ships with Panda; why generate them? They could be copied into the project folder and loaded with Shader.load(“bump”).
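In the current API, that workflow would look roughly like this; the model and shader filenames are placeholders:

from direct.showbase.ShowBase import ShowBase
from panda3d.core import Shader

base = ShowBase()
model = base.loader.loadModel("column.bam")
model.reparentTo(base.render)

# Load a hand-written shader from the project folder instead of
# relying on the shader generator.
model.setShader(Shader.load(Shader.SL_GLSL, "bump.vert", "bump.frag"))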

I’m not sure that I’m reading you correctly. Are you saying that you would rather it generated files, which the user would then load in code?

If so, I’m not sure that doing so would work very well: if I’m not much mistaken, the shader generator produces shaders based on the properties that it observes in the scene (such as whether normal maps are present). Since it doesn’t know the scene before it’s loaded, it’s not in a position to generate a file.

It could, perhaps, require a single run to observe the scene, after which the shaders could be loaded, but that seems cumbersome, and might call for another once-off “shader-generating” run if changes are made (or might even break the scene).

The generated shader will look different depending on how many lights there are, what types of lights they are, what texture settings are in effect, what other kinds of properties are set on the nodes, and so on. There are thousands, if not millions, of potentially different shaders that might be generated depending on the active settings in the scene.
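To see that dependence concretely: two copies of the same model with different render states already cause different shaders to be generated. The snippet below uses only the standard API, with placeholder asset names (and assumes the model has the tangents and binormals needed for normal mapping):

from direct.showbase.ShowBase import ShowBase
from panda3d.core import TextureStage

base = ShowBase()
plain = base.loader.loadModel("column.bam")
bumped = base.loader.loadModel("column.bam")

# Give one copy a normal map; the generator must emit a different
# shader for it than for the plain copy, because their render
# states differ.
ts = TextureStage("normal")
ts.setMode(TextureStage.MNormal)
bumped.setTexture(ts, base.loader.loadTexture("column_normal.png"))

for node in (plain, bumped):
    node.reparentTo(base.render)
    node.setShaderAuto()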

I have not seen such a system in other engines; perhaps they dynamically change the input data instead. In jME3, for example, the shaders are available in their original source form.

Many other engines have some form of shader generator; they just hide it better than Panda does.

Are those features for the RenderPipeline, or will they become part of Panda’s native default renderer?

We’ve been talking about native Panda.