Change the color of light as it passes through a translucent object

In the real world, light changes colour as it passes through a translucent object. Later, falling on some other object, it “colors” it. An example would be a stained glass window:

I am trying to get a similar effect in Panda3D. Unfortunately, without success.
Below is a minimalist code example:

from direct.showbase.ShowBase import ShowBase
from panda3d.core import TransparencyAttrib, PointLight

base = ShowBase()
base.render.setShaderAuto()  # enable the shader generator, needed for shadows
foreground = base.loader.loadModel("teapot")
foreground.reparentTo(base.render)
foreground.setPos(0, 5, 0)
foreground.setScale(0.25)
foreground.setTransparency(TransparencyAttrib.MAlpha)
foreground.setColorScale(1, 0, 0, .5)
background = base.loader.loadModel("teapot")
background.reparentTo(base.render)
background.setPos(0, 10, -2)
light = PointLight('light')
light.setShadowCaster(True, 8192, 8192)
light_np = base.render.attachNewNode(light)
light_np.setPos(0, -1, 1.25)
base.render.setLight(light_np)
base.run()

The code places two teapots in the scene. The distant (larger) one, in its original colour, serves as the background. The closer (smaller) one I modified: I scaled it down to a quarter of its size and changed its colour, keeping only the red (R) channel and setting the transparency/alpha (A) channel to one half. In addition, behind and slightly above the camera (behind the observer's back), I placed a light source that not only illuminates the scene but also casts shadows.
In the real world, it seems to me, such a setup should result in two things:

  1. The teapot in the background, observed through the foreground teapot, should be partially visible, albeit tinted red.
  2. On the background teapot, where the shadow falls, there should be a patch of translucent, reddish light: the result of the light passing through the foreground teapot.

And here is the actual effect obtained in Panda3D:

As you can see, while the first effect is rendered correctly, the second one is not rendered at all. Instead of reddish light, we see a completely black shadow that entirely ignores the translucency of the red teapot.
Anyone have a solution to this problem? How to get the effect I expect?

I'm not sure it should work the way you describe. Panda does not physically simulate light rays in software; that is a rather expensive procedure. Shaders, however, do a good job of approximating it. The problem is that a shader with truly physically correct lighting obviously cannot run in real time, so various tricks and pseudo-effects are used instead.

For example, this code. Tree-based Screen Space Reflections (HZB) Demo Full Code

However, do not confuse this with game-style PBR materials; the real physical calculation looks like this, and that is what you want.


[edit] If you just want a potential solution, skip to the last paragraph. The rest is essentially explanation for why, as I see it, lighting doesn’t “just work” as one might expect. [/edit]

To elaborate a bit, light and shadows in games don’t generally work as they do in our world:

In our world, light is emitted from a source, courses through space, and interacts with various objects: bouncing off them, being filtered through them, or being absorbed by them. (It gets more complicated than that, I think, but that's the basic idea.)

Note that this means that there are a great many particles of light that never go anywhere near your eyes, and that some of the particles that do reach you do so by quite circuitous routes.

To do this in software would be… incredibly expensive, I fear.

Now, the cost can be reduced by reversing the process: sending out “rays” through each pixel and seeing where they bounce, filter, or are stopped, and ultimately, how close they end up to a light source. (This is called “ray-tracing”, I believe.)
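To make that reversed process concrete, here is a minimal, purely illustrative ray-caster in plain Python (all names are my own invention for this sketch, not Panda3D API): one ray per pixel, tested against a single sphere, with each hit point shaded by how directly it faces a point light.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return distance along the ray to the sphere, or None on a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c  # direction is unit-length, so a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def trace(width, height):
    """One ray per pixel: brightness depends on how directly the hit
    point faces the light -- the 'reversed' process described above."""
    sphere_center, sphere_radius = (0.0, 0.0, 5.0), 1.0
    light_pos = (0.0, 4.0, 5.0)
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Build a ray through this pixel from a camera at the origin.
            dx = (x + 0.5) / width * 2 - 1
            dy = 1 - (y + 0.5) / height * 2
            norm = math.sqrt(dx * dx + dy * dy + 1)
            direction = (dx / norm, dy / norm, 1 / norm)
            t = ray_sphere_hit((0, 0, 0), direction, sphere_center, sphere_radius)
            if t is None:
                row.append(0.0)  # ray escapes the scene: background
                continue
            hit = tuple(d * t for d in direction)
            normal = tuple((h - c) / sphere_radius for h, c in zip(hit, sphere_center))
            to_light = [l - h for l, h in zip(light_pos, hit)]
            dist = math.sqrt(sum(v * v for v in to_light))
            to_light = [v / dist for v in to_light]
            # How close this point "ends up" to facing the light source.
            row.append(max(0.0, sum(n * l for n, l in zip(normal, to_light))))
        image.append(row)
    return image

image = trace(16, 16)
```

Even this toy version does sixteen-by-sixteen intersection tests per frame for one sphere and one bounce; real scenes, resolutions, and bounce counts are what make it prohibitive.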

But even this is prohibitively expensive for realtime rendering, I fear–especially at modern resolutions, and even more so when one accounts for things like scatter, I suspect.

So, what games tend to do is to make a broad approximation:

The lighting of a point on the surface of an object is calculated from the distance to the relevant light source or sources, the angle of incidence at that point, and, in the case of PBR, the viewing angle, as well as some other bits of mathematics.
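That broad approximation can be written down quite compactly. The following plain-Python sketch (names invented here, not a Panda3D API) computes the diffuse term for a single point from exactly those two inputs, distance and angle of incidence:

```python
import math

def diffuse_lighting(point, normal, light_pos, light_color,
                     attenuation=(1.0, 0.0, 0.05)):
    """Diffuse lighting for one surface point: angle of incidence times
    distance falloff. `attenuation` holds constant/linear/quadratic
    coefficients, in the style of a point light's attenuation settings."""
    to_light = [l - p for l, p in zip(light_pos, point)]
    dist = math.sqrt(sum(v * v for v in to_light))
    to_light = [v / dist for v in to_light]
    # Angle of incidence: cosine between surface normal and light direction.
    lambert = max(0.0, sum(n * l for n, l in zip(normal, to_light)))
    c, l, q = attenuation
    falloff = 1.0 / (c + l * dist + q * dist * dist)
    return tuple(channel * lambert * falloff for channel in light_color)

# A point facing straight up, lit by white light from directly above:
lit = diffuse_lighting((0, 0, 0), (0, 0, 1), (0, 0, 2), (1.0, 1.0, 1.0))
```

A real engine evaluates something like this in a fragment shader for every visible pixel, every frame; note that nothing in the calculation knows what lies between the point and the light.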

And shadows are done separately: In a common technique, “shadow mapping”, a rendering is made from the perspective of the light. The depth buffer of this rendering is then used to determine whether the current point is closer to the light than the corresponding point in the depth-buffer, and if not, the current point is shadowed. (This is in broad strokes, but should give the general idea.)
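The depth comparison at the heart of shadow mapping can be sketched in a few lines of plain Python (a toy one-dimensional "depth map" here, invented for illustration; a real one is a 2D texture rendered by the GPU from the light's perspective):

```python
def build_depth_map(occluders, num_texels, max_depth):
    """First pass: for each texel (a direction from the light), record the
    distance to the nearest occluder, as a render from the light would."""
    depth_map = [max_depth] * num_texels
    for texel, depth in occluders:
        depth_map[texel] = min(depth_map[texel], depth)
    return depth_map

def is_shadowed(depth_map, texel, dist_to_light, bias=1e-3):
    """Second pass: a point is shadowed if something else was closer to the
    light along the same texel. The small bias avoids 'shadow acne' from a
    point failing the comparison against its own recorded depth."""
    return dist_to_light > depth_map[texel] + bias

# Occluders as (texel, distance-from-light) pairs: one object at texel 3.
depth_map = build_depth_map([(3, 2.0)], num_texels=8, max_depth=100.0)

print(is_shadowed(depth_map, texel=3, dist_to_light=5.0))  # True: behind the occluder
print(is_shadowed(depth_map, texel=3, dist_to_light=2.0))  # False: the occluder itself
print(is_shadowed(depth_map, texel=4, dist_to_light=5.0))  # False: nothing in the way
```

Note that the depth map stores only distances, never colour or opacity, which is exactly why the shadow in your screenshot comes out fully black.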

Note that in this approach to shadows, there is no information regarding the colour of the object. I suppose that one could incorporate such colour–but only really for a single object; multiple layers of translucent colour would be difficult, and increasingly expensive, I fear.
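For what it's worth, the filtering effect itself is just a per-channel multiplication. This plain-Python sketch (my own toy model, not something Panda3D computes for you) shows how light tinted by successive translucent layers would accumulate, and why every extra layer adds another lookup and multiply:

```python
def filter_through(light_rgb, layers):
    """Multiply light by each translucent layer it passes through.
    Each layer is (r, g, b, alpha): alpha blends the layer's tint with
    unfiltered transmission, so alpha = 1 transmits only the layer's colour."""
    r, g, b = light_rgb
    for lr, lg, lb, la in layers:
        # (1 - alpha) of the light passes straight through; alpha is tinted.
        r *= (1 - la) + la * lr
        g *= (1 - la) + la * lg
        b *= (1 - la) + la * lb
    return (r, g, b)

# White light through the red, half-transparent teapot from the question:
tinted = filter_through((1.0, 1.0, 1.0), [(1.0, 0.0, 0.0, 0.5)])
```

Here `tinted` comes out as `(1.0, 0.5, 0.5)`: the reddish light you expected on the background teapot, which the plain depth comparison above has no way of producing.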

Now, there are a few games these days that employ some ray-tracing–but even those do so in limited parts of the rendering, and only on what I think are pretty high-end graphics cards. Even so limited a ray-tracing is expensive.

However! All is not lost!

Some of what you have in mind can be done, I think, via a combination of a light and a texture projected onto the target surface. See the following manual page for more:


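To sketch the idea behind that projection in plain Python (all names invented for this sketch; in Panda3D itself the projection is handled for you, e.g. via NodePath.projectTexture): a world-space point is projected through the light's "lens" to find which texel of the tint texture falls on it, and the light is multiplied by that texel.

```python
import math

def project_to_uv(point, light_pos, fov_degrees):
    """Map a world point to (u, v) in [0, 1]^2 as seen through a simple
    perspective 'lens' at light_pos looking down +y (a toy convention)."""
    x = point[0] - light_pos[0]
    y = point[1] - light_pos[1]
    z = point[2] - light_pos[2]
    if y <= 0:
        return None  # behind the lens
    half = math.tan(math.radians(fov_degrees) / 2)
    u = (x / y) / half * 0.5 + 0.5
    v = (z / y) / half * 0.5 + 0.5
    if 0 <= u <= 1 and 0 <= v <= 1:
        return (u, v)
    return None  # outside the projection frustum

def projected_light(point, light_pos, light_rgb, tint_texture, fov_degrees=90):
    """Tint the light by the texel its 'slide' projects onto this point."""
    uv = project_to_uv(point, light_pos, fov_degrees)
    if uv is None:
        return (0.0, 0.0, 0.0)
    size = len(tint_texture)
    row = min(int(uv[1] * size), size - 1)
    col = min(int(uv[0] * size), size - 1)
    texel = tint_texture[row][col]
    return tuple(l * t for l, t in zip(light_rgb, texel))

# A 2x2 tint texture: left half red, right half white.
texture = [[(1.0, 0.0, 0.0), (1.0, 1.0, 1.0)],
           [(1.0, 0.0, 0.0), (1.0, 1.0, 1.0)]]
color = projected_light((-1.0, 2.0, 0.0), (0.0, 0.0, 0.0), (1.0, 1.0, 1.0), texture)
```

A point on the left of the frustum receives red light, a point on the right receives white, which is the stained-glass effect, faked. If you render the foreground teapot's colour into the projected texture, the "shadow" it casts can be as red as you like.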
Wow! Thank you!
I haven’t checked it out yet, but I think texture projection will solve my problem. In fact, such a trick is enough for me - I do not necessarily have to reconstruct all the laws of physics in my rendering.
I will try to let you know how it went and show off the effect.