GLSL uniform variation threshold?

I’ve encountered a very weird behaviour and I’m struggling to find the root cause: I’m passing very small floats via uniforms to my shader, but it seems that if the value is below a certain threshold, the uniform is not updated and keeps its previous value, unless the new value is zero! From what I could test, the threshold seems to be around 5e-7.

What is strange is that if I multiply the value by e.g. 1000 before passing it, and immediately divide it by 1000 in the shader, the threshold is shifted by ×1000 as well, so it’s not a precision issue in the shader code itself.

Here is a short program demonstrating the problem. At startup it displays a blue square; if one presses b, the square immediately becomes red.
However, if one presses c, the square correctly becomes a dark red, but on repeated presses the colour does not change until the value goes above 1e-8 and the square becomes blue again…

from panda3d.core import CardMaker, Shader
from direct.showbase.ShowBase import ShowBase

def shader():
    return Shader.make(Shader.SL_GLSL, vertex="""
#version 120

uniform mat4 p3d_ProjectionMatrix;
uniform mat4 p3d_ModelViewMatrix;

in vec4 p3d_Vertex;
in vec4 p3d_MultiTexCoord0;

void main() {
    gl_Position = p3d_ProjectionMatrix * (p3d_ModelViewMatrix * p3d_Vertex);
}
""", fragment="""
#version 120

uniform float value;

void main() {
    if (value < 1e-8) {
        gl_FragColor = vec4(value * 1e8, 0, 0, 1);
    } else {
        gl_FragColor = vec4(0, 0, 1, 1);
    }
}
""")

value = 1e-9
def change(card):
    global value
    card.set_shader_input('value', value)
    value *= 2

def changeBig(card):
    card.set_shader_input('value', 5 * value)

base = ShowBase()
cm = CardMaker('card')
card = render.attachNewNode(cm.generate())
card.setPos(-0.5, 3, -0.5)
card.set_shader(shader())
card.set_shader_input('value', 1)

base.accept('c', change, [card])
base.accept('b', changeBig, [card])

base.run()

Python uses 64-bit floating-point variables, but GLSL uses 32-bit. So, the precision is reduced, which has the effect you are seeing here. When you are multiplying by, say, 1024, only the exponent value is changed and not the significand, so it shifts the precision issue with it.
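This exponent-only shift can be checked from plain Python, which exposes a float’s significand and exponent via math.frexp (a quick sketch, independent of Panda3D):

```python
import math

# Multiplying by a power of two (1024 = 2**10) is exact for binary floats:
# only the exponent changes, the significand is untouched.
m1, e1 = math.frexp(1e-9)
m2, e2 = math.frexp(1e-9 * 1024)

print(m1 == m2)   # True: same significand
print(e2 - e1)    # 10: exponent shifted by log2(1024)
```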

Note that Panda also internally uses 32-bit floats for performance (though it can be recompiled in 64-bit mode), so you would observe the same effect when passing a number into (e.g.) setPos.

You can use 64-bit floats in your shader by using a type like dvec2. This requires relatively recent hardware. You can also use integers that cover the entire range you wish to represent (this is known as “fixed point”).

The epsilon of 32-bit floats (the difference between 1.0 and the next storeable value) is 1.19209e-07, which sounds close to the 5e-7 that you reported.
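That epsilon is easy to verify with the standard library alone, by round-tripping a value through 32-bit storage with struct (a quick check, not Panda3D code):

```python
import struct

def f32(x):
    """Round a Python (64-bit) float to the nearest 32-bit float and back."""
    return struct.unpack('f', struct.pack('f', x))[0]

eps = 2.0 ** -23  # float32 machine epsilon, 1.1920928955078125e-07
print(f32(1.0 + eps) > 1.0)       # True: 1 + eps survives the round trip
print(f32(1.0 + eps / 2) == 1.0)  # True: anything closer collapses to 1.0
```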

I think I did not describe the problem properly: I’m not adding or comparing values around 1 with values smaller than 1e-7. Here I stay in the 1e-7 to 1e-8 range, which is safely inside float precision (AFAIK, 32-bit floats can represent values down to about 1e-38). The rounding due to the limited precision should not be noticeable, and the range is well within the representable limits.

In the example above, I’m comparing multiples of 1e-9 with 1e-8 and then multiplying the value by 1e8 to get back to the ~1 range, which shouldn’t be a problem.
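That claim can be checked with a struct round trip through 32-bit storage: values in the 1e-9 range survive with only a tiny relative error, and so does the multiplication by 1e8 when performed in float32 (a sketch, not Panda3D code):

```python
import struct

def f32(x):
    """Round a Python (64-bit) float to the nearest 32-bit float and back."""
    return struct.unpack('f', struct.pack('f', x))[0]

for v in (1e-9, 2e-9, 4e-9, 8e-9):
    stored = f32(v)                  # the value as a float32 would store it
    scaled = f32(stored * f32(1e8))  # value * 1e8 computed in float32
    rel_err = abs(scaled - v * 1e8) / (v * 1e8)
    print(v, scaled, rel_err)        # relative error stays well below 1e-6
```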

OK, sorry for jumping to a conclusion so quickly. You’re indeed only changing the exponent when pressing c, which can go down much further before hitting precision limits.

I tried using apitrace, which shows that Panda is consistently passing 1e-9 to the shader, and then suddenly switching to passing 5.12e-7. This is a little odd, and I’ll investigate further. My first thought is that it might have something to do with the way that we use SSE2 variables for storing shader inputs.

(For the record, I had to change the #version in your code to 130, as “in” is not a valid GLSL 1.20 keyword.)

I had a hunch, so I tried setting “state-cache false” in Config.prc and the issue appears to no longer occur.

I think the issue is that the state cache (which uniquifies ShaderAttrib objects) is using a hash algorithm that converts floating-point numbers to fixed-precision, with the intent of accounting for any floating-point precision issues. I’m not sure whether this was by design, but it’s certainly not having the intended effect here. But, I’m still missing a piece of the puzzle, so I’ll keep looking.
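A toy illustration of how such a hash could collapse distinct small inputs (this is not Panda3D’s actual hashing code; STEP and quantized_hash are made up for the example):

```python
# Toy model: hashing floats after quantizing them to a fixed step.
STEP = 1e-6  # assumed quantization step, purely for illustration

def quantized_hash(x):
    return hash(round(x / STEP))

# Every value much smaller than STEP quantizes to 0, so a cache keyed on
# this hash cannot tell 1e-9 apart from 2e-9.
print(quantized_hash(1e-9) == quantized_hash(2e-9))  # True
print(quantized_hash(1e-9) == quantized_hash(0.5))   # False
```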

Thanks for the quick workaround! It does not seem to impact performance, at least for what I’m testing right now, so I can progress :)

For the record, you can also use PTAs (pointer-to-array objects) to change the shader input value, which is more efficient if you are changing it many times, and I don’t believe that method suffers from this bug.

I filed an issue, and I’ll be taking care of it for 1.10.5. Thanks for reporting!