I've run into a very weird problem: if I perform this kind of arithmetic operation on LPoint3d, (a - p) + b, the Y component of the result suffers from precision loss!
I have the following C++ test code:
static LPoint3d a(0.0, -2.8311460325405014e+17, 1.2274520263792942e+17);
static LPoint3d b(127576048.4776305, 43032144.01342494, 0.0);
LPoint3d test_diff(LPoint3d p)
{
    return (a - p) + b;
}
If I invoke it from Python with the following code:
p = LPoint3d(0.0, -2.8311460325405014e+17, 1.2274520263792942e+17)
b = LPoint3d(127576048.4776305, 43032144.01342494, 0.0)
print(test_diff(p) - b)
I expect to get a null vector, but instead I get this result:
LVector3d(0, 15.9866, 0)
If I run the same computation in pure Python:
p = LPoint3d(0.0, -2.8311460325405014e+17, 1.2274520263792942e+17)
a = LPoint3d(0.0, -2.8311460325405014e+17, 1.2274520263792942e+17)
b = LPoint3d(127576048.4776305, 43032144.01342494, 0.0)
print(((a - p) + b) - b)
the result is as expected:
LVecBase3d(0, 0, 0)
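Out of curiosity I tried to reproduce the exact delta with plain Python floats, which always evaluate strictly left to right. If the C++ expression were reassociated to (a + b) - p (that reordering is purely my guess at what the compiler might be doing), the first addition would have to round at the 2.8e17 scale, where consecutive doubles are 32 apart, and that reproduces the 15.9866 exactly:

```python
import math

# Y components of my vectors (a == p in my test)
a_y = -2.8311460325405014e+17
b_y = 43032144.01342494

# Evaluated left to right, as written, the cancellation is exact:
print((a_y - a_y) + b_y - b_y)  # 0.0

# Spacing of consecutive doubles at the 2.8e17 scale:
print(math.ulp(abs(a_y)))  # 32.0

# If the sum is instead reassociated to (a_y + b_y) - a_y - b_y,
# the first addition rounds b_y's contribution to a multiple of 32,
# and that rounding error survives the cancellation:
print(((a_y + b_y) - a_y) - b_y)  # ~15.9866, the exact delta I observe
```

So the delta I see is consistent with a single double-precision rounding happening in a different evaluation order, though I can't tell from the outside whether that's really what the compiler is doing.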
Now, if I change the C++ code to split up the operation:
static LPoint3d a(0.0, -2.8311460325405014e+17, 1.2274520263792942e+17);
static LPoint3d b(127576048.4776305, 43032144.01342494, 0.0);
LPoint3d f(LPoint3d u, LPoint3d v)  // renamed parameters so they don't shadow the globals a and b
{
    return u - v;
}

LPoint3d test_diff(LPoint3d p)
{
    return f(a, p) + b;
}
it works fine:
LVector3d(0, 0, 0)
At first I thought it was memory corruption, but it happens consistently, even in trivial code like this. It doesn't seem to be a compiler- or platform-specific issue either, as I get the same problem on both Linux and macOS!
For the test I'm using one of the latest Panda3D 1.11 SDK builds: commit 68f0931f43284345893a90d5bba9ba5df8aa53bb, built by cmu (Dec 13 2021 08:35:38).
I'm not really sure what's going on. I tend to believe that a spurious cast to single-precision float happens somewhere, as the delta always lies between roughly -15 and +15 (at least with the values I'm using in my real code). But is this a bug, or am I doing something wrong?
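To get a feel for the magnitudes involved, here is a quick back-of-the-envelope check I did with plain Python (my own test, nothing from Panda3D), comparing the error of an actual round-trip through a 32-bit float near 2.8e17 with the worst-case error of a single double-precision rounding at that scale:

```python
import math
import struct

y = 2.8311460325405014e+17  # magnitude of my a.y / p.y

# Round-tripping y through a 32-bit float perturbs it by billions,
# since consecutive float32 values near 2.8e17 are 2**34 apart:
f32 = struct.unpack('f', struct.pack('f', y))[0]
print(abs(f32 - y))  # roughly 3e9

# Worst-case error of a single double rounding near y is half an ulp:
print(math.ulp(y) / 2)  # 16.0, the same order as the ~15.99 delta I see
```

So if a genuine cast to single precision were happening, I'd expect the error to be in the billions rather than ~16; the delta I observe looks more like one extra rounding step done in double precision. That makes me doubt my own theory, but I still don't see where that extra rounding would come from.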