In my app, the camera sometimes gets really, really close to really, really large objects. Until now this hasn't been a problem: I've implemented several mechanisms to avoid the major precision issues that come with the large scale.
One of them is to transform the normals with the Model Matrix instead of the Normal Matrix (the inverse transpose of the Model Matrix).
However, this works only if the Model Matrix is "nice", i.e. contains only translation, rotation and uniform scaling. It therefore introduces artefacts on oblate shapes, where the scale is not the same on all three axes. And if I switch to the correct transformation with the Normal Matrix, I get precision issues: inverting the large scaling factors causes jittering.
So, does anyone know any tricks to transform the normals correctly while avoiding inverting the Model Matrix?
(Another solution would be to bake the oblateness into the generated model, but that would multiply the number of models generated, and I'd like to avoid that if possible.)