Precision problems with the normal matrix

In my app, the camera is sometimes really, really close to really, really large objects. Until now this hasn’t been a problem: I’ve implemented several mechanisms to avoid the major precision issues that come with the large scale.
One of them is to use the Model Matrix to transform the normals instead of the Normal Matrix (which is the inverse transpose of the Model Matrix).
However, this works only if the Model Matrix is “nice”, i.e. contains only translation, rotation and uniform scaling. That in turn introduces artefacts on oblate shapes, where the scale is not the same on all three axes. And if I switch to the correct transformation with the Normal Matrix, I get precision issues: inverting the large scaling factor causes jittering.
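To make the artefact concrete, here is a minimal numpy sketch (all values hypothetical, not from my app) of why a non-uniform scale breaks normals transformed by the plain model matrix, while the inverse transpose keeps them perpendicular to the surface:

```python
import numpy as np

# Hypothetical oblate transform: flatten z to 0.5, plus a rotation about y.
angle = np.radians(30.0)
c, s = np.cos(angle), np.sin(angle)
R = np.array([[  c, 0.0,   s],
              [0.0, 1.0, 0.0],
              [ -s, 0.0,   c]])
S = np.diag([1.0, 1.0, 0.5])    # non-uniform (oblate) scale
M = R @ S                        # upper 3x3 of the model matrix

# A surface normal and a tangent, perpendicular in model space.
n = np.array([ 1.0, 0.0, 1.0]) / np.sqrt(2.0)
t = np.array([-1.0, 0.0, 1.0]) / np.sqrt(2.0)

t_world = M @ t                  # tangents transform by M itself

# Wrong under non-uniform scale: transform the normal by M too.
n_wrong = M @ n
n_wrong /= np.linalg.norm(n_wrong)

# Correct: the Normal Matrix, i.e. the inverse transpose of M.
N = np.linalg.inv(M).T
n_right = N @ n
n_right /= np.linalg.norm(n_right)

err_wrong = abs(np.dot(n_wrong, t_world))   # clearly non-zero: tilted normal
err_right = abs(np.dot(n_right, t_world))   # ~0: still perpendicular
```

With uniform scale the two paths agree (up to length), which is why the Model-Matrix shortcut works on the “nice” matrices.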
So, does anyone know any tricks to transform the normals correctly while avoiding inverting the Model Matrix?

(Another solution would be to apply the oblateness in the generated model itself, but that would multiply the number of models to generate, and I wish to avoid that if possible.)

I don’t think there’s a way around inverting and transposing the matrix (or something to that effect), short of avoiding non-uniform scales.
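For reference, one construction “to that effect” is the cofactor matrix (the transpose of the adjugate): cofactor(M) equals inv(M)ᵀ·det(M), so it transforms normals exactly like the inverse transpose up to a uniform factor that the usual renormalization removes, and it needs no matrix inverse and no division. Whether it cures the jitter here is a separate question, since the cofactors still contain products of the large scale factors, but it does sidestep the numerical inversion. A numpy sketch (a shader version would be the same three cross products):

```python
import numpy as np

def cofactor3(M):
    """Cofactor matrix of a 3x3 matrix M, built from cross products of
    its columns. Identity: cofactor(M) == inv(M).T * det(M), so it maps
    normals like the inverse transpose, up to a uniform scale that
    renormalization removes -- with no inversion and no division."""
    c0 = np.cross(M[:, 1], M[:, 2])
    c1 = np.cross(M[:, 2], M[:, 0])
    c2 = np.cross(M[:, 0], M[:, 1])
    return np.column_stack([c0, c1, c2])

# Demo with an arbitrary invertible matrix (hypothetical values).
M = np.array([[2.0, 0.5, 0.0],
              [0.0, 1.0, 0.3],
              [0.1, 0.0, 0.25]])
C = cofactor3(M)
reference = np.linalg.inv(M).T * np.linalg.det(M)
```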

We’ll have to look into what exactly is causing the precision issues, and what we can do about them.

That’s what I was afraid of.

I’m combining several scaling factors: one to bring the objects closer to the camera, another to avoid jittering due to the limited precision of the depth buffer, and the actual scale to flatten the object. When the camera gets really close to the surface of the object, the combined scale factors blow up, and in turn the inverse matrix loses precision. I guess I’m at the limit of that system.
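Since the matrix is assembled from known scale factors in the first place, one possible way out (assuming the upper 3×3 really is rotation times a per-axis scale; `normal_matrix_from_rs` is a hypothetical helper, not existing code) is to never invert the composed matrix numerically: for M = R·S with diagonal S, inv(M)ᵀ is just R·diag(1/s), so the only inversions are three scalar divisions of factors you already hold separately.

```python
import numpy as np

def normal_matrix_from_rs(R, scale):
    """Normal matrix for an upper 3x3 of the form M = R @ diag(scale)
    (rotation times per-axis scale): inv(R @ S).T == R @ diag(1/scale)
    when S is diagonal. The only inversions are three scalar divisions,
    so the huge combined scale never passes through a numerical
    matrix inversion."""
    inv_s = 1.0 / np.asarray(scale, dtype=float)
    return R @ np.diag(inv_s)

# Demo: large combined scales, flattened third axis (hypothetical values).
angle = np.radians(40.0)
c, s = np.cos(angle), np.sin(angle)
R = np.array([[  c,  -s, 0.0],
              [  s,   c, 0.0],
              [0.0, 0.0, 1.0]])
scale = np.array([1.0e6, 1.0e6, 5.0e5])

M = R @ np.diag(scale)
N_direct = normal_matrix_from_rs(R, scale)   # no matrix inverse involved
N_inverted = np.linalg.inv(M).T              # the route that jitters
```

And because the result gets renormalized anyway, the per-axis scales can first be divided by their maximum, so the values fed into the shader stay near 1 regardless of how far the combined factors have blown up.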