I'm sure many of you are aware of camera and/or model shaking (jitter) caused by lack of floating-point precision. I set out to solve this issue for the case where the camera is close to, and attached to, a node far from the render origin.
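To make the precision problem concrete, here is a minimal sketch using NumPy's float32 (an assumption standing in for the single-precision floats the transform pipeline works in; the 1e7 offset is a made-up number):

```python
import numpy as np

# A camera and a model both sit ~1e7 units from the render origin,
# 0.25 units apart from each other.
origin_to_cam = np.float32(1e7)
cam_to_model = np.float32(0.25)

# Going through world space: the model's world position is 1e7 + 0.25,
# which float32 cannot represent (spacing near 1e7 is a full 1.0),
# so the 0.25 offset is rounded away before we ever subtract.
model_world = origin_to_cam + cam_to_model
view_via_world = model_world - origin_to_cam   # 0.0 -- offset lost

# Computing relative to the camera directly keeps full precision.
view_direct = cam_to_model                     # 0.25

print(view_via_world, view_direct)
```

Small camera-relative offsets survive only if the huge world-space translation never enters the arithmetic, which is exactly why option 2 below works.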
There are two possible solutions that I am aware of:
- Mangle the node tree so that the camera or a nearby node is the origin
- Compute the trans_model_to_view matrix in camera or model space rather than world space (which is what is done by default). Panda lets us avoid world space here because we can compute the matrix using only the relevant part of the node tree (the part that connects the camera to the model)
I chose option 2. Potentially one could implement a solution that works outward from the camera and generates all the matrices as needed, solving the issue in-engine in the general case (if someone wants to add support for this to Panda, it would be sweet). I just want to do it for a single simple case, however:
I have a planet with a surface mesh (self.surface, located at (0, 1, 0)) that is rotated to stay under the camera. This is done by attaching it to self.surfHolder, which looks at the camera (via lookAt). Both self.surfHolder and my camera holder, self.camHolder, are attached to my planet node (self.earth).
I need trans_model_to_view for self.surface, and the one provided is horribly shaky for the reason described above, so I must compute it myself.
I have tried a ton of variations of this code, inverting different matrices and using different multiplication orders, but none of them work. Maybe someone who understands matrices could do better than my guesswork:
    mat = Mat4()
    mat.invertFrom(self.surface.getMat())
    mat2 = Mat4()
    mat2.invertFrom(self.surfHolder.getMat())
    mat = self.camHolder.getMat() * mat2 * mat

and:

    mat2 = Mat4()
    mat2.invertFrom(self.camHolder.getMat())
    mat = mat2 * self.surfHolder.getMat() * self.surface.getMat()
I made many more attempts like these, but I see no need to post them all.
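For whoever picks this apart: Panda, as far as I know, uses row vectors (a point transforms as v' = v * M, so a child's matrix is multiplied to the *left* of its parent's). Under that convention, and with the camera itself carrying no transform, I'd expect model-to-view to be surface * surfHolder * inverse(camHolder). Here is a pure-Python/NumPy sketch of that ordering, with made-up stand-in matrices rather than the real scene values:

```python
import numpy as np

def translate(x, y, z):
    """Row-vector translation matrix (translation in the bottom row)."""
    m = np.eye(4)
    m[3, :3] = (x, y, z)
    return m

def rotate_z(deg):
    """Row-vector rotation about +Z (Panda's heading axis)."""
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    m = np.eye(4)
    m[0, :2] = (c, s)
    m[1, :2] = (-s, c)
    return m

# Made-up transforms standing in for the real nodes:
M_surface    = translate(0, 1, 0)    # self.surface.getMat()
M_surfHolder = rotate_z(90)          # self.surfHolder.getMat()
M_camHolder  = translate(0, -10, 0)  # self.camHolder.getMat()

# Candidate model-to-view, built only from this sub-tree:
M_model_to_view = M_surface @ M_surfHolder @ np.linalg.inv(M_camHolder)

# Sanity check against the geometric expectation for the model-space origin:
p = np.array([0.0, 0.0, 0.0, 1.0])
world = p @ M_surface @ M_surfHolder  # ≈ (-1, 0, 0) under earth
view = p @ M_model_to_view            # that point seen from camHolder
print(view[:3])                       # ≈ (-1, 10, 0)
```

If that ordering holds, the Panda version would be self.surface.getMat() * self.surfHolder.getMat() * invertedCamHolderMat. Also worth a try, if I remember the API right: NodePath.getMat() accepts an "other" NodePath and computes the relative matrix through only the connecting part of the graph, so self.surface.getMat(base.cam) may give this composed matrix in one call without ever touching world space.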
- base.camera.getMat() is identity
- base.camLens.getViewMat() is identity
Thus I should not have to involve them (I think).
It is possible that my approach for getting an untransformed matrix to a shader (passing four row vectors and recombining them in the shader) has a bug, as it is hard to verify what values show up in the matrix inside the shader, but I'm pretty sure it is working right. Also, if there were a way to get trans_model_to_view on the CPU, I could see what result I should be getting: self.surface.getShaderInput('trans_model_to_view') returns a ShaderInput, and I can't see a way to get a matrix out of it.
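The row-splitting itself is at least easy to sanity-check on the CPU before blaming the shader. A sketch of that round trip (NumPy again; on the Panda side the rows would come from Mat4.getRow(i) and go in as separate vector shader inputs, and the classic gotcha to watch for is the shader recombining them as columns rather than rows, i.e. a transposed matrix):

```python
import numpy as np

# Stand-in for the matrix computed on the CPU.
mat = np.arange(16, dtype=np.float64).reshape(4, 4)

# Split into four row vectors, as if each were a separate shader input.
rows = [mat[i].copy() for i in range(4)]

# Recombine them the way the shader should, and confirm the round trip.
recombined = np.vstack(rows)
assert np.array_equal(recombined, mat)

# The transposed recombination is the bug to rule out: it only goes
# unnoticed when the matrix happens to be symmetric.
assert not np.array_equal(recombined.T, mat)
```

If the recombined matrix behaves as if transposed, either swap the recombination in the shader or transpose once on the CPU before splitting.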