Computing a trans_model_to_view matrix

Context:
I'm sure many of you are aware of camera and/or model shaking caused by a lack of floating-point precision. Well, I set out to solve this issue for the case where the camera is close to, and attached to, a node far from the render origin.

There are two possible solutions that I am aware of:

  1. Mangle the node tree so the camera or a nearby node is the origin
    OR
  2. Compute the trans_model_to_view matrix in camera or model space, not world space (which is what is done by default). In Panda we can avoid world space by computing the matrix using only the relevant part of the node tree (the part that connects the camera to the model)

I chose option 2. Potentially, one could implement a solution that works outward from the camera and generates all the matrices as needed, thus solving the issue in-engine in the general case (if someone wants to add support for this to Panda, it would be sweet). I just want to do it for a single simple case, however:

Problem:
I have a planet with a surface mesh (self.surface, located at 0,1,0) that is rotated to stay under the camera. This is done by attaching it to self.surfHolder, which looks at the camera (via lookAt). Both self.surfHolder and my camera holder, self.camHolder, are attached to my planet node (self.earth).

I need trans_model_to_view for self.surface, and the one provided is horribly shaky for the reason described in the context section above. Thus, I must compute it myself.
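For reference, my setup looks roughly like this (a simplified sketch; the model loading is stand-in code):

    # simplified sketch of the hierarchy described above
    self.earth = render.attachNewNode("earth")                # planet node, far from the origin
    self.surfHolder = self.earth.attachNewNode("surfHolder")
    self.surface = loader.loadModel("surface")                # stand-in for my surface mesh
    self.surface.reparentTo(self.surfHolder)
    self.surface.setPos(0, 1, 0)
    self.camHolder = self.earth.attachNewNode("camHolder")
    base.camera.reparentTo(self.camHolder)
    # every frame: rotate the holder so the mesh stays under the camera
    self.surfHolder.lookAt(base.camera)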

I have tried a ton of variations of this code, involving inverting different matrices and different multiplication orders, but none of them work. Maybe someone who understands matrices can do better than my guesswork matrix math.

    # attempt 1: invert the parent-relative matrices of surface and surfHolder,
    # then compose them with camHolder's parent-relative matrix
    mat = Mat4()
    mat.invertFrom(self.surface.getMat())
    mat2 = Mat4()
    mat2.invertFrom(self.surfHolder.getMat())
    mat = self.camHolder.getMat() * mat2 * mat

Another try:

    # attempt 2: invert camHolder's matrix and compose the chain the other way around
    mat2 = Mat4()
    mat2.invertFrom(self.camHolder.getMat())
    mat = mat2 * self.surfHolder.getMat() * self.surface.getMat()

I made many more tries like this, but I see no need to post them.

base.camera.getMat() is identity, and base.camLens.getViewMat() is identity, so I should not have to involve them (I think).
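(A quick check, for what it's worth:)

    # both of these print True, so neither adds a transform of its own
    print(base.camera.getMat().isIdentity())
    print(base.camLens.getViewMat().isIdentity())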

Note:
It is possible that my approach for getting an untransformed matrix into a shader (passing four row vectors and recombining them in the shader) has a bug, since it is hard to verify what values show up in the matrix inside the shader, but I'm pretty sure it is working right. Also, if there were a way to get trans_model_to_view on the CPU, I could see what result I should be getting: self.surface.getShaderInput('trans_model_to_view') returns a ShaderInput, and I can't see a way to get a matrix out of it.

Hmm, I’ve never seen a problem with a shaky camera due to floating-point imprecision. Matrix math is generally not that imprecise. I’ve seen plenty of shaky cameras, though, and it’s almost always due to a timing issue of updating the camera at the wrong point in the frame relative to other moving objects, and relative to when the frame is rendered.

I suppose it’s possible it could be a floating-point precision issue, but that’s only likely if you have a wildly divergent scale going on between your transforms. For instance, you’re trying to represent galactic scale and local scale in the same scene graph, and the relative transform between your camera and the surface crosses some deep scale operation, twice.

I’d be more inclined to blame the task that’s calling lookAt() on your camera, though. Use the sort parameter to taskMgr.add() to ensure this task gets invoked immediately before the frame is rendered. Since the frame is rendered by the igLoop task, which has sort 50, I would specify a sort = 49 on your lookAt task.
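For instance (with lookAtTask standing in for whatever your task function is called):

    # run the lookAt task just before igLoop (sort 50) renders the frame
    taskMgr.add(self.lookAtTask, 'lookAtTask', sort = 49)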

To answer your original question, though: the model_to_view transform is really just the relative transform from your camera to your surface, e.g. base.camera.getTransform(self.surface), though it also includes the appropriate coordinate-system transform for OpenGL or DirectX, so you ought to be able to compute it as shown below.
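Something like this, that is (untested):

    # relative transform composed with the inverse coordinate-system transform
    mat = base.win.getGsg().getInvCsTransform().compose(
        base.camera.getTransform(self.surface)).getMat()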

David

It gets far worse when I move my planet really far from the origin. The shake goes away completely if I hard-code a matrix transform in the shader to skip using trans_model_to_view (I can't guess the right matrix, though! This would probably fix it regardless of the cause, though).

I'm pretty sure the precision is to blame. I know how much precision there is in a float (a 23-bit significand), and I intentionally exceeded it, because I know I will face this problem when I have scales of thousands of light-years and centimeters in use at once.
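(A quick back-of-the-envelope check: the spacing between adjacent floats at magnitude d is 2 ** (floor(log2(d)) - 23), so at planetary distances positions can only move in whole-unit steps:)

    import math
    d = 1.0e7                                      # ~10 million units from the origin
    ulp = 2.0 ** (math.floor(math.log(d, 2)) - 23)
    print(ulp)                                     # 1.0 -- one whole unit of granularity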

The lookAt code does use getRelativePoint, but removing the lookAt call completely does not fix the shaking. I have read about other people having this issue (though not in Panda specifically), and I have had it myself and solved it with a dynamic origin before (though not in Panda).

Tried sort = 49, no change.

    mat = base.camera.getTransform(self.surface).getMat()

gives the same mat I got with my code (and no shake). That's good, and it is much easier that way.

getInvCsTransform does not exist, however, so I can't try that.
Anyway, were you thinking of trans_model_to_apiview? I just need model_to_view (trans_view_to_apiview has no shake issues and works fine).

Well, I'm a bit more informed, but I'm still roughly where I started. I now have a slightly shorter bit of code giving the same result:

    mat = base.camera.getTransform(self.surface).getMat()

Both approaches produce identical shake-free matrices, but I get a plain black screen when using them.

Thanks for your response. Hopefully we can get to the point where it works :)

Hmm, getInvCsTransform() isn't exposed to Python, you're right. You can get the same thing with TransformState.makeMat(Mat4.convertMat(gsg.getInternalCoordinateSystem(), CSZupRight)). Or you can just hardcode the appropriate transform for your particular GSG. (It's one thing for OpenGL, another for DirectX.)

OK, you’ve convinced me that you are experiencing a problem with numerical precision. But are you really sure you want to subject yourself to this problem? The scene graph is not really intended to represent such vastly different scales at the same time, and it’s hard to imagine a game that actually requires this. You can have one scene graph for your outer-space scenes, and a completely different scene graph for your planetside scenes. It’s probably possible to make it work with just one common scene graph, but I don’t think you’ll be doing yourself any favors by going down that path.

David

Well, that got me to having the camera inside the planet, so I could see a few triangles on steep slopes outside, with the camera pointing along the surface, so both the position and rotation came out wrong. It should be looking down from space. There was no shake, though!

    mat = TransformState.makeMat(
        Mat4.convertMat(base.win.getGsg().getInternalCoordinateSystem(), CSZupRight)
    ).compose(base.camera.getTransform(self.surface)).getMat()

Are you sure I need to apply an InvCsTransform thing? Doesn't trans_view_to_apiview cover that for me? I just need view space, not API view.

I would think this would be all I need:

    mat = base.camera.getTransform(self.surface).getMat()

but I get nothing, so maybe I need a Z-up matrix or an inverse or something in there.
I tried adding this afterwards:

  // guess at the coordinate-system fixup: swap y and z, then flip their signs
  l.xyzw = l.xzyw;
  l.y = -l.y;
  l.z = -l.z;

and it at least gets the surface visible, though it looks inside out or something, and the camera controls are screwed up, so I don't think it is any more correct; it just happens to have visible stuff.

And yes, I will put stuff outside the solar system in a separate scene or something, but I will need ships to properly pass in front of and behind planets (and, optimally, be visible in the sky from planets at night), so if possible (and I'm getting really quite close) I want the whole solar system in one scene, and it won't fit in floats very well. I will have to store the object positions as doubles and compute my own matrices, but that's OK (if I can get it to work). Doing double positions and a dynamic origin would also work, but would likely be more of a pain.

So now I have gotten botched projections with visible mesh! Possibly an improvement over a black screen. The triangles don't shake, though!

I might be doing something stupid, so here is my code:
In the shader:

  // rebuild the matrix from the four row vectors passed in as shader inputs
  float4x4 projectMat = float4x4(k_project1, k_project2, k_project3, k_project4);
  // model -> view, replacing the shaky trans_model_to_view
  float4 l = mul(projectMat, loc);
  // view -> apiview -> clip, using the built-in matrices (these don't shake)
  l = mul(trans_view_to_apiview, l);
  l_position = mul(trans_apiview_to_apiclip, l);

If I use trans_model_to_view instead of projectMat, it works right, but shakes horribly.

In my every-frame do-everything task (I have tried both versions of the first line):

    #mat = TransformState.makeMat(Mat4.convertMat(base.win.getGsg().getInternalCoordinateSystem(), CSZupRight)).compose(base.camera.getTransform(self.surface)).getMat()
    mat = base.camera.getTransform(self.surface).getMat()

    # hand the matrix to the shader one row at a time, as four float4 inputs
    v = mat.getRow(0)
    self.surface.setShaderInput("project1", v.getX(), v.getY(), v.getZ(), v.getW())
    v = mat.getRow(1)
    self.surface.setShaderInput("project2", v.getX(), v.getY(), v.getZ(), v.getW())
    v = mat.getRow(2)
    self.surface.setShaderInput("project3", v.getX(), v.getY(), v.getZ(), v.getW())
    v = mat.getRow(3)
    self.surface.setShaderInput("project4", v.getX(), v.getY(), v.getZ(), v.getW())

I've run into this problem before, as well:
discourse.panda3d.org/viewtopic.php?t=6674

I did it.

    # transposing the relative matrix is what fixes the black screen
    mat = Mat4()
    mat.transposeFrom(base.camera.getTransform(self.surface).getMat())

I just needed to transpose it. I guess my code was transposing it on the way into the shader. Good thing I started looking through all the matrix methods, looking up what they meant, and trying them.

No shake; super-up-close zoom works smoothly really darn far from the origin. Now I just need to do the same trick to fix my lighting, and I should have true-scale planets that work with the camera really close! (Millimeters at sea level. I still have work to do to get that at mountain tops, likely.)

David, I'll probably go ahead and try that transform code you gave me. I think that should save me the apiview transform in my vertex shader.

Thanks everyone!

Edit: I should clarify: I think it needed to be transposed because my code for getting it into the shader is defective and transposes it. I can fix this so it does not need the explicit transpose.
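(For instance, passing the columns instead of the rows would bake the transpose in; something like this, untested:)

    # getCol instead of getRow makes the explicit transposeFrom unnecessary
    mat = base.camera.getTransform(self.surface).getMat()
    for i in range(4):
        v = mat.getCol(i)
        self.surface.setShaderInput("project%d" % (i + 1), v.getX(), v.getY(), v.getZ(), v.getW())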

Edit2:
Actually, I also needed to invert the matrix: when I used my first-person camera, the planet turned instead of the camera!