[SOLVED] Mat_projection and other cameras

Hi,

I’ve found an issue with my instancing code when rendering from a second camera.

My aim is to get shadow mapping working, but the screen positions of the instances in the shadow cam’s view are exactly the same as in base.cam’s view, which means a depth buffer created from the second camera wouldn’t be correct.

Here’s what it looks like:

So, the cube isn’t instanced and shows the correct angle in the buffer, but the positions of the instanced geometry match the ones in the main view. When I move base.cam, the instances in the buffer move, but they should stay still since I’m not moving the shadow cam.

The line in the shader to position objects is this:

OUT.o_position = mul(mul(mul(mat_projection, to_apiview), offset[IN.l_id]), IN.vtx_position);

I’m not really sure how mat_projection works, but I’m assuming it’s always the projection of base.cam? Is it possible to get the projection of the current camera?

My other thought is to pass the shader the shadow cam’s projection matrix, but I couldn’t work out how to do that. (Also, I’m not sure you can pass a shader input only for objects rendered by a certain cam?)

Or is there a way to multiply mat_modelproj by the offset instead, since that might not be as temperamental as mat_projection?

mat_projection is a shorthand for trans_apiview_to_apiclip, which applies to the camera that’s rendering it - this means that it’s different when rendered by different cameras. However, the issue is not with mat_projection, but with mat_modelview. This is short for trans_model_to_apiview, or in other words, the transformation of your model in camera space. (mat_modelproj is basically the composite of both of these, aka trans_model_to_apiclip.)
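To make that concrete on the Python side (a rough sketch only; model and shadowCam are placeholder NodePaths, and I’m ignoring the fixed Panda-to-GL axis conversion here):

    # The model-to-view part is relative to whichever camera is rendering,
    # so the same node gives a different matrix for each camera:
    mv_for_main   = model.getMat(base.cam)    # roughly mat_modelview when base.cam renders
    mv_for_shadow = model.getMat(shadowCam)   # roughly mat_modelview when the shadow cam renders
    # mat_modelproj is, conceptually, that matrix composed with the rendering
    # camera's own projection.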

So I can’t really guess what’s going on here until I know how you determine to_apiview in your code. If you calculated it yourself and passed it to the shader, then the problem might be that you’re calculating it against your main camera, and you might instead need to split it into two matrices: one model-to-world matrix that you calculate, and one world-to-apiview matrix that Panda calculates for you (trans_world_to_apiview, or trans_world_to_apiclip to get the projection matrix integrated).
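For instance, something along these lines (just a rough sketch, assuming your per-instance matrices are meant to be world-space transforms):

    # Model-to-world matrix: computed by you, independent of any camera.
    modelMatrix = nodePath.getMat(base.render)

    # World-to-apiview: not computed in Python at all; the shader requests it
    # as trans_world_to_apiview (or trans_world_to_apiclip, which folds the
    # projection in), so Panda supplies the right one per rendering camera.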

Well, to_apiview looks like this:

// This matrix represents a ninety-degree rotation around the X axis.
// It will be used to transform a vertex from the Panda3D coordinate system
// to the OpenGL coordinate system (right-handed, Y up).
const float4x4 to_apiview = {{1.0, 0.0, 0.0, 0.0},
                             {0.0, 0.0, 1.0, 0.0},
                             {0.0,-1.0, 0.0, 0.0},
                             {0.0, 0.0, 0.0, 1.0}};
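As an aside, I believe Panda could generate that conversion instead of hard-coding it, something like the following, though I haven’t checked whether the row/column convention matches the Cg constant above:

    from panda3d.core import LMatrix4f, CSZupRight, CSYupRight

    # Z-up right-handed (Panda) to Y-up right-handed (OpenGL) conversion
    to_apiview = LMatrix4f.convertMat(CSZupRight, CSYupRight)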

But - erm - I kind of backtracked a bit and thought: why would the shader be at fault? It might be the Python code.

Sure enough, the view matrix being passed to the shader was being updated every frame according to base.cam :blush:

    # Retrieve the view matrix needed in the shader.
    # Need to update this every frame to allow for camera movement
    self.viewMatrix = self.originalNode.getMat(base.cam)  #HERE

    # Retrieve model matrices from the dummy NodePath
    self.modelMatrices = [nodePath.getMat(self.dummyNodePathRoot) for nodePath in self.dummyNodePath]
   
    # Compute the modelview matrix for each node
    self.modelViewMatrices = [UnalignedLMatrix4f(modelMatrix * self.viewMatrix) for modelMatrix in self.modelMatrices]

Once I changed the getMat call to my shadowCam, it correctly showed the instances from the shadow cam’s viewpoint.
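For reference, the change was just this (with self.shadowCam being the NodePath of my shadow camera):

    # View matrix now taken relative to the shadow cam instead of base.cam
    self.viewMatrix = self.originalNode.getMat(self.shadowCam)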

Just wondering though - would it be better to pass the modelMatrix to the shader and compute the modelview there? I’m thinking this is probably a bit slower if it’s left in Python?

Which would be this:

  // Modelview matrix
  float4x4 my_modelview = mul(trans_world_to_apiview, offset[IN.l_id]);

  OUT.o_position = mul(mul(mat_projection, my_modelview), IN.vtx_position);

That sounds sensible, especially as you won’t need to update the matrix every time the camera position changes.
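Roughly, the setup side could then look like this (a sketch only, assuming the offsets reach the shader as a float4x4 array input named offset; a PTA_LMatrix4f is one way to pass such an array):

    from panda3d.core import PTA_LMatrix4f, UnalignedLMatrix4f

    # Done once at setup: per-instance model-to-world matrices, no camera involved.
    offsets = PTA_LMatrix4f()
    for nodePath in self.dummyNodePath:
        offsets.pushBack(UnalignedLMatrix4f(nodePath.getMat(base.render)))
    self.originalNode.setShaderInput("offset", offsets)

    # Nothing to update per frame for camera movement: the shader picks up
    # trans_world_to_apiview and mat_projection from whichever camera renders.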