Stereo lens projection matrices

Hello,

I’m not 100% certain, but I think something is off with the way the projection matrices for the stereo lenses are created. To test things out, I created a scene with a camera and a rectangular object at the camera lens’ convergence distance, sized so that it exactly covered the lens’ horizontal and vertical FOV. From my understanding of off-axis stereo frustums, the object should fill the view of the mono lens as well as the left and right stereo lenses, since it sits at the convergence distance. If I then switch the stereo channel of the single display region from mono to either left or right, the object no longer fills the window (it is offset left/right and the background is visible). Printing out the projection matrices for the mono, left, and right lenses (roughly as in the sketch below), it appears that two entries of the projection matrix change for each stereo lens: one for the convergence distance and one for the IOD. I would expect only one entry to change - the one corresponding to (r+l)/(r-l) in the OpenGL-style matrices.
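
In case it’s useful, this is more or less how I dumped and compared the matrices - a minimal C++ sketch against the public Lens API; the FOV/IOD/convergence values are just test numbers I picked:

    #include "perspectiveLens.h"

    #include <cmath>
    #include <iostream>

    int main() {
      PT(PerspectiveLens) lens = new PerspectiveLens;
      lens->set_fov(60.0f);                   // horizontal FOV in degrees (test value)
      lens->set_interocular_distance(0.2f);   // test value
      lens->set_convergence_distance(25.0f);  // test value

      // Lens::get_projection_mat() takes an optional stereo channel.
      LMatrix4 mono  = lens->get_projection_mat(Lens::SC_mono);
      LMatrix4 left  = lens->get_projection_mat(Lens::SC_left);
      LMatrix4 right = lens->get_projection_mat(Lens::SC_right);

      std::cout << "mono:\n";  mono.write(std::cout);
      std::cout << "left:\n";  left.write(std::cout);
      std::cout << "right:\n"; right.write(std::cout);

      // Report which entries the left matrix changes relative to mono;
      // with the code quoted below, two entries differ per eye.
      for (int i = 0; i < 4; ++i) {
        for (int j = 0; j < 4; ++j) {
          if (std::fabs(mono(i, j) - left(i, j)) > 1e-6) {
            std::cout << "entry (" << i << ", " << j << ") differs\n";
          }
        }
      }
      return 0;
    }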

Playing around more with the scene I created, it appears that setting the convergence distance to a very large value (effectively infinity) creates a parallel stereoscopy effect (nvidia.com/object/IO_36545.html). The Panda3D default convergence distance is 25.
Looking at the 3rd example on the nvidia page (off-axis stereoscopy), I believe the convergence distance is the distance from the two cameras to the virtual screen plane. The near and far clipping planes can then lie in front of and behind the convergence distance, respectively (see the sketch below).
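
To make that concrete, here is the standard off-axis construction I have in mind - plain glFrustum-style math, nothing Panda-specific, and the function is just my own illustration:

    #include <cmath>

    // Asymmetric (off-axis) frustum bounds for one eye, glFrustum-style.
    // eye_offset is -iod/2 for the left eye and +iod/2 for the right eye.
    void off_axis_bounds(double fov_x_deg, double aspect, double near_d,
                         double conv_d, double eye_offset,
                         double &left, double &right,
                         double &bottom, double &top) {
      const double k_pi = 3.14159265358979323846;
      double half_w = conv_d * std::tan(fov_x_deg * k_pi / 360.0);  // screen half-width
      double half_h = half_w / aspect;
      // The screen rectangle [-half_w, half_w] is fixed in world space;
      // shift it into the eye's frame and project it onto the near plane.
      left   = (-half_w - eye_offset) * near_d / conv_d;
      right  = ( half_w - eye_offset) * near_d / conv_d;
      bottom = -half_h * near_d / conv_d;
      top    =  half_h * near_d / conv_d;
    }

An object spanning exactly [-half_w, half_w] x [-half_h, half_h] at the convergence distance then fills the window for the mono view and for both eyes, which is the test I described above.
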
The Panda3D manual on stereo display regions says that Panda does not do any toe-in (camera rotation) (panda3d.org/manual/index.ph … ay_Regions - second-to-last paragraph).
I’ve been going through the source code (below) that builds the stereo projection matrices, and it appears to be implemented as two translations of the camera lenses. I don’t understand the routines well enough to be certain what is going on in that code, however.

    // Shift each eye sideways in world space by half the interocular distance.
    LVector3 iod = lens_cdata->_interocular_distance * 0.5f * LVector3::left(lens_cdata->_cs);
    lens_cdata->_projection_mat_left = do_get_lens_mat_inv(lens_cdata) * LMatrix4::translate_mat(-iod) * canonical * do_get_film_mat(lens_cdata);
    lens_cdata->_projection_mat_right = do_get_lens_mat_inv(lens_cdata) * LMatrix4::translate_mat(iod) * canonical * do_get_film_mat(lens_cdata);

    if (lens_cdata->_user_flags & UF_convergence_distance) {
      nassertv(lens_cdata->_convergence_distance != 0.0f);
      // Shift each image back toward the center in film space by
      // 0.25 / convergence_distance.
      LVector3 cd = (0.25f / lens_cdata->_convergence_distance) * LVector3::left(lens_cdata->_cs);
      lens_cdata->_projection_mat_left *= LMatrix4::translate_mat(cd);
      lens_cdata->_projection_mat_right *= LMatrix4::translate_mat(-cd);
    }

Hmm, yeah, that is confusing. I thought I knew how the stereo rendering in Panda worked, but perhaps I was wrong. That would seem to contradict the docstring for set_convergence_distance, which is worded as if it implied toe-in:

    //     Function: Lens::set_convergence_distance
    //       Access: Published
    //  Description: Sets the distance between the camera plane
    //               and the point in the distance that the left and right
    //               eyes are both looking at.  This distance is used to
    //               apply a stereo effect when the lens is rendered on a
    //               stereo display region.  It only has an effect on a
    //               PerspectiveLens.
    //
    //               This parameter must be greater than 0, but may be as
    //               large as you like.  It controls the amount to which
    //               the two eyes are directed inwards towards each other,
    //               which is a normal property of stereo vision.  It is a
    //               distance, not an angle; normally this should be set
    //               to the distance from the camera to the area of
    //               interest in your scene.  If you want to simulate
    //               parallel stereo, set this value to a very large
    //               number.

What it appears to be doing, though, is bringing the two left/right images closer together by 1 / (4 * conv_distance) in order to make them converge, thereby producing an off-axis projection. That is consistent with a skewed-frustum projection; it simply shifts the two images toward each other so as to create negative parallax.

So, the iod controls a translation in world space, and the convergence distance controls a translation in film space (thus creating the off-axis projection).
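
If it helps anyone following along, that composition can be checked directly with LMatrix4. Panda uses row vectors, so a pre-multiplied translate acts in world space and a post-multiplied one in film space; the iod/cd values below are made up, and a default lens stands in for the canonical-times-film part:

    #include "perspectiveLens.h"

    #include <iostream>

    int main() {
      PT(PerspectiveLens) lens = new PerspectiveLens;
      LMatrix4 mono = lens->get_projection_mat();  // SC_mono by default

      PN_stdfloat iod = 0.2f, cd = 25.0f;

      // Mirror the quoted code for the left eye: shift the eye in world
      // space, project, then shift the image back in film space.
      LMatrix4 left = LMatrix4::translate_mat(iod * 0.5f, 0.0f, 0.0f)
                    * mono
                    * LMatrix4::translate_mat(-0.25f / cd, 0.0f, 0.0f);

      // The difference should have exactly two nonzero entries: one from
      // the world-space iod shift, one from the film-space convergence shift.
      (left - mono).write(std::cout);
      return 0;
    }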

Sorry for the misinformation earlier; I was getting things all mixed up.

Cool, thanks for that find. No worries about the earlier confusion. I am still somewhat confused myself about what geometry would make 1 / (4 * conv_distance) the right film-space displacement.

Yeah, the geometry in the Panda code must be off. The translation should be IOD / (2 * CD * tan(Xang / 2)) rather than 1 / (4 * CD) - at least by my understanding of the definition of convergence distance.
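
For what it’s worth, here is a quick numerical comparison of the two expressions; the IOD, CD, and FOV values are just examples I picked:

    #include <cmath>
    #include <cstdio>

    int main() {
      const double k_pi = 3.14159265358979323846;
      double iod = 0.2, cd = 25.0, fov_x = 40.0;  // example values

      double panda_shift   = 0.25 / cd;  // what the quoted code computes
      double derived_shift = iod / (2.0 * cd * std::tan(fov_x * k_pi / 360.0));

      std::printf("1/(4*CD)                = %f\n", panda_shift);
      std::printf("IOD/(2*CD*tan(Xang/2))  = %f\n", derived_shift);
      return 0;
    }

For these particular values the two come out close (0.0100 vs. roughly 0.0110), which may be why the error isn’t glaring with typical settings; change the FOV or the IOD and they diverge.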

Just to update this thread for anyone else stumbling onto it: the convergence distance calculation in Panda was off, and has now been corrected. Also see the relevant bug report.