headtracking + red-green-glasses


this is my first post in these forums, hopefully I got the right one.

I’m working on a little 3D-ballgame which includes wiimote control and headtracking.

Large parts of it are already working; I also use Panda3D's ODE integration.

I guess there are going to be several questions from my side in the future :wink:, but the first one aims towards headtracking.

We want to use it pretty much in the same way as Johnny Chung Lee: youtube.com/watch?v=Jd3-eiid-Uw

Unfortunately we realised that the quite impressive 3D effect only works well if you are watching it on a VIDEO! The illusion is far less convincing if you are wearing the tracking device yourself.

So, we decided to implement support for 3D-glasses additionally.

My first approach to this problem was to shift the camera position every frame, alternating between the positions of the two eyes. I also applied alternating color write masks each frame, since we will use red-green glasses.

Unfortunately this kills the framerate…

My question is:
Is it possible to have two cameras which both show the SAME scene in the SAME window? That is, I want to superimpose both camera views like layers.


why not simply use panda’s built-in stereo-camera feature? simply set the corresponding config-file entries and let panda’s magic do the rest.

That would be:

red-blue-stereo 1
red-blue-stereo-colors red green


Is there a way to set it so it uses the new type of 3d glasses such as the RealD 3d ones?

RealD uses circular polarization. correct me if i’m wrong, but i think you would need a special display for that.

When I add that to my prc file and start my ordinary game on an ordinary screen, moving DirectGUI elements leaves some kind of permanent blue trail behind them.
I don’t know if this is normal on usual hardware; I just thought I’d let you guys know in case there is some kind of obvious unnoticed bug or something.

hi again

the hint about the built-in stereo camera was great, we already implemented it.

even the effect without headtracking is quite impressive :slight_smile: (with a more or less standard configuration).

now I’m trying to get the interocular distance and the convergence distance right. Obviously I don’t quite understand it so far:

Both distances should be given in Panda units, which we call “panda meters (pm)”. If I understand correctly, they are more or less user-defined.

I know the exact width of the scene at the screen plane (20 pm), and of course I also know the width of my screen itself (I measured it: 52 cm), so I can calculate a conversion between centimeters and Panda units.
If you want to get an impression:

The distance between my eyes is around 6 cm. Using my derived conversion I get around 2.3 pm (Pandameter).
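Just to make the conversion explicit, here is a minimal sketch of that calculation in plain Python (the values are the ones from this post; the helper name `cm_to_pm` is made up):

```python
# Unit conversion between real-world centimeters and Panda units ("pm"),
# based on the screen-plane measurements quoted above.
SCENE_WIDTH_PM = 20.0   # width of the scene at the screen plane, in pm
SCREEN_WIDTH_CM = 52.0  # measured physical screen width, in cm

def cm_to_pm(cm):
    """Convert real-world centimeters to panda meters via the screen-plane ratio."""
    return cm * SCENE_WIDTH_PM / SCREEN_WIDTH_CM

eye_distance_pm = cm_to_pm(6.0)  # ~2.31 pm for a 6 cm interocular distance
```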

Unfortunately the effect doesn’t work at all with this calculation, but I get quite good results with the Panda default configuration, which uses a value of 0.2 for the interocular distance.

I set the convergence distance to 30 pm, which yields good results, but I’m not yet convinced of the “tuning” so far.

Now where is my error?
How do the calculations of the interocular and convergence distance work?

best regards,

Hmm, I’m not sure that the width of your screen and the width of your scene at the screen plane are necessarily proportionate. It also depends on your field of view and on the distance from your eyes to the screen. Finally, it depends on psychological references within the game: if you have a human figure, for instance, and he’s about 1.8 pm tall, then you’re going to end up with the psychological feeling that 1 pm == 1 m. But if your human figure is about 6 pm tall, then you’re going to end up with the feeling that 1 pm == 1 ft. The interocular distance will vary accordingly.

Bottom line, though, is just to ballpark it until it looks good. You’re not sending a shuttle to Mars, so it doesn’t have to be precisely calculated and measured, it just has to look good.


hmm, ok, I think I see your point, but:

We want to create the feeling that the player is looking into a box (maybe you could also take a look at the screenshot).
That means the camera is adjusted dynamically using the headtracking information.
But not only its position: its frustum is also adjusted (with setFrustumFromCorners), so that the field of view always starts at the edges of our box.
That means the width of the box IS actually always congruent with the edges of my screen. Therefore the width of my scene (which represents the transparent front wall of the box) is actually always proportional to the width of my screen.

So I’m quite sure that I should be able to calculate a precise conversion between pm (game units) and cm (real-world units). We already use this approach successfully for the headtracking.

As we also have to adjust the camera position dynamically, and therefore (I think) also the convergence plane, I’d really like to know how it works. But so far I absolutely don’t understand where the default value of 0.2 pm comes from and, in particular, why it works so well. Unfortunately the API reference isn’t very detailed on this point.

Your last point is right: one can achieve acceptable results by just trying things out. But that isn’t actually so easy, given that we have to decide which plane in the game scene is going to be “sharp”. If I see it right, this is only possible for one plane.

Nevertheless, thanks a lot to all of you for the help so far; we also built the whole stereo camera setup on an example from drwr in another topic :slight_smile:

Hmm, well, the default value of 0.2 simply comes from the assumption that 1 pm == 1 foot, which is a common convention within Disney. 6 cm ~= 0.2 feet. But there’s nothing in the projection matrices that assumes you are using feet or any other unit, so I don’t know why 0.2 works so well in your own scene.
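The unit arithmetic behind that default is easy to check; a quick pure-Python sanity check (nothing Panda-specific):

```python
# 1 foot = 30.48 cm, so a 6 cm interocular distance expressed in feet:
interocular_ft = 6.0 / 30.48
# roughly 0.197, i.e. close to Panda's default interocular distance of 0.2
```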

You can find the calculation in the Panda source code in panda/src/gobj/perspectiveLens.cxx, in the method compute_projection_mat(). There you will see the computation for the nominal center view:

_projection_mat = get_lens_mat_inv() * canonical * get_film_mat();


  • get_lens_mat_inv() is the inverse of the lens transform; e.g. lens view direction. In a normal scene this is identity, but when you use setFrustumFromCorners(), this is where the computed distorting transform shows up.

  • canonical is the perspective transformation based on focal length, near/far planes, and so on.

  • get_film_mat() is the transform from the canonical perspective transform to the film plane, which includes the film size and offset parameters. In a normal scene this is only a scale.

For the left and right views, we modify the above with:

LVector3f iod = _interocular_distance * 0.5f * LVector3f::left(_cs);
_projection_mat_left = get_lens_mat_inv() * LMatrix4f::translate_mat(-iod) * canonical * get_film_mat();
_projection_mat_right = get_lens_mat_inv() * LMatrix4f::translate_mat(iod) * canonical * get_film_mat();
LVector3f cd = (0.25f / _convergence_distance) * LVector3f::left(_cs);
_projection_mat_left *= LMatrix4f::translate_mat(cd);
_projection_mat_right *= LMatrix4f::translate_mat(-cd);

You can find similar formulas for stereo pair transforms on the net.
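To get a feel for what the two shift terms do, here is a simplified scalar model of a parallel-axis stereo pair with image-shift convergence. This is not Panda's exact matrix math (the focal scale `f` and the way the convergence shift is folded in are simplifications of my own): the on-screen parallax vanishes exactly at the convergence plane and grows for points nearer or farther away.

```python
def parallax(d, iod=0.2, convergence=30.0, f=1.0):
    """Horizontal on-screen parallax of an on-axis point at camera distance d.

    Simplified model: each eye is offset sideways by iod/2 before a
    pinhole projection with focal scale f, then the image is shifted so
    that points at distance 'convergence' land exactly on top of each other.
    """
    return f * iod * (1.0 / convergence - 1.0 / d)

# parallax(30.0) == 0.0: points in the convergence plane fuse perfectly;
# nearer points get crossed (negative) parallax, farther points positive.
```

This matches the observation later in the thread: only one plane can have zero offset between the left and right images, and the offset grows for everything in front of or behind that plane.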


Hmm, probably I have to get into that a little bit more :stuck_out_tongue:

Unfortunately I’m also a beginner concerning stereo viewing…
Should it be possible to get every plane in the scene “sharp”, at every distance from the camera?

So far I’m only ever able to focus on one plane; the two images of every plane behind and in front of it have a small offset.
This starts to get annoying as soon as some objects are far away from my convergence plane.

EDIT: ok, update: I think the small offset I see between the images isn’t an error in the calculations; rather, Panda’s red-cyan colors don’t exactly match those of our glasses ^^.

Nevertheless I still don’t understand the interocular distance thing… going to have a look at the code…