projection window in a CAVE setup

I am interested in using Panda3D with a single back-projected wall and tracking hardware. My question concerns setting up the projection matrices. I am used to having the underlying software handle a setup where the projection window and viewpoint are not aligned (e.g. CAVElib). Can someone give me some advice about how to set up a floating viewpoint and a display window that is fixed relative to the viewpoint coordinate system?
Thanks
Alex

I’m assuming you’re talking about a moving eye point. Panda actually provides very easy moving-eye-point tools in the PerspectiveLens class via the lens offset, focal length, and film size:

setFilmSize()
setFocalLength()
and setFilmOffset()

This allows the moving eye point / head tracking to be mapped to real-world units, which is what tracking software usually gives you.
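
As a minimal sketch of what those three calls look like (the numbers are invented; I’m assuming you work in inches and drive the default camera’s lens):

    from direct.showbase.ShowBase import ShowBase

    base = ShowBase()
    lens = base.camLens  # the default camera's PerspectiveLens

    # All three values are in the same real-world units (inches here):
    lens.setFilmSize(53.0, 30.0)    # physical width and height of the screen
    lens.setFocalLength(36.0)       # viewer's distance from the screen plane
    lens.setFilmOffset(0.0, 0.0)    # how far the screen's center sits off the viewer's axis

With a zero offset and the focal length equal to the viewing distance this is just an ordinary on-axis view; the head-tracking task below keeps updating the last two values as the viewer moves.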

So read the manual on perspective lenses first: www.panda3d.org/manual/index.php/Lense … ld_of_View

Then try the following:

  • Set your film size to the physical size of your projection screen, in whatever units you decide to work in.

  • Set a NodePath at the position of your physical screen relative to your tracking origin, measured from the center of the screen. Or, if you want to make measurement and math easier, just put your world origin at the screen’s center.

  • Then make a task that updates the film offset of your lens every frame.

Here’s an example of what I mean

Here I assume that trackerNP’s position at any given frame is the position of the tracked head relative to the tracker’s origin. As for rootNP, you can replace that with render; I use it for more complex VR movement.

    def updateMovingEyePointCameraTask(self, task):
        # Per-frame task for the moving eye point.
        # Move the head node to the tracked position and align it with the screen.
        self.headNP.setPos(self.trackerNP.getPos(self.rootNP))
        self.headNP.setHpr(self.screenNP.getHpr(self.rootNP))
        # Express the head position in the screen's coordinate space.
        pos = self.headNP.getPos(self.screenNP)
        # Distance to the screen becomes the focal length; the lateral and
        # vertical offsets slide the film (the projection window) off-axis.
        self.lens.setFocalLength(-pos[1])
        self.lens.setFilmOffset(Vec2(-pos[0], -pos[2]))
        return Task.cont

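For completeness, here is a sketch of how the pieces above might be wired together. The node names match the task (rootNP, screenNP, trackerNP, headNP), but the film size and screen position are placeholder measurements, and parenting the camera to headNP is my assumption; the snippet above doesn’t show that part.

    from direct.showbase.ShowBase import ShowBase
    from direct.task import Task
    from panda3d.core import Vec2

    class MovingEyePointDemo(ShowBase):
        def __init__(self):
            ShowBase.__init__(self)

            self.rootNP = self.render   # or a dedicated node for more complex VR movement
            self.lens = self.camLens

            # Physical screen size in your working units (placeholder values).
            self.lens.setFilmSize(53.0, 30.0)

            # Screen pose relative to the tracking origin, measured from the screen's center.
            self.screenNP = self.rootNP.attachNewNode("screen")
            self.screenNP.setPos(0.0, 40.0, 30.0)   # placeholder measurement

            # This node is driven by whatever feeds you head positions (VRPN, Kinect, ...).
            self.trackerNP = self.rootNP.attachNewNode("tracker")

            # The head node carries the camera (my assumption about the original setup).
            self.headNP = self.rootNP.attachNewNode("head")
            self.camera.reparentTo(self.headNP)

            self.taskMgr.add(self.updateMovingEyePointCameraTask, "updateMovingEyePoint")

        def updateMovingEyePointCameraTask(self, task):
            self.headNP.setPos(self.trackerNP.getPos(self.rootNP))
            self.headNP.setHpr(self.screenNP.getHpr(self.rootNP))
            pos = self.headNP.getPos(self.screenNP)
            self.lens.setFocalLength(-pos[1])
            self.lens.setFilmOffset(Vec2(-pos[0], -pos[2]))
            return Task.cont

Run it with MovingEyePointDemo().run(), and set trackerNP’s position from your tracking source each frame (either in its own task or in the same one).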

Thank you.
This was exactly what I needed.
Between this, some of drwr’s stereo displayRegion code and the VRPN examples at CMU, I am in pretty good shape.

I am trying to do the same thing, but I’m not having any luck. Does anyone have some finished code that I could look at? I’m writing in C++ and using the Kinect as a tracker. I’ve tried several different methods but haven’t been able to get the desired results.

Please help.

Thanks.

Can you give us some more specific information about what, precisely, you are trying to do, what attempts you have made already, and in what ways these attempts have failed?

Simply asking “show me how to do X,” when X is such a large topic as eyepoint-corrected cave projection, isn’t likely to yield helpful results.

David

I’m trying to get my TV screen to work like a CAVE wall, so that I’m peering into the 3D world as if looking through a window. I’m using the Kinect to get the position and orientation of my head, and I’ve converted everything to inches. I’ve tried the above example, but the movement seems disproportionate left to right, up and down, and forward and backward. I set the film size to the size of my TV screen.

I’ve also tried manipulating the frustum using set_frustum_from_corners, but I don’t think I really understood how that function works.

Note:
I have a 61" 3DTV and I split the screen into two display regions (left and right eye) for side-by-side 3D. Maybe that is throwing things off?

If it works but the amount of movement seems wrong, check your unit conversion. If the scaling error is greater on X than on Z, it could be related to the left/right split format; try scaling X by 2 (or 0.5) in the setFilmOffset() call. To simplify matters, you could try disabling stereo first: you should still be able to tell when it’s working correctly.
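
As a sketch of that compensation in the task above, assuming each eye’s DisplayRegion covers half the film width (the 0.5 factor is only the guess described here; try 2.0 if the error goes the other way):

    # Only the X component of the film offset changes.
    pos = self.headNP.getPos(self.screenNP)
    self.lens.setFocalLength(-pos[1])
    self.lens.setFilmOffset(Vec2(-pos[0] * 0.5, -pos[2]))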

You may also find it useful to reveal the frustum with camera.showFrustum(). This will draw lines around the frustum so you can see what shape it has.

It won’t be useful for the camera you’re actually looking through, of course, because you will never see the lines in that camera (they’re by definition on the edge of your view), but you can watch the frustum distort from a third-party viewpoint.
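
One way to set that up, assuming a second window with its own camera serves as the third-party viewpoint (the openWindow() bookkeeping may need adjusting for your configuration):

    # Draw lines around the main camera's frustum.
    base.camNode.showFrustum()

    # Open a second window and use its camera as the third-party viewpoint.
    base.openWindow()
    debugCam = base.camList[-1]        # the camera created for the new window
    debugCam.reparentTo(render)
    debugCam.setPos(60, -60, 30)
    debugCam.lookAt(base.camera)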

David