[SOLVED] Scenegraph units and camera lens units

How do Panda scene graph units relate to camera lens units? I know the scene graph is unitless (it could be feet, meters, centimeters, whatever), but are the camera lens intrinsics always specified in millimeters, or should they be set in the same units as the world?

I’m setting up head tracking and want objects on the projection screen to appear the same size that they would be in the real world. I’m using this formula to do it:

d = tracked distance from head (eye) to screen
h = height of screen
fov = 2 * atan(h / (2*d))

With a screen height of 7.5 feet and the viewer standing 5 feet away, that gives me an FOV of 1.287, or around 73 degrees.
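For reference, here is that arithmetic as plain Python (no Panda involved; the values simply restate the formula and numbers above):

import math

d = 5.0   # distance from eye to screen, in feet
h = 7.5   # height of screen, in feet

fov = 2 * math.atan(h / (2 * d))
print(fov)                # about 1.287 (radians)
print(math.degrees(fov))  # about 73.7 degrees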

I place the camera at (0, 0, 0) and draw a 1-unit cube at (0, 5, 0). Now, I'm not sure whether that's supposed to be a 1-foot, 1-meter, or 1-whatever cube, but it's enormous and fills the whole screen. Do I need to set those camera parameters in millimeters instead? And if so, should I be positioning objects in millimeters as well?


Panda doesn’t convert scene graph units to any other kind of units. The units mean whatever you decide them to mean.

Hmm… Where are you getting your cube? Is it a model made elsewhere? Can you check that it is in fact 1 Panda-unit along each side?

I made it in Maya. The vertices in the .egg go from -0.5 to 0.5 on each axis, so it should be 1 unit on a side, and there's no transform (a quick bounds check from Python is sketched after the excerpt):

<Group> pCube1 {
  <VertexPool> pCubeShape1.verts {
    <Vertex> 0 {
      -0.5 -0.5 0.5
      <Normal> { 0 0 1 }
      <RGBA> { 0.5 0.5 0.5 1 }
    }
    <Vertex> 1 {
      0.5 -0.5 0.5
      <Normal> { 0 0 1 }
      <RGBA> { 0.5 0.5 0.5 1 }
    }
    <Vertex> 2 {
      0.5 0.5 0.5
      <Normal> { 0 0 1 }
      <RGBA> { 0.5 0.5 0.5 1 }
    }
etc...
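Incidentally, the model's extent can also be confirmed from Python rather than by reading the .egg by hand; a minimal sketch, assuming the same cube.egg file:

from direct.showbase.ShowBase import ShowBase

base = ShowBase()
cube = base.loader.loadModel("cube.egg")

# getTightBounds() returns the min and max corners of the model's
# axis-aligned bounding box; their difference is the size along each axis.
lo, hi = cube.getTightBounds()
print(hi - lo)   # should be roughly (1, 1, 1) for a 1-unit cube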

So, if the units are arbitrary and can mean anything, let's say I choose feet. That gives me these parameters:

cube dimensions: 1’ x 1’ x 1’
screen dimensions: 10’ wide x 7.5’ high

The eye is at the origin, standing at a distance of 5 feet from the screen, so the screen position is (0, 5, 0).

The cube is placed at the center of the screen at zero parallax: (0, 5, 0).

I'm rendering in mono, so I'm not dealing with interocular distance yet.

On the screen, the cube should appear the same size as a real 1' x 1' x 1' cube placed at the screen surface. Or, put another way, at 1024x768 it should be about 100 pixels wide (the screen is 10 feet wide, so 1024 px / 10 ft ≈ 102 px per foot). But it's ginormous and fills the entire screen.


import math

cube = loader.loadModel("cube.egg")
cube.reparentTo(render)
cube.setPos(0, 5, 0)

d = 5.0   # distance from eye to screen
h = 7.5   # height of screen
fov = 2 * math.atan(h / (2 * d))   # calculate the field of view

base.camLens.setFocalLength(d)
base.camLens.setFov(fov)

I'm not sure that it's your problem, but the documentation for "setFov" indicates that the single-float form sets the horizontal field of view, not the vertical. Setting both the horizontal and vertical fields of view is done with one of the two other versions of the method, which take a point or a pair of floats instead of a single float. Note that those are stated to change the aspect ratio (understandably, I feel).

On a related note, the calls to "setFocalLength" and "setFov" appear to be redundant: the former is described as an alternate way of specifying the field of view, I believe. (Whether it would act as you expect I do not know; my own knowledge of such things is a little lacking there, I fear.)
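If it helps, the two forms look roughly like this (a minimal sketch with made-up angles, assuming the standard ShowBase camLens):

from direct.showbase.ShowBase import ShowBase

base = ShowBase()

# Single float: sets the horizontal field of view only (in degrees).
base.camLens.setFov(60.0)

# Two floats: sets horizontal and vertical FOV together;
# note that this also changes the lens's aspect ratio.
base.camLens.setFov(60.0, 45.0)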

So, um. Radians to degrees. Yup: setFov() expects degrees, and I was passing it the raw atan result in radians. Solved. :blush:
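For anyone finding this later, here is a minimal sketch of the corrected setup, using the numbers from earlier in the thread and the two-float setFov form mentioned in the previous reply (the missing piece was converting the atan result from radians to degrees):

import math
from direct.showbase.ShowBase import ShowBase

base = ShowBase()

cube = base.loader.loadModel("cube.egg")   # the 1-unit cube from the .egg above
cube.reparentTo(base.render)
cube.setPos(0, 5, 0)

d = 5.0    # distance from eye to screen, in feet
w = 10.0   # width of screen, in feet
h = 7.5    # height of screen, in feet

# setFov() expects degrees, so convert the atan results from radians.
hfov = math.degrees(2 * math.atan(w / (2 * d)))   # about 90 degrees
vfov = math.degrees(2 * math.atan(h / (2 * d)))   # about 73.7 degrees
base.camLens.setFov(hfov, vfov)

base.run()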