# Rendering a 360-degree panorama

Hello, I’m a college student attempting to create a special relativity simulator for my physics class. I’m using Panda3D and learning it as I go. So far, my program creates a lattice of objects that move from in front of the camera to behind it, then jump back in front and repeat, creating an effectively infinite lattice that the camera appears to move through. I’m having trouble finding a way to create the relativistic distortions, though.

The effect I want to simulate is the aberration of light. Basically, things bunch forward: if you’re traveling fast enough, something to the right of the viewer (90 degrees from forward) would appear to be in front of the viewer (say, 40 degrees from forward). Something at, say, 150 degrees from forward, i.e., behind the camera, would appear at maybe 70 degrees from forward, in front of the camera. The strength of this effect increases with velocity, so an object at 90 degrees appears at 90 degrees at zero velocity and approaches an apparent angle of 0 degrees as the velocity approaches the speed of light. (This page explains the effect and has pictures and diagrams from its own simulation starting halfway down, if you’re curious.)
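For concreteness, the effect described above is captured by the standard relativistic aberration formula, cos θ′ = (cos θ + β) / (1 + β cos θ), where β = v/c and angles are measured from the direction of motion. A minimal sketch (the function name is just illustrative):

```python
import math

def apparent_angle(theta, beta):
    """Relativistic aberration: map the rest-frame angle theta
    (radians from the direction of motion) to the angle seen by
    an observer moving at beta = v/c."""
    return math.acos((math.cos(theta) + beta) / (1 + beta * math.cos(theta)))

# An object at 90 degrees gets pulled forward as beta grows:
for beta in (0.0, 0.5, 0.9, 0.99):
    print(beta, math.degrees(apparent_angle(math.radians(90), beta)))
```

The β = 0.5 case reproduces the textbook result that an object at 90 degrees appears at 60 degrees, and as β → 1 the apparent angle heads toward 0, matching the behavior described above.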

The way I was planning to do this is to use a FilterManager to apply the relativistic distortions in a post-processing fragment shader. The shader will have the current velocity passed in and use it to warp the image so the right things appear on-screen. Since things behind the camera need to appear in front of the camera, the shader needs some sort of panoramic, 360-degree view rendered to a texture as its input, and I’m at a loss as to how to produce one. I’ve tried using base.camLens.setFov(), but a field of view of 185 degrees seems to produce just a mirror image of a 175-degree field of view, so I can’t get a 360-degree image that way.
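One way to structure that warp, sketched here as an assumption rather than the poster's actual code: for each output pixel, invert the aberration formula to find which rest-frame direction to sample from the panorama, using cos θ = (cos θ′ − β) / (1 − β cos θ′). A CPU-side sketch (the function name is illustrative; the real version would live in the fragment shader):

```python
import math

def source_angle(theta_apparent, beta):
    """Invert the aberration map: given the on-screen (aberrated)
    angle, recover the rest-frame angle to sample from the panorama."""
    c = math.cos(theta_apparent)
    return math.acos((c - beta) / (1 - beta * c))

# At beta = 0.5, a pixel 60 degrees off-axis samples the scene at 90 degrees:
print(math.degrees(source_angle(math.radians(60), 0.5)))
```

This is just the forward formula solved for θ; a pixel near the center of the screen ends up sampling directions well behind the camera at high β, which is exactly why the input texture has to cover the full sphere.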

Can anyone help me find a way to render a 360-degree panorama? Or, since I’m new to Panda, is there a completely different, better way to go about this that I’m not aware of?

Thank you!

You can’t set the field of view on a single camera to 180 degrees or larger; the math falls apart. In fact, you don’t really want to set it much larger than about 100 degrees, since beyond that you’re rendering too many pixels you don’t care about, and hardly any for the center of the view.
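For the curious, the mirror-image behavior with setFov(185) follows directly from planar perspective projection: a point θ degrees off-axis lands at a screen coordinate proportional to tan θ, which diverges at 90 degrees and flips sign beyond it. A quick illustration in plain Python:

```python
import math

# A planar perspective projection places a point theta degrees off-axis
# at a screen coordinate proportional to tan(theta). The half-angle must
# stay below 90 degrees, or tan() flips sign.
for half_fov in (87.5, 92.5):  # halves of the 175- and 185-degree FOVs
    print(half_fov, math.tan(math.radians(half_fov)))
```

The edge of a 185-degree frustum projects to the same coordinate as the edge of a 175-degree one, just negated, which is why the wider view looks like a mirror image of the narrower one.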

For wider fields of view, you need to go with multiple cameras. One common way to do this is with a cube map, which is described in the manual.
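Once the six cube faces are rendered, a shader samples them by direction; GLSL and Cg do this face selection internally in their cube-texture lookups, but the logic can be sketched on the CPU. This is a generic sketch of the standard face-selection math, not Panda3D's API, and the face names and axis conventions here are illustrative:

```python
def cube_face_uv(x, y, z):
    """Map a 3D direction to a cube-map face name and (u, v) in [0, 1].
    Face naming and axis conventions are illustrative, not Panda3D's."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:           # dominant X: +x or -x face
        face, sc, tc, ma = ('+x', -z, -y, ax) if x > 0 else ('-x', z, -y, ax)
    elif ay >= az:                      # dominant Y: +y or -y face
        face, sc, tc, ma = ('+y', x, z, ay) if y > 0 else ('-y', x, -z, ay)
    else:                               # dominant Z: +z or -z face
        face, sc, tc, ma = ('+z', x, -y, az) if z > 0 else ('-z', -x, -y, az)
    return face, 0.5 * (sc / ma + 1), 0.5 * (tc / ma + 1)

print(cube_face_uv(0, 0, -1))  # a direction straight behind a -z-forward camera
```

The point is that every direction, including those behind the camera, maps to some face and texel, which is what makes a full 360-degree lookup possible from a cube map.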

David

Hm… that brings up memories of a Quake-based engine I once played with. It allowed a FOV of more than 360 degrees, but it wasn’t a rectangular projection; I think it was spherical. Still, it might work for you. As drwr pointed out, a cubemap plus a shader that does the distortion or lookup should do the trick.

Sorry for the huge delay here.

Thanks for the cubemap suggestion. I’ve tried implementing it, but where I believe my shader should be getting the cubemap as a texture, it’s instead getting what seems to be just the texture from the stock smiley-face model (which I’m using for my lattice of objects). I’m utterly bamboozled as to how this is happening, beyond knowing that I must be doing something wrong. Would someone mind taking a look? My Python script is here and my shader is here. (Space toggles the post-processing shader, which also swaps the green and blue color values to make it immediately obvious whether or not it’s running, though that’s not really necessary at the moment.)

Thank you very much!

EDIT: I talked to my professor today and he’s become confident enough in his math to take a different approach that won’t require shading a 360-degree panorama. I feel optimistic that this new approach will work nicely, so please don’t take time to help me with the old approach that likely won’t be used.