Multiple cameras around a central point

Hello Panda community,

I am new to Panda and am close to giving up experimenting, so maybe somebody could help me…

I need to render the views from x-number of cameras side by side, so that a panoramic view is formed all the way around a central point.

I will then need to render a second image by taking the panoramic image and putting it through a virtual angular amplification mirror (so that it emerges as a kind of doughnut). In case you are wondering, this will eventually be projected onto the same type of mirror in real life to simulate virtual reality.

I am not sure whether the second task can be done using Panda, but I suppose that if I could extract the pixel data of the panoramic image from the first task, I could use another program to transform it the way I need. However, if it can be done, any help would be appreciated…!

Thank you for your help.
I am using Panda3D SDK 1.7.2 and I’ve been playing around with the 3D grass environment from the Panda tutorial.

Best Wishes,
Pawel

Sure, we do this sort of thing all the time. You want to render your first set of images onto offscreen buffers, and then apply the resulting images as textures onto a mesh that you construct of the appropriate shape for the output window.

There is plenty of information about offscreen rendering in the manual, the samples, and the forums.
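The basic pattern looks roughly like this (a minimal, untested sketch; the card is just a stand-in for whatever projection mesh you end up building):

import direct.directbase.DirectStart
from pandac.PandaModules import NodePath, CardMaker

# Render a separate scene graph into an offscreen buffer...
buf = base.win.makeTextureBuffer("offscreen view", 512, 512)
scene = NodePath("offscreen scene")
cam = base.makeCamera(buf)
cam.reparentTo(scene)

environ = loader.loadModel("models/environment")   # any scene content will do
environ.reparentTo(scene)
environ.setPos(-8, 42, 0)

# ...and apply the resulting texture to a mesh in the main window.
cm = CardMaker("screen")
cm.setFrame(-1, 1, -1, 1)
card = render.attachNewNode(cm.generate())
card.setPos(0, 5, 0)
card.setTexture(buf.getTexture())

run()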

David

Thank you David.

I actually had no idea where to begin for the second task so now I can search for the appropriate tools that you mentioned.

However, I am currently still stuck on actually getting the 360-degree panoramic view in the first place…any ideas? (I’ve actually seen your replies to other messages on this, but I haven’t been able to use your code to achieve it…)

BW,
Pawel

What do you mean? When you create an offscreen buffer, you need to assign a camera to it. If you create a series of buffers, you can assign a camera to each one. If you parent all of the cameras to the same node, but rotate each of them to face a different direction, then you have created a panoramic series: each buffer shows the view in a different direction.
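In rough code, something like this (an untested sketch; the 45-degree spacing and buffer sizes are just examples):

import direct.directbase.DirectStart
from pandac.PandaModules import NodePath

# One small buffer per camera; all eight cameras share a common rig node
# and are rotated in 45-degree steps to cover the full circle.
rig = NodePath("panorama rig")
rig.reparentTo(render)            # or whatever scene the cameras should see

buffers = []
for i in range(8):
    buf = base.win.makeTextureBuffer("view %d" % i, 256, 256)
    cam = base.makeCamera(buf)
    cam.reparentTo(rig)
    cam.setHpr(45 * i, 0, 0)
    cam.node().getLens().setFov(45)   # each camera covers one 45-degree slice
    buffers.append(buf)

# Moving or rotating rig now moves the whole panoramic series at once.
run()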

David

Genius, thank you!!! I understand now…!

Thanks again David.

BW,
Pawel

hi again,

so I am stuck :(

I’ve managed to generate an offscreen buffer with 8 cameras giving a panoramic view and I’ve parented the grass environment from the Panda tutorial to the buffer. I’ve also parented the teapot (which is the mesh I want to use) to the default window.

I wanted to get the texture from the buffer onto the teapot. However, if possible, I wanted this to come out like it would if I were generating a dynamic cube map to apply a texture to the teapot.

I have no idea how to do this…maybe it’s better to have 8 buffers as opposed to 8 cameras and one buffer…? Any ideas??

Thank you so much!!!
Pawel

code so far:

import direct.directbase.DirectStart
from pandac.PandaModules import *

# One offscreen buffer split into eight side-by-side display regions,
# one camera per region.  All cameras sit at the origin of a separate
# scene graph ("new render") and are rotated in 39-degree steps.
mainWindow = base.win
mybuffer = mainWindow.makeTextureBuffer("My Buffer", 512, 512)
altrender = NodePath("new render")

cameras = []
for i in range(8):
    cam = base.makeCamera(mybuffer, displayRegion=(i / 8.0, (i + 1) / 8.0, 0, 1))
    cam.reparentTo(altrender)
    cam.setPos(0, 0, 0)
    cam.setHpr(-39 * i, 0, 0)
    cameras.append(cam)

# The grass environment lives in the offscreen scene graph...
environ = loader.loadModel("models/environment")
environ.reparentTo(altrender)
environ.setScale(0.25, 0.25, 0.25)
environ.setPos(-8, 42, 0)

# Buffer viewer placement (not enabled here).
base.bufferViewer.setPosition("lrcorner")
base.bufferViewer.setCardSize(0.5, 0.0)

# ...while the teapot (the mesh the panorama should end up on) sits in the
# default window's scene graph.
teapot = loader.loadModel('teapot.egg')
teapot.reparentTo(render)
teapot.setScale(15, 15, 15)
teapot.setPos(-7, 150, -10)

run()

You definitely need a different buffer for each camera. Assigning 8 cameras to the same buffer gives you nothing: all the cameras will render on top of each other, and you’ll just be left with the final results of the last camera.

If your goal is just to apply a cube map to the teapot, though, perhaps you should study the page in the manual called Dynamic Cube Maps, in which exactly this operation is done. That page recommends the use of the built-in function base.win.makeCubeMap(), which is implemented in C++, but it’s basically creating six buffers with six cameras facing in six different directions.
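For reference, that manual page’s approach boils down to something like this (an untested sketch here, building on the teapot from your snippet above):

# Rough sketch after the "Dynamic Cube Maps" manual page; assumes the
# DirectStart/PandaModules imports and the teapot from the code above.
rig = NodePath('cube map rig')
cubeBuffer = base.win.makeCubeMap('env map', 64, rig)   # six buffers, six cameras
rig.reparentTo(teapot)        # the cameras render the scene from the teapot's position
teapot.setTexGen(TextureStage.getDefault(), TexGenAttrib.MWorldCubeMap)
teapot.setTexture(cubeBuffer.getTexture())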

David

Yes, I was wondering what was happening with the images from the cameras…!

The thing is, I need the teapot to be stationary while still being able to move the (grass) environment – a sort of rear-view window effect. So I want to move around in the environment and have the frames projected onto the stationary mesh, which anyone can then just watch while I move.

I’ve tried the dynamic cube map and I managed to get the ‘reflection’ effect on objects, but I am still moving relative to those objects. I would essentially want to just see the environment moving on one side of the object (which is why I was experimenting with having the teapot in the default window on its own with the code above…).

Please feel free to tell me I am talking nonsense, since I am only getting used to Panda and programming in general…and thank you again for your help!

BW,
Pawel