Embedding SeeFront SDK into Panda3D

Hi,

I’m trying to integrate the SeeFront SDK for an autostereoscopic display. It basically takes the scenes that have been rendered into the buffers and binds them to a left and a right texture.

So I have to edit the functions in Panda that take care of rendering the scene into the buffers. I think I have to edit the ones for quad-buffer stereo, since I need a left and a right scene.

What would be the best place to start, and where can I find the mentioned functions in the Panda source?

Thanks for any help!

Does this SDK actually require you to create quad-buffer stereo, or will any two offscreen buffers do? Panda can certainly create ordinary offscreen buffers (and bind them to the left and right eyes of a stereo camera) without any special-purpose coding.

For that matter, Panda can then bind them to a left and right texture for you. It’s up to you to decide what to do with those textures. If you have an autostereoscopic display that requires the scene to be rendered in alternating vertical strips of left and right views, you can construct a mesh in Panda to render your two textures in that way, without the need for any third-party SDK.

David

Hum, that sounds quite promising.
I think two offscreen buffers should also do. I have to bind them to some special “textures”, which the SeeFront SDK then slices into alternating strips. At least that’s what it looks like.
So I actually need alternating vertical strips, but we also have a face-tracking API that the SDK uses to adjust the strips behind the lenticular lens when the user moves in front of the display. If Panda could adjust the strips with the face-tracking data, how would I set this up? And if it can’t, how could I access the buffers and bind them to the SeeFront textures?

Thanks a lot for your help!

In OpenGL (and DirectX), you don’t actually slice up a texture into strips. In fact, you don’t actually render a texture directly. You render a texture by applying it to a mesh, and rendering the mesh. If you want to slice a texture into strips for rendering, you apply it to a mesh that has been sliced into strips.

If you have two textures that you want to render in alternating vertical strips, you apply them to two meshes, which each consist of vertical strips alternating with empty space, and then you place those two meshes together so that they each fill in the other’s empty space.

I have no doubt that this is what the SeeFront SDK is doing for you behind the scenes. This is really the only sensible way to render alternating vertical strips of two offscreen buffers.
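
If you did decide to build the strips yourself in Panda, it could be done in C++ roughly like this (an untested sketch; make_strip_comb and the strip count are made up for illustration, and in practice you would want one strip per pixel column):

#include "cardMaker.h"
#include "nodePath.h"

// Build one "comb" of vertical strips covering every other column of the
// frame, with matching texture coordinates.  Two combs (offset 0 and 1),
// one per eye texture, interleave to fill the whole screen.
NodePath make_strip_comb(const std::string &name, int num_strips, int offset) {
  NodePath comb(name);
  float strip_w = 2.0f / num_strips;            // frame runs from -1 to 1
  for (int i = offset; i < num_strips; i += 2) {
    CardMaker cm("strip");
    float left = -1.0f + i * strip_w;
    cm.set_frame(left, left + strip_w, -1.0f, 1.0f);
    cm.set_uv_range(LTexCoordf(i / (float)num_strips, 0.0f),
                    LTexCoordf((i + 1) / (float)num_strips, 1.0f));
    comb.attach_new_node(cm.generate());
  }
  comb.flatten_strong();   // collapse the strips into a single Geom
  return comb;
}

You would apply your left texture to the offset-0 comb, your right texture to the offset-1 comb, and parent both under render2d.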

Now, what’s the best way to integrate this with Panda? Is it to hand a pair of textures to SeeFront and let it do the work? Or is it to create the meshes yourself and do the required work in Panda? That I can’t tell you without knowing more about how, precisely, the SeeFront SDK expects to work. Does it run as an OpenGL callback, where you pass in the texture IDs and expect it to draw the mesh? Or some other, more complex, mechanism?

You can easily hook into an OpenGL callback from Panda, but doing so does (of course) limit you to running in OpenGL, not DirectX.

David

I can’t quite remember exactly how it worked. There were some texture IDs and a glBindTexture(GL_TEXTURE_2D, textureIDarray[0]) call. The ID array was named a bit differently, but that’s the best I can recall. I’ll have to check tomorrow at work.
Doing the slicing in Panda would give me a lot more control, I guess, but I’m not sure I could get it to work properly with the tracking data. The SDK also offers some other functions that might be useful, so I would rather go with the texture-binding approach.

Just OpenGL would be fine since we don’t really need DirectX for anything.

I checked the SDK sample program again.
There is a so-called InterlaceHandle for the SeeFront interface and an array with room for two textures. The sample also checks for the framebuffer object extension to decide whether it can render the scenes directly into the textures or has to copy them over from the buffers.
It creates two empty textures in the array, each the same size as the render window, and passes them to the SeeFront interface by calling:

sfogl::setTextures(handle, textureArray, ...);
sfogl::setTextureSize(handle, width, height);

The handle gets initialized, a tracker callback is set up, and the window is prepared with:

sfogl::setScreen(handle, origin, size);

It then creates and renders a scene and copies the result into one of the textures with:

glBindTexture(GL_TEXTURE_2D, textureArray[0]);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);

Lastly it creates the interlaced scene:

sfogl::render(handle);

To me it seems that you can basically pass any two textures to the SeeFront object with the “sfogl::setTextures()” function.
So I guess the best way would be to let Panda bind the buffers to two textures, like you said, then pass them to the SeeFront object and do the other SeeFront stuff in my own program instead of editing the Panda source.

If you could tell me how to get Panda to render the two slightly offset views into the two textures, that would help me a lot.

EDIT: I found this entry in the manual: http://www.panda3d.org/manual/index.php/Low-Level_Render_to_Texture
I’ll try getting it to work. If that’s the wrong way please tell me.

Yes, that’s the right place to start. To make the two buffers render the left and right eyes of a stereo camera, you simply need to call set_stereo_channel() on the DisplayRegion for each buffer: one of them will be SC_left, and the other SC_right. Then assign the same camera to both of them.
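
In C++, that might look roughly like this (an untested sketch; I’m assuming "window" is your WindowFramework and "camera" is the NodePath of the Camera you want to render with):

PT(GraphicsOutput) main_win = window->get_graphics_output();
PT(GraphicsOutput) bufferL = main_win->make_texture_buffer("bufferL", 1280, 480);
PT(GraphicsOutput) bufferR = main_win->make_texture_buffer("bufferR", 1280, 480);

// One DisplayRegion per buffer, each bound to one eye of the same camera.
PT(DisplayRegion) regionL = bufferL->make_display_region(0, 1, 0, 1);
regionL->set_stereo_channel(Lens::SC_left);
regionL->set_camera(camera);

PT(DisplayRegion) regionR = bufferR->make_display_region(0, 1, 0, 1);
regionR->set_stereo_channel(Lens::SC_right);
regionR->set_camera(camera);

// The textures that end up holding the two rendered views.
PT(Texture) textureL = bufferL->get_texture();
PT(Texture) textureR = bufferR->get_texture();

The left/right separation itself comes from the camera’s lens; you can tune it with Lens::set_interocular_distance().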

Are you coding in C++ or Python? The API you’re quoting for SeeFront appears to imply a C++ interface.

You will need to make all OpenGL calls from within a draw callback, so that means all of your calls to the SeeFront SDK have to be made from within a draw callback (on the assumption that SeeFront will be making OpenGL calls). This is necessary because outside of a draw callback, you might not have an OpenGL context bound. Also, OpenGL is not inherently thread-safe.

The easiest way to make a call from within a draw callback is to create a CallbackNode and assign it into the scene graph, then use set_draw_callback() on that node. The draw callback will be called when the node is rendered.
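
Roughly like this (again an untested sketch; SeeFrontCallback is a hypothetical subclass of CallbackObject that you would write yourself):

PT(CallbackNode) cb_node = new CallbackNode("seefront");
NodePath cb_np = window->get_render().attach_new_node(cb_node);
cb_np.set_bin("fixed", 100);   // draw it after the rest of the scene
cb_node->set_draw_callback(new SeeFrontCallback(/* whatever it needs */));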

David

Yeah I’m coding in C++.
OK, I created two cameras and offset them a bit from the center, but I’ll try the stereo channel approach tomorrow.
I’m not sure whether the SeeFront SDK needs an OpenGL context; the functions look mostly independent, but I could be wrong.
I’ll just see what I can come up with and check what works.
Thanks again for your help.

All OpenGL calls require an OpenGL context to have been created and made current. OpenGL treats the current context as a kind of global state, which is why any code that makes OpenGL calls, including the SeeFront SDK, needs to run where Panda has its context bound, i.e. inside the draw callback.

David

I managed to get two textures with the two different views of the stereo camera now.
It seems that you have to pass the texture IDs to the SeeFront SDK, like you said before. I’m a bit confused about how I would link the Panda textures to the array, which just contains the texture IDs. You said you could easily hook into an OpenGL callback. Could you explain that a bit more to me?
In the OpenGL GLUT example they use

glGenTextures(1, &textureIDs[0]);
glBindTexture(GL_TEXTURE_2D, textureIDs[0]);
glTexImage2D(GL_TEXTURE_2D, 0, 4, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0x0);

The textures are passed to the SeeFront object with

sfogl::setTextures(sfHandle, textureIDs, 2, 0.0, 0.0, u, v);

How would I do this with Panda textures instead?

I found this via the forum search but I can’t get it to work:

PT(GraphicsStateGuardianBase) gsg1 = bufferL->get_gsg();
GLTextureContext *GLtexL = DCAST(GLTextureContext,
  textureL->prepare_now(gsg1->get_prepared_objects(), gsg1));
textureID[0] = GLtexL->_index;

It always says: “‘GLTextureContext’ : undeclared identifier”.

You have to include glgsg.h to define GLTextureContext.

To make an OpenGL callback, you have to create a subclass of CallbackObject and attach it to a CallbackNode which you put in your scene. You should override CallbackObject::do_callback() to do the appropriate thing.
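
Here’s a rough, untested sketch of what such a subclass might look like, combining the prepare_now() lookup you found with the sfogl:: calls from your sample program. SeeFrontCallback, sf_handle, and the constructor arguments are just placeholder names of my own:

#include "callbackObject.h"
#include "geomDrawCallbackData.h"
#include "graphicsStateGuardian.h"
#include "texture.h"
#include "glgsg.h"   // defines GLTextureContext

class SeeFrontCallback : public CallbackObject {
public:
  SeeFrontCallback(Texture *tex_l, Texture *tex_r, GraphicsStateGuardian *gsg) :
    _tex_l(tex_l), _tex_r(tex_r), _gsg(gsg) {}

  virtual void do_callback(CallbackData *cbdata) {
    // We are inside the draw, so the OpenGL context is current here.
    // Look up the OpenGL texture IDs of the two Panda textures.
    GLTextureContext *ctx_l = DCAST(GLTextureContext,
      _tex_l->prepare_now(_gsg->get_prepared_objects(), _gsg));
    GLTextureContext *ctx_r = DCAST(GLTextureContext,
      _tex_r->prepare_now(_gsg->get_prepared_objects(), _gsg));
    GLuint texture_ids[2] = { ctx_l->_index, ctx_r->_index };

    // Hand the IDs to SeeFront and let it draw the interlaced image,
    // following your sample program, e.g.:
    // sfogl::setTextures(sf_handle, texture_ids, 2, 0.0, 0.0, u, v);
    // sfogl::render(sf_handle);

    // Tell Panda the callback may have changed the GL state behind its back.
    DCAST(GeomDrawCallbackData, cbdata)->set_lost_state(true);
  }

private:
  PT(Texture) _tex_l, _tex_r;
  GraphicsStateGuardian *_gsg;
};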

David

Thanks a lot, I get two interlaced images now.
But somehow the viewport seems to be in the wrong position. When I apply the texture to a card created with CardMaker and “set_frame_fullscreen_quad()”, it’s shifted to the left quite a bit, leaving a black section on the right.

It also seems like the textures repeat infinitely into the distance, causing the frame rate to drop dramatically, and I sometimes get black areas in the scene itself. Here’s a screenshot:

You can see the scene repeating toward the back left, and some covers are missing or just black.

Are you sure you are applying the texture from an offscreen buffer and not from the main window? You can examine the results of your offscreen renders by putting “show-buffers 1” in your Config.prc file; this will display the contents of the offscreen buffers in a little box in the corner of the main window.

David

I can’t remember the exact code I used (I’ll have to check tomorrow at work), but it’s something like this:

PT(GraphicsOutput) bufferL = window->get_graphics_output()->make_texture_buffer("bufferL", 1280, 480);
PT(Texture) textureL = bufferL->get_texture();

DisplayRegion *regionL = bufferL->make_display_region(0, 1, 0, 1);
regionL->set_stereo_channel(Lens::SC_left);
regionL->set_camera(camera);
regionL->set_active(true);

The window is created by

WindowFramework* window = framework.open_window();

The camera by

PT(Camera) cam = new Camera("Camera");
camera = window->get_camera_group()->attach_new_node(cam);

And the lens with

PT(PerspectiveLens) lens = DCAST(PerspectiveLens, cam->get_lens());
lens->set_interocular_distance(eyeSep);

EDIT: Updated code.

Ah, right, again I forgot you’re working in C++. The show-buffers trick is implemented in Python, so in C++ you’ll have to do it yourself, by creating an onscreen card and applying your texture to that card.

I still suggest doing this as a useful debugging tool. It is invaluable to be able to view the contents of your offscreen buffer directly. I’m not sure precisely what’s going wrong in your case, but this is where I’d start trying to figure it out.

David

Somehow the 2D coordinates seem to be messed up. I created the DisplayRegion with

PT(DisplayRegion) regionL = bufferL->make_display_region(0, 1, 0, 1);
regionL->set_stereo_channel(Lens::SC_left);
regionL->set_camera(camera);
regionL->set_active(true);

And the card with a CardMaker by

CardMaker cmL("left card");
cmL.set_frame(-1, 1, -1, 1);
NodePath cardL = window->get_render_2d().attach_new_node(cmL.generate());
cardL.set_texture(textureL);

But instead of getting a card filling the whole window I get this:

It only fills about the left two-thirds of the window, and the aspect ratio looks off.

When I set

PT(DisplayRegion) regionL = bufferL->make_display_region(0, 1.6, 0, 1);

it looks fine even with SeeFront enabled:

I just have to set

set_clear_color_active(true);
set_clear_depth_active(true);

for the DisplayRegions to get rid of the repeating textures then.

Panda is trying to create a power-of-two texture for you, by increasing the texture size up to the next power of two and framing your buffer region in the corner of that larger area. Many (older) graphics drivers require a power-of-two texture for good performance.

If you have a modern card, you can avoid this nuisance by setting “textures-power-2 none” in your Config.prc file.

If you want to support a wider variety of graphics cards, it’s best to create your texture buffer at a power-of-two size in the first place. Or you can use the Texture get_pad_[xy]_size() values to set your DisplayRegion to the appropriate size automatically.
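
For the last approach, something like this might do it (untested, and assuming I have the pad semantics right; the sizes are only meaningful once the buffer and its texture have actually been set up):

// Compute the DisplayRegion scale from the texture's pad region, instead of
// hard-coding 1.6.  With a 1280x480 buffer padded out to 2048x512,
// get_x_size() would be 2048 and get_pad_x_size() 768, giving 2048/1280 = 1.6.
PT(Texture) textureL = bufferL->get_texture();
float used_x = (float)(textureL->get_x_size() - textureL->get_pad_x_size());
float used_y = (float)(textureL->get_y_size() - textureL->get_pad_y_size());
float scale_x = (float)textureL->get_x_size() / used_x;
float scale_y = (float)textureL->get_y_size() / used_y;
PT(DisplayRegion) regionL = bufferL->make_display_region(0, scale_x, 0, scale_y);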

David

Ah ok, we need the 1280x480 resolution for the autostereoscopic display to work.
Since we probably won’t change the graphics card I’ll try the “textures-power-2 none” tomorrow.
Thank you very much for your help. It seems that everything is working fine now. I’m so glad I came across Panda3D, it’s a really awesome engine.

The “textures-power-2 none” setting worked perfectly.
I can set the DisplayRegion to (0,1,0,1) now:

PT(DisplayRegion) regionL = bufferL->make_display_region(0, 1, 0, 1);

And I don’t need the

set_clear_color_active(true); 
set_clear_depth_active(true);

anymore.

Thanks again.