Stereo off-screen buffer

From Stereo Display Regions

As of Panda3D 1.9.0, you may create a stereo off-screen buffer without special hardware support, assuming the card supports using multiple render targets (most modern cards do), by setting the stereo flag in the FrameBufferProperties object. Panda3D will automatically designate one of the draw buffers to contain the stereo view for the other eye. When binding a texture to the color attachment for render-to-texture, Panda3D will automatically initialize it as a multiview texture containing both left and right views.

Is there a good example of how to set up such a stereo off-screen buffer, render one pass, and save the frame for each eye to a separate image file? I spent some time reading the source code and the docs, but eventually got lost.

I’m not sure that there is such an example, but what you want to do is a regular render-to-texture set-up with these differences (a rough sketch putting them together follows the list):

  • Set the stereo flag in the FrameBufferProperties to True.
  • When adding a texture to the buffer using addRenderTexture, make sure you have called setNumViews(2) on that texture.
  • You want to use the RTM_copy_ram mode (or RTM_triggered_copy_ram, calling buffer.triggerCopy() explicitly) to get the image copied to RAM.
  • The texture object should contain two views; I think you should be able to call tex.write("test#.png", 0, 0, True, False) to write out both images to test0.png and test1.png, or explicitly call it two times as tex.write("test0.png", 0, 0, False, False) and tex.write("test1.png", 1, 0, False, False).
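Something along these lines ought to work, though I haven’t tested it; the model path, sizes and filenames are just placeholders:

from direct.showbase.ShowBase import ShowBase
from panda3d.core import (FrameBufferProperties, WindowProperties,
                          GraphicsPipe, GraphicsOutput, Texture)

base = ShowBase(windowType='offscreen')
model = base.loader.loadModel('misc/rgbCube')
model.reparent_to(base.render)
base.camera.set_pos(5, 5, 5)
base.camera.look_at(0, 0, 0)

fb_props = FrameBufferProperties()
fb_props.set_rgb_color(True)
fb_props.set_rgba_bits(8, 8, 8, 0)
fb_props.set_depth_bits(16)
fb_props.set_stereo(True)                      # 1. request a stereo buffer

win_props = WindowProperties.size(512, 512)
buf = base.graphicsEngine.make_output(
    base.pipe, 'stereo buffer', -100, fb_props, win_props,
    GraphicsPipe.BF_refuse_window, base.win.get_gsg(), base.win)

tex = Texture()
tex.set_num_views(2)                           # 2. one view per eye
buf.add_render_texture(tex, GraphicsOutput.RTM_copy_ram)   # 3. copy to RAM

dr = buf.make_display_region()                 # stereo region on a stereo buffer
dr.set_camera(base.cam)

base.graphicsEngine.render_frame()
tex.write('test#.png', 0, 0, True, False)      # 4. writes test0.png and test1.png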

Thank you for your answer. That’s mostly what I’m trying to do, except for the setNumViews() call - I didn’t know it was required. Still, I get a SIGSEGV trying to render a frame; the code I’m running is below:

import sys

import cv2
import numpy as np
from direct.showbase.ShowBase import ShowBase
from panda3d.core import FrameBufferProperties, WindowProperties, GraphicsPipe, \
    Vec3, Texture, GraphicsOutput, PerspectiveLens, NodePath, Thread, \
    loadPrcFileData

import matplotlib.pyplot as plt
loadPrcFileData('', 'support-threads #f')
print(Thread.isThreadingSupported())



props = FrameBufferProperties()
# Request 8 RGB bits, no alpha bits, and a depth buffer.
# fb_prop.setRgbColor(True)
# fb_prop.setRgbaBits(8, 8, 8, 0)
# fb_prop.setDepthBits(16)
props.set_rgb_color(True)
props.set_depth_bits(16)
props.set_stereo(True)

# Create a WindowProperties object set to 512x512 size.
win_prop = WindowProperties.size(512, 512)

# Don't open a window - force it to be an offscreen buffer.
flags = GraphicsPipe.BFFbPropsOptional | GraphicsPipe.BF_refuse_window

base = ShowBase(windowType='offscreen')
cube = base.loader.loadModel('misc/rgbCube')
cube.reparent_to(base.render)
base.camera.set_pos(5, 5, 5)
base.camera.look_at(0, 0, 0)

buffer = base.graphicsEngine.make_output(base.pipe, "stereo buffer", -100,
                                         props,
                                         win_prop, flags, base.win.getGsg(),
                                         base.win)

dp = buffer.makeDisplayRegion()
print type(dp)

t = Texture()
t.setNumViews(2)
# t.setFormat(Texture.FRgb8)
buffer.addRenderTexture(t, GraphicsOutput.RTM_copy_ram)

print t.getNumPages()
render = NodePath('alt render')
cam = base.makeCamera(buffer)
lens = PerspectiveLens()
lens.setFov(54.611362)
lens.setFocalLength(1.37795276)

cam.node().setLens(lens)
cam.node().setScene(render)
dp.setCamera(cam)

t.clear()
base.graphicsEngine.renderFrame() # SIGSEGV at this point
print 'PASS'

# tex = buffer.getScreenshot()
tex = t
image = np.asarray(memoryview(tex.getRamImage())).reshape(512, 512, 3)
image = np.flipud(image)

cv2.imshow('image', image)
cv2.waitKey(0)
print tex.getNumPages()

You’re on the right track, but there are several issues:

  • You should not call t.clear(). This clears all texture properties. Maybe clearRamImage() is what you intended, but that should not be necessary.
  • You should set the FrameBufferProperties to the right number of bits, so props.setRgbaBits(8, 8, 8, 0).
  • tex.getRamImage() returns both views concatenated; you’ll have to split the returned data into two halves or adjust your numpy reshape accordingly (see the sketch after this list).
  • There’s a bug in Panda where it accidentally sets the texture back to having only 1 view, then happily tries to read data into a now non-existent view, causing the crash. I’ll fix this.
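For the splitting, something along these lines should work (untested; I’m hard-coding the 512x512 size from your script, and it’s worth double-checking the view order and component count):

import numpy as np

# Untested sketch: split the concatenated RAM image into the two eye views.
n = t.getNumComponents()                # 3 or 4, depending on the final format
data = np.asarray(memoryview(t.getRamImage()))
views = data.reshape(2, 512, 512, n)    # one 512x512xn block per view
left, right = np.flipud(views[0]), np.flipud(views[1])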

To work around this bug, there is a different way you can extract texture data for now. Instead of RTM_copy_ram, set it to RTM_bind_or_copy, and after rendering a frame, call this:

success = base.graphicsEngine.extractTextureData(t, buffer.gsg)

Then, you can read out the RAM image as usual.
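Put together, the workaround would look something like this (untested, reusing your buffer and texture t):

# Bind the texture instead of copying it each frame, render once,
# then pull the data back explicitly and write one image per view.
buffer.addRenderTexture(t, GraphicsOutput.RTM_bind_or_copy)

base.graphicsEngine.renderFrame()
success = base.graphicsEngine.extractTextureData(t, buffer.gsg)
if success:
    t.write("test#.png", 0, 0, True, False)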

Finally, are you aware that Python 2 will reach EOL at the end of this year?


Thanks! I’ll try now. Re Python 2 - definitely! Thanks for the reminder, though. It just takes time, but I’m looking forward to it.

The SIGSEGV problems aside - does this approach benefit from the GPU’s ability to render to multiple targets? Or, in other words, is it worth bothering with set_stereo(True) on the FrameBufferProperties instance rather than rendering the scene twice, moving the camera to get the frame for the other eye?

Well, it mostly worked out, except that the view for the other eye is empty (black).

It’s probably worth mentioning that I’m running Panda 1.10.0.

Bah, this method is also bugged. I’m sorry about this; you’re a little off the beaten path here. Multiview textures are not quite as battle-tested as we’d like them to be yet. I have fixes lined up for both bugs here, and will be checking them in shortly. Are you OK with using a development build?

As for whether it’s better than rendering the scene twice: not really; in fact, the main benefit is that Panda will cull the scene just once. You can get the same effect by creating two lenses and setting up two display regions with differing lens index and target texture page, but then Panda will cull the scene twice.
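For reference, that manual setup would look roughly like this (an untested sketch: the buffer is created without the stereo flag but still has the 2-view texture attached, and the lens offsets are just placeholders for however you want to separate the eyes):

from panda3d.core import Camera, NodePath, PerspectiveLens, Mat4

# Two lenses on one camera, two display regions, each rendering into a
# different view (page) of the multiview texture.
cam_node = Camera('stereo-cam')
cam_np = NodePath(cam_node)
cam_np.reparent_to(base.camera)

for i, x_offset in enumerate((-0.03, 0.03)):   # placeholder eye separation
    lens = PerspectiveLens()
    lens.set_view_mat(Mat4.translate_mat(x_offset, 0, 0))  # one way to offset the eye
    cam_node.set_lens(i, lens)

    dr = buffer.make_display_region()
    dr.set_camera(cam_np)
    dr.set_lens_index(i)        # use the camera's i-th lens for this region
    dr.set_target_tex_page(i)   # write into the i-th view of the texture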

If you want to use a GPU-based technique to render both eyes simultaneously, you would actually need to set up a 2-D texture array with RTM_bind_layered and instance your geometry in a shader, assigning the layer index either in a geometry shader or in the vertex shader via an AMD extension.
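Setting up the buffer side of that would look roughly like this (untested; the shaders that duplicate the geometry and write gl_Layer are up to you):

from panda3d.core import Texture, GraphicsOutput

# A 2-layer texture array as the layered render target.
layer_tex = Texture()
layer_tex.setup_2d_texture_array(2)      # layer 0 = one eye, layer 1 = the other
buffer.add_render_texture(layer_tex, GraphicsOutput.RTM_bind_layered)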

Panda3D 1.10.0 is unstable and I highly recommend that you update to the latest bugfix release, 1.10.4.1. This version does not introduce incompatible changes, only bugfixes. However, I will try to release 1.10.5 soon with the aforementioned bug fixes.

The fixes have been checked in and you can benefit from them by grabbing a build from the buildbot:
https://buildbot.panda3d.org/downloads/76d6b7ce585edbbada55f37e6ddf7abeaa75c74b/

Or if you installed panda3d using pip, by upgrading using something like:

pip install -U --extra-index-url https://archive.panda3d.org/branches/release/1.10.x 'panda3d>=1.10.5.dev92'

Thank you so much! I confirm that both of your approaches work after updating Panda3D to the dev version. A quick follow-up question: is it possible for a GLSL program to distinguish between the two eye views? I mean something like

if (is_left_eye_rendering) {
// ...
} else {
// ...
}