Running without window, can't use self.win.getGsg() to create buffer

I’m trying to run a simulation on a remote machine that’s unable to create a window. If I try running the code that works on my laptop, I get

:ShowBase(warning): Unable to open 'offscreen' window.

so I’m trying to use the window-type none option. However, when I use that I have issues creating buffers.
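For context, this is roughly how I'm selecting the windowless mode in my PRC configuration (the same setting can also be applied at runtime with loadPrcFileData):

```
window-type none
```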

The code below is how I usually create a buffer (in a class that inherits from ShowBase). Now that there's no window, I get an error when I call self.win.getGsg() (error message below).

buffer = self.graphics_engine.make_output(
    self.pipe,
    f'Buffer[{name}]',
    config.render_order,
    config.frame_buffer_properties,
    window_props,
    p3dc.GraphicsPipe.BFRefuseWindow,  # don't open a window
    self.win.getGsg(),
    self.win,
)
:loader: loading file type module: p3assimp
:loader: loading file type module: p3ptloader
:loader: Model /Users/michael/mit/sli/absynthe/experiments/box/assets/room.egg found in disk cache.
:loader: Model /Users/michael/mit/sli/absynthe/experiments/box/assets/box.egg found in disk cache.
Traceback (most recent call last):
  File "experiments/box/likelihood.py", line 260, in <module>
    image_cameras = SceneUtil.make_image_camera_ring(scene, intrinsics, n_cameras, camera_radius, camera_height, true_side_length/2)
  File "/Users/michael/mit/sli/absynthe/absynthe/synthesis.py", line 205, in make_image_camera_ring
    image_camera = scene.add_image_camera(intrinics, pos, hpr)
  File "/Users/michael/mit/sli/absynthe/absynthe/synthesis.py", line 176, in add_image_camera
    camera = self._make_camera(camera_config, pos, hpr, f'Image, {len(self._image_cameras)}')
  File "/Users/michael/mit/sli/absynthe/absynthe/synthesis.py", line 149, in _make_camera
    self.win.getGsg(),
AttributeError: 'NoneType' object has no attribute 'getGsg'

How should I go about creating buffers when I’m using the window-type none setting?

The code below seems to work.

My understanding is that a windowless session has no self.pipe or self.win, so I create a default pipe myself and don't pass a GSG or host window at all. Please correct me if I'm wrong.

buffer = self.graphics_engine.make_output(
    p3dc.GraphicsPipeSelection.getGlobalPtr().makeDefaultPipe(),
    f'Buffer[{name}]',
    config.render_order,
    config.frame_buffer_properties,
    window_props,
    p3dc.GraphicsPipe.BFRefuseWindow,  # don't open a window
)

Well, it at least ran on my laptop. On the remote machine I'm now getting

AL lib: (WW) alc_initconfig: Failed to initialize backend "pulse"
AL lib: (WW) alsa_load: Failed to load libasound.so.2
AL lib: (WW) alc_initconfig: Failed to initialize backend "alsa"
AL lib: (EE) ALCplaybackOSS_open: Could not open /dev/dsp: No such file or directory
AL lib: (WW) alcSetError: Error generated on device (nil), code 0xa004
AL lib: (EE) ALCplaybackOSS_open: Could not open /dev/dsp: No such file or directory
AL lib: (WW) alcSetError: Error generated on device (nil), code 0xa004
:audio(error): Couldn't open default OpenAL device
:audio(error): OpenALAudioManager: No open device or context
:audio(error):   OpenALAudioManager is not valid, will use NullAudioManager
AL lib: (EE) ALCplaybackOSS_open: Could not open /dev/dsp: No such file or directory
AL lib: (WW) alcSetError: Error generated on device (nil), code 0xa004
AL lib: (EE) ALCplaybackOSS_open: Could not open /dev/dsp: No such file or directory
AL lib: (WW) alcSetError: Error generated on device (nil), code 0xa004
:audio(error): Couldn't open default OpenAL device
:audio(error): OpenALAudioManager: No open device or context
:audio(error):   OpenALAudioManager is not valid, will use NullAudioManager
:display:x11display(error): Could not open display ":0.0".
Traceback (most recent call last):
  File "experiments/box/likelihood.py", line 260, in <module>
    image_cameras = SceneUtil.make_image_camera_ring(scene, intrinsics, n_cameras, camera_radius, camera_height, true_side_length/2)
  File "/usr/local/lib/python3.6/dist-packages/absynthe/synthesis.py", line 203, in make_image_camera_ring
    image_camera = scene.add_image_camera(intrinics, pos, hpr)
  File "/usr/local/lib/python3.6/dist-packages/absynthe/synthesis.py", line 174, in add_image_camera
    camera = self._make_camera(camera_config, pos, hpr, f'Image, {len(self._image_cameras)}')
  File "/usr/local/lib/python3.6/dist-packages/absynthe/synthesis.py", line 151, in _make_camera
    buffer.add_render_texture(texture, p3dc.GraphicsOutput.RTMCopyRam)
AttributeError: 'NoneType' object has no attribute 'add_render_texture'

I’m looking into the thread “Code working with pandagl but not p3tinydisplay”, but I’m not sure if this is the right solution.

The issue is that Panda is unable to open an offscreen window, so trying to open one yourself using make_output won’t make a difference.

What OS is the remote machine? Linux? It looks like there is no X11 server running, so there is no way to communicate with the graphics card. You may be able to use software rendering with load-display p3tinydisplay, but this software renderer is woefully inadequate at rendering anything but the most basic geometry. If you have a graphics card in the machine, it is possible to set up a headless X11 server; I’ve had some luck doing this in the past.

I’ve heard that it is also possible to use EGL to set up offscreen rendering. I believe that user @mbait has done something like this. This will require making some changes to the Panda source.

Is there a way to run Panda without any X11 server?

If I’m rendering offscreen do I need an X server?

I think I’ve already addressed both questions in the previous post. You need an X11 server to talk to OpenGL through GLX, which is what Panda tries to do by default.

You can run a headless X11 server, or you can use the very limited tinydisplay renderer, or you can edit the Panda source to use EGL instead of GLX to use offscreen rendering.
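For example, the tinydisplay software renderer can be selected in your PRC configuration (a sketch; keep in mind its rendering limitations mentioned above):

```
load-display p3tinydisplay
window-type offscreen
```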

Sorry about the confusion. I was checking if there was some other headless configuration that didn’t use X but was hardware accelerated, but I read your response more closely and it makes sense now. Thank you!

After a lot of messing around, I realized all I had to do was run xinit on the remote machine. I can now render images headlessly on a remote Linux server and save them to file (thanks again!).
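In case it helps anyone else, this is roughly what I ran on the server (the display number and script path are specific to my setup):

```shell
# start a bare X server on display :0 in the background
xinit -- :0 &

# point Panda (and anything else using GLX) at that display
export DISPLAY=:0

python3 experiments/box/likelihood.py
```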

However, when it comes to creating my simulated depth cameras, I’m getting an error I don’t normally get on my laptop, shown below.

Singularity absynthe.simg:~/nfscode/absynthe> python3 experiments/box/likelihood.py 
Failed to create secure directory (/run/user/23051/pulse): No such file or directory
AL lib: (WW) alc_initconfig: Failed to initialize backend "pulse"
AL lib: (WW) alsa_load: Failed to load libasound.so.2
AL lib: (WW) alc_initconfig: Failed to initialize backend "alsa"
AL lib: (EE) ALCplaybackOSS_open: Could not open /dev/dsp: No such file or directory
AL lib: (WW) alcSetError: Error generated on device (nil), code 0xa004
AL lib: (EE) ALCplaybackOSS_open: Could not open /dev/dsp: No such file or directory
AL lib: (WW) alcSetError: Error generated on device (nil), code 0xa004
:audio(error): Couldn't open default OpenAL device
:audio(error): OpenALAudioManager: No open device or context
:audio(error):   OpenALAudioManager is not valid, will use NullAudioManager
AL lib: (EE) ALCplaybackOSS_open: Could not open /dev/dsp: No such file or directory
AL lib: (WW) alcSetError: Error generated on device (nil), code 0xa004
AL lib: (EE) ALCplaybackOSS_open: Could not open /dev/dsp: No such file or directory
AL lib: (WW) alcSetError: Error generated on device (nil), code 0xa004
:audio(error): Couldn't open default OpenAL device
:audio(error): OpenALAudioManager: No open device or context
:audio(error):   OpenALAudioManager is not valid, will use NullAudioManager
:display(error): Could not get requested FrameBufferProperties; abandoning window.
  requested: float_color color_bits=32 red_bits=32 
  got: color_bits=24 red_bits=8 green_bits=8 blue_bits=8 force_hardware force_software 
Traceback (most recent call last):
  File "experiments/box/likelihood.py", line 261, in <module>
    depth_cameras = SceneUtil.make_depth_camera_ring(scene, intrinsics, n_cameras, camera_radius, camera_height, true_side_length/2)
  File "/usr/local/lib/python3.6/dist-packages/absynthe/synthesis.py", line 214, in make_depth_camera_ring
    depth_camera = scene.add_depth_camera(intrinics, pos, hpr)
  File "/usr/local/lib/python3.6/dist-packages/absynthe/synthesis.py", line 180, in add_depth_camera
    camera = self._make_camera(camera_config, pos, hpr, f'Depth, {len(self._image_cameras)}')
  File "/usr/local/lib/python3.6/dist-packages/absynthe/synthesis.py", line 151, in _make_camera
    buffer.add_render_texture(texture, p3dc.GraphicsOutput.RTMCopyRam)
AttributeError: 'NoneType' object has no attribute 'add_render_texture'

I don’t care about audio, but this part

:display(error): Could not get requested FrameBufferProperties; abandoning window.
  requested: float_color color_bits=32 red_bits=32 
  got: color_bits=24 red_bits=8 green_bits=8 blue_bits=8 force_hardware force_software 

seems to be stopping my depth cameras from being created.

Here is where I’m defining my configuration to make a buffer for the depth camera.

@classmethod
def get_depth_camera_config(cls, intrinsics: DepthCameraIntrinsics):
    # depth is provided through the red channel as a 32-bit float
    frame_buffer_properties = p3dc.FrameBufferProperties()
    frame_buffer_properties.set_float_color(True)
    frame_buffer_properties.set_rgba_bits(32, 0, 0, 0)

    # load custom vertex and fragment shaders
    shader_directory = os.path.join(os.path.dirname(__file__), '..', 'assets')
    vert_path = os.path.join(shader_directory, 'depth-camera.vert')
    vert_path = p3dc.Filename.from_os_specific(os.path.abspath(vert_path)).get_fullpath()
    frag_path = os.path.join(shader_directory, 'depth-camera.frag')
    frag_path = p3dc.Filename.from_os_specific(os.path.abspath(frag_path)).get_fullpath()

    shader = p3dc.Shader.load(p3dc.Shader.SL_GLSL, vertex=vert_path, fragment=frag_path)
    shader_attrib = p3dc.ShaderAttrib.make(shader, 0)
    render_state = p3dc.RenderState.make(shader_attrib)

    return cls(
        intrinsics,
        frame_buffer_properties,
        render_state,
        -2,             # render order
        (10, 0, 0, 0),  # by default, use distance of 10 where no distance is measured
        1,              # only 1 channel (red) is used
        np.float32      # buffer stores depth as 32-bit float
    )

Does this mean that my GPU does not support float color?

It’s not clear how you have done your buffer setup. It may be that float formats are not supported for pbuffers, but only for FBOs. I would suggest using the “offscreen” window type and then creating a new float buffer, passing in base.win as the host; this creates an FBO instead of a pbuffer.

This worked!

I just needed to run xinit on the remote machine and use the “offscreen” window type in my Panda configuration.

Thanks again.