Frame packing for high refresh rate projector display


Sorry in advance for what might be a very specific and unusual question :slight_smile: We are exploring the possibility of using Panda3D for scientific applications involving virtual behavioral experiments with animals. We are interested in using the engine to render very simple 3D scenes for an animal VR application that displays the environment on a high-speed projector. These projectors can render at speeds in excess of 180 Hz, but they require image frames to be generated in such a way that three successive monochrome frames are packed into the three channels (R, G, B) of a single frame sent to the projector. This is called frame packing in other places I have seen. So the game loop is running at 180 Hz, the display frames are sent at 60 Hz, and the projector is projecting them at 180 Hz. Another Python “game engine” called PsychoPy, developed for these types of virtual behavioral experiments, has an implementation of this. As far as I can tell, the low-level OpenGL implementation is fairly easy: it simply involves applying successive calls of glColorMask for each channel as three frames are rendered by the game loop, then clearing after every third frame for the next set.
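To make the idea concrete, here is a plain-Python sketch of the per-frame masking logic I have in mind (my own sketch, not PsychoPy's actual code; the tuples are the (R, G, B, A) write-enable flags that would be passed to glColorMask):

```python
# Write-enable masks cycled over successive game-loop frames:
# frame 0 -> red only, frame 1 -> green only, frame 2 -> blue only.
CHANNEL_MASKS = [
    (True, False, False, False),   # R
    (False, True, False, False),   # G
    (False, False, True, False),   # B
]

def mask_for_frame(frame_index):
    """Return the (R, G, B, A) write mask for a given game-loop frame."""
    return CHANNEL_MASKS[frame_index % 3]

def should_clear(frame_index):
    """Clear the packed frame only when starting a new group of three."""
    return frame_index % 3 == 0
```

In the real loop, each mask would be unpacked into a glColorMask call before rendering the corresponding sub-frame.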

My question is, how difficult would it be for me to modify or implement something like this in Panda3D? I am not asking for code, but maybe simply some pointers to the best place in the code to look at adding this feature. I am very comfortable with Python and C++, so just some tips on where to get started would be greatly appreciated. Also, can anyone think of potential pitfalls that I should look out for in advance?


Hi, welcome to the community! No worries, this kind of question is not actually so rare, given that Panda3D tends to be a popular choice for esoteric rendering set-ups.

This would not be very difficult. There are two ways I can see to do this:

  1. Disable the clear settings on the window (you will need to provide your own card behind the scene to ensure you don’t see the results of the previous render). In a task, apply a ColorWriteAttrib to only render one of the channels, depending on which frame you are rendering:

n = task.frame % 3
if n == 0:
    self.render.setAttrib(ColorWriteAttrib.make(ColorWriteAttrib.C_red))
elif n == 1:
    self.render.setAttrib(ColorWriteAttrib.make(ColorWriteAttrib.C_green))
else:
    self.render.setAttrib(ColorWriteAttrib.make(ColorWriteAttrib.C_blue))

With this method, you may need to make sure that all the textures and colours in your scene are monochrome to begin with, so that different channels don’t end up writing different values, or you can apply a shader to your scene that filters the colour values down to a monochrome value.

  2. Alternatively, you can render each frame into a different texture, and either create a shader to composite the three together into a fourth texture, or set the formats of the three textures to Texture.F_red, F_green and F_blue and use texture blending to composite them on a fullscreen card.
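As a channel-level illustration of what the compositing step in the second approach computes, here is a plain-Python sketch (pack_frames is a hypothetical helper; nested lists stand in for the three monochrome textures):

```python
def pack_frames(red_frame, green_frame, blue_frame):
    """Pack three monochrome frames (2D lists of 0-255 values) into one
    RGB frame, with one source frame per output channel."""
    packed = []
    for r_row, g_row, b_row in zip(red_frame, green_frame, blue_frame):
        packed.append([(r, g, b) for r, g, b in zip(r_row, g_row, b_row)])
    return packed
```

On the GPU, the compositing shader (or the F_red/F_green/F_blue texture blend) would do the equivalent per pixel.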

I think the first approach is easier; I trivially modified the Hello World demo to implement that approach, here it is. (To fully work, you also need to make sure that the scene is monochrome to begin with, as pointed out earlier).

from panda3d.core import *

# Limit frame rate so you can see the effect better
loadPrcFileData("", """
clock-mode limited
clock-frame-rate 10
""")

from direct.showbase.ShowBase import ShowBase
from direct.task import Task
from direct.actor.Actor import Actor

from math import pi, sin, cos

class MyApp(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)

        # Disable the camera trackball controls.
        self.disableMouse()

        # Disable window clear; it clears all channels.
        self.win.setClearColorActive(False)

        # Instead put a black card behind everything.
        cm = CardMaker("card")
        cm.setFrame(-100, 100, -100, 100)
        card = self.render.attachNewNode(cm.generate())
        card.setColor((0, 0, 0, 1))
        card.setBin("background", 0)

        # Load the environment model.
        self.scene = self.loader.loadModel("models/environment")
        # Reparent the model to render.
        self.scene.reparentTo(self.render)
        # Apply scale and position transforms on the model.
        self.scene.setScale(0.25, 0.25, 0.25)
        self.scene.setPos(-8, 42, 0)

        # Add the spinCameraTask procedure to the task manager.
        self.taskMgr.add(self.spinCameraTask, "SpinCameraTask")
        # Add the task to set the color mask.
        self.taskMgr.add(self.setColorMaskTask, "ColorMaskTask")

        # Load and transform the panda actor.
        self.pandaActor = Actor("models/panda-model",
                                {"walk": "models/panda-walk4"})
        self.pandaActor.setScale(0.005, 0.005, 0.005)
        self.pandaActor.reparentTo(self.render)
        # Loop its animation.
        self.pandaActor.loop("walk")

    def setColorMaskTask(self, task):
        n = task.frame % 3
        if n == 0:
            self.render.setAttrib(ColorWriteAttrib.make(ColorWriteAttrib.C_red))
        elif n == 1:
            self.render.setAttrib(ColorWriteAttrib.make(ColorWriteAttrib.C_green))
        else:
            self.render.setAttrib(ColorWriteAttrib.make(ColorWriteAttrib.C_blue))
        return task.cont

    # Define a procedure to move the camera.
    def spinCameraTask(self, task):
        angleDegrees = task.time * 6.0
        angleRadians = angleDegrees * (pi / 180.0)
        self.camera.setPos(20 * sin(angleRadians), -20.0 * cos(angleRadians), 3)
        self.camera.setHpr(angleDegrees, 0, 0)
        return Task.cont

app = MyApp()
app.run()

Though, come to think of it, if you need to be able to vsync using the graphics library, you may need to take a slightly different approach: you could create three DisplayRegions instead of only one for your scene, and have each render a slightly different state in time, each with a different color mask. However, this would require you to set up three copies of your scene at the different states.
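For the timing side of that approach, the simulation times for the three DisplayRegions could be computed like this (a sketch; subframe_times is a hypothetical helper, assuming a 60 Hz swap with three packed sub-frames):

```python
def subframe_times(display_frame, swap_hz=60, packed=3):
    """Simulation times for the sub-frames packed into one displayed frame.

    Each displayed frame at swap_hz carries `packed` sub-frames, spaced
    1 / (swap_hz * packed) seconds apart (1/180 s for 60 Hz x 3).
    """
    base = display_frame / swap_hz
    dt = 1.0 / (swap_hz * packed)
    return [base + i * dt for i in range(packed)]
```

Each of the three scene copies would then be advanced to one of these times before its region renders.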

It would be easier to modify the Panda source to only call SwapBuffers every third frame, or something like that; you would be looking either in wglGraphicsWindow.cxx (assuming Windows) or graphicsEngine.cxx to make this adjustment. I suppose it would be easy for us to add a “sync interval” feature for this.

Depending on how you handle synchronization this may not be a problem though.

Thanks so much for the quick and helpful reply! I am not sure at the moment whether I need to be able to vsync. For PsychoPy, the engine I mentioned before, they make a big effort at this, I think because they want to measure the duration that a stimulus appears on the screen as precisely as possible. In my context, I am not sure it is as important. Though, I don’t know if this will have other implications for rendering quality (rendering artifacts).

I am also having some trouble getting the window to go fullscreen on the secondary display (the projector). I tried this method, but it doesn’t seem to work.

When I use this method, or a slight variation that replaces one call with another, called directly after the ShowBase init, the window goes fullscreen, but on the primary display, not on the display that the window is on.
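One workaround I am considering is skipping true fullscreen entirely and using an undecorated window placed at the projector's desktop offset via prc settings (the size and origin below are assumptions for my particular monitor arrangement):

```
undecorated true
win-size 1920 1080
win-origin 1920 0
```

This would give a borderless window covering the second display, without relying on the fullscreen request picking the right monitor.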

Any ideas?