Multisampled buffer produces "GL_INVALID_OPERATION"

In my current project, in order to support antialiasing, I render to an off-screen buffer that, depending on the game’s settings, may be multisampled. If the player changes the relevant setting while the game is running, a method is called that recreates the buffer and re-associates the camera with it.

I believe that this had previously worked.

However, as of a few days ago, I found that enabling multisampling resulted in an OpenGL error being reported. Adding “gl-debug #t” to my “prc” file (as suggested by the initial error message) resulted in this error-text being repeated:

:display:gsg:glgsg(error): GL_INVALID_OPERATION error generated. Depth formats do not match.
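(For completeness, the same setting can also be applied from Python before ShowBase is constructed, in case editing the prc file is inconvenient; this is just a sketch, and wasn’t part of my original setup:)

from panda3d.core import loadPrcFileData

# Enable OpenGL error-checking/debug output before ShowBase is created.
loadPrcFileData("", "gl-debug #t")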

The following is my current approach to creating my buffer and camera (with some commented-out code omitted). If the value given to “setMultisamples” is “0”, it successfully creates a buffer, and the scene is visible. If it’s instead given some other value (admittedly, I’ve only tested a few power-of-two values), it seems to fail, leaving only blackness in place of the scene.

def makeBuffers(self, multisamples):
    # Tear down any previously-created buffer before making a new one.
    self.cleanupBuffers()

    # Request the framebuffer format: optional multisampling, RGB without
    # alpha, and a 32-bit depth-buffer.
    frameProperties = FrameBufferProperties()
    frameProperties.setMultisamples(multisamples)
    frameProperties.setRgbaBits(8, 8, 8, 0)
    frameProperties.setDepthBits(32)

    windowProperties = base.win.getProperties()

    # Create the off-screen buffer, sharing the main window's GSG.
    self.mainBuffer = base.graphicsEngine.makeOutput(base.pipe, "Main Buffer", -1,
                                                     frameProperties, windowProperties,
                                                     GraphicsPipe.BFRefuseWindow, base.win.getGsg(), base.win)
    self.mainBuffer.setClearColor(Vec4(0, 0, 0, 1))

    # On the first call, make the scene-camera; on later calls, just attach
    # the existing camera to a display-region of the new buffer.
    if self.sceneCamera is None:
        self.sceneCamera = base.makeCamera(self.mainBuffer)
        self.sceneCamera.node().setScene(self.rootNode)
        self.sceneCamera.node().setCameraMask(BitMask32(1))
        self.updateAspectRatio()
    else:
        region = self.mainBuffer.makeDisplayRegion(0, 1, 0, 1)
        region.setCamera(self.sceneCamera)

    # Render the buffer into a texture, which is then applied to a
    # full-screen card.
    self.sceneTexture = Texture()
    self.sceneTexture.setWrapU(Texture.WM_clamp)
    self.sceneTexture.setWrapV(Texture.WM_clamp)

    self.mainBuffer.addRenderTexture(self.sceneTexture,
                                     GraphicsOutput.RTMBindOrCopy)

    self.card.setTexture(self.sceneTexture)
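(“cleanupBuffers” isn’t shown above; for context, a minimal version might simply detach the old buffer from the graphics engine before the new one is made. This is only a sketch of the idea, assuming that the buffer is the only thing that needs tearing down:)

def cleanupBuffers(self):
    # Release the previous off-screen buffer, if any, so that a fresh one
    # can be created with the new framebuffer-properties.
    if self.mainBuffer is not None:
        self.mainBuffer.clearRenderTextures()
        base.graphicsEngine.removeWindow(self.mainBuffer)
        self.mainBuffer = None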

I’m not sure whether this is a bug in Panda, a problem with my approach (perhaps linked to changes in Panda, since it once worked), or perhaps even an issue with my computer’s drivers, hence my not filing this in the issue tracker just yet!

That is a bug in Panda (or in the drivers, but even then we’d want to look at what we can do to work around it). Please provide either an apitrace output or a self-contained test case, and we can fix this.

Not a problem! And indeed, doing so has caused me to stumble upon a clue: it seems that the value given to “setDepthBits” is important.

The test-program below defines a method called “makeBuffers”. The parameter passed to that method is the value given to the call to “setMultisamples”.

Within the method itself, note the two calls to “setDepthBits”: one of them calls for 32 bits, the other for 16.

When the 32-bit call is uncommented and the 16-bit line is commented out, the method only succeeds (on my machine, at least) when the value given to “setMultisamples” is 0. Conversely, when the 16-bit line is uncommented and the 32-bit line is commented out, the method works for other values (as well as 0) given to “setMultisamples”.
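(Incidentally, a quick way to see what format the driver actually granted is to print the buffer’s resulting properties just after the call to “makeOutput”; a small diagnostic sketch, not included in the test-program below:)

# Diagnostic only: report the depth- and multisample-bits actually obtained.
fbProps = self.mainBuffer.getFbProperties()
print("depth bits:", fbProps.getDepthBits(),
      "multisamples:", fbProps.getMultisamples())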

The test-program:

from direct.showbase import ShowBase as showBase
from panda3d.core import Vec4, NodePath, PandaNode, FrameBufferProperties, GraphicsPipe, GraphicsOutput, Texture

class game(showBase.ShowBase):

    def __init__(self):
        showBase.ShowBase.__init__(self)

        self.rootNode = NodePath(PandaNode("root"))

        self.model = loader.loadModel("panda")
        self.model.reparentTo(self.rootNode)
        self.model.setPos(0, 50, -5)

        self.mainBuffer = None
        self.sceneCamera = None
        self.sceneTexture = None

        # A full-screen card that will display the off-screen buffer's texture.
        self.card = base.win.getTextureCard()
        self.card.reparentTo(render2d)
        self.card.setShaderOff(1)

        self.makeBuffers(8)

    def makeBuffers(self, multisamples):
        frameProperties = FrameBufferProperties()
        frameProperties.setMultisamples(multisamples)
        frameProperties.setRgbaBits(8, 8, 8, 0)
        frameProperties.setDepthBits(32)   # with 32 bits, multisampling fails on my machine
        #frameProperties.setDepthBits(16)  # with 16 bits, multisampling works

        windowProperties = base.win.getProperties()

        self.mainBuffer = base.graphicsEngine.makeOutput(base.pipe, "Main Buffer", -1,
                                                         frameProperties, windowProperties,
                                                         GraphicsPipe.BFRefuseWindow, base.win.getGsg(), base.win)

        if self.sceneCamera is None:
            self.sceneCamera = base.makeCamera(self.mainBuffer)
            self.sceneCamera.node().setScene(self.rootNode)
        else:
            region = self.mainBuffer.makeDisplayRegion(0, 1, 0, 1)
            region.setCamera(self.sceneCamera)

        self.sceneTexture = Texture()

        self.mainBuffer.addRenderTexture(self.sceneTexture,
                                         GraphicsOutput.RTMBindOrCopy)

        self.card.setTexture(self.sceneTexture)

app = game()
app.run()

If this test-program doesn’t produce the same issue on your side, or if you want output from APITrace as well as the above, let me know!

(Since this has been confirmed to be a bug, I’ve moved the issue over to the issue tracker.)