Killing a Bufferwindow

I’m performing a render to texture in my program. I need to be able to create and then later destroy the buffer, since at times it isn’t needed and would just be stealing resources.

I’ve essentially copied code from the render-to-texture tutorial:

mainWindow = base.win
# Create an offscreen buffer that renders into a texture.
self.altBuffer = mainWindow.makeTextureBuffer('buffer', 256, 256)
# A separate scene graph root for the buffer's scene.
altRender = NodePath("New render")
# A camera that renders into the new buffer, parented into that scene.
altCam = base.makeCamera(self.altBuffer)
altCam.reparentTo(altRender)

The actual render to texture works fine.

I just need the code to destroy the buffer. I can kill the render and camera nodes fine, but I can’t get rid of the buffer. I can’t figure it out; I think I need the instance of GraphicsEngine that ShowBase loads up, but I don’t know its name.

base.graphicsEngine.removeWindow(buffer)

David

Thanks David, as always you know the answers.

I actually figured this one out a few minutes after posting here as I read through the ShowBase.py code.

Thanks for answering though.

Also, if you like, you can temporarily turn off the buffer using setActive(False).
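For example, a minimal sketch (assuming self.altBuffer is the buffer created above):

self.altBuffer.setActive(False)  # the buffer is skipped during rendering; its texture stops updating

# ...later, when the buffer is needed again:
self.altBuffer.setActive(True)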

After some more testing I’ve got more problems with my buffer.

The whole thing is in a class. When the class is instantiated, the buffer scene loads, and different models are loaded into that scene based on information passed to the class on initialization. The texture generated is then applied to a simple tile model.

There is also a destroy function in the class. I’m using the class to cycle through different models, but for the application I’m writing, the buffer may not need to be on the screen at all at certain times. That’s why I want to destroy the buffer itself during the parts of the program where it isn’t used.

However, when I run the destroy function and later instantiate the class again, this time with a different model in the buffer’s scene, it doesn’t take much cycling (destroy/create) before the framerate drops sharply.

My destroy function runs like this:

def destroy(self):
    # Parent of the model in the buffer scene
    self.center.getChildren().detach()
    self.center.removeNode()
    self.center = None

    # Parent of the tile to which the buffer is textured
    self.center2d.getChildren().detach()
    self.center2d.removeNode()
    self.center2d = None

    # The alternate render node for the buffer
    self.altRender.removeNode()
    self.altRender = None

    # Remove the altBuffer from the GraphicsEngine; this kills the buffer window itself
    base.graphicsEngine.removeWindow(self.altBuffer)
    self.altBuffer = None

    # Kill the alternate camera
    self.altCam.removeNode()
    self.altCam = None

    # Kill the tile node of the final object
    self.tile.removeNode()
    self.tile = None

But it’s still sucking resources. I have run a render.analyze() and an aspect2d.analyze(), and they show next to nothing, but I still have the slow fps.

Any help would be great.

You’re suffering from having some redundant threads lying around.

I guess it’s impossible to kill them from Python.

That’s not it. The standard distro isn’t multithreaded.

Perhaps the window isn’t truly getting destroyed? I have had difficulties in the past where reference counts I hadn’t cleaned up in some obscure part of the data hung onto a window indefinitely. I’m not sure that’s what’s going on here, though, since I think an explicit call to removeWindow() should do the trick regardless of any outstanding reference counts.

It’s possible there’s a Panda bug in there. Have you tried simply disabling the buffer, and then reenabling and reusing the same buffer on subsequent passes?

David

I’ve reworked my code a little. Instead of creating a new buffer and texture each time I need it, I now create a global buffer at the beginning of the script; when it is needed, the class just loads what it needs into the buffer’s scene and loads up the tile.egg, which receives the texture from the buffer.

By doing that, the frame rate stays constant. There is a short hiccup while the class loads the model from disk and displays it in the buffer’s scene, but it quickly settles back to baseline.

That’s the only difference I’ve made. I’ve looked through the class code, which isn’t very long, but there aren’t any extra references to the buffer, altRender, etc.

I’m not sure it’s a bug with Panda, but at the moment it seems like rapidly creating and destroying a buffer causes resource problems. My guess is that the buffer isn’t really being destroyed.

But for my purposes, a global buffer created once and then refitted according to need is a fine workaround. There are times when the buffer texture isn’t needed at all, but the alternative (creating and destroying the buffer as needed) is just too costly resource-wise.
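For anyone following along, here is a minimal sketch of that reuse pattern; the function names and model handling are illustrative rather than lifted from my actual class:

import direct.directbase.DirectStart
from pandac.PandaModules import NodePath

# Create the buffer and its scene once, at startup.
altBuffer = base.win.makeTextureBuffer('shared-buffer', 256, 256)
altRender = NodePath('alt render')
altCam = base.makeCamera(altBuffer)
altCam.reparentTo(altRender)

currentModel = None

def showModel(modelPath):
    # Swap whatever is in the buffer's scene for the new model and
    # make sure the buffer is rendering again.
    global currentModel
    if currentModel is not None:
        currentModel.removeNode()
    currentModel = loader.loadModel(modelPath)
    currentModel.reparentTo(altRender)
    altBuffer.setActive(True)

def hideBuffer():
    # Stop rendering into the buffer without destroying it.
    altBuffer.setActive(False)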

I’ve run into this issue as well. I created a test script that demonstrates the problem.

import direct.directbase.DirectStart
from direct.task import Task
import time

class BufferTest:
    def __init__(self):
        self.buffer = None
        taskMgr.add(self.Test, "Test")
        self.count = 0
        
    def destroyBuffer(self):
        self.buffer.removeAllDisplayRegions()
        self.buffer.clearRenderTextures()
        base.graphicsEngine.removeWindow(self.buffer)
        assert(self.buffer.isValid() == False)
        self.buffer = None
    
    def createBuffer(self, w, h):
        self.buffer = base.graphicsEngine.makeBuffer(base.win.getGsg(), 'test.buffer', 1, w, h)
        
        # When video memory runs out, panda either hangs or trips this assert.
        assert self.buffer
        
    def Test(self, task):
        print self.count, time.clock()
        if self.count % 2 == 0:
            self.createBuffer(2048, 2048) # Crash appears to be a function of buffer size.
        else:
            self.destroyBuffer()
        self.count += 1
        return Task.cont

def test():
    t = BufferTest()
    run()

if __name__ == '__main__':
    test()

Panda crashes after around 15 or so buffer allocations for me, although it seems to vary with buffer size. I’m using OpenGL with up-to-date drivers on an Nvidia Quadro NVS 290. I’ve also checked the reference counts on the buffers, and they looked normal.

Have I done anything wrong with my buffer allocation / deallocation?

If I were to try and fix this in the code, where would be a good place to start looking around?

And, in order to work around this like mavasher did, how would I go about resizing an offscreen buffer? The ‘requestProperties’ method doesn’t seem to be present for offscreen buffers. Could I instead create a large buffer and then set the display regions to the required size?

Any suggestions or solutions would be most appreciated. Thx.

-Greg

Certainly seems to be conclusive evidence of a buffer leak. I’ll investigate.

In the meantime, yes, opening one large buffer and then varying the DisplayRegion size would work. It depends on your buffer needs, of course. If you’re rendering to texture, it may be inconvenient to have a large part of the texture unused; but if you’re only capturing offscreen screenshots, you can use the DisplayRegion to control the size of the screenshot nicely.
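Something along these lines (a rough sketch; the buffer size, names, and camera setup are just for illustration):

# Open one large offscreen buffer up front.
bigBuffer = base.graphicsEngine.makeBuffer(
    base.win.getGsg(), 'big-buffer', 1, 2048, 2048)

# One DisplayRegion that gets resized as needed; altCam is a camera
# NodePath you have set up elsewhere.
dr = bigBuffer.makeDisplayRegion()
dr.setCamera(altCam)

def useRegion(w, h):
    # Render only into the lower-left w x h pixels of the 2048x2048 buffer.
    dr.setDimensions(0, w / 2048.0, 0, h / 2048.0)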

David

Have you tried running in DirectX to see if the problem also exists there? How about with prefer-parasite-buffer enabled in your Config.prc (and a buffer size no larger than your main window)?

David

Hi Dave,

Thx for responding so quickly.

The DirectX implementation doesn’t seem to work for me:

display:gsg:dxgsg9(error): SetRenderTarget  at (c:\...\panda\src\dxgsg9\wdxGraphicsBuffer9.cxx:419), hr=D3DERR_INVALIDCALL: Invalid call

c:\...\panda\src\dxgsg9\wdxGraphicsBuffer9.cxx 419
:display:gsg:dxgsg9(error): SetRenderTarget  at (c:\...\panda\src\dxgsg9\wdxGraphicsBuffer9.cxx:419), hr=D3DERR_INVALIDCALL: Invalid call

c:\...\panda\src\dxgsg9\wdxGraphicsBuffer9.cxx 419

I tried “prefer-parasite-buffer enabled,” but no luck either.

I’m rendering to a texture but only to grab a snapshot of the depth buffer. So I think I’ll give varying the DisplayRegion size a try next. Thx.

-Greg

Sorry, I should have said:

prefer-parasite-buffer 1

in your Config.prc file.

David

GLGraphicsBuffer can be destroyed correctly, but wglGraphicsBuffer can’t be, so using:

        self.buffer = base.win.makeTextureBuffer('test.buffer', w, h)
        print self.buffer.getType()

doesn’t crash.

That’s interesting. It’s not a palatable fix for me, though, because of the base.win dependency. I’m using the buffer for some offline build operations and don’t have a reason to keep that window open. Using display regions works well for me, so I’ll probably stick with that until wglGraphicsBuffers can be deallocated.

I’m curious, do you know why wglGraphicsBuffers have this issue? I’m wondering if this is a known limitation or a bug in the wgl implementation.

-Greg