It takes two frames to render a texture correctly?

So I’m trying to optimize/rationalize my terrain system. I have discovered that to render my textures correctly, I must call
base.graphicsEngine.renderFrame()
twice. I don’t know why, but if I only call it once, renderings using multiple textures in the shader get all mangled as if the textures were swapped or not yet loaded.
I have tried:
allow-incomplete-render #f
and it does not help.

I’m also trying to thread it, or at least avoid calling
base.graphicsEngine.renderFrame()
when it’s running in realtime, but that’s not working very well, so I’m trying to figure out what’s going on without threading first. Ignore the illogical, unfinished threading-related stuff in this code. Here’s the code:

Yes, I’m needlessly copying to RAM, and copying the texture too (both of which are unnecessary), but those are because of bugs which I’ll address later. One thing at a time for now, please.

    def renderMap(self, rawTile, inputMaps, shader, threaded=False):
        
        # Resolution of texture/buffer to be rendered
        size=int(round(tileMapSize*shader.resolutionScale+shader.addPixels))
        
        #we get a handle to the default window
        mainWindow=base.win

        #we now get a buffer that's going to hold the texture of our new scene
        buff=mainWindow.makeTextureBuffer('MapBuff',size,size,Texture(),True)
        
        #now we have to setup a new scene graph to make this scene
        altRender=NodePath("new render")

        #this takes care of setting up the camera properly
        altCam=base.makeCamera(buff)
        
        oLens = OrthographicLens()
        
        margin=texMargin(size)
        
        oLens.setFilmSize(1+margin*2, 1+margin*2)
        altCam.node().setLens(oLens)
        altCam.reparentTo(altRender)        
        altCam.setPos(.5,-1,.5)
        
        c=CardMaker("MapCardMaker")
        
        c.setUvRange(0-margin,1+margin,0-margin,1+margin)
        
        c.setFrame(0-margin,1+margin,0-margin,1+margin)
        mapCard=NodePath(c.generate())
        
        mapCard.reparentTo(altRender)
        mapCard.setPos(0,0,0)   
        
        
        mapCard.setShader(shader.shader)
        mapCard.setShaderInput("offset",rawTile.x,rawTile.y,0,0)
        mapCard.setShaderInput("scale",rawTile.scale,0,0,0)

        for m in inputMaps:
            texStage=TextureStage(m.name+"stage")
            mapCard.setTexture(texStage,m.tex)
        
        for p in shader.shaderTex:
            mapCard.setTexture(*p)
        
        """
        
        Here the texture is aauctually generated
        For some unknowen reason, both calls to:
        base.graphicsEngine.renderFrame() 
        are needed or there are issues when using multiple textures.
        
        
        """
        
        
        def waitAFrame():
            if threaded:
                i = self.frameCount
                while self.frameCount == i:
                    print i
                    Thread.considerYield()
            else:
                base.graphicsEngine.renderFrame()
        
        
        #buff.setSort(-100)
        

        
        tex = buff.getTexture()
        
        
        waitAFrame()
        #buff.setOneShot(True)
        waitAFrame()
        
        buff.setActive(False)
        tex = tex.makeCopy()
        base.graphicsEngine.removeWindow(buff)
        
        
        tex.setWrapU(Texture.WMClamp)
        tex.setWrapV(Texture.WMClamp)
        
        mapCard.remove()
        
        
        return Map(shader.name,tex)

Now if I can get these last few issues resolved, you’ll all get endless, seamless, procedural terrain using GeoMipTerrain tiles, generated as needed in the background. (I think I fixed both texture and mesh seams, including texture interpolation on the edge pixels. The tiles run with brute force enabled, because it’s much faster and smoother, and I have my own cool LOD system. Yeah, I’ll make that more logical later.) You want that, right? So fix my stuff :slight_smile:

If it would be helpful, I can upload the latest complete version of the project so you can run it. It is in somewhat of a messy mid debugging state at the moment though.

The problem here is that it takes one frame to open the buffer, and a second frame to actually render to it. You might get away with a call to base.graphicsWindow.openWindows(), which is an alternate way to force the buffer to open without actually rendering a frame.

But, for optimal performance, you shouldn’t be creating and destroying buffers like this. It’s best to create the buffer(s) that you need ahead of time, and reuse them as needed. You can call buffer.setActive(False) to temporarily disable rendering so the buffer doesn’t waste time rendering needlessly when you don’t want it to.

Incidentally, your calls to Thread.considerYield() aren’t doing anything unless you have explicitly created your own threads. By default, Panda doesn’t create any threads for you.

David

base.graphicsWindow does not exist, and if you meant base.win, base.win.openWindows() does not exist either. If it takes a frame to open a buffer, why does a single frame work when I use only one texture, or none? A race condition? Shouldn’t allow-incomplete-render #f fix any loading race conditions, or does calling base.graphicsEngine.renderFrame() get around that?

That was my original intention, but I kept needing buffers of different sizes. I guess I should just keep a dictionary of inactive buffers around, and add any new sizes to it as needed.
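That dictionary-of-inactive-buffers idea might look roughly like the sketch below. The `make_buffer` factory is injected (in Panda3D it would presumably wrap base.win.makeTextureBuffer for the requested size) so the cache logic stands on its own; `BufferPool`, `acquire`, and `release` are all made-up names:

```python
class BufferPool:
    """Cache of offscreen buffers keyed by size, so buffers are
    created once and reused instead of made and destroyed per tile."""

    def __init__(self, make_buffer):
        # make_buffer(size) -> buffer; injected so this sketch is
        # engine-agnostic (in Panda3D it would call makeTextureBuffer)
        self.make_buffer = make_buffer
        self.free = {}  # size -> list of inactive buffers

    def acquire(self, size):
        buffers = self.free.get(size)
        if buffers:
            buf = buffers.pop()
        else:
            buf = self.make_buffer(size)
        buf.setActive(True)   # enable rendering while we use it
        return buf

    def release(self, size, buf):
        buf.setActive(False)  # stop rendering; keep for reuse
        self.free.setdefault(size, []).append(buf)
```

acquire() turns rendering back on with setActive(True), and release() turns it off again, matching the setActive(False) reuse pattern suggested above.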

Also, the first call to base.graphicsEngine.renderFrame() has to come after the textures are loaded. Putting it after the buffer is made and before the textures are loaded causes the issue to occur, so I don’t think it’s caused by opening the window, and I also think saving the buffer might not fix it (I remember having this issue back when I did save the buffer, but I’m not completely sure about that).

I don’t think it should matter (it might!), but some of the textures I’m loading were created by this very method moments before.

I tried loading the textures and setting up my texture card before I made the buffer too. No change.

When I call it with threaded=True, it is from a threaded task chain, and that waiting code does work, kind of. It’s slow and flaky though, so I thought I would try to fix the two-frame issue first.
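For whatever it’s worth, one way to make that threaded wait less flaky is to sleep on a condition variable instead of busy-looping on the frame counter. This sketch uses plain Python threading rather than Panda’s own Thread class, so it’s an illustration of the shape of the fix rather than a drop-in replacement (`FrameWaiter`, `frame_rendered`, and `wait_a_frame` are made-up names):

```python
import threading

class FrameWaiter:
    """Blocks a worker thread until at least one new frame has been
    rendered since the wait began, instead of busy-looping on a
    frame counter."""

    def __init__(self):
        self.frame_count = 0
        self.cond = threading.Condition()

    def frame_rendered(self):
        # Call this from the main loop once per rendered frame.
        with self.cond:
            self.frame_count += 1
            self.cond.notify_all()

    def wait_a_frame(self, timeout=5.0):
        # Call this from the worker thread; it sleeps on the condition
        # variable rather than spinning, and gives up after timeout.
        with self.cond:
            start = self.frame_count
            while self.frame_count == start:
                if not self.cond.wait(timeout):
                    raise RuntimeError("no frame rendered within timeout")
```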

Thanks for the suggestions!

Oops, I meant base.graphicsEngine.openWindows. In general, the call to makeBuffer() returns a handle to a buffer that will be created, but does not (necessarily) create an actual buffer until after the next call to either base.graphicsEngine.renderFrame() or base.graphicsEngine.openWindows().

allow-incomplete-render #f is the default, and it means that the frame will not be rendered until all textures have been fully loaded from disk and transferred to the graphics card. With allow-incomplete-render #t, the frame will render without waiting for the textures to fully load. This only has to do with textures loaded from disk, though; it’s not related to textures rendered in an offscreen buffer.

I wouldn’t say that “allow-incomplete-render #f” fixes all loading race conditions, but it should eliminate any problems due to incomplete texture loads. It is possible this is what’s causing you grief, if you’re running with this set to #t.

You can also request all of the textures under render to load immediately by calling render.prepareScene(base.win.getGsg()).

You can keep just one large buffer around, and use parts of it as you need. You can use a DisplayRegion to render to a fraction of the buffer, and you can choose texture coordinates to select the appropriate part of the resulting texture to apply.
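Concretely, the same normalized coordinates serve double duty: they define the DisplayRegion as a fraction of the buffer, and they are the UV range used to sample that tile back out of the shared texture. A small helper for square tiles in a square buffer (`tile_region` is a made-up name; in Panda3D the result would presumably feed makeDisplayRegion and the card’s UV range):

```python
def tile_region(buffer_size, tile_size, col, row):
    """Return (left, right, bottom, top) in the 0..1 range for the
    tile at (col, row) in a buffer_size x buffer_size buffer split
    into tile_size x tile_size cells.  The same tuple works both as
    DisplayRegion dimensions and as the UV range for sampling the
    rendered tile out of the shared texture."""
    frac = tile_size / float(buffer_size)
    left = col * frac
    bottom = row * frac
    return (left, left + frac, bottom, bottom + frac)
```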

Well, that certainly sounds like you’re running into problems with allow-incomplete-render #t.

Note also that base.graphicsEngine.renderFrame() will cause the current frame to be rendered to the back buffer, but it won’t yet be visible onscreen until the second call to base.graphicsEngine.renderFrame(), which swaps the back buffer and the front buffer (and renders a new frame to the back buffer). If this is what’s causing you problems, you can configure “auto-flip #t” to tell renderFrame() to flip the buffers immediately after rendering, though this will impact your overall frame rate. Normally, though, for multipass algorithms, it doesn’t matter what’s showing in the front buffer; that’s only relevant when it matters what the user actually sees.
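For reference, auto-flip is an ordinary prc variable, so trying it is a one-line config change (in Config.prc, or set at startup via loadPrcFileData):

```
auto-flip #t
```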

David

Well, I have allow-incomplete-render #f, and it does not change anything.

base.graphicsEngine.openWindows()
has no effect either.

When more than one texture is used by the shader, I must render an extra frame after loading the textures, with the buffer active.

Anyway, your suggestion of:
altRender.prepareScene(base.win.getGsg())
does fix it! Fantastic! Thanks. Now, why calling that makes a difference when allow-incomplete-render is false is a mystery to me. It works now, so I’ll just leave it at that, and fix my buffer creation spamming.

If you think this may be a bug in panda, I can try more experiments and report it.

Again, thanks for your help.

Fascinating! This does rather sound like a bug in Panda, or at least something worth taking a closer look at. Would it be difficult to package up a complete demo application that demonstrates this behavior?

David