Low-level multistage render-to-texture pipeline: trouble saving intermediate stages

Hi all,

I’m trying to write a low-level multistage render-to-texture pipeline, and I need a way to save the intermediate stages because, for example, the third stage needs the output of both the first and second stages. I’ve tried a few different approaches, including the high-level FilterManager with its renderQuadInto method, but FilterManager requires a window to be passed in, while I want the last step to render into a GraphicsBuffer.
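For reference, the usual FilterManager multi-pass pattern looks roughly like the following (paraphrased from the manual, with placeholder shader names), and it’s the base.win argument that I don’t have in my fully offscreen setup:

from direct.filter.FilterManager import FilterManager
from panda3d.core import Texture, Shader

manager = FilterManager(base.win, base.cam)
tex1 = Texture()
tex2 = Texture()
# The main scene renders into tex1; finalquad draws into the window.
finalquad = manager.renderSceneInto(colortex=tex1)
# An intermediate pass renders into tex2 via renderQuadInto.
interquad = manager.renderQuadInto(colortex=tex2)
interquad.setShader(Shader.load("stage1.sha"))
interquad.setShaderInput("tex", tex1)
finalquad.setShader(Shader.load("stage2.sha"))
finalquad.setShaderInput("tex", tex2)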

The documentation for renderQuadInto says that it “Creates an offscreen buffer for an intermediate computation. Installs a quad into the buffer. Returns the fullscreen quad.” So I tried a low-level implementation of this by creating a buffer-and-quad combo, each with its own scene graph, for each stage, as follows:

import panda3d.core as pc
import numpy as np

# Texture resolution used by every stage (example values; the real
# values are set elsewhere in my script)
texW, texH = 512, 512

# Setup the engine stuff
engine = pc.GraphicsEngine.get_global_ptr()
pipe = pc.GraphicsPipeSelection.get_global_ptr().make_module_pipe("pandagl")

# Request 8 RGB bits, 8 alpha bits, and no depth buffer.
fb_prop = pc.FrameBufferProperties()
fb_prop.setRgbColor(True)
fb_prop.setRgbaBits(8, 8, 8, 8)
fb_prop.setDepthBits(0)

# Create a WindowProperties object set to size.
win_prop = pc.WindowProperties(size=(texW, texH))

# Don't open a window - force it to be an offscreen buffer.
flags = pc.GraphicsPipe.BF_refuse_window

# Create a GraphicsBuffer to render to, we'll get the textures out of this
def makeBuffer():
    buffer = engine.makeOutput(pipe, "Buffer", 0, fb_prop, win_prop, flags)
    btex = pc.Texture("Buffer Tex")
    btex.setup2dTexture(texW, texH, pc.Texture.T_unsigned_byte, pc.Texture.F_rgba8)
    btex.setWrapU(pc.Texture.WM_repeat)
    btex.setWrapV(pc.Texture.WM_clamp)
    buffer.add_render_texture(btex, pc.GraphicsOutput.RTM_copy_ram)
    return buffer, btex

# Create a scene graph, a camera, and a card to render to
def makeScene():
    buffer, btex = makeBuffer()
    cm = pc.CardMaker("card")
    canvas = pc.NodePath("Scene")
    canvas.setDepthTest(False)
    canvas.setDepthWrite(False)
    card = canvas.attachNewNode(cm.generate())
    card.setZ(-1)
    card.setX(-1)
    card.setScale(2)
    cam2D = pc.Camera("Camera")
    lens = pc.OrthographicLens()
    lens.setFilmSize(2, 2)
    lens.setNearFar(0, 1000)
    cam2D.setLens(lens)
    camera = pc.NodePath(cam2D)
    camera.reparentTo(canvas)
    camera.setPos(0, -1, 0)
    display_region = buffer.makeDisplayRegion()
    display_region.camera = camera
    return card, btex

card1, btex1 = makeScene()
card2, btex2 = makeScene()
card3, btex3 = makeScene()

I previously had only one buffer when I was implementing stages 1 & 2, and this worked fine: stage 1 would render to the buffer texture btex, which could then be bound to a sampler for stage 2, which would also render to btex. The problem came when I started to implement stage 3, which needs the output of stage 1. With one buffer and a single texture btex, stage 1 writes into btex, but stage 2 also writes into btex, overwriting the output of stage 1 and preventing me from using it later on. Each stage is wrapped in a function that looks something like this:

def jumpFlood(seeds, sphereXYZ: pc.Texture):
    # Place the seeds in the texture
    texArr = np.zeros((texH, texW, 4), dtype=np.dtype('B'))
    for seed in range(min(seeds, 255)):
        i = np.random.randint(0, texH)
        j = np.random.randint(0, texW)
        texArr[i,j,0] = j
        texArr[i,j,1] = (texH - i - 1)
        texArr[i,j,2] = 1 + seed
        texArr[i,j,3] = 255
    seedsTex = arrayToTexture("seeds", texArr, 'RGBA', pc.Texture.F_rgba8)
    storeTextureAsImage(seedsTex, "seeds")
    # Compute the maximum steps
    N = max(texW, texH)
    steps = int(np.log2(N))
    # Attach shader and load uniforms
    card1.set_shader(voronoiShader)
    card1.set_shader_input("sphereXYZ", sphereXYZ)
    card1.set_shader_input("maxSteps", float(steps))
    card1.set_shader_input("jumpflood", seedsTex)
    card1.set_shader_input("texSize", (float(texW), float(texH)))
    # Start jumping
    for step in range(steps+2):
        card1.set_shader_input("level", step)
        engine.renderFrame()
        card1.set_shader_input("jumpflood", btex1)
    return btex1

This is stage 1, which currently uses card1 and btex1. The functions are chained together as follows:

# Initialize sphereXYZ texture
sphereXYZ = loadXYZ()

seeds = 10
voronoi = jumpFlood(seeds, sphereXYZ)
boundaries = plateBoundaries(seeds, 2, voronoi)
distances = boundaryDistances(voronoi, boundaries, sphereXYZ)
storeTextureAsImage(voronoi, "jumpflood")
storeTextureAsImage(boundaries, "smooth boundaries")
storeTextureAsImage(distances, "boundary distances")

When I was still trying to implement stage 3 with a single buffer, I also tried having the functions return btex.makeCopy(), in the hope that this would snapshot the stage’s output so that the next stage wouldn’t write into the same texture, but that doesn’t appear to have worked.
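What I was hoping for is behaviour like this hypothetical snapshot helper (untested sketch; it assumes RTM_copy_ram has already filled in the RAM image by the time it’s called):

def snapshotTexture(src: pc.Texture, name: str) -> pc.Texture:
    # Hypothetical helper: duplicate the rendered RAM image into an
    # independent texture so later passes can't overwrite it.
    copy = pc.Texture(name)
    copy.setup2dTexture(src.getXSize(), src.getYSize(),
                        src.getComponentType(), src.getFormat())
    copy.setRamImage(src.getRamImage())
    return copy

i.e. each stage would return snapshotTexture(btex, "stage1") rather than btex itself.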

The problem with my current multibuffer implementation is that the textures stored at the end are just grey rectangles, and I’m not sure why. I also get the console message “:display(error): Shader input jumpflood is not present.” from stage 1, which I suspect is caused by the line card1.set_shader_input("jumpflood", btex1), since seedsTex is saved correctly every time and doesn’t rely on rendering through a buffer.

I could write each stage’s texture out to disk and read it back in as a new texture each time, but that would be very slow, so I would like to keep the intermediate textures at least in RAM, if not in VRAM, during runtime.
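One alternative I’m considering (just a sketch, untested): keep the intermediates in VRAM with RTM_bind_or_copy and only pull a copy back to RAM when I actually need to inspect or save one:

# Keep the render target bound in VRAM instead of copying it to RAM every frame.
buffer.add_render_texture(btex, pc.GraphicsOutput.RTM_bind_or_copy)

# ... render the stages ...

# Only when a RAM copy is actually needed (e.g. to save a debug image):
if engine.extract_texture_data(btex, buffer.get_gsg()):
    storeTextureAsImage(btex, "debug")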

Any help is greatly appreciated.

I have not studied your code in detail, but one thing did jump out at me: you need to pass a host window (which can be a buffer) into the makeOutput call. Usually this would be base.win, but it can also be your final buffer if you’re doing everything offscreen. Otherwise each buffer will create its own graphics context and you will not be able to share graphics resources between them.
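In other words, something along these lines (sketch, assuming a normal ShowBase window exists):

buffer = engine.makeOutput(pipe, "Stage buffer", 0, fb_prop, win_prop, flags,
                           base.win.getGsg(), base.win)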

Thanks for your feedback! I updated to:

# The final buffer acts as the host; the per-stage buffers share its GSG.
finalBuffer = engine.makeOutput(pipe, "Buffer", 0, fb_prop, win_prop, flags)
btex = pc.Texture()
finalBuffer.addRenderTexture(btex, pc.GraphicsOutput.RTM_copy_ram)

def makeBuffer():
    # Pass the final buffer's GSG and the final buffer itself as host so
    # all the stage buffers share one graphics context.
    buffer = engine.makeOutput(pipe, "Buffer", 0, fb_prop, win_prop, flags,
                               finalBuffer.getGsg(), finalBuffer)
    btex = pc.Texture()
    btex.setWrapU(pc.Texture.WM_repeat)
    btex.setWrapV(pc.Texture.WM_clamp)
    buffer.addRenderTexture(btex, pc.GraphicsOutput.RTM_copy_ram)
    return buffer, btex

I also discovered that the shader error was because I had the wrong shader linked in stage 3. It now appears to be working with the multibuffer method!
