Yet another attempt at motion blur.

Hello there, panda-ers! I’ve been on IRC from time to time, but this problem finally made me register.

So, this is my goal: http://http.developer.nvidia.com/GPUGems3/gpugems3_ch27.html
That’s all nice and clear, but Panda doesn’t seem to have an easy way to get this view-projection matrix. I’ve found quite a few topics around here, but none of them gives a working answer. rdb’s solution here works well only when you change the camera’s heading (the H rotation component). Otherwise it’s unusable: the ghost images pop up way off from where they should be. Either it’s a bug on my part, or rdb has incredibly low standards to call that “jittering”. I also figured out that the conversion matrix (convmat in my source) is an identity matrix (convertMat(0, 1))… so that line has no effect.
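To be clear about what the shader is supposed to do: take the pixel’s clip-space position (texcoord + depth), unproject it to world space with the inverse of the current view-projection matrix, reproject it with the previous frame’s matrix, and blur along the difference. A toy numeric sketch in plain Python (translation-only matrices and made-up values, nothing like Panda’s real lens math):

```python
# Row-vector convention (as in Panda3D): point' = point * matrix,
# so the translation lives in the last row.
def vec_mat(v, m):
    return [sum(v[k] * m[k][j] for k in range(4)) for j in range(4)]

def translation(tx, ty, tz):
    m = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    m[3][0], m[3][1], m[3][2] = tx, ty, tz
    return m

# Pretend the current view-projection is the identity, and that last frame
# the camera sat so the same world point projected 0.5 further to the right.
cur_vp_inv = translation(0.0, 0.0, 0.0)
prev_vp = translation(0.5, 0.0, 0.0)

current = [0.0, 0.0, 0.3, 1.0]                  # clip-space pos from texcoord + depth
world = vec_mat(current, cur_vp_inv)            # unproject to world space
world = [c / world[3] for c in world]           # perspective divide (no-op here, w == 1)
previous = vec_mat(world, prev_vp)              # reproject with last frame's matrix
previous = [c / previous[3] for c in previous]

velocity = [(c - p) / 2.0 for c, p in zip(current, previous)][:2]
print(velocity)  # [-0.25, 0.0]
```

So the point that used to project further right gets a leftward velocity, and the blur samples march back along it.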

Might be a bug on my part though… here are the sources:

shader:

//Cg
//
//Cg profile arbvp1 arbfp1

// http://http.developer.nvidia.com/GPUGems3/gpugems3_ch27.html

const int SAMPLES = 4;

void vshader(
    in float4 vtx_position : POSITION,
    out float4 l_position : POSITION,
    out float2 l_texcoord0 : TEXCOORD0,
    uniform float4 texpad_frame,
    uniform float4x4 mat_modelproj
    )
{
    l_position = mul(mat_modelproj, vtx_position);
    l_texcoord0 = vtx_position.xz * texpad_frame.xy + texpad_frame.xy;
}

void fshader(
    in float2 l_texcoord0 : TEXCOORD0,
    out float4 o_color : COLOR,                                           
    uniform sampler2D k_frame,
    uniform sampler2D k_depth,
    uniform float4x4 k_cur_inv_projection,
    uniform float4x4 k_prev_projection
    )
{      
    float depth = tex2D(k_depth, l_texcoord0).x;
    float4 currentPos = float4(l_texcoord0.x * 2 - 1, (1 - l_texcoord0.y) * 2 - 1, depth, 1);
    float4 worldPos = mul(currentPos, k_cur_inv_projection);
    worldPos /= worldPos.w;
    float4 previousPos = mul(worldPos, k_prev_projection);
    previousPos /= previousPos.w;
    float2 velocity = (currentPos - previousPos) / 2.0;

    float4 colorbin = tex2D(k_frame, l_texcoord0); // current frame
    for (int i = 1; i < SAMPLES; ++i)
    {   
        l_texcoord0 += velocity;
        colorbin += tex2D(k_frame, l_texcoord0);
    }

    o_color = colorbin / float(SAMPLES);

}

relevant part of the script:

def _getProj():
    convmat = panda.Mat4.convertMat(self.base.camLens.getCoordinateSystem(),
                                    self.base.win.getGsg().getCoordinateSystem())
    projmat = self.base.camLens.getProjectionMat() * convmat * self.base.cam.getMat(self.base.render)
    return projmat

self._prev_mat = _getProj()
frame = panda.Texture()
depth = panda.Texture()
quad = self.fmanager.renderSceneInto(colortex = frame, depthtex = depth)
quad.setShader(resource.shaders.motion_blur)
quad.setShaderInput('frame', frame)
quad.setShaderInput('depth', depth)
quad.setShaderInput('prev_projection', panda.Mat4())
quad.setShaderInput('cur_inv_projection', self._prev_mat)

def _update_transform(task):
    if not self.mblur:
        return task.done
    else:
        quad.setShaderInput('prev_projection', self._prev_mat)
        self._prev_mat = _getProj()
        inverse = panda.Mat4()
        inverse.invertFrom(self._prev_mat)
        quad.setShaderInput('cur_inv_projection', inverse)
        return task.cont

self.base.taskMgr.add(_update_transform, 'prev_transform_catcher')

Thanks for any and all help! Dolkar

I think you’re multiplying the projection matrix and the camera matrix in the wrong order. Panda composes transforms with row vectors, so it goes cam_transform * conv_mat * projection_mat, not projection_mat * conv_mat * cam_transform.

Also, it’s probably faster not to invert the matrix with invertFrom, but to build the inverse transformation directly from getProjectionMatInv() and self.base.render.getMat(self.base.cam), composing in reverse order.
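The “reverse order” part is just the identity (A·B)⁻¹ = B⁻¹·A⁻¹. A quick sanity check with plain 4×4 matrices (toy scale and translation, not Panda objects):

```python
def mat_mul(a, b):
    # 4x4 matrix product, matrices as lists of rows
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    m = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    m[3][0], m[3][1], m[3][2] = tx, ty, tz  # row-vector convention
    return m

def scale(s):
    return [[s if i == j and i < 3 else (1.0 if i == j else 0.0) for j in range(4)]
            for i in range(4)]

identity = scale(1.0)
T, S = translation(1.0, 0.0, 0.0), scale(2.0)

M = mat_mul(T, S)                                        # translate, then scale
good = mat_mul(scale(0.5), translation(-1.0, 0.0, 0.0))  # S^-1 * T^-1 (reversed)
bad  = mat_mul(translation(-1.0, 0.0, 0.0), scale(0.5))  # T^-1 * S^-1 (same order)

print(mat_mul(M, good) == identity)  # True  -- inverses composed in reverse order
print(mat_mul(M, bad) == identity)   # False -- same order does not invert
```

In Panda terms the inverse composition would be something like self.base.render.getMat(self.base.cam) * convmat_inv * self.base.camLens.getProjectionMatInv(), with convmat_inv built by swapping the arguments to convertMat — I haven’t run that exact line, but that’s the shape of it.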

The order makes little to no difference. Probably because conv_mat is an identity matrix. Trust me, I really need that motion blur, so I’ve tried almost everything I could think of. It must be some nasty little bug hidden somewhere…

The difference is that cam_transform is multiplied before projection_mat instead of after, which should make a huge difference. Keep in mind that with matrices, A * B is not necessarily the same as B * A.
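A tiny two-by-two example of that non-commutativity, if it helps:

```python
def mul2(a, b):
    # multiply two 2x2 matrices given as tuples of rows
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

A = ((1, 1), (0, 1))  # shear
B = ((2, 0), (0, 1))  # non-uniform scale
print(mul2(A, B))  # ((2, 1), (0, 1))
print(mul2(B, A))  # ((2, 2), (0, 1))
```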

Could you share your code so that I could take a look?

I know, it should… but it doesn’t work either way. All the relevant code is there… I’m not sure what else you’re looking for. Thanks for trying, though.

Can’t you just zip up and e-mail me a working (well, except for the problem you’re having of course) example? I don’t really have the time to create a new test case from scratch.

Oh, right. I’ll make you a neat little script tomorrow…

I’m posting it here anyways, so others also have a chance to help and/or see the potential solution… and I also couldn’t find your e-mail.

from direct.filter.FilterManager import FilterManager
from pandac.PandaModules import loadPrcFileData
from direct.showbase.ShowBase import ShowBase
from panda3d.core import *

loadPrcFileData("", "want-directtools #t")

motion_blur_shader = """\
//Cg
//
//Cg profile arbvp1 arbfp1

// http://http.developer.nvidia.com/GPUGems3/gpugems3_ch27.html

const int SAMPLES = 4;

void vshader(
    in float4 vtx_position : POSITION,
    out float4 l_position : POSITION,
    out float2 l_texcoord0 : TEXCOORD0,
    uniform float4 texpad_frame,
    uniform float4x4 mat_modelproj
    )
{
    l_position = mul(mat_modelproj, vtx_position);
    l_texcoord0 = vtx_position.xz * texpad_frame.xy + texpad_frame.xy;
}

void fshader(
    in float2 l_texcoord0 : TEXCOORD0,
    out float4 o_color : COLOR,
    uniform sampler2D k_frame,
    uniform sampler2D k_depth,
    uniform float4x4 k_cur_inv_projection,
    uniform float4x4 k_prev_projection
    )
{
    float depth = tex2D(k_depth, l_texcoord0).x;
    float4 currentPos = float4(l_texcoord0.x * 2 - 1, (1 - l_texcoord0.y) * 2 - 1, depth, 1);
    float4 worldPos = mul(currentPos, k_cur_inv_projection);
    worldPos /= worldPos.w;
    float4 previousPos = mul(worldPos, k_prev_projection);
    previousPos /= previousPos.w;
    float2 velocity = (currentPos - previousPos) / 2.0;

    float4 colorbin = tex2D(k_frame, l_texcoord0); // current frame
    for (int i = 1; i < SAMPLES; ++i)
    {
        l_texcoord0 += velocity;
        colorbin += tex2D(k_frame, l_texcoord0);
    }

    o_color = colorbin / float(SAMPLES);
}
"""

class BlurTest(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)

        environ = base.loader.loadModel("models/environment")
        environ.reparentTo(base.render)
        environ.setScale(0.25, 0.25, 0.25)
        environ.setPos(-8, 42, -1)

        fm = FilterManager(base.win, base.cam)
        self.prev_mat = self.getProj()

        frame = Texture()
        depth = Texture()
        self.quad = fm.renderSceneInto(colortex = frame, depthtex = depth)
        self.quad.setShader(Shader.make(motion_blur_shader))
        self.quad.setShaderInput('frame', frame)
        self.quad.setShaderInput('depth', depth)
        self.quad.setShaderInput('prev_projection', Mat4())
        self.quad.setShaderInput('cur_inv_projection', self.prev_mat)

        base.taskMgr.add(self.update_transforms, 'prev_transform_catcher')

    def getProj(self):
        convmat = Mat4.convertMat(base.camLens.getCoordinateSystem(),
                                  base.win.getGsg().getCoordinateSystem())
        projmat = base.camLens.getProjectionMat() * convmat * base.cam.getMat(base.render)
        return projmat

    def update_transforms(self, task):
        self.quad.setShaderInput('prev_projection', self.prev_mat)
        self.prev_mat = self.getProj()
        inverse = Mat4()
        inverse.invertFrom(self.prev_mat)
        self.quad.setShaderInput('cur_inv_projection', inverse)
        return task.cont

app = BlurTest()
app.run()

There’s another problem with this setup: black pops in from the edges while moving. I didn’t notice that before, since I had a black background. I couldn’t find any other working Panda-compatible motion blur shader to test whether the problem is in the shader itself… but that shouldn’t be the case, since it’s basically copy-pasted from GPU Gems.

Anyways, thanks for your time.

A few things:

  • Panda is Z-up, not Y-up. Therefore, change your velocity computation to use currentPos.xz and previousPos.xz.
  • Turns out you need the GSG’s internal matrix to convert the matrix from Z-up to OpenGL’s Y-up:
convmat = Mat4.convertMat(base.camLens.getCoordinateSystem(), base.win.getGsg().getInternalCoordinateSystem())
projmat = base.camLens.getProjectionMat() * convmat * base.cam.getMat(base.render)
  • If there’s jittering, that may mean your velocity is not properly calculated and that it blurs longer streaks than it’s supposed to. For one, you probably also have to divide velocity by the number of samples. I’m just guessing, though.
  • If there are black borders, that probably means it’s reading outside texture space. Setting the texture wrap mode to WMClamp would help. You may also need to set “textures-auto-power-2 #t” in Config.prc.
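Putting the first and third bullets together, the per-sample step you add to the texcoord would come out like this — a numeric sketch in plain Python with made-up clip-space values (the division by the sample count is my guess, as I said):

```python
SAMPLES = 4

current  = (0.2, 0.5, 0.40, 1.0)   # clip-space (x, y, z, w) this frame -- made up
previous = (0.6, 0.5, 0.38, 1.0)   # same point under last frame's matrix -- made up

# Panda is Z-up, so the screen-plane components are x and z, not x and y.
cur_xz  = (current[0], current[2])
prev_xz = (previous[0], previous[2])

# Halve to map the NDC-range delta into texture space, then split the streak
# across the samples so SAMPLES taps cover it once, not SAMPLES times over.
step = tuple((c - p) / 2.0 / SAMPLES for c, p in zip(cur_xz, prev_xz))
print(step)  # roughly (-0.05, 0.0025)
```

On the Python side, the clamp fix from the last bullet should just be frame.setWrapU(Texture.WMClamp) and frame.setWrapV(Texture.WMClamp) on the color texture (and likewise for depth) — untested here, though.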

Thanks… that works better. Still, the jittering is too much of a problem. It’s not caused by the velocities… rendering just the previousPos shows the same issue. I think some transformation is wrong, because it shows no attempt to blur a forward motion, which is what I’m looking for in a racing-like game.
Actually… if I remove the depth buffer, the result looks exactly the same…

EDIT: Idea! I think a selective radial blur could work in my case… But still, Panda could use at least one working sample of a shader as popular as motion blur.