How can I get previous modelview matrix?

Hello there.

I’m trying to implement a motion blur shader.
In order to do that, I need the previous frame’s modelview and modelview-projection matrices.

How can I get those?
And are there any useful sample shaders for Panda3D?

Thanks

Ah, I once implemented motion blur based on the previous frame’s modelview matrix as well. Have you seen this thread?
discourse.panda3d.org/viewtopic.php?t=6674

(On a side note, there’s a GPU Gems chapter about this kind of motion blur, but maybe you’ve already seen it.)

Thanks, now I have a little understanding of shader programming :slight_smile:

And another question:
I’ve almost finished converting the example Cg motion blur shader to a .sha file for Panda3D,

but when I run it I get an error like this:
"error C5119: variable/member "l_position" has semantic "POSITION" which is not visible in this profile"

Which profile is needed?
In fact, I tried every profile I know:
ps_1_1, ps_1_2, ps_1_3, fp20, arbfp1, ps_2_0, ps_2_x, fp30

But every profile failed with:
":gobj(error): Fragment shader failed to compile with profile xxx!"

What’s wrong with these profiles?
My current graphics card is an ATI Radeon HD 4800 series on Vista.
(It should work, shouldn’t it?)

Also, I need to get that position from the vertex shader.
Is there any way to do that?

Perhaps you’re trying to use a vertex shader as a fragment shader (e.g. you chose a fragment profile for the vertex shader)?

Umm…
I tried it like this:
Shader.load('mblur.sha', 'arbvp1', 'arbfp1')

Even the default profile, "arbfp1", fails.
Is there anything else I need to know?

Now I’ve fixed the compile-time error, but it’s still not working properly.
I’m probably computing the previous frame’s modelview / modelview-projection matrices wrongly.

The code below is what I did.
Is this the right way to build the previous frame’s modelview / modelview-projection matrices?

mat = base.cam.getTransform(render).getMat()
self.prevModelViewNP.setMat(mat)
self.prevModelViewProjNP.setMat(mat*base.camLens.getProjectionMat())

This code runs every frame, after these parameters have been passed to the shader.
And I pass them like this:

mat = self.prevModelViewNP.getMat()
self.teapot.setShaderInput('prevModelView0',Vec4(mat.getRow(0)))
self.teapot.setShaderInput('prevModelView1',Vec4(mat.getRow(1)))
self.teapot.setShaderInput('prevModelView2',Vec4(mat.getRow(2)))
self.teapot.setShaderInput('prevModelView3',Vec4(mat.getRow(3)))
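The subtle part of the bookkeeping above is the ordering: the shader must see the matrix that was captured on the *previous* frame, so the cached value has to be sent to the shader before it is overwritten with the current frame's matrix. Here is a minimal framework-free sketch of that one-frame-lag pattern (plain Python, no Panda3D; the class and names are hypothetical illustrations, not Panda3D API):

```python
# Sketch of one-frame-lag caching: what the shader reads on frame N
# is the camera matrix that was captured on frame N-1.
# A "matrix" is just a string here; in Panda3D it would be a Mat4.

class PrevMatrixCache:
    def __init__(self, initial):
        self.prev = initial  # the value the shader will read this frame

    def frame(self, current, apply_to_shader):
        # 1) First hand last frame's matrix to the shader...
        apply_to_shader(self.prev)
        # 2) ...then overwrite the cache with this frame's matrix,
        #    so that next frame it becomes "previous".
        self.prev = current

seen = []  # stands in for setShaderInput calls
cache = PrevMatrixCache(initial="mat_frame_0")
cache.frame("mat_frame_1", seen.append)
cache.frame("mat_frame_2", seen.append)
print(seen)  # the shader always lags one frame behind the camera
```

If the two steps inside `frame()` are swapped, the shader receives the *current* matrix instead, the velocity between old and new positions collapses to zero, and the blur disappears.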

No. This is what I use, but I’m not sure it’s right, as I’m getting a lot of jitter with it.

self.transClipToWorldNP = NodePath("world-transform")
self.transClipToWorldNP.setMat(
    self.manager.camera.node().getLens().getProjectionMatInv() *
    Mat4.convertMat(self.manager.camera.node().getLens().getCoordinateSystem(),
                    self.manager.win.getGsg().getCoordinateSystem()) *
    self.manager.camera.getTransform(render).getMat())
self.prevWorldToClipMat = Mat4()
self.prevWorldToClipMat.invertFrom(self.transClipToWorldNP.getMat())
self.prevWorldToClipNP = NodePath("previous-camera-transform")
self.prevWorldToClipNP.setMat(self.prevWorldToClipMat)

I’ve heard that you should reparent the NodePath to the object being rendered (most likely the fullscreen quad) to get the best precision.

Keep in mind that the order in which you multiply the matrices matters: AB is not the same as BA. I might have messed up the order a bit in my snippet.
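To make that pitfall concrete, here is a tiny standalone check with plain 2x2 matrices (nested lists, no Panda3D) showing that swapping the factors really does change the result:

```python
# Toy demonstration that matrix multiplication is not commutative.
# A scales x by 2; B shears x by y.

def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 0], [0, 1]]   # scale
B = [[1, 1], [0, 1]]   # shear

AB = matmul(A, B)
BA = matmul(B, A)
print(AB)  # [[2, 2], [0, 1]]
print(BA)  # [[2, 1], [0, 1]]
assert AB != BA
```

Note that Panda3D's Mat4 uses the row-vector convention (points are transformed as v * M), so in a product like the snippet above the leftmost matrix is applied to the point first; if your reference Cg code assumes column vectors, every product needs to be reversed.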