Getting access to the SceneSetup

Hi all,

I’m currently writing a motion blur shader for which I need the (inverse) view-projection matrix of the last frame. I know I can access this in C++ using:

camnode->get_display_region(0)->get_scene_setup(current_thread)->get_world_transform()

and get_camera_transform(), but I can’t seem to access these from Python.
What is the Python equivalent to doing this?

No firsthand experience, but I’m assuming the lens parameters (e.g. cam.getLens().getViewMat()/.getLensMat()/.getProjectionMat()) aren’t quite what you’re looking for?

I guess we could publish DisplayRegion::get_scene_setup(), but it’s true that the world_transform it reports is nothing more than render.get_transform(camera), which you can also query directly.
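
In Python, that would look something like this:

# Net transform of render relative to the camera, i.e. what
# SceneSetup::get_world_transform() reports:
worldTransform = render.getTransform(camera)

# And the camera's net transform relative to render, which is presumably
# what get_camera_transform() corresponds to:
cameraTransform = camera.getTransform(render)

# The actual 4x4 matrices, if that's what you need:
worldMat = worldTransform.getMat()
camMat = cameraTransform.getMat()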

David

I can’t believe I overlooked those methods - thanks to both of you!
The lens methods - are those relative to the lens node in the scene graph?

That’s what I would assume, but I’ve rarely had to do anything with lenses beyond plugging in a new FoV and letting panda handle the math. And my brain is kinda reluctant to delve too deeply into it atm on account of fighting off some bug or other :stuck_out_tongue:

Your assumptions are correct. The lens projection matrix is a property of the lens itself, and knows nothing of the scene graph; so it is in that sense relative to the LensNode.
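
To put that in code (just an illustrative sketch):

# The projection matrix is purely a property of the lens...
projMat = camera.node().getLens().getProjectionMat()

# ...whereas where that LensNode actually sits in the world comes from the
# scene graph:
camToWorldMat = camera.getTransform(render).getMat()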

David

I’ve implemented my motion blur, but I seem to be getting a lot of jitter when moving, like an earthquake is happening.
Basically, I’m passing this frame’s inverse view-projection (clip-to-world) matrix and the previous frame’s world-to-clip matrix to the shader.
I tried averaging the previous frame’s matrix with the pre-previous frame’s matrix, and that helped a bit, but there must still be something causing this jitter.
This is my code (the two NodePaths are passed as shader inputs; Panda’s shader system reads the matrix from each NodePath every frame):

# Clip -> world for the current frame: inverse lens projection, converted
# between the lens and GSG coordinate systems, then the camera's net
# transform in the world.
self.transClipToWorldNP.setMat(
    camera.node().getLens().getProjectionMatInv() *
    Mat4.convertMat(camera.node().getLens().getCoordinateSystem(),
                    win.getGsg().getCoordinateSystem()) *
    camera.getTransform(render).getMat())

# Expose last frame's world -> clip matrix to the shader, then store the
# inverse of the new clip -> world matrix for use next frame.
self.prevWorldToClipNP.setMat(self.prevWorldToClipMat)
self.prevWorldToClipMat.invertFrom(self.transClipToWorldNP.getMat())
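
For reference, binding the NodePaths as shader inputs is just a setShaderInput call on whatever node carries the shader ("quad" and the input names here are only placeholders):

quad.setShaderInput('transClipToWorld', self.transClipToWorldNP)
quad.setShaderInput('prevWorldToClip', self.prevWorldToClipNP)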

Any ideas? Could there maybe be floating point issues when doing matrix calculations like this?

Also, I discovered there is no + operator for Mat4, only a += operator, even though there are * and / operators.

Hmm, I doubt that floating-point error is leading to jitter. There’s probably a wrong matrix going into the calculation somewhere.

The + operator is not defined for matrices because adding two transformation matrices isn’t a meaningful operation in the way that multiplying them (composing transforms) is; the * operator is defined because a matrix-multiply operation is well-defined and useful. It is occasionally useful to perform a componentwise addition between two matrices, though, so there’s some justification for defining the + operator; or maybe we should add a separate method like componentwise_add() instead, since it wouldn’t be obvious exactly what the + operator should mean.
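
In the meantime, a copy followed by += should give you a componentwise sum; for your averaging case it might look like this (a rough, untested sketch; prevMat and prePrevMat are just stand-in names):

avg = Mat4(prevMat)   # copy, so the original matrix isn't modified
avg += prePrevMat     # componentwise sum via the existing += operator
avg = avg / 2.0       # scale back down to get the average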

David

Well… again, no experience, but the debugging approach I’d take would be:

  • quantify the nature of the jitter (e.g. does the view change every frame, or every few frames? is it arbitrary, or might it just be fighting between 2 or more views, or between the right view and some off-by-one parameters?)
  • set up as simple a test case as possible and try outputting each matrix in turn to see which one is jittering (see the sketch after this list)
  • verify that anything that is expected to be normalized is normalized
  • verify that everything is in sync (all relevant parameters are coming from the same frame, and they’re all resolving before getting passed to the card, and that they’re getting passed to the card every frame they change)
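
For the second point, something along these lines should do it (a rough sketch; it reuses the attribute names from your snippet and assumes the usual ShowBase globals like taskMgr and globalClock):

def dumpMatrices(self, task):
    # Print both matrices once per frame so you can see which one jitters.
    print("frame %d" % globalClock.getFrameCount())
    print("clipToWorld:\n%s" % self.transClipToWorldNP.getMat())
    print("prevWorldToClip:\n%s" % self.prevWorldToClipNP.getMat())
    return task.cont

# somewhere in your setup code:
taskMgr.add(self.dumpMatrices, "dumpMatrices")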

I’d also advise doing away with the averaging until you find the root cause. If you end up tracing any values through by hand, that’s just one more level of math to crunch. You may even want to forgo the first stage of averaging for the motion blur: I’d assume that if you “average” the current frame with nothing else, you should get just the current frame out of the shader, and if that still shakes (or even if it doesn’t), you’ve just reduced the number of places you need to look for the problem. I ended up doing that for my 5-view 3D blit shader to finally get it working; if it couldn’t put one view in the right place, it certainly wouldn’t work on 5.
I also sometimes kick myself for not simply verifying my initial assumptions on data persistence/availability/format before relying upon them to work in an unusual case.