projectTexture() questions

Edit 2: (putting an answer before my question …) I found the lens.setFilmOffset() function, which appears to successfully mimic my OpenCV K matrix. So perhaps I’m back to projectTexture() being cool and doing what I want better than I can do it myself. But in the interest of learning … if there is something that helps me better understand the 3rd and 4th texture parameters, that is still interesting.
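For anyone else landing here: this is roughly how I understand the mapping from an OpenCV intrinsic matrix to Panda3D's lens parameters. The function below is my own sketch, in plain Python — the numbers and the sign conventions (especially the y flip) are assumptions that may need adjusting, but the three Panda3D calls noted in the comments (setFilmSize, setFocalLength, setFilmOffset) are the relevant ones.

```python
# Sketch: translating an OpenCV intrinsic matrix
#   K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
# into Panda3D lens parameters, all expressed in pixel units.
# Treat the y-axis sign as a guess: OpenCV puts the origin at the
# top-left with y pointing down, Panda3D measures film offset from
# the film center with y pointing up.

def k_to_lens_params(fx, fy, cx, cy, img_w, img_h):
    """Return (film_size, focal_length, film_offset) in pixels."""
    # Use the image width as the film width; scale the film height so
    # a single focal length (fx) serves both axes when fx != fy.
    film_w = img_w
    film_h = img_h * fx / fy
    focal = fx
    # Principal-point offset, measured from the image center.
    off_x = cx - img_w / 2.0
    off_y = (img_h / 2.0 - cy) * fx / fy
    return (film_w, film_h), focal, (off_x, off_y)

# Hypothetical calibration numbers, purely for illustration:
size, focal, offset = k_to_lens_params(
    fx=1000.0, fy=990.0, cx=330.0, cy=235.0, img_w=640, img_h=480)
# Then, on a PerspectiveLens:
#   lens.setFilmSize(*size)
#   lens.setFocalLength(focal)
#   lens.setFilmOffset(*offset)
```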

Side comment: they should teach linear algebra as the required college freshman class, not calculus!

In another thread about perspective-correct texture mapping, @rdb suggested I take a look at the projectTexture() function. I mostly got this working, but ran into an issue: my real camera has an OpenCV-style calibration matrix (the center of projection is not the center of the image, and the u, v pixel sizes aren’t exactly the same). So when I compute my geometry and project onto it … the texture is off by a few pixels in places.

I was curious how projectTexture() works, but I’ve hunted around and so far can’t find the source code for this function, which is weird. Is it called something else in the C++ code? I only get two matches when I search the GitHub repository, and both are example calls, not the function itself.

Edit 1: texProjectorEffect.cxx … and it seems to use a texture matrix derived from the lens parameters, rather than setting texture coordinates explicitly … hmm …

For each vertex in my geometry, I can compute the distance to the camera and also the distance to the camera plane. Googling around and trying to figure this out, the closest I’ve come to something that works (but with some severe artifacts that projectTexture() does not have) is this:

u, v = pixel coordinates normalized to the image dimensions (i.e. in the range 0.0 to 1.0)
z = distance from the 3D point to the camera plane
w = distance from the 3D point to the camera position

4-element texcoord = (u/w, v/w, 0, 1/w)

I haven’t found good information on the 3rd parameter … I’ve put different things in there (like 0 and z/w) and don’t see much difference in the results.

Questions: is there a place I can go to find the projectTexture() source and maybe figure out what’s going on myself? Or can anyone give me a quick tip on what I’m supposed to do, or what I might be doing wrong?

Backing up, what am I trying to do? I have a real camera image, and I am estimating the geometry of the world that the image intersects. I am trying to project the image onto that geometry, and then view the textured geometry from different vantage points. Hopefully that makes some sort of sense.

Thanks in advance!


The camelCase names are aliases only available in the Python source code. So if you search for set_tex_projector, you’ll find it in nodePath.cxx, and you’ll find that it indeed creates a TexProjectorEffect.

As you’ve already found, you should be able to alter the lens settings to get the desired effect. You can also use MatrixLens to set up a custom projection matrix. That said, if you already have the projection matrix, you could also decide to use a TexMatrixAttrib directly (with setTexTransform(TextureStage.getDefault(), TransformState.makeMat(mat))). After all, all that TexProjectorEffect does is calculate the matrix from the lens settings and apply it as a texture transform.
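A plain-Python sketch of what that texture matrix boils down to, as I understand it: compose the projector's projection matrix with a scale/bias that maps NDC [-1, 1] into texture space [0, 1], then apply the result as the texture transform. The toy pinhole projection below is an assumption for illustration; in Panda3D you would hand the final matrix to setTexTransform via TransformState.makeMat().

```python
# Compose a projection matrix with an NDC -> UV scale/bias, the way a
# projective texture transform does.  Row-major 4x4 matrices, points
# as column vectors.

def mat_mul(a, b):
    """Row-major 4x4 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(m, v):
    """Apply a row-major 4x4 matrix to a 4-vector."""
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

# Toy pinhole projection (focal length 1, camera looking down +z):
# clip = (x, y, z, z), so after the divide, ndc = (x/z, y/z, 1).
proj = [[1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 1.0, 0.0]]

# NDC -> UV scale/bias: u = 0.5*x + 0.5, v = 0.5*y + 0.5 (applied
# homogeneously, before the divide by q).
scale_bias = [[0.5, 0.0, 0.0, 0.5],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]]

tex_mat = mat_mul(scale_bias, proj)

# A vertex at camera-space (1, 0.5, 2).  The divide by q at the end is
# exactly what the texture hardware does per fragment.
s, t, r, q = mat_vec(tex_mat, [1.0, 0.5, 2.0, 1.0])
u, v = s / q, t / q   # -> 0.75, 0.625
```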

For what it’s worth, if you use 4-component texcoords instead of a projector/matrix, you shouldn’t divide by w (at least not before fragment interpolation). That’s already what the texture sampling hardware does. It won’t get the projection right if you do it yourself.
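If I'm reading that right, the per-vertex 4-component texcoord should keep the homogeneous coordinate intact rather than dividing at the vertex. A minimal sketch, using the question's names (u, v as the vertex's normalized image coordinates, w as its depth along the projecting camera's axis) — the function name is mine:

```python
# Build a 4-component projective texcoord without the premature divide.

def projective_texcoord(u, v, w):
    # Store (u*w, v*w, 0, w) rather than (u/w, v/w, 0, 1/w): the
    # hardware linearly interpolates the four components across the
    # triangle, then computes s/q and t/q per fragment, which is what
    # recovers a perspective-correct (u, v).
    return (u * w, v * w, 0.0, w)

s, t, r, q = projective_texcoord(0.3, 0.7, 2.0)
# s/q == 0.3 and t/q == 0.7 at the vertex itself; between vertices,
# the per-fragment divide performs the perspective correction.
```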
