Nonlinear Lenses and Projected Textures


I just figured out (with help from people on the C++ forum) how to create a new `Lens` type. Now I've discovered that a `Lens` doesn't seem to work the way I thought.

The `Lens` I wanted to create is highly non-linear: so non-linear that the same point in space may be projected to several different points on the film.

Now, I thought I would only have to implement the `do_extrude` method accordingly, but when I was playing with my copy of `FisheyeLens` inserted into this code in place of the `CylindricalLens`, I noticed that changes to `do_extrude` don't really do anything. Only if I change `do_project` does the picture actually change.

`do_project` is hard to implement if the same point can appear in two places on the film; after all, I only get one `LPoint3` argument to fill. Also, for my specific scenario (undistorting a complex physical projection setup), this direction is much more difficult than the other.
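To keep the two directions straight, here is how I picture them: a toy NumPy sketch (plain Python, not the actual Panda3D `Lens` API; `FOV_SCALE`, `project`, and `extrude` are my own illustrative names) of an equidistant fisheye, where the two methods are exact inverses of each other:

```python
import numpy as np

FOV_SCALE = 1.0  # radians of view angle per unit of film radius (assumed)

def project(direction):
    """Map a 3-D view direction to a single 2-D film point."""
    x, y, z = direction / np.linalg.norm(direction)
    theta = np.arccos(np.clip(z, -1.0, 1.0))  # angle off the optical axis
    r = theta / FOV_SCALE                     # equidistant fisheye mapping
    rho = np.hypot(x, y)
    if rho < 1e-12:                           # looking straight down the axis
        return np.zeros(2)
    return r * np.array([x, y]) / rho

def extrude(film_point):
    """Map a 2-D film point back to a unit 3-D view direction (the ray)."""
    r = np.linalg.norm(film_point)
    if r < 1e-12:
        return np.array([0.0, 0.0, 1.0])
    theta = r * FOV_SCALE
    s = np.sin(theta) / r
    return np.array([film_point[0] * s, film_point[1] * s, np.cos(theta)])

# Round trip: extruding a film point and projecting it back recovers the point.
p = np.array([0.3, -0.2])
assert np.allclose(project(extrude(p)), p)
```

My problem is exactly that my real lens has no well-defined `project` in this sense: one direction would have to map to more than one film point.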

So my thought is: would it work if I used a projected texture instead? That is, could I implement `do_extrude` the way I was going to (with an essentially arbitrary mapping from world coordinates to points on the film) and then use that to project onto some flat surface? Would I have to create another camera with an `OrthographicLens` to film that surface?

I'll try it out myself, but I'm really new to Panda3D, so it will take a while. I would be grateful to hear about any pitfalls along the way, or whether the whole idea is doomed to fail.


PS: Any helpful comments, pointers to relevant documentation, code snippets, etc. are greatly appreciated.

Most real-time and near-real-time 3-D rendering is scanline rendering, which is basically a projection operation: vertices in 3-D space are converted to points on the 2-D film, and then the pixels of the 2-D image are filled in with the appropriate colors. That's why the "project" operation is the relevant one for most rendering. The "extrude" operation is generally used only for projective texturing.
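In its simplest pinhole form (a toy sketch of my own, not Panda's actual code), the "project" step is just a perspective divide:

```python
import numpy as np

def project_vertex(v, focal=1.0):
    """Project a 3-D point onto the 2-D film of a pinhole camera
    via the perspective divide: (x, y, z) -> (f*x/z, f*y/z)."""
    x, y, z = v
    return np.array([focal * x / z, focal * y / z])

# Every 3-D point lands on exactly one film point, and points on the same
# camera ray land on the same one -- which is what scanline rendering relies on.
assert np.allclose(project_vertex([1.0, 2.0, 4.0]),
                   project_vertex([2.0, 4.0, 8.0]))
```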

You might be thinking of ray-tracing, which is a form of rendering in which the points on the 2-D film are one-by-one extruded into the 3-D world, and then the appropriate color is computed for each pixel based on where the ray hits the scene in the world. Ray-tracing is extremely computationally intensive and is almost always reserved for non-real-time operations. Panda renders via scanline rendering, not ray-tracing.
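A minimal sketch of that per-pixel extrude-and-shade loop (the pinhole model and all names here are illustrative, not Panda internals):

```python
import numpy as np

def extrude_pixel(px, py, width, height, focal=1.0):
    """Extrude a pixel into a unit ray direction from a pinhole camera at the origin."""
    fx = 2.0 * (px + 0.5) / width - 1.0    # film x in [-1, 1]
    fy = 1.0 - 2.0 * (py + 0.5) / height   # film y in [-1, 1], image up = world up
    d = np.array([fx, fy, focal])
    return d / np.linalg.norm(d)

def shade(direction, wall_z=5.0):
    """Color the ray by where it hits a wall at z == wall_z (a checkerboard)."""
    t = wall_z / direction[2]              # ray origin is (0, 0, 0)
    hx, hy = t * direction[0], t * direction[1]
    return 255 if (int(np.floor(hx)) + int(np.floor(hy))) % 2 == 0 else 0

# One ray per pixel of a tiny 4x4 image -- this per-pixel loop is exactly
# what makes ray-tracing so expensive at real resolutions.
image = np.array([[shade(extrude_pixel(x, y, 4, 4)) for x in range(4)]
                  for y in range(4)], dtype=np.uint8)
```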

If you want to render with your custom lens, you need to define it such that each 3-D ray projects to only a single point on the film.

Whether you can use projective texturing as you describe depends, I suppose, on precisely what effect you are trying to achieve. In principle, what you describe is sensible, but it is limited in its utility.


Thanks a lot! Yes, what I know of 3D graphics is from ray-tracing and I was kind of hoping I could get away with that.

Actually, I'm thinking I can do the texture mapping outside of Panda: I've already got the code for doing it with NumPy, and the rest of my project relies heavily on Python and NumPy anyway. So, for now, I'll just try to render the scene in Panda, see how I can get it into an OpenCV image (i.e. a NumPy array), and then do the texture mapping.
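For that last step, the per-pixel remapping in NumPy can be plain fancy indexing. This is a sketch with made-up names (`map_y`/`map_x` would be precomputed once from my extrude-style lens mapping); OpenCV's `cv2.remap` does the same thing with interpolation:

```python
import numpy as np

def warp(image, map_y, map_x):
    """Nearest-neighbour remap: output[i, j] = image[map_y[i, j], map_x[i, j]].

    map_y/map_x hold, for every output pixel, the source pixel it samples;
    they would be precomputed once from the lens's film-to-world mapping.
    """
    return image[map_y, map_x]

# Tiny demo: flip a 2x2 "rendered" image horizontally via the mapping.
img = np.array([[[10], [20]],
                [[30], [40]]], dtype=np.uint8)   # shape (H, W, channels)
my, mx = np.meshgrid(np.arange(2), np.arange(2), indexing="ij")
flipped = warp(img, my, 1 - mx)
assert (flipped[:, :, 0] == [[20, 10], [40, 30]]).all()
```

The nice property of this approach is that the arbitrary, non-injective mapping only has to be evaluated once to build the lookup tables; the per-frame cost is just the indexing.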