Hi all,

I’m sure this is a question that has been asked in various ways before, but searching the forum archives hasn’t quite yielded what I want. Specifically, I’m using Panda3D to generate a bunch of rendered still images from 3d models. Each image contains one or more meshes. What I want for each image is a surface normal map from a single point of view.

What I mean by this is:

– fix a camera at some location relative to the scene

– for each pair of angles (phi, psi) in spherical coordinates, produce a ray L(phi, psi) originating at the camera and extending infinitely outward in direction (phi, psi).

– each such line either does not intersect any 3d mesh in the scene, or does so at a closest location

– produce a 3-component map V over the 2d space of angles such that

V(phi, psi) = (inf, 0, 0) if L(phi, psi) has no intersection with any 3d model

V(phi, psi) = (distance, normal_phi, normal_psi) otherwise

where distance = distance between the camera and the closest intersection along L(phi, psi), and

normal_{phi, psi} are the angular components of the normal to the mesh at the closest intersection.
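To make the mapping concrete, here is roughly the coordinate math I have in mind, in plain Python. The azimuth/elevation convention below is just one possible choice, and the function names are my own:

```python
import math

def direction(phi, psi):
    """Unit direction vector for azimuth phi and elevation psi (one convention)."""
    return (math.cos(psi) * math.cos(phi),
            math.cos(psi) * math.sin(phi),
            math.sin(psi))

def angles(v):
    """Inverse mapping: recover (phi, psi) from a unit vector, e.g. a surface normal."""
    x, y, z = v
    # clamp guards against tiny floating-point excursions outside [-1, 1]
    return (math.atan2(y, x), math.asin(max(-1.0, min(1.0, z))))
```

So each ray L(phi, psi) travels along direction(phi, psi) from the camera, and the normal at a hit point gets reduced to angles(normal).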

My initial thought on how to do this was to use a CollisionRay as the “from” node and some other, more general kind of collision solid wrapping the mesh surfaces in the scene as the “into” nodes. Is this the right approach? If so, what type of collision solid should I use for the meshes? I don’t care if the process runs fairly slowly.

Or is some other totally different approach (e.g. somehow directly doing things with the vertex data) the right way to go?

Thanks!