Getting U,V coordinates from collision?

Got a question I couldn’t find the answer to in the docs or forum archive here.

The picking algorithm, which lets the user select an object by clicking on it, is pretty straightforward. What I would like to do is figure out the effective texture coordinates of the point on that object that the user clicked on. This would allow me to locate the spot on the texture file that corresponds to the point under the user's cursor and modify it. The aim is to make a three-dimensional paint program that lets a user paint a texture onto a model, although this could also be used for adding bullet holes or scuff marks to objects in real time.

The picker code and collision system can tell me which object the user clicked on and give me all kinds of information about the three-dimensional position of the collision point. I think what I need is a way to identify the U,V coordinates of an arbitrary point on an object's surface. This is easy to do for a simple shape like a sphere or box, but I'd like to do it for complex geometry. Anyone have any ideas?
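One general approach (not Panda-specific, so treat this as a hedged sketch rather than a built-in API) is to collide the picking ray against the visible geometry, take the collision entry's surface point in the hit node's coordinate space, locate the triangle that contains that point, and interpolate that triangle's vertex UVs with barycentric weights. The sketch below assumes `entry` is the CollisionEntry returned by your picker, that the into node is a GeomNode, and that its vertex data has the default 'texcoord' column; the function names `uv_at_collision` and `barycentric` are made up for illustration.

```python
from panda3d.core import GeomVertexReader

def uv_at_collision(entry):
    into_np = entry.getIntoNodePath()
    geom_node = into_np.node()              # assumes a GeomNode was hit
    hit = entry.getSurfacePoint(into_np)    # hit point in the geom's own space

    for gi in range(geom_node.getNumGeoms()):
        geom = geom_node.getGeom(gi)
        vdata = geom.getVertexData()
        vertex_reader = GeomVertexReader(vdata, 'vertex')
        uv_reader = GeomVertexReader(vdata, 'texcoord')

        for pi in range(geom.getNumPrimitives()):
            prim = geom.getPrimitive(pi).decompose()   # force plain triangles
            for t in range(prim.getNumPrimitives()):
                s, e = prim.getPrimitiveStart(t), prim.getPrimitiveEnd(t)
                idx = [prim.getVertex(i) for i in range(s, e)]
                pts, uvs = [], []
                for vi in idx:
                    vertex_reader.setRow(vi)
                    uv_reader.setRow(vi)
                    pts.append(vertex_reader.getData3())
                    uvs.append(uv_reader.getData2())

                bary = barycentric(hit, pts[0], pts[1], pts[2])
                if bary is not None:
                    u, v, w = bary
                    return (u * uvs[0][0] + v * uvs[1][0] + w * uvs[2][0],
                            u * uvs[0][1] + v * uvs[1][1] + w * uvs[2][1])
    return None

def barycentric(p, a, b, c, tol=1e-4):
    # Standard barycentric coordinates of p in triangle abc;
    # returns None if p lies outside the triangle (within a small tolerance).
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0.dot(v0), v0.dot(v1), v1.dot(v1)
    d20, d21 = v2.dot(v0), v2.dot(v1)
    denom = d00 * d11 - d01 * d01
    if abs(denom) < 1e-12:
        return None
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    u = 1.0 - v - w
    if u < -tol or v < -tol or w < -tol:
        return None
    return u, v, w
```

Scanning every triangle on each click is brute force; for a real paint tool you would probably cache the triangle/UV data per Geom or use a spatial structure, but the interpolation step stays the same.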

Have you seen this thread?

David

I had not. Thank you.

I've never done this, and I may be interpreting the technique wrong, but you can use DCS tags in the .egg files to set up areas on geometry for things… in this case a start point.

<Group> start_point { 
  <DCS> { net } 
  <Transform> { 
    <Matrix4> { 
      1 0 0 0 
      0 1 0 0 
      0 0 1 0 
      0.0 0.0 1.7 1 
    } 
    <Translate> { 0 0 0 } 
  } 
} 

Then you could use the find("**/start_point") method in Panda to get that particular area… I don't know how to set up selection areas in your modeler and then apply that DCS tag to them. Don't know if that would allow you to "capture" areas though.
Also, I apologize if the syntax above is incorrect.
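For reference, a minimal sketch of reading that tagged group back in Python, assuming the egg above was saved as "my_model.egg" (the file name is just an example):

```python
# Load the model and look up the <DCS>-tagged group by name.
model = loader.loadModel("my_model.egg")
model.reparentTo(render)

start_point = model.find("**/start_point")
if not start_point.isEmpty():
    # The <DCS> entry preserves this node in the scene graph,
    # so its transform can be read back at runtime.
    print(start_point.getPos(render))
```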