How would I go about getting various lower-level info about a CollisionEntry “into” a GeomNode? The CollisionEntry gives me the world coordinates and normal of the collision, but I need to know the triangle that it hit, and its vertices, texture coordinates, etc. Is this possible?
I can take the getSurfacePoint() of the collision, convert it to local coordinates, and then iterate over all of the vertices (using a GeomVertexReader) to find it, but that is very slow for large models, and it seems to me that the collision system is already doing that at some level.
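For reference, here is a minimal, framework-agnostic sketch of that brute-force search: given triangle data already pulled out with a GeomVertexReader, find the triangle containing the local-space surface point via barycentric coordinates and interpolate its texture coordinates. The function and data-structure names are illustrative, not Panda3D API.

```python
# Brute-force lookup of the triangle containing a collision point,
# plus barycentric interpolation of its texture coordinates.
# `triangles` would be built by walking the Geom with a GeomVertexReader.

def barycentric(p, a, b, c):
    """Barycentric coordinates of 3D point p in triangle (a, b, c)."""
    def sub(u, v): return tuple(ui - vi for ui, vi in zip(u, v))
    def dot(u, v): return sum(ui * vi for ui, vi in zip(u, v))
    v0, v1, v2 = sub(b, a), sub(c, a), sub(p, a)
    d00, d01, d11 = dot(v0, v0), dot(v0, v1), dot(v1, v1)
    d20, d21 = dot(v2, v0), dot(v2, v1)
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return 1.0 - v - w, v, w

def find_hit_triangle(point, triangles, eps=1e-5):
    """triangles: list of ((pos, uv), (pos, uv), (pos, uv)) tuples.
    Returns (triangle_index, interpolated_uv), or (None, None).
    Assumes `point` lies on (or very near) the mesh surface."""
    for i, tri in enumerate(triangles):
        (pa, ta), (pb, tb), (pc, tc) = tri
        u, v, w = barycentric(point, pa, pb, pc)
        if u >= -eps and v >= -eps and w >= -eps:
            uv = (u * ta[0] + v * tb[0] + w * tc[0],
                  u * ta[1] + v * tb[1] + w * tc[1])
            return i, uv
    return None, None
```

As noted, this is O(triangles) per collision, which is exactly why it gets slow on large models.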
As far as I remember, Panda supports named collision nodes. In theory, you could store whatever data you need in the name (it can only be written when exporting from the editor) and extract it as a string when a collision occurs. Your question is slightly different, but that’s the workaround I would try.
Hi, welcome to the forums!
If I could wager a guess, are you trying to do some kind of texture editing with the mouse? If so, the technique that is recommended for this is to create a shader that renders the UV coordinates to the red and green channel of a floating-point buffer or separate render target, and then you can sample that texture at the coordinates of the mouse to extract the UV coordinates at that mouse position.
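Once the UV-encoding shader has rendered into the buffer, the lookup step is just a coordinate conversion and a texel read. A minimal sketch, assuming the floating-point buffer has been read back to CPU memory as a row-major list of (r, g) pairs; the [-1, 1] mouse-coordinate convention matches Panda3D’s MouseWatcher, but the function itself is illustrative:

```python
def sample_uv(buffer, width, height, mouse_x, mouse_y):
    """Look up the (u, v) encoded in the red/green channels at the mouse
    position.  mouse_x/mouse_y are in [-1, 1] (MouseWatcher convention);
    buffer is a row-major list of (r, g) floats, one per pixel."""
    # Map [-1, 1] to pixel indices, clamped to the buffer bounds.
    px = min(width - 1, max(0, int((mouse_x * 0.5 + 0.5) * width)))
    py = min(height - 1, max(0, int((mouse_y * 0.5 + 0.5) * height)))
    return buffer[py * width + px]  # (u, v) at that pixel
```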
By changing the shader to write different things to the channels, you can make this work with almost any part of the mesh information.
Hello, thank you.
Something like that. I’m basically just shooting rays along the camera’s forward vector (FPS-style I guess) through one or more static models. For each geometry collision (entry and exit, there could be 0-dozens) I want to color the area of the model around that point based on the ray’s current properties; the properties will change as the ray passes through the models.
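To make the “properties change as the ray passes through” idea concrete, here is one way to process the sorted hit distances as alternating entry/exit pairs. The exponential attenuation is purely a placeholder for whatever the ray actually does; only the entry/exit bookkeeping is the point of the sketch:

```python
import math

def attenuate_along_hits(hit_distances, initial=1.0, absorption=0.5):
    """Treats sorted hit distances as alternating entry/exit points and
    attenuates a ray property exponentially over each interior segment.
    The attenuation model is illustrative; substitute the real update.
    Returns the property value at each hit point, in order."""
    values = []
    value = inside = False
    value = initial
    prev = 0.0
    for d in sorted(hit_distances):
        if inside:
            # Leaving the model: decay over the interior segment length.
            value *= math.exp(-absorption * (d - prev))
        values.append(value)
        inside = not inside
        prev = d
    return values
```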
I have the ray shooting working to my liking now using panda’s Collision* classes, and I’m now at the stage of figuring out a way to visualize it all…
It’s been a while since I wrote a shader, so I’ll have to brush up on that.
I may be misunderstanding your suggestion, but since in my case there are no “mouse coordinates” to sample at, and I need more than one output texcoord, would I need to render once per collision with this shader enabled, passing in the collision coords?
Ah, okay. It’s even slightly easier without mouse coordinates, since you’re always extruding the ray from the (0, 0) point of the camera.
However, this is only one way to do it. If you already have it working using the collision system, it may be worth seeing if you can get it to work with that method. Hard for me to recommend one method or the other without knowing more about the kind of visualisation you are going for.
If you want to, say, colorize the model within a certain radius of the collision, you could simply spawn a sphere model at the collision point, using the CollisionEntry surface point, and have this sphere model affect the framebuffer in some way via a ColorBlendAttrib.
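A CPU-side variant of the same idea, for comparison: tint the vertex colours within a radius of the CollisionEntry surface point, fading linearly to zero at the edge. This is a plain-Python sketch with illustrative names; in Panda3D the colours would actually be rewritten through a GeomVertexWriter (or, as suggested above, done entirely in the framebuffer with a sphere and a ColorBlendAttrib):

```python
def tint_within_radius(vertices, center, radius, tint=(1.0, 0.0, 0.0)):
    """Blend `tint` into per-vertex colours, with full strength at
    `center` and a linear falloff to zero at `radius`.
    vertices: list of (position, rgb) tuples; returns a new list."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    out = []
    for pos, rgb in vertices:
        t = max(0.0, 1.0 - dist(pos, center) / radius)  # 1 at center, 0 at edge
        out.append((pos, tuple(c * (1 - t) + tc * t
                               for c, tc in zip(rgb, tint))))
    return out
```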
Should the effects of the ray be persistent, or is it only a temporary effect where the collision occurs? If it is only temporary, you may be able to just do the ray test in the vertex shader and colour the model appropriately on the fly.
I don’t even know what kind of visualization I’m going for; I’m just exploring different ideas at the moment.
I’m already spawning vertical lines that extend above and below the model at the points of intersection now. That was the first quick thing I came up with…
I also considered doing it as a point cloud.
It is a persistent effect, and I probably need to do other things with the collisions other than just visualize them (reporting, statistics, etc) so I can’t do it all in the shader and have it just be a visual effect.
Hmm… I seem to recall that the Bullet physics engine reports triangle-data in its ray-test results–although I don’t know how useful that data will be. Perhaps it’s worth looking into switching to that system in place of the built-in collision system?
Ok thanks. I’ll add that to my list of things to learn/research.