I’ve seen some discussion on the forum (namely here) about GL selection, which made me speed up the release of a piece of code that drives selecting stuff in my editor. I also followed some advice by Epihaius in the linked thread regarding the single-pixel solution. I’ve been planning this kind of optimization for a long time, but his post made the pieces fall together.
The difference between my approach and his is that I actually use shaders to drive the entire thing, so I don’t need to write any special data into vertices. Instead, I just provide the selection color as shader input. The added benefit is that you can select through semi-transparent surfaces, if you want to.
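The core trick can be sketched in plain Python (the names here are mine and purely illustrative, not the actual editor code): each pickable object gets a unique ID, that ID is packed into an RGB color which is handed to the object's shader as an input, and reading back the color of the pixel under the mouse recovers the ID.

```python
def id_to_color(obj_id):
    """Pack a 24-bit object ID into an (r, g, b) tuple of 0-255 components."""
    return ((obj_id >> 16) & 0xFF, (obj_id >> 8) & 0xFF, obj_id & 0xFF)

def color_to_id(color):
    """Recover the object ID from the picked pixel's (r, g, b) color."""
    r, g, b = color
    return (r << 16) | (g << 8) | b

# In Panda3D, the color would be passed per object with something like
# nodepath.set_shader_input("selection_color", ...), and the picking
# fragment shader would output it instead of the regular shaded color.
```

Since the ID lives in a shader input rather than in vertex data, nothing about the geometry itself needs to change.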
Obviously, the whole thing is perfectly compatible with Panda’s shader generator.
Here’s the code along with a simple demo.
Hey, this is great! Thank you very much for releasing this!
Glad I could help! This is the kind of community effort that I really enjoy: someone posts an idea, another person turns it into usable code, someone else gets inspired by it and improves upon it, and so on. And that’s why I like this place.
Well I’m not very shader-savvy, heheh, so it is definitely interesting to see how you do this. Thanks again!
On a related note:
Now if only we could come up with an equally efficient way of region-selecting multiple objects with the same kind of precision, that would be awesome. Yes, I know about this thread, but none of those solutions are ideal for object selection in editors, it seems.
I do have an idea, but it will most likely fail epically because it involves rendering every single object separately… yeah, lol.
In a nutshell, rectangular selection would go something like this:
- create a selection camera, whose display region area is set to the selection rectangle;
- also set the corners of the frustum to those of the rectangle and use FC_shear;
- render each object separately (using setTagStateKey()); (<- deal-breaker)
- check if object is in rectangle using… PNMImage.getAverageGray()!
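The first two steps amount to mapping the pixel-space selection rectangle into the normalized coordinates that a display region (and the sheared frustum corners) would use. A rough, engine-free sketch of that mapping, with illustrative names and assuming y = 0 at the top of the window for mouse coordinates:

```python
def rect_to_region(x0, y0, x1, y1, win_w, win_h):
    """Convert a pixel-space selection rectangle (corners in any order,
    y measured downward from the top) to normalized
    (left, right, bottom, top) fractions in [0, 1], with y = 0 at the
    bottom, which is how display region dimensions are expressed."""
    left, right = sorted((x0, x1))
    bottom, top = sorted((win_h - y0, win_h - y1))
    return (left / win_w, right / win_w, bottom / win_h, top / win_h)
```

The same four fractions could then feed both the display region and the frustum corners, so the selection camera renders exactly the dragged rectangle.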
That last step seems quite efficient, because the result of PNMImage.getAverageGray() immediately tells you whether the object is even partially inside the rectangle (the returned value differs from the clear color) or not at all.
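The logic of that check can be emulated without Panda3D; the following is a stand-in for PNMImage.getAverageGray(), assuming grayscale values in [0, 1] and a black clear color (both assumptions are mine):

```python
CLEAR_COLOR = 0.0  # assumed clear value of the selection buffer (black)
EPSILON = 1e-6     # tolerance for the float comparison

def object_in_rect(pixels):
    """pixels: flat list of grayscale values from the tiny selection render.
    If any fragment of the object landed inside the rectangle, the average
    deviates from the clear color."""
    average = sum(pixels) / len(pixels)
    return abs(average - CLEAR_COLOR) > EPSILON
```

Note that a single lit pixel is enough to push the average off the clear color, which is exactly the "even partially inside" behavior described above.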
Is there some way to get the rendering data of a particular object (from a single rendering pass) into a texture (via a shader, maybe)? Then there might still be some hope for this approach.
Anyway, sorry for going a bit off-topic.