Object selection / picking using unique color IDs

Hello everyone,

I’m currently making a level editor, and selecting objects is the first thing I’m trying to implement. The Panda manual describes a way to get the object under the cursor through a CollisionTraverser (it can also be done via a physics engine).
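For reference, the manual’s approach boils down to something like this, if I understand it correctly (a sketch from memory, not verbatim manual code):

```python
from panda3d.core import (CollisionTraverser, CollisionHandlerQueue,
                          CollisionNode, CollisionRay, GeomNode)

# One-time setup: a ray parented to the camera, fed by the mouse position.
picker = CollisionTraverser()
picker_queue = CollisionHandlerQueue()
picker_node = CollisionNode('mouse-ray')
picker_node.setFromCollideMask(GeomNode.getDefaultCollideMask())
picker_ray = CollisionRay()
picker_node.addSolid(picker_ray)
picker_np = base.camera.attachNewNode(picker_node)
picker.addCollider(picker_np, picker_queue)

def object_under_mouse():
    if not base.mouseWatcherNode.hasMouse():
        return None
    mpos = base.mouseWatcherNode.getMouse()
    # Shoot the ray from the camera through the mouse position.
    picker_ray.setFromLens(base.camNode, mpos.getX(), mpos.getY())
    picker.traverse(base.render)
    if picker_queue.getNumEntries() == 0:
        return None
    picker_queue.sortEntries()  # nearest hit first
    return picker_queue.getEntry(0).getIntoNodePath()
```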

However, there is another technique (see gpwiki.org/index.php/OpenGL_Sele … _Color_IDs). It works by assigning every object a unique color, rendering the scene to an invisible buffer, and reading the color under the mouse cursor (often optimized by restricting rendering to the single pixel under the cursor).
The advantage of this method is that picking is based on what the user is actually seeing (e.g. it accounts for vertex-shader, stencil, and alpha-test effects).
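In Panda terms, I imagine the buffer setup and readback would look roughly like this (an untested sketch; `pick_tex`, `pick_buffer`, `pick_cam`, and `id_under_mouse` are just my names, and the objects would still have to be rendered into the buffer with their flat ID colors somehow):

```python
from panda3d.core import Texture, PNMImage

# An offscreen buffer whose texture is copied to RAM every frame
# (to_ram=True), plus a camera that follows the main one.
pick_tex = Texture()
pick_buffer = base.win.makeTextureBuffer('pick', 256, 256, pick_tex, True)
pick_cam = base.makeCamera(pick_buffer, lens=base.camLens)

def id_under_mouse():
    if not base.mouseWatcherNode.hasMouse() or not pick_tex.hasRamImage():
        return None
    mx, my = base.mouseWatcherNode.getMouse()
    img = PNMImage()
    pick_tex.store(img)
    # Map mouse coordinates from [-1, 1] to texels; PNMImage rows run
    # top to bottom, hence the flip on the y axis.
    x = int((mx + 1) * 0.5 * (img.getXSize() - 1))
    y = int((1 - my) * 0.5 * (img.getYSize() - 1))
    r, g, b = img.getXel(x, y)
    # Decode the 24-bit object ID packed into the RGB channels.
    return (int(round(r * 255)) | (int(round(g * 255)) << 8)
            | (int(round(b * 255)) << 16))
```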

Has anyone implemented such a thing? Does anyone know how this technique can best be implemented in Panda?

I’ve done something like that before. It’s not too difficult using standard Panda constructs as described in the manual. It’s considerably slower than using the standard physics-based detection, though, so you should use it only if you really do need the visual-based picking precision.

David

Thanks :slight_smile: Would it be possible to take a look at your implementation of the method?

Currently I see two ways to do it:
A) render scene with modified shaders
B) use original shaders, but render in two passes:

  • render everything (to fill the depth buffer), with the color buffer cleared to white
  • render everything with ZTest:EQUAL, ZWrite:Off, Blending:(SrcFactor=0, DestFactor=object-ID-color)

The second approach will produce the wrong color wherever more than one fragment passes the Z-test, but that should be a rare situation.
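If I read the Panda API correctly, the per-object render state for that second pass could be expressed roughly like so (untested; `id_color` is just an example value for object #1):

```python
from panda3d.core import (ColorBlendAttrib, DepthTestAttrib, DepthWriteAttrib,
                          RenderAttrib, RenderState, VBase4)

id_color = VBase4(1 / 255.0, 0, 0, 1)  # e.g. object #1
second_pass = RenderState.make(
    DepthTestAttrib.make(RenderAttrib.MEqual),     # ZTest: EQUAL
    DepthWriteAttrib.make(DepthWriteAttrib.MOff),  # ZWrite: off
    ColorBlendAttrib.make(ColorBlendAttrib.MAdd,
                          ColorBlendAttrib.OZero,           # SrcFactor = 0
                          ColorBlendAttrib.OConstantColor,  # DestFactor = ID color
                          id_color))
```

Since the color buffer starts out white, blending leaves dest * id_color = id_color wherever the fragment passes the depth test.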

My implementation is long gone; it was written many years ago for a project that is no longer alive. I did it by actually setting colors explicitly on the scene graph, because I didn’t need to preserve the normal scene view in that application; but you could also use per-camera state changes to do the same thing.
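Roughly, the per-camera approach would use the Camera’s tag states, something like this (just a sketch; `pick_cam` and `pickable_nodes` are placeholder names):

```python
from panda3d.core import (ColorAttrib, LightAttrib, TextureAttrib,
                          RenderState, VBase4)

# Nodes tagged with 'pick-id' get their state overridden, but only as
# seen by the picking camera; the main window renders normally.
pick_cam.node().setTagStateKey('pick-id')
for i, np in enumerate(pickable_nodes):
    np.setTag('pick-id', str(i))
    # Pack the index into the low 24 bits of an RGB color.
    color = VBase4((i & 0xff) / 255.0,
                   ((i >> 8) & 0xff) / 255.0,
                   ((i >> 16) & 0xff) / 255.0, 1)
    # Flat, unlit, untextured color (override 1) so the buffer holds
    # exact ID values.
    pick_cam.node().setTagState(str(i), RenderState.make(
        ColorAttrib.makeFlat(color),
        LightAttrib.makeAllOff(),
        TextureAttrib.makeOff(), 1))
```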

Your two strategies sound reasonable, but I’m not sure what the advantage of approach (B) would be. You’re still modifying the shader to achieve the second part of (B).

David

If the OpenGL/Direct3D pipeline diagrams don’t lie, then in the second pass of (B) I don’t need to care about shaders at all: the Z-test, Z-write, and blending are all performed after the fragment shader.

OpenGL: opentk.com/node/1342
Direct3D: xmission.com/~legalize/book/ … index.html

Ah, good point. I forgot that blending is separate from the fragment shader itself.

David

Hmm, perhaps in (B) I can just reuse the depth buffer left over from the previous frame, so (B) can become a one-pass method.

Also, the stencil test can be used so that only the first fragment passes. So it seems that (B) is the all-around better method.

Since my last post I have been looking for ways to implement (B) in Panda, and in the end I came to the conclusion that it would be too cumbersome. On second thought, I’ll need shader generation anyway, so (A) would be trivial to implement in that case.
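For the record, the shader for (A) can be almost trivial; something like this minimal GLSL pair, with the ID color passed in as a shader input (untested sketch; `id_color` and `pickable_nodes` are my names, and in a real editor the shader would presumably be applied only for the picking camera, e.g. through its tag states, rather than with a plain setShader):

```python
from panda3d.core import Shader, VBase4

# Vertex shader just transforms; fragment shader writes the flat ID color.
ID_SHADER = Shader.make(Shader.SL_GLSL,
    vertex="""#version 120
uniform mat4 p3d_ModelViewProjectionMatrix;
attribute vec4 p3d_Vertex;
void main() {
    gl_Position = p3d_ModelViewProjectionMatrix * p3d_Vertex;
}""",
    fragment="""#version 120
uniform vec4 id_color;
void main() {
    gl_FragColor = id_color;
}""")

for i, np in enumerate(pickable_nodes):
    np.setShader(ID_SHADER)
    # Same 24-bit packing as before: index -> RGB channels.
    np.setShaderInput('id_color', VBase4((i & 0xff) / 255.0,
                                         ((i >> 8) & 0xff) / 255.0,
                                         ((i >> 16) & 0xff) / 255.0, 1))
```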

…Hmm, a “color-ID buffer” also has uses beyond object picking, primarily in edge detection and non-photorealistic rendering.