Oldschool 2D "collisions" - Renderstates? [SOLVED]

I’m currently working on a situation where there are 50 actors (more in the future :confused: ) that I need to get “collision” info on with the mouse cursor. I require roughly pixel-level accuracy, but only as far as knowing which actor is under the cursor; I don’t need more precise info such as the collision location in world space. Using actual ray collisions therefore seems needlessly heavy.

So I started envisioning a separate render buffer where each actor is rendered in a different (flat) color. The end goal is of course to test the pixel color value at the cursor’s location in that buffer and map it back to an actor. Very oldschool, but it seemed good enough for me. But I hit a snag in creating this kind of “alternate” flat-color rendering.
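Just to make the readback half of the plan concrete, something like this is what I was picturing (a rough sketch; pickBuffer, pickTex and colorUnderCursor are placeholder names):

from panda3d.core import Texture, PNMImage

# offscreen buffer that keeps a RAM copy of every rendered frame
pickTex = Texture()
pickBuffer = base.win.makeTextureBuffer("pickbuffer", 256, 256, pickTex, True)
pickCam = base.makeCamera(pickBuffer)  # second camera rendering the same scene; needs to mirror base.cam's lens and position

def colorUnderCursor():
    if not base.mouseWatcherNode.hasMouse():
        return None
    img = PNMImage()
    pickTex.store(img)  # copy the texture's RAM image into a PNMImage
    mpos = base.mouseWatcherNode.getMouse()  # cursor position in the -1..1 range
    x = int((mpos.getX() + 1.0) * 0.5 * (img.getXSize() - 1))
    y = int((1.0 - (mpos.getY() + 1.0) * 0.5) * (img.getYSize() - 1))  # PNMImage rows run top-down
    return img.getXel(x, y)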

The main problem is that I need to be able to use different vertex colors in the onscreen rendering, so I can’t just do a simplified buffer render that reuses those same vertex colors. I tried tinkering with render states, but I couldn’t get multiple render state changes to work, and I’m not sure they’re a great idea performance-wise either. The snippet below is part of a loop where I tried to assign a different state to each actor in an array and then “activate” that state type for the camera. Unfortunately only the last state assignment sticks, and only on the last actor processed.

# inside a loop over each actor index p
self.pandas[-1].setTag("Pandashade"+str(p), "True")
base.cam.node().setTagStateKey("Pandashade"+str(p))
base.cam.node().setTagState("True", pandastates[-1].getState())

Any thoughts on the subject are welcome. :unamused:

base.cam.node().setTagStateKey("Pandashade"+str(p))

Shouldn’t you be setting this tag on the p’th buffer camera, not on base.cam?

Other than that, your idea seems sound. Note, however, that it can be very slow to extract the contents of the rendered offscreen buffer from the graphics card onto the CPU for analysis of the pixel color. Many times slower, in fact, than computing a collision ray.
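For comparison, the picking-ray setup from the manual’s “Clicking on 3D Objects” page looks roughly like this (a sketch; actorUnderMouse is just an illustrative name):

from panda3d.core import CollisionTraverser, CollisionHandlerQueue, CollisionNode, CollisionRay, GeomNode

picker = CollisionTraverser()
queue = CollisionHandlerQueue()
pickerNode = CollisionNode("mouseRay")
pickerNode.setFromCollideMask(GeomNode.getDefaultCollideMask())  # collide with visible geometry
pickerRay = CollisionRay()
pickerNode.addSolid(pickerRay)
pickerNP = base.camera.attachNewNode(pickerNode)
picker.addCollider(pickerNP, queue)

def actorUnderMouse():
    if not base.mouseWatcherNode.hasMouse():
        return None
    mpos = base.mouseWatcherNode.getMouse()
    pickerRay.setFromLens(base.camNode, mpos.getX(), mpos.getY())  # aim the ray through the cursor
    picker.traverse(render)
    if queue.getNumEntries() == 0:
        return None
    queue.sortEntries()  # nearest hit first
    return queue.getEntry(0).getIntoNodePath()

How “pixel accurate” that is depends on how closely the collision geometry matches what is actually drawn.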

David

No, I just used base.cam as a testing camera for actually seeing the changed states.

But I now realize why my idea will probably never work quite as I desire: cameras can have only one setTagState. Therefore I would need P cameras and renders and would somehow have to combine them all into a single image on which to make my pixel test!

Perhaps next I’ll try using the same vertex colors as in the regular rendering after all.

Regarding efficiency, I would probably have to resort to doing lots of bitmask collisions per frame if I used a real ray-based approach. I can’t imagine that being faster than rendering one flat-color image and reading a single pixel’s color values from it.

You can have P color states on a single camera. That’s what the tag value is for:

base.cam.node().setTagStateKey("Pandashade")
for i in range(len(self.pandas)):
  self.pandas[i].setTag("Pandashade", str(i))                     # tag value = actor index
  base.cam.node().setTagState(str(i), pandastates[i].getState())  # one RenderState per tag value
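Here pandastates could be, for instance, a list of dummy NodePaths whose accumulated attributes supply the flat colors; a sketch that encodes each actor’s index in the red channel:

from panda3d.core import NodePath, VBase4

pandastates = []
for i in range(len(self.pandas)):
  dummy = NodePath("flatcolor-" + str(i))
  dummy.setColor(VBase4((i + 1) / 255.0, 0.0, 0.0, 1.0), 1)  # override vertex colors with a flat color
  dummy.setTextureOff(1)
  dummy.setLightOff(1)
  pandastates.append(dummy)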

As to performance, you’d be surprised. The graphics pipeline is highly optimized for sending pixels to the framebuffer. It is extremely unoptimized for getting pixels back. You can do a lot of ray tests in the same amount of time it takes to read back the result of one frame’s render.

David

I will now wholeheartedly agree with you on the performance issue. Even though I never got as far as properly testing the pixel values, the performance impact of the secondary render-to-image was big; I’d say it must have slowed things down by 50% compared to using the ray code from https://www.panda3d.org/manual/index.php/Clicking_on_3D_Objects

There’s a bright side though: at least I finally understood the state changing (the manual’s example with the boolean values fooled me a bit). And maybe someone else can learn something from this…somewhere, some day. Thanks David 8)