In which top-down room does a point lie?

I’m working on a game in which certain levels are split up into “rooms”, and I want a means of detecting in what room (if any) a given point (e.g. the location of the player, or of a roaming enemy) lies.

Now, each “room” has a polygonal floor, which seems like a natural object to test against. Furthermore, the game has an orthographic top-down perspective, meaning that the z-axis can in theory be entirely ignored.

My first instinct is to use a vertical ray and a copy of the floor-model converted into collision-geometry. But while that would, I daresay, work well, I feel that under these specific constraints there might be a better way.

So, I come here then to ask: Does anyone have any suggestions for other approaches to detecting in which room a given point lies, under the setup described above?

I don’t see why a collision ray with appropriately labelled floor-colliders wouldn’t work well for this, especially if the ray test were performed discretely, i.e. every couple of frames or fractions of a second.

You could solve this analytically with distance calculations, but I imagine that that would be somewhat more convoluted than the classic trigger-volume/collision-detection approach.

Oh, it would work well, I daresay. And indeed, I’ll likely fall back on it should nothing better be suggested.

I just find myself thinking that, given the lack of three-dimensionality in the setup, there might be a way that’s better yet.

True. But I suppose that I’m wondering whether there isn’t a Panda-feature that might be of use here.

(Analogous to the ray-plane-intersection convenience feature that it has.)

After all, this particular scenario doesn’t technically require anything three-dimensional: it doesn’t require a ray-direction, or 3D distance calculations, and so on. It is, theoretically, just a point-in-polygon matter…
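(For what it’s worth, the classic 2D approach here would be something like the even-odd ray-casting rule — a rough sketch of my own, not tied to any Panda feature, which works even for non-convex room outlines:)

```python
# A hedged sketch of the even-odd (ray-casting) point-in-polygon test:
# cast a horizontal ray from the point and count how many polygon edges
# it crosses; an odd count means the point is inside. Works for
# arbitrary (including concave) simple polygons.
def point_in_polygon(px, py, vertices):
    """vertices: list of (x, y) tuples describing the polygon outline."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Does this edge straddle the horizontal line through the point?
        if (y1 > py) != (y2 > py):
            # X-coordinate at which the edge crosses that horizontal line.
            cross_x = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            # Count only crossings to the right of the point.
            if px < cross_x:
                inside = not inside
    return inside
```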

The algorithm isn’t too difficult; I can share my Python code if you want.

Thank you, that might be useful. :slight_smile:

Do you know how your implementation’s performance compares with the collision-based approach mentioned above? Given that it’s in Python, I worry that it might actually prove slower.

Here you go.

It assumes the polygon you check is convex; think of an umbrella, with the tips being the vertices. The algorithm checks whether the point is between two spokes, and whether the point is “below” the line between those tips.
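(The code itself isn’t reproduced here, but a common convex point-in-polygon check in the same spirit — my own reconstruction, not the poster’s actual implementation — exploits the same convexity assumption: the point is inside exactly when it lies on the same side of every edge.)

```python
# Hedged reconstruction of a convex point-in-polygon test. Requires the
# vertices to be given in a consistent winding order (all clockwise or
# all counter-clockwise), which is what convexity buys us.
def point_in_convex_polygon(px, py, vertices):
    n = len(vertices)
    sign = 0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # 2D cross product of the edge vector with the vector from the
        # edge start to the point; its sign says which side the point is on.
        cross = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                # The point is on different sides of two edges: outside.
                return False
    # Same side of every edge (or on an edge): inside.
    return True
```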

Collision may be more efficient; I’m not sure.

Ah, thank you.

Hmm… My shapes are more arbitrary than that — but they’re also meshes composed of quads and triangles, so the algorithm, or a simplified version, could be applied per quad/triangle.
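(Something along these lines, perhaps — a rough sketch of the per-triangle idea, with the `rooms` structure and names being illustrative assumptions, not the game’s actual code:)

```python
# Hedged sketch: treat each room's floor as a list of triangles and
# report the first room whose mesh contains the point. A quad would
# simply be split into two triangles beforehand.
def point_in_triangle(px, py, a, b, c):
    def cross(o, e):
        # 2D cross product of edge o->e with the vector o->point.
        return (e[0] - o[0]) * (py - o[1]) - (e[1] - o[1]) * (px - o[0])
    d1, d2, d3 = cross(a, b), cross(b, c), cross(c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    # Inside (or on an edge) when the signs never disagree.
    return not (has_neg and has_pos)

def find_room(px, py, rooms):
    """rooms: dict mapping a room name to a list of ((x,y),(x,y),(x,y))
    triangles. Returns the containing room's name, or None."""
    for name, triangles in rooms.items():
        if any(point_in_triangle(px, py, *tri) for tri in triangles):
            return name
    return None
```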

And the code does look like it should be pretty fast!

I intend to give it some thought.

Either way, however, thank you again! The help is appreciated! :slight_smile: