Room Detection: Is There a Better Way Than This?

In a project of mine I want to be able to detect in which “room” a given point lies.

To this end, each room has a (planar, or intended to be so) polygonal floor-collider, which defines its logical interior, and which is attached to a root-node specific to such floor-colliders.

My current approach when detecting a room, then, is to attach a ray to that root-node, have a collision-traverser traverse said node, have the results be placed into a CollisionHandlerQueue, and to then examine the first result.

And this works!

However, it’s common enough for me to want to run this detection multiple times per frame–it’s used by characters that may cross room-boundaries, by elements that may spawn objects into rooms, and even by one character in order to prevent their leaving a room. As a result, the traversals can add up a bit, I fear.

So, my question: is there a better way of doing this? A more efficient means of detecting whether a point lies within a planar space of fairly-arbitrary shape?

(I’ve considered having a single traversal handle multiple rays all at once–but while that might work for lists of objects known at the start of a frame (such as when finding the current rooms of extant characters), it’s more awkward when dealing with off-the-cuff detections, such as a character might use in targeting.)
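For what it’s worth, the geometric core of a single such test is just a point-in-polygon check, which can also be written directly on the CPU. A minimal even-odd (crossing-number) sketch in plain Python, using a hypothetical L-shaped room as example data:

```python
def point_in_polygon(px, py, vertices):
    """Even-odd (crossing-number) test: returns True if the point
    (px, py) lies inside the polygon given as a list of (x, y) vertices."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Consider only edges that a horizontal ray from (px, py) can cross.
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

# Hypothetical L-shaped (concave) room footprint:
room = [(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]
print(point_in_polygon(1, 3, room))  # True
print(point_in_polygon(3, 3, room))  # False
```

This handles concave footprints as well; whether a pure-Python version of it actually beats the engine’s C++ ray-test is, of course, a matter for profiling.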

I think if you only need to know whether a point is in a room, then you could use a trigger volume with enter and exit events, rather than a ray-collision against the floor surface.

By a “trigger”, I presume that you mean “a collision object for which an event has been set”. If so, then that would work for characters, but would seem less suitable for other purposes, such as AI.

For example, I have a character that has an attack that places a number of objects in the game-world. However, it’s supposed to confine these objects to its current room, to which end it first checks its target-points against that room. Such checking would seem to be awkward to do with a trigger-based approach–or so I would think.

Further, would that not simply be much the same as what I already have: a collision-based detection system?

I think the difference is that you wouldn’t need to iterate over the polygons every frame.

I also think you could change the design of the object-spawning. Any point in the air has to belong to some room, so why not just make sure that the spawn system marks each object with a tag holding the name of its room, and have the trigger’s enter and exit events update that tag?

At the same time, you would not be limited to a flat floor: the room could have any shape, for example a sphere.

But, as noted above, I can potentially do that with my current system–I would just traverse once for all rays, rather than once per ray.

However, either way doing so incurs problems for things like targeting, etc.

Spawned characters are already tagged with their current room (or rather, the ID of their current room), I believe.

However, the spawning would still have to detect in which room the spawning is taking place in order to so tag them.

Or, in the case of the object-placing character mentioned above, whether their intended spawn-points are still within the current room.

That can still be done with rays, I daresay: one would just alter the direction and origin of the rays as appropriate. That said, such floors would be superfluous with the project in question.

I think I understand now; it’s actually a different problem: you need to determine the boundaries of the room. I think you could use ray-casting here, first casting forward from the character, and then to four or more sides, depending on the effect you want.

Surely that would be less efficient than my current approach, which uses only one ray-cast?

To clarify: What I have right now works. However, when I do performance testing with PStats, I see (what seems to me to be) quite a few collision tests, and am wondering whether there might not be a more efficient way.

I didn’t say that you need to use multiple rays; I was talking about the directions in which to cast. Apart from your idea with a ray, which is already working, it is difficult to come up with something better.

That’s fair. But it would still come down to multiple ray-casts, rather than a single one.

Also fair! Thank you nevertheless for your suggestions–they are appreciated! 🙂

You can also create a “floor plan” image of your world, where each room is drawn with its own colour. Then, you only need to sample the pixel corresponding to the object’s position, and map the colour back to the room ID.

You could write a script to use render-to-texture with a topdown ortho camera to generate this image.

However, it gets more difficult with multiple floors, of course. Then you need one such image per floor.
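The lookup itself might be sketched like this–with a hand-written stand-in for the baked image, and made-up world extents and room names:

```python
# A hypothetical baked "floor plan": each pixel holds a colour, and each
# colour maps back to a room ID. (In practice this grid would be rendered
# offline, e.g. with a top-down ortho camera and render-to-texture.)
RED, GREEN, BLACK = (255, 0, 0), (0, 255, 0), (0, 0, 0)
floor_plan = [
    [RED,   RED,   GREEN, GREEN],
    [RED,   RED,   GREEN, GREEN],
    [BLACK, BLACK, GREEN, GREEN],
]
COLOUR_TO_ROOM = {RED: "hall", GREEN: "kitchen"}

# World-space extents covered by the image (assumed values).
WORLD_MIN_X, WORLD_MIN_Y = 0.0, 0.0
WORLD_MAX_X, WORLD_MAX_Y = 4.0, 3.0

def room_at(x, y):
    """Map a world position to a pixel and look up its room ID (or None)."""
    cols, rows = len(floor_plan[0]), len(floor_plan)
    px = int((x - WORLD_MIN_X) / (WORLD_MAX_X - WORLD_MIN_X) * cols)
    py = int((y - WORLD_MIN_Y) / (WORLD_MAX_Y - WORLD_MIN_Y) * rows)
    if not (0 <= px < cols and 0 <= py < rows):
        return None
    return COLOUR_TO_ROOM.get(floor_plan[py][px])

print(room_at(0.5, 0.5))  # "hall"
print(room_at(3.5, 2.5))  # "kitchen"
```

Each query is then a constant-time array access plus a dictionary lookup, independent of room-polygon complexity.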

You said that your rooms have arbitrary shapes; however, if a room is convex, then you can simply check the sign of a dot product for each polygon. For this purpose, it is better to use a simplified geometry that follows the shape of the room. If the room is concave, this method won’t work directly, but you can always break it into several convex parts and combine the per-part dot-product checks.
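That half-plane test might look something like this in Python, with a hypothetical unit-square room described as (point-on-wall, inward-normal) pairs:

```python
def inside_convex(point, walls):
    """Half-plane test: 'walls' is a list of (point_on_wall, inward_normal)
    pairs; the point is inside iff it lies on the inner side of every wall."""
    px, py = point
    return all((px - wx) * nx + (py - wy) * ny >= 0
               for (wx, wy), (nx, ny) in walls)

# Hypothetical unit-square room:
square = [((0, 0), (0, 1)),   # bottom wall, normal points up (inward)
          ((0, 1), (0, -1)),  # top wall, normal points down
          ((0, 0), (1, 0)),   # left wall, normal points right
          ((1, 0), (-1, 0))]  # right wall, normal points left

print(inside_convex((0.5, 0.5), square))  # True
print(inside_convex((1.5, 0.5), square))  # False

# A concave room, split into convex parts, is then tested piecewise:
def inside_concave(point, parts):
    return any(inside_convex(point, walls) for walls in parts)
```

The same test extends to 3D by using planes and three-component normals instead of 2D walls.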

Hmm… Multiple floors aren’t a problem in my case.

But would reading from texture in Python not be somewhat slow…?

Hmm… That is tempting–I’d likely just iterate over the triangles and quads that make up my current floor-geometry, as said geometry is fairly simple.

However, intuitively, I would be inclined to expect such a thing implemented in Python to be slower than ray-casting implemented (I presume) in C++…

Again, ray-casting will test all of the geometry under the root node.

With the dot-product approach, you can describe the walls of a room with a simple list of vector pairs. It is worth noting that the basic subtraction and dot-product operations would still be done by methods that are implemented in C++.

But I have these things on a separate root-node that contains only the floor-colliders (and the ray, of course); on top of which, I would imagine that the collision-system would apply some sort of broad-phase culling before entering into the actual collision-testing.

Indeed, but I’d still be performing the iteration and function-calling in Python–along the lines of “for each room, test the broad bounding rectangle; then, for each triangle/quad in that room, and for each line within that triangle/quad, call these functions…”.

Depending on the size of your map and the resolution you need you could just use a 2d numpy array?
Numpy is very fast (and if you need to know how fast exactly, making a profiling test is a matter of minutes)… I think it’s very likely it will be fast enough for everything you need. If you do need to make many lookups per frame, you might even be able to parallelize the lookup, i.e. make all lookups “at once”.
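A minimal sketch of that idea, assuming a pre-baked room-ID grid (the grid values and cell size below are invented):

```python
import numpy as np

# Hypothetical room-ID grid: 0 = no room, 1 and 2 = room IDs.
# One cell per world unit here; finer resolution just means a bigger array.
room_grid = np.array([[1, 1, 2, 2],
                      [1, 1, 2, 2],
                      [0, 0, 2, 2]])

def rooms_at(xs, ys, cell_size=1.0):
    """Vectorised lookup: map many world positions to room IDs at once."""
    cols = np.clip((np.asarray(xs) / cell_size).astype(int),
                   0, room_grid.shape[1] - 1)
    rows = np.clip((np.asarray(ys) / cell_size).astype(int),
                   0, room_grid.shape[0] - 1)
    return room_grid[rows, cols]

print(rooms_at([0.5, 3.5, 0.5], [0.5, 2.5, 2.5]))  # [1 2 0]
```

Because the indexing is a single vectorised operation, looking up a hundred points costs little more than looking up one.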

Hmm… I’ve never used numpy, I’ll confess! But, if it is as fast as you say, then perhaps it’s worth thinking on.

I will say that I would probably want a large map at pretty decent resolution, I do think. (The levels aren’t tile-based, and rooms may connect at arbitrary angles, so I would want decent resolution to allow for something approximating those angled connections. And the levels are rather larger than the interfaces between their rooms, thus seeming to call for a large map at the least.)

I was thinking about it; you could also create an image for each room, something like a mask, and then associate the coordinates of the room’s origin with the texture. Minimaps–the small maps shown in the corner of the screen in games–are made on this principle.

I think reading a pre-generated texture will not be slow. You could even make a tile system, as is done for terrain, and load the necessary tiles as needed.

Hmm… More to think about, thank you.

I think at this point you’re probably better off just using a quadtree or a k-d tree. I think the one in scipy.spatial is probably implemented in C and relatively easy to use…
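One possible way to apply scipy.spatial here: pre-sample each room’s floor with labelled points, build a cKDTree over them, and answer queries by nearest neighbour. The sample points below are invented, and the lookup is only approximate near room boundaries:

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical pre-sampled floor points, each labelled with a room ID.
# In practice these would be sampled from the real floor geometry.
samples = np.array([[0.5, 0.5], [1.5, 0.5],   # room 1
                    [2.5, 0.5], [3.5, 0.5]])  # room 2
room_ids = np.array([1, 1, 2, 2])

tree = cKDTree(samples)

def room_of(point):
    """Nearest-neighbour lookup; approximate near room boundaries."""
    _, index = tree.query(point)
    return room_ids[index]

print(room_of([0.6, 0.4]))  # 1
print(room_of([3.2, 0.5]))  # 2
```

`tree.query` also accepts an array of points, so many lookups can be batched into one C-level call.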

That… Does seem as though it should work. It would essentially be a matter of searching through the tree for the lowest cell that contains the given point, wouldn’t it?

But I wonder how that compares to Panda’s collision system performing a ray-test against polygons–that surely has some sort of broad-phase culling, after all, and I doubt that the test itself is slow.