What might be an efficient way to trace geometry?

I’m working on an editor to automate a few things when setting up objects for my current project.
Especially when setting up damage zones for relatively complex shapes, it's supposed to reduce the workload.

For my filling operation I step a collider through a grid over the object's model and save the collision coordinates.
It works pretty accurately, but some objects can end up with relatively large coordinate systems and polygon counts.
In most cases I'll be able to work on something else while the process is running, but I thought I'd ask whether there are ways to optimise this operation without losing (too much) accuracy, just in case.

Here is my current code of that process for reference. I put it into a task, so I can still abort the process without closing the entire editor.

def taskPainterFill(self, solid, maxPos, iterations):
    for i in range(iterations):  # currently 50 iterations
        pos = solid.getPos()
        self.raster.traverse(render)
        if self.queueRaster.getNumEntries() > 0:
            #self.queueRaster.sortEntries()

            # Translate the point of collision into the current coordinate system
            cpos = self.queueRaster.getEntry(0).getSurfacePoint(self.dictModels[self.currentMod][3])
            cpos = (int(cpos[0]), int(cpos[1]), int(cpos[2]))

            # Save the point with its associated subsystem in a temporary dictionary
            self.RecordSub[str(cpos)] = [self.currentSub, cpos]

        # Check grid bounds and set the new position
        if pos[0] < maxPos[0]:
            solid.setX(pos[0] + 1)
        else:
            solid.setX(0)
            if pos[1] < maxPos[1]:
                solid.setY(pos[1] + 1)
            else:
                solid.setY(0)
                if pos[2] < maxPos[2]:
                    solid.setZ(pos[2] + 1)
                    self.CounterFillLayer["text"] = "Layer " + str(solid.getZ())
                else:
                    self.pasteDictSub()     # merge the temporary dictionary into the database
                    self.exitPainterFill()  # return to the main process
                    break

    return Task.cont
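
One thing I've been wondering about, in case it's viable: attaching a whole row of small solids under one node, so that a single traverse() tests many grid cells at once, and then reading every entry in the queue instead of just entry 0. A rough, untested sketch (the sphere radius and self.maskRaster are stand-ins for my actual setup):

from panda3d.core import BitMask32, CollisionNode, CollisionSphere

def setupRasterRow(self, rowLength):
    # One small solid per X cell of a grid row, all under a single root,
    # so one traverse() covers the whole row instead of a single cell
    self.rasterRow = render.attachNewNode("rasterRow")
    for x in range(rowLength):
        cnode = CollisionNode("cell" + str(x))
        cnode.addSolid(CollisionSphere(x, 0, 0, 0.5))  # radius = half a cell, assumed
        cnode.setFromCollideMask(self.maskRaster)      # same mask as the single solid
        cnode.setIntoCollideMask(BitMask32.allOff())
        self.raster.addCollider(self.rasterRow.attachNewNode(cnode), self.queueRaster)

def recordRowEntries(self):
    # After one traverse(), record all entries; the surface points already
    # carry the grid coordinates, so there's no need to know which
    # individual solid produced each hit
    for i in range(self.queueRaster.getNumEntries()):
        cpos = self.queueRaster.getEntry(i).getSurfacePoint(self.dictModels[self.currentMod][3])
        cpos = (int(cpos[0]), int(cpos[1]), int(cpos[2]))
        self.RecordSub[str(cpos)] = [self.currentSub, cpos]

The row would then only need to step along Y and Z, which should cut the number of traversals by the row's length.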

If I may ask, what are you doing with the results of this? Perhaps knowing that might inspire alternative approaches.

Here I am building a grid-based point cloud and saving it as a dictionary.
Then, in the actual game, when a hit is registered I translate the coordinates of the collision point and check the dictionary.
That tells me which subsystems are affected by the hit and whether that section of the grid is destroyed, which gets displayed on the model by a 3D texture using the same coordinates.
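
In pseudo-code terms, the runtime lookup is roughly this (simplified; dictSub, getModelRoot and applyDamage stand in for my actual names):

def onHitRegistered(self, entry):
    # Translate the collision point into the model's grid coordinates,
    # using the same integer rounding as the fill pass
    cpos = entry.getSurfacePoint(self.getModelRoot())
    key = str((int(cpos[0]), int(cpos[1]), int(cpos[2])))

    # The dictionary maps grid keys to [subsystem, coordinates]
    record = self.dictSub.get(key)
    if record is not None:
        subsystem, gridPos = record
        self.applyDamage(subsystem, gridPos)  # also updates the 3D damage texture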

Ah, I see! Thank you for the explanation! :)

Hmm… How small are your sub-systems? Could you not separate out the two elements, using the grid-positions of registered hits to edit the texture and individual collision-objects to determine the sub-systems? That way you should, I think, be able to dispense with the initial raster-pass.
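
Roughly what I have in mind, as a sketch with invented names (the tagging being done via the standard setTag/getNetTag methods):

# Setup: one collision solid per sub-system, tagged with its name
subsystemNP = characterNP.attachNewNode(subsystemCollisionNode)
subsystemNP.setTag("subsystem", "portEngine")

# On a registered hit: the tag gives the sub-system, and the
# surface-point gives the grid-position for the texture-edit
def onHit(entry):
    subsystem = entry.getIntoNodePath().getNetTag("subsystem")
    gridPos = entry.getSurfacePoint(characterNP)
    damageSubsystem(subsystem, gridPos)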

I guess that would work for some subsystems. However, I am aiming for a heavy-duty combat simulation. Some subsystems are essentially just concentrated in large boxes within an outer shell, but they are also protected by that shell as well as by energy shields and individual sections of reinforced armor. All of which should be destroyable piece by piece, with individual damage thresholds on each point.
And the filling function itself is really meant to set up these elements, which follow the hull's shape.

I figured that using a point cloud combined with a simple collision box, very roughly resembling the outer shell's shape, could be less expensive at runtime than checking collisions against a highly detailed setup.
Though it does make the full collision setup outside the actual game more complex.

All told, how many individual subsystems, outer shells, shields, and reinforced armour pieces would a single character have, do you think? And how many characters would you expect to have in a scene at once?

Don’t mistake me: it may be that something like your approach is called for. But I do want to check that a simpler approach might not work!

If a more complex approach is called for, perhaps some sort of UV-based approach might work: use textures that define the subsystems and armour-pieces, and on hit, query those textures. I'm not sure how feasible that might be, however (in particular the question of finding the relevant UV-coordinates for a hit)–on that I'll defer to others, I think!
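
Just to illustrate the texture-query half (the UV-lookup being the part of which I'm unsure): Panda does allow one to read back a texel via Texture.peek. Something like this, I imagine, presuming that one already has the hit's UV-coordinates from somewhere:

from panda3d.core import LColor

def querySubsystemTexture(tex, u, v):
    # Read the texel at (u, v) from a texture in which each
    # subsystem is painted in its own flat colour
    peeker = tex.peek()  # a TexturePeeker; None if no RAM image is available
    if peeker is None:
        return None
    col = LColor()
    peeker.lookup(col, u, v)
    return col  # map this colour back to a subsystem ID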

Including hull, shields, engines and everything, I'd say the collision would have to distinguish between roughly 15 separate damageable systems on smaller characters and maybe 50 or more on the absolute largest. Some of which are spread out across the entire character.

The average scene will probably not have much more than 3 to 6 visible characters at a time, but if possible I want to have potentially many more outside of visible range conducting independent actions, albeit likely with some calculations simplified. (With my current approach these invisible characters also don't necessarily have to bother with collision; their hits can go directly into the point cloud with some predetermined hit chances.)
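
Something like the following is what I have in mind for them (just a rough sketch; the hit-chance value, dictSub and applyDamage are placeholders):

import random

def resolveOffscreenHit(self, character, hitChance):
    # No collision at all: roll against a predetermined hit chance and,
    # on success, damage a random occupied grid point of the target
    if random.random() < hitChance:
        key = random.choice(list(character.dictSub.keys()))
        subsystem, gridPos = character.dictSub[key]
        self.applyDamage(character, subsystem, gridPos)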

PS: For the models' polygon counts I currently have between 1k and 50k, and don't really expect higher numbers for now. Though the larger ones need to be loaded as multiple parts, due to restrictions from the 3D texture.

Yeah, I heard some games use a similar approach, though I'm not sure how they get that UV-information from the collision point. (Maybe with help from a shader?)
It could work, though I think that approach would need detailed collision solids after all, if I'm not mistaken.

Hmm… So you would be looking at around 300 collision solids at once in a roughly-worst-case scenario (50 systems times six visible characters), I think. That's… actually not that bad. It might be worth trying a collision-based approach!

(I would imagine that–as you say–characters outside of visible range would use some simplified approach. Maybe roll some random numbers to represent where they were hit.)

I think that you might be surprised at the degree of simplification that you could get away with.

In any case, you likely wouldn’t need as many as in the collision-only approach above–you should be able to work with only a surface-point in the UV-based approach, I would imagine.

My current build works rather well thus far, so I'm not really sure I want to change fundamental parts halfway through.
EDIT: Well, even though it is more effort to fully set up each character before it can be used in the actual game.

Though it won’t hurt to at least do some tests, to see how another approach might perform.

That’s fair; if what you have is working, and seems to scale well, then by all means stick with it!