Picking animated models

Hello,

I’ve managed to download and build Panda 1.6 and all that works fine…But…

I’m not sure what I’m doing wrong now, but the program can’t seem to load any bespoke collision geometry from the model file I’ve been talking about, i.e. I have a line

<Collide> { Polyset keep descend }

under my main Group node, but when I try to find this after I’ve loaded it, i.e.

actor = Actor("myfile.egg") # load the above file
np = actor.find("**/+CollisionNode")

I get an empty result!

What is going on? Is there something changed in the new build? Should I have some kind of configuration option set?

Thank you for any help and pointers about where this might be going wrong…it used to work in the sense that I could at least load the collision geometry. Now that doesn’t seem to happen, and I’m not sure if it is me or the build.

Greg

You can’t load Actor-animated CollisionPolygons from an egg file. The best you can do is load them from a different egg file, then parent them to the exposed joints of your Actor by hand. Or, you can use the mechanism described in this thread to implement collision-with-geometry, which works even though it’s painfully slow.

David

Hello,

I’m not sure what you mean.

In the past I thought I definitely had a system where I could load collision geometry from my skeletal character model file, e.g. model.egg, which contained a line

<Collide> { Polyset keep descend }

Maybe I didn’t, though…maybe I was just using the visible geometry as the ‘into’ collision geometry.

So are you saying you can’t have a node like

<Collide> { Polyset keep descend }

inside a character animation file? I thought you could. I notice that when I load non-animated models with this line in, it seems to work fine.

Greg

(Rereads whole thread). Ahem I see this is indeed the case - you can’t load collision geometry from an animated model.

Other than using the full whack collisions-with-all-the-visible-geometry technique (yes, it is slow - I go from about 250 fps to 60 fps, albeit using a CollisionHandlerEvent rather than the CollisionHandlerQueue technique – maybe that will speed it up a little), I wonder if there are any alternatives?

One question - how ‘visible’ does the geometry have to be? Let’s say I want an object to follow the contours of a character’s bare chest as it breathes. Could I get a performance improvement by creating a simplified version of the chest in my modelling app (fewer polygons, and only the chest rather than the whole body), hiding it so it isn’t rendered, but still using it as the ‘visible’ geometry for collisions, with the rest of the character model ignored for this purpose? That way I’d avoid much of the performance hit.

Is that possible?

Greg

Yes, that’s possible, and it will help. You can simply model two chests, flag the lower-level model in egg-optchar, and hide it (and set its collision mask).
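The egg-optchar step might look like this (a sketch; "myChest" and "model.egg" are placeholder names - `-flag` keeps the named geometry as a separate node you can later find(), and `-inplace` writes the result back over the input file):

```shell
# Flag the low-poly chest geometry so it survives character
# optimisation as a separate, findable node.
egg-optchar -inplace -flag myChest model.egg
```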

But the best performance will come from modelling a rigid piece as a set of collision polygons, and attaching it to an exposed chest joint at runtime.

David

Thank you for your help so far.

I am currently using this technique:

  1. In the modelling app, create a low vertex/poly version of the patient’s chest, which is parented to the underlying skeleton but which is given a different name, i.e. myChest

  2. Keep myChest accessible by using egg-optchar

  3. In the program, do

    n = node.find("**/myChest")
    n.setTag(…) and n.setCollideMask(…)
    n.hide()

  4. I then cast a ray through the screen and, using a separate CollisionTraverser and CollisionHandlerQueue, get a collision entry and position my stethoscope at that point

The outcome is so far very good, and all the above steps have combined to make the performance much much better than it was.

However, I have noticed a problem of a different kind. Certainly the collision detection occurs on the chest surface and this is live with the motion of the chest. But at the moment the stethoscope doesn’t really move as you’d expect. As the chest moves, if you don’t move the mouse, the stethoscope remains roughly at the same location, and doesn’t appear to move as you’d expect with the chest. The reason is that in reality the stethoscope should ‘stick’ to a point on the chest and move with it. In my program, it kind of hovers above the chest, which moves underneath it.

The solution I think is this: when I move the mouse, a single point on the chest needs to be located, and then this point should identify the location of the stethoscope, such that as the x, y, z of that point moves in subsequent frames, I can update the stethoscope position.

But how can this be achieved? It’s like I need to be able to do this:

thePointId = collisionEntry.getPointId()

Then until the mouse moves again, each frame I do

steth.setLocation(model.getPoint(thePointId))

What facility is there to actually do this, I wonder?

There is no “point ID” associated with every point on a surface. You could shoot another collision ray through the stethoscope each frame to redetermine where the nearest point on the chest model is each frame. You’d have to be careful that the chest does not move above the start of the ray, so you could start the ray a few inches behind the stethoscope to be sure (or even just use a collision line instead, which starts infinitely far back).

Or, you could wrtReparentTo() the stethoscope to the animating chest bone, so that it would automatically continue to move as the chest moves. This would only be possible if there were a single predominant bone that moves with the chest motion (or if you could easily determine the appropriate bone from the point of intersection).

David

Hi there, thanks for the speedy reply.

To take your points in reverse order, I have tried reparenting the stethoscope, and you are bang on that it is problematic because there is more than one joint involved and the effect is not convincing (this is why I went back to a more contour-based approach, i.e. the steth following the chest contours).

Your first point then: I suppose I meant the point ID of the nearest vertex of the chest geometry, but thinking about it, that might not be very useful if the geometry is low-res, as the steth might snap to this sparse landscape of points, so I’ll forget about that.
So are you saying that I should cast a ray out of the stethoscope rather than through the screen (which is what I am doing at the moment, each frame)? If so, I am struggling to figure out what values I should use for the direction of the ray. Come to think of it, could I instead use a CollisionSphere around the stethoscope, and then use that collision information to set its new location? I’m still not convinced either of these approaches will give the effect required. I need it so that, e.g., if you put the steth on the nipple, it stays on the nipple as the chest moves up and down.

No, I mean before the stethoscope has been placed, you cast a ray through the screen, as you are doing now, and place the stethoscope. Thereafter, while the stethoscope remains there, you cast a different ray, this one through the stethoscope model itself.

David

But what values do I use to cast a ray through the stethoscope itself? I am assuming you mean a ray which passes through both the centre of the stethoscope and then into the chest/body, right?

How do I derive what direction this CollisionRay should have?

Did you create the stethoscope model? Which direction is “down” in the stethoscope model? Create that ray, then parent it to the stethoscope.

David

I am struggling to get this to work.

As a test, what I’ve done is: the first time I engage the stethoscope, I use a ray cast through the screen and onto the chest. This sets the steth location (and (0, 0, 0) on the steth is the centre of the listening surface) to the collision surface location. This bit works fine.

Then from then on I try and do the following:

(The ray is actually a CollisionLine now, as you’ve suggested.)

ray.setOrigin(0, 0, 0)
ray.setDirection(Vec3(0, 0, -1))

(I’ve also tried (0, 1, 0) - in my modelling app ‘down’ is (0, 0, -1), but I notice that in Panda Z and Y are the other way round - the point is that I’ve tried both!)

Then I have reparented the CollisionNode NodePath that owns the ray to the stethoscope.

Then I use the same code as above to traverse and then go over the results.

It isn’t working: I don’t get any results, or I might get one and thereafter get none!

Can you think of anything I’m not doing right? If not, can you tell me how I can make this ray visible, so I can check it is being fired the right way? (I’ve tried showCollisions but that doesn’t do anything.) I’ve also parented an axis at the point of collision, but that still doesn’t help me check whether the CollisionLine is right.

A CollisionNode is simply hidden by default. You can call nodePath.show() to make it visible. A CollisionRay and CollisionLine are both infinite, and therefore a challenge to represent visually; the visual representation for both of these is a reasonable approximation.

David

c4scroller, is your patient model’s breathing animation based upon bones?
If so, why don’t you make the stethoscope movement follow the relative position of a specific bone in the chest?

I have tried this but it didn’t look right…I wondered if it was because more than one bone was being moved as part of the breathing animation.

Yes, I remember: so if you move the stethoscope, say, down to the abdomen, which moves in a different fashion, then if you are still parented to a chest joint the steth doesn’t move properly.

Of course - I missed that you may want to move the steth while the animation is playing. Pretty nasty problem indeed. I guess you should try to see if ODE can be of some help, starting from this pro_rsoft snippet, where there are floating surfaces that collide with a solid one in real time. I guess it is worth a try.

ODE may indeed be helpful, but this is certainly a problem that Panda’s collision system can handle as well. The collision ray parented to the stethoscope ought to solve the problem nicely. Have you been able to line up the collision ray properly yet?

David