Picking animated models

Hi,

My problem is that I’m trying to pick an animated model; however, when I do so, the picking only checks against the unanimated model (default pose). Is this a known problem, or do I have a bug in my code? Unfortunately I don’t have good example code to show.

Thanks for any help

Ah, that would be a problem in Panda, and not in your code.

What’s happening is that the vertex animation is computed very late in the graphics pipeline, just before the polygons are drawn; in fact, in some cases, the vertex animation is computed on the graphics card itself, and not on the CPU at all.

Since the collision pass happens up at the high level, in a process unrelated to drawing the polygons, it always sees vertex-animated polygons in their original, unanimated pose. This is especially true if the animation is performed entirely on the graphics card.

This is arguably a bug, and in fact, it’s fairly easy to “fix” Panda so that it computes the vertex animation on-the-fly if necessary in the collision pass. I’ll go ahead and check in this fix, which will be released in a future version of Panda. The only reason I haven’t done this before is because it never occurred to me that anyone would really want to do that.

Collision detection with visible geometry is relatively slow, and animated models tend to have lots of little polygons, so collision detection against an animated model would be likely to be especially slow. But, for picking purposes, it may not be that bad if it takes a hundred milliseconds or so to do the collision test on an animated character.

David

Thanks for the fast reply. I’m not sure whether I will use this feature in the future if it’s so slow. However, it’s good to have it working for testing purposes.

Thanks

What is the status of this feature? I’m having the same problem where it is only picking against the default pose. If the feature isn’t in, is there any hacky quick fix I can use? I don’t even need the model to be animated or animatable. All I do is set the pose after loading the model, so is there a way to pose a model and then bake it into that state?

The feature is in the trunk version of Panda, but not in 1.3.2.

You could bake in the animation by applying it yourself, something like this:

from pandac.PandaModules import Thread

# Bake the current pose into each Geom by replacing its vertex data
# with the animated (posed) version.
thread = Thread.getCurrentThread()
for gnp in model.findAllMatches('**/+GeomNode').asList():
  gn = gnp.node()
  for i in range(gn.getNumGeoms()):
    g = gn.modifyGeom(i)
    g.setVertexData(g.getVertexData().animateVertices(thread))

This code is off the top of my head, so there might be minor details that are close but not quite right.

David

That code seems to work, although it doesn’t work right away. If I load the actor, set its pose, and then run that code, all before calling run(), then the result doesn’t even look correct (it looks like the first pose). I tried using taskMgr.doMethodLater, and that works, but not if the delay is 0. Is there something I can do to make it work immediately? Why would this be? Here’s the code I’m using that sort of works now, baking the model at the last frame of its animation:

import direct.directbase.DirectStart
from direct.actor.Actor import Actor
from pandac.PandaModules import Thread

def bakeAnim(obj):
    # Pose the actor at the last frame of 'fall', force its joint
    # transforms to be computed, then bake the animated vertices in.
    obj.pose('fall', obj.getNumFrames('fall') - 1)
    obj.update()
    thread = Thread.getCurrentThread()
    for gnp in obj.findAllMatches('**/+GeomNode').asList():
        gn = gnp.node()
        for i in range(gn.getNumGeoms()):
            g = gn.modifyGeom(i)
            g.setVertexData(g.getVertexData().animateVertices(thread))


model = Actor("tree.egg", {'fall': "tree_anim.egg"})
# Baking immediately (delay 0) doesn't work; a short delay does.
taskMgr.doMethodLater(.1, bakeAnim, "bakeanim", [model])

run()

Ah, sorry. Try calling actor.update() after setting the pose, but before baking in the vertices. (This is necessary because the actor’s joint positions normally are not automatically computed until the frame is rendered, and it’s the joint positions that determine where the vertices will be placed.)

David

Yeah, I thought something like that might be the case, and if you look at my sample, I am calling Actor.update(); but calling pose(), then update(), then baking still wasn’t working before the first frame had rendered. As the code sample I posted above shows, it wasn’t even necessary to wait between setting the pose and baking the vertices. I had to wait after loading the actor, before the first frame of the program, and only then set the pose, call update(), and bake the vertices. It’s sort of a hack, but I’ve got it working. Any reason why that would be the case, though?

Hmm, sorry I overlooked your call to update(). Strange behavior. I admit it doesn’t make any sense to me why it’s necessary to wait for a frame to pass. I’ll have to investigate that.

David

Hello,

I was wondering if there has been an official line or fix on this issue. The main post (dated ~2004) does not seem to reach a conclusion about the best way to approach this problem.

The question was also raised of why anyone would want to do this.

I have an example: perhaps you could suggest a different way to do it. Let’s say I want to model a 3D patient and place a stethoscope on their chest, with the mouse moving the stethoscope. If the patient is breathing (i.e., has an animated breathing motion), then the stethoscope should rise and fall with this. At the moment this does not happen: the vertices of the expanding chest temporarily ‘eat up’ the stethoscope before it reappears on expiration.

So, please could you suggest:

  1. What is the official solution to the aforementioned problem of picking on animated actors
  2. Is there an alternative way to achieve this specific effect that I’ve described?

Thank you so much for any help

Greg

I can’t answer number 1, but I do have a suggestion for number 2. You can have an offscreen depth buffer and set the camera bits in your scene so that only the pickable objects appear in the depth buffer. You can then convert the buffer into a PNMImage and read the pixel data under the mouse coordinates to find how far your object is from the camera, using the camera’s far plane as your guide. You can then place your object in the scene relative to the camera, since you’ll know how far away to place it.

Converting the whole offscreen buffer into a PNMImage might be a bit slow, though, so you might want to have another 2x2 offscreen buffer and use a shader to fill it with the pixel colour you’re interested in. An example of this is in my code snippet for texture painting.
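Roughly, the depth-buffer part might be set up like this. This is only a sketch from memory: the buffer size, the camera bit, and the depth linearization at the end are assumptions you’d want to check against your own setup.

from pandac.PandaModules import Texture, PNMImage, GraphicsOutput, BitMask32

pickMask = BitMask32.bit(4)          # arbitrary bit reserved for pickable objects

# Offscreen buffer that copies its depth plane back to RAM each frame.
depthTex = Texture()
buf = base.win.makeTextureBuffer('pick-depth', 256, 256)
buf.addRenderTexture(depthTex, GraphicsOutput.RTMCopyRam,
                     GraphicsOutput.RTPDepth)

# A camera that sees only the pickable geometry.
depthCam = base.makeCamera(buf, lens=base.camLens)
depthCam.node().setCameraMask(pickMask)

render.hide(pickMask)                # hide everything from the depth camera...
model.showThrough(pickMask)          # ...except the pickable model

def distanceUnderMouse():
    if not base.mouseWatcherNode.hasMouse():
        return None
    m = base.mouseWatcherNode.getMouse()
    img = PNMImage()
    depthTex.store(img)
    # Mouse coords run -1..1; map them to pixel coords in the buffer.
    x = int((m.getX() * 0.5 + 0.5) * (img.getXSize() - 1))
    y = int((0.5 - m.getY() * 0.5) * (img.getYSize() - 1))
    z = img.getGray(x, y)            # nonlinear depth in 0..1
    # Convert to eye-space distance using the near and far planes.
    near, far = base.camLens.getNear(), base.camLens.getFar()
    return near * far / (far - z * (far - near))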

Thanks for this idea - but surely there must be a solution to the underlying problem of picking animated actors?

I just went through the code. There have been a few generations of code since the original post, and it looks like the bit I put in there back in the day to support collisions with animated geometry had gotten inadvertently lost, so I just put it back in. You can pick up the latest by downloading the CVS version and building from source, or waiting for the 1.6 release.

It works perfectly in my tests, with no frame delay. However, you should be aware that depending on how you are computing the collisions, the collision system itself may be imposing a frame delay. In particular, the CollisionHandlerEvent is guaranteed to impose a delay of at least one frame. To avoid this, you should use the CollisionHandlerQueue, and use your own CollisionTraverser (not base.cTrav), and call traverse() explicitly, then poll the queue immediately after that.
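For instance, a picking setup along those lines might look something like this. It’s an untested sketch (the node names and the collide mask are just placeholders):

from pandac.PandaModules import CollisionTraverser, CollisionHandlerQueue, CollisionNode, CollisionRay, GeomNode

picker = CollisionTraverser()        # our own traverser, not base.cTrav
queue = CollisionHandlerQueue()

pickerNode = CollisionNode('mouseRay')
pickerNode.setFromCollideMask(GeomNode.getDefaultCollideMask())
pickerRay = CollisionRay()
pickerNode.addSolid(pickerRay)
picker.addCollider(base.camera.attachNewNode(pickerNode), queue)

def pick():
    if not base.mouseWatcherNode.hasMouse():
        return None
    m = base.mouseWatcherNode.getMouse()
    pickerRay.setFromLens(base.camNode, m.getX(), m.getY())
    picker.traverse(render)          # run the traversal explicitly...
    if queue.getNumEntries() == 0:
        return None
    queue.sortEntries()              # ...then poll the queue immediately
    return queue.getEntry(0).getIntoNodePath()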

It may be an adequate solution, depending on your needs. From a strictly performance standpoint, it’s a terrible solution; but you may not be concerned with performance. If you find performance unsatisfactory, consider ZeroByte’s solution, or use a compromise solution with the collision system (for instance, create a single CollisionPolygon which you parent to an exposed joint in the chest, and test collisions against that instead of the visible geometry), as in the sketch below.
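The compromise version might look something like this; the joint name 'chest', the polygon corners, and the Actor called patient are all made up for illustration:

from pandac.PandaModules import CollisionNode, CollisionPolygon, Point3

# Expose the chest joint so the polygon follows the breathing animation.
chestJoint = patient.exposeJoint(None, 'modelRoot', 'chest')
quad = CollisionPolygon(Point3(-0.5, 0, -0.5), Point3(0.5, 0, -0.5),
                        Point3(0.5, 0, 0.5), Point3(-0.5, 0, 0.5))
cnode = CollisionNode('chestCollider')
cnode.addSolid(quad)
chestJoint.attachNewNode(cnode)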

David

Thanks for this. Can you tell me when version 1.6 is expected? I don’t really have the setup to rebuild from the sources.

I would still appreciate any further suggestions to my original post.

Would it be a reasonable idea to incorporate into the underlying human model a simplified version which is hidden from view but which is used for collisions? That way the stethoscope would be tested against a simpler ‘terrain’ made of fewer polygons, but it would still appear, to some degree, to move over the contours of the body.

Sounds perfectly reasonable. This is more-or-less my suggestion in the above post as well. The only complication to this idea is that collision structures aren’t supported by the egg loader for animated models, so you’d have to load your simplified version from a separate, static egg file, then parent it to the exposed joint of the chest.
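In code, that might look something like the following; 'chest_collide.egg' and the joint name 'chest' are placeholders for whatever your model actually uses:

# Expose the chest joint, then hang the static collision model from it.
chestJoint = patient.exposeJoint(None, 'modelRoot', 'chest')
collide = loader.loadModel('chest_collide.egg')
collide.reparentTo(chestJoint)
# If the egg contains only collision solids it won't be rendered;
# otherwise, call collide.hide() to keep it out of view.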

David

Unfortunately I don’t have a compiler, etc., to retrieve the CVS source and build it. How far away is version 1.6? Is there any way someone could provide the necessary replacement files relevant to just this suggested alteration to the source?

Thank you again for any input.

I’m aiming for at least within a month or two. Are you running Windows?

Yes I am. A bit frustrated that I can’t get round this problem easily!

Can someone provide a link to the most up-to-date instructions for building the latest Panda on the Windows platform? I don’t really have an existing compiler, either.

Thank you very much for any help.

Here:
panda3d.org/manual/index.php/Tutor … on_Windows

You’d probably need a CVS client like TortoiseCVS to check out Panda’s CVS repository on Windows. (I’ve never used CVS on Windows, though, so don’t ask me.)