ODE Middleware


Latest version works fine for me! This is a Linux system, though…

I did wonder something about the class setup - was there a reason the kinematicCharacterController inherits from Object, rather than kinematicObject? I’ve been working on some other objects based on the kcc; and adjusting the inheritance that way made it a lot easier to manage the art and interactions.


The story is simple. If you scan through this topic you’ll find that the general kinematic objects class came around after the KCC. I started writing all this to get an ODE-based character controller, and everything else was added later. So the KCC doesn’t inherit from the other kinematic objects for, let’s say, historic reasons. This setup has never done me any harm, so I rarely notice it and thus haven’t changed it. Although it might be considered an oversight (at best) on my part, yes.

Uploaded a new version, named 1.2.1, in which the KCC now inherits from general kinematics. It doesn’t make much difference, but I guess since it makes your coding easier, Stoo, it might do the same for everybody else :wink:.

Let me know if I didn’t screw up something when changing that ;D.


What service! My quick test didn’t show anything breaking, anyway :smiley:


Thanks Copper, the new version works for me now (Win7) :slight_smile:

One thing I did notice - I threw a grenade in the bottom room into the corner with the other grenades and got this:


Glad to read that :slight_smile:.

About that bug – you probably threw a grenade, then picked it up, and then threw it again before it exploded. If you do that, the grenade’s explosion is triggered twice, but the grenade gets destroyed in between, resulting in this error.

That simply shouldn’t be possible in a real game; it’s just a result of the way stuff is handled in the simplistic placeholder inventory-ish system included in the sample. Likewise, the grenades are just meant to demonstrate the explosion mechanics and to provide an example of how a simple grenade can be implemented, rather than being ready to use in a real project.
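For illustration, a guard against double-triggering can be as simple as an “already exploded” flag checked before the explosion runs. This is just a sketch; the class and method names here are made up and don’t match the sample’s actual grenade code:

```python
# Hypothetical sketch of guarding a grenade against double detonation.

class Grenade:
    def __init__(self, fuse_time=3.0):
        self.fuse_time = fuse_time
        self.exploded = False

    def explode(self):
        # Ignore any second trigger (e.g. a stale task fired after the
        # grenade was picked up and thrown again).
        if self.exploded:
            return False
        self.exploded = True
        # ... spawn the explosion, apply forces, destroy the geom ...
        return True

g = Grenade()
first = g.explode()    # True: detonates
second = g.explode()   # False: the stale second trigger is ignored
```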


Ah, I see. No worries then :slight_smile:

Have another question - just curious as to why you have the map’s egg file written into the constructor instead of passing it in as an argument? (I’m still a bit of a python n00b, please excuse me if this seems like an obvious question.)


Same reason as before :slight_smile:. It’s meant solely as a demonstration of how you can load objects by placing them in Blender using Empties with tags. As far as the low-level setup goes, you’ll want to rearrange it to fit your needs, whatever they may be. The point of the map.py file is to show how objects are created around Empties and how the map’s egg is traversed, so whether you pass the file as an argument, hardcode it or do whatever else with it is up to you.
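For what it’s worth, making the egg path an argument is a one-line change; a minimal sketch (the `Map` name and the default path here are hypothetical, not the sample’s actual code):

```python
# Hypothetical sketch: the egg path becomes a constructor argument with
# a default, instead of being hardcoded inside the class.

class Map:
    def __init__(self, world, egg_path="map.egg"):
        self.world = world
        self.egg_path = egg_path
        # The real map.py would now do something like:
        #   self.node = loader.loadModel(self.egg_path)
        # and traverse the Empties to create objects around them.

m = Map(world=None, egg_path="levels/level01.egg")
```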


Ok, great. :slight_smile:

Small contribution from me - a tagger for Maya based on the Maya object tagger script. Middle-mouse-drag this script to your shelf and click it with some objects selected to add the tag attribute. Ctrl-click to remove the attribute.

global proc copperOdeTagger()
{
	string $sel[] = `ls -sl`;
	for ( $i in $sel )
	{
		string $attrName = "tagtype";

		// Modify this line as needed to add your own object types
		string $eggFlags = "none:solidBoxGeom:solidTriMeshGeom:playerPosition:alight:dlight:plight:fly_trigger";
		string $object = ( $i + "." + $attrName );
		// Ctrl-click removes the attribute, otherwise it is added
		int $mods = `getModifiers`;
		if ( ( $mods / 4 ) % 2 )
		{
			if ( `objExists $object` )
				deleteAttr -at $attrName $i;
		}
		else if ( !`objExists $object` )
		{
			addAttr -ln $attrName -k 1 -at "enum" -en $eggFlags $i;
		}
	}
}
copperOdeTagger();


That’s really cool. I don’t have Maya, so I can’t really try it out, but it’s really nice to see such contribution :slight_smile:.


Okay; so I’ve got a basic vehicle class set up now, all that’s really left on it is the turning behavior. Right now, in the update method for the vehicle, I’ve got the following:

        newQuat = self.getQuat()
        newQuat[0] = newQuat[0] + self.rotamt * self.rotspeed

This should look at whatever the player has set the rotamt to, and rotate based on the vehicle’s rotspeed. This works fine, until it almost completes a spin - it suddenly slows way down. I assume there’s some funkiness with quaternion arithmetic, I’m not really familiar with using quats beyond pulling them out and messing with an axis. I didn’t see any obvious examples of setting a slow, controlled spin in any of the dynamic or kinematic object classes, is there an easy way around this? Or do I need to do something else with the quaternion?


I can’t say I fully understand your problem. That is, I think I understand what you’re trying to do, but I don’t fully grasp how you’re doing it, nor what is actually going wrong. I generally avoid using quaternions directly, and instead prefer to extract HPR from them, which I feel a lot more comfortable with.

Still, though, the setQuat() method in the physicalObject (and thus the kinematic object) is just a convenience method. It does nothing more than pass the quaternion on to the nodepath and the geom, so there’s really not much that could go wrong there.

Now, just to clarify, are you working on an AI vehicle or a car?

In either case, though, if you want smooth turning then perhaps you should take a look into PandaSteer. It might not be 100% what you’re looking for, but it might give you some idea of how you can make this using vectors and Panda’s lookAt functionality.

The easiest way to achieve smooth turns in Panda is to use an HprInterval, but obviously that’s not a universal solution; it’s most useful for scripting events (such as in-engine cut scenes) rather than anything else.
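Done by hand instead of with an interval, smooth turning boils down to stepping the heading toward a target by a clamped amount each frame, taking the short way around the circle. A small plain-Python sketch of that idea (the function name is made up):

```python
import math

def step_heading(current, target, max_step):
    # Turn `current` toward `target` by at most `max_step` degrees,
    # taking the shorter way around the circle.
    diff = (target - current + 180.0) % 360.0 - 180.0
    if abs(diff) <= max_step:
        return target
    return current + math.copysign(max_step, diff)

h = step_heading(350.0, 10.0, 5.0)   # -> 355.0 (turns through 0, not backwards)
```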


Okay, switching to the HPR system fixed it. I just borrowed your setH code from the kcc.py file:

        quat = self.getQuat()
        hpr = quat.getHpr()
        hpr[0] = hpr[0] + self.rotamt * self.rotspeed
        quat.setHpr(hpr)       # apply the modified heading back
        self.setQuat(quat)

Right now, what I’m after is more of a tank - something that can turn whether it’s moving or not. I’ll be looking at cars and more complicated systems later, when I’ve got the rest of the game at least sketched out.
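For a tank-style mover, the heading alone gives the drive direction. A plain-Python sketch using Panda’s heading convention (H = 0 faces +Y; positive H turns counter-clockwise seen from above); the variable names here are made up:

```python
import math

def forward_from_heading(h_deg):
    # Unit XY forward vector for a Panda-style heading in degrees.
    h = math.radians(h_deg)
    return (-math.sin(h), math.cos(h))

# Rotate in place, then move along the tank's own forward axis.
x, y, heading = 0.0, 0.0, 90.0
speed, dt = 2.0, 0.5
fx, fy = forward_from_heading(heading)
x += fx * speed * dt   # the tank ends up at roughly (-1.0, 0.0)
y += fy * speed * dt
```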

Speaking of AI though; do you have any plans to include a basic NPC class in this project at some point? I’m trying to make up my mind which part of my project to sketch out next. :stuck_out_tongue:


I’m currently working on AI for my game. However, I’m not writing it from scratch, because that would be reinventing the wheel. Instead, I’m building it on top of modified versions of two awesome pieces of code – chombee’s PandaSteer2 and et1337’s pathfinding code written for his game “Stainless” (GPL2 and MIT licenses respectively) – by integrating them with each other and with my code.

With these (and my humble KCC) you can deploy functional NPCs in no time. The biggest change I had to make was replacing Panda’s internal collision detection in PandaSteer with my ODE stuff. Apart from that, it was mostly tweaking.

Generally, it works quite well already and I’d like to release it alongside my ODE framework, so you can just have basic NPC support OOTB (even though it’s not that much work, mostly a matter of integration). However, I don’t think I’ll find the time to do it anytime soon.

If I were to do it now (or, more realistically, soon), I would have to release it with no documentation, no comments, no sample and no subsequent releases until some time in the future. It would be just bare code. If you’re interested in such a “raw” deal, presumably as a starting point for your work, then no problem.


I would be interested in seeing how you integrate the AI and pathing with the rest of your system; even without the documentation. I could just look at PandaSteer too, but if you’ve already done the work of integrating it… :stuck_out_tongue:


Ok, so I’ll drop it here as soon as I have a moment.


So here is the NPC code I use. I can’t say it’ll “just work”, but it should/might.


I recommend using Recast for navigation mesh generation. It automates the whole process. IIRC the current version should automatically export generated navigation meshes to .obj. I can’t attach the one I use, because I’ve made it by combining code from different revisions, through a process of trial and error, and I’m very surprised it works, so it would probably eat your food.
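In case it helps, the triangle-only .obj subset that Recast exports is trivial to read by hand; a minimal sketch (no normals, textures or general polygons handled, and the function name is made up):

```python
def load_obj_navmesh(lines):
    # Parse "v x y z" vertex lines and "f i j k" triangle lines
    # (1-based indices, possibly with "i/j/k" suffixes).
    verts, tris = [], []
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            verts.append(tuple(float(c) for c in parts[1:4]))
        elif parts[0] == "f":
            tris.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:4]))
    return verts, tris

verts, tris = load_obj_navmesh([
    "v 0 0 0", "v 1 0 0", "v 0 1 0", "f 1 2 3",
])
# tris[0] == (0, 1, 2)
```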

Note that pathfinding comes from Stainless, which is under the MIT license, and PandaSteer is under the GPL2. Notices inside files. And huge thanks to et1337 and chombee for writing and publishing these pieces of code.


Thanks for putting it up! I’m still digesting how it all works, haven’t done much more than look at PandaSteer or Stainless.

Are steerVec.py and vehicle.py the only two PandaSteer classes you’ve modified, or the only two you’ve needed in your project? pathfinding.py looks pretty self-contained; and npc.py’s easy enough to grok… :stuck_out_tongue:


Yes. They’re the essence of PandaSteer, basically. There’s also an obstacle class in the original PandaSteer, but that doesn’t really need its own class in my version – all you need to do to have an obstacle is add a static/kinematic object with a sphere geom to whatever you want the vehicles to avoid. Then you set the bitmask correctly and you’re good to go.

The bitmasks should be such that the vehicle’s ai capsule collides with obstacles’ ai spheres, obviously.
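The bit test itself, stripped of Panda, looks like this (the bit positions are arbitrary examples, not the ones the framework actually uses):

```python
# Two geoms are considered for collision when one's collide bits
# overlap the other's category bits (or vice versa).

AI_OBSTACLE = 1 << 3   # category bit for obstacles' AI spheres
WALL = 1 << 4          # category bit for static level geometry

def may_collide(cat_a, col_a, cat_b, col_b):
    return bool((col_a & cat_b) or (col_b & cat_a))

# The vehicle's AI capsule listens only for AI obstacles:
hits_obstacle = may_collide(0, AI_OBSTACLE, AI_OBSTACLE, 0)   # True
hits_wall = may_collide(0, AI_OBSTACLE, WALL, 0)              # False
```

With Panda’s ODE wrapper the same setup is done per geom via setCategoryBits() and setCollideBits(), each taking a BitMask32.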

Right now I’m working on the most important part of pathfinding-steering interoperability: containment on the navmesh, i.e. keeping the agents on the navmesh at all times. That’s extremely important, as the creator of Recast/Detour points out.

As for the containers that come with PandaSteer, I haven’t used them yet, but they shouldn’t need to be changed at all to work with my version of PandaSteer (unless I’ve done something to the containment code and don’t remember what ;D).

About pathfinding, yes, it “only” loads the navmesh and allows you to find a path through it, so there was no need to change it to work with my ODE stuff. The only changes I’ve made to it include better path smoothing and path visualization.
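The simplest flavor of such smoothing just drops waypoints the agent can reach directly; a sketch with a stubbed walkability test (in the real code that test would be a check against the navmesh, not a lambda):

```python
def smooth(path, clear_line):
    # Greedily jump to the farthest waypoint still directly reachable.
    out = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not clear_line(path[i], path[j]):
            j -= 1
        out.append(path[j])
        i = j
    return out

# With an always-clear world the path collapses to its endpoints:
smooth([(0, 0), (1, 0), (2, 1), (3, 1)], lambda a, b: True)
# -> [(0, 0), (3, 1)]
```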

Oh, just note that when you load the navmesh (as an egg), its center MUST be at the world origin. Otherwise you will get strange results.
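A quick way to check (or fix) that is to recenter the vertices on their average before use; a plain-Python sketch (in Panda you could instead inspect NodePath.getTightBounds() after loading the egg):

```python
def recenter(verts):
    # Shift (x, y, z) vertex tuples so their centroid sits at the origin.
    n = float(len(verts))
    cx = sum(v[0] for v in verts) / n
    cy = sum(v[1] for v in verts) / n
    cz = sum(v[2] for v in verts) / n
    return [(x - cx, y - cy, z - cz) for (x, y, z) in verts]

centered = recenter([(10.0, 5.0, 0.0), (12.0, 5.0, 0.0)])
# -> [(-1.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
```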


Okay, so I’m still fiddling with the AI packages, but I’m trying to figure out why you ended up using both of them. Are you using the Stainless pathfinding (via aiVehicle’s FollowPath behavior) for gross navigation (from room to room, for instance), and then switching to PandaSteer for fine control?


Exactly. Pathfinding and steering are two different tools for different purposes. Pathfinding is like GPS – it can only tell you when to turn, but it knows nothing about where all the other cars are, so it can’t drive for you. That’s what your eyes on the road are for in a car, and that’s what steering does in AI. It moves in the overall direction of the next path waypoint, while at the same time avoiding so-called local collisions with other stuff.

So bottom to top it’s like this:

  1. Pathfinding – the general set of waypoints to pass through, or (better) a corridor of polygons to stay within.
  2. Steering – avoiding local collisions with dynamic objects and other vehicles, and turning smoothly toward the next waypoint.
  3. Physics and collision detection – making sure the character doesn’t walk through stuff when all of the above fails to prevent a collision.
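As a toy illustration of the layering, here is a self-contained loop where the pathfinding output is a fixed waypoint list, the steering is a plain seek, and the “physics” is a stub wall that has the final say (everything here is made up purely for illustration):

```python
def seek(pos, target, speed):
    # Steering: a velocity pointing at the current waypoint.
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = (dx * dx + dy * dy) ** 0.5 or 1.0
    return (dx / dist * speed, dy / dist * speed)

def physics_step(pos, vel, dt):
    # Physics/collision has the final say; here a wall blocks x > 5.
    x, y = pos[0] + vel[0] * dt, pos[1] + vel[1] * dt
    return (min(x, 5.0), y)

waypoints = [(3.0, 0.0), (10.0, 0.0)]   # 1. pathfinding output
pos, wp = (0.0, 0.0), 0
for _ in range(100):
    if wp < len(waypoints):
        vel = seek(pos, waypoints[wp], speed=1.0)   # 2. steering
        pos = physics_step(pos, vel, dt=0.5)        # 3. physics
        if abs(pos[0] - waypoints[wp][0]) < 0.6 and abs(pos[1] - waypoints[wp][1]) < 0.6:
            wp += 1
# The agent reaches (3, 0), then heads for (10, 0) but the wall stops it at x == 5.
```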

The thing I still need to work on is keeping the character inside the path corridor and inside the navigation mesh at all times during steering. This is rather complex from where I currently stand, so it might take a while. Especially since I’ve currently decided to dive into some other parts of my todo that have been desperately crying for my attention for months now.

Oh, to make one thing clear: with a dynamic navigation mesh, you can make your pathfinding aware of dynamic obstacles. Detour does that. However, this is rather complex, and even if my skills and time allowed me to extend Stainless’ pathfinding with it, I’m not sure Python would handle that anyway. Panda AI also has dynamic obstacle avoidance built into its pathfinding, but it uses a 2D navigation graph, with which this is a lot easier to achieve (because it doesn’t require cutting polygons); at the same time, though, that type of navigation data is very limited.

Additionally, even with that, you still need steering for interaction between AI vehicles, because no pathfinding can do that.