First of all, thank you NNair and company for bringing PandAI to Panda3D! I originally posted a separate thread, but I deleted it to move my post here, so apologies if this sounds familiar.
I’m trying to make sure I understand PandAI correctly, because the steering behaviors aren’t quite what I expected. Consider the demo below, which I put together from examples on the PandAI website.
# for the Panda3D window and base functions
import direct.directbase.DirectStart
# for most core Panda3D types
from pandac.PandaModules import *
# for DirectObject event-handling support
from direct.showbase.DirectObject import DirectObject
# for tasks
from direct.task import Task
# for Actors
from direct.actor.Actor import Actor
# for PandAI
from panda3d.ai import *
# for onscreen GUI
from direct.gui.OnscreenText import OnscreenText

def addTextField(pos, msg):
    return OnscreenText(text=msg, style=1, fg=(1, 1, 1, 1),
                        pos=(-1.3, pos), align=TextNode.ALeft,
                        scale=.05, mayChange=True)

class Target:
    def __init__(self, number, gameWorld, pos=(0, 0, 0)):
        self.gameWorld = gameWorld
        self.model = loader.loadModel("models/arrow")
        # derive a distinct color from the target number
        r = number % 2
        g = (number / 2) % 2
        b = (number / 4) % 2
        self.model.setColor(r, g, b)
        self.model.setPos(pos)
        self.model.setScale(1)
        self.model.reparentTo(render)
        self.AIchar = AICharacter("target" + str(number), self.model, 100, 0.05, 5)
        self.gameWorld.addAiChar(self.AIchar)
        self.AIbehaviors = self.AIchar.getAiBehaviors()
        self.collcp = CollisionSphere(0, 0, 3, 7)
        self.collcn = CollisionNode("Seeker")
        self.collcn.addSolid(self.collcp)
        self.collcn.setFromCollideMask(BitMask32(0x8))
        self.collcn.setIntoCollideMask(BitMask32(0x8))
        self.collcnp = NodePath(self.collcn)
        self.collcnp.reparentTo(self.model)

    def wander(self, wander_r=2, flag=0, aoe=12, priority=1.0):
        self.AIbehaviors.wander(wander_r, flag, aoe, priority)

class World(DirectObject):
    def __init__(self):
        base.disableMouse()
        base.cam.setPosHpr(0, 0, 55, 0, -90, 0)
        self.loadModels()
        self.setAI()
        self.last = 0.0
        self.text = addTextField(0.9, "Evasion and seeking behavior combined.")

    def loadModels(self):
        # the evader (Ralph)
        ralphStartPos = Vec3(-10, 0, 0)
        self.evader = Actor("models/ralph",
                            {"run": "models/ralph-run"})
        self.evader.reparentTo(render)
        self.evader.setScale(0.5)
        self.evader.setPos(ralphStartPos)

    def setAI(self):
        # create the AI world
        self.AIworld = AIWorld(render)
        self.AIchar = AICharacter("evader", self.evader, 100, 2, 10)
        self.AIworld.addAiChar(self.AIchar)
        self.AIbehaviors = self.AIchar.getAiBehaviors()
        self.evader.loop("run")
        self.target = []
        self.numTargets = 7
        for x in range(self.numTargets):
            self.target.append(Target(x, self.AIworld))
            self.target[x].wander()
            self.AIbehaviors.evade(self.target[x].model, 4.0, 4.0, 0.5)
        self.target.append(Target(self.numTargets, self.AIworld))
        self.seekTarget = self.target[self.numTargets]
        #self.target[self.numTargets].wander()
        self.AIbehaviors.seek(self.seekTarget.model, 0.25)
        # AI world update task
        taskMgr.add(self.AIUpdate, "AIUpdate")

    # update the AIWorld each frame
    def AIUpdate(self, task):
        # elapsed = task.time - self.last
        # self.last = task.time
        # self.text.setText(str(elapsed))
        self.AIworld.update()
        # this behavior has to be frequently reset for it to function
        self.AIbehaviors.seek(self.seekTarget.model, 0.25)
        # for x in range(self.numTargets):
        #     d = self.distance(self.evader, self.target[x].model)
        #     self.AIbehaviors.evade(self.target[x].model, 3.0, 5.0, 0)
        return Task.cont

    def distance(self, a, b):
        return (a.getPos(render) - b.getPos(render)).length()

w = World()
run()
I’ve observed several problems with the steering behaviors in my demo.
First, I expected all recalculations to happen during AIworld.update(). However, Ralph reacts to the evasion targets at what looks like an almost random distance, which suggests their proximity is not being checked every frame. Am I just seeing things? If this is real, I understand there may be a performance reason behind it, but how would I go about increasing the frequency of those checks?
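If the library really is throttling these checks internally, one workaround would be to do the proximity test yourself inside the update task and enable/disable evade accordingly. Here is a minimal sketch of just the distance test; the helper name and plain-tuple positions are my own, not part of the PandAI API:

```python
def targets_in_range(evader_pos, target_positions, radius):
    """Return indices of targets within `radius` of the evader.
    Positions are (x, y) tuples; squared distances avoid a sqrt per target."""
    result = []
    for i, tpos in enumerate(target_positions):
        dx = tpos[0] - evader_pos[0]
        dy = tpos[1] - evader_pos[1]
        if dx * dx + dy * dy <= radius * radius:
            result.append(i)
    return result
```

Calling this once per frame from AIUpdate (feeding it each model's getPos()) guarantees the check runs at frame rate, whatever PandAI does internally.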
Additionally, in my experiments the evade and pursue behaviors don’t actually account for the target being in motion. A pursuit AI should be able to calculate an intercept point from the speeds of both the pursuer and the target; at present, pursuit and evasion appear to simply move directly toward or away from the target’s current position, with no option to improve on this. I’ve actually solved this problem before, in an AI aiming algorithm for finite-speed projectiles. It was somewhat frustrating and forced me to dust off the law of cosines, but for someone who knows what they’re doing it should be just a few lines of C++. I’ll step up if no one else is offering.
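For reference, the intercept calculation I mean reduces to a quadratic rather than explicit trigonometry: with displacement d from pursuer to target, target velocity v, and pursuer speed s, solve |d + v·t| = s·t for the earliest positive t. A sketch in Python (my own function, not PandAI code, using 2D tuples for clarity):

```python
import math

def intercept_point(pursuer_pos, pursuer_speed, target_pos, target_vel):
    """Point to steer toward so a pursuer at constant speed meets a target
    moving with constant velocity; None if no intercept is possible."""
    dx = target_pos[0] - pursuer_pos[0]
    dy = target_pos[1] - pursuer_pos[1]
    vx, vy = target_vel
    # |d + v*t| = s*t  ->  (v.v - s^2) t^2 + 2 (d.v) t + d.d = 0
    a = vx * vx + vy * vy - pursuer_speed * pursuer_speed
    b = 2.0 * (dx * vx + dy * vy)
    c = dx * dx + dy * dy
    if abs(a) < 1e-9:
        # speeds match: the quadratic degenerates to a linear equation
        if abs(b) < 1e-9:
            return None
        t = -c / b
    else:
        disc = b * b - 4.0 * a * c
        if disc < 0.0:
            return None  # pursuer too slow: no intercept exists
        root = math.sqrt(disc)
        candidates = [t for t in ((-b - root) / (2.0 * a),
                                  (-b + root) / (2.0 * a)) if t > 0.0]
        if not candidates:
            return None
        t = min(candidates)  # earliest feasible intercept time
    if t <= 0.0:
        return None
    return (target_pos[0] + vx * t, target_pos[1] + vy * t)
```

Feeding the returned point to seek() instead of the target itself would give genuine pursuit; aiming away from it would give genuine evasion.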
Finally, and most importantly, I expected Ralph to act on multiple behavior priorities simultaneously, but this doesn’t appear to happen in my demo. For example, with one evasion target above and slightly left of Ralph and another below and slightly left of him, Ralph should evade both by moving to the right. In practice I see Ralph “bouncing” between the two as he alternates targets, or, more commonly, evading one while plowing straight through the other with no apparent regard for it.
I could approximate better behavior by adjusting each target’s priority every frame based on its distance.
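What I’d really want is for the forces to blend, the way OpenSteer combines them: each nearby threat contributes a flee vector weighted by its proximity, and the weighted sum becomes the steering direction. A sketch of that blending, assuming inverse-square weighting (my own choice of falloff, not anything PandAI provides):

```python
import math

def combined_evade_direction(evader_pos, threat_positions, panic_radius):
    """Blend flee vectors from several threats into one unit steering
    direction. Each threat within panic_radius contributes a vector
    pointing away from it, weighted by 1/distance^2 so nearer threats
    dominate. Returns None if no threat is in range (or forces cancel)."""
    sx = sy = 0.0
    for tx, ty in threat_positions:
        dx = evader_pos[0] - tx
        dy = evader_pos[1] - ty
        dist = math.hypot(dx, dy)
        if 0.0 < dist <= panic_radius:
            w = 1.0 / (dist * dist)
            sx += w * dx / dist  # unit flee vector scaled by weight
            sy += w * dy / dist
    mag = math.hypot(sx, sy)
    if mag == 0.0:
        return None
    return (sx / mag, sy / mag)
```

With threats above-left and below-left, the vertical components cancel and the result points right, exactly the behavior I described wanting from Ralph.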
I’m not trying to be demanding; I’ve just worked with OpenSteer in the past and saw all of these behaviors working beautifully, though that is a much more specialized and mature solution.
In any case, see http://www.red3d.com/cwr/steer/ for steering that behaves the way I would expect.
If some or all of my problems come from how I’ve used PandAI, please let me know! If the problem is not with my usage, am I just expecting too much of the still-maturing PandAI? Perhaps PandAI could borrow some code from OpenSteer; it does use the MIT license, after all…