Collision detection for models rotated by the billboard effect

Hi All,

I hope somebody can help…

I have the following issue:

I create an empty node and attach two other nodes to it, one on the left and one on the right. Then I attach the first node to the model and apply the billboard effect (setBillboardPointWorld()). I want a normal model in the center and two control models on the left and right that rotate around it and always face the camera.

But when I try to detect a collision with a mouse click, I see that the collision mesh does not rotate the same way as the visible geometry. It seems the effect only rotates the visible geometry and leaves the collision mesh in place.

Is it possible to use this effect and still detect the models correctly?

I created a simple example of my problem here.

Just click the arrows on the left and on the right.

I searched the forum for a solution but wasn't successful.

Code for reference:

from direct.showbase.ShowBase import ShowBase
from pandac.PandaModules import CollisionRay,CollisionNode,GeomNode,CollisionHandlerQueue,CollisionTraverser,LineSegs
from direct.task import Task

class World(ShowBase):
  def __init__(self):
    ShowBase.__init__(self)
    #load models
    self.circleModelNext   = self.loader.loadModel("models/circle2")
    self.circleModelPrev   = self.loader.loadModel("models/circle2")
    circlePrev_tex = self.loader.loadTexture("textures/circle_next_tex.png")
    circleNext_tex = self.loader.loadTexture("textures/circle_back_tex.png")
    self.circleModelNext.setTexture(circleNext_tex, 1)
    self.circleModelPrev.setTexture(circlePrev_tex, 1)
    self.clusterModel = self.loader.loadModel("models/cluster")
    cluster_tex = self.loader.loadTexture("textures/cluster_tex.png")
    self.clusterModel.setTexture(cluster_tex, 1)
    #collider set
    self.setCollider()
    self.genObject()
    #set mouse and camera
    self.accept("mouse1", self.mouseSelect)
    self.accept("mouse1-up", self.clearSelect)
    self.disableMouse()
    self.camCenter = self.render.attachNewNode('camCenter')
    self.camera.reparentTo(self.camCenter)
    self.camera.setPos(0, 5, 10)  #orbit the camera around the origin
    self.camera.lookAt(0, 0, 0)
    self.connecter = LineSegs("lines")
    #draw X Y Z coordinate lines
    self.connecter.moveTo(0.0, 0.0, 0.0)
    self.connecter.drawTo(25, 0, 0) 
    self.render.attachNewNode(self.connecter.create()) #X (RED)
    self.connecter.moveTo(0.0, 0.0, 0.0)
    self.connecter.drawTo(0, 25, 0)
    self.render.attachNewNode(self.connecter.create()) #Y (BLUE)
    self.connecter.moveTo(0.0, 0.0, 0.0)
    self.connecter.drawTo(0, 0, 25)
    self.render.attachNewNode(self.connecter.create()) #Z (PINK)
    self.cameraTask = taskMgr.add(self.moveCam, "moveCam")
  def genObject(self):
    #one test instance of model with specific tags
    cluster = self.render.attachNewNode('Cluster')
    center = cluster.attachNewNode('center')
    next = center.attachNewNode('next')
    prev = center.attachNewNode('prev')
    next.setTag('type', 'cluster_control')
    prev.setTag('type', 'cluster_control')
    next.setTag('action', 'next')
    prev.setTag('action', 'prev')
    next.setTag('cluster','ref to cluster1')
    prev.setTag('cluster','ref to cluster1')
  def moveCam(self, task):
    self.camCenter.setH(self.camCenter, 1)
    return task.cont
  #select 3D object by mouse click
  def mouseSelect(self, showMenu = False):
    if self.mouseWatcherNode.hasMouse():
      mpos = self.mouseWatcherNode.getMouse()
      self.pickerRay.setFromLens(self.camNode, mpos.getX(), mpos.getY())
      self.mtraverser.traverse(self.render)
      amount = self.colHandler.getNumEntries()
      cnt = 0
      if amount > 0:
        self.colHandler.sortEntries() #this is so we get the closest object
        found = False
        while cnt < amount:
          pickedObj = self.colHandler.getEntry(cnt).getIntoNodePath().findNetTag('type')
          if not pickedObj.isEmpty():
            found = True
            cnt = amount #set flag to stop loop
            type = pickedObj.getTag('type')
            if type == 'cluster_control':
              #controlling cluster
              action = pickedObj.getTag('action')
              if action == "next":
                print action
                #not good idea but only for this test
              if action == "prev":
                print action
                #not good idea but only for this test
          cnt = cnt + 1
        if not found:
          print "collision list didn't have a nodepath with the 'type' tag - nothing found!"
  def clearSelect(self):
    pass
  def setCollider(self):
    self.mtraverser = CollisionTraverser()
    self.colHandler = CollisionHandlerQueue()
    self.pickerNode = CollisionNode('mouseRay')
    self.pickerNode.setFromCollideMask(GeomNode.getDefaultCollideMask())
    self.pickerNP   = self.camera.attachNewNode(self.pickerNode)
    self.pickerRay = CollisionRay()
    self.pickerNode.addSolid(self.pickerRay)
    self.mtraverser.addCollider(self.pickerNP, self.colHandler)

w = World()
w.run()

Correct: the billboard effect is a camera (render-time) effect, and has no effect on the collision traverser.

Maybe you could construct your collision geometry as a sphere or cylinder, or something else rotationally symmetric, so that it doesn't need to be rotated in order to be detected.
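The reason a sphere (or a cylinder around the billboard's spin axis) works is that the test depends only on distance to the center, which rotation does not change. A minimal pure-Python sanity check of that invariance (not Panda3D code; the helper names are mine):

```python
import math

def rotate_z(p, degrees):
    """Rotate a 3D point around the Z axis (the axis a billboard spins on)."""
    a = math.radians(degrees)
    x, y, z = p
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a),
            z)

def hits_sphere(p, center, radius):
    """Point-in-sphere test: depends only on the distance to the center."""
    return math.dist(p, center) <= radius

p = (2.0, 0.0, 1.0)
for h in range(0, 360, 30):
    # however the geometry is spun around Z, the sphere test gives the same answer
    assert hits_sphere(rotate_z(p, h), (0, 0, 0), 3.0)
    assert not hits_sphere(rotate_z(p, h), (0, 0, 0), 2.0)
```

A CollisionSphere parented to the billboarded node behaves the same way: the traverser sees the unrotated solid, but since the solid looks identical from every heading, picking still works.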


Thanks David for your answer!

I suspected something like that, but thought I was doing something wrong.

I decided to emulate this effect with a task, rotating the center nodes with lookAt().

But it is painful for my implementation (I have many objects). The problem is that I can only use the space at the sides of the model: the space above and below it is occupied by other geometry that must also be detected with collisions.

Unfortunately, in my case I can't put rotationally symmetric solids like a torus around my model, because I need to be able to select the model itself.

Can anybody suggest a better solution (than a task)?

I also expected that this approach would work:

createdModel = …
dummyNode1 = …
dummyNode2 = …




I tried to rotate only the instanced model, expecting all the other dummy nodes to pick up this transformation, but it seems that expectation is wrong. Is this a difference between PandaNode and NodePath that I need to understand? Scaling and colors change the content, but rotation and positioning do not?

I don’t understand your use of instancing here, but in general, this doesn’t sound like a NodePath/PandaNode confusion. Remember, a NodePath is just a handle to a PandaNode, and anything you can do to a NodePath can also be done directly to the node within it (and, in fact, this is what actually happens when you perform a NodePath operation).

If you have one node with multiple different parents (that is, instancing), then that node will appear to be in multiple places within the scene simultaneously, and any transform you apply to that node will be manifest on all of its instances. But relative operations (such as model.lookAt(camera)) make sense only for one instance at a time; the other instances will show the same rotation, but they won't necessarily be looking at the camera.
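The point about instancing can be made concrete with headings alone: an instanced node stores a single local transform, and each parent composes its own transform with it separately. A tiny sketch with hypothetical numbers:

```python
# Two parents with different headings share one child node (instancing).
parent_a_h = 0.0
parent_b_h = 90.0

# World heading the child would need in order to face the camera:
cam_world_h = 30.0

# lookAt() computed relative to parent A stores ONE local heading on the shared node:
node_h = cam_world_h - parent_a_h

# The instance under A faces the camera; the instance under B shows the same
# local rotation, but its world heading is off by the parents' difference.
world_h_a = parent_a_h + node_h
world_h_b = parent_b_h + node_h
assert world_h_a == cam_world_h
assert world_h_b != cam_world_h
```

There is only one `node_h` to set, so at most one instance can face the camera at a time unless all parents happen to share the same orientation.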


Thanks for clarifications!

I tested it and I understand what you meant. We can't achieve this effect with instance rotation alone, because each instance lives in its parent node's coordinate space.

So… rotating each node with a task is the only way, then…

Thanks for all your answers David!

Dmitri D.
(from Estonia)

P.S. I use instancing because I have a lot of similar models. I refreshed the example in the zip to demonstrate this.

A small screenshot of my master's thesis work related to graphs, done so far with Panda3D:

I am researching graph layouts and navigation in 3D.

Maybe I will later try to use this tool to implement and explore the scene graph itself :slight_smile:
smth like import blabla, blabla.explore()