Not exactly, since render and render2d are completely different scene graphs. If you want to use clever scene graph constructs like CompassEffect (which keeps a particular aspect of a node's transformation relative to another node), then you need to put the nodes in the same scene graph.
One reliable way to do labels and nametags is to keep the TextNode in the 3D scene and have it always face the camera using a BillboardEffect. You can play tricks with depth testing to make it always appear on top. This is what I did in a game I created recently:
The code is a bit of a mess due to being a one-week project, but most of the code for the labels is in
An alternative approach is to parent the labels to a plane some units in front of the camera, and then use a CompassEffect to tell Panda to track a particular object in the scene. Something like this:
labels = base.cam.attach_new_node("labels")
labels.set_y(10)  # some units in front of the camera
labels.set_bin("fixed", 0)  # render on top of everything
label = labels.attach_new_node(TextNode("nametag"))
# target is the 3D node being tracked
label.set_effect(CompassEffect.make(target, CompassEffect.P_pos))
However, in both approaches, the label will get smaller as you zoom out, because the CompassEffect inherits too much of the node's position, including the depth. Limiting the components by changing it to
CompassEffect.P_x | CompassEffect.P_z doesn't work, since CompassEffect unfortunately splits the components in world space, not in the space of the reference node.
So, either approach will still require you to write a task that sets the Y in camera space to a fixed value, at which point it’s not much more automatic than writing a task that just calculates the entire 2D position of a 3D node.
Perhaps what we need is a CompassEffect flag (or a
fixed_depth setting on BillboardEffect) that locks a node's depth to a fixed distance from the camera. I think this would be a fairly trivial feature to add for the upcoming 1.10 release; would you be interested in something like this?