Automatic 3D to 2D node positioning

Is it possible for a node whose parent is self.render to attach a child whose coordinates are computed with self.render2d?

The problem I’m trying to solve is the following. I have a bunch of objects in a 3D scene. For each object I want to create a text label (TextNode) that should follow the assigned object and be re-positioned as the camera moves, but only in 2D, i.e. the label’s width and height coordinates are supposed to change, but not its depth. The label’s scale must also remain the same. A similar effect can be seen on virtual maps, like Google Maps, where labels and markers keep their relative positions while their size is independent of the map extent. I know I can do this manually by projecting the objects’ coordinates onto the screen and re-positioning the labels with the computed coordinates, but I wonder if I can create a graph structure that would do the same automatically?
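
For reference, the manual approach I mentioned would look roughly like this (a rough sketch; obj and label are placeholder names for one tracked object in render and its label parented under aspect2d):

from panda3d.core import Point2, Point3

def update_label(task):
    # Object position in camera space.
    p3 = base.cam.get_relative_point(obj, Point3(0, 0, 0))
    p2 = Point2()
    if base.camLens.project(p3, p2):
        # project() yields film coordinates in [-1, 1].
        label.show()
        label.set_pos(render2d, p2.x, 0, p2.y)
    else:
        label.hide()  # off-screen or behind the camera
    return task.cont

base.taskMgr.add(update_label, "update_label")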

Not exactly, since render and render2d are completely different scene graphs. If you want to use clever scene graph constructs like CompassEffect (which keeps a particular aspect of the transformation relative to another node) then you need to put them in the same scene graph.

One reliable way to do labels and nametags is to keep the TextNode in the 3D scene and have it always face the camera using a BillboardEffect. You can play tricks with depth testing to make it always appear on top. This is what I did in a game I created recently; the code is a bit of a mess due to being a one-week project, but most of the code for the labels is in gamelib/construct.py.
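
In outline, that approach looks something like this (a sketch rather than the actual game code; obj stands for the object being labelled):

from panda3d.core import TextNode

tn = TextNode("tag")
tn.set_text("Label")
tn.set_align(TextNode.A_center)

tag = obj.attach_new_node(tn)
tag.set_billboard_point_eye()  # rotate to face the camera every frame
tag.set_scale(0.5)

# The depth-testing tricks: draw late and ignore the depth buffer
tag.set_bin("fixed", 0)
tag.set_depth_test(False)
tag.set_depth_write(False)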

An alternative approach is to parent the labels to a plane some units in front of the camera, and then use a CompassEffect to tell Panda to track a particular object in the scene. Something like this:

# A dummy node two units in front of the camera
labels = base.cam.attach_new_node("labels")
labels.set_y(2)

# Render on top of everything
labels.set_bin("fixed", 0)
labels.set_depth_write(False)
labels.set_depth_test(False)

# The CompassEffect copies the position of node_in_scene_graph
# onto the label every frame
label = labels.attach_new_node(TextNode(...))
label.set_effect(CompassEffect.make(node_in_scene_graph, CompassEffect.P_pos))

However, in both approaches the label will get smaller as you zoom out, since the CompassEffect inherits too much of the node’s position, including its depth. Limiting the components by changing it to CompassEffect.P_x | CompassEffect.P_z doesn’t work either, since CompassEffect unfortunately splits the components in world space, not in the space of the reference node.

So, either approach will still require you to write a task that sets the Y in camera space to a fixed value, at which point it’s not much more automatic than writing a task that just calculates the entire 2D position of a 3D node.
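
For completeness, a minimal sketch of such a task (assuming the labels are parented directly to base.cam and targets is a dict mapping each label to the node it follows; both names are illustrative):

def fix_depth(task):
    for label, target in targets.items():
        # Position of the tracked node in camera space.
        p = base.cam.get_relative_point(target, (0, 0, 0))
        if p.y > 0.1:
            # Rescale onto the plane at Y = 2, keeping the screen position.
            label.set_pos(p * (2.0 / p.y))
            label.show()
        else:
            label.hide()  # behind (or too close to) the camera
    return task.cont

base.taskMgr.add(fix_depth, "fix_depth")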

Perhaps what we need is a CompassEffect flag (or a fixed_depth setting on BillboardEffect) that locks a node’s depth to a fixed distance from the camera. I think this would be a fairly trivial feature to add for the upcoming 1.10 release; would you be interested in something like this?

You could draw the labels as geom points. You’d need a shader to generate texture coordinates so the text can be put on each point as a texture. Geom points have their size set in pixels, so they’d stay the same size on screen; you’d just need to disable depth testing to render them in front of everything else.
Maybe overkill, but I think it’s an interesting concept anyway :wink:
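
For the curious, here is a bare-bones sketch of the point geometry; the shader that textures each point with its text is left out, and all names are illustrative:

from panda3d.core import (Geom, GeomNode, GeomPoints, GeomVertexData,
                          GeomVertexFormat, GeomVertexWriter)

# One vertex per label position.
vdata = GeomVertexData("labels", GeomVertexFormat.get_v3(), Geom.UH_static)
writer = GeomVertexWriter(vdata, "vertex")
writer.add_data3f(0, 0, 0)

prim = GeomPoints(Geom.UH_static)
prim.add_vertex(0)

geom = Geom(vdata)
geom.add_primitive(prim)
node = GeomNode("label_points")
node.add_geom(geom)

points = render.attach_new_node(node)
points.set_render_mode_thickness(32)  # point size in pixels
points.set_depth_test(False)          # render in front of everything else
points.set_bin("fixed", 0)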

I just implemented this.

Now you can do this:

label.setBillboardPointEye(-10.0, fixed_depth=True)

This will always keep it at a distance of 10 units from the camera, making it appear at a fixed size.

Thanks for the patch! I wish I could use master in my project, but I have to stick to the latest stable.

Thank you all for the comments! I wanted either to find a solution or to confirm that manual positioning is the only way to go, and I ended up doing both.

I have a follow-up question: is it possible to force a 3D model to have constant scale as the camera moves?

Here’s how I did it with a 3D object in the render graph. The attribute self.grabScaleFactor controls the size of self.grabModelNP as a proportionality factor against the distance to the camera. Note that .length() computes a square root every frame; you could maybe work with .lengthSquared() instead to avoid that, though I haven’t verified it (maybe).

        # Distance from the camera to the grab model, in world space
        distToCam = (camera.getPos() - render.getRelativePoint(
            BBGlobalVars.currCoordSysNP, self.grabModelNP.getPos())).length()

        # Scale proportionally to the distance so the model keeps
        # a constant apparent size on screen
        self.grabModelNP.setScale(self.grabScaleFactor * distToCam)

        # Keep the position identical to the selection, for when outside
        # actions like undo/redo move the selected object
        self.grabModelNP.setPos(render, self.selected.getPos(render))
        return task.cont