Target Picking


I’ve been mulling over a targeting system, but I’m not sure how to implement it. Normally you can get a target by using extrude to convert a 2D screen position into a 3D ray and then ray-casting along it. But that only works if the ray hits an object (in my case, Bullet objects).

What I’d like to do is get the position of my cursor in 3D space even when it’s not over a Bullet object, and then move my physics object towards that location by applying forces. I thought about adding a Bullet object in line with the cursor, as a sort of cursor in 3D space, so that wherever I clicked I would hit the object and get the position that way with a ray test, but that seems convoluted.

Any suggestions would be appreciated.


If I understand your scenario correctly, the problem is that, if there’s nothing for the ray to intersect, then there is no such point: a screen-point doesn’t correspond to a single point in 3D space, but to a line, and thus to a theoretically-infinite number of points.

Of course, if you have level collision geometry then you should be able to simply include that in your ray-intersection and use the resulting point.

If not, however, then you can perhaps use the camera vector (acquired via the “extrude” method, being the result of the “far-point” minus the “near-point”): simply use that vector (presumably normalised and scaled as appropriate) as the force to apply.
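In rough code terms, the idea is something like the following sketch (using a plain stand-in struct rather than Panda’s actual vector class, so the names here are illustrative rather than Panda API):

```cpp
#include <cmath>

// Plain stand-in for Panda's LVector3, just to illustrate the maths.
struct Vec3 {
  double x, y, z;
};

// Direction from the near-point to the far-point, normalised and then
// scaled to the desired force magnitude.
Vec3 aim_force(Vec3 nearPoint, Vec3 farPoint, double strength) {
  Vec3 d = {farPoint.x - nearPoint.x,
            farPoint.y - nearPoint.y,
            farPoint.z - nearPoint.z};
  double len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
  if (len == 0.0) {
    return {0.0, 0.0, 0.0};  // degenerate case; no sensible direction
  }
  return {d.x / len * strength,
          d.y / len * strength,
          d.z / len * strength};
}
```

The resulting vector always has length `strength`, regardless of how far apart the two extruded points are.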

I’d prefer the second option; the reason I thought that wouldn’t work is that the “to” value is so high. So I do this:

LPoint2 mousePoint = Application::GetWindow()->get_mouse();
LPoint3 from(0, 0, 0);
LPoint3 to(0, 0, 0);

PT(Lens) lens = window->get_camera(0)->get_lens();
bool result = lens->extrude(mousePoint, from, to);

LPoint3 fromRel = window->get_render().get_relative_point(camera_, from);
LPoint3 toRel = window->get_render().get_relative_point(camera_, to);

LPoint3 normTo = toRel - fromRel;
printf("normTo: x %f y: %f z: %f\n", normTo.get_x(), normTo.get_y(), normTo.get_z());

But the values are

normTo:x -32420.667969 y: 100521.140625 z: 27113.095703

I guess the trick is to cap that value to something sensible.


No, just normalise your vector, then multiply to the intended length, as I suggested previously.

(If you don’t know what “normalisation” means in this context, it simply reduces the length of your vector to one, while leaving the direction unchanged (aside from potential numerical imprecision, of course), I believe.)

(The value is initially very high because, if I recall correctly, the “far point” is on the “far-plane” of your camera and the “near point” on the “near-plane”, which are generally very far apart.)

Since Panda uses US spelling, the relevant method for normalisation is “Vec3.normalize()” (note the “z” at the end).
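For illustration, a normalise of this kind behaves roughly as in the following plain-C++ sketch (a stand-in, not Panda’s actual implementation; as with Panda’s version, I believe, it returns false when given a zero-length vector):

```cpp
#include <cmath>

// Plain stand-in for Panda's vector type, to illustrate normalisation.
struct Vec3 {
  double x, y, z;

  double length() const { return std::sqrt(x * x + y * y + z * z); }

  // Scales the vector to length one in place, leaving its direction
  // unchanged; returns false if the vector has zero length.
  bool normalize() {
    double len = length();
    if (len == 0.0) {
      return false;
    }
    x /= len;
    y /= len;
    z /= len;
    return true;
  }
};
```

After a successful normalise, multiplying by the intended magnitude gives a vector of exactly that length, pointing the same way as before.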

Thanks Thaumaturge. I didn’t really know what you meant by normalisation, or that Panda had a function to do it, so that will save me some head-scratching. I follow what you mean now: I basically reset the vector’s length while keeping its direction, then multiply by the power I want, which sends me in the right direction at a designated speed.


Hmm, I’m getting no joy with that. Maybe I’m doing something wrong here:

LPoint2 mousePoint = window->get_mouse();
LPoint3 from(0, 0, 0);
LPoint3 to(0, 0, 0);

PT(Lens) lens = window->get_camera(0)->get_lens();
bool result = lens->extrude(mousePoint, from, to);

LPoint3 fromRel = window->get_render().get_relative_point(camera_, from);
LPoint3 toRel = window->get_render().get_relative_point(camera_, to);
LPoint3 normTo = toRel - fromRel;
bool isNormalized = normTo.normalize();

Then I just apply the velocity to my Bullet object on my server. The behaviour I get is odd, though: the object always goes in the same direction regardless of where the camera points. I can see that it does move towards the cursor if the cursor is to the left, but not if it’s to the right, so something’s not right there. I might look at the geometry method of doing it; I see that they do it that way in the Panda3D beginner’s guide (but with Panda’s built-in physics).


Hmm… I’m not familiar with Panda’s C++ side, so I may be missing something in the differences between that and the Python side.

That said, what is “camera_”? I note, for one thing, that you’re using the result of “window->get_camera(0)” when getting the lens for the purposes of extrusion, but you then find the near- and far- points relative to the NodePath “camera_”…

If you’re providing the wrong NodePath to “get_relative_point”, you might well get results that don’t match your intentions.

By the way, as a side-note, I think that you probably don’t need to normalise your near- and far-points before finding the direction vector (which you call “normTo”); it should be enough to simply normalise “normTo”.

Yeah, camera_ is just a NodePath to the camera group:

NodePath camera_ = window->get_camera_group();

With get_relative_point() I need a NodePath as the first parameter, that’s why I don’t use Lens. Unless I’m missing a step there.

LPoint3 get_relative_point(const NodePath &other, const LVecBase3 &point) const;

I checked and using get_camera(0) gives the same results.


Ah, fair enough.

No, you’re not missing a step–but I didn’t suggest using Lens, I suggested using the result of “window->getCamera(0)”, which is presumably a camera.

Fair enough; I’ll confess that it was a bit of a shot in the dark. :confused:

Hum… I’m afraid that only two other ideas come to mind right now, both again shots in the dark:

  1. Have you tried printing out “mousePoint”? Perhaps something odd is happening to your mouse-coordinates. (The coordinates should lie between -1 and 1 in both dimensions.)

  2. Perhaps the problem isn’t here, but in how you’re applying your forces to the object.

Out of curiosity, what is “the geometry method of doing it”?

Ah ok, sorry I misunderstood there.

It could be this. I’ve set the mouse to relative mode and I have a task to recentre the mouse for my FPS view:

MouseData md = window->get_graphics_window()->get_pointer(0);
mouseX_ = md.get_x();
mouseY_ = md.get_y();

if (window->get_graphics_window()->move_pointer(0, centreX_, centreY_)) {
  // do mouse stuff
}

  2. Perhaps the problem isn’t here, but in how you’re applying your forces to the object.

This is what I do:


linearVal is just the result of the extrude earlier (normTo); ContactPoint is zeroed for now.

I was going to add a target plane in Bullet, extended from the camera/character controller, so that I can fire a ray-test against it and use the result as a vector.


But have you actually printed out your mouse-coordinates in your targeting code, or examined them in a debugger? Before guessing that the problem lies with them, let’s check; if it does, then we proceed; if not, we look elsewhere rather than attempting to fix something that’s working as expected.

Nevertheless, your re-centring task could well be the source of the problem: if the re-centring code is executed before the code that handles your targeting, then you might indeed see little to no effect as a result of the mouse-cursor having been moved to the centre of the screen, and thus being either zero or near to zero. Why it might still respond somewhat when moving the mouse to the left I’m not sure, however.

But hold on–how was your targeting supposed to work in the first place, then? If you’re forcing the mouse cursor to the centre of the screen, then how can you move it to some point on the screen in order to direct the object that we’ve been discussing? o_0

Does the re-centring only happen under certain conditions?

On another point, you mention that you’ve set your mouse cursor to “relative mode”; do you mean that you’ve, at some point, done something along the lines of the following?
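Something along these lines, perhaps (a sketch of the usual `WindowProperties` request, written from memory, so the details may differ slightly):

```python
from panda3d.core import WindowProperties

# Ask the window to switch the pointer to "relative" mode, in which
# the OS reports movement deltas rather than absolute positions.
# ("base" here is ShowBase's usual global.)
props = WindowProperties()
props.setMouseMode(WindowProperties.M_relative)
base.win.requestProperties(props)
```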


(The above code is in Python; I imagine that the C++ version is similar, but with appropriate use of arrows and underscores, and fewer capital letters.)

If so, then I’m not sure that it’s a good idea: for one thing, I believe that “relative mode” doesn’t work under Windows, and for another, I’m really not sure of what would result from doing both that and manual re-centring…

All that said, am I correct in guessing that your FPS code works? If so, I see that you store the mouse x- and y-coordinates (in the variables “mouseX_” and “mouseY_”, respectively) before you recentre your mouse cursor; why not just make those available to your targeting code and use them there, instead of fetching the mouse-position again as you seem to currently be doing?
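(One caveat if you do reuse them: if I recall correctly, get_pointer() reports window pixel coordinates, with the origin at the top-left and y increasing downwards, while extrude() expects the [-1, 1] range with y increasing upwards, so a small conversion along these lines would presumably be called for:)

```cpp
// Convert a pointer position in window pixels (origin top-left, y down)
// into the [-1, 1] range that Lens::extrude() expects (origin at the
// centre of the screen, y up). "FilmPoint" is just an illustrative name.
struct FilmPoint {
  double x, y;
};

FilmPoint pixels_to_film(double px, double py, double width, double height) {
  FilmPoint p;
  p.x = 2.0 * px / width - 1.0;   // left edge -> -1, right edge -> 1
  p.y = 1.0 - 2.0 * py / height;  // top edge -> 1, bottom edge -> -1
  return p;
}
```

That is, the pixel origin (0, 0) maps to (-1, 1), and the centre of the window maps to (0, 0).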

Changing the mouse settings (relative mode, etc.) didn’t affect anything, though according to the manual they don’t do anything on Windows, so that’s not too surprising.

When I log mousePoint as you suggested, it’s always (1, 1). I assume that it should instead be varying between (1, 1) and (-1, -1).


Still, it might be safer to take out any such settings that you might have, just in case. At best (presumably) it’s doing nothing, and thus there’s little reason to keep it, it seems to me.

Ah, that’s likely to be our issue.

(I’m a little mystified that it’s always at (1, 1), rather than (0, 0). This seems to suggest that the mouse is always in one corner of the screen–are you sure that the centre-coordinates to which you reset the mouse in your FPS code are correct?)

But I return to one of my questions above: how is your targeting code supposed to work? You’re resetting your mouse-position to the centre of the screen, so how can you move it to direct the object in your targeting code?

I’m asking because the answer may affect the solution to your problem: the better I understand what you’re trying to do, what behaviour you’re trying to create, the more likely I am to provide a solution that does what you want.

Sorry, I replied before, but somehow my reply came after yours. Weird. Anyway, I checked mousePoint and it’s always (1, 1). That doesn’t seem right.

The re-centring code is executed as an async task, so it’s always running. I then get the difference and move my character controller using angular velocity. The camera is reparented to the character controller to give me an FPS view.

I’m not sure :smiley: I had the FPS code already, and it works well; originally I wasn’t using a targeting system, then decided that I should implement one. I was going to have the player control the forces directly, but that’s too cumbersome and not intuitive enough. At the moment I check whether the pointer is offset from centreX/Y in either direction and move my character controller with angular velocity accordingly. The camera is attached to the character controller, so it rotates with it.

It runs as an asynchronous task, so it’s running constantly.

Yes, you might have a point. I didn’t realise it didn’t affect Windows until I read the API docs yesterday.

Yes it works ok. I’ve made a quick video showing it pulling to the right but not the left when I rotate in either direction.

I should re-use those variables yes, good point.


Ahh, fair enough, and thank you for the replies. :slight_smile:

Hmm… Thinking about this a bit more, and given that you’re using a first-person perspective, I’m not sure that the vector-based approach is a wonderful idea: I imagine that it would be quite easy to end up with the directed object outside of the player’s view, and thus awkward to control.

Instead, why not simply attach a NodePath to your camera with a large y-value (remembering that, as a child of the camera, its coordinates are relative to the camera)? Doing so means that the object is always being directed towards a point near the centre of the screen, and thus the object should be more likely to stay in view. As the player turns, the object is directed. This also avoids the problem of the mouse-coordinates.
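To sketch the underlying idea in plain C++ (illustrative maths only, ignoring pitch for simplicity; in Panda itself you would simply parent a node to the camera and ask for its position relative to render):

```cpp
#include <cmath>

// Plain stand-in for Panda's vector type; illustrative only.
struct Vec3 {
  double x, y, z;
};

// World-space position of a point placed `depth` units in front of the
// camera. This uses Panda's default convention: +Y is "forward", and
// heading is a rotation about +Z. Pitch is ignored here for simplicity.
Vec3 world_target(Vec3 camPos, double headingDegrees, double depth) {
  const double kPi = 3.14159265358979323846;
  double h = headingDegrees * kPi / 180.0;
  // At heading 0 the camera faces +Y; a positive heading turns it towards -X.
  Vec3 forward = {-std::sin(h), std::cos(h), 0.0};
  return {camPos.x + forward.x * depth,
          camPos.y + forward.y * depth,
          camPos.z + forward.z * depth};
}
```

With the world-space target in hand, the force direction is then just the normalised vector from the object to that point; parenting a node to the camera lets Panda do this bookkeeping for you as the player turns.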

(I believe that something similar can be done using extrude, by the way, but using a NodePath seems simpler.)

Yes, this sounds much better. That’s kind of what I was thinking of with the, ahem, “geometry method”, but with the addition of a collision plane and a ray-test; that’s basically an over-complicated way of doing it, though, and your method sounds more elegant.

Thanks for your help on this, much appreciated.