Using getPointer instead of getMouse

So, on recent advice, I’ve decided to switch from using “base.mouseWatcher.getMouse()” to using “base.win.getPointer(0)”.

However, there’s a problem: “getPointer” returns values in the range from 0 to the width or height of the window, in pixels, as appropriate. By comparison, “getMouse” returns values in the range from -1 to 1.

Now, my first instinct is to minimise the changes to my code by converting the result of “getPointer” into the range produced by “getMouse”. That shouldn’t be too difficult: subtract half of the relevant window-dimension, then divide by the same, if I’m not much mistaken.
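Something like the following, I imagine (just a sketch, and I’m assuming that the pointer’s origin is at the top-left of the window, with y increasing downwards):

ptr = base.win.get_pointer(0)
if ptr.in_window:
    half_w = base.win.get_x_size() * 0.5
    half_h = base.win.get_y_size() * 0.5
    # Map x from [0, width] to [-1, 1]; flip y so that "up" is
    # positive, matching the convention used by "getMouse".
    mouse_x = (ptr.x - half_w) / half_w
    mouse_y = -(ptr.y - half_h) / half_h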

(Of course, as was pointed out to me in another thread, there’s also the approach of using “getRelativePoint”.)
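As I understand it, that approach might look something like this, using the “pixel2d” node that ShowBase provides. (I’m assuming here that pixel2d places its origin at the top-left of the window, with downward movement being negative z; I may well have a detail wrong.)

from panda3d.core import Point3

ptr = base.win.get_pointer(0)
if ptr.in_window:
    # Express the pointer-position as a point in pixel2d's space...
    pixel_point = Point3(ptr.x, 0, -ptr.y)
    # ...and ask render2d where that point lies in its own space,
    # which spans -1 to 1 on both axes.
    rel = base.render2d.get_relative_point(base.pixel2d, pixel_point)
    mouse_x, mouse_y = rel.x, rel.z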

But I have an uncertainty: Will doing this result in reduced accuracy, thus undermining the smoothness of my various mouse-driven elements, perhaps most saliently mouse-look?

So I suppose that my questions are these: Does this approach incur the problem that I’m worrying about? And if so, what might be done about it? (And if there is a problem, is the “getRelativePoint” approach better than the “subtract-and-divide” approach?)

One thought that occurs to me is to wonder how “getMouse” arrives at coordinates in the range from -1 to 1. Does anyone here know how that’s done? (I’m happy to simply be pointed to the relevant code, if called for.)

Can you simply say what the main problem is that you’re having?

Will converting the results of “getPointer” to the ranges produced by “getMouse” result in a problematic loss of accuracy, and thus perceptible issues in mouse-movement, most especially mouse-look?

If so, what might be done about it?

And, presuming that there is a problem, is the use of “getRelativePoint” to perform the conversion any better than the use of subtraction followed by division?

Finally, how does “getMouse” produce results in the ranges that it does?

I suggest that you keep using getMouse() where it serves, since you seem to know it better. Then, where the problem with getMouse() arises, switch to getPointer(0) with the whole “subtract-and-divide” ( :) ) method, and run some tests to make sure that everything works.

The problem is that I don’t know whether the conversion is likely to be problematic, and I don’t want to change important code in a large project only to receive reports months later that there’s a problem with it. And I’m not sure how I’d manually detect relevant issues of numerical precision.

Hence I’m hoping that those more familiar than I with the elements in question might have some insight!

So what problem are you encountering with getMouse()?

In short, it’s been reported that at low frame-rates the mouse-look feature becomes inconsistent. This is apparently a known issue with “getMouse”, and it was recommended that I switch to “getPointer”.

The original report was here:

And the recommendation was here:

This is how Panda calculates the mouse position as reported by mouseWatcherNode:

So this is exactly equivalent in Python:

ptr = base.win.get_pointer(0)
if ptr.in_window:
    # Window dimensions, in pixels.
    w, h = base.win.size
    # Map x from [0, w] to [-1, 1], and y from [0, h] to [1, -1]
    # (the pointer's y-axis points down; getMouse's points up).
    x_offset = (2 * ptr.x) / w - 1
    y_offset = 1 - (2 * ptr.y) / h

You don’t need to worry about precision issues because the input values are integer values within a small range.
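If you want to reassure yourself, you could run a quick comparison in a task; the two values should agree (to within floating-point precision) whenever the pointer is inside the window:

def compare_mouse_values(task):
    ptr = base.win.get_pointer(0)
    if ptr.in_window and base.mouseWatcherNode.has_mouse():
        w, h = base.win.size
        x_offset = (2 * ptr.x) / w - 1
        y_offset = 1 - (2 * ptr.y) / h
        m = base.mouseWatcherNode.get_mouse()
        # These differences should be vanishingly small.
        print(x_offset - m.x, y_offset - m.y)
    return task.cont

base.taskMgr.add(compare_mouse_values, "compare-mouse")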

Aah, excellent! Thank you very much, rdb: that helps, and sets my mind at rest about performing the conversion. :)

I just want to add that you may want to reconsider normalising based on the window size. Doing so means that the same physical mouse movement will produce a larger rotation when the window is resized to be smaller, because you are considering only the mouse movement’s proportion of the window size.

This might not be a big issue for you, especially if your game usually runs in full screen, though.

Hmm… That’s a reasonable point. Maybe I should normalise for a known “reference size”, then, rather than the window-size.

I could also just use the raw offset, especially since I might end up looking through the various pieces of code that rely on this anyway…
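For instance, just as a rough sketch of the raw-offset idea (with a made-up sensitivity constant, and re-centring the pointer each frame):

SENSITIVITY = 0.1  # degrees of rotation per pixel moved; a hypothetical value

def mouse_look(task):
    ptr = base.win.get_pointer(0)
    if ptr.in_window:
        centre_x = base.win.get_x_size() // 2
        centre_y = base.win.get_y_size() // 2
        # The raw pixel-offset from the window's centre, applied
        # directly, so that a given hand-movement produces the same
        # rotation regardless of window-size.
        dx = ptr.x - centre_x
        dy = ptr.y - centre_y
        base.camera.set_h(base.camera.get_h() - dx * SENSITIVITY)
        base.camera.set_p(base.camera.get_p() - dy * SENSITIVITY)
        # Re-centre the pointer, ready for the next frame.
        base.win.move_pointer(0, centre_x, centre_y)
    return task.cont

base.taskMgr.add(mouse_look, "mouse-look")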