With Panda having extended its input-device support, I’ve been giving thought to how I might handle this in my key-mapping module.
I think that I see how to handle much of it in the back-end.
As part of that, and relevant to this thread, I currently intend to have it unify “axial” controls, whether they come from a “real” axis (like a thumbstick) or are simulated via keypresses, with the output from the keymapper looking pretty much the same either way. This should mean that I can implement things like movement, whether via either WASD or thumb-stick, without dealing with a mess of input-related code.
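As a rough sketch of the unification I have in mind (all names here are placeholders of my own, not anything from Panda): a single control object reports a float in -1.0 to 1.0, whether that value came from a real axis or from a pair of simulated direction-keys.

```python
class AxialControl:
    """Tracks one axis of movement, fed by either a real axis or two keys."""

    def __init__(self):
        self._value = 0.0
        self._negative_key = False
        self._positive_key = False

    def set_axis(self, value):
        # Called when a real device axis moves; clamp to the expected range.
        self._value = max(-1.0, min(1.0, value))

    def set_key(self, positive, pressed):
        # Called when one of the two simulated direction-keys changes state.
        if positive:
            self._positive_key = pressed
        else:
            self._negative_key = pressed
        self._value = float(self._positive_key) - float(self._negative_key)

    @property
    def value(self):
        return self._value
```

The game-code then just reads `value` each frame, without caring which device produced it.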
The problem, then, is how to handle this in terms of UI. Should I have a separate set of mapping-controls per input device? But then what if someone has an unusual setup that involves using multiple devices? And is the mouse then considered a separate controller, meaning that the standard keyboard-and-mouse setup involves two sets of mapping controls? Perhaps some sort of tabbed mapping-control setup, showing all of the attached devices?
(I’m not very experienced with controllers, and so don’t know how this is usually handled these days. ^^; )
Does anyone have any input (so to speak :P) on this?
Hmm, this is tricky, since the exact use depends on the type of game. For a couch multi-player game, I think the mapping should ideally be interchangeable between gamepads (it shouldn’t forget your mapping if someone picks up a different gamepad). But for a single-player simulator game, a user might have a fancy set-up with separate pedals and joysticks, and may want to hook up a specific button on a specific device to a specific task.
I could see the point to defining some abstract idea of a “mapping set” that, for a local multi-player game, could be configured to correspond to such options as “keyboard and mouse” vs “gamepad” that can be assigned to different players, but for a single-player game could be broader and allow buttons from even more devices to be mapped in the same mapping. (Some games might even use multi-player co-op using different parts of the same keyboard, so there is definitely not a 1:1 association between device and control set.)
Panda has a concept of a “device class” (ie. device.device_class, such as DeviceClass.gamepad or DeviceClass.flight_stick). Panda’s input device layer is intended to largely abstract away the differences between different devices in the same class; so if you create a mapping set for a particular combination of device classes (which could just be “gamepad”) you could reasonably expect it to work for any gamepad. Someone might want to occasionally play a single-player game with a gamepad and sometimes with keyboard/mouse, but not with two gamepads at the same time. But you’ll want to keep the option open of using both a joystick and rudder pedals simultaneously.
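As a rough sketch of such a “mapping set” (plain strings stand in for Panda’s DeviceClass values here, so that the example is self-contained; all the names are illustrative):

```python
class MappingSet:
    """A named set of bindings covering a particular combination of device classes."""

    def __init__(self, name, device_classes):
        self.name = name
        self.device_classes = frozenset(device_classes)
        self.bindings = {}  # action name -> (device class, control name)

    def accepts(self, device_class):
        # Does this set cover devices of the given class?
        return device_class in self.device_classes

# A single-player flight-sim set can span several device classes at once:
flight = MappingSet("flight", {"flight_stick", "rudder_pedals", "keyboard"})
flight.bindings["throttle"] = ("flight_stick", "throttle_axis")

# A couch-multiplayer set stays interchangeable between gamepads:
pad = MappingSet("gamepad", {"gamepad"})
```

Since the set is keyed by device *class* rather than by a specific device, any gamepad can satisfy the “gamepad” set, while the flight set happily mixes stick, pedals and keyboard.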
Wow, that’s a level of complexity that hadn’t even occurred to me! O_O;
Okay, for now I think let me set aside considerations for multiplayer usage–I generally make entirely single-player games. It’s a pity, and it may mean that my key-mapper is less generally-useful than I’d hoped, but ah well. :/
My main uncertainty here is how to present key-mapping to the player. (When I spoke of “mapping-controls”, I meant “UI-items that the user can select and use to bind actions to inputs”, I think.)
My main idea right now is that each input device gets a tab on the key-mapping UI. (I actually have a “TabbedFrame” DirectGUI sub-class that would likely handle this.) So, there would be a keyboard tab; a mouse tab; and, if a gamepad is plugged in, a gamepad tab. But is that a good way to organise this sort of UI? How do other games generally handle this?
In some games the action-mapping UI is just one list of actions; it’s less work.
Depending on whether you have selected “use joypad” or “use keyboard and mouse”, when you want to remap an action it waits for input from that device (a key press, a mouse button, or a joypad button).
I don’t think I get where or what the problem is.
When a player wants to redefine a key, I would expect some sort of pop-up dialog window asking the player to press a key or button for a given action (be it jump, fire, open inventory, etc.). To do this you’d need to listen for base.buttonThrowers and base.deviceButtonThrowers events (as shown in the samples in the gamepad directory).

For axes (analog sticks, triggers, etc.) one would need a task checking for movement on all the axes (at least during configuration; later, only on the axes that are bound to do anything) and sending events if/when movement is detected (or when an axis is pushed to its maximum, in one or both directions). The same logic that listens for base.buttonThrowers events could listen for these, so there’s no difference between a mouse click, a button press, a key press, or a stick movement. This might as well work for mouse-cursor movement too, if the same task watching the controller axes also monitored the mouse cursor.
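Something like this rough sketch for the axis-watching part (names and the threshold are made up; in the real thing the emit callback would be Panda’s messenger):

```python
AXIS_THRESHOLD = 0.3  # ignore small stick drift

def poll_axes(axes, emit):
    """Turn axis movement into button-style events.

    axes: dict of axis name -> current value (-1.0 .. 1.0)
    emit: callback taking an event name, same as a button event would use
    """
    for name, value in axes.items():
        if value > AXIS_THRESHOLD:
            emit(name + "-positive")
        elif value < -AXIS_THRESHOLD:
            emit(name + "-negative")

# During configuration, run this over all axes each frame:
events = []
poll_axes({"left_stick_y": 0.8, "left_stick_x": -0.05}, events.append)
```

That way the same listener that handles key presses also hears stick movement.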
If you use a scheme like in Roaming Ralph, where you have a key-map dictionary and key presses update the values in that dictionary, then just replacing the True/False values with a 0.0–1.0 (or is it -1.0 to +1.0?) floating-point scale would mean that a key/button press sets the value to 1.0 (or to the current value plus a frame delta, if you need acceleration), and an axis movement sets it to whatever the axis value is.
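A minimal sketch of that scheme (the names are made up, not Roaming Ralph’s actual code):

```python
# Roaming-Ralph-style key map, but with floats instead of True/False:
# a key press writes 1.0, an axis writes its own value, and the movement
# code reads the dict without caring which it was.

key_map = {"forward": 0.0, "strafe": 0.0}

def on_key(action, pressed):
    key_map[action] = 1.0 if pressed else 0.0

def on_axis(action, value):
    key_map[action] = value  # typically already in -1.0 .. 1.0

on_key("forward", True)
on_axis("strafe", -0.4)

def move_speed(action, top_speed, dt):
    # Scales smoothly: full speed for a held key, proportional for an axis.
    return key_map[action] * top_speed * dt
```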
Dealing with fallback options may be tricky if something is bound to a button on a device that is not currently connected, or to an axis that the current controller lacks, or if the player-1 and player-2 controllers get mixed up because a USB stick got disconnected and Windows decided to reorder the devices (true story!). But that’s self-inflicted harm on the player’s side and can’t be helped.
Hmm… I’ve considered something like that, I think–but then what about cases in which the player wants to use more than one device (say keyboard and joystick), or has an accessibility-related setup? It may be less work for me, but it may also make my game less accessible… :/
The problem isn’t in the back-end logic–I think that I have a reasonably solid idea of what to do there. The problem is on the user-facing side, the UI.
For example, consider four-directional movement:
A user with a keyboard might use WASD. This calls for four button-assignments, either as four separate bindings, or as a combined binding that asks for four button-presses.
A user preferring a gamepad might use the thumbstick. This calls for just one assignment, with just one binding.
But another user might prefer the gamepad, but want to use the D-pad for movement. In that case, we’re back to four bindings, or a single “use the D-pad” binding.
Thus I seem to have an inconsistent interface: is there one UI-element for binding movement, or are there four? I could handle this by giving each device a tab, but perhaps that’s unwieldy, especially for users with many devices. (And how do I know how many inputs a given device should have, especially once we start considering accessibility for people without the standard interface devices…?)
I’m aware of it, but I’ll confess that I don’t think that I’d looked closely at it.
Having just done so now, I see that you have it require a separate input for each action, even axial inputs. So a user wanting to use the thumbstick to move the character is required to input an axis for each of left, right, forward, and back, rather than just specifying that the stick should be used for movement.
It’s straightforward–and it’s flexible, I will say. That said, will users perhaps find it inconvenient? And what do games usually do these days?
(Thank you for pointing me to it, come to that! ^_^)
I do notice that, in the 1.10.0 version, at least (I haven’t downloaded a more-recent set of samples, if there is one), the names given to some of the axes seem to be incorrect. For example, I think that I got something like “Right trigger” for the right thumbstick y-up axis.
Sure! I’m not sure that I have all of the information, at least not with confidence (it looks as though I may have discarded the packaging for the controller… :/), but I intend to message you what I have.
(And that’s a neat little testing-tool, by the way. ^_^)
The game could read both devices; some games do that, so you could move with the keyboard and use the gamepad buttons.
But that’s a very weird way to play a game, and it makes mixing keyboard, mouse, and gamepad difficult. LOL
I can only advise you not to over-complicate things; this should be kept simple, as most games keep it.
(At the same time, someone could want to use a virtual-reality headset and a steering wheel…)
For the sake of experiment, I could implement control of the game from a MIDI keyboard, but the question would be whether that makes the game any more accessible or manageable.
I have never used a gamepad on a PC. However, if it is possible to find out the type of an input event, you could simply separate the handling: if the event comes from the keyboard, handle it with one class; if from a gamepad, with another.
The solution is to reduce the difference. In fact, the mouse and keyboard are already different input devices, yet nothing prevents us from implementing looking around from the keyboard. The question is how convenient it is.
I think for a single-player game, you should focus on organising the UI by action, and not by device. My first thought when changing a mapping in a game is “where is the jump action” and not “do I want to change a keyboard or joystick mapping”.
You can then let the player define multiple different mappings (from potentially different devices) for the same control.
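A minimal sketch of such an action-centred mapping (the names are illustrative, not Panda API): each action owns a list of bindings from potentially different devices, so the UI can show, say, “Jump: space, face_a” on a single row.

```python
from collections import defaultdict

mappings = defaultdict(list)  # action name -> list of (device, control)

def bind(action, device, control):
    # Add another binding for this action, ignoring exact duplicates.
    if (device, control) not in mappings[action]:
        mappings[action].append((device, control))

bind("jump", "keyboard", "space")
bind("jump", "gamepad", "face_a")

def describe(action):
    # What the mapping UI would display on the action's row.
    return ", ".join(control for _device, control in mappings[action])
```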
I’m… not quite sure of what you’re saying here, I’m afraid. I think that you’re in agreement with rdb, just below your post? ^^;
Hmm… That makes sense, I think.
It does result in cases such as I described above, in which the player is asked to map the thumbstick four times over (once each for “forwards”, “backwards”, “strafe left”, and “strafe right”)–but I suppose that doing so does mean that odd setups are more likely to be supported.
I’ll have to think about handling multiple mappings per action–I fear that I might be squeezing a bit much into the UI that I have at the moment–although that could perhaps be reviewed. Another thought might be to just support multiple layouts, including custom user-defined layouts, each with one mapping per action…
You could say that you can map each “axis control” (horizontal move vs vertical move) to either an axis on an input device or to a pair of buttons in Panda. This means you would have to distinguish between these “axis controls” and regular button-style inputs.
Alternatively, you could keep them separate but mark two controls as being interlinked, so that when you bind a vertical gamepad axis to eg. “move forward” it automatically binds the opposite direction to “move backward”, whereas it wouldn’t do this if you bound a button press to the same control. This means you wouldn’t need to treat the GUI for these cases separately, and it would also save you from having to handle the awkward case where someone tries to bind two different directions of the same axis to different inputs.
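Roughly like this (a sketch with made-up names, not real Panda API): binding an axis to one control of a linked pair also binds the opposite direction of that axis to the partner, while a button binding leaves the partner alone.

```python
# Which controls form linked opposite-direction pairs:
LINKED = {"move forward": "move backward", "move backward": "move forward"}

bindings = {}  # control name -> (kind, source)

def bind(control, kind, source):
    bindings[control] = (kind, source)
    partner = LINKED.get(control)
    if partner and kind == "axis":
        # Bind the same axis, but in the opposite direction, to the partner.
        flipped = source[1:] if source.startswith("-") else "-" + source
        bindings[partner] = (kind, flipped)

bind("move forward", "axis", "left_stick_y")  # also binds "move backward"
bind("jump", "button", "face_a")              # no partner involved
```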
This–if I’m reading you correctly–is something that I already have in mind, as it would allow me to respond to such controls the same way in my game-code, without checking which is in use.
It occurs to me that there is a counter-argument to having sets of buttons mapped in a single UI-interaction: it means that if the player makes a mistake, they’re required to re-map all of the linked buttons, not just the erroneous one.
That would simplify matters somewhat. Conversely, it occurs to me that it might be an issue in accessibility cases in which mapping the axes separately is desirable, should such cases exist.
I’m going to think about this stuff for a while. Right now I’m leaning towards simply having the player map each input individually, but I haven’t made a final decision yet.
Thank you, everyone who has responded here, for your input! It’s helped a lot, I do believe.