The recent announcement by Microsoft that they will be selling the Xbox One without the bundled Kinect has sparked mixed reactions from critics and developers. From my perspective, Microsoft seem to be accepting something that I’ve been thinking about for a while - the problems with waggle and touch controls for gaming (which I’ll collectively refer to as “waveprod” from here on).
I should point out at this juncture that I think both motion tracking technology and modern capacitive touch screens are amazing advancements in consumer tech. I attended a lecture by Professor Chris Bishop a couple of years ago on the science behind Kinect and it’s truly fascinating stuff. The thing is, a lot of his own enthusiasm for the product seemed to come from applications outside of gaming, such as helping doctors control interfaces in sterile environments.
I’m also not trying to say that these types of interface have no place in the gaming world. There have been some excellent applications of the technologies thus far, and I’ll touch on how some of these implementations overcome the issues with waveprod. Unfortunately, there are also some oft-repeated mistakes made with waveprod implementations in games, quite often as a result of lazily porting an aspect of game design to mobile platforms. I’m going to touch on two different sides of the waveprod problem – the connection to the game, and the connection to the player.
It’s all about intent
The goal of gaming interfaces should be to blur the line between man and machine, translating human movement into on-screen play as seamlessly as possible. If I’m playing something like Titanfall (which I have been doing far too much) and want to leap for a wall to my left, I should be able to think that, have my muscles respond with a familiar movement, and be guaranteed the same leap every time. This is why PC FPS players hate mouse acceleration – they want to know that every time they plan to move their aim by a certain distance, it will move exactly that distance, regardless of speed.
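To make the acceleration complaint concrete, here’s a minimal sketch of the two mappings – the sensitivity and acceleration values are made up for illustration, not taken from any real driver. With a linear curve, identical hand travel always produces identical aim movement; with acceleration, the result also depends on how fast the hand was moving.

```python
# Hypothetical sketch: linear vs accelerated mouse-to-aim mappings.

def linear(counts: float, sensitivity: float = 2.0) -> float:
    """Linear mapping: on-screen movement depends only on distance moved."""
    return counts * sensitivity

def accelerated(counts: float, speed: float, sensitivity: float = 2.0,
                accel: float = 0.05) -> float:
    """Accelerated mapping: the same physical distance lands differently
    depending on how fast the mouse was moving at the time."""
    return counts * sensitivity * (1.0 + accel * speed)

# The same 100-count flick, performed slowly and then quickly:
slow = accelerated(100, speed=5)
fast = accelerated(100, speed=50)

print(linear(100))  # 200.0 - always, regardless of speed
print(slow, fast)   # different results for identical hand travel
```

The player’s muscle memory can only learn the linear mapping; the accelerated one breaks the guarantee that a planned movement translates into the same aim every time.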
Waveprod interfaces introduce a margin of error to this interpretation of intent. For touch this is fairly simple to articulate: fingers big, pixels small. Motion control (and touch, to some extent) relies heavily on gesture interpretation: without a 100% guarantee of accuracy, we have our error margin.
Now the errors of waveprod aren’t a problem on their own – it’s possible to design around them and so reduce the impact on the gamer. The issue is that there are far too many games out there that don’t do this. I’ll ignore the wealth of lazy ports that try to shoehorn in touch controls without making any gameplay concessions to account for the radical change in interface. Bad ports are bad ports, whether they fail to use new control systems effectively or deliver an awful UI experience (*cough* consoles ported to PC *cough*). So what exactly am I referring to? I’ll give two examples.
The first is the unforgiving game that penalises a player for anything short of a perfect approach. Now, I’m not generally keen on this sort of game myself, despite a recent addiction to Rayman Origins, which is particularly unforgiving. Of course, Rayman Origins (on the Vita at least) lets me play with perfect intent translation through physical controls, and I can see how people enjoy such games. One particularly bad example of this penalty was a Sokoban clone that would only award full points for completing a level in perfect time. Now, this would be fine if it were just measuring player intent – I can work out the puzzle, plot the quickest route, and try to enact it. It’s the last bit that caused a problem – the touch interface required moving by picking a target square on an isometric grid. With a large grid and a small screen, there was a strong chance that a move would not go quite according to plan. Trying to achieve a full score therefore repeatedly wastes the player’s time through no fault of their own. It’s offensive to the gamer.
The other type of problem game is the one that has a great concept for using waveprod, but doesn’t quite deliver. For this I’m looking squarely at Fable: The Journey. The gods only know how this has over 60% on Metacritic. In Fable: The Journey you take a first-person role in the franchise, using the Kinect to control movement through the on-rails world. Imagine that! Being able to hurl fireballs from the comfort of your sofa! Sounds good, huh? Unfortunately, in practice it’s really not. I tried this both at the Eurogamer Expo before it was released, and on my own 360 in the hope that a controlled environment might help. It really doesn’t – I couldn’t for the life of me get it to interpret which direction I wanted a fireball to fly in, and because of that I gave up as soon as spells became available. Given that up until then I’d only been steering a horse by miming holding reins, I wasn’t all that happy! It’s a perfect example of something that must have sounded fantastic on paper, but whose makers should have made sure it could deliver before going too far.
So how do games utilise waveprod well without the impact of errors? The best example I can think of is Angry Birds. Angry Birds still relies upon a high level of accuracy – you can subtly vary a bird’s trajectory to great effect. The trick is that Angry Birds allows you to verify that your intent is being properly picked up. Rather than having you hit a single point to select angle and power, Rovio make use of what a touch interface does well: you can touch any point and then drag, with aim measured relative to your initial touch. Along with this you’re given visual feedback in the form of the bird’s projected trajectory before you lift your finger. Thus, prior to releasing the touch screen, you know that your intent will be properly translated into action.
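The relative-drag pattern described above can be sketched roughly as follows – the function names, coordinates, and physics constants here are all illustrative assumptions, not Rovio’s actual implementation. The key idea is that aim is measured from wherever the finger first lands, so “fingers big, pixels small” stops mattering, and a trajectory preview lets the player confirm intent before releasing.

```python
# Hypothetical sketch of relative drag-to-aim with a trajectory preview.
import math

def aim_from_drag(touch_start, touch_now, max_pull=150.0):
    """Return (angle_radians, power 0..1) measured relative to the
    initial touch point, not a fixed on-screen location."""
    dx = touch_start[0] - touch_now[0]   # drag back to fire forward
    dy = touch_start[1] - touch_now[1]
    angle = math.atan2(dy, dx)
    power = min(math.hypot(dx, dy) / max_pull, 1.0)
    return angle, power

def preview_trajectory(angle, power, steps=5, speed=300.0, gravity=500.0):
    """Sample points along the projectile arc so the player can verify
    their intent before lifting their finger."""
    vx = math.cos(angle) * speed * power
    vy = math.sin(angle) * speed * power
    return [(vx * t, vy * t - 0.5 * gravity * t * t)
            for t in (i * 0.1 for i in range(1, steps + 1))]

# Finger lands at (200, 400), then drags down-right to (120, 460):
angle, power = aim_from_drag((200, 400), (120, 460))
print(round(math.degrees(angle)), round(power, 2))  # -37 0.67
```

Because the drag is relative, an imprecise first touch costs the player nothing – only the subsequent drag, which they can watch and correct via the preview, determines the shot.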
Hooked on a Feeling? No? Right … moving on. I mentioned the connection to the player earlier. This is a pretty simple one really: touch and motion controls tend to have very little in the way of responsive feedback. OK, so there may be the occasional vibration here and there, but for the most part you’re waving in thin air or dragging against unresponsive glass. Humans are bloody amazing, and one thing that makes us amazing is our sense of touch. With a sprung analogue control stick, I know how far I’ve moved the stick, what the limits of movement are, and which direction I’ve moved it in. Games have attempted to replicate the controller layout on touchscreens, but without that haptic feedback you just can’t have the same level of detailed control. As a result, something in the game design is going to have to give to make allowances for mistakes.
Motion controls are even worse for this. Given that most good implementations of motion controls involve interaction with physical objects (bowling, tennis, sword fights), it breaks the illusion when those interactions give no physical feedback. Suddenly there’s nothing about the motion control itself that is necessary to the game, and you realise it’s just a novelty.
This is the one area that I don’t think can be solved without combining proper physical controllers with waveprod. A hybrid system allows for the best of all worlds, and hey, maybe that’s what the PS4 does. But I doubt it.