Games were clearly not the focus of the Xbox One reveal event.
Nonetheless, Microsoft still wanted to provide some sense of the improvements made to the hardware, so visiting journalists were treated to a series of so-called “interactive experiences”. The first set of these was designed to showcase the redesigned controller. According to the minder of this particular demonstration, some 40 individual improvements have been made to the 360 controller’s design for the Xbox One version.
Immediately noticeable is the completely redesigned D-pad; while it retains the one-piece construction of its predecessor, each of the four points feels much more independent, clicking satisfyingly when pressed. At the same time, the slightly raised edges at the tips of each point lend themselves well to rolling motions such as those used in Street-Fighter-style fighting games. On first impressions, it appears to strike a happy medium between the D-pads of the current Xbox 360 and PS3 controllers.
The grips and the contours of the controller have been revised, reportedly to accommodate a wider variety of hand sizes. The many join lines visible on the 360 controller have now almost completely vanished, and the unsightly screw holes on the device’s underside are no more. Most notably, though: the batteries have been rotated 90 degrees and integrated into the controller itself, meaning no more bulky, protruding battery pack.
The main focus of this particular demonstration, however, was to serve as an introduction to the new impulse triggers. The traditional rumble motors are still there, fixed within the controller grips that rest in a player’s palms. Now they’re complemented by vibration motors in the triggers, meaning the Xbox One controller can deliver a more comprehensive and more convincing haptic experience. The first demonstration, coordinated via PC, simulated a simple human heartbeat, its double beat switching from left to right. The heightened sensitivity of a human’s fingertips, coupled with the way they rest naturally against the controller’s triggers, truly does make the sensation much more believable. That said, you do have to wonder how much the accompanying on-screen visual cue plays into the whole sensation.
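To give a sense of how such an effect might be authored, here is a minimal sketch of a double-beat heartbeat pattern as timed keyframes of trigger-motor amplitude. Everything here is hypothetical illustration: the keyframe format, timings, and `trigger_amplitudes` function are invented for this example, and bear no relation to Microsoft’s actual haptics API.

```python
# Hypothetical sketch: a "lub-dub" heartbeat as keyframes of
# (start_time_s, duration_s, left_trigger, right_trigger),
# with motor amplitudes in the range 0.0-1.0.
HEARTBEAT = [
    (0.00, 0.10, 0.8, 0.0),  # strong "lub" on the left trigger
    (0.15, 0.08, 0.5, 0.0),  # softer "dub" on the left trigger
    (0.50, 0.10, 0.0, 0.8),  # the pattern then mirrors to the right
    (0.65, 0.08, 0.0, 0.5),
]

def trigger_amplitudes(t, pattern, period=1.0):
    """Return (left, right) trigger amplitudes at time t, looping the pattern."""
    t = t % period
    left = right = 0.0
    for start, duration, l, r in pattern:
        if start <= t < start + duration:
            left, right = max(left, l), max(right, r)
    return left, right
```

A driver loop would sample `trigger_amplitudes` each frame and push the two values to the corresponding trigger motors, producing the left-then-right double beat described above.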
This heartbeat simulation was followed by another five similar activities designed to simulate different experiences. A helicopter demo, in particular, simulated the sensation of a rotor firing up slowly, gathering pace and increasing sharply as it takes off. Also included were simulations for: a car engine igniting and then revving powerfully; a first-person perspective of the user casting a fireball from the palm of their hand; a car wheel bumping and vibrating over different surfaces; and a first-person perspective of the user firing a handgun not entirely unlike Halo’s plasma pistol. The variety of demonstrations on offer did much to highlight the range of sensations that the new impulse triggers can provide, not to mention how much more effective they are than the traditional vibration motors alone.
Next, it was on to our first look at the new-and-improved Kinect sensor. Again, no games were demonstrated, but rather custom-built activities designed to show off the improvements made to the hardware. One of the first things noticeable about the new sensor is that it can operate with users in much closer proximity than the previous version. Its field of view has been increased by some 60%, which enables the independent tracking of up to six users simultaneously (as opposed to the previous Kinect’s two-person limit).
The demonstration kicked off with what our minder called the “3D feed”: a largely black-and-white/greyscale representation of the 3D space in front of the Kinect. Three times as sensitive as its predecessor, Kinect can now detect much more detail, reproducing clearly (albeit with a glossy, milk-like effect) the contours of a user’s face and even the wrinkles on their clothing.
Our minder then switched the device to what he called, simply, the “HD colour feed”: a straight-up, 1080p colour reproduction of everything that the camera can see. This feed reflects the quality of the image that will be utilised by Xbox One’s integrated Skype functionality.
One problem with this feed, however, is that it is largely dependent on the lighting conditions of the environment. Knowing that many users will likely consume their games and other media in the dark, Microsoft has thus implemented an “Active IR” mode. The lights in our demo were switched off entirely to demonstrate this feature, and in near pitch black, Kinect could still reproduce whatever was within its field of view on-screen. Not as detailed as the colour feed, this IR representation resembled the green hues of night-vision goggles. When our minder switched on a flashlight, whatever fell under the beam appeared on-screen in full colour once again.
A four-microphone array runs along the bottom of the unit, and our minder played us a recording of some audio fed to just one of these mics in a typical gaming environment. The sensitive microphone picked up all manner of ambient noise, resulting in a grating, static-heavy blare. We were advised that, if we listened hard enough, we might just hear someone talking. This was followed by playback of the same audio after processing by the new Kinect, which cut out all of the ambient noise and isolated a man issuing voice commands to a game, instructing “Delta Squad” to take cover, for instance.
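Microsoft hasn’t detailed how that processing works; a four-mic array typically allows beamforming and echo cancellation, which are well beyond the scope of a sketch. Purely to illustrate the simplest possible form of the idea, here is a toy noise gate that mutes audio frames whose energy falls below a threshold, letting louder speech-like frames through. The threshold value and frame format are arbitrary assumptions for the example.

```python
import math

def rms(frame):
    """Root-mean-square energy of one frame of audio samples."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def noise_gate(frames, threshold):
    """Silence any frame whose RMS energy is below the threshold."""
    return [frame if rms(frame) >= threshold else [0.0] * len(frame)
            for frame in frames]
```

Run against a mix of quiet ambient hiss and louder speech frames, only the speech frames survive; the real Kinect pipeline achieves a far more sophisticated version of this by combining its four microphone signals.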
The next part of the demonstration showcased the skeletal-tracking capabilities of the new Kinect by inviting a volunteer to stand and move in front of it. Kinect can now identify far more joints, including spine joints, shoulder joints, the tip of a user’s hand and also their thumb. An orientation demo then swapped out this stick-figure representation of the volunteer for a blockier one in order to convey Kinect’s improved 3D detection. The volunteer could move his right leg behind his left leg, for instance, and this was accurately reflected on-screen – a real-time representation in a virtual 3D space.
The “muscle man” demo layered a “realistic, human-based physics model” over the wire-like skeleton model in order to calculate the force generated by a character’s movements. As the volunteer’s weight shifted from his left foot to his right foot, circles underneath each foot grew or shrank in size and changed colour to reflect the shifting balance. White circles appeared on screen when he threw jabs, their size dependent on the force and momentum behind the punch.
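The balance-circle visual can be sketched very simply: given the fraction of body weight on each foot, scale a circle under each foot in proportion to its load. The weight fractions here are stand-in inputs; the actual demo presumably derives them from the tracked skeleton’s joint positions via its physics model, in a way Microsoft hasn’t detailed.

```python
def balance_circles(left_weight, right_weight, max_radius=50.0):
    """Map per-foot weight fractions to circle radii under each foot.

    A foot bearing all of the body's weight gets a circle of max_radius;
    an unloaded foot gets no circle at all.
    """
    total = left_weight + right_weight
    if total == 0:
        return 0.0, 0.0
    return (max_radius * left_weight / total,
            max_radius * right_weight / total)
```

With an even stance both circles are equal; shift fully onto the right foot and the left circle vanishes while the right one reaches full size, matching the behaviour described in the demo.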
Perhaps the most impressive of Kinect’s new features was saved for last: an ability to estimate a user’s heart rate in real time based on “human imperceptible fluctuations” in their face caused by their pulse. This demo also highlighted Kinect’s detection abilities. A box-out homed in on the volunteer’s face, referring to him as “Guest,” since he had not been tracked before (whereas our demonstrator’s name appeared on-screen beneath a close-up of his face when he stepped into view). A list of settings beneath the users’ mugshots toggled on and off depending on what they were doing at that point in real time. For instance, if our volunteer looked at the screen, the “Engaged” indicator read “Yes,” switching instantly to “No” when he looked away. Similarly, Kinect can ascertain whether a user’s mouth is open and whether they’re talking at any given point.
Despite the limitations of this minder-driven demonstration, it’s hard to deny the considerable strides that have been made in the second iteration of this already-promising technology. Unfortunately, there were no demonstrations of refinements to the new Kinect’s cursor accuracy, which proved problematic in version 1.0. The improvements outlined above should surely yield progress in that respect, but we’ll likely find out more in Los Angeles next month.