Wednesday, September 28, 2016

How mobile technology can solve VR’s ‘imagination gap’


When Georges Méliès was shooting the first science-fiction film, 1902’s A Trip to the Moon, he had to invent his own special effects, including an ingenious editing trick to crash-land his space capsule on the face of the moon. His Jules Verne-inspired vision outpaced the technology of the day, so he built what he needed himself.
Directors in later decades exhibited similar ingenuity by using miniatures to produce large-scale special effects, and the 1990s brought the era of computer-generated imagery to filmmaking, paving the way for virtual cinematography as seen in many early 2000s Hollywood action flicks. Breakthroughs like these often accompany the emergence of a new tech medium. Presented with the promise of drawing vivid new worlds, our imaginations run wild, so much so that they outpace what technology allows.
Today, the virtual reality industry is working to bridge this same “imagination gap.” Filmmakers and game developers are just beginning to wrap their minds around the kinds of immersive virtual worlds they can create. Camera manufacturers, animators, and middleware vendors are creating tools to bring those visions to life, and processor architects and VR headset makers are working to fashion the hardware that brings it all together.
VR as a medium has long struggled with this gap between vision and technical capability, but it now has momentum on its side. Devices like the Oculus Rift, PlayStation VR, and HTC Vive give developers new capabilities and support applications as diverse as gaming, live concert streaming, and job training.
No matter where VR storytellers choose to take us, though, the challenges facing hardware makers are universal: Experiences must look stunning, sound real, and allow intuitive interaction between the user and the virtual world.
While tethered VR systems have given early adopters a glimpse of potential virtual worlds, the setups are held back—both literally and figuratively—by their wires. A wired connection to either a high-powered PC or videogame console prevents users from moving freely through virtual worlds without fear of tripping. That’s why wire-free mobile systems are the key to bringing VR to the masses.
Fortunately, companies across the industry are working together to create mobile VR devices that can match the quality of their beefier brethren, even though they must deliver those experiences on mobile processors rather than PC hardware.
These mobile processors have to create full 360-degree spherical views for both the left and right eye, render colors accurately, and drive enough pixels to generate crisp images. Audio must be rendered precisely enough to sustain the illusion of distance, direction, and environment. For interaction, the processor has to fuse inputs from multiple sensors quickly enough to keep audio and video in sync with the user’s every movement, tilt, swivel, and sway.
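To make that interaction requirement concrete, here is a minimal sketch of the per-frame work described above: fusing gyroscope samples into a head-pose quaternion, then rendering a separate view for each eye. Every name here (readImu, renderEye, and so on) is an illustrative placeholder, not any vendor’s actual API, and the gyro-only integration is a simplification; a real tracker also uses the accelerometer or a camera to correct drift.

```cpp
#include <cmath>

// Illustrative types and stubs; a real VR runtime supplies these.
struct Quaternion { float x, y, z, w; };
struct ImuSample  { float gyro[3]; };                 // angular rates, rad/s

ImuSample readImu() { return {{0.0f, 0.0f, 0.0f}}; }  // stub sensor read
void renderEye(int /*eye*/, const Quaternion& /*pose*/) {}  // stub renderer
void submitFrame() {}                                 // stub display hand-off

// Integrate gyro rates into the orientation: q += 0.5 * dt * (q * omega).
Quaternion integrateGyro(const Quaternion& q, const float w[3], float dt) {
    Quaternion dq{
        0.5f * ( q.w * w[0] + q.y * w[2] - q.z * w[1]),
        0.5f * ( q.w * w[1] + q.z * w[0] - q.x * w[2]),
        0.5f * ( q.w * w[2] + q.x * w[1] - q.y * w[0]),
        0.5f * (-q.x * w[0] - q.y * w[1] - q.z * w[2])};
    Quaternion out{q.x + dq.x * dt, q.y + dq.y * dt,
                   q.z + dq.z * dt, q.w + dq.w * dt};
    float n = std::sqrt(out.x * out.x + out.y * out.y +
                        out.z * out.z + out.w * out.w);
    out.x /= n; out.y /= n; out.z /= n; out.w /= n;   // renormalize
    return out;
}

// One frame: fuse the latest sensor data, then draw each eye's view.
void vrFrame(Quaternion& pose, float dt) {
    ImuSample s = readImu();
    pose = integrateGyro(pose, s.gyro, dt);  // head tracking
    renderEye(0, pose);                      // left eye
    renderEye(1, pose);                      // right eye (a real renderer
                                             // offsets it by the IPD)
    submitFrame();                           // hand both views to the display
}
```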
While there’s still work to be done on the device manufacturers’ side (for example, making pixel-dense 4K screens the norm), mobile processors, such as the Qualcomm Snapdragon 820, are already tackling these challenges head-on. Meeting the processing requirements within the power and thermal constraints of a mobile device is no simple task, so Qualcomm took a holistic approach. Designed with VR in mind, the Snapdragon 820 relies on heterogeneous computing, in which purpose-built processing engines work in concert, each handling the tasks it does best.
The Qualcomm Adreno graphics processing unit—the part of the chip responsible for image rendering—is a perfect example: It uses sophisticated rendering techniques to maximize efficiency and performance while reducing latency. At the system level, removing latency bottlenecks requires optimizations across the entire processor and software stack. Reducing motion-to-photon latency is a case in point, since it involves many processing tasks, such as motion detection, visual processing, and updating the display, that span several heterogeneous engines. The Qualcomm Hexagon DSP quickly determines the head pose through on-device motion tracking, the Adreno GPU adjusts the rendered scene, and the display engine seamlessly updates the screen.
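The shape of that pipeline can be sketched in a few lines. A technique common to modern VR runtimes is to sample the head pose again as late as possible and reproject (“timewarp”) the finished image to the newest pose just before scan-out; the function names below are hypothetical stand-ins, not Snapdragon SDK calls.

```cpp
// A simplified motion-to-photon pipeline. Each stage maps onto a different
// engine in a heterogeneous design: pose tracking on the DSP, rendering on
// the GPU, the final update by the display engine. All names are stubs.
struct Pose { float orientation[4]; };

Pose samplePose() { return {}; }    // stub: tracker (DSP) pose read
void renderScene(const Pose&) {}    // stub: GPU draws the eye buffers
void reproject(const Pose&) {}      // stub: warp image to the newest pose
void scanOut() {}                   // stub: display engine updates the panel

void motionToPhotonFrame() {
    Pose early = samplePose();      // pose at frame start drives rendering
    renderScene(early);

    // Sample again as late as possible: the gap between 'early' and 'late'
    // is motion the user made while the GPU was busy. Reprojecting the
    // rendered image to 'late' hides most of that latency.
    Pose late = samplePose();
    reproject(late);
    scanOut();
}
```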
Still, meeting the specs is only half the battle; it’s up to content creators to make the most of this new digital canvas. The good news is they won’t have to start from scratch. The Snapdragon VR SDK and the VR820 reference platform hand developers the tools they need to realize the full potential of the Snapdragon processor. Developers won’t need to write their own head-tracking code, painstakingly create separate left-eye and right-eye views, or tweak their renderings to compensate for the way a headset’s lens optics distort images.
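Lens pre-distortion is a good example of what such an SDK takes off a developer’s plate. A headset’s lenses introduce pincushion distortion, so the renderer pre-warps each eye buffer with an opposing barrel distortion, typically modeled as a radial polynomial. The sketch below uses made-up coefficients; a real SDK ships calibrated per-lens parameters.

```cpp
// Radial pre-distortion of a texture coordinate, centered on the lens axis.
// k1 and k2 are illustrative defaults, not calibration data for any headset.
struct Uv { float u, v; };

Uv preDistort(Uv p, float k1 = 0.22f, float k2 = 0.24f) {
    float r2 = p.u * p.u + p.v * p.v;              // squared radius
    float scale = 1.0f + k1 * r2 + k2 * r2 * r2;   // radial polynomial
    // The barrel warp applied here cancels the pincushion distortion the
    // lens adds, so the eye sees a geometrically correct image.
    return {p.u * scale, p.v * scale};
}
```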
In essence, these tools unburden VR content creators from having to figure out how to realize their visions, the way Georges Méliès did. As device makers continue to release powerful new hardware, VR game developers and filmmakers can instead focus on bringing to life what they see with their mind’s eye, from crash-landing on the moon to dreaming up new worlds.
Qualcomm Adreno, Qualcomm Hexagon DSP, Qualcomm Snapdragon and Qualcomm VR SDK are products of Qualcomm Technologies, Inc.
This article was produced by Qualcomm and not by the Quartz editorial staff.
