At this year’s F8, Oculus Head of Product Management Maria Fernandez Guajardo gave attendees a sneak peek at Half Dome—a prototype headset that incorporates earlier research on varifocal displays and some preliminary perceptual testbed findings with a wider field of view in an ergonomic form factor. Earlier this week, Facebook Reality Labs Director of Computational Imaging Douglas Lanman delivered a keynote at SID Display Week in Los Angeles, where he explained how these varifocal technologies matured over a three-year period. Today, we’re excited to share that story with the Oculus community.
“I started at Facebook Reality Labs (FRL), then Oculus Research, in June of 2014,” says Lanman. “I’d worked on VR/AR before, but it took about a year to help recruit the team and select a technical problem that could have significant real-world impact—after all, that’s what FRL was founded for: to tackle hard, industry-changing problems.”
The small and scrappy initial team looked into multiple ways VR displays could be improved, from greatly enhanced resolutions and higher dynamic ranges to more compact form factors. Soon, however, solving the vergence-accommodation conflict stood out as the right challenge to tackle: today's headsets render imagery at a single fixed focal distance, so the eyes' focus (accommodation) cannot match the distance at which they converge on objects in the virtual scene.
“It’s something I’d worked on briefly before but never had the full set of collaborators to practically resolve,” Lanman explains. “Prior solutions always made some unacceptable trade-off in resolution, image quality, or system bulk. One day, I took a look around the growing lab and saw we had quickly assembled the best team in the world to make meaningful progress on varifocal displays, including experts in mechanical engineering, computer vision, and perception science. As a researcher, this was the first time I looked at a problem and realized lack of talent, time, or resources wouldn’t be the deciding factor.”
Together with Mechanical Engineering Lead Ryan Ebert, Lanman pitched the idea of building a team to construct a groundbreaking varifocal VR headset, one that would work seamlessly with both eye tracking and refined optics to advance the clarity and comfort of VR.
“Fortunately, it wasn’t all a bluff,” Lanman says. “Within a couple months, Ryan, Alex Fix, our lead computer vision scientist on the effort, and I delivered our first working prototype. We’ve been at it ever since, methodically refining the system together with a greatly expanded set of collaborators.”
Given that this was a large research program spanning 40 contributors across both Oculus and Facebook, there were complex challenges to face—all of which the team took in stride. “The synergy on this team is like nothing else I have experienced before,” notes Technical Program Manager Kim Corson. “We’re fortunate to have a team full of people with varying technical backgrounds who continuously go above and beyond their individual roles to ensure the team as a whole is moving towards our longer-term goals.”
“Successfully moving fast requires collaboration within the mechanical discipline—and certainly the extended multi-disciplinary team,” adds Ebert. Thanks to several talented mechanical engineers, a lot of benchtop mechanism prototyping, precision machining under tight timelines by Model Maker Steve Charnley and his team, and some 3D printing, the team was able to rapidly iterate on the path to Half Dome. Mechanical Engineer Joel Hegland led the eye tracking mechanical design in parallel, with its own series of prototypes, while the wide field of view optics were explored in tandem with optical prototypes built by Mechanical Engineers Selso Luanava and Stephen Choi.
“While eye tracking isn’t a new technology, eye tracking in a wide-FOV varifocal VR device is,” notes Eye Tracking Research Lead Robert Cavin. “This integration presented a host of new challenges, as the moving screens mean the space normally used for eye tracking components is greatly reduced. We worked closely with the optics team to create new means of folding light, allowing us to image the eye without obstructing the screen movement or the display view. The design also places large lenses very close to the eye, the oblique illumination of which caused poor contrast and artifacts. Solving that required novel image processing algorithms, new calibration strategies, and, of course, lots of time and effort from a great team of researchers and engineers.”
In addition to working on eye tracking and taking the lead on calibration, Research Scientist Alexander Fix served as the project’s overall software architect, a role that sits at the intersection of every subsystem, with interesting computer vision problems waiting to be solved. “Software is the glue that keeps everything together,” he explains. “Without it, this headset is just an impressive-looking brick, so architecting the software means understanding how all the mechanical, optical, and electrical systems are supposed to work together—and making sure we control them all to get the desired outcome.”
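The post doesn't describe FRL's actual control software, but the core varifocal mapping it alludes to—estimating the fixation depth from binocular gaze vergence, then repositioning the screen so the virtual image lands at that depth—can be sketched with simple geometry and the thin-lens equation. The function names, parameter values, and single-lens model below are illustrative assumptions, not Half Dome's implementation:

```python
import math

def fixation_depth(ipd_m: float, vergence_rad: float) -> float:
    """Depth of the binocular fixation point, in meters.

    Geometry (assumed): the two gaze rays, separated by the
    interpupillary distance (IPD), converge at depth d, so
    tan(vergence / 2) = (ipd / 2) / d.
    """
    return (ipd_m / 2.0) / math.tan(vergence_rad / 2.0)

def screen_offset(focal_len_m: float, virtual_dist_m: float) -> float:
    """Screen-to-lens distance that places the lens's virtual image
    at virtual_dist_m, from the thin-lens magnifier relation
    1 / s = 1 / f + 1 / v (virtual image on the screen's side).
    """
    return 1.0 / (1.0 / focal_len_m + 1.0 / virtual_dist_m)

# Hypothetical parameters: 64 mm IPD, 40 mm lens focal length.
depth = fixation_depth(0.064, 2 * math.atan(0.032 / 1.0))  # gaze at 1 m
screen = screen_offset(0.04, depth)
```

In this toy model the screen always sits just inside the lens's focal length: as the wearer looks at more distant objects, `screen_offset` approaches `focal_len_m`, and the actuators would drive the display over only a few millimeters of travel.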
To determine the full system’s requirements, the goal was to over-engineer at first and then narrow down to what an immersive—and comfortable—visual experience demands. “Some of the requirements came from the existing vision science literature on visual comfort and visual immersion, others from research conducted in-house,” explains Research Scientist Marina Zannoli. “Requirements for each new prototype were established based on our learnings from the previous prototype. The No. 1 recurring requirement was physical comfort. It’s difficult to appreciate the quality of the visual content if the headset is uncomfortable.”
Meanwhile, Optical Scientist Brian Wheelwright and Research Intern Joyce Fang concentrated their efforts on the moving varifocal screens. “The lenses were designed to work with the active mechanical system to perform over the full varifocal range while maintaining wide field of view,” notes Wheelwright. “The core of Half Dome hardware isn’t optics—it’s optomechanics.”
Stay tuned to the blog next week for even more insights on Half Dome, and check out UploadVR for a detailed overview of Lanman’s Display Week keynote!
— The Oculus Team