Oculus research spotlight: Teaming up to build a perceptual testbed
January 26, 2018
At Oculus Research, brilliant minds come together to tackle complex problems in computer vision, optics, human perception, and beyond. Ahead of the annual SIGGRAPH Conference last year, we shared some promising work on focal surface displays that illustrated the potential to improve image clarity and depth of focus for better, more natural viewing experiences in VR. Since then, we traveled to SIGGRAPH Asia to present preliminary results from a perceptual testbed that lets us prototype new displays, test algorithms, and measure viewer responses—all in one system. Today, we’re excited to share the story behind that work—and look at the avenues for future research it opens up.
The project began in 2015. Optical Scientist Yusufu Sulai started at Oculus Research in August and began building a Shack-Hartmann wavefront sensor to test its limitations for studying how our eyes fixate and focus on stimuli. By October, Research Scientist Marina Zannoli had come onboard as Sulai’s self-proclaimed “vision science sidekick.” Together, they constructed an initial testbed and began to spec out its successor.
“We wanted to improve the capability of accurately measuring the accommodation response and presenting the highest quality images possible,” says Research Scientist Kevin MacKenzie, who joined the team in June 2016. “During my post-doctoral research, I built a similar multi-planar system with the goal of learning more about accommodation control—nowhere near as sophisticated, mind you.”
Sulai’s PhD and post-doctoral research had centered on large, multi-element, high-resolution retinal imaging systems that used adaptive optics to correct for eye abnormalities, while both Zannoli and MacKenzie had used multi-plane systems to explore how the human visual system uses blur to determine depth in complex scenes. This new project let them continue their earlier work with cutting-edge resources and tools. Says MacKenzie, “The system I built as a post-doc would have greatly benefited from the unique engineering and expertise here at Oculus Research.”
Of course, a number of engineering challenges had to be understood and overcome for the team to get there. “The system integrates a wavefront sensor, a multi-planar display system, and an eye tracker all in one,” explains Sulai. “Getting all three sub-systems aligned and working together in a robust fashion was a fun task. Going forward, I look forward to all the other perceptual questions that we’ll need to answer to support other teams at Oculus Research.”
“A great aspect of working at Oculus Research is the ability to build large, highly skilled virtual teams that span nearly every discipline,” adds Research Scientist Douglas Lanman, who joined the effort after identifying the algorithmic challenges: What’s the best way to split a 3D scene across six different displays? And how can you do it in a way that accounts for eye movements and the limitations of today’s graphics hardware? “This sort of computational imaging problem is one that I’ve worked on for more than a decade and just the type of interdisciplinary challenge my team typically tackles within Oculus Research,” Lanman adds. “As a result, I jumped at the opportunity to get involved.”
At that time, Lanman had been looking for an opportunity to collaborate with McGill University Professor Derek Nowrouzezahrai. When Lanman picked up the phone, Professor Nowrouzezahrai introduced him to Olivier Mercier, a PhD candidate in Computer Graphics—and the perfect scientist for the job, provided he’d accept.
“Not only did we need to make existing algorithms more than 1,000 times faster, but we needed to integrate them into a complex optical testbed together with state-of-the-art eye tracking,” recalls Lanman. “So I had to convince Olivier to take a risk on diving into a new research topic for half a year, rather than the usual short summer internship.”
In the end, the work and the people behind it were enough to close the deal. “It was clear right away that this project was something special, and that the right team was assembled to successfully complete it,” says Mercier. “I was thrilled that my skills could help solve some of the remaining problems, so I gladly agreed to join the effort to bring the multifocal machine to life.”
Together, the team created an efficient algorithm for optimal decompositions, incorporating insights from vision science—and achieved a three-orders-of-magnitude speedup over previous work. They showed that eye tracking can be used for adequate plane alignment with efficient image-based deformations, adjusting for eye rotation and head movement. They built a state-of-the-art binocular multifocal testbed—the first of its kind—with integrated eye tracking and accommodation measurement. And they delivered preliminary results from a pilot study using the testbed.
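To make the decomposition problem concrete: a multifocal display must split each frame across several focal planes according to scene depth. The sketch below shows the simplest common baseline—linearly blending each pixel between the two focal planes nearest its depth in diopters. This is only an illustration of the problem setup; it is not the team's optimized algorithm, and the function name, plane spacing, and inputs are all assumptions for the example.

```python
import numpy as np

def decompose_to_planes(image, depth_diopters, plane_diopters):
    """Split an RGB image across focal planes by linearly blending each
    pixel between the two planes nearest its depth (in diopters).

    A simplified baseline for illustration, not the optimized
    decomposition described in the article.

    image          : (H, W, 3) array
    depth_diopters : (H, W) per-pixel depth in diopters
    plane_diopters : sorted 1D array of focal-plane depths in diopters
    """
    planes = np.zeros((len(plane_diopters),) + image.shape)
    # Clamp depths to the range spanned by the focal planes.
    d = np.clip(depth_diopters, plane_diopters[0], plane_diopters[-1])
    # Index of the nearest plane at or below each pixel's depth.
    idx = np.searchsorted(plane_diopters, d, side="right") - 1
    idx = np.clip(idx, 0, len(plane_diopters) - 2)
    lo = plane_diopters[idx]
    hi = plane_diopters[idx + 1]
    w = (d - lo) / (hi - lo)  # 0 at the nearer plane, 1 at the farther one
    # Distribute each pixel's intensity between its two bracketing planes.
    for k in range(len(plane_diopters) - 1):
        mask = idx == k
        planes[k][mask] += ((1 - w)[..., None] * image)[mask]
        planes[k + 1][mask] += (w[..., None] * image)[mask]
    return planes
```

Because the weights for each pixel sum to one, summing the plane images recovers the original frame; the real research problem is doing far better than this—optimizing the decomposition for perceived retinal blur, eye rotation, and real-time performance.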
“It’s amazing to think that after many decades of research by very talented vision scientists, the question of how the eye’s focusing system is driven—and what stimulus it uses to optimize focus—is still not well delineated,” notes MacKenzie. “The most exciting part of the system build is the number of experimental questions we can answer with it—questions that could only be answered with this level of integration between stimulus presentation and oculomotor measurement.”
“The ability to prototype new hardware and software as well as measure perception and physiological responses of the viewer has opened not only new opportunities for product development, but also for advancing basic vision science,” adds Zannoli. “This platform should help us better understand the role of optical blur in depth perception as well as uncover the underlying mechanisms that drive convergence and accommodation. These two areas of research will have a direct impact on our ability to create comfortable and immersive experiences in VR.”