Portal’s Smart Camera uses advanced computer vision to dynamically frame shots during video calls. Even in a room with people moving around and interacting, Smart Camera uses a variety of AI systems to decide how best to accommodate multiple subjects as they move in and out of view. During an early pre-launch test of Portal, Lade Obamehinti noticed something wasn’t right.
“I was talking about how much I liked French toast and gesticulated a lot. I became the visual point of interest; the obvious point of interest,” says Obamehinti, who leads technical strategy for Facebook’s AR/VR software team. Instead of focusing on the person speaking, the pre-production version of Portal’s AI-powered Smart Camera prioritized a different subject. “It zoomed in on my white, male colleague instead of me.”
The AI system in this pre-launch test should have picked up on Obamehinti’s animated story about French toast, so what caused it to ignore her and focus on someone else? Obamehinti, a Nigerian-American born and raised in Dallas, was working on the Portal Software Quality team at the time. Instead of simply filing a bug report and moving on, she asked herself how someone else might react if the same thing had happened to them, and what she could do to make sure that didn’t happen.
“I wanted to do something,” says Obamehinti. “I wanted to build a proactive solution for as many people as possible.” After examining the data used to train Portal prior to launch, she uncovered gaps in representation that could have led to a subpar user experience, like what she experienced during her pre-launch test. The solution came through a new initiative, led by Obamehinti, to build a framework for “inclusive AI.” Obamehinti took the stage at Facebook’s F8 conference this week to share her work with engineers and researchers from across the tech community.
Teaching machines to be inclusive may sound unusual, but it’s essential to ensuring that AI-powered devices work well for everyone, regardless of skin tone or other attributes. AI models may perform differently for different people, in part because they’re trained on datasets created by people, and those datasets may contain limitations, flaws, or other issues. “Humans naturally categorize things,” says Obamehinti. “This can lead to bias in any AI system if left unchecked. We run the risk of reinforcing, and potentially magnifying, negative aspects of our human nature.”
The inclusive AI process provides a set of working guidelines to help researchers and programmers design data sets, measure product performance, and test new systems through the lens of inclusivity.
“There are a lot of questions to be asked as we work to define inclusivity and train AI,” says Obamehinti. “Are glasses important to consider? How about eye shape or color? Why or why not?” Questions like these are vital in building inclusive products. With Portal, asking the right questions during development led the team to uncover data sets with uneven representation of skin tone and gender presentation—both of which can make technology feel like it isn’t made for you.
Obamehinti and team got to work improving Portal’s software with new data sets and samples. They included people of various genders, ages, and skin tones, then verified that the training adhered to the inclusive AI guidelines. These efforts refined Smart Camera performance for Portal’s launch, minimizing the risk that Obamehinti’s own experience with the test unit would happen again.
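A representation check like this can be sketched in a few lines. The function, attribute labels, and 10% threshold below are invented for illustration, not Facebook’s actual tooling; the idea is simply to flag attribute values whose share of a labeled data set falls below a guideline minimum:

```python
from collections import Counter

def audit_representation(samples, attribute, min_share=0.1):
    """Return attribute values whose share of the data set falls
    below min_share (a hypothetical guideline threshold)."""
    counts = Counter(s[attribute] for s in samples)
    total = sum(counts.values())
    return {value: count / total
            for value, count in counts.items()
            if count / total < min_share}

# Toy data set labeled with self-reported skin-tone groups.
samples = ([{"skin_tone": "I-II"}] * 70
           + [{"skin_tone": "III-IV"}] * 25
           + [{"skin_tone": "V-VI"}] * 5)

print(audit_representation(samples, "skin_tone"))  # → {'V-VI': 0.05}
```

A team could run a check like this per attribute (skin tone, age band, gender presentation) before training, then collect more samples for any flagged group.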
The benefits of adopting a framework for inclusive AI are clear: It improves technology by making it feel more personal, like it was made for you. The challenge then is to scale inclusive methodology to all Facebook products that use AI. Obamehinti established a three-part process to characterize inclusive AI: user studies, algorithm development, and system validation.
User studies grant valuable insight into how different people respond to new products and features; algorithm development works with data set design, training, and model evaluation; and system validation focuses on performance and the overall experience. For each of these processes, the framework for inclusive AI provides best practice guidelines along two paths: vision and voice, each looking at relevant factors like skin tone and age.
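The system-validation step can be illustrated with a toy example. The helper below is hypothetical (the record format and age bands are invented for this sketch); it computes accuracy per subgroup, so a performance gap between groups shows up directly instead of being averaged away in a single overall score:

```python
def accuracy_by_group(records, group_key):
    """Compute per-subgroup accuracy from evaluation records of the
    form {group_key: ..., "correct": bool}."""
    totals, hits = {}, {}
    for rec in records:
        group = rec[group_key]
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + rec["correct"]
    return {group: hits[group] / totals[group] for group in totals}

# Toy evaluation results for a vision model, split by age band.
records = (
    [{"age_band": "18-30", "correct": True}] * 9
    + [{"age_band": "18-30", "correct": False}] * 1
    + [{"age_band": "60+", "correct": True}] * 6
    + [{"age_band": "60+", "correct": False}] * 4
)

rates = accuracy_by_group(records, "age_band")
gap = max(rates.values()) - min(rates.values())
print(rates, gap)  # the best-to-worst gap makes uneven performance visible
```

Here the overall accuracy (75%) would look acceptable, while the per-group breakdown reveals one group at 90% and another at 60% — exactly the kind of disparity a validation step grounded in inclusivity is meant to catch.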
Over the past year, multiple product teams across Facebook have introduced the inclusive AI process into their workflows. The Spark AR team uses it to power filters on Instagram and Facebook, making sure the software delivers quality effects for everyone, regardless of skin tone and other factors. And in Messenger, AR masks benefit from testing videos filmed with more diverse face shapes and skin tones. In each case, building new features with inclusivity top-of-mind gives users from all backgrounds the best possible experience. Product teams across Facebook are designing inclusive data sets to ensure representation across age, gender, skin tone, and other factors, and they’re grounding system tests in a variety of user scenarios.
But operational challenges can make the inclusive AI process difficult, because representative data sets aren’t widely available across the industry, including at Facebook. “Take the collection process for training data sets,” says Obamehinti. “Our teams work to design representative data sets with the inclusive AI process, but given our office location, they don’t have immediate access to diverse populations. Responsible intent is met with the pragmatic reality of our operating environment.”
To address this and other challenges, teams across Facebook are working to establish global user studies. This work will be valuable as Facebook tackles the next test for inclusive AI: voice. Just as vision data sets can under-represent certain skin tones and genders, voice data sets can under-represent certain groups if they don’t account for dialect, age, and other considerations—all of which matter for creating AI tools that understand all kinds of different people. Increasing the number of samples through new user studies will help ensure AI voice products are trained using the inclusive AI process.
Inclusive AI may not be obvious to people using a particular product or service, but its effect is profound. “We have to really understand our diverse product community and the most critical user problems when working with AI,” says Obamehinti. “Inclusive means not excluding anyone.”
Obamehinti’s efforts to improve human-centered AI are only one piece of the puzzle. Even as the inclusive AI process scales to other Facebook products, there’s a growing need for teams across the company to share new findings and continue asking tough questions.
AI already helps people accomplish a lot, from improving vaccination campaigns in rural areas to connecting hospitals with blood donors. It helps people do everyday things too, like optimizing the morning commute. It's set to do even more in the future, so it's increasingly important to teach AI how to navigate the world and the people in it to ensure technology is built to empower everyone.