As technologists, we are all stewards of some of the most powerful communication tools ever created. These tools have generated a lot of good in the world, but their very power demands a deep sense of responsibility and a commitment to making the most ethically responsible decisions possible, every day. Existing products like Facebook, Instagram, Messenger, and WhatsApp, as well as new and emerging technologies like AR and VR, have the potential to improve people's lives in myriad ways. They can also create significant challenges and raise hard questions about how to make our products beneficial and safe for all. As use of these technologies grows and the influence of our platforms increases, our responsibility to surface and mitigate potential harm must grow as well.
That's why we've made major investments in building teams dedicated to key areas of responsibility, such as protecting privacy and defending the integrity of our platforms against misuse and harmful content. These and many other specialist teams have the expertise and resources to go deep on these complex domains.
But a critical piece of the puzzle is looking at the whole picture of responsibility, and doing so as early as possible in the product development process. This is why we created the central Responsible Innovation (RI) team several years ago: to help teams at Facebook proactively surface and address potential harms to society in everything we build. I help lead these efforts, which work in concert with many specialist teams across the company to provide both breadth and depth in Facebook's approach to responsible innovation.
When I explain our overall process for assessing whether new products and technologies are safe for people, I often use an analogy from health care. The central RI team is like a general practitioner; we focus on early, preventive practices to help avoid as many downstream health issues as possible. We take a holistic approach, treating "the whole patient" and drawing on knowledge from a wide variety of specialists in critical topics. We can triage issues for a team and refer them to the right specialist partners as needed, in the same way a GP refers a patient to specialists in dermatology, orthopedics, and so on.
In the product design context, this means thinking not just short- to mid-term, but investing time to forecast what longer-term impacts might be. It means looking not just at the people who use the product as intended, but at the people who may misuse it to hurt others. It means considering if and how some people or communities may inadvertently have a negative experience with the products that we build. These examples just scratch the surface of what we must take into consideration when designing for such a massive and diverse global audience, but they show how combining breadth and depth creates a more comprehensive approach to responsible innovation.
I was inspired to help found the central RI team several years ago as I saw the impact of our products and services growing in so many areas of society. Much of that impact is the kind of positive value we all want to create, but we saw that this value was not always evenly experienced across communities. We also saw that, at times, our products resulted in negative experiences for people, which no one wants to see.
As a designer, I was trained to think at a systems level and to observe and understand how different parts of a system affect one another. Topics like privacy, safety, well-being, and other facets of responsibility are critical dimensions in their own right, but they also often overlap and sit in tension with each other. So I started this team to focus on a model of early, holistic engagement with teams across the company, helping them think through the full picture of potential impact and creating a safe, structured way for them to surface potential harms and explore mitigation strategies.
The central RI team is made up of experts with diverse, multidisciplinary backgrounds, including anthropology, civil rights, ethics, and human rights, to name a few. This diversity of backgrounds helps us explore complex problems in novel ways. We also regularly bring in external subject matter experts representing a diverse set of perspectives. This means a given RI round table or workshop might include a filmmaker, a philosopher, an artist, and an academic expert, alongside members of communities who could be affected by what we are planning to build, all providing unique perspectives and wisdom to inform our approach.
We help product teams identify potential harms across a broad spectrum of societal issues and dilemmas. We create standards, tools, and guidance for responsible innovation practices across our apps and services. For example, early in the pandemic, we developed guidance on how to minimize potential harm when designing COVID-19-related products. We wanted teams to consider things like how to combat misinformation about the virus, whether a tool could be exploited by profiteers, and whether a feature could be unintentionally offensive or insensitive.
Underpinning this work are our Responsible Innovation Dimensions. These dimensions were inspired by studying global resources such as the Universal Declaration of Human Rights and the UN Sustainable Development Goals, documents that ground us in what people, communities, and society need to thrive, and in what we need to work hard to protect. We then consulted extensively with external stakeholders, ethicists, and civil rights and human rights experts. The result is a framework that helps us define our responsibility in pro-human, pro-society terms.
This framework is evolving over time but currently includes 10 dimensions: autonomy, civic engagement, constructive discourse, economic security, environmental sustainability, fairness and inclusion, privacy and data protection, safety, voice, and well-being.
Teams across the company use the RI Dimensions to establish team values and principles, such as the Facebook Reality Labs Responsible Innovation Principles. They also use them to shape strategic planning and prioritization, focus product roadmaps, and inform product and design reviews. For initiatives that involve more than one product group, this shared framework helps us facilitate conversations across the Facebook family of apps in a more consistent and intentional way.
RI is just one part of a broad ecosystem of teams across the company that help surface and address potential harms. So while RI is working to scale our early and broad support across all product teams, it’s important to note that products are also reviewed before launch by teams focused on topics like privacy and safety.
There are a number of ways that RI and these various responsibility teams interface and collaborate in the interest of providing a more comprehensive approach. Here’s a common scenario for how it works in practice:
Early in the design process, a product team comes to the central RI team for support as they consider ways to build a new product or service in the most responsible way possible. This could be initiated through office hours or a more formal engagement like a Foresight Workshop. Over time, we work with the team to determine which RI Dimensions are most relevant to their particular project and engage the appropriate responsibility-focused teams that have relevant domain expertise in that topic. This might include teams working on privacy, safety, and responsible uses of AI, among others. We leverage diverse voices inside the company through our Inclusive Product Council, where employees can give feedback on new product efforts based on their lived experience. We also engage outside stakeholders and experts to help expand the aperture of the lens the team is using to foresee potential harms. We all work together to help surface potential harms and plan mitigation strategies.
If we come across a possible dilemma, as we sometimes do at the heart of our most complex and controversial challenges, we assist the team in navigating it through various decision-making strategies. This often looks like questioning our assumptions about who might be affected by our decisions and how, and setting principled criteria that apply to all decision-making. Perhaps most importantly, we encourage teams to build these ethical checks and balances into their regular processes. Approaching dilemmas in an operationally consistent manner helps ensure that decision-making is more informed, fair, and consistent.
As you can see, there is no shortage of investment in teams focused on various aspects of the responsibility landscape, and RI has been an important complementary addition, providing a holistic frame to take in the bigger picture. I believe this approach of breadth and depth will help us take a more comprehensive approach to keeping people safe on our platforms. This is why I’ve pivoted my career to focus on it full-time.
In many ways, the journey we are on as a company and as an industry mirrors my own journey over the last 25 years of designing large-scale, global communication platforms. It is a journey from techno-optimism to techno-realism.
I still believe deeply in technology’s power to create good in the world, but what has changed over the years is that I no longer believe that goodness is inevitable. It comes through sustained hard work, investing time in foresight work early in the development process, surfacing and planning mitigations for potential harms, struggling through complex trade-offs, and all the while engaging with external stakeholders, including members of affected communities, to help ensure that we are not just designing for our community but ultimately designing with them.