Ideas

Problems versus dilemmas: The complex trade-offs produced by social settings

By Margaret Stewart
July 7, 2021

I consider myself a professional-grade problem solver. It’s a big part of my identity and the way I see myself adding value to the world. Many designers and engineers I know share this mindset. Let’s be honest: It’s really satisfying to identify a problem, solve it, and check it off your to-do list! So it’s been a learning journey for me to distinguish between discrete problems — which often have equally discrete solutions — and dilemmas, which are much more complicated to navigate, because there is often no single right answer. 

I’ve been thinking about this since a conversation I had with my friend Clay Shirky a few years ago. Clay is an NYU professor and someone I go to when I need advice. At the time, Facebook was facing controversy over our stance on political ads. I asked Clay what he thought of the ongoing debates, and he said something that I’ve been chewing on ever since:

“Social settings,” he told me, “present dilemmas rather than problems; problems have solutions, whereas dilemmas only have optimizations.”

At the heart of many of the controversies Facebook finds itself in, there sits a dilemma. Not an obvious right vs. wrong dichotomy but a right vs. right dilemma. It’s a myth that there is always a single right answer to the many dilemmas facing platforms like Facebook. This is particularly true as we aim to serve a globally diverse community. We’re not optimizing for any single scenario, but rather for a myriad of perspectives shaped by different cultural and geopolitical contexts and lived experiences.

People around the world have been given a crash course in dilemmas over the past year. The COVID-19 pandemic has produced a host of incredibly complex challenges for communities to navigate, such as the tension between protecting the health of individuals — including teachers and retail workers — and the devastating impact that closing schools and businesses has on families and the economy as a whole.

In settings that involve large groups of people, it’s impossible to find the perfect answer for everyone, every time. Modern social media is no exception. As we strive to minimize potential harm and risk on our platforms, there aren’t always definitive or win-win solutions; we have to weigh the various competing equities and values at stake. The only way forward is to be open and intentional about the inevitable trade-offs and difficult choices that must be made, and what we ultimately decide to optimize for.

Earlier in my career, I wasn’t as attuned to the distinction between problems and dilemmas, in part because, like many in our industry, I was a techno-optimist. I focused my gaze on the upside and didn’t spend enough time thinking about how things might go wrong. I still have a lot of optimism about technology creating good. But what’s changed over the arc of my career is that I no longer believe that good is inevitable or evenly experienced across all communities and contexts. Maximizing good and minimizing harm come about through highly intentional foresight and proactive mitigation. They come from each of us evolving from techno-optimist to techno-realist.

Distinguishing between problems and dilemmas

Here’s an example of a discrete problem: People with impaired hearing do not benefit from audio in our video products. Thankfully, there’s a discrete solution: captioning. There aren’t any significant customer downsides to adding captions. Solving this problem is primarily about prioritizing product features and the resources required to offer them to people. So when we offer captioning, we solve a real problem for many people.

Dilemmas, on the other hand, are much more complicated to navigate. Let me share a few concrete examples of dilemmas that Facebook has faced and the tensions that exist at their core: 

People’s sense of agency and economic security in tension with safety: The Instagram Equity team recently introduced a new product that enables Black-owned businesses based in the U.S. with an Instagram Shop to self-designate as such and share a “Black-owned” label on their profile and product pages. This was driven by the desire to continue to find meaningful ways to support the Black Lives Matter movement and help close gaps in economic opportunity. As they worked rapidly to meet an urgent need, the team also wanted to identify potential problems that could arise from highlighting the very businesses we intended to help — including harassment and hate speech by third parties. So they worked with the Responsible Innovation team, conducted in-depth research with Black-owned businesses, and consulted with external experts on the particular needs of underrepresented businesses. Based on this research, the product team took proactive steps to support Black-owned businesses and mitigate potentially hurtful interactions.

Giving people a voice and access to information in tension with safety and civic integrity: During critical times, such as elections or health crises like the COVID-19 pandemic, our platforms can play an important role in giving people a voice and helping them share information. But when the information they share is unreliable, misleading, or false, it has the potential to cause harm offline. When the pandemic started, WhatsApp saw a huge increase in highly forwarded messages. We learned that many of these messages contained misinformation, while many others included educational health information and words of comfort and support. The team grappled with the tension between protecting people’s ability to use their voice and access information, and protecting people from the potential harm of misinformation. After a deep debate, they decided to limit the number of chats a message can be forwarded to at once. This isn’t a clear right vs. wrong dichotomy but a right vs. right dilemma; protecting access to information and protecting people from the potential harm of misinformation are both things we care about as a company. It’s a principled decision based on strongly held values and, in this case, prioritizing safety and civic integrity.
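
To make this optimization concrete, here is a minimal sketch of what such a forwarding limit might look like in code. It is purely illustrative; the threshold, constants, and function names are assumptions for the sake of example, not WhatsApp’s actual implementation.

```python
# Hypothetical sketch of a forwarding limit. All names and numbers are
# illustrative assumptions, not WhatsApp's actual implementation.

HIGHLY_FORWARDED_THRESHOLD = 5   # forwards before a message counts as "highly forwarded"
MAX_TARGETS_DEFAULT = 5          # ordinary messages: up to five chats at once
MAX_TARGETS_LIMITED = 1          # highly forwarded messages: one chat at a time

def allowed_targets(forward_count: int) -> int:
    """How many chats a message may be forwarded to in a single action."""
    if forward_count >= HIGHLY_FORWARDED_THRESHOLD:
        return MAX_TARGETS_LIMITED
    return MAX_TARGETS_DEFAULT

def can_forward(forward_count: int, target_chats: list) -> bool:
    """Allow the forward only if it stays within the per-action limit."""
    return len(target_chats) <= allowed_targets(forward_count)
```

Notice that the optimization is a rate limit rather than a ban: information can still spread, just more slowly. That middle path is exactly what a right vs. right dilemma tends to produce.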

Protecting people’s privacy in tension with ensuring their safety and well-being: Our work on suicide prevention is another powerful example of one of our most common dilemmas: navigating the tension between privacy and safety. Because billions of people spend significant amounts of time on our platforms every day, we sometimes get an indication, through community reporting or artificial intelligence (AI), that someone may be struggling. Some might see it as problematic from a privacy standpoint for any company to be in a position to detect that someone might be posting harmful suicidal or self-harm content and might need help. But it is also morally questionable to have that capability and not do anything about it. There is no “neutral” position to take on this issue.

So, we worked with leading suicide prevention experts to inform our approach. When the community reports concerning behavior, or when our AI recognizes content that might indicate someone is in distress, we compassionately offer to connect that person with friends, mental health services, or crisis support lines. This dilemma has personal significance to me, as I lost two of my cousins to suicide in their 30s about a decade ago. I’ll never know whether this technology could have changed those tragic outcomes, but I am personally very grateful that we’ve developed ways of reaching out and offering assistance to those who may be experiencing a mental health crisis.
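
As a thought experiment, that flow can be sketched as a simple decision rule: a signal arrives from a person or a model, a confidence bar is applied, and the only possible action is an offer of help. Everything in this sketch, including the names, threshold, and fields, is a hypothetical illustration rather than a description of our actual systems.

```python
# Hypothetical sketch of the detect-then-offer-help flow described above.
# Names, threshold, and fields are illustrative assumptions only.

from dataclasses import dataclass
from typing import Optional

DISTRESS_THRESHOLD = 0.9  # assumed confidence bar before any outreach

@dataclass
class Signal:
    post_id: str
    source: str        # "community_report" or "classifier"
    confidence: float  # 1.0 for human reports; a model score otherwise

def respond(signal: Signal) -> Optional[str]:
    """Offer support resources; never silently punish or block the poster."""
    if signal.source == "community_report" or signal.confidence >= DISTRESS_THRESHOLD:
        # The chosen optimization: err toward offering help, but keep the
        # response an invitation the person is free to ignore.
        return "offer_support_resources"  # friends, services, crisis lines
    return None  # below the bar, privacy wins and nothing happens
```

The design choice worth noticing is that the output is an invitation, not an enforcement action, which is one way to hold privacy and safety in tension rather than sacrificing either outright.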

Navigating trade-offs: A multipronged approach

So how might we navigate trade-offs between safety, privacy, autonomy, and fairness and inclusion? Do we prioritize by prevalence, perceived intensity, potential to encourage off-platform harm, or disparate negative impact on more at-risk populations? Any direction, however well informed or well intended, has the potential to create negative outcomes as well. Sometimes, making these decisions feels like a scene from a movie where the protagonist kills one monster only to have a dozen more spawn in its place. Of course, when it comes to optimizations, it’s not all or nothing. Even as we decide to optimize for one thing, we must mitigate potential resulting harm whenever we can.

As an industry, we must take a multipronged approach to tackling these dilemmas. Based on what we've learned over the years, here are some key actions we've integrated into our approach to responsible innovation:

Empower and educate

Given the enormous impact of the work we do every day, we must equip all developers and designers with the training and tools to hone a responsibility mindset and effectively raise concerns early on. Responsibility is a continuous, all-hands effort. For many technologists, these weren’t skills or practices we were taught in school, so we must cultivate them in our own work and develop stronger education, support, and incentives within our industry as a whole. For example, in the first days of orientation, we give all new Facebook employees a primer on responsible innovation, to send a strong message about how foundational it is to how we work and build. It’s also a core course in all of our technical boot camps and an ongoing part of education for all of our technical employees.

Start the conversations early 

Before launching a product, we must evaluate potential negative impacts to help catch any unresolved issues. But a thorough consideration of potential issues must involve engaging stakeholders early in the product development life cycle, to help teams get out in front of issues by asking foundational questions, such as “Should we build this at all?” Early engagement has the added benefit of giving teams more flexibility to grapple with these ambiguities, so the process feels more like the helpful setting of guardrails than like being blocked at the last minute.

Design and build with, not for, our communities

Especially for our thorniest issues, we must continuously build relationships with — and invite into the design process — the diverse set of stakeholders who could be affected by what we build. We must continue to make them key partners in deliberating with our designers and developers; they should feel empowered to give feedback, voice concerns, and propose alternative approaches. Engaging directly with our diverse global stakeholders can shift the conversation in unexpected and productive ways that have real impact on product outcomes.

Show our work

In order to design responsibly, decision-making should be deliberate and consistent. Teams should explicitly call out what values are at play and how various alternatives would optimize for those values. There must be an agreed-upon and transparent process for deliberations, and clear metrics for making decisions, in order to avoid the bias that otherwise might creep into more ad hoc approaches. Final decisions — and the rationales behind them — should be shared as transparently as possible to provide accountability, help ensure adoption in ways that are consistent with the intent of the decision, and facilitate reevaluation if the factors that led to that decision change in the future. The Oversight Board is a good example of transparency in practice. While its members make independent judgments about significant and difficult content decisions, they also publish their opinions and openly share the factors they took into consideration when making those decisions — along with broader recommendations about our products, policies, and enforcement systems.
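
One lightweight way to “show our work” is to capture each decision in a structured record that names the values at play, the alternatives considered, and the rationale. The schema below is an assumption about what such a record might contain, sketched purely for illustration; it does not describe any actual Facebook tooling.

```python
# Illustrative sketch of a structured decision record. The fields are
# assumptions for the sake of example, not a description of internal tools.

from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    dilemma: str                # the tension being navigated
    values_at_stake: list       # e.g., ["voice", "safety"]
    alternatives: list          # the options that were considered
    chosen: str                 # the optimization selected
    rationale: str              # why this option, in plain language
    mitigations: list = field(default_factory=list)  # harms being addressed
    revisit_if: str = ""        # conditions that should trigger reevaluation

record = DecisionRecord(
    dilemma="Message forwarding during a health crisis",
    values_at_stake=["access to information", "safety"],
    alternatives=["no limits", "forwarding limits", "disable forwarding"],
    chosen="forwarding limits",
    rationale="Slows viral misinformation while preserving people's voice.",
    mitigations=["monitor impact on legitimate health information"],
    revisit_if="misinformation prevalence or usage patterns change materially",
)
```

Writing the decision down this way makes the trade-off explicit and gives future teams something concrete to reevaluate when circumstances change.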

A path forward

As technologists at the helm of some of the most powerful communication tools ever created, it is our responsibility to navigate these dilemmas in ways that create the most responsible outcomes possible, especially for at-risk communities. In order to do this, we must reprogram our tendency to reduce dilemmas to problems to be solved. We must sit with the discomfort that dilemmas can generate and acknowledge that there is often no single right solution. We must engage with a diverse set of stakeholders and experts to ensure that we understand all aspects of the dilemmas before us. And then we must do the hard work of pushing through that discomfort to determine — through intentional, rigorous decision-making — what our values drive us to optimize for, and how to mitigate the potential negative impact that may result from that optimization.

I still deeply believe in the potential of technology to empower people to create a more equitable world. But after 25 years of helping build some of the world’s most powerful communication tools, I no longer believe that good outcomes are certain or inevitable. They are the product of intentional hard work from all of us to proactively safeguard that future by navigating these dilemmas that are so consequential to people, communities, and society as a whole.