When you look at the image above, what do you see?
Some people see shapes and symbols. Some see functionality and possibility: things you can do, or new things you can unlock if you click or press or read.
I see values.
I used to believe that as product designers we were creating “agnostic containers” — vessels for other people’s ideas and beliefs.
It’s true that these platforms have provided access and a voice to billions of people who had not been heard through traditional channels. But the events of the last few years have been a reminder of something that has always been true:
There is no such thing as neutral design.
Our belief at Facebook that everyone should have a voice and have access to tools that allow that voice to be amplified is not a neutral position; it is values-laden. And those values carry consequences at scale that have to be managed with the utmost care.
Every time we design a user interface for a product or feature, we are making decisions on behalf of people — decisions that fundamentally reflect our values to the world and that affect people everywhere, whether or not we intend it.
Unfortunately, designing responsibly takes more than simply saying “Be responsible!” There are very tough trade-offs that we have to consider in the biggest and smallest decisions related to our products and our business.
Responsible design means being intentional about those decisions, being cognizant of the values that inform those decisions, and doing our best to anticipate the impact those decisions might have. And it means being careful to not define the potential impact of our products too narrowly, but take a broader view of the social and political contexts in which they operate.
These different dimensions of responsibility can be illustrated with some examples from our work. As we’ve discussed extensively today at F8, one of the hardest problems is misinformation.
People often use our platforms to share news stories and other valuable information, but sometimes, whether or not they intend to, people share misinformation. And this misinformation can be particularly harmful when people need to make consequential decisions, such as which candidate to support in an election or how to handle an important healthcare decision.
So how can we use design to reduce the harm caused by misinformation without inhibiting free expression? We’ve tried several different approaches. For example, we worked with third-party fact-checking organizations to flag disputed content in people’s News Feeds. But we found in many cases it drew more attention to this material, which was the opposite of what we wanted.
So we took more time to talk to people in our community as well as to external experts. We conducted extensive research both in the U.S. and around the world, and the results showed that people everywhere want to decide for themselves what information is credible and what isn’t. This held true across the political spectrum in every country we visited.
With this in mind, we have built new tools to help people determine the reliability of an article they come across on Facebook.
For every article that appears in people’s News Feed, we now display a button that provides context: who posted this piece of content and how long they’ve been on Facebook, as well as other articles from the same source, articles on the same topic from other sources, a map of where the article is being shared, and statistics on who’s sharing it. Our research and the advice we received from outside experts show that these signals help people make better assessments about the reliability of the content they see.
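To make this concrete, here is a minimal sketch of the kind of context such a button might surface, written as a simple data structure. Every field and function name below is a hypothetical illustration, not Facebook’s actual schema.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical model of the context shown for an article; field names are
# illustrative assumptions, not Facebook's actual data model.
@dataclass
class ArticleContext:
    publisher: str                      # who posted this piece of content
    publisher_joined: date              # when the publisher joined the platform
    more_from_source: list[str] = field(default_factory=list)        # other articles from the same source
    related_coverage: list[str] = field(default_factory=list)        # same topic, other sources
    share_map_regions: dict[str, int] = field(default_factory=dict)  # where the article is being shared
    sharer_stats: dict[str, int] = field(default_factory=dict)       # who's sharing it

def years_on_platform(ctx: ArticleContext, today: date) -> float:
    """One simple signal a reader might weigh: how long the publisher has existed."""
    return (today - ctx.publisher_joined).days / 365.25
```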
But this button only works if people see and click on it. In testing, we noticed that people weren’t consistently seeing the button when they were scrolling through their feed. So we added an animation. We don’t use this kind of effect very often, since it adds some visual noise. But it shows how important it is to us that people discover this new feature to learn more about the content they see. That's our values showing up in our design.
People often post on our platforms about joyous major life events — new babies, new jobs, and of course birthdays. But there are also times when tragedy strikes. When someone passes away, sensitive and complicated decisions have to be made about what to do with their online profiles.
This is an emotional process — and it’s particularly hard when their death is unexpected. Those of us who have endured this kind of loss may be familiar with the range of emotions that can come in the aftermath. It's so valuable to have some place to share your memories and your grief with others, but it can sometimes be frustrating when the products do not handle the situation as sensitively as we want them to.
As a developer or designer, the question, “What happens when someone using my product dies?” is not usually top of mind — even though, in addition to being born, it’s one of the only things we all have in common. But just because it’s a common experience doesn’t mean it’s a straightforward scenario to design for.
We call this process memorialization to reflect something that we learned the hard way early on. It’s not only about the person who passed away or their account. It’s also about designing for the experience of the many people who knew and loved them, and who are grieving in the wake of their passing.
Our approach to memorialization has evolved over the years. Initially, we defined the problem too narrowly. Unfortunately, accounts of the deceased are prone to being hacked. We wanted to lock down this security threat, so we allowed anyone to memorialize accounts. But family and friends told us that when a stranger memorialized an account, it felt like they weren’t in control of the process or of what was being communicated to the broader community. Once a profile has been memorialized, it says “Remembering...” which indicates the person has died. So while we were protecting the account from hackers, we weren’t supporting the family’s need to manage this process themselves. Based on this feedback, we changed our process so an account could be memorialized only by family or friends.
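As a rough sketch of what that policy change means in practice, the check below accepts memorialization requests only from family or friends. The names and structure are hypothetical, offered only to illustrate the rule, not Facebook’s implementation.

```python
from enum import Enum, auto

class Relationship(Enum):
    FAMILY = auto()
    FRIEND = auto()
    STRANGER = auto()

# Hypothetical policy check reflecting the revised process: only family or
# friends may request memorialization.
def can_request_memorialization(requester: Relationship) -> bool:
    return requester in (Relationship.FAMILY, Relationship.FRIEND)

def memorialize(profile: dict, requester: Relationship) -> dict:
    """Mark a profile as memorialized ("Remembering ...") if the requester is allowed to."""
    if not can_request_memorialization(requester):
        raise PermissionError("Only family or friends can memorialize an account.")
    return {
        **profile,
        "memorialized": True,
        "display_name": f"Remembering {profile['display_name']}",
    }
```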
We also learned that it could be difficult to balance the privacy preferences of the deceased with the needs of their friends and family. For instance, you can set your profile to not allow anyone to post to your Timeline, and we had to respect and preserve those settings. But that also meant loved ones couldn’t use the Timeline to gather and pay tribute with memories, stories, and support.
So, we recently evolved the experience to create a Tributes tab, separate from the original Timeline, so loved ones can gather and share memories.
We continued to learn, however. As we did more research and consulted with experts, we discovered just how complicated it is to get it right because of how incredibly personal the grief process is.
Many people are familiar with psychiatrist Elisabeth Kübler-Ross’s five stages of grief. She intended the model as a way to give people permission to feel the full range of emotions that come with losing a loved one, but it unfortunately caught on in popular culture as a specific, “correct” path people are supposed to follow.
Social scientists have since debunked this myth of a linear grief process; in fact, there is no “right” way to grieve. People can and do move in and out of these stages over time. The reality of this fluid and highly personal process makes it an incredibly difficult design problem.
Sometimes friends and family aren’t emotionally ready to memorialize an account. There is something about taking that step that feels like an act of finality, and sometimes they just aren't there yet. We have to respect that and allow people to come to terms with the death in their own time.
But if an account isn’t memorialized, we risk upsetting many other people by mistakenly recommending the deceased to them as someone to introduce to another friend or to add to a Group. To better address this challenge, we built a classifier to detect and hide deceased Facebook accounts in situations that might be distressing.
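To picture how a classifier like this might be applied, here is a minimal, hypothetical sketch that filters suggestion candidates by a “likely deceased” score. The scoring function, threshold, and data shape are assumptions made for illustration, not a description of the production system.

```python
from typing import Callable, Iterable

# Hypothetical sketch: suppress accounts a classifier flags as likely belonging
# to someone who has passed away from friend or Group suggestions.
def filter_suggestions(
    candidates: Iterable[dict],
    likely_deceased_score: Callable[[dict], float],
    threshold: float = 0.8,
) -> list[dict]:
    """Keep only candidates the classifier does not flag as likely deceased."""
    return [c for c in candidates if likely_deceased_score(c) < threshold]
```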
The design challenge is even more complicated when you think about how best to serve people across the globe, so we recently conducted in-depth ethnographic research on this topic in different countries and regions.
One thing that held true is that people everywhere find support from their community after they’ve lost someone, so it makes sense that Facebook can often serve as a powerful support system when communities are grieving. But we also discovered critical insights that showed just how culturally diverse the grieving process can be.
Here’s just one example. Many people want their loved one’s account to live on and serve as a place for friends and family to come together, remember, and pay tribute. But when we talked to people in Indonesia, we learned that some families hold religious beliefs that a loved one’s soul has a harder time moving on to the afterlife if photos of them remain available. When a loved one passes away, some of these families want the account completely deleted as quickly as possible. With this in mind, we continue to provide the option to delete an account, even though the broader group of friends and family might want to keep it available to pay tribute.
Requests related to memorialization often conflict: one family member may want the account deleted immediately, another may want it preserved as a place to pay tribute, and a close friend may not feel ready to see it memorialized at all. These requests are hard to come to terms with, because each one is completely legitimate, and yet they are in conflict with one another.
I’ve always felt that my job as a designer is to simplify the problem so it can be solved — and ideally do it so the design gets out of the way. But there is no responsible way to simplify the grief process. And there’s no way to stay neutral in how you design ways to support people through it.
This kind of problem may not make headlines the way something like misinformation does, but it’s completely devastating if it happens to you. Responsible design is not just about the problems that affect everyone or what’s reflected in the news headlines. It’s also about caring for the individual experience, especially when people are most vulnerable. We’ll never get it right for everyone every time, but we have to keep trying.
Neither of these examples — misinformation or memorialization — has a buttoned-up, neat solution. There are still many open questions and challenges, but that’s the reality of global platforms.
Each was born of the belief that democratizing powerful technology is good for the world and that you create that access by greatly reducing friction in the system. But we have to be aware that it’s this very reduction of friction that makes it impossible to ever reduce harm to zero. That is a hard truth when your goal is to do good in the world.
There is no obvious playbook to follow in navigating these decisions because no one has ever done this before at this scale. This has been the most challenging shift in my career, but I believe it's also the most important work I’ll ever do.
By accepting that design is not neutral, and by being clear about the values driving our design decisions, we can learn from our mistakes, respond as quickly as we can to new forces at play, and always seek to innovate and design responsibly.