Engineering

Software, servers, systems, sensors, and science: Facebook’s recipe for hyperefficient data centers

By Dan Lee and Jonathan Rowe
January 21, 2020

Efficiency and renewable energy are Facebook’s first line of defense to help curb our carbon emissions and fight climate change, especially when it comes to our data centers.

As early as 2011, Facebook was one of the first big tech companies to commit to 100 percent renewable energy. Last year, solar and wind energy supplied 75 percent of the power used to operate our business globally. Today, we’re on track to reduce our greenhouse gas footprint by 75 percent as we reach our 100 percent renewable energy target in 2020.

Most of that renewable energy supports our global infrastructure and data centers, which are big buildings buzzing with thousands of servers that send digital information across the globe at the speed of light through a web of underground and subsea cables. They’re the secure, 24/7 facilities keeping people connected via Facebook, Instagram, Messenger, and WhatsApp.

The server equipment in these data centers requires lots of energy and water to stay powered up and cooled off, so we’re always looking for creative ways to do more with less. Efficiency has been a big part of Facebook’s data center DNA since we started designing our first facility in Oregon over a decade ago. Thanks to smart design decisions that add up across the full stack of technology inside them, our data centers save significant amounts of energy and water. Conserving these resources is not only smart for our business but also good for the environment.

Servers, software, systems, sensors, and science — data science, that is — are the recipe for Facebook’s hyperefficient data centers.

Software

Unfortunately, we can’t show any photographs of our most efficient data centers. It’s not because they’re top secret. It’s because they’re invisible.

The story of Facebook’s data center efficiency begins with the lines of code that make our platform work. For example, in 2016 we built and open-sourced MyRocks, a storage engine that roughly doubles the compression we got from InnoDB, the data-storage engine it replaced. MyRocks cut the number of storage servers we needed at some of our data centers in half. We also developed a load-balancing technology called Autoscale so we don’t waste energy on servers running at low utilization during off-peak hours.
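To make that idea concrete, here is a minimal sketch of Autoscale-style load consolidation: estimate how many servers the current traffic actually needs, route requests to that active pool, and let the rest sit in a low-power state. The function names, request rates, and headroom factor are illustrative assumptions, not Facebook’s implementation.

```python
# Hypothetical sketch of load consolidation during off-peak hours.
# Numbers and names are illustrative, not Facebook's production system.

def plan_active_pool(current_rps, per_server_rps, fleet_size, headroom=1.2):
    """Return how many servers should stay in the active routing pool."""
    needed = int(current_rps * headroom / per_server_rps) + 1
    return min(max(needed, 1), fleet_size)

def route(request_id, active_servers):
    """Map a request onto one of the active servers (illustrative hash routing)."""
    return active_servers[hash(request_id) % len(active_servers)]

fleet = [f"web{i:03d}" for i in range(100)]
active_count = plan_active_pool(current_rps=12_000, per_server_rps=500,
                                fleet_size=len(fleet))
active = fleet[:active_count]   # servers outside the pool can drop to a low-power state
print(active_count, route("req-42", active))
```

Fewer busy servers running at healthy utilization waste less energy than a whole fleet idling at low capacity, which is the intuition the sketch captures.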

We’ve been improving and scaling up both of these efficiency initiatives — along with many others — to minimize the amount of computing resources we need to power our platform. So, technically speaking, our most efficient data centers don’t really exist. They’re the ones we never had to build in the first place because of smart engineers who’ve developed innovative, efficient software solutions.

Servers

Ten years ago, Facebook was growing so fast that a traditional approach to building and operating our data centers wouldn’t work. Off-the-shelf servers included too much stuff we didn’t need. They weren’t easy for technicians to service quickly and wouldn’t help us achieve the levels of efficiency we knew were possible under a new paradigm. 

So we designed our own servers and released the work through the Open Compute Project (OCP), a collaborative community of tech leaders. We helped launch the OCP to elevate efficient computing technology and share the best solutions publicly so that we could all innovate faster.

Facebook’s “vanity-free” servers might not win a beauty contest. They’re minimalist, stripped of anything nonessential for efficient operation. Plastic bezels, paint, and mounting screws were removed to reduce weight and cost. The end products use much less material and are far easier for technicians to service than anything we could buy. This no-frills design has an additional performance benefit: a chassis with larger fans and servers with more surface area for heat sinks let air flow more freely, so far less energy is needed to keep them cool without compromising reliability.

Facebook’s OCP servers can also accept a higher voltage — and have small backup batteries nearby instead of a central one serving the entire building — so our electrical distribution system doesn’t waste energy by converting from one voltage to another or between alternating current (AC) and direct current (DC) as many times as other data centers. 
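A quick back-of-the-envelope calculation shows why fewer conversion stages matter: losses multiply across each step in the chain. The stage efficiencies below are illustrative assumptions, not measured figures from our facilities.

```python
# Back-of-the-envelope comparison of cascaded power-conversion losses.
# Stage efficiencies are illustrative assumptions, not measured data.

def delivered_fraction(stage_efficiencies):
    """Fraction of input power that survives a chain of conversion stages."""
    out = 1.0
    for eff in stage_efficiencies:
        out *= eff
    return out

# A hypothetical traditional chain: central UPS (AC->DC->AC), PDU transformer, server PSU.
traditional = delivered_fraction([0.94, 0.97, 0.90])
# A hypothetical simplified chain: fewer steps, local battery backup, higher-voltage PSU.
simplified = delivered_fraction([0.98, 0.94])

print(f"traditional: {traditional:.1%} delivered, simplified: {simplified:.1%} delivered")
```

With these placeholder numbers, the shorter chain delivers roughly 92 percent of the incoming power to the servers versus about 82 percent for the longer one; every avoided conversion is energy that never turns into waste heat.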

Altogether, these innovative designs help us save a ton of resources, from the materials it takes to manufacture the machines to the energy and water it takes to power them and keep them cool while they serve traffic.

Systems

Data centers get hot because all the electricity we deliver to our servers eventually turns into heat. Facebook’s OCP servers can operate at higher-than-usual temperatures, which lets us cool them with outdoor air most of the year using a direct evaporative system instead of energy- and water-intensive air-conditioning equipment. The way we cool servers at most of our data centers is like setting a fan draped with a cold, wet towel in an open window to pull in fresh air, rather than running a window-mounted air conditioner to keep a room comfortable on a hot day.
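As a rough illustration of that decision, here is a hypothetical sketch of choosing a free-cooling mode from outdoor conditions. The temperature and humidity thresholds are placeholders, not our actual control setpoints.

```python
# Hypothetical sketch of choosing a free-cooling mode from outdoor conditions.
# Thresholds are illustrative placeholders, not real control setpoints.

def cooling_mode(outdoor_temp_c, relative_humidity_pct):
    if outdoor_temp_c <= 24:
        return "outside air only"      # cool outdoor air can be used directly
    if relative_humidity_pct <= 60:
        return "direct evaporative"    # wet media cools the intake air as water evaporates
    return "mechanical assist"         # fall back when it is both hot and humid

print(cooling_mode(18, 40))   # -> outside air only
print(cooling_mode(30, 35))   # -> direct evaporative
```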

In places where dust, high humidity, or elevated salinity in the air might hurt server performance, we use an indirect evaporative cooling system, which we introduced in 2018. StatePoint Liquid Cooling produces cold water instead of cold air, which helps us build and operate efficiently in tropical regions such as Singapore, where the hot, humid climate makes removing heat from a data center more challenging.

At this point you might be wondering: Where does all that heat go?

For our data centers in North America and Europe, we use some of the extra heat our servers generate to keep adjacent offices and administrative spaces warm during the winter. Our new data center in Odense, Denmark, goes a gigantic step further by recycling excess heat into the local district heating system. We expect it to be enough to warm 6,900 homes in our neighboring community.

Sensors

Measuring, monitoring, and managing the complex interplay of servers, power systems, and cooling equipment in real time is critical to keeping our data centers at peak performance. A vast network of sensors measuring environmental conditions such as temperature, airflow, and humidity feeds into a central computer that tracks trends and helps facility operations engineers make sense of all that data so they can address problems when they arise.

Our earliest data center buildings used direct digital controllers, the same kind of computer that commercial buildings such as office buildings, schools, and shopping malls would use to manage all the data streaming from sensors. But over the past few years we’ve switched to programmable logic controllers, the same kind of system that hospitals, industrial manufacturing facilities, and laboratories rely on for automation to ensure efficient, reliable, uninterrupted service. They provide greater control over the environmental conditions of our data centers and help us consistently operate at peak performance. 

The very same data that streams from our sensor network also feeds into public display dashboards that report live power usage effectiveness (PUE) and water usage effectiveness (WUE) — key metrics for measuring efficient data center operations. In the spirit of transparency, anyone can see the same flow of real-time data that our facility engineers use to ensure efficient operations.
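For readers unfamiliar with those metrics: PUE is total facility energy divided by IT equipment energy, and WUE is site water use divided by IT equipment energy. The numbers below are purely illustrative, not readings from any particular Facebook site.

```python
# The two headline dashboard metrics, computed from illustrative numbers.
# PUE = total facility energy / IT equipment energy (1.0 is the theoretical ideal).
# WUE = site water used (liters) / IT equipment energy (kWh).

it_energy_kwh = 10_000_000     # energy consumed by servers, storage, and network gear
total_energy_kwh = 11_000_000  # IT energy plus cooling, power distribution, and lighting
water_liters = 2_500_000       # water used on site, mostly for evaporative cooling

pue = total_energy_kwh / it_energy_kwh
wue = water_liters / it_energy_kwh
print(f"PUE = {pue:.2f}, WUE = {wue:.2f} L/kWh")
```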

(Data) science

Machine learning is an application of data science that helps us optimize data center efficiency using the massive amounts of data logged from our equipment sensors. We developed algorithms that continuously sense operational conditions, compare them with predictions based on historical context and live weather data, and raise a red flag when we need to investigate irregularities or make equipment improvements.

For example, if we notice that one of our sites is using more energy or water than our machine learning model predicted, the facility operations team can begin troubleshooting the cause before it becomes a bigger problem. We’re continuing to invest in data science solutions to keep our data centers on the leading edge of resource efficiency.
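Here is a minimal sketch of that kind of check, assuming a model that predicts hourly energy use: compare each measured reading against the prediction and flag large deviations for investigation. The model output, data, and threshold are hypothetical.

```python
# Minimal sketch of the anomaly check described above: compare measured usage
# against a model's prediction and flag large deviations for investigation.
# The predictions, readings, and tolerance are hypothetical placeholders.

def flag_anomalies(readings, predictions, tolerance=0.15):
    """Yield (hour, measured, predicted) where usage deviates by more than `tolerance`."""
    for hour, (measured, predicted) in enumerate(zip(readings, predictions)):
        if predicted > 0 and abs(measured - predicted) / predicted > tolerance:
            yield hour, measured, predicted

measured_kwh = [410, 400, 395, 520, 405]    # hourly site energy readings
predicted_kwh = [400, 402, 398, 401, 399]   # model output from history + weather

for hour, got, expected in flag_anomalies(measured_kwh, predicted_kwh):
    print(f"hour {hour}: measured {got} kWh vs predicted {expected} kWh -- investigate")
```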

Beyond efficiency

Even with all these efficiency measures, Facebook data centers still need resources to serve our global community. That’s why we’re working to support our operations with 100 percent renewable energy and offsetting our water footprint with restoration projects in arid regions that need it the most. 

Facebook’s renewable-energy goals have helped tackle climate change by reducing our greenhouse gas footprint 44 percent since 2017, and there’s a surprising link to water savings, too. Energy generated with fossil fuels such as coal and natural gas requires massive quantities of water. In 2018 alone, we estimate we saved a billion gallons of water in the United States by shifting from standard utility electricity to wind and solar energy.

Facebook is committed to efficiency and renewable energy because it’s good for our business, the planet, and the communities we serve and connect. And our data centers will continue leading with efficiency because it’s in our DNA.

Because it’s the right thing to do.

To learn more about efficiency, renewable energy, and sustainability at Facebook, visit Sustainability.fb.com.

Written by:
Dan Lee

Design Director, Global Data Center Design

Jonathan Rowe

Sustainability Program Manager
