Think like the bad guys: An interview with Cristian Canton Ferrer, AI Red team lead

By Mike Schroepfer
September 9, 2020

The issue of deepfakes is an important and difficult one to solve. Deepfakes are adversarial in nature, and developing detection technologies fast enough to keep up with the AI being used to create deepfakes in the first place is a constant game of cat and mouse. As with many other types of harmful content, no single company or organization can solve this problem alone. 


Last spring I was meeting with our AI Red team, the team charged with thinking like the bad guys, and we were talking about some of the challenges we were facing with deepfakes on our platforms. One of those challenges was that adversarial actors were leveraging research advances in AI to create deepfakes, but researchers weren't spending time building technology that could detect them. I found that imbalance frustrating, and it was clear to us then that to make progress on this issue we were going to have to try a new approach. It was during that meeting that we decided to create the Deepfake Detection Challenge, or DFDC, an initiative that would, simply put, create a stronger incentive for researchers to attack this problem and be part of a shared solution. 

The goal of the DFDC was to create detection technologies to help us detect deepfakes so that if they're being used in malicious ways, we have scaled approaches to combat them. We created the contest in collaboration with Microsoft, Amazon Web Services, and the Partnership on AI, and announced the results in June. An incredible amount of work went into creating an open data set, which included more than 100K videos that Facebook paid for, and architecting it in a way that allowed researchers to use it, which meant overcoming key structural disincentives to working on deepfake detectors. The outcomes of the challenge were pretty remarkable. The project attracted more than 2,000 participants from industry and academia, and it generated more than 35,000 deepfake detection models. The DFDC represents what's possible when you deliberately practice innovation in the open. Solving the big challenges in AI is the kind of massive scientific undertaking that shouldn't happen in isolation. Rapid progress requires collaboration and an open exchange of ideas from people across industry, academia, and independent research institutions. 

For more about the challenges we overcame throughout the DFDC, check out my interview with Cristian Canton Ferrer, who leads Facebook’s AI Red team.