In the last few years there has been a major shift in the world of AI. Researchers and practitioners began to coalesce as a community around PyTorch, an open-source AI framework that has emerged as the tool of choice not just for running experiments in the lab, but for powering some of the world’s most advanced AI operations. PyTorch is now at the heart of many of Facebook’s most advanced AI systems, but its use goes well beyond Facebook: Today PyTorch powers everything from self-driving car software to smart farming equipment and digital video platforms.
This has had a transformative effect on how AI gets developed and used, and it’s all thanks to the incredible developer community that has formed around PyTorch. Today at Facebook’s annual developer conference we announced our long-term commitment to that community: we’re going all in on PyTorch as the framework of choice for all our AI research and operational systems moving forward.
For a look at how PyTorch has been such a game changer for us, here’s a conversation I recently had with Cornelia Carapcea, who leads Facebook’s efforts to use cutting-edge AI to keep our platforms safe.
Why is PyTorch so transformative? Fragmentation is a common bottleneck that often prevents a new technology from reaching its full potential — in the early days of everything from electric power to the internet and mobile apps, the lack of a common set of frameworks held things back. No matter how groundbreaking the technology was or how visionary its builders were, without a common language their work was limited.
These kinds of bottlenecks are an inevitable part of early-stage innovation: lots of people rush to build a new thing, each convinced their own way is the right way. But over time, builders realize that they can make bigger and better things as part of a community. Better tools, better suppliers, and more customers all come alongside some degree of standardization.
As AI took on an increasingly central role in the technology industry over the last decade, a similar kind of fragmentation held it back. The key players, from researchers in university labs to engineers across the industry, were creating AI code using a number of different, incompatible frameworks. Moving a newly developed AI model from one framework to another was slow.
The result was that a research breakthrough could not be reliably converted into something that worked well in the real world, and the code created by different AI researchers and practitioners could not easily be shared, evaluated or built upon by others. Even within Facebook itself, the discoveries of our in-house AI research teams would take a long time to be adapted for use in systems powering our apps.
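PyTorch is designed to close exactly this research-to-production gap. A minimal sketch of the idea, using PyTorch's TorchScript tooling (the `TinyClassifier` model here is a hypothetical stand-in for a research model, not anything from Facebook's systems): a model written in ordinary eager-mode Python can be compiled and saved as a self-contained artifact that production runtimes, including C++ servers without Python, can load directly.

```python
import torch
import torch.nn as nn

# A toy "research" model -- any ordinary eager-mode nn.Module.
class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(8, 16),
            nn.ReLU(),
            nn.Linear(16, 2),
        )

    def forward(self, x):
        return self.net(x)

model = TinyClassifier().eval()

# TorchScript compiles the eager-mode model into a serializable,
# Python-independent artifact that production runtimes can load.
scripted = torch.jit.script(model)
scripted.save("tiny_classifier.pt")

# Reload the saved artifact (shown here in Python; the same file
# can also be loaded from LibTorch in C++) and run inference.
reloaded = torch.jit.load("tiny_classifier.pt")
out = reloaded(torch.randn(1, 8))
print(out.shape)  # torch.Size([1, 2])
```

The point of the sketch is the workflow, not the model: the same code a researcher experiments with becomes the deployable artifact, with no manual reimplementation step in between.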
As I discuss with Cornelia in the video here, PyTorch has dramatically shifted that dynamic at Facebook: the ability to quickly turn breakthrough scientific research into operational code is already helping Cornelia’s teams make our platforms safer in very measurable ways. And we know that we’re not alone here — the PyTorch community is full of stories like this, and it’s why we’re committing to it for the long run.