Meta and AWS Deepen Their Strategic Alliance, Bump Up Support for PyTorch
- By John K. Waters
- January 20, 2022
Amazon Web Services (AWS) has announced that Meta plans to use AWS services and global infrastructure to scale research and development, facilitate third-party collaborations, and drive operational efficiency.
Meta already uses AWS’s infrastructure and capabilities to complement its existing on-premises infrastructure. The social media giant will broaden its use of AWS compute, storage, databases, and security services to provide privacy, reliability, and scale in the cloud, the company said.
Specifically, Meta will run third-party collaborations in AWS and use the cloud to support acquisitions of companies that are already powered by AWS. It will also use AWS’s compute services to accelerate artificial intelligence (AI) research and development for its Meta AI group.
The two companies will also work together to help enterprises use the PyTorch open-source machine learning library on AWS "to bring deep learning models from research into production faster and easier."
"Meta and AWS have been expanding our collaboration over the last five years," said Kathrin Renz, Vice President of Business Development and Industries at Amazon Web Services, in a statement. "With this agreement, AWS will continue to help Meta support research and development, drive innovation, and collaborate with third parties and the open-source community at scale. Customers can rely on Meta and AWS to collaborate on PyTorch, making it easier for them to build, train, and deploy deep learning models on AWS."
AWS and Meta plan to collaborate to help machine learning researchers and developers by further optimizing PyTorch performance and its integration with such core managed services as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon SageMaker, the companies said. SageMaker is the AWS service that helps developers and data scientists build, train, and deploy machine learning models in the cloud and at the edge.
To make it easier for developers to build large-scale deep learning models for natural language processing and computer vision, the companies are enabling PyTorch on AWS to orchestrate large-scale training jobs across a distributed system of AI accelerators. The companies will jointly offer native tools to improve the performance, explainability, and cost of inference on PyTorch. To simplify the deployment of models in production, the companies will continue to enhance TorchServe, the serving engine native to PyTorch designed to make it easy to deploy trained PyTorch models at scale. Building on these open-source contributions, AWS and Meta plan to help organizations bring large-scale deep learning models from research to production faster and more easily, with optimized performance on AWS.
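To make the workflow concrete, here is a minimal sketch, not taken from either company's announcement, of the research-to-production path the article describes: train a small PyTorch model, then export it as TorchScript, the serialized format that TorchServe (via its `torch-model-archiver` tool) can package and serve at scale. The tiny architecture and synthetic data are illustrative assumptions only.

```python
# Illustrative sketch of the PyTorch research-to-production path the
# article describes. Model size, data, and hyperparameters are
# placeholder assumptions, not anything Meta or AWS have published.
import torch
import torch.nn as nn

# A toy regression model standing in for a real research model.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Synthetic data in place of a real training set.
x = torch.randn(64, 4)
y = torch.randn(64, 1)

# Standard training loop: forward pass, loss, backward pass, update.
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# Export as TorchScript so the model can be loaded outside Python,
# e.g. by TorchServe after packaging with torch-model-archiver.
scripted = torch.jit.script(model)
scripted.save("model.pt")
```

From here, the production side typically packages `model.pt` into a model archive and hands it to TorchServe; on AWS, the same artifact can instead be deployed through a managed service such as SageMaker.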
“We are excited to extend our strategic relationship with AWS to help us innovate faster and expand the scale and scope of our research and development work,” said Jason Kalich, Vice President of Production Engineering at Meta, in a statement. “The global reach and reliability of AWS will help us continue to deliver innovative experiences for the billions of people around the world that use Meta products and services and for customers running PyTorch on AWS.”
About the Author
John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].