Intel Wants You To Think Xeon When It Comes to Deep Learning
- By Becky Nagel
- May 7, 2018
Intel is making a push for its line of Xeon Scalable processors to be thought of as the new standard for enterprise deep learning projects.
According to a benchmark report the company released earlier this month, a Xeon Scalable processor running AWS Sockeye Neural Machine Translation (NMT) on Apache MXNet with the Intel Math Kernel Library (MKL) was four times faster than Nvidia's V100 GPU.
The company provided exact instructions for replicating the results.
"These results demonstrate the gains of using Intel MKL with Intel Xeon processors. In addition, properly setting the environment variables gives additional performance and provides comparable performance to V100 (22.5 vs 23.2 sentences per second)," the benchmark report stated. "In addition to these gains, additional optimizations are coming soon that we expect will further improve CPU performance."
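The report credits part of the gain to "properly setting the environment variables." The exact values Intel used are in its published instructions; purely as an illustration, CPU tuning for MKL/OpenMP workloads typically involves variables like these (the values below are example settings, not Intel's):

```shell
# Illustrative MKL/OpenMP tuning - example values only,
# not the settings from Intel's benchmark report.
export KMP_AFFINITY=granularity=fine,compact,1,0  # pin OpenMP threads to cores
export KMP_BLOCKTIME=1      # ms a thread spins before sleeping after a parallel region
export OMP_NUM_THREADS=24   # usually set to the number of physical cores
```

Settings like thread pinning and thread count can swing CPU deep learning throughput significantly, which is why Intel's replication instructions spell them out.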
The results follow a late 2017 academic report the company released detailing several universities' experiences using Intel Xeon Scalable processors for deep learning training.
Nvidia is, of course, also pushing its offerings as the ultimate solution for deep learning, as are Qualcomm and AMD, along with startups like Cerebras, KnuEdge and Groq.
About the Author
Becky Nagel serves as vice president of AI for 1105 Media specializing in developing media, events and training for companies around AI and generative AI technology. She also regularly writes and reports on AI news, and is the founding editor of PureAI.com. She's the author of "ChatGPT Prompt 101 Guide for Business Users" and other popular AI resources with a real-world business perspective. She regularly speaks, writes and develops content around AI, generative AI and other business tech. Find her on X/Twitter @beckynagel.