Intel Wants You To Think Xeon When It Comes to Deep Learning

Intel is making a push for its line of Xeon Scalable processors to be thought of as the new standard for enterprise deep learning projects.

According to a benchmark report the company released earlier this month, a Xeon Scalable processor running AWS's Sockeye Neural Machine Translation (NMT) model on Apache MXNet delivered roughly four times the inference throughput when the Intel Math Kernel Library (MKL) optimizations were enabled, bringing it to near parity with Nvidia's V100 GPU.

The company provided exact instructions for replicating the results.
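Intel's write-up spells out the exact commands; as a rough orientation only, the pipeline it describes amounts to an MKL-enabled MXNet build driving Sockeye's translation entry point. The sketch below assumes Sockeye's standard module interface and the mxnet-mkl pip package of that era; the model directory, input file, and batch size are placeholders, not details from the report:

```python
import subprocess

# Placeholder names -- Intel's report specifies the actual model and test set.
# Assumes the era's packages are installed, e.g.: pip install sockeye mxnet-mkl
MODEL_DIR = "trained_nmt_model"
SOURCE_FILE = "test_sentences.en"

# Sockeye ships as Python modules, so translation is a module invocation.
# --use-cpu keeps inference on the Xeon rather than any attached GPU.
subprocess.run(
    [
        "python", "-m", "sockeye.translate",
        "--models", MODEL_DIR,
        "--input", SOURCE_FILE,
        "--output", "translations.out",
        "--use-cpu",
        "--batch-size", "32",  # illustrative; sentences-per-second depends on batching
    ],
    check=True,
)
```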

"These results demonstrate the gains of using Intel MKL with Intel Xeon processors.  In addition, properly setting the environment variables gives additional performance and provides comparable performance to V100 (22.5 vs 23.2 sentences per second)," the benchmark report stated. "In addition to these gains, additional optimizations are coming soon that we expect will further improve CPU performance."

The results follow a late-2017 academic report the company released on how several universities fared using Intel Xeon Scalable processors for their deep learning training.

Nvidia is, of course, also pushing its offerings as the ultimate solution for deep learning, as are Qualcomm and AMD, along with startups like Cerebras, KnuEdge and Groq.

About the Author

Becky Nagel is the vice president of Web & Digital Strategy for 1105's Converge360 Group, where she oversees the front-end Web team and deals with all aspects of digital projects at the company, including launching and running the group's popular virtual summit and Coffee Talk series. She is an experienced tech journalist with 20 years in the field and, before her current position, was the editorial director of the group's sites. A few years ago she gave a talk at a leading technical publishers' conference about how changes in Web browser technology would impact online advertising for publishers. Follow her on Twitter @beckynagel.