News
Google's Cloud AutoML Mainstreams Machine Learning
- By John K. Waters
- February 7, 2018
The pressure on enterprise developers to infuse artificial intelligence (AI) capabilities into their organizations' software is growing, and so is the need for tools to help them meet that demand. Among the most promising entries in this new dev tools category is Google's Cloud AutoML, a suite of machine learning (ML) tools the company announced last month.
Google's aim here is to support the relatively large population of enterprise developers who have limited experience with machine learning. AutoML Vision is the first product released under the Cloud AutoML banner. The service is built on Google's image recognition technology, which includes transfer learning and neural architecture search. It's designed to make the process of creating custom ML models as simple as possible, with a drag-and-drop interface that allows developers to upload images, train and manage models, and then deploy those "trained" models directly on Google Cloud.
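For readers unfamiliar with the transfer learning technique the service leans on, here's a minimal sketch of the general idea in Keras: reuse a network pretrained on a large dataset as a frozen feature extractor, and train only a small new classification head on custom labels. The model choice, class count, and dataset here are illustrative assumptions, not details of Google's implementation.

```python
# A minimal transfer-learning sketch (not Google's actual AutoML Vision code).
import tensorflow as tf

NUM_CLASSES = 5  # hypothetical: number of labels in a user's custom dataset

# Reuse a network pretrained on ImageNet as a frozen feature extractor.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # transfer learning: keep the learned features fixed

# Attach a small trainable head for the custom labels.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds would be a tf.data.Dataset of (image, label) pairs built from
# the user's uploaded images (hypothetical):
# model.fit(train_ds, epochs=5)
```

Because only the small head is trained, a usable custom classifier can be fit on far fewer labeled images than training a network from scratch would require, which is the property that makes this workflow practical for non-specialists.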
AutoML Vision is the first of a planned series of services for "all other major fields of AI," the company said in a statement.
Fei-Fei Li, Chief Scientist in Google's Cloud AI group, and Jia Li, who heads that group's R&D effort, blogged about this release and the gap in ML/AI developer skills.
"Currently, only a handful of businesses in the world have access to the talent and budgets needed to fully appreciate the advancements of ML and AI," they wrote. "There's a very limited number of people that can create advanced machine learning models. And if you're one of the companies that has access to ML/AI engineers, you still have to manage the time-intensive and complicated process of building your own custom ML model."
Google's "AutoML" approach to machine learning employs a controller neural net that can propose a "child" model architecture, which can then be trained and evaluated for quality on a particular task, explained Quoc Le and Barret Zoph, research scientists on Google's Brain team, in a blog post. "That feedback is then used to inform the controller how to improve its proposals for the next round," they wrote. "We repeat this process thousands of times -- generating new architectures, testing them, and giving that feedback to the controller to learn from. Eventually the controller learns to assign high probability to areas of architecture space that achieve better accuracy on a held-out validation dataset, and low probability to areas of architecture space that score poorly."
About the Author
John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].