Google Bakes Machine Learning into New Android 9 Pie

Succeeding Android 8 Oreo, Android 9 Pie has launched, sporting a bevy of new developer and user features enabled by machine learning.

Google today announced it's doling out slices of Pie to users of its Pixel smartphones while providing the OS source code to the Android Open Source Project.

The company's vice president of engineering, Dave Burke, said machine learning powers new features that make smartphones even smarter.

"Android 9 helps your phone learn as you use it, by picking up on your preferences and adjusting automatically," he said. "Everything from helping users get the most out of their battery life to surfacing the best parts of the apps they use all the time, right when they need it most, Android 9 keeps things running smoother, longer."

The extended battery life comes from Adaptive Battery, a feature developed in partnership with AI specialist DeepMind that prioritizes system resources based on the user's activity patterns.

Another AI-assisted feature is called Slices, which, despite leveraging cutting-edge machine learning technology, is backward-compatible down to Android 4.4, making it available to 95 percent of all Android users.

Supported in the Android Jetpack collection of software components, Slices are described as UI templates able to display interactive content directly from the Google Search app. Android documentation says "Slices can help users perform tasks faster by enabling engagement outside of the fullscreen app experience." Burke said developers can create Slices as enhancements to App Actions, which use machine learning "to surface your app to the user at just the right time, based on your app's semantic intents and the user's context."
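In practice, a Slice is served from a `SliceProvider` that builds templated rows. A minimal sketch using the androidx.slice Jetpack builders (the provider class name and row content here are hypothetical, and this framework-bound code cannot run outside an Android app):

```kotlin
// Hypothetical SliceProvider sketch using the androidx.slice Jetpack library
// (requires the slice-builders-ktx dependency; not runnable standalone).
import android.net.Uri
import androidx.slice.Slice
import androidx.slice.SliceProvider
import androidx.slice.builders.ListBuilder
import androidx.slice.builders.list
import androidx.slice.builders.row

class RideSliceProvider : SliceProvider() {
    override fun onCreateSliceProvider(): Boolean = true

    // Called when a host app (e.g. Google Search) requests this Slice's URI.
    override fun onBindSlice(sliceUri: Uri): Slice? {
        val context = context ?: return null
        return list(context, sliceUri, ListBuilder.INFINITY) {
            row {
                title = "Request a ride"          // templated, interactive content
                subtitle = "Pickup in 4 minutes"  // hypothetical example text
            }
        }
    }
}
```

The host app resolves the Slice by URI, so the same template can surface in Search results or elsewhere without launching the full app.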

Machine learning is also put to use in models that boost the TextClassifier API, expanding the kinds of entities that can be identified in text input or other content, such as dates and flight numbers.
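A rough sketch of querying the classifier from Kotlin (Android-framework code targeting API 28, so it cannot run outside an app process; the sample string is hypothetical):

```kotlin
// Sketch: asking the system TextClassifier what a span of text represents.
import android.content.Context
import android.os.LocaleList
import android.view.textclassifier.TextClassificationManager
import android.view.textclassifier.TextClassifier

fun classifyExample(context: Context): String {
    val manager = context.getSystemService(TextClassificationManager::class.java)
    val classifier: TextClassifier = manager.textClassifier
    val text = "My flight is LX 37"
    // Classify the "LX 37" span; Android 9 adds entity types such as
    // TextClassifier.TYPE_FLIGHT_NUMBER alongside TYPE_DATE.
    val result = classifier.classifyText(text, 13, text.length, LocaleList.getDefault())
    return result.getEntity(0)  // highest-confidence entity type
}
```

The returned entity type lets an app offer context-appropriate actions, such as tracking a flight.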

Finally, on-device machine learning gets a boost from a new Neural Networks API update. "Neural Networks 1.1 adds support for nine new ops -- Pad, BatchToSpaceND, SpaceToBatchND, Transpose, Strided Slice, Mean, Div, Sub, and Squeeze," Burke said. "A typical way to take advantage of the APIs is through TensorFlow Lite."
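Since Burke points to TensorFlow Lite as the typical path onto the Neural Networks API, a hedged sketch of that route (assumes a bundled `.tflite` model and the `org.tensorflow:tensorflow-lite` dependency; output shape is hypothetical, and the code requires an Android device):

```kotlin
// Sketch: running a TensorFlow Lite model with NNAPI delegation enabled.
import org.tensorflow.lite.Interpreter
import java.io.File

fun runModel(modelFile: File, input: FloatArray): FloatArray {
    // Ask TFLite to hand supported ops (Pad, Transpose, Mean, etc.) to NNAPI.
    val options = Interpreter.Options().setUseNNAPI(true)
    Interpreter(modelFile, options).use { interpreter ->
        val output = Array(1) { FloatArray(10) }  // shape depends on the model
        interpreter.run(arrayOf(input), output)
        return output[0]
    }
}
```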

Besides machine learning innovations, other new functionality in the open source OS includes:

  • Multi-camera support: New multi-camera APIs provide access to simultaneous streams from two or more physical cameras.
  • Display cutout: The new DisplayCutout class helps developers discover the location and shape of non-functional areas where content shouldn't be displayed.
  • Enhanced notifications: These include support for images, notification channels (user-customizable channels for different types of notifications), broadcasts and Do Not Disturb functionality.
  • ImageDecoder: This provides a modernized approach for decoding bitmaps and drawable images.
  • Many others: indoor positioning with Wi-Fi RTT, animation improvements, HDR VP9 video, HEIF image compression, new Media APIs, security enhancements and more.
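To illustrate the ImageDecoder item above, a minimal sketch of decoding an image with the new API (framework code for API 28, not runnable outside an Android app; the function name is hypothetical):

```kotlin
// Sketch: decoding a bitmap with ImageDecoder instead of BitmapFactory.
import android.content.ContentResolver
import android.graphics.Bitmap
import android.graphics.ImageDecoder
import android.net.Uri

fun decodeHalfSize(resolver: ContentResolver, uri: Uri): Bitmap {
    val source = ImageDecoder.createSource(resolver, uri)
    // ImageDecoder.decodeDrawable(source) would return a Drawable instead,
    // including AnimatedImageDrawable for animated GIFs and WebP.
    return ImageDecoder.decodeBitmap(source) { decoder, info, _ ->
        // Per-decode configuration: downsample to half the source dimensions.
        decoder.setTargetSize(info.size.width / 2, info.size.height / 2)
    }
}
```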

Burke also provided advice to developers putting the new OS through its paces: "With Android 9 coming to Pixel users starting today, and to other devices in the months ahead, it's important to test your app for compatibility as soon as possible. Just install your current app from Google Play on a device or emulator running Android 9. As you work through the flows, make sure your app runs and looks great, and that it handles the Android 9 behavior changes properly."

About the Author

David Ramel is an editor and writer for Converge360.