Google I/O, Google's annual developer conference, took place this year at Shoreline Amphitheatre in Mountain View, California, near Google headquarters. As always, there were many new announcements and launches for developers to play around with. In this blog post, I'll highlight eight of the announcements that our Akamai Developer Relations team found especially exciting.
- ML Kit: ML Kit brings Google's machine learning expertise to mobile developers. Essentially, it's a library that leverages the computing power available on the device (in offline mode) and in the cloud (in online mode) to easily accomplish a range of tasks. The kit offers five capabilities today (image labeling, text recognition, barcode scanning, face detection, and landmark detection), with more promised to be on the way. ML Kit is available as a Firebase component, so it's easy to use alongside other Firebase components. To get started, just head to your Firebase console.
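As a rough illustration, here's what on-device text recognition looks like with ML Kit's Firebase APIs. This is a minimal sketch, assuming the firebase-ml-vision dependency is configured in your app; the function name and log tags are ours:

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Sketch: run ML Kit's on-device text recognizer over a bitmap.
fun recognizeText(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    FirebaseVision.getInstance()
        .onDeviceTextRecognizer          // offline, on-device model
        .processImage(image)
        .addOnSuccessListener { result ->
            // result.text holds the full recognized string
            Log.d("MLKit", result.text)
        }
        .addOnFailureListener { e ->
            Log.e("MLKit", "Text recognition failed", e)
        }
}
```

Swapping `onDeviceTextRecognizer` for the cloud variant moves the same task to Google's servers in online mode.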
- Android Jetpack: Google is now delivering quite a few libraries in a single bundle to make it easier to write clean code and avoid boilerplate. The new Jetpack bundle includes the support libraries, architecture components, a navigation library, the Slices API, and many more interesting and useful libraries, all delivered as one package.
- Android App Bundle: At this I/O, Google also introduced a new publishing format for Android apps: developers assemble a single package containing the resources for every device type, but at installation time, only the pieces specific to that device are downloaded. This is a great step forward in dealing with device fragmentation while keeping the APK size small. The Android App Bundle format can also be used to deliver features at runtime, only when they're needed, rather than at install time, which further reduces the APK size.
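In the Android Gradle plugin (3.2 and later), you can control which dimensions the per-device splits are generated on from the module's build.gradle. The snippet below is a configuration sketch, not a complete build file:

```groovy
android {
    bundle {
        // Only ship the resources matching the installing device.
        language { enableSplit = true }  // user's configured languages
        density  { enableSplit = true }  // matching screen densities
        abi      { enableSplit = true }  // the device's native-code ABI
    }
}
```

Running `./gradlew bundleRelease` then produces an .aab file for upload to the Play Console, which generates the device-specific APKs on Google's side.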
- Slices API: Today, when your users search for a particular keyword in the Google app on their phone, only your app icon appears if the keyword matches your app's name. With the Slices API, you can instead display rich content, and even let users take specific actions to interact directly with a section of your app. This is a great way to engage your users whenever they search for something relevant to the services your app offers. Slices are available to developers in Android Jetpack, but have yet to start rolling out to Android P.
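To give a feel for the API, here is a hedged sketch of a SliceProvider built with the Jetpack slice builders. The class name and strings are hypothetical, and the builder API was still evolving at the time of writing, so treat this as illustrative rather than exact:

```kotlin
import android.net.Uri
import androidx.slice.Slice
import androidx.slice.SliceProvider
import androidx.slice.builders.ListBuilder

// Hypothetical provider that answers slice requests for our app's URIs.
class OrderSliceProvider : SliceProvider() {

    override fun onCreateSliceProvider(): Boolean = true

    override fun onBindSlice(sliceUri: Uri): Slice? {
        // Build a simple one-row slice that surfaces like Search can render inline.
        return ListBuilder(context!!, sliceUri, ListBuilder.INFINITY)
            .addRow(
                ListBuilder.RowBuilder()
                    .setTitle("Your order")
                    .setSubtitle("Arriving today by 6 pm")
            )
            .build()
    }
}
```

The provider is registered in the manifest like any other ContentProvider, and surfaces request slices by URI.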
- Android P: With Android P, you can let your users take certain actions in your app directly, without opening the app itself. Once you register your app to handle certain Intents, users will be able to see and use your app through those Intents across different Android surfaces, including the Google Search app, the Play Store, Google Assistant, and the launcher. Android will learn how your app is used over time and will start suggesting these actions to users. This also helps make Android more accessible, because your app now exposes more standard ways for people to interact with it.
You can also code your app to expose conversational Actions so that Google Assistant-enabled devices, such as speakers, can work with your app. The Intents you need to implement for this are here.
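The registration itself is declarative. The sketch below shows the general shape of an actions.xml entry that maps a built-in intent to a deep link in the app; every name and URL here is invented for illustration:

```xml
<!-- Illustrative only: the intent, URL template, and parameter names are made up. -->
<actions>
    <action intentName="actions.intent.ORDER_MENU_ITEM">
        <!-- When the Assistant matches this intent, it opens the deep link,
             filling in the recognized parameter. -->
        <fulfillment urlTemplate="https://example.com/order{?itemName}">
            <parameter-mapping
                intentParameter="menuItem.name"
                urlParameter="itemName" />
        </fulfillment>
    </action>
</actions>
```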
- Android Studio visual navigation editing: Android Studio 3.2 Canary, announced at I/O, includes a new tool for visual navigation editing that allows you to define a navigational pattern of Activities and Fragments.
The idea is to define "destinations" along with the actions that lead users from one destination to another; using the new visual navigation editing capability, you can mark certain screens as destinations and lay out the actions that connect them graphically.
To get started with destinations, first implement the Android Navigation Architecture Component here.
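Under the hood, the navigation editor reads and writes a navigation graph resource. The XML below is a sketch of such a graph with two hypothetical fragments and one action connecting them; the class names and IDs are ours:

```xml
<!-- res/navigation/nav_graph.xml (illustrative names) -->
<navigation xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    app:startDestination="@id/homeFragment">

    <fragment
        android:id="@+id/homeFragment"
        android:name="com.example.HomeFragment">
        <!-- Action leading from the home destination to the detail destination -->
        <action
            android:id="@+id/action_home_to_detail"
            app:destination="@id/detailFragment" />
    </fragment>

    <fragment
        android:id="@+id/detailFragment"
        android:name="com.example.DetailFragment" />
</navigation>
```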
- Adaptive Battery: Google has been trying to improve battery life on Android for quite some time. With Android P, the latest effort in this line is Adaptive Battery, which, if turned on, learns from app usage on a device and, based on that, throttles the apps the user rarely opens.
As developers, we should all be aware that, going forward, notifications to our apps might be muffled by Adaptive Battery if users aren't engaging with them often. Any logic that learns from user behavior (e.g., user analytics) should now account for this upcoming change.
- Other launches: Beyond the launches above, quite a few interesting things will now be rolling out to developers. Instant apps are coming to games, so gamers will be able to try the parts of games that you choose, via search, Google Play, and, in the future, maybe through ads.
Google also announced the launch of TPU 3.0 (Tensor Processing Unit), which is 8x more powerful than its predecessor (and, because of this, requires liquid cooling for the first time). TPU 3.0 is said to be capable of reaching the 100-petaflops mark.

On the IoT side of things, Google announced the release of Android Things 1.0, which means we can expect more stability and maturity from the Android Things ecosystem, along with long-term support from Google.
There were countless big and small announcements at Google I/O 2018, and we've curated a few here that may have a significant impact on the way you code your mobile apps, or on the way mobile apps are consumed.
We hope this information sparks some cool ideas and empowers more developers and admins to be productive and enlightened.
While we all await next year’s Google I/O (it’s 11 months away), those of us on the Akamai Developer Relations team hope to see you at another great developer/admin event in the interim: an upcoming stop on the Akamai for DevOps World Tour. We’d love that.
Aman Alam is a developer evangelist at Akamai Technologies.