This year's Google I/O was packed with news. There was a lot of focus on artificial intelligence and digital wellbeing. The event kicked off with a keynote led by CEO Sundar Pichai. Here are the main highlights from a developer's perspective.
Android P is an important first step towards the vision of AI at the core of the operating system. The main themes of Android P are Intelligence, Simplicity, and Digital Wellbeing.
Android P will utilize AI in a number of ways: adaptive battery, adaptive brightness, and an adaptive app launcher. Android P also brings a new feature called App Actions. The app menu not only shows the next app you are likely to use but also predicts the next action you are going to take. The system learns this from your usage patterns, bringing you to your next task quickly. These actions are available in the launcher, search, the Assistant, the Play Store, and text selection.
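Google hasn't published how the prediction works beyond "it learns from your usage pattern", but the basic idea of ranking likely next actions from historical context can be sketched with a simple frequency count. Everything here (the context labels, action names, and data) is illustrative; the real system uses on-device machine learning, not this toy:

```python
from collections import Counter

# Toy next-action predictor: count which action the user historically
# takes in a given context (e.g. plugging in headphones), then suggest
# the most frequent ones. Purely illustrative, not Android's actual model.
history = [
    ("headphones_plugged", "resume_playlist"),
    ("headphones_plugged", "resume_playlist"),
    ("headphones_plugged", "start_podcast"),
    ("monday_9am", "call_mom"),
]

def predict_actions(context: str, top_n: int = 2) -> list[str]:
    """Return the top_n most frequent actions seen in this context."""
    counts = Counter(action for ctx, action in history if ctx == context)
    return [action for action, _ in counts.most_common(top_n)]

print(predict_actions("headphones_plugged"))  # -> ['resume_playlist', 'start_podcast']
```

The real signal set is of course far richer (time of day, location, connected devices), but the shape of the feature is the same: context in, ranked actions out.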
The redesigned launcher is built around a gesture-based navigation system. Swiping up from the bottom of the screen takes you to the recent apps carousel, with the five predicted apps shown at the bottom of the screen. Swiping up further brings up the full app menu.
There is a big focus on digital wellbeing. A new Android dashboard shows you how much you've been using your phone, including time spent in each app, and you can set time limits on app usage. There's also a Shush feature for easily turning on "Do Not Disturb", and a Wind Down mode that grays out the phone screen and turns on automatically when it's time for sleep.
Other changes include improved volume, screenshot, and portrait/landscape controls.
Gmail Smart Compose
Smart Reply has been a feature in Gmail for a while now. Google is expanding it into a bigger idea: Smart Compose. Smart Compose uses machine learning to suggest phrases as you type; all you have to do is hit Tab and keep auto-completing. It takes care of mundane things like addresses, phone numbers, etc. Pretty cool stuff.
Sundar Pichai reiterated that Google's vision for the perfect assistant is one that sounds natural and is comfortable to talk to. Six new voices are coming to Google Assistant, along with more natural conversations. One of these updates, Continued Conversation, lets users ask follow-up questions without repeating the "Hey Google" or "OK Google" hot phrase each time; the Assistant can also recognize when you are talking to it versus talking to others. Another update, Multiple Actions, lets you make multiple requests to the Assistant in the same conversation.
The wow moment of the keynote came when Sundar Pichai played back a recording of Google Assistant calling a hair salon and making an appointment. The conversation sounded like two humans talking to each other: there was no hint of a robotic voice, and the person on the other end of the line never realized they were talking to an AI. Google calls this technology Google Duplex; it brings together years of investment in natural language understanding, deep learning, and text-to-speech.
Smart Displays with Google Assistant
Smart Displays are a new visual canvas for Google Assistant, bringing the simplicity of voice to a rich visual experience. They let users watch live TV or videos, or make video calls, while multitasking around the house. The Assistant takes advantage of the screen to deliver richer, more immersive, interactive content.
Google Maps adding AR directions
The new AR feature combines the power of the camera, computer vision, Google's Street View, and Maps to reimagine walking navigation. Google calls this "VPS", the visual positioning system. The feature superimposes walking directions on the real-world camera feed and helps you figure out which way to go. In addition to directions, the new AR mode can help identify nearby places, too.
Maps is also becoming more social. A new tab called "For You" shows suggestions tailored specifically to you. There is also a personal score for places, calculated from your interests, the places you've been, and the ratings you've given to similar places. More social features let you share a shortlist of places with friends and vote on them together in real time to decide where to go, all without leaving Google Maps.
Google Lens is Google's AI and AR platform for understanding the real world. It now exists as a feature in the native camera application on Pixel devices and some other Android handsets, and is coming to most devices soon.
It can do real-time object recognition and parse text from real-world objects like books and dinner menus. Point your phone's camera at text in the real world, grab that text, and paste it into a text field on your phone. It's that simple.
Machine learning has been used in Google Photos for a while now. A new feature called Suggested Actions brings smart actions right into context for you to act on. It uses AI to recognize what is in a picture and suggests actions based on it, such as sharing the photo with a friend who appears in it, or brightening an underexposed shot.
Google News has been revamped. The new version is powered by AI, which constantly searches the web to bring you content from reliable sources. This curated news is tailored specifically for you, and there is also a local news section. The new app also lets you subscribe to the news sources you love, free or paid.