So, AI, AR and Machine Vision walked into Google I/O 2019…

May has been a busy time for technology giants. Only a week after the end of Facebook’s F8, Google held its own annual I/O developer event. Much like F8, the event focused on helping ordinary people with ordinary as well as less-than-ordinary problems.

The Anticipated New Android Version Update

Google’s Android operating system finally received its 10th major release. Marketed under the moniker “Android Q”, the latest version really is the 10th in release order: unlike Microsoft with Windows, Google’s marketing department did not decide to skip version 9.

Q will be fully compatible with foldable devices in addition to standard ones. Foldables have been a topic of discussion for years and years – could this finally be the time we see them in real life? It’s worth noting that good old Nokia presented the concept more than 10 years ago.

Android Q will also support real-time, AI-generated subtitles for all video formats, including streams. Whenever the user watches anything in video format, be it in an app, on the web or as a live stream, Q will be able to caption it with virtually no delay. In the best-case scenario, people using their phones on mute and people with hearing impairments could benefit greatly – provided the feature works as promised.

Google also pointed out that it is indeed aware of the health concerns related to smart devices. To that end, Q’s new Focus Mode will let the user temporarily silence distracting applications, such as email, so that work can continue uninterrupted. Eye strain can also be reduced with the long-awaited Dark Mode.

Will the Restaurant Business Benefit from Machine Vision?

Google Lens, the image-recognition technology built into Google Photos, will be able to recognize popular dishes on physical restaurant menus and display pictures of them, along with additional information, before the customer orders. All the customer needs to do is point their phone camera at the menu. The same technology will also be able to translate a sign written in a foreign language into the user’s native language and have Google Assistant read it aloud, either in the original language or in translation. This could benefit tourists, illiterate people and those with vision impairments all over the world.

For yours truly, it was quite ironic to see Google use an illiterate Indian woman as an example of a person benefiting from this technology by having her phone read text aloud to her. Of course, possibilities like this are amazing, but I do wonder how many poor Indians can actually afford a smartphone of their own. In 2017, one billion Indians were without a smartphone. On the other hand, Google has taken this into account by fitting the technology into a mere 100 kilobytes and claiming it will be available on phones costing as little as 35 dollars.

The company promises that these features will be available by the end of the month. Support for the Finnish language certainly won’t be released that fast, I reckon.

The Increasingly Human Google Assistant

The next-gen Assistant will be much faster and more intuitive. The AI will be able to respond to the user’s voice with incredible speed because part of it has been moved from cloud servers onto the user’s phone. At least in the demo presentation, Google Assistant smoothly executed the most commonly used phone functions with barely any need to touch the screen.

Still, I doubt that Google Assistant will be much more than a toy this year either. It’s difficult to trust that speech recognition will understand the user’s voice input with 100% accuracy, and if I have to check on my phone that Assistant is doing what I want, I might as well use my fingers and do it myself. Of course, people with motor-skill-related difficulties, for example, could benefit greatly from the new Assistant.

Assistant’s Google Duplex extension will be able to autonomously complete even more tasks, such as renting a car or booking movie tickets. There is even going to be a web-based version of Duplex for Chrome, available to both US and UK users later this year. It needs to be said that Duplex’s earlier version received a lot of criticism last year regarding the ethics of an AI that talks in such a human-like manner.

Technical shortcomings are not the only questionable aspect of Google Assistant: going forward, the application will know even more about our private lives. In exchange, it will be able to give us custom-tailored suggestions, from podcasts to food recipes.

Security and Privacy Were Not Entirely Ignored

Though Google focused on user privacy much less than Facebook did at F8, the company did touch upon the subject, as you might expect. As Google collects more and more user information, the user will also gain more power over how much and what information Google is permitted to store on its servers.

Chrome’s Incognito function will also be extended to both Google Maps and Search. In Incognito mode, the company will not record any user data for these two services. In addition, users will be able to set up filters for automatic data deletion, which delete location and activity data after three or 18 months, depending on the chosen setting.

AR in Search

Later this year, you can expect to start seeing AR-compatible 3D models in Google’s search results. For example, if the user searches for a new pair of sneakers to buy, they will be able to view a 3D model of the shoes from different angles, as well as compare them with their current shoes side by side using their camera. I expect that Finnish resellers will not be able to use this feature before next year, at the very least.

“Hey Google, what does your future hold?”

Maybe in the future I can book a doctor’s appointment by letting my Assistant and the doctor’s AI find the most suitable timeslot for both of us. For now, though, these remain human tasks – especially in Finland.

There were plenty of other announcements I did not cover in this article. You can watch Google’s presentations for yourself on the I/O YouTube channel.