
If you tuned into Google’s launch event this week, you would have noticed that the ‘hardware’ event was actually all about a piece of ‘software’ called Google Assistant. The Assistant is voice-enabled artificial intelligence (AI) software that combines machine learning, the Google Knowledge Graph, voice and image recognition, and natural language processing (NLP) to build a “personal Google for each and every user”.


“We’re at a seminal moment in computing,” CEO Sundar Pichai declared at the Made by Google event. The Indian-born executive stressed that technology development today is shifting from a “mobile-first” world to an “AI-first” world. And Google is making that shift by building artificial intelligence into its designed-from-scratch Pixel smartphones and the Google Home smart devices.

The AI revolution

The search giant wants to turn AI into a personal helpdesk agent for you by drawing on the more than 70 billion facts about people, places and things fed into its Knowledge Graph. This database is, of course, powered by years of search queries made by people like you and me.

Google’s vision for the Assistant isn’t limited to an ‘OK Google’ trigger phrase. Pichai pointed out that he wants the Assistant to become as integral a part of people’s lives as Google’s homepage has become. Built on an open development platform, the Assistant is envisioned as a chatbot connected to TVs, speakers and other devices, capable of holding a “two-way conversation”. Google is offering open SDKs for developers to build conversational AI experiences, like ordering groceries or playing a game.
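To get a feel for what building on such a conversational SDK might look like, here is a minimal sketch of a fulfillment handler that maps a parsed voice intent to a spoken reply. Every name here (`handle_request`, the intent strings, the request/response shapes) is a hypothetical illustration, not Google’s actual Assistant API:

```python
# Minimal sketch of a conversational fulfillment handler.
# The request/response shapes and intent names are hypothetical,
# for illustration only -- not Google's actual Assistant API.

def handle_request(request):
    """Map a parsed voice intent to a spoken reply."""
    intent = request.get("intent")
    params = request.get("parameters", {})

    if intent == "order_groceries":
        items = ", ".join(params.get("items", []))
        reply = f"Okay, adding {items} to your grocery order."
    elif intent == "play_game":
        reply = "Sure, let's play! Pick a number between 1 and 10."
    else:
        reply = "Sorry, I didn't catch that."

    # A real SDK would also carry conversation state, audio config, etc.
    # Keeping the mic open for a reply is what makes it "two-way".
    return {"speech": reply, "expect_user_response": intent == "play_game"}

# Example turn in a conversation:
resp = handle_request({"intent": "order_groceries",
                       "parameters": {"items": ["milk", "eggs"]}})
print(resp["speech"])  # -> Okay, adding milk, eggs to your grocery order.
```

The point of the sketch is the shape of the exchange: the platform handles speech recognition and intent parsing, while the developer supplies only the domain logic and the reply text.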

“Ask it for a brief update on your day or to play a video on YouTube, look up traffic on the way home from work, find photos or when the nearest pharmacy closes. The Assistant can also offer help with what’s on-screen in any app. So if your friend texts you to meet up at a new restaurant, you can just say ‘navigate there’,” the company said in a statement.


The voice behind the Assistant

Earlier this year, Google’s AI research group, DeepMind, unveiled a new speech synthesis technique called WaveNet. Building audio from individual raw sound samples instead of complete syllables or words, WaveNet relies on a computationally expensive process to generate complex, realistic-sounding audio. This is the voice powering the Assistant.
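The core idea behind WaveNet-style synthesis is autoregressive generation: each new audio sample is predicted from the samples that came before it, one at a time, which is why the process is so computationally expensive. The toy sketch below illustrates only that generation loop; the `predict_next` function is a stand-in (here it just continues a sine tone), not DeepMind’s actual model:

```python
import math

# Toy stand-in for a trained model: predicts the next audio sample
# from the waveform generated so far. A real WaveNet uses a deep
# convolutional network here; this placeholder continues a sine tone.
def predict_next(history, freq=440.0, rate=16000):
    t = len(history) / rate  # time of the next sample, in seconds
    return math.sin(2 * math.pi * freq * t)

def generate(n_samples):
    """Autoregressive loop: each sample is conditioned on all previous ones."""
    samples = []
    for _ in range(n_samples):
        samples.append(predict_next(samples))
    return samples

audio = generate(16000)  # one second of audio at 16 kHz
```

Because every sample depends on the full history before it, the loop cannot be parallelised across time, which is the ‘computationally expensive’ part the article refers to.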

Last month, the world got a taste of the Assistant in Google’s new messaging app, Allo. Needless to say, Allo comes pre-installed on the Pixel. The Pixel, meanwhile, boasts a 12.3-megapixel rear camera with an f/2.0 aperture, which means you can click stunning pictures in any kind of light — #nofilter needed. The camera also uses AI to power Smartburst, an intelligent burst mode that takes multiple shots and picks the best one.

Google Home is also voice-activated and designed to learn your preferences. You can use it to create a shopping list or to play a curated music list. And keep saddling it with complex queries — like “How do you get red wine out of carpet?” — because the more you use it, the better it will get.

Now, Google isn’t the only company racing to become the de facto standard for your voice commands. All the top players, including Apple, Amazon and Facebook, are also betting big on AI. You can learn all about artificial intelligence, machine learning and deep learning from the people who make it, and those who use it, at Geospatial World Forum.