All That New Google Hardware? It's a Trojan Horse for AI

Five years ago, experts thought the phone would be the interface to everything. That's not the case anymore.

The focus of Google’s big hardware event this week wasn’t the hardware at all. It was Assistant, the artificially intelligent digital helper that caters to your every whim and powers your every interaction.

Assistant is invisible, in the design-jargon sense. The omnipresent concierge works in the background, predicting your needs, processing your requests, and offering neatly parceled answers to your questions. You never see the cogs behind it; you merely type (or speak) a command and read (or hear) tailored responses served on screen or through a speaker.

This requires more than a smartphone, which explains the gadgets Google announced Tuesday. But as Google likes to say, these are early days for a multi-portal system that includes the Pixel phone and Home, an Amazon Echo-like speaker. “Five years ago, if we were talking about this, there was the belief that the phone would be the interface to everything,” says Alan Black, a computer scientist at Carnegie Mellon University’s Language Technologies Institute.

That is no longer the case. Google wants its intangible interface everywhere you are, which means putting hardware everywhere you are: in your pocket, in your car, in your kitchen, and so on. That way it can learn everything about you and provide a personalized experience. Until now, Google has only dabbled in devices, relying largely on companies like Samsung, HTC, and Motorola to provide the hardware that ran its software.

Now, to make its invisible AI mainstream, Google must make its own products. Two of them are especially important: Pixel, which resembles an iPhone, and Home, which looks a bit like a Glade air freshener. These portals to Google Assistant are attractive, but nothing spectacular. “There’s nothing too Earth-shattering about them. The phone is just a piece of aluminum,” says Mark Hung, a tech analyst at research firm Gartner. “What matters is the fact that you’re able to use them fairly seamlessly, through a conversational interface.”

The devices, in other words, exist merely as vessels. Rick Osterloh, head of Google’s new hardware group, suggested as much when he said Google decided to build hardware so the company can “get things done without worrying about the underlying tech.” In this instance, “get things done” means delivering a rich AI experience, something Google has spent the better part of its existence preparing for.

Consider Google's information bank, called Knowledge Graph, which has enhanced your search results since 2012. Today, it contains more than 70 billion facts. Assistant can tap that repository, and its conversational UI will only improve as it sees, and learns from, how and when people access it.
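To get a feel for the kind of repository Assistant draws on, you can poke at Knowledge Graph yourself through Google's public Knowledge Graph Search API. Assistant's internal access is presumably far richer; this minimal sketch only touches the documented public surface, and the API key is a placeholder you'd supply from a Google Cloud project.

```python
# Query Google's public Knowledge Graph Search API.
# API_KEY is a placeholder; Assistant's internal access is broader
# than this public endpoint.
import requests

API_KEY = "YOUR_API_KEY"
ENDPOINT = "https://kgsearch.googleapis.com/v1/entities:search"

def lookup(query, limit=3):
    """Print the top entities Knowledge Graph matches to a query."""
    resp = requests.get(ENDPOINT, params={
        "query": query,
        "limit": limit,
        "key": API_KEY,
    })
    resp.raise_for_status()
    for element in resp.json().get("itemListElement", []):
        result = element["result"]
        print(result.get("name"), "-", result.get("description", ""))

lookup("Carnegie Mellon University")
```

Each result comes back as structured JSON-LD, the same flavor of tidy, machine-readable fact that lets Assistant answer a question directly instead of handing you a list of links.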

This explains why Google suggests placing a Home in every room. “The way to get the AI in front of you is to embody it in hardware,” says Jon Mann, an interaction designer at Artefact. “You need the access points so that it feels ubiquitous.” Today, your primary access point is probably your phone, and your home is among the few places where it may not be at your side. If Google can convince you to sprinkle access points around, it can train you to summon Assistant wherever you want, for whatever you want, whenever you want.

Shifting users toward intent-driven interactions is key to making AI work. Take a typical Spotify interaction: open your phone, open Spotify, tap search, type what you want to hear. If you’re just listening on your phone, you’re done. Anything else takes a bit more work. “If I want to stream music to speakers in my living room, that’s multiple steps I have to take, and I have to work through discovering the trigger points on the app,” Mann says. Designers thoughtfully craft those trigger points, making sure you see controlled amounts of information in a logical order. AI increasingly handles that work. Want music? Simply say, “Play SubRosa.” The more portals to Assistant you surround yourself with, the more places you can ask for it.
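To make "intent-driven" concrete, here is a deliberately toy sketch of the shape of that shortcut: a single utterance gets parsed into an intent plus slots, then routed straight to an action. The intent names and regex grammar are invented for illustration; Assistant's real language understanding is statistical, not a handful of patterns.

```python
# Hypothetical intent-driven dispatch: one utterance ("Play SubRosa")
# replaces the open-app / search / tap sequence. Intent names and
# patterns are invented for illustration.
import re

INTENT_PATTERNS = {
    "play_music": re.compile(r"^play (?P<query>.+)$", re.IGNORECASE),
    "set_timer":  re.compile(r"^set a timer for (?P<duration>.+)$", re.IGNORECASE),
}

def parse_intent(utterance):
    """Match an utterance against known intents and extract slots."""
    for intent, pattern in INTENT_PATTERNS.items():
        match = pattern.match(utterance.strip())
        if match:
            return intent, match.groupdict()
    return "unknown", {}

def handle(utterance):
    intent, slots = parse_intent(utterance)
    if intent == "play_music":
        # In the app-driven world this was: unlock phone, open Spotify,
        # tap search, type, pick a speaker. Here it's one hop.
        print(f"Streaming '{slots['query']}' on the nearest speaker")
    elif intent == "set_timer":
        print(f"Timer set for {slots['duration']}")
    else:
        print("Sorry, I didn't catch that")

handle("Play SubRosa")
```

The point isn't the parsing technique; it's that the user's single sentence replaces the entire chain of app-level trigger points Mann describes.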

This is where Google's Be Everywhere model starts getting interesting, because the more portals you surround yourself with, the more Assistant can learn about not just how you ask for help, but where, and in what context. “If they can execute on that, it’s really going to be quite revolutionary,” Hung says. Indeed, Google is already considering how best to interact with you in a multi-portal environment; if you pose a question aloud and multiple Home devices hear your request, the nearest node will provide the answer.
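Google hasn't published how that nearest-node election works, but a minimal, speculative version is easy to sketch: every device that hears the hotword reports how well it heard it, and only the top scorer answers. The names and scoring scheme below are assumptions for illustration, not Google's actual arbitration protocol.

```python
# Speculative sketch of multi-device arbitration: each device that
# hears the hotword reports a confidence (say, normalized mic energy
# at wake-word time), and only the highest scorer responds.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    hotword_score: float  # 0.0 means the device never heard the request

def elect_responder(nodes):
    """Pick the device that heard the request best."""
    heard = [n for n in nodes if n.hotword_score > 0.0]
    if not heard:
        return None
    return max(heard, key=lambda n: n.hotword_score)

rooms = [
    Node("kitchen", 0.82),      # the user is standing here
    Node("living room", 0.41),  # heard it faintly
    Node("bedroom", 0.0),       # out of earshot
]
winner = elect_responder(rooms)
print(f"{winner.name} answers; the others stay silent")
```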

It's easy to imagine how this kind of contextual awareness will add an extra dimension to Assistant's intelligence, making it truly usable. That's essential to meeting, and exceeding, users' expectations. “I do think we are now moving to this state where we will expect to be able to say at any time things that will be answered by a speech interface,” Black says. Google's decision to bake its AI into a web of devices that work together certainly indicates as much.