One day you’ll wear your Mac like sunglasses

Opinion
Jul 10, 2019 | 4 mins
Apple | Mobile | Small and Medium Business

Credit: Leif Johnson

What happens if your AirPods become your primary connection with all your computing devices, accessed in the cloud?

What if you could interact with remote systems using Voice Control? That’s a nice dream, I guess, but the big challenge to developing voice-based user interfaces is authorization. How does a computer know who you are?

Nuance just might have found the missing link.

A flash of Nuance Lightning

Nuance’s newly introduced Lightning Engine isn’t aimed at Apple, though the company says it has made its new tech available to a number of strategic customers.

What it does is foundational.

The company claims to have built the first-ever voice biometrics solution that doesn’t need users to utter a passphrase to be recognized. It also says its technology is smart enough to identify fake voices.

What might this mean?

To Nuance, its new conversational AI engine is a business tool. Its product mission seems to be enabling customer voice recognition in contact centers.

You might use a system like this to get a current balance from your bank, arrange a loan or check a medical appointment – in each case (once set up) the system would be able to determine who you are within a millisecond of answering the call.

But the implications are bigger.

A voice first future

The big challenge with developing a completely voice-based user interface is recognition: even Siri has difficulty telling voices apart.

Nuance changes this with its new voice biometrics system.

Now, I have my reservations about entrusting authorization to a single benchmark. I can see voice becoming one element in a network of biometric authorization systems in the future: face, touch and passcodes will all supplement it – when available.

There’s also an argument around convenience.

How much more convenient would it be if you could start your car, open your door, or pay for your shopping simply by saying you want to do so?

More prosaically, how much better will it be when HomePod begins to be able to determine one person from another when playing music in your home – and how much less confused will our Apple Music recommendations become as a result?

But you’re not just thinking about Apple Music, are you?

You are thinking about how intelligent voice interfaces can work alongside other products to create new computing models.

You know the sort of thing – as if your Mac lived in your sunglasses and you controlled the experience through speech.

Like Voice Control.

What a preposterous thought

Unless you are someone whose computing experience is already close to that, as so many users of accessibility tools are.

Of course, voice also enables computing for those who can’t see a physical screen – that’s what screen readers do. (And it’s also why every website owner should make sure every image carries alt text.)
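In practice, that just means filling in the alt attribute on each image so a screen reader has something to announce – a minimal sketch (the filenames here are illustrative, not from any real site):

```html
<!-- Descriptive alt text: read aloud by screen readers in place of the image -->
<img src="airpods.jpg" alt="White AirPods resting beside an open charging case">

<!-- Purely decorative images get an empty alt, so screen readers skip them -->
<img src="divider.png" alt="">
```

The empty alt on the second image is deliberate: omitting the attribute entirely makes many screen readers fall back to reading the filename, which is worse than silence.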

Brian Roemmele is a long-term advocate of voice-first user interfaces. He says:

“Every single aspect of technology from smartphones, to automobiles to appliances will have a Voice First Interface. New brands will be built around this concept and old brands that do not adapt will die around this concept.”

Which sounds about right.

People are becoming familiar with using voice to get things done.

comScore last year claimed that 50% of all search would be transacted using voice tech by 2020. In the UK, at least one hospital now uses voice-based transcription to write all its letters, which has reduced the time it takes to write them from 17 days to just two.

One of the key challenges has always been figuring out how voice can be used securely and robustly to identify a person.

Your computer is about to disappear

Google and Apple have been working toward this – Google Assistant on Google Home can recognise six voices, and Apple’s HomePod will gain multi-user identification later this year.

Nuance takes this and moves it forward, enabling authorization via voice.

I do have some concerns about all this.

I fret at the notion that it also becomes more possible to identify a voice in the crowd, or to create spoof voices. I worry about how secure voice-based authorization will be when a determined voice artist attempts to mimic someone else.

At the same time, the direction of travel seems clear.

Computers are already wearable. Soon they will become virtual.

Please follow me on Twitter, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.

jonny_evans

Hello, and thanks for dropping in. I'm pleased to meet you. I'm Jonny Evans, and I've been writing (mainly about Apple) since 1999. These days I write my daily AppleHolic blog at Computerworld.com, where I explore Apple's growing identity in the enterprise. You can also keep up with my work at AppleMust, and follow me on Mastodon, LinkedIn and (maybe) Twitter.