How the Xnor.ai purchase opens Apple’s AI future

opinion
Jan 17, 2020 | 5 mins
Apple | Enterprise Applications | Internet of Things

What does the $200 million buy mean? Think machine imaging, smart home devices and IoT industry implementations.

Apple’s $200 million acquisition of Xnor.ai gives it tools to evolve imaging, edge-based AI, HomeKit and more.

What does Xnor.ai do?

Xnor.ai was spun out of the Allen Institute for AI by Professor Ali Farhadi and Dr. Mohammed Rastegari in 2017. These men were also responsible for YOLO, YOLO9000, Label Refinery and other machine intelligence achievements.

The company developed machine learning and image recognition models that combined accuracy with the ability to run locally on the device, rather than sending images to a server for processing.

One client, Wyze Labs, used the tech for person detection in CCTV videos, though that feature was withdrawn earlier this month, before news of Apple’s purchase broke.

Xnor.ai’s ambitions extended beyond on-device image recognition. Its website states:

“Transform your business with on-device AI.”

On YouTube, a video is still available that explains its aims, including AI on smart home devices, on cameras and on agricultural drones. The intention seems to be to create self-learning AI that works on the device, without needing an internet connection.

In other words: no cloud required.

Independent self-learning devices

“We’re building a future where AI is available on almost every device,” the Xnor.ai voiceover claims. “We call this AI Everywhere, for Everyone. And it’s the beginning of something truly transformational that will reshape how we work, live and play.”

Within this work, the company developed a solution called AI2GO, a self-serve platform to easily deploy advanced deep learning models onto edge devices.

(You can still watch an interesting account of what this does here.)

It is also worth noting that the company has previously demonstrated an AI chip that used so little energy it could run on solar power.

There is an obvious symmetry between the two companies’ visions: Xnor.ai built AI models that can be installed on edge devices, and Apple’s strategy is to invest its devices with on-board intelligence that doesn’t need cloud servers.

The notion also fits current trends. Edge-based intelligence is seen as a bastion against the privacy and security risks of cloud-based systems – particularly in industrial deployments.

You’ll already find Apple working with models like this in Photos, which identifies faces, places and things in your images using on-device analysis. Xnor.ai’s tech may help the company further reduce the quantity of information it needs to gather in order to make its services work.
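
To make that concrete, here is a rough Swift sketch (my own, not Apple’s or Xnor.ai’s code) of the kind of on-device image classification Apple already exposes to developers through Vision and Core ML. The bundled model name is hypothetical, and nothing here touches the network.

    import CoreGraphics
    import CoreML
    import Foundation
    import Vision

    // Minimal sketch of on-device image classification. "SceneClassifier" is a
    // hypothetical bundled Core ML model, not anything shipped by Apple or Xnor.ai.
    func classifyOnDevice(_ image: CGImage) throws -> [VNClassificationObservation] {
        guard let modelURL = Bundle.main.url(forResource: "SceneClassifier",
                                             withExtension: "mlmodelc") else { return [] }
        let mlModel = try MLModel(contentsOf: modelURL)
        let visionModel = try VNCoreMLModel(for: mlModel)
        let request = VNCoreMLRequest(model: visionModel)

        // Inference runs here, locally; the image never leaves the device.
        try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
        return (request.results as? [VNClassificationObservation]) ?? []
    }

Xnor.ai’s pitch was essentially getting that same local loop running on hardware far more constrained than an iPhone.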

A stepping stone to homeOS?

Where things become more interesting is around smart home devices.

We already know that Apple is looking a little more deeply at HomeKit. It set the scene at WWDC 2019 with HomeKit Secure Routers and support for CCTV systems, and reprised the commitment at CES 2020.

The problem with most smart home devices is that they are dumb. They may have sensors, but they are centrally controlled by mobile devices, hubs and the like. They are controlled devices that lack on-board intelligence.

Xnor.ai changes that.

A lot of its work focused on enhancing the Raspberry Pi with on-device AI; huge quantities of processing power aren’t required. This makes it feasible to imagine these technologies being used to help Apple carve out some form of homeOS platform upon which developers can build self-learning (yet still affordable) smart home devices. Or even for industrial IoT deployments.

(While industrial tech has never been a prime market for Apple, things have changed, and its devices are now in use across the enterprise. Why wouldn’t it want strategic positions in some industrial verticals?)

Combine such devices with low-power, local IP-based networking and you end up with self-learning systems that are smart but not online: upgradeable and inherently more secure because intelligence takes place at the edge.

Don’t get too excited – yet

Apple’s platform-wide implementation of the newly acquired tech will take time. In the near term, it makes sense to expect slightly more prosaic improvements, such as easier AI model updates, smarter person and object identification in Photos, and smart object recognition in ARKit.

Another place where Apple may be able to make a difference is CCTV video, improving playback and person recognition in those systems.

Wyze delivered this using Xnor.ai’s technology. Apple’s interest in HomeKit Secure Video and its focus on HomeKit, along with its work in video editing, machine intelligence and recognition systems, make this an area where it could stand out.

Another possibility is that this tech could make it easier for third-party developers to create, install and upgrade their own AI models on Apple platforms – I can even imagine an AI Playgrounds solution (like Swift Playgrounds) to teach kids the principles of machine intelligence. “I just built a jellybean recognition system for my iPhone…”
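
To show roughly what “create, install and upgrade their own AI models” could mean with tools that already exist, here is a hedged Swift sketch built on Core ML’s existing on-device model compilation. The function name and file locations are my own assumptions, not a description of anything Apple has announced.

    import CoreML
    import Foundation

    // Sketch of an on-device model update, assuming the app has already downloaded a
    // raw .mlmodel file (say, a new classifier) to a local URL. Names are illustrative.
    func installUpdatedModel(at downloadedModelURL: URL) throws -> MLModel {
        // Compile the raw model into the .mlmodelc form Core ML actually executes.
        let compiledURL = try MLModel.compileModel(at: downloadedModelURL)

        // Move the compiled model to Application Support so it survives relaunches.
        let destination = try FileManager.default
            .url(for: .applicationSupportDirectory, in: .userDomainMask,
                 appropriateFor: nil, create: true)
            .appendingPathComponent(compiledURL.lastPathComponent)
        try? FileManager.default.removeItem(at: destination)
        try FileManager.default.moveItem(at: compiledURL, to: destination)

        return try MLModel(contentsOf: destination)
    }

The plumbing for shipping and refreshing models on the device already exists, in other words; an AI Playgrounds of the kind I’m imagining would presumably hide all of this behind something far friendlier.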

But these things take time – Apple is only now rolling out the kind of Maps improvements it began working on in earnest in around 2016.

The road between “could happen” and “did happen” is long and full of stumbling blocks, and the company’s grand plan for the implementation of these technologies is not necessarily linear or obvious. But the implications of the newly-acquired tech could extend across Apple’s product and software lines.

Please follow me on Twitter, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.
