
Apple, Hewlett Packard Enterprise Take Different Paths For Deep Learning APIs

POST WRITTEN BY
Karl Freund

There have been so many announcements about Deep Learning, Deep Neural Networks (DNNs) and Artificial Intelligence (AI) over the last year that it would be completely excusable to be totally confused. Apple is the latest company to announce its contribution to our collective confusion with a suite of “Basic Neural Network Subroutines” (BNNS) designed to make it easier to build AI applications for mobile and desktop devices. This announcement, however, is very different from previous API announcements, and it may position Apple well for the coming onslaught of smart applications that process photos, videos, speech and text to take the human-machine interface to the next level of intuitive interaction.

What is an API, and why should you care?

An Application Programming Interface, or API, provides the interface for one piece of software to talk to another chunk of code on the same processor. The underlying code, or subroutines, can simplify a programmer’s life by providing reusable building blocks, freeing the programmer to focus on the higher-level logic that connects these subroutines into useful applications and provides an interface for the user. In the fast-moving world of Deep Neural Networks, where brain-inspired algorithms can be trained and then used for tasks such as recognizing images, translating speech, or processing video and sensor input to pilot a vehicle, recent APIs have focused on simplifying the job of creating these data-rich models by defining a rich framework with libraries optimized for specific architectures such as GPUs and CPUs. These APIs have come from universities such as UC Berkeley (Caffe) and the University of Montreal (Theano), as well as from internet giants such as Facebook (Torch), Google (TensorFlow), Amazon.com (DSSTNE) and Microsoft (CNTK).
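
To make the framework layer concrete, here is a minimal sketch of what using one of these APIs looks like, with Keras on top of TensorFlow standing in for the whole category; the network shape and the random stand-in data are illustrative only, not taken from any of the announcements above.

    # A tiny "define, then train" sketch of the framework-style APIs described
    # above. The layer sizes and the random stand-in data are illustrative.
    import numpy as np
    from tensorflow import keras

    # Stand-in for "tagged" training data: 1,000 flattened 28x28 images, 10 classes.
    x_train = np.random.rand(1000, 784).astype("float32")
    y_train = np.random.randint(0, 10, size=1000)

    model = keras.Sequential([
        keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.fit(x_train, y_train, epochs=3)  # training: typically done on GPUs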

HPE Haven is for Deep Learning cloud applications

Last year, Hewlett Packard Enterprise (HPE) announced a unique set of APIs it had added to its Hadoop/Autonomy/Vertica on-demand portfolio (Haven). Unlike the frameworks mentioned above, these routines are designed for an entirely different audience: programmers who have data and want to use simple subroutines to analyze that data using pre-trained neural networks. For example, if you are writing an application and want to find faces in an image, you can call the appropriate routine hosted on the Microsoft Azure cloud, and it will return the vertices of the area in which it identified a face. HPE offers over sixty such handy libraries in HPE Haven, and the company states that hundreds of applications have been built on Haven and are in use today. Note that the user does not have to build and train a neural network; HPE has already done the heavy lifting for certain classes of data, like images and text.
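
The cloud-service pattern looks quite different from the framework sketch above: you send your raw data to a hosted, pre-trained model and read back the answer. The Python sketch below shows that pattern; the endpoint URL, the "apikey" parameter and the response fields are hypothetical placeholders, not the actual Haven API.

    # Calling a hosted, pre-trained face-detection service. The URL, the
    # "apikey" parameter and the response fields are hypothetical placeholders.
    import requests

    with open("photo.jpg", "rb") as f:
        resp = requests.post(
            "https://api.example.com/v1/detect-faces",  # hypothetical endpoint
            files={"file": f},
            data={"apikey": "YOUR_KEY"},
        )
    resp.raise_for_status()
    for face in resp.json().get("faces", []):
        print(face["rect"])  # the vertices of the region containing a face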

Apple’s APIs are for building smart apps on iPhones and iPads

But what if you have designed a neural network with one of the popular open-source frameworks to distinguish, say, breeds of dogs? You train it in the cloud on NVIDIA GPUs using “tagged” data (this is called Supervised Learning and represents the real state of the art today), and now you are ready to sell your app on Apple iTunes just in time for the big Westminster dog show. That’s where these APIs come in: your app calls the Basic Neural Network Subroutines (BNNS) to recreate your trained neural network on the iPhone or iPad. When your app takes a picture of a dog at the show, it sends the picture (the input) to the DNN you defined and trained, and the network returns the dog’s breed (the output). Voilà! It’s a Havanese!
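
Here is a sketch of the cloud half of that workflow in Python: train a small classifier on tagged photos, then export the weights so the app can rebuild the same network on-device. The breed labels, layer sizes and file name are illustrative, and the on-device half would use BNNS from Swift or Objective-C rather than Python.

    # Train a dog-breed classifier in the cloud, then export its weights for
    # the mobile app to load. Labels, sizes and file names are illustrative.
    import numpy as np
    from tensorflow import keras

    BREEDS = ["havanese", "beagle", "poodle"]               # illustrative classes
    x = np.random.rand(300, 64 * 64 * 3).astype("float32")  # stand-in for tagged photos
    y = np.random.randint(0, len(BREEDS), size=300)         # the "tags"

    model = keras.Sequential([
        keras.layers.Dense(64, activation="relu", input_shape=(64 * 64 * 3,)),
        keras.layers.Dense(len(BREEDS), activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.fit(x, y, epochs=3)                    # supervised training on cloud GPUs
    model.save_weights("dog_breeds.weights.h5")  # ship these weights with the app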

Where do we go from here?

What we are seeing is the rapid maturation of the development tool chain (APIs) for Deep Neural Networks, which will help accelerate the creation of new neural networks and the applications that use them.

  1. Low-level frameworks are used to build and train the networks (e.g., Google TensorFlow). This is a double black diamond slope: experts only.
  2. APIs like Apple BNNS will make it easy to build and use a trained neural network on a specific processor to infer answers from a set of inputs. For now this will typically be on mobile devices, using the device's camera and microphone for input, but it could evolve to include embedded DNNs in robotics and IoT edge devices (see the sketch after this list).
  3. High-level cloud services provide access to a set of trained networks for common applications (HPE Haven). These are very useful for your average Python developer who has data such as images, text or speech and who does not require a custom neural network.
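
As a sketch of step 2 above, on-device inference is essentially just a forward pass through weights that were trained elsewhere. The numpy version below shows the idea; the file and array names are illustrative, and a real iOS app would do this through BNNS rather than Python.

    # On-device inference, step 2 above, reduced to its essence: load trained
    # weights and run a forward pass. File and array names are illustrative.
    import numpy as np

    weights = np.load("dog_breeds.npz")  # weights exported after cloud training
    W1, b1 = weights["W1"], weights["b1"]
    W2, b2 = weights["W2"], weights["b2"]

    def predict(pixels):
        hidden = np.maximum(0.0, pixels @ W1 + b1)  # fully connected layer + ReLU
        scores = hidden @ W2 + b2                   # one score per breed
        return int(np.argmax(scores))               # index of the most likely breed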

Taken together, these three approaches will help accelerate application development and foster the pervasive use of neural networks that will take our applications to the next level.

Disclosure: Moor Insights & Strategy, like all research and analyst firms, provides or has provided research, analysis, advising and/or consulting to many high-tech companies in the industry, including some of those mentioned in this article, such as Hewlett Packard Enterprise and NVIDIA. I own Google and AAPL in my retirement account, but otherwise do not have any investment positions in the other companies named in this article.
