Bloomberg's Mark Gurman recently published a scoop about an upcoming piece of chip technology from Apple (AAPL). Gurman says Apple is "working on a processor devoted specifically to AI-related tasks."

The chip, Gurman reports, is "known internally as the Apple Neural Engine," and it would "improve the way the company's devices handle tasks that would otherwise require human intelligence -- such as facial recognition and speech recognition."

Apple's A10 chip embedded in an iPhone

Image source: Apple.

This sounds cool, and I hope Apple deploys it sooner rather than later. Let's consider why the development of this so-called Apple Neural Engine isn't a surprise and how it's part of a broader, ongoing trend with respect to mobile applications processors.

Not surprising

In a mobile device, power efficiency is of the utmost importance. These devices are battery powered, and batteries aren't getting much bigger or better, so the longer a device can run on a single charge, the better. That makes minimizing power consumption essential.

Apple could very well run these artificial intelligence-related tasks on, say, the CPU cores inside its A-series chips. However, a CPU is a general-purpose piece of technology: it can run anything software developers can code up, but it may not be particularly fast or efficient at any specific task.

Slow processing of a task degrades the user experience, and so does excessive power consumption. Indeed, the fundamental realization that certain well-defined, computationally intensive tasks can be performed much faster and more efficiently on dedicated hardware is what drives the very concept of a mobile system-on-a-chip.

A mobile system-on-a-chip like Apple's A-series chips includes all sorts of dedicated functionality in service of efficiency. For example, the graphics processor inside the A-series chips is much better at quickly and efficiently rendering complex 3D games than the CPU could ever hope to be.

The image signal processor that's used to help the camera subsystem generate high-quality images quickly is another example of such a dedicated processor: Doing all those computations on the CPU, or even the GPU, would certainly be much less efficient and deliver a much worse user experience.
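The gap between a general-purpose path and a specialized one can be illustrated with a rough software analogy (a minimal Python sketch, not a claim about Apple's silicon): computing the same dot product with a plain interpreted loop versus NumPy's optimized routine, which hands the arithmetic to tuned native code, much as a system-on-a-chip hands graphics or imaging work to dedicated blocks.

```python
import time
import numpy as np

# Stand-in workload: a dot product over a large vector, representing a
# well-defined, computationally intensive task.
n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# "General-purpose" path: a plain Python loop running on the interpreter.
start = time.perf_counter()
acc = 0.0
for x, y in zip(a, b):
    acc += x * y
loop_time = time.perf_counter() - start

# "Specialized" path: NumPy dispatches the same math to optimized native code.
start = time.perf_counter()
fast = float(np.dot(a, b))
numpy_time = time.perf_counter() - start

print(f"loop:  {loop_time:.4f}s")
print(f"numpy: {numpy_time:.4f}s")
```

Both paths produce the same answer; the specialized one simply does it far faster for the same work, which is the trade Apple is making in hardware.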

The trade-off, of course, is that designing these specialized processors certainly isn't cheap, and embedding those processors into the main system-on-a-chip increases chip area. This is, for example, why the major contract chip manufacturers and their customers are so interested in moving to smaller chip manufacturing technologies. They want to be able to cram in more stuff -- often, chip technology for handling specific functionality -- without letting chip sizes get out of hand.

So if AI functionality is going to become a critical part of Apple's future smartphones and, potentially, tablets, then it only makes sense for Apple to build a specialized piece of silicon to handle that functionality.

A competitive advantage for Apple

There's no doubt that other mobile-chip makers will follow suit with technologies similar to Apple's Neural Engine, democratizing the technology. However, I suspect that Apple will have a lead for quite some time over other smartphone makers in putting such functionality to use.

According to Gurman, "Apple plans to offer developer access to [the Apple Neural Engine] so third-party apps can also offload artificial intelligence-related tasks."

Since Apple controls the chip and iOS, it should have a much easier time making such a dedicated AI processor easily accessible to developers. Apple's control of the software and hardware ecosystem should also allow it to add new, interesting capabilities to future iterations of the engine and expose them to developers at a pace that competitors will have a tough time matching.