Specialized hardware needed to accelerate machine learning algorithms is moving out of the data center and into high-end mobile phones. Soon it could be in everyone's pocket, if chip designer Imagination Technologies has its way.
Imagination made a name for itself as the designer of the graphics accelerators in Apple's smartphones and tablets. Now it's designed a new range of processing cores that chip makers can use to accelerate artificial intelligence algorithms in their own hardware.
That, in turn, means app developers will have access to powerful local processing capabilities without the need for network access -- potentially allowing the use of artificial intelligence-based image recognition and diagnostic tools on industrial sites, in remote areas, or after natural disasters.
The renaissance in AI research is built on the use of neural networks to draw inferences from data, and then to apply that learning to new situations. The process is computationally intensive, especially in the learning phase, but the repetitive calculations involved can be significantly sped up by specialized hardware.
Until recently, that hardware has been confined to power-hungry racks in air-conditioned data centers. That poses a dilemma for mobile app developers and embedded device manufacturers wanting to apply machine learning algorithms in the field: Wait for an unaccelerated mobile processor to come up with the answer, or ship the raw data to a remote server for faster processing and wait for the answer to come back.
That's fine for some applications, but for others (self-driving cars, say, or on-the-fly video processing) the latency or cost of sending the data back to the server may be unacceptable. There's also the issue of privacy or security: local processing leaves users in control of their data.
That's prompted a couple of smartphone manufacturers to dedicate hardware to the processing of neural networks in their latest smartphones.
Huawei Technologies jumped first, revealing that the Kirin 970 chipset that will power its forthcoming Huawei Mate 10 phone includes a dedicated neural processing unit.
And at the big reveal of the iPhone X, Apple announced that the A11 chip at the heart of it includes silicon dedicated to machine learning applications, a feature it calls the Neural Engine.
If developers have to rely on customers having the most expensive phones out there to run their apps, though, they're not going to build much of a following.
Imagination hopes that its new PowerVR Series 2NX Neural Network Accelerator will make neural processing capabilities available to a much larger slice of the Android smartphone market than just flagship phones. In addition to smartphones, Imagination is also targeting other mobile and embedded devices.
It has gone out of its way to lower power consumption. One of the ways it does this is by allowing variable bit-depth processing. When teaching, or tuning, a neural network, it's important to perform the calculations with precision, but when using the tuned network to make decisions based on live data, it's often possible to make correct decisions with less detailed calculations, using as few as 4 or 5 bits of precision instead of 16.
Less precision requires less power, so with the 2NX developers can choose to perform calculations with 16, 12, 10, 8, 7, 6, 5 or even 4-bit precision. According to Imagination, switching from 8-bit precision to 4 bits increases speed by 60 percent and reduces bandwidth by 46 percent, yet only has a 1 percent effect on the accuracy of inferences.
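The trade-off Imagination describes is the basic idea behind post-training quantization: network weights stored at high precision are mapped onto a coarser grid, and the coarser grid is usually good enough for inference. The sketch below is a minimal, generic illustration of uniform quantization in Python, not Imagination's actual implementation; the function name and the use of random weights are illustrative assumptions.

```python
import numpy as np

def quantize(weights, bits):
    """Uniformly quantize an array to the given bit width, then
    map the quantized levels back to the original value range so
    the rounding error can be measured directly."""
    levels = 2 ** bits - 1
    w_min, w_max = weights.min(), weights.max()
    scale = (w_max - w_min) / levels
    q = np.round((weights - w_min) / scale)  # integer level index
    return q * scale + w_min                 # dequantized value

# Stand-in for a layer's trained weights (illustrative only).
rng = np.random.default_rng(0)
w = rng.normal(size=1000)

for bits in (16, 8, 4):
    err = np.abs(quantize(w, bits) - w).mean()
    print(f"{bits}-bit mean absolute rounding error: {err:.6f}")
```

Running this shows the rounding error growing as the bit width shrinks; the claim behind chips like the 2NX is that, for many trained networks, even the 4-bit error is small enough that the final classification rarely changes.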
To help Android developers prepare for the new capabilities, Imagination is offering a combined API for the 2NX and its existing graphics accelerators. Developers will be able to write to the API, getting some benefits from existing hardware and, "as the new hardware becomes available, people will be able to take advantage of the increase in power," Chris Longstaff, the company's senior director of product and technology marketing, said.
That won't be for a while yet: Imagination sells designs, not devices, so it will be late next year before phones containing the 2NX core are on the market, Longstaff said.