Apple developing dedicated AI chip called Apple Neural Engine
Apple is developing a dedicated chip capable of handling artificial intelligence tasks, such as facial and speech recognition, for integration with devices like iPhone, according to a report on Friday.

Referred to internally as "Apple Neural Engine," the silicon is Apple's attempt to leapfrog rivals in the burgeoning AI market, which has surged over the past year with products like Amazon's Alexa and Google Assistant. Citing a source familiar with the matter, Bloomberg reports the chip is designed to handle complex tasks that would otherwise require human intelligence to accomplish.
Though Apple devices already sport forms of AI technology -- the Siri virtual assistant and basic computer vision features -- a dedicated chip would further improve the user experience. In addition, offloading AI-related computational processing from existing A-series SoCs could improve the battery life of portable devices like iPhone and iPad. If it comes to fruition, the strategy would be similar to that of competing manufacturers, including Google with its Tensor Processing Unit.
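To make the offloading idea concrete, here is a minimal, entirely hypothetical Swift sketch of how a system might route an inference task to dedicated silicon when available and fall back to general-purpose hardware otherwise. Every name below is invented for illustration; none of this reflects an announced Apple API.

```swift
// Hypothetical sketch only: routing an inference workload to dedicated
// AI silicon when present, falling back to the GPU/CPU otherwise.
enum ComputeTarget {
    case neuralEngine   // hypothetical dedicated AI block
    case gpu
    case cpu
}

struct InferenceTask {
    let name: String
    let input: [Float]
}

func bestAvailableTarget(hasNeuralEngine: Bool) -> ComputeTarget {
    // Dedicated silicon is preferred: it frees the A-series CPU/GPU and,
    // per the report's reasoning, should draw less power per inference.
    return hasNeuralEngine ? .neuralEngine : .gpu
}

func run(_ task: InferenceTask, on target: ComputeTarget) -> [Float] {
    switch target {
    case .neuralEngine:
        // Would hand the task to the dedicated block via a system framework.
        return []
    case .gpu, .cpu:
        // Fallback: compute on general-purpose hardware (trivial ReLU stand-in).
        return task.input.map { max(0, $0) }
    }
}
```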
Apple has tested Apple Neural Engine in prototype iPhones, and is considering offloading core applications including Photos facial recognition, speech recognition and the iOS predictive keyboard to the chip, the report says. The source claims Apple plans to open third-party developer access to the AI silicon, much as it has with APIs for other key hardware features like Touch ID.
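For a sense of what such access could look like: third-party use of Touch ID already follows a capability-check-then-request pattern through the LocalAuthentication framework. The first half of the sketch below is that real, existing API; the second half is pure speculation, with every `NeuralEngine` name invented for illustration.

```swift
import LocalAuthentication

// Real, existing pattern: Touch ID exposed via LocalAuthentication.
let context = LAContext()
var authError: NSError?
if context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &authError) {
    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Confirm your identity") { success, _ in
        print(success ? "authenticated" : "failed")
    }
}

// Speculative parallel: how neural-engine access *might* be shaped.
// `NeuralEngineContext` and everything below are invented, not real APIs.
//
//     let engine = NeuralEngineContext()
//     if engine.canEvaluate(.faceRecognition) {
//         engine.evaluate(.faceRecognition, on: image) { faces in
//             // handle recognized faces
//         }
//     }
```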
Whether the chip will be ready in time for inclusion in an iPhone revision later this year is unknown, though today's report speculates Apple could announce work on Apple Neural Engine at WWDC next month.
Apple's interest in AI, and related augmented reality tech, is well documented. CEO Tim Cook has on multiple occasions hinted that Apple-branded AR solutions are on the horizon. The company has been less forthcoming about its ambitions for AI.
That cloak of secrecy is slowly lifting, however. At a conference last year, Apple Director of Artificial Intelligence Research Russ Salakhutdinov said employees working on AI research are now allowed to publish their findings and interface with academics in the field. Some believe the shift in company policy was designed to retain high-value talent, as many researchers prefer to discuss their work with peers.
Just weeks after the IP embargo lifted, Apple published its first AI research paper focusing on advanced methods of training computer vision algorithms to recognize objects using synthetic images.
Apple has been aggressively building out its artificial intelligence and augmented reality teams through acquisitions and individual hires. Last August, for example, the company snapped up machine learning startup Turi for around $200 million. That purchase came less than a year after Apple bought another machine learning startup, Perceptio, and natural language processing firm VocalIQ to bolster in-house tech like Siri and certain facets of iOS, macOS, tvOS and CarPlay.
Earlier this year, Apple was inducted into the Partnership on AI as a founding member, with Siri co-founder and Apple AI expert Tom Gruber named to the group's board of directors.
Most recently, Apple in February revealed plans to expand its Seattle offices, which act as a hub for the company's AI research and development team. The company is also working on "very different" AI tech at its R&D facility in Yokohama, Japan.

Comments
But the reality is that AI is still like a game of Monopoly and each player only holds one property so far. There are so many moves to go that comparing Siri to Echo or Google or Cortana right now is pointless - they all suck.
https://www.wired.com/2014/08/ibm-unveils-a-brain-like-chip-with-4000-processor-cores/
That Apple would be looking at this makes perfect sense, of course, and if anyone can jump-start augmented reality apps, it would be Apple.
Those systems are better described as advanced databases. They simply compose queries for a database search; the real struggle is in decoding the human language, which AI technologies can help with. To call them an artificial intelligence, though, is a bit foolish in my mind. Data does not imply intelligence; processing that data and coming up with unique or novel results is a different matter.
Exactly! Now maybe not in the first iteration, but I'm absolutely certain that Apple's move to an in-house GPU has a lot to do with extensions to the GPU to facilitate AI-type calculations. In a cell phone sized device you simply can't afford a discrete chip for AI; even with the next process shrink, board space doesn't permit discrete chips.
The only other option Apple would have is a stacked arrangement of chips, but that would squander the value of all those computational units in a GPU. What I'm expecting is a GPU from Apple with resources that can be easily allocated to different processes. Instead of the very wide computational resources, you will see many smaller units that can run their own threads as needed.
Exciting times are coming; it is just a question of how "good" the first implementation will be. Considering the direction Apple gave Imagination, I suspect we will see the first example of Apple AI hardware this year.
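For what it's worth, the closest developers currently get to those GPU computational units on iOS is Metal's compute pipeline; the sketch below dispatches a simple element-wise kernel that way. It assumes a shader function named "relu" exists in the app's default Metal library -- that name and the input values are placeholders.

```swift
import Metal

// Dispatch a simple element-wise kernel on the GPU -- the mechanism
// iOS AI libraries use today for "AI-type calculations".
// Assumes the app's default Metal library contains a function named
// "relu" (a placeholder for this sketch).
guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue(),
      let library = device.makeDefaultLibrary(),
      let kernel = library.makeFunction(name: "relu"),
      let pipeline = try? device.makeComputePipelineState(function: kernel)
else { fatalError("Metal unavailable") }

var input: [Float] = [-1, 2, -3, 4]
let buffer = device.makeBuffer(bytes: &input,
                               length: input.count * MemoryLayout<Float>.stride)!

let commands = queue.makeCommandBuffer()!
let encoder = commands.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
// One threadgroup, one thread per element; real code sizes these carefully.
encoder.dispatchThreadgroups(MTLSize(width: 1, height: 1, depth: 1),
                             threadsPerThreadgroup: MTLSize(width: input.count,
                                                            height: 1, depth: 1))
encoder.endEncoding()
commands.commit()
commands.waitUntilCompleted()
```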
This is not entirely true. Software is certainly immature, but part of that is due to trying to do stuff on a general-purpose chip that could be done several orders of magnitude faster on dedicated AI hardware. The point here is that there are many libraries that leverage GPUs for AI calculations, but even today's GPUs leave much to be desired. Beyond all of that, AI is kinda analog-like in some respects; your brain isn't a digital computer.
https://futurism.com/images/the-dawn-of-the-singularity/
The frustrating thing with Siri is that they have the voice-to-text down pat. I seldom have trouble with Siri translating what I've said to text; it is the inability to process that text that is a huge problem. Unfortunately this happens on Apple's servers, so an AI chip in a cell phone won't help much unless Apple restructures its Siri services significantly.
I've been an advocate of Apple doing just that, though. That is, have the cell phone itself construct the queries to the various databases Siri uses (sketched below). AI really needs to reside locally on the device, be it a phone, laptop or whatever. In the end you would want your local AI to be able to help with the management of your device, searching it and optimizing it.
As for what Apple is up to with Siri, I've pretty much given up on it. Siri actually got worse for me for a long time. In the end I'm not a big fan of these AI assistants; I don't think the use case has really been thought through by the builders of the various systems.
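A minimal Swift sketch of that local query construction idea. Everything here is invented for illustration -- the `AssistantQuery` type and the toy parsing are placeholders, not any real Siri interface.

```swift
// Hypothetical sketch: the device itself decodes an utterance into a
// structured query, so only the query (not raw audio) leaves the phone.
struct AssistantQuery {
    let domain: String              // e.g. "weather", "calendar"
    let parameters: [String: String]
}

// Toy language-decoding step, running locally on the device.
// A real system would do proper intent and entity extraction here.
func composeQuery(from utterance: String) -> AssistantQuery? {
    let text = utterance.lowercased()
    guard text.contains("weather") else { return nil }
    let city = text.contains(" in ")
        ? String(text.split(separator: " ").last ?? "")
        : "current-location"
    return AssistantQuery(domain: "weather", parameters: ["city": city])
}

// Only the structured query is sent to the backing service.
if let query = composeQuery(from: "What's the weather in Cupertino") {
    print("POST /\(query.domain) \(query.parameters)")
    // -> POST /weather ["city": "cupertino"]
}
```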
One of the problems I have with the term AI is that it really hasn't become an intelligence at this point. Current AI techniques are just another way to process data that mimics some operations in the brain. This has nothing to do with intelligence, in the same way that a normal computer program solving a problem for you does not represent intelligence.
As for the quantum world, that is a very real concern in modern semiconductor processes. It wouldn't be impossible to leverage quantum effects to produce an AI chip with unique capabilities. I'm still not sure that would mean "intelligence" in the sense of a human being.
No, it's more like TensorFlow. Google has TensorFlow Lite, which is scaled down and designed for mobile. Google even worked with Qualcomm to have TensorFlow Lite work with the Hexagon DSP in the Snapdragon 835 processor.
So if Apple makes a mobile "neural engine," it will very likely be similar to TensorFlow Lite. Except Apple's "neural engine" will most certainly be superior to Qualcomm's, just like their processor cores already are.
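For reference, invoking a model through TensorFlow Lite's Swift bindings (the TensorFlowLiteSwift pod) looks roughly like the sketch below; the model path, input values and tensor shapes are placeholders. Hardware delegates -- like the Hexagon DSP delegate mentioned above, or Metal on iOS -- plug in when the interpreter is created.

```swift
import TensorFlowLite

// Sketch of on-device inference with TensorFlow Lite's Swift API.
// "model.tflite" and the dummy input are placeholders.
do {
    let interpreter = try Interpreter(modelPath: "model.tflite")
    try interpreter.allocateTensors()

    // Copy a dummy input into the first input tensor.
    let input: [Float32] = [0.1, 0.2, 0.3, 0.4]
    let inputData = input.withUnsafeBufferPointer { Data(buffer: $0) }
    try interpreter.copy(inputData, toInputAt: 0)

    try interpreter.invoke()

    // Read the first output tensor back as floats.
    let output = try interpreter.output(at: 0)
    let results = output.data.withUnsafeBytes { Array($0.bindMemory(to: Float32.self)) }
    print(results)
} catch {
    print("Inference failed: \(error)")
}
```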