Apple's John Giannandrea talks Apple Silicon, moving from Google to Apple
In a wide-ranging interview, Apple's head of Artificial Intelligence John Giannandrea states that it's "technically wrong" for AI to be processed at data centers instead of on a user's own device.
John Giannandrea, Apple's Senior Vice President for Machine Learning and AI Strategy
John Giannandrea, Apple's Senior Vice President for Machine Learning and AI Strategy, has answered critics who say that the company can never succeed in Artificial Intelligence because of its insistence on private, on-device processing. He rejects the idea that, without leveraging mass-collected information in data centers, Apple's machine learning must be constrained.
"I understand this perception of bigger models in data centers somehow are more accurate," he told Ars Technica in an interview, "but it's actually wrong. It's actually technically wrong."
"It's better to run the model close to the data, rather than moving the data around," he continued. "And whether that's location data -- like what are you [the user is] doing [such as] exercise data, what's the accelerometer doing in your phone -- it's just better to be close to the source of the data, and so it's also privacy preserving."
As an example of privacy, speed and even practicality, he talked about how cameras can now help you choose when to take a photo. "If you wanted to make that decision on the server, you'd have to send every single frame to the server to make a decision about how to take a photograph," he said.
"That doesn't make any sense, right? So, there are just lots of experiences that you would want to build that are better done at the edge device," he continued.
Siri works on-device as much as possible for speed and privacy
Moving from Google to Apple
Giannandrea was Google's head of AI and search until 2018, when he moved to Apple and was put in charge of Siri and Machine Learning.

"When I joined Apple, I was already an iPad user, and I loved the Pencil," he said. "So, I would track down the software teams and I would say, 'Okay, where's the machine learning team that's working on handwriting?'"
There wasn't one, so he created it, and he says that there is now practically no part of Apple that isn't engaging with AI and ML. "I find it very easy to attract world-class people to Apple," he said, "because it's becoming increasingly obvious in our products that machine learning is critical to the experiences that we want to build for users."
"I guess the biggest problem I have is that many of our most ambitious products are the ones we can't talk about and so it's a bit of a sales challenge to tell somebody, 'Come and work on the most ambitious thing ever but I can't tell you what it is," he said.
Giannandrea says that he was attracted to Apple, and believes it is the right place to work on these problems, because of that same focus on experiences. "I think that Apple has always stood for that intersection of creativity and technology," he said.
"And I think that when you're thinking about building smart experiences, having vertical integration, all the way down from the applications, to the frameworks, to the silicon, is really essential," he continues. "I think it's a journey, and I think that this is the future of the computing devices that we have, is that they be smart, and that, that smart[ness] sort of disappear[s]."
Apple Silicon
The next known step for Apple is the announced transition to Apple Silicon, and while Giannandrea unsurprisingly won't say directly what this means for AI, he does say it will have an impact outside Apple.

Apple has begun its two-year transition to Apple Silicon
"We will for the first time have a common platform, a silicon platform that can support what we want to do and what our developers want to do," he says. "That capability will unlock some interesting things that we can think of, but probably more importantly will unlock lots of things for other developers as they go along."
Giannandrea explains that this is specifically because of how Apple will be able to leverage the way the Apple Neural Engine (ANE) works. "It's a multi-year journey because the hardware had not been available to do this at the edge five years ago," he said.
"The ANE design is entirely scalable," he continued. "There's a bigger ANE in an iPad than there is in a phone, than there is in an Apple Watch, but the CoreML API layer for our apps and also for developer apps is basically the same across the entire line of products."
As well as his technology work within Apple, Giannandrea has recently been lobbying European Union officials over their plans to regulate the use and implementation of AI. Separately, Apple is continuing to acquire companies specifically to aid in the development of AI features such as Siri.
Comments
I watched a video not too long ago where someone was doing 4K video editing, color grading, and rendering in real-time... on an iPad Pro. And for the most part the silly thing wasn't even breathing hard.
That's on an A12Z that's about two generations behind in process and performance, and on a device where it's throttled due to battery-life constraints and thermal management. But also where the capability is made possible by doing much of the work in custom silicon.
I really can't wait to see what comes out of this...
We hear people complain/predict that Apple doesn't "get" AI and can't succeed because:
1. No one wants to work at Apple because they are handcuffed by privacy concerns.
2. Siri sucks and it'll never get better because no one cares.
3. Apple won't be able to compete with Google because of Apple's limited notions about where/how to process the information.
Here's a world-class talent who chose to leave Google to head up these activities at Apple, and he (with apparent sincerity) believes that this is all nonsense and (apparently) is getting stuff done at Apple to address the deficiencies he's identified.
So next time someone here pulls up some example from 3 years ago about why Apple is going to fail at AI, my reaction is going to be "is that before or after this dude took over?" and "is that dude still working at Apple?" Color me optimistic.
From what I've read, at Google they also believe the same; however, that ends prior to my last clause, and for Google, the last clause would read, "must be exploited for our bottom line."
For me, the first thing I change on a new iPhone is the search engine, to DuckDuckGo, and I don't install any other Google-owned apps. By the way, I'm looking for an alternative to Waze. I love that app, but I want something non-Google-owned that does the same thing on my CarPlay stereo.
As for AAPL failing, I'm more worried about their P/E going so high, and their stock split failing to get them to 100. The way it looks, it's going to be closer to $125, which breaks my heart... (not).
The trouble is that while Siri is the most out-front AI everyone sees, there is a lot of AI that Apple does very well, but it's not as obvious. Cameras, facial recognition, translation: there are a lot of routine things that have AI running in the background. We just see that they work better.
AMEN, brother! (or sister).
How could Microsoft Voice Command figure out who I wanted to call in 2005 on a Windows Mobile phone with 32MB of RAM, while Siri needs to send my data to some computer just so I can call my wife? All of the information (her contact info, the cell chip) is on the phone, so why does the processing need to go to some computer in North Carolina/Nevada/New Caledonia?
Something I've never understood.
Here's how I'd run Siri (see the sketch after this list):
1. If the data can be processed on the device, process it on the device. (e.g. call someone, find the next calendar appointment)
2. If data is needed from the web, figure out what is needed from the web, and gather it, then process on the device. (closest pizza restaurant, driving time to next appt.)
3. If Siri can't figure out 1 or 2, then let the supercomputer in the cloud figure it out with a randomized token as the user.
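Something like those three tiers could be written as a simple router. Every name here (AssistantTier, canAnswerLocally, canAnswerWithFetchedData, and so on) is an invented placeholder for illustration, not any real Siri or SiriKit API.

```swift
import Foundation

// Hypothetical sketch of the three-tier idea above; none of these functions
// are real Siri APIs, they are invented placeholders.
enum AssistantTier {
    case onDevice        // 1. contacts, calendar: answer with data already on the phone
    case webAssisted     // 2. fetch only the needed facts from the web, reason locally
    case cloudFallback   // 3. hand the whole query to the server under a randomized token
}

func route(_ query: String) -> AssistantTier {
    if canAnswerLocally(query) { return .onDevice }
    if canAnswerWithFetchedData(query) { return .webAssisted }
    return .cloudFallback
}

// Toy keyword predicates standing in for real intent classification.
func canAnswerLocally(_ query: String) -> Bool {
    ["call", "calendar", "alarm"].contains { query.lowercased().contains($0) }
}

func canAnswerWithFetchedData(_ query: String) -> Bool {
    ["closest", "driving time", "weather"].contains { query.lowercased().contains($0) }
}
```

With this toy router, "call my wife" stays entirely on the device, "closest pizza restaurant" pulls only the listing data from the web, and anything else falls through to the cloud behind an anonymized token.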
Someone who gets it.
I cannot stand uncreative people who refuse to use their own working brain. They have become a pet peeve of the worst kind. I cannot tell you how many times I've told people something was a bad idea at the peak of its initial hype, only for it to die a year later. The funny thing is, the "I told you so" tactic does not work on them, because by the time the very idea they were hyped about dies, they no longer care or even remember it, as they're on to the next hyped crap.
And I do not buy into the "Siri sucks" opinion, as I've seen statistics that show they all excel at different tasks. Except privacy and security *wink*. It's still a shame Siri is only in the same league as the copycats, given that they were late to the game. Remember Google saying Siri was a bad idea?
And as a (de)coder, I see Siri STILL as a 50+ year project, if not a 100+ year project.
Look at Microsoft, which basically had to just outright cancel Cortana.
Look at Samsung, which basically had to push Bixby to the side in favor of Google.
Apple has been working on speech recognition since 1990, if not earlier.
I am sorry, but Google's beginnings were simple Wikipedia searches and just default Google searches, which Apple chose NOT to do in the beginning.
Alexa was getting seriously dominant when its "Skills" started being developed.
Google has Actions.
And Siri has Shortcuts.
The true battle is going to be with universal language compatibility.
https://www.globalme.net/blog/language-support-voice-assistants-compared/
Laters...