Apple CEO Tim Cook calls AI a fundamental technology

Posted in General Discussion
Artificial intelligence is the industry buzzword these days, and while Apple remains tight-lipped on the subject, CEO Tim Cook says it's already an integral part of the company's products.

AI is already a big part of Apple

Apple held its quarterly earnings call on Thursday evening, where journalists get a chance to ask CEO Tim Cook and CFO Luca Maestri questions about the results and the company. While many questions draw negligible replies, some answers provide insight into where things may be going.

A few questions steered the call toward AI and Apple's investments in it. Apple's leadership was clear on the company's stance: AI isn't just something being developed, it's already part of everything Apple ships.

"If you kind of zoom out and look at what we've done on AI and machine learning and how we've used it, we view AI and machine learning as fundamental technologies and they're integral to virtually every product that we ship," Cook replied to a question about generative AI. "And so just recently, when we shipped iOS 17, it had features like personal voice and live voicemail. AI is at the heart of these features."

Later in the call, Cook was asked about a roughly $8 billion increase in R&D spend and where that money was going. His response was simple: among other things, it's going toward AI, silicon, and products like Apple Vision Pro.

These comments are in line with what Cook and the company have said about AI in the past. Plenty of iOS features already rely on machine learning and even AI, such as the new transformer language model that powers autocorrect.

Some assume that Apple's AI efforts will lead to a more powerful Siri, though it isn't clear if or when such a project will surface.


Read on AppleInsider

Comments

  • Reply 1 of 11
    and yet...  When my iPhone does a voice-text transcription of voicemail message, it spells my name wrong.  A whole lot of intelligence there!
  • Reply 2 of 11
    and yet...  When my iPhone does a voice-text transcription of voicemail message, it spells my name wrong.  A whole lot of intelligence there!
    Hard to spell OctoMonkey.
  • Reply 3 of 11
    Likewise, I'm going to go out on a limb and declare that, in my view, Pi and the speed of light c are fundamental constants of our universe.
  • Reply 4 of 11
    danox Posts: 2,875 member
    and yet...  When my iPhone does a voice-text transcription of voicemail message, it spells my name wrong.  A whole lot of intelligence there!

    Apple, unlike its competition, is working towards on-device AI solutions and not "ET phone home" AI. What's implemented on the Apple Vision Pro begins with the M2/M3 chip working in conjunction with the R1 chip, which knows what to do when you look at something or slightly rub your fingers together to execute a command. In short, it won't be Google's (Video Boost) implementation, where information is sent to Google HQ and then bounced back to you after being scrubbed of personal information. I think the AI path Apple is taking is more private to the end user (on device).

    On-device AI is more complicated because it actually requires an SoC combined with OS-level software, and it takes a hell of a lot more time to design, engineer, and develop. Designing something that runs on a supercomputer back home at Google/Microsoft HQ does not, which is why Google resorted to that method to cover for the shortcomings of the Tensor SoC in the Pixel 8 Pro, which is five years behind Apple.
  • Reply 5 of 11
    danox said:
    and yet...  When my iPhone does a voice-text transcription of voicemail message, it spells my name wrong.  A whole lot of intelligence there!

    Apple unlike their competition is working towards on device AI solutions and not the ET phone home AI. What’s implemented on the Apple Vision Pro begins with the M2/M3 chip working in conjunction with the R1 chip which knows what to do when you look at something or when you slightly rub your fingers together to execute a command, in short, it won’t be Googles (video boost) implementation where information is sent to Google HQ, and then bounced back to you after being scrubbed of personal information, I think the AI path Apple is taking is more private to the end user (on device).

    On device, AI is more complicated because it actually requires a SOC chip combined with OS level software, and conversely requires a hell of a lot more time to design, engineer, and develop, designing something that works on a super computer back home at Google/Microsoft HQ does not which is why Google resorted to that method to cover for the shortcomings of the Tensor SOC in the Pixel 8 Pro which is five years behind Apple.
    Let us say your phone is set up with the owner contact information being Brad Lee Smith. If somebody leaves a voicemail message starting with "Hello Brad Lee", the transcription should not read "Hello Bradley". This is not a particularly complicated issue to understand. The old Knowledge Navigator concept from 35 years ago showed this type of "AI" in action.
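    As a rough sketch of that idea (not a description of how Live Voicemail is actually built): iOS's Speech framework already lets an app bias recognition toward known names by seeding a request with contextualStrings, and it can keep recognition on-device where the hardware supports it. The function and sample names below are hypothetical.

    ```swift
    import Speech

    // Rough sketch: bias a voicemail transcription toward names from the user's
    // contacts and keep recognition on-device where supported. The function name
    // and sample names are hypothetical; this is not how Apple's feature works.
    func transcribeVoicemail(at audioURL: URL, contactNames: [String]) {
        guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en_US")),
              recognizer.isAvailable else { return }

        let request = SFSpeechURLRecognitionRequest(url: audioURL)
        // Nudge the recognizer toward "Brad Lee" rather than "Bradley".
        request.contextualStrings = contactNames
        if recognizer.supportsOnDeviceRecognition {
            request.requiresOnDeviceRecognition = true   // audio never leaves the phone
        }

        _ = recognizer.recognitionTask(with: request) { result, error in
            if let result = result, result.isFinal {
                print(result.bestTranscription.formattedString)
            } else if let error = error {
                print("Transcription failed: \(error.localizedDescription)")
            }
        }
    }

    // Usage (assumes speech-recognition permission has already been granted):
    // transcribeVoicemail(at: voicemailFileURL, contactNames: ["Brad Lee", "Brad Lee Smith"])
    ```
    Seeding the recognizer with the two-word name makes "Brad Lee" more likely to win out over "Bradley", without any audio leaving the device.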
  • Reply 6 of 11
    avon b7 Posts: 7,703 member
    danox said:
    and yet...  When my iPhone does a voice-text transcription of voicemail message, it spells my name wrong.  A whole lot of intelligence there!

    Apple unlike their competition is working towards on device AI solutions and not the ET phone home AI. What’s implemented on the Apple Vision Pro begins with the M2/M3 chip working in conjunction with the R1 chip which knows what to do when you look at something or when you slightly rub your fingers together to execute a command, in short, it won’t be Googles (video boost) implementation where information is sent to Google HQ, and then bounced back to you after being scrubbed of personal information, I think the AI path Apple is taking is more private to the end user (on device).

    On device, AI is more complicated because it actually requires a SOC chip combined with OS level software, and conversely requires a hell of a lot more time to design, engineer, and develop, designing something that works on a super computer back home at Google/Microsoft HQ does not which is why Google resorted to that method to cover for the shortcomings of the Tensor SOC in the Pixel 8 Pro which is five years behind Apple.
    On-device AI has been around on phones for years now, from Apple and every major manufacturer alike. All that has happened since then is that it can do more, faster, and with improved accuracy.

    Everyone is moving in the same direction, and although there are good reasons for keeping some things on-device (privacy/latency), there are also good reasons to shift things into the edge/cloud.
  • Reply 7 of 11
    radarthekat Posts: 3,843 moderator
    avon b7 said:
    danox said:
    and yet...  When my iPhone does a voice-text transcription of voicemail message, it spells my name wrong.  A whole lot of intelligence there!

    Apple unlike their competition is working towards on device AI solutions and not the ET phone home AI. What’s implemented on the Apple Vision Pro begins with the M2/M3 chip working in conjunction with the R1 chip which knows what to do when you look at something or when you slightly rub your fingers together to execute a command, in short, it won’t be Googles (video boost) implementation where information is sent to Google HQ, and then bounced back to you after being scrubbed of personal information, I think the AI path Apple is taking is more private to the end user (on device).

    On device, AI is more complicated because it actually requires a SOC chip combined with OS level software, and conversely requires a hell of a lot more time to design, engineer, and develop, designing something that works on a super computer back home at Google/Microsoft HQ does not which is why Google resorted to that method to cover for the shortcomings of the Tensor SOC in the Pixel 8 Pro which is five years behind Apple.
    On-device AI has been around on phones for years now. Phones from Apple and every major manufacturer. All that has happened since then is that it can do more, faster and with improved accuracy.

    Everyone is moving in the same direction and although there are good reasons for keeping some things on-device (privacy/latency) , there are also good reasons to shift things into the edge/cloud. 
    Evolution has enabled my brain to make many decisions, take many actions and integrate much of the sensor data I take in without having to consult other minds (the analog analogy of the cloud).  I think Apple keeps this in mind during its development of on-device AI.  
  • Reply 8 of 11
    gatorguy Posts: 24,214 member
    danox said:
    and yet...  When my iPhone does a voice-text transcription of voicemail message, it spells my name wrong.  A whole lot of intelligence there!

    Apple unlike their competition is working towards on device AI solutions and not the ET phone home AI. What’s implemented on the Apple Vision Pro begins with the M2/M3 chip working in conjunction with the R1 chip which knows what to do when you look at something or when you slightly rub your fingers together to execute a command, in short, it won’t be Googles (video boost) implementation where information is sent to Google HQ, and then bounced back to you after being scrubbed of personal information, I think the AI path Apple is taking is more private to the end user (on device).

    On device, AI is more complicated because it actually requires a SOC chip combined with OS level software, and conversely requires a hell of a lot more time to design, engineer, and develop, designing something that works on a super computer back home at Google/Microsoft HQ does not which is why Google resorted to that method to cover for the shortcomings of the Tensor SOC in the Pixel 8 Pro which is five years behind Apple.
    I'm not getting involved in the arguments other than to note that in many areas Google was out ahead of Apple in moving to on-device processing, and still is. If you truly want to know where things stand, it's not all that difficult to research.
  • Reply 9 of 11
    Xed Posts: 2,575 member
    Remember when Tim Cook said that services were going to see a push? I think that was about a decade ago, and look at it now. I'm getting that same vibe from this.
  • Reply 10 of 11
    avon b7 Posts: 7,703 member
    avon b7 said:
    danox said:
    and yet...  When my iPhone does a voice-text transcription of voicemail message, it spells my name wrong.  A whole lot of intelligence there!

    Apple unlike their competition is working towards on device AI solutions and not the ET phone home AI. What’s implemented on the Apple Vision Pro begins with the M2/M3 chip working in conjunction with the R1 chip which knows what to do when you look at something or when you slightly rub your fingers together to execute a command, in short, it won’t be Googles (video boost) implementation where information is sent to Google HQ, and then bounced back to you after being scrubbed of personal information, I think the AI path Apple is taking is more private to the end user (on device).

    On device, AI is more complicated because it actually requires a SOC chip combined with OS level software, and conversely requires a hell of a lot more time to design, engineer, and develop, designing something that works on a super computer back home at Google/Microsoft HQ does not which is why Google resorted to that method to cover for the shortcomings of the Tensor SOC in the Pixel 8 Pro which is five years behind Apple.
    On-device AI has been around on phones for years now. Phones from Apple and every major manufacturer. All that has happened since then is that it can do more, faster and with improved accuracy.

    Everyone is moving in the same direction and although there are good reasons for keeping some things on-device (privacy/latency) , there are also good reasons to shift things into the edge/cloud. 
    Evolution has enabled my brain to make many decisions, take many actions and integrate much of the sensor data I take in without having to consult other minds (the analog analogy of the cloud).  I think Apple keeps this in mind during its development of on-device AI.  
    Apple isn't doing anything different to anyone else in terms of on-device AI development.

    Sensor data processing depends on many factors. Sometimes it will be preferable to have it processed on device or spread over multiple devices (and that might not even be a phone). Other times it will be edge or cloud or a combination of both. That will depend on the task at hand.

    Let's not forget either that in certain use cases for 5G, the network itself is the sensor. 

    I've always had the impression that Apple's on-device AI was behind the competition in its scope, simply because a lot of what has appeared at keynotes has come after the same features were already implemented on other devices.
  • Reply 11 of 11
    auxio Posts: 2,728 member
    avon b7 said:
    avon b7 said:
    danox said:
    and yet...  When my iPhone does a voice-text transcription of voicemail message, it spells my name wrong.  A whole lot of intelligence there!

    Apple unlike their competition is working towards on device AI solutions and not the ET phone home AI. What’s implemented on the Apple Vision Pro begins with the M2/M3 chip working in conjunction with the R1 chip which knows what to do when you look at something or when you slightly rub your fingers together to execute a command, in short, it won’t be Googles (video boost) implementation where information is sent to Google HQ, and then bounced back to you after being scrubbed of personal information, I think the AI path Apple is taking is more private to the end user (on device).

    On device, AI is more complicated because it actually requires a SOC chip combined with OS level software, and conversely requires a hell of a lot more time to design, engineer, and develop, designing something that works on a super computer back home at Google/Microsoft HQ does not which is why Google resorted to that method to cover for the shortcomings of the Tensor SOC in the Pixel 8 Pro which is five years behind Apple.
    On-device AI has been around on phones for years now. Phones from Apple and every major manufacturer. All that has happened since then is that it can do more, faster and with improved accuracy.

    Everyone is moving in the same direction and although there are good reasons for keeping some things on-device (privacy/latency) , there are also good reasons to shift things into the edge/cloud. 
    Evolution has enabled my brain to make many decisions, take many actions and integrate much of the sensor data I take in without having to consult other minds (the analog analogy of the cloud).  I think Apple keeps this in mind during its development of on-device AI.  
    Apple isn't doing anything different to anyone else in terms of on-device AI development.

    Sensor data processing depends on many factors. Sometimes it will be preferable to have it processed on device or spread over multiple devices (and that might not even be a phone). Other times it will be edge or cloud or a combination of both. That will depend on the task at hand.

    Let's not forget either that in certain use cases for 5G, the network itself is the sensor. 

    I've always had the impression that Apple's on device AI was behind the competition in its scope simply because a lot of what has appeared at keynotes has been posterior to already implemented instances on other devices. 
    The main reason Apple would be behind is that they put privacy first. Machine learning (which is a better term than the vague "AI") requires massive amounts of data to get to the point of being accurate enough at any given task. The vast majority of tech companies simply check whether what they're doing in regard to data harvesting is illegal, and if not, are happy to go forward with it, especially if they're going to profit from it. They'll wait for legislators to eventually catch up and put laws around it, then often find ways to skirt those laws.

    Apple, on the other hand, tends to look at the bigger picture: human rights, and the effect of corporations (and often associated governments) having all of this information about people. Is it being used to manipulate human behaviour, and if so, where is that manipulation for profit/power taking us?

    I often look back on the past (before my time, even) when the people creating new technology were also studying philosophy and history, thinking about human nature and human rights, and considering the direction we're going based on a fundamental understanding of, and compassion for, humanity. I benefitted from the thinking of that generation growing up, and I hope my generation doesn't take us backwards by forgetting those fundamentals.

    edited November 2023