How Apple is already using machine learning and AI in iOS

Posted in iOS, edited November 2023

Apple may not be as flashy as other companies in adopting artificial intelligence features, nor does it have as much drama surrounding what it does. Still, the company already has a lot of smarts scattered throughout iOS and macOS. Here's where.

Apple's Siri



Apple does not go out of its way to name-drop "artificial intelligence" or AI, but the company isn't avoiding the technology. "Machine learning" has become Apple's catch-all term for its AI initiatives.

Apple uses artificial intelligence and machine learning in iOS and macOS in several noticeable ways.

What is machine learning?



It has been several years since Apple started using machine learning in iOS and other platforms. The first real use case was Apple's software keyboard on the iPhone.

Apple utilized predictive machine learning to understand which letter a user was hitting, which boosted accuracy. The algorithm also aimed to predict what word the user would type next.

Machine learning, or ML, describes systems that learn and adapt without explicit instructions, typically by identifying patterns in data and using those patterns to produce results.

It is a popular subfield of artificial intelligence, and Apple has been incorporating ML-based features for several years.
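The early keyboard prediction mentioned above can be illustrated with a toy pattern-learner: count which word tends to follow which in observed text, then suggest the most frequent successor. This is a deliberately simplified sketch, not Apple's actual model:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequently observed next word, or None if unseen."""
    candidates = model.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

model = train_bigrams("set a timer set a reminder set a timer")
print(predict_next(model, "a"))  # "timer" follows "a" twice, "reminder" once
```

Real keyboards use far richer statistics and, as of iOS 17, a neural language model, but the core idea of learning patterns from data rather than following hand-written rules is the same.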

Places with machine learning



In 2023, Apple is using machine learning in just about every nook and cranny of iOS. It is present in how users search for photos, interact with Siri, see suggestions for events, and much, much more.

On-device machine learning benefits the end user when it comes to data security and privacy, since it allows Apple to keep important information on the device rather than relying on the cloud.

To help boost machine learning and all of the other key automated processes in iPhones, Apple made the Neural Engine. It launched with the iPhone's A11 Bionic processor to help with some camera functions, as well as Face ID.

Siri



Siri isn't itself an artificial intelligence, but it relies on AI systems to function. Siri taps into an on-device deep neural network, or DNN, and machine learning to parse queries and offer responses.

Siri says hi



Siri can handle various voice- and text-based queries, ranging from simple questions to controlling built-in apps. Users can ask Siri to play music, set a timer, check the weather, and much more.

TrueDepth camera and Face ID



Apple introduced the TrueDepth camera and Face ID with the launch of the iPhone X. The hardware system can project 30,000 infrared dots to create a depth map of the user's face. The dot projection is paired with a 2D infrared scan as well.

That information is stored on-device, and the iPhone uses machine learning and the DNN to parse every single scan of the user's face when they unlock their device.
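Conceptually, the matching step resembles comparing a stored numeric representation (an embedding) of the enrolled face against one computed from the fresh scan. Apple's actual pipeline is proprietary; a minimal cosine-similarity check, with made-up vectors and threshold, might look like:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two equal-length embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def unlocks(enrolled, scan, threshold=0.95):
    """Hypothetical decision rule: unlock only if the scan is close enough."""
    return cosine_similarity(enrolled, scan) >= threshold

enrolled = [0.9, 0.1, 0.4]       # illustrative enrolled-face embedding
same_person = [0.88, 0.12, 0.41]  # a slightly different scan of the same face
stranger = [0.1, 0.9, 0.2]        # a very different face
```

The real system compares much higher-dimensional representations and adapts to changes such as glasses or facial hair, but the compare-against-a-stored-template shape of the problem is the same.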

Photos



This goes beyond iOS, as the stock Photos app is available on macOS and iPadOS as well. This app uses several machine learning algorithms to help with key built-in features, including photo and video curation.

Apple's Photos app using machine learning



Facial recognition in images is possible thanks to machine learning. The People album allows searching for identified people and curating images.

An on-device knowledge graph powered by machine learning can learn a person's frequently visited places, associated people, events, and more. It can use this gathered data to automatically create curated collections of photos and videos called "Memories."
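A curated collection can be thought of as grouping a photo library by shared metadata such as place and month. The field names below are invented for illustration; Apple's knowledge graph is far richer:

```python
from collections import defaultdict

def build_memories(photos):
    """Group photos into (place, year-month) buckets, a crude 'Memories' stand-in."""
    memories = defaultdict(list)
    for photo in photos:
        key = (photo["place"], photo["date"][:7])  # e.g. ("Paris", "2023-06")
        memories[key].append(photo["file"])
    return dict(memories)

library = [
    {"file": "IMG_1.jpg", "place": "Paris", "date": "2023-06-02"},
    {"file": "IMG_2.jpg", "place": "Paris", "date": "2023-06-03"},
    {"file": "IMG_3.jpg", "place": "Home",  "date": "2023-07-01"},
]
memories = build_memories(library)  # two buckets: the Paris trip and July at home
```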

The Camera app



Apple works to improve the camera experience for iPhone users regularly. Part of that goal is met with software and machine learning.

Apple's Deep Fusion optimizes for detail and low noise in photos



The Neural Engine boosts the camera's capabilities with features like Deep Fusion. It launched with the iPhone 11 and is present in newer iPhones.

Deep Fusion is a type of neural image processing. When taking a photo, the camera captures a total of nine shots. There are two sets of four shots taken just before the shutter button is pressed, followed by one longer exposure shot when the button is pressed.

The machine learning process, powered by the Neural Engine, then kicks in to combine the best parts of those shots. The result leans toward sharpness and color accuracy.
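Apple hasn't published how Deep Fusion merges its frames, but one ingredient of any burst pipeline, scoring frames for sharpness, can be sketched by measuring edge strength in a grayscale frame:

```python
def sharpness(frame):
    """Score a grayscale frame (a list of pixel rows) by summed horizontal
    gradient magnitude: strong edges suggest fine detail, weak edges blur."""
    return sum(
        abs(row[i + 1] - row[i])
        for row in frame
        for i in range(len(row) - 1)
    )

def pick_sharpest(frames):
    """Return the frame with the strongest edges, a crude proxy for detail."""
    return max(frames, key=sharpness)

blurry = [[10, 11, 12], [11, 12, 13]]   # gentle gradients, low detail
sharp  = [[0, 255, 0], [255, 0, 255]]   # hard edges, high detail
best = pick_sharpest([blurry, sharp])   # selects the sharp frame
```

Deep Fusion goes much further, merging regions from multiple exposures rather than picking one winner, but frame scoring of this kind is a recognizable building block.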

Portrait mode also utilizes machine learning. While high-end iPhone models rely on hardware elements to help separate the user from the background, the iPhone SE of 2020 relied solely on machine learning to get a proper portrait blur effect.

Calendar



Machine learning algorithms also help users automate everyday tasks. ML makes it possible to get smart suggestions for events the user might be interested in.

For instance, if someone sends an iMessage that includes a date, or even just a suggestion of doing something, iOS can offer to add an event to the Calendar app. A few taps are all it takes to save the event so it's easy to remember.
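The behavior resembles Apple's data detectors, whose internals are not public. A crude stand-in using a regular expression shows the idea of spotting a date in message text and proposing an event:

```python
import re

# Matches simple dates like "June 5" or "6/5" -- far less robust than
# Apple's real data detectors.
DATE_PATTERN = re.compile(
    r"\b(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\.?\s+\d{1,2}\b"
    r"|\b\d{1,2}/\d{1,2}\b"
)

def suggest_event(message):
    """Return a suggested calendar entry if the message mentions a date."""
    match = DATE_PATTERN.search(message)
    if match:
        return {"title": message, "date_text": match.group()}
    return None

print(suggest_event("Dinner on June 5?"))
```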

There are more machine learning-based features coming to iOS 17:

The stock keyboard and iOS 17



One of Apple's first use cases with machine learning was the keyboard and autocorrect, and it's getting improved with iOS 17. Apple announced in 2023 that the stock keyboard will now utilize a "transformer language model," significantly boosting word prediction.

The transformer language model is a machine learning system that improves predictive accuracy as the user types. The software keyboard also learns frequently typed words, including swear words.
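"Transformer" refers to an architecture built on attention: each position scores the relevance of every other position and blends their representations by those weights. A minimal scaled dot-product attention over toy vectors, in plain Python (an illustration of the mechanism, not Apple's implementation):

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention for one query over a few key/value vectors."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    # Softmax turns the raw scores into weights that sum to 1.
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # The output is the weighted blend of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

q = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out = attention(q, keys, values)  # leans toward the first value vector
```

In a language model, the queries, keys, and values come from learned projections of the typed words, which is what lets the keyboard weigh the whole sentence when predicting the next word rather than just the previous one.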

The new Journal app and iOS 17



Apple introduced a brand-new Journal app when it announced iOS 17 at WWDC 2023. This new app will allow users to reflect on past events and journal as much as they want in a proprietary app.

Apple's stock Journal app



Apple is using machine learning to help inspire users as they add entries. These suggestions can be pulled from various resources, including the Photos app, recent activity, recent workouts, people, places, and more.

This feature is expected to arrive with the launch of iOS 17.1.

Apple will improve dictation and language translation with machine learning as well.

Notable mentions and beyond iOS



Machine learning is also present in watchOS with features that help track sleep, hand washing, heart health, and more.

As mentioned above, Apple has been using machine learning for years, which means the company has technically been using artificial intelligence for years.

People who think Apple is lagging behind Google and Microsoft are only considering ChatGPT and similar systems. The forefront of public perception regarding AI in 2023 is occupied by Microsoft's AI-powered Bing and Google's Bard.

Apple is going to continue to rely on machine learning for the foreseeable future, finding new ways to implement it and enhance user-facing features.

It is also rumored that Apple is developing its own ChatGPT-like experience, which could boost Siri in a big way at some point. In February 2023, Apple held a summit focused entirely on artificial intelligence, a clear sign it isn't moving away from the technology.

Apple Car render



Apple can build on systems it's introducing with iOS 17, like the transformer language model for autocorrect, by expanding that functionality beyond the keyboard. Siri is just one avenue where Apple's continued work with machine learning can deliver user-facing value.

Apple's work in artificial intelligence is likely leading to the Apple Car. Whether or not the company actually releases a vehicle, the autonomous system designed for automobiles will need a brain.


Comments

  • Reply 1 of 18
mayfly Posts: 385 member
    AI this, AI that, AI all over the place.
    Wonder when RobinHood finds a dead AI company to pump and dump.
  • Reply 2 of 18
    I’m wondering if the Apple Car will have a similar fate as the Apple Television Set. It’s hard for me to see how they can innovate on building cars. We already have EV and self-driving, and we already have big digital displays. Cars have been innovating like crazy in the last few years. Instead of building the actual vehicles, Apple should license the software instead. Like CarPlay, but much more advanced. I can’t imagine buying a car built by Foxconn.
  • Reply 3 of 18
    "Machine learning" is a much more accurate term than "AI". Technologies like ChatGPT and "AI art" or whatever are not in any meaningful way "artificial intelligence". They just apply algorithms to massive amounts of data to spit out something that looks real, but still falls straight into the uncanny valley, and these technologies have no sense of awareness of what they are doing. They don't make decisions; they wouldn't be able to pass a Turing Test, much less solve a Trolley Problem.
    The current "AI" fad of this era is, in my mind, analogous to what "bleeding edge" tech companies of the late 90s and early 00s thought the internet was going to become, with 3D VR-style, immersive "cyberspace" experiences, where our avatars walk around in rendered worlds (which can currently happen in gaming environments, but is not part of our normal everyday lives). Instead, our internet experiences are mediated by browsers on 2D screens. Not exactly Ready Player One, or Lawnmower Man, or Johnny Mnemonic.
  • Reply 4 of 18
avon b7 Posts: 8,019 member
    "Machine learning" is a much more accurate term than "AI". Technologies like ChatGPT and "AI art" or whatever are not in any meaningful way "artificial intelligence". They just apply algorithms to massive amounts of data to spit out something that looks real, but still falls straight into the uncanny valley, and these technologies have no sense of awareness of what they are doing. They don't make decisions; they wouldn't be able to pass a Turing Test, much less solve a Trolley Problem.
    The current "AI" fad of this era is, in my mind, analogous to what "bleeding edge" tech companies of the late 90s and early 00s thought the internet was going to become, with 3D VR-style, immersive "cyberspace" experiences, where our avatars walk around in rendered worlds (which can currently happen in gaming environments, but is not part of our normal everyday lives). Instead, our internet experiences are mediated by browsers on 2D screens. Not exactly Ready Player One, or Lawnmower Man, or Johnny Mnemonic.
    AI, today, is simply a broad umbrella term which covers a whole range of fields, among them, machine learning, deep learning etc.

    The US government says this:

    "The term ‘artificial intelligence’ means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments" 
  • Reply 5 of 18
    Apple is not going ‘Look at ME! Look at ME!’ 
    Apple tries to solve a ‘real problem’…to make the technology easier to use.
    That is why a ‘foldable phone’ will never see the light of day.
edited September 2023
  • Reply 6 of 18
    I’m wondering if the Apple Car will have a similar fate as the Apple Television Set. It’s hard for me to see how they can innovate on building cars. We already have EV and self-driving, and we already have big digital displays. Cars have been innovating like crazy in the last few years. Instead of building the actual vehicles, Apple should license the software instead. Like CarPlay, but much more advanced. I can’t imagine buying a car built by Foxconn.
    Getting into the automaker business is the worst thing Apple could do. Think of how many American carmakers there were in say, 1930: more than 50. Now there are basically 3, if you include Tesla, but not the boutique makers (Fiat Chrysler/Stellantis is no longer an American carmaker). And many of those companies were GREAT automakers. Think Auburn, Cord, Duesenberg, Packard, Studebaker, Hudson, American Motors, Kaiser, DeLorean, Checker and many others.

    These companies were all founded by people with extensive experience in the segment prior to startup, and still failed. It's my belief that Apple would suffer the same fate if they "diWORSEified" into automaking. It could be significantly detrimental to the entire company if they don't cut their losses, from the enormous startup costs to the overhead and payroll costs. Better would be to license Apple's proprietary technologies to existing automakers, and charge them up the wazoo for it!
  • Reply 7 of 18

    The forefront of public perception regarding AI in 2023 is occupied by Microsoft's AI-powered Bing and Google's Bard.

    My experience is that ChatGPT is recognised and mentioned much more widely than these two.

    The hype about AI is overblown. When I first learned about decision trees (make a list of possible outcomes, assign them a probability and a cost, then calculate to find the "best" option) I was taught that the trick is to get the estimated probability as accurate as possible - that with experience your estimates will get more accurate. The current buzz is happening because people have figured out a way to analyse huge amounts of data and build a bunch of probability lookup tables in a short enough period of time to be feasible and at a low enough cost to be justifiable.

    At the end of the day, it's all just computation. The algorithms are not too complicated but the steps are computationally intensive and to understand how it works you need to be comfortable with matrix multiplication and statistics (e.g. this YouTube Video). The thing I really struggle to wrap my head around is why the machine doesn't have to show how it arrived at an answer - all of my teachers were very particular about that part of the process.
  • Reply 8 of 18
mayfly Posts: 385 member

    The forefront of public perception regarding AI in 2023 is occupied by Microsoft's AI-powered Bing and Google's Bard.

    My experience is that ChatGPT is recognised and mentioned much more widely than these two.

    The hype about AI is overblown. When I first learned about decision trees (make a list of possible outcomes, assign them a probability and a cost, then calculate to find the "best" option) I was taught that the trick is to get the estimated probability as accurate as possible - that with experience your estimates will get more accurate. The current buzz is happening because people have figured out a way to analyse huge amounts of data and build a bunch of probability lookup tables in a short enough period of time to be feasible and at a low enough cost to be justifiable.

    At the end of the day, it's all just computation. The algorithms are not too complicated but the steps are computationally intensive and to understand how it works you need to be comfortable with matrix multiplication and statistics (e.g. this YouTube Video). The thing I really struggle to wrap my head around is why the machine doesn't have to show how it arrived at an answer - all of my teachers were very particular about that part of the process.
The hype may be overblown, but you have to examine the underlying reason for that hype. It's not based on what we call artificial intelligence today. It's based on projections of the possibilities and consequences if a true artificial intelligence emerges. That would be a recursive software system that could change its own programming in order to improve itself. That would mean the program has a will of its own, and the means to impose that will. That's what the hype is about. And the implications are both inspiring and deeply concerning. Sure, guardrails can be put into place. But that requires the builders of those guardrails to think of literally every possible scenario. You don't just have to be right most of the time. One oversight, just one, in such a recursive model, leaves it open to possibilities we can't even imagine.

    The great futurist Isaac Asimov proposed the famous "Three Laws of Robotics," and countless writers and movie makers have demonstrated how ineffective those guardrails can be against an advanced artificial intelligence.
  • Reply 9 of 18
danvm Posts: 1,475 member
    geekmee said:
    Apple is not going ‘Look at ME! Look at ME!’ 
    Apple tries to solve a ‘real problem’…to make the technology easier to use.
    That is why a ‘foldable phone’ will never see the light of day.
    You should see how MS and Google are using AI in Office and Workspace, and how they solve real problems. 



  • Reply 10 of 18
AppleZulu Posts: 2,180 member
    I’m wondering if the Apple Car will have a similar fate as the Apple Television Set. It’s hard for me to see how they can innovate on building cars. We already have EV and self-driving, and we already have big digital displays. Cars have been innovating like crazy in the last few years. Instead of building the actual vehicles, Apple should license the software instead. Like CarPlay, but much more advanced. I can’t imagine buying a car built by Foxconn.
    You’re right about the challenges for innovation in this category. Apple is not likely to license car software to some other company, however. First, CarPlay isn’t really Apple licensing its software to other companies. It’s a software shell in car radios that allows CarPlay running on your iPhone to display in the car radio, and to receive user input from that radio. The software, however is all on the iPhone, where Apple has complete control of it. The screen in the car is just a dumb terminal. 

    Second, licensing major software (e.g. an operating system) to operate another company’s hardware is how Microsoft distributes Windows and Google distributes Android. It is at the very core of Apple’s business model that this is not how Apple handles macOS or iOS. Apple writes its operating systems exclusively for and in concert with development of its own hardware. It would be a profound and fundamental change for them to alter that practice for a new product category. It’s just not how Apple operates. 
  • Reply 11 of 18
eriamjh Posts: 1,762 member
    I have noticed that AirPods Siri is now describing pictures sent to me in messages when I am not using my phone.  
    She’s pretty accurate.   

    However, she is still dumb as a rock when I tell my watch to start a workout while wearing AirPods and both the AirPods and the watch hear me in addition to my phone, then I get “you need an app for that” on my AirPods, my phone opens up some stupid workout app, and my watch then says “could you say that again?”



  • Reply 12 of 18
danox Posts: 3,407 member
    danvm said:
    geekmee said:
    Apple is not going ‘Look at ME! Look at ME!’ 
    Apple tries to solve a ‘real problem’…to make the technology easier to use.
    That is why a ‘foldable phone’ will never see the light of day.
    You should see how MS and Google are using AI in Office and Workspace, and how they solve real problems. 



    Creepy…..
  • Reply 13 of 18
tundraboy Posts: 1,914 member
    mayfly said:

    The forefront of public perception regarding AI in 2023 is occupied by Microsoft's AI-powered Bing and Google's Bard.

    My experience is that ChatGPT is recognised and mentioned much more widely than these two.

    The hype about AI is overblown. When I first learned about decision trees (make a list of possible outcomes, assign them a probability and a cost, then calculate to find the "best" option) I was taught that the trick is to get the estimated probability as accurate as possible - that with experience your estimates will get more accurate. The current buzz is happening because people have figured out a way to analyse huge amounts of data and build a bunch of probability lookup tables in a short enough period of time to be feasible and at a low enough cost to be justifiable.

    At the end of the day, it's all just computation. The algorithms are not too complicated but the steps are computationally intensive and to understand how it works you need to be comfortable with matrix multiplication and statistics (e.g. this YouTube Video). The thing I really struggle to wrap my head around is why the machine doesn't have to show how it arrived at an answer - all of my teachers were very particular about that part of the process.
The hype may be overblown, but you have to examine the underlying reason for that hype. It's not based on what we call artificial intelligence today. It's based on projections of the possibilities and consequences if a true artificial intelligence emerges. That would be a recursive software system that could change its own programming in order to improve itself. That would mean the program has a will of its own, and the means to impose that will. That's what the hype is about. And the implications are both inspiring and deeply concerning. Sure, guardrails can be put into place. But that requires the builders of those guardrails to think of literally every possible scenario. You don't just have to be right most of the time. One oversight, just one, in such a recursive model, leaves it open to possibilities we can't even imagine.

    The great futurist Isaac Asimov proposed the famous "Three Laws of Robotics," and countless writers and movie makers have demonstrated how ineffective those guardrails can be against an advanced artificial intelligence.
     "That would be a recursive software system that could change its own programming in order to improve itself."

    Search on the term "ouroboros".  That creature is the apt metaphor for what you are describing, which is a logical impossibility masquerading as a logical paradox.  There are certain self-referential systems or operations that are just plain impossible.  For example, an ER doc suturing a laceration on his thigh is possible.  But a heart transplant surgeon performing transplant surgery on himself is not.
  • Reply 14 of 18
danvm Posts: 1,475 member
    danox said:
    danvm said:
    geekmee said:
    Apple is not going ‘Look at ME! Look at ME!’ 
    Apple tries to solve a ‘real problem’…to make the technology easier to use.
    That is why a ‘foldable phone’ will never see the light of day.
    You should see how MS and Google are using AI in Office and Workspace, and how they solve real problems. 



    Creepy…..
    Useful...
  • Reply 15 of 18
    Actually Apple has been using machine learning since iPhone 6s.

    Remember when Apple released the 6S and the world poo pooed it because “who needs a 64 bit processor in a phone?”.

    What they failed to notice was the Camera app which could take 100 photos, process them, take the best parts of each photo, then stitch them together to make the best photos ever seen on a mobile device. All in the time it takes to go click.

    That sort of processing needed a powerful processor like say… a 64 bit processor to achieve.

    Apple hasn’t looked back since.
edited November 2023
  • Reply 16 of 18
danox Posts: 3,407 member
    Actually Apple has been using machine learning since iPhone 6s.

    Remember when Apple released the 6S and the world poo pooed it because “who needs a 64 bit processor in a phone?”.

    What they failed to notice was the Camera app which could take 100 photos, process them, take the best parts of each photo, then stitch them together to make the best photos ever seen on a mobile device. All in the time it takes to go click.

    That sort of processing needed a powerful processor like say… a 64 bit processor to achieve.

    Apple hasn’t looked back since.
    On the iPhone photo/video processing without calling HQ for the answer like a Google Pixel 8 Pro, which leaves the end user viewing a 3 minute ad from Spotify.
  • Reply 17 of 18
    If Siri is learning she’s a damn slow learner.
  • Reply 18 of 18
    tundraboy said:
    mayfly said:

    The forefront of public perception regarding AI in 2023 is occupied by Microsoft's AI-powered Bing and Google's Bard.

    My experience is that ChatGPT is recognised and mentioned much more widely than these two.

    The hype about AI is overblown. When I first learned about decision trees (make a list of possible outcomes, assign them a probability and a cost, then calculate to find the "best" option) I was taught that the trick is to get the estimated probability as accurate as possible - that with experience your estimates will get more accurate. The current buzz is happening because people have figured out a way to analyse huge amounts of data and build a bunch of probability lookup tables in a short enough period of time to be feasible and at a low enough cost to be justifiable.

    At the end of the day, it's all just computation. The algorithms are not too complicated but the steps are computationally intensive and to understand how it works you need to be comfortable with matrix multiplication and statistics (e.g. this YouTube Video). The thing I really struggle to wrap my head around is why the machine doesn't have to show how it arrived at an answer - all of my teachers were very particular about that part of the process.
The hype may be overblown, but you have to examine the underlying reason for that hype. It's not based on what we call artificial intelligence today. It's based on projections of the possibilities and consequences if a true artificial intelligence emerges. That would be a recursive software system that could change its own programming in order to improve itself. That would mean the program has a will of its own, and the means to impose that will. That's what the hype is about. And the implications are both inspiring and deeply concerning. Sure, guardrails can be put into place. But that requires the builders of those guardrails to think of literally every possible scenario. You don't just have to be right most of the time. One oversight, just one, in such a recursive model, leaves it open to possibilities we can't even imagine.

    The great futurist Isaac Asimov proposed the famous "Three Laws of Robotics," and countless writers and movie makers have demonstrated how ineffective those guardrails can be against an advanced artificial intelligence.
     "That would be a recursive software system that could change its own programming in order to improve itself."

    Search on the term "ouroboros".  That creature is the apt metaphor for what you are describing, which is a logical impossibility masquerading as a logical paradox.  There are certain self-referential systems or operations that are just plain impossible.  For example, an ER doc suturing a laceration on his thigh is possible.  But a heart transplant surgeon performing transplant surgery on himself is not.
    There was research into software systems that could change their own code back in the 1980s and 90s. "Evolutionary programming" was one of the catch-phrases, IIRC. Point the system at a data source of algorithms (or, usually, pre-coded functions in various libraries), let it spin for a few million iterations, and look at the code that was generated once optimum performance was attained.

    The thing is, though, the definition of "optimum performance" was never decided by the software - it was simply following rules devised by the humans who set up the experiments. There is no agency, no creativity, just brute force iteration over every possible combination of pre-existing ideas. This is useful, but it is not intelligence.

    The chance of true intelligence emerging from the approaches taken thus far is zero. All we are building is a mimicry apparatus with a dogged persistence and speed that is beyond human capacity.

    The hype is completely unfounded.