Apple's iOS 18 AI will be on-device and privacy-preserving, not server-side

Posted in iOS, edited April 29

A new report on Sunday reiterates that Apple's AI push in iOS 18 will focus on privacy, with processing done directly on the iPhone rather than through cloud services.

[Image: Siri's colorful sphere combined with the ChatGPT logo]
Siri is expected to get a big upgrade with iOS 18



Over the past few months, we've heard a lot about Apple's endeavors in artificial intelligence and the features it aims to introduce later this year with iOS 18 and macOS 15. Various sources have claimed that Apple will introduce AI-related enhancements.

It looks like the initial batch of features will function without the need for an internet connection.

Bloomberg's Mark Gurman, writing in his weekly Power On newsletter on Sunday, said that the initial set of AI-related features Apple plans to debut with iOS 18 "will work entirely on device." In practice, these AI features would be able to function without an internet connection or any form of cloud-based processing.

AppleInsider has received information from individuals familiar with the matter that suggests the report's claims are accurate. Apple is working on an in-house large language model, or LLM, known internally as "Ajax."

While more advanced features will ultimately require an internet connection, basic text analysis and response generation features should be available offline.

Regarding individual apps, we will likely see improvements to Messages, Safari, Spotlight Search, and Siri. Apple has been testing on-device text-based response generation for a while, meaning this feature will most likely be among those first previewed or released.

More advanced AI-related features and enhancements will still require an internet connection. Apple reportedly discussed licensing Google's and OpenAI's AI technology for use in iOS 18 rather than developing its own online LLM.

The significant increase in popularity of AI projects over the past few years is likely the reason for Apple's apparent interest in artificial intelligence. AI tools have become increasingly accessible to everyday consumers, and issues have arisen in the legal and education sectors.

On-device processes could help eliminate certain controversies found with server-side AI tools. For example, these tools have been known to hallucinate, meaning they make up information confidently.

And, the hallucination problem has only gotten worse, as AI models feed on content generated by other AI models.

With its new "Ajax" LLM, the company likely seeks to build a competitor to the tools currently on the market. By eliminating the requirement for cloud-based processing and improving the quality of text generation, the company could gain a significant advantage over rival AI-powered services and tools.

Apple will reveal its AI plans during WWDC, which starts on June 10.

Rumor Score: Likely



Comments

  • Reply 2 of 28
    22july2013 Posts: 3,575 member
    Apple's upcoming AI might not require a remote server to process its AI logic, but it will require an Internet connection to find many kinds of answers. Contrary to what many people seem to think, LLMs do not contain an entire copy of the Internet within them.
  • Reply 3 of 28
    Wesley Hilliard Posts: 190 member, administrator, moderator, editor
    Apple's upcoming AI might not require a remote server to process its AI logic, but it will require an Internet connection to find many kinds of answers. Contrary to what many people seem to think, LLMs do not contain an entire copy of the Internet within them.
    I think that is a misconception too. People hear "AI" and think "smart search engine." That isn't the case. A good local model can function entirely on device with no need to reference anything from the internet.

    The idea is that the model will be able to perform actions and make decisions without internet connections or privacy violating calls. Instead, the user will be able to perform actions to analyze requests, data, or other input and rely entirely on the logic of the AI.

    Now, if you want to ask a question or have an image generated, that's where server-based AI comes in. And it seems Apple has no interest in developing one, at least not yet.
  • Reply 4 of 28
    What IQ level will the on-device AI have? 40?
  • Reply 5 of 28
    On the topic of AI…
    Prompt: Remove all verbiage from this article.

    ChatGPT:
    Rumors claim Apple's iOS 18 AI features prioritize privacy with on-device processing, except for some advanced functions, including a new large language model called "Ajax." It will enhance Messages, Safari, Spotlight, and Siri with text analysis and response generation. More details will be revealed at WWDC starting June 10.
    edited April 15
  • Reply 6 of 28
    mattinoz Posts: 2,324 member
    It is an interesting analysis, but are they building the Colosseum or, like the app store, expanding the sports at the Olympics?
    If it plays out as expected, I would think they are adding sports to the Olympics (and local sports events).
    Apple's stance seems to be "you all compete to expand what we offer"; Google's is "you all compete to supplant us."

  • Reply 7 of 28
    AppleZulu Posts: 2,011 member
    Apple's upcoming AI might not require a remote server to process its AI logic, but it will require an Internet connection to find many kinds of answers. Contrary to what many people seem to think, LLMs do not contain an entire copy of the Internet within them.
    I think that is a misconception too. People hear "AI" and think "smart search engine." That isn't the case. A good local model can function entirely on device with no need to reference anything from the internet.

    The idea is that the model will be able to perform actions and make decisions without internet connections or privacy violating calls. Instead, the user will be able to perform actions to analyze requests, data, or other input and rely entirely on the logic of the AI.

    Now, if you want to ask a question or have an image generated, that's where server-based AI comes in. And it seems Apple has no interest in developing one, at least not yet.
    I’d revise and extend that a bit. An AI-enhanced Siri could do an oral news summary for you, sourced entirely from content in your Apple News+ app on your device. This is all content to which you have subscribed and has been pulled onto your device before any AI work is done. An on-device AI could then filter through that info, enabling Siri to provide a summary briefing, along with an interaction with the user to select items of interest from that briefing and bookmark the source articles for the user to read later. 

    That would be all on-device AI, but doesn’t preclude use of content already downloaded from the internet. The key is that privacy is maintained by keeping the AI functions and interactions entirely on-device, as opposed to performing the entire operation on a server with the phone only functioning as a dumb terminal. 
  • Reply 8 of 28
    mattinoz Posts: 2,324 member
    AppleZulu said:
    Apple's upcoming AI might not require a remote server to process its AI logic, but it will require an Internet connection to find many kinds of answers. Contrary to what many people seem to think, LLMs do not contain an entire copy of the Internet within them.
    I think that is a misconception too. People hear "AI" and think "smart search engine." That isn't the case. A good local model can function entirely on device with no need to reference anything from the internet.

    The idea is that the model will be able to perform actions and make decisions without internet connections or privacy violating calls. Instead, the user will be able to perform actions to analyze requests, data, or other input and rely entirely on the logic of the AI.

    Now, if you want to ask a question or have an image generated, that's where server-based AI comes in. And it seems Apple has no interest in developing one, at least not yet.
    I’d revise and extend that a bit. An AI-enhanced Siri could do an oral news summary for you, sourced entirely from content in your Apple News+ app on your device. This is all content to which you have subscribed and has been pulled onto your device before any AI work is done. An on-device AI could then filter through that info, enabling Siri to provide a summary briefing, along with an interaction with the user to select items of interest from that briefing and bookmark the source articles for the user to read later. 

    That would be all on-device AI, but doesn’t preclude use of content already downloaded from the internet. The key is that privacy is maintained by keeping the AI functions and interactions entirely on-device, as opposed to performing the entire operation on a server with the phone only functioning as a dumb terminal. 
    This concept could potentially be applied to server-based AI as well. For instance, AI image generation weighting could be based on a user's own iCloud photos library or selected sub-library. The weighting training, in effect, makes the feed data private from the server with the grunt needed to run the image generators.
  • Reply 9 of 28
    avon b7 Posts: 7,711 member
    Sounds like a more marketing focused play to push the privacy and latency angles. Assuming the rumour is true. 

    That's certainly one way to do it but isn't without its own issues. Limited scope and how many older iPhones can make decent use of it for example. 

    We'll have to wait and see how they announce it because other options from multiple vendors will remain open to iOS users anyway (perhaps some will be monetised through usage or subscription options).

    There will also need to be a range of models to play off. 

    NLU, NLG are obviously two key areas. Image related AI will probably require an improvement too.

    Hallucination is a property of AI models and won't go away in that sense. Hallucination is even desired in many scenarios. 

    The problem is that there are obviously scenarios where you would prefer not to have it. I believe that can be done quite well but normally requires the model to sit within another model etc, upping resource usage. 
  • Reply 10 of 28
    40domi Posts: 68 member
    AI is well over hyped and unintelligent consumers are swallowing it hook line & sinker 🤣
    iPhone 16's will be the best selling iPhone ever....put a bet on that!
  • Reply 11 of 28
    gatorguy Posts: 24,216 member
    avon b7 said:
    Sounds like a more marketing focused play to push the privacy and latency angles. Assuming the rumour is true. 

    That's certainly one way to do it but isn't without its own issues. Limited scope and how many older iPhones can make decent use of it for example. 

    We'll have to wait and see how they announce it because other options from multiple vendors will remain open to iOS users anyway (perhaps some will be monetised through usage or subscription options).

    There will also need to be a range of models to play off. 

    NLU, NLG are obviously two key areas. Image related AI will probably require an improvement too.

    Hallucination is a property of AI models and won't go away in that sense. Hallucination is even desired in many scenarios. 

    The problem is that there are obviously scenarios where you would prefer not to have it. I believe that can be done quite well but normally requires the model to sit within another model etc, upping resource usage. 
    It's much the same way the most recent Pixel phones do it. Most of the (Generative) AI functions happen on device, a privacy-friendly approach. Computationally intensive features and web search are processed "in the cloud" for obvious reasons.  I think Apple will be following the same path, but be better at marketing the privacy of the on-device portions of the system than Google is. 
    edited April 16
  • Reply 12 of 28
    What Apple is advertising is nothing more than what Google Pixel phones already have: dedicated machine-learning hardware and ML models that run on the phone to do things like transcription, image recognition, etc.

    Nothing new. Just marketing, packaged by Apple as if it were a "breakthrough" feature.

    YAWN!!! 
  • Reply 13 of 28
    wonkothesane Posts: 1,727 member
    Siri 2.0   ;)
  • Reply 14 of 28
    killroy Posts: 276 member
    Isn't Ajax a programming language, and can Apple really use that name?
  • Reply 15 of 28
    Marvin Posts: 15,331 moderator
    What IQ level will the on-device AI have? 40?
    The larger models need a lot of memory, but even the smaller ones are capable of far more meaningful replies than chat apps like Siri. Meta released some models similar to ChatGPT, and they work very well:

    https://ai.meta.com/blog/code-llama-large-language-model-coding/

    The smallest Llama-7B model only needs 6GB of memory and around 10GB storage:

    https://www.hardware-corner.net/guides/computer-to-run-llama-ai-model/

    The larger 70B model needs 40GB of RAM and is considered close to GPT-4:

    https://www.anyscale.com/blog/llama-2-is-about-as-factually-accurate-as-gpt-4-for-summaries-and-is-30x-cheaper

    Apple published a paper about running larger models on SSD instead of RAM:

    https://appleinsider.com/articles/23/12/21/apple-isnt-behind-on-ai-its-looking-ahead-to-the-future-of-smartphones
    https://arxiv.org/pdf/2312.11514.pdf

    Perhaps they can bump the storage sizes up by 32-64GB and preinstall a GPT-4-level AI model locally. The responses are nearly instant running locally. They may add a censorship/filter model to avoid getting into trouble for certain types of response.
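    The memory figures quoted above follow from simple arithmetic on parameter count and bytes per weight. A rough sketch (the bytes-per-parameter values are illustrative assumptions for fp16 and 4-bit quantization, not Apple's or Meta's published specs; note the 40GB figure for the 70B model implies heavy quantization):

```python
# Back-of-envelope estimate of the RAM needed just for an LLM's weights.
# bytes_per_param is an assumption: ~2.0 for fp16, ~0.5 for 4-bit quantized.

def model_memory_gb(n_params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GiB for a model of the given size."""
    return n_params_billions * 1e9 * bytes_per_param / 1024**3

print(f"7B  fp16:  {model_memory_gb(7, 2.0):.1f} GB")   # ~13 GB
print(f"7B  4-bit: {model_memory_gb(7, 0.5):.1f} GB")   # ~3.3 GB
print(f"70B fp16:  {model_memory_gb(70, 2.0):.1f} GB")  # ~130 GB
```

    Activations and the KV cache add overhead on top of this, which is roughly why a 7B model is quoted as needing about 6GB in practice.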
    40domi said:
    AI is well over hyped and unintelligent consumers are swallowing it hook line & sinker 🤣
    They are powerful tools. You can try them here, first is chat, second is an image generator:

    https://gpt4free.io/chat/
    https://huggingface.co/spaces/stabilityai/stable-diffusion

    For example, you can ask the chat 'What is an eigenvector?' or 'What is a good vegetarian recipe using carrots, onions, rice, and peppers?' and it will give a concise answer better than searching through search engines. The image generators can give some poor-quality output, but they are good for concepts with a specific description. An image prompt can be 'concept of a futuristic driverless van' to get ideas of what an Apple car could have looked like. Usually image prompts need to be tuned a lot to give good-quality output.

    The AI chats are context-aware so they remember what was asked previously. If you ask 'What is the biggest country?', it will answer and if you ask 'What is the capital of that country?', it knows what country is being referred to.

    They are much better than search engines for certain types of information. Web engines are good for news, shopping, social media but when you just need an answer to a very specific question, it's usually very difficult to click through each of the links to find the answer.
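    The context-carrying behavior described above is typically implemented by resending the accumulated message history to the model on every turn. A minimal sketch, where `fake_llm` is a hypothetical stub standing in for any real backend (such as a local Llama model):

```python
# Minimal sketch of multi-turn chat context: the "memory" is just the
# accumulated message list, resent to the model on every turn.

def fake_llm(messages: list[dict]) -> str:
    # Stub: reports how many turns of context the model can see.
    return f"(model saw {len(messages)} messages of context)"

class ChatSession:
    def __init__(self) -> None:
        self.messages: list[dict] = []

    def ask(self, prompt: str) -> str:
        self.messages.append({"role": "user", "content": prompt})
        reply = fake_llm(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply

chat = ChatSession()
chat.ask("What is the biggest country?")
# The second call carries the first exchange, so a real model could
# resolve "that country" from context.
print(chat.ask("What is the capital of that country?"))
```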
  • Reply 16 of 28
    danox Posts: 2,878 member
    gatorguy said:
    avon b7 said:
    Sounds like a more marketing focused play to push the privacy and latency angles. Assuming the rumour is true. 

    That's certainly one way to do it but isn't without its own issues. Limited scope and how many older iPhones can make decent use of it for example. 

    We'll have to wait and see how they announce it because other options from multiple vendors will remain open to iOS users anyway (perhaps some will be monetised through usage or subscription options).

    There will also need to be a range of models to play off. 

    NLU, NLG are obviously two key areas. Image related AI will probably require an improvement too.

    Hallucination is a property of AI models and won't go away in that sense. Hallucination is even desired in many scenarios. 

    The problem is that there are obviously scenarios where you would prefer not to have it. I believe that can be done quite well but normally requires the model to sit within another model etc, upping resource usage. 
    It's much the same way the most recent Pixel phones do it. Most of the (Generative) AI functions happen on device, a privacy-friendly approach. Computationally intensive features and web search are processed "in the cloud" for obvious reasons.  I think Apple will be following the same path, but be better at marketing the privacy of the on-device portions of the system than Google is. 

    Sure, Apple will be different, because they have the superior mobile/desktop/laptop hardware and OS to make it different. The 3.5-4 hour Video Boost waits and other AI miscues exist because Google's hardware is weak, and Samsung has no real control over what Google feeds them OS-wise. The Tensor is easily more than five years behind Apple's hardware; they have no choice but to use a phone-home solution.

    https://www.androidpolice.com/video-boost-pixel-8-pro-review/ Phone home to the cloud servers and wait 3.5-4 hours for an AI/video boomerang.
  • Reply 17 of 28
    ctt_zh Posts: 67 member
    danox said:
    gatorguy said:
    avon b7 said:
    Sounds like a more marketing focused play to push the privacy and latency angles. Assuming the rumour is true. 

    That's certainly one way to do it but isn't without its own issues. Limited scope and how many older iPhones can make decent use of it for example. 

    We'll have to wait and see how they announce it because other options from multiple vendors will remain open to iOS users anyway (perhaps some will be monetised through usage or subscription options).

    There will also need to be a range of models to play off. 

    NLU, NLG are obviously two key areas. Image related AI will probably require an improvement too.

    Hallucination is a property of AI models and won't go away in that sense. Hallucination is even desired in many scenarios. 

    The problem is that there are obviously scenarios where you would prefer not to have it. I believe that can be done quite well but normally requires the model to sit within another model etc, upping resource usage. 
    It's much the same way the most recent Pixel phones do it. Most of the (Generative) AI functions happen on device, a privacy-friendly approach. Computationally intensive features and web search are processed "in the cloud" for obvious reasons.  I think Apple will be following the same path, but be better at marketing the privacy of the on-device portions of the system than Google is. 

    Sure, Apple will be different, because they have the superior mobile/desktop/laptop hardware and OS to make it different. The 3.5-4 hour Video Boost waits and other AI miscues exist because Google's hardware is weak, and Samsung has no real control over what Google feeds them OS-wise. The Tensor is easily more than five years behind Apple's hardware; they have no choice but to use a phone-home solution.

    https://www.androidpolice.com/video-boost-pixel-8-pro-review/ Phone home to the cloud servers and wait 3.5-4 hours for an AI/video boomerang.
    Key phrase from the article: "It looks like the initial batch of features will function without the need for an internet connection." If Apple can get its AI servers remotely near the capabilities of Google's Cloud / Azure, you could well be seeing more intensive / complex features being processed in the cloud by Apple (folks will indeed hope this is the case if the initial on-device features are a bit dull compared to what the competition has offered for a while now).

    Regarding the Tensor chip, it's all about the software and models in combination with the chip. You really need to think about this differently; you're looking at AI solely in terms of the SoC like it's 2015...
    edited April 16
  • Reply 18 of 28
    nubus Posts: 390 member
    avon b7 said:
    Sounds like a more marketing focused play to push the privacy and latency angles. Assuming the rumour is true. 

    That's certainly one way to do it but isn't without its own issues. Limited scope and how many older iPhones can make decent use of it for example. 
    Not super eco-friendly to build super fast iPhones with large batteries, memory, and storage. This will make iPhones and Macs more expensive and the LLMs will still need to ask Google... so much for privacy.  If Apple is doing expensive muscle devices then Android and Windows could win.
  • Reply 19 of 28
    Back when Genius arrived in iTunes, I could ask Siri to play more songs like this whilst offline. 10 years later, are we heading back to the future? Still bugs me I can no longer use Genius on my own music library. /tangent
  • Reply 20 of 28
    command_f Posts: 422 member
    In 2017 Apple released iPhone X including a Neural Engine. In 2020 Apple released a Mac Mini including a Neural Engine. In both product lines (and iPad Pros), year on year, the Neural Engines have become increasingly powerful and ubiquitous. There's a hint there that substantial processing of AI (ML, LLM or whatever) will be done on device.

    There may be a hierarchy here, with recent devices processing substantial AI tasks onboard and older devices off-loading them to Apple servers. There may instead be all-on-device processing, but with progressively greater limitations on what older devices can support. The second approach would be consistent with other hardware-supported features that are only available on new(er) devices; this then motivates users to update their devices.
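    The first, tiered approach described above could be sketched as a simple capability check. Everything here (function name, thresholds, the Neural Engine flag) is hypothetical, not Apple's actual design:

```python
# Hypothetical routing of an AI task: run on-device when the hardware can
# handle it, otherwise fall back to a server. Thresholds are illustrative.

def route_task(model_size_gb: float, device_ram_gb: float,
               has_neural_engine: bool) -> str:
    """Decide where an AI task runs under the assumed tiered scheme."""
    # Assume the model must fit comfortably in half the device's RAM.
    if has_neural_engine and model_size_gb <= device_ram_gb * 0.5:
        return "on-device"
    return "server"

print(route_task(3.5, 8, True))   # recent iPhone: handled on-device
print(route_task(3.5, 4, True))   # older device: off-loaded to a server
```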

    I know which one I would bet on.