Apple's iOS 18 AI will be on-device to preserve privacy, not server-side
A new report on Sunday reiterates that Apple's AI push in iOS 18 will focus on privacy, with processing done directly on the iPhone rather than through cloud services.
Siri is expected to get a big upgrade with iOS 18
Over the past few months, we've heard a lot about Apple's endeavors in artificial intelligence and the features it aims to introduce later this year with iOS 18 and macOS 15. Various sources have claimed that Apple will introduce AI-related enhancements.
It looks like the initial batch of features will function without the need for an internet connection.
Bloomberg's Mark Gurman, writing in his weekly Power On newsletter on Sunday, said that the initial set of AI-related features Apple plans to debut with iOS 18 "will work entirely on device." In practice, these AI features would be able to function without an internet connection or any form of cloud-based processing.
AppleInsider has received information from individuals familiar with the matter that suggests the report's claims are accurate. Apple is working on an in-house large language model, or LLM, known internally as "Ajax."
While more advanced features will ultimately require an internet connection, basic text analysis and response generation features should be available offline.
Regarding individual apps, we will likely see improvements to Messages, Safari, Spotlight Search, and Siri. Apple has been testing on-device text-based response generation for a while, meaning this feature will most likely be among those first previewed or released.
More advanced AI-related features and enhancements will still require an internet connection. Apple reportedly discussed licensing AI technology from Google and OpenAI for use in iOS 18 rather than developing its own online LLM.
The surge in popularity of AI projects over the past few years is the likely reason for Apple's apparent interest in artificial intelligence. AI tools have become increasingly accessible to everyday consumers, and issues have arisen in the legal and education sectors as a result.
On-device processing could help Apple avoid some of the controversies surrounding server-side AI tools. For example, those tools have been known to hallucinate, confidently making up information. And the hallucination problem has only gotten worse as AI models feed on content generated by other AI models.
With its new "Ajax" LLM, the company likely seeks to build a competitor to the tools currently on the market. By eliminating the requirement for cloud-based processing and improving the quality of text generation, the company could gain a significant advantage over rival AI-powered services and tools.
Apple will reveal its AI plans during WWDC, which starts on June 10.
Rumor Score: Likely
Comments
The idea is that the model will be able to perform actions and make decisions without an internet connection or privacy-violating calls home. Instead, the device will analyze requests, data, or other input locally, relying entirely on the logic of the on-device AI.
Now, if you want to ask a question or have an image generated, that's where server-based AI comes in. And it seems Apple has no interest in developing one, at least not yet.
Prompt: Remove all verbiage from this article.
ChatGPT:
Rumors claim Apple's iOS 18 AI features prioritize privacy with on-device processing, except for some advanced functions, including a new large language model called "Ajax." It will enhance Messages, Safari, Spotlight, and Siri with text analysis and response generation. More details will be revealed at WWDC starting June 10.
If it plays out as expected, I would think they are adding sports to the Olympics (and local sports events).
Apple's message seems to be "you guys all compete to expand what we offer," while Google's is "you guys compete to supplant us."
That's certainly one way to do it, but it isn't without its own issues: limited scope, for example, and how many older iPhones can make decent use of it.
We'll have to wait and see how they announce it because other options from multiple vendors will remain open to iOS users anyway (perhaps some will be monetised through usage or subscription options).
There will also need to be a range of models to draw on. Natural language understanding and generation (NLU, NLG) are obviously two key areas, and image-related AI will probably require improvement too.
Hallucination is an inherent property of AI models and won't go away in that sense; it's even desirable in many scenarios, such as creative generation. The problem is that there are obviously scenarios where you would prefer not to have it. I believe that can be handled quite well, but it normally requires the model to sit within another model that checks its output, upping resource usage.
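For the sake of illustration, here is a minimal sketch of that generate-then-verify shape, assuming placeholder generate and verify functions rather than any real vendor API; the point is that every answer costs at least two inference passes:

```python
# Sketch of a generate-then-verify loop: a checker model gates the
# generator's output and retries on failure. Both functions below are
# placeholders, not any specific vendor's API; the doubled inference
# cost per answer is the "upping resource usage" mentioned above.

def generate(prompt: str) -> str:
    return "The first iPhone shipped in 2007."   # stand-in for an LLM call

def verify(answer: str) -> bool:
    return "2007" in answer                      # stand-in fact checker

def answer_with_guardrail(prompt: str, retries: int = 2) -> str:
    for _ in range(retries + 1):
        draft = generate(prompt)
        if verify(draft):                        # second model's verdict
            return draft
    return "No confident answer."                # refuse rather than guess

print(answer_with_guardrail("When did the first iPhone ship?"))
```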
iPhone 16s will be the best-selling iPhone ever... put a bet on that!
Nothing new. Just marketing, packaged by Apple as if it were a "breakthrough" feature.
YAWN!!!
https://ai.meta.com/blog/code-llama-large-language-model-coding/
The smallest Llama-7B model only needs 6GB of memory and around 10GB storage:
https://www.hardware-corner.net/guides/computer-to-run-llama-ai-model/
The larger 70B model needs 40GB of RAM and is considered close to GPT-4:
https://www.anyscale.com/blog/llama-2-is-about-as-factually-accurate-as-gpt-4-for-summaries-and-is-30x-cheaper
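As a rough sanity check on those figures, weight memory scales with parameter count times bytes per weight. A back-of-envelope sketch (my own estimate, not the linked guide's methodology; real usage adds KV cache and runtime overhead):

```python
# Back-of-envelope RAM estimate for LLM weights: parameters x bytes per
# weight. Real usage is higher (KV cache, activations, runtime overhead),
# so treat these as lower bounds, not vendor specs.

def weight_memory_gb(params_billions: float, bytes_per_weight: float) -> float:
    """Approximate RAM needed just to hold the model weights."""
    return params_billions * 1e9 * bytes_per_weight / 1024**3

for name, params in [("Llama 7B", 7.0), ("Llama 70B", 70.0)]:
    fp16 = weight_memory_gb(params, 2.0)   # 16-bit floats
    q4 = weight_memory_gb(params, 0.5)     # 4-bit quantization
    print(f"{name}: ~{fp16:.0f} GB at fp16, ~{q4:.0f} GB at 4-bit")

# Llama 7B:  ~13 GB at fp16, ~3 GB at 4-bit
# Llama 70B: ~130 GB at fp16, ~33 GB at 4-bit
# The cited 6GB (7B) and 40GB (70B) figures fall between these bounds,
# consistent with quantized weights plus runtime overhead.
```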
Apple published a paper about running larger models on SSD instead of RAM:
https://appleinsider.com/articles/23/12/21/apple-isnt-behind-on-ai-its-looking-ahead-to-the-future-of-smartphones
https://arxiv.org/pdf/2312.11514.pdf
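The core idea in that paper is to keep the full weight file on flash storage and pull only the slices a given inference step actually touches into RAM. A toy illustration of demand-loading via a memory map (this is not Apple's technique, which layers sparsity prediction and read scheduling on top; `weights.bin` is a dummy file created by the example itself):

```python
# Toy demonstration of demand-loading weights from storage instead of
# holding them all in RAM. NOT Apple's actual method; their paper adds
# sparsity prediction and careful flash-read scheduling on top of this.

import numpy as np

# Write a small dummy "weights" file so the example is self-contained.
np.zeros(4096 * 64, dtype=np.float16).tofile("weights.bin")

# Memory-map the file: nothing is read from storage at this point.
weights = np.memmap("weights.bin", dtype=np.float16, mode="r")

def load_slice(offset: int, count: int) -> np.ndarray:
    """Copy only one layer's weights into RAM when it is actually needed."""
    return np.array(weights[offset:offset + count])  # copy forces the read

layer0 = load_slice(0, 4096)  # only this slice is pulled into memory
print(layer0.shape)           # (4096,)
```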
Perhaps they can bump storage sizes up by 32-64GB and preinstall a GPT-4-level AI model locally. Responses are nearly instant when running locally. They may add a censorship/filter model to avoid getting into trouble over certain types of response.
They are powerful tools. You can try them here; the first is a chat, the second an image generator:
https://gpt4free.io/chat/
https://huggingface.co/spaces/stabilityai/stable-diffusion
For example, you can ask the chat 'What is an eigenvector?' or 'What is a good vegetarian recipe using carrots, onions, rice, and peppers?' and it will give a concise answer, better than searching through search engines. The image generators can give some poor-quality output, but they are good for concepts with a specific description. An image prompt can be 'concept of a futuristic driverless van' to get ideas of what an Apple car could have looked like. Usually image prompts need to be tuned a lot to give good-quality output.
The AI chats are context-aware so they remember what was asked previously. If you ask 'What is the biggest country?', it will answer and if you ask 'What is the capital of that country?', it knows what country is being referred to.
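That context awareness usually isn't magic: a typical chat client just resends the whole conversation on every turn, so the model can resolve references like "that country." A minimal sketch of the mechanism, with a placeholder backend rather than any specific vendor's API:

```python
# Minimal sketch of chat context: the full history is resent each turn.
# `call_model` is a stand-in for a real LLM backend, not an actual API.

history: list[dict] = []

def call_model(messages: list[dict]) -> str:
    # Placeholder: a real backend generates a reply conditioned on every
    # message in `messages`, which is how follow-up questions resolve.
    return f"(reply conditioned on {len(messages)} message(s) of context)"

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    answer = call_model(history)          # model sees all earlier turns
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("What is the biggest country?"))
print(ask("What is the capital of that country?"))  # resolved via history
```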
These AI chats are much better than search engines for certain types of information. Search engines are good for news, shopping, and social media, but when you just need an answer to a very specific question, it's usually very difficult to click through each of the links to find it.
Sure, Apple will be different, because they have the superior mobile/desktop/laptop hardware and OS to make it different. The 3.5-4 hour Video Boost waits and other AI miscues exist because Google's hardware is weak and Samsung has no real control over what Google feeds them OS-wise. The Tensor is easily more than five years behind Apple's hardware, so they have no choice but to use a phone-home solution.
https://www.androidpolice.com/video-boost-pixel-8-pro-review/ Phone home to the cloud servers and wait 3.5-4 hours for the AI video to boomerang back.
Regarding the Tensor chip, it's all about the software and models in combination with the chip. You really need to think about this differently: you're looking at AI solely in terms of the SoC, like it's 2015...
There may be a hierarchy here, with recent devices processing substantial AI tasks onboard and older devices off-loading them to Apple servers. Or there may be all on-device processing, with progressively greater limitations on what older devices can support. The second approach would be consistent with other hardware-supported features that are only available on newer devices, which then motivates users to upgrade.
I know which one I would bet on.
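Purely as a thought experiment, here is a hedged sketch of what that kind of tiered routing could look like; the tiers and budgets are invented for illustration, not anything Apple has announced:

```python
# Hypothetical tiered routing: capable devices run an AI task on-device,
# while older hardware falls back to a server. Every number below is
# invented for illustration.

ON_DEVICE_BUDGET_GB = {"new": 8.0, "recent": 6.0, "older": 2.0}

def route(device_tier: str, task_ram_gb: float) -> str:
    """Decide where a single AI request should run."""
    budget = ON_DEVICE_BUDGET_GB.get(device_tier, 0.0)  # unknown -> server
    return "on-device" if task_ram_gb <= budget else "server"

print(route("new", 5.0))     # on-device
print(route("older", 5.0))   # server: task exceeds the older device's budget
```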