Siri chatbot may be coming in 2026 as part of iOS 19
A new rumor suggests Apple wants to make Siri more conversational by giving it a large language model backend like those powering ChatGPT and rival AI chatbots, but it won't be ready until 2026.
Siri could get an LLM backend in 2026
When Apple was first rumored to be tackling artificial intelligence, one thing was repeated often: Apple would not release a chatbot. The idea that users would converse with Siri in the same way they do with ChatGPT was seemingly off the table for the launch.
Since then, a lot has changed, and according to a report from Bloomberg, Apple has begun work on an LLM-based Siri that might debut in iOS 19 in early 2026. This report lines up with AppleInsider's earlier exclusive on Apple's AI testing tools, which points to Siri becoming more tightly tied to Apple's AI features.
The new Siri would handle back-and-forth conversations. Combine that with the upgraded type-to-Siri experience, and it is easy to see the Siri chatbot taking shape.
Apple has yet to tie any real Apple Intelligence features to Siri since Apple's AI debuted in iOS 18.1. There's a new animation and fewer errors in interpretation, but knowledge about the user and installed apps is still limited.
For the user, Siri hasn't changed much at all, and it continues to act as a conduit to other tools like Google Search or ChatGPT rather than being of much use on its own. The same structures for controlling the home, setting timers, or making calls still stand.
Moving from Siri's current machine-learning approach to an LLM will introduce a lot of opportunity and complexity, but it will also increase the likelihood of hallucinations. Apple's approach to AI helps reduce hallucinations, but they aren't eliminated.
The groundwork has already been laid for a more interactive and proactive Siri with iOS 18. The App Intents system will expose portions of app features and on-device user data to Siri to provide much deeper contextual awareness: think of Apple's example of picking up your mom from the airport.
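For readers curious what that App Intents plumbing looks like, here is a minimal sketch of a hypothetical intent in Swift. The AppIntents framework (shipped in iOS 16) is real, but the intent name, parameters, and dialog below are illustrative assumptions, not code from Apple's airport example.

```swift
import AppIntents

// Hypothetical intent exposing one app action to Siri.
// The names here are illustrative; only the AppIntents API itself is real.
struct BookAirportPickupIntent: AppIntent {
    static var title: LocalizedStringResource = "Book Airport Pickup"
    static var description = IntentDescription("Schedules a pickup for an arriving traveler.")

    // Parameters Siri can fill in from conversation context.
    @Parameter(title: "Traveler")
    var travelerName: String

    @Parameter(title: "Flight Number")
    var flightNumber: String

    // Called when Siri (or Shortcuts) runs the intent.
    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would call its own scheduling logic here.
        return .result(dialog: "Pickup booked for \(travelerName) arriving on \(flightNumber).")
    }
}
```

Because intents like this declare their parameters in a structured way, an LLM-backed Siri could, in theory, map a conversational request onto them without the developer writing anything chatbot-specific.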
Apple Intelligence is a set of models Apple has spread across its operating systems to provide useful tools, but it lacks a central unifying interface. The so-called "Siri LLM" could be that unifier as the face of Apple Intelligence.
The iOS 19 update is expected to be revealed during WWDC 2025 in June. However, the Siri LLM won't actually be ready for launch until early 2026, if the rumor is accurate.
Rumor Score: Possible
Comments
It can be good at some things and bad at others. Yes, basically it can only be as good as the data it is trained on, but that is true of everything, unless you are specifically looking for random creations or less precise results.
Let's not forget that AI hallucinations are part of many processes, and are actually a plus in some scenarios.
It is also mind-bogglingly fast at certain things.
Most importantly, we shouldn't be generalising about how clean or ethical the data used for AI is.
Long before Apple announced Apple Intelligence, there were a huge number of LLMs, and now tiny LLMs, that were (and are) trained on 'clean' data for all manner of tasks.
Those models have been in use for years and are getting better.
https://www.huaweicloud.com/intl/en-us/about/takeacloudleap2024/ai-weather-prediction.html
https://www.huaweicloud.com/intl/en-us/news/20220919095614888.html
These kinds of models are everywhere and of course include image/language processing/generation.
I don't think we can answer the question of whether AI generally, or some particular AI specifically, is really intelligence, since we don't fully understand, and can't precisely define, what intelligence is in humans and other animals. How would we even know whether it was intelligence if we don't know exactly what "intelligence" is? See also: https://en.wikipedia.org/wiki/Problem_of_other_minds
Copying is an integral part of all human culture. That's how, for example, we can recognise jazz or the blues: they have a certain sound because musicians copy and riff off each other from a distance. The point is that intellectual property rules exist precisely to prevent authors from being ripped off, and to credit creators and inventors; otherwise they would refuse to disclose their methods (publish a patent application) or to record their performances, out of fear of being screwed. Art and science would not progress. With AI, the tech bros believe they can have their cake and eat it. So far, they have been proven right.
But AI is currently being tasked with the "easy things": things that are easy for developers to make happen. And it's only being tasked with things we already do. One could argue that since it's a commercial product, the creators of the materials it learns from should be compensated to some degree (does CliffsNotes pay royalties?), but I think your doomsday scenario of the end of all progress is more than a little improbable.
Just to be clear, I didn't predict such a scenario; I merely illustrated why we have intellectual property rules in the first place.