Siri chatbot may be coming in 2026 as part of iOS 19

Posted in iOS

A new rumor suggests Apple wants to make Siri more conversational by giving it a large language model backend similar to ChatGPT and its AI rivals, but it won't be ready until 2026.

Siri could get an LLM backend in 2026



When Apple was first rumored to be tackling artificial intelligence, one claim was repeated often: Apple would not release a chatbot. The idea that users would converse with Siri the way they do with ChatGPT was seemingly off the table for the launch.

Since then, a lot has changed, and according to a report from Bloomberg, Apple has begun work on an LLM-based Siri that might debut in iOS 19 in early 2026. This report lines up with AppleInsider's earlier exclusive on Apple AI testing tools that point to Siri becoming more tied to Apple's AI features.

The new Siri would handle back-and-forth conversations. Combine that with the upgraded type-to-Siri experience, and it is easy to see a Siri chatbot taking shape.

Apple has yet to tie any real Apple Intelligence features to Siri since Apple's AI debuted in iOS 18.1. There's a new animation and fewer errors in interpretation, but knowledge about the user and installed apps is still limited.

For the user, Siri hasn't changed much at all and continues to act as a conduit to other tools like Google search or ChatGPT rather than being of much use on its own. The same structures for controlling the home, setting timers, or making calls still stand.



Moving Siri from its current machine-learning approach to an LLM will introduce a lot of opportunity and complexity, but it will also increase the likelihood of hallucinations. Apple's approach to AI helps reduce hallucinations, but they aren't eliminated.

The groundwork has already been laid for a more interactive and proactive Siri with iOS 18. A new App Intents system will expose portions of app features and on-device user data to Siri to provide much richer contextual awareness: think of Apple's example of picking up your mom from the airport.
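As a rough illustration of the hook an LLM-backed Siri could invoke, Apple's existing App Intents framework lets an app expose an action and its parameters to the system. The intent and property names below are invented for this sketch; only the protocol shape (`title`, `@Parameter`, `perform()`) follows the real framework.

```swift
import AppIntents

// Hypothetical sketch: an app exposes a "pickup plan" action to Siri.
// Type and property names are invented; the AppIntent protocol
// requirements (title, perform) follow the real AppIntents framework.
struct ShowPickupPlanIntent: AppIntent {
    static var title: LocalizedStringResource = "Show Pickup Plan"
    static var description = IntentDescription("Shows the travel plan for picking someone up.")

    // A parameter Siri could fill from conversational context ("pick up Mom").
    @Parameter(title: "Contact Name")
    var contactName: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would look up flight status and traffic here.
        return .result(dialog: "Leave by 2:40 pm to pick up \(contactName) on time.")
    }
}
```

Under this model, the LLM's job would be mapping a free-form request onto a registered intent and its parameters, rather than generating the answer itself.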

Apple Intelligence is a set of models Apple has broken up across its operating systems to provide useful tools, but it lacks a central unifying interface. The so-called "Siri LLM" could be that unifier as the face of Apple Intelligence.

The iOS 19 update is expected to be revealed during WWDC 2025 in June. However, the Siri LLM won't actually be ready for launch until early 2026, if the rumor is accurate.

Rumor Score: Possible



Comments

  • Reply 1 of 21
    That will be the actual Apple Intelligence. But Grok AI will be better.
  • Reply 2 of 21
    michelb76 said:
    That will be the actual Apple Intelligence. But Grok AI will be better.
    How does Grok integrate with Siri and Apple's OSes/software?
  • Reply 3 of 21
    michelb76 said:
    That will be the actual Apple Intelligence. But Grok AI will be better.
    How does Grok integrate with Siri and Apple's OSes/software?
    Here is the thing: it doesn't. Anyway, it is quite interesting to see how Grok will improve, especially since Musk keeps purchasing top-of-the-line servers from Nvidia. As much as I distaste his political views and "undying support" for Trump, I can't wait for major technology advancements. The reason I say this is that Elon has a hand in controlling Trump, and in turn major control of the government next year, so he can propel SpaceX and Tesla with less government interference and more funds. So lose-win-TBD win?
  • Reply 4 of 21
    DAalseth Posts: 3,046 member
    It depends on what article you read. Other sites are saying that it will be a smarter Siri that can better handle branched and linked questions. AppleInsider is the only place I've seen it referred to as a chatbot. A better Siri would be nice. A chatbot is not something I want anything to do with. 
  • Reply 5 of 21
    Chatbot isn't the right term. LLMs are going to enable a lot more than question and answer type of engagement. I'm assuming "Siri" stays as the interface and it'll be interesting to see how they upgrade it from the current model to an LLM-supported personal assistant. It won't be long now.
  • Reply 6 of 21
    harrykatsaros Posts: 90 unconfirmed, member
    They should just buy Anthropic. Great reasoning. Actually decent at coding. Achieved both with way less money pumped into it than OpenAI did with ChatGPT and would be way cheaper to acquire. Apple is several years behind on this and acquiring Anthropic would bring them right back to the front of the pack. 
  • Reply 7 of 21
    That will be the actual Apple Intelligence. But Grok AI will be better.
    Yeah, right. Siri becomes a right-wing Christian extremist nutcase. No thanks.
  • Reply 8 of 21
    That will be the actual Apple Intelligence. But Grok AI will be better.
    Yeah, right. Siri becomes a right-wing Christian extremist nutcase. No thanks.
    Let's talk about AI and Siri and how this will improve Siri's functionality, rather than politics. An Anthropic purchase makes sense, just like Apple's acquisition of Siri did. 
    The election is done and the American people have spoken; get over it. 
  • Reply 9 of 21
    Yawn. What a pathetic competitive response. 

    Too little, too late. 
  • Reply 10 of 21
    apple4thewin said:

    Anyways it is quite interesting how Grok will improve especially since Musk keeps purchasing the best top of the line servers from Nvidia.
    Oh, well, it's good to know AI is so simple, just buy the best servers and you are good to go. I guess it basically creates itself.
  • Reply 11 of 21
    Alex_V Posts: 266 member
    AI, in my basic understanding, appears to be a sophisticated copy-and-paste. It gets a query; employs an algorithm to interpret it; searches for answers from existing data (therefore it must have a lot of data, and continually scrape the internet to stay up to date); uses probability to locate the answer; and assembles it into plausible sentences or images. 

    The entire enterprise appears to achieve two things: 1. a way to make a dumb machine look smart, by providing actual answers to user search queries. 2. a way to circumvent copyright restrictions or accusations of plagiarism. It’s all a conjuring trick using smoke and mirrors. 
    edited November 24
  • Reply 12 of 21
    avon b7 Posts: 8,019 member
    Alex_V said:
    AI, in my basic understanding, appears to be a sophisticated copy-and-paste. It gets a query; employs an algorithm to interpret it; searches for answers from existing data (therefore it must have a lot of data, and continually scrape the internet to stay up to date); uses probability to locate the answer; and assembles it into plausible sentences or images. 

    The entire enterprise appears to achieve two things: 1. a way to make a dumb machine look smart, by providing actual answers to user search queries. 2. a way to circumvent copyright restrictions or accusations of plagiarism. It’s all a conjuring trick using smoke and mirrors. 
    It isn't so much what AI is or isn't. It's more about what can be done with it and we have already seen that a lot can be done for far less effort than was previously possible. 

    It can be good at some things and bad at others. Yes, basically it can only be as good as the data it is trained on but that is true of everything unless you are specifically looking for random creations or less precise results. 

    Let's not forget that AI hallucinations are part of many processes and actually a plus in some scenarios. 

    It is also mind-bogglingly fast at certain things.

    Most importantly we shouldn't be generalising on how clean or ethical the data used for AI is. 

    Long before Apple announced Apple Intelligence, there was a huge number of LLMs, and now tiny LLMs, that were (and are) trained on 'clean' data for all manner of tasks.

    Those models have been in use for years and are getting better. 


    https://www.huaweicloud.com/intl/en-us/about/takeacloudleap2024/ai-weather-prediction.html

    https://www.huaweicloud.com/intl/en-us/news/20220919095614888.html


    These kinds of models are everywhere and of course include image/language processing/generation. 

  • Reply 13 of 21
    Alex_V Posts: 266 member
    avon b7 said:
    It isn't so much what AI is or isn't. It's more about what can be done with it and we have already seen that a lot can be done for far less effort than was previously possible. 
    [snip]
    These kinds of models are everywhere and of course include image/language processing/generation. 

    I know little about computer programming and even less about generative AI. Yet, the hype about AI poses it as a form of intelligence. It is not. Instead, it appears to be a copying machine specifically designed to circumvent authorship. The second part is not accidental, it is the whole point. With AI, these companies can respond to user queries by locating the answers in works produced by humans and then paraphrasing them, to avoid giving credit or paying royalties to the actual authors. Until recently, all of human culture — that is, all images, all music, all of the built environment, all technology, and all written text — was produced by individuals or teams of people. Now, we will be, ever more, confronted by culture that is produced by sophisticated plagiarising machines. 
  • Reply 14 of 21
    Alex_V said:
    avon b7 said:
    It isn't so much what AI is or isn't. It's more about what can be done with it and we have already seen that a lot can be done for far less effort than was previously possible. 
    [snip]
    These kinds of models are everywhere and of course include image/language processing/generation. 

    I know little about computer programming and even less about generative AI. Yet, the hype about AI poses it as a form of intelligence. It is not. Instead, it appears to be a copying machine specifically designed to circumvent authorship. The second part is not accidental, it is the whole point. With AI, these companies can respond to user queries by locating the answers in works produced by humans and then paraphrasing them, to avoid giving credit or paying royalties to the actual authors. Until recently, all of human culture — that is, all images, all music, all of the built environment, all technology, and all written text — was produced by individuals or teams of people. Now, we will be, ever more, confronted by culture that is produced by sophisticated plagiarising machines. 
    Well, isn't that what teams and individuals have been doing for millennia, learning something from somewhere and then "paraphrasing" it (sometimes referred to as "making it your own") without giving credit or paying royalties? (Other than academic articles, how many things does one read, or look at or watch or listen to, with citations?) Is this argument any different than how humans have tried to distance themselves from other animals by claiming unique abilities that set them apart — humans are the only animal with emotions, the only tool using animal, the only reasoning animal, ... — abilities that we continue to discover are not so unique after all, that the difference is more a matter of degree, if any at all?

    I don't think we can really answer the question of whether AI generally, or some particular AI specifically, is really intelligence or not since we don't even really understand, and can't really exactly define, what intelligence is in humans and other animals. How would we even know if it were intelligence or not if we don't really know exactly what "intelligence" is? See also, https://en.wikipedia.org/wiki/Problem_of_other_minds
    edited November 24
  • Reply 15 of 21
    Alex_V Posts: 266 member
    Well, isn't that what teams and individuals have been doing for millennia, learning something from somewhere and then "paraphrasing" it (sometimes referred to as "making it your own") without giving credit or paying royalties? (Other than academic articles, how many things does one read, or look at or watch or listen to, with citations?) Is this argument any different than how humans have tried to distance themselves from other animals by claiming unique abilities that set them apart — humans are the only animal with emotions, the only tool using animal, the only reasoning animal, ... — abilities that we continue to discover are not so unique after all, that the difference is more a matter of degree, if any at all?

    I don't think we can really answer the question of whether AI generally, or some particular AI specifically, is really intelligence or not since we don't even really understand, and can't really exactly define, what intelligence is in humans and other animals. How would we even know if it were intelligence or not if we don't really know exactly what "intelligence" is? See also, https://en.wikipedia.org/wiki/Problem_of_other_minds
    Perhaps, @anonymouse, that is “what teams and individuals have been doing for millennia.” But so have they justified questionable practices and phenomena — like said treatment of animals, appalling exploitation and oppression of human beings, the destruction of our world, etc. etc. — just as you are now. AI is new, precisely in how it ends the use of humans to do intellectual work and create culture, (except for low-wage AI ‘trainers’ in S.E. Asia and elsewhere). Instead, a computer is employed to scrape the work of humans, copy-and-paste what it finds, and then paraphrase it plausibly.

    Copying is an integral part of all human culture. That's how, for example, we can recognise jazz or the blues — they have a certain sound, because they all copy and riff off each other from a distance. The point is that intellectual property rules exist precisely to prevent authors from being ripped off, and to credit creators and inventors, or they would refuse to disclose their methods (publish a patent application), or to record their performances, out of fear of being screwed. Art and science would not progress. With AI, the tech bros believe they can have their cake and eat it. So far, they have been proven right.
  • Reply 16 of 21
    Alex_V said:
    Well, isn't that what teams and individuals have been doing for millennia, learning something from somewhere and then "paraphrasing" it (sometimes referred to as "making it your own") without giving credit or paying royalties? (Other than academic articles, how many things does one read, or look at or watch or listen to, with citations?) Is this argument any different than how humans have tried to distance themselves from other animals by claiming unique abilities that set them apart — humans are the only animal with emotions, the only tool using animal, the only reasoning animal, ... — abilities that we continue to discover are not so unique after all, that the difference is more a matter of degree, if any at all?

    I don't think we can really answer the question of whether AI generally, or some particular AI specifically, is really intelligence or not since we don't even really understand, and can't really exactly define, what intelligence is in humans and other animals. How would we even know if it were intelligence or not if we don't really know exactly what "intelligence" is? See also, https://en.wikipedia.org/wiki/Problem_of_other_minds
    Perhaps, @anonymouse, that is “what teams and individuals have been doing for millennia.” But so have they justified questionable practices and phenomena — like said treatment of animals, appalling exploitation and oppression of human beings, the destruction of our world, etc. etc. — just as you are now. AI is new, precisely in how it ends the use of humans to do intellectual work and create culture, (except for low-wage AI ‘trainers’ in S.E. Asia and elsewhere). Instead, a computer is employed to scrape the work of humans, copy-and-paste what it finds, and then paraphrase it plausibly.

    Copying is an integral part of all human culture. That's how, for example, we can recognise jazz or the blues — they have a certain sound, because they all copy and riff off each other from a distance. The point is that intellectual property rules exist precisely to prevent authors from being ripped off, and to credit creators and inventors, or they would refuse to disclose their methods (publish a patent application), or to record their performances, out of fear of being screwed. Art and science would not progress. With AI, the tech bros believe they can have their cake and eat it. So far, they have been proven right.
    Well, as Joanna Maciejewska said, “I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.” Although, I think we have to wait for robotics to catch up before that happens.

    But AI is currently being tasked to do the "easy things", easy for developers to make happen. And it's only being tasked to do things we already do. One could make an argument that since it's a commercial product, the creators of the materials it learns from should be compensated to some degree (does CliffsNotes pay royalties?) but I think your doomsday scenario of the end of all progress is more than a little improbable.
  • Reply 17 of 21
    apple4thewin said:

    Anyways it is quite interesting how Grok will improve especially since Musk keeps purchasing the best top of the line servers from Nvidia.
    Oh, well, it's good to know AI is so simple, just buy the best servers and you are good to go. I guess it basically creates itself.
    Well, it's not the car, it's the driver, and if you give a good driver a better car then you will get a better result. 
  • Reply 18 of 21
    Alex_V Posts: 266 member
    “but I think your doomsday scenario of the end of all progress is more than a little improbable.”

    Just to be clear, I didn’t predict such a scenario, I merely illustrated why we have intellectual property rules in the first place.
  • Reply 19 of 21
    Alex_V said:
    “but I think your doomsday scenario of the end of all progress is more than a little improbable.”

    Just to be clear, I didn’t predict such a scenario, I merely illustrated why we have intellectual property rules in the first place.
    Well, if you want IP laws to stop AI from sifting through publicly available data, you'll need to lobby for IP law reform. What they are using the IP for isn't prohibited by existing IP laws.
  • Reply 20 of 21
    apple4thewin said:

    Anyways it is quite interesting how Grok will improve especially since Musk keeps purchasing the best top of the line servers from Nvidia.
    Oh, well, it's good to know AI is so simple, just buy the best servers and you are good to go. I guess it basically creates itself.
    Well, it's not the car, it's the driver, and if you give a good driver a better car then you will get a better result. 
    You're presuming that Musk is somehow a better driver when in fact he's just a wealthy nut job/sociopath.