Siri in iOS 18.4 is getting worse before it gets better


Never mind that the iOS 18.4 beta doesn't yet include the promised Siri improvements; Apple's voice assistant is now poorer than ever.

The type of question Apple promises Siri will be able to answer with Apple Intelligence. Image source: Apple

Right from the start of Apple Intelligence, a key part of its promise has been that Siri would be radically improved. Specifically, Apple Intelligence would give Siri a better ability to follow a series of questions, and even understand when we change our mind and correct ourselves.

Siri will be able to use the personal information on our devices, such as our calendars. And while never exactly becoming sentient, Siri will, with permission, be able to relay queries to ChatGPT if more resources are needed.

All of this was promised and still is, and practically from that moment, Apple has said the improvements would start with iOS 18.4.

Now iOS 18.4 has entered developer beta and there is no sign of an improved Siri -- beyond its very nice new round-screen animation. Things slip, especially in betas, and there were already reports of problems delaying Siri, so as disappointing as it is to have to wait longer, it's not a surprise.

Siri has a new glow animation, but not much else

What is a surprise is that somehow the existing Siri is definitely worse than it used to be. In one case during AppleInsider testing, the problem was that Siri erroneously wanted to pass a personal information request on to ChatGPT, as if that functionality were in place and this was the correct thing to do.

But the rest of the time, Siri is simply often wrong.

However, as great as Siri can be, it does have the frustrating habit of suddenly being unable to understand something it has successfully parsed many times before. So your mileage may vary, but in testing we asked Siri the same questions over a couple of days and consistently got the same incorrect responses.

Keeping it simple



If you ask Siri, "What's the rest of my day look like?" then it will tell you what's left on your calendar for today. Or it did.

Ask Siri under iOS 18.4 and most of the time you get "You have 25 events from today until March 17." The end date keeps moving -- it's always a month away -- but the number of events is always 25, seemingly whether that's correct or not.
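
The distinction is a small one in code terms. As a rough illustration only, and not Apple's own implementation, here is a minimal Swift sketch using the public EventKit framework of what "what's left on my calendar today" amounts to, assuming calendar access has already been granted; the function name is ours.

```swift
import EventKit

// Illustrative sketch, not Siri's internal code: fetch only the events
// remaining today. A bug like the one described would be equivalent to
// passing an end date a month out instead of the end of the current day.
func remainingEventsToday(in store: EKEventStore) -> [EKEvent] {
    let now = Date()
    let startOfDay = Calendar.current.startOfDay(for: now)
    guard let endOfDay = Calendar.current.date(byAdding: .day, value: 1, to: startOfDay) else {
        return []
    }

    // Bound the query to the rest of today, not "from today until March 17."
    let predicate = store.predicateForEvents(withStart: now, end: endOfDay, calendars: nil)
    return store.events(matching: predicate).sorted { $0.startDate < $1.startDate }
}
```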

Type to Siri works great, but Siri doesn't

But of course, the key thing is that you ask about today, and you get told the next four weeks instead. Just occasionally and for no apparent reason, the same request doesn't get you the wrong verbal answer.

Instead, it gives you the wrong answer visually. You may get a dialog box showing today's and the next few days' events.

"Siri, what am I doing on my birthday?" ought to be a straightforward request because iOS has the user's date of birth in their contact card or health data, and it has the calendar. But no, "You have 25 events from today to March 17."

"When's my next trip?" also should be able to check the calendar, and it does. But it returns "There's nothing called 'trip' in your calendar."

Or rather, you can first get the absolutely maddening response of, "You'll have to unlock your iPhone first." Since there is a switch in Settings called Allow Siri While Locked, this is right up there with how Siri will sometimes say it can't give you Apple Maps directions while you're in a car.
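
As for the "trip" request, Siri appears to be doing a literal keyword match against event titles rather than understanding the idea of a trip. Here is a hedged sketch of that behavior, again using public EventKit calls, with a function name and one-year search window of our own choosing:

```swift
import EventKit

// Illustrative sketch: a literal title search, which is why "trip"
// returns nothing unless an event is actually named that.
func nextEvent(titled keyword: String, in store: EKEventStore) -> EKEvent? {
    let now = Date()
    guard let horizon = Calendar.current.date(byAdding: .year, value: 1, to: now) else {
        return nil
    }

    let predicate = store.predicateForEvents(withStart: now, end: horizon, calendars: nil)
    return store.events(matching: predicate)
        .filter { $0.title?.localizedCaseInsensitiveContains(keyword) == true }
        .min { $0.startDate < $1.startDate }
}
```

On that logic, a trip entered in the calendar as "Flight to Zurich" would never match, which fits the response we saw.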

Half integrated with ChatGPT



Much of the improvement with Siri is supposed to come with ChatGPT, and that isn't here yet -- except iOS thinks it is. You can try something Siri definitely can't do now and should be able to do eventually, but which it has a go at answering anyway.

Apple Intelligence works with ChatGPT

"Siri, when was I last in Switzerland?" That's using personal on-device data, again really just checking the calendar. But instead, you get the prompt -- "Do you want me to use ChatGPT to answer that?"

If you then say why not, go on then, good luck with it, Siri passes the request to ChatGPT. It either comes back saying it doesn't do personal information, duh, or sometimes asks you to tell it yourself when you were last there.

Mind you, it seems to always ask you that through a text prompt, and Siri is inconsistent here. Sometimes asking Siri to "Delete all my alarms" will solely get you a text prompt asking if you're sure, but "what's 4 plus 3" gets both text and a spoken response.

Then ChatGPT is also just in an odd place now. If you go to Apple's own support page about using Siri and recite all of its examples to your iPhone, most of them work -- but not in the way you might expect.

It used to be, for instance, that Siri would do a web search if you asked, as Apple suggests, "Who made the first rocket that went to space?" Now if you ask that, you are instead asked permission to send the request to ChatGPT.

Sending images and text to ChatGPT is a feature of Apple Intelligence, but also a crutch

The next suggestion in Apple's list, though, is "How do you say 'thank you' in Mandarin?" and the answer depends on whether you've just used ChatGPT or not.

If you haven't, Siri asks which version of Mandarin you want, then audibly pronounces the word. If on your immediately previous request you agreed to use ChatGPT, though, Siri now uses it again without asking.

So suddenly you're getting the notification "Working with ChatGPT" and no option to change that. Plus, ChatGPT gives you the answer to that question in text on screen, rather than pronouncing it aloud.

All of which means that Siri can be flat out wrong, or it can give you different answers depending on the sequence in which you ask your questions.

We are so far away from being able to ask "Siri, what's the name of the guy I had a meeting with a couple of months ago at Cafe Grenel?" -- as Apple has been advertising.

Some signs of improvement



To be fair, you can never know entirely for sure whether an issue with Siri is down to your pronunciation or the load on Apple's servers at the time you ask. But you can know for certain when a request keeps going right or wrong.

Or, indeed, when it suddenly works.

"Siri, set a timer for 10 minutes," has been known to instead set the timer to something random, such as 7 hours, 16 minutes, and 9 seconds. Since iOS 18.4, AppleInsider testing has not shown that problem again, the timer has always set itself correctly.

So there's that. But then there's all the rest of this about inconsistencies, wrong answers, and the will-it-never-be-fixed "You'll have to unlock your iPhone first."

Apple is right to regard the improved Siri as a great and persuasive example of Apple Intelligence, because it's the part that the most users will benefit from, most visibly and most immediately. And it's nobody's fault that the improvements have been delayed.

Apple Intelligence is here, but Siri hasn't benefited from it yet

But Apple ran that ad about whoever the guy was from Cafe Grenel five months ago. Apple was telling us Siri was fantastically improved before it was.

Even that doesn't account for how Siri is worse than before, and new Apple Intelligence buyers will be disappointed. Long-time Apple users will understand things can get delayed, but still, there are limits.

Siri didn't get better at the start of Apple Intelligence as Apple's ads promised. It hasn't gotten better with the first beta release of iOS 18.4.

At some point, it will surely, hopefully, improve exactly as so long rumored -- but by then, there are bound to be users who won't ever try Siri again.




Comments

  • Reply 1 of 17
    Didn't think that was possible. What a fail.
  • Reply 2 of 17
    badmonk Posts: 1,354 (member)
    I don't understand why Apple doesn't have a separate in-house voice assistant competing with Siri in development. Much like Jobs had competing PC teams back in the day. Apple has the resources to fund a Siri competitor in-house.
  • Reply 3 of 17
    badmonk said:
    I don't understand why Apple doesn't have a separate in-house voice assistant competing with Siri in development. Much like Jobs had competing PC teams back in the day. Apple has the resources to fund a Siri competitor in-house.
    Well, even if they do, you wouldn't know about it. I mean, it's not like they are going to announce, "We have a separate in-house voice assistant competing with Siri in development." And suppose they do, and decide it's better than Siri: do you think they'll announce that they've replaced Siri with a new assistant called Sarah, or will they just release it as "Siri" and not tell you it's completely new?
  • Reply 4 of 17
    Dramatising a component or feature not working in a developer beta...

    How exactly is that quality tech journalism?
  • Reply 5 of 17
    Siri on HomePods is definitely worse recently; just tonight it took three attempts to set my bedtime scene!
  • Reply 6 of 17
    There have been rumors of two competing trains of thought from two teams at Apple.

    One wanted to start from scratch with AI and the cloud; the other wanted to keep Siri the way it is and focus on local processing. You know who won.

    Now we see Apple playing catch up.
  • Reply 7 of 17
    shad0h said:
    Dramatising a component or feature not working in a developer beta...

    How exactly is that quality tech journalism?
    Gotta pander to those children and trolls somehow.
  • Reply 8 of 17
    It is not surprising that Siri continues to be dumb, on every single device.  Apple Intelligence is awful, so that is not going to help Siri either.  So when 18.4 does finally get released, Siri (or Sorry) will have no improvements.  As the saying goes, "You can't fix stupid".
  • Reply 9 of 17
    shad0h said:
    Dramatising a component or feature not working in a developer beta...

    How exactly is that quality tech journalism?
    In my opinion it is high-quality tech journalism. It is honest, based on repeatable observations. Where the observation was not repeatable, the author points that out.

    The author is writing about observations from different sources as well as from within the AppleInsider team, which brings a variety of experience and skills to the table.

    He's pointing out that Siri is performing less well as it goes through a transition in its development.

    As far as scrapping Siri, starting over, and waiting for a new one, as some have implied elsewhere online: Apple has to keep some core functions working that have been useful in things such as Apple Home, albeit with new hiccups.

    Apple is faced with a transition from a type of machine learning that required selective iterations of data, from the thousands on-device to billions or more in iCloud, along with smart search, to integrating LLMs (Large Language Models). LLMs iterate over data at such a large scale that they take data centers requiring the electrical power of small cities.
    It's a different kind of machine learning, so massive in its data scanning and so complex in its algorithms that AI software engineers admit they don't know precisely how results are arrived at, in the sense of every iterative test tried in the massive scanning of data, and the predictive testing tried and discarded to winnow down to the mostly usable predictions of characters (we are talking about the "L" that stands for Language) and results.

    I believe Apple is seeking to allow eventual transitioning and integration to R-AI—AI with reasoning, not just trillions of predictive tests on language. 

    We are an impatient species, and that drive moves us forward in starts and stops. We have wants and hopes that can become expectations and even demands. Our weakness and our strength is that hopes and wants jump to demands even when we are too impatient to have them realized (or not) in a time period that is hard to understand or predict.

    In the case of artificial intelligence developing in the Apple sphere, our expectations are getting ahead of reality. If you are disappointed in the progress so far, that seems quite reasonable. Disappointment is different from demands.
    Any blanket conclusion out in the blogosphere that Apple AI is useless because much of it does not yet exist goes to an impatience that's likely an inefficient use of our energy. But it is our choice if that is sometimes our reaction. And that's fine. We have that choice. My hope is that we don't 'throw out the baby with the bath water' now or in the future.


  • Reply 10 of 17
    tundraboy Posts: 1,931 (member)
    If AI is even 95% correct with its answer to queries,  it's still useless because at that accuracy level I'll have to check every answer against more reliable sources.  I actually think AI should be virtually 100% correct to be useful, and for those rare occasions that it makes a mistake, it should come up with the correct 2nd answer if the user says "Are you sure?" after the first attempt.

    If AI is being trained on the vast ocean of garbage that resides in cyberspace, how exactly will it be able to attain near-perfect accuracy?  In fact, what is AI's method for determining whether a piece of information is true or false?  No, the most common or popular answer or opinion doesn't work in a country where the stupid and ignorant vastly outnumber the wise and learned.

    AI is by far the most vast and pervasive instantiation of the Dunning-Kruger effect that humanity has ever seen.  With the possible exceptions of Trump and Musk.
  • Reply 11 of 17
    tundraboy Posts: 1,931 (member)

    I believe Apple is seeking to allow eventual transitioning and integration to R-AI—AI with reasoning, not just trillions of predictive tests on language. 

    I read a book by some tech researcher who said "if you can replace a neuron with a man-made nano-device that did everything a neuron did, then would that brain function any differently?" (Or words to that effect.) He then added that logically then, you should be able to replace every neuron in the human brain with the same nano-device and have an artificial brain and AI that is indistinguishable from the human variety.  That is his argument for why AI will eventually instantiate human intelligence.

    Of course the main stumbling block in his argument is that he assumed that a man-made nano-device that does EVERYTHING that a neuron does is unquestionably attainable.  We don't even know how neurons work.  We don't even know if we will ever know enough to truly understand how a neuron works.  This is the fallacy of assuming infinite future knowledge that a lot of futurists including AI advocates unwittingly commit.

    Yes R-AI, AI with Reasoning, would solve a lot of the criticisms leveled on AI.  Only problem is, no one really knows how to get a machine to truly reason the way the smarter segment of the human population does.  We don't even know if that is achievable, but some just power through with  their arguments by treating it as a given.   (Reasoning like the other, much larger, segment of humanity, on the other hand, --well, AI has already achieved that.)
  • Reply 12 of 17
    This is really bad to see. I understand this stuff is hard to do, but advertising these features as if they were working when in reality they are this bad is terrible.

    I wouldn’t be surprised to see a lawsuit. 
  • Reply 13 of 17
    doal Posts: 30 (member)
    The only way to fix Siri is to destroy it and start from scratch. 
  • Reply 14 of 17
    tundraboy said:

    I believe Apple is seeking to allow eventual transitioning and integration to R-AI—AI with reasoning, not just trillions of predictive tests on language. 

    I read a book by some tech researcher who said "if you can replace a neuron with a man-made nano-device that did everything a neuron did, then would that brain function any differently?" (Or words to that effect.) He then added that logically then, you should be able to replace every neuron in the human brain with the same nano-device and have an artificial brain and AI that is indistinguishable from the human variety.  That is his argument for why AI will eventually instantiate human intelligence.

    Of course the main stumbling block in his argument is that he assumed that a man-made nano-device that does EVERYTHING that a neuron does is unquestionably attainable.  We don't even know how neurons work.  We don't even know if we will ever know enough to truly understand how a neuron works.  This is the fallacy of assuming infinite future knowledge that a lot of futurists including AI advocates unwittingly commit.

    Yes R-AI, AI with Reasoning, would solve a lot of the criticisms leveled on AI.  Only problem is, no one really knows how to get a machine to truly reason the way the smarter segment of the human population does.  We don't even know if that is achievable, but some just power through with  their arguments by treating it as a given.   (Reasoning like the other, much larger, segment of humanity, on the other hand, --well, AI has already achieved that.)
    In referring to R-AI's addition of reasoning: it is at a low level, very low compared to humans, but it will nevertheless add to productivity in small ways at first, along with an error rate. The 95% accuracy rate you referred to can be as low as 80% and still have some uses. My bicycle tool multiplies my movement and is very useful in only 10% or less of my travels, but I still have uses for it. I barely have a use for AI in writing. Grammar checkers and spell checkers have been around for several decades, software trained on rules by humans. The AI equivalent hasn't helped me very reliably. But when I do find it helpful, it's for a first pass: it highlights possible errors so fast that I can review the suggestions and ignore what might not be applicable or appropriate. It still is a tool that can be used with appropriate expectations. A few of the AppleInsider staff have mentioned its usefulness. I expect they are more skilled in leveraging AI as an additive tool.
    Yes, it's overhyped among the masses. But it's steadily being improved in the labs. The same will be true for R-AI. It'll be a tool for appropriate use, and the reasoning will make it just a bit better.

    You're correct to point out the fallacy of approaching the vast network of neurons, which are analog, not digital. Plus there's their interaction with the incoming bodily senses: sight, sound, hearing, taste, smell, hot, cold, dull versus sharp pain, muscle tension or relaxation feedback, endocrine interaction; the list of things a robot won't have in the same way goes on and on. Nevertheless, as overall AI develops, it will be a useful tool in the right circumstances.

    I'm in for the long haul, warts and all. Used in conjunction with radiologists in mammogram reading, it has increased accuracy, not on its own but when used along with human readers.

    I expect slow progress, so I'm not cynical, but rather hopeful and patient.


  • Reply 15 of 17
    Folks, we should not get our shorts in a wedgie. The most obvious option is to NOT use Siri.

    I only use it for a countdown timer for my coffee machine, so I can work in the office and, when the timer goes off on my watch, go get a fresh cup and take it to my wife for her first cup of the day while she's in bed.

    Based upon my limited use, Siri works.  o:)
  • Reply 16 of 17
    shad0h said:
    Dramatising a component or feature not working in a developer beta...

    How exactly is that quality tech journalism?

    I mean, that's kind of exactly what they should be doing when said feature was advertised as the main selling point of new hardware six months ago, with literal ads promising something which clearly didn't exist in any way, shape, or form yet.

    It would be one thing if they had internal builds nearly capable of what they showed, but everything we've seen with the betas suggests they were really far off from that, and it was pure smoke, mirrors, and wishful thinking. At this point it seems likely we won't truly get the new Siri (assuming it ever materializes) until the next iPhone launches, which, when you consider people may have bought physical hardware for this stuff, is shady as hell.

  • Reply 17 of 17
    charlesn Posts: 1,432 (member)
    Fun fact: April 28th will mark 15 years since Apple bought Siri. To put that in context: the current iPhone at that time was the 3GS. And here we are, a decade and a half later, and Siri still can't reliably tell you what's on your calendar for today. This is why, in an otherwise all-Apple household, I'm still using 7-year-old Echo Dots for Alexa to operate voice-controlled appliances. Alexa has no problem parsing requests to set lights by specific percentages of brightness, or color or color temperature for my white lights. Or filling pots with specific amounts of water at specific temperatures from my kitchen faucet. Or setting a sous vide device to a specific temperature and then starting it. Etc. Meanwhile, Siri still struggles with today's calendar and so many other voice assistant 101 tasks. smh

    It's hard not to think that there is truly something about Siri that is irreparably broken. One can only imagine how much money and resources Apple has thrown at this problem by now. How can we still be HERE after 15 years? How could the "Siri promised land" of 18.4 possibly be such an embarrassment upon its first release? How could Siri actually be WORSE? "Delayed" is an acceptable excuse for the launch of an all-new feature. But it's no excuse for Siri, who's about to celebrate her quinceañera.  