Well, Siri launched back in 2011 with the iPhone 4S, and it's been a moaned-about disappointment ever since. Might as well continue the trend.
Yes and no. Siri (which stands for Speech Interpretation and Recognition Interface) was a start-up company acquired by Apple in April of 2010. You are correct that it first appeared in an Apple product in October of 2011, when it debuted in the iPhone 4S.
You guys don’t reflect what most people use Siri for. Apple Intelligence is not just Siri. All the other tools, like summaries and email categories, are actually super useful. Apple has to build out massive AI compute farms and integrate Siri into iOS and apps; that takes time. And Apple likes to get things right.
...which may require scrapping Siri altogether.
EDIT: There are reports coming out this morning that Apple has wasted more than a year trying to integrate Apple Intelligence with Siri before determining it just wasn't going to work. If accurate, that would help explain the "new" Siri moving to 2026.
I don't doubt that this is true. But I thought it was already well known inside Apple that Siri had structural limitations that were never going to allow it to advance very far. I heard this several years ago, and I thought Apple was working on a completely new product in this category. Guess I was wrong--too bad.
This is what happens when you have a supply chain expert in charge of an innovative company.
Want to make good computers so make some profit to do that; vs
Want to make a profit so make some good computers to do that .
A subtle difference, but it turns out to be everything. It takes time to be revealed, and then it is clear to see and difficult to fix. Tim Apple is exposed as being in the second category.
This Siri debacle is a clear example of it. It has been poor for years, and nothing was done about it because software isn’t in the supply chain for hardware (thus low priority), and here we are.
And now Apple is so far behind that it is becoming uncool. But hey, let’s get that geriatric pompous git King Charles (or more likely, an underling of his media manager) to do a playlist.
Yeah..... I totally get the impression that Craig Federighi is timid and would be incapable of advocating for himself
Do you actually think this nonsense through when you type it?
Craig Federighi isn’t in charge of the company and wherever he fits on the product vs profit continuum doesn’t matter.
Ah, like entropys, you are an expert on how the company is run. The fact that Tim Cook talks about how the executive team debates things and he doesn't always get his way is a little bit problematic for you. And Apple has been in lawsuits where internal communications were subpoenaed and show those debates, which backs up that claim.
But the biggest obstacle you and entropys have with your profits-over-products argument is that for the majority of the time since Steve Jobs stepped down as CEO, Apple's profit margins declined. In the last few years they have started to grow again, but that is entirely due to services becoming the second-largest business segment; services carries margins of around 70%. So even today, Apple's hardware margins are well below what they used to be. If Cook focused on margins to the exclusion of everything else, one would assume the company's margins would have eclipsed those of the Jobs tenure, not dropped.
Also, how does delaying a product that isn't ready, something that happened under Jobs as well, put profits over products? There is no profit in a delayed product. It seems to me that someone driven by profit would release the product even if it wasn't ready. Like, say, the original AirPort base station or MobileMe.
Y'all have a really large problem in that your claims are completely divorced from reality.
I only use Siri as an egg timer. And I am not interested in AI integrations; I sometimes use ChatGPT to write some texts, but that's it, and it can be done in a browser. Does Apple push this AI thing because the customers want it, or just because it's a thing now?
I don’t think Federighi actually uses an iPhone, because nothing gets fixed. Siri has become worse, if that is possible. He is just good at swanning about on a stage.
As someone who is a frequent critic of Apple on these forums, you might think I would be happy to see that my recent post on Apple’s failure to deliver new products was accurate. But I am not. I have been around here for a long time, back when Apple was the underdog against companies like Microsoft and RIM, and it is painful to watch.
Apple needs a creative leader, a risk taker, an innovator: someone who can inspire and motivate teams to actually deliver new products and services people want, on reasonable timelines, which can then be turned over to Cook to iterate as much profit from as he can over ten or twenty years. It is impossible for one person to do all these things. Even Jobs realized he couldn’t do it all and brought in people like Cook. The board should consolidate innovation under a new leader brought in from the outside, protect them operationally, and move business operations under Cook (or whoever succeeds him). And no, not Ive; innovation for him just meant “thinner.” We need the next Altman, someone working away at delivering the next big thing. Musk and Zuck need not apply either, having seemingly fallen into the same iteration trap, overpromising and not delivering.
CNBC is reporting that the Siri improvements are now delayed until 2026. That's the headline of the article currently running on the website. This (obviously) would mean an entire year of additional delay beyond a launch date that was already pushed out nine months from when AI Siri was first announced. My "Apple memory" goes back to roughly 1990, and I can't ever recall something like this happening before: a product announced for debut nine months later (that in itself is rare enough) and then blowing past that debut date by a full year. So I guess AI Siri now debuts in time for its sweet sixteen, but you really have to wonder if Apple is ever going to be able to fix Siri. smh. $166 billion in cash on hand and we can't do this? For context: Apple's cash alone, if it were a separate company, would rank #90 on the list of the 100 most valuable companies.
I know that nobody wants to hear it, but developing software is incredibly difficult. If you look at the way hardware has been scaling up over the past decade alone, it's absolutely phenomenal. I'm not implying that hardware isn't extremely complex; it is, especially with increased specialization across CPUs, GPUs, TPUs, NPUs, cache logic, etc. In these areas the complexity has increased in each functional domain, but the numbers shoot up massively as a result of parallelism.
Once a specialized hardware design is built out in one of these areas, optimized, verified and validated, it is repeated numerous times over to achieve greater parallelism and multiprocessing capability and capacity that scales out as far as the fabrication technology and physics allow it to. The M3 Ultra's 180+ billion transistors attest to the massive scaling that is possible. But once it's certified as being done, it's done.
Software at every level, including machine code, microcode, firmware, drivers, kernel, system, and application code, is built to take advantage of the capabilities provided by the software layers below it and ultimately the hardware. The software layers can change tremendously over the life of the underlying hardware.
At each software layer there are humans involved in generating the requirements, designs, carrying out the implementation, testing every function, testing every component, testing every library and executable, and integrating all of the pieces together. Humans make mistakes. Bugs don't crawl into the code from the swampy surroundings, humans create them.
While hardware logic can be tested and verified to be correct, or at least certified to meet the specified requirements and perform exactly as intended, humans and the software they create can do pretty much anything, with variations depending on which human is doing what. You really can't certify ahead of time that any developer will produce the same output and behaviors given a defined set of inputs and requirements. Heck, you can't even assume the human-generated requirements are correct in all cases.
The massive parallelism provided by the hardware just makes things a lot more difficult for software developers. Specialized teams are tasked with building specific pieces and parts of the software solution, but every team still has a great deal of complexity to tame, and bringing all of the artifacts from each team together into a coherent solution is also very challenging. Parallelism at the hardware layer has been around for much longer than the majority of software developers have been able to take advantage of it; in a lot of cases, the problems that would benefit from the available parallelism were few and far between. This is no longer the case.
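The point about exploiting parallelism can be sketched in a few lines of generic Python (a toy illustration with made-up names, nothing to do with Apple's actual code): the parallel version computes the same answers as the serial one, but only because the work was first restructured into independent chunks, and that restructuring is where the developer effort goes.

```python
from concurrent.futures import ThreadPoolExecutor

def count_primes(limit):
    """CPU-bound work: count primes below `limit` by trial division."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

limits = [1_000, 2_000, 3_000, 4_000]

# Serial version: one worker, trivially easy to reason about.
serial = [count_primes(n) for n in limits]

# Parallel version: correct only because each chunk is independent of
# the others. (A thread pool keeps the sketch portable; real CPU-bound
# code would use processes or GPU kernels for true hardware parallelism.)
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(count_primes, limits))

assert parallel == serial
```

The hard part is rarely the pool itself; it is proving that the chunks really are independent, which is exactly the integration burden described above.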
With AI, the nature of programming has changed in many ways. In the past, developing software mostly involved solving problems that had a deterministic, logical solution: it was correct or it was incorrect. You could put together a truth table and determine, for a given set of inputs, all of the possible and finite outcomes. AI isn't constrained to a finite set of logically provable outcomes based on logically provable inputs. It's largely driven by probabilistic outcomes fed by massive numbers of inputs that are themselves subject to probabilistic behaviors, all of which are constantly evolving.
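That contrast can be made concrete with a toy sketch (hypothetical code, not anything Apple ships): the XOR gate below is exhaustively verifiable from its truth table, while the sampling classifier can return different labels for the identical input.

```python
import math
import random

# Deterministic logic: a truth table fully specifies every outcome,
# and the finite input space can be verified exhaustively.
def xor_gate(a: bool, b: bool) -> bool:
    return a != b

truth_table = [xor_gate(a, b) for a in (False, True) for b in (False, True)]
assert truth_table == [False, True, True, False]

# Probabilistic "AI-style" behavior: outputs are drawn from a
# distribution (a toy softmax over two labels), so repeated calls
# with the same input need not agree.
def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(scores, rng):
    probs = softmax(scores)
    # Sample a label according to its probability rather than
    # computing a single provable answer.
    return rng.choices(range(len(probs)), weights=probs)[0]

rng = random.Random(0)
labels = [classify([1.0, 1.2], rng) for _ in range(20)]
```

Testing the first function means checking four rows; testing the second means reasoning about distributions, which is why the specification burden shifts the way the next paragraph describes.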
In my opinion, AI has moved a lot of the cognitive burden for coming up with a "correct" solution from the software development team to the software specification team that now must include mathematicians, data scientists, social scientists, computer scientists, statisticians, linguists, human factors engineers, and even psychologists.
It's not fair to say that Apple's software development teams, their leaders, or the leadership team are not up to snuff because they can no longer deliver software in the same manner and timeframes that they did when AI/ML and Apple Intelligence were not in the picture.
The software development teams are a crucial part of the machine, but with Apple Intelligence they are by far not the only critical part of the machine. Software development at any level involves two high level areas of concern. The one we tend to focus on is the software team always "building things right." But the other high level area of concern is "building the right things." With Apple Intelligence a lot more of the burden has moved to the latter concern, and the "right thing" is a lot more fuzzy. Once Apple regroups and settles on “what” they want Siri to be, their software teams will be fully capable of making it happen and deciding “how” to do it. The boundary between who is responsible for the “what” and the “how” has plagued software development since day one.
Apple has been introducing AI features for almost a decade now. Face ID is AI-based, crash detection is AI-based, fall detection is AI-based, AFIB detection is AI-based, workout detection is AI-based, subject detection in photos is AI-based, spelling auto-correct and auto-completion is AI-based, noise canceling/transparency mode in AirPods is AI-based, and I could go on. You may not like those things being integrated, but looking at that list, I think there are a bunch of AI features that are pretty popular with customers, and Apple's products would be worse off if they were missing. Also, I don't think they added those features "because it's a thing now".
As far as generative AI goes (and folks like yourself seem to miss that generative AI is a subset of AI), I personally think the Writing Tools are helpful. Having a built-in spelling and grammar check is much nicer than spelling alone. I also like that it can rewrite text or make it more concise. Genmoji and Image Playground are fun, but I can't say I use them a lot; I'm also not bothered by their existence. I didn't find notification summaries and ChatGPT integration particularly helpful, so I turned them off, but it doesn't bother me that they exist, and perhaps other people like them. I'm lost on why people are bothered by features they don't have to use. When it comes to Siri, I would love it if it could do what was promised, and the best part is that if Apple does get it there, you will still be able to use it as a timer.
I tried to use Image Playground again. It was working before, but now I get an error message saying that Siri and my iPhone must use the same language. Currently your iPhone is set to English (Canada) and Siri is set to English (Canada), lol.
At the risk of bringing a distraction into the topic, and coming from the other side of the Pacific: what is the difference between Yankee English and Canadian English? Because it sounds all the same to me. Does Siri put an “eh” at the end of each sentence?
I simply don't go back to unreliable products. I don't use Apple Maps anywhere but on CarPlay. Even then, I have a healthy dose of skepticism. There is very little I trust Siri to do; and none of it is critical stuff. It is just a product I have no interest in or use for. I'm not surprised they can't make it work, and really....if you bought an iPhone 16 based on their claims...you're rather naive. Yes, YMMV, and the world is welcome to all of this...if they can make it work for them. I gave up, and won't try again.
Not too surprising—AI features are complex, and Apple likely wants to get it right rather than rush it. Siri’s upgrade with Apple Intelligence is a big step, so delays might mean they’re fine-tuning it for better performance. Hopefully, it’ll be worth the wait!
I hear you. Yes, those AI functions that just work in the background are nice, except for the "computational photography," which ruins photos. Siri could still be better in almost every way. And as for this whole AI hype, I think Apple should just lay low and provide a good platform that is accessible to different AI companies, let those companies keep competing, and let the users decide which AI service they prefer.
Somewhat shocking opinion piece by John Gruber. In general he's always been pretty upbeat about Apple, but not today.
"What Apple showed regarding the upcoming “personalized Siri” at WWDC was not a demo. It was a concept video. Concept videos are bullshit, and a sign of a company in disarray, if not crisis. The Apple that commissioned the futuristic “Knowledge Navigator” concept video in 1987 was the Apple that was on a course to near-bankruptcy a decade later... Last week’s announcement — “It’s going to take us longer than we thought to deliver on these features and we anticipate rolling them out in the coming year” — was, if you think about it, another opportunity to demonstrate the current state of these features. Rather than simply issue a statement to the media, they could have invited select members of the press to Apple Park, or Apple’s offices in New York, or even just remotely over a WebEx conference call, and demonstrate the current state of these features live, on an actual device. That didn’t happen. If these features exist in any sort of working state at all, no one outside Apple has vouched for their existence, let alone for their quality....
Why did Apple show these personalized Siri features at WWDC last year, and promise their arrival during the first year of Apple Intelligence? Why, for that matter, do they now claim to “anticipate rolling them out in the coming year” if they still currently do not exist in demonstrable form?
And now they look so out of their depth, so in over their heads, that not only are they years behind the state-of-the-art in AI, but they don’t even know what they can ship or when. Their headline features from nine months ago not only haven’t shipped but still haven’t even been demonstrated, which I, for one, now presume means they can’t be demonstrated because they don’t work."
https://daringfireball.net/2025/03/something_is_rotten_in_the_state_of_cupertino