Doom-and-gloom reporting on Apple Intelligence continues to ignore Apple's playbook
Chances are you've been underwhelmed by Apple Intelligence, but a combination of unrealistic expectations, fictional narratives, and ignored progress won't stop it from being a key part of Apple's ecosystem.

Apple Intelligence isn't chasing the same AI grift as other companies
The world viewed WWDC 2024 with middling expectations for Apple's late entry into the artificial intelligence race. The slow rollout and limited feature set drew a predictable yawn from analysts accustomed to the grift and glamour promised by other tech companies.
The protracted exhale of doom-and-gloom reporting continues with a wide-ranging history of, and speculation about, Apple's AI initiatives from The Information. Much of the information has been covered previously by Bloomberg, but the report did detail the history of Apple's foundation model team.
However, my focus is on the predictable narrative about Apple's supposed internal confusion over AI, with its strategy (or lack thereof) resulting in discontent and staff departures. As always, it's important to remember that "people familiar with the matter" are likely disgruntled employees eager to air their grievances, and the coverage will likely lean into any negative narrative that helps amplify a story.
The reality is that the artificial intelligence industry is large, complex, and full of momentum, resulting in more flux and noise than even Apple is used to dealing with. It would be impossible to ignore the allure of working on what is promised to be the future of AI -- superintelligence -- with a pay package larger than the one Apple's CEO receives.
Apple's true AI problem
The issue isn't that Apple got caught off guard or is suffering some kind of brain drain. It's the usual friction the world has with the company -- its pursuit of consumer privacy and its insistence on product secrecy.
Both of these facets are polar opposites of the AI mantra, which requires being loud, declaring the end of civilization with every point update, and sucking up every ounce of data available, legal or not. As a result, Apple's offerings are very different from what we've seen from other AI companies, and they certainly come off as underwhelming next to seemingly sentient AI chatbots.
There's no good way to make advanced autocorrect sound interesting. Apple Intelligence launched with Writing Tools that edit, manipulate, or generate text, Image Playground with its poorly made renderings of people and emoji, and notification summaries that caught a lot of flak from news publications.
On the spectrum between Apple's worst launch, Apple Maps, and its best, the iPhone, Apple Intelligence sits closer to Apple Maps. However, it's not the absolute disaster being peddled by many reports on the technology.
For the most part, people outside the tech space likely have some idea of what artificial intelligence is, but no idea that Apple even has a version running on their iPhone. Apple's strategy in the space is to get out of the way and offer the technology in ways that enhance the user experience without always shouting that it is an AI feature.
The most visible indication that Apple Intelligence is in use is the shiny rainbow element that shows up in the UI. Much of Apple Intelligence isn't explained beyond a tiny symbol on a summary, if any sign of AI is shown at all.

The Siri animation is the most prominent use of Apple's AI rainbow
Apple's strategy going forward, offering developers and users direct access to the foundation models, is to get further out of the way. There will be times Apple Intelligence is in use that the user will never even realize -- and that's the point.
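To make that concrete, here's a minimal sketch in Swift of the kind of call developer access to the on-device foundation models enables, based on the Foundation Models framework Apple showed at WWDC 2025. The function name and prompt are my own illustration, and exact API details may differ from what ships.

```swift
import FoundationModels

// A minimal sketch of on-device text summarization. API shapes are
// based on Apple's WWDC 2025 materials and may differ in release.
func summarize(_ text: String) async throws -> String {
    // Apple Intelligence may be disabled or unsupported on a device,
    // so check model availability before opening a session.
    guard case .available = SystemLanguageModel.default.availability else {
        return text // fall back to the unsummarized text
    }

    let session = LanguageModelSession(
        instructions: "Summarize the user's text in one sentence."
    )
    let response = try await session.respond(to: text)
    return response.content
}
```

An app could call something like this anywhere it displays text, without the user ever seeing an "AI" label.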
That invisibility is the perceived problem with Apple Intelligence. It isn't here to replace jobs or discover "new" physics; it's here to improve workflows and help users with mundane tasks.
AI reporting's red herrings
One of the biggest misses repeated around Apple Intelligence since its launch is the idea that Apple is behind or was caught off guard by AI. Rewind to before ChatGPT's debut, and Apple was at the forefront of machine learning, shipping its Neural Engine across its product line before such hardware was an industry standard.

Apple Intelligence was built on Apple's strong foundation of ML
Once the catch-all term "artificial intelligence" started being used everywhere, even in place of the more accurate "machine learning," demand for anything carrying the label skyrocketed. Instead of misleading customers, Apple continued referring to its machine learning tools as such, and waited to call anything AI until it was actually using generative technologies.
The report suggests that John Giannandrea was so dismissive of the idea that LLMs could be useful that he stopped Apple from pursuing them in 2022, when Ruoming Pang, the now-departed head of Apple's foundation model team, introduced them. That narrative is likely accurate, but it leaves out the fact that LLMs were largely useless in their debut state as incredibly hallucination-prone chatbots.
The story around Apple Intelligence continues to suggest that Apple's executive team woke up sometime in 2023 and realized the technology was important and should be pursued. However, this very same report suggests that Apple's teams under Giannandrea had been working on various AI technologies since they were put together in 2018.
The reality is much simpler -- analysts and pundits are unhappy with how Apple has approached AI so far. They seem to have completely missed what Apple's strategy is, and they continue to push the idea that Apple is behind instead of considering the company's own direction.
Of course, I won't ignore that there has been some reshuffling of leadership internally as Apple has adjusted how it wants to tackle AI. Craig Federighi is now overseeing Apple's AI initiatives, and the company is looking externally to bolster its efforts.
However, instead of celebrating Apple's ability to develop its own models while also giving customers options, reports suggest Apple is giving up on its internal efforts. There don't appear to be any signs of that actually occurring beyond these accounts from anonymous employees.
Apple's true AI goals
For some reason, everyone seems ready to accept that Apple is simply going to hire OpenAI or Anthropic to build the backend of Siri while alienating its own teams. This incredible departure from Apple's usual strategy of vertical integration has been accepted as inevitable because, per the narrative, Apple is so far behind that it would have no choice but to give up.

Apple CEO Tim Cook isn't sleeping on Apple Intelligence. Image source: Apple
Instead, when framed as the actions of a company that knows what it is doing and is actively evolving its strategy in a consistently exaggerated AI field, it's easy to see where Apple may be going with Apple Intelligence. It starts with app intents and ends with a private AI ecosystem.
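App intents are how apps already expose their actions and data to the system. As a rough illustration, here is what a minimal intent looks like in Swift using the App Intents framework -- the flight-lookup scenario and every name in it are hypothetical examples of mine, not anything Apple has shipped for Siri.

```swift
import AppIntents

// A minimal, hypothetical App Intent. The flight-lookup scenario and
// the names are illustrative; the AppIntents API shapes are real.
struct CheckFlightArrivalIntent: AppIntent {
    static var title: LocalizedStringResource = "Check Flight Arrival"

    @Parameter(title: "Flight Number")
    var flightNumber: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would query its own data source here.
        let arrival = "2:40 PM" // placeholder result
        return .result(dialog: "Flight \(flightNumber) is due at \(arrival).")
    }
}
```

A contextual Siri could chain intents like this across apps, which is exactly where the accuracy stakes get high.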
Apple's executive team likely stepped in and surprised the Apple Intelligence team with its announcement of the contextual AI and Siri delay. The report reveals what I've suspected all along -- that Apple had a working version ready to ship, but its failure rate was too high for Apple's exacting standards.
The primary example Apple showed off for the new contextual Siri that relies on app intents is the mom-at-the-airport scenario. Imagine if there were a 30% chance that it told you the wrong time, or even the wrong airport or wrong day.
If poorly summarized headlines made the news, imagine what would happen if even one person out of billions of users went to the wrong airport.
Apple's pursuit of perfection with AI is likely folly, since the technology is inherently failure-prone. It can't think or reason, so unless there are incredible checks and balances with human intervention, there will likely never be a 100% guarantee of accuracy.
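Nobody outside Apple knows what those checks would look like, but the shape of the problem is easy to sketch. Here's a purely hypothetical Swift example of gating a model's answer behind a confidence threshold instead of asserting it outright; nothing here reflects a confirmed Apple mechanism.

```swift
// Purely hypothetical sketch of a "checks and balances" gate.
// Nothing here reflects a confirmed Apple mechanism.
struct ModelAnswer {
    let text: String
    let confidence: Double // hypothetical self-estimate, 0.0...1.0
}

func present(_ answer: ModelAnswer) -> String {
    // Below a threshold, hedge and ask the user to verify
    // rather than stating the answer as fact.
    if answer.confidence >= 0.95 {
        return answer.text
    }
    return "I'm not sure, but here's what I found: \(answer.text)"
}
```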
Whatever system Apple devises to help overcome this limitation of AI, it'll likely debut later in 2025 or early 2026 during the iOS 26 cycle. I have no doubt that a version of that technology is coming, even as Apple further utilizes app intents in other ways across its platforms.

Apple is already bringing third-party tools into the Apple Intelligence umbrella
While the contextual, on-device, and private Apple Intelligence will be game-changing on its own, it's nothing compared to what could come next. Apple's willingness to develop its own models while still giving users access to others will be its ultimate victory in the space.
I don't believe Apple is going to replace the Siri backend with Claude or ChatGPT. That reporting seems to be the result of pundit wishcasting and the desire to see Apple fail.
Instead, Apple seems to be in talks with OpenAI, Anthropic, and others to bring versions of their popular AI models to Private Cloud Compute. If that happens, it goes well beyond Siri simply shaking hands with ChatGPT over a privacy-preserving connection -- it guarantees user data will never be used by third parties.
If Apple's pitch for Private Cloud Compute is a server-side AI that runs with the same privacy and security measures as an on-device AI, then imagine those same conditions applying to ChatGPT.
The future of Apple Intelligence is a combination of contextually aware local models running on devices with access to third-party models running via Private Cloud Compute. Users would be able to choose which models suit their needs in the moment and submit data without fear of it becoming part of training data.
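If that hybrid ever ships, the user-facing choice might look like a simple routing layer. This is pure speculation on my part -- Apple has announced no such API, and every name below is hypothetical -- but it sketches the architecture I'm describing.

```swift
// Entirely speculative sketch of the hybrid routing described above.
// Apple has announced no such API; every name here is hypothetical.
enum ModelBackend {
    case onDevice                 // local Apple foundation model
    case privateCloudCompute      // Apple's server-side model
    case thirdParty(name: String) // e.g. a partner model hosted in PCC
}

// Pick the most private backend that can satisfy the request.
func backend(userChoice: String?, needsWorldKnowledge: Bool) -> ModelBackend {
    if let choice = userChoice {
        return .thirdParty(name: choice) // user explicitly opted in
    }
    return needsWorldKnowledge ? .privateCloudCompute : .onDevice
}
```

The design point is that privacy is the default, and escalation to bigger models is always the user's call.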

Apple could win the race by giving users choice in an AI ecosystem
The Apple Intelligence ecosystem would be the only one built from the ground up for ethical artificial intelligence. Regardless of the model, on-device or on Apple's servers, it would run privately, securely, and on renewable energy.
In a world filled with exaggerations around AI and its future, we should celebrate the grounded and realistic promises. Artificial intelligence is a tool, like a hammer; it can enhance human productivity, not replace humans, and Apple's strategy reflects that.
Apple's tagline for Apple Intelligence is "AI for the rest of us." And by providing billions of Apple customers access to sustainable, ethical, private, and secure AI models, I'd argue Apple is pursuing a potential future that's not only realistic, but places it well ahead of the competition.

Comments
This is the line that says it all, and what the media and so-called “AI experts” are missing. It's the search wars all over again, only this time Apple will not only have others' AI models, it will also have its own. Sounds like a win-win.
Usually, I would say the assertion of this article is correct. But in this case, Apple was caught off guard by the popularity of consumer-available AI tools; that's why they have not yet gotten a model working to their satisfaction and have been delayed. They likely had too much focus on the abandoned Apple Car and the Apple Vision Pro, and failed to devote resources to AI early enough or to recognize its importance. We all know they sat on Siri and didn't improve it for years.
So normally, you would be right but in this case, they did drop the ball. I blame the executive team for not putting enough emphasis on it earlier.
In summary, I think everyone expected a much better Siri in at most a year's time. That wasn't an unreasonable/unrealistic expectation given what was demoed in 2024. Apple simply shouldn't have demoed and subsequently advertised that feature if it couldn't deliver it.
But I don't think consumers really care about these services.
AI saves time for many tasks. It doesn't make anyone smarter, it just makes them more efficient. What someone does with their newly gained free time is something else. If you use it to read and learn new topics, yes, you can get smarter. If you just sink your extra free time into watching TikToks, well, I don't know about that.
Having AI write your work e-mails doesn't make you understand your business any better. Nor does it help improve your language and communication skills. It might smooth things over with your colleagues but it won't help you come up with bright ideas on how to better the place you work.
Anyhow, yes, all of these companies are fighting over a limited pool of talent. Some top-level AI engineer working on a model at Meta isn't helping Apple get its model improved. Yes, hiring the best people does matter, which is why these people are being paid more than minimum wage, or even more than many other people in the same company.
It's also important to reiterate that no consumer AI business is actually turning a profit right now. Everyone is running at a loss, including OpenAI (operator of ChatGPT, the most popular consumer AI service). All of these companies are trying to set a foundational framework for future success and profit. It's a marathon, not a sprint, and Apple's mission of protecting user privacy is always going to put it at a disadvantage compared to competitors who make the lion's share of their profits by selling user activity data (Alphabet, Meta).
Apple's senior management knows how bad Siri is. Even the "improved" Siri they were developing wasn't good enough, so it was postponed to some future release date. Apple's years of neglecting Siri are now utterly and completely clear.
Most of the currently deployed Apple Intelligence features are pretty lackluster. Some of them work some of the time but there's no killer app or killer feature. In the end, most consumer AI functionality is smoke-and-mirrors.
But yes, they should have been more cautious about what they announced. My gut feeling is they really expected to deliver but belatedly discovered that getting these features to typical Apple polish was harder than they expected. We've seen periodic stumbles from Apple before; after all, they are human.
Generally, the better practice is "under promise, over deliver," IMHO. But I'm not the CEO of a Fortune 5 company.
Siri was a standalone startup whose roots were in SRI International (um, the name S-i-R-I isn't accidental); the speech recognition engine was based on Nuance technology. That startup had a standalone iOS app in 2010 (I even remember the very un-Apple-like script "Siri" on the app icon on my iPod touch) before the company was acquired by Apple. Apple most certainly did not invent the technology. They did incorporate it into iOS in a relatively quick 18 months.
After that, they effectively did nothing of consequence with Siri. Siri's poor voice recognition has been well hashed over for 10+ years. People expected Apple to make Siri better once it was part of iOS. They did NOTHING. Siri has been festering for 15 years. They made bombastic statements about Siri at WWDC 2024 which failed to come to fruition.
When Apple's senior management belatedly got the hint that ChatGPT-like consumer-facing AI was a thing, they understood that voice recognition was an integral part of this experience. But Siri's long neglect slowed down their efforts. Had they been continuously maintaining and improving Siri over 15+ years, they would have been able to leverage these improvements into launching more Apple Intelligence features.
Voice command would be integral for the rumored screen-based HomePod device as well as something like an Apple Ring. Voice recognition needs to work 99+% of the time for 99+% of users, not just some of the time for some users. Including ones with strong accents, people not speaking in their native tongues, people with unusual cadences, lisps, etc.
Today's Siri is nowhere close, which is why Apple senior management postponed the "improved Siri" deployment. It's still back in the garage. My guess is that New & Improved Siri™ has maybe a 50% chance of debuting in iOS 26. That's right, it might slip until iOS 27.
While I have no special insight into Apple's engineering efforts, my guess is that Apple hoped to bolt AI features onto the existing Siri service and belatedly realized in late 2024 that it needed an entire code rewrite -- both front end and back end -- to get Siri to where they want it to go. One thing Apple knows it cannot do is rush out a "new" Siri that is only marginally better than what currently exists. People need to be able to TRUST the voice recognition system if they are going to use it regularly to do important things.
Today's Siri does not enable any sense of trust whatsoever. It's actually closer to an SNL parody of voice recognition systems, which it wouldn't be had Apple committed to continually improving it. What was a cute novelty in 2010 is now just an embarrassment.
With each passing week, the AI stakes grow enormously. Apple is aware of that and they can't have "new" Siri going rogue like AI Dumpster Fire of the Week Replit.
https://www.msn.com/en-us/news/other/replits-ceo-apologizes-after-its-ai-agent-wiped-a-companys-code-base-in-a-test-run-and-lied-about-it/ar-AA1J2IlP
And that comes just a week after Grok 4's "MechaHitler" delusions of grandeur.
Hell, you don't need comedy writers to make up AI technology skits. You can just cut-and-paste actual AI industry news in July 2025.
Wouldn't it be fun if "new" Siri With Apple Intelligence accidentally on purpose deleted your entire calendar and contacts databases?
One takeaway I've gotten from following the AI industry over the past three years is that AI CEOs are the absolute least trustworthy executives in the entire technology industry. They are snake oil salesmen: Altman, Amodei, Musk, Zuckerberg, and now this Replit doofus.
Apple must take its time and get AI right, because the rest of the industry is lowering the bar every single day. No one really needs some silly, fawning AI chatbot that exhibits serious bias.
Overall, what Apple is (and should be) afraid of is the iPhone simply becoming a conduit for other AI services and apps. This was referenced last year when someone mentioned a concern that it would just become a "dumb box." We are a long way away from that right now, thankfully. However, Jony Ive's pairing with OpenAI to create new products must be increasing Apple's concern. Nothing OpenAI (or others) create will replace an iPhone in the next few years, but in 5 years, 10? If those products (which are not meant to replace the phone) are successful, of course they will build a phone in the future.
Whether AI is hype or not, right now people are buying into that hype (as you said) and one day, a "black box" type phone with AI as the primary driver may emerge that can do all sorts of things an iPhone can't. Which is why Apple is (and should be) busting a move on the technology now.
The good part of last year's fiasco is they are unlikely to be caught with their pants down again. I hope.
I would certainly differentiate Apple's previous "machine learning" work with its neural chip from "Apple Intelligence". I think the article conflates the two, and perhaps I'm making more of a distinction than there really is. The reason I differentiate between them is that the former is really more behind the scenes, and Apple has been dedicating time, effort, and money to developing it. The latter, however, is what Apple came up with when it realized how utterly behind it was on AI, because Giannandrea was disconnected from the industry's emphasis on LLMs, which is why Siri never evolved.

So the leadership scrambled and came up with "Apple Intelligence" to make a more overt attempt at pushing "AI" to users. Mind you, the latter is what Apple sold the public on; pundits did not push their opinions or wishes; Apple advertised those Apple Intelligence features. And it delivered junk, and even failed to deliver the debacle that is "smarter Siri".

Furthermore, "Apple's strategy in the space is to get out of the way and offer the technology in ways that enhance the user experience without always shouting that it is an AI feature" is exactly the former "machine learning" strategy. However, the public is expecting the more overt AI interactions, which is why I believe there needs to be a distinction between "machine learning" and "Apple Intelligence". Apple did well with the former but utterly failed to anticipate the latter.
I agree with the opinion of many of the other posts describing Apple's failure to develop Siri for the last 15 years. As with my other posts, I believe this stems from Apple's inability to work on multiple projects at the same time, instead switching from one product to another like a computer before Microsoft Windows/386. With so many products, so much money, and so many employees, Apple has to start multitasking.
The iPod had almost no critical technology that Apple owned. The OS was licensed (Pixo). The key bit of hardware was a 1.8" hard drive from Toshiba.
The value-added in the iPod was the *design*, not the technology. Apple did a masterful job of combining off-the-shelf technologies into a unique product, and marketed it well. Later, they replaced the hard drive with flash -- a technology they also did not invent and did not control.
Apple today is almost the polar opposite. Apple has developed and/or owns many compelling technologies relevant to AI/ML, from Apple Silicon on up the stack -- compilers, programming languages, APIs (like CoreML), you name it. And any gaps in Apple's technology toolkit could be easily filled by licensing from others, because Apple today also has a ton of cash.
What Apple seems to be struggling with is combining those technologies into a compelling product. They are good at integrating those technologies into existing products, like when they added the neural engine to the iPhone and gave us Face ID. But they seem to really be struggling to create an AI product analogous to the iPod.
Meanwhile, we have companies like OpenAI creating a product -- the chatbot -- that is clearly an "AI product". It's not a perfect product -- there are many pain points. But it is an AI product, not just a product that happens to use AI.
If Apple can get its product development act together, it could do great things in the AI space because it does have so many great technologies. I suspect Apple will eventually get its act together and develop that great AI product, but it might require a new CEO to do it (presumably that will be John Ternus).
AI is here to stay. It's doing some amazing things in the enterprise markets and if it can eke out 100 basis points in net profit for some Fortune 500 company, guess what? They're gonna use it.
I bet 99.9% of people on these tech news site discussion forums who say they don't use AI are over 30 years old. That's right. There's a generational gap in AI usage.
Just yesterday, the AP reported on this:
https://apnews.com/article/ai-companion-generative-teens-mental-health-9ce59a2b250f3bd0187a717ffa2ad21f
That's right, pre-teens are using this stuff, and some older teenagers even see the danger in young children using AI.
And the consumer AI industry is largely a lawless frontier right now. It needs heavy regulation from world governments, not just your state's governor or 1600 Pennsylvania.
And many of today's consumer AI companies are really no better than tobacco companies. They are creating AI chatbots that look and behave like anime characters (Grok's assistants, SpicyChat AI, Character.AI, et cetera) to attract youngsters into interacting with them. It's the digital equivalent of adding candy flavors to vaping products.
Look at the way Grok started rolling out their AI anime-skinned assistants like Ani. They debuted on iPhones first, still not available on many Android devices. Why? Probably because iPhone is the platform of choice for young people (the under 25 market), especially teens. If you are over 35, guess what? You are not the target audience of consumer AI companies. I'm way beyond that but at least I know that I am not who AI companies crave as a user.
If you care about the future of today's youngsters -- the ones who will be tasked with fixing many of the world's problems -- you need to pay attention to what AI is today, where it is going, who is using it, for what reasons, etc.
There's one oldtimer here who continually gripes about AI, fearing it will displace him from his job as a writer. AI's potential effects are far, Far, FAR greater than that.
Just sticking your head in the sand or plugging your ears and saying "I'm not using AI so nyah!" like a little brat throwing a tantrum isn't going to stop AI from proliferating. That much is clear in the 3+ years I've been closely following AI.
The problem isn't with the ability of these AI chatbot assistants to have a conversational tone. The problems are things like accuracy, bias, lack of reasoning, lack of emotion/compassion, inability to recognize satire/sarcasm, lack of common sense (which is a growing problem with a lot of humans).
I occasionally try out all of these consumer AI assistants and they are all dreadfully inaccurate. About 5.5 months ago I asked "what time is the Super Bowl kickoff?" Not a single one got it right. About a week before the NCAA men's basketball tournament, I asked 4-5 chatbots to fill out a bracket. Not a single one provided a bracket that had the actual teams playing in the tournament.
Grok's performance was unbelievably bad at the March Madness bracket. Not only did it create fake competitors playing a fake bracket, it predicted that all of the higher seeds would win their games. ZERO upsets. That *NEVER* happens in a real sports competition.
Sure, it might sound like you're having a human-like conversation but Grok is about the worst of the AI chatbots. Embarrassingly bad. And let's not forget its MechaHitler hallucinations. That just happened a couple of weeks ago.
This is a good example of how a consumer thinks: Grok sounds real, so it must be accurate and trustworthy. LOL, terrible assumption. And now with Grok's newfound language skills, would you vote it into office? Accept its recommendation to eat rocks and put glue on pizza? Give up your ICBM launch codes?
Ahahahahahahaha!!!!!