iOS 18 adoption steady as users explore AI & customization
Apple's iOS 18 is improving the user experience with AI-powered tools, deeper customization, and enhanced privacy, driving strong adoption across iPhones and iPads.

iPhone 15
The iOS 18 update brings AI-powered tools, deeper customization, and enhanced privacy. Currently, 76% of iPhones released in the last four years and 68% of all iPhones are running the latest OS.
The latest OS, iOS 18, has matched its predecessor in overall adoption, with 76% of iPhones from the past four years and 68% of all iPhones now running the software. In comparison, iOS 17 reached the same 76% adoption rate for newer devices in February 2024, though it fell short of iOS 16's performance over a comparable period. While iOS 18's numbers reflect a solid uptake, they don't meaningfully outpace iOS 17's at the same point in its cycle.
Meanwhile, over half of all iPads -- 53% -- are now on iPadOS 18, while 63% of newer iPads have upgraded. Despite these milestones, around 27% of iPads remain on iPadOS 17, and 10% continue to use older software.
What makes iOS 18 stand out to users
The success of iOS 18 stems from its blend of practical enhancements, personalization, and user-centric AI features. Apple Intelligence, the standout addition, introduces a suite of tools powered by machine learning.
Tasks like organizing photos, drafting messages, and curating personalized suggestions feel more intuitive and tailored than before. Customization is another major draw.
For the first time, users can freely arrange app icons, apply color themes, and tweak the Control Center with resizable buttons and third-party app integrations. The Lock Screen has also evolved, allowing users to replace default shortcuts with apps or functions that fit their routines.
Messaging has received notable updates, including the ability to schedule messages, apply animated effects, and respond with an expanded range of emoji reactions. Apple has also embraced Rich Communication Services (RCS), improving interactions with Android users through features like high-resolution media sharing and read receipts.

Home Screen customization in iPadOS 18
Privacy remains a cornerstone of iOS. In iOS 18, users can lock or hide specific apps, requiring Face ID or a passcode to access them. Hidden apps are tucked into a secure folder within the App Library, further bolstering security.
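For developers wondering what that kind of biometric gate looks like under the hood, here is a minimal sketch using the LocalAuthentication framework. It shows the standard mechanism a third-party app would use to require Face ID or a passcode before revealing content; it is not Apple's internal implementation of app locking, and the function name is illustrative.

import Foundation
import LocalAuthentication

// Rough sketch of a Face ID / passcode gate in a third-party app.
// Nothing sensitive is shown until authentication succeeds.
func unlockSensitiveContent(completion: @escaping (Bool) -> Void) {
    let context = LAContext()
    var error: NSError?

    // .deviceOwnerAuthentication allows a passcode fallback when biometrics fail.
    guard context.canEvaluatePolicy(.deviceOwnerAuthentication, error: &error) else {
        completion(false)
        return
    }

    context.evaluatePolicy(.deviceOwnerAuthentication,
                           localizedReason: "Unlock this app's content") { success, _ in
        DispatchQueue.main.async { completion(success) }
    }
}

The system handles the Face ID prompt and the passcode fallback; the app only learns whether the check succeeded.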
A closer look at adoption trends
While iOS 18 has been widely installed, its adoption rate is roughly comparable to iOS 17's at the same point in that version's release cycle. Recent Apple statistics show 76% of iPhones released in the last four years are running iOS 18, a strong showing that aligns with previous versions.
Speculation about slower hardware upgrade cycles and cautious attitudes toward updates may have some merit, but there's no concrete evidence tying these factors to adoption rates. Meanwhile, iOS 18's ecosystem-specific integrations and privacy features maintain its edge over Android.
On the iPad side, iPadOS 18 shows steady adoption, with 63% of newer iPads running the latest software. Features like advanced multitasking, enhanced Apple Pencil support, and expanded widget options likely contribute to its popularity.
However, iPads often experience slower upgrade cycles compared to iPhones, reflecting their different usage patterns and upgrade habits.
For app developers, iOS 18's broad adoption opens doors for apps that leverage new APIs, such as enhanced machine learning capabilities in Core ML and improvements in ARKit for augmented reality experiences.
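As a rough illustration of the kind of API work involved, here is a minimal Core ML plus Vision sketch that classifies an image. The model file name "PhotoTagger" is hypothetical; any compiled Core ML classification model bundled with an app would slot in the same way, and the confidence threshold is an arbitrary choice.

import CoreML
import UIKit
import Vision

// Minimal sketch: classifying an image with a bundled Core ML model via Vision.
// "PhotoTagger.mlmodelc" is a hypothetical compiled model shipped in the app bundle.
func classify(_ image: UIImage) throws -> [String] {
    guard let cgImage = image.cgImage,
          let modelURL = Bundle.main.url(forResource: "PhotoTagger", withExtension: "mlmodelc")
    else { return [] }

    // Load the compiled model and wrap it for the Vision framework.
    let config = MLModelConfiguration()
    config.computeUnits = .all   // let Core ML choose CPU, GPU, or Neural Engine
    let model = try MLModel(contentsOf: modelURL, configuration: config)
    let visionModel = try VNCoreMLModel(for: model)

    // Run the classification request synchronously for simplicity.
    let request = VNCoreMLRequest(model: visionModel)
    try VNImageRequestHandler(cgImage: cgImage).perform([request])

    // Keep only reasonably confident labels.
    let observations = request.results as? [VNClassificationObservation] ?? []
    return observations.filter { $0.confidence > 0.5 }.map(\.identifier)
}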
Overall, the numbers paint a positive picture for Apple, even if adoption rates for iOS 18 haven't outpaced previous versions.
Comments
This year -- like the past two years -- I will upgrade from iOS 17 to iOS 18 (and Sonoma to Sequoia) in June, somewhere around WWDC. This pretty much ensures that I will have a stable OS experience. All subsequent updates will be bugfixes/security patches with no new features (which generally increase instability).
As for Apple Intelligence, Apple itself clearly labels it Beta. But many of Apple Intelligence's individual features are just alpha quality at this point. One thing the past few years have proven is that LLMs are as dumb as rocks. They are just probability estimators, and not in the sense of "this has a high probability of being right" but more like "this has a high probability of resembling something you might find on the general Internet." LLM-based chatbots have proven time and time again that they have A.) zero common sense, B.) no taste, and C.) no ability to discern sarcasm, satire, or humor.
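To make the "probability estimator" point concrete, here is a toy Swift sketch with an invented three-word vocabulary and made-up probabilities. The model's only job is to score continuations and sample one; nothing in the loop ever checks whether the sampled answer is true.

import Foundation

// Toy illustration: pretend the prompt is "The capital of Australia is ..."
// The probabilities below are invented for demonstration only.
let nextTokenProbabilities: [String: Double] = [
    "Sydney": 0.52,     // common on the internet, but wrong
    "Canberra": 0.33,   // the correct answer
    "Melbourne": 0.15,
]

// Sample one continuation according to the distribution.
func sampleNextToken(from distribution: [String: Double]) -> String {
    let total = distribution.values.reduce(0, +)
    var threshold = Double.random(in: 0..<total)
    for (token, probability) in distribution {
        threshold -= probability
        if threshold <= 0 { return token }
    }
    return distribution.keys.first ?? ""
}

// Nothing here verifies facts; a wrong answer just needed a lucky roll.
print(sampleNextToken(from: nextTokenProbabilities))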
A lot of this instability comes from the fact that smartphones took over as the drivers of personal technology innovation 10+ years ago, because they are the primary computing modality for consumers. Apple -- with its predictable annual cadence of fall iPhone refreshes -- is forced to march along and deliver new functionality every year regardless of feature maturity or stability.
Macs and macOS no longer lead innovation and are not forced into a fixed hardware release schedule, so there is less innovation on both the hardware and software side for Macs. As a result, macOS Sonoma in its current state is pretty stable, and its predecessor Ventura even more so.
Siri is going to be an especially big challenge to fix because Apple has utterly neglected its assistant for well over a decade, while the competition has periodically worked on and improved their offerings. Apple has done essentially nothing with Siri, which is why AI-assisted Siri is taking such a long time to release. Hopefully it will not launch as a steaming heap of garbage, but based on other consumer-facing AI chatbots, I'm not terribly confident in Apple's ability to create a meaningfully differentiated experience. After all, all of these LLMs are built on AI 'bots skimming the broader Internet, which is mostly a junk collection.
Let's not forget the old computer science expression GIGO: "Garbage In, Garbage Out." That's LLM-powered AI chatbots right now: providing answers with a high probability of resembling Internet detritus.
LLMs are pretty good at helping figure out mathematics and physics problems though.
In the interim it is a kludgy, stumbling, mumbling piece of garbage that needs a heck of a lot more development. Consider it a challenged infant learning to walk and talk.
For the IT industry, "AI" has become a byword used to invigorate sluggish sales of everything from phones to electric toothbrushes.
2. Tap the "three dots" menu in the upper right corner.
3. Choose "List View"
Done!
You are right that "AI" has become as much a marketing buzzword as a genuine technological innovation. This is quite specifically why Apple chose not to call it "AI," and instead hit on the clever but un-abbreviatable moniker of "Apple Intelligence." But sure, if you don't want to use it now (or ever), you can turn it off -- something Apple gives you that I doubt most other companies will. You can certainly ignore Gemini or Copilot -- for now -- but you can't really banish them completely.
The issue is the fact that these chatbot assistants will provide the wrong answers occasionally. Let's take this example from 9to5Mac:
https://9to5mac.com/2025/01/24/siri-failed-super-easy-super-bowl-test-getting-38-out-of-58-wrong/
The problem is that unless you already know the Super Bowl winners yourself, you don't really know whether Siri (or any other chatbot) is actually correct or hallucinating badly. Which means you really have to do your own due diligence after Siri or any other chatbot provides an answer, to make sure they aren't smoking dope.
Worse, the Super Bowl queries returned different results depending on how they were worded. This shouldn't be the case. If you ask an intern to find out who won Super Bowl XXXI, Super Bowl 31, the 31st Super Bowl, or the one after Super Bowl XXX, the answer should be the same regardless.
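Here is a rough sketch of the consistency I'm describing: reduce every phrasing to one canonical key before doing any lookup, so the wording can't change the answer. The parsing rules and the one-row answer table are invented purely for illustration.

import Foundation

// Normalize "Super Bowl XXXI", "Super Bowl 31", and "the 31st Super Bowl"
// to the same number before looking anything up.
func canonicalSuperBowlNumber(from query: String) -> Int? {
    let romanValues: [Character: Int] = ["I": 1, "V": 5, "X": 10, "L": 50, "C": 100]

    // "Super Bowl 31" or "the 31st Super Bowl": pull out the decimal number.
    if let match = query.range(of: #"\d+"#, options: .regularExpression) {
        return Int(query[match])
    }

    // "Super Bowl XXXI": convert the Roman numeral.
    if let match = query.range(of: #"\b[IVXLC]+\b"#, options: .regularExpression) {
        var total = 0
        var previous = 0
        for value in query[match].reversed().compactMap({ romanValues[$0] }) {
            total += value < previous ? -value : value
            previous = max(previous, value)
        }
        return total
    }
    return nil
}

let answers = [31: "Green Bay Packers"]   // stand-in for a real, verified data source
for phrasing in ["Who won Super Bowl XXXI?", "Who won Super Bowl 31?", "Who won the 31st Super Bowl?"] {
    let number = canonicalSuperBowlNumber(from: phrasing)
    print(phrasing, "->", number.flatMap { answers[$0] } ?? "unknown")
}

All three phrasings resolve to the same key, so they must produce the same answer.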
Incorrect answers, inability to answer questions, inconsistent responses, hallucinations, et cetera ad nauseam all contribute to loss of confidence and trust in the AI technology. It's alpha quality software/service at best right now.
Do you know why I've never actively used Siri? Because it has sucked from the beginning. I tried it when it was still a third-party service back in 2010 (before Apple's acquisition) and have periodically tried it since then. It has ranged from useless to laughably bad to merely unreliable over its 14+ year lifespan. Over half the time, I need to follow up with a search engine query because Siri's results are wrong, and I have to double check anything I ask it to do.
Today's consumer-facing AI tools risk losing all consumer confidence before the technology is actually mature enough to do what companies say it can do. A lot of this stuff is completely underbaked because everyone is rushing things out the door.
It was the same train wreck with a lot of early GPS navigation tools. Some of those tools and services simply sent you in the wrong direction. It took years to fix, and there are still occasional errors. Hell, we're seeing it again with autonomous driving tech, where cars are being navigated onto train tracks.
All of these AI tools need to be turned off by default. And people who choose to use them must be aware that the answers might be wrong (again GIGO) and that they might be wasting time because they need to double check AI's accuracy (which in many cases is appallingly bad).
Consumer-facing AI will get to the point where it is useful and accurate enough to be used without suspicion or unintended hilarity. But it appears that we are still years away from that. There have even been instances of regression between recent AI models; progress is not linear.
I believe that many people are most frequently using the least reliable form of consumer-facing AI -- LLM-powered chatbots -- to do things those services simply aren't ready to handle in 2024-2025. Some of the image processing tools are actually worthwhile, because not everyone is a Photoshop guru, and even if you are one of the few in that league, some of what the AI tools do is far quicker than a human doing it by hand.
For sure 2025 will NOT be the Year of the AI Chatbot. It will likely take about five years for this technology to reach maturity. Already we are seeing a dramatic slowdown in improvements between LLM model generations, at least in the consumer-facing versions.
We can't be asking 8 different AI chatbots "who was the American League batting champion in 1968?" and getting 4 different answers. There is only one correct answer and unless you do your own independent research, you aren't going to know what the correct answer is. And we know today's AI chatbots don't double check their work. We also know that in 2025 there is no one AI chatbot that is correct all the time. Being a little more accurate than the competition in a battery of questions isn't helpful to Joe Consumer if he asks one of the questions that the AI chatbot got wrong.
General intelligence also includes the notion of the answers "I don't know" or "I'm not sure" or "let me research this some more" or "I'm seeing conflicting data" or "Ask someone who knows more about this than I do." AI chatbots are presenting answers as fact when they are not. They are just probability estimates: "This answer has a higher probability of being something you might find on the Internet if you searched for it yourself."
One commenter noted that much of Apple Intelligence is still considered "beta" level, as if we need to cut it some slack. I would be totally cool with that if Apple weren't promoting it to the degree that they are. Promoting a feature set that is still baking in the oven isn't that unusual today. Microsoft's done it for years and Tesla does it too, but it still stinks because it fundamentally changes what buyers should expect when they purchase what they believe is a ready-to-use product, only to discover that some of the advertised features aren't there yet. It would be like buying a new pickup truck that arrives without a cargo bed. Yeah, you can still drive it, but it's not all there - yet. Maybe in a few months the bed will get delivered and you'll finally have the complete product you thought you paid for.
I totally understand that sophisticated hardware/software products like the iPhone are always being updated, with software defects fixed and enhancements added. Despite that, I still believe Apple overemphasized the immediate impact Apple Intelligence would bring to the iPhone 16 when it hit the market. Promoting "beta" features to mom & pop buyers is disingenuous in my opinion. They're not going to install Developer or Public beta versions of software on their devices to experience not-ready-for-prime-time features that are still in development. When I install a beta, I am signing up to be a beta tester; they are not. To Apple's credit, they did not turn on Apple Intelligence by default until now; the next release of iOS 18 will have it turned on by default. But Apple has still promoted the *&%$ out of Apple Intelligence on the new iPhones since day one.
Which leads me to a question about software quality. Is Apple's software more defect-ridden today than it's been in the past? I don't know, but my gut tells me that as a percentage of the code base their defect ratio is probably lower. However, that code base has been growing substantially. At the same time, Apple is effectively following a continuous integration, continuous release development process. Unlike in the past, their release cycles are now measured in days and weeks, and they ship whenever they feel they have to get the most severe defect fixes out, provided the codebase is stable and tested to the degree that they can test it. I'm certain they use regression testing to give them a level of confidence that newly added features aren't breaking existing ones.
What I have personally experienced, more than in the past, is that things that were working fine are suddenly broken or no longer working. If I were overseeing the testing process, I would question whether the regression tests are getting adequate coverage. It's not uncommon at all for regression tests, especially automated ones, to fall short of total (100%) coverage; some things require human interaction or special test fixtures outside of the software environment.
The newly introduced features may have exposed dependencies or gaps in coverage that weren't obvious in previous builds. Maybe previously working code was refactored and new defects were injected. It happens. Perhaps something in the build process got broken or wasn't updated properly when the new features were integrated. It happens. Whatever the reason, when something that was working is suddenly broken, it should be traceable to a root cause. As long as Apple keeps growing the size and functional scope of the code base, they will continue to introduce new defects and uncover latent defects that slipped through unnoticed in previous releases.
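For what it's worth, here is a minimal sketch of the kind of automated regression test I mean, written against XCTest. MessageScheduler and its API are hypothetical stand-ins, stubbed inline so the example is self-contained; the point is simply that behavior which already works gets pinned down so a later refactor can't silently break it without a test failing.

import XCTest

// Hypothetical system under test, stubbed here so the sketch compiles on its own.
struct ScheduledMessage {
    let body: String
    let scheduledDate: Date
    var isSent = false
}

struct MessageScheduler {
    func schedule(body: String, at date: Date) throws -> ScheduledMessage {
        ScheduledMessage(body: body, scheduledDate: date)
    }
}

// The regression test: it documents behavior that already works today.
final class MessageSchedulerRegressionTests: XCTestCase {
    func testScheduledMessageKeepsItsOriginalSendDate() throws {
        let scheduler = MessageScheduler()
        let sendDate = Date().addingTimeInterval(3600)   // one hour from now

        let message = try scheduler.schedule(body: "Happy birthday!", at: sendDate)

        // If a refactor changes how dates are stored, these assertions fail
        // and the regression is caught before release.
        XCTAssertEqual(message.scheduledDate, sendDate)
        XCTAssertFalse(message.isSent)
    }
}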