iOS Wi-Fi exploit enables zero-click remote iPhone access without user knowledge
A newly discovered -- and already patched -- iOS vulnerability allowed hackers to access and gain control over nearby iPhones using AWDL (Apple Wireless Direct Link), a proprietary Apple wireless mesh networking protocol.
Discovered by security researcher Ian Beer, a member of Google's Project Zero team, the AWDL scheme enabled remote access to photos, emails, messages, real-time device monitoring, and more.
As detailed in an exhaustive technical breakdown posted to the Project Zero blog on Tuesday, Beer uncovered the mechanism behind the exploit in a 2018 iOS beta that accidentally shipped with intact function name symbols tied to the kernel cache. After poking around in Apple's code, he uncovered AWDL, a cornerstone technology that powers AirDrop, Sidecar, and other tentpole connectivity features.
From there, the researcher engineered an exploit and crafted an attack platform consisting of a Raspberry Pi 4B and two Wi-Fi adapters.
"AWDL is enabled by default, exposing a large and complex attack surface to everyone in radio proximity. With specialist equipment the radio range can be hundreds of meters or more," Beer explained in a tweet. Part of the exploit involves forcing AWDL to activate if it was switched off.
Beer says AWDL is a "neat" technology that makes way for "revolutionary" peer-to-peer connectivity solutions, but notes that "having such a large and privileged attack surface reachable by anyone means the security of that code is paramount, and unfortunately the quality of the AWDL code was at times fairly poor and seemingly untested." He offers the example of a drone flying over a protest to collect information from unsuspecting iPhone users.
The process took six months to develop, but when Beer was done, he could hack any iPhone in radio proximity.
It is unclear if Beer's work is eligible for Apple's Bug Bounty program, but if it is, the developer said he would donate the proceeds to charity.
Apple patched the vulnerability in May with iOS 13.5, and a spokesperson for the company said a majority of its users are running updated software. Beer has found no evidence that the technique was used in the wild.
"The takeaway from this project should not be: no one will spend six months of their life just to hack my phone, I'm fine," Beer wrote. "Instead, it should be: one person, working alone in their bedroom, was able to build a capability which would allow them to seriously compromise iPhone users they'd come into close contact with."
It is unclear if Beer's work is eligible for Apple's Bug Bounty program, but if it is, the developer said he would donate the proceeds to charity.
Comments
Not sure how finding a security hole that allows virtually complete access to a device is lame. We need more people like this. Every hole that is found makes security better for everyone, both on iOS and Android, and that's a good thing.
How would that help? You would need to have your phone turned on at some point, and then it would be vulnerable. Not to mention you would need to go through the hassle of powering it up and down every time you took it out. If you're that paranoid, a better approach is to put it in a Faraday cage when you're not using it.
It was patched by Apple, but what he did will ensure that Apple is more careful with this stuff in the future. Apple probably appreciated his efforts.
He worked on this for six months and came up with an exploit that he shared with Apple instead of going public. He then said that he’ll donate the bounty to charity.
Apple, meanwhile, left a large poorly-implemented, untested attack surface in millions of phones. Any lameness here belongs with Apple.
I think what happens is they remain in the wild until discovered, and then they patch and replace them with other similar exploits, and sit back and wait for them to be discovered too. They likely have prepared a bunch of NSA-demanded back doors that they can roll out quickly.
I don’t think it explains all cases, however, because sometimes Apple is really slow to patch exploits and even lets them remain in the wild despite being given adequate notice by security researchers. Or perhaps that is even further evidence.
I’m very suspicious of how lax this software can be at times. The post is right. If a single person can do this in his spare time in six months, imagine what a team of state-sponsored hackers working full-time could accomplish.
Giving all my thanks to this guy and Google’s team for improving our security.
The safest (and only guaranteed) way to never create problematic code is to never write any code at all. Any developer claiming they’ve never created a bug is one that’s not to be trusted, for one of many reasons.
It's always like that. He acts like iOS is a simple Pong program. This exploit took six months to develop, so obviously it's very advanced.
Love that Google is working on Apple security instead of fixing their crappy knockoffs though.
That's without even taking into account all the data mining that could be applied to a data trawling effort of this kind.
Can you specify who those people may be? As a software dev and frequent AI visitor, I've never seen anyone claim any system is impenetrable. Never seen that said, ever. In fact just the opposite is said -- no system is hack-proof. Apple even has events and bounty programs for this purpose. Now, people often say iOS has better security than competing platforms like Android, but you seem to be conflating these two statements.
Anyway, it isn't clear to me from the article whether this was patched before the exploit was proven, or after.
Then can you explain further why "open" Android has had more exploits and vulnerabilities over the years than the closed iOS?