Apple Silicon vulnerability leaks encryption keys, and can't be patched easily


A new vulnerability in Apple Silicon chips can allow a determined attacker to access a user's data by stealing the cryptographic keys -- and a fix could considerably impact encryption performance.

Apple Silicon M2 in front of a MacBook



Researchers have discovered an issue with Apple's M-series chips in how they handle cryptographic operations, such as the encryption of files. However, since the issue stems from the chips' architectural design, it is very difficult to mitigate.

Detailed on Thursday by a group of researchers and reported by Ars Technica, the problem lies in the data memory-dependent prefetcher (DMP), which predicts the memory addresses of data that currently-running code is most likely to access. By prefetching that data into the cache, the DMP makes it a target for probing by malicious code.

This is because prefetchers use previous access patterns to predict the next piece of data to fetch. An attacker can exploit this behavior to influence which data gets prefetched, opening the door to accessing sensitive data.
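
To make the probing part concrete, here is a minimal, hypothetical C sketch of the timing test that cache side channels rely on: a load of data that is already cached completes measurably faster than one served from main memory. This is not the researchers' code; the threshold is an illustrative assumption, and real attacks calibrate it per machine with much finer-grained timers.

```c
// Minimal cache-timing probe: a cached load is much faster than one
// served from DRAM. Attacks on prefetchers use this difference to
// learn what got prefetched. THRESHOLD_NS is a hypothetical cutoff.
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static uint64_t time_load_ns(volatile uint8_t *addr) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC_RAW, &t0);
    (void)*addr;  // the timed memory access
    clock_gettime(CLOCK_MONOTONIC_RAW, &t1);
    return (uint64_t)((t1.tv_sec - t0.tv_sec) * 1000000000LL
                      + (t1.tv_nsec - t0.tv_nsec));
}

int main(void) {
    // Probe buffer standing in for memory a prefetcher may have
    // pulled into the cache.
    uint8_t *probe = malloc(4096);
    if (!probe) return 1;
    probe[0] = 1;  // touch once so the measurement below is a hit

    const uint64_t THRESHOLD_NS = 50;  // hypothetical hit/miss cutoff

    uint64_t t = time_load_ns(probe);
    printf("load took %llu ns -> %s\n", (unsigned long long)t,
           t < THRESHOLD_NS ? "likely cached" : "likely uncached");
    free(probe);
    return 0;
}
```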

GoFetch attack can steal encryption keys



The attack, which the researchers call "GoFetch," takes advantage of a quirk in DMP usage in Apple Silicon: the DMP can confuse the contents of memory with the pointer values used to load more data, occasionally treating the former as the latter.

In explaining the attack, the researchers confirm that it is possible to make data "look like" a pointer, which the DMP then treats as an address and dereferences, pulling the data at that address into the cache. The appearance of that address in the cache is observable, meaning malicious code can detect it.

Using a chosen-input attack, the attacker manipulates data within the encryption algorithm so that an intermediate value looks like a pointer. The DMP, treating the value as an address, fetches the data at that address, and in doing so leaks the address itself.
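
As a rough illustration of the chosen-input idea, the hypothetical C sketch below plants a pointer-shaped value in an attacker-controlled input buffer. If a DMP scanning the victim's memory mistakes that value for a pointer, it may prefetch the attacker's probe array, which can then be detected with a timing test like the one above. All names and the setup here are illustrative assumptions, not the GoFetch implementation, which targets values arising inside real cryptographic libraries.

```c
// Hypothetical sketch of "planting" a pointer-shaped value in a
// chosen input. This stops before any victim interaction; it only
// shows the shape of the attacker-controlled data.
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    // Landing pad the attacker can later time accesses to.
    uint8_t *probe = malloc(4096);
    if (!probe) return 1;

    // A 64-bit value indistinguishable from a valid pointer.
    uint64_t fake_ptr = (uint64_t)(uintptr_t)probe;

    // Chosen input handed to the victim; every slot carries the
    // pointer-shaped value so intermediate values may alias it.
    uint64_t input[64];
    for (size_t i = 0; i < 64; i++) {
        input[i] = fake_ptr;
    }

    // In the real attack, the victim's cryptographic code would now
    // process `input` on the same chip cluster.
    printf("planted candidate pointer 0x%llx in %zu slots\n",
           (unsigned long long)fake_ptr,
           sizeof(input) / sizeof(input[0]));
    free(probe);
    return 0;
}
```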

The attack is not an instant crack of an encryption key. However, it can be carried out repeatedly, allowing the key to be revealed over time.

The GoFetch attack runs with the same user privileges as many other third-party macOS apps, rather than requiring root access. This lowers the barrier to entry for actually running the attack, but it's not the whole story.

The GoFetch app running the attack must also be used on the same chip cluster as the cryptographic target app in order to function, and both must use the efficiency cores or the performance cores at the same time. The attack is cluster-dependent rather than core-dependent, meaning it will still work if the two apps run on different cores within the same cluster.

The researchers claim the attack works against classic encryption algorithms as well as newer quantum-hardened versions.

As to its effectiveness, the researchers' test app was able to extract a 2,048-bit RSA key in less than an hour, and a 2,048-bit Diffie-Hellman key in just over two hours. Extracting a Dilithium-2 key required around ten hours of data gathering, excluding offline processing time.

Difficult to thwart



The main problem with the attack is that it cannot be patched in Apple Silicon itself, since the DMP is a central part of the chip's design. Instead, the developers of cryptographic software must add mitigations to work around the problem.

The catch is that any such mitigation increases the work required to perform cryptographic operations, in turn impacting performance. However, the impact should only affect applications that use encryption and employ the mitigations, rather than apps in general.

In the case of one mitigation, ciphertext blinding, the effectiveness varies between algorithms, and it can require double the usual resources.
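
To show what blinding means in practice, here is a toy C sketch of RSA ciphertext blinding using a tiny textbook key (a hypothetical example, nothing like real 2,048-bit parameters or constant-time bignum code). The secret-key exponentiation only ever sees a randomized value, so attacker-chosen inputs never reach it directly; the extra exponentiations and multiplications are where the performance cost comes from.

```c
// Toy sketch of RSA ciphertext blinding. Hypothetical textbook key
// (p=61, q=53, n=3233, e=17, d=413); real implementations use large
// keys and hardened bignum libraries.
#include <stdint.h>
#include <stdio.h>

static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t n) {
    return (uint64_t)((__uint128_t)a * b % n);  // GCC/Clang extension
}

static uint64_t powmod(uint64_t b, uint64_t e, uint64_t n) {
    uint64_t r = 1 % n;  // square-and-multiply modular exponentiation
    b %= n;
    while (e) {
        if (e & 1) r = mulmod(r, b, n);
        b = mulmod(b, b, n);
        e >>= 1;
    }
    return r;
}

int main(void) {
    const uint64_t n = 3233, e = 17, d = 413;   // toy key
    uint64_t m = 42;
    uint64_t c = powmod(m, e, n);               // encrypt

    // Blind: c' = c * r^e mod n. r must be random and fresh for each
    // operation in real code; a fixed value is used here for clarity.
    uint64_t r = 7;
    uint64_t c_blind = mulmod(c, powmod(r, e, n), n);

    // The secret-key operation only ever touches the blinded value,
    // so attacker-chosen ciphertext bits never reach it directly.
    uint64_t m_blind = powmod(c_blind, d, n);   // equals m * r mod n

    // Unblind with r^-1 mod n, here via r^(lambda(n)-1) with
    // lambda(3233) = lcm(60, 52) = 780 for this toy modulus.
    // Real code uses a proper modular-inverse routine.
    uint64_t m_out = mulmod(m_blind, powmod(r, 779, n), n);

    printf("recovered %llu (expected %llu)\n",
           (unsigned long long)m_out, (unsigned long long)m);
    return 0;
}
```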

Running the cryptographic processes only on the efficiency cores is also a possibility, since those cores do not have DMP functionality. Again, encryption performance will take a hit, since the work is no longer running on the faster cores.
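
macOS offers no public API to pin a thread to a particular core, but a thread's quality-of-service class influences whether it runs on the efficiency cores. The hypothetical C sketch below shows the general shape of that approach using the real pthread QoS call; whether the scheduler keeps the thread on the efficiency cores at all times is up to the OS, so treat this as an illustration of the idea rather than a guaranteed mitigation.

```c
// Sketch: request background QoS for the current thread before
// doing secret-dependent work. On Apple silicon, background-QoS
// threads are typically scheduled on the efficiency cores, where
// (per the researchers) the DMP is not active. The scheduling
// behavior is an OS policy, not a contract.
#include <pthread.h>
#include <pthread/qos.h>
#include <stdio.h>

static void sensitive_crypto_work(void) {
    // ... key-dependent operations would run here ...
}

int main(void) {
    if (pthread_set_qos_class_self_np(QOS_CLASS_BACKGROUND, 0) != 0) {
        perror("pthread_set_qos_class_self_np");
        return 1;
    }
    sensitive_crypto_work();
    puts("done");
    return 0;
}
```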

A third option applies only to M3 chips: a special bit can be flipped to disable the DMP. The researchers admit they don't know what performance penalty doing so would incur.
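
The bit in question is reportedly Arm's data-independent timing (DIT) flag, which on M3-family cores is said to disable the DMP as a side effect. Below is a hedged, AArch64-only C sketch of setting it; the instruction is a real Arm system-register write requiring ARMv8.4 DIT support in the toolchain and CPU, but the effect on the DMP is the researchers' claim, not something this snippet can verify.

```c
// Hedged sketch: set the Arm DIT (data-independent timing) PSTATE
// bit, which on M3-family chips reportedly also disables the DMP.
#include <stdio.h>

static void enable_dit(void) {
#if defined(__aarch64__)
    __asm__ volatile("msr dit, #1" ::: "memory");  // set PSTATE.DIT
#else
    // Not an AArch64 build; nothing to do in this sketch.
#endif
}

int main(void) {
    enable_dit();
    // ... secret-dependent cryptographic code would run here ...
    puts("DIT requested for this thread");
    return 0;
}
```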

Apple declined to comment on the report. The researchers said they responsibly disclosed the issue to Apple before the public release, informing the company on December 5, 2023.

Some of the researchers previously worked on another discovery from 2022, also concerning Apple Silicon's DMP usage. At the time, the so-called Augury flaw was deemed to be not "that bad," and was "likely only a sandbox threat model."

History repeating



Chip vulnerabilities can be a big problem for device producers, especially if they have to make changes to operating systems and software in order to maintain security.

In 2018, the Meltdown and Spectre chip flaws were discovered, affecting all Mac and iOS devices, as well as nearly every x86 device produced since 1997.

Those security exploits relied on "speculative execution," in which a chip improves its speed by working on multiple instructions simultaneously, or even out of order. As the name suggests, the CPU speculatively continues executing down a path before a branch completes.

Both Meltdown and Spectre used the functionality to access "privileged memory," which could include kernel memory.
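
For the curious, the canonical Spectre variant-1 pattern looks like the C sketch below, using the conventional array names from the original write-ups. Nothing here extracts data by itself; it just shows the branch whose speculative execution can leave a secret-dependent footprint in the cache.

```c
// Classic Spectre-v1 "bounds check bypass" gadget, shown only to
// illustrate speculative execution down a mispredicted branch.
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

uint8_t array1[16];
uint8_t array2[256 * 4096];  // one cache-line-spaced slot per byte value
size_t array1_size = 16;

void victim(size_t x) {
    if (x < array1_size) {
        // While the bounds check is still being resolved, the CPU may
        // execute this load speculatively even for out-of-bounds x,
        // touching a cache line indexed by the (secret) byte value.
        volatile uint8_t tmp = array2[array1[x] * 4096];
        (void)tmp;
    }
}

int main(void) {
    victim(0);  // in-bounds call; attacks train the branch, then misuse it
    puts("victim ran");
    return 0;
}
```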

The discovery of the flaws led to a flood of similar attacks, chiefly against Intel chips, including Foreshadow and ZombieLoad.

This is also not the first issue found with the design of Apple Silicon chips. In 2022, MIT researchers discovered an unfixable vulnerability dubbed "PACMAN," which capitalized on pointer authentication processes to create a side-channel attack.




Comments

  • Reply 1 of 19
twolf2919 Posts: 112 member
    While definitely a vulnerability, I wish the author had highlighted how difficult it is to exploit.   Just telling me that the attack "must also be used on the same chip cluster as the cryptographic target app in order to function, and both must use the efficiency cores or the performance cores at the same time" doesn't really tell me that.  It sure sounds very esoteric - I'm not running a cluster on my MacBook (cluster of one?) nor do I really know what a 'cryptographic target app' means.

    As most attacks, this requires you - at the very least - to download and run an app of unknown origin.  Don't do that.
  • Reply 2 of 19
    twolf2919 said:
    While definitely a vulnerability, I wish the author had highlighted how difficult it is to exploit.   Just telling me that the attack "must also be used on the same chip cluster as the cryptographic target app in order to function, and both must use the efficiency cores or the performance cores at the same time" doesn't really tell me that.  It sure sounds very esoteric - I'm not running a cluster on my MacBook (cluster of one?) nor do I really know what a 'cryptographic target app' means.

    As most attacks, this requires you - at the very least - to download and run an app of unknown origin.  Don't do that.
    So, stick to open source only?

    Because the reality is that this can be hidden just about anywhere in a closed source app, and your precious app store can't protect you.


  • Reply 3 of 19
killroy Posts: 276 member
    Well, heck, if it ain't one thing, it's another.
  • Reply 4 of 19
maltz Posts: 454 member
    twolf2919 said:
    As most attacks, this requires you - at the very least - to download and run an app of unknown origin.  Don't do that.

    Well, it's not always that simple.  A couple of times a year there's a security issue that allows arbitrary code execution when processing an image or some other type of data - sometimes already in the wild.  If your un-patched phone visits a website with such malicious content, or sometimes even receive a text containing it, you've "downloaded an app of unknown origin" and run it without even knowing it.
    Because the reality is that this can be hidden just about anywhere in a closed source app, and your precious app store can't protect you.
    Sure it can - to a point.  The app store definitely scans app code for malicious activity such as this.  It's a cat-and-mouse game, though, as malware tries to obfuscate what it's doing.  So it's not perfect, but it's far from useless.
  • Reply 5 of 19
    maltz said:
    twolf2919 said:
    As most attacks, this requires you - at the very least - to download and run an app of unknown origin.  Don't do that.

    Well, it's not always that simple.  A couple of times a year there's a security issue that allows arbitrary code execution when processing an image or some other type of data - sometimes already in the wild.  If your un-patched phone visits a website with such malicious content, or sometimes even receive a text containing it, you've "downloaded an app of unknown origin" and run it without even knowing it.
    Because the reality is that this can be hidden just about anywhere in a closed source app, and your precious app store can't protect you.
    Sure it can - to a point.  The app store definitely scans app code for malicious activity such as this.  It's a cat-and-mouse game, though, as malware tries to obfuscate what it's doing.  So it's not perfect, but it's far from useless.

Was going to post something similar to the above. The hidden print flaw and two other zero-days worked through a text message that was sent: you never got it or received any notification of the text in iMessage, and it exploited the three zero-days in the background, installing malicious software just waiting to be used later on. App Store scanning doesn't mean much when it is typically zero-day exploits that are the means to get your Mac/iPhone to perform some function to then exploit this vulnerability. But I guess the ultimate question here is... what exactly would be the use case of this flaw? Would like to see/hear some examples of how this could be used to perform some malicious function. Like stealing banking information/credentials or other sensitive things?

    In the meantime, Devs will need to rebuild apps and push them out for M3 platform and disable that switch after some testing. Problem is...is the M3 now turned into an M1 with the performance hit or now an i9 Intel equivalent? They didn't do any testing with that. And all the other mitigation things for M1/M2 and my iPhone 12 Pro max doesn't sound like fun or good for performance.
  • Reply 6 of 19
    killroy said:
    Well, heck, if it ain't one thing, it's another.
    Isn't that Chicken Little's line?
  • Reply 7 of 19
    Attacks based on behavior of pre-fetching processes go back nearly as long as pre-fetching itself (well over a decade).  

When attacks on Intel server chips were patched around a decade ago, performance would drop 25-50%.

    Build a better mouse-trap, and the miscreants will build a better mouse.  

    Malicious code has been hidden in spreadsheet macros, various graphics and video images (including advertisements) and just about anything else you can imagine.  Among the more dangerous is use of social engineering to convince you that a boss wants certain work done, causing you to open a document that then infects the corporate servers and encrypts the data, followed by a bitcoin ransom request, possible sale or disclosure of the data, and a report to the government if you do not report the ransom attack by the government imposed deadline (4 days, if I remember correctly).  A large number of pharmacies and mostly small medical groups are suffering financially from such an attack on a company that processes payments for those insured by Tricare (including retired military), Medicare, and a host of insurance companies.  
  • Reply 8 of 19
StrangeDays Posts: 12,886 member
    twolf2919 said:
    While definitely a vulnerability, I wish the author had highlighted how difficult it is to exploit.   Just telling me that the attack "must also be used on the same chip cluster as the cryptographic target app in order to function, and both must use the efficiency cores or the performance cores at the same time" doesn't really tell me that.  It sure sounds very esoteric - I'm not running a cluster on my MacBook (cluster of one?) nor do I really know what a 'cryptographic target app' means.

    As most attacks, this requires you - at the very least - to download and run an app of unknown origin.  Don't do that.
    So, stick to open source only?

    Because the reality is that this can be hidden just about anywhere in a closed source app, and your precious app store can't protect you.
    “your precious app store” ? Ok who wants to tell him? Fine, I’ll do it. iOS has fewer vulnerabilities than android. Sorry, bub. 
  • Reply 9 of 19
    twolf2919 said:
    While definitely a vulnerability, I wish the author had highlighted how difficult it is to exploit.   Just telling me that the attack "must also be used on the same chip cluster as the cryptographic target app in order to function, and both must use the efficiency cores or the performance cores at the same time" doesn't really tell me that.  It sure sounds very esoteric - I'm not running a cluster on my MacBook (cluster of one?) nor do I really know what a 'cryptographic target app' means.

    As most attacks, this requires you - at the very least - to download and run an app of unknown origin.  Don't do that.
    So, stick to open source only?

    Because the reality is that this can be hidden just about anywhere in a closed source app, and your precious app store can't protect you.


    Good luck getting an average mac user to understand source code.
  • Reply 10 of 19
gatorguy Posts: 24,213 member
    twolf2919 said:
    While definitely a vulnerability, I wish the author had highlighted how difficult it is to exploit.   Just telling me that the attack "must also be used on the same chip cluster as the cryptographic target app in order to function, and both must use the efficiency cores or the performance cores at the same time" doesn't really tell me that.  It sure sounds very esoteric - I'm not running a cluster on my MacBook (cluster of one?) nor do I really know what a 'cryptographic target app' means.

    As most attacks, this requires you - at the very least - to download and run an app of unknown origin.  Don't do that.
    So, stick to open source only?

    Because the reality is that this can be hidden just about anywhere in a closed source app, and your precious app store can't protect you.
    “your precious app store” ? Ok who wants to tell him? Fine, I’ll do it. iOS has fewer vulnerabilities than android. Sorry, bub. 
    I'll agree that applies to the app stores, not that Google Play is insecure, but not necessarily the OS'es themselves. There's circumstantial evidence that it's harder to find Android exploits than those crafted for iOS.
  • Reply 11 of 19
9secondkox2 Posts: 2,727 member
    twolf2919 said:
    While definitely a vulnerability, I wish the author had highlighted how difficult it is to exploit.   Just telling me that the attack "must also be used on the same chip cluster as the cryptographic target app in order to function, and both must use the efficiency cores or the performance cores at the same time" doesn't really tell me that.  It sure sounds very esoteric - I'm not running a cluster on my MacBook (cluster of one?) nor do I really know what a 'cryptographic target app' means.

    As most attacks, this requires you - at the very least - to download and run an app of unknown origin.  Don't do that.
    So, stick to open source only?

    Because the reality is that this can be hidden just about anywhere in a closed source app, and your precious app store can't protect you.


    LOL who wrote that line? Doctor Evil? Or one of the bad guys from Scooby Doo? 

    What’s next? Complaining about those PESKY privacy and security initiatives? 

    LOL

    this is a bummer of an issue, but not the end of the world and nowhere near what Intel and AMD got hit with years ago. 

iMessage, WhatsApp, FileVault, etc. will be affected, amongst others. It will be interesting to know if Apple was able to address this prior to M4 tape-out.
  • Reply 12 of 19
9secondkox2 Posts: 2,727 member
    michelb76 said:
    twolf2919 said:
    While definitely a vulnerability, I wish the author had highlighted how difficult it is to exploit.   Just telling me that the attack "must also be used on the same chip cluster as the cryptographic target app in order to function, and both must use the efficiency cores or the performance cores at the same time" doesn't really tell me that.  It sure sounds very esoteric - I'm not running a cluster on my MacBook (cluster of one?) nor do I really know what a 'cryptographic target app' means.

    As most attacks, this requires you - at the very least - to download and run an app of unknown origin.  Don't do that.
    So, stick to open source only?

    Because the reality is that this can be hidden just about anywhere in a closed source app, and your precious app store can't protect you.


    Good luck getting an average mac user to understand source code.
    Or an average pc user for that matter. 
  • Reply 13 of 19
dewme Posts: 5,376 member
This sounds like a vulnerability that's a consequence of the interface designers trying to minimize the amount of programming memory required by overloading a function call parameter so its type has to be implicitly interpreted by the runtime, as opposed to being enforced at compile time. In this case the malicious code is given an easy way to access protected memory that it should not be able to access, albeit in very small chunks at a time. The function in question, the DMP, is also an optimization intended to extract a little more performance and efficiency from the system.

    So again we have another performance enhancing functionality that was implemented with very good intentions that fell victim to a specific exploit. In retrospect, and with the knowledge of similar exploits in optimization logic that we saw with Spectre, maybe it could have been scrutinized a little more thoroughly during design and verification? I only say that because the Spectre vulnerabilities were put into code 10-20 years prior to their discovery, when security wasn't at the forefront of all concerns as it is today. It was all about performance and efficiency. Anything that is totally new, unique, and hasn't been done before by the development team should probably trigger a heightened level of concern with security being baked-in and scrutinized at every level from the start.

Apple obviously has great designers, engineers, and scientists who truly know what they are doing. They have an extremely difficult job of trying to bring as much value to Apple's customers as possible. Unfortunately, there are just as many hackers, some of them bad guys, who are just as smart and clever as Apple's best people, out there trying to break Apple's best stuff. Fortunately there are a lot of good-guy hackers out there who get a very high level of satisfaction from finding vulnerabilities in other people's code and reporting what they find in a very fair and equitable way. Nobody likes having someone else point out their flaws, but the ethical hackers are providing an incredibly valuable and vital service and are probably underappreciated for all they do.

Finally, the architectures upon which most digital computers, from the earliest days to today, are based were never designed with security, fault tolerance, efficiency, performance, or usability (just to name a few qualities) in mind. At some point we'll have to rebase the most basic notions of how computers work to account for all of the things that the current architectures, even as evolved as they are today, overlooked at the point of their inception.
  • Reply 14 of 19
I'm not a software engineer, nor do I have a degree in computer science. But I consider myself a fairly intelligent individual, and from what I've read about this issue, it seems that as time goes on and it's exploited more, it will become easier to utilize? I just want to know if I should be worried about buying another Mac. I've had my M1 Mac Mini since March of 2021, and in the last three months I've returned to my computer in the morning to a window saying that the computer had to shut down due to an error, and there are super long text reports that I cannot make heads or tails of, but after reading this, it didn't leave me feeling very good about using it. It's probably a lot to ask, but if someone could break down on a scale of your choosing from "It's not that big of a deal" to "Yeah, holy crap, stop using it" I'd be very appreciative. These articles about security flaws are not very easy to understand, and I just don't have the time to sit and try to decipher just exactly what I need to watch for and worry about.
  • Reply 15 of 19
programmer Posts: 3,458 member
    dewme said:
This sounds like a vulnerability that's a consequence of the interface designers trying to minimize the amount of programming memory required by overloading a function call parameter so its type has to be implicitly interpreted by the runtime, as opposed to being enforced at compile time. In this case the malicious code is given an easy way to access protected memory that it should not be able to access, albeit in very small chunks at a time. The function in question, the DMP, is also an optimization intended to extract a little more performance and efficiency from the system.


    No
  • Reply 16 of 19
programmer Posts: 3,458 member
I'm not a software engineer, nor do I have a degree in computer science. But I consider myself a fairly intelligent individual, and from what I've read about this issue, it seems that as time goes on and it's exploited more, it will become easier to utilize? I just want to know if I should be worried about buying another Mac. I've had my M1 Mac Mini since March of 2021, and in the last three months I've returned to my computer in the morning to a window saying that the computer had to shut down due to an error, and there are super long text reports that I cannot make heads or tails of, but after reading this, it didn't leave me feeling very good about using it. It's probably a lot to ask, but if someone could break down on a scale of your choosing from "It's not that big of a deal" to "Yeah, holy crap, stop using it" I'd be very appreciative. These articles about security flaws are not very easy to understand, and I just don't have the time to sit and try to decipher just exactly what I need to watch for and worry about.
    No, this shouldn’t stop you from buying a Mac.  And while the bug is in hardware, it doesn’t mean there aren’t mitigations Apple can put in place to prevent it from being exploited.  Those will undoubtedly have some performance impacts on encryption algorithms, but most software won’t be affected.  And exploiting this security flaw requires software running locally, so (as always) exercise caution about what you install on your computer.  

    Also keep in mind that while this is making big news right now, there have been lots of similar security flaws found in Intel and AMD (and other) chips… all chip vendors have these issues, and a new one might be discovered in any chip or OS tomorrow.

It sounds like your existing machine has a bug causing frequent crashes.  It could be something corrupted in the OS, so you may want to try reinstalling the operating system or taking it to the Apple Genius Bar for them to look at.  It could be a hardware bug, but usually such things are a bit of data in flash gone bad, or something along those lines.
  • Reply 17 of 19
    killroy said:
    Well, heck, if it ain't one thing, it's another.
    I don’t understand this. Could you please explain it a bit further, using formal logic?
  • Reply 18 of 19
    @dewme said:
This sounds like a vulnerability that's a consequence of the interface designers trying to minimize the amount of programming memory required by overloading a function call parameter so its type has to be implicitly interpreted by the runtime, as opposed to being enforced at compile time. In this case the malicious code is given an easy way to access protected memory that it should not be able to access…
    No, this is not how it works. DMPs focus on optimizing memory access patterns and do not involve themselves with the types of data being pre-fetched. Type checking and data integrity belong to high-level programming languages and are managed at higher levels of the system architecture. This doesn’t directly apply in the context of DMPs and their low-level hardware operations. DMPs are concerned with patterns of memory access, rather than the semantic meaning or the type of data being accessed.

When it comes to overall data integrity and type safety, however, it is the duty of the entire system to ensure these are maintained. This responsibility is typically handled by the combination of the CPU’s architecture, the OS, and the application software running on the system.
  • Reply 19 of 19
killroy Posts: 276 member
    killroy said:
    Well, heck, if it ain't one thing, it's another.
    I don’t understand this. Could you please explain it a bit further, using formal logic?

Do you understand speculative processing bugs?