Intel chip kernel flaw requires OS-level fix that could impact macOS performance, report says

135 Comments

  • Reply 41 of 90
    According to early reports, the flaw allows some kernel-specific VM pages to be made accessible from user space, using techniques that have not yet been disclosed. Not writing critical information to pageable VM pages is the ABC of kernel programming in Apple's development culture; critical information must be held in wired memory that cannot be paged. That flaw and the potential exploits are probably already known to operating system developers, so this saga may end up amounting to nothing. Forget class actions and the like...
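    For the curious, here's a minimal user-space sketch of that wired-memory rule, assuming a POSIX system (kernel code uses its own wired allocators; mlock(2) just illustrates the pinning concept, and the buffer name is made up):
    ```c
    /* Sketch: pin a buffer holding sensitive data so it can never be
     * paged out to disk - the user-space cousin of kernel wired memory.
     * Assumes POSIX; may fail if RLIMIT_MEMLOCK is too low. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 4096;
        unsigned char *secret = malloc(len);
        if (!secret) return 1;
        if (mlock(secret, len) != 0) {   /* wire the pages: no swap */
            perror("mlock");
            return 1;
        }
        /* ... work with the secret while it stays resident ... */
        memset(secret, 0, len);          /* scrub before unwiring */
        munlock(secret, len);
        free(secret);
        return 0;
    }
    ```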
    edited January 2018
  • Reply 42 of 90
    polymnia Posts: 1,080 member
    Hell hath no fury like a herd of nerds whipped into a self-righteous frenzy. 
  • Reply 43 of 90
    polymnia Posts: 1,080 member
    If this bug does go back years, and none of you have noticed yet, what is the big deal? If you don’t want to sacrifice the performance, don’t apply the patch. Wait until you get a new Mac with a new chip to update. The sky hasn’t fallen in the years this flaw has existed. It will probably stay up there going forward. 
  • Reply 44 of 90
    iCintos Posts: 3 unconfirmed, member
    netrox said:
    ... really no reason to solder everything on a logic board considering that it can be expensive if they are forced to replace all logic boards with soldered CPUs/RAM. The components keep getting smaller and thinner yet they can easily be socketed. 
    Ah, yes netrox.  I fondly remember the good old days when everything was soldered.  All the resistors, capacitors, and diodes that made up a flip-flop.  And the memory, oh how elegant. You could extract a single core and replace it, and the sense wires were easily threaded back and soldered to their lands. 422 chassis modules to execute a long division in a few nanoseconds, and all those great interconnect wires - you could sync the clock to a module by simply trimming a bit off of its length (11.8" = 1 nanosecond). Yes, let's get back to solder and freon-chilled computers.
  • Reply 45 of 90
    Lucioguido Posts: 9 unconfirmed, member
    To me, this is similar to when people got up in arms over the Secure Enclave and the inability to extract sensitive data from an iPhone if it’s been disabled. Suddenly all these laypeople started saying we needed the smartest people to get in a room and figure out how to have some magic encryption that can only be decrypted by good actors. Only, mathematically, that’s an invalid statement. 

    Similarly, people are saying this is inexcusable and that if we’re going to keep our lives reliant on computers, they can’t be fallible. Well, the smartest minds in the world are the ones who have been designing computers, and these people are absolutely not perfect. So, yeah, sometimes they’ll learn something today they didn’t know yesterday, and sometimes that new knowledge will reveal a new vulnerability.

    We know next to nothing about the details of this issue yet. This isn’t like when Target or Yahoo! stored user passwords in plain text. That was a dumb thing anyone with a reasonable understanding of security could’ve avoided. Chances are high this issue is fairly abstract and almost no one will fully understand it or how it works. 
  • Reply 46 of 90
    linkman Posts: 1,035 member
    netrox said:
    Actually, we should sue Apple as well. Think of it - by suing Apple (and every other PC company), we can force their devices to be more modular and more accessible for upgrades/exchanges. There is really no reason to solder everything on a logic board considering that it can be expensive if they are forced to replace all logic boards with soldered CPUs/RAM. The components keep getting smaller and thinner yet they can easily be socketed. I cannot think of a reason why it should be soldered. 
    Thickness, weight, reliability, and cost all come to mind for reasons to solder components.
  • Reply 47 of 90
    nht Posts: 4,522 member
    welshdog said:
    dewme said:
    welshdog said:
    This seems like exceptionally bad news - for everyone. Just knowing this flaw exists, even without details, means the bad actors of the world will be working overtime to find an exploit.  And then they'll work to find an exploit to the fix.  This has got to be Intel's biggest screwup ever.  It would be nice to get some info eventually on what chips/machines are affected and by how much.  
    Yeah this is bad news like every one of the many thousands of major security issues that exist in all manner of products. Biggest screwup ever? Biggest this week ... but the week is less than half over. As debilitating as a 30% performance hit would be on tweaking a picture in Photoshop, I'm actually more concerned about this type of threat: https://www.wired.com/story/triton-malware-targets-industrial-safety-systems-in-the-middle-east/ . For those who don't know what a safety system is, it's an independent, isolated, and redundant control system that is put in place (at very great expense) to prevent a failure of the primary control system from leading to a catastrophic failure in an important system, like a power plant or refinery. In other words, the safety net that was put in place to prevent the worst-case scenarios that could occur in a plant from happening - was hacked to break the system it was supposed to be protecting. 

    The "bad actors" have already been working overtime for decades trying (and often succeeding) to exploit everything and anything that has any logic in it, from software, firmware, microcode, markup, macros, scripts, social media, humans, etc. Heck, there are professional-quality development toolkits freely available for anyone to download so you can discover your exploits in your spare time. Unannounced exploits are a worldwide unit of currency. Oh, and who is considered a "bad actor" is entirely relative and depends on who the actor is working for. Are the NSA, FBI, CIA, DOD, and the thousands of public and private companies working on behalf of government agencies, etc., "bad actors?" Depends on whose flag you fly, I guess.

    I don't want to sound like Chicken Little, but cybersecurity is a much greater threat than most lay people can comprehend or deal with at a personal level. It's an ongoing and existential threat that is the primary daily focus of hundreds of thousands of professionals just in the US, and there are easily as many unfilled jobs as ones that are currently filled. The good news is that the previous US administration truly understood the cybersecurity threat from day one and at least got the ball rolling on doing something about it in an apolitical and highly cooperative way between the public and private sectors. I hope the current administration's war on science doesn't lead to regression on this very serious concern. Dealing with the fallout from cybersecurity incidents is simply the new normal today, and it will stay that way at least until managing it gets woven into the fabric of everyday life - like destructive weather, tsunamis, and earthquakes - so workarounds, mitigation, and compensation will be required, especially for legacy systems. Going forward, everything that has logic in it must be designed with cybersecurity in mind, and people must be aware and adapt as well.  
    I called it Intel's worst screwup because it probably is. This is a vulnerability in the chip itself that potentially exposes the kernel and thus pretty much everything. The exploits and hacking efforts you listed are attempts to break/attack/exploit a system that generally is working properly. This flaw puts computer users at a disadvantage right out of the gate.  So while I appreciate the info you provided, I think you kind of missed the point.  If the bad actors (anyone engaged in general-purpose computer fuckery) get around the fixes, it's pretty hard to imagine a worse situation for every single Intel-based computer with this flaw. This most likely would include all the systems you mentioned, with the safety systems being the most critical.  I don't think downplaying this flaw is wise.

    To me this is just another reason for some big thinking to be done regarding how computers and their related software work and are designed. As we have become utterly dependent on these machines, it seems pretty ridiculous that we are not able to create systems that are safe.  The Russians are becoming specialists at entering and crippling power distribution grids.  Do we really want to wait to see what happens when they try something?  Or even worse, if someone steals their tech and shuts down a grid just for fun?  Computers were once seen as a panacea for so many things, and to a degree they are.  But now they are a huge risk to world stability and have become a digital albatross around the world's neck.  I love computers and what they do, but we have completely blown it when it comes to deploying them responsibly and with restraint.
    Nice to see that Luddites still exist.  

    Well, no, not really.

    The sky isn't falling.  There are many potential attack vectors both cyber and physical. 

    This is one that has a fix, albeit with a sometimes hefty performance penalty.  It's safer to do it this way even if the kernel protections worked...it's just slower.
  • Reply 48 of 90
    nht Posts: 4,522 member

    wizard69 said:
    netrox said:
    does anyone remember the intel division bug?
    Sadly not many.  I never understood the freedom Intel is given with these failures.   AMD screws up with a tiny problem and everybody and their brother wants blood from the company. 

    Lucky for AMD they are in a position to exploit this failure due to having very competitive hardware finally.  
    AMD has never had sufficient production capability to exploit anything except to raise prices since they can't easily increase volume.  

    Good for AMD stockholders.  Not so much for AMD fans.
  • Reply 49 of 90
    dewme Posts: 5,376 member
    welshdog said:
    dewme said:
    welshdog said:
    This seems like exceptionally bad news - for everyone. Just knowing this flaw exists, even without details, means the bad actors of the world will be working overtime to find an exploit.  And then they'll work to find an exploit to the fix.  This has got to be Intel's biggest screwup ever.  It would be nice to get some info eventually on what chips/machines are affected and by how much.  
    Yeah this is bad news like every one of the many thousands of major security issues that exist in all manner of products. Biggest screwup ever? Biggest this week ... but the week is less than half over. As debilitating as a 30% performance hit would be on tweaking a picture in Photoshop, I'm actually more concerned about this type of threat: https://www.wired.com/story/triton-malware-targets-industrial-safety-systems-in-the-middle-east/ . For those who don't know what a safety system is, it's an independent, isolated, and redundant control system that is put in place (at very great expense) to prevent a failure of the primary control system from leading to a catastrophic failure in an important system, like a power plant or refinery. In other words, the safety net that was put in place to prevent the worst-case scenarios that could occur in a plant from happening - was hacked to break the system it was supposed to be protecting. 

    The "bad actors" have already been working overtime for decades trying (and often succeeding) to exploit everything and anything that has any logic in it, from software, firmware, microcode, markup, macros, scripts, social media, humans, etc. Heck, there are professional-quality development toolkits freely available for anyone to download so you can discover your exploits in your spare time. Unannounced exploits are a worldwide unit of currency. Oh, and who is considered a "bad actor" is entirely relative and depends on who the actor is working for. Are the NSA, FBI, CIA, DOD, and the thousands of public and private companies working on behalf of government agencies, etc., "bad actors?" Depends on whose flag you fly, I guess.

    I don't want to sound like Chicken Little, but cybersecurity is a much greater threat than most lay people can comprehend or deal with at a personal level. It's an ongoing and existential threat that is the primary daily focus of hundreds of thousands of professionals just in the US, and there are easily as many unfilled jobs as ones that are currently filled. The good news is that the previous US administration truly understood the cybersecurity threat from day one and at least got the ball rolling on doing something about it in an apolitical and highly cooperative way between the public and private sectors. I hope the current administration's war on science doesn't lead to regression on this very serious concern. Dealing with the fallout from cybersecurity incidents is simply the new normal today, and it will stay that way at least until managing it gets woven into the fabric of everyday life - like destructive weather, tsunamis, and earthquakes - so workarounds, mitigation, and compensation will be required, especially for legacy systems. Going forward, everything that has logic in it must be designed with cybersecurity in mind, and people must be aware and adapt as well.  
    I called it Intel's worst screwup because it probably is. This is a vulnerability in the chip itself that potentially exposes the kernel and thus pretty much everything. The exploits and hacking efforts you listed are attempts to break/attack/exploit a system that generally is working properly. This flaw puts computer users at a disadvantage right out of the gate.  So while I appreciate the info you provided, I think you kind of missed the point.  If the bad actors (anyone engaged in general-purpose computer fuckery) get around the fixes, it's pretty hard to imagine a worse situation for every single Intel-based computer with this flaw. This most likely would include all the systems you mentioned, with the safety systems being the most critical.  I don't think downplaying this flaw is wise.

    To me this is just another reason for some big thinking to be done regarding how computers and their related software work and are designed. As we have become utterly dependent on these machines, it seems pretty ridiculous that we are not able to create systems that are safe.  The Russians are becoming specialists at entering and crippling power distribution grids.  Do we really want to wait to see what happens when they try something?  Or even worse, if someone steals their tech and shuts down a grid just for fun?  Computers were once seen as a panacea for so many things, and to a degree they are.  But now they are a huge risk to world stability and have become a digital albatross around the world's neck.  I love computers and what they do, but we have completely blown it when it comes to deploying them responsibly and with restraint.
    I'm not downplaying the latest Intel flaw at all. But when I see people implying that moving to AMD chips or A-series RISC chips is a reasonable mitigation strategy, it tells me that people absolutely don't get it. It's only a matter of time before a similar or more severe exploit is found in those chips as well, with similar impact on microkernel-based OS architectures. I'm not worried about a potential 30% performance hit on some OS operations, or trying to shame Intel, or any other narrow-minded knee-jerk reactions. I am worried that normal computer users, politicians, administrators, and software/firmware/hardware/social system architects and designers will only pull their heads out of the sand temporarily for this one exploit and then quickly reinsert them when this specific exploit is handled.  It's not a "one and done" problem, it's systemic and pervasive. 

    The damage potential of this exploit is large due to the ubiquitous nature of Intel chips and how operating systems rely upon them, but there are greater or equal threats to other common technologies including communication chips, communication protocols, security algorithms, etc. Finally, the safety system exploit tells us that no matter how much big thinking goes into the design of a secure system, it can still be flawed. The level of "design for X" - where X is security, resiliency, availability, safety, etc. - that goes into a safety system, including triple-mode redundancy and dynamic memory integrity testing, is unmatched by any consumer product on the market - and it still got exploited. I'm not advocating a wait-and-see strategy at all, but rather a level-headed understanding that these threats already exist and have for years, that the search for exploits is already at ridiculously high levels (on all sides), and that we must treat this as an existential and permanent systemic concern. This week is Intel's time in the box - but they won't be the last one, nor will they be the most severe case. A whack-a-mole response isn't going to solve this problem.

    The type of exploit that Intel is now dealing with (unintended escalation of execution privileges) has occurred and been dealt with many times in all of the major microkernel-based operating systems (Unix and variants up to and including macOS and Linux, Windows NT and variants up to and including Windows 10) over their lifetimes, but it has mainly occurred at the system software interface between the user-mode software (~ applications) and kernel-mode software (~ drivers) layers. Earlier implementations mostly relied upon the kernel software to ensure integrity, but newer CPUs have hardware-level support to maintain the integrity of these privileged interactions without relying solely on kernel-level software. Unfortunately, when the safety net has holes in it and someone has found out where the holes are - we have a problem.
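    To make that concrete, here's a toy sketch of that older, software-only safety net; KERNEL_BASE and toy_copyin are invented for illustration, not any real kernel's code:
    ```c
    /* Toy model of a software-only user/kernel boundary check. Real
     * kernels use their own memory layouts and copyin()-style routines. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <errno.h>

    #define KERNEL_BASE 0xffff800000000000ULL  /* hypothetical split */

    /* Copy len bytes from a user-supplied address into kernel memory,
     * rejecting any range that touches kernel addresses. */
    static int toy_copyin(uint64_t user_src, void *kernel_dst, size_t len) {
        if (user_src >= KERNEL_BASE || len > KERNEL_BASE - user_src)
            return EFAULT;  /* caller handed us a kernel pointer */
        memcpy(kernel_dst, (const void *)(uintptr_t)user_src, len);
        return 0;
    }

    int main(void) {
        char buf[8];
        /* A kernel address must be refused before any copy happens. */
        printf("%d\n", toy_copyin(KERNEL_BASE + 0x1000, buf, sizeof buf));
        return 0;
    }
    ```
    The reported twist with this flaw is that speculative execution can touch kernel data before checks like this take effect, which is why the fix moves kernel pages out of the user address space entirely - and why every user/kernel transition gets more expensive.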

    I totally agree on the "big thinking" around cybersecurity but I'd rephrase it as "system thinking" since it has to consider everything that impacts security at all levels, including people.
  • Reply 50 of 90
    hodar Posts: 357 member

    I see two separate things happening.

    1.  Stop shipping ALL Mac Pros until the chip is fixed.  It makes very little sense for Apple to continue shipping a product that will face a 5-30% performance reduction.  The Mac Pro is their new "Flagship", and no company ships a defective flagship.

    2.  This issue will force a refresh of every Apple line.  From the Mac mini to the laptop lines to the iMac Pro.  Everything needs to have a delineation point, to show customers that their new product does not carry the Intel defect.

  • Reply 51 of 90
    lkrupp Posts: 10,557 member
     Alternatively, users can purchase a new processor that does not contain the fault.
    That’s the information we need the most. Which processors have the fault and which ones don’t? Do the processors in the new iMac Pro have the fault?
  • Reply 52 of 90
    hodar Posts: 357 member
    nht said:

    wizard69 said:
    netrox said:
    does anyone remember the intel division bug?
    Sadly not many.  I never understood the freedom Intel is given with these failures.   AMD screws up with a tiny problem and everybody and their brother wants blood from the company. 

    Lucky for AMD they are in a position to exploit this failure due to having very competitive hardware finally.  
    AMD has never had sufficient production capability to exploit anything except to raise prices since they can't easily increase volume.  

    Good for AMD stockholders.  Not so much for AMD fans.

    At best, AMD can maybe provide 5-10% of the production that Intel currently supports.  It takes YEARS to build a fab that is capable of doing the geometries required for modern CPUs.  When it comes to sheer brute production capabilities - NO ONE can touch Intel.
  • Reply 53 of 90
    thewb Posts: 80 member
    Reports that benchmarks on an unfinished update to the Linux kernel are 5-30% slower mean that all Macs will become 30% slower? That's too many logical leaps. You're going from point A to point D, skipping past B and C.
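    For what it's worth, those Linux numbers come from syscall-heavy microbenchmarks; the patch taxes every user/kernel transition, which a loop like this rough C sketch (Linux-style; any given Mac's numbers would differ) deliberately maximizes:
    ```c
    /* Rough microbenchmark of raw syscall cost. KPTI-style patches add
     * overhead per user/kernel transition, so getpid() in a tight loop
     * shows a worst case; compute-bound work rarely crosses that boundary. */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        enum { N = 1000000 };
        struct timespec a, b;
        clock_gettime(CLOCK_MONOTONIC, &a);
        for (int i = 0; i < N; i++)
            syscall(SYS_getpid);  /* force a real kernel entry each time */
        clock_gettime(CLOCK_MONOTONIC, &b);
        double ns = (b.tv_sec - a.tv_sec) * 1e9 + (double)(b.tv_nsec - a.tv_nsec);
        printf("~%.0f ns per syscall\n", ns / N);
        return 0;
    }
    ```
    Whether that per-transition overhead turns into 5% or 30% for a real application depends entirely on how often it crosses into the kernel; a compute-bound Photoshop filter barely does.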
  • Reply 54 of 90
    dewme Posts: 5,376 member
    iCintos said:
    netrox said:
    ... really no reason to solder everything on a logic board considering that it can be expensive if they are forced to replace all logic boards with soldered CPUs/RAM. The components keep getting smaller and thinner yet they can easily be socketed. 
    Ah, yes netrox.  I fondly remember the good old days when everything was soldered.  All the resistors, capacitors, and diodes that made up a flip-flop.  And the memory, oh how elegant. You could extract a single core and replace it, and the sense wires were easily threaded back and soldered to their lands. 422 chassis modules to execute a long division in a few nanoseconds, and all those great interconnect wires - you could sync the clock to a module by simply trimming a bit off of its length (11.8" = 1 nanosecond). Yes, let's get back to solder and freon-chilled computers.
    I built my first flip-flop (bistable multivibrator) at a military electronics training school, using vacuum tubes. I think the B+ power supply was around 200 volts, with a separate 6-volt supply needed to heat the cathode filaments.  We went on to reimplement the same circuit using transistors (woo hoo). By the end of the 20-week training school we were writing 6502 machine-code programs for diagnosing wiring faults.  
  • Reply 55 of 90
    larryjw Posts: 1,031 member
    I’m likely not understanding some of this story. One key statement is that there is no firmware fix. Yet looking on Intel’s site, they have a table listing each CPU and the minimum ME firmware version that resolves the issue, or at least one of the issues. 

    Like one of the other commenters, I too purchased a new MBP in December, but also a new iMac. About This Mac doesn’t describe the CPU at a level of detail that would tell me whether my chips have the latest firmware versions. It’s going to be Apple’s responsibility to push out the firmware updates. 

    A second thing to note: Intel has downloadable software to analyze your system for these flaws, but the software only runs under Windows or Linux. Intel needs to push out a version for macOS. 
  • Reply 56 of 90
    GeorgeBMac Posts: 11,421 member
    This is why I keep multiple computers.  Most notably I keep one in virtual lock-down status to use as a financial computer.  It stores my financial records and only accesses a very few specific financial sites that I deal with.  There is no web browsing on it and no email.  Plus it's powered down unless I'm using it, which is roughly once a week.

    That doesn't guarantee security of course, but it does improve the odds that my most valuable personal information will be safe from hackers.
  • Reply 57 of 90
    sirozha Posts: 801 member
    daven said:
    netrox said:
    does anyone remember the intel division bug?
    I do. I had a CPU with it and Intel sent me a replacement chip. Back then it was easy to fix on my PC. Open the case. Move the lever that held the heat sink to the CPU, remove the heat sink, lift out the CPU, orient the new CPU properly and insert it, put the heat sink on, tighten with the lever. Done.
    You forgot to close the case. 
  • Reply 58 of 90
    sirozha Posts: 801 member
    k2kw said:
    Isn't it time for an A-series-based MacBook Air or an iOS laptop?
    I hope this will persuade Apple to start using their own chips in MacBooks. 
  • Reply 59 of 90
    foggyhill Posts: 4,767 member
    netmage said:
    This is bad but hardly as bad as the WPA2 issues recently or any network OS vulnerabilities. If bad actors are running code on your box, you’ve already lost the game - this is just another way they can exploit that. 
    The problem is that most of the time when you fix a software bug, it doesn't severely affect your performance; here, people will be put in a very bad situation where they'll NOT be able to actually fix this without impacting services.

    Exploits that allow a normal user to jump to a privileged user are way worse than you say. If I've got someone locked into a non-admin account, I'm worried, but I can fix that without crapping out my services.

    But this is worse, since it will make any zero day that allows local access much, much worse. Most viruses are introduced locally by things like malware and then use privilege-escalation bugs to reach root. This is the mother of that kind of escalation. Once it has root on one machine, it can sit on the network and wait for the chance to get into a user account on some server that has a vulnerability, and then again use this to get to root on the server. Eventually, with this bug, they'll get there.

    The WPA2 bug was actually not as bad as initially thought, since a quick fix in software on the client solved 95% of it, and most access points were not affected directly in their default use case.

    The other problem with hardware bugs is that you know most of those machines will NEVER be fixed, leaving them as near-eternal vulnerabilities to whatever chains of exploits come in the future; bot hosts in waiting.
    edited January 2018
  • Reply 60 of 90
    polymnia Posts: 1,080 member
    GeorgeBMac said:
    This is why I keep multiple computers.  Most notably I keep one in virtual lock-down status to use as a financial computer.  It stores my financial records and only accesses a very few specific financial sites that I deal with.  There is no web browsing on it and no email.  Plus it's powered down unless I'm using it, which is roughly once a week.

    That doesn't guarantee security of course, but it does improve the odds that my most valuable personal information will be safe from hackers.
    Where do you buy your tin foil hats?