Apple's shift to ARM Mac at WWDC will define a decade of computing


Comments

  • Reply 61 of 76
    Rayz2016 Posts: 6,957 member
    What about the pro market with their investment in x86 software running on their $30,000 Mac Pros?  
    They’ll recompile the software. 
    techconc · cornchip · commentzilla · watto_cobra
  • Reply 62 of 76
    cornchip Posts: 1,911 member
    TLDR;


    There is far more potential in moving modern iPad code to the Mac than there is in supporting legacy x86 Windows software on Intel Macs.

    GeorgeBMac
  • Reply 63 of 76
    elijahg Posts: 2,633 member
    techconc said:
    elijahg said:
    Another point no one seems to have factored in is that Intel has tons of extensions to x86 which are essentially never compared in benchmarks against ARM: SSE1/2/3, AVX, MMX, Quick Sync, etc. Lots of cross-platform software uses these extensions to speed things up, and the code can be directly ported from the huge market that is Windows to the smaller Mac market. Software that makes use of those extensions is much, much faster than software that uses general x86 instructions, because they come with much less legacy cruft. If Apple switches, cross-platform devs aren't going to waste time optimising their software to double the speed on the tiny Mac market; Mac users will just get an inferior experience, again.
    For starters, you should understand that ARM has the same kind of SIMD extensions (called NEON).  I don't think you understand how they are accessed, though.  Apple has the Accelerate framework, which leverages these functions natively.  Developers on Apple platforms aren't writing for Intel or ARM instructions specifically.  If they have an application that can leverage this sort of thing, they use Apple's Accelerate framework and they get this huge performance improvement.  Apple abstracts the CPU-specific details, so applications can simply be recompiled and they will also be tuned to leverage the SIMD instructions of ARM processors. 
    I am talking about this in the context of apps ported from Windows, which many things that use these instructions are. Ports generally only use Apple's UI frameworks; for everything else they either use cross-platform ones or access the registers through low-level in-house frameworks specifically designed for portability, where the underlying instruction set is the same. There are very few major third-party programs requiring SIMD instructions that are written specifically for macOS. Unfortunately, thanks in part to Apple's pricing, the macOS market share is just too small for devs to spend much time optimising for macOS. The Mac version will just do everything in software and end up slower overall. This is another area where iOS and macOS development differ, and Apple's our-way-or-the-highway approach doesn't apply the same way; there are few ports on iOS compared to macOS.
  • Reply 64 of 76
    techconc said:
    braytonak said:
    While a new ARM-based MacBook is logical, I would think it would also reinforce the expectation that %desktopOS%-on-ARM = slow. Apple’s confidence in ARM would be clearer if they put it in a MacBook Air, which we already know is a capable machine. 

    Either way, I would replace my 2015 and 2017 MacBooks with an ARM-based model if they ditched the butterfly keyboard in them. If this comes to fruition this year I will find it a very fascinating time, indeed. 
    I really don't understand why there is this perception that ARM on the desktop would be slow.  What leads you to think that?  The single-core performance of Apple's A13 chip is within 6% of the performance of Intel's fastest i9 core.  That's with a low-wattage mobile chip running on a phone, compared to a much higher-wattage desktop chip from Intel.  The A14 will likely exceed what Intel can do in single-core performance, and they'll do it on a phone.  On the desktop, it's just a matter of Apple providing a chip with more cores.   Also, as it stands now, the two-year-old A12X-based iPad Pro is more powerful than a brand new Core i5-based MacBook Air.  Provided we have native applications, performance will not be an issue.

    Also, Apple has already moved away from the butterfly keyboard in all of their models, so that concern is completely a non-issue.
    The A13 in Geekbench has about the same overall performance (single-core, multi-core, and graphics) as the 10th-gen quad-core i5 in the 2020 MacBook Pro. I imagine the 5nm A14 is comparable to the i7 (all tests) and the i9 (single-core).

    That's pretty good considering the thermal design is twice as efficient as the i5's (7 W vs 15/25 W).


    GeorgeBMac · watto_cobra
  • Reply 65 of 76
    elijahg Posts: 2,633 member

    techconc said:
    braytonak said:
    While a new ARM-based MacBook is logical, I would think it would also reinforce the expectation that %desktopOS%-on-ARM = slow. Apple’s confidence in ARM would be clearer if they put it in a MacBook Air, which we already know is a capable machine. 

    Either way, I would replace my 2015 and 2017 MacBooks with an ARM-based model if they ditched the butterfly keyboard in them. If this comes to fruition this year I will find it a very fascinating time, indeed. 
    I really don't understand why there is this perception that ARM on the desktop would be slow.  What leads you to think that?  The single-core performance of Apple's A13 chip is within 6% of the performance of Intel's fastest i9 core.  That's with a low-wattage mobile chip running on a phone, compared to a much higher-wattage desktop chip from Intel.  The A14 will likely exceed what Intel can do in single-core performance, and they'll do it on a phone.  On the desktop, it's just a matter of Apple providing a chip with more cores.   Also, as it stands now, the two-year-old A12X-based iPad Pro is more powerful than a brand new Core i5-based MacBook Air.  Provided we have native applications, performance will not be an issue.

    Also, Apple has already moved away from the butterfly keyboard in all of their models, so that concern is completely a non-issue.
    As usual benchmarks are very theoretical. There are some benchmarks where older CPUs still do better at some tasks than modern Intel CPUs. No one does any grunt work on a phone, no scientific number crunching, no compilation, no complex video effect processing or CPU-based 3D rendering. Also remember the i5 MBA is clocked *much* slower than the iPad Pro. Remember Samsung was caught optimising for specific benchmarks to make their CPUs seem better than they were? I'm not saying Apple does that - far from it, but it shows that benchmarks aren't everything.
    edited June 2020 · GeorgeBMac
  • Reply 66 of 76
    elijahg Posts: 2,633 member
    mcdave said:
    elijahg said:
    Another point no one seems to have factored in is that Intel has tons of extensions to x86 which are essentially never compared in benchmarks against ARM: SSE1/2/3, AVX, MMX, Quick Sync, etc. Lots of cross-platform software uses these extensions to speed things up, and the code can be directly ported from the huge market that is Windows to the smaller Mac market. Software that makes use of those extensions is much, much faster than software that uses general x86 instructions, because they come with much less legacy cruft. If Apple switches, cross-platform devs aren't going to waste time optimising their software to double the speed on the tiny Mac market; Mac users will just get an inferior experience, again.
    The article does cover custom silicon engines.  Apple has answers for every one you mentioned, and what about the engines Apple's SoCs cover which Intel doesn't? Where are Intel's CPU AI extensions? 



    docno42 said:
    elijahg said:
    Another point no one seems to have factored is that Intel has tons of extensions to x86 which are essentially never compared in benchmarks against ARM. SSE1/2/3, AVX, MMX, Quicksync etc. Lots of cross-platform software uses these extensions to speed things up, and the code can be directly ported from the huge market that is Windows to the smaller Mac market. Software that makes use of those extensions is much much faster than that which uses general x86 instructions because they come with much less legacy cruft. If Apple switches, cross platform devs aren't going to waste time optimising their software to double the speed on the tiny Mac market, Mac users will just get an inferior experience, again.
    I dunno - the Apple ARM chips have dedicated ML cores - I could see writers of some of that software being excited to access such power on a desktop/laptop.  And often that functionality is in a library: port the library once and programs that rely on it can more easily be adapted to use it.  It all depends on what Apple bakes into the chips they put in these future Macs.  It could turn out to be another significant differentiator. 
    They do, but who other than Apple uses ML? Barely anyone. The only area ML is used on desktop at the moment AFAIK is in Photos, and even the initial scan of my 17,000 photos took just a few hours on my i9. Adding even 100 photos takes an unnoticeable time to scan. There's a reason Intel has no ML stuff in their CPUs, and the rest of the world just uses the GPU.
  • Reply 67 of 76
    elijahg said:
    As usual benchmarks are very theoretical. There are some benchmarks where older CPUs still do better at some tasks than modern Intel CPUs. No one does any grunt work on a phone, no scientific number crunching, no compilation, no complex video effect processing or CPU-based 3D rendering. Also remember the i5 MBA is clocked *much* slower than the iPad Pro.
    That's all true. But one place where we might be able to make a reasonable comparison is with games like Fortnite?

    From what I understand the iPad Pro can achieve a sustained 90-100 fps on medium settings and 60 fps on high, which puts it right up there with the 16" MBP and the Dell XPS 15 with its Nvidia GTX 1650. It's not a straightforward comparison, but the A-series chips show a lot of promise considering that Apple can fine-tune the on-die graphics with the operating system in ways that cannot be done on a PC. This would present a considerable upside for the 13" MBP's integrated graphics, well above anything Intel or AMD can offer.

    iPad Pro = 2732 x 2048 resolution

    16" MBP = 3072 x 1920 resolution
    edited June 2020 · watto_cobra
  • Reply 68 of 76
    Rayz2016 Posts: 6,957 member
    elijahg said:
    techconc said:
    elijahg said:
    Another point no one seems to have factored in is that Intel has tons of extensions to x86 which are essentially never compared in benchmarks against ARM: SSE1/2/3, AVX, MMX, Quick Sync, etc. Lots of cross-platform software uses these extensions to speed things up, and the code can be directly ported from the huge market that is Windows to the smaller Mac market. Software that makes use of those extensions is much, much faster than software that uses general x86 instructions, because they come with much less legacy cruft. If Apple switches, cross-platform devs aren't going to waste time optimising their software to double the speed on the tiny Mac market; Mac users will just get an inferior experience, again.
    For starters, you should understand that ARM has the same kind of SIMD extensions (called NEON).  I don't think you understand how they are accessed, though.  Apple has the Accelerate framework, which leverages these functions natively.  Developers on Apple platforms aren't writing for Intel or ARM instructions specifically.  If they have an application that can leverage this sort of thing, they use Apple's Accelerate framework and they get this huge performance improvement.  Apple abstracts the CPU-specific details, so applications can simply be recompiled and they will also be tuned to leverage the SIMD instructions of ARM processors. 
    I am talking about this in the context of apps ported from Windows, which many things that use these instructions are. Ports generally only use Apple's UI frameworks; for everything else they either use cross-platform ones or access the registers through low-level in-house frameworks specifically designed for portability, where the underlying instruction set is the same. There are very few major third-party programs requiring SIMD instructions that are written specifically for macOS. Unfortunately, thanks in part to Apple's pricing, the macOS market share is just too small for devs to spend much time optimising for macOS. The Mac version will just do everything in software and end up slower overall. This is another area where iOS and macOS development differ, and Apple's our-way-or-the-highway approach doesn't apply the same way; there are few ports on iOS compared to macOS.
    Well, the sad fact is that these vendors have had well over a decade to get their apps ready for the switch. If they see the Mac as not viable or too expensive then that's fine, but I don't really see why Apple should hold the platform back or compromise its future to support vendors who aren't really invested in the Mac; that would be unfair to the developers, some of whom are even more strapped for cash, who have put the work in.  After every transition there are always casualties, and I have no doubt we'll see some here. And the options open to you are exactly the same:

    • Talk to your software supplier and point out the situation. Since they're Windows-centric, they may not know. If Apple does switch, it won't be overnight, so they may have time to rectify the situation. My guess is that they won't if they don't have that many Mac users. 
    • Look for an alternative package if you can. 

    • Buy the most powerful Mac you can before the switchover is done and keep going with it for as long as you can. I wouldn't recommend this one because it's just delaying the inevitable, which brings us to what you should actually do:

    • Admit that your best option for Windows-centric software is a Windows machine and move to the Windows platform. This is possibly what you should have done in the first place if you knew that the Mac version of your must-have software was a warmed-over Windows port, because even if Apple doesn't announce a switch to ARM next week, this is a position you were always going to arrive at eventually. 
    edited June 2020 · tmay · watto_cobra
  • Reply 69 of 76
    cropr Posts: 1,078 member
    jharner said:
    Apple has already made it difficult for data scientists by not providing Nvidia drivers supporting CUDA. The real problem is not virtualization for Windows; rather, it is virtualization for Docker and, to a lesser extent, for VirtualBox. Docker containers herald the wave of the future for STEM, including data science and statistics. They also will be critical for reproducible research. Right now the Mac is dominant in STEM; go to any related conference to see what attendees are using. Two factors may change this: ARM Macs and Windows Subsystem for Linux. STEM increasingly runs on Linux-based open-source software using multiple software ecosystems that must work together: R and PostgreSQL, R or Python with Spark and Hadoop, etc. These integrated environments are best containerized, and the reproducibility of research and the collaboration of developers and scientists, along with bash and git, will make Docker and related infrastructure essential. Currently the Mac excels in this area, but an ARM Mac may not support Docker, Singularity, Kubernetes, etc., even if it does continue to support open-source UNIX software. But now Windows runs containerized Linux. What faculty and researchers use will filter down to students. We are not talking about a 2% loss; it will be much larger for STEM-related disciplines in academia and industry. In principle, Apple could support Docker and related technologies, but will they? They could support CUDA, but they don't. Even if computing moves to the cloud, what technologies are behind it? Docker, Kubernetes, open-source software, etc. Everyone will need to develop, test and deploy their code from their local machine. This cannot be done using iPadOS, and if it cannot be done on macOS, then the Mac is done in STEM.

    Fully agree.  I own a software company that develops cloud services.  We develop not only front ends (web, iOS, Android) but also back ends that run in Docker containers on  Cloud Providers (Google Cloud, Azure, AWS, ...). 

    Currently my developers can choose between a MacBook Pro or a (Linux-based) Dell XPS machine, leading to an even split between the two.

    If Apple moves to ARM-based Macs only, without good and performant support for Docker containers, we will have to abandon the Mac, except for the iOS app developers (8% of the workforce). Taking into account that Docker containers on the Mac only reached excellent performance three years after the first version of Docker for Mac was released, it is clear that making a good Docker implementation on ARM could be a serious challenge. 

    And if we have to abandon the Mac for developers, I don't see a reason to keep using Macs for other, non-development tasks.

    Professionals who need Docker support typically buy the high-end machines. So if Apple does not continue to support Intel or ship a performant Docker implementation, Apple will lose a number of high-end, high-margin sales.
  • Reply 70 of 76
    elijahg Posts: 2,633 member
    elijahg said:
    As usual benchmarks are very theoretical. There are some benchmarks where older CPUs still do better at some tasks than modern Intel CPUs. No one does any grunt work on a phone, no scientific number crunching, no compilation, no complex video effect processing or CPU-based 3D rendering. Also remember the i5 MBA is clocked *much* slower than the iPad Pro.
    That's all true. But one place where we might be able to make a reasonable comparison is with games like Fortnite?

    From what I understand the iPad Pro can achieve a sustained 90-100 fps on medium settings and 60 fps on high, which puts it right up there with the 16" MBP and the Dell XPS 15 with its Nvidia GTX 1650. It's not a straightforward comparison, but the A-series chips show a lot of promise considering that Apple can fine-tune the on-die graphics with the operating system in ways that cannot be done on a PC. This would present a considerable upside for the 13" MBP's integrated graphics, well above anything Intel or AMD can offer.

    iPad Pro = 2732 x 2048 resolution

    16" MBP = 3072 x 1920 resolution
    I'm not really sure that's a fair CPU comparison; Fortnite is GPU-bound, not CPU-bound. What would be better is to run several different real-world number-crunching programs, compilations of different software and languages, and perhaps web page rendering speeds, and see how they perform on the different machines. 
  • Reply 71 of 76
    fastasleep Posts: 6,145 member
    What about the pro market with their investment in x86 software running on their $30,000 Mac Pros?  
    That stops functioning immediately.
    jdb8167
  • Reply 72 of 76
    mattinoz Posts: 1,911 member
    Rayz2016 said:
    What about the pro market with their investment in x86 software running on their $30,000 Mac Pros?  
    They’ll recompile the software. 

    Maybe even fix a bug or two along the way. Anyone who deals with the sort of software that demands the upper end of hardware costs knows bugs abound. Indeed, any that cross-compile without change will be marked "works as designed".


    GeorgeBMac
  • Reply 73 of 76
    GeorgeBMac Posts: 11,421 member
    mattinoz said:
    Rayz2016 said:
    What about the pro market with their investment in x86 software running on their $30,000 Mac Pros?  
    They’ll recompile the software. 

    Maybe even fix a bug or two along the way. Anyone who deals with the sort of software that demands the upper end of hardware costs knows bugs abound. Indeed, any that cross-compile without change will be marked "works as designed".



    Yeh, I inherited one of those systems (it tracked ATM transactions) and blew up pretty much nightly.   I ended up with my arm and hand in a cast after putting it through a wall.  I still have a steel plate & 5 screws in my hand.
  • Reply 74 of 76
    techconc Posts: 237 member
    elijahg said:
    I am talking about this in the context of apps ported from Windows, which many things that use these instructions are. Ports generally only use Apple's UI frameworks, for everything else they either use cross-platform ones or access the registers through low level in-house frameworks specifically designed for portability, where the underlying instruction set is the same. 
    That's part of the porting process.  Typically any sort of hard-coded SIMD instructions would exist in libraries, gaming engines, etc.  Yes, there is an effort to port those libraries and engines, but when they are ported, they are written to Apple's frameworks.   The same goes for things like Direct3D to Metal. It's all part of porting an application.  
  • Reply 75 of 76
    techconc Posts: 237 member

    elijahg said:

    As usual benchmarks are very theoretical. There are some benchmarks where older CPUs still do better at some tasks than modern Intel CPUs. No one does any grunt work on a phone, no scientific number crunching, no compilation, no complex video effect processing or CPU-based 3D rendering. Also remember the i5 MBA is clocked *much* slower than the iPad Pro. Remember Samsung was caught optimising for specific benchmarks to make their CPUs seem better than they were? I'm not saying Apple does that - far from it, but it shows that benchmarks aren't everything.
    You can't just use "benchmarks are theoretical" as an excuse to ignore them.  Benchmarks are based on real applications, and if you look at the results of the subtests, you can see individual strengths and weaknesses. 

    To your point, yes, I'm aware of the games many OEMs do to cheat on benchmarks.  This is more of an issue for phones that have very real thermal limitations than it is for desktops with proper active cooling.  