Apple may shun Intel for custom A-series chips in new Macs within 1-2 years


Comments

  • Reply 141 of 183
    I don't understand. If their processors are around Atom speed now, and will land somewhere between an Atom and an i3 in a couple of years, that's still not close enough. The MacBook lines and the iMac use i5 and i7 chips (the Mac mini may use an i3, so maybe the lower models could gain it first?), so what's the point of a downgrade in speed with a promise of catching up a couple of years later? I think they'll wait for parity first. I'm still expecting the processors to draw even around 2020, so couldn't it happen around then rather than 2016-17?
  • Reply 142 of 183
    Here's an interesting read:
    http://memkite.com/blog/2014/12/15/data-parallel-programming-with-metal-and-swift-for-iphoneipad-gpu/

    I don't know how applicable this approach would be for most apps -- but with relative ease, the author was able to use Swift and Metal to exploit the CPUs and GPUs for a specific task. In a link he reports that this scales from 20x to 75x faster than other approaches.
    http://memkite.com/blog/2014/12/15/data-parallel-programming-with-metal-and-swift-for-iphoneipad-gpu/

    Metal has nothing to do with OpenCL. It is a low-level alternative to OpenGL that reduces overhead, just like Mantle.

    The same advantages in Mantle and Metal are being rolled into OpenGL NG (OpenGL 5).

    Games are heavily bottlenecked on the CPU, not the GPGPU. The reduction in call overhead improves performance and reduces tearing, all on the CPU.
  • Reply 143 of 183
    ...and yet AMD is struggling to remain relevant to OEMs... Last I heard, the engineers were fleeing the company. Maybe that's changed; I just don't follow them anymore.

    Stay in the loop. The entire management team comprises quite a few new engineers, with several from Apple. The people leaving are business-background folks.

    AMD is following the Apple 2.0 reinvention process. Engineers are now running the show, top to bottom.
  • Reply 144 of 183
    Here's an interesting read:
    http://memkite.com/blog/2014/12/15/data-parallel-programming-with-metal-and-swift-for-iphoneipad-gpu/

    I don't know how applicable this approach would be for most apps -- but with relative ease, the author was able to use Swift and Metal to exploit the CPUs and GPUs for a specific task. In a link he reports that this scales from 20x to 75x faster than other approaches.
    http://memkite.com/blog/2014/12/15/data-parallel-programming-with-metal-and-swift-for-iphoneipad-gpu/

    Metal has nothing to do with OpenCL. It is a low-level alternative to OpenGL that reduces overhead, just like Mantle.

    The same advantages in Mantle and Metal are being rolled into OpenGL NG (OpenGL 5).

    Games are heavily bottlenecked on the CPU, not the GPGPU. The reduction in call overhead improves performance and reduces tearing, all on the CPU.

    All I did was quote an article:

    Metal is an alternative to OpenGL for graphics processing, but for general data-parallel programming for GPUs it is an alternative to OpenCL and Cuda. This (simple) example shows how to use Metal with Swift for calculating the Sigmoid function (Sigmoid function is frequently occurring in machine learning settings, e.g. for Deep Learning and Kernel Methods/Support Vector Machines).

    If you want to read up on Metal I recommend having a look at https://developer.apple.com/metal/ (Metal Programming Guide, Metal Shading Language and Metal Framework Reference)

    http://memkite.com/blog/2014/12/15/data-parallel-programming-with-metal-and-swift-for-iphoneipad-gpu/


    The author claims that he can get a 20x improvement using Swift and Metal as an alternative to OpenCL ... Is he wrong?

    If not, isn't that significant?

    20x ...
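
    In case anyone wants to see roughly what the article is doing, here's a minimal sketch of that sigmoid pass in Swift + Metal. This is not the article's code -- the API names below are the current Metal spellings rather than what the 2014 post used, and the kernel string, buffer size, and test data are purely illustrative:

        import Metal

        // A rough sketch of a GPU sigmoid pass driven from Swift.
        // The kernel is compiled from source at runtime to keep the example self-contained.
        let kernelSource = """
        #include <metal_stdlib>
        using namespace metal;
        kernel void sigmoid(device const float *input  [[buffer(0)]],
                            device float       *output [[buffer(1)]],
                            uint id [[thread_position_in_grid]]) {
            output[id] = 1.0f / (1.0f + exp(-input[id]));
        }
        """

        let input: [Float] = (0..<1024).map { Float($0) / 1024 }   // illustrative data

        guard let device = MTLCreateSystemDefaultDevice(),
              let queue = device.makeCommandQueue() else { fatalError("No Metal device") }

        let library = try! device.makeLibrary(source: kernelSource, options: nil)
        let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "sigmoid")!)

        let byteCount = input.count * MemoryLayout<Float>.stride
        let inBuffer = device.makeBuffer(bytes: input, length: byteCount, options: [])!
        let outBuffer = device.makeBuffer(length: byteCount, options: [])!

        let commandBuffer = queue.makeCommandBuffer()!
        let encoder = commandBuffer.makeComputeCommandEncoder()!
        encoder.setComputePipelineState(pipeline)
        encoder.setBuffer(inBuffer, offset: 0, index: 0)
        encoder.setBuffer(outBuffer, offset: 0, index: 1)

        // One thread per element, split into threadgroups the pipeline can handle.
        let width = min(pipeline.maxTotalThreadsPerThreadgroup, input.count)
        let groups = MTLSize(width: (input.count + width - 1) / width, height: 1, depth: 1)
        encoder.dispatchThreadgroups(groups, threadsPerThreadgroup: MTLSize(width: width, height: 1, depth: 1))
        encoder.endEncoding()
        commandBuffer.commit()
        commandBuffer.waitUntilCompleted()

        // Read the results back on the CPU side.
        let results = outBuffer.contents().bindMemory(to: Float.self, capacity: input.count)
        print(results[0], results[input.count - 1])

    Whether that turns into the 20x-75x the author reports obviously depends on the workload, but the amount of host-side ceremony really is small.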
  • Reply 145 of 183
    brlawyer wrote: »

    I am still waiting for those apps to appear - GCD currently seems like great vaporware.
    GCD is anything but vaporware; it's a well-integrated part of the API that's been there, and working great, for years now. I think it's pretty safe to say that just about every major Mac app out there today is going to be using it in some capacity.

    Now, GCD isn't going to magically turn every single-threaded app into a multi-threaded one, nor is it going to make non-threadable problems suddenly solvable using threads. But for applications which do use threading (and it's hard to write a sufficiently large app without it), GCD really does make it a lot easier to work with on the development side.
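
    Just to make that concrete, here's the basic shape of it -- a minimal sketch in today's Swift spelling (at the time of this thread the calls were the C-style dispatch_async family, but the idea is identical; the queue label and the "work" are made up for illustration):

        import Dispatch

        // Fan independent work out across all cores (blocks until every iteration is done).
        let resultsQueue = DispatchQueue(label: "com.example.results")   // hypothetical label
        var thumbnails = [Int: String]()

        DispatchQueue.concurrentPerform(iterations: 8) { i in
            // Pretend this is an expensive, independent piece of work.
            let rendered = "thumbnail-\(i)"
            // Serialize access to the shared dictionary on a private queue.
            resultsQueue.sync { thumbnails[i] = rendered }
        }
        print("rendered \(thumbnails.count) thumbnails")

        // The other everyday pattern: push work off the main thread, then hop back
        // to the main queue for anything UI-facing.
        let group = DispatchGroup()
        DispatchQueue.global(qos: .utility).async(group: group) {
            // background work goes here
        }
        group.wait()   // in a real app you'd use group.notify(queue: .main) { ... } instead

    The point isn't that this is clever; it's that GCD makes the queue/work-item model the path of least resistance, which is why so many shipping apps already lean on it.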
  • Reply 146 of 183
    wiggin Posts: 2,265 member
    Quote:
    Originally Posted by chadbag View Post

    They don't need to make a CPU with equivalent performance to an Intel chip. For the same energy and thermal footprint, they could use several of their chips to replace a single Intel chip. A single quad-core i7 looks to be around 80-85 W depending on model (65/84/88 W for the three listed on Wikipedia). I was not able to find power-consumption figures for the A8X, but estimates put "tablet CPU power consumption" at about 4-8 W. So let's use 8 W, and kick it up to 12 W for an improved version of the A8X with a faster clock speed, etc. Now stick six of those in one machine, each with 3 cores. That is 18 cores of 64-bit ARM goodness, running 30-50% faster (based on our power-bump estimates), versus 4 cores in the Intel chip, for about the same or less power draw and heat output as the single Intel chip.

    I suspect that such a thing, once engineering had optimized for that many cores, would run quite well.


     

    But that last statement is the kicker. How long have we had multiple CPUs/cores in our computers? And we are still at the point where, for most users, anything beyond dual cores is already pushing past the point of diminishing returns (unless you are running highly parallelized programs like video compression and such). You'd get a little more juice out of quad-core, but for most users it would be maybe 3x, not 4x (a rough Amdahl's-law sketch below puts numbers on this). So just cramming 18 cores into a box really isn't going to benefit most users. Cranking up the clock might be a challenge, too.

     

    In the next few years Apple could maybe produce a Mac mini-class computer, and then only because the 2014 minis aren't any more powerful than the previous generation; performance has been stagnant for 3 years. Replacing the MBP and iMac would be a pretty tall order, performance-wise, I think. And if they can't transition the entire Mac line, it would be foolish to transition only part of it.

     

    It would almost be better to create a whole new computer line and then slowly (gasp!) kill off the Mac once the new A-series CPU computer has reached critical mass. It's much cleaner to market an upsell to iPhone and iPad users -- a new line of ARM-based laptops and desktops -- than to have to market a mixed Intel-ARM Mac lineup.
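
    To put rough numbers on the "maybe 3x, not 4x" point: Amdahl's law caps the speedup from N cores by whatever fraction of the work stays serial. A quick back-of-the-envelope sketch in Swift -- the 90%-parallel figure is purely an assumption for illustration:

        // Amdahl's law: speedup(N) = 1 / ((1 - p) + p / N),
        // where p is the fraction of the work that can actually run in parallel.
        func amdahlSpeedup(parallelFraction p: Double, cores n: Double) -> Double {
            return 1.0 / ((1.0 - p) + p / n)
        }

        // Assume a well-threaded desktop app where 90% of the work parallelizes (an assumption).
        let p = 0.9
        for cores in [2.0, 4.0, 18.0] {
            print("\(Int(cores)) cores -> \(amdahlSpeedup(parallelFraction: p, cores: cores))x")
        }
        // Roughly 1.8x on 2 cores, 3.1x on 4, and only about 6.7x on 18 --
        // and even infinite cores never beat 10x while 10% of the work stays serial.

    Which is why just stuffing 18 A-series cores into a box doesn't buy most users anything like what the raw core count suggests.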

  • Reply 147 of 183
    Marvin Posts: 15,326 moderator
    Virtual PC was slow -- but better than the alternative, IMO! It is interesting that MS bought VPC -- presumably to kill it :D

    There was speculation it was for backwards compatibility with the XBox. The original XBox was x86 but the 360 moved to PPC so no original XBox games would run (they're back to x86 again with the latest models). They used emulation in order to get the games to run but they only made some games work and each had an emulation profile:

    http://en.wikipedia.org/wiki/List_of_Xbox_games_compatible_with_Xbox_360#How_compatibility_is_achieved

    The XBox 360 developer kits were G5 Macs early on; they tried to hide them running the early game demos while putting a powered-off console in front:

    http://www.anandtech.com/show/1686/5
  • Reply 148 of 183
    Marvin wrote: »
    Mac software developers would just target 2 architectures to be compatible.
    Oh boy. Ask most developers whether they'd want to be stuck with a PPC-to-Intel-style transition period, but indefinitely. Remember that it's not a matter of "just" turning on the two architectures; there are always little nits that come up (see the small example below), so you also have to thoroughly test everything on both architectures. Not doing so is just inviting disaster.
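
    A trivial illustration of the kind of per-architecture nit involved: current Swift lets you branch at compile time on the target architecture, and whatever hides behind one branch never even gets compiled into the other slice, let alone tested. The function and the numbers here are hypothetical, just to show the mechanism:

        // Hypothetical helper: pick a tuning value per target architecture at compile time.
        func vectorWidthHint() -> Int {
            #if arch(x86_64)
            return 8        // e.g. a value tuned for the Intel build
            #elseif arch(arm64)
            return 4        // e.g. a value tuned for the A-series build
            #else
            return 1        // scalar fallback
            #endif
        }

        // The arm64 branch is never compiled into the x86_64 slice and vice versa,
        // so a bug hiding in one of them only ever shows up when you build and test
        // on that hardware.
        print("vector width hint: \(vectorWidthHint())")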
  • Reply 149 of 183
    Marvin wrote: »
    Virtual PC was slow -- but better than the alternative, IMO! It is interesting that MS bought VPC -- presumably to kill it :D

    There was speculation it was for backwards compatibility with the XBox. The original XBox was x86 but the 360 moved to PPC so no original XBox games would run (they're back to x86 again with the latest models). They used emulation in order to get the games to run but they only made some games work and each had an emulation profile:

    http://en.wikipedia.org/wiki/List_of_Xbox_games_compatible_with_Xbox_360#How_compatibility_is_achieved

    The XBox 360 developer kits were G5 Macs early on; they tried to hide them running the early game demos while putting a powered-off console in front:

    http://www.anandtech.com/show/1686/5

    Ahh ... Thanks for that insight!
  • Reply 150 of 183
    Quote:
    Originally Posted by Dick Applebaum View Post

    Not at all! Not what I meant. If anyone could solve software problems by throwing hardware [more cores] at it, it would be Apple, because they have the expertise and control of the entire stack -- in the case of ARM and iOS/OSX ... and significant leverage to get special treatment from Intel, etc.

    I posted earlier that:

    A current Intel CPU chip, in real time, interprets x86 CISC instructions and translates/executes them as RISC (ARM-like) instructions.

    How much of the power of, say, an i5 or i7 is spent [wasted] doing that?

    Couldn't Apple dedicate an A8X or A9X chip (or 2) to the same task -- translating x86 CISC into ARM RISC ... at equal or better performance, 1/2 the power and 1/3 the cost?

    You also have to translate from x86_64 to ARM64, not just CISC to RISC. It would be very unlikely to have a general-purpose CPU just sitting there to decode and convert instruction sets. It has been done before, but it has always been termed emulation, and that never really worked well or fast.

     

    I am also sure very little is wasted on an i7 or i5, as it has specialized hardware to do it. Read this; he says it better and more knowledgeably.

    Quote:

    Originally Posted by MachineShedFred View Post

    Intel's latest designs only have to do µOp translation when dealing with *very* old software. Anything that is x86-64 native (OS X is) isn't using any of that, and is running at full speed with full register use. Intel solved that back with the Pentium Pro in 1996, and re-solved it more recently with the Pentium D when they moved away from the garbage Pentium 4 "NetBurst" architecture.

     

    More to the point, anyone thinking that you can just throw cores at it in order to make up the difference is very much falling prey to the biggest modern fallacy of general-purpose computing - multiprocessing absolutely does not scale linearly with core count anywhere except benchmarks.  Plus, unless the app is written to spin off shloads of threads and execute out-of-order, you're just creating wait-states as you wait for the result of that one thread you need to come back after all the other ones are done.  Oh, and if that result changes the work of the other ones?  Whoops - re-do all that work.

     

    There's a reason why multiprocessing was in the domain of servers for years before making its way to endpoints. Servers have much more predictable workloads; thus multithreading makes far more sense, and the performance gains from x-way CPU designs are far easier to realize.


  • Reply 151 of 183
    pfisher Posts: 758 member

    More and more people are using iOS as their primary computing device - or only device. Office is online. A cheaper, non-Intel iOS laptop should be a no brainer. 

  • Reply 152 of 183
    pfisher Posts: 758 member

    Most people could run an A-series laptop and be fine.

     

    Apple could continue the Pro machines running OS X and be able to run iOS apps.

     

    For now at least.

     

    Anyway, Apple is a mass-market company, not niche. With most people using iOS and not Macs, it makes sense, maybe, for them to later dump OS X.

     

    OS X could later run, if that is the case, on A-series chips.

     

    Look at what Apple did with Final Cut Pro and Aperture and their hardware. "Dumbing down"? Maybe. Or maybe going mass market with their own chips and owning the whole enchilada. And, above all, having control over chip development.

     

    Apple kicks butt. I am often critical of them, but for hardware - they smoke....this is only the beginning.

  • Reply 153 of 183
    Dear AppleInsider Readers

    Don't be jerks. Also, you didn't read the article if you are making a one-to-one, A-series chip vs. Intel chip comparison. The article mentions multiple chips (each a quad-core chip) running Macs, which is quite interesting. Eight quad-core chips for 32 concurrent threads? Interesting, indeed.

    And quit snarking about the "is this breaking news" issue. It comes across as petty and childish.

    Thank you.
  • Reply 154 of 183
    staticx57 wrote: »
    Not at all! Not what I meant. If anyone could solve software problems by throwing hardware [more cores] at it, it would be Apple because they have the expertise and control of the entire stack -- in the case of ARM and iOS/OSX ... and significant leverage to get special treatment from Intel, etc.


    I posted earlier that:


    A current Intel CPU chip, in real time, interprets x86 CISC instructions and translates/executes them as RISC (ARM-like) instructions.


    How much of the power of, say, an i5 or i7 is spent [wasted] doing that?


    Couldn't Apple dedicate an A8X or A9X chip (or 2) to the same task -- translating x86 CISC into ARM RISC ... at equal or better performance, 1/2 the power and 1/3 the cost?
    You also have to translate from x86_64 to ARM64, not just CISC to RISC. It would be very unlikely to have a general-purpose CPU just sitting there to decode and convert instruction sets. It has been done before, but it has always been termed emulation, and that never really worked well or fast.

    I am also sure very little is wasted on an i7 or i5 as it has specialized hardware to do it. Read this, he says it better and more knowledgeably.
     


    Intel's latest designs only have to do µOp translation when dealing with *very* old software. Anything that is x86-64 native (OS X is) isn't using any of that, and is running at full speed with full register use. Intel solved that back with the Pentium Pro in 1996, and re-solved it more recently with the Pentium D when they moved away from the garbage Pentium 4 "NetBurst" architecture.

    More to the point, anyone thinking that you can just throw cores at it in order to make up the difference is very much falling prey to the biggest modern fallacy of general-purpose computing - multiprocessing absolutely does not scale linearly with core count anywhere except benchmarks.  Plus, unless the app is written to spin off shloads of threads and execute out-of-order, you're just creating wait-states as you wait for the result of that one thread you need to come back after all the other ones are done.  Oh, and if that result changes the work of the other ones?  Whoops - re-do all that work.

    There's a reason why multiprocessing was in the domain of servers for years before making its way to endpoints. Servers have much more predictable workloads; thus multithreading makes far more sense, and the performance gains from x-way CPU designs are far easier to realize.

    Claro!
  • Reply 155 of 183
    melgross Posts: 33,510 member
    Stay in the loop. The entire management team comprises quite a few new engineers, with several from Apple. The people leaving are business-background folks.

    AMD is following the Apple 2.0 reinvention process. Engineers are now running the show, top to bottom.

    Doesn't matter, they're in a race to the bottom.

    Which process technology are their latest processors running on? 32nm. That's right: they are two full generations behind Intel. Hard to believe. Their top units need a full 220 watts, compared to Intel's 125 watts for the equivalent units.

    Nah! Nothing for them now.
  • Reply 156 of 183
    Quote:

    Originally Posted by brlawyer View Post

    Absolutely correct. And as I've said elsewhere, Intel is on the verge of causing the same problems we had during the PowerPC fiasco - any doubts? Just look at how many threads complain about the reliance on a long-overdue Broadwell, not to mention Skylake (or whatever stupid codename Intel comes up with) and their attached dependencies (TB 3, etc.). Mac Pro? Waiting. New TB Display? Waiting. And on and on.

     

    In fact, and as much as I'd hate it, the OS X ARM transition will come MUCH sooner than anyone would expect; prepare your handkerchiefs. 


    "In fact, and as much as I'd hate it, the OS X ARM transition will come MUCH sooner than anyone would expect; prepare your handkerchiefs. "

     

    Actually, I'll be popping the champagne open.

  • Reply 157 of 183
    Quote:

    Originally Posted by pfisher View Post

     

    Most people could run an A-series laptop and be fine.

     

    Apple could continue the Pro machines running OS X and be able to run iOS apps.

     

    For now at least.

     

    Anyway, Apple is a mass-market company, not niche. With most people using iOS and not Macs, it makes sense, maybe, for them to later dump OS X.

     

    OS X could later run, if that is the case, on A-series chips.

     

    Look at what Apple did with Final Cut Pro and Aperture and their hardware. "Dumbing down"? Maybe. Or maybe going mass market with their own chips and owning the whole enchilada. And, above all, having control over chip development.

     

    Apple kicks butt. I am often critical of them, but for hardware - they smoke....this is only the beginning.


    "Anyway, Apple is a mass market company. Not niche."

     

    Pretty much.  They are now a consumer mobile devices company.  And I don't mean that in a negative way.

  • Reply 158 of 183
    ascii Posts: 5,936 member

    "Apple may shun Intel for custom A-series chips in new Macs within 1-2 years"

     

     

    Shunned... by the ARMish?

  • Reply 159 of 183
    nolamacguy Posts: 4,758 member
    Quote:
    Originally Posted by AppeX View Post



    Disaster ahead. It seems that Apple did not learn from past lessons. We need x86 for full, true, and real compatibility with the world (read: Windows).

     

    nonsense. apple's never been bound by backwards compatibility. where have you been?

     

    further, you don't need x86 to emulate Windows.

  • Reply 160 of 183
    nolamacguy Posts: 4,758 member
    Quote:

    Originally Posted by djames4242 View Post

     

    No it didn't actually. It ran emulated Windows. There's a big difference. Emulating a processor takes a huge performance hit, while virtualizing one does not. Trying to run a modern Windows environment with the Intel chipset emulated on an AX processor would be a horrible exercise.


     

    1) how do you know it would be horrible? unless you work inside apple's labs, you've never done it. ever.

     

    2) your use case as a pro needing Windows is an outlier -- you are not the majority of Mac customers.
