ARM Mac coming in first half of 2021, says Ming-Chi Kuo


Comments

  • Reply 101 of 149
    knowitallknowitall Posts: 1,648member
    frantisek said:
    Is anyone able to estimate performance of A14XX in 4 or 8 core or 4+4 Little/Big design?
    Yes, awesome. 
     0Likes 0Dislikes 0Informatives
  • Reply 102 of 149
    knowitallknowitall Posts: 1,648member
    red oak said:
    wizard69 said:
    red oak said:
    There is also (I believe) a substantial cost savings to Apple here 

    Assumptions: 

    -  Half of Macs use internally developed Apple CPU: 10M/yr    (the other half stays on Intel)
    -  Current cost Apple pays per Intel chip:  $200  (likely higher)
    -  Cost to Apple to manufacture new chip at TSMC:   $30 (likely lower)
    -  Dedicated Apple CPU chip employees and cost:   300 employees x $400K/yr fully weighted cost =  $120M 


    Total Intel Cost:  10M x $200 =  $2B
    Total internal Apple Cost:   (10M x $30 = $300M) + $120M = $420M 


    Total Annual Savings:    $2B - $420M = $1.6 Billion 


    It would be $3.2 Billion if Apple were to move it all internal.   That would increase overall gross margin of the company by ~ 1%.    It would be a huge financial win.   But more importantly, it gets Apple untangled from the mess that is Intel
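
    (If you want to poke at the numbers, here is the same back-of-the-envelope math as a quick Swift sketch; every figure is just the assumption above, not a confirmed cost:)

        // All inputs are the assumptions listed above, not confirmed figures.
        let macsMoved = 10_000_000.0            // half of Mac volume assumed to move to Apple CPUs
        let intelPricePerChip = 200.0           // assumed average price Apple pays Intel
        let applePricePerChip = 30.0            // assumed TSMC manufacturing cost per chip
        let chipTeamCost = 300.0 * 400_000.0    // 300 employees x $400K fully weighted = $120M

        let intelTotal = macsMoved * intelPricePerChip                   // $2.0B
        let appleTotal = macsMoved * applePricePerChip + chipTeamCost    // $0.42B
        print("Annual savings:", intelTotal - appleTotal)                // ~$1.58B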

    Not included here is the massive R&D effort/spend to get to launch.   Maybe this is one of the drivers of Apple's R&D spend exploding over the last 5 years 

    I suspect that your chip costs are wrong.  This is mainly because Apple is always on TSMC's bleeding-edge processes and Apple's chips are rather large.  Apple pays for being first, so I'd double that processor cost.  Maybe even more depending upon how it is packaged; if Apple does a multi-chip module we could be seeing a huge jump above $30.  

    As for Apple and Intel's chips, that will be a lot harder to determine.  For one, Intel is under the watchful eyes of regulators and cannot offer any discounts beyond those that are volume-based.   So Apple will pay about the same as any other builder with about the same volume.  The kicker here is that Apple often buys Intel hardware not sold to anybody else.  In any event I suspect their average price might be a bit more than $200 a crack. 

    As for R&D spending, who knows where it all goes.  Like the Pentagon, Apple likely has black projects we don't know about.   I kinda like to believe Apple's rumored car development is more about autonomous robots than cars.  
    Here is something on the A9 that is old but well thought out.   The estimate from IHS was $22 to $24.  But the chips will be much more complicated and larger.   Still, I would be surprised if Apple pays anything north of $40 to manufacture.

    https://www.fool.com/investing/general/2015/11/10/how-much-does-the-apple-inc-a9-cost-to-make.aspx


    And, here is Intel's official price list.  You're right that Apple buys chips made just for it.   Eyeballing,  my $200 estimate looks low.   The list also reinforces to me what a mess Intel is.   All these models and they could not even figure out mobile.   Steve would have walked in the door and slashed this monstrosity to 8 SKUs.  

    https://s21.q4cdn.com/600692695/files/doc_downloads/cpu_price/2020/02/Feb_24_20_Recommended_Customer_Price_List.pdf
    Intel has become a marketing company with no real product innovation.
     0Likes 0Dislikes 0Informatives
  • Reply 103 of 149
    knowitallknowitall Posts: 1,648member

    knowitall said:
    knowitall said:
    I expect Arm Macs this year.
    I bought my Arm desktop computer already.
    It’s a $44 Rock64; add a 27-inch 4K IPS monitor, a touch mouse (Bluetooth) and a Bluetooth keyboard and it will top out at $400.
    I expect Macs to be priced similarly.
    There is right about zero percent chance of this. The RK3328 in there supports a whopping 4GB of RAM too.
    I think you are right (hope is still allowed, I hope), but yes, 4GB, low power consumption, credit-card size and quad core (about twice the computing power of my 2009 27-inch iMac) make it very interesting.
    It's in the pipeline directly from China, so I can't wait.
    Oh yeah, don't get me wrong. I really like this kind of thing. I just don't see it (officially) for macOS.
    Also true, that's the biggest caveat, though when Apple releases an ARM Mac with macOS it will be a lot easier to get an ARM Hackintosh.
     0Likes 0Dislikes 0Informatives
  • Reply 104 of 149
    Rev2Livrev2liv Posts: 7unconfirmed, member
    mcdave said:
    larryjw said:
    lkrupp said:
    Any ideas on how Apple will handle the X86 code of current apps to run on ARM architecture? I am not educated on this. Is ARM close enough to X86 that the transition will be easy or will it require a Rosetta-like translation framework like the move from Moto 68000 to X86 did. Will we have universal binaries again or something else during the transition?
    Well, both ARM and X86 chips are micro coded. Could Apple microcode the x86 instruction set into the ARM?
    My guess is they already have, hence the lower than usual performance bump when they moved to Vortex (virtual Cortex).

    Technically yes, but, 

    1. It would be a nightmare to graft the x86 ISA onto an ARM core, since you’d end up with a bloated processor when an emulation layer or dynamic recompilation would suffice.

    2. Xcode 13 is processor-agnostic within the confines of the Apple ecosystem.

    3. You still need a PCIe 3.0 chipset/controller capable of macOS-grade I/O.

    4. Lack of drivers for 3rd party devices.

    5. No AVX-512, no QuickSync, and no support for Intel's other SIMD extensions.


     0Likes 0Dislikes 0Informatives
  • Reply 105 of 149
    I expect Apple has pretty solid data about what applications Mac users actually use.  I'm sure a decent chunk of the market (such as my wife) use no applications beyond a Web browser, the applications Apple provides, and Word, PowerPoint, and Excel.  And push comes to shove, many people in this category could do without the last 3.

    One approach for Apple would be to target this market with a high-quality, low-price, super portable MacOS laptop (that just happens to have very little third party support initially).  But is there enough of a market for this since the same audience can do just fine with an iPad and external keyboard.
     0Likes 0Dislikes 0Informatives
  • Reply 106 of 149
    melgrossmelgross Posts: 33,715member
    I expect Apple has pretty solid data about what applications Mac users actually use.  I'm sure a decent chunk of the market (such as my wife) use no applications beyond a Web browser, the applications Apple provides, and Word, PowerPoint, and Excel.  And push comes to shove, many people in this category could do without the last 3.

    One approach for Apple would be to target this market with a high-quality, low-price, super portable MacOS laptop (that just happens to have very little third party support initially).  But is there enough of a market for this since the same audience can do just fine with an iPad and external keyboard.
    Office is extremely important on the Mac. It’s so important, that Apple never tried to have iWork rival it. They just gave up. We had hoped Apple would enhance it to become a real rival, which they could have, but they didn’t.

    in fact, in the late 1990’s, when Apple found Microsoft had their hand in the QuickTime cookie jar, one of the requirements Apple made to Microsoft was that Microsoft upgrade Office for the Mac for at least 5 years.
    muthuk_vanalingam
     1Like 0Dislikes 0Informatives
  • Reply 107 of 149
    mattinozmattinoz Posts: 2,680member
    mattinoz said:
    tht said:
    lkrupp said:
    Any ideas on how Apple will handle the X86 code of current apps to run on ARM architecture? I am not educated on this. Is ARM close enough to X86 that the transition will be easy or will it require a Rosetta-like translation framework like the move from Moto 68000 to X86 did. Will we have universal binaries again or something else during the transition?
    Microsoft has an X86 emulation layer for ARM Windows. It isn't great, but it's a solution.

    A Rosetta-like framework is the most likely.
    It can be a lot like the last time.

    1. Apps using Apple’s frameworks and Xcode will mostly be a recompile
    2. Apps that mostly use Apple’s frameworks, but not Xcode, or use 3rd party libraries, will use a binary translator like Rosetta (switching x86 instructions with ARM instructions at runtime)
    3. Apps that only work with macOS/x86 with 3rd party x86 libraries will need to be run in a macOS/x86 VM a la classic. If Apple provides it, presumably, apps can be overlapped, and it won’t be macOS run inside a window.

    And once again, the single biggest impediment to success will be Microsoft and Adobe moving their apps to macOS/ARM. If that doesn’t happen, macOS/x86 won’t be successful. Presumably, Apple will just do the work to bring FCPX and LPX over. Plugins may need to use Rosetta for awhile before they are moved over.
    If you can run a VM, why not just have a much down-spec'ed Intel chip as an APU?

    Most of these older apps are going to be poorly multi-threaded, so a CPU/GPU SoC with good single-core performance could be dedicated to those apps, or to high-demand apps, running inside a dedicated sub-Mac instance inside the machine. When it isn't needed, it turns off.  That sets Apple up to make an eAPU, and the Mac Pro could be that eAPU for a whole office; same idea, just longer cables, since the lag can be dealt with.

    Your expectation that older apps are likely to be poorly multi-threaded seems to suggest you believe new apps are much better multi-threaded.  Grand Central Dispatch has been out for a number of years, and while it is another nice abstraction for multi-threading tasks, it changes nothing about the fact that most applications have no viable path to added threads increasing performance, because the nature of what they do offers no naturally parallelizable tasks.  In that situation, adding threads adds complexity (and likely bugs) while often incurring performance costs you wouldn't have otherwise.

    So you're half right: individual core performance will always matter for real-world applications, and it's this fact that made me laugh at the core-wars idea that octo-core Android phones were somehow more useful than the fewer, faster iPhone cores, even if those SoCs were capable of keeping all the CPU cores fed with data and instructions (no chance of that).
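
    To make that parallel-versus-serial point concrete, here is a minimal Swift/GCD sketch (purely illustrative, not taken from any real app): the first loop splits independent work across cores with DispatchQueue.concurrentPerform, while the second loop has a serial dependency, so only single-core speed helps it.

        import Dispatch

        // Independent chunks of work parallelise cleanly across cores with GCD.
        let chunks = 8
        let perChunk = 125_000
        var partialSums = [Double](repeating: 0, count: chunks)
        partialSums.withUnsafeMutableBufferPointer { sums in
            DispatchQueue.concurrentPerform(iterations: chunks) { c in
                var s = 0.0
                for i in (c * perChunk)..<((c + 1) * perChunk) { s += Double(i).squareRoot() }
                sums[c] = s                      // each iteration writes only its own slot
            }
        }
        let parallelTotal = partialSums.reduce(0, +)

        // Work where each step feeds the next has no parallel decomposition;
        // only single-core speed helps, no matter how many cores the SoC offers.
        var state = 1.0
        for i in 0..<(chunks * perChunk) {
            state = (state * 1.000001 + Double(i)).truncatingRemainder(dividingBy: 1e9)
        }
        print(parallelTotal, state)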
    To be fair, I don't use any complex productivity apps that aren't 15+ years old; no one has really challenged most of the stalwarts, since nobody could offer a competitive edge like multi-threading. Well, at least they haven't in my industry; the optimist in me hoped at least some other workers had found an advantage. That is the point: everyone has that one app that is critical to getting paid, and no matter how well it's written it could use one (extra-big) core with nothing getting in its way.

    To me it just makes sense; supercomputers pair chips for this reason. A small SoC acting as a Personal Assistant, keeping the Productive Processor free to do the real work, would make even more sense on a PC, where the user has lots of things going on that want to interrupt the main workflow. We know the A12 and beyond can handle all those sideline things very well, including remote access to another machine; in this instance the machine just wouldn't be so remote.

    If people still need Boot Camp, the internal remote machine could actually boot Windows to host the key app.

    It gives Apple some of the advantages of a pure ARM machine without selling the Mac short. They could still do a pure ARM machine under different branding, although the iPad could already be that if it had its own true OS not shackled to iOS conventions.

     0Likes 0Dislikes 0Informatives
  • Reply 108 of 149
    melgross said:
    asdasd said:
    melgross said:
    lkrupp said:
    Any ideas on how Apple will handle the X86 code of current apps to run on ARM architecture? I am not educated on this. Is ARM close enough to X86 that the transition will be easy or will it require a Rosetta-like translation framework like the move from Moto 68000 to X86 did. Will we have universal binaries again or something else during the transition?
    This is the problem I’ve been wondering about for some time. While some people dismiss this as an issue, or in most cases, don’t even think about it (aren’t aware it is an issue), it’s the biggest issue apple will need to deal with. In previous changeovers, even Apple was very lax in getting their own big apps out. It took a year for them. It took a long time for Adobe and Microsoft, with their massive software, to come over too.

    ARM is not close to x86. It’s optimized for battery life over performance. Apple and ARM have made significant advances on that front, but the instruction sets are different enough. We know from previous attempts at emulation, that a processor family needs to be 5 times as powerful in order to be able to run software at the same speed as the family they’re emulating. This hasn’t changed. Microsoft supposedly does it now, with their “universal” sdk. But they don’t, really. They require software to be rewritten, and recompiled for ARM. And there have still been issues with performance, specific features and bugs.

    I'm not saying it can’t be done, because obviously it can. But if Apple is really going to release a device next year, there will either be significant limitations, or they’ve figured out a way around them. My suggestion, which no one here has ever commented on, from my memory, is to add a dozen x86 instructions to the chip. It’s been found that 80% of the slowdown between chip families is from about a dozen instructions. The chip, or OS, could hand that over to those when native x86 software needs them. Individual instructions aren’t patented, or copyrighted, as far as I know. If true, that would give Apple a way around the problem.
    I can’t emphasise enough that the vast majority of apps produced for the Mac right now will be a recompile and some won’t even need that. The compiler is doing the work if you use the Apple tool chain. 

    Don't confuse the compiled machine code with the higher level frameworks that might be used. 
    You shouldn’t, because it’s a myth. Yes, small apps can be recompiled, and will often work without more revision other than to fix bugs that always creep in when recompiling. But anything else needs to be rewritten.  It seems that people here forget the announcements that some developers have made during these transition periods in Apple demonstrations. They come out and announce how easy it was to get their massive app up and running in just a weekend. But then, it actually takes six months, or more, before that app is released. Why, because for the demo, they showed a few chosen features that worked out, after frenzied work. But the rest needed a good deal of work to function properly.

    It's incredibly naive to think that this will be easy.
    I laugh when I read you making these insane claims.  If it takes a company 6 months to move only from one CPU to another CPU, where the APIs are identical, and they're not adding features/rewriting their applications in any meaningful way, then they're grossly incompetent.  Consider that when a new CPU/system comes out, they don't want to look bad compared to others, and it's a great time to add new features to show off the new CPU, assuming it gives any added power.  Short of them coding to the lowest-level CPU instructions for multimedia encoding/decoding or the like, assembly language is a major waste of developer time: incredibly few applications will get anything resembling the time/effort/cost back by rewriting anything in assembly language, because compilers more often than not do a better job with optimizations these days than the majority of the best hand-coded assembly language implementations.

    When you talk of developers needing to rewrite their applications because of a new CPU, even a new family/ISA where all that's changed is the CPU architecture? NOBODY REWRITES ANY MAJOR CODE FOR THAT!  That's insanely stupid for many reasons (including economically) and plain WRONG.  No major operating system is unable to be readily  rebuilt for other CPUs with only a very tiny amount of low-level assembly, and 99.99%  (admittedly, a number that's guessed, but no more than your no-proof data) of user applications in the last 10 years won't have any hand-coded assembly, regardless of their size. 

    The lowest-level language you'll find used in that percentage of apps these days is C.  While you can write it in a not-fully-portable manner because the C/C++ standards leave some details to the implementation of the vendors, it's actually very straightforward, even if you do convert between CPU architectures, to make the required changes to have the code work as it did before: it's not rocket science, it's not even interesting computer science.  If they do find a need to fix it to be portable, then they should ensure they never need to fix that again.

    For most applications, it truly is as simple as flipping an Xcode selection, as long as you've written in a reasonably portable way.  It's not hard in Objective-C/C/C++ to do so, and Swift makes it easier.  Switching from 32-bits to 64-bits had the biggest change with Apple changing from regular floats to double floats for CGFloats, but that's not nearly the hardest thing to fix.  You don't know what you're talking about.
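
    For what it's worth, the handful of places where source code notices the CPU architecture at all tend to look like the Swift sketch below (illustrative only, not from any shipping app); everything else is left to the compiler once the new architecture is added to the build settings. It also shows the CGFloat point: CGFloat is a Float on 32-bit targets and a Double on 64-bit ones, which is what the 64-bit migration forced apps to audit.

        import CoreGraphics

        // The rare architecture-specific spots sit behind conditionals like these;
        // the rest of the source is untouched when a new target architecture is added.
        #if arch(x86_64)
        let isa = "x86_64"
        #elseif arch(arm64)
        let isa = "arm64"
        #else
        let isa = "other"
        #endif

        // CGFloat is a Float on 32-bit targets and a Double on 64-bit ones.
        print(isa, MemoryLayout<CGFloat>.size)   // prints 8 (bytes) on any 64-bit Mac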

    asdasd
     1Like 0Dislikes 0Informatives
  • Reply 109 of 149
    crowleycrowley Posts: 10,453member
    I think anyone claiming any absolute knowledge of how easy or hard it will be to do something that at this time is entirely theoretical, using an unknown toolset, for an unconfirmed platform, running on rumoured architecture, is likely to be blagging it.  Especially when they're calling other people insane.
    muthuk_vanalingam
     1Like 0Dislikes 0Informatives
  • Reply 110 of 149
    asdasdasdasd Posts: 5,686member
    melgross said:
    I expect Apple has pretty solid data about what applications Mac users actually use.  I'm sure a decent chunk of the market (such as my wife) use no applications beyond a Web browser, the applications Apple provides, and Word, PowerPoint, and Excel.  And push comes to shove, many people in this category could do without the last 3.

    One approach for Apple would be to target this market with a high-quality, low-price, super portable MacOS laptop (that just happens to have very little third party support initially).  But is there enough of a market for this since the same audience can do just fine with an iPad and external keyboard.
    Office is extremely important on the Mac. It’s so important, that Apple never tried to have iWork rival it. They just gave up. We had hoped Apple would enhance it to become a real rival, which they could have, but they didn’t.

    in fact, in the late 1990’s, when Apple found Microsoft had their hand in the QuickTime cookie jar, one of the requirements Apple made to Microsoft was that Microsoft upgrade Office for the Mac for at least 5 years.
    Office is written in C++ and Objective C according to this wiki page.

    https://en.wikipedia.org/wiki/Microsoft_Office

    So that should compile down to ARM code readily enough. 
    canukstorm
     1Like 0Dislikes 0Informatives
  • Reply 111 of 149
    asdasdasdasd Posts: 5,686member

    crowley said:
    I think anyone claiming any absolute knowledge of how easy or hard it will be to do something that at this time is entirely theoretical, using an unknown toolset, for an unconfirmed platform, running on rumoured architecture, is likely to be blagging it.  Especially when they're calling other people insane.
    Some of us work on this stuff. Anything that compiles now using Apple's toolset for Mac OS Intel will work on ARM, with a recompile. Unless it was uploaded to the App Store using bitcode in which case it won't need a recompile. 

    edit: 

    Anything in modern code, past the carbon era. But that probably doesn't even compile on the latest Xcode. If developers have transitioned to 64bit using Xcode they are golden.
    edited February 2020
     0Likes 0Dislikes 0Informatives
  • Reply 112 of 149
    Solisoli Posts: 10,038member
    asdasd said:
    melgross said:
    I expect Apple has pretty solid data about what applications Mac users actually use.  I'm sure a decent chunk of the market (such as my wife) use no applications beyond a Web browser, the applications Apple provides, and Word, PowerPoint, and Excel.  And push comes to shove, many people in this category could do without the last 3.

    One approach for Apple would be to target this market with a high-quality, low-price, super portable MacOS laptop (that just happens to have very little third party support initially).  But is there enough of a market for this since the same audience can do just fine with an iPad and external keyboard.
    Office is extremely important on the Mac. It’s so important, that Apple never tried to have iWork rival it. They just gave up. We had hoped Apple would enhance it to become a real rival, which they could have, but they didn’t.

    in fact, in the late 1990’s, when Apple found Microsoft had their hand in the QuickTime cookie jar, one of the requirements Apple made to Microsoft was that Microsoft upgrade Office for the Mac for at least 5 years.
    Office is written in C++ and Objective C according to this wiki page.

    https://en.wikipedia.org/wiki/Microsoft_Office

    So that should compile down to ARM code readily enough. 
    But even if it can't: 1) MS has been writing for iOS devices for over a decade, 2) this is very likely going to be for low-end Macs first, where expensive and complex 3rd-party solutions aren't as necessary for consumers as they are on the upper end, and 3) massive changes have occurred in their code base, the inclusion of the MAS, and cross-app exportation since the days of Apple getting MS to agree to build Office for Mac (so long as Apple agreed to make IE for Mac the default web browser).

    I don't have Office on my Mac because what I get from Pages and Numbers is sufficient for my needs. That's not to say that Word and Excel aren't more robust apps, but most people simply don't need a lot of the features included, just like most are probably alright with Mail.app instead of buying Outlook for Mac.

    It's odd how some people keep making all sorts of weak excuses for why Apple wouldn't do what has been so clearly written on the wall for years. I was ridiculed here when I brought up this switch to ARM as a hypothesis and an argument as to how it could benefit Apple. The same thing happened when I hypothesized how Apple could benefit from making macOS a free, annual update like iOS, as well as countless other paradigm-shifting moves. At least we're in the 2nd stage now:

    All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident. —Arthur Schopenhauer
    edited February 2020
    asdasd
     1Like 0Dislikes 0Informatives
  • Reply 113 of 149
    crowleycrowley Posts: 10,453member
    asdasd said:

    crowley said:
    I think anyone claiming any absolute knowledge of how easy or hard it will be to do something that at this time is entirely theoretical, using an unknown toolset, for an unconfirmed platform, running on rumoured architecture, is likely to be blagging it.  Especially when they're calling other people insane.
    Some of us work on this stuff. Anything that compiles now using Apple's toolset for Mac OS Intel will work on ARM, with a recompile. Unless it was uploaded to the App Store using bitcode in which case it won't need a recompile. 

    edit: 

    Anything in modern code, past the carbon era. But that probably doesn't even compile on the latest Xcode. If developers have transitioned to 64bit using Xcode they are golden.
    Caveat.
    Caveat.
    Caveat.

    Ergo, it may be easy, it may be hard, depends on the situation, and it depends on the solution, and it depends on what Apple actually announces.
     0Likes 0Dislikes 0Informatives
  • Reply 114 of 149
    Solisoli Posts: 10,038member
    crowley said:
    asdasd said:
    crowley said:
    I think anyone claiming any absolute knowledge of how easy or hard it will be to do something that at this time is entirely theoretical, using an unknown toolset, for an unconfirmed platform, running on rumoured architecture, is likely to be blagging it.  Especially when they're calling other people insane.
    Some of us work on this stuff. Anything that compiles now using Apple's toolset for Mac OS Intel will work on ARM, with a recompile. Unless it was uploaded to the App Store using bitcode in which case it won't need a recompile. 

    edit: 

    Anything in modern code, past the carbon era. But that probably doesn't even compile on the latest Xcode. If developers have transitioned to 64bit using Xcode they are golden.
    Caveat.
    Caveat.
    Caveat.

    Ergo, it may be easy, it may be hard, depends on the situation, and it depends on the solution, and it depends on what Apple actually announces.
    His statement says it's already very doable and may not require any work at all for some devs, so I'm not sure why you're attacking his salient comment by saying there are "caveats" that show how it could be even less work for devs.
    edited February 2020
     0Likes 0Dislikes 0Informatives
  • Reply 115 of 149
    asdasdasdasd Posts: 5,686member
    crowley said:
    asdasd said:

    crowley said:
    I think anyone claiming any absolute knowledge of how easy or hard it will be to do something that at this time is entirely theoretical, using an unknown toolset, for an unconfirmed platform, running on rumoured architecture, is likely to be blagging it.  Especially when they're calling other people insane.
    Some of us work on this stuff. Anything that compiles now using Apple's toolset for Mac OS Intel will work on ARM, with a recompile. Unless it was uploaded to the App Store using bitcode in which case it won't need a recompile. 

    edit: 

    Anything in modern code, past the carbon era. But that probably doesn't even compile on the latest Xcode. If developers have transitioned to 64bit using Xcode they are golden.
    Caveat.
    Caveat.
    Caveat.

    Ergo, it may be easy, it may be hard, depends on the situation, and it depends on the solution, and it depends on what Apple actually announces.
    No real caveat. Modern means 20 years or so. 

    Can you explain, using your best expertise, why an Objective-C or Swift application would not work, or what difficulties you would expect?
     0Likes 0Dislikes 0Informatives
  • Reply 116 of 149
    melgrossmelgross Posts: 33,715member
    melgross said:
    asdasd said:
    melgross said:
    lkrupp said:
    Any ideas on how Apple will handle the X86 code of current apps to run on ARM architecture? I am not educated on this. Is ARM close enough to X86 that the transition will be easy or will it require a Rosetta-like translation framework like the move from Moto 68000 to X86 did. Will we have universal binaries again or something else during the transition?
    This is the problem I’ve been wondering about for some time. While some people dismiss this as an issue, or in most cases, don’t even think about it (aren’t aware it is an issue), it’s the biggest issue apple will need to deal with. In previous changeovers, even Apple was very lax in getting their own big apps out. It took a year for them. It took a long time for Adobe and Microsoft, with their massive software, to come over too.

    ARM is not close to x86. It’s optimized for battery life over performance. Apple and ARM have made significant advances on that front, but the instruction sets are different enough. We know from previous attempts at emulation, that a processor family needs to be 5 times as powerful in order to be able to run software at the same speed as the family they’re emulating. This hasn’t changed. Microsoft supposedly does it now, with their “universal” sdk. But they don’t, really. They require software to be rewritten, and recompiled for ARM. And there have still been issues with performance, specific features and bugs.

    I'm not saying it can’t be done, because obviously it can. But if Apple is really going to release a device next year, there will either be significant limitations, or they’ve figured out a way around them. My suggestion, which no one here has ever commented on, from my memory, is to add a dozen x86 instructions to the chip. It’s been found that 80% of the slowdown between chip families is from about a dozen instructions. The chip, or OS, could hand that over to those when native x86 software needs them. Individual instructions aren’t patented, or copyrighted, as far as I know. If true, that would give Apple a way around the problem.
    I can’t emphasise enough that the vast majority of apps produced for the Mac right now will be a recompile and some won’t even need that. The compiler is doing the work if you use the Apple tool chain. 

    Don't confuse the compiled machine code with the higher level frameworks that might be used. 
    You shouldn’t, because it’s a myth. Yes, small apps can be recompiled, and will often work without more revision other than to fix bugs that always creep in when recompiling. But anything else needs to be rewritten.  It seems that people here forget the announcements that some developers have made during these transition periods in Apple demonstrations. They come out and announce how easy it was to get their massive app up and running in just a weekend. But then, it actually takes six months, or more, before that app is released. Why, because for the demo, they showed a few chosen features that worked out, after frenzied work. But the rest needed a good deal of work to function properly.

    It's incredibly naive to think that this will be easy.
    I laugh when I read you making these insane claims.  If it takes a company 6 months to move only from one CPU to another CPU, where the APIs are identical, and they're not adding features/rewriting their applications in any meaningful way, then they're grossly incompetent.  Consider that when a new CPU/system comes out, they don't want to look bad compared to others, and it's a great time to add new features to show off the new CPU, assuming it gives any added power.  Short of them coding to the lowest-level CPU instructions for multimedia encoding/decoding or the like, assembly language is a major waste of developer time: incredibly few applications will get anything resembling the time/effort/cost back by rewriting anything in assembly language, because compilers more often than not do a better job with optimizations these days than the majority of the best hand-coded assembly language implementations.

    When you talk of developers needing to rewrite their applications because of a new CPU, even a new family/ISA where all that's changed is the CPU architecture? NOBODY REWRITES ANY MAJOR CODE FOR THAT!  That's insanely stupid for many reasons (including economically) and plain WRONG.  No major operating system is unable to be readily  rebuilt for other CPUs with only a very tiny amount of low-level assembly, and 99.99%  (admittedly, a number that's guessed, but no more than your no-proof data) of user applications in the last 10 years won't have any hand-coded assembly, regardless of their size. 

    The lowest-level language you'll find used in that percentage of apps these days is C.  While you can write it in a not-fully-portable manner because the C/C++ standards leave some details to the implementation of the vendors, it's actually very straightforward, even if you do convert between CPU architectures, to make the required changes to have the code work as it did before: it's not rocket science, it's not even interesting computer science.  If they do find a need to fix it to be portable, then they should ensure they never need to fix that again.

    For most applications, it truly is as simple as flipping an Xcode selection, as long as you've written in a reasonably portable way.  It's not hard in Objective-C/C/C++ to do so, and Swift makes it easier.  Switching from 32-bits to 64-bits had the biggest change with Apple changing from regular floats to double floats for CGFloats, but that's not nearly the hardest thing to fix.  You don't know what you're talking about.

    Yes, it took Apple a YEAR to bring Final Cut over. More than half a year to bring Logic Pro over. It took both Adobe and Microsoft a year as well. It took WolframAlpha (the company doing the demo) about 7 months to move over. There are numerous other examples.

    Of course, by your thinking, they are all incompetent.
    muthuk_vanalingam
     0Likes 0Dislikes 1Informative
  • Reply 117 of 149
    asdasdasdasd Posts: 5,686member
    melgross said:
    melgross said:
    asdasd said:
    melgross said:
    lkrupp said:
    Any ideas on how Apple will handle the X86 code of current apps to run on ARM architecture? I am not educated on this. Is ARM close enough to X86 that the transition will be easy or will it require a Rosetta-like translation framework like the move from Moto 68000 to X86 did. Will we have universal binaries again or something else during the transition?
    This is the problem I’ve been wondering about for some time. While some people dismiss this as an issue, or in most cases, don’t even think about it (aren’t aware it is an issue), it’s the biggest issue apple will need to deal with. In previous changeovers, even Apple was very lax in getting their own big apps out. It took a year for them. It took a long time for Adobe and Microsoft, with their massive software, to come over too.

    ARM is not close to x86. It’s optimized for battery life over performance. Apple and ARM have made significant advances on that front, but the instruction sets are different enough. We know from previous attempts at emulation, that a processor family needs to be 5 times as powerful in order to be able to run software at the same speed as the family they’re emulating. This hasn’t changed. Microsoft supposedly does it now, with their “universal” sdk. But they don’t, really. They require software to be rewritten, and recompiled for ARM. And there have still been issues with performance, specific features and bugs.

    I'm not saying it can’t be done, because obviously it can. But if Apple is really going to release a device next year, there will either be significant limitations, or they’ve figured out a way around them. My suggestion, which no one here has ever commented on, from my memory, is to add a dozen x86 instructions to the chip. It’s been found that 80% of the slowdown between chip families is from about a dozen instructions. The chip, or OS, could hand that over to those when native x86 software needs them. Individual instructions aren’t patented, or copyrighted, as far as I know. If true, that would give Apple a way around the problem.
    I can’t emphasise enough that the vast majority of apps produced for the Mac right now will be a recompile and some won’t even need that. The compiler is doing the work if you use the Apple tool chain. 

    Don't confuse the compiled machine code with the higher level frameworks that might be used. 
    You shouldn’t, because it’s a myth. Yes, small apps can be recompiled, and will often work without more revision other than to fix bugs that always creep in when recompiling. But anything else needs to be rewritten.  It seems that people here forget the announcements that some developers have made during these transition periods in Apple demonstrations. They come out and announce how easy it was to get their massive app up and running in just a weekend. But then, it actually takes six months, or more, before that app is released. Why, because for the demo, they showed a few chosen features that worked out, after frenzied work. But the rest needed a good deal of work to function properly.

    It's incredibly naive to think that this will be easy.
    I laugh when I read you making these insane claims.  If it takes a company 6 months to move only from one CPU to another CPU, where the APIs are identical, and they're not adding features/rewriting their applications in any meaningful way, then they're grossly incompetent.  Consider that when a new CPU/system comes out, they don't want to look bad compared to others, and it's a great time to add new features to show off the new CPU, assuming it gives any added power.  Short of them coding to the lowest-level CPU instructions for multimedia encoding/decoding or the like, assembly language is a major waste of developer time: incredibly few applications will get anything resembling the time/effort/cost back by rewriting anything in assembly language, because compilers more often than not do a better job with optimizations these days than the majority of the best hand-coded assembly language implementations.

    When you talk of developers needing to rewrite their applications because of a new CPU, even a new family/ISA where all that's changed is the CPU architecture? NOBODY REWRITES ANY MAJOR CODE FOR THAT!  That's insanely stupid for many reasons (including economically) and plain WRONG.  No major operating system is unable to be readily  rebuilt for other CPUs with only a very tiny amount of low-level assembly, and 99.99%  (admittedly, a number that's guessed, but no more than your no-proof data) of user applications in the last 10 years won't have any hand-coded assembly, regardless of their size. 

    The lowest-level language you'll find used in that percentage of apps these days is C.  While you can write it in a not-fully-portable manner because the C/C++ standards leave some details to the implementation of the vendors, it's actually very straightforward, even if you do convert between CPU architectures, to make the required changes to have the code work as it did before: it's not rocket science, it's not even interesting computer science.  If they do find a need to fix it to be portable, then they should ensure they never need to fix that again.

    For most applications, it truly is as simple as flipping an Xcode selection, as long as you've written in a reasonably portable way.  It's not hard in Objective-C/C/C++ to do so, and Swift makes it easier.  Switching from 32-bits to 64-bits had the biggest change with Apple changing from regular floats to double floats for CGFloats, but that's not nearly the hardest thing to fix.  You don't know what you're talking about.

    Yes, it took Apple a YEAR to bring Final Cut over. More than half a year to bring Logic Pro over. It took both Adobe and Microsoft a year as well. It took WolframAlpha (the company doing the demo) about 7 months to move over. There are numerous other examples.

    Of course, by your thinking, they are all incompetent.
    You realise this isn’t the same thing, right? In the OS 9 to OS X days devs had to change the API they were calling (a process called carbonisation) from the old OS 9 code to a subset of that API, which for some devs was difficult because they used some low-level and often dangerous features that were removed.

    In the move to 64-bit, some lower-level C-type structures had to be changed. That’s done now. 

    To move to ARM today, if the developers are using the Apple compilers, there should be no change in the API. No change in the API means no work. 
    edited February 2020
    Soli
     0Likes 0Dislikes 1Informative
  • Reply 118 of 149
    thttht Posts: 6,018member
    wizard69 said:
    hexclock said:
    red oak said:
    An "A15X" type 5 nm chip in a laptop is going to blow the doors off anything offered by Intel

    Batteries in the MBP are 2-2.5x the size of iPad Pros.   Plus the thermal envelope of the MBPs are much greater.    It will allow Apple to dramatically increase the number of cores plus boost the clock frequency.    It is going to be something to behold 

    This is the laptop I want 

    How about a dual processor Mac Mini?
    The future is chiplet technology, putting as many cores as you need in one package.  We don’t need a dual-socket Mini; we just need Apple to pay attention to the Mini, because even today the Mini is a joke.  

    In any event, if you are not up on the latest tech, read up on chiplet tech and the way AMD is using it to beat Intel senseless.  AMD’s high-end Threadripper now comes with up to 64 cores in one package.  That is 64 cores and 128 threads.  These chips will not fit into the Mini but simply highlight chiplet tech.  

    Frankly, the newest Ryzen mobile chips could do wonders in a Mini, as these are 8-core/16-thread at the top end with a very good GPU, all at 45 watts.  No chiplets in Ryzen mobile either, just one well-designed processor chip.   AMD has a whole range of desktop chips too, but the point is the Mini could easily move forward if Apple really wanted it to, and that is on x86.   On ARM we could probably see even more CPU cores.    The interesting thing about ARM is that Apple can tailor it any way Apple sees fit.  I would not be surprised at all to see far more AI-acceleration cores than regular CPU cores.  It is an interesting time in processor design, and CPU cores are now just a portion of the overall chip.    Apple could easily put out a 16-core ARM-based processor if they wanted; at this point it comes down to whether they want to allocate the die space.  
    I don’t think anyone is going to put two sockets into a Mac mini, nor are they going to put a lot of cores in it. Apple should have updated it with Core i7-9700 and i9-9900 processors, 8-core 65W parts at varying frequencies, and dropped the 4-core from the lineup for the holiday quarter, or at least next month; that’s basically just about what it can do with the 65W TDP processors the Mac mini is designed for. Who knows, maybe it’ll happen next month. Don’t think Intel is releasing 10 nm desktop chips until the fall.

    AMD’s big advantage over Intel is that they are using a mature TSMC 7 nm fab while Intel is struggling to get their 10 nm to produce. The two are about equivalent in transistor densities; it’s just that TSMC’s has actually been working for about 2 years now. They are about a factor of 2x in transistors over Intel’s 14 nm fab, which is now in its 5th year I think. 6th year? That’s a huge advantage that is pretty hard to overcome, if not impossible.

    Apple already does MCM packaging with their SoCs. The RAM and SoC are layered on top of each other in iPhone SoC packaging. If Apple is making macOS/ARM laptops, I’m thinking they will design them like iPad chips and logic boards. They are going to make it as much of a System-in-Package as possible, and RAM could be in-package. It will only be in the high-end machines that require a lot of RAM, discrete GPUs, and/or slots where they’ll break it out of the packaging.

    Apple isn’t in the business of selling server chips, yet I guess, so all the fancy MCM integration techniques like AMD’s chiplets, switched fabrics, in-package IO or Intel’s Foveros/EMIB, really aren’t going to come into play. They aren’t going to be making >500 mm^2 server chips or equivalent with MCM packaging. AMD’s MCM strategy is to not make gigantic chips either, as they don’t have their own fabs, and they need to maximize their production efficiency.

    I don’t think Apple needs to do what AMD is doing. I think they will be fine with multi-million unit runs of 8-core w/x GPU performance and 16-core w/2x GPU performance for laptops and lower end desktops. For the higher end desktops, there needs to be some thought. If they sell like 1 million high end machines a year, a special run with a monolithic die and a discrete GPU could be perfectly reasonable for them.

    So, an extension of what they do between iPhone SoCs and high end iPad SoCs. They’ll add CPU core complexes, GPU cores, memory channels, neural engine cores, other IO as required for the machine it is going into. The die sizes aren’t going to be bigger than 250 mm^2, maybe 300 mm^2.


     0Likes 0Dislikes 0Informatives
  • Reply 119 of 149
    mcdavemcdave Posts: 1,927member
    melgross said:

    mcdave said:
    larryjw said:
    lkrupp said:
    Any ideas on how Apple will handle the X86 code of current apps to run on ARM architecture? I am not educated on this. Is ARM close enough to X86 that the transition will be easy or will it require a Rosetta-like translation framework like the move from Moto 68000 to X86 did. Will we have universal binaries again or something else during the transition?
    Well, both ARM and X86 chips are micro coded. Could Apple microcode the x86 instruction set into the ARM?
    My guess is they already have, hence the lower than usual performance bump when they moved to Vortex (virtual Cortex).
    mcdave said:
    ...yet more proprietary lockdown upgrade fatigue...?
    The ARM AArch64 ISA is no more proprietary than x86. What’s your point?
    No. It was less because companies are running out of headroom in chip development. You’ve kept up with what’s happening, haven’t you? You know that Moore’s Law is pretty much kaput. Apple isn’t immune from that.

    x86 is proprietary, as is ARM. You need to read more about all of this instead of just making things up from your imagination.
    Upon reflection, A10 through A13 single-core performance gains have been tapering consistently despite node reductions. I’m assuming power takes priority over performance.

    You realise you’re agreeing with my second point?
     0Likes 0Dislikes 0Informatives
  • Reply 120 of 149
    mcdavemcdave Posts: 1,927member
    melgross said:
    I expect Apple has pretty solid data about what applications Mac users actually use.  I'm sure a decent chunk of the market (such as my wife) use no applications beyond a Web browser, the applications Apple provides, and Word, PowerPoint, and Excel.  And push comes to shove, many people in this category could do without the last 3.

    One approach for Apple would be to target this market with a high-quality, low-price, super portable MacOS laptop (that just happens to have very little third party support initially).  But is there enough of a market for this since the same audience can do just fine with an iPad and external keyboard.
    Office is extremely important on the Mac. It’s so important, that Apple never tried to have iWork rival it. They just gave up. We had hoped Apple would enhance it to become a real rival, which they could have, but they didn’t.

    in fact, in the late 1990’s, when Apple found Microsoft had their hand in the QuickTime cookie jar, one of the requirements Apple made to Microsoft was that Microsoft upgrade Office for the Mac for at least 5 years.
    Google Docs proved Office was overkill. It also taught everyone that Pages is adequate & if your output is PDF, the creation tool is irrelevant.
     0Likes 0Dislikes 0Informatives