anonconformist

About

Username
anonconformist
Joined
Visits
111
Last Active
Roles
member
Points
585
Badges
0
Posts
202
  • ARM Mac coming in first half of 2021, says Ming-Chi Kuo

    tht said:
    melgross said:
    I expect Apple has pretty solid data about what applications Mac users actually use. I'm sure a decent chunk of the market (such as my wife) uses no applications beyond a web browser, the applications Apple provides, and Word, PowerPoint, and Excel. And if push comes to shove, many people in this category could do without the last three.

    One approach for Apple would be to target this market with a high-quality, low-price, super portable macOS laptop (that just happens to have very little third-party support initially). But is there enough of a market for this, since the same audience can do just fine with an iPad and an external keyboard?
    Office is extremely important on the Mac. It’s so important that Apple never tried to have iWork rival it. They just gave up. We had hoped Apple would enhance it to become a real rival, which they could have, but they didn’t.

    In fact, in the late 1990s, when Apple found Microsoft had its hand in the QuickTime cookie jar, one of the requirements Apple made of Microsoft was that it keep upgrading Office for the Mac for at least five years.
    They should have.
    Once Google released Docs, free for consumers and priced per business license for the rest of their office automation suite, there was no room left for iWork or for any office suite other than MS Office, which is the “standard” for most large businesses and schools. iWork could only ever be a free suite of apps for Apple customers. Keynote is the best presentation software, though.

    I don’t think there was a real window to begin with. Petabytes, or whatever absurdly high amount, of files are in MS Office formats. That legacy drives Office usage in perpetuity. I’m not sure how people get off that train other than MS going out of business for unrelated reasons. Making iWork popular would be a pretty difficult, if not impossible, road to take. If Google decided to stop doing G Suite, that would open a window too, but that isn’t going to happen unless something bad happens to Google.

    Apple must get MS to port Office to macOS/ARM. Office is the standard office automation tool for large businesses. They won’t be selling ARM Macs to these places without it because all those old files are in Office file formats.

    It’s the same story for Adobe. An absurdly high amount of storage is in Adobe app formats. Compatibility is hugely important, and it is imperative for them to get Adobe to port as fast as possible.
    Microsoft has always made money on Office for Mac, so no persuasion required beyond Microsoft looking out for Microsoft.

    ”Porting” will be as easy as clicking a checkbox to also build for ARM (Apple will likely require fat binaries anyway), building it, running a BVT, and shipping it. Trivial. Microsoft has experience making code cross-CPU portable by default, so unless Apple does something stupid like changing API behavior from Intel macOS, this is less involved than investigating a crash bug in Word, if they get one.
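
    A rough sketch of what that looks like in practice, assuming Swift and Xcode's universal-binary build (the function below is hypothetical, not from any shipping app): almost everything compiles unchanged for both slices, and only genuinely CPU-specific paths ever need a build-time branch.

        // Hypothetical sketch: the vast majority of application code is
        // architecture-neutral and simply gets compiled twice, once per
        // slice of the universal (fat) binary.
        func buildSliceDescription() -> String {
            #if arch(arm64)
            return "compiled into the arm64 slice"
            #elseif arch(x86_64)
            return "compiled into the x86_64 slice"
            #else
            return "compiled for some other architecture"
            #endif
        }

        // Only code with real CPU-specific assumptions (hand-written SIMD,
        // inline assembly, a JIT) needs a branch like the one above.
        print("This function was \(buildSliceDescription()).")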
  • ARM Mac coming in first half of 2021, says Ming-Chi Kuo

    melgross said:
    asdasd said:
    melgross said:
    lkrupp said:
    Any ideas on how Apple will handle the X86 code of current apps to run on ARM architecture? I am not educated on this. Is ARM close enough to X86 that the transition will be easy or will it require a Rosetta-like translation framework like the move from Moto 68000 to X86 did. Will we have universal binaries again or something else during the transition?
    This is the problem I’ve been wondering about for some time. While some people dismiss this as an issue, or in most cases don’t even think about it (aren’t aware it is an issue), it’s the biggest issue Apple will need to deal with. In previous changeovers, even Apple was very lax in getting its own big apps out; it took them a year. It took a long time for Adobe and Microsoft, with their massive software, to come over too.

    ARM is not close to x86. It’s optimized for battery life over performance. Apple and ARM have made significant advances on that front, but the instruction sets are different enough. We know from previous attempts at emulation that a processor family needs to be 5 times as powerful to run software at the same speed as the family it’s emulating. This hasn’t changed. Microsoft supposedly does it now with their “universal” SDK. But they don’t, really: they require software to be rewritten and recompiled for ARM, and there have still been issues with performance, specific features, and bugs.

    I’m not saying it can’t be done, because obviously it can. But if Apple is really going to release a device next year, there will either be significant limitations, or they’ve figured out a way around them. My suggestion, which no one here has ever commented on, from my memory, is to add a dozen x86 instructions to the chip. It’s been found that 80% of the slowdown between chip families is from about a dozen instructions. The chip, or the OS, could hand execution over to those instructions when native x86 software needs them. Individual instructions aren’t patented or copyrighted, as far as I know. If true, that would give Apple a way around the problem.
    I can’t emphasise enough that the vast majority of apps produced for the Mac right now will be a recompile and some won’t even need that. The compiler is doing the work if you use the Apple tool chain. 

    Don’t confuse the compiled machine code with the higher-level frameworks that might be used. 
    You shouldn’t, because it’s a myth. Yes, small apps can be recompiled, and will often work without more revision than fixing the bugs that always creep in when recompiling. But anything else needs to be rewritten. It seems that people here forget the announcements that some developers have made during these transition periods in Apple demonstrations. They come out and announce how easy it was to get their massive app up and running in just a weekend. But then it actually takes six months, or more, before that app is released. Why? Because for the demo they showed a few chosen features that worked, after frenzied effort. The rest needed a good deal of work to function properly.

    It’s incredibly naive to think that this will be easy.
    I laugh when I read you making these insane claims. If it takes a company 6 months to move only from one CPU to another, where the APIs are identical and they're not adding features or rewriting their applications in any meaningful way, then they're grossly incompetent. Consider that when a new CPU/system comes out, they don't want to look bad compared to others, and it's a great time to add new features to show off the new CPU, assuming it gives any added power. Short of coding to the lowest-level CPU instructions for multimedia encoding/decoding or the like, assembly language is a major waste of developer time: incredibly few applications will get anything resembling the time/effort/cost back by rewriting anything in assembly, because compilers more often than not do a better job at optimization these days than the majority of the best hand-coded assembly implementations.

    When you talk of developers needing to rewrite their applications because of a new CPU, even a new family/ISA where all that's changed is the CPU architecture? NOBODY REWRITES ANY MAJOR CODE FOR THAT! That's insanely stupid for many reasons (including economically) and plain WRONG. Every major operating system can be readily rebuilt for other CPUs with only a very tiny amount of low-level assembly, and 99.99% (admittedly a guessed number, but no worse than your no-proof data) of user applications in the last 10 years won't have any hand-coded assembly, regardless of their size.

    The lowest-level language you'll find used in that percentage of apps these days is C. While you can write it in a not-fully-portable manner, because the C/C++ standards leave some details to each vendor's implementation, it's actually very straightforward, even when converting between CPU architectures, to make the changes required for the code to work as it did before: it's not rocket science, it's not even interesting computer science. And if they do find something to fix to make the code portable, they should ensure they never need to fix it again.

    For most applications, it truly is as simple as flipping an Xcode selection, as long as you've written in a reasonably portable way. It's not hard to do so in Objective-C/C/C++, and Swift makes it easier. The biggest change in switching from 32 bits to 64 bits was Apple moving CGFloat from regular floats to double floats, and even that's not nearly the hardest thing to fix. You don't know what you're talking about.
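
    To make the CGFloat point concrete, here's a minimal Swift sketch (illustrative only, assuming an Apple platform where CoreGraphics is available): CGFloat and Int track the width of the target architecture, so code that sticks to those platform types, instead of hard-coding Float or Int32, recompiles cleanly for a new CPU.

        import CoreGraphics

        // CGFloat is a single-precision Float on 32-bit Apple targets and a
        // Double on 64-bit ones; Int likewise tracks the pointer width.
        print("CGFloat is \(MemoryLayout<CGFloat>.size * 8) bits wide")
        print("Int is \(MemoryLayout<Int>.size * 8) bits wide")

        // Portable: the platform type carries the width for you.
        func scaled(_ value: CGFloat, by factor: CGFloat) -> CGFloat {
            return value * factor
        }

        // Hard-coding Float here instead would silently lose precision on
        // 64-bit targets whenever results flow back into CGFloat-based APIs.
        print(scaled(2.5, by: 4.0))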

  • ARM Mac coming in first half of 2021, says Ming-Chi Kuo

    red oak said:
    tht said:
    lkrupp said:
    Any ideas on how Apple will handle the X86 code of current apps to run on ARM architecture? I am not educated on this. Is ARM close enough to X86 that the transition will be easy or will it require a Rosetta-like translation framework like the move from Moto 68000 to X86 did. Will we have universal binaries again or something else during the transition?
    Microsoft has an X86 emulation layer for ARM Windows. It isn't great, but it's a solution.

    A Rosetta-like framework is the most likely.
    It can be a lot like the last time.

    1. Apps using Apple’s frameworks and Xcode will mostly be a recompile
    2. Apps that mostly use Apple’s frameworks, but not Xcode, or use 3rd party libraries, will use a binary translator like Rosetta (switching x86 instructions with ARM instructions at runtime)
    3. Apps that only work with macOS/x86 with 3rd party x86 libraries will need to be run in a macOS/x86 VM a la classic. If Apple provides it, presumably, apps can be overlapped, and it won’t be macOS run inside a window.

    And once again, the single biggest impediment to success will be Microsoft and Adobe moving their apps to macOS/ARM. If that doesn’t happen, macOS/ARM won’t be successful. Presumably, Apple will just do the work to bring FCPX and LPX over. Plugins may need to use Rosetta for a while before they are moved over.
    Microsoft and (to a lesser extent) Adobe have invested heavily in developing iOS apps. I don't think there will be a lot of work to make them native on an ARM Mac.


    They both have native macOS apps, so there's zero overhead for "porting": by now (13 years of iOS and counting!) they will already have their performance-sensitive media code optimized with ARM-native instructions wherever they somehow can't use macOS APIs that already wrap top-performance, CPU-native versions of those functions.

    With likely few or no significant exceptions, the costs will be a simple change of an Xcode build configuration and a rebuild, followed by a full application test suite just to be sure: trivial.
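
    As a small illustration of the "APIs that already wrap CPU-native versions" point, here is a minimal Swift sketch (assuming macOS 10.15+ and Accelerate's vDSP overlay; not code from Microsoft or Adobe): the same framework call dispatches to whichever SIMD implementation exists for the CPU the slice was built for, so no per-architecture source changes are needed.

        import Accelerate

        // Element-wise multiply of two signal buffers. This source is identical
        // on Intel and ARM Macs; Accelerate supplies the SSE/AVX or NEON code
        // path appropriate to the architecture of the running binary slice.
        let left: [Float] = [1.0, 2.0, 3.0, 4.0]
        let right: [Float] = [0.5, 0.5, 2.0, 2.0]

        let product = vDSP.multiply(left, right)   // [0.5, 1.0, 6.0, 8.0]
        print(product)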
  • TSMC denies short term plans for US-based chip production

    As important as the chips are for national security, it’s insane that they’d ever be allowed to put their IP somewhere directly under threat from an adversarial country, no matter how good the price: doing that is strategic suicide even if the IP is never captured, because if things don’t remain peaceful, this works out badly.

    Let there be no mistake: the government of China is NOT a friend of the US, and they’ve made it clear that Taiwan is theirs and will be reclaimed by them in a manner they see fit.
  • Intel aims beyond 5Ghz for future MacBook Pro H-series processors

    rob53 said:
    FUD from Intel to try to slow down Apple's migration to its own chips. Intel won't come out with these any time soon. Single-core clock speed still seems to be important because too many applications are not programmed to make use of multiple cores. If developers spent more time getting away from single-threaded programs, they could see their applications run faster by using more cores. Unless something has changed recently, Adobe products continue to use a single core for the majority of their consumer software. Non-Adobe products are using multiple cores, making them run a lot faster, especially on "slower" CPUs.
    I’m inferring you’re not an experienced developer, if you’re a developer at all: very few applications can be made much more parallel, because their computations have an inherently sequential flow. This is why single-core speed still matters, and why throwing more cores at a problem is fool’s gold, not to mention that adding more threads can often make it take longer to actually accomplish something due to the communication overhead between threads.
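
    A back-of-the-envelope Swift sketch of that point (hypothetical numbers, not tied to any particular Adobe workload), using Amdahl's law, which caps the speedup extra cores can buy before any thread-communication overhead is even counted:

        import Foundation

        // Amdahl's law: if a fraction p of the work parallelizes across n cores,
        // the overall speedup is 1 / ((1 - p) + p / n).
        func amdahlSpeedup(parallelFraction p: Double, cores n: Double) -> Double {
            return 1.0 / ((1.0 - p) + p / n)
        }

        // Even when 80% of a task parallelizes perfectly, 16 cores yield only
        // 4x, and no number of cores can ever exceed 5x: the sequential 20%
        // dominates, which is why single-core speed still matters.
        for cores in [2.0, 4.0, 8.0, 16.0, 64.0] {
            let s = amdahlSpeedup(parallelFraction: 0.8, cores: cores)
            print("\(Int(cores)) cores -> \(String(format: "%.2f", s))x speedup")
        }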