knowitall

About

Banned
Username: knowitall
Joined:
Visits: 170
Last Active:
Roles: member
Points: 825
Badges: 1
Posts: 1,648
  • Read the fine print of Apple Card's customer agreement

    The fine print: "Goldman Sachs"
    Carnage
    watto_cobra
  • Why Apple's Macs can now ditch Intel x86 and shift to ARM

    Dave1960 said:
    Not a smart move. 

    Already run into compatibility issues with Windows tablets not being able to print from ARM CPUs. 

    Not enough printers support ARM drivers yet, although this could possibly spur manufacturers to build ARM drivers for their machines. 

    Still, this won't happen for older devices - they won't put the resources into it. 

    Nonsense; driver support works at the USB level, which is independent of the CPU architecture.
    watto_cobra
  • Why Apple's Macs can now ditch Intel x86 and shift to ARM

    elijahg said:
    knowitall said:
    elijahg said:
    A significant factor in the PPC > x86 switch was Rosetta. It is much easier to emulate RISC PPC with its relatively small instruction set than it is CISC x86, and now x64. PPC apps running in Rosetta weren't much slower than the native ones, but that was also partly offset by the Intel CPUs being much, much faster than PPC ones. The A-series CPUs are quick, and in a less power and thermally constrained environment no doubt even quicker - but CISC emulation on RISC architectures is excruciatingly slow, no matter how fast the native CPU. Remember Connectix's Virtual PC? That emulated an x86 machine on PPC. Installing Win98 took 3 or 4 hours even on a G5. Of course API-level emulation à la Rosetta has less overhead, but it's still slow. 

    Also, people who are switching to Mac can still use the Mac as a PC if they need to. It provides a comfort blanket. As soon as Apple switched to x86, Mac sales took off.
    I wouldn't call running Windows comfortable, not even in another universe.
    It's best to get rid of it.
    Running PPC apps under Rosetta was slow, very slow, and some apps didn't run at all.
    Running CISC on RISC or vice versa isn't inherently more difficult. It isn't guaranteed to be symmetrical, but that doesn't depend on CISC or RISC (nowadays an outdated distinction) or on the number of instructions one or the other has.
    I would say that whether an instruction set is 64-bit is a more important factor when translating instruction sets. The internal state of the processor, and how easily it can be represented on another processor, is also important.
    I would expect to see a difference in efficiency even per instruction.
    All in all, I expect that on average only a few native instructions are needed per translated instruction, no matter what.
    Current processors are extremely fast, so a factor of 5 or so will not be noticed when running most apps.

    Edit: one fun fact to consider is that Intel already hardware-translates its (CISC) instruction set(s) into its internal RISC-like instruction set at full clock speed, so it's certain that this isn't a slow option. 
    But people have to use Windows to get work done on programs that aren't available on macOS. So you just say to all those who may rely on those programs for a living "sorry mate, you're out of luck. Goodbye"? 

    Rosetta wasn't great, but it wasn't particularly slow. Almost everything worked. Do you think, then, that Apple shouldn't bother with a compatibility layer if an x86 > ARM transition happens, and just tell everyone who relies on some x86-only programs for a living that it's too bad, we're moving on and don't care?

    RISC and CISC are still relevant when talking about processor emulation.

    "All in all I expect that on average only a few instructions are needed to translate one instruction set to another no matter what." Not that simple. Even if it was on average 5 native instructions per emulated instruction it would still be a huge performance hit. And as I said, it's more difficult to emulate the massive CISC x64 instruction set than it was the relatively concise RISC PPC one. There's also the endianness issue, which means every byte has to be flipped. 

    "one fun fact to consider is that Intel already hardware translates its (CISC) instruction set(s) on its internal RISC instruction set at full clock speed, so its certain that this isn't a slow option. " 

    That's because it runs at native speed in hardware. It's not software emulation. It still adds at least one extra pipeline stage, too.
    The point is that it is an instruction translation, and when done in hardware you have little flexibility and little room for logic, so the translation scheme must be simple. That means it is also easy to implement in software, with little overhead.

    When Apple's libraries are translated in advance, code will run at native speed while inside a library (this can be a significant part of an app's runtime; think of Rosetta).
    It's even possible to translate the app binary in advance, which will improve performance even further. 
    Of course, if the source code is available, a recompile will make the app run natively.
    Another way to run Windows apps on a Mac is to use Wine; it's impressive to see what works nowadays.
    I expect Wine to run on ARM (because Linux runs on ARM), so you can try this.
    It's also possible to ask the app developer to produce an ARM version of their software. Why wouldn't they be willing to do that, especially now that Windows 10 runs on ARM?
    watto_cobra
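    The per-instruction expansion the posters argue about can be sketched with a toy translator. This is a minimal illustration using hypothetical mini-ISAs (the opcodes and register names below are made up, not real x86 or ARM encodings): one CISC-style instruction that operates directly on memory expands into a few RISC-style load/compute/store instructions.

    ```python
    # Toy binary-translation sketch (hypothetical mini-ISAs, not real x86/ARM).
    # A CISC-style instruction with a memory operand expands into a few
    # RISC-style load/compute/store instructions, illustrating the
    # "few native instructions per translated instruction" expansion factor.

    def translate(instr):
        """Expand one CISC-style instruction into a list of RISC-style ops."""
        op, dst, src = instr
        if op == "add_mem":                  # add [addr], reg  (read-modify-write)
            return [
                ("load",  "tmp", dst),       # tmp <- mem[addr]
                ("add",   "tmp", src),       # tmp <- tmp + reg
                ("store", dst,   "tmp"),     # mem[addr] <- tmp
            ]
        if op == "mov":                      # register-to-register: one-to-one
            return [("mov", dst, src)]
        raise ValueError(f"unhandled op: {op}")

    program = [("mov", "r1", "r2"), ("add_mem", "0x1000", "r1")]
    native = [out for instr in program for out in translate(instr)]
    print(f"{len(program)} source instructions -> {len(native)} native instructions")
    ```

    Real binary translators (QEMU, or Rosetta for PPC) translate whole blocks of code and cache the result, so the expansion cost is paid once per block at translation time rather than on every execution.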
  • Why Apple's Macs can now ditch Intel x86 and shift to ARM

    Mr. Dilger's article almost makes me feel sorry for Intel. Almost. What's interesting about Intel isn't that they failed to recognize the corner they'd painted themselves into with the x86 architecture -- it's that they DID recognize it, and tried to solve it, and failed.

    They tried to get into other chip fabrications like broadband, and failed. They tried IA-64, and failed. They tried the Atom, and failed. For broadband and Atom it became clear to the industry that Intel's solution wasn't good enough, but with IA-64 and Itanium Intel fell into the classic trap of having a superior product that others wouldn't invest in to use. Microsoft wasn't going for it and neither were the other industry leaders.

    You'd think that someone at Intel would learn from all this failure -- Apple (well, really, Jobs, and to a fair extent Cook and Ive) certainly learned from failure, which is why we got the iMac, iPod, iPhone, Mac OS X, etc. Failure, if you survive it, is a good teacher. What has Intel learned? Darned if I know.
    I think Intel hasn't got it anymore because they consist only of managers (without a clue, of course).
    Intel's boss has no clue about technology and can only utter manager-speak.
    watto_cobra
  • Why Apple's Macs can now ditch Intel x86 and shift to ARM

    knowitall said:
    elijahg said:
    A significant factor in the PPC > x86 switch was Rosetta. It is much easier to emulate RISC PPC with its relatively small instruction set than it is CISC x86, and now x64. PPC apps running in Rosetta weren't much slower than the native ones, but that was also partly offset by the Intel CPUs being much, much faster than PPC ones. The A-series CPUs are quick, and in a less power and thermally constrained environment no doubt even quicker - but CISC emulation on RISC architectures is excruciatingly slow, no matter how fast the native CPU. Remember Connectix's Virtual PC? That emulated an x86 machine on PPC. Installing Win98 took 3 or 4 hours even on a G5. Of course API-level emulation à la Rosetta has less overhead, but it's still slow. 

    Also, people who are switching to Mac can still use the Mac as a PC if they need to. It provides a comfort blanket. As soon as Apple switched to x86, Mac sales took off.
    I wouldn't call running Windows comfortable, not even in another universe.
    It's best to get rid of it.
    Running PPC apps under Rosetta was slow, very slow, and some apps didn't run at all.
    Running CISC on RISC or vice versa isn't inherently more difficult. It isn't guaranteed to be symmetrical, but that doesn't depend on CISC or RISC (nowadays an outdated distinction) or on the number of instructions one or the other has.
    I would say that whether an instruction set is 64-bit is a more important factor when translating instruction sets. The internal state of the processor, and how easily it can be represented on another processor, is also important.
    I would expect to see a difference in efficiency even per instruction.
    All in all, I expect that on average only a few native instructions are needed per translated instruction, no matter what.
    Current processors are extremely fast, so a factor of 5 or so will not be noticed when running most apps.
    If it was so easy, why did Microsoft fail with Surface RT? Why didn't desktop Windows applications run on RT?

    That's a good question.
    It's maybe best to ask MS.
    I think this has nothing to do with translating instruction sets, but with (memory) capacity and processor speed.
    So this is possibly by design (which in this case probably means that some manager pushed their faulty ideas and didn't listen to engineering).
    watto_cobra