Mach-O compiled for CISC, not RISC.

Posted in macOS, edited January 2014
Check out this link from Unsanity's company blog:



<a href="http://www.unsanity.org/archives/000044.php" target="_blank">Unsanity Company Blog (may be Slashdotted)</a>



Very interesting--if this is true, do any of the luminaries here know why Apple didn't optimize it for the RISC architecture?



I *seriously* doubt that this is a "back door" for future x86 compatibility, though someone will start shouting that in a few minutes.



What do people think? Is there a good reason, or was it lack of resources, lack of time, or just a damned error?



[ 10-21-2002: Message edited by: mrmister ]

Comments

  • Reply 1 of 25
    While I could see someone shouting CISC = x86 processors, it could also be read as Apple moving from RISC to CISC while still not using x86 processors. CISC doesn't make a processor an x86, in theory.



    Mac Guru
  • Reply 2 of 25
    mrmistermrmister Posts: 1,095member
    riiiiiiiight, but isn't it more likely that it is just legacy code that didn't get refitted b/c OSX was already drastically behind?
  • Reply 3 of 25
    rogue27rogue27 Posts: 607member
    I'd say lack of time. It would be nice if they could find a way to go in and re-write it, but the author of the article made it sound like that won't be possible.
  • Reply 4 of 25
    rodukroduk Posts: 706member
    I'd guess that Apple were aware of this and made an informed choice not to change it, having weighed up the advantages of getting the OS out the door against the disadvantages of a small performance penalty.
  • Reply 5 of 25
    mrmistermrmister Posts: 1,095member
    Sad thing is, it doesn't sound like a very small penalty to me.
  • Reply 6 of 25
    rodukroduk Posts: 706member
    [quote]Originally posted by mrmister:
    Sad thing is, it doesn't sound like a very small penalty to me.[/quote]



    10% isn't so bad. At the time, perhaps Apple were conscious of the failed attempts in the past to produce the next generation Mac OS, and erred on the side of caution. A lot of software is developed along the lines of "let's get it working first; we can try to optimise it later".
  • Reply 7 of 25
    mrmistermrmister Posts: 1,095member
    [quote]10% isn't so bad.[/quote]



    That's a conservative figure, but even taking it at face value, it really would make a difference if it applies across the board.



    I understand that sometimes you need to make tough choices--this looks to be one.
  • Reply 8 of 25
    programmerprogrammer Posts: 3,457member
    Conspiracy theorists unite!



    No, this is clearly an engineering decision made in order to expedite the initial development of MacOS X. This isn't such a big deal for a few reasons...



    (1) My gut feel is that even 10-12% is an over-estimate. Applications where you care that much about performance are likely to be doing more inside of their individual function calls.



    (2) The major cost of a function call is going to depend on whether the code is already in the cache or not.



    (3) Future processors are going to be more highly pipelined, and function call overhead is generally the kind of stuff that pipelines very well. (Raw processor speed doesn't really change anything since this overhead is more of a percentage than a fixed overhead.)



    (4) Compilers are free to call between an application's functions however they please, it is primarily the system ABI that matters here. This considerably reduces the number of functions subject to this performance penalty. Object oriented C++ code is probably doing pointer-relative operations anyhow.



    (5) System calls which need to do cross-process work are going to cost way more than the few cycles spent on the initial call, so the overhead is much, much lower as a percentage.



    (6) Destabilizing your tool chain is a very effective way to cripple your development, and it always takes longer to stabilize than you planned. Not mucking with it was probably a very wise choice on Apple's part.



    (7) There are much bigger fish to fry when it comes to improving performance when compiling for "RISC" (well, PowerPC in this case). These generally involve how the C/C++ code is written such that the compiler can take better advantage of the ABI and the processor's load & store architecture.



    Certainly this in no way means that MacOS X is somehow intended to go back to x86. The choice of PowerPC ABI was made to hasten the port from x86.
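    To put a rough number on points (2) and (5), here is a minimal microbenchmark sketch in plain C (nothing Mach-O-specific; the function pointer merely stands in for the extra indirection of a cross-library stub, and all names here are my own invention). It illustrates why per-call overhead only matters when the work inside the call is comparably tiny:

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <time.h>

    /* A trivial leaf function; noinline keeps the calls from being elided. */
    __attribute__((noinline)) long add_one(long x) { return x + 1; }

    /* Call add_one n times directly and n times through a pointer (the
       pointer stands in for the extra load of an indirect-dispatch stub). */
    long run(long n) {
        long (*indirect)(long) = add_one;
        long acc = 0;
        for (long i = 0; i < n; i++) acc = add_one(acc);   /* direct call */
        for (long i = 0; i < n; i++) acc = indirect(acc);  /* indirect call */
        return acc;
    }

    int main(void) {
        const long n = 10 * 1000 * 1000;
        clock_t t0 = clock();
        long acc = run(n);
        clock_t t1 = clock();
        assert(acc == 2 * n);  /* each call added exactly 1 */
        printf("%ld calls in %.3fs\n", 2 * n, (double)(t1 - t0) / CLOCKS_PER_SEC);
        return 0;
    }
    ```

    The point is the relative per-call cost, not the absolute timing: twenty million calls complete in a fraction of a second, so the per-call overhead is only visible when a function does almost nothing per invocation.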
  • Reply 9 of 25
    rodukroduk Posts: 706member
    [quote]Originally posted by mrmister:
    That's conservative, but even if we take it as that, it really would make a difference if it is across the board.

    I understand that sometimes you need to make tough choices--this looks to be one.[/quote]



    10% - 12% is realistic.



    In a perfect world, the OS would be fully optimised for the processor architecture. As you say though, we live in the real world and choices have to be made. At the end of the day, Apple is in the business of making money, not writing perfect software. [Hmmm]



    [ 10-21-2002: Message edited by: RodUK ]
  • Reply 10 of 25
    frawgzfrawgz Posts: 547member
    [quote]Originally posted by RodUK:
    10% - 12% is realistic.

    In a perfect world, the OS would be fully optimised for the processor architecture. As you say though, we live in the real world and choices have to be made. At the end of the day, Apple is in the business of making money, not writing perfect software. [Hmmm]

    [ 10-21-2002: Message edited by: RodUK ][/quote]



    What is "perfect software" anyway?
  • Reply 11 of 25
    mrmistermrmister Posts: 1,095member
    Thanks Programmer. As usual, you are a font of valuable information.
  • Reply 12 of 25
    nevynnevyn Posts: 360member
    I'd think they could gain more than 10% through improvements to GCC.



    Apple used to develop an extremely aggressive compiler, MrC, and pretty much anyone who's used it and then worked with GCC for Darwin is aghast at the speed difference.
  • Reply 13 of 25
    [quote]Originally posted by Nevyn:
    I'd think they could gain more than 10% through improvements to GCC.

    Apple used to develop an extremely aggressive compiler 'MrC', and pretty much anyone that's used it and then worked with GCC for Darwin is aghast at the speed difference.[/quote]



    This is very true. GCC itself isn't particularly RISC-friendly, judging by my experiences with 2.9x on various RISC machines. From what I've heard, however, 3.x has made substantial strides on that score (I haven't used GCC 3.x myself yet) and is responsible for a fair amount of the improvement in Jaguar.
  • Reply 14 of 25
    kickahakickaha Posts: 8,760member
    Definitely. The egcs-based code that became gcc 3.x was a huge leap forward for RISC support. The PPC backend work done by Apple (and some from RedHat, although they seem to have faded from view recently... too bad, really) improved the execution speed a not-insignificant amount (~10%), and set the stage for some *serious* speed boosts down the line, in both compilation time (PFEs anyone?) and execution.



    The nicest thing? Most of these changes seem to be acceptable to the FSF for inclusion in the standard gcc distro. (Especially the PFE... they're loving the PFE...)
  • Reply 15 of 25
    zozo Posts: 3,117member
    I thought everyone and their pet cousin Sal said that RISC is THE way to design a processor and that CISC simply lags behind. It would be ridiculous to go to an inferior architecture.
  • Reply 16 of 25
    airslufairsluf Posts: 1,861member
  • Reply 17 of 25
    The whole RISC/CISC argument really went out the window when Intel realized that they could build a RISC machine and stick a CISC decoder on the front and thus retain backward compatibility.



    The only real advantage that RISC retains is that the "better" (but not necessarily less complex) ISA allows the writer or compiler to be more precise and expressive in how the algorithms are represented in the ISA. More registers, more orthogonal instructions, better use of the condition codes, etc. On the other hand, most RISC instruction sets use a fixed-size 32-bit opcode which makes the code somewhat larger. I think RISC still has an advantage, but it's not nearly what it used to be.



    I still much prefer coding for PowerPC, but the whole RISC vs. CISC thing really fizzled a long time ago. Now it's about who's willing and able to spend the money to design and build a better chip.
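    The "more registers" advantage is concrete. Here is a hypothetical C sketch (my own illustration, not from the thread) of why it matters on a load/store architecture: when the compiler can't prove a value is free of aliasing, it must move that value through memory on every loop iteration, whereas a local accumulator can live in a register for the whole loop.

    ```c
    #include <assert.h>

    /* If *sum could alias an element of a[], a conforming compiler must
       store *sum back to memory on every iteration of the loop. */
    long sum_via_pointer(const long *a, long n, long *sum) {
        *sum = 0;
        for (long i = 0; i < n; i++) *sum += a[i];  /* load/store each pass */
        return *sum;
    }

    /* The local accumulator can't alias anything, so it stays in a
       register for the whole loop -- exactly what a register-rich
       load/store ISA is good at. */
    long sum_in_register(const long *a, long n) {
        long s = 0;
        for (long i = 0; i < n; i++) s += a[i];
        return s;
    }

    int main(void) {
        long data[5] = {1, 2, 3, 4, 5};
        long scratch;
        assert(sum_via_pointer(data, 5, &scratch) == 15);
        assert(sum_in_register(data, 5) == 15);
        return 0;
    }
    ```

    Comparing the generated assembly for the two functions (e.g. `gcc -O2 -S`) shows the extra load/store pair inside the first loop; source-level changes like this are the "bigger fish" kind of optimisation mentioned earlier in the thread.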
  • Reply 18 of 25
    [quote]Originally posted by Programmer:
    I still much prefer coding for PowerPC, but the whole RISC vs. CISC thing really fizzled a long time ago. Now its about who's willing and able to spend the money to design and build a better chip.[/quote]



    ...and also what type of chip the compilers are best optimized for...



    It will be nice when GCC gets up to speed on the PPC.
  • Reply 19 of 25
    dfilerdfiler Posts: 3,420member
    [quote]Originally posted by RodUK:
    In a perfect world, the OS would be fully optimised for the processor architecture. As you say though, we live in the real world and choices have to be made. At the end of the day, Apple is in the business of making money, not writing perfect software. [Hmmm][/quote]



    Well said.



    This is related to the whole 970 discussion as well. Apple execs have known for quite a while that a major architecture change would be necessary. They knew far before any of us that the G4 wasn't scaling as well as desired. Is it possible that they held off on this work until after the transition to a new chip and supplier?



    From a business perspective, it makes little sense to invest in optimizing PPC Mach-O immediately prior to adopting a different chip. Is this train of reasoning correct? Would Apple's low-level optimization for the G4 be reusable during 970 optimization?
  • Reply 20 of 25
    programmerprogrammer Posts: 3,457member
    [quote]Originally posted by dfiler:
    Well said.

    This is related to the whole 970 discussion as well. Apple execs have known for quite a while that a major architecture change would be necessary. They knew far before any of us that the G4 wasn't scaling as well as desired. Is it possible that they held off on this work until after the transition to a new chip and supplier?

    From a business perspective, it makes little sense to invest in optimizing PPC Mach-O immediately prior to adopting a different chip. Is this train of reasoning correct? Would Apple's low-level optimization for the G4 be reusable during 970 optimization?[/quote]



    970 vs. G4 won't really make any difference to the Mach-O ABI; they are both PowerPCs. There will be slight efficiency differences, but for the most part an ABI that is good for one PowerPC is good for another. The real issue is an ABI designed for 680x0 vs. x86 vs. PowerPC.