Chickens, eggs, and multiprocessing

Posted in Future Apple Hardware; edited January 2014
OK: Apple's chicken-and-egg problem is that its 5% market share is insufficient for Motorola to invest the resources necessary to develop an Athlon-busting desktop CPU. And since Mac desktops are falling behind their x86 counterparts, that gap is exerting downward pressure on Apple's market share.



By now, we've seen that Steve's strategy is to enhance the perceived value of Macs with bundled iApps and cutting-edge industrial design. But we'd all like more speed.



Now, it would be great if Apple could double its market share, but from Motorola's perspective it would be just as good if Apple doubled its orders for CPUs. And if Apple went multiprocessing-crazy, it could do just that in a short space of time. Suppose iMacs were all duals, and towers were quads or even more: Apple would be ordering many more CPUs, and Motorola would be injecting more funds into PPC design and fabbing.



Obviously, the OS would need to make use of all these processors: my understanding is that more than two processors are pretty much wasted in OS X. On the other hand, I also gather that microkernel-based systems generally handle multiprocessing better than monolithic-kernel OSes. So if Apple wanted to pursue this strategy, I presume OS X could be modified relatively easily to accommodate it.



My best guess is that this idea is a no-goer because of relatively obscure (to me) things like bus architectures and memory controllers. Am I on the right track here?

Comments

  • Reply 1 of 56
    thuh freakthuh freak Posts: 2,664member
    not sure myself, but i get the impression that making chips is timeconsuming. To that end, apple would not be able to turn-around sales as quickly, as it waited on moto. but i kno i'd love a d(ual)p(roc), or q(uad)p. my geethree is long in th' tooth.



    hack the planit.
  • Reply 2 of 56
    airslufairsluf Posts: 1,861member
  • Reply 3 of 56
    jcgjcg Posts: 777member
    If I remember correctly, the Unix backbone of Darwin and OS X can handle either 16 or 32 processors.



    The problem for Motorola in your scenario is supplying all of those chips, particularly at the high end. Although the supply of 1 GHz chips seems good now, historically the high-end chips are in the shortest supply. So the more likely scenario is that Apple would be producing 1/2 to 1/4 the volume of Power Macs while Motorola tried to keep up with production. To back up this argument: where are the 800 MHz G4 upgrade cards from Sonnet? They aren't here because Motorola is selling all its spare 800 MHz chips to Apple.
  • Reply 4 of 56
    cdhostagecdhostage Posts: 1,038member
    I wish Motorola would sell some of the microprocessor business to Apple.



    Or perhaps Apple should set up its own.



    I mean, Apple's got enough cash to build a new processor plant - you need a couple billion, and Apple's got at least $4 billion.

    Of course, Apple'd have to explain the cost - and the enormous R&D. Hmm... well, if Apple made its own chips, it could at least pour enough money in to make sure the things got STINKING FAST over time.



    And perhaps make enough to make all desktops dual processor?
  • Reply 5 of 56
    crusadercrusader Posts: 1,129member
    Dual processors in an iMac? Nope. I think maybe a multicore chip is coming.
  • Reply 6 of 56
    Mach 4.0 is going to be coming out pretty soon. It should expand support from 16 to 32 processors.



    Just remember: from the OS's perspective, our current desktop machines are SMALL. It was designed to be scalable up to hardware with 2 terabytes of RAM, 16 processors (or 32 on Mach 4), and hundreds of terabytes of mass storage.



    An Xserve with 1.6 terabytes of storage only wakes the OS up enough to let out a little yawn. Ho hum....



    Yes, microkernels are better. Period. As new paradigms of storage, processing, and networking come into existence, small fragments of the OS get rewritten and swapped in, unlike a monolithic core that has to be replumbed every time a change is needed. The monolithic kernel may be a bit more optimized, but it's not as flexible. Ever wonder why there are so many versions of Linux on so many different versions of the kernel? Talk about market pollution.
  • Reply 7 of 56
    eugeneeugene Posts: 8,254member
    Mach 4.0 is a completely separate branch from Mach 3.0. It's also pretty dead as far as maintenance and development go...
  • Reply 8 of 56
    getafxgetafx Posts: 21member
    Quads would be great; they used to RULE, and were only on the Mac side - almost a clone company's (Radius). Four chips give a roughly threefold speed increase, which puts you ahead of the pack. They've always cost a fortune, though; that and the chip constraints mentioned above keep them in a niche, whereas moving duals into the mainstream is good value.
  • Reply 9 of 56
    boy_analogboy_analog Posts: 315member
    Thanks for your responses.



    I could have expressed myself a little more clearly in my initial post. I don't think that there is any inherent limitation in OSX that stops Apple from shipping quads or greater. If the hardware folks wanted to make quads, I'm sure that the OS people would have no trouble releasing the appropriate patches in time.



    Let me rephrase the question. Apple hasn't released any quads because:
    • the memory controller wouldn't recognise the additional CPUs

    • the memory controller wouldn't be able to cope with the cache coherency issues

    • the memory controller would be so flummoxed by the additional CPUs that there would be no performance gain

    • substitute "memory controller" with "MPX bus" in any of the above

    • it's all doable, but the costs would be uneconomical

    Which of these propositions (or combination thereof) is closest to the mark?
  • Reply 10 of 56
    programmerprogrammer Posts: 3,467member
    Originally posted by boy_analog:
    "Which of these propositions (or combination thereof) is closest to the mark?"



    None.



    The MPX bus can move ~1 GB/sec. A single G4 working on a memory-intensive process can consume >1 GB/sec. Two G4s working at the same time will deliver some performance improvement, because not all tasks are memory bound, but a memory-bound task will be no faster. Three, four, or more G4s will mostly sit around twiddling their thumbs waiting for the MPX bus to get them more data. The extra computing power isn't worth the extra money, or the heat generated in the case by 4 processors.



    Would you pay an extra $1500 for a machine that wasn't any faster, but raised your electrical bill?
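Programmer's point above can be put into numbers. A minimal Amdahl-style sketch, taking the ~1 GB/sec bus figure from the post and assuming a made-up workload that is 50% memory bound (all numbers illustrative, not measurements):

```python
# Toy model of multiple G4s sharing the ~1 GB/sec MPX bus.
# BUS_BW and CPU_DEMAND follow the post; MEM_FRACTION is an assumption.

BUS_BW = 1.0      # GB/sec the shared bus can deliver
CPU_DEMAND = 1.0  # GB/sec one G4 can consume on a memory-intensive task

def speedup(n_cpus, mem_fraction):
    """Amdahl-style estimate: the compute-bound share scales with CPU count,
    but the memory-bound share is capped by how many CPUs the bus can feed."""
    fed_cpus = min(n_cpus, BUS_BW / CPU_DEMAND)
    return 1.0 / ((1 - mem_fraction) / n_cpus + mem_fraction / fed_cpus)

for n in (1, 2, 4, 8):
    print(f"{n} CPUs -> {speedup(n, 0.5):.2f}x")
```

Under these assumptions a second CPU buys about 1.33x, and four CPUs only about 1.6x - exactly the thumb-twiddling described above.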
  • Reply 11 of 56
    lemon bon bonlemon bon bon Posts: 2,383member
    "I think maybe a multicore chip is coming."



    PPC heading this way?



    Wonder if IBM will be making a dual core G3 with SIMD unit in the next year?



    The way IBM is going with the G3 seems to suggest that width and not length is the way PPC will 'who's the daddy' over x86 man.



    As Apple heads into 3D, server markets, etc., it'll be interesting to see where PPC goes in the next year. MHz-wise, it appears to be going nowhere.



    So, something else must be in the offing.



    I'm intrigued by the fact that Apple sits on the HyperTransport consortium. Will that come after a RapidIO setup, or instead of it?



    With the x86 Hammer around the corner, one can only guess that Apple has something equally compelling due in the next 6 months. The RapidIO 'G5' may be some kind of retort.



    That's probably the Apple 'tower' I'll buy. But my gut says I'm going to have to wait for it.



    A dual 1.2 GHz DDR G4 ain't going to cut it. It's already out of date. (Those rumours are old already...)



    Lemon Bon Bon
  • Reply 12 of 56
    Right now the motherboard is the bottleneck. Since Motorola decided to use RapidIO on its first incarnation of the G5, Apple will have to use that technology.



    Right now, if Apple offered a 1.3 GHz G4 on a 133 MHz bus, you're facing a 10-to-1 ratio. Apple can lessen the effects of that ratio with DDR and level-3 buffers, but the fact remains that if you add a second, or even a third and fourth, processor... you're screwed. Right now I'd say the dual-GHz boxes are probably pushing the bus to the top. Proper system balance is key to total performance, and 10 to 1 isn't very balanced.
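The 10-to-1 figure is just clock arithmetic; a quick check using the hypothetical 1.3 GHz part and the 133 MHz bus mentioned above:

```python
# Core clock vs. front-side bus: the "10 to 1" ratio in the post.
cpu_hz = 1.3e9   # hypothetical 1.3 GHz G4
bus_hz = 133e6   # 133 MHz MPX bus
ratio = cpu_hz / bus_hz
print(f"{ratio:.1f} core cycles per bus cycle")  # ~9.8, i.e. roughly 10 to 1
```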
  • Reply 13 of 56
    randycat99randycat99 Posts: 1,919member
    Originally posted by Programmer:
    "The MPX bus can move ~1 GB/sec. A single G4 working on a memory-intensive process can consume >1 GB/sec. Two G4s working at the same time will deliver some performance improvement because not all tasks are memory bound, but that memory-bound task will be no faster."



    What is this "MPX bus" thing? Is that a new memory bus standard or something? Is there any chance of using parallel channels of the thing (similar to a dual channel Rambus scheme)? Why not just use 4 channels of Rambus then? That would be worth 6.4 GB/s of bandwidth (more if you count the newer, faster Rambus standard), and if you interleave 4 channels (instead of the current 2) that should give you competitive latency characteristics, no?



    4 channels would probably be pretty pricey then, right? I guess you could endeavor to make it cheap by consolidating the whole deal on a single chip and making that chip really small. You would also need to install your memory modules in sets of 4 - not too big a deal when performance is the first priority, but a bit of a pain for the casual user. Perhaps they could design the memory architecture to be "tolerant" of just 1 or 2 modules installed (you'd just get less bandwidth and more latency).



    Aside from the question of Apple's propensity to put such "hardcore" hardware in a desktop product, maybe the answer has always been there for us - distributed computing. Though not logistically trivial in and of itself, it seems more practical than using the "memory controller from Hell" as a matter of routine across the entire mainstream desktop line. I'm just rambling, of course, but I am somewhat intrigued by the idea of a 4-channel Rambus system.
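For what it's worth, the 6.4 GB/s figure above is just channel count times per-channel bandwidth; a one-line sanity check, assuming ~1.6 GB/s per PC800 Rambus channel:

```python
# Aggregate bandwidth of a hypothetical 4-channel Rambus setup.
per_channel_gbps = 1.6  # GB/s for one PC800 RDRAM channel
channels = 4
total = per_channel_gbps * channels
print(total, "GB/s")  # 6.4 GB/s
```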
  • Reply 14 of 56
    airslufairsluf Posts: 1,861member
  • Reply 15 of 56
    randycat99randycat99 Posts: 1,919member
    Ah yes, that is where I've heard MPX mentioned. I forgot.



    I know that fundamental latency can't be solved by multiple channels, but I've always heard that interleaving multiple channels can somewhat offset the effects of latency when you look at an entire system where many sequential and random accesses are occurring.
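A toy queueing sketch of that interleaving point - per-access latency stays fixed, but spreading a burst of accesses round-robin over several channels lets them overlap (the 40 ns latency is a made-up number for illustration):

```python
import math

# Interleaving doesn't shrink any single access's latency, but spreading a
# burst of accesses round-robin over N channels lets N of them proceed at once.
LATENCY_NS = 40  # assumed per-access latency; illustrative only

def burst_time_ns(n_accesses, n_channels):
    """Each channel serves its share of the burst serially; channels run in parallel."""
    return math.ceil(n_accesses / n_channels) * LATENCY_NS

print(burst_time_ns(8, 1))  # 320 ns on one channel
print(burst_time_ns(8, 4))  # 80 ns on four interleaved channels
```

A single access still costs 40 ns either way - only the aggregate throughput, and thus the average wait across the whole system, improves.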
  • Reply 16 of 56
    snoopysnoopy Posts: 1,901member
    Likely everybody knows this, but it didn't come out clearly in the original post or replies: Mach is a microkernel, and therefore OS X is based on a microkernel. It is not monolithic. Just from reading the postings, I don't think I'd have been able to tell which way it is, though the facts seem to be correct.
  • Reply 17 of 56
    boy_analogboy_analog Posts: 315member
    "Would you pay an extra $1500 for a machine that wasn't any faster, but raised your electrical bill?"



    Hey, if Jeff Goldblum thinks I should, who am I to argue?



    Thanks for setting me straight.
  • Reply 18 of 56
    spookyspooky Posts: 504member
    Apple has in the past (when we really loved it) taken bizarre new stuff and added it to its systems - even when the rest of the industry hadn't yet had a chance to give it a whirl. Yet with processors, Steve seems to have a blind spot. Can it really be true that apart from IBM, Moto, Intel, AMD, and Transmeta, nothing else is going on with regard to revolutionary processor or computing design on the planet?

    Why doesn't the Apple of today take a look around at whatever is happening? Maybe someone out there is designing a processor, or has an idea that seemingly breaks the rules. There was a time when Apple would have scoured the earth for it - now it could even fund it. Yet it's all down to the MHz myth. Steve tells us that Intel's extra 800 MHz+ means nothing, yet when he bumps the Power Mac line up by 200 MHz we're supposed to think we have a much superior machine. He can't have it both ways.

    Face it: Moto will never give Mac users what they need or want. Long term, Apple has to look elsewhere. It needs a techno/engineering equivalent of Jonathan Ive.
  • Reply 19 of 56
    scott f.scott f. Posts: 276member
    Originally posted by spooky:
    "Long term, Apple has to look elsewhere. It needs a techno/engineering equivalent of Jonathan Ive."



    Great... an egg-shaped processor... woo hooo...



    [Laughing]



    I hear ya... I said something similar in another thread. I wish they could "find" a new chipmaker and "killer" process that allows for an AMD/Intel killer... in the range of 200%-400% faster or something. And I'm not talking MHz-myth speed... I'm talking raw horsepower.



    - Scott
  • Reply 20 of 56
    zazzaz Posts: 177member
    Originally posted by spooky:
    "Face it: Moto will never give Mac users what they need or want. Long term, Apple has to look elsewhere. It needs a techno/engineering equivalent of Jonathan Ive."



    3 words:

    Legacy Code Base



    Changing the CPU architecture for a mass-market product isn't like changing Nokia Face Plates.



    Even if they did, and could get it to run X, that would require HUGE amounts of dollars and work.



    If they did, it would have to support all the legacy stuff, à la what Itanium does for x86. And we all know that's catching on like wildfire!



    Zaz