Chickens, eggs, and multiprocessing
OK: Apple's chicken-and-egg problem is that their 5% market share is insufficient for Motorola to invest the resources necessary to develop an Athlon-busting desktop CPU. And since Mac desktops are falling behind their x86 counterparts, this exerts downward pressure on Apple's market share.
By now, we've seen that Steve's strategy is to enhance the perceived value of Macs with bundled iApps and cutting-edge industrial design. But we'd all like more speed.
Now, it would be great if Apple could double their market share, but from Motorola's perspective, it would be just as good if Apple doubled their orders for CPUs. And if Apple went multiprocessing-crazy, they could do just that in a short space of time. Suppose that iMacs were all duals, and towers were quads or even more. Apple would be ordering many more CPUs, and Motorola would be injecting more funds into PPC design and fabbing.
Obviously, the OS would need to make use of all these processors: my understanding is that more than two processors are pretty much wasted in OS X. On the other hand, I also gather that microkernel-based systems generally handle multiprocessing better than monolithic-kernel OSs. So if Apple wanted to go with this strategy, I presume that OS X could be modified relatively easily to accommodate it.
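(To make the diminishing-returns point concrete, here's a quick Amdahl's-law sketch in Python; the 80% parallel fraction is a number I've made up for illustration, not a measured OS X figure:)

    # Amdahl's law: if only a fraction p of a task runs in parallel,
    # n processors give a speedup of 1 / ((1 - p) + p / n).
    def speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    # p = 0.8 is an illustrative assumption, not a measured figure.
    for n in (1, 2, 4, 8):
        print(n, "CPUs:", round(speedup(0.8, n), 2), "x")
    # -> 1.0x, 1.67x, 2.5x, 3.33x: going from 1 to 2 CPUs buys far
    #    more than going from 4 to 8.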
My best guess is that this idea is a no-goer because of relatively obscure (to me) things like bus architectures and memory controllers. Am I on the right track here?
Comments
Hack the planet.
The problem for Motorola in your scenario is supplying all of those chips, particularly at the high end. Although the supply of 1 GHz chips seems good now, historically the high-end chips are in the shortest supply. So a more likely scenario would be Apple producing 1/2 to 1/4 of the volume of PowerMacs while Motorola tries to keep up with production. To back up this argument: where are the 800 MHz G4 upgrade cards from Sonnet? They aren't here because Motorola is selling all their spare 800 MHz chips to Apple.
Or perhaps Apple should set up its own.
I mean, Apple's got enough cash to build a new processor plant - you need a couple billion and Apple's got at least $4 billion.
Of course, Apple'd have to explain the cost - and the enormous R&D. Hmm... well, if Apple made its own chips, it could at least pour enough money into it to make sure that the things got STINKING FASTER over time.
Just remember, from the OS perspective, our current desktop machines are SMALL. The OS was designed to be scalable up to hardware with 2 terabytes of RAM, 16 processors (or 32 on Mach 4), and hundreds of terabytes of mass storage.
An Xserve with 1.6 terabytes of storage only woke the OS up enough to let out a little yawn. Ho hum....
Yes, microkernels are better. Period. As new paradigms of storage, processing, and networking come into existence, small fragments of the OS get rewritten and reimplemented, unlike a monolithic core that has to be replumbed every time a change is needed. The monolithic kernel may be a bit more optimized, but it's not as flexible. Ever wonder why there are so many versions of Linux on so many different versions of the kernel? Talk about market pollution.
I could have expressed myself a little more clearly in my initial post. I don't think that there is any inherent limitation in OSX that stops Apple from shipping quads or greater. If the hardware folks wanted to make quads, I'm sure that the OS people would have no trouble releasing the appropriate patches in time.
Let me rephrase the question. Apple hasn't released any quads because:
- the memory controller wouldn't recognise the additional CPUs
- the memory controller wouldn't be able to cope with the cache coherency issues
- the memory controller would be so flummoxed by the additional CPUs that there would be no performance gain
- substitute "memory controller" with "MPX bus" in any of the above
- it's all doable, but the costs would be uneconomical
Which of these propositions (or combination thereof) is closest to the mark?
None.
The MPX bus can move ~1 GB/sec. A single G4 working on a memory-intensive process can consume >1 GB/sec. Two G4s working at the same time will deliver some performance improvement, because not all tasks are memory bound, but that memory-bound task will be no faster. Three, four, or more G4s will mostly be sitting around twiddling their thumbs, waiting for the MPX bus to get them more data. The extra computing power you get isn't worth the extra money or the heat generated in the case by 4 processors.
Would you pay an extra $1500 for a machine that wasn't any faster, but raised your electrical bill?
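To put the bus-saturation argument in back-of-the-envelope form (a toy Python model; the ~1 GB/sec bus figure and the premise that a single G4 can consume it all are from the paragraph above):

    # n G4s sharing one ~1 GB/sec MPX bus. For a purely memory-bound
    # task, throughput scales only until the bus saturates.
    BUS_GBPS = 1.0      # approximate MPX bus bandwidth, per the post
    DEMAND_GBPS = 1.0   # premise: one G4 alone can consume ~1 GB/sec

    def effective_cpus(n):
        return min(n, BUS_GBPS / DEMAND_GBPS)

    for n in (1, 2, 4):
        print(n, "G4s do the work of", effective_cpus(n))
    # -> always 1.0: once a single CPU saturates the bus, the extra
    #    processors just stand in line waiting for data.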
PPC heading this way?
Wonder if IBM will be making a dual-core G3 with a SIMD unit in the next year?
The way IBM is going with the G3 seems to suggest that width, not length, is how PPC will play 'who's the daddy' over x86, man.
As Apple heads into 3D, server markets, etc., it'll be interesting to see where PPC goes in the next year. MHz-wise, it appears to be going nowhere.
So, something else must be in the offing.
I'm intrigued by the fact that Apple sits on the HyperTransport consortium. Will that come after a RapidIO setup, or instead of it?
With the x86 Hammer around the corner, one can only guess that Apple has something equally compelling in the next 6 months. The RapidIO 'G5' may be some kind of retort.
That's probably the Apple 'tower' I buy. But my gut says I'm going to have to wait for it.
A dual 1.2 GHz DDR G4 ain't going to cut it. It's already out of date. (Those rumours are old already...)
Lemon Bon Bon
Right now, if Apple offered a 1.3 GHz G4 on a 133 MHz bus, you're facing a 10-to-1 ratio. Apple can lessen the effects of this ratio with DDR L3 caches, but the fact remains that if you add a second, or even a third and fourth, processor... you're screwed. Right now I'd say the dual-GHz boxes are probably pushing the bus to the top. Proper system balance is key to total performance characteristics. 10 to 1 isn't very balanced.
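To put rough numbers on that (a Python scratchpad using the figures from this post; the even-sharing model is a simplification):

    # Core clock vs. bus clock: the "10 to 1" ratio.
    core_mhz, bus_mhz = 1300, 133
    print(round(core_mhz / bus_mhz, 1))   # ~9.8 core cycles per bus cycle

    # Each of n CPUs sees only 1/n of the bus, so the effective
    # core-cycles-per-bus-cycle ratio per CPU is n times worse.
    for n in (1, 2, 4):
        print(n, "CPUs:", round(n * core_mhz / bus_mhz, 1), "to 1")
    # -> 9.8 to 1, 19.5 to 1, 39.1 to 1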
> The MPX bus can move ~1 GB/sec. A single G4 working on a memory-intensive process can consume >1 GB/sec. Two G4s working at the same time will deliver some performance improvement, because not all tasks are memory bound, but that memory-bound task will be no faster.
What is this "MPX bus" thing? Is that a new memory bus standard or something? Is there any chance of using parallel channels of the thing (similar to a dual channel Rambus scheme)? Why not just use 4 channels of Rambus then? That would be worth 6.4 GB/s of bandwidth (more if you count the newer, faster Rambus standard), and if you interleave 4 channels (instead of the current 2) that should give you competitive latency characteristics, no?
4 channels would probably be pretty pricey, then right? I guess you could endeavor to make it cheap by consolidating the whole deal on a single chip and making that chip really small. Also you would need to install your memory modules in quantities of 4. Not too big of a deal when performance is the 1st priority, but a bit of a pain for the casual user. Perhaps they could design it so the memory architecture is "tolerant" with just 1 or 2 memory cards installed (you just get less bandwidth and more latency).
Aside from the propensity for Apple to put such "hardcore" hardware in a desktop product, maybe the answer has always been there for us- distributed computing. Though not logistically trivial in of itself, it seems more logistically practical than using the "memory controller from Hell" as a matter of routine in your entire mainstream desktop product line. I'm just rambling, of course, but I am somewhat intrigued by the idea of a 4-channel Rambus system, however.
I know that fundamental latency cannot be solved by multi-channels, but I've always heard that interleaving of multiple channels can somewhat offset the effects of latency when you are looking at an entire system where many sequential and random accesses are occuring.
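Here's a toy Python sketch of the 4-channel idea. The 1.6 GB/s per-channel figure is standard PC800 Rambus; the round-robin cache-line mapping is just my guess at how the interleaving might be laid out:

    CHANNEL_GBPS = 1.6                  # PC800 RDRAM, per channel
    print(4 * CHANNEL_GBPS, "GB/s")     # 6.4 GB/s aggregate, as above

    # Interleaving: consecutive cache lines land on different channels,
    # so one channel's transfer can overlap another's access latency.
    def channel_for_line(line, channels=4):
        return line % channels

    print([channel_for_line(x) for x in range(8)])
    # -> [0, 1, 2, 3, 0, 1, 2, 3]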
Hey, if Jeff Goldblum thinks I should, who am I to argue?
Thanks for setting me straight.
Why doesn't the Apple of today take a look around at whatever is happening? Maybe someone out there is designing a processor or has an idea that seemingly breaks the rules? There was a time when Apple would have scoured the earth for it - now it could even fund it. Yet it's all down to the MHz myth. Steve tells us that Intel's extra 800 MHz+ means nothing, yet when he bumps up the PowerMac line by 200 MHz we're supposed to think we have a much superior machine. He can't have it both ways.
Face it, Moto will never give Mac users what they need or want. Long term, Apple has to look elsewhere. It needs a techno/engineering equivalent of Jonathan Ive.
> Long term, Apple has to look elsewhere. It needs a techno/engineering equivalent of Jonathan Ive.
Great... an egg-shaped processor... woo hooo...
I hear ya... I stated something similar in another thread. I wish they could "find" a new chipmaker and "killer" process that allows for an AMD/Intel killer... in the range of 200%-400% faster or something. And I'm not talking MHz-myth speed... I'm talking raw horsepower.
- Scott
> Apple has in the past (when we really loved it) taken bizarre new stuff and added it to their systems - even if the rest of the industry hadn't even had a chance to give it a whirl. Yet with processors, Steve seems to have a blind spot. Can it really be true that apart from IBM, Moto, Intel, AMD, and Transmeta, nothing else is going on with regard to revolutionary processor or computing design on the planet?
> Why doesn't the Apple of today take a look around at whatever is happening? Maybe someone out there is designing a processor or has an idea that seemingly breaks the rules? There was a time when Apple would have scoured the earth for it - now it could even fund it. Yet it's all down to the MHz myth. Steve tells us that Intel's extra 800 MHz+ means nothing, yet when he bumps up the PowerMac line by 200 MHz we're supposed to think we have a much superior machine. He can't have it both ways.
> Face it, Moto will never give Mac users what they need or want. Long term, Apple has to look elsewhere. It needs a techno/engineering equivalent of Jonathan Ive.
3 words:
Legacy Code Base
Changing the CPU architecture for a mass-market product isn't like changing Nokia faceplates.
Even if they did find one, and could get it to run X, it would require HUGE amounts of dollars and work.
If they did, it would have to support all the legacy stuff, à la Itanium's support for x86. And we all know that's catching on like wildfire!
Zaz