"Apple must realize it cannot continue to allow the MHz gap to widen, MHz Myth or otherwise. They surely have an answer, whether it means taking development in-house, going with IBM completely or going with a different player. If Apple had to switch chip lines in order to remain in business, it would. Any switch of that magnitude would have to include PPC instruction set compatibility, since another code transition at this point in time would be a disaster."
I've moaned endlessly about the MHz thing.
But it's not just that.
I'd happily pay the 'apple premium', if performance was 'equal'.
I walked into a PC World store the other day.
Athlon 2.1 XP, 512 megs of RAM, GeForce 4 (not the top 'Ti' model though...), an 80 gig hard drive and a monitor for £1,200 inc VAT.
I couldn't get Apple's low end 'power'Mac for that! Said x86 spec would kick the snot out of it.
('Sod off and buy a PC!')
I've already got a PC, otherwise I'd have bought it!
Why moan? (Because I gotta...)
It's the performance 'gap'... not just the MHz. You can stick two G4s against the 2.1 XP and you're still second. Do the maths: 2 FPUs vs the Athlon's 3. x86 land is winning the brute force argument.
I don't mind low MHz, but paying over twice as much money, with no monitor and 'time and a half' performance, sucks. Low MHz? Fine. But can we have a few more FPUs in the G4, more integer units, 400 MHz DDR and a faster bus, and at least a 15" LCD included with the top-end spec instead of the lousy rebates.
Yeah. My moaning won't do any good. But neither will any of the other 'well reasoned' comments on these boards. You either buy or vote with your wallet. I'm on the latter.
(Or if you're Amorph, you lodge a formal letter of complaint at Apple's feedback site... )
And your moaning about this in every single thread in Future Hardware accomplishes nothing but to derail any topical discussion and bring down the overall quality of the boards.
[quote]And your moaning about this in every single thread in Future Hardware accomplishes nothing but to derail any topical discussion and bring down the overall quality of the boards.
I'm getting really tired of it.[/quote]
That's because most of the threads in FH are essentially moot, given the simple conclusion intrinsic to them all.
This pointlessness derails the quality of the boards.
[quote]It's time to think out of the box and move towards some new technology.[/quote]
I remember reading some months ago about some "clockless" chips from Philips. IIRC the article was about embedded markets (cell phones and such), but it strongly pointed out the performance gains and the reduction in power requirements, as opposed to clocked chips, where the whole chip has to run at the same frequency all the time. That's why all the major chip designers are struggling, for portables, to design CPUs that can shut down unused units or vary the clock rate on the fly...
I'm no CPU guru, can somebody share his wisdom on this clockless thing? And, if such chips exist and really have these advantages, how long (very long, I s'pose) would it take for Moto or IBM to come out with a clockless PPC design? In time to save Apple from the 6 GHz PR disaster otherwise scheduled to take place in 2004?
"I'm no CPU guru, can somebody share his wisdom on this clockless thing? And, if such chips exist and really have these advantages, how long (very long, I s'pose) would it take for Moto or IBM to come out with a clockless PPC design? In time to save Apple from the 6 GHz PR disaster otherwise scheduled to take place in 2004?"
I've heard about the 'clockless' chip thing. Sounded interesting. Can't recall the link to it.
Well, I think 'dual' processing is the way to go... the dual-core thing seems to be the way PPC will go in the next few years.
IBM and Moto have made it plain and clear what their markets are and are developing CPUs accordingly. Apple seems to have little choice. Bad in terms of 'MHz' and raw power, but good in terms of the pending RapidIO framework. Add multiprocessing to that and things are looking much better. It's just not likely to be now... more likely next year some time.
The last few years haven't been good for the PPC, CPU-wise, but ironically the design principles of the 'AIM' alliance (elegance, running cool, small chip size) may pay off in the long term.
It seems most of the 'big' high-end chip makers focus on 'width' not 'length', and MHz doesn't seem to matter here.
And when you get into 4 and 8 processors in multiprocessing...I sometimes wonder if slight performance differences here and there on a per chip basis will really matter?
The modular and pending Book 'E' architecture may put 'us' near enough on 'level' or better terms with the x86 mob.
The 'dual core' G3 with a SIMD unit idea floating out there on some article pages makes me think that future iBooks could be real speed demons...
For now? Frustrating. But the future offers a glimmer of hope, I feel.
Yeah, I hear a few people are still working on clockless chips. There was a report a few months ago that Intel experimented with a clockless version of the Pentium that ran 3 times as fast as the standard version. I don't recall why Intel didn't use this design. Look it up in your fave search engine: I'm sure you'll find lots of useful stuff.
<feeble attempt to get this thread back on track>
[quote]The MPX bus can move ~1 GB/sec. A single G4 working on a memory intensive process can consume >1 GB/sec. Two G4s working at the same time will deliver some performance improvement because not all tasks are memory bound, but that memory bound task will be no faster. Three, four, or more G4s will mostly be sitting around twiddling their thumbs waiting for the MPX bus to get them more data. The extra computing power you get isn't worth the extra money or the heat generated in the case by 4 processors.[/quote]
OK, the MPX bus is the villain here. I assume that it could be replaced with either HyperTransport or Rapid IO. Once Apple moves to one of these buses (or a funky combination of both) do quads and greater suddenly become a viable proposition?
[quote]I think 'dual' processing is the way to go... the dual-core thing seems to be the way PPC will go in the next few years.[/quote]
OK, and that should fix the issue hardware-wise. But then, what about software? AFAIK an OS like X can schedule different threads across multiple CPUs, spreading the workload more evenly.
I know nothing about Cocoa, and much less than that about Carbon, so the following are simply my assumptions, and I welcome all the necessary corrections:
Carbon apps I think face more difficulties in taking advantage of multiple processing units. At least from a GUI perspective, Carbon apps are not yet multi-threaded: pull down a menu or click and hold on a title bar (is this a heritage of AppleEvents and the MainEventLoop() ?) inside a Carbon app and you'll notice that the rest of that same application freezes just like in OS 9. I noticed Cocoa apps don't follow this behaviour--does this mean anything from a multi-threading/SMP PoV?
Under BeOS almost every element of an app was coded as a different thread--and I think that's why BeOS performance scaled so well across multiple CPUs. Under OS X, is behaviour like this tied to the API an app is built with? And assuming Cocoa is much better suited to building multi-threaded apps, will current Cocoa apps take advantage of 4 or more CPUs (setting aside for a moment the current memory bottleneck)?
Don't get me wrong, I'd love to see multicore PPC chips, or 4 and 8 way systems from Apple--my only doubt is: to what extent can OS X, Carbon and Cocoa apps in their present state take advantage of such a monster system? And if changes are needed, what would their magnitude be under each of these instances (OS, Carbon apps, Cocoa apps)?
"Clockless chips": The so-called "clockless" chips have been discussed and in development on and off for nearly a decade. In all cases I've heard details of "clockless" was something of a misnomer -- rather than have no clocks, these things actually had many internal clocks. Each part of the CPU runs on its own clock. Traditional single-clock CPUs are limited by the maximum clock rate of the slowest part of the processor, which makes it a challenge to ensure that all parts run at the highest possible clock rate even if that's not the best way to design it. In a "clockless" design each part can run as fast as it is able to and no faster. I believe some experiments have been done with designs which change their clock rate depending on what they are doing (much like the power saving modes on current chips). The Pentium 4 actually uses a limited version of this in that its integer units run on a higher speed clock than the floating point units.
MPX: The main reason that larger scale multiprocessing isn't effective with the G4 is memory bandwidth... there isn't enough of it. If there was suddenly >6 times the bandwidth (i.e. HT) then quad and greater processor configurations would become effective. Whether they are practical from cost, heat, and power perspectives is another matter. I don't really believe that we'll see desktop machines with more than 2 chips in them, although those could become multicore and thus give you effectively 4-way SMP.
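Rough back-of-the-envelope version of that bandwidth argument, as a sketch only. The ~1 GB/s bus figure and per-CPU demand are just the ballpark numbers quoted in this thread, and the HyperTransport-class figure is a stand-in, not a measurement:

[code]
/* How many CPUs can a shared bus actually keep fed?
 * Figures are the rough ones from this thread, not measurements. */
#include <stdio.h>

int main(void) {
    double bus_gb_s   = 1.0;  /* ~MPX-class shared bus                 */
    double ht_gb_s    = 6.4;  /* hypothetical HyperTransport-class bus */
    double cpu_demand = 1.0;  /* GB/s one memory-bound CPU can consume */

    for (int cpus = 1; cpus <= 8; cpus *= 2) {
        double need    = cpus * cpu_demand;
        double mpx_fed = bus_gb_s < need ? 100.0 * bus_gb_s / need : 100.0;
        double ht_fed  = ht_gb_s  < need ? 100.0 * ht_gb_s  / need : 100.0;
        printf("%d CPUs want ~%.1f GB/s: MPX-class bus feeds %.0f%%, HT-class feeds %.0f%%\n",
               cpus, need, mpx_fed, ht_fed);
    }
    return 0;
}
[/code]

Once the shared bus is saturated, extra processors just raise the percentage of time each one spends stalled, which is the "twiddling their thumbs" point above.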
Carbon/Cocoa SMP: I believe that you are correct that Carbon doesn't yet have a multi-threaded event model. This could happen over time, but I suspect that developers would need to patch their apps even if only to "turn on" this feature -- turning it on "behind their backs" would be a significant source of bugs. I didn't think Cocoa was any different in this respect though... at least not according to the documentation that I read last year. The difference is that Cocoa apps were written from scratch on a system that has always been multi-threaded, whereas Carbon apps were originally developed for a single threaded OS. The event model doesn't need to be inherently multi-threaded in order to build a multi-threaded app -- simply running the app's main processing in a thread other than the event handling thread can give it the appearance of continuing to function "in the background". Quartz and OpenGL are multi-threaded, I believe, which means rendering can continue while the GUI is doing its thing.
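For what it's worth, the "push the heavy lifting off the event thread" trick is plain ordinary threading. Here's a minimal POSIX-threads sketch, nothing Carbon- or Cocoa-specific, with a printf loop standing in for the real event loop:

[code]
/* Minimal sketch: keep the event loop responsive by handing the heavy
 * work to a worker thread.  Plain pthreads; compile with -lpthread. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static volatile int work_done = 0;

/* Stand-in for a long render/filter/export job. */
static void *heavy_work(void *arg) {
    (void)arg;
    sleep(2);
    work_done = 1;
    return NULL;
}

int main(void) {
    pthread_t worker;
    pthread_create(&worker, NULL, heavy_work, NULL);

    /* Fake event loop: keeps "responding" while the worker grinds away. */
    while (!work_done) {
        printf("handling UI events...\n");
        usleep(500000);   /* half a second between fake events */
    }

    pthread_join(worker, NULL);
    printf("heavy work finished; time to redraw the window\n");
    return 0;
}
[/code]

On a dual machine the scheduler can put the worker on the second CPU, which is why even a single app structured this way sees some SMP benefit.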
[quote]I'm with qazII and Scott F on this one. I think what Apple needs is to employ some bright young things who are not necessarily chip designers and engineers to sit in a room day after day and say things like:
"what if . . ."
Then let the engineers and tech whizzes try to solve the problems put to them. What Apple needs is an insanely great change in how we think about processor design. For example, does a computer really need a single CPU responsible for everything? Can a motherboard be a CPU, if you like, with its various "engines" functioning as a system?
At the mo it seems as if Apple just employs a load of people whose sole job is to say "we can't because . . ." There must be people left in the world who no longer believe the earth is flat.[/quote]
Just registered. Someone mentioned the Scientific American article about multiple processors being just more cache.
It wasn't just more cache. The "iRAM" idea was that each memory chip would have a (relatively) simple embedded processor, whose actions would be coordinated by a control program (maybe running on another processor); think orchestras and conductors. If you plugged in another 256 MB memory module, you automatically added another processor to your system, one surrounded by that memory, i.e. with very fast access to it. Somewhat like all memory being cache memory.
Double the memory and you double the processing power. Use 8 memory chips and you have an 8 processor system etc.
- Not that Apple has anything like this
- It's just that that was the original idea.
It was called iRAM, short for "intelligent RAM".
Excellent idea, maybe someone will implement it someday.
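Purely as a thought experiment (this isn't how any shipping hardware works, and every name and number below is made up), the "processor per memory module" idea boils down to each module's little processor only ever touching its own local data, with a conductor stitching the partial results together:

[code]
/* Thought experiment: "intelligent RAM" as code.  Each memory module has
 * an embedded processor that works only on data living in that module.
 * Purely illustrative; all names and sizes are invented. */
#include <stdio.h>

#define MODULES 4   /* how many "intelligent" modules are plugged in */
#define WORDS   4   /* tiny stand-in for each module's local RAM     */

struct module {
    double data[WORDS];   /* data physically stored on this module     */
    double partial;       /* result its embedded processor computes    */
};

/* What one module's embedded processor does, using only its own RAM. */
static void module_compute(struct module *m) {
    m->partial = 0.0;
    for (int i = 0; i < WORDS; i++)
        m->partial += m->data[i] * m->data[i];
}

int main(void) {
    struct module dimm[MODULES];
    double total = 0.0;

    /* Fill each module's local memory with some data. */
    for (int m = 0; m < MODULES; m++)
        for (int i = 0; i < WORDS; i++)
            dimm[m].data[i] = m + i;

    /* The "conductor": in real hardware the modules would all crunch in
       parallel; here we just show that the work follows the data. */
    for (int m = 0; m < MODULES; m++) {
        module_compute(&dimm[m]);
        total += dimm[m].partial;
        printf("module %d: partial sum of squares = %.1f\n", m, dimm[m].partial);
    }
    printf("combined result from %d module-processors: %.1f\n", MODULES, total);
    return 0;
}
[/code]

Plug in more modules and both the memory and the number of little processors grow together, which is the "double the memory, double the processing power" point above.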
[quote]Quartz and OpenGL are multi-threaded, I believe, which means rendering can continue while the GUI is doing its thing.[/quote]
The same is true for OpenTransport (networking stuff) and some file-I/O-stuff.
You can build a preemptive multithreaded app in Carbon if you take care to put all the UI-relevant code in one cooperative thread.
Some problems, however, are not possible to break down into pieces suitable for multithreaded architectures - so in the end a multiprocessor machine may be significantly slower than a single-processor monster.
[quote]Some problems, however, are not possible to break down into pieces suitable for multithreaded architectures - so in the end a multiprocessor machine may be significantly slower than a single-processor monster.[/quote]
OS X has enough tasks to keep one processor busy, so a dual-processor system should always provide an advantage. More than two is much less of an advantage, though.
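That's basically Amdahl's law in action; a quick worked example (the serial fractions below are arbitrary) shows why the second processor buys a lot more than the third or fourth:

[code]
/* Amdahl's law: speedup = 1 / (serial + (1 - serial)/n).
 * The serial fractions are arbitrary example workloads. */
#include <stdio.h>

int main(void) {
    double serial[] = { 0.05, 0.25, 0.50 };  /* fraction that can't be threaded */
    int    cpus[]   = { 1, 2, 4, 8 };

    for (int s = 0; s < 3; s++) {
        printf("serial fraction %2.0f%%:", serial[s] * 100.0);
        for (int c = 0; c < 4; c++) {
            double speedup = 1.0 / (serial[s] + (1.0 - serial[s]) / cpus[c]);
            printf("  %d CPUs -> %.2fx", cpus[c], speedup);
        }
        printf("\n");
    }
    return 0;
}
[/code]

Even with a quarter of the work stuck on one thread, going from 1 to 2 CPUs helps a lot, while going from 4 to 8 barely moves the needle.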