Comments
Originally posted by Outsider
No, the best analogy is that the G4 is like a Ferrari with a two-gear transmission.
Even better:
G4:
Lots of power, but can't transfer it to the system.
=Ferrari with lawnmower-sized tires.
Matsu, I see the G4 solely inhabiting the consumer range: iMac and iBook. The problem is that it will be a bit awkward with Power Macs at 3-4 GHz, and iMacs at 1-2 GHz. I just don't see how Apple could market the iMacs well with such a performance gap.
Give Moto the boot, I say!
...of course when you compare it to the Mc. F1 (G5), well, haha.
then again this is all just assuming... that the G5 will, in the next couple of years, become the Ferrari.
Originally posted by Junkyard Dawg
Matsu, I see the G4 solely inhabiting the consumer range: iMac and iBook. The problem is that it will be a bit awkward with Power Macs at 3-4 GHz, and iMacs at 1-2 GHz.
If Moto is successful at getting an SOI-CMOS 90nm process onto 300mm wafers this fall, we will see G4s at 2-2.5 GHz. This might require them to add some pipeline stages, but since the G4 has quite a short pipeline, that's not really a big problem.
And of course, they would have to evolve to a massively better FSB interface really quickly.
It could well be that Moto's situation turns out to be quite tragic: a massively faster G4 almost ready, just as Apple jumps ship for the high-speed, high-margin chips.
The problem is that the 970 is, from the ground up, intended to be a processor for high-performance computer systems. The G4, however, is aimed at appliances and signal processing; it doesn't really require a 1 GHz bus to control a car's fuel injection. Apple is just another customer for Motorola, while they're the prime customer for IBM (along with IBM itself).
Thus it's no surprise that the development focus doesn't match between the two companies.
Originally posted by Big Mac
Has Motorola actually stated it plans to take its G4 to RapidIO? I know it has been rumored for quite some time, but there was little hard evidence backing up the vapor. The lack of a modern bus on the G4 is my most pressing concern, since it will continue to drag the consumer line down.
They've committed to RapidIO - why not, since they invented it! - but there's nothing on a roadmap right now. I note that the Mot spokesman in the article I read kept referring cryptically to "a PowerPC" which, so soon after they laid out the G4's roadmap for the next several years, probably means that Apple's got designs on it.
As for the difference between Mot and IBM, I think it's fairly simple: when AIM started making all sorts of noises about how much better PowerPC was going to be than x86, I don't think it occurred to them that Intel would pursue a wattage-no-object design model. Motorola is still making CPUs the way they've always made CPUs, more or less (although the 7455 just might be the most bandwidth-starved CPU ever used in a desktop), but because of Intel their design model is no longer standard on the desktop. IBM had gone and made the huge, hungry POWER4, so it was easier for them to adapt to the new PC landscape by repurposing the high-end technology they'd already built - which, to me, is a better way of doing things than either Mot or Intel is attempting.
I'm not expecting anything like a 970 out of Motorola. It's not their core competency. I'm basically counting on the fact that it really wouldn't take all that much to turn the current G4 into an excellent performer at 10W with a low price to boot. Die shrink; a bit of Mot's process tech; memory controller on board + RapidIO; and you're there. From there, Mot can add execution units fairly easily, or even deepen the pipeline and add enough out-of-order execution capability (and a robust enough ability to predict branches) to make simultaneous multithreading worthwhile. That last one would involve a major redesign, but it would certainly be worth it. The performance would be a major improvement over the G4 in the current PowerBooks, in addition to letting them run cooler and cheaper, and that's a good thing.
The biggest improvement (IMO) IBM is reportedly working on (according to some folks at Ars who got to talk to Peter Sandon at WWDC) lies in Altivec. IBM is doing something that Motorola should have, which is to tune Altivec in such a way that it lends itself to auto-vectorization. Right now you hand-code Altivec, and the few compilers that try to auto-vectorize are very poor at the job. This might mean less emphasis on the performance a hand-coder could extract out of Altivec, but it would benefit us greatly in the long run by opening up all the apps the compiler deems to have vectorization potential.
On another note, since the compiler would vectorize, certain parts of the SPEC test would finally benefit from Altivec. SPEC might actually become a more relevant bit of info on our platform.
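To make the hand-coding point concrete, here's a rough sketch in plain C (the function names, array length, and alignment assumptions are just made up for illustration). The first loop is the sort of thing an auto-vectorizing compiler could turn into Altivec code on its own; the second is roughly what we have to write by hand today with Altivec intrinsics to get the same effect.

#include <altivec.h>   /* Altivec intrinsics; build with something like gcc -maltivec */

#define N 1024         /* illustrative size, assumed to be a multiple of 4 */

/* Plain C: the iterations are independent, so an auto-vectorizing
   compiler could generate Altivec code for this loop by itself. */
void scale_add_scalar(float *dst, const float *a, const float *b, float k)
{
    int i;
    for (i = 0; i < N; i++)
        dst[i] = a[i] * k + b[i];
}

/* Hand-coded Altivec: what currently has to be written by hand.
   Assumes dst, a and b are 16-byte aligned. */
void scale_add_altivec(float *dst, const float *a, const float *b, float k)
{
    vector float vk = {k, k, k, k};   /* splat the scalar into all four lanes */
    int i;
    for (i = 0; i < N; i += 4) {
        vector float va = vec_ld(0, a + i);      /* load 4 floats from a */
        vector float vb = vec_ld(0, b + i);      /* load 4 floats from b */
        vector float vr = vec_madd(va, vk, vb);  /* a*k + b, fused multiply-add */
        vec_st(vr, 0, dst + i);                  /* store 4 results */
    }
}

That's the gap a vectorizing compiler would close: the first version is all most programmers will ever write.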
Originally posted by razor
On another note, since the compiler would vectorize, certain parts of the SPEC test would finally benefit from Altivec. SPEC might actually become a more relevant bit of info on our platform.
Actually, probably less relevant? Unfortunately (as cool and great as it would be to have a vectorizing compiler), I think such SPEC numbers would be even less reliable benchmarks of CPU speed. Not all code can be vectorized; in fact, quite a bit can't. I don't doubt that such a compiler would produce some great SPEC numbers, and that a vectorizing compiler in general would be great, but I don't think that inflated SPEC numbers would help to clarify which was the faster chip. Most Intel people would simply see a vectorizing GCC as an Apple-specific compiler and insist that its SPEC numbers be compared to Intel's compiler numbers.
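To illustrate what I mean (a toy sketch, names made up): the first loop below has completely independent iterations, so a vectorizer can handle it; the second has a loop-carried dependency -- each result needs the previous one -- so a straightforward vectorizer has to leave it scalar.

#define N 1024

/* Vectorizable: dst[i] depends only on a[i] and b[i]. */
void add_arrays(float *dst, const float *a, const float *b)
{
    int i;
    for (i = 0; i < N; i++)
        dst[i] = a[i] + b[i];
}

/* Not straightforwardly vectorizable: a running sum is a loop-carried
   dependency, so iteration i can't begin until iteration i-1 is done. */
void running_sum(float *dst, const float *a)
{
    int i;
    dst[0] = a[0];
    for (i = 1; i < N; i++)
        dst[i] = dst[i - 1] + a[i];
}

A real code base is full of the second kind (pointer chasing, branchy logic, serial dependencies), which is why vectorization only lifts certain parts of a benchmark suite.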
Either way, benchmarks are pretty worthless and I'm all in favor of vectorizing compilers, but if you think that the current SPEC debate is bad, then wait until you start introducing some back end compiler optimizations and see how things go!
Originally posted by Yevgeny
Most Intel people would simply see a vectorizing GCC as an Apple-specific compiler and insist that its SPEC numbers be compared to Intel's compiler numbers.
I certainly don't care much for SPEC myself, and sure, it won't make decoding or comparing performance numbers any easier or better. But SPEC is an entirely compiler-dependent benchmark and purports to measure not hardware performance but system performance (hardware + compiler). SSE2 on Intel gets used by their compilers, so why would a more auto-vectorization-friendly Altivec be a bone of contention?
Yeah, not everything can be vectorized, but what can will benefit from IBM's approach of making Altivec less dependent on hand-coding.
Originally posted by Yevgeny
Either way, benchmarks are pretty worthless and I'm all in favor of vectorizing compilers, but if you think that the current SPEC debate is bad, then wait until you start introducing some back end compiler optimizations and see how things go!
Intel's back end already does all sorts of things so I don't see how it would really be any different.
My guess, given the speed at which the G5 will reach 3 GHz in twelve months, is that the 980 will indeed be called the G6 and will be for the Xserve when it's released late next year.
Then you've got the 3 GHz G5 duals, and Xserves with the 980 that are even faster.
Then the Power Mac will change over to the 980 six months after the Xserve... unless the G5 scales up better than they expect, which is what's been happening...
Originally posted by Programmer
Intel's back end already does all sorts of things
Well I'm glad somebody appreciates the double meaning.
There are a lot of assumptions being made about future cores, memory controllers, processes, features from the POWER5, etc and the timeframe in which they're going to happen. Almost everybody is very optimistic about it -- I think overly so. IBM is going to do good things, but the stuff being bantered around is too much to expect without further info from IBM.
Originally posted by Programmer
Well I'm glad somebody appreciates the double meaning.
There are a lot of assumptions being made about future cores, memory controllers, processes, features from the POWER5, etc and the timeframe in which they're going to happen. Almost everybody is very optimistic about it -- I think overly so. IBM is going to do good things, but the stuff being bantered around is too much to expect without further info from IBM.
Be careful Programmer..."you're treading on my Dreams"
Originally posted by Programmer
There are a lot of assumptions being made about future cores, memory controllers, processes, features from the POWER5, etc and the timeframe in which they're going to happen. Almost everybody is very optimistic about it -- I think overly so. IBM is going to do good things, but the stuff being bantered around is too much to expect without further info from IBM.
IBM claims to have prototypes of the next generation. It's usually 12-18 months from first silicon to production, no? Now the question would be, do they mean the 90nm 970, or the POWER5-derived CPU we're calling the 980, or something else?
In the early articles on Fishkill, IBM seemed impatient to get to 90nm. I seem to remember something about "months". So, given that IBM appears to be justifiably optimistic about its fab's capabilities, I'm expecting a shorter than average lifespan for the 130nm 970. But not for the 970 itself. I don't remember reading any approximate timeline for the POWER5 to appear.
Originally posted by Programmer
Intel's back end already does all sorts of things so I don't see how it would really be any different.
My point is that we would be comparing compilers more than comparing CPUs. This, of course, is the problem with SPEC: it is a comparison of compiler + CPU, and there is no way to separate the two nowadays. Making a vectorizing compiler (which I am all in favor of) would simply muddy the already muddied SPEC benchmarks. Of course, if PPC + vectorizing compiler could beat Intel + Intel compiler, then Apple could at least say that they have a better CPU+compiler combo. And of course, the Intel folk would just say that the vectorizing compiler was giving artificially high SPEC results.
In the end, benchmarks are all smoke and mirrors, but a vectorizing compiler would be a good thing even if it only speeds up certain portions of code.
The latest info regarding POWER5 from The Register:
http://www.theregister.co.uk/content/3/31467.html
What interests me is that IBM claims a quadrupling of performance over the POWER4+, but it's only expected to ship initially with a max clock speed of 2 GHz, roughly 12% more than what the POWER4+ is currently capable of scaling to.
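Rough arithmetic on that claim, assuming the 4x figure is per chip: if clock speed only contributes about a 1.12x gain, then roughly 4.0 / 1.12 ≈ 3.6x of the improvement has to come from somewhere other than clock -- presumably per-clock and system-level changes (things like simultaneous multithreading and a beefier memory subsystem).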
IBM claims to have prototypes of the next generation. It's usually 12-18 months from first silicon to production, no? Now the question would be, do they mean the 90nm 970, or the POWER5-derived CPU we're calling the 980, or something else?
Apple marketing can call it a G5, but the 970 is really a G1+ or G2 chip (if we call the Power4 the progenitor of the line). Would the analogy of the move from the 601 to subsequent iterations of chips apply in seeing how the 970 "fits into the whole scheme of things"?
Originally posted by Eric_Z
The latest info regarding POWER5 from The Register:
http://www.theregister.co.uk/content/3/31467.html
What interests me is that IBM claims a quadrupling of performance over the POWER4+, but it's only expected to ship initially with a max clock speed of 2 GHz, roughly 12% more than what the POWER4+ is currently capable of scaling to.
The article states the POWER5 will start its life on the 130nm process. Looks like they intend to replace POWER3 systems with POWER4+ and POWER4 with POWER5.
Originally posted by Outsider
The article states the POWER5 will start its life on the 130nm process. Looks like they intend to replace POWER3 systems with POWER4+ and POWER4 with POWER5.
With some of the "roadmaps" disappearing from the internet, it is a bit difficult to say which process the Power 5 & 980 will begin production on. The roadmap I recall seeing some time ago had the Power 5 & 980 debuting on the 90 nm process, not the 970. It now appears that the 90 nm process will be introduced with the 970+. It is less clear whether the Power 5 & 980 will stay on the 130 nm process at their introduction and then move to the 90 nm process, or whether they may be introduced on the 90 nm process. It is quite possible that this remains to be determined based upon how well the implementation goes in the 970+.
In any event, I concur with your belief that the Power 4 will wind up taking the position now occupied by the Power 3 and that the Power 5 will then take over from the Power 4.