<strong>Likewise we've had pretty much the same MB for 4 years.</strong><hr></blockquote>
What is that supposed to mean?! Macs have shipped with 32MB to 512MB of RAM in the past 4 years - it's cheap to buy, so what's the difference if you buy it yourself or they put it in for you?
Capacity has been 1-2GB in that time; how much do you want?
I think Apple should surrender in the desktop war and focus 60-70% of R&D towards laptops....where they actually have a small lead and a fighting chance....
<strong>I think Apple should surrender in the desktop war and focus 60-70% of R&D towards laptops....where they actually have a small lead and a fighting chance....</strong><hr></blockquote>
What's so great about PC hardware? Other than a DDR bus and evolutionary improvements like ATA-133, I don't really see many huge gains in motherboard tech over the last 2 years.
It's obvious that next-gen tech like HyperTransport, RapidIO and PCI Express is vying for a place on future motherboards. What I don't understand is why Mac users view HW as something that cannot be updated quickly. I could see processor problems, but I'm fairly confident that Apple will be in a position to be competitive when next-gen HW starts shipping.
Threads like this always devolve into bitching threads about Apple hardware. Perhaps AI should create a Doom and Gloom topic so that the worrywarts can congregate and moan in unison.
<strong>I just think Apple should have a long hard look at their business model. It's too rigid. Too rigid to reach 10%. I'd like to be proven wrong.</strong><hr></blockquote>
Could they support more BTO options? Methinks yes.
Some ramblings about IPC and performance, correct me if I'm wrong.
Instructions Per Second = Instructions Per Cycle x Clock Rate.
To make up for the difference in clock rate between the G4+ and IA32, the G4+ needs a vastly better IPC. (Divide IPS by one million and you get MIPS, which ain't a good cross-platform benchmark anyway.)
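As a sanity check on that formula, here's a toy calculation in Python. The IPC and clock figures are made up purely for illustration, not measured numbers for any real chip:

```python
def ips(ipc, clock_hz):
    """Instructions Per Second = Instructions Per Cycle x clock rate."""
    return ipc * clock_hz

# Hypothetical numbers only: a 1 GHz chip retiring 3 instructions per cycle
# versus a 2.8 GHz chip retiring 1.5 instructions per cycle.
low_clock_high_ipc = ips(3.0, 1.0e9) / 1e6   # 3000.0 MIPS
high_clock_low_ipc = ips(1.5, 2.8e9) / 1e6   # 4200.0 MIPS
print(low_clock_high_ipc, high_clock_low_ipc)
```

Even with double the IPC, the lower-clocked chip loses here, which is exactly why the G4+ needs a very large IPC edge to offset the clock gap.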
IPC has nothing to do (directly) with clock rate.
Branch prediction accuracy affects IPC: the fewer cycles wasted in processing (and then discarding) mispredictions, the closer the actual IPC is to the processor's optimum. Beefing up the branch prediction unit (BPU) can increase IPC but has a transistor cost: this is probably part of the story behind Hammer's increased IPC.
The commonly mentioned advantage of a short pipeline is that it wastes fewer cycles on a misprediction, as there are fewer stages of work to throw away. (However, each of those fewer stages has to do more work.)
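A back-of-the-envelope sketch of that tradeoff; every number below is an assumption chosen for illustration, not a figure from any real design:

```python
def effective_cpi(base_cpi, branch_frac, mispredict_rate, flush_cycles):
    """Cycles per instruction once branch-flush penalties are charged back.
    flush_cycles is roughly the number of pipeline stages thrown away."""
    return base_cpi + branch_frac * mispredict_rate * flush_cycles

# Assume 20% of instructions are branches and 5% of those mispredict.
short_pipe = effective_cpi(1.0, 0.20, 0.05, 7)   # ~1.07 CPI for a 7-stage flush
long_pipe = effective_cpi(1.0, 0.20, 0.05, 20)   # ~1.20 CPI for a 20-stage flush
print(short_pipe, long_pipe)
```

Since IPC is the reciprocal of CPI, the deep pipeline gives up more IPC per misprediction; the bet (the P4's bet, essentially) is that the higher clock rate the deep pipeline allows more than pays that back.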
The CPU will be working at its optimal IPC when instructions are being retired (completed) by all of the available units as quickly as possible. Adding more functional units (usually) increases IPC (probably Hammer's other IPC trick).
All of the CPU's functional units have to be kept busy and fed with data. With caches having high hit rates (around 90%), this isn't as tricky as it might appear from the CPU/FSB clock rate difference. Cache size and speed therefore give another IPC tradeoff. Having more registers (PPC: 32 general-purpose registers, x86: 8) is also useful for keeping units fed.
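To see why a 90% hit rate tames the CPU/FSB gap, here's a rough average-access-time sketch; the cycle counts are assumptions, not specs for any real machine:

```python
def avg_access_cycles(hit_rate, hit_cycles, miss_cycles):
    """Average memory access time in CPU cycles, weighted by hit rate."""
    return hit_rate * hit_cycles + (1.0 - hit_rate) * miss_cycles

# Assume the cache answers in 1 cycle and main memory takes 50.
print(avg_access_cycles(0.90, 1, 50))   # ~5.9 cycles: much closer to 1 than to 50
```

Even with memory 50x slower than cache, the average lands near the cache's speed, so the units stay fed far more often than the raw bus-speed gap suggests.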
Out-of-order execution can increase IPC by running independent code while waiting for other code's results, but (no surprise) is complex and costs transistors.
If we don't see even a hint of what's to come by January 2003, that's very worrying.
I think we'll have a hint by Oct. 14, and the new chip by the middle of next year. But who knows.
It's clear that Apple is cooking up their next-generation chip. The question that no one (perhaps not even Apple) can answer is when...
[ 09-11-2002: Message edited by: Hobbes ]
<strong>Because they will also have higher clockspeeds?</strong><hr></blockquote>
No. I'm sorry, but I don't think you're getting this totally right. Regarding the Hammer, it is already some 20-40% faster than an equally clocked Athlon running the same 32-bit code.
I think CPUs are not so simple: just as raw frequency can be a performance myth, so can pipeline depth. The P4's pipeline is certainly long, but it has its benefits (it's arguably the most powerful desktop CPU around). Some people say the P4 architecture will not last long after 2004, because there isn't the technology to keep pushing the frequency that much, but that is Intel's fault; it doesn't lie in the architecture itself.
I don't have the knowledge necessary to prove this, but as far as I can tell there are other factors beyond pipeline depth in determining the IPC rate of a CPU.
ZoSo
Apple seems to be in a constant state of suspense. As a long-time Apple watcher and Mac user, I have never known what Apple will do next. It can be very frustrating when making a purchasing decision.
I understand why Apple runs things this way. But it is hard on the consumer.
In the Wintel world, it's easy to figure out what chips the machines will be running in 6 months and how fast they will be. But on the Apple side, we don't even know what kind of chip we will be offered in 6 months or who will be making it. All we have is rumors.
This makes businesses uneasy. I would like to see Apple announce some type of intention. Vague comments about 'having options' seem to imply that other chips besides the G4 are being considered, but I'd like Apple to give us more. I don't need them to tell me details, but is it asking too much for a general plan?
Since rumors of the next-gen chip have been circulating, I have been holding off on a new machine as long as I can. But next year I'll be in the market. I hope there are signs of Apple's direction by then.
[ 09-11-2002: Message edited by: jimmac ]
<strong>There is no hardware roadmap. From this you can only look at previous performance. We have had a 750 MHz improvement in 3 years. We were stuck at 500 MHz for 18 months. Likewise we've had pretty much the same MB for 4 years. Consider:
What happens if we get stuck at 1.25 GHz until 2004/5?
What happens if the speed only increases to 2 GHz by 2006?
</strong><hr></blockquote>
Moore's observation is called 'Moore's LAW' because it holds so, so true.
Yes, there are aberrations - there are aberrations in _any_ real data-set.
-> Intel had compelling reasons to push MHz _hard_ while allowing instructions-per-clock to be a secondary priority.
-> Motorola (back in 1998, the year of the beleaguered Apple) didn't emphasize the 'high' end of their chip lines as much as we wish they would have (with 20-20 hindsight).
-> It's also pretty clear that at least one _major_ chip project inside Mot had a Titanic-scale disaster.
Now: extrapolating from the MHz (or GFLOPS, or whatever you want) data from Mot _for 2001 & 2002_, we'll get minimal boosts. As you indicated. Moore's law is really about transistor counts, but observed speed is pretty well correlated. (Though, note: MHz is _not_.)
Extrapolating from limited data is not the best plan. Particularly in the face of a strong model that indicates otherwise, and evidence that some of the recent data suck.
This doesn't change any of your other reasons (or reasoning), but just pointing out fallacies in your extrapolation. Extrapolating from the Stock Market data in the latter half of 1929 would also be pretty pointless.
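The transistor-count version of that extrapolation can be sketched quickly; the starting count and doubling period below are assumptions, there only to show the shape of the curve:

```python
def projected_transistors(start_count, years_out, doubling_period=2.0):
    """Moore's observation: transistor counts double roughly every ~2 years."""
    return start_count * 2 ** (years_out / doubling_period)

# A hypothetical 30-million-transistor part, projected 4 years ahead:
print(projected_transistors(30e6, 4))   # 120000000.0 -- a 4x budget
```

The point being that the model predicts exponential growth in the transistor budget regardless of what any two bad years of shipping MHz figures suggest.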
Lemon Bon Bon
<strong>There's always 'hope'.
Lemon Bon Bon</strong><hr></blockquote>
We have hope that they will pull something out of the fire without turning into Microscum. However, this time Apple has right royally f*cked themselves and every loyal Mac user into the bargain.
For god's sake, Steve, Style over Substance was over by the mid-eighties. Get over it and deliver the kick-ass goods.
This is a far more perilous time for Apple than at any point in their incredibly stupid past.
And yet, I'm still in awe. How did this company that eventually ruins everything great it makes, that continually snatches massive defeats from the jaws of certain success, that dumped the Newton, HyperCard, Copland et al., still survive and occasionally wow us?