AppleInsider › Forums › Mac Hardware › Future Apple Hardware › power mac won't get any faster

power mac won't get any faster - Page 4  

post #121 of 297
Quote:
Originally posted by smalM
Some good cooling and that problem is solved too

I bet you $100 that Apple will not release a 4.0GHz Powermac that has a great fucking open copper tube filled with liquid nitrogen, gassing out steam, with a bank of test equipment already installed, and that will not run a benchmark for love or money.
post #122 of 297
Oh come on - those machines that produce liquid nitrogen are not that big anymore
post #123 of 297
Quote:
Originally posted by Nr9
plans change

How you can say that with a straight face is beyond me.
post #124 of 297
Quote:
Originally posted by MarcUK
Simply put, the clockspeed walls are due to the number of pipeline stages:

a 31-stage Prescott P4 at 4.0 GHz = 129 MHz per stage (90nm)
a 14(?)-stage PPC970 at 2.5 GHz = 178.5 MHz per stage (90nm)

compare this to

a four-stage G4 (7400) at 600 MHz = 150 MHz per stage (180nm)
a seven-stage G4 (7447) at 1.5 GHz = 214 MHz per stage (130nm)
a 12-stage Athlon 64 at 2.4 GHz = 200 MHz per stage (130nm)
a 20-stage Northwood P4 at 3.2 GHz = 160 MHz per stage (130nm)

it would appear, simplistically, that we are approaching the limits already, except for Intel, who have dropped the ball. Getting 200 MHz per stage is doing extremely well. So I go with Nr9 and Programmer

The 130 nm processors are at the end of their life cycles on mature 130 nm fabs. The 90 nm processors are at the beginning of their life cycles on immature fabs. If you allow a 130 nm fab 2 years to produce 200+ MHz per stage, you should at least give the 90 nm fabs another 1.5 years to do the same before coming to a conclusion.
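For anyone who wants to reproduce the per-stage arithmetic in MarcUK's table, here is a quick sketch. The stage counts and clock rates are the figures quoted in the thread, treated as approximate, not official specifications:

```python
# "MHz per pipeline stage" figures quoted in the thread.
# Stage counts are the posters' numbers; treat them as approximate.
chips = {
    "Prescott P4 (90nm)":   (4000, 31),
    "PPC 970FX (90nm)":     (2500, 14),
    "G4 7400 (180nm)":      (600, 4),
    "G4 7447 (130nm)":      (1500, 7),
    "Athlon 64 (130nm)":    (2400, 12),
    "Northwood P4 (130nm)": (3200, 20),
}

# Clock rate divided by pipeline depth.
mhz_per_stage = {name: mhz / stages for name, (mhz, stages) in chips.items()}

for name, rate in sorted(mhz_per_stage.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {rate:.1f} MHz per stage")
```

The numbers bear out the post's point: the mature 130nm parts cluster around 200 MHz per stage, while the young 90nm parts sit well below that.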
post #125 of 297
One thing people should realize is that parts of the Pentium 4 are running at twice the system clock rate. So we have chips that are clocking some logic at close to 7GHz today.

The thing I find disturbing about PPC is that it is very hot considering the number of transistors in the chip. Here with the 970 we have a small chip leading the pack in heat production. This is one reason I believe there is a ways to go with PPC: there need to be process adjustments to address those heat issues (watts/transistor).

The other thing that seems to go missing is that IBM did cut dynamic power considerably with the transition to 90nm. Controlling static power would give them extra headroom. Obviously IBM is working on the static power issue as we speak, simply to fill the portable role at Apple. What is interesting is how they address operational power in the 970 family to become more competitive.

All this talk about hitting the wall is too much negativity too early in the process. Sit back, take a rest, and we should all see significant changes in a year or two. After the different companies prove their processes and the various cross-licenses are issued, we may have a clearer picture of where things stand. As it is, I don't see the industry as a whole staying at the current clock rates; IBM might, but that is another issue.

Dave


Quote:
Originally posted by THT
The 130 nm processors are at the end of their life cycles on mature 130 nm fabs. The 90 nm processors are at the beginning of their life cycles on immature fabs. If you allow a 130 nm fab 2 years to produce 200+ MHz per stage, you should at least give the 90 nm fabs another 1.5 years to do the same before coming to a conclusion.
post #126 of 297
Quote:
Originally posted by wizard69
All this talk about hitting the wall is too much negativity too early in the process. Sit back, take a rest, and we should all see significant changes in a year or two. After the different companies prove their processes and the various cross-licenses are issued, we may have a clearer picture of where things stand.
Dave

Couldn't have said it better.

 

 

Quote:
The reason why they are analysts is because they failed at running businesses.
post #127 of 297
Thread Starter 
I find it amusing that people associate knowledge with sophisticated, grammatically-correct English.

the fact is, most people who know anything worth shit don't have time to sit around and compose essays on forums.

it doesnt matter to me if you belive or not

the fact is, the limit has been reached.

come back in 2 years and you'll see.
post #128 of 297
I say come back in 6 months... when the Power5 has been upgraded.

And oh... btw... I know you're full of crap, because my friend (when I say friend I mean someone I know and talk to daily in person) is in hardware QA at Apple... they HAVE HAD samples of 2.6GHz 970FX procs since Jan 04.

 

 

post #129 of 297
Quote:
Originally posted by wizard69
It is not that we don't want to hear it, it is just a matter of the information being wrong. Sure there have been problems with 90nm, and IBM's process could hardly be called innovative, but that doesn't mean we won't see progress.

Are there physical limits to how fast a CPU can operate? Most certainly. The problem is that we are far from those limits; the problems today are almost universally thermal and can be dealt with.

The problem is that the innovations on the horizon don't hold much promise. At best the pending process improvements look like they could deliver ~30% improvement in clock rate (based on old promises of more significant material changes). From 2.5 GHz that'll get the 970FX to ~3.2 GHz, and that will realize less than a 30% improvement in performance. And that is a wildly optimistic prediction -- sure they've got 3.4 GHz in the lab with super-duper cooling systems on sampled parts, but that's a far cry from what you could reasonably ship in hundreds of thousands of units in a consumer level product.

Intel doesn't seem to think they'll achieve even that much of an improvement, and AMD's first 90nm chips have a zero clock rate improvement.

Quote:
Maybe not so much a troll as somebody who is perhaps being misinformed for various reasons.

Well if you can quote current industry publications that will inform us correctly, then please do.

Quote:
Ah no, I can't accept it, because the premise doesn't support what is happening in the industry. If I saw every manufacturer give up on trying to produce smaller and faster transistors, I'd say we may be on to something. This isn't the case at all; corporations are still pursuing much faster devices.

Like who? If Intel (the leading clock rate advocate for the last 15 years) publicly abandons the pursuit of the all powerful clock rate bump, then who are you talking about? Freescale, who is still hanging around at a mere 1.5 GHz?

Besides, the pursuit of smaller transistors is not synonymous with the pursuit of higher clock rates. Smaller means you can fit more in the same space. More transistors means more cache, more memory, more cores. We're not suggesting that miniaturization will stop (quite the opposite). Instead we're saying that the benefits of increasing frequency have reached or passed the point where they outweigh the negatives.

Quote:
One should not get sidetracked by multicore processors either; this is just an outgrowth of having the hardware available to implement them. Multicore chips are not an excuse for poor single-core performance; rather, they are an avenue to increasing SYSTEM performance.

Time for a paradigm shift.

Quote:
You may not be making such a decree but Nr9 certainly is trying to. This is what people are rejecting

Nr9 seems to "suffer" from communication issues. He seems to place lower value on clear and articulate conversation in these forums than I do -- but then I work with a lot of brilliant engineers who do much the same thing, so I'm not going to write him off on that evidence alone. I tend to be considered more credible, but please keep in mind that what I'm better at is communicating technical issues (or perhaps I'm just a better typist), and this is not the same as having a better knowledge base. I'm not saying Nr9 is the lead hardware engineer at IBM's processor design facility, or even that I'm convinced he works at IBM! However, what he is saying does resonate with what I see in the industry and know about recent developments.

Quote:
With the initial transition to 90nm and the lack of process development, this may be the case. The reality is that manufacturers have no choice but to find a way around this issue.

Not everything has a way around. The way around may very well be approaches other than increasing clock frequency.

Quote:
We have heard about such issues since the industry started. Heck, I've been following this since Byte was a young magazine. At each point in the development of the industry somebody has dealt with the supposed limits of the time. There is nothing to keep that from happening again.

Yeah, I'm old too. I remember everybody scoffing at Intel's wild predictions of 100 MHz. This time things feel different. This time the old tricks have been stretched so far that we've been seeing seriously diminishing returns for a few years. They are going to have to work a lot harder to continue making progress... and they will. I'm not saying anybody is going to stop trying to make faster processors, but the basic way they go about it has to change for the first time. They can't just make it smaller, increase the frequency, decrease the power, and reap the benefits anymore.

Quote:
It is almost a given that IBM is working on lowering the power used by the 970 series. You are implying, along with Nr9 apparently, that they have given up on new technology that may be applied at 90nm.

No, I'm saying that the benefit to be gained by this new technology is limited. IBM will have lower power versions of the 970, but I don't think they're going to run at significantly higher clock rates. In fact at the top clock rates they may not be lower power at all.

Quote:
Or maybe the approach that the team Freescale is involved with is leading them to more confidence. After all, they have taken their time and do have the knowledge of the rest of the industry's public failings. It is not like there is only one 90nm process in this world and everyone is being compelled to use it. There is still room in this world for novel approaches and innovation.

You said Freescale and confidence in the same sentence!!!

Freescale is at least a year from a production 90nm part, by their own admission. And they are talking about taking a 1.5 GHz part to... 1.5+ GHz. Their later parts don't even exist yet. Sounds like IBM (and Intel) before they actually got to the point of production. IBM and Intel have different processes at 90nm, but they suffer from the same problem. If I were a betting man, would I put money on Freescale coming up with a solution to something that IBM's and Intel's best haven't? If a solution does come out of the ether in the next year, I'll bet that it comes from IBM and/or Intel. But I don't expect it.

Quote:
I do believe that many have been very fair with Nr9. He either has to offer more compelling evidence or modify his approach. To state absolutely that we have hit a wall and 2.5GHz is it, is just too much. IBM may have recently stumbled and fallen flat on their face, but that doesn't mean they can't stand up again. Frankly, they have to stand up again or be left in the position of watching others walk away from them technology-wise.

Yes, I'd like to see evidence as well. Unfortunately this industry is buried under NDAs and trade secrets. They don't publish convenient web links to their innermost secrets so that us rumor mongers can point at them. I'd settle (happily) for clear evidence that you are correct as well... but I don't see anybody shipping significant clock rate bumps based on the 90nm migration. Intel 0.4 GHz, AMD 0.0 GHz, IBM 0.5 GHz. Wow, IBM stumbled and fell flat but still turned in the best improvement (yes, this is a facetious comment).


And one last comment: I interpret "the limit" that Nr9 speaks of as something other than a speed barrier like the speed of sound or the speed of light. In engineering, something like processor design is a huge set of trade-offs, product requirements, and goals. In this very complex design space there is a vague boundary of what makes a possible and acceptable design. The available technology governs this design space, and physics governs the technology. It is simply not correct to say that 2.55178 GHz is the limit for the PowerPC. It is not even correct to say that about the PowerPC 970FX on process XYZ at voltage V, because it depends on system parameters, operating conditions, software, etc etc etc. So instead the limit being reached is really just a recognition by processor designers that they are flogging a dead horse in trying to achieve their (very complex set of) goals, and new avenues to improve processor design must be followed.
Providing grist for the rumour mill since 2001.
post #130 of 297
Quote:
Originally posted by emig647
You can only optimise so much. I'd love for everyone to get rid of these programmer wanna-bes and make us start using assembly again.

I blame the "Nah, no need for optimising, there'll be more powerful computers out soon enough anyways" attitude on Java and these lazy bastards who think they are programmers because they got a 4-year CS degree. Just because you are shown 2 or 3 languages doesn't mean you suddenly are a master programmer like they think.

I say bring back inline assembly. That'll teach those punks... that'd hurt their heads... having to actually think

Felt that I had to reply to this one

(off-topic)
As I'm doing my Bachelor's degree in Computer Science, I'll eventually be joining the "lazy bastards that think they are programmers because they got a 4 year CS degree." I know you didn't mean it to be that harsh, but I still have to disagree with you. Through my BS degree I've had experience with a handful of languages (C, C++, C#, Java (SE & EE) and Perl). When I finish my Master's degree I might have added another one or two to that list, and I'll be joining the workforce as a junior programmer...

Just because I won't have 20 years of programming experience and practical knowledge of every language since Ada doesn't mean I'm not a programmer. I'd regard any 4-5 year Computer Science graduate as a programmer, although lacking the experience that would make them a senior programmer.
(/off-topic)


My main problem with your post is this:

Even though, as you say, you can only "optimise so much", there is hardly any speed optimisation being done in the industry, unless that's a special requirement for that particular application. Even "hard-core" games lack optimisation because, as I stated before, there's simply no need for it. Customers will be able to get new hardware to play the games by the time the game hits the market.

(I bring up games because they are usually applications that have very high requirements across the board)...

Also, as someone else stated, hardware changes too frequently in this business. This makes it harder to optimise the code, because by the time the application is completed, newer hardware is released that might render your optimisation efforts useless, because the new chips have changed and thus can't utilise the optimisations like you intended.

Which is why I brought up the Playstation 2 console. There's a massive difference between the complexity and graphics of games being developed 3 years ago and games being developed now. Just have a look at the much higher graphical quality between GTA3 (the original) and the screenshots of the upcoming GTA: San Andreas. There's just no question. The developers are able to run the game flawlessly on the PS2, while the PC port of the game will most probably need a 2.5GHz+ CPU with a killer graphics card to produce the same results. And it's not because the PS2 is more powerful in any way; it's because the PS2 version of the game is highly optimised for its intended platform, whereas the PC version most probably won't be. And why? Because the hardware on the PC side is so much more powerful and frequently changing, optimisation isn't cost effective.

I say slow down the pace, and rather implement each new CPU design properly, and not rush new products to market for the sake of "beating the next guy"...

edited for grammar
In the real world, ignorance is truly a bliss.
post #131 of 297
Ok, it might have hit a wall or it might not have... but I am getting the gist that if clock speed can still be increased, it won't be by much and won't happen for a while (just like the dual cores). So this might be a little off-topic, but I am getting heavily into video editing and my Powerbook 1.5 isn't handling it very well. I'm going to buy a Powermac and was wondering if I should get the 2.5 now, or wait until January for a potential power increase. Thanks.
Setup:

15.2" 1.5 Ghz Powerbook G4
1 GB RAM
ATI Radeon 9700

and eventually...

Dual 2.5 Ghz Powermac G5
2.5 GB RAM
160 GB Internal + 500 GB Big Disk Extreme External
ATI Radeon 9800XT

23" HP f2304...
post #132 of 297
Quote:
Originally posted by BoeManE
Felt that I had to reply to this one

Let me be fair on this one. I'm not saying all Computer Science degrees are bad, but a majority are. My friend just graduated with a 4-year BS from Linfield. It's about 15-20k a year to go there. It's a very renowned school.

He asked me to help him with some search/sort algorithms. Breadth-first and depth-first, if I remember correctly. He sent me the code he had... He said it was in C++. So I'm like GREAT!! I can follow OO much easier. I get the code and no feature from C++ was used... no polymorphism, no inheritance, no encapsulation, no OO whatsoever. I asked him how it was C++... he said because it had cout statements. I had to laugh.

Point is, most CS majors are shown a handful of languages and their capabilities... they aren't shown how to use them in a work-related environment.

Another example... I used to do QA work for Intel. I had to meet with 3 of the programmers (1 was the lead and 2 were newbs) on a project a few months ago... none of them knew any UML. They had no idea how to do any documentation.

The problem with CS is they aren't ready for the workforce after graduation. The projected time for a new CS graduate to be caught up in the workforce is 9-18 months.

I am almost done with a 4-year Software Engineering degree. I was up to speed in about 3 months at a company that will go nameless. The #1 reason I kept up was because I was shown why and how to program, not just how to program. CS is just a completely different realm... different intentions. More innovating bleeding-edge technology than using the tools that are available now.

As far as optimizing... you said you program in Java and C#... you don't think you need to optimize in either of those for client-side software? Go do a GIMP-type application in either of those and find yourself hogging 20-50% CPU when idling.

All software (even games) needs to be optimized. The more optimized a game is, the larger the platform support, which is well worth the money in the long run. If people were to just program in straight Cocoa for games without optimization, it would be a nightmare. Cocoa is a great API, but there is always a need for optimizing... even on today's hardware... especially for Java and C#.
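Since the complaint above is about applications burning 20-50% CPU while "idle", here is a minimal Python sketch of the usual culprit and its fix: a busy-polling loop versus a blocking wait. The function names and structure are illustrative, not taken from any real application:

```python
import queue
import threading

def busy_poll(q, results):
    # Anti-pattern: spins on the queue even when it is empty,
    # which is how an app ends up at 20-50% CPU while doing nothing.
    while True:
        try:
            item = q.get_nowait()
        except queue.Empty:
            continue  # burn cycles and try again immediately
        if item is None:
            return
        results.append(item)

def blocking_wait(q, results):
    # Idiomatic: the thread sleeps inside get() until work arrives,
    # so an idle application consumes essentially no CPU.
    while True:
        item = q.get()  # blocks without spinning
        if item is None:
            return
        results.append(item)
```

Both functions produce identical results; the difference is only what the CPU does between events, which is exactly the kind of optimization the post argues still matters.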

 

 

post #133 of 297
BTW... let's compare PS2 to computers...

Compare ANY game that is on PC and PS2. Look at the graphics. The PS2 == a PC from 3+ years ago. The differences are LARGELY noticeable. Anti-aliasing... forget about it. Anisotropic filtering... did that even really exist then?

The textures are horrid on the PS2 compared to the PC. My friend was playing Socom Navy Seals 2... What an ugly-ass game. I couldn't even watch because the texturing was so horrible.

Given the quality decay in going from PC -> PS2 (btw, they build on PCs), I can see how the games get so optimized to run on that slow system.

 

 

post #134 of 297
Quote:
Originally posted by Nr9
I find it amusing that people associate knowledge with sophisticated, grammatically-correct English.

the fact is, most people who know anything worth shit don't have time to sit around and compose essays on forums.

So the brevity and poor composition of your posts are proof of their authenticity?
Attention Internet Users!

"it's" contraction of "it is"
"its" possessive form of the pronoun "it".

It's shameful how grammar on the Internet is losing its accuracy.
post #135 of 297
Sun just announced that the UltraSPARC IV+ went to 90 nm and gained quite a lot frequency-wise:
from 1.2 GHz to 1.8 GHz by doing the 130 nm to 90 nm dance we are all talking about.

So... what is Texas Instruments' secret for a 50% increase in speed? Was their former fab really bad and their new one really great, or is their 1.8 GHz claim just a lie or wishful thinking?
post #136 of 297
It should be noted IBM pretty publicly attested that parallelism, rather than clock rate, was the way forward for performance when they introduced the POWER4. The Opteron was designed with dual cores in mind long before the issues at 90 nm were fully realised, and about the only company that has really gone backwards on multi-core plans has been Intel.

Every other major company, including Sun, had realised that as processes shrank and they could toy with more transistors, adding cores offered attractive performance gains for the transistor usage. To say the multi-core plans are new or solely related to current heat difficulties is, to be perfectly blunt, a bit naïve.

Quote:
Originally posted by Nr9
I find it amusing that people associate knowledge with sophisticated, grammatically-correct English.

the fact is, most people who know anything worth shit don't have time to sit around and compose essays on forums.

Most people who are in a position to know much that is worthwhile are also well enough educated that they don't have to spend much time correcting grammatical errors. They tend not to make the obvious mistakes in the first place. A university education tends to knock it out of you.
"When I was a kid, my favourite relative was Uncle Caveman. After school, we'd all go play in his cave, and every once in a while, he'd eat one of us. It wasn't until later that I discovered Uncle...
post #137 of 297
Hi J;

I'm thinking this is the wrong thread. Really, we can't even decide whether we have hit a wall or not, or even what type of wall it is. Considering that, I'd not take advice from this thread.

That won't stop me from offering a little, though. If you need a new PC and the hardware is available to do what you want, then buy now. Of course, Christmas is coming and you really shouldn't be thinking about yourself, so waiting until the new year can reduce guilt problems a bit.

Dave


Quote:
Originally posted by JtheVGKing
I'm going to buy a powermac and was wondering if I should get the 2.5 now, or wait until january for a potential power increase. Thanks.
post #138 of 297
Quote:
Originally posted by Henriok
Sun just announced that the UltraSPARC IV+ went to 90 nm and gained quite a lot frequency-wise:
from 1.2 GHz to 1.8 GHz by doing the 130 nm to 90 nm dance we are all talking about.

I've given up on following anything Sun or SPARC related, but this is very good to hear. Is this on a Fujitsu or TI process? Last year TI was trumpeting their 90nm process in relation to Sun. It would be very interesting to see how hot that SPARC runs.
Quote:

So... what is Texas Instruments' secret for a 50% increase in speed? Was their former fab really bad and their new one really great, or is their 1.8 GHz claim just a lie or wishful thinking?

I wish I knew what was up with TI. Bits and pieces of information have popped up indicating that they have made considerable breakthroughs. Last I knew, though, these breakthroughs were not in production yet.

I do not know if TI is boasting too much or not, but if they successfully deliver chips with the technology that has been talked about, they will become the industry low-power leader at 90nm. This URL: http://focus.ti.com/docs/pr/pressrel...prelId=sc03226 focuses a bit on their 90nm process with respect to SPARC. Unfortunately it is a press release. In any event this 90nm process does appear to be far more refined than what IBM is using. TI is also far along with 65nm: http://focus.ti.com/docs/pr/pressrel...relId=sc04074; here they are claiming a reduction in leakage current by a factor of 1000 and a 40% increase in transistor performance over 90nm. Now this could all be bluster, but Sun seems to be happy with the new SPARC.

In any event TI has not publicly complained about hitting a wall.



As an aside, many people have been making references to Intel and the poor showing of Prescott. As I see it there are several problems with that chip. One is that it is huge, to the point that one wonders what all those transistors are doing. Another is that some of its logic is clocked at 2X. Finally, the static power on this chip is a killer. On the other hand, Intel has gotten good results with Dothan.
post #139 of 297
Quote:
Originally posted by Telomar
Every other major company, including Sun, had realised that as processes shrank and they could toy with more transistors, adding cores offered attractive performance gains for the transistor usage. To say the multi-core plans are new or solely related to current heat difficulties is, to be perfectly blunt, a bit naïve.

This is a lot like asking a mountain climber why he climbs the mountain; often the response is "because it is there". The move to multi-core processors has a great deal to do with the fact that the space is now there to put the extra processors in place. In the not too distant future I would expect that we will see everything on-chip except the system memory, the GPU, and required buffering.

We are already seeing examples of such trends outside the mainstream processor market. Freescale's PowerQUICC series is a modern example of the high integration and specialized processors that all those transistors enable. For mainstream processors, though, we have reached the point where even after trying to improve the main core you still have enough transistors available to implement a second core. Well, that is, given your ability to dissipate the heat generated.
Quote:

They tend to not make the obvious mistakes in the first place. A university education tends to knock it out of you.

Here I will only say that I've met a number of very well educated people whom I could respect for their intelligence when applied to their field of interest, but who could not communicate well at all. Good communication skills are not a natural byproduct of a developed and intelligent mind.
post #140 of 297
Thread Starter 
Quote:
Originally posted by Telomar

Most people who are in a position to know much that is worthwhile are also well enough educated that they don't have to spend that much time correcting grammatical errors. They tend to not make the obvious mistakes in the first place. A university education tends to knock it out of you.

Right, do you even have an enginnering degree? You don't understand that unless you go to a liberal arts university, you dont need to write much do you?
post #141 of 297
Quote:
Originally posted by wizard69


. . . Here I will only say that I've met a number of very well educated people whom I could respect for their intelligence when applied to their field of interest, but who could not communicate well at all. Good communication skills are not a natural byproduct of a developed and intelligent mind.


It's not just communication. Well educated people seldom make obvious technical mistakes, especially within their field. When they do err, they are quick to recognize it when someone points it out. I see neither of these characteristics in our friend.
post #142 of 297
Ok Nr9... I think it's time to lay down your account at AI...

Since you are so confident in your statement (unsupported by evidence as it is), we should make a little wager.

If Powermacs don't climb so much as a hertz within the next year, you win... and we'll never question you again.

However, if they do jump... let's say from 2.5GHz to 2.8GHz... (whatever speed)... then your IP gets put on a deny access list. Still have the same confidence in your statement?

 

 

post #143 of 297
Basically, there are two ways of creating more powerful chips:
- increase clock speed
- add more work per clock cycle.

Increasing clock speed is managed through smaller and more efficient fabbing processes and refined design, like deeper pipelining (which requires more transistors).

Adding more work per clock cycle was done by (it's an oversimplification) adding more transistors. At first the chips were 8 bits, and a simple multiplication required several cycles; then the chips went to 16 bits, and more. The architecture of the chip became superscalar (many units sharing the work).
Nowadays a modern chip is superscalar, has a SIMD unit, and is 64 bits. To put it shortly: single-core CPUs have reached their limit of sophistication. Adding twice the number of sub-units won't produce twice the performance, and going to 128 bits is totally worthless.

The only way to use more transistors in an efficient way is to go multicore. Heat density has reached a critical level, and the optimum die size for a CPU, according to IBM, is 100 square millimeters.

The future is clearly multi, but that does not necessarily mean that clock speeds are frozen forever. It just means that the MHz race is over, and that's really good news for Apple, who have never been big on it.
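The "go multicore" argument above can be made concrete with a tiny sketch: split an embarrassingly parallel job across worker processes so each core takes a chunk. This is a minimal illustration of dividing work across cores, not how any shipping chip or OS schedules anything; on platforms that spawn rather than fork worker processes, run it under an `if __name__ == "__main__":` guard.

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    # One contiguous chunk of the work; each chunk can run on its own core.
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=2):
    # Split [0, n) into one chunk per worker process.
    step = n // workers
    chunks = [(w * step, (w + 1) * step if w < workers - 1 else n)
              for w in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # Chunks execute concurrently; combining the partial results is cheap.
        return sum(pool.map(partial_sum, chunks))
```

The catch, as the thread notes elsewhere, is that only work that splits this cleanly gets the full benefit; a second core is not a substitute for single-core performance on serial tasks.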
post #144 of 297
Thread Starter 
Quote:
Originally posted by snoopy
It's not just communication. Well educated people seldom make obvious technical mistakes, especially within their field. When they do err, they are quick to recognize it when someone points it out. I see neither of these characteristics in our friend.

i dont make them you dumbass
post #145 of 297
Quote:
Originally posted by MarcUK
Simply put, the clockspeed walls are due to the number of pipeline stages:

a 31-stage Prescott P4 at 4.0 GHz = 129 MHz per stage (90nm)
a 14(?)-stage PPC970 at 2.5 GHz = 178.5 MHz per stage (90nm)

compare this to

a four-stage G4 (7400) at 600 MHz = 150 MHz per stage (180nm)
a seven-stage G4 (7447) at 1.5 GHz = 214 MHz per stage (130nm)
a 12-stage Athlon 64 at 2.4 GHz = 200 MHz per stage (130nm)
a 20-stage Northwood P4 at 3.2 GHz = 160 MHz per stage (130nm)

it would appear, simplistically, that we are approaching the limits already, except for Intel, who have dropped the ball. Getting 200 MHz per stage is doing extremely well. So I go with Nr9 and Programmer


Thank you for that thought provoking reply. I understand general science and physics, but have very little knowledge about the inner workings of a CPU. I may have to study this subject to see how it affects the 90nm issue. I've noticed that the number of stages relates to the maximum clock speed, but this has been true for any size process used in the past. The 90nm process hit us with a new problem: higher than anticipated leakage current. So, in a way, we are talking about two separate "walls," so to speak, but I do realize they may be very much interdependent. I wish I had time to study this right now.

It seems the new leakage current is due to the extremely thin separation between parts of the transistor at 90nm. It doesn't do the job of isolation that it does in larger processes. As the voltage increases this leakage goes up very rapidly, and voltage must be increased to get to higher clock speeds. If the Intel CPU architecture somehow allows a higher clock rate at lower voltages than IBM's architecture, that would explain the different limits, 2.5GHz and 4GHz. Otherwise, I'm not sure. It'll have to wait until I learn a bit more.
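The per-stage numbers in MarcUK's table are just clock rate divided by pipeline depth. A quick sketch using the figures as quoted (the stage counts are the thread's approximations, not official specs):

```python
# MHz per pipeline stage = clock rate (MHz) / number of stages
chips = {
    "Prescott P4 (31 stages, 90nm)": (4000, 31),
    "PPC970 (14 stages, 90nm)":      (2500, 14),
    "G4 7447 (7 stages, 130nm)":     (1500, 7),
    "Athlon 64 (12 stages, 130nm)":  (2400, 12),
}
for name, (mhz, stages) in chips.items():
    print(f"{name}: {mhz / stages:.1f} MHz per stage")
```

Which reproduces the ~129/178.5/214/200 figures quoted above.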
post #146 of 297
Quote:
Originally posted by Nr9
i dont make them you dumbass

When asked, "I don't get the thing about dual core designs solving power consumption problems . . . ," you replied:

Quote:
Originally posted by Nr9
it solves it because it is twice the area hence easier to cool.




Twice the area, but twice the power. Or were you thinking about power density? If so, power density does not change and you get no help there. You didn't seem to get it when I first replied.
post #147 of 297
Hey Mr. IBM!
What about the CELL processor, or do you need some links, maybe?
Waiting for the Power Mac G5 since October 2001
post #148 of 297
Quote:
Originally posted by Fat Freddy
Hey Mr. IBM!
What about the CELL processor, or do you need some links, maybe?

BWAAAAAHAHAHAHAHAHA

Quote:
The reason why they are analysts is because they failed at running businesses.
post #149 of 297
some people never listen....think they call them children.
I heard that geeks are a dime a dozen, I just want to find out who's been passin' out the dimes
----- Fred Blassie 1964
post #150 of 297
Quote:
Originally posted by snoopy


[snip]

If the Intel CPU architecture somehow allows a higher clock rate at lower voltages than IBM's architecture, this would explain the different limits, 2.5GHz and 4GHz. Otherwise, I'm not sure. It'll have to wait until I learn a bit more.

I'm no expert. I've generally read all the CPU theory at Ars Technica, and there is a good article there at the moment: www.arstechnica.com (they've redesigned their site today, so I don't have a clue where everything has gone now).

My analysis is something I've worked out on my own. I don't know how well it analogizes the real world, but I can definitely see some correlations. Also, I don't see that Intel chips run at a drastically lower voltage than anyone else. The speed comes from massive pipelines and managed heat dissipation.

Re: THT's "mature fab issues" - exactly why I say Intel have dropped the ball. It appears they should be able to go much faster, yet they cannot. And consider that I said 'approaching the limits'.

That means I fully expect IBM to knock out 3 GHz 970s on a mature process. That will give:

14-stage 970 at 3.0 GHz = 214 MHz per stage (doing really well)

14-stage 970 at 3.5 GHz = 250 MHz per stage (exceptional fabbing, but a bit unlikely)

I honestly cannot see them getting better than this, because I haven't found evidence of any processor scaling this well on any process.

This is hardly earth shattering stuff, but welcome nonetheless. The limits are quite clear and easy to see. Dual core is obviously the answer. Going from 2.5 GHz on an early 90nm process to 3.5 GHz on a mature 90nm process is a 40% improvement. This contrasts with the 130nm processors, which have all seen greater than 100% gains in speed. Scaling is dying. Even IBM admitted it.
post #151 of 297
Quote:
Originally posted by MarcUK
Re: THT's "mature fab issues" - exactly why I say Intel have dropped the ball. It appears they should be able to go much faster, yet they cannot.

I honestly cannot see them getting better than this, because I haven't found evidence of any processor scaling this well on any process.

The 65 nm node is expected to be doable before quantum effects make CMOS processes unmanageable. An order-of-magnitude challenge over 90 nm, but doable. I would expect another nontrivial increase in MHz from it.

As for Intel, they simply haven't bitten the bullet and implemented a high-performance cooling system on a mass-production scale. Seriously, for a minute: higher heat doesn't mean the end of scaling for this business. It could simply mean higher-performance cooling and using more electricity. That has been the answer for the last decade, for both desktops and laptops.

Quote:
This is hardly earth shattering stuff, but welcome nonetheless. The limits are quite clear and easy to see. Dual Core is obviously the answer.

No doubt. But clock rate scaling is still there for a little while longer, including clock rate scaling on multi-core processors. After that, perhaps, CPUs will have different clock rates for different parts of the processor, some of which will clock even higher. But yeah, we all agree that CMOS-based computing devices are nearing the end, with only 2 (give or take) nodes left to milk before quantum effects become unmanageable.

Quote:
Going from 2.5 on an early 90nm process to 3.5 Ghz on a mature 90nm process is 40% improvement. This contrasts to the 130nm processors which have all seen greater than 100% gains in speed. Scaling is dying. Even IBM admitted it.

I'm not liking the way you count. The 1st 90 nm 970fx processor was 2 GHz (for Xserves), which matched the top end for the 130 nm process, unless you think IBM could have shipped a 2.5 GHz 130 nm 970? (I think IBM probably could have shipped 2.4 GHz or so with strained silicon and low-k at 130 nm, actually. Would have been hot though.)
post #152 of 297
Quote:
Originally posted by snoopy
When asked, "I don't get the thing about dual core designs solving power consumption problems . . . ," you replied:





Twice the area, but twice the power. Or were you thinking about power density? If so, power density does not change and you get no help there. You didn't seem to get it when I first replied.

You double the transistors, but lower the frequency. In the end, there's a pretty big benefit, thermally, since you end up using less than half the power. (non-linear scaling)
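The "less than half the power" claim follows from the standard dynamic CMOS power relation, P ≈ C·V²·f, if supply voltage has to scale roughly with frequency. A sketch under that simplifying assumption (the 80% clock figure is illustrative, and leakage is ignored):

```python
# Dynamic CMOS power: P ~ C * V^2 * f. If V must scale linearly with f
# (a simplifying assumption), then power per core goes as f^3.
def relative_power(cores, freq_ratio):
    return cores * freq_ratio ** 3

single = relative_power(1, 1.0)  # one core at full clock
dual = relative_power(2, 0.8)    # two cores at 80% clock
print(dual / single)             # ~1.02x the power...
print(2 * 0.8)                   # ...for up to 1.6x the throughput
```

So two slower cores can deliver more aggregate work per watt than one faster core, which is the non-linear scaling being described.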
Cat: the other white meat
post #153 of 297
Quote:
Originally posted by THT




I'm not liking the way you count. The 1st 90 nm 970fx processor was 2 GHz (for Xserves), which matched the top end for the 130 nm process, unless you think IBM could have shipped a 2.5 GHz 130 nm 970? (I think IBM probably could have shipped 2.4 GHz or so with strained silicon and low-k at 130 nm, actually. Would have been hot though.)

Personally I don't think 65nm will yield any significant gains in speed. I think it will allow dual cores to be as inexpensive to manufacture as single-core 90nm. The heat won't drop, nor the speed increase much, and unless there is some weird quantum effect that only manifests itself at 90nm, I think the 65nm node will be a real bitch to perfect.

I knew you wouldn't like the way I count, but I excluded 2.0 GHz because

a) that speed was already reached by the 130nm process, whereas other process shrinks generally allow a speed increase or a drastic heat reduction. 90nm did not, so the 2.5 GHz chips are the first that can be attributed to the improvement of shrinking the die to 90nm.
b) IBM's supply of 2.0 GHz 90nm 970s was obviously the pre-production test of the process, going into the Xserve for at least 6 months.
c) It made my point more effective.
post #154 of 297
Apple have cooled the 2.5 GHz chips.

But they aren't noisy like Wintel/AMD machines. They run quieter. Part of the Apple 'ethos' mentioned earlier.

Yields and noise politics aside...is it so much of a stretch to think 2.6, 2.8 or 3 gig chips aren't far away? On a mature 0.09 fab?

The AMD/Wintel MHz race took us this far. Further than I expected. I thought Intel was going to milk the 700 MHz up to 1 gig ramp for years. AMD pushed them all the way for a while.

The mighty champ of 'MHz' is now calling its chips by numbers which bear no resemblance to 'speed numbers' anymore. Sign of the times.

So, a G5 in that context sounds like a real competitor!

Still, a 3 gig 970fx...dual core against a Prescott limping to 4 gig sounds a real performer. Surely a gig of that Prescott is flab from an old champ about to get his head caved in.

Look at benches on Tomshardware for AMD/Intel chips. The 'mhz' may seem impressive...but look at benches on Lightwave, Cinema and Max and you'll see mere seconds between a 3 gig chip and a '3.8' gig chip.

3-5% increments of performance advantage. For which you pay huge 100% price premiums over a chip a mere few speed grades down!

In that context, the dual 2.5 gig G5 with 1.25 gig bus seems like a powerhouse of a good deal (stingy ram and graphic card aside...)

As we go to dual-core...it seems PPC is going into the next round of bump grades with horse shoes in its boxing gloves...

...fighting the next performance war on its own turf?

Home advantage?

Lemon Bon Bon

I think we're only going to get significant improvements in the near future by going parallel. Dual core it is. I'll take that over a 25% linear bump to 3 gig.

I'd like both...come on PPC...you can make it...3 gig...3gig...3 gig...
We do it because Steve Jobs is the supreme defender of the Macintosh faith, someone who led Apple back from the brink of extinction just four years ago. And we do it because his annual keynote is...
post #155 of 297
Quote:
Originally posted by Nr9
i dont make them you dumbass

A four page thread based on some troll's rude, grammatically challenged posts? Oh right, this is AI.
Attention Internet Users!

"it's" contraction of "it is"
"its" possessive form of the pronoun "it".

It's shameful how grammar on the Internet is losing its accuracy.
post #156 of 297
If true, this means trouble for Longhorn. I believe that a mid-range system was supposed to need a 5 to 6 GHz CPU.
post #157 of 297
Quote:
Originally posted by Splinemodel
You double the transistors, but lower the frequency. In the end, there's a pretty big benefit, thermally, since you end up using less than half the power. (non-linear scaling)


Just a comment -- IBM is currently working on a dual core version of the 970FX, with enhancements. That is, if we can believe some web sites, but there is documented evidence. I would not expect IBM to lower the clock rate for this 970MP, but they might.

Putting that aside, I simply responded to the claim that because a dual core is twice the area it is therefore easier to cool. This statement is not true. There was no provision that we lower frequency at the same time.

I agree on the benefits of adding a second core now that we are at 90nm. It's likely impossible to get that same performance boost by running at higher frequency because of the extreme power dissipation. Going multi core at 90nm is simply taking the path of least resistance to a performance boost.
post #158 of 297
I tend to stand with Lemon Bon Bon on the speed comparisons. If all 3 camps are at a standstill, all Apple needs to do is start to compete better on graphics card options to look as good as, if not better than, its competitors.
There is an obvious problem with the OpenGL implementation coming from Apple. Fixing that alone would boost current G5 numbers a bit, and getting the whole driver problem worked out could leave Power Macs looking pretty good (in 3D performance tests) now.
onlooker
Registered User

Join Date: Dec 2001
Location: parts unknown




http://www.apple.com/feedback/macpro.html
post #159 of 297
Thread Starter 
Quote:
Originally posted by snoopy
When asked, "I don't get the thing about dual core designs solving power consumption problems . . . ," you replied:





Twice the area, but twice the power. Or were you thinking about power density? If so, power density does not change and you get no help there. You didn't seem to get it when I first replied.

eh whatever you think

obviously it was with respect to the alternative, which is a faster single core
post #160 of 297
Quote:
Originally posted by onlooker
There is an obvious problem with the OpenGL coming from Apple.

Definitely.