I just left a nameless ;-) IBM campus about 2 hours ago. I went to visit a friend who is working there. I inquired about the 970 and we took a tour of some of the stuff they are working on. He showed me several workstations with single 970 chips running at 3.2 and 3.4 GHz. Both seemed to be water-cooled or have a massive heatsink. He said they are running fine (the machines only had a command line on them). He didn't know much more other than that they were just test stations to play with the performance of the chip...
Obviously faster stuff will come; it's stupid to believe they will be stuck at slower speeds forever. Probably 3 GHz by January...
Am I the only one who thinks this supposed speed "wall" is good for the IT business? I mean, now that the whole IT business has come to a halt regarding speed improvement, it puts the pressure back on software developers to actually optimise their applications, instead of going by the old "Nah, no need for optimising, there'll be more powerful computers out soon enough anyway".
You can only optimise so much. I'd love for everyone to get rid of these programmer wannabes and make us start using assembly again.
I blame the "Nah, no need for optimising, there'll be more powerful computers out soon enough anyway" attitude on Java and on these lazy bastards who think they are programmers because they got a four-year CS degree. Just because you are shown 2 or 3 languages doesn't mean you suddenly are a master programmer like they think.
I say bring back inline assembly. That'll teach those punks... that'd hurt their heads... having to actually think.
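For anyone who never touched it, here's roughly what inline assembly looks like in practice. Just a minimal sketch, assuming GCC's extended-asm syntax on a PowerPC target; the function and variable names are made up for illustration:

Code:
#include <stdio.h>

/* Add two integers with a hand-written PowerPC instruction
 * instead of letting the compiler pick one. Only builds on a
 * PowerPC target with a GCC-style compiler. */
static int add_asm(int a, int b)
{
    int result;
    /* %0 = output register, %1 and %2 = input registers */
    __asm__("add %0, %1, %2"
            : "=r"(result)
            : "r"(a), "r"(b));
    return result;
}

int main(void)
{
    printf("%d\n", add_asm(2, 3)); /* prints 5 */
    return 0;
}

Of course, for a single add the compiler emits exactly the same instruction on its own; hand-written assembly only starts to matter in hot inner loops.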
Quote:
. . . He obviously knows his stuff but . . .
I agree with what you said, with this one exception: I don't believe he knows his stuff. He makes erroneous and confused statements at times and then does a fast shuffle if his error is pointed out. I've simply stopped replying to his posts.
I don't believe he is a troll, but he wants attention. I think he believes his statements, but he probably picked up the information second hand, from someone who was generalizing about the issue. He took it as gospel, so he states things like, "2.5GHz is the limit."
All that said, he's probably a nice guy, and maybe I can see a little of me in him, from before I was sufficiently embarrassed by people who really did know something.
I know clock speeds will stop increasing eventually, but I highly doubt the limit is 2.5 GHz on the PPC and 4 GHz on the P4. We have one person saying that 3 GHz chips existed back when the 2.5 GHz G5 was announced, but that they couldn't be made in volume and ran too hot for Apple to build a machine that was both quiet and stable. This person has a good track record and makes sense. Then we have another person saying the 3 GHz G5 can't run stably and gets extremely hot even with water cooling. This person has a poor track record and nothing to back it up beyond the claim that the industry is stuck at 2.5 GHz and 4 GHz. That is the only evidence he is showing us. The CPU industry hit the same kind of wall at 200 MHz and 400 MHz, but we went faster.
I greatly prefer Programmer's approach to Nr9's. You are held in high esteem by me and many others, Programmer, and we appreciate your contributions to this forum. Hopefully others follow your lead.
I second that. I actually come to these threads in order to hear from you. I have gained so much wisdom from you and respect every point you've ever made. I have a problem with this one though....
I agree with your point... but Nr9 has pushed this 3-page thread without ANY proof. It's been said (so I won't say it in detail again) that he uses short, cocky statements without any evidence to re-fuel this thread. That is a troll. He could at least lay down a little argument... isn't that part of making a compelling case? Instead he just says he's right.
Of course none of us are going to accept his answer, even if he is right... but to use the word "never" isn't very intelligent. Never is a very long time... like someone else pointed out, where were we without the transistor? Someone invented it and suddenly CPUs boomed. There is nothing to say that someone else can't create something new that will make another boom, perhaps jumping GHz at a time. No one can predict the future... not even Moore.
This is a very funny thread to read. Nr9 comes in with a serious piece of information that we really ought to consider the important ramifications of, and he gets slapped down because people don't want to hear it.
It is not that we don't want to hear it; it is just a matter of the information being wrong. Sure, there have been problems with 90nm, and IBM's process could hardly be called innovative, but that doesn't mean we won't see progress.
Are there physical limits to how fast a CPU can operate? Most certainly. The point is that we are still far from those limits; the problems today are almost universally thermal, and they can be dealt with.
Quote:
The evidence in support of this information is there, but almost nobody will acknowledge it. Instead he is called a troll.
Maybe not so much a troll as somebody who is being misinformed for various reasons.
Quote:
This reminds me of various other transitions in history where people refused to believe something was going to happen until they were steamrollered by it. Can't you even accept the possibility that he has a legitimate piece of information?
Ah, no, I can't accept it, because the premise doesn't match what is happening in the industry. If I saw every manufacturer give up on trying to produce smaller and faster transistors, I'd say we might be on to something. This isn't the case at all; corporations are still pursuing much faster devices.
One should not get sidetracked by multicore processors either; they are just an outgrowth of having the hardware available to implement them. Multicore chips are not an excuse for poor single-core performance; rather, they are an avenue to increasing SYSTEM performance.
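To put some meat on the SYSTEM-performance point: a second core does nothing for a single-threaded program, but software that splits its work across threads can use it. Here's a minimal sketch with POSIX threads; the array size, thread count, and function names are arbitrary, just for illustration:

Code:
#include <pthread.h>
#include <stdio.h>

#define N        1000000
#define NTHREADS 2          /* say, one thread per core */

static double data[N];

struct chunk { int start, end; double sum; };

/* Each thread sums its own slice of the array; on a dual-core
 * machine the two slices can be processed at the same time. */
static void *sum_chunk(void *arg)
{
    struct chunk *c = arg;
    c->sum = 0.0;
    for (int i = c->start; i < c->end; i++)
        c->sum += data[i];
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    struct chunk chunks[NTHREADS];

    for (int i = 0; i < N; i++)
        data[i] = 1.0;

    for (int t = 0; t < NTHREADS; t++) {
        chunks[t].start = t * (N / NTHREADS);
        chunks[t].end   = (t + 1) * (N / NTHREADS);
        pthread_create(&tid[t], NULL, sum_chunk, &chunks[t]);
    }

    double total = 0.0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += chunks[t].sum;
    }
    printf("total = %f\n", total);
    return 0;
}

The catch is that somebody has to write that threading code. A clock bump speeds up existing binaries for free; an extra core doesn't.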
Quote:
First of all, this is not a decree that no line of chips will ever gain even a single MHz of speed from this day forth for all time.
You may not be making such a decree, but Nr9 certainly is trying to. This is what people are rejecting.
Quote:
Different lineups are at different places on their performance/power curves, they have different requirements, they are on different processes, and they have different characteristics. What Nr9 is saying is that the companies who have reached the 90nm node have discovered, to their surprise (!!), that the power cost of frequency increases is no longer viable as a primary strategy to increasing performance.
With the initial transition to 90nm and the lack of process development, this may be the case. The reality is that manufacturers have no choice but to find a way around this issue.
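A bit of context on why frequency got so expensive: switching power goes roughly as P = C x V^2 x f, and pushing clock usually means raising the core voltage too, so power climbs much faster than the clock does. A back-of-the-envelope sketch; the capacitance and voltage numbers are made up, only the scaling matters, and leakage (the real 90nm headache) isn't even modelled here:

Code:
#include <stdio.h>

/* Rough dynamic-power model: P = C * V^2 * f.
 * All numbers are illustrative, not measurements of any real chip. */
int main(void)
{
    double C = 3.0e-8;   /* effective switched capacitance, farads (assumed) */
    double V = 1.3;      /* core voltage at the base clock (assumed) */
    double f = 2.0e9;    /* 2.0 GHz base clock */

    double base = C * V * V * f;

    /* 25% higher clock, and say it needs ~10% more voltage to stay stable */
    double V2 = V * 1.10;
    double f2 = f * 1.25;
    double bumped = C * V2 * V2 * f2;

    printf("base power:   %6.1f W\n", base);
    printf("bumped power: %6.1f W  (%.0f%% more power for 25%% more clock)\n",
           bumped, (bumped / base - 1.0) * 100.0);
    return 0;
}

Roughly 50% more power for 25% more clock, before leakage makes it worse, which is why everyone suddenly cares about performance per watt.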
Quote:
This trend has been obvious to anyone paying attention over the past 5 years, so I don't understand why it should be a surprise to anyone. It's also not a hard-and-fast number like the speed of sound, or the speed of light... but clearly the balance of tradeoffs in processor design has shifted far enough to affect the basic strategy of the designers. Intel announced this 6+ months ago when they decided that their future was not their clock rate champ, the NetBurst architecture.
We have heard about such issues since the industry started. Heck, I've been following this since Byte was a young magazine. At each point in the development of the industry, somebody has dealt with the supposed limits of the time. There is nothing to keep that from happening again.
Quote:
Second, the various counter-examples mentioned are from companies that "expect" to be able to do better or from old roadmaps that don't take into account the learnings of the last 6-9 months. These are projections made without the foreknowledge of the problems found at 90nm.
It is almost a given that IBM is working on lowering the power used by the 970 series. You are implying, along with Nr9 apparently, that they have given up on new technology that may be applied at 90nm.
Quote:
Freescale saying that they can reach 3 GHz, for example, is either them not being aware of the precise troubles they're about to hit... or it is them deciding that they can make the same design choices that IBM & Intel made to achieve 3 GHz.
Or maybe the approach the team at Freescale is taking is giving them more confidence. After all, they have taken their time and they do have the knowledge of the rest of the industry's public failings. It is not as if there is only one 90nm process in this world and everyone is compelled to use it. There is still room in this world for novel approaches and innovation.
Quote:
Third, the engineering planning for this technology has very long lead times. If all the bleeding edge technology in existence doesn't provide any hope for correcting this problem, then any truly new ideas are probably about 5 years from reaching the maturity level where they can be used to produce millions and billions of new chips. I always take "never" with a grain of salt, but in the absence of any real ideas about how to break a given barrier, we ought to assume that for any practical purpose that barrier is as inevitable as the speed of light. Let the researchers hunt for a solution; that is the nature of research.
This is the whole point of why many are rejecting the posts Nr9 has been making. Research is ongoing. There are 90nm process options and alternatives, and some of these IBM is probably exploring at this very minute for the next PPC releases.
Quote:
In any case, give Nr9 a fair shake... he hasn't done anything to deserve being slammed like this.
I do believe that many have been very fair with Nr9. He either has to offer more compelling evidence or modify his approach. To state absolutely that we have hit a wall and 2.5GHz is it, is just too much. IBM may have recently stumbled and fallen flat on their face, but that doesn't mean they can't stand up again. Frankly, they have to stand up again or be left in the position of watching others walk away from them technology-wise.
This is not a court; his point is valid. Physical limitations are beginning to enter the picture... atoms are only so wide... Scaling up has ended; scaling sideways is coming in.
No doubt they will come up with some new materials and new technologies for higher speeds but commercial products are a ways off...
What this all is, is karmic payback for the GHz wars.
The little struggle that AMD and Intel got embroiled in made both companies burn through clockspeed ranges much faster than they'd ordinarily have been inclined to (especially in Intel's case). They burned through several years' worth of speedbumps in about 1 year. Now, this did get us faster chips, so it's not all bad, but it also meant that the industry not only hit a wall, it accelerated into the wall. There was no time to get the transition right, and a great deal of pressure to get it done.
I, personally, welcome a more leisurely pace. I don't believe for a second that "modern computers have enough power for anybody," but I do believe that hardware has been changing so rapidly that software doesn't efficiently exploit the power we currently have. To the extent that's true, it moots the improvements in hardware, and exacerbates the problem that personal computers never have quite enough power. This is especially, but not uniquely, true of GPUs, which went through their own "bubble".
In other words, if we really want computers to become more and more powerful in any useful or meaningful sense, then the rate of progress has to be steady enough to allow all the various parts time to be designed and refined enough to work well together. If that pace has to be enforced by runaway heat density and current leakage problems, fine. I don't think there's any question that it'll be good for the industry to take a breather. GPUs just got mature shaders and programmability, parallel busses are being replaced by high-speed serial busses, and there's a wealth of largely untapped additional functionality, like SIMD engines, just lying around waiting to be tapped. It takes time and effort to exploit all these changes, and the frenzied pace of "progress" has not allowed that time, to date, except when it was absolutely crucial to do the extra work for a particular performance-critical application.
On a higher level, it takes time to dream up ways to put new amounts of power to work, and even more time to refine them into something user friendly. So a slower pace will be good. If you believe otherwise, consider how the 500MHz debacle forced Apple and its application developers to really tighten up code and exploit AltiVec for all it's worth. And now, look at the payoff.
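On the "untapped SIMD" point, here's what exploiting AltiVec actually looks like next to the plain scalar loop. A rough sketch only, assuming GCC with -maltivec (or Apple's -faltivec), 16-byte-aligned arrays, and lengths that are multiples of four; the function names are made up:

Code:
#include <altivec.h>
#include <stdio.h>

/* y[i] = a * x[i] + y[i], one element at a time */
static void saxpy_scalar(int n, float a, const float *x, float *y)
{
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* Same work, four floats per iteration with AltiVec.
 * Assumes n is a multiple of 4 and x, y are 16-byte aligned. */
static void saxpy_altivec(int n, float a, const float *x, float *y)
{
    float a4[4] __attribute__((aligned(16))) = { a, a, a, a };
    vector float va = vec_ld(0, a4);

    for (int i = 0; i < n; i += 4) {
        vector float vx = vec_ld(0, &x[i]);
        vector float vy = vec_ld(0, &y[i]);
        vy = vec_madd(va, vx, vy);   /* fused multiply-add on 4 floats */
        vec_st(vy, 0, &y[i]);
    }
}

int main(void)
{
    float x[8]  __attribute__((aligned(16))) = { 1, 2, 3, 4, 5, 6, 7, 8 };
    float y1[8] __attribute__((aligned(16))) = { 0 };
    float y2[8] __attribute__((aligned(16))) = { 0 };

    saxpy_scalar(8, 2.0f, x, y1);
    saxpy_altivec(8, 2.0f, x, y2);
    for (int i = 0; i < 8; i++)
        printf("%g %g\n", y1[i], y2[i]);   /* the two columns should match */
    return 0;
}

Four times the work per loop iteration, but somebody has to go in and write (and align, and test) that second version, which is exactly the kind of effort the GHz race never left time for.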
That doesn't mean they will not make each core faster. What I was trying to say is that we will hit a final speed per core, which I think is somewhere between 6 and 15 GHz. Then the only technique left for making the CPU faster will be adding cores. But at some point we won't be able to add any more cores either. What then? Only the future can tell us.
Device physics can tell us, too, and so can basic analysis. Heat has become one major problem with shrinking process sizes and higher clock rates; I think modern silicon actually has higher power dissipation per unit of surface area than the sun does.
Another issue that I haven't heard much about in this thread is UV photolith. There are limitations in the current way that chip assembly lines operate. People know just how far UV photolith can take them. Particle-beam etching techniques exist, and can produce extremely small transistors (2-4nm), but there's a long way to go before it becomes feasible for mass production.
The point is that higher clock rates can certainly be achieved, but there are a lot of limitations and roadblocks beyond voltage and transistor size. For the last 10 years most big research houses have been very busy trying to come up with more intelligent ways to pass information, rather than just more intelligent ways to move electrons. I don't really know why anyone ultimately cares about clock speed anyway. What matters is performance and power consumption, however that's achieved.
Quote:
. . . his point is valid, physical limitations are beginning to enter the picture... Atoms are only so wide . . .
Too much is being made of this physical limitation thing. Why is there a 2.5GHz limit for IBM and a 4GHz limit for Intel? Do the laws of physics work differently at Intel? No. Obviously there are some process and layout differences here.
It has simply become more difficult to push clock rate at 90nm because of leakage current, but clock rates will go up. At the same time, it is now easier to add more cores because 90nm allows many more transistors on a chip. Going multi-core is simply taking the path of least resistance to a performance boost.
Simply put, the clock-speed walls are due to the number of pipeline stages:
a 31-stage Prescott P4 at 4.0 GHz = 129 MHz per stage (90nm)
a 14(?)-stage PPC 970 at 2.5 GHz = 178.5 MHz per stage (90nm)
compare this to
a four-stage G4 (7400) at 600 MHz = 150 MHz per stage (180nm)
a seven-stage G4 (7447) at 1.5 GHz = 214 MHz per stage (130nm)
a 12-stage Athlon 64 at 2.4 GHz = 200 MHz per stage (130nm)
a 20-stage Northwood P4 at 3.2 GHz = 160 MHz per stage (130nm)
Simplistically, it would appear that we are approaching the limits already, except for Intel, who have dropped the ball. Getting 200 MHz per stage is doing extremely well. So I go with Nr9 and Programmer.
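The arithmetic behind those numbers is just core clock divided by stage count. Here is the same table computed in a few lines of C; the stage counts are the ones quoted above, with the PPC 970 taken as 14 where the post was unsure:

Code:
#include <stdio.h>

/* MHz per pipeline stage = core clock (MHz) / number of stages.
 * Stage counts are the ones quoted in the post above. */
struct chip { const char *name; double mhz; int stages; };

int main(void)
{
    struct chip chips[] = {
        { "Prescott P4 (90nm)",   4000, 31 },
        { "PPC 970 (90nm)",       2500, 14 },
        { "G4 7400 (180nm)",       600,  4 },
        { "G4 7447 (130nm)",      1500,  7 },
        { "Athlon 64 (130nm)",    2400, 12 },
        { "Northwood P4 (130nm)", 3200, 20 },
    };

    for (int i = 0; i < 6; i++)
        printf("%-22s %6.1f MHz per stage\n",
               chips[i].name, chips[i].mhz / chips[i].stages);
    return 0;
}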
I had to know people who designed IBM's POWER and PowerPC chips for references when I worked on OS/2 several years ago (oh, and how I miss the Workplace Shell).
I hear you, man! My first introduction to 32-bit, object-oriented computing. I miss it too. Do you remember EXCAL, a ews-written PIM? I doubt any OS today can do what it could do back in 1993.
Following MarcUK's post: you get diminishing returns with higher pipeline stage counts, given that vast amounts of additional logic are required to maintain the ordering of memory accesses. That is, errors can develop and propagate through memory when multiple instructions in the pipeline access the same memory (the same variable in code) at the same time. This accompanying logic has a worst-case path that must be accounted for.
Then there's branch prediction, which is a whole 'nother can of worms.
With fewer than 5 stages, you only have to worry about one kind of memory error. The legacy of MIPS (and its namesake: 5 points to the guy who knows) is that it forced the programmer to deal with this. Given that MIPS was the mother of RISC, I don't see why programmers can't adapt once more to a new concept, and write new compilers. But that's a bit beside the point. The real point is that your speed-per-stage metric gets rocked by the 750GX, which is about 275 MHz per stage if I am correct. Put together these pieces -- worst-case logic paths, short-pipeline efficiencies, the limits of superscalar sequential chips -- and you paint a really appealing picture that's actually an advertisement for Cell. (Or a multicore PPC440 shindig.)
Here's some crazy talk: x86 wasn't intended to be pipelined. . . Of course, modern x86 chips are hardly CISC anyway, but that's another thread.
I bet you $100 that Apple will not release a 4.0 GHz Power Mac that has a great fucking open copper tube filled with liquid nitrogen, gassing out steam, with a bank of test equipment already installed, and that will not run a benchmark for love or money.