Simply put, the clock-speed walls are due to the number of pipeline stages:
a 31-stage Prescott P4 at 4.0 GHz = 129 MHz per stage (90nm)
a 14(?)-stage PPC970 at 2.5 GHz = 178.5 MHz per stage (90nm)
compare this to
a 4-stage G4 (7400) at 600 MHz = 150 MHz per stage (180nm)
a 7-stage G4 (7447) at 1.5 GHz = 214 MHz per stage (130nm)
a 12-stage Athlon 64 at 2.4 GHz = 200 MHz per stage (130nm)
a 20-stage Northwood P4 at 3.2 GHz = 160 MHz per stage (130nm)
On this simplistic view it would appear we are approaching the limits already, except for Intel, who have dropped the ball. Getting 200 MHz per stage is doing extremely well. So I go with Nr9 and Programmer.
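These per-stage numbers are just core clock divided by pipeline depth. A quick sketch that reproduces them (the stage counts and clocks are the ones quoted above; the 970's depth varies by execution unit, so 14 is an assumption):

```python
# MHz per pipeline stage = core clock / pipeline depth.
# Stage counts and clocks are the figures quoted in the post above.
chips = [
    ("Prescott P4",  "90nm",  31, 4000),
    ("PPC970",       "90nm",  14, 2500),
    ("G4 (7400)",    "180nm",  4,  600),
    ("G4 (7447)",    "130nm",  7, 1500),
    ("Athlon 64",    "130nm", 12, 2400),
    ("Northwood P4", "130nm", 20, 3200),
]

for name, process, stages, mhz in chips:
    print(f"{name:13s} {process:6s} {mhz / stages:6.1f} MHz per stage")
```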
The 130 nm processors are at the end of their life cycles on mature 130 nm fabs. The 90 nm processors are at the beginning of their life cycles on immature fabs. If you allow a 130 nm fab 2 years to produce 200+ MHz per stage, you should at least give the 90 nm fabs another 1.5 years to do the same before coming to a conclusion.
One thing people should realize is that parts of the Pentium 4 are running at twice the core clock rate. So we have chips that are clocking some logic at close to 7 GHz today.
The thing I find disturbing with PPC is that it is very hot considering the number of transistors in the chip. Here with the 970 we have a small chip leading the pack heat-production-wise. This is one reason I believe there is a ways to go with PPC. There need to be process adjustments to address those heat issues (watts/transistor).
The other thing that seems to go missing is that IBM did cut the dynamic power considerably with the transition to 90nm. Controlling static power would give them extra headroom. Obviously IBM is working on the static power issue as we speak, simply to fill the portable role at Apple. What will be interesting is how they address the operational power of the 970 family to become more competitive.
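For anyone wondering why the dynamic/static split matters, here is a toy CMOS power model. The C·V²·f form for dynamic power is standard, but every number below is a made-up illustration, not a measured 970 figure:

```python
# Toy CMOS power model: dynamic power scales as C * V^2 * f, while
# static (leakage) power is roughly V * I_leak and burns even when the
# clock is doing nothing useful. All values are illustrative assumptions.
def power(c_eff, v, f_hz, i_leak):
    dynamic = c_eff * v**2 * f_hz   # switching power
    static = v * i_leak             # leakage power
    return dynamic, static

# Pretend 130nm part vs. a 90nm shrink: capacitance and voltage drop,
# so dynamic power falls, but leakage current rises sharply.
for label, c, v, f, il in [("130nm-ish", 15e-9, 1.3, 2.0e9,  2.0),
                           ("90nm-ish",  10e-9, 1.1, 2.5e9, 15.0)]:
    dyn, sta = power(c, v, f, il)
    print(f"{label}: dynamic {dyn:5.1f} W, static {sta:5.1f} W")
```

With these made-up numbers the shrink cuts dynamic power from ~51 W to ~30 W while static power jumps from ~3 W to ~17 W, which is the shape of the problem being described.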
All this talk about hitting the wall is too much negativity too early in the process. Sit back, take a rest, and we should all see significant changes in a year or two. After the different companies prove their processes and the various cross-licenses are issued, we may have a clearer picture of where things stand. As it is I don't see the industry as a whole staying at the current clock rates; IBM might, but that is another issue.
Dave
Quote:
Originally posted by THT
The 130 nm processors are at the end of their life cycles on mature 130 nm fabs. The 90 nm processors are at the beginning of their life cycles on immature fabs. If you allow a 130 nm fab 2 years to produce 200+ MHz per stage, you should at least give the 90 nm fabs another 1.5 years to do the same before coming to a conclusion.
Originally posted by wizard69
All this talk about hitting the wall is too much negativity too early in the process. Sit back, take a rest, and we should all see significant changes in a year or two. After the different companies prove their processes and the various cross-licenses are issued, we may have a clearer picture of where things stand.
I say come back in 6 months... when the POWER5 has been upgraded.
And oh... btw... I know you're full of crap, because my friend (when I say friend I mean someone I know and talk to daily in person) is hardware QA at Apple... they HAVE HAD samples of 2.6 GHz 970FX procs since Jan 04.
Quote:
Originally posted by wizard69
It is not that we don't want to hear it; it is just that the information is wrong. Sure there have been problems with 90nm, and IBM's process could hardly be called innovative, but that doesn't mean we won't see progress.
Are there physical limits to how fast a CPU can operate? Most certainly. The problem is that we are far from those limits; the problems today are almost universally thermal and can be dealt with.
The problem is that the innovations on the horizon don't hold much promise. At best the pending process improvements look like they could deliver a ~30% improvement in clock rate (based on old promises of more significant material changes). From 2.5 GHz that'll get the 970FX to ~3.2 GHz, and that will realize less than a 30% improvement in performance. And that is a wildly optimistic prediction -- sure they've got 3.4 GHz in the lab with super-duper cooling systems on sampled parts, but that's a far cry from what you could reasonably ship in hundreds of thousands of units in a consumer-level product.
Intel doesn't seem to think they'll achieve even that much of an improvement, and AMD's first 90nm chips have a zero clock rate improvement.
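The "less than 30% performance from a ~30% clock bump" claim follows if part of the execution time (memory stalls, mostly) doesn't speed up with the core. A toy Amdahl-style estimate; the 25% stall fraction is purely an assumption for illustration:

```python
# Only the core-bound fraction of runtime scales with clock; the
# memory-stall fraction stays fixed. stall_fraction is assumed.
def speedup(clock_ratio, stall_fraction=0.25):
    return 1.0 / (stall_fraction + (1.0 - stall_fraction) / clock_ratio)

ratio = 3.2 / 2.5   # the ~30% clock bump discussed above
print(f"clock up {ratio:.2f}x -> performance up {speedup(ratio):.2f}x")
# clock up 1.28x -> performance up ~1.20x, i.e. less than the clock gain
```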
Quote:
Maybe not so much a troll as somebody who is being misinformed for various reasons.
Well if you can quote current industry publications that will inform us correctly, then please do.
Quote:
Ah no, I can't accept it, because the premise doesn't support what is happening in the industry. If I saw every manufacturer give up on trying to produce smaller and faster transistors I'd say we might be on to something. This isn't the case at all; corporations are still pursuing much faster devices.
Like who? If Intel (the leading clock rate advocate for the last 15 years) publicly abandons the pursuit of the all powerful clock rate bump, then who are you talking about? Freescale, who is still hanging around at a mere 1.5 GHz?
Besides, the pursuit of smaller transistors is not synonymous with the pursuit of higher clock rates. Smaller means you can fit more in the same space. More transistors means more cache, more memory, more cores. We're not suggesting that miniaturization will stop (quite the opposite). Instead we're saying that the benefits of increasing frequency have reached or passed the point where they outweigh the negatives.
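The "smaller means more fits" point is just area scaling. A one-liner, treating the node names as ideal linear dimensions (real processes never shrink this perfectly):

```python
# Ideal area scaling between nodes: transistor density grows with the
# square of the linear shrink. Real shrinks fall short of this ideal.
for old, new in [(180, 130), (130, 90), (90, 65)]:
    print(f"{old}nm -> {new}nm: ~{(old / new) ** 2:.1f}x the transistors per mm^2")
```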
Quote:
One should not get sidetracked by multicore processors either; this is just an outgrowth of having the hardware available to implement them. Multicore chips are not an excuse for poor single-core performance; rather, they are an avenue to increasing SYSTEM performance.
Time for a paradigm shift.
Quote:
You may not be making such a decree, but Nr9 certainly is trying to. This is what people are rejecting.
Nr9 seems to "suffer" from communication issues. He seems to place lower value on clear and articulate conversation in these forums than I do -- but then I work with a lot of brilliant engineers who do much the same thing, so I'm not going to write him off purely on that evidence alone. I tend to be considered more credible, but please keep in mind that what I'm better at is communicating technical issues (or perhaps I'm just a better typist), and this is not the same as having a better knowledge base. I'm not saying Nr9 is the lead hardware engineer at IBM's processor design facility, or even that I'm convinced he works at IBM! However, what he is saying does resonate with what I see in the industry and know about recent developments.
Quote:
With the initial transition to 90nm and the lack of process development this may be the case. The reality is that manufacturers have no choice but to find a way around this issue.
Not everything has a way around. The way around may very well be approaches other than increasing clock frequency.
Quote:
We have heard about such issues since the industry started. Heck, I've been following this since Byte was a young magazine. At each point in the development of the industry somebody has dealt with the supposed limits of the time. There is nothing to keep that from happening again.
Yeah, I'm old too. I remember everybody scoffing at Intel's wild predictions of 100 MHz. This time things feel different. This time the old tricks have been stretched so far that we've been seeing seriously diminishing returns for a few years. They are going to have to work a lot harder to continue making progress... and they will. I'm not saying anybody is going to stop trying to make faster processors, but the basic way they go about it has to change for the first time. They can't just make it smaller, increase the frequency, decrease the power, and reap the benefits anymore.
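The recipe being declared dead here is classic constant-field (Dennard) scaling, under which shrinking, clocking higher, and dropping the voltage all came together for free. A sketch of the ideal rules; leakage is precisely what broke the voltage term:

```python
# Ideal Dennard scaling by linear factor k: dimensions and voltage
# shrink by 1/k, frequency rises by k, power per transistor falls by
# 1/k^2, so power density stays flat. Leakage broke this bargain.
def dennard(k):
    return {
        "dimension":        1 / k,
        "voltage":          1 / k,
        "frequency":        k,
        "power/transistor": 1 / k ** 2,
        "power density":    1.0,   # (1/k^2 power) over (1/k^2 area)
    }

for quantity, factor in dennard(130 / 90).items():  # a 130nm -> 90nm step
    print(f"{quantity:16s} x{factor:.2f}")
```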
Quote:
It is almost a given that IBM is working on lowering the power used by the 970 series. You are implying, along with Nr9 apparently, that they have given up on new technology that may be applied at 90nm.
No, I'm saying that the benefit to be gained by this new technology is limited. IBM will have lower power versions of the 970, but I don't think they're going to run at significantly higher clock rates. In fact at the top clock rates they may not be lower power at all.
Quote:
Or maybe the approach that the team Freescale is involved with is leading them to more confidence. After all, they have taken their time and do have the knowledge of the rest of the industry's public failings. It is not like there is only one 90nm process in this world and everyone is being compelled to use it. There is still room in this world for novel approaches and innovation.
You said Freescale and confidence in the same sentence!!!
Freescale is at least a year from a production 90nm part by their own admission. And they are talking about taking a 1.5 GHz part to... 1.5+ GHz. Their later parts don't even exist yet. Sounds like IBM (and Intel) before they actually got to the point of production. IBM and Intel have different processes at 90nm, but they suffer from the same problem. If I were a betting man, would I put money on Freescale coming up with a solution to something that IBM's and Intel's best haven't? If a solution does come out of the ether in the next year, I'll bet that it comes from IBM and/or Intel. But I don't expect it.
Quote:
I do believe that many have been very fair with Nr9. He either has to offer more compelling evidence or modify his approach. To state absolutely that we have hit a wall and 2.5 GHz is it, is just too much. IBM may have recently stumbled and fallen flat on their face, but that doesn't mean they can't stand up again. Frankly, they have to stand up again or be left in the position of watching others walk away from them technology-wise.
Yes, I'd like to see evidence as well. Unfortunately this industry is buried under NDAs and trade secrets. They don't publish convenient web links to their innermost secrets so that rumor mongers like us can point at them. I'd settle (happily) for clear evidence that you are correct as well... but I don't see anybody shipping significant clock rate bumps based on the 90nm migration. Intel 0.4 GHz, AMD 0.0 GHz, IBM 0.5 GHz. Wow, IBM stumbled and fell flat but still turned in the best improvement (yes, this is a facetious comment).
And one last comment: I interpret "the limit" that Nr9 speaks of as something other than a speed barrier like the speed of sound or the speed of light. In engineering, something like processor design is a huge set of trade-offs, product requirements, and goals. In this very complex design space there is a vague boundary of what makes a possible and acceptable design. The available technology governs this design space, and physics governs the technology. It is simply not correct to say that 2.55178 GHz is the limit for the PowerPC. It is not even correct to say that about the PowerPC 970FX on process XYZ at voltage V, because it depends on the system parameters, operating conditions, software, etc. So the limit being reached is really just a recognition by processor designers that they are flogging a dead horse in trying to achieve their (very complex set of) goals, and new avenues to improve processor design must be followed.
You can only optimise so much. I'd love for everyone to get rid of these programmer wanna-bes and make us start using assembly again.
I blame the "Nah, no need for optimising, there'll be more powerful computers out soon enough anyway" attitude on Java and on these lazy bastards that think they are programmers because they got a 4-year CS degree. Just because you are shown 2 or 3 languages doesn't mean you suddenly are a master programmer like they think.
I say bring back inline assembly. That'll teach those punks... that'd hurt their heads... having to actually think
Felt that I had to reply to this one
(off-topic)
As I'm doing my Bachelor's degree in Computer Science, I'll eventually be joining the "lazy bastards that think they are programmers because they got a 4 year CS degree". I know you didn't mean it to be that harsh, but I still have to disagree with you. Through my BS degree I've had experience with a handful of languages (C, C++, C#, Java (SE & EE) and Perl). When I finish my Master's degree I might have added another one or two to that list, and I'll be joining the workforce as a junior programmer...
Just because I won't have 20 years of programming experience and practical knowledge of every language since Ada doesn't mean I'm not a programmer. I'd regard any 4-5 year Computer Science graduate as a programmer, although lacking the experience that would make them a senior programmer.
(/off-topic)
My main problem with your post is this:
Even though, as you say, you can only "optimise so much", there is hardly any speed optimisation being done in the industry, unless that's a special requirement for that particular application. Even "hard-core" games lack optimisation because, as I stated before, there's simply no need for it. Customers will be able to get new hardware to play the games by the time the game hits the market.
(I bring up games because they are usually applications that have very high requirements across the board)...
Also, as someone else stated, hardware changes too frequently in this business. This makes it harder to optimise the code, because by the time the application is completed, newer hardware is released that might render your optimisation efforts useless, because the new chips have changed and thus can't utilise the optimisations like you intended.
Which is why I brought up the PlayStation 2 console. There's a massive difference between the complexity and graphics of games being developed 3 years ago and games being developed now. Just have a look at the far higher graphical quality between GTA3 (the original) and the screenshots of the upcoming GTA: San Andreas. There's just no question. The developers are able to run the game flawlessly on the PS2, while the PC port of the game will most probably need a 2.5 GHz+ CPU with a killer graphics card to produce the same results. And it's not because the PS2 is more powerful in any way; it's because the PS2 version of the game is highly optimised for its intended platform, whereas the PC version most probably won't be. And why? Because the hardware on the PC side is so much more powerful and frequently changing, optimisation isn't cost effective.
I say slow down the pace, and instead implement each new CPU design properly, not rushing new products to market for the sake of "beating the next guy"...
Ok, it might have hit a wall or it might not have... But I am getting the gist that if clock speed can still be increased, it won't be by much and won't happen for a while (just like the dual cores). So this might be a little off-topic, but I am getting heavily into video editing and my PowerBook 1.5 isn't handling it very well. I'm going to buy a Power Mac and was wondering if I should get the 2.5 now, or wait until January for a potential power increase. Thanks.
Let me be fair on this one. I'm not saying all Computer Science degrees are bad. But a majority are. My friend just graduated with a 4-year BS from Linfield. It's about 15-20k a year to go there. It's a very renowned school.
He asked me to help him with some search/sort algorithms. Breadth-first and depth-first, if I remember correctly. He sent me the code he had... He said it was in C++. So I'm like GREAT!! I can follow OO much easier. I get the code and no feature from C++ was used... No polymorphism, no inheritance, no encapsulation, no OO whatsoever. I asked him how it was C++... he said because it had cout statements. I had to laugh.
Point is, most CS majors are shown a handful of languages and their capabilities... they aren't shown how to use them in a work-related environment.
Another example... I used to do QA work for Intel. I had to meet with 3 of the programmers (1 was the lead and 2 were newbies) on a project a few months ago... none of them knew any UML. They had no idea how to do any documentation.
The problem with CS is that graduates aren't ready for the workforce. The projected time for a new CS graduate to be caught up in the workforce is 9-18 months.
I am almost done with a 4-year Software Engineering degree. I was up to speed in about 3 months at a company that will go nameless. The #1 reason I kept up was because I was shown why and how to program, not just how to program. CS is just a completely different realm... different intentions. More about innovating bleeding-edge technology than using the tools that are available now.
As far as optimizing... You said you program in Java and C#... you don't think you need to optimize in either of those for client-side software? Go do a GIMP-type application in either of those and find yourself hogging 20-50% CPU when idling.
All software (even games) needs to be optimized. The more optimized a game is, the larger the platform support, which is well worth the money in the long run. If people were to just program games in straight Cocoa without optimization it would be a nightmare. Cocoa is a great API, but there is always a need for optimizing... even on today's hardware... especially for Java and C#.
Compare ANY game that is on PC and PS2. Look at the graphics. The PS2 == a PC from 3+ years ago. The differences are LARGELY noticeable. Anti-aliasing... forget about it. Anisotropic filtering... did that even really exist then?
The textures are horrid on the PS2 compared to the PC. My friend was playing SOCOM Navy Seals 2... What an ugly-ass game. I couldn't even watch because the texturing was so horrible.
Given the quality decay going from PC -> PS2 (BTW, they build on PCs), I can see how the games get so optimized to run on that slow system.
Sun just announced that the UltraSPARC IV+ went 90 nm and gained quite a lot frequency-wise.
From 1.2 GHz to 1.8 GHz by doing the 130 nm to 90 nm dance we are all talking about.
So... What is Texas Instruments' secret for a 50% increase in speed? Was their former fab really bad and their new one really great, or is their 1.8 GHz claim just a lie or wishful thinking?
It should be noted that IBM pretty publicly attested that parallelism, rather than clock rate, was the way forward for performance when they introduced the POWER4. The Opteron was designed with dual cores in mind long before the issues at 90 nm were fully realised, and about the only company that has really gone backwards on multi-core plans has been Intel.
Every other major company, including Sun, had realised that as processes shrank and they could toy with more transistors, adding cores offered attractive performance gains for the transistor usage. To say the multi-core plans are new, or solely related to current heat difficulties, is, to be perfectly blunt, a bit naïve.
Quote:
Originally posted by Nr9
I find it amusing that people associate knowledge with sophisticated, grammatically-correct English.
the fact is, most people who know anything worth shit don't have time to sit around and compose essays on forums.
Most people who are in a position to know much that is worthwhile are also well enough educated that they don't have to spend that much time correcting grammatical errors. They tend to not make the obvious mistakes in the first place. A university education tends to knock it out of you.
I'm thinking this is the wrong thread. Really, we can't even decide if we have hit a wall or not, or even what type of wall it is. Considering that, I'd not take advice from this thread.
That won't stop me from offering a little though. If you need a new PC and the hardware is available to do what you want then buy now. Of course Christmas is coming and you really shouldn't be thinking about yourself, so waiting until the new year can reduce guilt problems a bit.
Dave
Quote:
Originally posted by JtheVGKing
I'm going to buy a Power Mac and was wondering if I should get the 2.5 now, or wait until January for a potential power increase. Thanks.
Quote:
Originally posted by Henriok
Sun just announced that the UltraSPARC IV+ went 90 nm and gained quite a lot frequency-wise.
From 1.2 GHz to 1.8 GHz by doing the 130 nm to 90 nm dance we are all talking about.
I've given up on following anything Sun or SPARC related, but this is very good to hear. Is this on a Fujitsu or TI process? Last year TI was trumpeting their 90nm process in relation to Sun. It would be very interesting to see how hot that SPARC is running.
Quote:
So... What is Texas Instruments' secret for a 50% increase in speed? Was their former fab really bad and their new one really great, or is their 1.8 GHz claim just a lie or wishful thinking?
I wish I knew what was up with TI. Bits and pieces of information have popped up indicating that they have made considerable breakthroughs. Last I knew, though, those breakthroughs were not in production yet.
I do not know whether TI is boasting too much, but if they successfully deliver chips with the technology that has been talked about they will become the industry's low-power leader at 90nm. This URL: http://focus.ti.com/docs/pr/pressrel...prelId=sc03226 focuses a bit on their 90nm process with respect to SPARC. Unfortunately it is a press release. In any event this 90nm process does appear to be far more refined than what IBM is using. TI is also far along with 65nm: http://focus.ti.com/docs/pr/pressrel...relId=sc04074; here they are claiming a reduction in leakage current by a factor of 1000 and a 40% increase in transistor performance over 90nm. Now this could all be bluster, but Sun seems to be happy with the new SPARC.
In any event TI has not publicly complained about hitting a wall.
As an aside, many people have been making references to Intel and the poor showing of Prescott. As I see it there are several problems with that chip. One is that it is huge, to the point where one wonders what all those transistors are doing. Another is that some of that logic is clocked at 2X. Finally, the static power on this chip is a killer. On the other hand, Intel has gotten good results with Dothan.
Quote:
Originally posted by Telomar
Every other major company, including Sun, had realised that as processes shrank and they could toy with more transistors, adding cores offered attractive performance gains for the transistor usage. To say the multi-core plans are new, or solely related to current heat difficulties, is, to be perfectly blunt, a bit naïve.
This is a lot like asking a mountain climber why he climbs the mountain. Often the response is "because it is there". The move to multi-core processors has a great deal to do with the fact that the space is now there to put the extra processors in place. In the not-too-distant future I would expect that we will see everything on chip except the system memory, the GPU and required buffering.
We are already seeing examples of such trends outside the mainstream processor market. Freescale's PowerQUICC series is a modern example of the high integration and specialized processors that all those transistors enable. For mainstream processors, though, we have reached the point where even after trying to improve the main core you still have enough transistors available to implement a second core. Well, that is, given your ability to dissipate the heat generated.
Quote:
They tend to not make the obvious mistakes in the first place. A university education tends to knock it out of you.
Here I will only say that I've met a number of very well educated people whom I could respect for their intelligence when applied to their field of interest, but who could not communicate well at all. Good communication skills are not a natural byproduct of a developed and intelligent mind.
Quote:
Originally posted by Telomar
Most people who are in a position to know much that is worthwhile are also well enough educated that they don't have to spend that much time correcting grammatical errors. They tend to not make the obvious mistakes in the first place. A university education tends to knock it out of you.
Right, do you even have an engineering degree? You don't understand that unless you go to a liberal arts university, you don't need to write much, do you?
Quote:
Originally posted by wizard69
. . . Here I will only say that I've met a number of very well educated people whom I could respect for their intelligence when applied to their field of interest, but who could not communicate well at all. Good communication skills are not a natural byproduct of a developed and intelligent mind.
It's not just communication. Well-educated people seldom make obvious technical mistakes, especially within their field. When they do err, they are quick to recognize it when someone points it out. I see neither of these characteristics in our friend.
Quote:
Originally posted by Nr9
plans change
How you can say that with a straight face is beyond me.
Quote:
Originally posted by THT
The 130 nm processors are at the end of their life cycles on mature 130 nm fabs. The 90 nm processors are at the beginning of their life cycles on immature fabs. If you allow a 130 nm fab 2 years to produce 200+ MHz per stage, you should at least give the 90 nm fabs another 1.5 years to do the same before coming to a conclusion.
Quote:
Originally posted by wizard69
All this talk about hitting the wall is too much negativity too early in the process. Sit back, take a rest, and we should all see significant changes in a year or two. After the different companies prove their processes and the various cross-licenses are issued, we may have a clearer picture of where things stand.
Dave
Couldn't have said it better.
the fact is, most people who know anything worth shit don't have time to sit around and compose essays on forums.
it doesnt matter to me if you belive or not
the fact is, the limit has been reached.
come back in 2 years and you'll see.
Quote:
Originally posted by Nr9
I find it amusing that people associate knowledge with sophisticated, grammatically-correct English.
the fact is, most people who know anything worth shit don't have time to sit around and compose essays on forums.
So the brevity and poor composition of your posts are proof of their authenticity?