IBM CTO: Scaling dead at 130nm

Posted in Future Apple Hardware · edited January 2014
I caught this link off ArsTechnica.



Bernie Meyerson, IBM's chief technology officer, talked about process scaling at a microprocessor forum.



Quote:

"Somewhere between 130-nm and 90-nm the whole system fell apart. Things stopped working and nobody seemed to notice." He added, "Scaling is already dead but nobody noticed it had stopped breathing and its lips had turned blue."



He points to a growing need to innovate because of this. The article mentions SOI and strained silicon, but it's not really clear whether those are the kinds of innovations Meyerson has in mind.
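For background (my own gloss; the article gives no formulas): the "scaling" Meyerson is burying is classical Dennard scaling, the rule of thumb that shrinking a process by a factor k keeps power density constant. Roughly:

```latex
% Classical Dennard scaling by a factor k > 1 (130nm -> 90nm is k ~ 1.4).
% Illustrative sketch only, not from the article.
\[ L' = L/k, \qquad V' = V/k, \qquad f' = kf, \qquad C' = C/k \]
% Dynamic power per transistor is P = C V^2 f, so it scales as
\[ P' = C' V'^2 f' = \frac{C}{k}\cdot\frac{V^2}{k^2}\cdot kf = \frac{P}{k^2} \]
% Density rises by k^2, so power per unit area stays flat. Leakage
% current does not follow these rules, and it is the part that
% "fell apart" somewhere below 130nm.
```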



So what does this mean for the G5 and beyond?



Here's the article.

Comments

  • Reply 1 of 12
    scott Posts: 7,431 member
    He's not referring to any particular G5 fab process in the works right now. Just scaling as a general method of getting more performance out of the same design.



    If IBM is looking past scaling to get more performance out of electronics, I'd say that's a good thing for the G5 et al.
  • Reply 2 of 12
    pb Posts: 4,255 member
    Quote:

    Originally posted by Scott

    He's not referring to any particular G5 fab process in the works right now. Just scaling as a general method of getting more performance out of the same design.



    If IBM is looking past scaling to get more performance out of electronics, I'd say that's a good thing for the G5 et al.




    Indeed, the statement is not G5-specific, and it is perhaps a good thing for the G5 in the long term, as IBM seems to be the first to recognize the issue publicly and, I presume, is working on a solution. But in the near term, things may be worse than we ever suspected for the G5, with all the consequences that would have for using G5s in anything other than high-end desktop and server machines.
  • Reply 3 of 12
    smircle Posts: 1,035 member
    Obviously he is exaggerating a bit. Process scaling still works at 90nm, and since IBM is projecting to deliver 65nm CPUs for Microsoft's Xbox, it can't be completely dead.



    However, there are increasing signs that the usual method of moving to smaller structures, then layering on innovations and process refinements to achieve higher frequencies, is no longer working that well.

    IBM has seen more difficulties than expected in its move to 90nm, and Intel has just cancelled future generations of its fire-breathing monster P4. At 100 W of power consumption and nearly 3.5 GHz, that thing has hit a brick wall.
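    To put rough numbers on that brick wall (my arithmetic, with made-up but plausible figures): dynamic power scales with voltage squared times frequency, and higher clocks generally demand higher voltage, so power climbs far faster than clock speed.

    ```latex
    \[ P_{\mathrm{dyn}} \approx \alpha C V^2 f \]
    % Push f up 30% and raise V by 10% to get there:
    \[ \frac{P'}{P} = \left(\frac{V'}{V}\right)^2 \frac{f'}{f}
                    = (1.1)^2 \times 1.3 \approx 1.57 \]
    % A 100 W part becomes a ~157 W part, before leakage is even counted.
    ```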



    This is not good news at all, since he implies that future generations of chips will be even more dependent on scientific breakthroughs. That is going to cost big time, and while Intel with its mass market will have little trouble recouping the investment, small-market architectures might suffer.
  • Reply 4 of 12
    shawk Posts: 116 member
    This is an interesting and significant statement from IBM.

    The implication is that existing performance improvement and development models are no longer valid or predictable.



    The effects may include:

    The end of predictable performance increases

    This suggests that operating systems with high demands on minimum processor speed may not be a good idea.

    With a 6 GHz CPU requirement for acceptable performance, Microsoft Longhorn may need to be rethought.



    A move to more efficient OS and application architectures.

    To date, it has been more cost-effective to write sloppy, bloated code and let predictable CPU performance increases make it run acceptably fast.

    The Microsoft Office suite comes to mind as a good example of this philosophy.

    Performance improvements may need to come from OS and application efficiencies (see the sketch at the end of this post).



    A potential technological breakthrough making other fabrication techniques and design architectures suddenly obsolete.

    The cost of research for CPU performance increases may become uneconomically high for all but Intel and IBM.

    Interestingly, the research will need to be basic physics research rather than applied process research. This puts IBM at an advantage.



    A move to price-driven CPU designs and a commodity CPU market.

    When a technological limit is reached, the primary competitive advantage becomes cost.

    This puts great pressure on IBM and Intel to find breakthrough technologies. If they do not, the future lies with those who can make dirt cheap CPUs.
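    Here's the sketch promised above: a toy example (mine, hypothetical) of what getting performance from software rather than silicon looks like. The same question is answered by an O(n²) algorithm and an O(n) one; no plausible clock bump closes a gap like this.

    ```python
    # Toy sketch: performance from better software, not faster silicon.
    # Both functions answer "does this list contain a duplicate?"
    import time

    data = list(range(3000))
    data.append(2999)  # duplicate placed at the very end (worst case)

    def has_duplicate_quadratic(xs):
        # Compare every pair: O(n^2) comparisons.
        for i in range(len(xs)):
            for j in range(i + 1, len(xs)):
                if xs[i] == xs[j]:
                    return True
        return False

    def has_duplicate_linear(xs):
        # Remember values seen so far in a set: O(n) work.
        seen = set()
        for x in xs:
            if x in seen:
                return True
            seen.add(x)
        return False

    for fn in (has_duplicate_quadratic, has_duplicate_linear):
        t0 = time.perf_counter()
        assert fn(data)
        print(f"{fn.__name__}: {time.perf_counter() - t0:.4f}s")
    ```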
  • Reply 5 of 12
    pb Posts: 4,255 member
    Quote:

    Originally posted by shawk



    The effects may include:

    The end of predictable performance increases




    Exactly my thoughts. What I can see for the immediate future is that routine speed bumps will be (probably much) harder to achieve. Perhaps the one-year gap between the two Power Mac G5 updates (if the second happens at this WWDC) is the first sign of the new era.
  • Reply 6 of 12
    oldmacfan Posts: 501 member
    Quote:

    Originally posted by shawk

    A move to more efficient OS and application architectures.

    To date, it has been more cost-effective to write sloppy, bloated code and let predictable CPU performance increases make it run acceptably fast. [...]

    A move to price-driven CPU designs and a commodity CPU market.

    When a technological limit is reached, the primary competitive advantage becomes cost.




    A move to more efficient OS and application architectures would be a dream come true. Programmers don't get enough time to create under those terms. Everything changes too quickly for them. Time is money and programming takes time.



    I am more than ready for CPUs to reach the commodity pricing stage. Yes, we can buy cheap CPUs now, but they are not the top performers.
  • Reply 7 of 12
    dfryer Posts: 140 member
    On the pessimistic side, this could be IBM knowing they've screwed up the 90nm process and trying to pass it off as "normal".



    Come on guys, 3 GHz by summer or intel 0wnZ us!!
  • Reply 8 of 12
    spankalee
    What really worries me is that this guy says everything fell apart between 130nm and 90nm. What does that mean for the 970fx? Maybe it won't scale up in speed the way everyone is expecting. Maybe it's really only marginally better than the 130nm 970.



    It would be disastrous if the G5 were stuck at 2 GHz for over a year. It'd be Motorola all over again, except this time we're hitting a physics wall, not a bad-management one.



    Maybe IBM could pull some Intel-like tricks to push the clock speed up, but even Intel these days is going for lower-clock, lower-power designs with similar performance.



    I'm much more interested in the rumored 975 and 980 chips now. The next performance jump might come from a dual-core SMT design. A dual 980 Power Mac could look like an eight-processor machine to OS X. That could be where we're heading.
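    For what it's worth, the OS simply counts logical processors, so the arithmetic behind that eight-processor figure is chips × cores per chip × SMT threads per core. A trivial sketch (the dual-980 numbers are hypothetical, not a confirmed spec):

    ```python
    # Logical CPUs seen by the OS = chips * cores per chip * threads per core.
    import os

    chips, cores_per_chip, threads_per_core = 2, 2, 2  # hypothetical dual 980:
                                                       # two dual-core, 2-way SMT chips
    print("a dual 980 would present:", chips * cores_per_chip * threads_per_core)
    print("this machine presents:", os.cpu_count())
    ```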
  • Reply 9 of 12
    pb Posts: 4,255 member
    Quote:

    Originally posted by dfryer

    Come on guys, 3 GHz by summer or intel 0wnZ us!!







    Indeed. Dothan, launched today, is a 90 nm part, and it performs per clock like an Athlon 64 FX.
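    ("Per clock" here means IPC, instructions per cycle; delivered performance is roughly the product of IPC and clock, so a slower-clocked chip with higher IPC can keep up. Illustrative numbers, not benchmarks:)

    ```latex
    \[ \text{performance} \approx \text{IPC} \times f \]
    % e.g. a 2.0 GHz part matches a 2.4 GHz part if its
    % IPC is 2.4 / 2.0 = 1.2 times higher.
    ```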
  • Reply 10 of 12
    kroehl Posts: 164 member
    Quote:

    Originally posted by oldmacfan

    A move to more efficient OS and application architectures would be a dream come true. Programmers don't get enough time to create under those terms. Everything changes too quickly for them. Time is money and programming takes time.



    I am more than ready for CPUs to reach the commodity pricing stage. Yes, we can buy cheap CPUs now, but they are not the top performers.




    Oh $DEITY



    I can just hear all the old Linux neck-beards going "Yaaah, well, I've been running a 600-user university research system off a 33 MHz 486DX for years. The hard-disk spindles may have gone square since it hasn't been powered down since the autumn of 1992, but it's the best damn system money doesn't have to buy. Hand me another brewski and pass the pepperoni pizza."



    OTOH, the projected (speculated?) hardware requirements for Longhorn are just plain ridiculous. Anyone who assumes that processor development will proceed at the same exponential pace as the last 10-15 years and simply gobble up the advances in bloatware should be taken out back and torn apart by those very same frothing-at-the-mouth UNIX zealots.



    Hey... we all use UNIX...

  • Reply 11 of 12
    oldmacfan Posts: 501 member
    Quote:

    Originally posted by kroehl

    I can just hear all the old Linux neck-beards going "Yaaah, well, I've been running a 600-user university research system off a 33 MHz 486DX for years. The hard-disk spindles may have gone square since it hasn't been powered down since the autumn of 1992, but it's the best damn system money doesn't have to buy.



    I know this guy: his dial-up internet runs into that machine and then out the network port for all 600 users. It doubles as a print server, firewall, and backup for the entire lab (on two 512 MB hard drives).
  • Reply 12 of 12
    leonard Posts: 528 member
    Quote:

    Originally posted by spankalee

    I'm much more interested in the rumored 975 and 980 chips now. The next performance jump might come from a dual-core SMT design. A dual 980 Power Mac could look like an eight-processor machine to OS X. That could be where we're heading.



    Yes, it's sounding more and more like dual core is the way to go now. Even Intel is looking at dual cores. Supposedly the 975 isn't too far off, and the 970fx is back on track (a few months delayed) according to the new AppleInsider article today... so we shouldn't be stuck at 2.0 GHz.