IBM's 65nm technology 70% done

Posted:
in Future Apple Hardware edited January 2014
Story Here.

* Story near bottom of page.

Comments

  • Reply 1 of 29
    powerdocpowerdoc Posts: 8,123member
I wonder what "70% done" means; sometimes it's the last 1% that causes all the problems ...



AMD claims it will get its first 65nm chip out in 2006; I wonder if IBM will be quicker?
  • Reply 2 of 29
65nm? What about all the problems with 90nm?
  • Reply 3 of 29
    hmurchisonhmurchison Posts: 12,425member
    I think the whole industry will be moving to DC and QC in 2006/7



Rough estimates:



    970 120mm^2 130nm single core

    970fx 56mm^2 90nm single core

    970mp 154mm^2 90nm dual core

    970mp 70mm^2 65nm dual core

    970mp 140mm^2 65nm quad core



As the process shrinks, all the CPU vendors will move to DC and QC CPUs. I can't imagine scaling will be much better at 65nm either.



Maybe I should just hold off until Power Macs have 8-core 45nm parts.
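The die-size estimates above follow from ideal area scaling: area shrinks with the square of the feature-size ratio. A quick sketch of that arithmetic (the die sizes are the rough figures from this post; the ideal square-law factor is an assumption, and real shrinks rarely hit it):

```python
# Ideal die-area scaling on a process shrink: area scales with the square
# of the feature-size ratio. Illustrative only; real shrinks fall short.

def scaled_area(area_mm2: float, old_nm: float, new_nm: float) -> float:
    """Ideal die area after shrinking from old_nm to new_nm."""
    return area_mm2 * (new_nm / old_nm) ** 2

# 970 single core, 130nm -> 90nm
print(round(scaled_area(120, 130, 90), 1))  # ~57.5mm^2 (quoted: 56mm^2)
# 970MP dual core, 90nm -> 65nm
print(round(scaled_area(154, 90, 65), 1))   # ~80.3mm^2 (quoted: 70mm^2)
```

The 65nm dual-core estimate of 70mm² quoted above is a bit more optimistic than the pure square-law number.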
  • Reply 4 of 29
    hmurchisonhmurchison Posts: 12,425member
    Quote:

    Originally posted by Altivec_2.0

65nm? What about all the problems with 90nm?





Should be addressed by now. 65nm will be different and could come with its own set of problems. 90nm isn't going to scale well, so neither Intel, IBM nor AMD will want to stay on it long when they can add additional cores in the space opened up by a 65nm process shrink.
  • Reply 5 of 29
By that logic, every time you want to increase the performance of a processor you have to shrink the process, rather than scale the MHz?
  • Reply 6 of 29
    macsrgood4umacsrgood4u Posts: 3,007member
No matter. Developing the technology in the lab is not the same as being able to manufacture it in a factory (as we all know, don't we?). According to a report today, G5 chips are not coming from IBM as fast as promised for their current product, let alone something being worked on to be ready a year from now.
  • Reply 7 of 29
    hmurchisonhmurchison Posts: 12,425member
    Quote:

    Originally posted by powermacG6

By that logic, every time you want to increase the performance of a processor you have to shrink the process, rather than scale the MHz?



As the ability to scale the CPU on a given process diminishes, then yes, the CPU vendors will have to transition more quickly to the next smaller process.



Both IBM and Intel have had issues @ 90nm, but a little time is all that is needed. They still have technologies in the wings that will help, like strained silicon, low-k dielectrics, etc.
  • Reply 8 of 29
    onlookeronlooker Posts: 5,252member
    I didn't see where it said IBM was 70% done. I only saw AMD stuff, and lesser mentions of IBM.



{edit} OK, I saw it, but it's essentially talking about AMD. IBM is more of a side note, and it gives no real specifics on IBM's transition to 65nm.
  • Reply 9 of 29
    Quote:

    Originally posted by powermacG6

By that logic, every time you want to increase the performance of a processor you have to shrink the process, rather than scale the MHz?



I think it would be more accurate to say that shrinking the process is an easy way of increasing clock speed. I don't know how long this can go on for; I mean, chips can only be made so small.
  • Reply 10 of 29
I'm not exactly sure how it works, but say 130nm, 90nm, and 65nm processes each produce 130nm, 90nm, and 65nm transistors and interconnects, for argument's sake. (I read this somewhere, actually.) So if you perfectly scaled a 970 from 130nm to 90nm to 65nm, at the exact same Hz, they would become progressively faster. I don't know how noticeable it would be, but because the electrons only have to travel 90nm instead of 130nm, or 65nm instead of 90nm, they are faster. From what I know of physics it should be a little better than a linear speed increase.
  • Reply 11 of 29
    programmerprogrammer Posts: 3,458member
    Quote:

    Originally posted by macserverX

I'm not exactly sure how it works, but say 130nm, 90nm, and 65nm processes each produce 130nm, 90nm, and 65nm transistors and interconnects, for argument's sake. (I read this somewhere, actually.) So if you perfectly scaled a 970 from 130nm to 90nm to 65nm, at the exact same Hz, they would become progressively faster. I don't know how noticeable it would be, but because the electrons only have to travel 90nm instead of 130nm, or 65nm instead of 90nm, they are faster. From what I know of physics it should be a little better than a linear speed increase.



The problem is that heat output doesn't drop as fast, so there is more heat in a smaller area even if they don't increase the transistor count, which they normally do (the 970FX being an exception).
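Programmer's point can be put in numbers: on a shrink, die area falls faster than power draw, so power density (W/mm²) rises even with the transistor count held constant. A minimal sketch; the wattages below are made-up illustrative values, not measured figures for any real chip:

```python
# Power density rises on a shrink when area falls faster than power.
# The 50W/33W figures are illustrative assumptions, not real chip data.

def power_density(watts: float, area_mm2: float) -> float:
    """Heat per unit of die area, in W/mm^2."""
    return watts / area_mm2

# Hypothetical shrink: area is roughly halved, power only drops by a third.
old = power_density(50.0, 120.0)  # larger-process die
new = power_density(33.0, 56.0)   # shrunk die, same core
print(new > old)                  # True: more heat in a smaller area
```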
  • Reply 12 of 29
    powerdocpowerdoc Posts: 8,123member
    Quote:

    Originally posted by Programmer

The problem is that heat output doesn't drop as fast, so there is more heat in a smaller area even if they don't increase the transistor count, which they normally do (the 970FX being an exception).



I know it's a waste of money, but they can choose to lower the transistor density of the chip in order to gain clock speed, even if the die size stays unchanged.
  • Reply 13 of 29
    zapchudzapchud Posts: 844member
    Quote:

    Originally posted by macserverX

So if you perfectly scaled a 970 from 130nm to 90nm to 65nm, at the exact same Hz, they would become progressively faster. I don't know how noticeable it would be, but because the electrons only have to travel 90nm instead of 130nm, or 65nm instead of 90nm, they are faster. From what I know of physics it should be a little better than a linear speed increase.



    I'm not 100% sure I understand what you're trying to say, but I'll give it a go.



You are suggesting that if you take the CPU, scale it down to 90 and further to 65nm, and keep it at the same core frequency, the processor would be faster? If this is what you're trying to say, it's wrong.



The CPU will be exactly as fast at all process sizes, given that the core and core frequency stay the same. Heat dissipation may go down; heat density might go up. To make the processor faster, you either 1) optimize the core by adding functionality, removing constraints, adding cache, etc., or 2) up the core frequency, or 3) both. All three options are common enough.
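The levers Zapchud lists reduce to a simple model: throughput is instructions-per-clock times clock rate, and a shrink by itself moves neither term. A sketch of that model; the IPC and GHz values are arbitrary illustrative numbers:

```python
# Performance = IPC x frequency. A process shrink alone changes neither
# factor, so it buys nothing at a fixed core and clock. Values are made up.

def perf(ipc: float, ghz: float) -> float:
    """Relative throughput in billions of instructions per second."""
    return ipc * ghz

base = perf(1.0, 2.0)          # original core on the original process
shrunk = perf(1.0, 2.0)        # same core, same clock, smaller process
better_core = perf(1.3, 2.0)   # option 1: improve the core (higher IPC)
higher_clock = perf(1.0, 2.5)  # option 2: raise the core frequency

print(shrunk == base)                              # True: shrink alone buys nothing
print(better_core > base and higher_clock > base)  # True: both levers help
```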
  • Reply 14 of 29
    onlookeronlooker Posts: 5,252member
    Quote:

    Originally posted by Altivec_2.0

65nm? What about all the problems with 90nm?



    I have to say that I still think along the lines of Altivec_2.0 here.



If they are still having so much trouble with 90nm, how is making everything smaller going to fix it? First, I must say that I do not know a damn thing about processor manufacturing, but wouldn't it seem more intelligent to first fix the problem on a larger scale (at 130 or 90nm), then use the same model at 90 or 65nm and go from there? It really seems stupid to me to shrink something that is broken down to a smaller scale and expect it to be OK.



    Am I alone on this train of thought?
  • Reply 15 of 29
    hmurchisonhmurchison Posts: 12,425member
I think each fab process comes with its own idiosyncrasies. Just because you licked all the problems at 130nm doesn't mean you'll be unscathed at 90nm, as we found out. There will be some issues that crop up at 65nm, but they could be very different from what 90nm delivered.



The 90nm problems were a good lesson. If you read the roadmaps from Intel and IBM prior to 90nm production, they were very heady. Intel thought their NetBurst architecture would take them to 7GHz. IBM projected what, 10GHz at very little wattage? Well, they got a rude awakening, and now everyone is singing a different tune.



They will have newer technology to bring to 65nm, so I'm not worried that they will have problems, especially since rather than trying to scale the hell out of the chips they will simply expand horizontally to new cores that don't have to scale as much.
  • Reply 16 of 29
    Quote:

    Originally posted by onlooker

    I have to say that I still think along the lines of Altivec_2.0 here.



If they are still having so much trouble with 90nm, how is making everything smaller going to fix it? First, I must say that I do not know a damn thing about processor manufacturing, but wouldn't it seem more intelligent to first fix the problem on a larger scale (at 130 or 90nm), then use the same model at 90 or 65nm and go from there? It really seems stupid to me to shrink something that is broken down to a smaller scale and expect it to be OK.



    Am I alone on this train of thought?




I'm with you and Altivec_2.0. The reason 90nm isn't working is quantum tunneling: electrons are jumping the insulators. 65nm makes the problem bigger, IMO; it doesn't solve it. I'd fix 90nm first, find ways of making 90nm at least as efficient as 130nm, and shrink from there. Are Intel/IBM basically saying, we'll shrink the die to get more cores while just living with the increase in leakage?



IBM said that scaling died somewhere between 130 and 90nm. I'd find the size where scaling breaks down and try to understand why. It seems some memory manufacturers have produced 110nm OK, so the problem might be between 110 and 90nm.
  • Reply 17 of 29
    amorphamorph Posts: 7,112member
    Quote:

    Originally posted by powermacG6

    Are Intel/IBM basically saying, we'll shrink the die to get more cores while just living with the increase in leakage?



That was certainly Intel's position out of the starting gate, which is why their early 90nm offerings require their own 30-amp circuits. They seem to be getting religion about process tech now. IBM took a middle road and used relatively simple process tech in hopes of easing the migration to 90nm. AMD and Mot are moving to 90nm with all the fixings, which is (one reason) why they're just now making the transition.



    Quote:

IBM said that scaling died somewhere between 130 and 90nm. I'd find the size where scaling breaks down and try to understand why. It seems some memory manufacturers have produced 110nm OK, so the problem might be between 110 and 90nm.



    That seems to be the consensus, as far as I've read.
  • Reply 18 of 29
    onlookeronlooker Posts: 5,252member
    Quote:

    Originally posted by powermacG6

I'm with you and Altivec_2.0. The reason 90nm isn't working is quantum tunneling: electrons are jumping the insulators. 65nm makes the problem bigger, IMO; it doesn't solve it. I'd fix 90nm first, find ways of making 90nm at least as efficient as 130nm, and shrink from there. Are Intel/IBM basically saying, we'll shrink the die to get more cores while just living with the increase in leakage?



IBM said that scaling died somewhere between 130 and 90nm. I'd find the size where scaling breaks down and try to understand why. It seems some memory manufacturers have produced 110nm OK, so the problem might be between 110 and 90nm.




Exactly! It may be simplistic thinking that brings us to these conclusions, but most mistakes are made in the simplest of routines. Some of the little things should not be overlooked before you move them further out of reach.

If they take the time to understand why the transition from 130 to 90nm was so difficult, or from 110 to 90nm as you said, the experience could help them draw more accurate conclusions as to why the 65nm transition may have similar problems.

It seems redundant not to fully examine these processors and their problems. It makes you wonder if they even bothered to check out what went wrong with the 180 to 130nm transition. Could this be an ongoing recurrence of something they neglected that should not have been overlooked long ago?
  • Reply 19 of 29
    Zapchud, you obviously don't understand physics at all. Do you drive? I hope not.



OK, a car is travelling at 60mph (or, for simplicity's sake, 100kph).



Let's say the car travels 120 miles (200km); it will take the car 2 hours to travel that distance.



Now, the car only travels 60 miles (100km). It takes the car 1 hour to get to its destination.



    Finally, the car goes 30 miles (50km). The trip takes 1/2 hour.



    130nm = 1ns (ns?)

    90nm = .6923ns

    65nm = .5ns



Now, to justify the slightly better than linear speed increase.



If the above cars reach 60mph (100kph) almost instantly after starting their trips, we can say (calculus talk; we can say stuff even if it isn't quite true) that the car's instantaneous speed at t=0 is 60mph (100kph). Now let's say the car slows 1/4mph for every mile traveled.



    So the instantaneous speed of the car:

    Code:


    Distance        Final speed

    120mi/200km     60mph - (120mi/4) = 30mph

    60mi/100km      60mph - (60mi/4) = 45mph

    30mi/50km       60mph - (30mi/4) = 52.5mph







    And that wouldn't be linear, so...



Now that I think about it more, with the die size on the X-axis and the speed on the Y-axis, the equation should be of the form 1/x.
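For reference, the nanosecond figures quoted earlier in this post are just linear ratios of the feature sizes under the post's simplified "shorter distance means proportionally less delay" model. A sketch of that arithmetic only; whether this model says anything about a real CPU's speed is exactly what the next reply disputes:

```python
# The post's simplified model: delay is proportional to feature size,
# normalized to 130nm = 1. This is the poster's contested assumption,
# not an accepted model of CPU timing.

def relative_delay(node_nm: float, baseline_nm: float = 130.0) -> float:
    """Delay relative to the 130nm baseline under the linear model."""
    return node_nm / baseline_nm

print(round(relative_delay(130), 4))  # 1.0
print(round(relative_delay(90), 4))   # 0.6923
print(round(relative_delay(65), 4))   # 0.5
```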
  • Reply 20 of 29
    zapchudzapchud Posts: 844member
    Quote:

    Originally posted by macserverX

    Zapchud, you obviously don't understand physics at all. Do you drive? I hope not.





Please, before insulting me and then firing away with basic mathematics, make sure you have read my post and understood what I said.