IBM's 65nm technology 70% done


Comments

  • Reply 21 of 29
    That was more of a vent at the genuinely intelligence-challenged people who hang around these boards. But after re-reading, I did notice something I forgot to discuss, though I wasn't sure if that was what you were in fact saying.



    If the processor gets hotter, the atoms inside it spread apart, thereby increasing its resistivity. The same thing happens in your standard filament light bulb. If you attach an ohm meter to a circuit with a light bulb and turn it on (use a dimmer so you can watch the change), you can see it get increasingly resistive.



    I'm not sure how much that offsets the benefits of smaller size/shorter distances. But since the heat issue takes you to an atomic scale, I doubt it's that much.
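    If it helps, here's a rough back-of-the-envelope sketch of what I mean, in Python. It assumes the simple linear temperature-coefficient model, and the numbers (a 20-ohm cold filament, tungsten's coefficient of roughly 0.0045 per degree C) are just ballpark figures for illustration:

        # Linear temperature-coefficient model: R(T) = R0 * (1 + alpha * (T - T0))
        def resistance(r0, alpha, t0, t):
            """Resistance at temperature t, given reference resistance r0 at t0."""
            return r0 * (1 + alpha * (t - t0))

        r_cold = 20.0            # ohms at room temperature (made-up filament value)
        alpha_tungsten = 4.5e-3  # per degree C, approximate
        for temp in (20, 500, 1500, 2500):
            print(f"{temp:>5} C: {resistance(r_cold, alpha_tungsten, 20, temp):.0f} ohms")

    Run it and the resistance climbs by roughly an order of magnitude between a cold filament and a white-hot one; a processor runs nowhere near that hot, so the effect there is much smaller, but it's the same direction.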



    macserverX
  • Reply 22 of 29
    msantti Posts: 1,377 member
    I think IBM needs to actually master the 90nm concept before going on to bigger things.



  • Reply 23 of 29
    kim kap sol Posts: 2,987 member
    Quote:

    Originally posted by msantti

    I think IBM needs to actually master the 90nm concept before going on to bigger things.







    You mean smaller things.
  • Reply 24 of 29
    telomar Posts: 1,804 member
    Quote:

    Originally posted by macserverX

    If the processor gets hotter, the atoms inside it spread apart, thereby increasing its resistivity. The same thing happens in your standard filament light bulb. If you attach an ohm meter to a circuit with a light bulb and turn it on (use a dimmer so you can watch the change), you can see it get increasingly resistive.



    I'm not sure how much that offsets the benefits of smaller size/shorter distances. But since the heat issue takes you to an atomic scale, I doubt it's that much.




    From memory, the greater challenge in process shrinks is the increase in current density. That alone leads to a whole string of difficulties, covering everything from hot-spot generation and current leakage to electromigration.
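    A quick, purely illustrative way to see the current density point (my own sketch, assuming the current through a wire stays roughly constant while its cross-section scales with the process):

        # If a wire's cross-section scales with the square of the feature size and the
        # current through it stays the same, current density rises as the inverse square.
        old_node, new_node = 90.0, 65.0   # nm
        area_ratio = (new_node / old_node) ** 2
        print(f"cross-section shrinks to {area_ratio:.2f}x")
        print(f"current density rises to {1 / area_ratio:.2f}x for the same current")

    Under those (very rough) assumptions a 90nm-to-65nm shrink nearly doubles the current density, which is where the hot spots and electromigration worries come from.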
  • Reply 25 of 29
    Quote:

    Originally posted by macserverX

    That was more of a vent at the genuinely intelligence-challenged people who hang around these boards.



    And you're still wrong. You're confusing the speed of the processor (how much real work can be done in how many clock cycles) with signal speed (how fast electrons travel between device features). It is no longer true that process size is the limiting factor in CPU performance. It's pipeline depth that allows for the increase of clock rates, and it's IPC that determines performance at that rate, not process size.



    Smaller process sizes just allow more transistors to fit within the boundaries of signal timing, such that chip design is simplified and the number of redundant components is reduced.
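    To put a rough number on "the boundaries of signal timing" (my own sketch; real on-chip signals propagate well below the vacuum speed of light, so treat these as generous upper bounds):

        # Upper bound on how far a signal could travel in one clock cycle.
        C = 3.0e8   # speed of light in vacuum, m/s; real on-chip signals are much slower
        for ghz in (1.0, 2.0, 3.0):
            period_s = 1.0 / (ghz * 1e9)   # seconds per cycle
            print(f"{ghz} GHz: at most {C * period_s * 100:.1f} cm per cycle")

    Even with that generous bound you only get about 10 cm per cycle at 3 GHz, and once you account for actual propagation speeds and gate delays the usable distance per cycle is a small fraction of the die.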



    You are also overlooking (or dismissing as negligible) tons of other factors affecting processor performance, not the least of which is the thermal profile of the circuits involved.
  • Reply 26 of 29
    drboar Posts: 477 member
    The 90nm debacle tells us that being 70% done is like being 70% pregnant.



    It's done when it is shipping in sufficient numbers, not before!
  • Reply 27 of 29
    I think I'm gathering the right things from TotU. The way I was explaining it, the shorter electron travel times made the processor faster (in real work), when in fact they simply leave the processor sitting idle for longer until the clock ticks again.



    And I will admit I'm no expert, but I tend to think I have a good understanding of this stuff because I read about it as a geek, not as a fanatical Mac user who says "those numbers look good" without understanding them. So, given my previous paragraph's explanation involving the timing, I would think you should be able to scale up the clock speed with some kind of inverse relationship to the change in die size, all other human-controlled variables (transistor count and chip design, basically) remaining unchanged. Then you'd have to account for the physics of electricity and semiconductors and all that fun stuff, which I don't know at this time and won't pretend to.
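    Something like this toy calculation is what I'm picturing (very hand-wavy: it assumes gate delay scales linearly with feature size and absolutely nothing else changes, which I'm sure is not how it really works):

        # Naive scaling: if gate delay were proportional to feature size, the frequency
        # headroom from a shrink would be the inverse of the size ratio. Illustrative only.
        old_node, new_node = 90.0, 65.0   # nm
        headroom = old_node / new_node
        base_clock_ghz = 2.0              # hypothetical starting clock
        print(f"naive frequency headroom: {headroom:.2f}x")
        print(f"a {base_clock_ghz} GHz part could, in this toy model, reach "
              f"{base_clock_ghz * headroom:.2f} GHz")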
  • Reply 28 of 29
    Quote:

    Originally posted by Zapchud

    I'm not 100% sure I understand what you're trying to say, but I'll give it a go.



    You are suggesting that if you take the CPU, scale it down to 90 and then further to 65 nm, and keep it at the same core frequency, the processor would be faster? If this is what you're trying to say, it's wrong.



    The CPU will be exactly as fast at all process sizes, given that the core and the core frequency stay the same. Heat dissipation may go down; heat density might go up. To make the processor faster, you either 1) optimize the core by adding functionality, removing constraints, adding cache, etc., or 2) up the core frequency, or 3) both. All three options are common enough.






    Now tell them why this is so!

    When you go from 90nm to 65nm, the distance between two points is shorter, so it takes less time to get from point A to point B. But since clock speed is measured in cycles per whatever, the shrink would automatically increase the clock speed, because it takes less time to complete a cycle. So clock for clock the speed would and should be the same, unless you have done something to optimize the core or given it more L1/L2 cache (such as better branch prediction or an increased cache size, respectively)! This is simple logic.



    please forgive grammar and spelling errors

    thanks
  • Reply 29 of 29
    amorph Posts: 7,112 member
    Quote:

    Originally posted by jhgibbs

    Now tell them why this is so!

    When you go from 90nm to 65nm, the distance between two points is shorter, so it takes less time to get from point A to point B. But since clock speed is measured in cycles per whatever, the shrink would automatically increase the clock speed, because it takes less time to complete a cycle.




    There's nothing automatic about this.



    As macserverX figured out above, all the shorter distances give you is headroom. You can use that headroom to increase the clock speed, or you can use it to drop the voltage, or some combination of the two. Because of all the other factors in play, there are no linear scales or fixed ratios here, but those are roughly the choices that a die shrink gives you.
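    For the curious, here's a toy sketch of that trade-off using the usual dynamic-power relation (power roughly proportional to capacitance times voltage squared times frequency; the numbers below are made up, and real chips add leakage and plenty of other terms):

        # Dynamic switching power scales roughly as C * V^2 * f.
        def dynamic_power(cap_f, volts, freq_hz):
            return cap_f * volts ** 2 * freq_hz

        base   = dynamic_power(1.0e-9, 1.3, 2.0e9)   # arbitrary baseline
        faster = dynamic_power(1.0e-9, 1.3, 2.5e9)   # spend the headroom on clock speed
        cooler = dynamic_power(1.0e-9, 1.1, 2.0e9)   # spend it on a lower voltage instead
        print(f"clock bump:   {faster / base:.2f}x the baseline power")
        print(f"voltage drop: {cooler / base:.2f}x the baseline power")

    Same headroom, two very different outcomes: a hotter, faster part or a cooler one at the same clock.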



    The clock runs at whatever frequency it's told to. If the circuits could go a lot faster, good for them. But they're not allowed to "fire when ready." They fire when the clock signal peaks. Otherwise, you wouldn't see technologies like PowerTune and SpeedStep that throttle the CPU clock dynamically.



    Quote:

    So clock for clock the speed would and should be the same, unless you have done something to optimize the core or given it more L1/L2 cache (such as better branch prediction or an increased cache size, respectively)! This is simple logic.



    This is true as far as it goes, although there are other tricks.
