Chip speeds

Posted in Future Apple Hardware, edited January 2014
Is it just me, or has anyone else noticed that the two major chip manufacturers seem stuck at 3-3.x GHz clock speeds and are just adding more cores to compensate? I know more cores is not a bad thing, it's just that not many OSes and apps out there can use them efficiently yet. But at the same time, we seem to be in the middle of a paradigm shift towards that, and Apple seemingly wants to be at the forefront... I hope SL delivers in spades.



Just my random thought.

Comments

  • Reply 1 of 12
    krispie Posts: 260 member
    Thanks for that.
  • Reply 2 of 12
    futurepastnow Posts: 1,772 member
    They're not so much "stuck" as they made a conscious decision to go that way.



    Server processors are where the money is and server software is very parallel.
  • Reply 3 of 12
    mbmcavoy Posts: 157 member
    Quote:
    Originally Posted by hypoluxa


    Is it just me, or has anyone else noticed that the two major chip manufacturers seem stuck at 3-3.x GHz clock speeds and are just adding more cores to compensate?



    Clock speed increases have slowed. I think that's partly because the technology is being pushed up against a wall, and further breakthroughs are needed to get more speed. But the bigger issue with clock speed is power consumption, and the current range of speeds seems to be a good compromise, close to the sweet spot.



    Adding cores is another way to gain performance. Current-generation software is not very effective at using multiple cores, though, so more than two cores is still not really mainstream. We will need a paradigm shift to see mainstream benefits from 3+ cores; I am hopeful that Snow Leopard will help kick this off.



    However, CPU performance is more than just clock speeds and cores. The architecture of each core is becoming more powerful, and more work is accomplished per clock tick.



    Compare a Core 2 Duo and a Pentium D (both dual-core chips) at the same speed - the Core 2 Duo will blow the Pentium D out of the water.
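    To make the "using multiple cores" point concrete, here is a minimal sketch of the kind of hand-chunked, data-parallel loop that extra cores actually speed up. It assumes today's Swift and the Dispatch framework purely as illustration (Snow Leopard's Grand Central Dispatch actually shipped as a C API with blocks), and the workload and sizes are invented.

        import Foundation

        // How many hardware threads the OS reports.
        let cores = ProcessInfo.processInfo.activeProcessorCount

        // A made-up, embarrassingly parallel workload: sum sqrt(x) over a big array.
        let input = (1...2_000_000).map { Double($0) }
        let chunkSize = (input.count + cores - 1) / cores

        var total = 0.0
        let lock = NSLock()

        // One iteration per core; each iteration handles its own slice of the array.
        DispatchQueue.concurrentPerform(iterations: cores) { chunk in
            let start = chunk * chunkSize
            let end = min(start + chunkSize, input.count)
            guard start < end else { return }
            var local = 0.0
            for i in start..<end { local += input[i].squareRoot() }  // stand-in for real work
            lock.lock(); total += local; lock.unlock()
        }

        print("Result \(total), computed on up to \(cores) cores")

    The point is that none of this is automatic: the loop has to be chunked and the results combined by hand, which is exactly the work most software of that era never did.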
  • Reply 4 of 12
    hmurchison Posts: 12,423 member
    The chase for megahertz just had Intel hitting a brick wall.



    I find that the most efficient use of a computer for my needs is the ability to work on multiple tasks at a time. Fast processors offer the hope that I can complete a task quickly and then move on to the next one, but often I find myself bouncing around multiple apps that all need to be running.



    So I'm more about scaling out wider and increasing RAM so that I can run more processes rather than running fewer but at a faster rate.
  • Reply 5 of 12
    wizard69 Posts: 13,377 member
    Quote:
    Originally Posted by hypoluxa


    Is it just me, or has anyone else noticed that the two major chip manufacturers seem stuck at 3-3.x GHz clock speeds and are just adding more cores to compensate?



    Well, they are being marketed that way. Recent cores actually overclock very easily, though. From the consumer's standpoint it has become obvious that SMP is a very real advantage, so if you were involved in marketing, what would you focus on?



    This doesn't even address the fact that today's processor cores are doing a lot more with each gigahertz.

    Quote:

    I know more cores is not a bad thing, it's just that not many OSes and apps out there can use them efficiently yet.



    I get a bit frustrated when I see this, because it really isn't the case generally, especially with respect to operating systems. Even the apps that can be easily parallelized largely have been. And that doesn't even address the fact that there are a number of processes running on a modern computer at any given time (a quick sketch at the end of this post illustrates this).

    Quote:

    But at the same time, we seem to be in the middle of a paradigm shift towards that, and Apple seemingly wants to be at the forefront... I hope SL delivers in spades.



    I'm truly afraid that in some cases SL is being oversold as far as what it will do. Will it perform better? Certainly, in many cases, but that might not have much to do with how many cores are installed. Beyond the x86 cores, simply doing more on the GPU ought to help a lot. What SL will do for individual apps will be highly variable: some may benefit, some will need to be rewritten, and others will perform the same.



    The big advantage is that SL ought to deliver better performance on machines that are running a large number of programs.

    Quote:



    Just my random thought.



    Yes, very random. We are all very excited about SL, but it isn't a superhero OS. It's a given that the OS itself will perform better, but how existing apps will leverage it is an open question.







    Dave
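
    A small sketch of the "many independent processes already keep the cores busy" point above: several unrelated, single-threaded jobs run concurrently and the scheduler spreads them across whatever cores exist. This assumes modern Swift with Dispatch, used only as an illustration; the job names and workloads are invented.

        import Foundation

        // Several unrelated jobs, each single-threaded -- think of separate apps
        // and background daemons. The scheduler spreads them across the cores.
        let jobs = ["mail-fetch", "thumbnail-render", "backup-copy", "index-update"]
        let group = DispatchGroup()

        for job in jobs {
            DispatchQueue.global(qos: .utility).async(group: group) {
                var checksum = 0.0
                for i in 1...3_000_000 { checksum += Double(i).squareRoot() }  // stand-in work
                print("\(job) finished (\(checksum))")
            }
        }

        group.wait()
        print("All \(jobs.count) independent jobs done; none of them was multithreaded.")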
  • Reply 6 of 12
    rangaraj1987
    The world's fastest silicon-based microchip has been demonstrated by scientists in the US.



    The prototype operates at speeds up to 500 gigahertz (GHz), more than 100 times faster than desktop PC chips.



    To break the world record, the researchers from IBM and the Georgia Institute of Technology had to super-cool the chip with liquid helium.



    The team believes the device could eventually speed up wireless networks and lead to cheaper mobile phones.



    "Faster and faster chips open up new applications and reduce costs for existing products," said Professor David Ahlgren of IBM.

  • Reply 7 of 12
    programmer Posts: 3,458 member
    Quote:
    Originally Posted by FuturePastNow


    They're not so much "stuck" as they made a conscious decision to go that way.



    Server processors are where the money is and server software is very parallel.



    While it's a conscious decision, it was motivated by the increasing power inefficiency of pushing to higher clock rates. Driving the clock rate higher causes the chips to run dramatically hotter for less and less gain. While servers can benefit from parallelism better than most desktop apps, power efficiency is of huge importance to them (data centers are massive power hogs). And mobile consumer applications similarly require excellent power efficiency, which means keeping the clock rate down. Going parallel is the only option left for improving compute capability.
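
    For reference, the textbook first-order relation behind the "dramatically hotter for less and less gain" point is CMOS dynamic power,

        P_{dyn} \approx \alpha \, C \, V^{2} \, f

    and because the supply voltage V has to be raised as the frequency f is pushed up, power grows roughly with f^3 while throughput grows only linearly with f.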
  • Reply 8 of 12
    futurepastnow Posts: 1,772 member
    Quote:
    Originally Posted by Programmer


    While it's a conscious decision, it was motivated by the increasing power inefficiency of pushing to higher clock rates. Driving the clock rate higher causes the chips to run dramatically hotter for less and less gain. While servers can benefit from parallelism better than most desktop apps, power efficiency is of huge importance to them (data centers are massive power hogs). And mobile consumer applications similarly require excellent power efficiency, which means keeping the clock rate down. Going parallel is the only option left for improving compute capability.



    Right now we have quad-core processors that are almost as fast, in gigahertz, as the fastest single-core Pentium 4 Intel made (note that I mean speed in raw GHz; obviously the P4 was an inefficient, low-IPC architecture, but it was designed to push that GHz number up, up, up).



    So now we have a 3.33 GHz Core i7 (quad), whereas ~4 years ago we had a 3.8 GHz P4 (single). They both draw roughly the same amount of power and produce about the same amount of heat. Ignoring other architectural differences, Intel can now fit four 45nm cores in the same space as one 90nm core. If they wanted to build a single-core 45nm processor that fit in that same 130W TDP space and used all the power available to it efficiently, they could. You're implying that they could not, and I think you're wrong.



    But parallelism is where the money is now, it's where the money will be in the future, and that is why processors are the way that they are.
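
    The "four 45nm cores in the space of one 90nm core" figure follows roughly from area scaling with the square of the feature size; this is an idealized shrink, since real layouts never scale perfectly:

        \frac{A_{90nm}}{A_{45nm}} \approx \left(\frac{90}{45}\right)^{2} = 4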
  • Reply 9 of 12
    addabox Posts: 12,665 member
    Quote:
    Originally Posted by rangaraj1987


    The world's fastest silicon-based microchip has been demonstrated by scientists in the US.



    The prototype operates at speeds up to 500 gigahertz (GHz), more than 100 times faster than desktop PC chips.



    To break the world record, the researchers from IBM and the Georgia Institute of Technology had to super-cool the chip with liquid helium.



    The team believes the device could eventually speed up wireless networks and lead to cheaper mobile phones.



    "Faster and faster chips open up new applications and reduce costs for existing products," said Professor David Ahlgren of IBM.




    Apple must use the 500GHz chip in the next iPhone or be doomed to irrelevance!
  • Reply 10 of 12
    programmer Posts: 3,458 member
    Quote:
    Originally Posted by FuturePastNow


    If they wanted to build a single-core 45nm processor that fit in that same 130W TDP space and used all the power available to it efficiently, they could. You're implying that they could not, and I think you're wrong.



    To compete with what those four cores can do, a single-core processor would need to clock at roughly four times the quad core's speed. Power consumption rises much faster than linearly with clock rate (roughly with the cube of frequency once the voltage has to rise too), so in any sort of performance-per-watt comparison the single-core processor loses badly.

    Some degree of performance advantage can be had by increasing a processor's size (larger caches, more internal table space, more registers, more functional units), except that these all suffer badly from diminishing returns, and the existing cores Intel has built seem to be in quite a sweet spot. As a result, the single-core processor wouldn't benefit very much from four times as many transistors (probably less than 50%). Additionally, any of these changes would require connections spanning the whole core instead of highly localized ones, resulting in higher current leakage in the larger core -- small cores are inherently more power efficient.

    So while Intel could build much larger single cores, the benefits of doing so are seriously limited, and a much larger bang for the buck comes from a multi-core solution. And the design flexibility of being able to use essentially the same core as a building block for chip design (1 - 8 cores per chip, plenty of SoC-style options) saves enormously on cost.
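
    A back-of-the-envelope version of that argument, reusing the rough f^3 power scaling noted earlier in the thread; the numbers are purely illustrative and assume the work parallelizes perfectly across four cores:

        import Foundation

        // Relative dynamic power of one core at a given clock multiple,
        // assuming power ~ f^3 once voltage has to rise with frequency.
        func relativePower(clockMultiple f: Double) -> Double {
            pow(f, 3)
        }

        // Four cores at the baseline clock vs. one core clocked 4x to match throughput.
        let quadPower = 4.0 * relativePower(clockMultiple: 1.0)   //  4x baseline power
        let singlePower = 1.0 * relativePower(clockMultiple: 4.0) // 64x baseline power

        print("Same throughput, relative power: quad = \(quadPower), single = \(singlePower)")
        print("Perf/W advantage of the quad: \(singlePower / quadPower)x")  // 16x

    Even granting the big core a generous bonus for its extra transistors (well under 2x, as noted above), the quad still wins the performance-per-watt comparison by a wide margin.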
  • Reply 11 of 12
    anna2009 Posts: 18 member
    I have tested the Chrysler 300 chassis-based cars (your Charger is one of them) for rental fleet reviews. I am a professional driver and can tell you the following about this car:



    This car is not designed for speed. It has chunky, poorly responsive steering and no subframe construction for the front suspension.



    120 MPH will exhaust this car's potential for proper road handling (it does not use underbody diffusers for stability, either). This car performs awfully badly on the skid pad, and it is awful at high speeds.



    Besides, you don't even know how to drive in the first place, so going fast in this car will not just endanger you, it will endanger others. You should keep the clueless driving off the city streets and on the track where this car does not belong at all. It is just a commuter car.
  • Reply 12 of 12
    Anna2009



    Are you feeling OK... or just posting to the wrong forum?