Are multi-cores evidence that Moore's Law is already breaking down?

Posted in Future Apple Hardware · edited January 2014
If this were a car, would it be like installing two separate motors to increase performance, because the law of diminishing returns makes it cost-prohibitive to get a significant performance increase out of a single motor?



If Moore's Law is indeed ending and Heisenberg's Law is beginning, how do you think it will influence Apple products five years from now? Will Mac performance plateau while iOS devices continue to rapidly increase performance with the use of multi-cores, until Macs and iOS devices have nearly the same performance in 5 years or so?



How many cores can we add before they become redundant?

Comments

  • Reply 1 of 45
    mcarling Posts: 1,106 member
    The general expectation is that Moore's Law will break down as we approach the atomic scale. However, Intel's trigate 3D process, to be introduced with Ivy Bridge, may provide a way around that limitation -- for a while.
  • Reply 2 of 45
    dobby Posts: 797 member
    A generation or two of continued exponential growth is likely, but only for leading-edge chips such as multi-core microprocessors. More designers are finding that everyday applications do not require the latest physical designs.

    It could well be that Moore's Law (halving of chip dimensions and doubling of speed every 18 months) will run out of steam very soon based on current production methods and materials. Only a few high-end chip makers today can even afford the exorbitant cost of next-generation research and design, much less the fabs to build them.

    There are three next-generation technologies that are still on the fast track for exponential growth: optical interconnects, 3-D chips and accelerator-based (GPU) processing. Perhaps optical interconnects will become commonplace, with chip-to-chip optical connections on the same board coming soon.

    When you buy a CPU it has a maximum rated speed, e.g. 3 GHz. The chip will perform without error when running at or below that speed, within the chip's normal operating temperature parameters.

    There are two things that limit a chip's speed: transmission delays on the chip and heat build-up on the chip.

    Transmission delays occur in the wires, or "etchings", on the silicon. A chip, in its simplest form, is a collection of transistors and wires; the transistor is the on-off switch. To change state, the switch has to charge up or drain the wire that connects it to the next transistor. The size of the wires and transistors has shrunk over the years, but there is a limit: charging and draining the wires takes time, and there is a minimum amount of time for a transistor to flip states. As transistors are chained together the delays add up, and even though they are smaller, the sheer number of transistors keeps growing (currently approximately 2.5 billion in a six-core i5 and only 1 billion in a dual-core Itanium 2). So the faster chips with more transistors also suffer from longer chains, which limits speed. Now that manufacturing geometries are getting smaller the problem is exacerbated: at around the 10 nm level electrons can leak, so the on-off switching is no longer stable.

    Heat is the second factor. Every time a transistor changes state it leaks a bit of electricity, and this creates heat. As transistor sizes shrink the amount of wasted current drops, but the faster the clock speed (e.g. 3.4 GHz) the more heat is generated, and the denser the transistors, the more overall heat is produced. This puts another limit on speed.

    So using current materials with current manufacturing processes we should not expect to see 10 GHz CPUs.
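
    A rough back-of-the-envelope sketch of the transmission-delay argument; the per-gate delay and the number of gates chained between latches are illustrative assumptions, not figures from the talk:

    ```python
    # Illustrative sketch: the clock period has to cover the slowest chain of
    # gate delays between two latches, so chaining more (or slower) gates caps
    # the achievable clock rate. Both constants below are assumptions.
    GATE_DELAY_PS = 15.0   # assumed time for one transistor/gate to flip state, in picoseconds
    GATES_PER_STAGE = 20   # assumed number of gates chained between latches

    def max_clock_ghz(gate_delay_ps: float, gates_per_stage: int) -> float:
        """Maximum clock rate allowed by the longest gate chain in a pipeline stage."""
        period_ps = gate_delay_ps * gates_per_stage
        return 1000.0 / period_ps  # 1000 ps per ns, so 1 / period_in_ns = GHz

    print(max_clock_ghz(GATE_DELAY_PS, GATES_PER_STAGE))  # ~3.3 GHz
    print(max_clock_ghz(GATE_DELAY_PS, 60))               # a longer chain: ~1.1 GHz
    ```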



    (yes I did meet Carl Anderson and plagiarised my notes on his speech for this)
  • Reply 3 of 45
    Marvin Posts: 15,310 moderator
    Quote:
    Originally Posted by dobby View Post


    So using current materials with current manufacturing processes we should not expect to see 10 GHz CPUs.



    Hopefully soon they will be able to get this new manufacturing process going and it will be the last step that needs to be taken:



    http://www.bit-tech.net/news/hardwar...-hits-300ghz/1



    I think current manufacturing will, however, allow enough room to satisfy the vast majority of computing needs in something as small as a mobile phone.
  • Reply 4 of 45
    shrike Posts: 494 member
    Multi-core CPUs are actually direct supporting evidence of Moore's Law. It was never a prediction about clock rate or performance. It was always about the number of transistors: the number of transistors per unit area in a CMOS device will double approximately every 18 to 24 months.



    By increasing the number of cores, chip makers maintain this prediction of transistor counts doubling every 18-24 months.
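
    A minimal sketch of that doubling cadence; the starting transistor count is an arbitrary assumption used only to show the shape of the curve:

    ```python
    # Project transistor counts forward assuming a fixed doubling cadence
    # (Moore's Law stated as transistor count, not clock rate or performance).
    def transistor_count(start_count: float, years: float, doubling_months: float) -> float:
        """Transistor count after `years`, doubling every `doubling_months`."""
        doublings = (years * 12.0) / doubling_months
        return start_count * 2.0 ** doublings

    start = 1e9  # assume roughly 1 billion transistors in a current high-end chip
    for months in (18, 24):
        counts = [transistor_count(start, y, months) / 1e9 for y in (1.5, 3.0, 5.0)]
        print(months, [round(c, 1) for c in counts])
    # 18-month cadence: ~2.0, 4.0, 10.1 billion; 24-month cadence: ~1.7, 2.8, 5.7 billion.
    ```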



    Moore's Law is not some natural, physics-based "law". It's an economic one. He had other "laws", such as that the cost of developing a next-gen CMOS process approximately doubles every generation. Maybe it's just a convenience in the industry to create processes that double transistor counts per unit area every generation, and that was his insight. Once it is set in motion, it's difficult to break, as the entire industry relies on each other's developments.



    The clock rate and end user performance are only correlative.



    Quote:
    Originally Posted by Commodification View Post


    If Moore's Law is indeed ending and Heisenberg's Law is beginning, how do you think it will influence Apple products five years from now? Will Mac performance plateau while iOS devices continue to rapidly increase performance with the use of multi-cores, until Macs and iOS devices have nearly the same performance in 5 years or so?



    How many cores can we add before they become redundant?



    In five years we will be using 10 nm devices. Today, 32 nm; next year, 22 nm; three years from now, 15 nm; five years from now, 10 nm. We will see if the semiconductor folks can overcome quantum effects at 10 nm; 15 nm looks doable.
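
    Those node names roughly track the doubling cadence: each full node is about a 0.7x linear shrink, which halves the area per transistor. A tiny sketch (the 0.7 factor is the usual rule of thumb, not an exact figure):

    ```python
    # Each full process node is roughly a 0.7x linear shrink, so transistor area
    # drops by ~0.7^2 = ~0.5 per node, i.e. density about doubles per generation.
    SHRINK = 0.7   # rule-of-thumb linear scale factor per node

    node = 32.0    # nm, the "today" node in this discussion
    for _ in range(3):
        node *= SHRINK
        print(round(node, 1))  # 22.4, 15.7, 11.0 -- close to the 22/15/10 nm roadmap above
    ```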



    In many ways, we aren't compute bound, GPU bound, or even memory bound anymore, except for specialized computational problems. Today, we are storage bound and network bound. It is the hard drive and the network that are holding back our experience. (And as always, software is the ultimate gate.)



    For Macs, 90% of the folks out there don't need the power represented by the Sandy Bridge CPUs in iMacs and MBPs today. With quad-core, Thunderbolt, and an external RAID, even most video pros aren't held back by that.



    So we've already reached a plateau where computers satisfactorily fulfill the needs of 90% of the people out there, and I think we reached it with quad-core, 2-way multithreaded processors. Unless you are doing computationally intensive stuff, an iMac or MBP purchased today will probably satisfy you for the next 5 years, especially with SSDs. What will likely improve, especially with Apple, are thinner and cooler form factors.



    What won't satisfy you is storage performance, storage limits and network performance.



    An interesting aside: back in the early 2000s, when there was a clock rate race between Intel and AMD in the heyday of the NetBurst architecture and SLI GPU cards were just beginning, I thought that 1000 Watt PSUs could become the norm. 1000 W! Running on your desktop for the majority of the day. Holy cow! With 22 nm and 15 nm, a prospective 4-core desktop with a nice GPU and an SSD could be 10 Watts and <10 dB (no fans).
  • Reply 5 of 45
    If Moore's Law were still advancing at the same rate it did from 1990-2000, shouldn't we be near 16 GHz, or at least 8 cores (2 GHz each)?



    I often think the future of the net is some form of grid computing, where the internet itself becomes one giant wireless computer as all connected devices share a portion of their processing power. As America has gone from an 'ownership economy' to a 'credit economy', and is now becoming an 'access economy', I think this will significantly impact the type of computers we use and how we use them in the future.
  • Reply 6 of 45
    hiro Posts: 2,663 member
    Memristors and phase-change memory are the leading edge of the next technological S-curve, which will sit on top of the current semiconductor S-curve and push computing faster. Cool stuff, not quite ready for mass manufacturing, but the theory has turned into actual working, repeatably buildable hardware. In 5-7 years we should see some of this stuff appearing in expensive gear; once that happens, the 20 years after that will inevitably be built on the new stuff.



    Every technology has its heyday; then it runs out of overhead for the bleeding edge. Basic silicon CMOS and related chip tech has a few years to go, and HP and IBM are well positioned to reap some major upsets over Intel/AMD given their current progress in the next best thing.
  • Reply 7 of 45
    hiro Posts: 2,663 member
    Quote:
    Originally Posted by Commodification View Post


    If Moore's Law were still advancing at the same rate it did from 1990-2000, shouldn't we be near 16 GHz, or at least 8 cores (2 GHz each)?



    I often think the future of the net is some form of grid computing, where the internet itself becomes one giant wireless computer as all connected devices share a portion of their processing power. As America has gone from an 'ownership economy' to a 'credit economy', and is now becoming an 'access economy', I think this will significantly impact the type of computers we use and how we use them in the future.



    No. Moore's Law had absolutely nothing to do with CPU frequency. The original version was simply that the number of transistors per unit area doubled every 12 months; it then settled into a doubling roughly every 18 months.



    This made chips more powerful, but not in a linear manner. It also made them cheaper because you could make more of them at one time for roughly the same amount of raw material and energy input, but this also isn't a strict relationship.



    If you do a computational power comparison (throughput, or total work done per unit time per chip), we are actually ahead of an equivalent (incorrect) extension of Moore's Law. You need to pick the right problem to test against, one that isn't memory-access limited, but that's how to assess how powerful CPUs are getting now.
  • Reply 8 of 45
    emacs72 Posts: 356 member
    Quote:
    Originally Posted by Commodification View Post


    If Moore's Law were still advancing at the same rate it did from 1990-2000, shouldn't we be near 16 GHz, or at least 8 cores (2 GHz each)?



    As stated by someone else, Moore's Law isn't related to clock frequency. And what you mention already exists in the form of the IBM POWER7, Intel Xeon E7 class, and AMD Opteron 6100 series.
  • Reply 9 of 45
    wizard69 Posts: 13,377 member
    Quote:
    Originally Posted by Commodification View Post


    If this were a car, would it be like installing two separate motors to increase performance, because the law of diminishing returns makes it cost-prohibitive to get a significant performance increase out of a single motor?



    If you can tie two motors together and harness the power, you can leverage all the engineering and tuning that has been developed for an eight-cylinder engine. In some ways this represents exactly what has happened with multi-core.



    With multi-core you simply duplicate a "processor" on the die and tie everything together. This gives you some significant advantages. For one, you aren't designing 128-bit cores for general-purpose applications that don't need that sort of core. Second, you don't have to scale clock rate to gain performance, which saves significantly on power. Third, you can effectively run the different processes that modern computers run on real hardware, as opposed to context switching all the time.

    Quote:



    If Moore's Law is indeed ending and Heisenberg's Law is beginning, how do you think it will influence Apple products five years from now?



    Well, it isn't a problem yet and likely won't be in five years either. Ten years might be a different story altogether.

    Quote:

    Will Mac performance plateau while iOS devices continue to rapidly increase performance with the use of multi-cores, until Macs and iOS devices have nearly the same performance in 5 years or so?



    I'm not sure where this question comes from. To put it simply, the current ARM processors are nowhere near i86 in performance. Nor is it likely that they will be anytime soon, as that sort of performance would mean that ARM devices were using the same amount of power as the i86 devices.

    Quote:



    How many cores can we add before they become redundant?



    With current technology I believe that number is less than 50. There has actually been some research done on this. The problem, if memory serves, is that communication begins to throttle the processors in an unacceptable manner. On the other hand, specialized processors can perform very well beyond the 50-core mark. Here is an example of a company having success with thousands of processors: http://www.adapteva.com/



    The problem with your question is that the answer then becomes "it depends". Think about it: even at this early stage some code will effectively saturate every core in an Intel processor. That is something, considering we are still early in the process of taking advantage of all of these cores. Simply put, many people aren't even taking advantage of their dual cores, while others can't find machines that are fast enough.
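
    A minimal sketch (Amdahl's law, not the research wizard69 mentions) of why piling on cores eventually stops helping any one program; the parallel fractions below are illustrative assumptions:

    ```python
    # Amdahl's law: if only a fraction p of a program can be spread across cores,
    # the serial remainder (1 - p) caps the total speedup no matter how many cores you add.
    def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
        """Upper bound on speedup with `cores` cores for a given parallel fraction."""
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / cores)

    for p in (0.50, 0.90, 0.99):
        print(p, [round(amdahl_speedup(p, n), 1) for n in (2, 4, 16, 64, 1024)])
    # With 90% parallel code, 64 cores manage only ~8.8x and 1024 cores barely reach ~10x,
    # which is one way to see why extra cores become "redundant" for ordinary software.
    ```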
  • Reply 10 of 45
    I really do appreciate all the answers you have given, since they are very useful in helping me understand the complexity of the situation. Does anyone think that our technological progress is limited more by software code or by the needs and desires of the average user? Might PCs and Macs never see their full potential because most consumers will migrate towards easier and more fun-to-use iOS/Android mobile devices, and software developers will go where the money and exponential growth are? It seems more likely that the money needed to fund and build fab plants for faster PCs/Macs will stop flowing much sooner than when the true technological limits are reached.
  • Reply 11 of 45
    dobby Posts: 797 member
    Quote:
    Originally Posted by Commodification View Post


    If Moore's Law were still advancing at the same rate it did from 1990-2000, shouldn't we be near 16 GHz, or at least 8 cores (2 GHz each)?




    SPARC T3s have 16 cores. More to come with the T4s in October this year.



    We can expect the P8 (2013/2014) to have at least 16 cores.

    I also see Intel having a 16-core chip out in the next 2 years.
  • Reply 12 of 45
    dobby Posts: 797 member
    Quote:
    Originally Posted by Marvin View Post


    Hopefully soon they will be able to get this new manufacturing process going and it will be the last step that needs to be taken:



    http://www.bit-tech.net/news/hardwar...-hits-300ghz/1




    I know there is research going on into developing new substrates just like this one, but another complication has emerged as part of nanotechnology: at nanometre sizes the materials behave differently, i.e. heat and physical properties react differently than they do at sizes we can see.

    This is part of the reason why nanotechnology is hard to develop: you can't physically see what you are developing, and stuff doesn't necessarily work according to familiar physics, so the behaviour has to be discovered before the technology in this field can progress.

    One good thing about nanotech is that they also have organic polymers made from maize, so these should aid treatment of medical conditions.



    Dobby.
  • Reply 13 of 45
    Quote:
    Originally Posted by dobby View Post


    SPARC T3s have 16 cores. More to come with the T4s in October this year.



    I was referring more to Macs in particular, since we had 1 GHz machines 10 years ago (2001) and 33 MHz ones 20 years ago (1991). Today a single core in a multi-core Mac maxes out at 3.33 GHz, so 'on the surface it appears' that progress on single-core performance has really slowed.
  • Reply 14 of 45
    dobby Posts: 797 member
    Oh, sorry, I've been waffling about the CPU industry in general. Forgot where I was.



    I think if the Power Macs ran P7 CPUs they would be a cut above anything else. Of course the software/OS problems would be a pain in the arse, but from a pure power perspective, P7 CPUs using only SSDs with InfiniBand would be great (and only as expensive as a normal IBM server, i.e. arm + leg).
  • Reply 15 of 45
    tallest skil Posts: 43,388 member
    Quote:
    Originally Posted by Commodification View Post


    I was referring more to Macs in particular, since we had 1 GHz machines 10 years ago (2001) and 33 MHz ones 20 years ago (1991). Today a single core in a multi-core Mac maxes out at 3.33 GHz, so 'on the surface it appears' that progress on single-core performance has really slowed.



    That's why it's called the Megahertz MYTH.
  • Reply 16 of 45
    Quote:
    Originally Posted by Tallest Skil View Post


    That's why it's called the Megahertz MYTH.



    True, but some myths (and/or sales terminology) are more influential on an industry than the actual facts of performance. I think the speed or the number of cores is important for marketing reasons because it's one of the very few things that the average consumer can actually understand about a computer processor.
  • Reply 17 of 45
    zeph Posts: 133 member
    Quote:
    Originally Posted by Marvin View Post


    Hopefully soon they will be able to get this new manufacturing process going and it will be the last step that needs to be taken:



    http://www.bit-tech.net/news/hardwar...-hits-300ghz/1



    I think current manufacturing will, however, allow enough room to satisfy the vast majority of computing needs in something as small as a mobile phone.



    Thanks, that is interesting. I looked at graphene's wiki and it says:

    Quote:

    According to a January 2010 report,[144] graphene was epitaxially grown on SiC in a quantity and with quality suitable for mass production of integrated circuits. At high temperatures, the Quantum Hall effect could be measured in these samples. See also the 2010 work by IBM in the transistor section above in which they made 'processors' of fast transistors on 2-inch (51 mm) graphene sheets.[136][137]



    In June 2011, IBM researchers announced[145] that they had succeeded in creating the first graphene-based integrated circuit, a broadband radio mixer. The circuit handled frequencies up to 10 GHz, and its performance was unaffected by temperatures up to 127 degrees Celsius.



    https://secure.wikimedia.org/wikiped...rated_circuits
  • Reply 18 of 45
    shrike Posts: 494 member
    Quote:
    Originally Posted by Commodification View Post


    I was referring more to Macs in particular, since we had 1 GHz machines 10 years ago (2001) and 33 MHz ones 20 years ago (1991). Today a single core in a multi-core Mac maxes out at 3.33 GHz, so 'on the surface it appears' that progress on single-core performance has really slowed.



    This is actually a pretty good observation. Yes, single-core, single-threaded performance has slowed down.



    One wrinkle you forget about is that Intel's Core i7 CPUs can "turbo". Depending on the workload and the temperature, a multi-core i7 can shut down all but one core and ramp up the clock 20% or so. Not a 2x, but it illustrates why the MHz race has ended.



    CPU power (watts) = number of transistors × capacitance per transistor × frequency × voltage²



    These parameters are correlated as well. To get a larger number of higher-frequency parts from your fab process, you typically have to increase the supply voltage. But look at the equation: f × V². Increasing frequency by raising voltage increases CPU power (essentially heat for this discussion) by more than the square of the voltage increase; it could even be close to a cubic increase, depending on how much the frequency goes up.



    This is in and of itself not a huge deal if we happen to have really cheap power. A 500 W CPU running at 6 GHz is possible. But you have to suffer the consequences: a super-noisy, huge cooling system and the energy costs of powering the thing for any length of time.
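
    A quick sketch of that power scaling; the baseline wattage and the voltage bump needed to reach 6 GHz are illustrative assumptions, not measurements:

    ```python
    # Dynamic power scales roughly with frequency times voltage squared
    # (transistor count and capacitance held fixed), per the equation above.
    def scaled_power(base_watts: float, freq_ratio: float, volt_ratio: float) -> float:
        """Estimate new power after scaling frequency and supply voltage."""
        return base_watts * freq_ratio * volt_ratio ** 2

    base = 130.0                          # assumed power of a ~3 GHz part
    print(scaled_power(base, 2.0, 1.0))   # 6 GHz at the same voltage: 260 W
    print(scaled_power(base, 2.0, 1.25))  # 6 GHz with a 25% voltage bump: ~406 W
    # which is how you end up in the neighbourhood of a 500 W, 6 GHz CPU.
    ```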



    Half a decade ago, Intel realized this was not tenable for personal computers, abandoned its NetBurst architecture, and moved on to Core. Once that decision was made, single-core performance growth slowed and the pressure is now on software to take advantage of the cores.
  • Reply 19 of 45
    I think the fact that Apple might be considering merging mobile iOS devices and Mac OS X iMacs and laptops into a single platform is related to the limited ability of PC processors to offer any real new advantages for the majority of users.



    http://blogs.barrons.com/techtraderd...ays-jefferies/



    Quote:

    "We believe Apple is ready to start sampling the A6 quad-core app processor and will be the first such multi-device platform capable of PC-like strength."



  • Reply 20 of 45
    dobby Posts: 797 member
    I think it is the other way around.

    CPU technology has started to plateau in the 2-3.5 GHz area, so more effort is being put into CPU design, which allows CPUs like the A6 to deliver the performance required to give a similar feel to a PC's response yet use far less power and produce less heat.



    An iPad/MacBook Air will be crap at crunching a DVD, encryption, and other tasks where real throughput is required.



    Horses for courses.