Is the new "cores" phenomenon just in its infancy?

Posted in Future Apple Hardware, edited January 2014
Recently, I have been trying to figure out whether I want to get a quad-core 2.66 GHz machine, but I am having a hard time deciding whether this new wave in computing is just in its infancy. For instance, does anyone know, or can anyone speculate with confidence, whether in a year we will have a 32-core machine with two 16-core chips? Or whether by 2010 there will be 256-core machines? One thing I have noticed is that the MHz/GHz rating of chips seems to have leveled off, after the run from 1985 to 2005 gave us 20 years of steady doubling. So it seems to me that, because of bus speeds and heat, systems will just increase their core counts and multiprocessing while clock speed levels off, although I did read something about IBM having a chip running at 500 GHz (or was it 50 GHz?). In conclusion, my belief is that processor speeds will increase only slightly from here, while core counts keep climbing until a new term for mega-multi-core chips arises. I could see 16-core and 32-core systems easily by 2010, with only modest GHz speeds. Anyone else see how the trend will evolve? Because I could wait a little longer to get a 32-core machine with 2 x 16-core chips for $3,000, or a mid-level 16-core for $2,300. Just what I'm thinking about lately... one thing's for sure, it sounds sick...
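
As a rough sketch of the arithmetic (assuming, purely for illustration, that core counts double every two years the way clock speeds used to; this is not anyone's actual roadmap):

# Back-of-the-envelope projection. The doubling-every-two-years rate is
# an assumption for illustration, not a published roadmap.
def projected_cores(start_year, start_cores, target_year, doubling_period=2):
    doublings = (target_year - start_year) // doubling_period
    return start_cores * 2 ** doublings

for year in (2008, 2010, 2012, 2014):
    print(year, projected_cores(2007, 4, year))
# 2008: 4, 2010: 8, 2012: 16, 2014: 32

Under that assumption a machine that starts at 4 cores in 2007 only hits 32 cores around 2013-2014, so getting there by 2010 would take faster-than-doubling growth.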

Comments

  • Reply 1 of 13
    rezwits Posts: 895, member
    Here is an article I found from almost a year ago: http://www.tomshardware.com/2006/07/...eifer_32_core/



    It puts this at 16x the speed of a Woodcrest.



    Here is a quote from Intel's site:



    We've heard there is a 256-core machine in Intel's roadmap



    How long do you think it will take Intel to get to that point?

    Delivering such a processor would take a while. We're going to have to see a couple generations of process improvement in terms of transistor density. I would expect we'd see a platform like that by the 2015 time frame, assuming we see enough applications that could take advantage of a 256-core machine. We are focused on delivering more threads per processor by adding more cores and more threads per core. That gives you a multiplicative improvement in thread count.
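
    A quick sketch of the "multiplicative improvement" he describes: total hardware threads are sockets times cores per socket times threads per core (all the configurations below are illustrative, not actual Intel parts):

    # Total hardware threads = sockets x cores per socket x threads per core.
    # The configurations below are made up for illustration.
    def total_threads(sockets, cores_per_socket, threads_per_core):
        return sockets * cores_per_socket * threads_per_core

    print(total_threads(2, 4, 2))    # 16 threads: dual-socket quad-core, 2-way SMT
    print(total_threads(4, 64, 4))   # 1024 threads: the 256-core class of machine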
  • Reply 2 of 13
    aplnub Posts: 2,605, member
    I have summarized a BusinessWeek article below; I pulled up my summary when I caught this post. Obviously this is old news, since it is based on an article over two years old, but it is an interesting read regardless, especially when you notice they thought 45 nm would arrive in 2010. Intel seems to be ahead of the curve from the 2005 point of view. Exciting! Processor construction will change yet again to keep speed increases coming, by way of 3-dimensional stacking.



    I was reading my wife's BusinessWeek magazine, the June 20, 2005 issue, and ran across an article titled "More Life for Moore's Law". It was a good read and included some surprising quotes from IBM. The article is on page 108, in case that's useful to know.



    Article Summary



    BusinessWeek reports that future solutions for keeping processor speeds increasing in sync with Moore's law are starting to emerge. Current processes rely on shrinking the transistors on chips, reducing the time needed for electrons to reach their destinations. “This year and next they’ll go down to 65 nm, followed by 45 nm by 2010, 32 nm by 2013, and 22 nm by 2016”, increasing the speed of processors the old-fashioned way.



    The next step in increasing speeds without shrinking circuit lines is the use of multicore processors, where more than one processor core is placed on the same piece of silicon. There is a big push from Intel to encourage software to take advantage of multicore processors: “Intel has committed 3,000 of its 10,000 software programmers to help accelerate the shift to multicore designs.” Philip Emma, manager of systems technology and microarchitecture at IBM, predicts that personal computers will likely peak at 8-core processors.
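
    To make that software push concrete, here is a minimal sketch of the kind of restructuring involved: splitting one batch of work across however many cores the machine has (Python's multiprocessing module is used purely as an illustration, and the work function is made up):

    from multiprocessing import Pool, cpu_count

    def work(n):
        # Stand-in for a real per-item task (encode a frame, filter an image, ...)
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        inputs = [1_000_000] * 8
        with Pool(processes=cpu_count()) as pool:  # one worker per core
            results = pool.map(work, inputs)       # items run in parallel
        print(results)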



    The next possible solution is to design “ways to stack circuitry, layer upon layer into multi-story, 3D structures.” This would reduce the pathway distance for electrons from 20,000 microns to 10 microns, allowing current 90 nm processors to perform similarly to the 32 nm processors scheduled for 2011. There are challenges to overcome when stacking transistors on top of one another, and the technology could take until 2011 to make an appearance.



    "We're going to see a lot of evolution happening very fast,” said Philip Emma.
  • Reply 3 of 13
    rezwits Posts: 895, member
    Sweet, thanks for the reply. I have to say, it's interesting to see how fast things are getting. I was not really interested in getting a new system, but lately I have been burning DVDs at 16x, and I used to burn at 1x. I know that has nothing to do with processors, but when you look at the specs of current and future computers and realize they are, or will be, up to 16x the speed of my current machine, that 16x feeling, which is so nice when burning, would probably be worth a $2,000+ purchase of a new quad-core.
  • Reply 4 of 13
    hmurchison Posts: 12,437, member
    Intel will be delivering 16-core computers with Nehalem, the successor to Penryn, due out in 2008. Each die will have up to 8 cores and support on-die memory controllers via CSI. I assume there will be 4-socket server/workstation motherboards that allow for 32 cores in late '08 or sometime in '09.



    Clock-speed ramping proved rather unsuccessful, and I'm glad Intel came to its senses. Now they are heavily focused on many-core computing and will have improved tools for threading (Nehalem supports SMT, so a 16-core computer looks like a 32-core computer).
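
    You can see the SMT effect from software, since the OS reports logical processors, i.e. physical cores times threads per core. A small sketch (the 2-way SMT figure is an assumption, matching what's described for Nehalem):

    import os

    logical = os.cpu_count() or 1     # logical CPUs the OS schedules against
    threads_per_core = 2              # assumed 2-way SMT
    physical = logical // threads_per_core
    print(f"{physical} physical cores appear as {logical} logical CPUs")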



    I'd rather have more cores at a lower speed than fewer cores working faster.
  • Reply 5 of 13
    vinea Posts: 5,585, member
    Quote:
    Originally Posted by hmurchison




    I'd rather have more cores at a lower speed than fewer cores working faster.



    Depends on what you're doing. Not all computational tasks are parallelizable, and parallel code is more difficult to write.
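
    The standard way to quantify that is Amdahl's law: if only a fraction p of a task can run in parallel, N cores give a speedup of 1 / ((1 - p) + p/N). A quick sketch (the 90% parallel figure is just an example):

    # Amdahl's law: overall speedup is capped by the serial fraction.
    def amdahl_speedup(p, n):
        """p = parallelizable fraction of the work, n = number of cores."""
        return 1.0 / ((1.0 - p) + p / n)

    for cores in (2, 4, 8, 32):
        print(cores, round(amdahl_speedup(0.9, cores), 2))
    # 2 -> 1.82, 4 -> 3.08, 8 -> 4.71, 32 -> 7.8: never better than 10x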



    Vinea
  • Reply 6 of 13
    hmurchison Posts: 12,437, member
    Quote:
    Originally Posted by vinea


    Depends on what you're doing. Not all computational tasks are parallelizable, and parallel code is more difficult to write.



    Vinea



    True, I like many-core for the multitasking. I don't want something like a render or encode slowing down my system; I see no reason why I shouldn't have processing power on demand. While some tasks don't lend themselves easily to parallel processing, I find that's becoming more and more of a rare situation.



    The interesting thing is when computing gets so fast that the typical consumer truly has more power than they know what to do with. Software always rises to the challenge, but with 32-core computers with integrated memory controllers and GPUs... I could easily see computing power finally outstripping software demands for most consumers.



    Pros will always need the power.
  • Reply 7 of 13
    aplnub Posts: 2,605, member
    Quote:
    Originally Posted by hmurchison


    True, I like many-core for the multitasking. I don't want something like a render or encode slowing down my system; I see no reason why I shouldn't have processing power on demand. While some tasks don't lend themselves easily to parallel processing, I find that's becoming more and more of a rare situation.



    The interesting thing is when computing gets so fast that the typical consumer truly has more power than they know what to do with. Software always rises to the challenge, but with 32-core computers with integrated memory controllers and GPUs... I could easily see computing power finally outstripping software demands for most consumers.



    Pros will always need the power.



    It would be nice if they would shut off power to cores that aren't in use, once more than 4 cores becomes the standard. A power-saving option that cuts power to all but one core and then lowers the voltage on that one would be great for mobile computing: battery life on the go, with the ability to pump it up while plugged in.
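
    The voltage part matters most, because dynamic CPU power scales roughly with frequency times voltage squared (P ~ C * f * V^2). A rough sketch with made-up numbers:

    # Dynamic power ~ C * f * V^2, so voltage cuts pay off quadratically.
    # The scaling factors below are illustrative, not real chip specs.
    def relative_power(f_scale, v_scale):
        return f_scale * v_scale ** 2

    slow_core = relative_power(0.5, 0.8)   # half the clock at 80% voltage
    print(f"one slowed core draws ~{slow_core:.0%} of full power")
    # ~32%, before even counting the cores that are powered off entirely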



    With 4 or fewer cores, I think the average user will see benefits from the things other posters have noted above, whether the programs are multithreaded or not: rendering, DVD watching, .Mac syncing, Parallels, VMware, etc., each of which can tie up a core's worth of processor power. Beyond four cores, the normal user will see less and less benefit.
  • Reply 8 of 13
    vinea Posts: 5,585, member
    Quote:
    Originally Posted by aplnub


    It would be nice if they would shut off power to cores that aren't in use, once more than 4 cores becomes the standard. A power-saving option that cuts power to all but one core and then lowers the voltage on that one would be great for mobile computing: battery life on the go, with the ability to pump it up while plugged in.



    With 4 or fewer cores, I think the average user will see benefits from the things other posters have noted above, whether the programs are multithreaded or not: rendering, DVD watching, .Mac syncing, Parallels, VMware, etc., each of which can tie up a core's worth of processor power. Beyond four cores, the normal user will see less and less benefit.



    Intel is doing that, and in addition reducing the multi-core penalty on single-core workloads by "overclocking" the active core, because the other core is idled and not adding to the thermal output. Intel prefers not to call it "overclocking" but "Intel Dynamic Acceleration" (IDA), since the increased speed is within spec, but it's really just overclocking.



    It's in Santa Rosa, if I recall correctly, along with Dynamic Power Coordination (which sleeps the idle core).



    Vinea
  • Reply 9 of 13
    rezwits Posts: 895, member
    I believe it's over, for both cores and GHz. Since 2003 we have had 2 GHz processors; there haven't been any jumps in speed for a while. And as far as multiple cores go, I think this may be the end too. Why? The Xserve. Say you wanted 32 cores of processing power. If you purchased 8 quad-core Xserves @ 2.66 GHz, that would cost you $32,000. Right now an alternative is 4 8-core Mac Pros, a better solution at $16,000. But if Apple came out with a 32-core Mac Pro for $4,000, or a 16-core Mac Pro for $3,000 or $4,000, buying Xserves would make no sense at all. So the cores are done and the GHz are done; peak computing power has been reached for a while. Buying a 2.66 GHz quad-core Mac Pro at $2,500 is a safe investment in my opinion. It should last 10 years easily...
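
    Run those numbers as cost per core and the gap is obvious (prices as quoted above; the 16- and 32-core Mac Pros are hypothetical machines, not shipping products):

    # Cost per core for each option. The 16- and 32-core Mac Pros are
    # hypothetical configurations from this post, not real products.
    options = [
        ("8 x quad-core Xserve", 32, 32000),
        ("4 x 8-core Mac Pro",   32, 16000),
        ("1 x 32-core Mac Pro",  32,  4000),  # hypothetical
        ("1 x 16-core Mac Pro",  16,  3000),  # hypothetical
    ]
    for name, cores, price in options:
        print(f"{name}: ${price / cores:,.0f} per core")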
  • Reply 10 of 13
    rezwits Posts: 895, member
    Of course, that's what they (Apple and Intel) want you to think... :P
  • Reply 11 of 13
    vinea Posts: 5,585, member
    Quote:
    Originally Posted by rezwits


    I believe it's over, for both cores and GHz. Since 2003 we have had 2 GHz processors; there haven't been any jumps in speed for a while. And as far as multiple cores go, I think this may be the end too. Why? The Xserve. Say you wanted 32 cores of processing power. If you purchased 8 quad-core Xserves @ 2.66 GHz, that would cost you $32,000. Right now an alternative is 4 8-core Mac Pros, a better solution at $16,000. But if Apple came out with a 32-core Mac Pro for $4,000, or a 16-core Mac Pro for $3,000 or $4,000, buying Xserves would make no sense at all. So the cores are done and the GHz are done; peak computing power has been reached for a while. Buying a 2.66 GHz quad-core Mac Pro at $2,500 is a safe investment in my opinion. It should last 10 years easily...



    "640K is more memory than anyone will ever need." quote attributed to Bill Gates.



    Computers have stagnated since 2003 because:



    Because going from 2 GHz to 3 GHz is not a 50% jump.

    Because 4 cores is not a jump from 1.

    Because a 1U quad-core Xserve is not a better server than an 8-core Mac Pro that doesn't fit in a rack.

    Because Intel has no plans for 8+ core CPUs.

    Because Apple will never put quad-core or greater CPUs in their Xserves.



    Vinea
  • Reply 12 of 13
    benroethig Posts: 2,782, member
    Considering that most programs don't make use of more than one of those cores, it's very much in its infancy.
  • Reply 13 of 13
    easyc Posts: 69, member
    The multicore design is definitely in its infancy. Right now the problem is supporting software; there's really no point in having 8 cores if you're not using them to full effect. On the consumer side, I think a quad core is about as far as a home user will need for a while. Even when software does catch up, will a home computer really need lots of cores to surf the net, play some music, and type up little Johnny's class papers? No.



    Also, I can't remember where I read it, but while reading AMD and Intel comparison articles I ran across someone making a significant point about multicore processors having cores dedicated to specific system processes. Right now so many things communicate with the CPU: the video card, sound card, network devices, etc. The breakthrough in multicore technology is not only cooling the chips down and making them faster, but having a CPU whose cores take care of certain things that are now outsourced from it. I know you can have a core do specific things when running an application, but only if that app supports multicore. That would mean palm-sized, fully functional desktops! I think we will see that in the coming years, or at least steps being made in that direction.
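
    Software can already approximate that idea today by pinning work to particular cores. A small sketch using CPU affinity (os.sched_setaffinity is Linux-only, and the choice of core here is arbitrary, purely for illustration):

    import os

    # Pin the current process to core 1, leaving core 0 free for other work.
    # Linux-only; on other systems this call does not exist.
    os.sched_setaffinity(0, {1})
    print("now restricted to cores:", os.sched_getaffinity(0))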