Is the new "Cores Phenom" just in its infancy?
Recently I've been trying to figure out if I want to get a quad-core 2.66, but I'm having a hard time deciding whether this new wave in computing is just in its infancy. For instance, can anyone say (or speculate with any confidence) that in a year we'll have a 32-core machine with two 16-core chips? Or that by 2010 there will be 256-core machines? One thing I have noticed is that the MHz/GHz rating of chips seems to have leveled off; from 1985 to 2005 we had 20 years of doubling and whatnot. So it seems to me that because of bus speeds, systems will mostly increase their core counts and multiprocessing instead. Clock speeds will level off due to heat, although I did read something about IBM having a chip running at 500 GHz (or was it 50 GHz?). In conclusion, my belief is that processor speeds might increase slightly, but from now on the core counts will increase until some new name for mega-multi-core chips arises. I see 16-core and 32-core systems easily by 2010 with only modest GHz speeds. Anyone else see how the trend will evolve? I could wait a little longer to get a 32-core with 2 x 16-core chips for $3,000, or a mid-range 16-core for $2,300. Just what I'm thinking about lately... but one thing is for sure, it sounds sick...
Comments
Which puts this at 16x the speed of a Woodcrest.
Here's a quote from Intel's site:
We've heard there is a 256-core machine in Intel's roadmap
How long do you think it will take Intel to get to that point?
Delivering such a processor would take a while. We're going to have to see a couple generations of process improvement in terms of transistor density. I would expect we'd see a platform like that by the 2015 time frame, assuming we see enough applications that could take advantage of a 256-core machine. We are focused on delivering more threads per processor by adding more cores and more threads per core. That gives you a multiplicative improvement in thread count.
I was reading my wife's BusinessWeek magazine (June 20, 2005 issue) and ran across an article titled "More Life for Moore's Law". It was a good read and also included some surprising quotes from IBM. The article is on page 108, in case that's helpful.
Article Summary
BusinessWeek reports that future solutions for keeping processor speeds increasing in sync with Moore's Law are starting to emerge. Current processes rely on shrinking the transistors on chips, reducing the time needed for electrons to reach their destination. "This year and next they'll go down to 65 nm, followed by 45 nm by 2010, 32 nm by 2013, and 22 nm by 2016," increasing the speed of processors the old-fashioned way.
The next step in increasing speeds without shrinking circuit lines is the use of multicore processors, where two or more processor cores are coupled together on the same piece of silicon. There is a big push from Intel to get software to take advantage of multicore processors: "Intel has committed 3,000 of its 10,000 software programmers to help accelerate the shift to multicore designs." Philip Emma, manager of systems technology and microarchitecture at IBM, predicts that personal computers will likely top out at eight-core processors.
The next possible solution is to design "ways to stack circuitry, layer upon layer, into multi-story, 3D structures." This would cut the pathway distance for electrons from 20,000 microns to 10 microns, allowing current 90 nm processors to perform like the 32 nm processors scheduled for 2011. There are challenges to overcome when stacking transistors on top of one another, and the technology may not make an appearance until around 2011.
"We're going to see a lot of evolution happening very fast," said Philip Emma.
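The software shift the article describes comes down to splitting CPU-bound work across cores. A minimal sketch in Python (my own toy workload, nothing from the article):

```python
from concurrent.futures import ProcessPoolExecutor
import os

def count_primes(rng):
    """Count primes in [lo, hi) the slow way -- stand-in CPU-bound work."""
    lo, hi = rng
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # Split the range into independent chunks, one per worker.
    chunks = [(i, i + 25_000) for i in range(0, 100_000, 25_000)]
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        total = sum(pool.map(count_primes, chunks))
    print(total)  # 9592 primes below 100,000
```

On a quad-core, the four chunks can run truly in parallel; the same code on a single core still works, just serially.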
Clock-speed ramping proved rather unsuccessful, and I'm glad Intel came to their senses. Now they are heavily focused on many-core computing and will have improved tools for threading (Nehalem supports SMT, so a 16-core computer looks like a 32-core computer).
I'd rather have more cores at a lower speed than fewer cores working faster.
Depends on what you're doing. Not all computational tasks are parallelizable, and parallel code has been more difficult to write.
Vinea
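A toy illustration of that point (my own example, not Vinea's): summing chunks of a list splits cleanly across cores, but a recurrence where each step needs the previous result is inherently serial.

```python
# Parallelizable: each chunk is independent, so each sum(c) could run on
# its own core. (Assumes len(data) divides evenly into n_chunks.)
def chunk_sums(data, n_chunks):
    size = len(data) // n_chunks
    chunks = [data[i * size:(i + 1) * size] for i in range(n_chunks)]
    return sum(sum(c) for c in chunks)

# Not parallelizable: step i depends on step i-1, so no core can start a
# step until the previous one finishes. (Classic LCG constants, for flavor.)
def recurrence(x0, steps):
    x = x0
    for _ in range(steps):
        x = (x * 1103515245 + 12345) % 2 ** 31
    return x
```

More cores speed up the first function and do nothing at all for the second.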
True, I like many-core for the multitasking. I don't want something like a render or an encode slowing down my system; I see no reason why I shouldn't have processing power on demand. While some tasks don't lend themselves to parallel processing that easily, I find that's becoming a rarer and rarer situation.
The interesting thing is when computing gets so fast that the typical consumer truly has more power than they know what to do with. Software always rises to the challenge, but with 32-core computers with integrated memory controllers and GPUs... I could easily see computing power finally outstripping software demands for most consumers.
Pros will always need the power.
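The "render shouldn't slow my system down" point plays out in practice as a common convention (not anything platform-specific): cap the worker pool at one fewer than the core count so a core stays free for the OS and UI.

```python
import os
from multiprocessing import Pool

def encode_frame(frame_id):
    # Stand-in for CPU-heavy work like video encoding (hypothetical).
    return sum(i * i for i in range(10_000)) + frame_id

if __name__ == "__main__":
    # Leave one core free for interactive tasks while the batch job runs.
    workers = max(1, (os.cpu_count() or 2) - 1)
    with Pool(workers) as pool:
        results = pool.map(encode_frame, range(8))
    print(len(results))  # 8
```

With many cores, giving one up costs the batch job very little but keeps the machine responsive.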
It would be nice if they would shut off power to idle cores once more than four cores becomes the standard. A power-saving option that cuts power to all but one core and then lowers the voltage on that one would be great for mobile computing: battery life on the go, full power while plugged in.
With four or fewer cores, I think the average user will see benefits from the things other posters have noted above, multithreaded programs or not: rendering, DVD watching, .Mac syncing tying up a core's worth of processor power, Parallels, VMware, etc. Beyond four cores, the normal user will see less and less benefit.
Intel is doing that, and in addition reducing the multi-core penalty on single-core ops by "overclocking" one core when the other core is idled and not adding to the thermal output. Intel prefers not to call it "overclocking" but "Intel Dynamic Acceleration" (IDA), because the increased speed is within spec, but it's really just overclocking.
It's in Santa Rosa, if I recall correctly, along with Dynamic Power Coordination (which sleeps the idle core).
Vinea
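A back-of-the-envelope version of why IDA works (illustrative numbers and constants, not Intel's actual specs): dynamic power scales roughly with frequency times voltage squared, so idling one core frees thermal headroom the live core can spend.

```python
# Toy model: dynamic power ~ C * f * V^2 (made-up constant, not real specs).
def dynamic_power(freq_ghz, volts, c=10.0):
    return c * freq_ghz * volts ** 2

tdp = dynamic_power(2.4, 1.2) * 2   # budget: both cores at 2.4 GHz -> 69.12 W
one_core = dynamic_power(2.4, 1.2)  # one core idled: only 34.56 W in use
headroom = tdp - one_core           # 34.56 W freed for the remaining core

# Holding voltage fixed, the live core could clock up until it alone
# consumes the whole budget:
boosted = tdp / (10.0 * 1.2 ** 2)   # ~4.8 GHz in this crude linear model
```

In reality voltage has to rise with frequency, so real IDA gains are a speed bin or two, not the doubling this linear model suggests.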
I believe it's over, for both cores and GHz. Since 2003 we've had ~2 GHz processors; there haven't been any big jumps in speed for a while. And as far as multiple cores, I think this may be the end too. Why? XServe. Say you wanted 32 cores of processing. If you purchased 8 quad-core XServes @ 2.66 GHz, that would cost you $32,000. An alternative right now is 4 x 8-core Mac Pros, a better solution at $16,000. But if Apple came out with a 32-core Mac Pro for $4,000, or a 16-core Mac Pro for $3,000 or $4,000, that would make buying XServes pointless. So the cores are done and the GHz are done; computing power has peaked for a while. Buying a Mac Pro 2.66 GHz quad at $2,500 is a safe investment in my opinion. It should last 10 years easily...
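For what it's worth, the price comparison in that post works out to these per-core costs (the poster's own numbers, including the hypothetical 32-core Mac Pro price):

```python
# (cores, total price in USD) -- figures from the post above, the last one
# being the poster's hypothetical machine.
options = {
    "8x quad-core XServe": (32, 32_000),
    "4x 8-core Mac Pro": (32, 16_000),
    "hypothetical 32-core Mac Pro": (32, 4_000),
}

for name, (cores, price) in options.items():
    print(f"{name}: ${price / cores:,.0f} per core")
# 8x quad-core XServe: $1,000 per core
# 4x 8-core Mac Pro: $500 per core
# hypothetical 32-core Mac Pro: $125 per core
```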
"640K is more memory than anyone will ever need." quote attributed to Bill Gates.
Computers have stagnated since 2003 because:
Because going from 2 GHz to 3 GHz is not a 50% jump.
Because 4 cores is not a jump from 1.
Because a 1U 4-core XServe is not a better server than an 8-core Mac Pro that doesn't fit in a rack.
Because Intel has no plans for 8+ core CPUs.
Because Apple will never put quad-core or greater CPUs in their XServes.
Vinea
Also, I can't remember where I read it, but I was reading AMD vs. Intel comparison articles and ran across someone making a significant comment about multicore processors having cores dedicated to specific system processes. Right now we have so many things that communicate with the CPU: the video card, sound card, network devices, etc. The breakthrough in multicore technology is not only cooling the chips down and making them faster, but having a CPU whose cores take care of certain things that are now outsourced from it. I know you can have a core do specific things when running an application, but only if that app supports multicore. That could mean palm-sized functional desktops! I think we will see that coming in the upcoming years, or at least steps being made in that direction.
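Dedicating a core to one job can already be approximated in software today. A hedged sketch using Linux's scheduler-affinity calls (Linux-only; the heterogeneous-core future described above would bake this into the hardware instead):

```python
import os

def pin_to_one_core():
    """Restrict the current process to a single core, crudely 'dedicating'
    that core to this job. Uses Linux-only sched_(get|set)affinity."""
    available = os.sched_getaffinity(0)  # cores this process may run on
    target = min(available)              # pick one arbitrarily
    os.sched_setaffinity(0, {target})    # from now on, run only there
    return target
```

An OS or driver could use the same mechanism to reserve cores for, say, network or audio processing, which is roughly what the comment was imagining.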