
Sandy Bridge CPU preview at Anandtech

post #1 of 64
Thread Starter 
http://www.anandtech.com/show/3871/t...-wins-in-a-row

Note this preview CPU lacks final features like Turbo.

Not bad, although new motherboards are required for this CPU. Big gains in the integrated graphics, though that is one area I honestly don't care about. Overclocking gets cracked down on. That doesn't affect Mac users, but enthusiast PC users will be pissed.

Check it. Thirteen pages, in typical PC hardware review style. (use Safari's Autopager feature!)
post #2 of 64
Quote:
Originally Posted by 1337_5L4Xx0R View Post

http://www.anandtech.com/show/3871/t...-wins-in-a-row

Note this preview CPU lacks final features like Turbo.

I've learned to avoid getting excited about unreleased products. Plus I want to see how SB stands up to the coming AMD offerings.
Quote:
Not bad, although new motherboards are required for this CPU. Big gains in the integrated graphics, though that is one area I honestly don't care about. Overclocking gets cracked down on. That doesn't affect Mac users, but enthusiast PC users will be pissed.

Check it. Thirteen pages, in typical PC hardware review style. (use Safari's Autopager feature!)

Yeah, well, many of those PC review articles are garbage, especially in the context of a Mac or Linux user. Many sites are nothing more than marketing vehicles for Intel. For most users the difference between an AMD and an Intel chip is trivial, but the review sites like to turn it into a wide gulf.

That being said, one of these (AMD or Intel) SoCs is going to make for one very nice Mini.


Dave
post #3 of 64
Quote:
Big gains in the integrated graphics, though that is one area I honestly don't care about.

It may not be something that people in the market for a Pro machine care about but this is something that is going to be huge when the technology 'trickles down' to the Consumer level Macs.

The MacBook, MacBook Air, Mac Mini, and even the 13 inch MacBook Pro all have integrated graphics at the moment that are lackluster at best. The types of gains Intel is making with Sandy Bridge makes the future of integrated graphics look a heck of a lot better. Despite the gains, it still won't be a match for discrete graphics but it will really strengthen the lower end systems. The performance gains actually surprised me.
post #4 of 64
Well, not against current Macs with integrated graphics. It is good that Intel is doubling performance, no doubt, but its previous results were pathetic.
Quote:
Originally Posted by Fran441 View Post

It may not be something that people in the market for a Pro machine care about but this is something that is going to be huge when the technology 'trickles down' to the Consumer level Macs.

Don't misunderstand me, this will be huge, and it will be even better if AMD can significantly better the numbers against Intel's GPUs. There is an interesting question here which revolves around which approach is better. In the end the consumer will win big time.

What people need to realize is that this sort of tech will put respectable quad cores into Mini sized computers. The Mini currently has a seventy watt power supply, so low power variants could easily go in there. No, it won't be a gaming machine, but it will be a big upgrade for Mini users.
Quote:
The MacBook, MacBook Air, Mac Mini, and even the 13 inch MacBook Pro all have integrated graphics at the moment that are lackluster at best. The types of gains Intel is making with Sandy Bridge makes the future of integrated graphics look a heck of a lot better.

Two things:
1.
Yes all of those would benefit. Plus so would models we don't know about.
2.
Considering how bad Intel graphics are this really won't be that much of an improvement over the current machines. In some cases it won't be an improvement at all.

Further, you really can't compare current hardware with future non-shipping hardware. Let's face it, AMD will have competitive products.
Quote:
Despite the gains, it still won't be a match for discrete graphics but it will really strengthen the lower end systems. The performance gains actually surprised me.

Well I'm not surprised; when you are at the bottom of the barrel, doubling performance becomes mandatory. Apple basically slapped Intel senseless with the helper GPUs on many of its machines. It is a pretty strong message to publicly imply that Intel's GPUs aren't worth the silicon they are written on.

However I can't dismiss that SoC tech will be a fantastic thing for smaller computers. You get the three P's: good performance, low power and pretty graphics, or "pretty performance & power" for short.
post #5 of 64
Quote:
Further, you really can't compare current hardware with future non-shipping hardware.

This is the Future Hardware forum on AppleInsider; that's what we do here!
post #6 of 64
Quote:
Originally Posted by Fran441 View Post

The MacBook, MacBook Air, Mac Mini, and even the 13 inch MacBook Pro all have integrated graphics at the moment that are lackluster at best.

You really think the 320m is lackluster?

Do you expect Intel's new IGP to be significantly better than the 320m? I think you're setting yourself up for disappointment.
post #7 of 64
Quote:
Originally Posted by backtomac View Post

Do you expect Intel's new IGP to be significantly better than the 320m? I think you're setting yourself up for disappointment.

Disappointment indeed, the 9400M gets 48fps in Modern Warfare 2 on low quality and Intel's latest graphics are getting 44fps. Also in Youtube tests, the 320M is easily better than a Radeon 5450 by a large margin - the 5450 quality looks like the 9400M.

The Intel 4500 that's in the i-series chips was half the speed of a 9400M so these new ones being double the speed will match the 9400M. But that's still half of the 320M.

So it's basically the same story - Intel's GPUs coming out next year will be half the speed of this year's NVidia GPUs and they still won't support OpenCL so Apple won't use them on the low-end.

They can't cling on to Core 2 Duo forever so AMD Fusion it is. I reckon AMD won't get Light Peak though but USB 3 support might be enough for now.
post #8 of 64
That's not bad at all. A healthy improvement over Nehalem.
post #9 of 64
Quote:
Originally Posted by Marvin View Post

Disappointment indeed, the 9400M gets 48fps in Modern Warfare 2 on low quality and Intel's latest graphics are getting 44fps. Also in Youtube tests, the 320M is easily better than a Radeon 5450 by a large margin - the 5450 quality looks like the 9400M.

The Intel 4500 that's in the i-series chips was half the speed of a 9400M so these new ones being double the speed will match the 9400M. But that's still half of the 320M.

This is a very important point and well presented. Basically Intel doubled its performance and is still well behind. About the only thing it would be good for is a low-end MacBook optimized for long battery life.
Quote:
So it's basically the same story - Intel's GPUs coming out next year will be half the speed of this year's NVidia GPUs and they still won't support OpenCL so Apple won't use them on the low-end.

Well I'm wondering about that OpenCL support. Supposedly the ALUs were completely redesigned in the latest rev. It would have been silly for Intel to redesign them and not make them OpenCL compatible. Yet I've heard nothing to indicate that OpenCL is supported.

In any event I have to agree, if Intel didn't go to the effort of supporting OpenCL I can't see Apple supporting them. Intel seems to be way off base here.
Quote:

They can't cling on to Core 2 Duo forever so AMD Fusion it is.

Which would be good for the overall industry. Intel has been a little too stupid of late. Plus AMD has a very wide range of processors planned in the Fusion family. So we could see AMD stuff in everything from an AIR replacement to a really nice quad core Mini. As long as we don't see any huge regressions in GPU performance I think most Mac users will be pretty positive about the change.

As a side note, AMD's Ontario Fusion product would make for a nice MacBook AIR. It is more than an ATOM level netbook processor, with my only concern being GPU performance. Its low power nature should be an excellent match for AIR-like notebooks. Can you tell I'm wishing for a refactored AIR?
Quote:
I reckon AMD won't get Light Peak though but USB 3 support might be enough for now.

That all depends upon how Light Peak gets implemented. If it is PCI Express based, Apple could implement it with an AMD chip.

By the way, rumors are that Fusion products will support USB 3. I'm not certain how true that is, nor if it applies to the entire lineup, but it certainly sounds good. Apparently it is also well implemented performance wise. Rumors of course are not information from the horse's mouth. The little bit rumored, though, hints at an excellent product lineup for AMD. Further, they have long range plans that should make OpenCL very viable. The end game appears to be making the GPU an equal partner to the CPU.
post #10 of 64
Quote:
Originally Posted by wizard69 View Post

In any event I have to agree, if Intel didn't go to the effort of supporting OpenCL I can't see Apple supporting them. Intel seems to be way off base here.

They have a conflict of interest. They are marketing their Ct language with the following:

"Forward-scaling: Ct technology lets a single-source application work consistently on multiple multicore and manycore processors with different architectures, instruction sets, cache architectures, core counts, and vector widths without requiring developers to rewrite programs over and over. The benefits of only writing and debugging code once are substantial.

Ease of use: Ct technology is built off the familiar C++ language and does not require developers to alter or replace standard compilers, or to learn a new programming language. It provides a simple and easy to use portable data parallel programming API that results in simpler and more maintainable code.

High-level and hardware independent: Reduces low-level parallel programming effort while improving portability and safety with a high-level API that abstracts low-level data parallelization mechanisms. Targets SIMD and thread parallelism simultaneously.

Safety: Ct technology prevents parallel programming bugs such as data races and deadlocks by design. Ct technology guards against these problems by prompting developers to specify computations in terms of composable, deterministic patterns close to the mathematical form of their problem, not in terms of low-level parallel computation mechanisms. Ct then automatically maps the high-level, deterministic specification of the computation onto an efficient implementation, eliminating the risk of race conditions and non-determinism."

http://software.intel.com/en-us/data-parallel/

Most of the wording goes against OpenCL and GPU programming. Intel aren't generally fond of GPU development if it's not x86 and try hard to dismiss GPU performance, sometimes without quite thinking it through:

http://www.engadget.com/2010/06/24/n...imes-faster-t/

Quote:
Originally Posted by wizard69 View Post

So we could see AMD stuff in everything from an AIR replacement to a really nice quad core Mini. As long as we don't see any huge regressions in GPU performance I think most Mac users will be pretty positive about the change.

Supposedly Llano has a Redwood GPU part:

http://www.xbitlabs.com/news/video/d...s_Company.html

as in, 5500-series GPU. If there's a quad Phenom 2 in the mix, that's pretty crazy. I'd expect some down-clocking but still, highly parallel and way faster than the 5400-series mentioned in that link, which Intel's GPU is comparable to.

Quote:
Originally Posted by wizard69 View Post

As a side note, AMD's Ontario Fusion product would make for a nice MacBook AIR. It is more than an ATOM level netbook processor, with my only concern being GPU performance. Its low power nature should be an excellent match for AIR-like notebooks. Can you tell I'm wishing for a refactored AIR?

For that performance, the price of the Air would have to drop significantly. The benefit right now is that it's smaller but around the same performance as the Macbook. I'd rather they dropped the Air and just added some of that design to the Macbook. No optical and user-replaceable SSD. The iPad is moving into the realm of the thin, light travel companion with very long battery life, half the weight of the Air and IPS screen.
post #11 of 64
Quote:
Originally Posted by FuturePastNow View Post

That's not bad at all. A healthy improvement over Nehalem.

Intel have become somewhat predictable in regards to performance improvements with the 'tick, tock' cpu upgrade cycles.

The tocks (architectural changes) give about a 20% improvement. The ticks (die shrinks) give about 5% clock speed improvement, better battery life for the mobile parts and a couple more cores on the high end parts.

For Macs it seems the best bet is to wait for the architectural changes.
post #12 of 64
Quote:
Originally Posted by Marvin View Post

They have a conflict of interest. They are marketing their Ct language with the following:

"Forward-scaling: Ct technology lets a single-source application work consistently on multiple multicore and manycore processors with different architectures, instruction sets, cache architectures, core counts, and vector widths without requiring developers to rewrite programs over and over. The benefits of only writing and debugging code once are substantial.

Ease of use: Ct technology is built off the familiar C++ language and does not require developers to alter or replace standard compilers, or to learn a new programming language. It provides a simple and easy to use portable data parallel programming API that results in simpler and more maintainable code.

High-level and hardware independent: Reduces low-level parallel programming effort while improving portability and safety with a high-level API that abstracts low-level data parallelization mechanisms. Targets SIMD and thread parallelism simultaneously.

Safety: Ct technology prevents parallel programming bugs such as data races and deadlocks by design. Ct technology guards against these problems by prompting developers to specify computations in terms of composable, deterministic patterns close to the mathematical form of their problem, not in terms of low-level parallel computation mechanisms. Ct then automatically maps the high-level, deterministic specification of the computation onto an efficient implementation, eliminating the risk of race conditions and non-determinism."

http://software.intel.com/en-us/data-parallel/

Most of the wording goes against OpenCL and GPU programming. Intel aren't generally fond of GPU development if it's not x86 and try hard to dismiss GPU performance, sometimes without quite thinking it through:

http://www.engadget.com/2010/06/24/n...imes-faster-t/

Intel is also good at shooting themselves in the foot. But I really don't think they grasp the utility of GPUs. In the end their recent behavior is leading to some sourness in the user community. It is like they are giving AMD an opportunity to be successful.
Quote:
Supposedly Llano has a Redwood GPU part:

http://www.xbitlabs.com/news/video/d...s_Company.html

as in, 5500-series GPU. If there's a quad Phenom 2 in the mix, that's pretty crazy. I'd expect some down-clocking but still, highly parallel and way faster than the 5400-series mentioned in that link, which Intel's GPU is comparable to.

In the end we need to learn more about how these will perform in AMD's implementation. They already admit to less bandwidth to memory but hope to make that up with the closer coupling to the CPU.
Quote:
For that performance, the price of the Air would have to drop significantly. The benefit right now is that it's smaller but around the same performance as the Macbook.

Actually I don't think the performance is anywhere near a MacBook. There is too much thermal throttling for it to sustain good performance, plus the clock rate is rather modest to begin with. While Ontario isn't designed to be a performance powerhouse, if they can hit two GHz in a four core model it would be an excellent AIR processor and effectively offer better performance. I read somewhere what the estimated performance was supposed to be, something like 80 to 90 percent of a desktop processor; whatever it ends up being, it is expected to be much faster than ATOM. If the Bobcat core power usage is as low as expected it could easily power the AIR.

Can AMD put together an Ontario implementation that beats the 1.8 GHz processor currently in the AIR? That is the question. By beat I mean both in respect to CPU performance and GPU performance. It might be too much to ask for, and details are lacking to even guess right now, but the idea seems possible.
Quote:
I'd rather they dropped the Air and just added some of that design to the Macbook. No optical and user-replaceable SSD.

Well I have to agree that innovation in the portables is lacking. Frankly I want my next MacBook Pro to have bays for SSDs and maybe one HD. At least two, possibly three.

AIR has a place, though I just don't know where that place is for the current model. The hope is that if they do re-factor the AIR they reconsider some of its more wanting design choices.
Quote:
The iPad is moving into the realm of the thin, light travel companion with very long battery life, half the weight of the Air and IPS screen.

Yes the iPad is very nice, and frankly I'm very tempted to buy one, but it isn't a laptop. If you need a laptop you won't be considering the iPad. The AIR and the other small portables from Apple will benefit from the same technology shrinks that enabled the iPad: in this case SoC x86 processors, very dense RAM and solid state storage. If we could get AIR-like performance with half or a quarter of the currently required power, battery life would increase dramatically.

It would be nice to see what Apple is up to here real soon. The AIR is so old that it is difficult to justify its high price.
post #13 of 64
Quote:
Originally Posted by backtomac View Post

Intel have become somewhat predictable in regards to performance improvements with the 'tick, tock' cpu upgrade cycles.
........................

For macs it seems the best bet is to wait for the architectural changes.

The best time to buy a Mac is right after the model is introduced. At that time they are pretty good values. Well, generally; some have never been really good values, such as AIR.


If Apple does adopt some AMD chips I would expect the same thing.
post #14 of 64
Thread Starter 
I think the reason for Intel's current and future lack of OpenCL support is obvious. If they supported OpenCL, it would make for very unfavorable comparisons with AMD and NVidia. By any metric, besides cost, Intel would lose handily.

It's especially hard for a Mac user to feign interest in Intel GPUs; Macs have such abysmal 3D performance as is. Who cares what bottom-of-the-barrel 3D solutions can do? Anyone who has the coin for a Sandy Bridge CPU and motherboard is going to have the money and sense to get a proper GPU in there.

Quote:
So it's basically the same story - Intel's GPUs coming out next year will be half the speed of this year's NVidia GPUs and they still won't support OpenCL so Apple won't use them on the low-end.

Bingo.

Fran441? Holy smokes you are still around.

wizard69: Bulldozer won't debut until June 2011 or so, Sandy Bridge ships in the next few months. Anand at Anandtech is not an Intel fanboi.
post #15 of 64
Quote:
Originally Posted by 1337_5L4Xx0R View Post


........
Bingo.

Fran441? Holy smokes you are still around.

wizard69: Bulldozer won't debut until June 2011 or so,

Nobody even mentioned Bulldozer! I brought up the Fusion lineup, which has two devices coming near term. One Fusion product is called Ontario, with a release expected very shortly. This Fusion product uses the new Bobcat core and is to be marketed against Intel's ATOM at the low end and Intel's ULV chips above the ATOMs.

The second known Fusion product is called Llano and is due early next year. This chip leverages existing AMD cores. There is less public info on this guy (not that there is a lot of Ontario info floating about) but it is believed destined for desktop devices and maybe high end notebooks.

Bulldozer isn't even in the Fusion lineup for all of next year, as far as I know.
Quote:
Sandy Bridge ships in the next few months. Anand at Anandtech is not an Intel fanboi.

He might not be a fanboi but he does have early access to unreleased Intel chips. That is enough for me to question what he prints, the old grain of salt rule if you will.

Frankly I don't like the way he reported the "doubling" of GPU performance. It doesn't take much effort to use glowing language to report, er, twist some facts to impress. The fact is even with its doubled performance the Intel hardware is still crappy when compared to shipping products. Some of those products are almost a year old now. Instead of fanboi how about shill?


Dave
post #16 of 64
Quote:
Originally Posted by 1337_5L4Xx0R View Post

If they supported OpenCL, it would make for very unfavorable comparisons with AMD and NVidia. By any metric, besides cost, Intel would lose handily.

As a blanket observation, this is untrue. Intel CPUs can stand up to and even crush GPUs in many circumstances -- it depends entirely on the algorithms being used. Some problems highlight the power of the GPU, others highlight the limitations and constraints of the GPU. OpenCL is appropriate for both.

Intel's position is supportive of OpenCL, and they are actively participating in it... highlighting their CPU performance. Sandy Bridge will have an improved CPU OpenCL story with AVX and the other architectural improvements. The integrated GPU may just not be sophisticated enough to support OpenCL... we don't know yet.
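
To illustrate what "appropriate for both" means in practice, here is a minimal sketch in plain C against the standard OpenCL host API (not anything from Apple's sample code, and error handling is trimmed): an application asks for a GPU device and, if the driver offers none, runs the very same kernel source on the CPU device instead.

/* Prefer a GPU device, fall back to the CPU device; the kernel source is unchanged either way. */
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>   /* Mac OS X header location */
#else
#include <CL/cl.h>
#endif

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    char name[128];

    clGetPlatformIDs(1, &platform, NULL);

    /* Ask for a GPU first... */
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS) {
        /* ...and if none is available, run the same kernels on the CPU. */
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);
    }

    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
    printf("OpenCL kernels will run on: %s\n", name);
    return 0;
}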
Providing grist for the rumour mill since 2001.
post #17 of 64
Quote:
Originally Posted by Programmer View Post

As a blanket observation, this is untrue. Intel CPUs can stand up to and even crush GPUs in many circumstances -- it depends entirely on the algorithms being used. Some problems highlight the power of the GPU, others highlight the limitations and constraints of the GPU. OpenCL is appropriate for both.

OpenCL may be suitable for both, but that really isn't the issue here. Rather it is the anemic performance of Intel's GPU that is at issue, relative to the performance of other GPUs that can be leveraged with the right code.
Quote:
Intel's position is supportive of OpenCL, and they are actively participating in it... highlighting their CPU performance.

Intel's marketing here has been totally screwed up. First, they don't seem to realize that people have made decisions to move away from Intel hardware because of its performance penalty. People sink significant money into moving code to GPUs because they get a payoff. Intel might not think 14 times faster is something to worry about, but many can justify the effort with a 2x gain. If your algorithms fit a GPU's parallel nature, Intel doesn't have much to keep you on x86.
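
Just to make "fits a GPU's parallel nature" concrete, here is a toy sketch (my own made-up example, nothing vendor specific): a loop where every element is independent can be flattened straight into an OpenCL kernel, one work-item per element.

/* CPU version: every iteration is independent of the others... */
void saxpy_cpu(int n, float a, const float *x, float *y)
{
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* ...so on a GPU the loop disappears and each work-item handles one element. */
__kernel void saxpy(const float a,
                    __global const float *x,
                    __global float *y)
{
    size_t i = get_global_id(0);
    y[i] = a * x[i] + y[i];
}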
Quote:
Sandy Bridge will have an improved CPU OpenCL story with AVX and the other architectural improvements.

Which nobody cares about.

Especially down the road when AMD has their integrated Fusion GPUs addressing the same address space as the CPU. Many of the penalties associated with GPUs then go away. Every machine then becomes a heterogeneous compute platform.
Quote:
The integrated GPU may just not be sophisticated enough to support OpenCL... we don't know yet.

I wouldn't be surprised if it does. After all the cores are entirely new. New but lackluster unless they are hiding info we don't know about.

My point of view is this: For a short time AMD will have a chance to establish themselves as a performance leader in SoC processors. They can do that via their GPU technology, something Intel doesn't have an answer for yet.

In any event it will be interesting to see how the two companies products pan out. It will be even harder to get factual, sound and unbiased opinions.


Dave
post #18 of 64
Quote:
Originally Posted by wizard69 View Post

Which nobody cares about.

And that's a problem... there are many cases where a quad-core (or better) i7 crushes a discrete GPU running the same OpenCL code. Systems are going to be heterogeneous and people need to realize that what matters is how all parts of the system perform.

Some Sandy Bridges will come with an integral GPU for people that only need low end performance (and note that it is now up to current discrete part performance levels, so this is going to be a very large part of the market), and other SBs will have only PCIe lanes for attaching discrete GPUs. And into the future Intel will continue improving their GPU performance... its performance is increasing at a rate faster than the high end GPUs', so they are catching up, and the higher the overall performance gets, the fewer people in the market are going to need the extra expense of the high end discrete parts. AMD may (may... they haven't proven it yet) have a short term advantage, but it isn't going to last.
Providing grist for the rumour mill since 2001.
post #19 of 64
Quote:
Originally Posted by Programmer View Post

And that's a problem... there are many cases where a quad-core (or better) i7 crushes a discrete GPU running the same OpenCL code. Systems are going to be heterogeneous and people need to realize that what matters is how all parts of the system perform.

Some Sandy Bridges will come with an integral GPU for people that only need low end performance (and note that it is now up to current discrete part performance levels, so this is going to be a very large part of the market), and other SBs will have only PCIe lanes for attaching discrete GPUs. And into the future Intel will continue improving their GPU performance... its performance is increasing at a rate faster than the high end GPUs', so they are catching up, and the higher the overall performance gets, the fewer people in the market are going to need the extra expense of the high end discrete parts. AMD may (may... they haven't proven it yet) have a short term advantage, but it isn't going to last.

Going to differ with you here.

The SB that Anand reviewed recently apparently had the 'high end' IGP, with 12 execution units vs 6 EUs, that Intel will offer with SB. So basically this is as good as it gets with Intel IGP and SB. Granted, it isn't 'bad' IF you get a SB cpu with the IGP that has the 12 EUs. If you get one with 6 EUs, I'm going to go out on a limb and predict that they're going to suck pretty hard. Still, the best you can get from Intel is 9400m performance, which is half the current 320m.

While Intel have made great strides in improving their IGP performance, it's coming from a base of very weak performance. Getting their IGPs to perform at the level of a 9400m is a profound improvement, but that is low hanging fruit in my view. Where they go from here will be telling. While we don't know if the new Intel IGPs will support OCL, Anand mentioned in his review that he did not think they would.

I still think this will be an interesting horserace. If you look at overall cpu/gpu theoretical performance, I still believe that AMD can be better than Intel and SB. The cpus may be 20% slower than Intel's SB cpus but the gpus should be 2x faster. I can't wait to see what Bulldozer and bobcat bring to the table. More than anything, we need OCL apps. IMO that's the big limitation at the moment.
post #20 of 64
Quote:
Originally Posted by backtomac View Post

Going to differ with you here.

Fair enough, you're allowed to.

Quote:
The SB that Anand reviewed recently apparently had the 'high end' IGP, with 12 execution units vs 6 EUs, that Intel will offer with SB. So basically this is as good as it gets with Intel IGP and SB. Granted, it isn't 'bad' IF you get a SB cpu with the IGP that has the 12 EUs. If you get one with 6 EUs, I'm going to go out on a limb and predict that they're going to suck pretty hard. Still, the best you can get from Intel is 9400m performance, which is half the current 320m.

While Intel have made great strides in improving their IGP performance, it's coming from a base of very weak performance. Getting their IGPs to perform at the level of a 9400m is a profound improvement, but that is low hanging fruit in my view. Where they go from here will be telling. While we don't know if the new Intel IGPs will support OCL, Anand mentioned in his review that he did not think they would.

My point, however, was that we're hitting the diminishing curve in terms of value derived from adding more GPU power (at least without the arrival of some new kind of consumer-compelling OCL app). There is going to be a vast array of people for whom the 6 EU SB will be just fine, a large number for whom 12 EU does the trick, and for the rest they will buy SB+discrete GPU. Five years ago the Intel GPUs would hardly satisfy anybody... people that ended up with them essentially didn't use their computers for graphics. Now they do a decent job of playing games, handle HD video with aplomb, and rock the desktop GUI. Sure they won't stand up to the high end GPUs when running the latest Crytek creation (or equivalent), but you need to take into account that an enormous percentage of the consumer base doesn't care. In fact, they don't even notice when you turn off some of the higher end graphics features so that the lower end GPUs can get a decent frame rate. They don't notice 30 vs 60 Hz. They'll watch a demo, see the price difference, and choose the cheaper machine. There are always the more 'discerning' or the geeks who will spring for more power, but we're starting to crest the curve... it's getting harder to justify the extra money at the marginal rate of improvement.

Quote:
I still think this will be an interesting horserace. If you look at overall cpu/gpu theoretical performance, I still believe that AMD can be better than Intel and SB. The cpus may be 20% slower than Intel's SB cpus but the gpus should be 2x faster. I can't wait to see what Bulldozer and bobcat bring to the table. More than anything, we need OCL apps. IMO that's the big limitation at the moment.

Actually I think the horserace has started to become less interesting. Keep in mind that I've been taking bets on it for almost three decades so perhaps I'm just getting bored with it, but my point above is that the exact balance AMD and Intel choose to take between CPU and GPU matters less and less. They'll hit their price/performance and power/performance ratios, and the tech world will gibber endlessly about the various details... and the discrete GPU market will continue eroding as it has been. The competition between AMD and Intel will continue with AMD struggling and Intel remaining the major market force. I expect AMD to slowly lose ground in the long term even though they may have positive spikes here and there as they push a new generation product out (if they manage to pull ahead at all... which they only seem to when Intel makes a serious misstep such as Merced or Pentium4). Gradually Intel's dominance in terms of manufacturing and process technology will leave AMD and its partners panting with the effort to keep up.

Of course that is just a straight projection from the trends of the past. It all gets a whole lot less boring if a game changer shows up. I'm not talking about Fusion or Bulldozer or anything else AMD or Intel have publicly talked about... I mean really radical stuff. Stuff in the labs today that turns out to have dramatic and commercializable impacts. Memristors? Quantum computing? Radical changes in manufacturing processes? Larrabee was a potentially interesting bet that didn't (hasn't) paid off, but it was in the "different enough" category to make it worth paying attention to.
Providing grist for the rumour mill since 2001.
post #21 of 64
What I was trying to get at is the idea that many organizations using GPU computing now do so because they can get a significant advantage by doing so. They understand the trade offs between narrow but fast processing units and wide but slower processing units.

Quote:
Originally Posted by Programmer View Post

And that's a problem... there are many cases where a quad-core (or better) i7 crushes a discrete GPU running the same OpenCL code. Systems are going to be heterogeneous and people need to realize that what matters is how all parts of the system perform.

The general user simply isn't going to understand or even want to hear about heterogeneous systems or the trade offs between the two approaches. They will simply leave it up to the software vendor to tell them which is the better choice for the program they are running. That is, the software vendor will be expected to spec a minimal GPU for exceptional performance with their code.
Quote:
Some Sandy Bridges will come with an integral GPU for people that only need low end performance (and note that it is now up to current discrete part performance levels, so this is going to be a very large part of the market), and other SBs will have only PCIe lanes for attaching discrete GPUs. And into the future Intel will continue improving their GPU performance... its performance is increasing at a rate faster than the high end GPUs', so they are catching up, and the higher the overall performance gets, the fewer people in the market are going to need the extra expense of the high end discrete parts.

GPUs in general have outstripped general CPUs in the way they have gained performance in recent years. This is what makes GPUs so useful for certain classes of problems. The fact that they have far less baggage to maintain means that they will continue to advance at a very fast rate performance wise.

Yes, that performance will be pretty targeted, which is why we will always need some sort of conventional CPU in our machines. Still, a thousand floating point processors can be significant for many uses.
Quote:
AMD may (may... they haven't proven it yet) have a short term advantage, but it isn't going to last.

I'm not willing to throw in the towel for AMD yet. Intel has proven again and again that they don't get GPUs. AMD has uncharacteristically shown some ability to make hard calls to advance its hardware offerings, for example abandoning 3DNow! and taking the time needed to get the Bobcat cores to market. If that causes Intel to rethink ATOM and its ULV market then AMD will have accomplished something.

The problem for AMD is getting enough design ins to make an impact. Selling product can give a company a chance to turn a short term gain into long term success. Will it last? That is a very good question, and frankly I'm more optimistic. That is due to a couple of things.

First, since AMD took over ATI their graphics drivers have improved, in many cases very significantly. It is a sign to me that AMD rattled the cage enough, and did so in a positive way, so that good things came about.

Second, Intel is a long way, a very very long way, from making GPUs that are competitive with AMD's offerings. The fact is GPUs are extremely important these days for many users, some of whom don't even realize it. If AMD has any sense at all they will make a strenuous effort to keep that lead into the future. Right now it should be easy for them to do. Maybe you are right and Intel will take off the rose colored glasses and aggressively go after the GPU market, but I've yet to hear any noise indicating this is happening. Instead we get an integrated GPU that still isn't competitive with year old tech from some other supplier.

Besides, I don't measure success at AMD in increments of beating Intel. As long as they get back on track and gain market share, that is a perfectly good outcome. Fusion is one element in getting back on track. It is a good one too, as like it or not the CPU isn't as important as it used to be.

Dave
post #22 of 64
Quote:
Originally Posted by wizard69 View Post

The general user simply isn't going to understand or even want to hear about heterogeneous systems or the trade offs between the two approaches. They will simply leave it up to the software vendor to tell them which is the better choice for the program they are running. That is, the software vendor will be expected to spec a minimal GPU for exceptional performance with their code.

Sorry, wasn't very clear about who I meant by "people" in that context. Wasn't referring to the general user... was referring to technically oriented sorts, including developers.

Quote:
GPUs in general have outstripped general CPUs in the way they have gained performance in recent years. This is what makes GPUs so useful for certain classes of problems. The fact that they have far less baggage to maintain means that they will continue to advance at a very fast rate performance wise.

I don't think GPUs have had such a rush because of their lack of baggage. Lack of baggage didn't help PowerPC or IA-64 a whole lot in the face of x86/x64. GPUs have experienced a rapid rate of improvement mostly because they have been able to carefully constrain the problem they are solving -- doing more-or-less the same thing to lots of uniformly organized pieces of data.

Quote:
I'm not willing to throw in the towel for AMD yet.

If you read my previous post you'll see that I'm not either, but in the long run I don't have a very rosy prognosis for them. I'm also not so pessimistic about Intel's GPU work, nor so optimistic about ATI's legacy. Larrabee may not have succeeded (yet), but it did bring a lot of people into Intel who "get it" when it comes to graphics. They're still there.
Providing grist for the rumour mill since 2001.
post #23 of 64
Quote:
Originally Posted by Programmer View Post

Sorry, wasn't very clear about who I meant by "people" in that context. Wasn't referring to the general user... was referring to technically oriented sorts, including developers.

Exactly, and it is up to the developer to figure out how to structure the program for best performance. GPUs simply aren't useful for many problems, but where they can be leveraged the programmers should be expected to do so.
Quote:
I don't think GPUs have had such a rush because of their lack of baggage. Lack of baggage didn't help PowerPC or IA-64 a whole lot in the face of x86/x64.

Yes, but there is one very significant difference: all computers need to have some sort of GPU anyway. Well, at least computers used for common apps on the desk or lap. This is very important, as GPUs will never go away.
Quote:
GPUs have experienced a rapid rate of improvement mostly because they have been able to carefully constrain the problem they are solving -- doing more-or-less the same thing to lots of uniformly organized pieces of data.

That is exactly what makes GPUs so powerful for certain classes of programs.
Quote:



If you read my previous post you'll see that I'm not either, but in the long run I don't have a very rosy prognosis for them. I'm also not so pessimistic about Intel's GPU work, nor so optimistic about ATI's legacy.

My personal opinion is that we have a very long way to go before GPUs are powerful enough that development will cease. In fact with the high density displays that will be possible very soon now, I suspect the average GPU will be strained. Imagine a 27" iMac with a 250 to 300 dpi screen. GPU development won't slow down anytime soon.
Quote:
Larrabee may not have succeeded (yet), but it did bring a lot of people into Intel who "get it" when it comes to graphics. They're still there.

Well hopefully they are working in seclusion to surprise us all. Honestly though, I'm not convinced, because it seems to be Intel's goal to supply minimal graphical systems. They simply want something to go after the bulk of the business, a 'good enough' market.


Dave
post #24 of 64
Quote:
Originally Posted by backtomac View Post

Intel have become somewhat predictable in regards to performance improvements with the 'tick, tock' cpu upgrade cycles.

The tocks (architectural changes) give about a 20% improvement. The ticks (die shrinks) give about 5% clock speed improvement, better battery life for the mobile parts and a couple more cores on the high end parts.

For Macs it seems the best bet is to wait for the architectural changes.

Each die shrink gives about double the performance per watt consumed. Since mobile performance is limited by heat dissipation and battery life, most of the advances for laptops come with the die shrinks and the new micro-architectures offer very little. On the desktop it's a different story.
Mac user since August 1983.
post #25 of 64
Quote:
Originally Posted by mcarling View Post

Each die shrink gives about double the performance per watt consumed. Since mobile performance is limited by heat dissipation and battery life, most of the advances for laptops come with the die shrinks...

Well then Intel are cheating us.

Do you think you get double the performance or twice the battery life with each die shrink?
post #26 of 64
Anand the shrimp is a shill for Intel, a boring marketing site full of PC trolls that climax over minor tech details.

Intel is ...ed because they failed to develop a GPU even barely comparable to NVidia's and AMD's. And GPUs are gaining increasing relevance in both laptops and desktops. Whatever minor x86 gains or die shrinks they can leverage via their large infrastructure, they can't mask their glaring inadequacy by pre-shipping some kid a new chip and having the shill chorus praise them. They also managed to screw NVidia AND Apple over with the stunt they pulled with the i-something chips of including an unbearably crap and backward non-OpenCL igfx on the same die. A cheap, bullish way to shove an anachronistic product down everyone's throat and do away with NVidia, all in one coup. It's the reason why the MacBooks are stuck with the Core 2s and why the Air has taken aeons to be updated.

I don't think Apple will take this lightly. Apple doesn't care what's inside, and Apple customers don't care about lists of specs either, as long as the end product is effective. And I think that provided AMD overcome some hurdles, they pose a very interesting proposition for Apple: a tight, integrated and exclusive partnership that would give Apple the best gfx on the globe and some great APUs (GPUs and CPUs) too, at very good prices. I think AMD will finally join Apple for their mutual benefit.
post #27 of 64
I don't understand the hangup over OpenCL or GPU's in general under OSX; I know Apple wants OpenCL, but it hasn't gone anywhere. Additionally, Apple is behind the curve on OpenGL extensions anyhow.

Apple had no problem using Intel GPU's before (GMA 950 and x3100) - they were crap, but that didn't stop them, at least not until OpenCL, which again, has gone nowhere. It seems more like a carrot and stick to me.

Honestly, I could probably get by with Intel GMA HD graphics right now on my PC (using an AMD 5770), as it would still accelerate video playback and Flash, if not for the occasional game. That's probably all the average consumer would want as well, and if Intel could fit the bill, that's probably the ticket.
post #28 of 64
Quote:
Originally Posted by myapplelove View Post

Anand the shrimp is a shill for Intel, a boring marketing site full of PC trolls that climax over minor tech details.

Intel is ...ed because they failed to develop a GPU even barely comparable to NVidia's and AMD's. And GPUs are gaining increasing relevance in both laptops and desktops. Whatever minor x86 gains or die shrinks they can leverage via their large infrastructure, they can't mask their glaring inadequacy by pre-shipping some kid a new chip and having the shill chorus praise them. They also managed to screw NVidia AND Apple over with the stunt they pulled with the i-something chips of including an unbearably crap and backward non-OpenCL igfx on the same die. A cheap, bullish way to shove an anachronistic product down everyone's throat and do away with NVidia, all in one coup. It's the reason why the MacBooks are stuck with the Core 2s and why the Air has taken aeons to be updated.

I don't think Apple will take this lightly. Apple doesn't care what's inside, and Apple customers don't care about lists of specs either, as long as the end product is effective. And I think that provided AMD overcome some hurdles, they pose a very interesting proposition for Apple: a tight, integrated and exclusive partnership that would give Apple the best gfx on the globe and some great APUs (GPUs and CPUs) too, at very good prices. I think AMD will finally join Apple for their mutual benefit.

In this regard we share a common view. AMD has the potential to place a lot of their Bobcat based Ontario chips, even in Apple products. If the rumors are true they would be perfect for the AIR line. Given a high enough clock they might even power a Mini and some of the MacBooks. In the short term Ontario is likely the only path to quad cores in low end laptops.

I'm really hoping AMD is successful here as they could put a hurt on Intel. As indicated Apple has to be a little bit pissed at Intel.


Dave
post #29 of 64
Quote:
Originally Posted by guinness View Post

I don't understand the hangup over OpenCL or GPU's in general under OSX; I know Apple wants OpenCL, but it hasn't gone anywhere. Additionally, Apple is behind the curve on OpenGL extensions anyhow.

OpenCL development needs to ramp up but the driver support needs to be complete. OpenCL 1.1 was only ratified on June 14th this year:

http://www.khronos.org/news/press/re...uting-standard

It's going through the necessary cycle of testing, finding missing features and then adding them. Once that reaches a certain level, it can be used more. Ideally, OpenGL shouldn't need extensions at all. If you take the example of rendering engines like VRay, Mental Ray and Renderman, they are used to produce photorealistic 3D content and the "extensions" are basically just shader kernels written in a computational language. If you need a new lighting model such as ambient occlusion or subsurface scattering, you just write one in software and you can reuse it on any machine. Slower machines simply run it slower or you can set options in the shader functions to take fewer samples.

This is where GPUs need to get to and once they do, it will have a big impact on not only games but movie production.

If Apple got Pixar to develop a Renderman-compliant engine using OpenCL that can integrate with physics engines like Havok etc and got it to work on the iPhone as well as in Motion/Final Cut, well that would be pretty neat.
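
To show what "shader kernels written in a computational language" looks like in practice, here's a toy diffuse-lighting kernel in OpenCL C (a made-up illustration, not code from VRay, Mental Ray or Renderman); the same source runs on any device with a conformant driver, just at different speeds.

/* One work-item per shading sample; vectors packed in float4 with w = 0 so the
   4-component dot() and normalize() behave like 3-vector operations. */
__kernel void lambert_shade(__global const float4 *normals,
                            __global const float4 *positions,
                            __global float4 *output,
                            const float4 light_pos,
                            const float4 light_color)
{
    size_t i = get_global_id(0);
    float4 n = normalize(normals[i]);
    float4 l = normalize(light_pos - positions[i]);
    float ndotl = fmax(dot(n, l), 0.0f);   /* clamp back-facing samples to black */
    output[i] = light_color * ndotl;       /* simple Lambert diffuse term */
}

Want a different lighting model? You edit the kernel, not the driver.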

Quote:
Originally Posted by guinness View Post

Honestly, I could probably get by with Intel GMA HD graphics right now on my PC (using an AMD 5770), as it would still accelerate video playback and Flash, if not for the occasional game. That's probably all the average consumer would want as well, and if Intel could fit the bill, that's probably the ticket.

It will get to that point. Despite Intel's slowness, it's a long game and once they hit that threshold of the GPU being adequate for most things then it won't matter one bit to the consumer. It's unfortunate that Intel might win in the end as they have always by far been the worst in this field and simply don't deserve to win.
post #30 of 64
Quote:
Originally Posted by guinness View Post

I don't understand the hangup over OpenCL or GPU's in general under OSX; I know Apple wants OpenCL, but it hasn't gone anywhere. Additionally, Apple is behind the curve on OpenGL extensions anyhow.

I'm not sure where you get your info or where your opinions come from, but I don't think they are supported by the facts. Apple's been very successful with OpenCL, with just about every GPU vendor on board with support. That includes Imagination. On the implementation side I've seen much that indicates people are having success with OpenCL.

OpenCL doesn't have anything to do with OpenGL extensions, other than the possibility of implementing those extensions in OpenCL. OpenCL's future rests in uses outside of the graphics world. That doesn't mean that OpenCL won't be important in the graphics world, just that it is a limited world.
Quote:

Apple had no problem using Intel GPU's before (GMA 950 and x3100) - they were crap, but that didn't stop them, at least not until OpenCL, which again, has gone nowhere.

Your grasp of history is wanting here. The first Intel machines had these GPUs because Apple teamed up with Intel to switch to x86. At which point the consumers started to complain loudly. OpenCL had very little to do with the move away from Intel integrated graphics.

In any event, where do you get this idea that OpenCL has gone nowhere? Apple uses it, as do third party vendors. You wouldn't know if a piece of software used OpenCL or not, because the recommendation is to fall back to other resources like the main CPU. The only potential indicator of OpenCL use is faster execution.

In any event, please tell us why you think OpenCL has gone nowhere.
Quote:
It seems more like a carrot and stick to me.

Honestly, I could probably get by with Intel GMA HD graphics right now on my PC (using an AMD 5770), as it would still accelerate video playback and Flash,

Maybe, maybe not; it depends upon your expectations and usage. For many, Intel's GPUs are so slow as to be worthless. Some will be happy, but then again some people still drive VW buses built in the sixties.
Quote:
if not for the occasional game.

This is the sort of garbage that really frustrates me. You do realize that a good GPU can be put to advantage for a lot more than games, right? Maybe the occasional game is the only stress you put on the GPU in your system. If so, great, but I offer up this: in the future you won't know exactly what parts of the OS or installed apps are using OpenCL code or the GPU in general.
Quote:
That's probably all the average consumer would want as well, and if Intel could fit the bill, that's probably the ticket.

Sadly the average consumer is not well informed. In part that is a marketing problem, one that Apple seems to have a good grasp on. They also have apps that might benefit dramatically from OpenCL. A little marketing polish on an app that demonstrates the advantages of OpenCL would keep people interested in buying a good GPU.

Besides all of the above, people miss important elements in the discussion. Number one is that I don't believe GPU performance demand will level off any time soon. If nothing else, higher density displays will be a huge factor. The iPhone has the Retina display, which is a huge improvement. Now imagine a 27" screen at a third, two thirds or the same density. The more pixels to compute, the more GPU power needed.


Dave
post #31 of 64
Quote:
Originally Posted by backtomac View Post

Well then Intel are cheating us.

Do you think you get double the performance or twice the battery life with each die shrink?

Amdahl's Law, pal. The processor is only one component in a very complex system.
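
To put rough numbers on it (a made-up split, just for illustration): overall gain = 1 / ((1 - f) + f/s), where f is the fraction of the system the shrink improves and s is the improvement factor. If the CPU is, say, 40% of platform power and a die shrink doubles its performance per watt, battery life improves by 1 / (0.6 + 0.4/2) = 1.25x, not 2x. The display, radios and drive don't shrink along with the CPU.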
post #32 of 64
Thread Starter 
From a consumer perspective, I tend to agree with guinness. Unfortunately I'm a professional, so Intel's "GPUs" are of no use to me.

I do think that if we were to stretch out GPU timelines waaay into the future, Intel's GPU philosophy may have some merit. But in the here and now, and certainly for the next 5 years AFAIK, Intel's GPUs are an added cost for me, one that doesn't perform in any meaningful sense of the word.

OpenCL I think may be more useful to the Pro market, ie the Get things Done Quickly With My Expensive Gear market. Like 3D rendering, or Audio DSP, protein folding, or nuclear bomb simulations, etc. I do think tapping into latent Teraflop performance is a good idea™.

OpenCL for the consumer is nice, but certainly not a deal breaker for light web surfing and email checking.
post #33 of 64
Thread Starter 
Quote:
Originally Posted by Marvin View Post

OpenCL development needs to ramp up but the driver support needs to be complete. OpenCL 1.1 was only ratified on June 14th this year:

http://www.khronos.org/news/press/re...uting-standard

I think guinness was more pointing out that Apple may have other, more immediate challenges in anything remotely GPU related, rather than implying OpenCL and OpenGL are related. We aren't privy to Apple's priorities, but I would assume fixing their drivers and OpenGL is more important to a majority of their users than fringe OpenCL use cases.

Quote:
It's going through the necessary cycle of testing, finding missing features and then adding them. Once that reaches a certain level, it can be used more. Ideally, OpenGL shouldn't need extensions at all. If you take the example of rendering engines like VRay, Mental Ray and Renderman, they are used to produce photorealistic 3D content and the "extensions" are basically just shader kernels written in a computational language. If you need a new lighting model such as ambient occlusion or subsurface scattering, you just write one in software and you can reuse it on any machine. Slower machines simply run it slower or you can set options in the shader functions to take fewer samples.

This is where GPUs need to get to and once they do, it will have a big impact on not only games but movie production.

Don't fully programmable shader pipes already accomplish this? Isn't that why OpenCL came about?

Quote:
If Apple got Pixar to develop a Renderman-compliant engine using OpenCL that can integrate with physics engines like Havok etc and got it to work on the iPhone as well as in Motion/Final Cut, well that would be pretty neat.

It will get to that point. Despite Intel's slowness, it's a long game and once they hit that threshold of the GPU being adequate for most things then it won't matter one bit to the consumer. It's unfortunate that Intel might win in the end as they have always by far been the worst in this field and simply don't deserve to win.

Agreed. It's almost... monopolistic or something... Forcing a product that 'might not be the best available on the market*' onto everybody's machines to increase sales.

The only upshot is it gives AMD a very clear window of opportunity to potentially crush Intel's performance lead in the market.


*going for understatement of the year award.
post #34 of 64
Quote:
Originally Posted by 1337_5L4Xx0R View Post

From a consumer perspective, I tend to agree with guinness....
OpenCL for the consumer is nice, but certainly not a deal breaker for light web surfing and email checking.

There is this belief that Intel's new IGPs are 'good enough' and that they are going to cannibalize sales of discrete GPUs. Do you need a GPU capable of a teraflop of computing performance to browse the web and check email?

That may be true at least initially. But what's to stop iPads and the like (android tablets) from cannibalizing them? In other words, if you don't need a dedicated GPU to do your computer tasks, browse the web and check email, do you need a quad core CPU? Why do people need a Sandy Bridge CPU to do these things?
post #35 of 64
Thread Starter 
Agreed, and it just makes Intel's IGP 'fusion' strategy all the more baffling, except on Atoms, where they belong.
post #36 of 64
Quote:
Originally Posted by backtomac View Post

There is this belief that Intel's new IGPs are 'good enough' and that they are going to cannibalize sales of discrete GPUs. Do you need a GPU capable of a teraflop of computing performance to browse the web and check email?

That may be true at least initially. But what's to stop iPads and the like (android tablets) from cannibalizing them? In other words, if you don't need a dedicated GPU to do your computer tasks, browse the web and check email, do you need a quad core CPU? Why do people need a Sandy Bridge CPU to do these things?

You make a good point about browsing the web and checking email, but there are plenty of "professional" computer tasks that involve lots of number crunching without fancy graphics. A powerful CPU and "free" integrated graphics makes a lot of sense in a lot of places.
post #37 of 64
Quote:
Originally Posted by FuturePastNow View Post

You make a good point about browsing the web and checking email, but there are plenty of "professional" computer tasks that involve lots of number crunching without fancy graphics. A powerful CPU and "free" integrated graphics makes a lot of sense in a lot of places.

You make a good point.

But that raises the question: would those apps that need to do number crunching benefit from OCL? If so, then they would benefit from a powerful GPU.
post #38 of 64
Thread Starter 
Bingo. And in steps AMD in 2011.
post #39 of 64
Quote:
Originally Posted by backtomac View Post

There is this belief that Intel's new IGPs are 'good enough' and that they are going to cannibalize sales of discrete GPUs. Do you need a GPU capable of a teraflop of computing performance to browse the web and check email?

Look at the old x86 Macs and the number one complaint there. It was all directed at the crappy IGP of those machines, and rightfully so. Even for browsing the web they were a bit underpowered. A slow GUI will leave people with the impression the machine is slow.

So obviously you don't need that high performance GPU. On the other hand, most people didn't want the sluggishness of Intel's solution. It is notable that NVidia's 9400m had a dramatic impact on Apple's hardware, spurring strong sales. In many ways the 9400m is a fairly pedestrian processor, yet it had the effect of giving Apple's low end machines respectable performance when judged against the alternatives.
Quote:
That may be true at least initially. But what's to stop iPads and the like (android tablets) from cannibalizing them? In other words, if you don't need a dedicated GPU to do your computer tasks, browse the web and check email, do you need a quad core CPU?

Well, you never used to need a quad core to browse, but then people started using Flash! Your point is a good one and it depends upon what your goals are. If you assume that the only interest in a desktop is to browse then they do tend to look overpowered. Not everybody is so limited though; many look at mail and browsing as just one of many tasks they do.

What I seem to be hearing here is that you think that the constant rush to better and better GPUs will go away with the coming tablets. Nothing could be further from the truth. Samsung just released a little info on their new Cortex A9 based SoC; one bullet point is 4x better 3D graphics. In the end the iPad is already suitable for Mail and browsing, yet one evolutionary path will give us much better graphics. Why go that route? Probably because almost everybody does more than e-mail and browsing on their computers. Even browsing can demand a lot from the GPU to support newer features like SVG.
Quote:
Why do people need a Sandy Bridge CPU to do these things?

If that is all they did then they don't need Sandy Bridge or, for that matter, a PC. I for one am very surprised at how much of my e-mail is now done on an iPhone. Even at that I'm lusting after the iPhone 4, as it has better overall performance due in part to the GPU. Which brings up another thing: we could soon see chips in an iPad that support OpenCL. I don't expect the iPad to ever be used for high performance computing, but I have to admit that the iPad is far more powerful than most of the computers I've ever owned. That is the current model; put a dual core chip in an iPad and it will be the second SMP machine I've owned.

It is rather incredible to look at what handheld devices can do these days. My little 3G iPhone outclasses many of the computers I've used over the years. The reality is each process shrink gives engineers a lot more room to plug functionality into a chip or computer. A lot of that space goes to the GPU because we get a big payoff there.

One last thing. We are talking about GPUs driving current display technology. What sort of GPU is required to drive a 4K display? Or let's say Apple goes a step further with their marketing and decides all their displays should meet the Retina standard, that is, a density that exceeds the ability of most people to resolve at working distances. How much GPU power will that take for Mail? How about 3D graphics? The good thing is higher performance GPUs permit the move to higher resolution displays. The original video processors had trouble with video CRTs; now we drive displays with millions of colors and pixels. One day we will have GPUs capable of driving 3D holographic projection devices on our desktops. They will still be more than you need for e-mail, but just imagine the porn.
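
For scale: today's 27" iMac panel is 2560x1440, about 3.7 million pixels. Double that density and you are at 5120x2880, roughly 14.7 million pixels, four times as many to composite and shade every frame; a 4K frame (3840x2160) is about 8.3 million. The GPU bill grows roughly with the pixel count even before any 3D work.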


Dave
post #40 of 64
Quote:
Originally Posted by wizard69 View Post

What I seem to be hearing here is that you think that the constant rush to better and better GPUs will go away with the coming tablets.
Dave

No. I think tablets will diminish the demand for machines with IGPs.