Sandy Bridge CPU preview at Anandtech

Posted by 1337_5L4Xx0R in Future Apple Hardware; edited January 2014
http://www.anandtech.com/show/3871/t...-wins-in-a-row



Note this preview CPU lacks final features like Turbo.



Not bad, although new motherboards are required for this CPU. Big gains in the integrated graphics, though that is one area I honestly don't care about. Overclocking gets cracked down on. That doesn't affect Mac users, but enthusiast PC users will be pissed.



Check it. Thirteen pages, in typical PC hardware review style. (Use Safari's Autopager feature!)

Comments

  • Reply 1 of 63
    wizard69 Posts: 13,377 member
    Quote:
    Originally Posted by 1337_5L4Xx0R View Post


    http://www.anandtech.com/show/3871/t...-wins-in-a-row



    Note this preview CPU lacks final features like Turbo.



    I've learned to avoid getting excited about unreleased products. Plus I want to see how SB stands up to the coming AMD offerings.

    Quote:

    Not bad, although new motherboards are required for this CPU. Big gains in the integrated graphics, though that is one area I honestly don't care about. Overclocking gets cracked down on. That doesn't affect Mac users, but enthusiast PC users will be pissed.



    Check it. Thirteen pages, in typical PC hardware review style. (Use Safari's Autopager feature!)



    Yeah, well, many of those PC review articles are garbage, especially in the context of a Mac or Linux user. Many sites are nothing more than marketing vehicles for Intel. For most users the difference between an AMD and an Intel chip is trivial, but the review sites like to turn it into a wide gulf.



    That being said, one of these (AMD or Intel) SoCs is going to make for one very nice Mini.





    Dave
  • Reply 2 of 63
    fran441 Posts: 3,715 member
    Quote:

    Big gains in the integrated graphics, though that is one area I honestly don't care about.



    It may not be something that people in the market for a Pro machine care about, but this is something that is going to be huge when the technology 'trickles down' to the consumer-level Macs.



    The MacBook, MacBook Air, Mac Mini, and even the 13-inch MacBook Pro all have integrated graphics at the moment that are lackluster at best. The types of gains Intel is making with Sandy Bridge make the future of integrated graphics look a heck of a lot better. Despite the gains, it still won't be a match for discrete graphics, but it will really strengthen the lower-end systems. The performance gains actually surprised me.
  • Reply 3 of 63
    wizard69 Posts: 13,377 member
    Well, not against current Macs with integrated graphics. It is good that Intel is doubling performance, no doubt, but its previous results were pathetic.

    Quote:
    Originally Posted by Fran441 View Post


    It may not be something that people in the market for a Pro machine care about, but this is something that is going to be huge when the technology 'trickles down' to the consumer-level Macs.



    Don't misunderstand me, this will be huge; it will be even better if AMD can significantly beat the numbers Intel's GPUs put up. There is an interesting question here about which approach is better. In the end the consumer will win big time.



    What people need to realize is that this sort of tech will put respectable quad cores into Mini-sized computers. The Mini currently has a seventy-watt power supply; low-power variants could easily go in there. No, it won't be a gaming machine, but it will be a big upgrade for Mini users.

    Quote:

    The MacBook, MacBook Air, Mac Mini, and even the 13-inch MacBook Pro all have integrated graphics at the moment that are lackluster at best. The types of gains Intel is making with Sandy Bridge make the future of integrated graphics look a heck of a lot better.



    Two things:

    1.

    Yes, all of those would benefit, and so would models we don't know about.

    2.

    Considering how bad Intel graphics are, this really won't be that much of an improvement over the current machines. In some cases it won't be an improvement at all.



    Further, you really can't compare current hardware with future non-shipping hardware. Let's face it, AMD will have competitive products.

    Quote:

    Despite the gains, it still won't be a match for discrete graphics, but it will really strengthen the lower-end systems. The performance gains actually surprised me.



    Well, I'm not surprised; when you are at the bottom of the barrel, doubling performance becomes mandatory. Apple basically slapped Intel senseless with the helper GPUs on many of its machines. It is a pretty strong message to publicly imply that Intel's GPUs aren't worth the silicon they are written on.



    However, I can't dismiss that SoC tech will be a fantastic thing for smaller computers. You get the three P's: good performance, low power, and pretty graphics, or "pretty performance & power".
  • Reply 4 of 63
    fran441 Posts: 3,715 member
    Quote:

    Further, you really can't compare current hardware with future non-shipping hardware.



    This is the Future Hardware forum on AppleInsider; that's what we do here!
  • Reply 5 of 63
    backtomac Posts: 4,579 member
    Quote:
    Originally Posted by Fran441 View Post


    The MacBook, MacBook Air, Mac Mini, and even the 13-inch MacBook Pro all have integrated graphics at the moment that are lackluster at best.



    You really think the 320M is lackluster?



    Do you expect Intel's new IGP to be significantly better than the 320M? I think you're setting yourself up for disappointment.
  • Reply 6 of 63
    Marvin Posts: 15,324 moderator
    Quote:
    Originally Posted by backtomac View Post


    Do you expect Intel's new IGP to be significantly better than the 320M? I think you're setting yourself up for disappointment.



    Disappointment indeed: the 9400M gets 48fps in Modern Warfare 2 on low quality and Intel's latest graphics are getting 44fps. Also, in YouTube tests the 320M is easily better than a Radeon 5450 by a large margin - the 5450's quality looks like the 9400M's.



    The Intel 4500 that's in the i-series chips was half the speed of a 9400M, so these new ones, being double the speed, will match the 9400M. But that's still half of the 320M.
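    Spelling out the relative-speed chain those (rough, forum-estimate) numbers imply, with the 9400M as the baseline:

    $$\text{SB IGP} \approx 2 \times \text{GMA 4500} \approx 2 \times \tfrac{1}{2} \times \text{9400M} = \text{9400M} \approx \tfrac{1}{2} \times \text{320M}$$

    So even a clean doubling only gets Intel back to last year's NVidia baseline.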



    So it's basically the same story - Intel's GPUs coming out next year will be half the speed of this year's NVidia GPUs, and they still won't support OpenCL, so Apple won't use them on the low end.



    They can't cling to Core 2 Duo forever, so AMD Fusion it is. I reckon AMD won't get Light Peak, though USB 3 support might be enough for now.
  • Reply 7 of 63
    FuturePastNow
    That's not bad at all. A healthy improvement over Nehalem.
  • Reply 8 of 63
    wizard69 Posts: 13,377 member
    Quote:
    Originally Posted by Marvin View Post


    Disappointment indeed: the 9400M gets 48fps in Modern Warfare 2 on low quality and Intel's latest graphics are getting 44fps. Also, in YouTube tests the 320M is easily better than a Radeon 5450 by a large margin - the 5450's quality looks like the 9400M's.



    The Intel 4500 that's in the i-series chips was half the speed of a 9400M, so these new ones, being double the speed, will match the 9400M. But that's still half of the 320M.



    This is a very important point and well presented. Basically Intel doubled its performance and is still well behind. About the only thing it would be good for is a low-end MacBook optimized for long battery life.

    Quote:

    So it's basically the same story - Intel's GPUs coming out next year will be half the speed of this year's NVidia GPUs, and they still won't support OpenCL, so Apple won't use them on the low end.



    Well, I'm wondering about that OpenCL support. Supposedly the ALUs were completely redesigned in the latest rev. It would have been silly for Intel to redesign them and not make them OpenCL compatible, yet I've heard nothing to indicate that OpenCL is supported.



    In any event I have to agree: if Intel didn't go to the effort of supporting OpenCL, I can't see Apple supporting them. Intel seems to be way off base here.

    Quote:



    They can't cling to Core 2 Duo forever, so AMD Fusion it is.



    Which would be good for the overall industry. Intel has been a little too stupid of late. Plus AMD has a very wide range of processors planned in the Fusion family. So we could see AMD stuff in everything from an AIR replacement to a really nice quad-core Mini. As long as we don't see any huge regressions in GPU performance, I think most Mac users will be pretty positive about the change.



    As a side note, AMD's Ontario Fusion product would make for a nice MacBook AIR. It is more than an ATOM-level netbook processor, with my only concern being GPU performance. Its low-power nature, though, should be an excellent match for AIR-like notebooks. Can you tell I'm wishing for a refactored AIR?

    Quote:

    I reckon AMD won't get Light Peak, though USB 3 support might be enough for now.



    That all depends upon how Light Peak gets implemented. If it rides on PCI Express, Apple could implement it with an AMD chip.



    By the way, rumors are that Fusion products will support USB 3. I'm not certain how true that is, nor whether it applies to the entire lineup, but it certainly sounds good. Apparently it is also well implemented performance-wise. Rumors, of course, are not information from the horse's mouth. The little bit rumored, though, hints at an excellent product lineup for AMD. Further, they have long-range plans that should make OpenCL very viable. The end game appears to be making the GPU an equal partner to the CPU.
  • Reply 9 of 63
    Marvin Posts: 15,324 moderator
    Quote:
    Originally Posted by wizard69 View Post


    In any event I have to agree: if Intel didn't go to the effort of supporting OpenCL, I can't see Apple supporting them. Intel seems to be way off base here.



    They have a conflict of interest. They are marketing their Ct language with the following:



    "Forward-scaling: Ct technology lets a single-source application work consistently on multiple multicore and manycore processors with different architectures, instruction sets, cache architectures, core counts, and vector widths without requiring developers to rewrite programs over and over. The benefits of only writing and debugging code once are substantial.



    Ease of use: Ct technology is built off the familiar C++ language and does not require developers to alter or replace standard compilers, or to learn a new programming language. It provides a simple and easy to use portable data parallel programming API that results in simpler and more maintainable code.



    High-level and hardware independent: Reduces low-level parallel programming effort while improving portability and safety with a high-level API that abstracts low-level data parallelization mechanisms. Targets SIMD and thread parallelism simultaneously.



    Safety: Ct technology prevents parallel programming bugs such as data races and deadlocks by design. Ct technology guards against these problems by prompting developers to specify computations in terms of composable, deterministic patterns close to the mathematical form of their problem, not in terms of low-level parallel computation mechanisms. Ct then automatically maps the high-level, deterministic specification of the computation onto an efficient implementation, eliminating the risk of race conditions and non-determinism."



    http://software.intel.com/en-us/data-parallel/



    Most of the wording goes against OpenCL and GPU programming. Intel aren't generally fond of GPU development if it's not x86, and they try hard to dismiss GPU performance, sometimes without quite thinking it through:



    http://www.engadget.com/2010/06/24/n...imes-faster-t/
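    For contrast with Ct's "no new language" pitch, here is a minimal sketch of the OpenCL model being discussed: a hypothetical vector-add, with the data-parallel kernel written in OpenCL C and driven from plain C host code. Illustrative only - the kernel name, sizes, and single-device setup are arbitrary choices, and error handling is omitted:

        /* Minimal OpenCL vector-add sketch (OpenCL 1.0-era API). */
        #include <stdio.h>
        #ifdef __APPLE__
        #include <OpenCL/opencl.h>
        #else
        #include <CL/cl.h>
        #endif

        /* The data-parallel part lives in a separate kernel dialect (OpenCL C):
           each work-item computes one element. */
        static const char *src =
            "__kernel void vadd(__global const float *a,\n"
            "                   __global const float *b,\n"
            "                   __global float *c) {\n"
            "    size_t i = get_global_id(0);\n"
            "    c[i] = a[i] + b[i];\n"
            "}\n";

        int main(void) {
            enum { N = 1024 };
            float a[N], b[N], c[N];
            for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

            /* CL_DEVICE_TYPE_DEFAULT: whatever the platform offers (CPU or GPU);
               the kernel source is the same either way. */
            cl_platform_id plat; clGetPlatformIDs(1, &plat, NULL);
            cl_device_id dev;    clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);

            cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
            cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

            cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
            clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
            cl_kernel k = clCreateKernel(prog, "vadd", NULL);

            /* Buffers are explicit: inputs are copied in, results copied back. */
            cl_mem A = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof a, a, NULL);
            cl_mem B = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof b, b, NULL);
            cl_mem C = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

            clSetKernelArg(k, 0, sizeof A, &A);
            clSetKernelArg(k, 1, sizeof B, &B);
            clSetKernelArg(k, 2, sizeof C, &C);

            size_t global = N;
            clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
            clEnqueueReadBuffer(q, C, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

            printf("c[3] = %f (expect 9.0)\n", c[3]); /* 3 + 6 */
            return 0;
        }

    The separate kernel dialect and the explicit buffer copies are exactly the "low-level data parallelization mechanisms" Intel's Ct copy positions itself against.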



    Quote:
    Originally Posted by wizard69 View Post


    So we could see AMD stuff in everything from an AIR replacement to a really nice quad-core Mini. As long as we don't see any huge regressions in GPU performance, I think most Mac users will be pretty positive about the change.



    Supposedly Llano has a Redwood GPU part:



    http://www.xbitlabs.com/news/video/d...s_Company.html



    as in, a 5500-series GPU. If there's a quad Phenom II in the mix, that's pretty crazy. I'd expect some down-clocking, but still: highly parallel and way faster than the 5400-series mentioned in that link, which Intel's GPU is comparable to.



    Quote:
    Originally Posted by wizard69 View Post


    As a side note, AMD's Ontario Fusion product would make for a nice MacBook AIR. It is more than an ATOM-level netbook processor, with my only concern being GPU performance. Its low-power nature, though, should be an excellent match for AIR-like notebooks. Can you tell I'm wishing for a refactored AIR?



    For that performance, the price of the Air would have to drop significantly. The benefit right now is that it's smaller but around the same performance as the MacBook. I'd rather they dropped the Air and just added some of that design to the MacBook: no optical drive and a user-replaceable SSD. The iPad is moving into the realm of the thin, light travel companion, with very long battery life, half the weight of the Air, and an IPS screen.
  • Reply 10 of 63
    backtomac Posts: 4,579 member
    Quote:
    Originally Posted by FuturePastNow View Post


    That's not bad at all. A healthy improvement over Nehalem.



    Intel have become somewhat predictable in regard to performance improvements with the 'tick-tock' CPU upgrade cycles.



    The tocks (architectural changes) give about a 20% improvement. The ticks (die shrinks) give about 5% clock speed improvement, better battery life for the mobile parts, and a couple more cores on the high-end parts.



    For Macs it seems the best bet is to wait for the architectural changes.
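    Taking those rough percentages at face value, one full tick+tock pair compounds to about

    $1.20 \times 1.05 = 1.26$

    or roughly a 26% gain per two-year cycle before counting the extra cores - a back-of-the-envelope reading of the numbers above, not an official roadmap figure.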
  • Reply 11 of 63
    wizard69 Posts: 13,377 member
    Quote:
    Originally Posted by Marvin View Post


    They have a conflict of interest. They are marketing their Ct language with the following:



    "Forward-scaling: Ct technology lets a single-source application work consistently on multiple multicore and manycore processors with different architectures, instruction sets, cache architectures, core counts, and vector widths without requiring developers to rewrite programs over and over. The benefits of only writing and debugging code once are substantial.



    Ease of use: Ct technology is built off the familiar C++ language and does not require developers to alter or replace standard compilers, or to learn a new programming language. It provides a simple and easy to use portable data parallel programming API that results in simpler and more maintainable code.



    High-level and hardware independent: Reduces low-level parallel programming effort while improving portability and safety with a high-level API that abstracts low-level data parallelization mechanisms. Targets SIMD and thread parallelism simultaneously.



    Safety: Ct technology prevents parallel programming bugs such as data races and deadlocks by design. Ct technology guards against these problems by prompting developers to specify computations in terms of composable, deterministic patterns close to the mathematical form of their problem, not in terms of low-level parallel computation mechanisms. Ct then automatically maps the high-level, deterministic specification of the computation onto an efficient implementation, eliminating the risk of race conditions and non-determinism."



    http://software.intel.com/en-us/data-parallel/



    Most of the wording goes against OpenCL and GPU programming. Intel aren't generally fond of GPU development if it's not x86, and they try hard to dismiss GPU performance, sometimes without quite thinking it through:



    http://www.engadget.com/2010/06/24/n...imes-faster-t/



    Intel is also good at shooting themselves in the foot, but I really don't think they grasp the utility of GPUs. In the end their recent behavior is leading to some sourness in the user community. It is like they are giving AMD an opportunity to be successful.

    Quote:

    Supposedly Llano has a Redwood GPU part:



    http://www.xbitlabs.com/news/video/d...s_Company.html



    as in, a 5500-series GPU. If there's a quad Phenom II in the mix, that's pretty crazy. I'd expect some down-clocking, but still: highly parallel and way faster than the 5400-series mentioned in that link, which Intel's GPU is comparable to.



    In the end we need to learn more about how these will perform in AMD's implementation. They already admit to less memory bandwidth but hope to make that up with the closer coupling to the CPU.

    Quote:

    For that performance, the price of the Air would have to drop significantly. The benefit right now is that it's smaller but around the same performance as the MacBook.



    Actually, I don't think the performance is anywhere near a MacBook's. There is too much thermal throttling for it to sustain good performance, plus the clock rate is rather modest to begin with. While Ontario isn't designed to be a performance powerhouse, if they can hit two GHz in a four-core model it would be an excellent AIR processor and effectively offer better performance. I read somewhere an estimated performance of something like 80 to 90 percent of a desktop processor; whatever it is, it is expected to be much faster than ATOM. If the Bobcat core's power usage is as low as expected, it could easily power the AIR.



    Can AMD put together an Ontario implementation that beats the 1.8 GHz processor currently in the AIR? That is the question. By beat I mean both in respect to CPU performance and GPU performance. It might be too much to ask for, and details are lacking to even guess right now, but the idea seems possible.

    Quote:

    I'd rather they dropped the Air and just added some of that design to the MacBook: no optical drive and a user-replaceable SSD.



    Well, I have to agree that innovation in the portables is lacking. Frankly I want my next MacBook Pro to have bays for SSDs and maybe one HD. At least two, possibly three.



    AIR has a place, though I just don't know where that place is for the current model. The hope is that if they do refactor the AIR they reconsider some of the wanting design choices.

    Quote:

    The iPad is moving into the realm of the thin, light travel companion, with very long battery life, half the weight of the Air, and an IPS screen.



    Yes, the iPad is very nice, and frankly I'm very tempted to buy one, but it isn't a laptop. If you need a laptop you won't be considering the iPad. AIR and the other small portables from Apple will benefit from the same technology shrinks that enabled the iPad: in this case SoC x86 processors, very dense RAM, and solid-state storage. If we could get AIR-like performance with half or a quarter of the currently required power, battery life would increase dramatically.



    It would be nice to see what Apple is up to here real soon. AIR is so old that it is difficult to justify its high price.
  • Reply 12 of 63
    wizard69 Posts: 13,377 member
    Quote:
    Originally Posted by backtomac View Post


    Intel have become somewhat predictable in regard to performance improvements with the 'tick-tock' CPU upgrade cycles.

    ........................



    For Macs it seems the best bet is to wait for the architectural changes.



    The best time to buy a Mac is right after the model is introduced. At that time they are pretty good values. Well, generally; some, such as the AIR, have never been really good values.





    If Apple does adopt some AMD chips I would expect the same thing.
  • Reply 13 of 63
    1337_5L4Xx0R
    I think Intel's current and future lack of OpenCL support is obvious. If they supported OpenCL, it would make for very unfavorable comparisons with AMD and NVidia. By any metric besides cost, Intel would lose handily.



    It's especially hard for a Mac user to feign interest in Intel GPUs; Macs have such abysmal 3D performance as is. Who cares what bottom-of-the-barrel 3D solutions can do? Anyone who has the coin for a Sandy Bridge CPU and motherboard is going to have the money and sense to get a proper GPU in there.



    Quote:

    So it's basically the same story - Intel's GPUs coming out next year will be half the speed of this year's NVidia GPUs, and they still won't support OpenCL, so Apple won't use them on the low end.



    Bingo.



    Fran441? Holy smokes, you are still around.



    wizard69: Bulldozer won't debut until June 2011 or so; Sandy Bridge ships in the next few months. Anand at Anandtech is not an Intel fanboi.
  • Reply 14 of 63
    wizard69 Posts: 13,377 member
    Quote:
    Originally Posted by 1337_5L4Xx0R View Post




    ........

    Bingo.



    Fran441? Holy smokes, you are still around.



    wizard69: Bulldozer won't debut until June 2011 or so,



    Nobody even mentioned Bulldozer! I brought up the Fusion lineup, which has two devices coming near term. One Fusion product is called Ontario, with a release expected very shortly. This Fusion product uses the new Bobcat core and is to be marketed against Intel's ATOM at the low end and Intel's ULV chips above the ATOMs.



    The second known Fusion product is called Llano and is due early next year. This chip leverages existing AMD cores. There is less public info on this guy (not that there is a lot of Ontario info floating about), but it is believed destined for desktop devices and maybe high-end notebooks.



    Bulldozer isn't even in the Fusion lineup for all of next year, as far as I know.

    Quote:

    Sandy Bridge ships in the next few months. Anand at Anandtech is not an Intel fanboi.



    He might not be a fanboi, but he does have early access to unreleased Intel chips. That is enough for me to question what he prints; the old grain-of-salt rule, if you will.



    Frankly I don't like the way he reported the "doubling" of GPU performance. It doesn't take much effort to use glowing paint to report, err, twist some facts to impress. The fact is, even with its doubled performance the Intel hardware is still crappy when compared to shipping products, some of which are almost a year old now. Instead of fanboi, how about shill?





    Dave
  • Reply 15 of 63
    programmer Posts: 3,458 member
    Quote:
    Originally Posted by 1337_5L4Xx0R View Post


    If they supported OpenCL, it would make for very unfavorable comparisons with AMD and NVidia. By any metric besides cost, Intel would lose handily.



    As a blanket observation, this is untrue. Intel CPUs can stand up to and even crush GPUs in many circumstances -- it depends entirely on the algorithms being used. Some problems highlight the power of the GPU, others highlight the limitations and constraints of the GPU. OpenCL is appropriate for both.



    Intel's position is supportive of OpenCL, and they are actively participating in it... highlighting their CPU performance. Sandy Bridge will have an improved CPU OpenCL story with AVX and the other architectural improvements. The integrated GPU may just not be sophisticated enough to support OpenCL... we don't know yet.
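    To illustrate that point, the same OpenCL host API enumerates and targets CPU devices just as it does GPUs. A minimal sketch, assuming a runtime that exposes the CPU as a compute device (Apple's does); error handling is omitted:

        /* List every OpenCL device and its type, then ask for the CPU explicitly. */
        #include <stdio.h>
        #ifdef __APPLE__
        #include <OpenCL/opencl.h>
        #else
        #include <CL/cl.h>
        #endif

        int main(void) {
            cl_platform_id plats[4];
            cl_uint nplat = 0;
            clGetPlatformIDs(4, plats, &nplat);

            for (cl_uint p = 0; p < nplat; p++) {
                cl_device_id devs[8];
                cl_uint ndev = 0;
                clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_ALL, 8, devs, &ndev);

                for (cl_uint d = 0; d < ndev; d++) {
                    char name[128];
                    cl_device_type type;
                    clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof name, name, NULL);
                    clGetDeviceInfo(devs[d], CL_DEVICE_TYPE, sizeof type, &type, NULL);
                    printf("%s: %s\n", name,
                           (type & CL_DEVICE_TYPE_CPU) ? "CPU" :
                           (type & CL_DEVICE_TYPE_GPU) ? "GPU" : "other");
                }
            }

            /* The same kernels run on whichever device you pick; an AVX-capable
               CPU is just another compute device to the runtime. */
            cl_device_id cpu;
            if (clGetDeviceIDs(plats[0], CL_DEVICE_TYPE_CPU, 1, &cpu, NULL) == CL_SUCCESS)
                printf("CPU device available for OpenCL.\n");
            return 0;
        }

    Whether the Sandy Bridge IGP itself ever shows up in that device list is exactly the open question.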
  • Reply 16 of 63
    wizard69 Posts: 13,377 member
    Quote:
    Originally Posted by Programmer View Post


    As a blanket observation, this is untrue. Intel CPUs can stand up to and even crush GPUs in many circumstances -- it depends entirely on the algorithms being used. Some problems highlight the power of the GPU, others highlight the limitations and constraints of the GPU. OpenCL is appropriate for both.



    OpenCL may be suitable for both, but that really isn't the issue here. Rather, it is the rather anemic performance of Intel's GPUs that is at issue - that is, relative to the performance of other GPUs that can be leveraged with the right code.

    Quote:

    Intel's position is supportive of OpenCL, and they are actively participating in it... highlighting their CPU performance.



    Intel's marketing here has been totally screwed up. First, they don't seem to realize that people have made decisions to move away from Intel hardware because of its performance penalty. People sink significant money into moving code to GPUs because they get a payoff. Intel might not think 14 times faster is something to worry about, but many can justify the effort with a 2x gain. If your algorithms fit a GPU's parallel nature, Intel doesn't have much to keep you on x86.

    Quote:

    Sandy Bridge will have an improved CPU OpenCL story with AVX and the other architectural improvements.



    Which nobody cares about.



    Especially down the road when AMD has their integrated Fusion GPUs addressing the same address space as the CPU. Many of the penalties associated with GPUs then go away, and every machine becomes a heterogeneous compute platform.

    Quote:

    The integrated GPU may just not be sophisticated enough to support OpenCL... we don't know yet.



    I wouldn't be surprised if it does. After all, the cores are entirely new. New but lackluster, unless they are hiding info we don't know about.



    My point of view is this: for a short time AMD will have a chance to establish themselves as a performance leader in SoC processors. They can do that via their GPU technology, something Intel doesn't have an answer for yet.



    In any event it will be interesting to see how the two companies' products pan out. It will be even harder to get factual, sound, and unbiased opinions.





    Dave
  • Reply 17 of 63
    programmer
    Quote:
    Originally Posted by wizard69 View Post


    Which nobody cares about.



    And that's a problem... there are many cases where a quad-core (or better) i7 crushes a discrete GPU running the same OpenCL code. Systems are going to be heterogeneous, and people need to realize that what matters is how all parts of the system perform.



    Some Sandy Bridges will come with an integral GPU for people that only need low-end performance (and note that it is now up to current discrete-part performance levels, so this is going to be a very large part of the market), and other SBs will have only PCIe lanes for attaching discrete GPUs. And into the future Intel will continue improving their GPU performance... its performance is increasing at a rate faster than the high-end GPUs, so they are catching up, and the higher the overall performance gets, the fewer people in the market will need the extra expense of the high-end discrete parts. AMD may (may... they haven't proven it yet) have a short-term advantage, but it isn't going to last.
  • Reply 18 of 63
    backtomac Posts: 4,579 member
    Quote:
    Originally Posted by Programmer View Post


    And that's a problem... there are many cases where a quad-core (or better) i7 crushes a discrete GPU running the same OpenCL code. Systems are going to be heterogeneous, and people need to realize that what matters is how all parts of the system perform.



    Some Sandy Bridges will come with an integral GPU for people that only need low-end performance (and note that it is now up to current discrete-part performance levels, so this is going to be a very large part of the market), and other SBs will have only PCIe lanes for attaching discrete GPUs. And into the future Intel will continue improving their GPU performance... its performance is increasing at a rate faster than the high-end GPUs, so they are catching up, and the higher the overall performance gets, the fewer people in the market will need the extra expense of the high-end discrete parts. AMD may (may... they haven't proven it yet) have a short-term advantage, but it isn't going to last.



    Going to differ with you here.



    The SB that Anand reviewed recently apparently had the 'high-end' IGP, with 12 execution units vs 6 EUs, that Intel will offer with SB. So basically this is as good as it gets with Intel IGP and SB. Granted, it isn't 'bad' IF you get a SB CPU with the IGP that has the 12 EUs. If you get one with 6 EUs, I'm going to go out on a limb and predict that they're going to suck pretty hard. Still, the best you can get from Intel is 9400M performance, which is half the current 320M.



    While Intel have made great strides in improving the IGP performance, it's coming from a base of very weak performance. Getting their IGPs to perform at the level of a 9400M is a profound improvement, but that is low-hanging fruit in my view. Where they go from here will be telling. While we don't know if the new Intel IGPs will support OCL, Anand mentioned in his review that he did not think they would.



    I still think this will be an interesting horserace. If you look at overall CPU/GPU theoretical performance, I still believe that AMD can be better than Intel and SB. The CPUs may be 20% slower than Intel's SB CPUs, but the GPUs should be 2x faster. I can't wait to see what Bulldozer and Bobcat bring to the table. More than anything, we need OCL apps. IMO that's the big limitation at the moment.
  • Reply 19 of 63
    programmer
    Quote:
    Originally Posted by backtomac View Post


    Going to differ with you here.



    Fair enough, you're allowed to.



    Quote:

    The SB that Anand reviewed recently apparently had the 'high-end' IGP, with 12 execution units vs 6 EUs, that Intel will offer with SB. So basically this is as good as it gets with Intel IGP and SB. Granted, it isn't 'bad' IF you get a SB CPU with the IGP that has the 12 EUs. If you get one with 6 EUs, I'm going to go out on a limb and predict that they're going to suck pretty hard. Still, the best you can get from Intel is 9400M performance, which is half the current 320M.



    While Intel have made great strides in improving the IGP performance, it's coming from a base of very weak performance. Getting their IGPs to perform at the level of a 9400M is a profound improvement, but that is low-hanging fruit in my view. Where they go from here will be telling. While we don't know if the new Intel IGPs will support OCL, Anand mentioned in his review that he did not think they would.



    My point, however, was that we're hitting the diminishing-returns curve in terms of value derived from adding more GPU power (at least without the arrival of some new kind of consumer-compelling OCL app). There is going to be a vast array of people for whom the 6 EU SB will be just fine, a large number for whom 12 EUs does the trick, and the rest will buy SB plus a discrete GPU. Five years ago the Intel GPUs would hardly satisfy anybody... people that ended up with them essentially didn't use their computers for graphics. Now they do a decent job of playing games, handle HD video with aplomb, and rock the desktop GUI. Sure, they won't stand up to the high-end GPUs when running the latest Crytek creation (or equivalent), but you need to take into account that an enormous percentage of the consumer base doesn't care. In fact, they don't even notice when you turn off some of the higher-end graphics features so that the lower-end GPUs can get a decent frame rate. They don't notice 30 vs 60 Hz. They'll watch a demo, see the price difference, and choose the cheaper machine. There are always the more 'discerning' buyers or the geeks who will spring for more power, but we're starting to crest the curve... it's getting harder to justify the extra money at the marginal rate of improvement.



    Quote:

    I still think this will be an interesting horserace. If you look at overall CPU/GPU theoretical performance, I still believe that AMD can be better than Intel and SB. The CPUs may be 20% slower than Intel's SB CPUs, but the GPUs should be 2x faster. I can't wait to see what Bulldozer and Bobcat bring to the table. More than anything, we need OCL apps. IMO that's the big limitation at the moment.



    Actually, I think the horserace has started to become less interesting. Keep in mind that I've been taking bets on it for almost three decades, so perhaps I'm just getting bored with it, but my point above is that the exact balance AMD and Intel choose to strike between CPU and GPU matters less and less. They'll hit their price/performance and power/performance ratios, the tech world will gibber endlessly about the various details... and the discrete GPU market will continue eroding as it has been. The competition between AMD and Intel will continue, with AMD struggling and Intel remaining the major market force. I expect AMD to slowly lose ground in the long term, even though they may have positive spikes here and there as they push a new generation product out (if they manage to pull ahead at all... which they only seem to do when Intel makes a serious misstep such as Merced or the Pentium 4). Gradually Intel's dominance in manufacturing and process technology will leave AMD and its partners panting with the effort to keep up.



    Of course that is just a straight projection from the trends of the past. It all gets a whole lot less boring if a game changer shows up. I'm not talking about Fusion or Bulldozer or anything else AMD or Intel have publicly talked about... I mean really radical stuff. Stuff in the labs today that turns out to have dramatic and commercializable impacts. Memristors? Quantum computing? Radical changes in manufacturing processes? Larrabee was a potentially interesting bet that didn't (hasn't) paid off, but it was in the "different enough" category to make it worth paying attention to.
  • Reply 20 of 63
    wizard69 Posts: 13,377 member
    What I was trying to get at is the idea that many organizations using GPU computing now do so because they can get a significant advantage by doing so. They understand the trade-offs between narrow-but-fast processing units and wide-but-slower processing units.



    Quote:
    Originally Posted by Programmer View Post


    And that's a problem... there are many cases where a quad-core (or better) i7 crushes a discrete GPU running the same OpenCL code. Systems are going to be heterogeneous, and people need to realize that what matters is how all parts of the system perform.



    The general user simply isn't going to understand, or even want to hear about, heterogeneous systems or the trade-offs between the two approaches. They will simply leave it up to the software vendor to tell them which is the better choice for the program they are running. That is, the software vendor will be expected to spec a minimal GPU for exceptional performance with their code.

    Quote:

    Some Sandy Bridges will come with an integral GPU for people that only need low-end performance (and note that it is now up to current discrete-part performance levels, so this is going to be a very large part of the market), and other SBs will have only PCIe lanes for attaching discrete GPUs. And into the future Intel will continue improving their GPU performance... its performance is increasing at a rate faster than the high-end GPUs, so they are catching up, and the higher the overall performance gets, the fewer people in the market will need the extra expense of the high-end discrete parts.



    GPUs in general have outstripped general CPUs in how they have gained performance in recent years. This is what makes GPUs so useful for certain classes of problems. The fact that they have far less baggage to maintain means that they will continue to advance at a very fast rate performance-wise.



    Yes, that performance will be pretty targeted, which is why we will always need some sort of conventional CPU in our machines. A thousand floating-point processors can be significant for many uses.

    Quote:

    AMD may (may... they haven't proven it yet) have a short-term advantage, but it isn't going to last.



    I'm not willing to throw in the towel for AMD yet. Intel has proven again and again that they don't get GPUs. AMD has uncharacteristically shown some ability to make hard calls to advance its hardware offerings, for example abandoning 3DNow and taking a long time to get the Bobcat cores to market. If it causes Intel to rethink ATOM and its ULV market, then AMD will have accomplished something.



    The problem for AMD is getting enough design wins to make an impact. Selling product can give a company a chance to turn a short-term gain into long-term success. Will it last? That is a very good question, and frankly I'm more optimistic. That is due to a couple of things.



    First, since AMD took over ATI their graphics drivers have improved, in many cases very significantly. It is a sign to me that AMD rattled the cage enough, and did so in a positive way, so that good things came about.



    Second, Intel is a long way, a very very long way, from making GPUs that are competitive with AMD's offerings. The fact is GPUs are extremely important these days for many users, some of whom don't even realize it. If AMD has any sense at all they will make a strenuous effort to keep that lead into the future. Right now it should be easy for them to do. Maybe you are right and Intel will take off the rose-colored glasses and aggressively go after the GPU market, but I've yet to hear any noise indicating this is happening. Instead we get an integrated GPU that still isn't competitive with year-old tech from another supplier.



    Besides, I don't measure success at AMD in increments of beating Intel. As long as they get back on track and gain market share, that is a perfectly good outcome. Fusion is one element in getting back on track. It is a good one, too, as like it or not the CPU isn't as important as it used to be.



    Dave