I thought Intel and NVidia had settled their suit. When I did a search I didn't see anything definitive, but the trial was supposed to start on the 6th. Assuming a settlement is in place and it allows NVidia to make chipsets for Intel chips, Apple would be able to use both Intel's latest chips and NVidia's integrated graphics chips.
They settled the suit. But IIRC Nvidia came out with a statement that they weren't going to make chipsets for Nehalem, etc.
Edit: I see another poster mentioned this already. Oh well...
Those results don't make sense. Rough estimates from my readings online suggest Sandy Bridge IGPs will fall between the Arrandale IGPs and the 9400M. Highly unlikely to be better.
It depends whether some of the tested chips had unfinalised features, though. One of the improvements to the new EUs is that they do transcendental maths in hardware, so sine and cosine on the GPU are "several orders of magnitude faster" than with Arrandale. Taken literally, that doesn't mean 70x but more like 10,000,000x faster (unless they used the terminology wrong).
Part of their design focus for SB was to put everything that didn't need to be programmable into fixed-function hardware. This is what they did with OnLive. The real-time compression algorithms were tested on high-end Xeon workstations and maxed them out, but by putting them in silicon, they matched the performance at a fraction of the cost and power consumption.
Their idea is that anything you need flexibility for, you do on the CPU, and for the most common operations, you use fixed function. This might end up faster than GPGPU because instead of recoding entire blocks of code to send off to the GPU, you just use the fixed-function parts of the CPU per line of code and the compiler or run-time can speed this up automatically - no recoding required, and you also get process protection (no lockups like on a GPU).
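To illustrate the "no recoding required" part - just a sketch of the general idea, nothing to do with Intel's actual fixed-function blocks - a plain loop like the one below can be sped up by the compiler's own auto-vectorisation without touching the source, whereas the GPGPU route would mean rewriting it as a kernel and shipping the data across:

/* Illustrative only: a plain C loop the compiler can map onto the CPU's
   SIMD units automatically. Build with e.g. gcc -std=c99 -O3 -mavx saxpy.c */
#include <stdio.h>
#include <stdlib.h>

/* y[i] = a*x[i] + y[i] - per-element work a vectorising compiler
   accelerates with no source changes */
static void saxpy(float a, const float *x, float *y, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    size_t n = 1 << 20;
    float *x = malloc(n * sizeof *x);
    float *y = malloc(n * sizeof *y);
    if (!x || !y)
        return 1;
    for (size_t i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy(3.0f, x, y, n);
    printf("y[0] = %.1f\n", y[0]); /* expect 5.0 */

    free(x);
    free(y);
    return 0;
}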
Quote:
Originally Posted by nvidia2008
BTW, doesn't 3DMark06 reflect a combined CPU+GPU score? In which case the CPU score may be skewing things massively, or there's some other mistake. What's the 3DMark06 GPU score?
The CPU scores are shown on the second link. They are fairly even between all the machines tested even though they have slightly different CPUs in each. The SM 2 & 3 scores matched the GTX quite closely. 3DMark might have picked up the GPU name wrongly or the machine was in hybrid SLI and 3DMark was only choosing the name of the IGP.
The CPU score does make it an unfavourable comparison to current GPUs in current machines, as an i7 mobile CPU gets 3k and the new i7 was scoring 5k, so overall scores would be lower by 2k, but like I say, the comparison between the GPUs tested doesn't have that problem.
Intel have been demoing the chips themselves. Here with Starcraft 2:
http://www.youtube.com/watch?v=4ETnmGn8q5Q
and here with Mass Effect 2:
http://www.youtube.com/watch?v=OwW2Wc9XSD4
The comparison was between integrated and a mid-range discrete GPU. What's interesting there is that the fire on the right with SB looks like it's stuttering whereas it's smooth on the left and those do look like laptops behind the presenter. The power meter was showing Sandy Bridge at <50% of the other one. So given that SB has a 35W TDP, the other machine would be a laptop drawing 50-70W.
They said the effects were all set on maximum. I'm sure the 320M was capable of that but not at a very high resolution and with AA off and it wasn't all that smooth.
It just doesn't feel right though, it's like when we saw the IE9 benchmarks beating every other browser and they found out they were cheating by skipping some of the tests.
Anyway, if they manage to allow people to play Mass Effect 2 on entry laptops at maximum quality, I think that will be a good achievement as well as a 100% speedup over C2D.
Those results don't make sense. Rough estimates from my readings online suggest Sandy Bridge IGPs will fall between the Arrandale IGPs and the 9400M. Highly unlikely to be better.
BTW, doesn't 3DMark06 reflect a combined CPU+GPU score? In which case the CPU score may be skewing things massively, or there's some other mistake. What's the 3DMark06 GPU score?
I'll repeat my assertion: Sandy Bridge IGPs are still crap. Feel free to prove me wrong. (Not being sarcastic here)
From the 3DMark website:
"A 3DMark score is an overall measure of your system's 3D gaming capabilities, based on comprehensive real-time 3D graphics and processor tests. By comparing your score with those submitted by millions of other gamers you can see how your gaming rig performs, making it easier to choose the most effective upgrades or finding other ways to optimize your system."
So yes, it is a combination of CPU and GPU. The real test will be whether it pulls similar gaming numbers and whether the game features look as nice as they do on discrete GPUs.
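Just to make the CPU-skew point concrete - with made-up weights, not Futuremark's actual formula - here's a toy calculation showing how an overall number moves when only the CPU subscore changes:

/* Toy combined-score sketch. The 0.4/0.4/0.2 weights are invented for
   illustration; they are NOT the real 3DMark06 formula. */
#include <stdio.h>

static double combined_score(double sm2, double sm3, double cpu)
{
    return 0.4 * sm2 + 0.4 * sm3 + 0.2 * cpu;
}

int main(void)
{
    /* Same GPU subscores (~5k, as in the leaked result), two different CPUs */
    printf("with a 3k CPU score: %.0f\n", combined_score(5000, 5000, 3000));
    printf("with a 5k CPU score: %.0f\n", combined_score(5000, 5000, 5000));
    return 0;
}

Same GPU subscores, yet the total swings by a few hundred points just from the CPU, which is why the GPU-only SM2/SM3 numbers are the ones worth arguing over.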
Exactly. I would rather see Apple going AMD than downgrading their graphics to Intel crap.
Agree. Apple's graphics hardware has always been the subject of much humor. The last thing we need is for a new computer to be going sideways or backwards when it should be moving forward.
Guys, we're way off base here. I've reviewed the data.
1. Sandy Bridge IGP cannot possibly do 5k+ in those SM2 and SM3 scores. This is because 5K in SM2 and SM3 in 3DMark06 is what my overclocked ATI Radeon 4830 512MB can do. I can play Starcraft 2 on High settings at 1920x1080, Dirt 2 on max settings at 1920x1080, etc. It really is a capable card. Its modern equivalent is an ATI Radeon 5750 or something like that. If an integrated on-die GPU in an Intel CPU can do that, AMD might as well set their headquarters on fire and collect the insurance money, because this would mean AMD Fusion is completely blown to hell.
2. Those Intel demos you linked to are extremely horrible and really nefarious.
A. The Mass Effect 2 demo is the starting scene where it looks like there are a lot of objects but it is just a short corridor with flame and other effect thingys. The fact that there is no camera movement, no interactivity, means that demo is really worthless. In fact, that scene of the game is one of the easiest to render. Most in-game scenes have 10x that complexity.
B. Starcraft 2, ditto, in SC2 you can dial the detail level way down, and there is not much camera movement and so on.
C. Putting those clips in a nice little window, while good for comparison, means they're probably running at 800x600 or 1024x768 which are extremely low resolutions.
D. Intel never specifies *what* discrete GPU they are comparing it to. Probably a low-end one, for sure.
E. Anyone that has played Mass Effect 2 and Starcraft 2 will know those Intel demos do not test real gameplay experience in any way.
3. Marvin, I appreciate you sourcing all the material, but I have to tell everyone, *this is not the GPU you are looking for* ~ without having to use any mind tricks. Intel's demos show about up to a Nvidia 9400M level of GPU capability, at the very, very best they could approach 320M level of GPU capability, but I highly doubt that.
4. At this stage I cannot but conclude that there is no way Sandy Bridge IGP can deliver anything more than the Nvidia 320M and as such if Apple were to use it, that would be a step backwards. For the MBP 13", I believe Apple will have to bite the bullet and stick a low-cost discrete GPU in there (probably ATI). MacBook Airs are fine because through 2011 I don't think they would get Core i-series chips (unless it is a rebranded Core 2).
5. If you look at iMovie '11 for example, the level of GPU computation that is required to render a lot of the graphics, I really can't see Sandy Bridge IGP doing that, particularly for a "pro" Mac laptop. (And it is clear the GPU is very heavily leveraged because this is what allows the MBA with 320M to actually be able to do iMovie '11 stuff with such a limited CPU - of course the final "render" of the iMovie edit will be much slower, but the MBA is highly responsive during editing and effects, transitions, etc.)
Quote:
Originally Posted by Marvin
The CPU scores are shown on the second link. They are fairly even between all the machines tested even though they have slightly different CPUs in each. The SM 2 & 3 scores matched the GTX quite closely. 3DMark might have picked up the GPU name wrongly or the machine was in hybrid SLI and 3DMark was only choosing the name of the IGP.
The CPU score does make it an unfavourable comparison to current GPUs in current machines, as an i7 mobile CPU gets 3k and the new i7 was scoring 5k, so overall scores would be lower by 2k, but like I say, the comparison between the GPUs tested doesn't have that problem.
Intel have been demoing the chips themselves. Here with Starcraft 2:
The comparison was between integrated and a mid-range discrete GPU. What's interesting there is that the fire on the right with SB looks like it's stuttering whereas it's smooth on the left and those do look like laptops behind the presenter. The power meter was showing Sandy Bridge at <50% of the other one. So given that SB has a 35W TDP, the other machine would be a laptop drawing 50-70W.
They said the effects were all set on maximum. I'm sure the 320M was capable of that but not at a very high resolution and with AA off and it wasn't all that smooth.
It just doesn't feel right though, it's like when we saw the IE9 benchmarks beating every other browser and they found out they were cheating by skipping some of the tests.
Anyway, if they manage to allow people to play Mass Effect 2 on entry laptops at maximum quality, I think that will be a good achievement as well as a 100% speedup over C2D.
It really doesn't matter what Intel GPUs have been like in the past; you can only judge SB on what Apple ships with it. We can pretty much assume right off the bat that Apple's drivers will be wanting, for example. As to Apple hardware, it wouldn't be impossible for them to ship hardware with the low-end Sandy Bridge GPU.
Quote:
Originally Posted by nvidia2008
Those results don't make sense. Rough estimates from my readings online suggest Sandy Bridge IGPs will fall between the Arrandale IGPs and the 9400M. Highly unlikely to be better.
I'm not sure why you would say that! Seriously, Intel could simply have been embarrassed about its previous GPUs. Or it could be scared to death of AMD's coming Fusion products. I suspect that it has become obvious to the industry that 3D performance is a big deal these days and that in general the GPU plays a huge part in positive user experiences.
Quote:
BTW, doesn't 3DMark06 reflect a combined CPU+GPU score? In which case the CPU score may be skewing things massively, or there's some other mistake. What's the 3DMark06 GPU score?
I'll repeat my assertion: Sandy Bridge IGPs are still crap. Feel free to prove me wrong. (Not being sarcastic here)
Until there are shipping Apple machines we simply don't know! I have zero faith in these mouthpieces talking about pre-release Intel hardware. They have no credibility at all in my mind. I do trust that Intel has completely overhauled Sandy Bridge and that as a result it is a much better upgrade than many new Intel releases.
As a side note, there may be concerns about how they are getting these good numbers with the limited execution units on the GPUs. This has me wondering what the device's clock rate is like. A fast clock might skew some benchmarks more than perhaps it should. We really need specifics on the hardware before we start to wonder about numbers.
I'm not sure I follow you here. Are you sure you aren't confusing OpenCL with OpenGL?
I know exactly what OpenCL is, but thanks for assuming I don't.
That's why I said I was confused:
Quote:
Originally Posted by FuturePastNow
This is a good decision. Most people don't play 3D games and won't notice a slight reduction in graphics performance, but they might notice a (comparatively much larger) increase in CPU performance, and they will notice an increase in battery life.
OpenCL is never going to amount to anything on such low-end GPUs as the Nvidia IGPs we're discussing.
Then why wouldn't access to additional processing power -- via GPGPU -- be a good thing? If the 320M (or better) IGP is underutilized except when decoding HD content, why not use that for specific types of application processes that can benefit from the OpenCL architecture?
And I don't know that I agree that the Nvidia IGP is all that "low end" for a properly written OpenCL process.
Also (and I'm no expert here), don't our Windows-using brethren need a certain level of 3D support for the basic Aero interface in Win7? Isn't this a larger problem space for Intel than just smaller Apple notebooks?
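For what it's worth, the kind of work I mean is the trivially data-parallel stuff - here's a minimal OpenCL kernel sketch (kernel name and parameters are made up) of the per-pixel sort of job even a modest IGP can be handed:

/* Hypothetical per-pixel kernel: scale the brightness of an RGBA8 image.
   One work-item per pixel; this is OpenCL C, the kernel dialect. */
__kernel void brighten(__global const uchar4 *src,
                       __global uchar4 *dst,
                       float gain)
{
    size_t i = get_global_id(0);               /* which pixel am I? */
    float4 p = convert_float4(src[i]) * gain;  /* scale in float */
    dst[i] = convert_uchar4_sat(p);            /* clamp back to 0-255 */
}

Nothing clever, but it's exactly the shape of work (no branching, no dependencies between pixels) where even a 320M-class part beats looping over pixels on the CPU.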
However, that doesn't mean that the GPU isn't significantly improved. We won't know until product ships.
Quote:
Originally Posted by nvidia2008
Guys, we're way off base here. I've reviewed the data.
1. Sandy Bridge IGP cannot possibly do 5k+ in those SM2 and SM3 scores. This is because 5K in SM2 and SM3 in 3DMark06 is what my overclocked ATI Radeon 4830 512MB can do. I can play Starcraft 2 on High settings at 1920x1080, Dirt 2 on max settings at 1920x1080, etc. It really is a capable card. Its modern equivalent is an ATI Radeon 5750 or something like that. If an integrated on-die GPU in an Intel CPU can do that, AMD might as well set their headquarters on fire and collect the insurance money, because this would mean AMD Fusion is completely blown to hell.
It wouldn't be impossible for the hardware or the drivers to be cooked for the benchmarks. It isn't like it hasn't been done in the past. We will only have an answer when we can run real hardware and software up against competing platforms.
One should not, however, discount sneaky approaches that are possible. For example, they could be making extensive use of the CPU vector units. While possibly valid, it would mean many CPU cycles wasted on graphics. With the high-speed buses and cache memory this could be a very real technique.
There are potentially a number of ways for Intel to be sneaky here yet at the same time be somewhat honest.
Quote:
2. Those Intel demos you linked to are extremely horrible and really nefarious.
As is much of what I've seen on the net recently. Much of the previewing appears to be paid marketing from Intel. This is why I stress the need for shipping hardware.
Quote:
A. The Mass Effect 2 demo is the starting scene where it looks like there are a lot of objects but it is just a short corridor with flame and other effect thingys. The fact that there is no camera movement, no interactivity, means that demo is really worthless. In fact, that scene of the game is one of the easiest to render. Most in-game scenes have 10x that complexity.
B. Starcraft 2, ditto, in SC2 you can dial the detail level way down, and there is not much camera movement and so on.
C. Putting those clips in a nice little window, while good for comparison, means they're probably running at 800x600 or 1024x768 which are extremely low resolutions.
D. Intel never specifies *what* discrete GPU they are comparing it to. Probably a low-end one, for sure.
E. Anyone that has played Mass Effect 2 and Starcraft 2 will know those Intel demos do not test real gameplay experience in any way.
3. Marvin, I appreciate you sourcing all the material, but I have to tell everyone, *this is not the GPU you are looking for* ~ without having to use any mind tricks. Intel's demos show about up to a Nvidia 9400M level of GPU capability, at the very, very best they could approach 320M level of GPU capability, but I highly doubt that.
It wouldn't be the first time things have been skewed to paint Intel hardware in a brighter light than it deserves. I suspect that in the end we will find that the hardware has an optimal set of features that work well but that overall the experience is very middle of the road.
Quote:
4. At this stage I cannot but conclude that there is no way Sandy Bridge IGP can deliver anything more than the Nvidia 320M and as such if Apple were to use it, that would be a step backwards. For the MBP 13", I believe Apple will have to bite the bullet and stick a low-cost discrete GPU in there (probably ATI). MacBook Airs are fine because through 2011 I don't think they would get Core i-series chips (unless it is a rebranded Core 2).
Yep a very real possibility. At best we may see corner cases where the GPU is pretty good.
Quote:
5. If you look at iMovie '11 for example, the level of GPU computation that is required to render a lot of the graphics, I really can't see Sandy Bridge IGP doing that, particularly for a "pro" Mac laptop. (And it is clear the GPU is very heavily leveraged because this is what allows the MBA with 320M to actually be able to do iMovie '11 stuff with such a limited CPU - of course the final "render" of the iMovie edit will be much slower, but the MBA is highly responsive during editing and effects, transitions, etc.)
The problem with a lot of these pre-release comparisons is that we don't know how the caches and high-speed buses will impact results. Well, that and the CPU/GPU code split. In the end SB is a whole new world and we may see the impact of its architecture in strange ways.
Intel never specifies *what* discrete GPU they are comparing it to. Probably a low-end one, for sure.
Intel's demos show about up to a Nvidia 9400M level of GPU capability, at the very, very best they could approach 320M level of GPU capability, but I highly doubt that.
At this stage I cannot but conclude that there is no way Sandy Bridge IGP can deliver anything more than the Nvidia 320M and as such if Apple were to use it, that would be a step backwards.
I really can't see Sandy Bridge IGP doing that, particularly for a "pro" Mac laptop. (And it is clear the GPU is very heavily leveraged because this is what allows the MBA with 320M to actually be able to do iMovie '11 stuff with such a limited CPU - of course the final "render" of the iMovie edit will be much slower, but the MBA is highly responsive during editing and effects, transitions, etc.)
I found a clearer video of the demo and from the writing on the laptop around 0:28, it would appear they are using an unbranded Toshiba Satellite:
http://www.youtube.com/watch?v=7ImQ3...eature=related
http://us.toshiba.com/computers/lapt...660/A665-S6092
Graphics card would be the mid-range 330M that is used in the Macbook Pro. I definitely see some lag/stuttering in the Sandy Bridge one (1:00) vs the 330M but during the presentation, he says that the SB one is also simultaneously capturing HD in-game footage, presumably unlike the other one. Given that the Satellite only supports 720p, I'd guess that's what the games are running at.
That demo certainly looks a long way off the 3DMark 06 score and there's no way it comes close to a GTX 460M. However, it looks to be between a 320M and 330M and that's all they need for this demographic.
While it seems like a bit of a lack of progress for now, when Ivy Bridge hits at the end of the year, they can put in 18-24EUs with the smaller fabrication and then we're into the realm of the GTX 460M.
In terms of iMovie etc, they will be using Core Image, which leverages GLSL when available (which is supported by Intel's IGPs) and if not, falls back to the CPU. While it's true that there's more chance of falling back to the CPU than with a general compute GPU like AMD or NVidia, it's rarely if ever going to affect people who buy that grade of machine.
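The detection side of that fallback is mundane - roughly this kind of capability probe (illustrative only, not Apple's actual code) is all it takes to decide whether a GLSL path is available on the renderer:

/* Probe the OpenGL renderer and check for fragment shader support.
   Uses plain GLUT just to get a context; illustrative, not Core Image. */
#include <stdio.h>
#include <string.h>
#ifdef __APPLE__
#include <GLUT/glut.h>
#else
#include <GL/glut.h>
#endif

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutCreateWindow("probe");  /* we only need a current GL context */

    const char *renderer = (const char *)glGetString(GL_RENDERER);
    const char *exts     = (const char *)glGetString(GL_EXTENSIONS);
    int has_glsl = exts && strstr(exts, "GL_ARB_fragment_shader") != NULL;

    printf("Renderer: %s\n", renderer ? renderer : "none");
    printf("GLSL path: %s\n", has_glsl ? "available" : "fall back to CPU");
    return 0;
}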
The MBP will still have the dedicated chip for compute and I would hazard a guess that, since NVidia's chips haven't improved much with the 4xxM series, it will be the Radeon 6550M or 6570M:
http://www.notebookcheck.net/AMD-Rad...M.41143.0.html
The power consumption ranges between 11-30W and obviously Apple can throttle it down to stay below 20W or not bother and rely on their graphics switching tech. This performance is on par with a 5650 and exceeds the 330M by 5-20%.
It's not a nice-tasting pill to swallow but I think that's it for NVidia. None in the iMacs or Mac Pros so far; with this change, it will be Intel IGP in the Mini, MB and MBA, and the MBP will have Intel IGP + a Radeon card.
Intel won, not by being better but by being bigger and pushing them out of the way. It had to happen but NVidia don't deserve to go out this way.
I'm not sure why you would say that! Seriously, Intel could simply have been embarrassed about its previous GPUs. Or it could be scared to death of AMD's coming Fusion products. I suspect that it has become obvious to the industry that 3D performance is a big deal these days and that in general the GPU plays a huge part in positive user experiences.
Until there are shipping Apple machines we simply don't know! I have zero faith in these mouthpieces talking about pre-release Intel hardware. They have no credibility at all in my mind. I do trust that Intel has completely overhauled Sandy Bridge and that as a result it is a much better upgrade than many new Intel releases.
Intel has been saying all along that Sandy Bridge's GPU would be about twice as good as their current IGP. This link only shows 3DMark06 test scores, unfortunately, but the 320 is almost 3 times better, so a doubling would still fall short. I have seen other benchmarks that routinely have the IGP behind by more, with the 320 quadrupling its numbers. If they double and are still only half as good as the 320M, how can that be anything but a step backwards?
Below are some links and a couple of comparative numbers based on some of the only games that were tested in common.
http://www.notebookcheck.net/Mobile-...ist.844.0.html
320m #123 on list 3DMark06 4155
9400m #173 on list 3DMark06 1348
GMA HD #191 on list 3DMark06 1503
Gaming performance: http://www.notebookcheck.net/Compute...s.13849.0.html
320M F.E.A.R. 49fps high Doom 3 124fps high
9400m F.E.A.R. 17fps high Doom 3 83fps high
GMA HD F.E.A.R. N/A high Doom 3 21fps high
Here are a few links showing figures for all 3 cards. Note that the GMA HD was incapable of even playing most of the games, including games like Starcraft 2. This is a game produced by Blizzard, who is known for having their games be extremely flexible as far as what kind of equipment they will run decently on.
320M http://www.notebookcheck.net/NVIDIA-...M.28701.0.html
9400m http://www.notebookcheck.net/NVIDIA-...G.11949.0.html
GMA HD http://www.notebookcheck.net/Intel-G...HD.9883.0.html
The benchmark numbers for SB being debated are highly questionable, and anyone who has kept up with the progression of these video cards at all will know this. So unless Intel magically managed in the last month to jump from a 2x to a 4x improvement, it is still well behind the 320m.
Something else to keep in mind is that the GPU in the Sandy Bridge chip will NOT be a DX11 part, but Fusion is. Some will say no big deal, but with the recent Cataclysm expansion for World of Warcraft, if you enable the DX11 optimizations, you will receive a 20-30% performance boost. That is huge.
Will Intel's graphics at least match the 320m in the present 13" MacBooks?
This will be severely disappointing if Apple downgrades the GPU in the next update.
Not buying it; they made this mistake once already & had to backpedal on the move. If they switch to integrated Intel chips only, their competitors are going to have a marketing heyday with it.
Then why wouldn't access to additional processing power -- via GPGPU -- be a good thing? If the 320M (or better) IGP is underutilized except when decoding HD content, why not use that for specific types of application processes that can benefit from the OpenCL architecture?
And I don't know that I agree that the Nvidia IGP is all that "low end" for a properly written OpenCL process.
Also (and I'm no expert here), don't our Windows-using brethren need a certain level of 3D support for the basic Aero interface in Win7? Isn't this a larger problem space for Intel than just smaller Apple notebooks?
More processing power is a good thing, in a global sense. I used to be a big proponent of GPGPU and I am glad that there's an open standard available for use.
But contrary to its name GPGPU is limited in what it can do, and you need a fast GPU to really show it off (most benchmarks you see are run on powerful graphics cards). Do you sacrifice general-purpose CPU power for a slight gain in, say, simulation of protein folding?
It's true that modern operating systems - including OS X - want a minimal level of graphics acceleration, though they all have a 2D fallback. Windows' Aero interface is supported by Intel graphics going back to the GMA 950, so Intel has had that covered for years. OS X actually does some things that Arrandale's "HD Graphics" doesn't like, but that's driver related. 3D support for operating systems just isn't a problem for Intel. They even accelerate HD video playback in hardware now.
IF Intel and Apple can turn out a decent driver, no one but gamers will notice if the Macbook is using Intel graphics, but a more modern CPU will benefit everybody.
It's not a nice-tasting pill to swallow but I think that's it for NVidia. None in the iMacs or Mac Pros so far; with this change, it will be Intel IGP in the Mini, MB and MBA, and the MBP will have Intel IGP + a Radeon card.
Intel won, not by being better but by being bigger and pushing them out of the way. It had to happen but NVidia don't deserve to go out this way.
It's sad. Nvidia was kicking ass in the heady 6800, 8800, 9600, 9800 days. The awesome G92 chip that saw them so clearly dominate around 2006-2008. SLI innovation and Quadros.
Nvidia had three strikes that dealt them out of being serious competitors to Intel and AMD.
The first was post-G92, where they came up with the powerful but hot and heavy GTX260. While good for high-end gamers, they were never really able to bring it down to appropriate laptop GPUs and relied on G92 derivatives well past their due date, with excruciating rounds of rebadging 9800-era GPUs.
Next up was Fermi. Apparently it was meant for GPGPU, high-end computing etc., so, after many delays, it came out, and... was again too hot and heavy. They managed to carve out some nice GPUs like the GTX460, but again in the laptop space it was more derivatives and rebranding of who knows what, and not quite the killer discrete graphics at great performance-per-watt across the board.
Third and the real sucker punch was Intel shafting them by out-of-the-blue disallowing them from making Intel chipsets (and subsequently no more AMD chipsets with AMD buying ATI) and Intel force-bundling GPUs with all Intel CPUs... This pretty much destroyed both Nvidia chipset products as well as low-end Nvidia discrete graphics.
If not for Nvidia's extremely robust marketing, CUDA, their generally better-than-ATI PC drivers and Nvidia's very close relationships with top-class game development studios, Nvidia would be pretty much toast by now. I don't know about the financials but I think my username reflects when they were at their peak this decade.
IF Intel and Apple can turn out a decent driver, no one but gamers will notice if the Macbook is using Intel graphics, but a more modern CPU will benefit everybody.
Depends... If Core Image/Core Graphics performs as well as, if not better than, a 320M on an Intel IGP, then that more modern CPU will certainly benefit everyone. If not, then, well, it's a sideways or backwards step for Macs. Have you seen iMovie '11? All that realtime rendering (and there is tons of it, especially in iMovie '11) ...is heavily GPU-based.
Core Image (not OpenCL or other GPGPU) is so critical to so much of what we do on a Mac. As seen in the MBA, real powerhouse CPUs are not relevant to most of the population. Not to mention SSDs can remove huge bottlenecks on the Mac.
It's an interesting time. CPU, GPU, APU, all battling it out.
Like I said, if I could get a MacBook Air 15" with no ODD and Corei5 Sandy Bridge and ATI 6800 series 1GB VRAM and 320GB SSD, that would be sweeeeet and have all the benefits with not too much compromise.
Yet we should keep in mind the iPad will outsell the Mac in 2011 by a very large margin.
Our whole notion of computing is being reworked over the next few years.
One point that hasn't been raised is that the SB IGP is not on the other side of a PCIe bus. The discretes and other IGP designs sit on the other side of a PCIe bus port, even if they are on-chip in the memory controller. This represents a substantial bottleneck, particularly for OpenCL where the data tends to want to go back and forth between the CPU and GPU (as opposed to graphics where it all just goes out to the GPU). The SB IGP, therefore, will have an advantage if it is running OpenCL. Will Intel (or Apple) support OpenCL on these devices? Can't say for sure, but consider that Intel was silent on OpenCL until introducing their x86 OpenCL alpha release (http://software.intel.com/en-us/arti...el-opencl-sdk/) last month. Given their job postings and Apple's interest in OpenCL, I think it's a good bet that SB IGP will support it... and do so in a useful way. In addition, Intel's x86 OpenCL implementation will clearly work very well on AVX, and since it is doing some aggressive SSE/AVX optimization it looks like it will perform at 4-8x current CPU OpenCL implementations. For many algorithms this makes it as fast as or faster than GPUs running the same code.
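To make the unified-memory point concrete, here's a small host-side sketch using standard OpenCL 1.1 calls (nothing Sandy Bridge specific) that asks each device whether it shares the host's memory; an on-die IGP should report yes, a discrete card behind PCIe reports no:

/* List OpenCL devices and whether they share the host's memory.
   CL_DEVICE_HOST_UNIFIED_MEMORY is an OpenCL 1.1 query. */
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

int main(void)
{
    cl_platform_id platform;
    cl_device_id devices[8];
    cl_uint ndev = 0;

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS)
        return 1;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 8, devices, &ndev);

    for (cl_uint i = 0; i < ndev; i++) {
        char name[256] = "unknown";
        cl_bool unified = CL_FALSE;
        clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof name, name, NULL);
        clGetDeviceInfo(devices[i], CL_DEVICE_HOST_UNIFIED_MEMORY,
                        sizeof unified, &unified, NULL);
        printf("%s: host-unified memory = %s\n",
               name, unified ? "yes" : "no");
    }
    return 0;
}

On a device that reports yes, buffers created with CL_MEM_ALLOC_HOST_PTR and accessed via clEnqueueMapBuffer can in principle skip the bus copy altogether, which is exactly where the SB IGP's position next to the CPU should pay off.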
It will likely become even more important in future updates to Mac OS.
Quote:
Originally Posted by FuturePastNow
More processing power is a good thing, in a global sense. I used to be a big proponent of GPGPU and I am glad that there's an open standard available for use.
You should still be a big advocate.
Quote:
But contrary to its name GPGPU is limited in what it can do, and you need a fast GPU to really show it off (most benchmarks you see are run on powerful graphics cards).
Why of course you do; that is, you show off your technology on the best hardware available. It is the nature of showing off. However, that doesn't mean lesser cards can't effectively accelerate certain classes of problems. The problem just needs to map cleanly to the GPU.
You do highlight one issue, and that is the confusion some people have with the term GPGPU computing. The reality is that today it is anything but general purpose and instead is a great facility for accelerating certain problem sets. At least with today's processors you have to be able to justify the overhead of using something like OpenCL in a project. Where it is a win, it is often a win on modest GPUs.
Quote:
Do you sacrifice general-purpose CPU power for a slight gain in, say, simulation of protein folding?
I see this as BS! How many people do constructive protein folding on their PCs? GPGPU computing has a much wider array of viable applications than that. You try to reinforce this notion that GPGPU computing is of no use to the general population which I don't buy at all.
Quote:
It's true that modern operating systems - including OS X - want a minimal level of graphics acceleration, though they all have a 2D fallback. Windows' Aero interface is supported by Intel graphics going back to the GMA 950, so Intel has had that covered for years. OS X actually does some things that Arrandale's "HD Graphics" doesn't like, but that's driver related. 3D support for operating systems just isn't a problem for Intel. They even accelerate HD video playback in hardware now.
Frankly, I have to disagree with you here too. There is a big (MASSIVE) difference between supporting something and having that something work well. To that point, I'm not convinced that 3D is such a big issue for Mac OS right now. In the near future I would expect many GPU cycles to go to things like resolution independence and enhancements to things like preview in the Finder.
Quote:
IF Intel and Apple can turn out a decent driver, no one but gamers will notice if the Macbook is using Intel graphics, but a more modern CPU will benefit everybody.
Again, this is BS! People notice GPU performance more than just about anything else on a PC these days. A fluid user interface is pretty much expected, and a good GPU goes a long way towards delivering it. Think back a couple of years to when the first machines with Intel integrated GPUs came out; people rejected those machines right and left. Very few of those people were gamers, as if gaming was ever a big thing on the Mac anyway. Macs, by their nature as "the graphical machine", need better than run-of-the-mill GPUs.
Now to the question of drivers and Sandy Bridge: I'm trying to keep an open mind here. The problem is that Intel has a really bad history here. My big fear is that this is a slip backwards with respect to GPU performance and support of core technologies. I don't think anybody on this forum knows for sure what the situation is. For one thing, you can't reasonably be expected to trust sites with pre-release Intel hardware. The next issue is those drivers, which often vary drastically from what we see in the Windows world. Things could easily add up to crappy Mac OS performance for SB. I just hold out hope that SB won't be that bad GPU-wise.
I'm also not so technically illiterate as to not realize that integrated GPUs are the wave of the future. In the end there is more to gain by tightly coupling the GPU to the CPU than there is to lose, especially if AMD's vision with respect to Fusion ever matures. Eventually all or most of the obstacles to using the GPU to accelerate apps will be gone. Use of that hardware will become so transparent that we won't think of it as anything odd. In any event, the coming SB and Fusion chips are just the start of a whole new generation of processors. Most reasonable people should be hopeful that they do well in their initial release.
Yes, and it'll be interesting to see if the next iPad will include multiple ARM cores or an OpenCL capable GPU.
Just from the standpoint of being competitive I would think that an SMP ARM processor is a requirement in iPad 2. Since Apple has been working with Imagination with respect to OpenCL support I would suspect that OpenCL will be supported on the SoC also.
Much of the infrastructure is already there in iOS so it won't be a big deal when it happens. Dual core will come because it is the low power way to increase performance. OpenCL will be there because it exposes hardware to apps so that it can be leveraged as needed.
More interesting will be how Apple implements the architecture of this new SoC. Will the GPU be an equal partner to the CPU? Will the GPU support threads or other partitioning? Will they implement an on-board cache or video RAM buffer? Lots of questions, but really this is what interests me about iPad 2: just what does Apple have up its sleeve with respect to the next iPad processor? Considering all the recent patents it could be a major advancement, or it could be another run-of-the-mill ARM SoC.
Intel confirmed that Sandy Bridge has dedicated video transcode hardware that it demoed during the keynote. The demo used Cyberlink’s Media Espresso to convert a ~1 minute long 30Mbps 1080p HD video clip to an iPhone compatible format. On Sandy Bridge the conversion finished in a matter of a few seconds (< 10 seconds by my watch).
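For scale: a roughly 1 minute clip at 30Mbps is in the region of 225MB of source video, so finishing in under 10 seconds works out to better than 6x real time, taking the demo's numbers at face value.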
Comments
Inthought Intel and NVidia had settled their suit. When I did a search I didn't see anything definitive but the trial was supposed to start on the 6th. Assuming a settlement is in place and it allows NVidia to make chipsets for intel chips, Apple would be able to use both intels latest chips and NVidia's integers graphics chips.
They settled the suit. But IIRC Nvidia came out with a statement that they weren't going to make chipsets for Nehalem, etc.
Edit: I see another poster mentioned this already. Oh well...
Those results don't make sense. Rough estimates from my readings online reflect Sandy Bridge IGPs to be between the Arrandale IGPs and the 9400M. Highly unlikely to be better.
It depends if some tested chips didn't have finalised features though. One of the improvements to the new EUs is that they do transcendental Maths in hardware so sine and cosine on the GPU are "several orders of magnitude faster" than with Arrandale. That doesn't mean 70x but 10,000,000x faster (unless they used the terminology wrong).
Part of their design focus for SB was to put everything that didn't need to be programmable into fixed function hardware. This is what they did with OnLive. The real-time compression algorithms were tested on high-end Xeon workstations and maxed them out but by putting them in silicon, they matched the performance at a fraction of the cost and power consumption.
Their idea is that anything you need flexibility for, you do on the CPU and for the most common operations, you use fixed function. This might end up faster than GPGPU because instead of recoding entire blocks of code to send off to the GPU, you just use the fixed function parts of the CPU per line of code and the compiler or run-time can speed this up automatically - no recoding required and you also get process protection (no lockups like on a GPU).
BTW doesn't 3DMark06 reflect a combined CPU+GPU score? In which case the CPU score may be skewing things massively, or some other mistakes. What's the 3DMark06 GPU score?
The CPU scores are shown on the second link. They are fairly even between all the machines tested even though they have slightly different CPUs in each. The SM 2 & 3 scores matched the GTX quite closely. 3DMark might have picked up the GPU name wrongly or the machine was in hybrid SLI and 3DMark was only choosing the name of the IGP.
The CPU score does make it an unfavourable comparison to current GPUs in current machines as an i7 mobile CPU gets 3k and the new i7 was scoring 5k so overall scores would be lower by 2k but like I say, the comparison between the GPUs tested doesn't have that problem.
Intel have been demoing the chips themselves. Here with Starcraft 2:
http://www.youtube.com/watch?v=4ETnmGn8q5Q
and here with Mass Effect 2:
http://www.youtube.com/watch?v=OwW2Wc9XSD4
The comparison was between integrated and a mid-range discrete GPU. What's interesting there is that the fire on the right with SB looks like it's stuttering whereas it's smooth on the left and those do look like laptops behind the presenter. The power meter was showing Sandy Bridge at <50% of the other one. So given that SB has a 35W TDP, the other machine would be a laptop drawing 50-70W.
They said the effects were all set on maximum. I'm sure the 320M was capable of that but not at a very high resolution and with AA off and it wasn't all that smooth.
It just doesn't feel right though, it's like when we saw the IE9 benchmarks beating every other browser and they found out they were cheating by skipping some of the tests.
Anyway, if they manage to allow people to play Mass Effect 2 on entry laptops at maximum quality, I think that will be a good achievement as well as a 100% speedup over C2D.
Those results don't make sense. Rough estimates from my readings online reflect Sandy Bridge IGPs to be between the Arrandale IGPs and the 9400M. Highly unlikely to be better.
BTW doesn't 3DMark06 reflect a combined CPU+GPU score? In which case the CPU score may be skewing things massively, or some other mistakes. What's the 3DMark06 GPU score?
I'll repeat my assertion: Sandy Bridge IGPs are still, crap. Feel free to prove me wrong. (Not being sarcastic here)
From the 3D Mark website:
"A 3DMark score is an overall measure of your system?s 3D gaming capabilities, based on comprehensive real-time 3D graphics and processor tests. By comparing your score with those submitted by millions of other gamers you can see how your gaming rig performs, making it easier to choose the most effective upgrades or finding other ways to optimize your system."
So yes it is a combination of CPU and GPU. The real test will be if it is pulling similar gaming numbers and if the game features look as nice as the discrete GPUs.
Exactly. I would rather see Apple going AMD than downgrading their graphics to Intel crap.
Agree. Apple's graphics hardware has always been the subject of much humor. The last thing we need is for a new computer to be going sideways or backwards when it should be moving forward.
I'm not sure I follow you here. Are you sure you aren't confusing OpenCL with OpenGL?
I know exactly what OpenCL is, but thanks for assuming I don't.
1. Sandy Bridge IGP cannot possibly do 5k+ in those SM2 and SM3 scores. This is because 5K in SM2 and SM3 in 3DMark06 is what my overclocked ATI Radeon 4830 512MB can do. I can play Starcraft 2 on High settings at 1920x1080, Dirt 2 on max settings at 1920x1080, etc. It really is a capable card. Them modern equivalent is an ATI Radeon 5750 or something like that. If an integrated on-die GPU in an Intel CPU can do that, AMD might as well set their headquarters on fire and collect the insurance money, because this would mean AMD Fusion is completely blown to hell.
2. Those Intel's demos as you linked to are extremely horrible and really nefarious.
A. The Mass Effect 2 demo is the starting scene where it looks like there are a lot of objects but it is just a short corridor with flame and other effect thingys. The fact that there is no camera movement, no interactivity, means that demo is really worthless. In fact, that scene of the game is one of the easiest to render. Most in-game scenes have 10x that complexity.
B. Starcraft 2, ditto, in SC2 you can dial the detail level way down, and there is not much camera movement and so on.
C. Putting those clips in a nice little window, while good for comparison, means they're probably running at 800x600 or 1024x768 which are extremely low resolutions.
D. Intel never specifies *what* discrete GPU they are comparing it to. Probably a low-end one, for sure.
E. Anyone that has played Mass Effect 2 and Starcraft 2 will know those Intel demos do not test real gameplay experience in any way.
3. Marvin, I appreciate you sourcing all the material, but I have to tell everyone, *this is not the GPU you are looking for* ~ without having to use any mind tricks. Intel's demos show about up to a Nvidia 9400M level of GPU capability, at the very, very best they could approach 320M level of GPU capability, but I highly doubt that.
4. At this stage I cannot but conclude that there is no way Sandy Bridge IGP can deliver anything more than the Nvidia 320M and as such if Apple were to use it, that would be a step backwards. For the MBP 13", I believe Apple will have to bite the bullet and stick a low-cost discrete GPU in there (probably ATI). MacBook Airs are fine because through 2011 I don't think they would get Core i-series chips (unless it is a rebranded Core 2).
5. If you look at iMovie '11 for example, the level of GPU computation that is required to render a lot of the graphics, I really can't see Sandy Bridge IGP doing that, particularly for a "pro" Mac laptop. (And it is clear the GPU is very heavily leveraged because this is what allows the MBA with 320M to actually be able to do iMovie '11 stuff with such a limited CPU - of course the final "render" of the iMovie edit will be much slower, but the MBA is highly responsive during editing and effects, transitions, etc.)
The CPU scores are shown on the second link. They are fairly even between all the machines tested even though they have slightly different CPUs in each. The SM 2 & 3 scores matched the GTX quite closely. 3DMark might have picked up the GPU name wrongly or the machine was in hybrid SLI and 3DMark was only choosing the name of the IGP.
The CPU score does make it an unfavourable comparison to current GPUs in current machines as an i7 mobile CPU gets 3k and the new i7 was scoring 5k so overall scores would be lower by 2k but like I say, the comparison between the GPUs tested doesn't have that problem.
Intel have been demoing the chips themselves. Here with Starcraft 2:
http://www.youtube.com/watch?v=4ETnmGn8q5Q
and here with Mass Effect 2:
http://www.youtube.com/watch?v=OwW2Wc9XSD4
The comparison was between integrated and a mid-range discrete GPU. What's interesting there is that the fire on the right with SB looks like it's stuttering whereas it's smooth on the left and those do look like laptops behind the presenter. The power meter was showing Sandy Bridge at <50% of the other one. So given that SB has a 35W TDP, the other machine would be a laptop drawing 50-70W.
They said the effects were all set on maximum. I'm sure the 320M was capable of that but not at a very high resolution and with AA off and it wasn't all that smooth.
It just doesn't feel right though, it's like when we saw the IE9 benchmarks beating every other browser and they found out they were cheating by skipping some of the tests.
Anyway, if they manage to allow people to play Mass Effect 2 on entry laptops at maximum quality, I think that will be a good achievement as well as a 100% speedup over C2D.
Those results don't make sense. Rough estimates from my readings online reflect Sandy Bridge IGPs to be between the Arrandale IGPs and the 9400M. Highly unlikely to be better.
Im not sure why you would say that! Seriously intel could simply have been embarrassed about it's previous GPUs. Or it could be scared to death of AMDs coming Fusion products. I suspect that it has become obvious to the industry that 3D performance is a big deal these days and that in general the GPU plays a huge part in positive user experiences.
BTW doesn't 3DMark06 reflect a combined CPU+GPU score? In which case the CPU score may be skewing things massively, or some other mistakes. What's the 3DMark06 GPU score?
I'll repeat my assertion: Sandy Bridge IGPs are still, crap. Feel free to prove me wrong. (Not being sarcastic here)
Until there is shipping Apple machines we simply don't know! I have zero faith in these mouth pieces talking about pre release Intel hardware. They have no credibility at all in my mind. I do trust that Intel has completely overhauled Sandy Bridge and as a result it is a much better upgrade than many new Intel releases.
As a side note there may be concerns about how they are getting these good numbers with the limited execution units on the GPUs. This has me wondering what the devices clock rate is like. A fast clock might skew some benchmarks more than maybe they should. We really need specifics on the hardware before we start to wonder about numbers.
I know exactly what OpenCL is, but thanks for assuming I don't.
That's why I said I was confused:
This is a good decision. Most people don't play 3D games and won't notice a slight reduction in graphics performance, but they might notice a (comparatively much larger) increase in CPU performance, and they will notice an increase in battery life.
OpenCL is never going to amount to anything on such low-end GPUs as the Nvidia IGP's we're discussing.
Then why wouldn't access to additional processing power -- via GPGPU -- be a good thing? If the 320M (or better) IGP is underutilized except when decoding HD content, why not use that for specific types of application processes that can benefit from the OpenCL architecture?
And I don't know that I agree that the Nvidia IGP is all that "low end"' for a properly written OpenCL process.
Also (and I'm no expert here), don't our Windows-using brethren need a certain level of 3D support for the basic Aero interface in Win7? Isn't this a larger problem space for Intel than just smaller Apple notebooks?
Guys, we're way off base here. I've reviewed the data.
1. Sandy Bridge IGP cannot possibly do 5k+ in those SM2 and SM3 scores. This is because 5K in SM2 and SM3 in 3DMark06 is what my overclocked ATI Radeon 4830 512MB can do. I can play Starcraft 2 on High settings at 1920x1080, Dirt 2 on max settings at 1920x1080, etc. It really is a capable card. Them modern equivalent is an ATI Radeon 5750 or something like that. If an integrated on-die GPU in an Intel CPU can do that, AMD might as well set their headquarters on fire and collect the insurance money, because this would mean AMD Fusion is completely blown to hell.
it wouldn't be impossible for the hardware or the drivers to be cooked for the benchmarks. It isn't like it hasn't been done in the past. We will only have an answer when we can run real hardware and software up against competeing platforms.
One should not however discount sneaky approaches that are Possible. For example they could be making extensive use of the CPU vector units. While possibly valid it would mean many CPU cycles wasted on graphics. With the high speed buses and cache memory this could be a very real technique.
There are potentially a number of way for Intel to be sneaky here yet at the same time be somewhat honest.
2. Those Intel's demos as you linked to are extremely horrible and really nefarious.
As is much of what I've seen on the net recently. Much of the previewing appears to be paid marketing from Intel. This is why I stress the need for shiping hardware.
A. The Mass Effect 2 demo is the starting scene where it looks like there are a lot of objects but it is just a short corridor with flame and other effect thingys. The fact that there is no camera movement, no interactivity, means that demo is really worthless. In fact, that scene of the game is one of the easiest to render. Most in-game scenes have 10x that complexity.
B. Starcraft 2, ditto, in SC2 you can dial the detail level way down, and there is not much camera movement and so on.
C. Putting those clips in a nice little window, while good for comparison, means they're probably running at 800x600 or 1024x768 which are extremely low resolutions.
D. Intel never specifies *what* discrete GPU they are comparing it to. Probably a low-end one, for sure.
E. Anyone that has played Mass Effect 2 and Starcraft 2 will know those Intel demos do not test real gameplay experience in any way.
3. Marvin, I appreciate you sourcing all the material, but I have to tell everyone, *this is not the GPU you are looking for* ~ without having to use any mind tricks. Intel's demos show about up to a Nvidia 9400M level of GPU capability, at the very, very best they could approach 320M level of GPU capability, but I highly doubt that.
It wouldn't be the first time things have been skewed to paint Intel hardware in a brighter light than it deserves. I suspect that in the end we will find that the hardware has an optimal set off features that work well but that overall the experience is ver middle of the road.
4. At this stage I cannot but conclude that there is no way Sandy Bridge IGP can deliver anything more than the Nvidia 320M and as such if Apple were to use it, that would be a step backwards. For the MBP 13", I believe Apple will have to bite the bullet and stick a low-cost discrete GPU in there (probably ATI). MacBook Airs are fine because through 2011 I don't think they would get Core i-series chips (unless it is a rebranded Core 2).
Yep a very real possibility. At best we may see corner cases where the GPU is pretty good.
5. If you look at iMovie '11 for example, the level of GPU computation that is required to render a lot of the graphics, I really can't see Sandy Bridge IGP doing that, particularly for a "pro" Mac laptop. (And it is clear the GPU is very heavily leveraged because this is what allows the MBA with 320M to actually be able to do iMovie '11 stuff with such a limited CPU - of course the final "render" of the iMovie edit will be much slower, but the MBA is highly responsive during editing and effects, transitions, etc.)
The problem with a lot of these pre release comparisons is that we don't know how the caches and high speed buses will impact results. Well that and the CPU /GPU code split. In the end SB is a whole new world and we may see the impact of it's architecture in strange ways.
Intel never specifies *what* discrete GPU they are comparing it to. Probably a low-end one, for sure.
Intel's demos show about up to a Nvidia 9400M level of GPU capability, at the very, very best they could approach 320M level of GPU capability, but I highly doubt that.
At this stage I cannot but conclude that there is no way Sandy Bridge IGP can deliver anything more than the Nvidia 320M and as such if Apple were to use it, that would be a step backwards.
I really can't see Sandy Bridge IGP doing that, particularly for a "pro" Mac laptop. (And it is clear the GPU is very heavily leveraged because this is what allows the MBA with 320M to actually be able to do iMovie '11 stuff with such a limited CPU - of course the final "render" of the iMovie edit will be much slower, but the MBA is highly responsive during editing and effects, transitions, etc.)
I found a clearer video of the demo and from the writing on the laptop around 0:28, it would appear they are using an unbranded Toshiba Satellite:
http://www.youtube.com/watch?v=7ImQ3...eature=related
http://us.toshiba.com/computers/lapt...660/A665-S6092
Graphics card would be the mid-range 330M that is used in the Macbook Pro. I definitely see some lag/stuttering in the Sandy Bridge one (1:00) vs the 330M but during the presentation, he says that the SB one is also simultaneously capturing HD in-game footage, presumably unlike the other one. Given that the Satellite only supports 720p, I'd guess that's what the games are running at.
That demo certainly looks a long way off the 3DMark 06 score and there's no way it comes close to a GTX 460M. However, it looks to be between a 320M and 330M and that's all they need for this demographic.
While it seems like a bit of a lack of progress for now, when Ivy Bridge hits at the end of the year, they can put in 18-24EUs with the smaller fabrication and then we're into the realm of the GTX 460M.
In terms of iMovie etc, they will be using Core Image, which leverages GLSL when available (which is supported by Intel's IGPs) and if not, falls back to the CPU. While it's true that there's more chance of falling back to the CPU than with a general compute GPU like AMD or NVidia, it's rarely if ever going to affect people who buy that grade of machine.
The MBP will still have the dedicated chip for compute and I would hazard a guess that since NVidia's chips haven't improved much with the 4xxM series that it will be the Radeon 6550M or 6570M:
http://www.notebookcheck.net/AMD-Rad...M.41143.0.html
The power consumption ranges between 11-30W and obviously Apple can throttle it down to stay below 20W or not bother and rely on their graphics switching tech. This performance is on par with a 5650 and exceeds the 330M by 5-20%.
It's not a nice-tasting pill to swallow but I think that's it for NVidia. None in the iMacs or Mac Pros so far, with this change, it will be Intel IGP in the Mini, MB, MBA and the MBP will have Intel IGP + a Radeon card.
Intel won, not by being better but by being bigger and pushing them out of the way. It had to happen but NVidia don't deserve to go out this way.
Im not sure why you would say that! Seriously intel could simply have been embarrassed about it's previous GPUs. Or it could be scared to death of AMDs coming Fusion products. I suspect that it has become obvious to the industry that 3D performance is a big deal these days and that in general the GPU plays a huge part in positive user experiences.
Until there is shipping Apple machines we simply don't know! I have zero faith in these mouth pieces talking about pre release Intel hardware. They have no credibility at all in my mind. I do trust that Intel has completely overhauled Sandy Bridge and as a result it is a much better upgrade than many new Intel releases.
Intel has been saying all along that Sandy Bridge's GPU would be about twice as good as what their current IGP was. This link only shows 3DMark06 test scores unfortunately, but the 320 is almost 3 times better, so doubling would be less good. I have seen other benchmarks that routinely have the IGP behind by more, w/the 320 quadrupling numbers. If they double and are still half as good as the 320M, how can that be anything but a step backwards?
Below are some links and a couple of comparative numbers based on some of the only games that were tested in common.
http://www.notebookcheck.net/Mobile-...ist.844.0.html
320m #123 on list 3DMark06 4155
9400m #173 on list 3DMark06 1348
GMA HD #191 on list 3DMark06 1503
Gaming performance: http://www.notebookcheck.net/Compute...s.13849.0.html
320M F.E.A.R. 49fps high Doom 3 124fps high
9400m F.E.A.R. 17fps high Doom 3 83fps high
GMA HD F.E.A.R. N/A high Doom 3 21fps high
Here is a few links showing figures for all 3 cards. Note that the GMA HD was incaable of even playing most of the games, including games like Starcraft 2. This is a game produced by Blizzard, who is known for having their games be extremely flexible as far as what kind of equipment they will run decently on.
320M http://www.notebookcheck.net/NVIDIA-...M.28701.0.html
9400m http://www.notebookcheck.net/NVIDIA-...G.11949.0.html
GMA HD http://www.notebookcheck.net/Intel-G...HD.9883.0.html
The benchmark numbers for SB being debated are highly questionably and anyone who has kept up with the progression of these video cards at all will know this. So unless Intel magically in the last month managed to jack things from a 2x to 4x improvement, it is still well behind the 320m.
Something else to keep in mind is that the GPU in the Sandy Bridge chip will NOT be a DX11 part, but Fusion is. Some will say no big deal, but with the recent Cataclysm expansion for World of Warcraft, if you enable the DX11 optimizations, you will receive a 20-30% performance boost. That is huge.
Will Intels graphics at least match the 320m in present 13" MacBook's?
This will be severly disappointing if Apple down grades the GPU in the next update.
Not buying it - they made this mistake once already and had to backpedal on the move. If they switch to Intel integrated graphics only, their competitors are going to have a marketing heyday with it.
That's why I said I was confused:
Then why wouldn't access to additional processing power -- via GPGPU -- be a good thing? If the 320M (or better) IGP is underutilized except when decoding HD content, why not use that for specific types of application processes that can benefit from the OpenCL architecture?
And I don't know that I agree that the Nvidia IGP is all that "low end" for a properly written OpenCL process.
Also (and I'm no expert here), don't our Windows-using brethren need a certain level of 3D support for the basic Aero interface in Win7? Isn't this a larger problem space for Intel than just smaller Apple notebooks?
More processing power is a good thing, in a global sense. I used to be a big proponent of GPGPU and I am glad that there's an open standard available for use.
But contrary to its name GPGPU is limited in what it can do, and you need a fast GPU to really show it off (most benchmarks you see are run on powerful graphics cards). Do you sacrifice general-purpose CPU power for a slight gain in, say, simulation of protein folding?
It's true that modern operating systems - including OS X - want a minimal level of graphics acceleration, though they all have a 2D fallback. Windows' Aero interface has been supported by Intel graphics going back to the GMA 950, so Intel has had that covered for years. OS X actually does some things that Arrandale's "HD Graphics" doesn't like, but that's driver-related. 3D support for operating systems just isn't a problem for Intel. They even accelerate HD video playback in hardware now.
IF Intel and Apple can turn out a decent driver, no one but gamers will notice if the MacBook is using Intel graphics, but a more modern CPU will benefit everybody.
It's a bitter pill to swallow, but I think that's it for NVidia. None in the iMacs or Mac Pros so far, and with this change it will be Intel IGP in the Mini, MB and MBA, while the MBP will have Intel IGP + a Radeon card.
Intel won, not by being better but by being bigger and pushing them out of the way. It had to happen but NVidia don't deserve to go out this way.
It's sad. Nvidia was kicking ass in the heady 6800, 8800, 9600, 9800 days - the awesome G92 chip saw them clearly dominate around 2006-2008, alongside SLI innovation and the Quadros.
Nvidia had three strikes that dealt them out of being serious competitors to Intel and AMD.
The first was post-G92, when they came up with the powerful but hot and heavy GTX260. While good for high-end gamers, they were never really able to bring it down to appropriate laptop GPUs, relying instead on G92 derivatives well past their due date and excruciating rounds of rebadging the 9800 and similar GPUs.
Next up was Fermi. Apparently it was meant for GPGPU, high-end computing and so on; after many delays it came out, and... was again too hot and heavy. They managed to carve out some nice GPUs like the GTX460, but in the laptop space it was again more derivatives and rebranding of who knows what, and not quite the killer discrete graphics at great performance-per-watt across the board.
Third, and the real sucker punch, was Intel shafting them by disallowing them out of the blue from making Intel chipsets (with no more AMD chipsets either once AMD bought ATI) and force-bundling GPUs with all Intel CPUs... This pretty much destroyed both Nvidia's chipset products and its low-end discrete graphics.
If not for Nvidia's extremely robust marketing, CUDA, their generally better-than-ATI PC drivers and their very close relationships with top-class game development studios, they would be pretty much toast by now. I don't know about the financials, but I think my username reflects when they were at their peak this decade.
IF Intel and Apple can turn out a decent driver, no one but gamers will notice if the MacBook is using Intel graphics, but a more modern CPU will benefit everybody.
Depends... If Core Image/Core Graphics on an Intel IGP performs as well as, if not better than, on a 320M, then that more modern CPU will certainly benefit everyone. If not, well, it's a sideways or backwards step for Macs. Have you seen iMovie '11? All that realtime rendering (and there is tons of it, especially in iMovie '11) is heavily GPU-based.
Core Image (not OpenCL or other GPGPU) is so critical to so much of what we do on a Mac. As seen in the MBA, real powerhouse CPUs are not relevant to most of the population. Not to mention SSDs can remove huge bottlenecks on the Mac.
It's an interesting time. CPU, GPU, APU, all battling it out.
Like I said, if I could get a 15" MacBook Air with no ODD, a Core i5 Sandy Bridge, an ATI 6800-series GPU with 1GB VRAM and a 320GB SSD, that would be sweeeeet - all the benefits with not too much compromise.
Yet we should keep in mind the iPad will outsell the Mac in 2011 by a very large margin.
Our whole notion of computing is being reworked over the next few years.
Yet we should keep in mind the iPad will outsell the Mac in 2011 by a very large margin.
Yes, and it'll be interesting to see if the next iPad will include multiple ARM cores or an OpenCL capable GPU.
More processing power is a good thing, in a global sense. I used to be a big proponent of GPGPU and I am glad that there's an open standard available for use.
You should still be a big advocate.
But contrary to its name GPGPU is limited in what it can do, and you need a fast GPU to really show it off (most benchmarks you see are run on powerful graphics cards).
Why of course you do - that is, you show off your technology on the best hardware available; it's the nature of showing off. However, that doesn't mean lesser cards can't effectively accelerate certain classes of problems. The problem just needs to map cleanly to the GPU.
You do highlight one issue, and that is the confusion some people have with the term GPGPU computing. The reality is that today it is anything but general purpose; instead it is a great facility for accelerating certain problem sets. At least with today's processors, you have to be able to justify the overhead of using something like OpenCL in a project. Where it is a win, it is often a win on modest GPUs.
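As a rough illustration of that trade-off, here's a minimal Python sketch using pyopencl and numpy (assuming both are installed and some OpenCL device is available - the kernel, array size and timing approach are just made up for the example, not taken from any real app):
Code:
# Illustrative only: a trivially data-parallel operation (y = a*x + y) run once
# with numpy on the CPU and once through OpenCL, timing both so the setup and
# data-transfer overhead that has to be justified is visible.
import time
import numpy as np
import pyopencl as cl

n = 4_000_000
a = np.float32(2.5)
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)

# CPU path: a single numpy expression.
t0 = time.perf_counter()
cpu_result = a * x + y
cpu_time = time.perf_counter() - t0

# GPU path: context/queue setup, buffer copies, kernel build and launch.
ctx = cl.create_some_context(interactive=False)
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

t0 = time.perf_counter()
x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
y_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=y)

prg = cl.Program(ctx, """
__kernel void saxpy(const float a,
                    __global const float *x,
                    __global float *y) {
    int i = get_global_id(0);
    y[i] = a * x[i] + y[i];
}
""").build()

prg.saxpy(queue, (n,), None, a, x_buf, y_buf)
gpu_result = np.empty_like(y)
cl.enqueue_copy(queue, gpu_result, y_buf)   # blocking copy, so the kernel has finished
gpu_time = time.perf_counter() - t0

assert np.allclose(cpu_result, gpu_result)
print(f"numpy: {cpu_time:.4f}s, OpenCL incl. transfers/build: {gpu_time:.4f}s")

In a toy like this the buffer copies and program build are exactly the overhead being talked about: on a small or poorly mapped problem the CPU path wins, while on work that maps cleanly even a modest GPU can come out ahead.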
Do you sacrifice general-purpose CPU power for a slight gain in, say, simulation of protein folding?
I see this as BS! How many people actually do protein folding on their PCs? GPGPU computing has a much wider array of viable applications than that. You're trying to reinforce the notion that GPGPU computing is of no use to the general population, which I don't buy at all.
It's true that modern operating systems - including OS X - want a minimal level of graphics acceleration, though they all have a 2D fallback. Windows' Aero interface has been supported by Intel graphics going back to the GMA 950, so Intel has had that covered for years. OS X actually does some things that Arrandale's "HD Graphics" doesn't like, but that's driver-related. 3D support for operating systems just isn't a problem for Intel. They even accelerate HD video playback in hardware now.
Frankly, I have to disagree with you here too. There is a big (MASSIVE) difference between supporting something and having that something work well. To that point, I'm not convinced that 3D is such a big issue for Mac OS right now. In the near future I would expect many GPU cycles to go to things like resolution independence and enhancements such as previews in the Finder.
IF Intel and Apple can turn out a decent driver, no one but gamers will notice if the MacBook is using Intel graphics, but a more modern CPU will benefit everybody.
Again, this is BS! People notice GPU performance more than just about anything else on a PC these days. A fluid user interface is pretty much expected, and a good GPU goes a long way towards delivering it. Think back a couple of years to when the first machines with Intel integrated GPUs came out: people rejected them right and left. Very few of those people were gamers - as if gaming was ever a big thing on the Mac. Macs, by their nature as "the graphical machine", need better than run-of-the-mill GPUs.
Now, on the question of drivers and Sandy Bridge, I'm trying to keep an open mind, but the problem is that Intel has a really bad history here. My big fear is that this will be a slip backwards with respect to GPU performance and support for core technologies. I don't think anybody on this forum knows for sure what the situation is. For one thing, you can't reasonably be expected to trust sites with pre-release Intel hardware. The next issue is the drivers, which often vary drastically from what we see in the Windows world. Things could easily add up to crappy Mac OS performance for SB. I just hold out hope that SB won't be that bad GPU-wise.
I'm also not so technically illiterate as to miss that integrated GPUs are the wave of the future. In the end there is more to gain by tightly coupling the GPU to the CPU than there is to lose, especially if AMD's vision for Fusion ever matures. Eventually all or most of the obstacles to using the GPU to accelerate apps will be gone, and use of that hardware will become so transparent that we won't think of it as anything odd. In any event, the coming SB and Fusion chips are just the start of a whole new generation of processors; most reasonable people should hope they do well in their initial release.
Yes, and it'll be interesting to see if the next iPad will include multiple ARM cores or an OpenCL capable GPU.
Just from the standpoint of staying competitive, I would think an SMP ARM processor is a requirement for iPad 2. Since Apple has been working with Imagination on OpenCL support, I suspect OpenCL will be supported on the SoC as well.
Much of the infrastructure is already there in iOS, so it won't be a big deal when it happens. Dual core will come because it is the low-power way to increase performance. OpenCL will be there because it exposes the hardware to apps so it can be leveraged as needed.
More interesting will be how Apple implements the architecture of this new SoC. Will the GPU be an equal partner to the CPU? Will the GPU support threads or other partitioning? Will they implement an on-board cache or a video RAM buffer? Lots of questions, but really this is what interests me about iPad 2: just what does Apple have up its sleeve with respect to the next iPad processor? Considering all the recent patents, it could be a major advancement, or it could be another run-of-the-mill ARM SoC.
Have you seen iMovie '11? All that realtime rendering (and there is tons of it, especially in iMovie '11) is heavily GPU-based.
Here's something to check out. Scroll down to the last few paragraphs.
Intel confirmed that Sandy Bridge has dedicated video transcode hardware that it demoed during the keynote. The demo used Cyberlink’s Media Espresso to convert a ~1 minute long 30Mbps 1080p HD video clip to an iPhone compatible format. On Sandy Bridge the conversion finished in a matter of a few seconds (< 10 seconds by my watch).
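The keynote demo used Cyberlink's GUI app; as a rough command-line analogue (not what was shown in the keynote), an ffmpeg build with Intel Quick Sync support can do the same kind of conversion - the filenames and settings below are made up for illustration:
Code:
# Hypothetical sketch: hand a 1080p clip to ffmpeg's Quick Sync H.264 encoder
# (h264_qsv) to produce a small phone-friendly file. Requires an ffmpeg build
# with QSV support and a Sandy Bridge or later Intel GPU; filenames are made up.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "clip_1080p.mp4",          # hypothetical 30Mbps 1080p source
    "-c:v", "h264_qsv",              # hardware H.264 encode on the Intel GPU
    "-b:v", "2M",                    # modest bitrate for a phone-sized target
    "-vf", "scale=-2:640",           # downscale; width auto-picked to keep aspect
    "-c:a", "aac", "-b:a", "128k",   # re-encode audio for the smaller file
    "clip_phone.mp4",
], check=True)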