
Apple to use Intel's Sandy Bridge without Nvidia GPUs in new MacBooks - Page 2

post #41 of 127
Quote:
Originally Posted by melgross View Post

Not reliable at all. Apple is depending on OpenCL for performance. Otherwise they wouldn't be willing to take the flak for still using Core 2 chips in its lower machines so they can use OpenCL GPUs. I don't see them backing away from that, unless, somehow, they've figured out a way around it. Being that it's Apple, that's not impossible, but not likely.

Not entirely true. There's little advantage in the i3 over the C2D and the IGP isn't as good as in Sandy Bridge. In the case of Sandy Bridge I believe the mobile parts are all i5 for now. The difference in CPU performance will be huge.

As argued back and forth in the other thread, OpenCL support IMHO isn't necessarily a hard requirement for the GPU.

The quick recap of my position is:

1) Intel is supporting OpenCL on the CPU. Therefore OpenCL calls could be implemented by on-die hardware in Sandy Bridge (i.e. via the 256-bit wide SIMD units or the hardware encoder/decoders). A rough sketch of what that looks like in code is below.

2) Intel is providing on-die hardware support/enhancement for the 2 most common GPGPU uses: cryptography and transcoding. How can we guess these are the two most common use cases? Because these are the two GPGPU benchmarks added to the SiSoft Sandra benchmark suite. [1][2]

3) The GPU performance as a GPU is slightly better than the 320M's, since the 5450 is slightly better than the 320M.

Given these three, going with just a Sandy Bridge i5 in the mini, MB and MBP 13" is plausible. The MBA I believe uses lower TDP parts and will simply have to wait.
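
For anyone wondering what point 1 would mean in practice, here's a minimal, hypothetical sketch (not Apple's or Intel's actual code): standard OpenCL host code can ask for a CPU device instead of a GPU device, so the same kernels could in principle run on Sandy Bridge's wide SIMD units rather than on an IGP. The header paths and fallback logic are just assumptions for illustration.

Code:
#ifdef __APPLE__
#include <OpenCL/opencl.h>   /* Apple ships the OpenCL headers here */
#else
#include <CL/cl.h>
#endif
#include <stdio.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    char name[128];

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS)
        return 1;

    /* Prefer a CPU OpenCL device; fall back to a GPU if none is exposed. */
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL) != CL_SUCCESS &&
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS)
        return 1;

    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
    printf("Running OpenCL on: %s\n", name);
    return 0;
}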

---

[1] "The demo used Cyberlinks Media Espresso to convert a ~1 minute long 30Mbps 1080p HD video clip to an iPhone compatible format. On Sandy Bridge the conversion finished in a matter of a few seconds (< 10 seconds by my watch)."

http://www.anandtech.com/show/3916/i...anscode-engine

[2] "The interesting bit is that SiSoft expects how CPUs should be able to challenge GPGPUs in the near future, given that "Using 256-bit register width (instead of 128-bit of SSE/2/3/4) yields further performance gains through greater parallelism in most algorithms. Combined with the increase in processor cores and threads we will soon have CPUs rivaling GPGPUs in performance."

http://www.brightsideofnews.com/news...gpu-tests.aspx
post #42 of 127
Deleted for being too grumpy
post #43 of 127
Quote:
Originally Posted by solipsism View Post

This is where I can see Apple throwing its weight around. I can see Apple strong-arming Intel into supporting OpenCL, even if it's a special category just for Mac notebooks that aren't found on Intel's price list. It's not like there isn't precedent to support this possibility.

It depends. Intel's design philosophy seems to lean toward fixed-function hardware outside the CPU, meaning that the GPU design may not be suitable for GPGPU and no amount of driver tweaks is going to change that.

I believe Apple has pull. I don't think Apple has THAT much pull to have a custom IGP on the die.
post #44 of 127
Quote:
Originally Posted by hypercommunist View Post

Anandtech's preview of Sandy Bridge makes it sound very promising. Even the integrated graphics sounds pretty good.

The interesting thing is that those game benchmarks were run without a working turbo mode. It won't help with sustained frame rates in something like a long WoW raid, but for short periods the GPU can ramp up for more performance.

I might guess that Turbo Boost would also be able to ramp the IGP up at the expense of one of the two cores. If true, that would be handy when you are GPU-bound and not CPU-bound. That might make sustained graphics performance a possibility for games that don't normally use more than one core.
post #45 of 127
I understand the whole problem with the graphics in Macs 13" and below (MBP 13" and MacBook), but what about MacBook Pros 15" and above? What graphics will be used for those? AMD, or possibly Nvidia still? Is AMD better or not?
I apologize for posting such an ignorant question...
post #46 of 127
Quote:
Originally Posted by Marvin View Post

There's nothing in there that immediately puts them off using it alone. It's good enough but not so good you will want it over a more expensive model.

Which brings us back to the original question, will this actually perform better than an older CPU (C2D) with a better IGP (Nvidia 320M)? My guess is that they are evaluating that in Cupertino as we speak. Which doesn't make buying the 13" MBP today any easier...

post #47 of 127
I thought Intel and NVidia had settled their suit. When I did a search I didn't see anything definitive, but the trial was supposed to start on the 6th. Assuming a settlement is in place and it allows NVidia to make chipsets for Intel chips, Apple would be able to use both Intel's latest chips and NVidia's integrated graphics chips.
post #48 of 127
Quote:
Originally Posted by Mynameisjoe View Post

I thought Intel and NVidia had settled their suit. When I did a search I didn't see anything definitive, but the trial was supposed to start on the 6th. Assuming a settlement is in place and it allows NVidia to make chipsets for Intel chips, Apple would be able to use both Intel's latest chips and NVidia's integrated graphics chips.

http://arstechnica.com/apple/news/20...-for-apple.ars

Even if that pans out, we don't know if this is the direction Apple will go.

post #49 of 127
Since most Apple customers are not very technically sophisticated, this story is irrelevant for the majority.
post #50 of 127
Quote:
Originally Posted by nht View Post

The GPU performance as a GPU is slightly better than the 320M as the 5450 is slightly better than the 320M.

The desktop 5450 is, the mobile variant isn't. The Anand benchmark uses the desktop version. Given that Sandy Bridge is supposed to be faster than the 5450 though, it could still be on par or better than the 320M. If that was without Turbo Boost, it's going to be awesome. Something doesn't seem right though. It's almost as if Anand are being led to thinking they have a 6EU chip with no Turbo Boost, which would imply the final version will be 2-3 times faster. Intel also failed to disclose information about their own GPU demos with Starcraft 2 and I think it was Mass Effect compared to an unspecified dedicated chip.

If Anand in fact have a 12EU chip with Turbo Boost, the final version will be slower because the chip will ramp down to near half the clock speed in certain cases.

We'll find out soon enough. Let's not forget though that even if Intel do match the 320M, they are catching up to NVidia's last generation chip. If they hadn't blocked NVidia illegally, they'd be coming in at half NVidia's latest GPU.

Quote:
Originally Posted by John.B

Which brings us back to the original question, will this actually perform better than an older CPU (C2D) with a better IGP (Nvidia 320M)? My guess is that they are evaluating that in Cupertino as we speak. Which doesn't make buying the 13" MBP today any easier...

The i5 chips are much faster than the C2D. I reckon that the gaming performance will be on par at best with 320M but with fewer supported features and what Apple could do this time is focus on the media encoding performance and battery life.

It wouldn't be wise to buy the current one before the new one arrives, because you'll still be able to buy a refurb at a lower price if the GPU part in SB turns out to be not so good.

Quote:
Originally Posted by Mynameisjoe

I thought Intel and NVidia had settled their suit. When I did a search I didn't see anything definitive, but the trial was supposed to start on the 6th. Assuming a settlement is in place and it allows NVidia to make chipsets for Intel chips, Apple would be able to use both Intel's latest chips and NVidia's integrated graphics chips.

Yeah, but NVidia have already said they aren't making chipsets for Intel CPUs any more. The settlement would be to do with SLI, as noted here:

http://pressroom.nvidia.com/easyir/c...157&xhtml=true
post #51 of 127
Quote:
Originally Posted by Marvin View Post

If Anand in fact have a 12EU chip with Turbo Boost, the final version will be slower because the chip will ramp down to near half the clock speed in certain cases.

Anand had doubts about it being a 6EU chip since it wasn't specified, but I thought he was told Turbo Boost was broken. Meh, I don't care enough to reread that article.

Quote:
We'll find out soon enough. Let's not forget though that even if Intel do match the 320M, they are catching up to NVidia's last generation chip.

True, but it's not a step back. Intel IGPs have been major suckage for a long, long time and there was much-deserved griping about the performance of the GMA 950 and GMA X3100 based MacBooks.
post #52 of 127
Quote:
Originally Posted by Marvin View Post

It's almost as if Anand are being led to thinking they have a 6EU chip with no Turbo Boost, which would imply the final version will be 2-3 times faster.

I wrote that with the intention of it coming across like it could never happen. Well, I think I just got one of the biggest shocks of my life :

http://www.engadget.com/2010/12/08/u...intel-sandy-b/
http://forum.notebookreview.com/alie...y-wanna-c.html

3DMark 06
Sandy Bridge IGP: 15,940
Radeon 6900M: 20,155
Geforce GTX 460M: 16,957

If those benchmarks are accurate, what you are looking at is an Intel IGP that is 3 times faster than the 320M and in the same performance range as class 1 NVidia and AMD GPUs. This is the higher TDP quad-core CPU but AFAIK, the GPU is the same.

How the f did they do that with 12 processing units?? The GTX has 192!!

Quote:
Originally Posted by nht

True, but it's not a step back.

Yeah, it seems like it might turn out ok after all.
post #53 of 127
Quote:
Originally Posted by Marvin View Post

I wrote that with the intention of it coming across like it could never happen. Well, I think I just got one of the biggest shocks of my life :

http://www.engadget.com/2010/12/08/u...intel-sandy-b/
http://forum.notebookreview.com/alie...y-wanna-c.html

3DMark 06
Sandy Bridge IGP: 15,940
Radeon 6900M: 20,155
Geforce GTX 460M: 16,957

If those benchmarks are accurate, what you are looking at is an Intel IGP that is 3 times faster than the 320M and in the same performance range as class 1 NVidia and AMD GPUs. This is the higher TDP quad-core CPU but AFAIK, the GPU is the same.

How the f did they do that with 12 processing units?? The GTX has 192!!



Yeah, it seems like it might turn out ok after all.

That doesn't make sense.

If it's too good to be true, it probably isn't true.
post #54 of 127
Quote:
Originally Posted by nht View Post

Anand had doubts about it being a 6EU chip since it wasn't specified but I thought he was told turboboost was broken. Meh, I don't care enough to reread that article.



True, but it's not a step back. Intel IGP has been major suckage for a long long time and there was much deserved griping about the performance of the GMA 950 and GMA X3100 based macbooks.

Looks like you're going to be right about Apple going with the new Intel HD graphics only in some Macs. Let's hope you're right that it's a good idea.
post #55 of 127
Quote:
Originally Posted by backtomac View Post

That doesn't make sense.

If it's too good to be true, it probably isn't true.

Yeah, the test machine used NVidia Optimus to test the Sandy Bridge IGP so there's a possibility that 3DMark mixed up the scores somehow. If the IGP was that quick, the manufacturer would have no reason to include both unless the machine could use both in hybrid SLI.

It's not entirely unbelievable though. Like I say, if the Anand preview chip had 6 EUs and no Turbo boost, then it's possible. The test chip would have scored in the region of 4000-5000. If you double the GPU cores, you get 8000-10000. The shader clock ramps from 650MHz to 1300MHz in the i7 with Turbo boost enabled, so double again to 16000-20000 at maximum performance.

They said they used technology from Larrabee to make these IGPs so maybe they figured out how to do these things properly.

Still, this goes against the laws of nature so no, it's not possible and Intel's IGPs will always suck as they always have done.

AMD's Zacate only got 2135:

http://www.legitreviews.com/article/1470/7/
post #56 of 127
Quote:
Originally Posted by al_bundy View Post

Depends on the browser as well.

In the last year or two someone did a test and, believe it or not, IE came out as the most energy-efficient browser. Firefox was the worst. I forgot where Safari, Opera and Chrome ended up.

AnandTech has done several such tests. On Mac OS X, Safari is the most power-efficient. On Windows, IE is the most power-efficient. Obviously disabling Flash makes the browsers even more efficient.

Also, Mac OS X is more power efficient than Windows.
post #57 of 127
This is a good decision. Most people don't play 3D games and won't notice a slight reduction in graphics performance, but they might notice a (comparatively much larger) increase in CPU performance, and they will notice an increase in battery life.

OpenCL is never going to amount to anything on such low-end GPUs as the Nvidia IGPs we're discussing.
post #58 of 127
Quote:
Originally Posted by Marvin View Post

I wrote that with the intention of it coming across like it could never happen. Well, I think I just got one of the biggest shocks of my life :

http://www.engadget.com/2010/12/08/u...intel-sandy-b/
http://forum.notebookreview.com/alie...y-wanna-c.html

3DMark 06
Sandy Bridge IGP: 15,940
Radeon 6900M: 20,155
Geforce GTX 460M: 16,957

If those benchmarks are accurate, what you are looking at is an Intel IGP that is 3 times faster than the 320M and in the same performance range as class 1 NVidia and AMD GPUs. This is the higher TDP quad-core CPU but AFAIK, the GPU is the same.

How the f did they do that with 12 processing units?? The GTX has 192!!

Yeah, it seems like it might turn out ok after all.

No freaking way it's benching 16K with 3DMark 06. Okay, that's the i7 with more headroom, but it shouldn't be that much better than the i5 Anand tested, even with 6 more EUs and Turbo Boost. Something is borked with that score.
post #59 of 127
Quote:
Originally Posted by FuturePastNow View Post

This is a good decision. Most people don't play 3D games and won't notice a slight reduction in graphics performance, but they might notice a (comparatively much larger) increase in CPU performance, and they will notice an increase in battery life.

OpenCL is never going to amount to anything on such low-end GPUs as the Nvidia IGP's we're discussing.

I'm not sure I follow you here. Are you sure you aren't confusing OpenCL with OpenGL?

The promise of OpenCL is to unlock some of the power in the GPU for tasks other than garden-variety graphics processing.

Quote:
Originally Posted by en.wikipedia.org/wiki/OpenCL

OpenCL gives any application access to the Graphics Processing Unit for non-graphical computing. Thus, OpenCL extends the power of the Graphics Processing Unit beyond graphics (General-purpose computing on graphics processing units).

If Mac applications are going to start getting wide OpenCL support, it'll be because it's ubiquitous. Otherwise we're back to that whole chicken-vs.-egg thing; if only some Macs have OpenCL support, which developers are going to add it into the next version of their applications?
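
To make the "non-graphical computing" part concrete, here's a deliberately trivial, hypothetical OpenCL kernel; there's nothing graphics-related in it, it just scales and offsets an array with one work-item per element (the kernel name and arguments are made up for illustration).

Code:
/* Trivial OpenCL C kernel: pure data-parallel arithmetic, no graphics. */
__kernel void scale_offset(__global const float *in,
                           __global float *out,
                           const float scale,
                           const float offset)
{
    size_t i = get_global_id(0);   /* index of the element this work-item handles */
    out[i] = in[i] * scale + offset;
}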

post #60 of 127
Personally I think this is nonsense and somehow geared towards endorsing Intel and Sandy Bridge. The Sandy Bridge MBP 13" will have AMD lower-5000-series discrete GPUs. The power sacrifice would be fairly minimal; AMD has some excellent basic discrete GPUs that still blow Intel's BundleGate RubbishGPUs out of the water. The MBA will stay on Core 2, so that's the 320M for the foreseeable future, i.e. the first calendar half of 2011.

Fusion is not an option yet for the MBA; maybe, maybe the second calendar half of 2011.

My gut 2 cents.
post #61 of 127
Quote:
Originally Posted by Marvin View Post

I wrote that with the intention of it coming across like it could never happen. Well, I think I just got one of the biggest shocks of my life :

http://www.engadget.com/2010/12/08/u...intel-sandy-b/
http://forum.notebookreview.com/alie...y-wanna-c.html

3DMark 06
Sandy Bridge IGP: 15,940
Radeon 6900M: 20,155
Geforce GTX 460M: 16,957

If those benchmarks are accurate, what you are looking at is an Intel IGP that is 3 times faster than the 320M and in the same performance range as class 1 NVidia and AMD GPUs. This is the higher TDP quad-core CPU but AFAIK, the GPU is the same.

How the f did they do that with 12 processing units?? The GTX has 192!!

Yeah, it seems like it might turn out ok after all.

Those results don't make sense. Rough estimates from my readings online suggest Sandy Bridge IGPs fall between the Arrandale IGPs and the 9400M. Highly unlikely to be better.

BTW doesn't 3DMark06 reflect a combined CPU+GPU score? In which case the CPU score may be skewing things massively, or there are some other mistakes. What's the 3DMark06 GPU score? :confused: I don't have time to dive into this further.

I'll repeat my assertion: Sandy Bridge IGPs are still crap. Feel free to prove me wrong. (Not being sarcastic here)
post #62 of 127
Quote:
Originally Posted by Mynameisjoe View Post

I thought Intel and NVidia had settled their suit. When I did a search I didn't see anything definitive, but the trial was supposed to start on the 6th. Assuming a settlement is in place and it allows NVidia to make chipsets for Intel chips, Apple would be able to use both Intel's latest chips and NVidia's integrated graphics chips.

They settled the suit. But IIRC Nvidia came out with a statement that they weren't going to make chipsets for Nehalem, etc.

Edit: I see another poster mentioned this already. Oh well...
post #63 of 127
Quote:
Originally Posted by nvidia2008 View Post

Those results don't make sense. Rough estimates from my readings online reflect Sandy Bridge IGPs to be between the Arrandale IGPs and the 9400M. Highly unlikely to be better.

It depends if some tested chips didn't have finalised features though. One of the improvements to the new EUs is that they do transcendental Maths in hardware so sine and cosine on the GPU are "several orders of magnitude faster" than with Arrandale. That doesn't mean 70x but 10,000,000x faster (unless they used the terminology wrong).

Part of their design focus for SB was to put everything that didn't need to be programmable into fixed function hardware. This is what they did with OnLive. The real-time compression algorithms were tested on high-end Xeon workstations and maxed them out but by putting them in silicon, they matched the performance at a fraction of the cost and power consumption.

Their idea is that anything you need flexibility for, you do on the CPU and for the most common operations, you use fixed function. This might end up faster than GPGPU because instead of recoding entire blocks of code to send off to the GPU, you just use the fixed function parts of the CPU per line of code and the compiler or run-time can speed this up automatically - no recoding required and you also get process protection (no lockups like on a GPU).
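
To illustrate the "no recoding" side of that argument, here's a rough sketch, not anything from Intel: an ordinary loop like this can be auto-vectorized by a compiler targeting AVX (e.g. GCC with -O3 -mavx), whereas getting the same work onto a GPU means rewriting it as a kernel plus buffer setup and transfers. The function and variable names are made up.

Code:
/* Plain C: each iteration is independent, so an AVX-capable compiler can
   vectorize this loop automatically; no OpenCL kernel or data transfer needed. */
void brighten(float *pixels, int n, float gain)
{
    for (int i = 0; i < n; i++)
        pixels[i] *= gain;
}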

Quote:
Originally Posted by nvidia2008 View Post

BTW doesn't 3DMark06 reflect a combined CPU+GPU score? In which case the CPU score may be skewing things massively, or some other mistakes. What's the 3DMark06 GPU score? :confused: and don't have time to dive into this further.

The CPU scores are shown on the second link. They are fairly even between all the machines tested even though they have slightly different CPUs in each. The SM 2 & 3 scores matched the GTX quite closely. 3DMark might have picked up the GPU name wrongly or the machine was in hybrid SLI and 3DMark was only choosing the name of the IGP.

The CPU score does make it an unfavourable comparison to current GPUs in current machines as an i7 mobile CPU gets 3k and the new i7 was scoring 5k so overall scores would be lower by 2k but like I say, the comparison between the GPUs tested doesn't have that problem.

Intel have been demoing the chips themselves. Here with Starcraft 2:

http://www.youtube.com/watch?v=4ETnmGn8q5Q

and here with Mass Effect 2:

http://www.youtube.com/watch?v=OwW2Wc9XSD4

The comparison was between integrated and a mid-range discrete GPU. What's interesting there is that the fire on the right with SB looks like it's stuttering whereas it's smooth on the left and those do look like laptops behind the presenter. The power meter was showing Sandy Bridge at <50% of the other one. So given that SB has a 35W TDP, the other machine would be a laptop drawing 50-70W.

They said the effects were all set on maximum. I'm sure the 320M was capable of that but not at a very high resolution and with AA off and it wasn't all that smooth.

It just doesn't feel right though, it's like when we saw the IE9 benchmarks beating every other browser and they found out they were cheating by skipping some of the tests.

Anyway, if they manage to allow people to play Mass Effect 2 on entry laptops at maximum quality, I think that will be a good achievement as well as a 100% speedup over C2D.
post #64 of 127
Quote:
Originally Posted by nvidia2008 View Post

Those results don't make sense. Rough estimates from my readings online reflect Sandy Bridge IGPs to be between the Arrandale IGPs and the 9400M. Highly unlikely to be better.

BTW doesn't 3DMark06 reflect a combined CPU+GPU score? In which case the CPU score may be skewing things massively, or some other mistakes. What's the 3DMark06 GPU score? :confused: and don't have time to dive into this further.

I'll repeat my assertion: Sandy Bridge IGPs are still, crap. Feel free to prove me wrong. (Not being sarcastic here)

From the 3D Mark website:

"A 3DMark score is an overall measure of your systems 3D gaming capabilities, based on comprehensive real-time 3D graphics and processor tests. By comparing your score with those submitted by millions of other gamers you can see how your gaming rig performs, making it easier to choose the most effective upgrades or finding other ways to optimize your system."

So yes it is a combination of CPU and GPU. The real test will be if it is pulling similar gaming numbers and if the game features look as nice as the discrete GPUs.
post #65 of 127
Quote:
Originally Posted by Lukeskymac View Post

Exactly. I would rather see Apple going AMD than downgrading their graphics to Intel crap.

Agree. Apple's graphics hardware has always been the subject of much humor. The last thing we need is for a new computer to be going sideways or backwards when it should be moving forward.
post #66 of 127
Quote:
Originally Posted by John.B View Post

I'm not sure I follow you here. Are you sure you aren't confusing OpenCL with OpenGL?

I know exactly what OpenCL is, but thanks for assuming I don't.
post #67 of 127
Guys, we're way off base here. I've reviewed the data.


1. The Sandy Bridge IGP cannot possibly do 5k+ in those SM2 and SM3 scores, because 5K in SM2 and SM3 in 3DMark06 is what my overclocked ATI Radeon 4830 512MB can do. I can play Starcraft 2 on High settings at 1920x1080, Dirt 2 on max settings at 1920x1080, etc. It really is a capable card. The modern equivalent is an ATI Radeon 5750 or something like that. If an integrated on-die GPU in an Intel CPU can do that, AMD might as well set their headquarters on fire and collect the insurance money, because this would mean AMD Fusion is completely blown to hell.


2. Those Intel demos you linked to are extremely horrible and really nefarious.

A. The Mass Effect 2 demo is the starting scene where it looks like there are a lot of objects but it is just a short corridor with flame and other effect thingys. The fact that there is no camera movement, no interactivity, means that demo is really worthless. In fact, that scene of the game is one of the easiest to render. Most in-game scenes have 10x that complexity.

B. Starcraft 2, ditto, in SC2 you can dial the detail level way down, and there is not much camera movement and so on.

C. Putting those clips in a nice little window, while good for comparison, means they're probably running at 800x600 or 1024x768 which are extremely low resolutions.

D. Intel never specifies *what* discrete GPU they are comparing it to. Probably a low-end one, for sure.

E. Anyone that has played Mass Effect 2 and Starcraft 2 will know those Intel demos do not test real gameplay experience in any way.


3. Marvin, I appreciate you sourcing all the material, but I have to tell everyone, *this is not the GPU you are looking for* ~ without having to use any mind tricks. Intel's demos show about up to a Nvidia 9400M level of GPU capability, at the very, very best they could approach 320M level of GPU capability, but I highly doubt that.


4. At this stage I cannot but conclude that there is no way Sandy Bridge IGP can deliver anything more than the Nvidia 320M and as such if Apple were to use it, that would be a step backwards. For the MBP 13", I believe Apple will have to bite the bullet and stick a low-cost discrete GPU in there (probably ATI). MacBook Airs are fine because through 2011 I don't think they would get Core i-series chips (unless it is a rebranded Core 2).


5. Look at iMovie '11, for example, and the level of GPU computation required to render a lot of its graphics: I really can't see the Sandy Bridge IGP doing that, particularly for a "pro" Mac laptop. (And it is clear the GPU is very heavily leveraged, because this is what allows the MBA with the 320M to actually do iMovie '11 stuff with such a limited CPU - of course the final "render" of the iMovie edit will be much slower, but the MBA is highly responsive during editing, effects, transitions, etc.)



Quote:
Originally Posted by Marvin View Post

The CPU scores are shown on the second link. They are fairly even between all the machines tested even though they have slightly different CPUs in each. The SM 2 & 3 scores matched the GTX quite closely. 3DMark might have picked up the GPU name wrongly or the machine was in hybrid SLI and 3DMark was only choosing the name of the IGP.

The CPU score does make it an unfavourable comparison to current GPUs in current machines as an i7 mobile CPU gets 3k and the new i7 was scoring 5k so overall scores would be lower by 2k but like I say, the comparison between the GPUs tested doesn't have that problem.

Intel have been demoing the chips themselves. Here with Starcraft 2:

http://www.youtube.com/watch?v=4ETnmGn8q5Q

and here with Mass Effect 2:

http://www.youtube.com/watch?v=OwW2Wc9XSD4

The comparison was between integrated and a mid-range discrete GPU. What's interesting there is that the fire on the right with SB looks like it's stuttering whereas it's smooth on the left and those do look like laptops behind the presenter. The power meter was showing Sandy Bridge at <50% of the other one. So given that SB has a 35W TDP, the other machine would be a laptop drawing 50-70W.

They said the effects were all set on maximum. I'm sure the 320M was capable of that but not at a very high resolution and with AA off and it wasn't all that smooth.

It just doesn't feel right though, it's like when we saw the IE9 benchmarks beating every other browser and they found out they were cheating by skipping some of the tests.

Anyway, if they manage to allow people to play Mass Effect 2 on entry laptops at maximum quality, I think that will be a good achievement as well as a 100% speedup over C2D.
post #68 of 127
It really doesn't matter what Intel GPUs have been like in the past; you can only judge SB on what Apple ships with it. We can pretty much assume right off the bat that Apple drivers will be wanting, for example. As to Apple hardware, it wouldn't be impossible for them to ship hardware with the low-end Sandy Bridge GPU.
Quote:
Originally Posted by nvidia2008 View Post

Those results don't make sense. Rough estimates from my readings online reflect Sandy Bridge IGPs to be between the Arrandale IGPs and the 9400M. Highly unlikely to be better.

I'm not sure why you would say that! Seriously, Intel could simply have been embarrassed about its previous GPUs. Or it could be scared to death of AMD's coming Fusion products. I suspect that it has become obvious to the industry that 3D performance is a big deal these days and that in general the GPU plays a huge part in positive user experiences.
Quote:
BTW doesn't 3DMark06 reflect a combined CPU+GPU score? In which case the CPU score may be skewing things massively, or some other mistakes. What's the 3DMark06 GPU score? :confused: and don't have time to dive into this further.

I'll repeat my assertion: Sandy Bridge IGPs are still, crap. Feel free to prove me wrong. (Not being sarcastic here)

Until there are shipping Apple machines we simply don't know! I have zero faith in these mouthpieces talking about pre-release Intel hardware. They have no credibility at all in my mind. I do trust that Intel has completely overhauled Sandy Bridge and that as a result it is a much better upgrade than many new Intel releases.

As a side note, there may be concerns about how they are getting these good numbers with the limited execution units on the GPUs. This has me wondering what the device's clock rate is like. A fast clock might skew some benchmarks more than it should. We really need specifics on the hardware before we start to wonder about numbers.
post #69 of 127
Quote:
Originally Posted by FuturePastNow View Post

I know exactly what OpenCL is, but thanks for assuming I don't.

That's why I said I was confused:

Quote:
Originally Posted by FuturePastNow View Post

This is a good decision. Most people don't play 3D games and won't notice a slight reduction in graphics performance, but they might notice a (comparatively much larger) increase in CPU performance, and they will notice an increase in battery life.

OpenCL is never going to amount to anything on such low-end GPUs as the Nvidia IGP's we're discussing.

Then why wouldn't access to additional processing power -- via GPGPU -- be a good thing? If the 320M (or better) IGP is underutilized except when decoding HD content, why not use that for specific types of application processes that can benefit from the OpenCL architecture?

And I don't know that I agree that the Nvidia IGP is all that "low end" for a properly written OpenCL process.

Also (and I'm no expert here), don't our Windows-using brethren need a certain level of 3D support for the basic Aero interface in Win7? Isn't this a larger problem space for Intel than just smaller Apple notebooks?

post #70 of 127
However that doesn't mean that the GPU isn't significantly improved. We will not know until product ships.

Quote:
Originally Posted by nvidia2008 View Post

Guys, we're way off base here. I've reviewed the data.


1. Sandy Bridge IGP cannot possibly do 5k+ in those SM2 and SM3 scores. This is because 5K in SM2 and SM3 in 3DMark06 is what my overclocked ATI Radeon 4830 512MB can do. I can play Starcraft 2 on High settings at 1920x1080, Dirt 2 on max settings at 1920x1080, etc. It really is a capable card. Them modern equivalent is an ATI Radeon 5750 or something like that. If an integrated on-die GPU in an Intel CPU can do that, AMD might as well set their headquarters on fire and collect the insurance money, because this would mean AMD Fusion is completely blown to hell.

It wouldn't be impossible for the hardware or the drivers to be cooked for the benchmarks. It isn't like it hasn't been done in the past. We will only have an answer when we can run real hardware and software up against competing platforms.

One should not, however, discount sneaky approaches that are possible. For example, they could be making extensive use of the CPU vector units. While possibly valid, it would mean many CPU cycles wasted on graphics. With the high-speed buses and cache memory this could be a very real technique.

There are potentially a number of ways for Intel to be sneaky here yet at the same time be somewhat honest.
Quote:

2. Those Intel's demos as you linked to are extremely horrible and really nefarious.

As is much of what I've seen on the net recently. Much of the previewing appears to be paid marketing from Intel. This is why I stress the need for shipping hardware.
Quote:
A. The Mass Effect 2 demo is the starting scene where it looks like there are a lot of objects but it is just a short corridor with flame and other effect thingys. The fact that there is no camera movement, no interactivity, means that demo is really worthless. In fact, that scene of the game is one of the easiest to render. Most in-game scenes have 10x that complexity.

B. Starcraft 2, ditto, in SC2 you can dial the detail level way down, and there is not much camera movement and so on.

C. Putting those clips in a nice little window, while good for comparison, means they're probably running at 800x600 or 1024x768 which are extremely low resolutions.

D. Intel never specifies *what* discrete GPU they are comparing it to. Probably a low-end one, for sure.

E. Anyone that has played Mass Effect 2 and Starcraft 2 will know those Intel demos do not test real gameplay experience in any way.


3. Marvin, I appreciate you sourcing all the material, but I have to tell everyone, *this is not the GPU you are looking for* ~ without having to use any mind tricks. Intel's demos show about up to a Nvidia 9400M level of GPU capability, at the very, very best they could approach 320M level of GPU capability, but I highly doubt that.

It wouldn't be the first time things have been skewed to paint Intel hardware in a brighter light than it deserves. I suspect that in the end we will find that the hardware has an optimal set of features that work well but that overall the experience is very middle of the road.
Quote:
4. At this stage I cannot but conclude that there is no way Sandy Bridge IGP can deliver anything more than the Nvidia 320M and as such if Apple were to use it, that would be a step backwards. For the MBP 13", I believe Apple will have to bite the bullet and stick a low-cost discrete GPU in there (probably ATI). MacBook Airs are fine because through 2011 I don't think they would get Core i-series chips (unless it is a rebranded Core 2).

Yep a very real possibility. At best we may see corner cases where the GPU is pretty good.
Quote:

5. If you look at iMovie '11 for example, the level of GPU computation that is required to render a lot of the graphics, I really can't see Sandy Bridge IGP doing that, particularly for a "pro" Mac laptop. (And it is clear the GPU is very heavily leveraged because this is what allows the MBA with 320M to actually be able to do iMovie '11 stuff with such a limited CPU - of course the final "render" of the iMovie edit will be much slower, but the MBA is highly responsive during editing and effects, transitions, etc.)

The problem with a lot of these pre-release comparisons is that we don't know how the caches and high-speed buses will impact results. Well, that and the CPU/GPU code split. In the end SB is a whole new world and we may see the impact of its architecture in strange ways.
post #71 of 127
Quote:
Originally Posted by nvidia2008 View Post

Intel never specifies *what* discrete GPU they are comparing it to. Probably a low-end one, for sure.

Intel's demos show about up to a Nvidia 9400M level of GPU capability, at the very, very best they could approach 320M level of GPU capability, but I highly doubt that.

At this stage I cannot but conclude that there is no way Sandy Bridge IGP can deliver anything more than the Nvidia 320M and as such if Apple were to use it, that would be a step backwards.

I really can't see Sandy Bridge IGP doing that, particularly for a "pro" Mac laptop. (And it is clear the GPU is very heavily leveraged because this is what allows the MBA with 320M to actually be able to do iMovie '11 stuff with such a limited CPU - of course the final "render" of the iMovie edit will be much slower, but the MBA is highly responsive during editing and effects, transitions, etc.)

I found a clearer video of the demo and from the writing on the laptop around 0:28, it would appear they are using an unbranded Toshiba Satellite:

http://www.youtube.com/watch?v=7ImQ3...eature=related
http://us.toshiba.com/computers/lapt...660/A665-S6092

The graphics card would be the mid-range 330M that is used in the MacBook Pro. I definitely see some lag/stuttering in the Sandy Bridge one (1:00) vs the 330M, but during the presentation he says that the SB one is also simultaneously capturing HD in-game footage, presumably unlike the other one. Given that the Satellite only supports 720p, I'd guess that's what the games are running at.

That demo certainly looks a long way off the 3DMark 06 score and there's no way it comes close to a GTX 460M. However, it looks to be between a 320M and 330M and that's all they need for this demographic.

While it seems like a bit of a lack of progress for now, when Ivy Bridge hits at the end of the year, they can put in 18-24EUs with the smaller fabrication and then we're into the realm of the GTX 460M.

In terms of iMovie etc, they will be using Core Image, which leverages GLSL when available (which is supported by Intel's IGPs) and if not, falls back to the CPU. While it's true that there's more chance of falling back to the CPU than with a general compute GPU like AMD or NVidia, it's rarely if ever going to affect people who buy that grade of machine.

The MBP will still have the dedicated chip for compute and I would hazard a guess that since NVidia's chips haven't improved much with the 4xxM series that it will be the Radeon 6550M or 6570M:

http://www.notebookcheck.net/AMD-Rad...M.41143.0.html

The power consumption ranges between 11-30W and obviously Apple can throttle it down to stay below 20W or not bother and rely on their graphics switching tech. This performance is on par with a 5650 and exceeds the 330M by 5-20%.

It's not a nice-tasting pill to swallow, but I think that's it for NVidia. None in the iMacs or Mac Pros so far; with this change, it will be Intel IGP in the Mini, MB and MBA, and the MBP will have Intel IGP + a Radeon card.

Intel won, not by being better but by being bigger and pushing them out of the way. It had to happen but NVidia don't deserve to go out this way.
post #72 of 127
Quote:
Originally Posted by wizard69 View Post

Im not sure why you would say that! Seriously intel could simply have been embarrassed about it's previous GPUs. Or it could be scared to death of AMDs coming Fusion products. I suspect that it has become obvious to the industry that 3D performance is a big deal these days and that in general the GPU plays a huge part in positive user experiences.


Until there is shipping Apple machines we simply don't know! I have zero faith in these mouth pieces talking about pre release Intel hardware. They have no credibility at all in my mind. I do trust that Intel has completely overhauled Sandy Bridge and as a result it is a much better upgrade than many new Intel releases.


Intel has been saying all along that Sandy Bridge's GPU would be about twice as good as their current IGP. This link only shows 3DMark06 test scores unfortunately, but the 320M is almost 3 times better, so doubling would still fall short. I have seen other benchmarks that routinely have the IGP behind by more, with the 320M quadrupling its numbers. If they double and are still half as good as the 320M, how can that be anything but a step backwards?

Below are some links and a couple of comparative numbers based on some of the only games that were tested in common.

http://www.notebookcheck.net/Mobile-...ist.844.0.html

320m #123 on list 3DMark06 4155
9400m #173 on list 3DMark06 1348
GMA HD #191 on list 3DMark06 1503


Gaming performance: http://www.notebookcheck.net/Compute...s.13849.0.html

320M F.E.A.R. 49fps high Doom 3 124fps high
9400m F.E.A.R. 17fps high Doom 3 83fps high
GMA HD F.E.A.R. N/A high Doom 3 21fps high


Here are a few links showing figures for all 3 cards. Note that the GMA HD was incapable of even playing most of the games, including games like Starcraft 2. This is a game produced by Blizzard, who are known for having their games be extremely flexible as far as what kind of equipment they will run decently on.

320M http://www.notebookcheck.net/NVIDIA-...M.28701.0.html
9400m http://www.notebookcheck.net/NVIDIA-...G.11949.0.html
GMA HD http://www.notebookcheck.net/Intel-G...HD.9883.0.html


The benchmark numbers for SB being debated are highly questionable, and anyone who has kept up with the progression of these video cards at all will know this. So unless Intel magically in the last month managed to jack things from a 2x to a 4x improvement, it is still well behind the 320M.

Something else to keep in mind is that the GPU in the Sandy Bridge chip will NOT be a DX11 part, but Fusion is. Some will say no big deal, but with the recent Cataclysm expansion for World of Warcraft, if you enable the DX11 optimizations, you will receive a 20-30% performance boost. That is huge.
post #73 of 127
Quote:
Originally Posted by Josh2012 View Post

Will Intel's graphics at least match the 320M in the present 13" MacBooks?

This will be severely disappointing if Apple downgrades the GPU in the next update.

Not buying it; they made this mistake once already and had to backpedal on the move. If they switch to integrated Intel chips only, their competitors are going to have a marketing heyday with it.
post #74 of 127
Quote:
Originally Posted by John.B View Post

That's why I said I was confused:



Then why wouldn't access to additional processing power -- via GPGPU -- be a good thing? If the 320M (or better) IGP is underutilized except when decoding HD content, why not use that for specific types of application processes that can benefit from the OpenCL architecture?

And I don't know that I agree that the Nvidia IGP is all that "low end"' for a properly written OpenCL process.

Also (and I'm no expert here), don't our Windows-using brethren need a certain level of 3D support for the basic Aero interface in Win7? Isn't this a larger problem space for Intel than just smaller Apple notebooks?

More processing power is a good thing, in a global sense. I used to be a big proponent of GPGPU and I am glad that there's an open standard available for use.

But contrary to its name GPGPU is limited in what it can do, and you need a fast GPU to really show it off (most benchmarks you see are run on powerful graphics cards). Do you sacrifice general-purpose CPU power for a slight gain in, say, simulation of protein folding?

It's true that modern operating systems, including OS X, want a minimal level of graphics acceleration, though they all have a 2D fallback. Windows' Aero interface is supported by Intel graphics going back to the GMA 950, so Intel has had that covered for years. OS X actually does some things that Arrandale's "HD Graphics" doesn't like, but that's driver related. 3D support for operating systems just isn't a problem for Intel. They even accelerate HD video playback in hardware now.

IF Intel and Apple can turn out a decent driver, no one but gamers will notice if the MacBook is using Intel graphics, but a more modern CPU will benefit everybody.
post #75 of 127
Quote:
Originally Posted by Marvin View Post

It's not a nice-tasting pill to swallow but I think that's it for NVidia. None in the iMacs or Mac Pros so far, with this change, it will be Intel IGP in the Mini, MB, MBA and the MBP will have Intel IGP + a Radeon card.

Intel won, not by being better but by being bigger and pushing them out of the way. It had to happen but NVidia don't deserve to go out this way.

It's sad. Nvidia was kicking ass in the heady 6800, 8800, 9600, 9800 days. The awesome G92 chip that saw them so clearly dominate around 2006-2008. SLI innovation and Quadros.

Nvidia had three strikes that dealt them out of being serious competitors to Intel and AMD.

The first was post-G92, when they came up with the powerful but hot and heavy GTX260. While good for high-end gamers, they were never really able to bring it down to appropriate laptop GPUs, and relied on G92 derivatives well past their due date with excruciating rounds of rebadging 9800 etc. GPUs.

Next up was Fermi. Apparently it was meant for GPGPU, high-end computing etc., so, after much delay, it came out, and... was again too hot and heavy. They managed to carve out some nice GPUs like the GTX460, but again in the laptop space it was more derivatives and rebranding of who knows what, and not quite the killer discrete graphics at great performance-per-watt across the board.

Third and the real sucker punch was Intel shafting them by out-of-the-blue disallowing them from making Intel chipsets (and subsequently no more AMD chipsets with AMD buying ATI) and Intel force-bundling GPUs with all Intel CPUs... This pretty much destroyed both Nvidia chipset products as well as low-end Nvidia discrete graphics.

If not for Nvidia's extremely robust marketing, CUDA, their generally better-than-ATI PC drivers and Nvidia's very close relationships with top-class game development studios, Nvidia would be pretty much toast by now. I don't know about the financials but I think my username reflects when they were at their peak this decade.
post #76 of 127
Quote:
Originally Posted by FuturePastNow View Post

IF Intel and Apple can turn out a decent driver, no one but gamers will notice if the Macbook is using Intel graphics, but a more modern CPU will benefit everybody.

Depends... If Core Image/Core Graphics performs as well as, if not better than, a 320M on an Intel IGP, then that more modern CPU will certainly benefit everyone. If not, then, well, it's a sideways or backwards step for Macs. Have you seen iMovie '11? All that realtime rendering (and there is tons of it, especially in iMovie '11) ...is heavily GPU-based.

Core Image (not OpenCL or other GPGPU) is so critical to so much of what we do on a Mac. As seen in the MBA, real powerhouse CPUs are not relevant to most of the population. Not to mention SSDs can remove huge bottlenecks on the Mac.

It's an interesting time. CPU, GPU, APU, all battling it out.

Like I said, if I could get a MacBook Air 15" with no ODD, a Core i5 Sandy Bridge, an ATI 6800-series GPU with 1GB VRAM and a 320GB SSD, that would be sweeeeet and have all the benefits with not too much compromise.

Yet we should keep in mind the iPad will outsell the Mac in 2011 by a very large margin.

Our whole notion of computing is being reworked over the next few years.
post #77 of 127
One point that hasn't been raised is that the SB IGP is not on the other side of a PCIe bus. The discretes and other IGP designs sit on the other side of a PCIe bus port, even if they are on-chip in the memory controller. This represents a substantial bottleneck, particularly for OpenCL, where the data tends to want to go back and forth between the CPU and GPU (as opposed to graphics, where it all just goes out to the GPU). The SB IGP, therefore, will have an advantage if it is running OpenCL.

Will Intel (or Apple) support OpenCL on these devices? Can't say for sure, but consider that Intel was silent on OpenCL until introducing their x86 OpenCL alpha release (http://software.intel.com/en-us/arti...el-opencl-sdk/) last month. Given their job postings and Apple's interest in OpenCL, I think it's a good bet that the SB IGP will support it... and do so in a useful way.

In addition, Intel's x86 OpenCL implementation will clearly work very well on AVX, and since it is doing some aggressive SSE/AVX optimization it looks like it will perform at 4-8x current CPU OpenCL implementations. For many algorithms this makes it as fast as or faster than GPUs running the same code.
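
To put the PCIe point in concrete terms, here's a hypothetical sketch (not Intel's or Apple's code) of the round trip an OpenCL job makes. On a discrete card both copies below cross the PCIe bus; a CPU device or on-die IGP can make them far cheaper, or avoid them with CL_MEM_USE_HOST_PTR. All the objects are assumed to have been created elsewhere.

Code:
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

/* One compute round trip: copy in, run the kernel, copy back out. */
cl_int run_once(cl_command_queue queue, cl_kernel kernel,
                cl_mem buf_in, cl_mem buf_out,
                const float *host_in, float *host_out, size_t n)
{
    size_t bytes = n * sizeof(float);
    cl_int err;

    /* Host -> device copy (crosses PCIe on a discrete GPU). */
    err = clEnqueueWriteBuffer(queue, buf_in, CL_TRUE, 0, bytes, host_in, 0, NULL, NULL);
    if (err != CL_SUCCESS) return err;

    /* Launch one work-item per element. */
    err = clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);
    if (err != CL_SUCCESS) return err;

    /* Device -> host copy (crosses PCIe again on a discrete GPU). */
    return clEnqueueReadBuffer(queue, buf_out, CL_TRUE, 0, bytes, host_out, 0, NULL, NULL);
}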
post #78 of 127
Quote:
Originally Posted by nvidia2008 View Post

Yet we should keep in mind the iPad will outsell the Mac in 2011 by a very large margin.

Yes, and it'll be interesting to see if the next iPad will include multiple ARM cores or an OpenCL capable GPU.
post #79 of 127
It will likely become even more important in future updates to Mac OS.
Quote:
Originally Posted by FuturePastNow View Post

More processing power is a good thing, in a global sense. I used to be a big proponent of GPGPU and I am glad that there's an open standard available for use.

You should still be a big advocate.
Quote:
But contrary to its name GPGPU is limited in what it can do, and you need a fast GPU to really show it off (most benchmarks you see are run on powerful graphics cards).

Why of course you do: that is, you show off your technology on the best hardware available. It is the nature of showing off. However, that doesn't mean lesser cards can't effectively accelerate certain classes of problems. The problem just needs to map cleanly to the GPU.

You do highlight one issue, and that is the confusion some people have with the term GPGPU computing. The reality is that today it is anything but general purpose and instead is a great facility for accelerating certain problem sets. At least with today's processors you have to be able to justify the overhead of using something like OpenCL in a project. Where it is a win, it is often a win on modest GPUs.
Quote:
Do you sacrifice general-purpose CPU power for a slight gain in, say, simulation of protein folding?

I see this as BS! How many people do constructive protein folding on their PCs? GPGPU computing has a much wider array of viable applications than that. You try to reinforce this notion that GPGPU computing is of no use to the general population which I don't buy at all.
Quote:
It's true that modern operating systems- including OS X- want a minimal level of graphics acceleration, though they all have a 2D fallback. Windows' Aero interface is supported by Intel graphics going back to GMA 950, so Intel has had that covered for years. OS X actually does some things that Arrandale's "HD Graphics" doesn't like, but that's driver related. 3D support for operating systems just isn't a problem for Intel. They evem accelerate HD video playback in hardware now.

Frankly I have to disagree with you here too. There is a big (MASSIVE) difference between supporting something and having that something work well. To that point I'm not convinced that 3D is such a big issue for Mac OS right now. In the near future I would expect many GPU cycles to go to things like resolution independence and enhancements to things like preview in the finder.
Quote:
IF Intel and Apple can turn out a decent driver, no one but gamers will notice if the Macbook is using Intel graphics, but a more modern CPU will benefit everybody.

Again this is BS! People notice GPU performance more than just about anything else on a PC these days. A fluid user interface is pretty much expected, which a good GPU goes a long way to delivering. Think back a couple of years to when the first machines with Intel integrated GPUs came out; people rejected the machines right and left. Very few of those people were gamers, as if gaming was ever a big thing on the Mac. Macs, by their nature as "the graphical machine", need better than run-of-the-mill GPUs.

Now to the question of drivers and Sandy Bridge: I'm trying to keep an open mind here. The problem is Intel has a really bad history here. My big fear is that this is a slip backwards with respect to GPU performance and support of core technologies. I don't think anybody on this forum knows for sure what the situation is. For one thing, you can't reasonably be expected to trust sites with pre-release Intel hardware. The next issue is the drivers, which often vary drastically from what we see in the Windows world. Things can easily add up to crappy Mac OS performance for SB. I just hold out hope that SB won't be that bad GPU-wise.

I'm also not so technically illiterate as to not realize that integrated GPUs are the wave of the future. In the end there is more to gain by tightly coupling the GPU to the CPU than there is to lose, especially if AMD's vision with respect to Fusion ever matures. Eventually all or most of the obstacles to using the GPU to accelerate apps will be gone. Use of that hardware will become so transparent that we won't think of it as anything odd. In any event the coming SB and Fusion chips are just the start of a whole new generation of processors. Most reasonable people should be hopeful that they do well in their initial release.
post #80 of 127
Quote:
Originally Posted by Programmer View Post

Yes, and it'll be interesting to see if the next iPad will include multiple ARM cores or an OpenCL capable GPU.

Just from the standpoint of being competitive I would think that an SMP ARM processor is a requirement in iPad 2. Since Apple has been working with Imagination with respect to OpenCL support I would suspect that OpenCL will be supported on the SoC also.


Much of the infrastructure is already there in iOS so it won't be a big deal when it happens. Dual core will come because it is the low power way to increase performance. OpenCL will be there because it exposes hardware to apps so that it can be leveraged as needed.

More interesting will be how Apple implements the architecture of this new SoC. Will the GPU be an equal partner to the CPU? Will the GPU support threads or other partitioning? Will they implement an on-board cache or video RAM buffer? Lots of questions, but really this is what interests me about iPad 2: just what does Apple have up its sleeve with respect to the next iPad processor? Considering all the recent patents it could be a major advancement, or it could be another run-of-the-mill ARM SoC.