
Apple to use Intel's Sandy Bridge without Nvidia GPUs in new MacBooks - Page 3

post #81 of 127
Quote:
Originally Posted by nvidia2008 View Post

Have you seen iMovie '11? All that realtime rendering (and there is tons of it especially in iMovie 2011) ...is heavily GPU-based.

Here's something to check out. Scroll down to the last few paragraphs.

Quote:
Intel confirmed that Sandy Bridge has dedicated video transcode hardware that it demoed during the keynote. The demo used Cyberlink’s Media Espresso to convert a ~1 minute long 30Mbps 1080p HD video clip to an iPhone compatible format. On Sandy Bridge the conversion finished in a matter of a few seconds (< 10 seconds by my watch).
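Taking the quoted figures at face value, the rough arithmetic (mine, not Anand's) works out to better than six times real time:

\[
30\ \text{Mbit/s} \times 60\ \text{s} \approx 1800\ \text{Mbit} \approx 225\ \text{MB of source},
\qquad
\frac{60\ \text{s of video}}{<10\ \text{s of wall time}} > 6\times \text{ real time.}
\]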
post #82 of 127
I like seeing the anger, disappointment etc in this thread. Feel my pain.

The reason I am not posting is because John.B, nvidia2008 and nht are covering all the bullet points so well.

Intel's IGPs are crap, plain and simple. If they double performance every year for the next ten years, they will still be crap relative to all other options available. "Sucking less than previous Intel IGPs" does not qualify as an 'accomplishment.' It qualifies as Moore's Law plus clockspeed increases.

Benchmarks that make Intel IGP look good are somehow borked.

Anand's benchmarks were for a desktop (higher clocked), 12 EU version (mobile will be only 6).

OpenCL has left a yawning chasm that AMD alone can fill. Let's hope Apple carpe the diem.
post #83 of 127
Quote:
Originally Posted by 1337_5L4Xx0R View Post

Intel's IGPs are crap, plain and simple. If they double performance every year for the next ten years, they will still be crap relative to all other options available.

Where's the point where they become good enough that it doesn't matter though?

Right now, a 320M is about half of what an XBox 360 is capable of and it can play any game that's out today - that's about 3,500 graphically intense games. The highest end games push it to the limit but even on low quality, a game like Metro 2033 looks good:

http://www.youtube.com/watch?v=2hexGRRDZIY

Years ago, it used to matter a lot to get the highest GPU and CPU as their capability was so low. There also weren't that many decent games available. We're not at that stage any more.

If Intel manage to get in the region of the 320M with this revision, it's a side-step but Ivy Bridge could double it again within 12 months. Then you have XBox 360 performance in a laptop. Nobody complains about that quality.

You might say that in 2012, new consoles could arrive with new standards to meet but will the games really look much better than they do now and will PC versions be prevented from ramping down to suit lower hardware? I think not. From seeing the graphics quality that is in games right now compared to post-production graphics, I don't think we are much more than a couple of generations away from having enough visual quality that it doesn't matter if you only have a low-end GPU.

If you compare the entry iMac to the highest-end desktop card you can buy that is a single card and not two jammed into a single unit, the difference is just 2x. With the dual card models, it's still under 3x.

This means that you can either have a huge desktop tower consuming 200-300W running games at 50-75FPS with the fans blasting away or you have a Macbook Air running them at 25FPS consuming under 35W. The 320M stays extremely cool while playing.

Mafia 2 maximum quality + PhysX:
http://www.youtube.com/watch?v=N78L_iy7IZ8

Here's Mafia 2 medium quality out on a high-end GTX 480:
http://www.youtube.com/watch?v=iwGKpghgWoc

Here's the same game, same road on a Mac Mini 2010:
http://www.youtube.com/watch?v=xvOW3GzWAeo

Here's the iMac 4850 (similar to the 330M performance):
http://www.youtube.com/watch?v=MisekqMerEw

It's slightly hard to compare as the Mini one has an external camera but the experience is comparable.

Quote:
Originally Posted by 1337_5L4Xx0R View Post

Anand's benchmarks were for a desktop (higher clocked), 12 EU version (mobile will be only 6).

The desktop ones have the same clock as the comparable mobile ones and the mobile ones all have 12 EUs. It's the desktop models that come with either 6 or 12 EUs.
post #84 of 127
Fair points Marvin. But... For me, console quality graphics is a very low standard nowadays. I'm trying NFS Hot Pursuit on my PC and not only is it a shitty port with stupid unnecessary lag, but the overall 3D environment and quality really feels "cheap". Even CoD MW2 had some scenes where the background sky was clearly one pixellated single image plane. Consoles have really mangled PC gaming from its wonderful heights. Look at HAWX 2... To think that a flight sim in 2010 looks and plays like that is a little disturbing.

But your point about console gaming being the standard that everyone expects and enjoys is also true. There is a certain aesthetic and gameplay, once reached, that makes it acceptable and successful, despite technical deficiencies compared to full-fledged PC titles like Starcraft2.

One thing that kills me about consoles is the lack of proper antialiasing, which in most cases results in the PC port also lacking antialiasing, even though mid-range PC GPUs can easily chew on 2xAA or 4xAA at 1920x1080.

On another note, the 320M is impressive but I think 4x that is just about the point of being "good enough for most people not to have to care anymore". I'm obviously biased but I think the ATI 5850 for gaming performance and the Nvidia 460 for GPGPU etc. is the "benchmark" for 2011-2015. Any next-gen console would be killer using at least that level of graphics quality. Also, there is no getting around the fact that a dedicated 1GB of VRAM does absolute wonders for all kinds of stuff at 1080p.

I was initially skeptical about DX10, but implemented wisely and on decent hardware the shaders, aesthetic, realism, etc. are quite nice. DX11, while mainly focused on tessellation, is also quite impressive. Run Unigine and some of the latest DX11 titles on a very high-end PC and it is like going from standard def to HD; you really might not want to go back to anything else.

Gameplay is of course an entirely different issue. Cut the Rope on my iPad has been really quite good. Not so much Angry Birds - that's just pure digital cocaine - you get the rush but the edginess too when you can't complete a level!
post #85 of 127
Quote:
Originally Posted by 1337_5L4Xx0R View Post

I like seeing the anger, disappointment etc in this thread. Feel my pain.

Which should be saved until something ships! Otherwise we have people speculating on what they think a processor is capable of. If Apple ships a bunch of Mac Laptops which show a step backwards in performance then I might very well join you in releasing anger. However I'm not going to jump the gun on this issue and will instead simply wait for Apple to debut something.
Quote:
Originally Posted by 1337_5L4Xx0R View Post

The reason I am not posting is because John.B, nvidia2008 and nht are covering all the bullet points so well.

Intel's IGPs are crap, plain and simple. If they double performance every year for the next ten years, they will still be crap relative to all other options available. "Sucking less than previous Intel IGPs" does not qualify as an 'accomplishment.' It qualifies as Moore's Law plus clockspeed increases.

The current Intel IGPUs do suck, there is no way around that. All I'm suggesting is holding off on passing judgment on the coming SB chips. In the end you need an open mind until Intel convinces you to close it.
Quote:
Benchmarks that make Intel IGP look good are somehow borked.

Well I don't have much respect for the people doing the prerelease testing as I really think they are part of Intel's marketing campaign.

However what are you going to say if some respectable people get the hardware, test it and post results showing better than 320M performance? It is pretty silly to think that it is impossible for Intel to do a good GPU.
Quote:
Anand's benchmarks were for a desktop (higher clocked), 12 EU version (mobile will be only 6).

This is one of the guys I have trouble with.
Quote:

OpenCL has left a yawning chasm that AMD alone can fill. Let's hope Apple carpe the diem.

This is again a questionable point of view. AMD definitely has a plan that may or may not pan out. Intel on the other hand has decided to keep us in the dark.
post #86 of 127
Quote:
Originally Posted by nvidia2008 View Post

There is a certain aesthetic and gameplay, once reached, that makes it acceptable and successful, despite technical deficiencies compared to full-fledged PC titles like Starcraft2.

One thing that kills me about consoles is the lack of proper antialiasing, which in most cases results in the PC port also lacking antialiasing, even though mid-range PC GPUs can easily chew on 2xAA or 4xAA at 1920x1080.

I agree entirely. I think consoles have badly affected PC gaming simply because that's where the focus is now and PC gaming is optional. 007 Blood Stone is one of the worst game ports I've seen recently. Clearly it came to the PC as an afterthought.

But it's also to do with volume. The costs to make the game are so high ($100m) that to make a return on a $50 game, you have to sell over 2 million copies. Only big titles like CoD are doing this successfully. A lot of other studios are just dropping out. This is why the IGP market is so important because it makes up over 50% of all shipped computers (about 150 million target) vs a fraction of 30% for high-end gaming (probably 5% = 15 million people).
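To spell out the arithmetic behind those figures (my working; the roughly 300 million units/year total for PC shipments is my assumption, not something stated above):

\[
\$100\text{M} \div \$50 \approx 2\ \text{million copies just to break even (ignoring retail and platform cuts)},
\]
\[
0.5 \times 300\text{M} \approx 150\ \text{million IGP-only machines per year}, \qquad 0.05 \times 300\text{M} \approx 15\ \text{million high-end gaming buyers.}
\]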

Quote:
Originally Posted by nvidia2008 View Post

On another note, the 320M is impressive but I think 4x that is just about the point of being "good enough for most people not to have to care anymore". I'm obviously biased but I think the ATI 5850 for gaming performance and the Nvidia 460 for GPGPU etc. is the "benchmark" for 2011-2015.

I would agree that 4x current mobile hardware would suffice for pretty much anything you need visually. This could be a reality with Intel IGP in 3 years. We'll probably have 6 or even 8-core CPUs in the entry level by then.

Jan 2011: Sandy Bridge, dual-core, IGP = 320M, 32nm architecture
Jan 2012: Ivy Bridge, quad-core mobile, 2x IGP (24EUs) = GTX 460M, 22nm die-shrink
Jan 2013: Brickland, quad-core mobile, 3x IGP (24EUs) = GTS 250 desktop, 22nm architecture
Jan 2014: Brickland die-shrink, 6-core mobile (48EUs) = 5850/5870 desktop, 16nm die-shrink

The dedicated GPU will be dead in 2014 and I'd say that large towers will stop being manufactured for consumers. Obviously dedicated GPUs move on too, but the visual bar that nobody needs the 300W tower for will ensure that desktop sales are eroded to under 10% and we'll all have MBAs with a 1TB SSD, a 6-core CPU and an Intel IGP.

Quote:
Originally Posted by nvidia2008 View Post

Gameplay is of course an entirely different issue. Cut the Rope on my iPad has been really quite good. Not so much Angry Birds - that's just pure digital cocaine - you get the rush but the edginess too when you can't complete a level!

The good thing about the GPU plateau is that it allows more of a creative focus on content without having to worry that the game won't run on however many systems.
post #87 of 127
My Gawd. Case in point. Just installed COD Black Ops tonight. What an absolute bloody nightmare. First of all you have to go through Steam, which is normally fine, but internet is rubbish in my country.

More concerning though are the well-known lag issues due to something or other, even in single player. To date Treyarch has released a few updates and mentions that they are still "working on issues". Update after update later there is still some weird lagging, as though the CPU is pegged and stuttering.

The state of PC gaming is abysmal. The hardware power is there. But the whole Windows PC model combined with game development economics and/or rampant greed and/or challenges with piracy makes PC gaming one big pile of rubbish to wade through.

I am still desperate enough though that I might struggle through COD Black Ops because there is a gaming experience there not available on Mac or iPad or iPhone. Same for NFS Hot Pursuit ... 30 minutes a day of brainless racing is tolerable.

I'm ready to throw the PC out the window and get an Xbox 360. Worst thing is though, for who knows what reason it is not officially available in Malaysia, so the grey market... Well, it seems pretty scummy to me. Plus I have to get an HDMI converter or a 1080p HDTV and so on.

AAAAARARARARARARRRGGGHHHHHHHHHH

I really swear, Apple is so close to moving into console gaming, they should just make the jump and do it. Put PC gaming well and truly out of its misery. Oh, I just don't know anymore.
post #88 of 127
PC gaming may well pass, but to think that Apple will do it, to me, is utterly laughable. I'm bailing on Mac hardware this xmas, when I buy/build a PC that can actually competently run 3D apps. PC gaming is SOTA. Macs are, uh, behind the curve.
post #89 of 127
Quote:
Originally Posted by Marvin View Post

I agree entirely. I think consoles have badly affected PC gaming simply because that's where the focus is now and PCs gaming is optional. 007 Blood Stone is one of the worst game ports I've seen recently. Clearly it came to the PC as an afterthought.

I'm not a gamer so I don't care from that perspective. At least I never was, but Angry Birds and a few others on my iPhone are a big distraction. However I think this highlights at least one potential reason for Apple's disinterest in gaming on the Mac: the consoles have won.
Quote:
But it's also to do with volume. The costs to make the game are so high ($100m) that to make a return on a $50 game, you have to sell over 2 million copies. Only big titles like CoD are doing this successfully. A lot of other studios are just dropping out. This is why the IGP market is so important because it makes up over 50% of all shipped computers (about 150 million target) vs a fraction of 30% for high-end gaming (probably 5% = 15 million people).

The cost of production is likely another reason that gaming consoles are favored. Like it or not, developers need to make a buck or two, and on the PC there is no way to really do that.
Quote:
I would agree that 4x current mobile hardware would suffice for pretty much anything you need visually. This could be a reality with with Intel IGP in 3 years. We'll probably have 6 or even 8-core CPUs in the entry level by then.

Display tech has a very long way to go. Imagine a Retina-like resolution on a 35" monitor. Even at half of that you are still talking about a lot of pixels to push. I don't see the demand for faster GPUs stopping anytime soon.
Quote:
Jan 2011: Sandy Bridge, dual-core, IGP = 320M, 32nm architecture
Jan 2012: Ivy Bridge, quad-core mobile, 2x IGP (24EUs) = GTX 460M, 22nm die-shrink
Jan 2013: Brickland, quad-core mobile, 3x IGP (24EUs) = GTS 250 desktop, 22nm architecture
Jan 2014: Brickland die-shrink, 6-core mobile (48EUs) = 5850/5870 desktop, 16nm die-shrink

AMD has indicated that they intend to rev the GPU sections of their Fusion processors faster than the CPU portion. I actually think we will see a race here to put as much GPU into these processors as is possible. The demand is certainly there.
Quote:
The dedicated GPU will be dead in 2014 and I'd say that large towers will stop being manufactured for consumers. Obviously dedicated GPUs move on too but the visual bar that nobody needs the 300W tower for will ensure that desktop sales are eroded to under 10% and we'll all have MBAs with 1TB SSD a 6-core CPU and an Intel IGP.

I'm not sure I agree with the points above. For one, the process shrinks that allow for on-die GPUs also allow for far more capable discrete GPUs. I just don't see a need for an end to discrete GPUs. They may become more specialized and less mainstream but I'd be shocked if they were gone by 2014.

As for towers, there are lots of reasons to buy them, so I don't see them leaving either. On the other hand something like Apple's Mini will be an extremely powerful platform when the 22nm processes hit. More so, we are entering the age where a ten watt computer might actually be very useful. This is the flip side of the process improvements we have seen: having a device optimized for low power doesn't mean that it has to suck.

This brings us back to Apple's laptops and the rumors of Sandy Bridge-only GPUs in the MacBook. I know this might not be the case for many in this thread, but a lower-power chip, that is one that runs cooler and has a long battery life, may be a very hot seller. The reality is many MacBook owners don't care about 3D.
Quote:


The good thing about the GPU plateau is that it allows more of a creative focus on content without having to worry that the game won't run on however many systems.

This is likely why Apple has had so much success with iOS devices. The capabilities from one model to the next are clearly defined. There is little mystery about how an iPhone 4 performs compared to an iPad. However I don't see a plateau as much as I see clear and well-defined rungs on a ladder. For example I'm expecting a clear jump in performance when iPad 2 comes out. This ends up being another incremental rung on the ladder to higher performing systems.

On the Macs, especially the MacBooks, I don't see a plateau either. The transition to Sandy Bridge IGP (if it actually happens) is just that: a transition. It is something that has to happen and as you note there will be rapid advancements in performance afterward. Again this can be seen as a rung on a ladder, but it is a new ladder going in a slightly different direction.

All of this hand-wringing about the move to Intel integrated graphics will likely be over in a few weeks. We will either have or not have laptop systems based on Sandy Bridge; if we do, we can get a clearer picture of just what the goods and bads are. Due to the new architecture I'm expecting both outstanding good points as well as a few bad points. Will the new processor make for a good gaming platform? I kinda doubt it, but that is not what most people are looking for.
post #90 of 127
Quote:
Originally Posted by wizard69 View Post

Display tech has a very long way to go. Imagine a Retina-like resolution on a 35" monitor. Even at half of that you are still talking about a lot of pixels to push. I don't see the demand for faster GPUs stopping anytime soon.

The current iMac display could probably be considered a retina display already because you sit 30" away from it not 12" like a phone. So if you have 960 x 640 on a 3.5" screen at 12" away, you only need the iPad's 132ppi density for a computer. So a 21.5" would need 2400 x 1600, the 27" would need 3000 x 2000 and a 35" would need 3800 x 2500 or thereabouts. That's only about 15% more than we have now.
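The scaling Marvin is using can be written out explicitly (a back-of-the-envelope check on my part, assuming perceived sharpness depends only on pixels per degree of visual angle):

\[
\frac{\sqrt{960^2 + 640^2}}{3.5''} \approx 330\ \text{ppi at } 12'', \qquad 330\ \text{ppi} \times \frac{12''}{30''} \approx 132\ \text{ppi at } 30'',
\]

which is essentially the iPad's 132 ppi density he then multiplies across the larger panel sizes.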

You don't need to push that many pixels in a game though because the textures would have to be that high too to be worthwhile and the game itself would be huge. Given that consumer movies won't go beyond 1080p, that can be the benchmark for games too and the 5850 handles that easily with Bad Company 2 at maximum and 4xAA. There's not many places to go when a card can play a game like the following:

http://www.youtube.com/watch?v=7HGqsKjDOLc

Quote:
Originally Posted by wizard69 View Post

I just don't see a need for an end to discrete GPUs. They may become more specialized and less mainstream but I'd be shocked if they were gone by 2014.

Right now 70% of all machines shipped are laptops, over 50% of all machines only have IGPs. That trend towards mobile is increasing. Some research suggests desktops won't drop to 10% in 2014 like I suggested but 19%:

http://techcrunch.com/2010/06/17/for...sell-netbooks/

A portion of those machines will still push the higher end GPUs for the remaining enthusiasts but I don't think that market will be big enough to sustain the companies developing the dedicated GPUs. As the market lowers, the prices have to go up, which continues the market decline until they give up. AMD's tagline is "the future is fusion" - the only way to survive is if you have both CPU and GPU together. Sadly for NVidia, they only have one of the parts they need, Intel and AMD have both.

Quote:
Originally Posted by wizard69 View Post

On the Macs, especially the MacBooks, I don't see a plateau either. The transition to Sandy Bridge IGP (if it actually happens) is just that: a transition. It is something that has to happen and as you note there will be rapid advancements in performance afterward. Again this can be seen as a rung on a ladder, but it is a new ladder going in a slightly different direction.

I think there is a trend towards a plateau. A certain level of performance beyond which, there's no demand for it. Inevitably come the quotes from people who have said 'no one will ever need more than x' and have been proved wrong but we can see it today with the rise in popularity of ultra-mobile devices. You see it in TVs, which are at a point where you walk into a store and you have screens bigger than you are and you'd likely opt for one you can actually get into your car.

When it comes to machines, it's a case of 'what's the cheapest machine I can get that does what I need'. If you're a casual gamer, you can get away with the lowest end Macbook Air:

http://www.youtube.com/watch?v=RmtBXr9VH4s
http://www.youtube.com/watch?v=m-bqbFRsgcI

Quote:
Originally Posted by wizard69 View Post

Will the new processor make for a good gaming platform? I kinda doubt it, but that is not what most people are looking for.

If it can play Starcraft 2 and Mass Effect 2 in the way they demoed, I'd say it will be a good gaming platform just like the 320M machines.
post #91 of 127
Quote:
Originally Posted by xSamplex View Post

Since most Apple customers are not very technically sophisticated, this story is irrelevant for the majority.

Neither are most PC users.
post #92 of 127
Quote:
Originally Posted by backtomac View Post

That doesn't make sense.

If it's too good to be true, it probably isn't true.

What I've been reading on this is that it should equal low-end discrete GPUs. That would be very good, but it doesn't mean it will run OpenCL, which is even more important to Apple.
post #93 of 127
Quote:
Originally Posted by Marvin View Post

Intel won, not by being better but by being bigger and pushing them out of the way. It had to happen but NVidia don't deserve to go out this way.

NVidia has made plenty of mistakes. They didn't exactly endear themselves to manufacturers over the past three years with their lying about their GPU board problems that cost manufacturers hundreds of millions.

They have also borked up demos. A few years ago they were found to be cheating by cropping off-screen computations that were supposed to be done.

In addition, I trust Intel's statements and demos more than those from AMD. Coming off their performance a couple of years ago, I find it hard to believe anything they say or do.
post #94 of 127
Quote:
Originally Posted by Marvin View Post

If it can play Starcraft 2 and Mass Effect 2 in the way they demoed, I'd say it will be a good gaming platform just like the 320M machines.

Mass Effect 3. Already done with 2.

I hope there's a Sandy Bridge mini...
post #95 of 127
Quote:
Originally Posted by melgross View Post

In addition, i trust Intel's statements and demos more than those from AMD. Coming off their performance a couple of years ago, I find it hard to believe anything they say or do.

I have to ask exactly what you're referring to here.
post #96 of 127
Quote:
Originally Posted by Marvin View Post

The current iMac display could probably be considered a retina display already because you sit 30" away from it not 12" like a phone. So if you have 960 x 640 on a 3.5" screen at 12" away, you only need the iPad's 132ppi density for a computer. So a 21.5" would need 2400 x 1600, the 27" would need 3000 x 2000 and a 35" would need 3800 x 2500 or thereabouts. That's only about 15% more than we have now.

Sadly my ability to see pixels gets worse every year. That being said, I still see room for improvement in desktop displays.
Quote:
You don't need to push that many pixels in a game though because the textures would have to be that high too to be worthwhile and the game itself would be huge. Given that consumer movies won't go beyond 1080p, that can be the benchmark for games too and the 5850 handles that easily with Bad Company 2 at maximum and 4xAA. There's not many places to go when a card can play a game like the following:

http://www.youtube.com/watch?v=7HGqsKjDOLc



Right now 70% of all machines shipped are laptops, over 50% of all machines only have IGPs. That trend towards mobile is increasing. Some research suggests desktops won't drop to 10% in 2014 like I suggested but 19%:

The trend to mobile is there but it isn't towards laptops. With the advent of smart phones and tablets I could see many people moving back to desktop machines, with the desktop being the primary computer and the smart phone or tablet being the mobile solution.
Quote:
http://techcrunch.com/2010/06/17/for...sell-netbooks/

A portion of those machines will still push the higher end GPUs for the remaining enthusiasts but I don't think that market will be big enough to sustain the companies developing the dedicated GPUs. As the market lowers, the prices have to go up, which continues the market decline until they give up. AMD's tagline is "the future is fusion" - the only way to survive is if you have both CPU and GPU together. Sadly for NVidia, they only have one of the parts they need, Intel and AMD have both.

NVidia is in a very tough but not impossible situation. A better management team might save the company. As to high-end GPUs, demand may come to an end some day; I just think 2014 is a little early.
Quote:
I think there is a trend towards a plateau. A certain level of performance beyond which, there's no demand for it.

I think this is what I disagree with the most. In fact I would go so far as to say we need far more powerful computers to enable a new generation of software and capabilities.

Maybe I'm biased as my old 2008 MBP is too sluggish at times. Of course that could be OS X, but sometimes the machine seems to have a mind of its own.
Quote:
Inevitably come the quotes from people who have said 'no one will ever need more than x' and have been proved wrong but we can see it today with the rise in popularity of ultra-mobile devices. You see it in TVs, which are at a point where you walk into a store and you have screens bigger than you are and you'd likely opt for one you can actually get into your car.

The interesting thing here is that computer performance does not equate with size. There may be a good reason for that large screen purchase. An ultra-mobile device likewise serves specific needs. However a smart phone is not a substitute for a desktop computer.

At least not yet!
Quote:

When it comes to machines, it's a case of 'what's the cheapest machine I can get that does what I need'. If you're a casual gamer, you can get away with the lowest end Macbook Air:

http://www.youtube.com/watch?v=RmtBXr9VH4s
http://www.youtube.com/watch?v=m-bqbFRsgcI



If it can play Starcraft 2 and Mass Effect 2 in the way they demoed, I'd say it will be a good gaming platform just like the 320M machines.

Well again I'm not much of a gamer so I have to take your word for it.
post #97 of 127
Quote:
Originally Posted by 1337_5L4Xx0R View Post

Intel's IGPs are crap, plain and simple. If they double performance every year for the next ten years, they will still be crap relative to all other options available.

If they double yearly for 10 years, that would mean the Intel IGP of 2020 will be 1024x more powerful (2^10) than the current IGP. I don't think that is an achievable improvement level, but if it is, we all win.
post #98 of 127
Intel SB GPUs support OpenGL 3, OpenCL 1.1 and DirectX 10.1.
They are also about 2x more powerful than Arrandale's GPU.
post #99 of 127
Quote:
Originally Posted by 1337_5L4Xx0R View Post

Anand's benchmarks were for a desktop (higher clocked), 12 EU version (mobile will be only 6).

You're completely wrong. The desktop versions will have 6 EUs, but mobile will have 12.
Strange (desktops are usually more powerful than mobile) but true!

http://www.anandtech.com/show/3876/i...bridge-part-ii

Quote:
Originally Posted by Anandtech

The major difference between mobile Sandy Bridge and its desktop counterpart is all mobile SB launch SKUs have two graphics cores (12 EUs), while only some desktop parts have 12 EUs (it looks like the high-end K SKUs will have it).
post #100 of 127
Quote:
Originally Posted by MacFinder View Post

You're completely wrong. The desktop versions will have 6 EUs, but mobile will have 12.
Strange (desktops are usually more powerful than mobile) but true!

I don't think it's that strange. If 70% of the computing market is mobile now, and there aren't exactly many aftermarket video options for laptops, then it makes perfect sense to make sure that the laptop version has as beefy a video option as possible. The desktop will be getting bypassed by people buying aftermarket videocard options in many cases.
post #101 of 127
Quote:
Originally Posted by nht View Post

Mass Effect 3. Already done with 2.

I hope there's a Sandy Bridge mini...

Same here, it would be nice if SB can run ME3 sufficiently. One day I'm sure games will look like the good trailers too:

http://www.youtube.com/watch?v=qZGFjBmD41Q

I know that contradicts the near-term plateau I mentioned but it has to happen somewhere between 2014-2018 because they are reaching the fabrication limits for electronic components. They can make the die bigger/stacked perhaps, switch to optical transistors or find ways to ramp up the clock speeds without overheating the chips but they will run into issues.

It'll be interesting to see what happens when we reach that point because regardless of demand, they might not be able to make better chips.

Quote:
Originally Posted by FuturePastNow

I have to ask exactly what you're referring to here.

I was thinking the same thing. When the GMA 950 came to replace the Radeon 9200 etc, Intel and partners were trying to claim that IGPs weren't like IGPs of the past, which turned out to be wrong. They were slow, incompatible with poor feature support and far outperformed by the competition. This has happened with every GPU they've ever made.

With CPUs, they certainly outperform the competition though so I guess that's something but they have a lot of smoke and mirrors like their vanishing act with Larrabee. So much hype and it didn't amount to what they said it would (when they said it would anyway).

They made good moves with SSD but still under-delivered on sequential write. Light Peak will probably be good though.

Quote:
Originally Posted by wizard69

In fact I would go so far as to say we need far more powerful computers to enable a new generation of software and capabilities.

Web-based software certainly requires more capable hardware to be able to run Javascript and other content quickly enough so that will drive hardware forward.

Quote:
Originally Posted by wizard69

If they double yearly for 10 years, that would mean the Intel IGP of 2020 will be 1024x more powerful (2^10) than the current IGP. I don't think that is an achievable improvement level, but if it is, we all win.

They tend to double every 2 years so in 10 years, it's only 32x faster and the manufacturers will hit the manufacturing difficulties before then. It will fall short of current render farms:

http://www.slashfilm.com/cool-stuff-...s-renderfarms/

but it should be plenty for photoreal rendering in real-time. The important drive needs to be on results and needs both software and hardware to be implemented smartly, not just with speedups on both sides. That's one impressive part about SB putting very fast fixed function architecture in there. Once people decide on the best algorithms, they should just put them in silicon to make them thousands of times faster. At first it seems like general purpose solutions would be better as those setups are more flexible but post-production rendering and games and mathematical algorithms are being refined to a point where people know what algorithms to run, they just need it done quickly.

As long as they don't exclude general purpose capability, I think that will end up being the best design.
post #102 of 127
Quote:
Originally Posted by melgross View Post

Neither are most PC users.

At least based on percentages. There are far more people that don't have a clue as to what a PC is or how it does what it does. All they care about is the web and e-mail.

Apple on the other hand attracts a lot of UNIX holdouts and others skilled in using the computer as a lever to advance their businesses.

Dave
post #103 of 127
Quote:
Originally Posted by Marvin View Post

Same here, it would be nice if SB can run ME3 sufficiently. One day I'm sure games will look like the good trailers too:

http://www.youtube.com/watch?v=qZGFjBmD41Q

I know that contradicts the near-term plateau I mentioned but it has to happen somewhere between 2014-2018 because they are reaching the fabrication limits for electronic components. They can make the die bigger/stacked perhaps, switch to optical transistors or find ways to ramp up the clock speeds without overheating the chips but they will run into issues.

It'll be interesting to see what happens when we reach that point because regardless of demand, they might not be able to make better chips.

Long term there will likely be a transition to another fabrication method. However there is enough potential in the short term to give us significant improvements out to 2020. I've been seeing some really impressive numbers for the 28 nm node, like a 40% drop in static power and 50% improvements in performance for a given power level. This is most interesting in the context of Apple's ARM-based hardware, in my mind; it is just delightful that we may have more than a 2X improvement in iPhone/iPad processors in the very near future and 2X again a generation later.

There is lots of interesting technology coming down the pipes to truly enable a new generation of portable devices. Just yesterday I was reading an interesting article in Photonics about a way to get rid of the color dyes in LCD panels. As nice as the current generation of LCDs is, the future is still very bright, with many competing paths to even better displays. Likewise I don't see many dead ends in the field of semiconductor fabrication; once lithography meets its limit we will likely transition to something different like self-assembling systems built with proteins.
Quote:
I was thinking the same thing. When the GMA 950 came to replace the Radeon 9200 etc, Intel and partners were trying to claim that IGPs weren't like IGPs of the past, which turned out to be wrong. They were slow, incompatible with poor feature support and far outperformed by the competition. This has happened with every GPU they've ever made.

This is all true, but why do people think Intel will continue to screw up GPUs? After a while it has to be embarrassing to Intel, so eventually they will have to fix the issue. Indeed they will have to, or give up the market to AMD. The reality is the GPU is more important than the CPU these days.
Quote:
With CPUs, they certainly outperform the competition though so I guess that's something but they have a lot of smoke and mirrors like their vanishing act with Larrabee. So much hype and it didn't amount to what they said it would (when they said it would anyway).

They made good moves with SSD but still under-delivered on sequential write. Light Peak will probably be good though.

Actually I was under the impression that Intel's SSDs are well respected. No product can be expected to lead in every performance category every time.
Quote:
Web-based software certainly requires more capable hardware to be able to run Javascript and other content quickly enough so that will drive hardware forward.

Well that wasn't what I was thinking about, as that is today's technology. I was thinking more along the lines of intelligent agents and AIs. Sort of like what was seen in that old video of the Apple Navigator, or whatever that little tablet was called.
Quote:
They tend to double every 2 years so in 10 years, it's only 32x faster and the manufacturers will hit the manufacturing difficulties before then. It will fall short of current render farms:

http://www.slashfilm.com/cool-stuff-...s-renderfarms/

but it should be plenty for photoreal rendering in real-time. The important drive needs to be on results and needs both software and hardware to be implemented smartly, not just with speedups on both sides. That's one impressive part about SB putting very fast fixed function architecture in there. Once people decide on the best algorithms, they should just put them in silicon to make them thousands of times faster. At first it seems like general purpose solutions would be better as those setups are more flexible but post-production rendering and games and mathematical algorithms are being refined to a point where people know what algorithms to run, they just need it done quickly.

As long as they don't exclude general purpose capability, I think that will end up being the best design.

The problem with fixed functions is that you are locked in to what is current technology. A good example here is the video decoders seen in modern hardware. All is great until a new codec becomes the poster child for video. Then you are back to software decoding.

As to all this special-purpose or fixed-function hardware, it does have me wondering what Apple is up to with the iPad processor. They have patented over the years a lot of ideas that can only be realized in hardware that we have yet to see. It would be very interesting to see such ideas put into silicon, thus I'm wondering if part of the reason for going to ARM was to have a platform upon which to realize some of their ideas in hardware.
post #104 of 127
Quote:
Originally Posted by wizard69 View Post

As to all this special-purpose or fixed-function hardware, it does have me wondering what Apple is up to with the iPad processor. They have patented over the years a lot of ideas that can only be realized in hardware that we have yet to see. It would be very interesting to see such ideas put into silicon, thus I'm wondering if part of the reason for going to ARM was to have a platform upon which to realize some of their ideas in hardware.

This is a long complex discussion I would love to get involved in, but it's just too much overall.

But I would like to comment on this last. It seems interesting that Unix-based systems (Mac OS X and iOS) and Linux-based systems (Android being the most important one right now) can be made so much more efficient than Microsoft-designed systems. Even their mobile OSes, despite not being Windows based and being written with low-power CPUs in mind, seem to be much less efficient. Windows is a dog. It requires much more RAM and much more powerful CPUs, but doesn't perform well in netbooks either. Getting it to perform with an ARM chip might prove impossible.

So Apple has chosen ARM, because it works very well there. Of course, OS X is designed to work best with at least two cores. I don't know how that relates to iOS, but if that's true there as well, we can expect some substantial improvements next year. Throw in a VR 740 or so, and things could really fly. With their own optimizations of the chips, battery life should remain good, as these two-core chips are already supposed to be equal in that respect to the older Cortex-A8 single core.

My iPad is already as powerful as my 700 MHz G4 Audio in many respects, and I got a lot of work done with that. A two-core model with out-of-order execution will be a lot more powerful.

Things are going well!
post #105 of 127
Mel.

It's clear you're posting from an iPad. The typos give it away.
post #106 of 127
There is no ending because technology is constantly changing.
Quote:
Originally Posted by melgross View Post

This is a long complex discussion I would love to get involved in, but it's just too much overall.

But I would like to comment on this last. It seems interesting that Unix-based systems (Mac OS X and iOS) and Linux-based systems (Android being the most important one right now) can be made so much more efficient than Microsoft-designed systems. Even their mobile OSes, despite not being Windows based and being written with low-power CPUs in mind, seem to be much less efficient. Windows is a dog. It requires much more RAM and much more powerful CPUs, but doesn't perform well in netbooks either. Getting it to perform with an ARM chip might prove impossible.

Considering UNIX and the like had its roots way back at the dawn of the minicomputer revolution, it shouldn't be a surprise that these systems have less of a demand on hardware. In Apple's case though, they are not afraid to drop backwards compatibility over time. Microsoft on the other hand has until recently tried to maintain backwards compatibility going way back.

I know lots of people complain about Apple dropping Carbon and support for other software, but the reality is it keeps Mac OS slim.
Quote:

So Apple has chosen ARM, because it works very well there

Yes, of course for the portable market ARM is your only choice. But that isn't exactly what I was getting at. Apple could just as easily purchase ARM SoCs from any number of suppliers. Instead they chose to roll their own. Even that isn't all that exciting as lots of companies license ARM cores. Apple on the other hand has gone a step further, from what I understand, and has a much more involved license giving them access to the IP to morph the processor to their will.

It is my understanding that only a very few companies go to the trouble of licensing an ARM core to this extent. The obvious thought (in my vivid imagination) is that they are about to customize the processor to an extent not seen lately.
Quote:
Of course, OS X is designed to work best with at least two cores. I don't know how that relates to iOS, but if that's true there as well, we can expect some substantial improvements next year.

A lot of the infrastructure is already there to leverage more cores.
Quote:
Throw in a VR 740 or so, and things could really fly. With their own optimizations of the chips, battery life should remain good, as these two-core chips are already supposed to be equal in that respect to the older Cortex-A8 single core.

The 28nm node is becoming very very interesting. Some reports have indicated a 45% drop in static power and much faster circuitry. I'm not sure that Apple can field a 28 nm SoC next year, but it is clear that there is a long way to go as far as dropping power while upping performance.
Quote:

My iPad is already as powerful as my 700 MHz G4 Audio in many respects, and I got a lot of work done with that. A two-core model with out-of-order execution will be a lot more powerful.

Things are going well!

As an old owner of a Mac Plus I understand 100% what you are saying, as my iPhone impresses the heck out of me every day. Functionally that iPhone is light years ahead of the Mac Plus and a bunch of other computers I've had over the years. In this case we are talking a 3G, so it isn't even up to date performance-wise at all. It isn't too much to expect that iPad 2 will come close to effectively outperforming my current 2008 MBP. Maybe not in raw CPU performance, but certainly it has the potential to be a better machine than my MBP for many tasks.

As a side note I was watching the video of the RIM exec demoing a Playbook earlier today. Some have dismissed RIM already but I was left with the impression that they will have a very good product that will be leveraging dual core processors and other advancements. The product demo was good enough to draw my interest in the Playbook which is more than any of the Android platforms have done. In any event the Playbook highlights where all these low power technologies are taking us.
post #107 of 127
Quote:
Originally Posted by wizard69 View Post

More interesting will be how Apple implements the architecture of this new SoC.

Will the GPU be an equal partner to the CPU?

It pretty much is already.

Quote:
Will the GPU support threads or other partitioning?

Not for a generation or two.

Quote:
Will they implement an on board cache or video RAM buffer?

What makes you think there isn't some already?

Quote:
Lots of questions, but really this is what interests me about iPad 2, that is, just what does Apple have up its sleeve with respect to the next iPad processor. Considering all the recent patents it could be a major advancement or it could be another run-of-the-mill ARM SoC.

Keep in mind that hardware development pipelines are years long. No telling when patents will materialize in product.

Fortunately we know that iOS 4 is based on Snow Leopard and includes GCD, blocks and therefore has many of the pieces required for OpenCL. The current GPU hardware is just not flexible enough. The next generation Imagination parts are. And the next generation ARM cores are multi-core capable (and faster individually).
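For anyone who hasn't touched the GCD/blocks pieces mentioned above, here is a minimal C sketch of the parallelism primitive that shipped with Snow Leopard and iOS 4 (my illustration only; it is not Apple's OpenCL plumbing, and the per-item work is a placeholder):

```c
/* Minimal GCD + blocks sketch (compile on OS X with: clang demo.c -o demo) */
#include <dispatch/dispatch.h>
#include <stdio.h>
#include <math.h>

int main(void) {
    const size_t n = 8;
    float results[8];
    float *out = results;   /* blocks can't capture arrays directly, so capture a pointer */

    /* A system-managed concurrent queue; GCD decides how many cores to use. */
    dispatch_queue_t queue =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    /* dispatch_apply runs the block n times, in parallel where possible,
       and returns only once every iteration has finished. */
    dispatch_apply(n, queue, ^(size_t i) {
        out[i] = sqrtf((float)i);   /* stand-in for real per-item work */
    });

    for (size_t i = 0; i < n; i++)
        printf("item %zu -> %f\n", i, results[i]);
    return 0;
}
```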

RIM might be bragging about how they'll really fly when dual core arrives (sounds like an excuse to me), but they had better be careful touting that one because Apple has far more experience (and shipping code) than they do at this parallelism stuff.
post #108 of 127
Quote:
Originally Posted by AppleInsider View Post

Future MacBooks set to arrive in 2011 will rely on Intel's forthcoming Sandy Bridge processor, which means Nvidia's graphics processors will not be included in at least some models 13 inches and under, according to a new report.

Citing anonymous sources, Cnet on Thursday said that MacBook models with screen sizes of 13 inches and under will switch to Sandy Bridge-only graphics. Apple's larger, higher-end MacBooks, with screen sizes of 15 and 17 inches, will allegedly rely on GPUs from AMD.

"Adoption of Sandy Bridge in popular small MacBook designs would constitute one of the strongest endorsements of Intel technology since Apple made the seminal transition from IBM-Motorola PowerPC chips to Intel back in 2005," the report said. "And a recognition that Intel's graphics technology, while maybe not the best, now offers the best price-performance for low-end MacBooks."

Starting in 2010 with its Arrandale processors, Intel began building in the major northbridge chipset memory controller components to its chips. The architectural changes in Arrandale, along with a lawsuit, forced Nvidia to halt the development of future chipsets.

Previously, Apple has not typically relied on Intel's graphics solutions for its notebooks. This year, in its updated MacBook Pro line, Apple introduced a proprietary automated graphics switching solution that dynamically switches between Intel's integrated graphics processor and Nvidia's discrete graphics chip.

For the new MacBook Air and 13-inch MacBook Pro, Apple relies on older Core 2 Duo processors because Nvidia is still capable of creating chipsets for use with those processors. But if Nvidia loses its legal battle with Intel, it would not be able to make chipsets for the current Core i series or the forthcoming Sandy Bridge line of processors.

Nathan Brookwood, principal analyst at Insight64, told Cnet he believes that Apple's lower-end MacBooks are "sitting ducks" for AMD's Fusion technology, which combines the company's central processors and graphics processors. In April, AppleInsider reported that Apple and AMD were in advanced discussions to potentially adopt AMD processors in at least some of its MacBook line.

Intel will formally unveil its Sandy Bridge processors at the Consumer Electronics Show on Jan. 5, 2011. The company's chief executive, Paul Otellini, has said that he is "more excited by Sandy Bridge" than any other product the company has launched in years.

Is this Sandy Bridge processor only for graphics? Is it also a data processor? I'm a little confused.
post #109 of 127
Quote:
Originally Posted by gerald apple View Post

Is this Sandy Bridge processor only for graphics? Is it also a data processor? I'm a little confused.

Yes, it's both, like AMD's Fusion chips. It has some parts from a CPU and some parts from a GPU inside the same chip.
post #110 of 127
Quote:
Originally Posted by wizard69 View Post

The 28nm node is becoming very very interesting. Some reports have indicated a 45% drop in static power and much faster circuitry. I'm not sure that Apple can field a 28 nm SoC next year, but it is clear that there is a long way to go as far as dropping power while upping performance.

In the spirit of doubling down after making a rash prediction that might actually come true I will predict that the 2011 A5 is a 32nm HKMG Cortex A9 fabbed in Samsung's spanking new 32nm HKMG fab.

That's pushing things a little, but I can see Apple liking the idea of having a process advantage over current-gen competitors using the 45nm Tegra 2 or OMAP4, to beat them over the head with far longer battery life and/or performance.
post #111 of 127
Quote:
Originally Posted by nht View Post

In the spirit of doubling down after making a rash prediction that might actually come true I will predict that the 2011 A5 is a 32nm HKMG Cortex A9 fabbed in Samsung's spanking new 32nm HKMG fab.

That's pushing things a little, but I can see Apple liking the idea of having a process advantage over current-gen competitors using the 45nm Tegra 2 or OMAP4, to beat them over the head with far longer battery life and/or performance.

That would be tremendous if your prediction is correct. Battery life and performance would easily kill anything out there.
post #112 of 127
Quote:
Originally Posted by nht View Post

In the spirit of doubling down after making a rash prediction that might actually come true I will predict that the 2011 A5 is a 32nm HKMG Cortex A9 fabbed in Samsung's spanking new 32nm HKMG fab.

It might be slightly expensive, but the die is reasonably small, thus it might be practical.

I don't expect huge power savings but rather a lot more capability. Well, that is what I want; Apple though has its own balancing act to perform. I could see them going extremely low power to trim battery size. I think that is a mistake though, as the iPad needs a performance boost.
Quote:
That's pushing things a little, but I can see Apple liking the idea of having a process advantage over current-gen competitors using the 45nm Tegra 2 or OMAP4, to beat them over the head with far longer battery life and/or performance.

Plus it would be pretty silly to have your own chip design team if they can't stay a step ahead of the competition.
post #113 of 127
Quote:
Originally Posted by Programmer View Post

It pretty much is already.

Well not exactly. GPUs don't share the same address space as the CPU.
Quote:
Not for a generation or two.

Yes, but at least AMD has indicated that they are going this route. When we can run GPU threads without having to worry about the impact on the display, GPU computing will be here to stay.
Quote:
What makes you think there isn't some already?

On Sandy Bridge or AMD's Fusion? As far as I know there is no cache or RAM dedicated to the GPU on the SoC. I should qualify that: cache or RAM for a frame buffer. It is a lot of memory to put on board, but in things like an iPhone or iPad it should be doable. The goal is to reduce or eliminate off-board trips to drive the video. I would think this would save considerable power.
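For scale, a single frame buffer at these resolutions is smaller than it sounds (my arithmetic, assuming 32-bit color):

\[
1024 \times 768 \times 4\ \text{B} \approx 3\ \text{MB (iPad)}, \qquad 1920 \times 1080 \times 4\ \text{B} \approx 8.3\ \text{MB (1080p)},
\]

so even double- or triple-buffering stays in the single-digit-to-tens-of-megabytes range.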
Quote:
Keep in mind that hardware development pipelines are years long. No telling when patents will materialize in product.

This is very true, but Apple needs a payoff for their investment in chip development. The A4 is a pretty run-of-the-mill processor.
Quote:
Fortunately we know that iOS 4 is based on Snow Leopard and includes GCD, blocks and therefore has many of the pieces required for OpenCL. The current GPU hardware is just not flexible enough. The next generation Imagination parts are. And the next generation ARM cores are multi-core capable (and faster individually).

Juicy! This is important for people to understand: if iPad 2 gets the processor update we are hoping for, there should be an immediate benefit.
Quote:
RIM might be bragging about how they'll really fly when dual core arrives (sounds like an excuse to me), but they had better be careful touting that one because Apple has far more experience (and shipping code) than they do at this parallelism stuff.

This is possibly the point I disagree with the most. RIM is using QNX here, which has a pretty long history. Plus they have had an excellent reputation in the industry. Admittedly this isn't the "PC" industry, but their background is strong and diverse. Obviously this is a crash program on their part, but let's face it, it took Apple years to get iOS to the point where it is today. The Playbook OS really needs to be evaluated a year after release.
post #114 of 127
Quote:
Originally Posted by melgross View Post

But I would like to comment on this last. It seems interesting that Unix-based systems (Mac OS X and iOS) and Linux-based systems (Android being the most important one right now) can be made so much more efficient than Microsoft-designed systems.

You can actually make fairly efficient and secure Windows installs. There's nothing about Windows itself that makes it particularly more or less efficient than the Unix core. MS is less capable in providing tailored power management beyond the lowest common denominator because it doesn't control hardware and depends on OEMs to provide tailored power schemes.

OEMs are hampered because they don't control the software stack and can't tweak Windows power management code.

MS added more power management support in Win7 and the general feeling is that windows power management had typically been better than Linux power management. Depends on the drivers quite a bit so YMMV.

"Going into this power consumption testing we figured Microsoft Windows 7 would likely be more power efficient than Ubuntu Linux due to Windows having better ACPI support and more hardware vendors catering to Windows, but we did not expect to see such a wide difference with the ASUS Eee PC. With the "out of the box" experience for each operating system, Ubuntu 10.04 LTS was consuming 56% more power than Windows 7 Professional!"

"Fortunately, with the Lenovo ThinkPad T61 the power consumption between Windows and Ubuntu Linux were not nearly as large as the Atom 330 + ION-based netbook. The ThinkPad T61 with Ubuntu 10.04 LTS was consuming 14% more power than Windows 7, but when both were loaded with NVIDIA's binary driver that leverages PowerMizer and other power-savings techniques, Ubuntu 10.04 LTS averaged to consume just 4% more power."

http://www.phoronix.com/scan.php?pag...ws_part2&num=2

Vista posted opposite results where Linux was more power efficient.

Quote:
Even their mobile OSes, despite not being Windows based and being written with low-power CPUs in mind, seem to be much less efficient. Windows is a dog. It requires much more RAM and much more powerful CPUs, but doesn't perform well in netbooks either. Getting it to perform with an ARM chip might prove impossible.

Given that WP7 is working admirably on ARM chips this seems like a very strange assertion to make.


Quote:
Originally Posted by wizard69 View Post

Considering UNIX and the like had its roots way back at the dawn of the minicomputer revolution, it shouldn't be a surprise that these systems have less of a demand on hardware.

8-bit microcomputers were not more powerful than the minis of the same time period. Far less. It is hard to argue that the Unix kernel has some mystical advantage over the NT kernel.
post #115 of 127
Quote:
Originally Posted by nht View Post

You can actually make fairly efficient and secure Windows installs. There's nothing about Windows itself that makes it particularly more or less efficient than the Unix core. MS is less capable in providing tailored power management beyond the lowest common denominator because it doesn't control hardware and depends on OEMs to provide tailored power schemes.

OEMs are hampered because they don't control the software stack and can't tweak Windows power management code.

MS added more power management support in Win7 and the general feeling is that windows power management had typically been better than Linux power management. Depends on the drivers quite a bit so YMMV.

"Going into this power consumption testing we figured Microsoft Windows 7 would likely be more power efficient than Ubuntu Linux due to Windows having better ACPI support and more hardware vendors catering to Windows, but we did not expect to see such a wide difference with the ASUS Eee PC. With the "out of the box" experience for each operating system, Ubuntu 10.04 LTS was consuming 56% more power than Windows 7 Professional!"

"Fortunately, with the Lenovo ThinkPad T61 the power consumption between Windows and Ubuntu Linux were not nearly as large as the Atom 330 + ION-based netbook. The ThinkPad T61 with Ubuntu 10.04 LTS was consuming 14% more power than Windows 7, but when both were loaded with NVIDIA's binary driver that leverages PowerMizer and other power-savings techniques, Ubuntu 10.04 LTS averaged to consume just 4% more power."

http://www.phoronix.com/scan.php?pag...ws_part2&num=2

Vista posted opposite results where Linux was more power efficient.



Given that WP7 is working admirably on ARM chips this seems like a very strange assertion to make.

OS X is considered to be better at power management than Windows. iOS is considered to be very power efficient. Comparing Windows to Linux-based systems, or to other Unix-based systems, is irrelevant to this. Apple does its own power management.
post #116 of 127
Quote:
Originally Posted by melgross View Post

OS X is considered to be better at power management than Windows. iOS is considered to be very power efficient. Comparing Windows to Linux-based systems, or to other Unix-based systems, is irrelevant to this. Apple does its own power management.

Here is an AnandTech article that backs up your comment.
Apple claims 10 hours of battery life for the MBP13 when running OS X, and Anand hit pretty close to that mark when testing it out with his light web browsing test. Now, we've shown before that OS X is more optimized for mobile power consumption than all versions of Windows, so going into this test the expectations were a fair bit lower.

And for good reason; the MBP13 showed fairly similar battery life to some of the older Core 2-based systems. With its 63.5 Wh lithium polymer battery, the MBP hits 5.5 hours on our ideal-case battery test, and exactly 5 hours on the web browsing test. While this is decent for the average Core 2 notebook, it's pretty woeful compared to the OS X battery life of the MBP. If you have no reason to run Windows (program compatibility, gaming, etc.) you're better off in OS X just so that you can get about double the battery life.

This reduction of battery life in Windows is pretty much along the same lines that Anand saw with the MacBook Air he tested under both OS X and Windows. This is a problem that's been noted in both Vista and 7, and doesn't look to go away anytime soon (though we'll see if Microsoft can fix it in Windows 8).
http://www.anandtech.com/show/3889/a...dows7-laptop/6
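
For anyone curious, a light-browsing rundown along those lines isn't hard to approximate at home. This is only a crude sketch, not Anand's methodology: it assumes a Mac running on battery (it shells out to pmset), and the page list and timings are arbitrary placeholders.

# Crude "light web browsing" battery rundown: cycle a few pages in the
# default browser and log elapsed time plus the battery status from pmset.
# The loop simply runs until the battery gives out and the machine sleeps.
import subprocess
import time
import webbrowser

PAGES = [
    "http://www.appleinsider.com",
    "http://www.anandtech.com",
    "http://www.phoronix.com",
]

start = time.time()
while True:
    for url in PAGES:
        webbrowser.open(url)             # load the page in the default browser
        time.sleep(20)                   # sit on it; roughly "light" browsing
    batt = subprocess.run(["pmset", "-g", "batt"],
                          capture_output=True, text=True).stdout
    elapsed = (time.time() - start) / 3600
    print(f"{elapsed:.2f} h elapsed: {batt.splitlines()[-1].strip()}")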
post #117 of 127
Quote:
Originally Posted by solipsism View Post

Here is an AnandTech article that backs up your comment.
Apple claims 10 hours of battery life for the MBP13 when running OS X, and Anand hit pretty close to that mark when testing it out with his light web browsing test. Now, we've shown before that OS X is more optimized for mobile power consumption than all versions of Windows, so going into this test the expectations were a fair bit lower.

And for good reason; the MBP13 showed fairly similar battery life to some of the older Core 2-based systems. With its 63.5 Wh lithium polymer battery, the MBP hits 5.5 hours on our ideal-case battery test, and exactly 5 hours on the web browsing test. While this is decent for the average Core 2 notebook, it's pretty woeful compared to the OS X battery life of the MBP. If you have no reason to run Windows (program compatibility, gaming, etc.) you're better off in OS X just so that you can get about double the battery life.

This reduction of battery life in Windows is pretty much along the same lines that Anand saw with the MacBook Air he tested under both OS X and Windows. This is a problem that's been noted in both Vista and 7, and doesn't look to go away anytime soon (though we'll see if Microsoft can fix it in Windows 8).
http://www.anandtech.com/show/3889/a...dows7-laptop/6

The tech sites have been reporting this for years.
post #118 of 127
Now that the Sandy Bridge processors are on sale in China/Malaysia etc, someone has kindly posted solid benchmarks:

http://en.inpai.com.cn/doc/enshowcon...48&pageid=7730

The version tested is the 6 EU desktop processor and contains the HD Graphics 2000 GPU. The comparison GPU, labelled just HD Graphics, is the Arrandale one we have now (i.e. what you get when you turn the NVidia GPU off in the MBP).

It's generally only slightly faster than the current one and around half the speed of the 5450. With 12 EUs, it should be in the region of the 5450. This suggests to me that Anand's version was in fact the 12 EU part, though it may or may not have been turbo-enabled.

The 12 EU model should be fine but it doesn't make sense to even make a 6 EU model if it's not that much faster than what we have. This suggests a new entry iMac will stick with a dedicated card.

They have CPU benchmarks of the i5 and i7:

http://en.inpai.com.cn/doc/enshowcon...47&pageid=7714
http://en.inpai.com.cn/doc/enshowcon...44&pageid=7685

The H.264 test won't hold much weight as it doesn't look like they are using the hardware encoder there.

It would be nice if Apple announced full support for the highest H.264 profile with the SB encoder: all the features that allow the best quality at a given bitrate. Their QuickTime encoders are way behind the open-source ones in quality and speed.
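
For reference, here's roughly what driving the open-source encoder at the High profile looks like. Just a sketch: it assumes an ffmpeg build with libx264 on the PATH, and the file names and quality settings are placeholders rather than anything Apple or Anand used.

# Encode a clip with x264's High profile via ffmpeg, as a point of
# comparison against QuickTime's exporter. Assumes ffmpeg was built
# with libx264; input/output names are placeholders.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "input_1080p.mov",
    "-c:v", "libx264",
    "-profile:v", "high",    # High profile: CABAC, 8x8 transforms, B-frames
    "-preset", "slow",       # slower preset trades time for quality per bit
    "-crf", "18",            # constant-quality mode instead of a fixed bitrate
    "-c:a", "copy",          # leave the audio track untouched
    "output_h264.mp4",
], check=True)

That's the sort of "best quality at a given bitrate" feature set the post above is asking Apple to expose.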
post #119 of 127
Quote:
Originally Posted by Marvin View Post

Now that the Sandy Bridge processors are on sale in China/Malaysia etc, someone has kindly posted solid benchmarks:

http://en.inpai.com.cn/doc/enshowcon...48&pageid=7730

The version tested is the 6 EU desktop processor and contains the HD Graphics 2000 GPU. The comparison GPU, labelled just HD Graphics, is the Arrandale one we have now (i.e. what you get when you turn the NVidia GPU off in the MBP).

It's generally only slightly faster than the current one and around half the speed of the 5450. With 12 EUs, it should be in the region of the 5450. This suggests to me that Anand's version was in fact the 12 EU part, though it may or may not have been turbo-enabled.

The 12 EU model should be fine but it doesn't make sense to even make a 6 EU model if it's not that much faster than what we have. This suggests a new entry iMac will stick with a dedicated card.

They have CPU benchmarks of the i5 and i7:

http://en.inpai.com.cn/doc/enshowcon...47&pageid=7714
http://en.inpai.com.cn/doc/enshowcon...44&pageid=7685

The H.264 test won't hold much weight as it doesn't look like they are using the hardware encoder there.

It would be nice if Apple announced full support for the highest H.264 profile with the SB encoder: all the features that allow the best quality at a given bitrate. Their QuickTime encoders are way behind the open-source ones in quality and speed.

What is interesting for portable use is that NVidia has stated that Apple worked with them on their chipsets and will be using them for some time to come.
post #120 of 127
Quote:
Originally Posted by melgross View Post

What is interesting for portable use is that NVidia has stated that Apple worked with them on their chipsets and will be using them for some time to come.

I don't think it makes much sense to use them for long. Even though there is a statement about Apple using them in the future, NVidia aren't making chipsets any more:

http://www.macrumors.com/2010/12/20/...idia-chipsets/

This means there will never be another NVidia IGP to succeed the 320M. If Intel match or exceed the 320M with this generation, they will double it next year. I imagine that if that comment about Apple using them is true, it would apply to the Macbook Air line and nothing else.

Even with the MBA though, a ULV Sandy Bridge chip would likely be a better option.

Given that Apple have wiped NVidia GPUs out of every model, including BTO, besides the laptops, it doesn't look good for them. The GT 330M uses at least 15W, whereas the 6550M will outperform it and scale down to 11W, matching Intel's own IGP.

I think Apple has dropped enough hints that the MBA will replace the entry-level lineup. The simple reason is that it is fast enough for that demographic in every way except the CPU, and the SSD makes up for that somewhat. Also, they would likely design the 13" MB and MBP in a similar way, so either the Air goes or the 13" MB and MBP go.

If they don't, they are going to have a thin, light Sandy Bridge MB with a good GPU and 13" screen at the same price point as the 11" MBA, which is just a slower version of the same thing and no one will buy it.

I guess they can leave the 13" MBP but if they redesign it, the 13" Air might be a tough sell as it would be $100 more.

There are actually a number of ways Apple can choose to go with this. It will be interesting to see which way they choose.

I think the white Macbook has reached the end of its life at this point, and I just don't see another version being made. I think they will have to get 128GB of storage into the entry level, though, and Light Peak to supplement it with fast external storage.

Given that they just redesigned the 13" MBA, they won't discontinue it, so what happens to the 13" MBP? I don't think it can be discontinued in favour of the Air just yet, but it only has a Core 2 Duo anyway, so I see it as a possibility.

Then the 15" will get the i5 + Radeon but also with a redesign.

This kind of switch will feel bad right now, but by next year it won't. The same thing happened with the iMac. They switched the mid-range towers and the Cube to an all-in-one, and at first it sucked because they had single-processor G4s and G5s; then came the Intel chips, and now we have 27" screens with quad-core chips.

The MBA will start out with a slow C2D but have other compelling features (instant-on, very light, very thin), and within 1-2 years it will be quad-core with 4GB RAM, a 256GB SSD and a GPU 2x the 320M it has now, and no one will care about the decision.