
Intel outlines upcoming Core i7 'Haswell' integrated graphics, touts up to triple performance - Page 3

post #81 of 136
Quote:
Originally Posted by Haggar View Post

GPU is not just for games.  Just look at Apple's iLife and Pro apps.

Because an 11" notebook is a great place to use Pro apps.

iLife can do very well with integrated graphics. Anything else is just goalpost-moving.
post #82 of 136
Quote:
Originally Posted by JeffDM View Post

Anything else is just goalpost-moving.

Sometimes it feels like more than goalposts have moved. Sometimes it feels like the field and stadium have been moved to another area code.

"The real haunted empire?  It's the New York Times." ~SockRolid

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

Reply

"The real haunted empire?  It's the New York Times." ~SockRolid

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

Reply
post #83 of 136
Moore predicted this a long time ago. What's the news?
13" MacBook5.1 4GB 160GB
Reply
13" MacBook5.1 4GB 160GB
Reply
post #84 of 136
Quote:
Originally Posted by Tallest Skil View Post

 

THAT'S IT.

Does Alzheimer's assisted living take people younger than… retirement age? 😖

The Tesla is a real product of theirs too, it's just more of a corporate product than a home one, so there is hope for you yet.

post #85 of 136
Quote:
Originally Posted by SockRolid View Post

Yup.  Macs are going to be way more "snappy."

 

ARM-based MacBook Air is inevitable.  Some day.  The ARMv8 spec was released in 2011, and it features a 64-bit instruction set.  That was the last major technical issue preventing OS X from running on ARM.  The next hardware step will be building quad-core 64-bit Ax SoCs.

 

Apple needs to stop paying boutique prices for Intel CPUs.  (AKA "The Intel Tax.")

The benefits would of course be higher margins and/or lower retail pricing.

Yet more bad news for Intel and the brain-dead Ultrabook-making copycats.

 

There is no magic pixie dust in the ARM architecture that's going to make it outperform Intel chips.  The rapid increase in performance is paid for with a higher TDP.

 

The current top-end A15 is benching around 3K in Geekbench at best for the Exynos (8W TDP), or around the same performance as an old Core 2 Duo.  The current i5-3317U (17W) in the 11" MBA benches in around 6.5K.  And you can see the Core i3 330M, a 2010 part, crushing the Exynos Dual in every benchmark while benching at 3906.

 

http://www.phoronix.com/scan.php?page=article&item=samsung_exynos5_dual&num=3
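
A rough performance-per-watt reading of those numbers (a sketch: Geekbench 2 scores as quoted above; the TDPs are the vendors' published figures, the i3 330M's 35W being Intel's spec rather than from the post):

Code:
# Points-per-watt from the scores and TDPs cited above.
chips = {
    "Exynos 5 Dual (Cortex-A15)": (3000, 8),   # Geekbench 2 score, TDP watts
    "Core i3 330M (2010)":        (3906, 35),
    "Core i5-3317U (2012 MBA)":   (6500, 17),
}
for name, (score, tdp) in chips.items():
    print(f"{name}: {score / tdp:.0f} Geekbench points per watt")
# Exynos ~375/W vs. i5-3317U ~382/W: no efficiency edge for the A15,
# while the Intel part delivers twice the absolute performance.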

 

With Haswell 10W chips expected to perform as well as the 17W Ivy Bridge chips, plus the expected 7W chips, I think the window for any ARM MBA is history.

 

The Haswell Xeon E3 looks really interesting... at 13W I'd like a 13" MBP version with 32GB ECC RAM max and a BTO discrete GPU (doesn't have to be super fast, just has to do both CUDA and OpenCL)... that would be a killer engineering laptop.

 

A Mac Mini Server with a Xeon E3 and 32GB of ECC RAM in slots would be killer too.  Connect it to a render or compute farm and it would work nicely for both small server and low-end workstation needs.

post #86 of 136
Quote:
Originally Posted by Tallest Skil View Post

No, you're right! I couldn't remember the name of the chip. I can't remember anything. I lose nouns all the time. I used to be so eloquent, and now I just do the verbal equivalent of tripping over my own feet and breaking both shins 20 feet from the finish line of the marathon. I can't remember times or dates, I don't know how long it has been since things occurred and I can't remember any of my old friends.

Why am I telling you people this? I need a depression clinic or something.

Wait until your fifties hit!!!!

It is like the human mind is a hard drive of fixed size. If you don't use info regularly, the mind just seems to replace old data with whatever is new. On the flip side, short-term memory sucks big time: you know you are old when you jump in the car to go to the store and then wonder why you are on the road five minutes later. It is ALL DOWNHILL FROM HERE.
post #87 of 136
Quote:
Originally Posted by wizard69 View Post

Quote:
Originally Posted by Tallest Skil 
No, you're right! I couldn't remember the name of the chip. I can't remember anything. I lose nouns all the time. I used to be so eloquent, and now I just do the verbal equivalent of tripping over my own feet and breaking both shins 20 feet from the finish line of the marathon. I can't remember times or dates, I don't know how long it has been since things occurred and I can't remember any of my old friends.

Why am I telling you people this? I need a depression clinic or something.
Wait until your fifties hit!!!!

It is like the human mind is a hard drive of fixed size. If you don't use info regularly, the mind just seems to replace old data with whatever is new. On the flip side, short-term memory sucks big time: you know you are old when you jump in the car to go to the store and then wonder why you are on the road five minutes later. It is ALL DOWNHILL FROM HERE.

There is a condition known as nominal aphasia that affects the recollection of nouns. I don't know exactly what constitutes having this condition, as I'm sure we're all subject to bouts of forgetfulness, and I assume proper nouns are the most likely thing to forget since a name is not a concept but simple rote memorization.

I'm sure we've all remembered a person or thing but couldn't think of the name, yet knew something else which we followed through a complex thread until there was something we could use to do a search, or use as a reference to another thing, or that simply made your brain access the appropriate info, at which point you wonder how you could ever have forgotten it. Does that sound about right?

When I'm sick the loss of proper noun recollection is the first thing to go. In fact, I know I'm getting sick when that starts happening.
Edited by SolipsismX - 5/3/13 at 12:13pm

"The real haunted empire?  It's the New York Times." ~SockRolid

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

Reply

"The real haunted empire?  It's the New York Times." ~SockRolid

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

Reply
post #88 of 136
Quote:
Originally Posted by SolipsismX View Post

Could drive Apple's 4K Cinema Display with 1.5x the resolution (2.25x more pixels) of the current Apple Cinema Display.
I remember the day when getting 80 big, blocky characters on screen was a major accomplishment. When one steps back, it is amazing to see how far the industry has come in half a lifetime.
Quote:
I've seen a 27" 4K IPS display for as low as low as $2300, and I've seen 50"+ 4K HDTVs for $1300 and $1500. I have to wonder if a Mac Pro update will also get 802.11ac, which also means new AirPort products, and a new Apple Cinema Display, which likely means following the new iMac styling and going with 4K if the availability for quality panels and price points are within reason. I could see them starting with that 27" product for 4K displays, as well as raising the price past $999 for 4K. I think an extra $500 for 4K wouldn't be a deterrent to that customer base.
This year should bring lots of new products. However I don't think TB is ready for 4K displays yet. That might happen in 2014.
Quote:

PS: I don't think the 27" displays will get 2x like all the iPod Touch, iPhone, iPad, 13" MBP and 15" MBPs have received. I think they'll get 1.5x which brings the 2560x1440 display exactly up to the 3840x2160 of UHD 4K. I think Mac OS X has already been made to work with this size without affecting the GUI elements.
Possibly, but Mac OS isn't as resolution independent as iOS is. Of course we will be hearing about another major rev to Mac OS at WWDC. They could be focused on refinement of resolution independence in the next release.
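
A quick check of the 1.5x arithmetic quoted in that PS (resolutions as given above):

Code:
# Scaling each dimension by 1.5 multiplies the pixel count by 1.5^2 = 2.25.
w, h = 2560, 1440                      # current 27" display
print(int(w * 1.5), int(h * 1.5))      # 3840 2160, i.e. UHD "4K"
print(f"{1.5 ** 2:.2f}x the pixels")   # 2.25x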
Quote:

PPS: Anyone still holding out hope for a 17" MBP?
Personally no!
Quote:
I do hold a tiny sliver of hope that Apple simply wasn't able to release a 17" MBP because they want the iGPU to be at least capable of pushing the display, and it simply wasn't possible to push a 2x 3840x2400 WQUXGA display with Ivy Bridge and/or to get quality panels at a reasonable price last year.
I'm not sure if that is the reason. However, I hold out hope that a new 17" laptop is possible, mainly due to the way the 17" model was canceled. Apple hasn't said it is completely dead. In fact they have been rather reserved in public comments about its status.

So something might be on the horizon. Maybe not the traditional 17" laptop design, but certainly something for the pro market. I look at what has been happening in the marketplace with discounted MBPs and have to believe that this is a sign that Apple miscalculated the demand for their new creations. If so we may see a bit of retrenchment with respect to laptop design. This is especially the case for the MBP market, where users expect and make use of the extra features of "Pro" laptops. The heavy discounting of MBP models this year suggests that Apple has shot itself in the foot.
post #89 of 136
Quote:
Originally Posted by Slurpy View Post

The **** are you talking about? Ivy Bridge has "terrible" performance?
Err, that was a reference to the AIRs compared to other laptops. AIRs by their nature are much lower-performance machines than laptops running more capable Ivy Bridge processors.
Quote:

The GPU is also fine for 95% of what people use it for, and a big step up from the last one.
I'd suggest that it isn't even close to good enough for many users. Yes, it is a big step up for Intel, but it sucks for 3D, and Apple/Intel have yet to release OpenCL support for it on Mac OS. I really hope Apple will address these issues with the Haswell update.
Quote:
I use my Air for absolutely everything, including intensive design work, and it flies through everything like a knife through hot butter.
Well good for you!
Quote:
Easily the most responsive computer I've used; it has never disappointed. Yes, if you're constantly rendering video or doing high-end 3D gaming it's not the right choice, but "terrible" performance?
At a minimum I want four cores in any new laptop I buy along with the ability to order it with plenty of RAM.
Quote:
Hardly. What exactly do you do that makes the Air insufficient?
Xcode, for one. It isn't that running Xcode on an AIR is impossible; you certainly can do so. The problem is when you have multiple activities going on at once. You see, some people actually do more than one thing at a time on their Macs; that is where having extra cores and lots of RAM makes a huge difference. On top of that, I won't buy a machine that doesn't have OpenCL support in the GPU; last I knew, Apple/Intel had yet to resolve that issue. OpenCL support is one of those features that it pays to have, as you never really know which apps, or which rev of an app, will begin to use it.

Haswell has the potential to address some of these issues in the AIR platform. It could improve the AIRs to the point that I might consider one a viable machine to replace a MBP. It really depends upon drivers being dealt with, improved multi-thread support (not a substitute for cores, but helpful anyway), an update to the base RAM, and more flash storage. Of course this Haswell-based machine doesn't exist yet, but the potential is there for a massively upgraded MacBook AIR. This is the reason behind my comments: Haswell just might allow for the quantum jump in performance that remakes a product into something entirely different. We just need to see which way Apple goes with the next AIR rev. By that I mean better battery performance vs. better system performance.
post #90 of 136
Quote:
Originally Posted by wizard69 View Post

Xcode, for one. It isn't that running Xcode on an AIR is impossible; you certainly can do so. The problem is when you have multiple activities going on at once. You see, some people actually do more than one thing at a time on their Macs; that is where having extra cores and lots of RAM makes a huge difference.

I think you have a terrible mismatch in applying appropriate expectations to a given machine. Don't expect a pen to do the job of a chisel. But that does not mean the pen is useless to everyone even though the only thing you need is a chisel.

When comparing to "other laptops", don't cross classification lines so willy-nilly. You're probably not going to do development work on any other ultrabook either, and most computer manufacturers offer at least one, even though you apparently disagree with the existence of the category. That doesn't make the ultrabook class useless, because the proportion of developers to users is pretty low.
Edited by JeffDM - 5/3/13 at 1:09pm
post #91 of 136
Quote:
Originally Posted by Evilution View Post

I thought the native language in Texas was English!
This reads like a Dutch kid who has only just started learning English.

Excuse me, Evilution! My English grammar is fucking sucks okay? I'm deaf and the school don't know how to teach deaf how to do English grammar. The English is deaf's "second language" and the American Sign Language (ASL) is deaf primary language. So the ASL is my primary language.

post #92 of 136
Originally Posted by TexDeafy View Post
…the American Sign Language (ASL)…

 

It has always mystified me that even given a language that could unite humanity we've still chosen to break it into sects.

 

My family learned Signing Exact English, and you run into compatibility issues!

post #93 of 136
I'm not sure how you came up with some of these points.
Quote:
Originally Posted by JeffDM View Post

I think you have a terrible mismatch in applying appropriate expectations to a given machine. Don't expect a pen to do the job of a chisel. But that does not mean the pen is useless to everyone even though the only thing you need is a chisel.
That was the whole point: right now the AIRs are pens; they simply don't have the performance one would get out of a MBP. Haswell has the potential to change the performance equation of the AIRs for my needs. That is the context here: right now they are terrible performers for my needs.
Quote:
When comparing to "other laptops", don't cross classification lines so willy-nilly.
Err, that is the whole point: I see Haswell offering Apple the opportunity to do much better in the AIR chassis.
Quote:
You're probably not going to do development work on any other ultrabook either, and most computer manufacturers offer at least one, even though you apparently disagree with the existence of the category.
At no time did I disagree with the category. The problem is that the category currently doesn't provide the performance I need.
Quote:
That doesn't make the ultrabook class useless, because the proportion of developers to users is pretty low.
I never said they were useless overall, just that they are useless for the way I currently use my laptop.

It is interesting that every time I post anything negative about the AIR and its performance, a bunch of people chime in about how wonderful the machine is. That may be the case, but apparently they can't come to grips with the idea that some people use their hardware differently. The point is valid: AIRs are terrible performers when judged against Ivy Bridge processors running on alternative hardware. They are a low-end mobile solution, as their pricing and specs indicate.

By the way, an AIR can be a fine machine for a developer, it really depends upon how you use the machine.
post #94 of 136
Quote:
Originally Posted by wizard69 View Post

I'm not sure how you came up with some of these points.
That was the whole point: right now the AIRs are pens; they simply don't have the performance one would get out of a MBP. Haswell has the potential to change the performance equation of the AIRs for my needs. That is the context here: right now they are terrible performers for my needs.

You initially said they're terrible processors, you didn't say "for me" in that line. They're just terrible, period.

Quote:
Err, that is the whole point: I see Haswell offering Apple the opportunity to do much better in the AIR chassis.
At no time did I disagree with the category. The problem is that the category currently doesn't provide the performance I need.
I never said they were useless overall, just that they are useless for the way I currently use my laptop.

It is interesting that every time I post anything negative about the AIR and its performance, a bunch of people chime in about how wonderful the machine is. That may be the case, but apparently they can't come to grips with the idea that some people use their hardware differently. The point is valid: AIRs are terrible performers when judged against Ivy Bridge processors running on alternative hardware. They are a low-end mobile solution, as their pricing and specs indicate.

Which is odd, since you constantly fail to grasp that some people don't use their hardware the same way you do. You seem to understand there's a different model better suited to your needs. Why still harp on the fact that the Air doesn't suit them?

Also, you're comparing processors of different tiers against each other. Engineering is about trade-offs. To get the light weight and battery life, you have to trade away the extreme performance. Improving technology mitigates that over time, but not as quickly as you'd like, apparently. You can't have everything, and have it right now.

I think your problem is that you're repeatedly giving off an impression of expecting a lightweight to do a heavy duty's job. Having sane expectations at the outset would be a good idea.
Edited by JeffDM - 5/3/13 at 9:43pm
post #95 of 136
I don't think the Air processors are that bad. They're best for what they can do. Obviously I prefer standard i5 and i7 (more the i7) mobile processors.
post #96 of 136
Quote:
Originally Posted by Winter View Post

I don't think the Air processors are that bad.
Only in relation to what is currently available in other laptops. If you are looking for mainstream performance, the AIRs are pretty terrible. I'm not certain why people get so spun up over that statement, as the processors simply aren't designed for performance.
Quote:
They're best for what they can do.
This is certainly the case.
Quote:
Obviously I prefer standard i5 and i7 (more the i7) mobile processors.
Then in fact the AIRs are currently a terrible solution for your needs! People seem reluctant to put certain Apple models into buckets that categorize the lineup performance-wise. The AIRs aren't MBPs by a long shot, with anemic CPUs, embedded Intel GPUs, no OpenCL support yet and other shortcomings. I don't know, maybe people have a different definition of terrible than I do, but AIRs can be an unpleasant experience if your needs are greater than what the current models offer.

There is only so much performance you can get out of a low-wattage chip on a given process node. Even with Haswell there could be limitations with respect to just how much of an improvement the Air-capable variants will actually deliver. I'm very hopeful that Haswell can deliver a big boost to the AIR, but it isn't obvious that the performance will be that great in the final product. Right now it looks like there is a big gulf between what the GPUs in the ULV chips can do versus what the upper-end "Iris" chips can do.

Who knows what Apple will do with the AIRs, at this point they could remain terrible or actually become very attractive to MBP users.
post #97 of 136
The order of products that interest me, from most to least, to give you an idea with regard to processors:

1. Mac mini
2. iMac
3. 15" retina MacBook Pro
4. Mac Pro
5. 13" retina MacBook Pro
6. 13" MacBook Air
post #98 of 136
Quote:
Originally Posted by wizard69 View Post

Then in fact the AIRs are currently a terrible solution for your needs! People seem reluctant to put certain Apple models into buckets that categorize the lineup performance-wise. The AIRs aren't MBPs by a long shot, with anemic CPUs, embedded Intel GPUs, no OpenCL support yet and other shortcomings. I don't know, maybe people have a different definition of terrible than I do, but AIRs can be an unpleasant experience if your needs are greater than what the current models offer.

There is only so much performance you can get out of a low-wattage chip on a given process node. Even with Haswell there could be limitations with respect to just how much of an improvement the Air-capable variants will actually deliver. I'm very hopeful that Haswell can deliver a big boost to the AIR, but it isn't obvious that the performance will be that great in the final product. Right now it looks like there is a big gulf between what the GPUs in the ULV chips can do versus what the upper-end "Iris" chips can do.

Who knows what Apple will do with the AIRs, at this point they could remain terrible or actually become very attractive to MBP users.

 

http://************/2012/01/25/macbook-air-thunderbolt-editing-4k-video-shows-why-the-mac-pro-as-we-know-it-can-die/

 

http://www.youtube.com/watch?v=dxLRFZ3jeEk

 

It is spendy, but the MBA is pretty svelte.  If only there were Mac drivers for external GPUs.

 

/shrug

 

Yes, I know you're going to complain that it's not the MBA doing the work in these two examples, but claiming that the Air will be an unpleasant experience if your needs are greater than what the current models offer is equally true for the Mac Pro.  It will be an unpleasant experience if your needs are greater than what the current Mac Pro models offer too.

 

The current 2012 MBA benches in around the level of a 2010 21.5" iMac, a desktop machine only two years older (faster than the i3s, about even with the i5s and below the i7s).

 

http://browser.primatelabs.com/geekbench2/search?utf8=✓&q=MacBook+air+2012

 

http://browser.primatelabs.com/geekbench2/search?utf8=✓&q=imac+2010

 

They aren't far behind the 2012 13" MBP either, which isn't surprising:

 

http://browser.primatelabs.com/geekbench2/search?utf8=✓&q=macbook+pro+13+2012

 

It's faster than the 2010 MBP I'm currently using for software development (averages around 5K in geekbench).

 

http://browser.primatelabs.com/geekbench2/search?q=MacBookPro6,2

 

Yah, that's a three-year-old MBP, but the current-year MBPs and MBAs aren't out yet either.

post #99 of 136
So why are you comparing the AIRs to two or three year old hardware? I'm not sure why people can't grasp what has been said here: the AIR is a terrible performer when judged against what is possible with current technology. That is NOT debatable; it is the nature of the ULV processor.
Quote:

It is spendy, but the MBA is pretty svelte.  If only there were Mac drivers for external GPUs.
You attempt to defend the AIR, then resort to highlighting one of the issues I've mentioned as a problem on the AIR. Why, I don't know, but you need to realize that when I say the AIR has a terrible processor I'm not knocking the machine as a whole.
Quote:

/shrug

Yes, I know you're going to complain that it's not the MBA doing the work in these two examples but claiming that the Air will be an unpleasant experience if your needs are greater than what the current models offer is equally true for the Mac Pro.  It will be an unpleasant experience if your needs are greater than what the current Mac Pro models offer too.
So?

The point is the AIR can be a disappointment to a lot of people if they have demanding app requirements.
Quote:
The current 2012 MBA benches in around a 2010 21.5" iMac, a desktop machine only 2 years older (faster than the i3s, about even with the i5s and below the i7s).
Again, so?

Benchmarks seldom reflect real-world use.
Quote:
http://browser.primatelabs.com/geekbench2/search?utf8=✓&q=MacBook+air+2012

http://browser.primatelabs.com/geekbench2/search?utf8=✓&q=imac+2010

They aren't far behind the 2012 13" MBP either which isn't surprising

http://browser.primatelabs.com/geekbench2/search?utf8=✓&q=macbook+pro+13+2012

It's faster than the 2010 MBP I'm currently using for software development (averages around 5K in geekbench).

http://browser.primatelabs.com/geekbench2/search?q=MacBookPro6,2

Yah, that's a three year old MBP but the current year MBP and MBAs aren't out yet either.
I really wish people would read what I've written here. By definition, when you buy an AIR you get a low-end processor. It doesn't really matter how well that machine performs with respect to 3-year-old hardware. I don't go out and buy hardware to get 3-year-old performance, nor do I pine for the foolish concept of an external GPU.
post #100 of 136
Quote:
Originally Posted by nht View Post

 

   If only there were mac drivers for external GPUs.

 

The market is really going the other way. Beyond that you have to look at the spectrum beyond just Macs: the market for eGPUs would basically be limited to notebooks, as there is little to no benefit pushing the GPU out of the box with a desktop. Among notebooks with performance GPUs you can get something like a 650M at the 15" level, or a Quadro K3000M in mobile workstation graphics, although that's quite expensive. I don't foresee Thunderbolt alone motivating external GPUs. You would need something more universal to distribute costs, and either way the bandwidth is a bit slim, so it isn't appropriate for the high-end cards. You would also need to address things like hot-pluggability, as it's a requirement for Thunderbolt certification; typically that kind of thing is still restricted to enterprise hardware. You would end up with something like a GTX 660 with a slight performance hit under some applications at a $600 price point, as it would be aimed at a smaller market. Whenever companies do that, they seek higher margins so that development costs are met quickly; that's why I guessed $600 or so. Either way it wouldn't be cheap, and they would need companies that deal with things like gaming machines on board.
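
To put rough numbers on "the bandwidth is a bit slim" (a sketch comparing one first-generation Thunderbolt channel against the PCIe x16 link a desktop card gets, using the published link rates):

Code:
# First-gen Thunderbolt: 10 Gb/s per channel per direction.
tb1 = 10                                           # Gb/s
links = {"PCIe 2.0 x16": 64, "PCIe 3.0 x16": 126}  # ~usable Gb/s
for name, bw in links.items():
    print(f"Thunderbolt 1 is {tb1 / bw:.0%} of {name}")
# ~16% of PCIe 2.0 x16, ~8% of PCIe 3.0 x16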

 

 

Quote:
Originally Posted by wizard69 View Post

Benchmarks seldom reflect real-world use.
I really wish people would read what I've written here. By definition, when you buy an AIR you get a low-end processor. It doesn't really matter how well that machine performs with respect to 3-year-old hardware. I don't go out and buy hardware to get 3-year-old performance, nor do I pine for the foolish concept of an external GPU.

It's not even limited to just the Geekbench ones. There are a lot of benchmarking tests within specific applications; Barefeats is one good source. It's just that most people don't understand how to interpret the data. When you buy a new machine, you might have a huge range of tasks within a given application and between applications where performance can be quite variable, which is why I don't really like static testing: it can highlight speed increases that might reflect a very small percentage of the total workload. The 3-year-old analogy only works if workloads remain static or are outpaced by marginal speed increases. The example of the iMac wasn't that great: i3s were by definition low-end options, the desktop i5s are not suitable for comparison, and hyperthreading on the Airs vs. the lack of it on desktop i5s tends to be exaggerated in something like Geekbench beyond real-world use. Presently the ULV options have been mostly marketing: take a higher-end option, disable cores or underclock, and market it as low voltage. If they were designing from the ground up for the lower power, that would be different. There's some insinuation that they are doing such a thing with Haswell or Broadwell, but I'm not sure how much of that is early tech-site kool-aid.


Edited by hmm - 5/4/13 at 10:25pm
post #101 of 136
Quote:
Originally Posted by Tallest Skil View Post

The engineer I know at Intel has said as much (though he's on the Cannonlake team, so Haswell is very old news to him).

Wow, that's a bit down the line: Haswell 22nm 2013 -> Broadwell 15nm 2014 -> Skylake 15nm 2015 -> Skymont 10nm 2016 -> Cannonlake 10nm 2017 -> 7nm die-shrink 2018 -> 7nm 2019 -> 5nm die-shrink 2020 -> 5nm 2021

They are currently hiring for testing:

http://jobs.intel.com/job/Folsom-PowerPerformance-Engineer-Job-CA-95630/2507464/

So that's the same 10nm as Skymont vs current 22nm. They said they have plans to go down to 5nm. Even if Intel only increases the GPU 50% every year, the Cannonlake GPU they are working on right now will be 1.5^4 = 5x faster than Haswell. That puts the laptop chips somewhere between a GTX 680 and the Titan right on the CPU die. They can likely cut the power draw by then too.

They could probably put these into production right now but obviously they want to profit from the incremental upgrades. Still, that's only another 4 years away.

Then think 2021 is another 5x; that makes it 25x faster than we have now, or equivalent to 5 GTX 680s in SLI, potentially in an Ultrabook in just 8 years. Here's just two in SLI: [image: two GTX 680s in SLI]
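
Those 5x and 25x figures are just compound growth; a minimal sketch of the arithmetic, assuming the steady 50% annual GPU gain stated above:

Code:
# Compound an assumed 50% year-over-year Intel GPU improvement
# from a 2013 Haswell baseline.
rate, base_year = 1.5, 2013
for year in (2017, 2021):
    print(f"{year}: ~{rate ** (year - base_year):.1f}x Haswell")
# 2017: ~5.1x (the Cannonlake estimate); 2021: ~25.6x (the "25x" above)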



Even phones/tablets will be turning out amazing graphics pretty soon. The PowerVR 6 should really raise the bar for mobile visual quality.
post #102 of 136
Do you really think they will get to 10nm in 4 years? It just seems like the interval between nodes is getting a little longer.
Quote:
Originally Posted by Marvin View Post

Wow, that's a bit down the line: Haswell 22nm 2013 -> Broadwell 15nm 2014 -> Skylake 15nm 2015 -> Skymont 10nm 2016 -> Cannonlake 10nm 2017 -> 7nm die-shrink 2018 -> 7nm 2019 -> 5nm die-shrink 2020 -> 5nm 2021

They are currently hiring for testing:

http://jobs.intel.com/job/Folsom-PowerPerformance-Engineer-Job-CA-95630/2507464/

So that's the same 10nm as Skymont vs current 22nm. They said they have plans to go down to 5nm. Even if Intel only increases the GPU 50% every year, the Cannonlake GPU they are working on right now will be 1.5^4 = 5x faster than Haswell. That puts the laptop chips somewhere between a GTX 680 and the Titan right on the CPU die. They can likely cut the power draw by then too.

They could probably put these into production right now but obviously they want to profit from the incremental upgrades. Still, that's only another 4 years away.
I'm surprised you would say something like that. If manufacturers could crap out a new process at will, Intel would have more challenging competition.
Quote:
Then think 2021 is another 5x, that makes it 25x faster than we have now or equivalent to 5 GTX680s in SLI, potentially in an Ultrabook in just 8 years. Here's just two in SLI:
I'm not sure if Haswell will end the whining about integrated graphics, but I'm pretty certain that by 2015 few will complain. I'm actually a bit excited by the potential of Haswell in something like a Mini and low-end laptops. Let's hope Apple goes for broke implementing Haswell.
Quote:

Even phones/tablets will be turning out amazing graphics pretty soon. The PowerVR 6 should really raise the bar for mobile visual quality.
I'm wondering if I can hold out with my iPad 3 that long. We should be getting four CPU cores along with the PowerVR 6, and that would be really sweet in an iPad. Maybe even 64-bit computing.
post #103 of 136
Quote:
Originally Posted by wizard69 View Post

Do you really think they will get to 10nm in 4 years? It just seems like the interval between nodes is getting a little longer.

Intel plans to be there in 4 years:

http://www.anandtech.com/show/6253/intel-by-2020-the-size-of-meaningful-compute-approaches-zero
http://www.reuters.com/article/2010/10/28/us-chipmakers-idUSTRE69R4GT20101028

They might not be able to stick to that roadmap but they've done a pretty good job so far. The next node is 14nm, not 15nm as I wrote earlier.
Quote:
Originally Posted by wizard69 View Post

I'm surprised you would say something like that. If manufactures could crap out a new process at will Intel would have more challenging competition.

It's not easy and it costs a lot of money but I think profitability plays a big role in holding it back. Intel's plants cost upwards of $5b. Intel can afford this because they make over $2b profit per quarter. NVidia can't afford to do this because they make 1/20th the profit. AMD is in a similar situation. They both use TSMC:

http://www.theinquirer.net/inquirer/news/2256742/nvidia-says-tsmcs-rivals-are-knocking-on-its-door
http://www.extremetech.com/computing/123529-nvidia-deeply-unhappy-with-tsmc-claims-22nm-essentially-worthless

"As the process nodes shrink, it takes longer and longer for the cost-per-transistor to fall below the previous generation. At 20nm, the gains all-but vanish. Want to know why Nvidia rearchitected Fermi with a new emphasis on efficiency and performance/watt? You’re looking at the reason. If per-transistor costs remain constant, the only way to improve your cost structure is to make better use of the transistors you’ve got."

This sort of thing is going to affect Apple too because unless a rival chip foundry can match the output from the likes of Samsung, Samsung is going to have a competitive advantage over them. There's a recent article that says TSMC is ramping up for the next iPhone but the 5S will use Samsung:

http://www.tomshardware.com/news/TSMC-iPhone-Phase-5-Samsung,22423.html

There are certainly technical issues, especially at the dimensions they are working with, but there's no economic advantage in moving too quickly:

http://www.zdnet.com/intel-we-know-how-to-make-10nm-chips-7000004170/

Intel doesn't have any reason to rush Haswell Xeons for example. AMD's latest Opterons still don't beat Sandy Bridge Xeons.
post #104 of 136
Quote:
Originally Posted by wizard69 View Post

We should be getting four CPUs with VR6 and that would be really sweet in an iPad. Maybe even 64 bit computing.

 

What's the point of 64 bit in a computer with only half a GB of RAM?

post #105 of 136
Quote:
Originally Posted by nht View Post

 

There is no magic pixie dust in the ARM architecture that's going to make it outperform Intel chips.  The rapid increase in performance is paid for with a higher TDP.

 

The current top-end A15 is benching around 3K in Geekbench at best for the Exynos (8W TDP), or around the same performance as an old Core 2 Duo.  The current i5-3317U (17W) in the 11" MBA benches in around 6.5K.  And you can see the Core i3 330M, a 2010 part, crushing the Exynos Dual in every benchmark while benching at 3906.

 

http://www.phoronix.com/scan.php?page=article&item=samsung_exynos5_dual&num=3

 

With Haswell 10W chips expected to perform as well as the 17W Ivy Bridge chips, plus the expected 7W chips, I think the window for any ARM MBA is history.

 

The Haswell Xeon E3 looks really interesting... at 13W I'd like a 13" MBP version with 32GB ECC RAM max and a BTO discrete GPU (doesn't have to be super fast, just has to do both CUDA and OpenCL)... that would be a killer engineering laptop.

 

A Mac Mini Server with a Xeon E3 and 32GB of ECC RAM in slots would be killer too.  Connect it to a render or compute farm and it would work nicely for both small server and low-end workstation needs.

 

Xeon chips will never be in Mac Mini, period.

post #106 of 136
Quote:
Originally Posted by Marvin View Post


Intel plans to be there in 4 years:

http://www.anandtech.com/show/6253/intel-by-2020-the-size-of-meaningful-compute-approaches-zero
http://www.reuters.com/article/2010/10/28/us-chipmakers-idUSTRE69R4GT20101028

They might not be able to stick to that roadmap but they've done a pretty good job so far. The next node is 14nm, not 15nm as I wrote earlier.
It's not easy and it costs a lot of money but I think profitability plays a big role in holding it back. Intel's plants cost upwards of $5b. Intel can afford this because they make over $2b profit per quarter. NVidia can't afford to do this because they make 1/20th the profit. AMD is in a similar situation. They both use TSMC:

http://www.theinquirer.net/inquirer/news/2256742/nvidia-says-tsmcs-rivals-are-knocking-on-its-door
http://www.extremetech.com/computing/123529-nvidia-deeply-unhappy-with-tsmc-claims-22nm-essentially-worthless

"As the process nodes shrink, it takes longer and longer for the cost-per-transistor to fall below the previous generation. At 20nm, the gains all-but vanish. Want to know why Nvidia rearchitected Fermi with a new emphasis on efficiency and performance/watt? You’re looking at the reason. If per-transistor costs remain constant, the only way to improve your cost structure is to make better use of the transistors you’ve got."

This sort of thing is going to affect Apple too because unless a rival chip foundry can match the output from the likes of Samsung, Samsung is going to have a competitive advantage over them. There's a recent article that says TSMC is ramping up for the next iPhone but the 5S will use Samsung:

http://www.tomshardware.com/news/TSMC-iPhone-Phase-5-Samsung,22423.html

There are certainly technical issues, especially at the dimensions they are working with but there's not an economic advantage in moving too quickly:

http://www.zdnet.com/intel-we-know-how-to-make-10nm-chips-7000004170/

Intel doesn't have any reason to rush Haswell Xeons for example. AMD's latest Opterons still don't beat Sandy Bridge Xeons.

 

TSMC co-developed with GlobalFoundries, which is growing much faster. ARM and AMD both contract with TSMC and GF, and both foundries are certified for ARM stamp-outs; AMD-based APUs/CPUs are only stamp-certified at GF. The more than $8 billion being invested in Malta with IBM is a lock for the most advanced wafer scale-downs for both ARM and AMD.

 

TSMC and Samsung were both partners with GF. They all compete against one another.

 

MEMS markets are exploding, and that requires the foundries most current with where the future is headed, and it isn't Intel.

 

http://thefoundryfiles.com/2013/04/23/bringing-mems-to-the-mainstream-latest-milestones-and-future-trends/

 

GF has already announced stamp-outs for 10nm at the start of 2015 and 7nm at the start of 2017.

 

Intel announcing 10nm months after GF/TSMC is a me-too response.

 

ARM CEO has a great view of what he thinks of Intel:

 

http://www.electronicsweekly.com/mannerisms/markets/intel-has-no-process-advantage-2012-10/

 

As usual Intel makes crap up as it goes along and the media buys it hook, line and sinker.

Quote:
Mannerisms

Intel Has No Process Advantage In Mobile, says ARM CEO

Intel has no advantage in IC manufacturing when it comes to manufacturing processes used for mobile ICs, Warren East, CEO of ARM, tells EW.

 

“This time last year there was a lot of noise from the Intel camp about their manufacturing superiority,” says East, “we’re sceptical about this because, while the ARM ecosystem was shipping on 28nm, Intel was shipping on 32nm. So I don’t see where they’re ahead.”

 

Furthermore, with the foundries accelerating their process development timescales, it looks increasingly unlikely that Intel will be able to find any advantage on mobile process technology in the future.

 

“We’re supporting all the independent foundries,” says East. That includes 20nm planar bulk CMOS and 16nm finfet at TSMC; 20nm planar bulk CMOS and 14nm finfet at Samsung and 20nm planar bulk CMOS, 20nm FD-SOI and 14nm finfet at Globalfoundries.

 

It gives the ARM ecosystem a formidable array of processes to choose from. “I’m no better equipped to judge which of these processes will be more successful than anyone else,” says East, “our approach is to be process agnostic.”

 

The important thing is that the foundries’ process roadmap is on track to intersect Intel’s at 14nm.

 

14nm will be the first process at which Intel intends to put mobile SOCs to the front of the node i.e. putting them among the first ICs to be made on a new process.

 

Asked if the foundries were prepping their next generation processes with the intention of putting mobile SOC at the front of the node, East replies: That’s the information we’re seeing from our foundry partners.”

 

Globalfoundries intends to have 14nm finfet in volume manufacturing in 2014, the same timescale as Intel has for introducing 14nm finfet manufacturing.

 

In fact, GF’s 14nm process may have smaller features than Intel’s 14nm process because, says Mojy Chian senior vp at Globalfoundries, because “Intel’s terminology doesn’t typically correlate with the terminology used by the foundry industry. For instance Intel’s 22nm in terms of the back-end metallisation is similar to the foundry industry’s 28nm. The design rules and pitch for Intel’s 22nm are very similar to those for foundries’ 28nm processes.”

 

Jean-Marc Chery, CTO of STMicroelectronics, points out that the drawn gate length on Intel's "22nm" process is actually 26nm.

 

Furthermore Intel’s triangular fins, which degrade the advantages of finfet processing could underperform GF’s rectangular fins which optimise the finfet advantage.

 

At the front of the GF 14nm finfet node will be mobile SOCs says Chian. GF has been working with ARM since 2009 to optimise its processes for ARM-based SOCs.

 

At TSMC the first tape-out on its 16nm finfet process is expected at the end of next year. That test chip will be based on ARM’s 64-bit V8 processor.

 

Using an ARM processor to validate its 16-nm finfet process should give TSMC’s ARM-based SOC customers great confidence.

 

Asked about the effects of finfets on ARM-based SOCs, East replies: "There's no rocket science in what you get out of it. The question is: does it deliver the benefits at an acceptable cost? You don't get something for nothing. How much does it cost to manufacture? How good is the yield? And that, of course, affects cost."

 

And so on goes Intel beating its head against the wall to get into the low-margin mobile business.

 

Recently Intel said it expected its Q4 gross margin to drop 6%, from Q3's 63% to 57%. Shock, horror, said the analysts.

 

But if Intel succeeds in the mobile business, its gross margin will drop a lot more than that.

 

It’s a funny old world.

 

ARM is quite pleased with the tape-outs at TSMC, Samsung and GF. Apple has choices.

 

With the announcement of an additional Fab 8.2 and $10 Billion invested in Saratoga:

 

http://www.timesunion.com/business/article/New-chip-fab-could-go-up-fast-4318910.php

 

It's rather clear that Intel will get squeezed from all sides. Apple will never have to use Intel for any of its embedded devices.


Edited by mdriftmeyer - 5/5/13 at 7:46pm
post #107 of 136
Performance will improve, for one. That however isn't the big issue: today's iOS systems come with 1GB of RAM, which could easily expand to 2GB in iPads this year. Considering the way Safari and other apps behave on the platform, this extra memory is needed. After that it becomes a problem of how you divide up the memory map between system demands and user memory.

What a 64-bit chip with a 64-bit address space provides is a clean way for Apple to move iOS forward without having to jump through hoops to leverage more RAM on 32-bit systems once they move past 2GB. Beyond that, I suspect Apple will at some point have to address demands for multitasking user apps; one approach there would be virtual 32-bit images. Cortex-A15 is a possibility, but frankly it is a short-term solution; it just makes sense to set yourself up with 64-bit hardware and be done with it. Also, more RAM means that Apple can add more features to the OS that operate without encroaching on user-space RAM.

So the real issue comes down to this: 64-bit hardware would give iOS a lock on future development. It effectively removes an obstacle or distraction from software development. Technically it isn't a huge issue today, as Apple can transition to 2GB easily, but it becomes significant very quickly after that.
Quote:
Originally Posted by v5v View Post

What's the point of 64 bit in a computer with only half a GB of RAM?
iPad is at 1GB.
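
A minimal sketch of the addressing ceiling behind that argument (32-bit pointers top out at 4 GiB, which the OS must then split between kernel and user space):

Code:
# A 32-bit pointer can name at most 2^32 bytes; 64-bit removes the ceiling.
for bits in (32, 64):
    print(f"{bits}-bit: {2 ** bits / 2 ** 30:,.0f} GiB addressable")
# 32-bit: 4 GiB
# 64-bit: 17,179,869,184 GiB (16 EiB)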
post #108 of 136
Quote:
Originally Posted by wizard69 View Post

IPad is at 1GB.

The sad thing for mobile computing is that it won't be too long before Android-based devices need more than 4GB RAM and 64-bit ARM is upon us. They'll use that as marketing, consumers will likely eat it up, and the trolls will likely come here saying Apple is out of touch and falling behind for only offering 2GB RAM and a 32-bit architecture.

"The real haunted empire?  It's the New York Times." ~SockRolid

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

Reply

"The real haunted empire?  It's the New York Times." ~SockRolid

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

Reply
post #109 of 136
Quote:
Originally Posted by mdriftmeyer View Post

It's rather clear that Intel will get squeezed from all sides. Apple will never have to use Intel for any of its embedded devices.

I hope that will be the case. I don't think it would be a good situation if one company (especially Intel) had a manufacturing advantage.

GlobalFoundries is suggesting a better roadmap than Intel:

http://www.xbitlabs.com/news/other/display/20130211140309_GlobalFoundries_10nm_Process_on_Track_for_2015_7nm_Fabrication_Process_Due_in_2017.html

7nm in 4 years. Apple is on 32nm just now with the A6X. The A6 and A6X are already faster than an entry PowerMac G5. By the time they get down to those smaller processes, they'll be able to get 2013 laptop performance on passively cooled hardware using less than 2W. No fan, 24+ hour battery life.
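
A back-of-envelope check on that last claim (a sketch: the ~50 Wh pack is an assumption, roughly a 13" MacBook Air's battery, at the ~2W draw stated above):

Code:
# Runtime = battery capacity / average draw; the 50 Wh pack is assumed.
battery_wh, draw_w = 50, 2
print(f"runtime: {battery_wh / draw_w:.0f} hours")  # ~25 hours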
post #110 of 136
Originally Posted by SolipsismX View Post
The sad thing for mobile computing is that it won't be too long before Android-based devices will be needing more than 4GB RAM and 64-bit ARM will be upon us.

 

Hey, look at it this way:

For the first time in its existence, a company other than Apple will be the driving force (and testbed, as Apple often is) behind innovation in the industry.

 

Android crap will test out all the early (too early*) 64-bit ARM chips. They'll have the failures and the glitches, and by the time they get it right, Apple will be wanting the later, faster, cheaper models. 

 

*Oh, I guess LTE, but that's not really Apple's industry.

post #111 of 136
Quote:
Originally Posted by Tallest Skil View Post

Hey, look at it this way:

For the first time in its existence, a company other than Apple will be the driving force (and testbed, as Apple often is) behind innovation in the industry.

Android crap will test out all the early (too early*) 64-bit ARM chips. They'll have the failures and the glitches, and by the time they get it right, Apple will be wanting the later, faster, cheaper models. 

*Oh, I guess LTE, but that's not really Apple's industry.

As with the desktop OS move from 32-bit to 64-bit, Apple wasn't first, but they were the first to do it right: fat binaries that support both architectures simultaneously, and excellent driver support when they made the move. I really hope Android doesn't copy what MS did, and since there is no server market for Android I don't expect them to.

"The real haunted empire?  It's the New York Times." ~SockRolid

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

Reply

"The real haunted empire?  It's the New York Times." ~SockRolid

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

Reply
post #112 of 136
Quote:
Originally Posted by wizard69 View Post

So why are you comparing the AIRs to two or three year old hardware.

 

 

2-3 year old desktop hardware.  That's not shabby.

Quote:
I'm not sure why people can't grasp what has been said here: the AIR is a terrible performer when judged against what is possible with current technology. That is NOT debatable; it is the nature of the ULV processor.

 

 

It sure as hell is debatable if the performance is equal to what a desktop was doing a couple of years ago.

 

 

Quote:
Benchmarks seldom reflect real world use.

 

Yes, in the real world the performance differences are even smaller.

 

Quote:
I really wish people would read what I've written here. By definition, when you buy an AIR you get a low-end processor. It doesn't really matter how well that machine performs with respect to 3-year-old hardware. I don't go out and buy hardware to get 3-year-old performance, nor do I pine for the foolish concept of an external GPU.

 

I read what you wrote here; it's just stupid.  Of course a ULV processor isn't as fast as a top-end desktop CPU.  As you say: so what?

 

It doesn't make it terrible.  Nor does it automatically make it a "terrible solution for your needs" even as a developer (unless you happen to be a 3-D dev).  The fact is that the current 2.0GHz dual-core i7 is fast enough for many users who a few years ago would have been considered "demanding" users.

 

When you buy an Air you do not get a low-end processor but a low-power processor.

post #113 of 136
I was talking with a friend of mine and I predict the Iris Pro graphics processors will probably be BTO only. The processors with GT2 graphics will be standard.
post #114 of 136
Quote:
Originally Posted by nht View Post

 

 

2-3 year old desktop hardware.  That's not shabby.

 

 

It sure as hell is debatable if the performance is equal to what a desktop was doing a couple of years ago.

 

 

 

Yes, in the real world the performance differences are even smaller.

 

 

I read what you wrote here; it's just stupid.  Of course a ULV processor isn't as fast as a top-end desktop CPU.  As you say: so what?

 

It doesn't make it terrible.  Nor does it automatically make it a "terrible solution for your needs" even as a developer (unless you happen to be a 3-D dev).  The fact is that the current 2.0GHz dual-core i7 is fast enough for many users who a few years ago would have been considered "demanding" users.

 

When you buy an Air you do not get a low-end processor but a low-power processor.

 

Nice reply to a bully.

 

What's more "surprising" is that the BTO 13" MBA (with a low-end 2.0 Core i7-3667U in it) performs as well as the standard 13" MBPs (classic or retina) equiped with regular faster cpus (2.5 Core i5-3210M or 2.6 Core i5-3230M). And once you configure those models with the same RAM and storage, the MBA is still cheaper.

 

MacBook Air (13-inch, Mid 2012), Core i7-3667U 2.0GHz (2 cores): Geekbench 6832; $1599 with 8GB RAM and 256GB SSD
MacBook Pro (13-inch Retina, Early 2013), Core i5-3230M 2.6GHz (2 cores): Geekbench 6821; $1699 with 8GB RAM and 256GB SSD
MacBook Pro (13-inch, Mid 2012), Core i5-3210M 2.5GHz (2 cores): Geekbench 6654; $1699 with 8GB RAM and 256GB SSD
 
That's the only comparison that makes a little sense: similar machines, with similar hardware, at similar prices. Comparing the 11/13" MBAs only to the 15" MBPs (quad-core, dedicated graphics, much more expensive), like Dave does, makes absolutely no sense.
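
Read as points-per-dollar (a sketch over the scores and prices quoted above), the three configurations land like this:

Code:
# Price/performance from the Geekbench 2 scores and prices listed above.
models = [
    ('13" MBA, Core i7-3667U',  6832, 1599),
    ('13" rMBP, Core i5-3230M', 6821, 1699),
    ('13" MBP, Core i5-3210M',  6654, 1699),
]
for name, score, price in models:
    print(f"{name}: {score / price:.2f} Geekbench points per dollar")
# 4.27, 4.01, 3.92 respectively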
 
In any case, Dave doesn't know anything about Intel processors; he said so himself in another thread/post. He has absolutely no credibility with regard to evaluating the performance of computers. He is still on 2008 hardware; what does he know about modern parts?
post #115 of 136
Quote:
Originally Posted by nht View Post


2-3 year old desktop hardware.  That's not shabby.
It depends upon the desktop software, the app and the user.
Quote:

It sure as hell is debatable if the performance is equal to what a desktop was doing a couple years ago.
When you go out and buy a car, do you compare it to what was available in the past or to other products currently on the market? My point, which many seem to miss, is that it is the AIR's performance compared to what else is on the market that makes the processor look terrible. What was available last year, three years ago or ten years ago means nothing.
Quote:

Yes, in the real world the performance differences are even smaller.
Generally I find the opposite; in the real world cores make a difference, because the real world isn't about single-app performance.
Quote:
I read what you wrote here, it's just stupid.  Of course a ULV processor isn't as fast as the top end desktop CPU.  As you say: So what? 
Err, no, that is exactly what makes the current Air processor terrible. It isn't even as fast as many mobile processors, and sometimes it loses by a large margin.
Quote:
It doesn't make it terrible.  Nor does it make it automatically a "terrible solution for your needs" even as a developer (unless you happen to be a 3-D dev).  The fact is that the current 2.0Ghz dual core i7 is fast enough for many users who a few years ago would have been considered "demanding" users.
Sure for many users but certainly not all users.
Quote:
When you buy an Air you do not get a low end processor but a low power processor.
Effectively the same thing in today's Air. Intel might market them as low-power processors to justify a higher price, but that only works if you as a customer buy into it as part of the value equation.
post #116 of 136
Quote:
Originally Posted by mjteix View Post

Nice reply to a bully.
Nice work there, bud! If the best reply you can come up with for somebody's use of a word you don't like is to call them a bully, you just painted a pretty crappy self-portrait. I called the AIR's processor terrible because that is what it is to me. The irrational responses on this forum aren't my doing.
Quote:
What's more "surprising" is that the BTO 13" MBA (with a low-end 2.0 Core i7-3667U in it) performs as well as the standard 13" MBPs (classic or retina) equiped with regular faster cpus (2.5 Core i5-3210M or 2.6 Core i5-3230M). And once you configure those models with the same RAM and storage, the MBA is still cheaper.
Does that not say something about Apple's tiering of products more than anything?
Quote:
MacBook Air (13-inch, Mid 2012), Core i7-3667U 2.0GHz (2 cores): Geekbench 6832; $1599 with 8GB RAM and 256GB SSD
MacBook Pro (13-inch Retina, Early 2013), Core i5-3230M 2.6GHz (2 cores): Geekbench 6821; $1699 with 8GB RAM and 256GB SSD
MacBook Pro (13-inch, Mid 2012), Core i5-3210M 2.5GHz (2 cores): Geekbench 6654; $1699 with 8GB RAM and 256GB SSD

That's the only comparison that makes a little sense: similar machines, with similar hardware, at similar prices. Comparing the 11/13" MBAs only to the 15" MBPs (quad-core, dedicated graphics, much more expensive), like Dave does, makes absolutely no sense.
It makes perfect sense, as you don't go shopping for new computers based upon what was available 3 years ago. You make reasonable comparisons to what is available in the marketplace at the time.
Quote:
 

In any case, Dave doesn't know anything about Intel processors; he said so himself in another thread/post. He has absolutely no credibility with regard to evaluating the performance of computers. He is still on 2008 hardware; what does he know about modern parts?

I certainly don't follow Intel like I used to, as it is a waste of my time; that doesn't mean I know nothing about processors or computers in general. As for modern computers, I have plenty to work with at work that give me a very good indication of where computing hardware is these days. Being work machines, those are all Windows boxes, but a trip down to the Apple store can confirm my opinion fairly quickly. To put it bluntly, it is a waste of my cash to even consider a dual-core machine that doesn't support OpenCL on the GPU.

As far as credibility goes, I have all I need. I understand how the various software packages I use work, and how they can or cannot leverage the cores in the machine. All you have to go on is some public benchmarks.
post #117 of 136

Witty rejoinder withdrawn to avoid unnecessary histrionics.


Edited by v5v - 5/7/13 at 1:21am
post #118 of 136
Quote:
Originally Posted by wizard69 View Post

It depends upon the desktop software, the app and the user.

 

Yes.  So for many folks the MBA is not a "terrible" machine.

 

 

Quote:
When you go out and buy a car, do you compare it to what was available in the past or to other products currently on the market? My point, which many seem to miss, is that it is the AIR's performance compared to what else is on the market that makes the processor look terrible. What was available last year, three years ago or ten years ago means nothing.

 

 

When I go out to buy a car I compare it to what my NEEDS are, not to all the other cars on the market.  If the 2013 F-150 now has enough towing capacity for what I need, then it doesn't really matter that the 2013 F-250 can tow twice as much.  The fact is that smaller modern engines, like modern CPUs, have as much power as larger truck engines did in the past.  Likewise, towing capacity isn't just dependent on the engine but on the transmission, frame, brakes, axles, etc.

 

Quote:
Generally I find the opposite, in the real world cores make a difference because the real world isn't about single app performance.

 

And the real world isn't all about CPU performance.

 

 
Quote:
Originally Posted by wizard69 View Post

To put it bluntly, it is a waste of my cash to even consider a dual-core machine that doesn't support OpenCL on the GPU.

 

Except that the Ivy Bridge Core i7 does support OpenCL.  Just not in OS X, but that's hardly Intel's fault.  Intel just released updated drivers for OpenCL 1.2.

 

Which means that Apple doesn't consider OpenCL GPU support to be very important to OS X, given that the Mini, the MBA and the 13" MBP all lack OpenCL support despite it being available on Intel GPUs.  This may not change even with Haswell; or if it does, it may be retroactive to Ivy Bridge machines, just like the new Intel OpenCL drivers are for both Haswell and Ivy Bridge.

 

Are you going to say the MBP 13" is also a terrible machine?

post #119 of 136
Quote:
Originally Posted by mdriftmeyer View Post

 

Xeon chips will never be in Mac Mini, period.

 

Probably not but never say never.  

 

A Mac Pro mini with a Xeon + ECC RAM + Quadro K500M GPU would be a nice pro addition. If it starts at the $1699 price slot it would be attractive and have a sufficiently high ASP to fit the lineup well (as in little iMac cannibalization and a good low-end Mac Pro entry point).

post #120 of 136
Quote:
Originally Posted by SCProfessor View Post

The Enterprise's core…

 

 

Not enough! I need Barclay tucked away in a closet, acting as my home neural net…
