AppleInsider › Forums › Mac Hardware › Future Apple Hardware › 2014 Mac mini Wishlist

2014 Mac mini Wishlist - Page 12

post #441 of 1392

Apple has always had a low-end 15" MBP. The question being raised is this: would an Intel-GPU-only machine, that is, a Haswell GT3 machine, be good enough for that role? My answer is hedged with this thought: it would be good enough if Apple supported OpenCL on that GPU and it delivered respectable performance. Otherwise the machine would simply not be good enough for many of the more advanced apps users run on it.

 

Sooo the answer is highly qualified, as we really don't know what GT3 will be like, nor do we know if the drivers for Intel hardware will ever get there. You need to realize, though, that technology is moving forward by some rather huge leaps. With Haswell and follow-on chips the good old x86 CPU complex will take up very little space on these chips. Due to this reality it will be possible to put one heck of a GPU on the chip. I say possible because, as far as we know, Intel has yet to do so and Apple has yet to ship drivers that really exploit the hardware.

 

You can see this trend in pics of Apple's "A" series chips and even AMD's Fusion processors. The GPUs have become the largest real-estate users on the dies. It is indeed a brave new world! This is one reason why I express so much concern about GPUs in the various threads on this forum. Often the GPU is the single most important part of the chip in delivering a good user experience. This is why AMD's Brazos often whips Atom when it comes to user acceptance.

Quote:
Originally Posted by Winter View Post


My thing is, why have a low-end 15"? Apple markets premium computers. It would be like selling a Ferrari without heated seats and air conditioning.
post #442 of 1392

The big problem with Intel and OpenCL is that OpenCL isn't supported on the GPU under Mac OS yet. At least that was the case as of last year. This is really pathetic, but it highlights the problems with poor Intel drivers and the general lag at Apple in supporting new hardware. If Apple doesn't turn this around I really don't see strong acceptance of Intel-GPU-only MBPs. Without OpenCL the machine would be significantly weakened for the types of software users run on it.

 

At least that was the way it was late last year, when the only OpenCL support under Mac OS was via the CPU and not the GPU. Apparently it is Apple dragging its feet here, because OpenCL on supporting Intel GPUs works on other operating systems. Either way, an Intel-GPU-only MBP will be less than inviting unless this is rectified.

Quote:
Originally Posted by Marvin View Post


It depends on what you mean by low-end. If you buy a refurb 2012 MBP in 2013, it doesn't become low-end unless the new one is significantly improved. NVidia is doing a GPU rebrand this year as is AMD. NVidia's 20nm Maxwell GPUs won't be out until next year. This gives Intel an opportunity to make a really competitive GPU. If Intel can match the 650M, the NVidia Kepler refresh will only be 25-30% faster and cost more.

It really comes down to the choice between getting a 25-30% faster GPU or a Retina display and I reckon the Retina model at $1799 would be more popular, especially considering 650M performance can run almost every game that's out today on high quality and Intel might even outperform the 650M for OpenCL with a lower power usage, given how poor Kepler has been with this.

I expect the CPUs destined for the MBA will perform around the same as the 640M but the Mini will get the chips that go in the MBP.

The higher-end MBPs can use the 5200 without Crystalwell so that they aren't more expensive, and stick with a dedicated GPU.
post #443 of 1392
Quote:
Originally Posted by wizard69 View Post

Last I knew, OpenCL support from Intel sucked, so it would be nice to hear that things have changed. At least it was nonexistent for GPU compute as of late last year.

The HD4000 seems to benchmark ok with OpenCL:

http://clbenchmark.com/device-info.jsp?config=11977159

Here's the NVidia 660M for comparison:

http://clbenchmark.com/device-info.jsp?config=12811850

The HD3000 lacked driver support.

There is another list here of OpenCL functions tested in CS6:

http://www.pugetsystems.com/labs/articles/Adobe-Photoshop-CS6-GPU-Acceleration-161/

Here's another one where it scores close to the very low-end Radeon 6450:

http://www.computerbase.de/artikel/grafikkarten/2012/test-intel-graphics-hd-4000-und-2500/10/#abschnitt_gpucomputing

Haswell GT3 will double the shader processors and underclock them a little, so I'm expecting an 80-100% (1.8x-2x) performance increase, but that's without considering the Crystalwell memory feature. To match the 650M, they'd need a 180% (2.8x) increase. The shader increase alone will match the performance of the 640M. The Crystalwell design might not have as big an improvement for compute as it would for gaming, so that would still give the higher-up models a selling point.
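The scaling estimate above is easy to sanity-check with arithmetic. The multipliers below are the thread's own assumptions (doubled shaders, a roughly 10% underclock), not confirmed Intel specs:

```python
# Rough scaling estimate for Haswell GT3 vs. the HD 4000 baseline,
# using the figures assumed in this thread (not confirmed specs).
hd4000 = 1.0              # baseline performance
shader_multiplier = 2.0   # GT3 doubles the shader processors
clock_multiplier = 0.9    # "underclock them a little" (~10%, assumed)

gt3_low = hd4000 * shader_multiplier * clock_multiplier   # 1.8x
gt3_high = hd4000 * shader_multiplier                     # 2.0x

# The post estimates ~2.8x the HD 4000 is needed to match a 650M,
# so even the optimistic case leaves a gap before Crystalwell.
needed_for_650m = 2.8
shortfall = needed_for_650m - gt3_high                    # ~0.8x

print(f"GT3 estimate: {gt3_low:.1f}x-{gt3_high:.1f}x, shortfall vs 650M: {shortfall:.1f}x")
```

Even granting the shader doubling, the estimate comes up short of the 650M unless Crystalwell closes the remaining gap.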
post #444 of 1392
Thread Starter 
Quote:
Originally Posted by wizard69 View Post

Apple has always had a low-end 15" MBP. The question being raised is this: would an Intel-GPU-only machine, that is, a Haswell GT3 machine, be good enough for that role? My answer is hedged with this thought: it would be good enough if Apple supported OpenCL on that GPU and it delivered respectable performance. Otherwise the machine would simply not be good enough for many of the more advanced apps users run on it.

The "low-end" 15" should be discrete as far as I see it. The difference, maybe, between that and a higher-end 15" is to offer more options.

I don't feel Haswell will offer enough just yet. You're cashing in a big portion of your chips on something that may not be up to snuff.

Now... tell me this: with the new 10.9 as standard, will graphics memory shared with main memory finally reach 1 GB (1024 MB) or will it still be 768 MB?
post #445 of 1392

This is all well and good, but last I knew nothing from Intel was supported under Mac OS. That is, all OpenCL support that works under Mac OS is targeted at the CPU. I don't know of anybody who has gotten their OpenCL apps to recognize an Intel GPU under Mac OS. Maybe that has changed with the recent rev of Mac OS, but it wasn't the case late last year.

 

I did find some of those benchmarks interesting; AMD holds all five of the top slots. Good for them, I say!

Quote:
Originally Posted by Marvin View Post


The HD4000 seems to benchmark ok with OpenCL:

http://clbenchmark.com/device-info.jsp?config=11977159

Here's the NVidia 660M for comparison:

http://clbenchmark.com/device-info.jsp?config=12811850

The HD3000 lacked driver support.

There is another list here of OpenCL functions tested in CS6:

http://www.pugetsystems.com/labs/articles/Adobe-Photoshop-CS6-GPU-Acceleration-161/

Here's another one where it scores close to the very low-end Radeon 6450:

http://www.computerbase.de/artikel/grafikkarten/2012/test-intel-graphics-hd-4000-und-2500/10/#abschnitt_gpucomputing

Haswell GT3 will double the shader processors and underclock them a little, so I'm expecting an 80-100% (1.8x-2x) performance increase, but that's without considering the Crystalwell memory feature. To match the 650M, they'd need a 180% (2.8x) increase. The shader increase alone will match the performance of the 640M. The Crystalwell design might not have as big an improvement for compute as it would for gaming, so that would still give the higher-up models a selling point.

 

Again, Haswell looks good on paper. The problem is Apple will have to support the GPU as a compute device if they expect to sell MBPs with only a Haswell GPU in them. Unless things have changed very recently, Apple doesn't support the Intel GPUs as OpenCL compute devices. I know Apple has been having significant issues with the Intel drivers, so maybe OpenCL is coming.

 

As to memory, the big mistake here is believing that every OpenCL app needs large amounts of memory to work well. It helps, of course, but isn't a requirement for every use. Beyond that, larger caches and faster memory interfaces to RAM can benefit OpenCL on machines with integrated GPUs. I haven't seen any numbers yet, but having that little bit of VRAM built into the package could have a big impact on power usage; that depends of course on the exact nature of the interface, but the potential is there.

 

Like I've indicated before, I'm looking forward to Haswell. I'm just not convinced that the GPU will be all that great under Mac OS. It seems like every Intel GPU release underdelivers with respect to everybody's expectations.

post #446 of 1392

I'm far from cashing in my chips; if you had read my posts you would see that I have serious reservations about a Haswell-only MBP. Performance is important, and it looks like Haswell might do it for many of us. That does depend upon Apple getting the Intel drivers sorted.

Quote:
Originally Posted by Winter View Post


The "low-end" 15" should be discrete as far as I see it. The difference, maybe, between that and a higher-end 15" is to offer more options.

I don't feel Haswell will offer enough just yet. You're cashing in a big portion of your chips on something that may not be up to snuff.

Now... tell me this: with the new 10.9 as standard, will graphics memory shared with main memory finally reach 1 GB (1024 MB) or will it still be 768 MB?

With Apple, who knows! Honestly, we don't even know what is in 10.9 yet. Personally I think Apple might just put a hold on current configurations and wait for hardware supporting heterogeneous addressing. Then you can say goodbye to memory allocation problems.

post #447 of 1392

Why?
 

post #448 of 1392
Thread Starter 
What I am saying is... you are cashing in your chips if you think Haswell's graphics can drive the 15" display, and I have to disagree quite heavily with Marvin that it will be able to. I am not saying he is wrong, though I would really need to see the proof.
post #449 of 1392
Quote:
Originally Posted by Winter View Post

What I am saying is... you are cashing in your chips if you think Haswell's graphics can drive the 15" display, and I have to disagree quite heavily with Marvin that it will be able to. I am not saying he is wrong, though I would really need to see the proof.

The 13" Retina has a high-res display and is driven by the HD4000. The 13" one is 2560 x 1600, the 15" one is 2880 x 1800, so just over 26% more pixels.
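The pixel figure checks out, using the two panel resolutions stated above:

```python
# Pixel counts for the two Retina panels mentioned above.
rmbp13 = 2560 * 1600   # 13" Retina: 4,096,000 pixels
rmbp15 = 2880 * 1800   # 15" Retina: 5,184,000 pixels

extra_pct = (rmbp15 / rmbp13 - 1) * 100
print(f'The 15" panel pushes {extra_pct:.1f}% more pixels')  # ~26.6%
```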
post #450 of 1392
Thread Starter 
Quote:
Originally Posted by Marvin View Post

The 13" Retina has a high-res display and is driven by the HD4000. The 13" one is 2560 x 1600, the 15" one is 2880 x 1800, so just over 26% more pixels.

Yeah, I feel that's pushing it as well, and I wish maybe the 640M LE had been used. Broadwell in 2014 is when I would gain more confidence in pure integrated graphics.
post #451 of 1392
Quote:
Originally Posted by Tallest Skil View Post

What makes you think that the Mac Mini is anywhere near being discontinued? Of course there will be a Haswell model.


If the Mac Mini is discontinued, we'd have a big problem as that is the desktop Mac in this house.

We use two minis daily. All in one Macs will never again be for us.

post #452 of 1392

I doubt this will happen to the Mac Mini. It is a popular computer.
 

post #453 of 1392

The Mini could easily be refactored into something better all the way around. It isn't so much that it gets dropped as that it morphs into something with a more rational design.

 

As far as dropping it completely I agree it would be a huge mistake.  

Quote:
Originally Posted by JoshA View Post


If the Mac Mini is discontinued, we'd have a big problem as that is the desktop Mac in this house.

We use two minis daily. All in one Macs will never again be for us.

post #454 of 1392

What is wrong with the design?
 

post #455 of 1392
Quote:
Originally Posted by marvfox View Post

What is wrong with the design?

Nothing, some people just want a consumer tower so they don't have to buy an iMac or Macbook Pro - a quad-core i7 + 680-ish GPU on the cheap.

A Mini can't dissipate that much heat without sounding like a vacuum cleaner though so it would have to be bigger and/or it would have to use some advanced cooling.

The Haswell Mini should at least satisfy people looking for a cheap, decent gaming box that also has a fast CPU.
post #456 of 1392

Marvin, why do you always associate a more rational Mini with a consumer tower? It really makes no sense at all. The concept of the Mini has its place in Apple's lineup, but that doesn't mean that we need a machine like today's Mini. Apple really should focus on these issues to make the Mini more attractive and to recover from slipping sales:

  1. Make the Mini more accessible! It is almost laughable that they even have a model they call a server, because servicing the hard drives in the machine is a joke.
  2. Beef up the power supply and cooling to handle a bit more wattage. In the past the idea here was to support a discrete GPU of respectable performance. With the advent of Haswell and follow-on chips the need for discrete GPU chips isn't overwhelming, but it would be nice to be able to buy a Mini without bottom-of-the-barrel chips driving it.
  3. Make a PCI Express-based solid-state storage subsystem part of the design. This ought to be easy in the Mini, as they don't use all the lanes available to them anyway. Apple is the only company with the capability to do this at a rational cost point, and such a Mini would be very nice performance-wise. Give us two slots for PCI Express-based storage and they can throw out support for magnetic drives.
  4. Support more RAM in the machines.  

 

This isn't asking a lot, really. With the right approach the machine might even end up thinner, which always seems to be a win at Apple. For example, getting rid of mechanical drives saves on cooling needs and power supply size requirements. This can lead to a thinner Mini. Less power used in storage means that more of the cooling capability can be steered to keeping the processor cool. This means we might be able to bump the processor by ten watts or so, which can lead to far better performance.

 

In any event, I'm not sure why you always seem to equate the need for an improved Mini with a desire for an XMac. They really are two different levels of hardware. As to solid-state memory, we have seen clearly what such memory can do for performance in the likes of the MacBook Airs. At this point SSD storage suitable for the Mini isn't that far out of the affordability range, even for Apple hardware.

Quote:
Originally Posted by Marvin View Post


Nothing, some people just want a consumer tower so they don't have to buy an iMac or Macbook Pro - a quad-core i7 + 680-ish GPU on the cheap.

A Mini can't dissipate that much heat without sounding like a vacuum cleaner though so it would have to be bigger and/or it would have to use some advanced cooling.

The Haswell Mini should at least satisfy people looking for a cheap, decent gaming box that also has a fast CPU.

You are right to an extent about the Mini's cooling capacity, but you should realize that removal of the hard drives significantly alters the cooling burden. If your primary heat load is one device then you can optimize for that. The thing here is that with Haswell and follow-on chips even a ten-watt gain in processor capability can lead to large performance deltas.

 

You pop up the word gaming again to imply that the desire for decent GPU performance is only linked to that. Sadly, that makes you look pretty foolish. For the most part people don't buy Apple hardware for gaming, so I'm not sure why you even bother beating that drum.

 

While somewhat reserved, I do believe that Haswell can lead to a vastly improved Mini, but I also believe that an architecture change can take Haswell far further in the Mini. I really just want to see the Mini get the same engineering respect that the laptop line gets. That means breaking new ground in a desktop computer. I'd also like to see the Mini become the good value it was in the past; it is far from that today.

post #457 of 1392

It is an old architecture that isn't the good value it was in the past! That is the simplest way to put it!

 

In a nutshell, it needs to be modernized for better performance so that we don't have the massive gap between the Mini and Apple's other desktop, the Mac Pro. They need to look at what they have done with the laptops and transfer some of that tech to the Mini. One item to go for would be deletion of the internal hard drives in favor of solid-state storage modules. This would lower the thermal burden, allowing for either a higher-performance processor or the return of a discrete GPU.

 

If Apple and Intel would get OpenCL working on Intel's GPUs, the need for a discrete GPU would melt away. If that isn't to be, then they really need to support a discrete GPU so that modern software doing GPGPU computing doesn't suffer so much on the machine. Frankly, I'd rather see Intel and Apple get off their butts in this regard, especially with Haswell and the follow-ons near.

 

Other things I'd like to see include more TB ports, expanded RAM capability, and a bundled keyboard. All of this without a price hike. If you are wondering: yes, I believe it is doable. If Haswell mobile lives up to its billing and Apple leverages some of those new features, it should actually be a bit cheaper to build a Mini while increasing performance. The addition of flash would push the cost a bit in the wrong direction, but Apple will have to do something sooner or later, as the speed differential between secondary storage and the processor is so great now that it is almost a joke. If people think such a machine is impossible, all I have to say is look at the laptop lineup.

 

To sum it up, the Mini is a stagnate platform; the concept hasn't changed in close to a decade now. A modern platform demands a rethink of what a base computer ought to be designed around.

Quote:
Originally Posted by marvfox View Post

What is wrong with the design?
 

post #458 of 1392

You say a gaming box; is that all this Haswell processor is good for? The MM has more functions than gaming, I am sure.
 

post #459 of 1392
Thread Starter 
Quote:
Originally Posted by marvfox View Post

You say a gaming box; is that all this Haswell processor is good for? The MM has more functions than gaming, I am sure.

Oh, of course. The Mini is a fully functioning computer as well as an HTPC. The thing with Haswell is that while the CPU speed is only supposed to be a 10-15% upgrade, the GPU is supposed to be a lot more. A 650M? Doubtful. A 640M with GDDR5 like in the iMac now? Hopefully with enough memory.
post #460 of 1392
Quote:
Originally Posted by wizard69 View Post

Marvin why do you always associate a more rational mini with a consumer tower?   It really makes no sense at all.

...

In a nutshell, it needs to be modernized for better performance so that we don't have the massive gap between the Mini and Apple's other desktop, the Mac Pro.

It always comes round to being about the space the iMac is in.
Quote:
Originally Posted by wizard69 View Post

With the advent of Haswell and follow-on chips the need for discrete GPU chips isn't overwhelming, but it would be nice to be able to buy a Mini without bottom-of-the-barrel chips driving it.

If NVidia chips are only 25-30% faster, it's not appropriate to say they are bottom of the barrel. It has been a 100% difference in many cases in the past.
Quote:
Originally Posted by wizard69 View Post

Make a PCI Express based Solid State storage subsystem a part of the design. Give us two slots for PCI Express based storage and they can throw out support for magnetic drives.

I don't get this obsession with PCIe storage. It's not going to make a huge difference to real world performance and is just going to restrict storage options. mSATA is fine if you want blade storage but the Mini is a low price option. PCIe storage would just raise the entry price.
Quote:
Originally Posted by wizard69 View Post

removal of the hard drives significantly alters the cooling burden.

SSDs can get as hot as, if not hotter than, HDDs, so it doesn't change anything as far as cooling is concerned.
Quote:
Originally Posted by wizard69 View Post

You pop up the word gaming again to imply that the desire for decent GPU performance is only linked to that.   Sadly that makes you look pretty foolish.   For the most part people dont buy Apple hardware for gaming desires so im not sure why you even bother beating that bush.

It doesn't matter what the GPU is used for; the Haswell chip should perform adequately for it. OpenCL runs on the CPU, and in some cases faster than on the GPU. I tested a Core i7 with a 650M with LuxMark and the CPU is double the speed of the GPU. It'll vary between different tests, but the Mini has a low power limit by design, so they ramp the CPU and GPU up and down to maintain the power limit. They designed it to be a low-power machine, and you want to make it bigger, more expensive, hotter, noisier, and less power-efficient because, as you've said many times, you want a consumer tower in the $1000-2000 range, which there isn't a market for any more.
post #461 of 1392
Quote:
Originally Posted by Marvin View Post


It always comes round to being about the space the iMac is in.
If NVidia chips are only 25-30% faster, it's not appropriate to say they are bottom of the barrel. It has been a 100% difference in many cases in the past.
I don't get this obsession with PCIe storage. It's not going to make a huge difference to real world performance and is just going to restrict storage options. mSATA is fine if you want blade storage but the Mini is a low price option.

 

Apple seems to still use SATA. I think the obsession was due to single SSDs approaching the bandwidth limit of SATA III. Has Apple actually used mSATA? I thought they used their own tweaked implementation. Also, regarding speed differences, you might note that he referred to what is actually supported and what actually works, not so much benchmarks on things that do run. NAND has dropped considerably in price. One factor is how much storage is required for the base Mini. Apple's CTO pricing can be quite high, but it is what it is, which is why I emphasize the base amount.

 

Quote:
I tested a Core i7 with a 650M with LuxMark and the CPU is double the speed of the GPU. It'll vary between different tests, but the Mini has a low power limit by design, so they ramp the CPU and GPU up and down to maintain the power limit. They designed it to be a low-power machine, and you want to make it bigger, more expensive, hotter, noisier, and less power-efficient because, as you've said many times, you want a consumer tower in the $1000-2000 range, which there isn't a market for any more.

Consumer variants of Kepler were supposedly less focused on computation than Fermi. I would have to look up all the details, but the tech sites reported this consistently. Even then, your results puzzle me. I don't really care for LuxMark or LuxRender (which it's based on), but I still find it surprising. The general strength of GPGPU is in highly parallel computing, like you would have with render buckets, assuming the memory on the video card is enough to fully contain the information being processed. That was a complaint when NVidia released iray (not to mention the botched implementation). Dealing with a scene of decent size can generate too much data for most GPUs to hold, considering that even television productions deal with gigabytes of texture data.

post #462 of 1392
Quote:
Originally Posted by Marvin View Post

It always comes round to being about the space the iMac is in.
If NVidia chips are only 25-30% faster, it's not appropriate to say they bottom of the barrel. It has been a 100% difference in many cases in the past.
For a given generation of chips, the Mini has always received bottom of the barrel. I'm not sure how you can deny this. It really isn't a case of what NVidia can do; it is a matter of the Mini being designed with the lowest-performance chips of the entire Mac lineup each year.
Quote:
I don't get this obsession with PCIe storage. It's not going to make a huge difference to real world performance and is just going to restrict storage options. mSATA is fine if you want blade storage but the Mini is a low price option. PCIe storage would just raise the entry price.
SSDs would not have a significant impact on the Mini's price. Sure, there would be a slight increase in costs, but as we have seen on the Airs, the cost of the storage modules isn't a huge factor in pricing for Apple. The Airs have literally replaced the old MacBooks at the low end.

The thing with PCIe storage is this: SATA is a dead end. It is just barely capable of keeping up with second-generation solid-state drives. As such, it isn't the interface to design a long-term platform around. Going PCIe is all about designing for the future.
Quote:
SSDs can get as hot as, if not hotter than, HDDs, so it doesn't change anything as far as cooling is concerned.
That depends upon the technology implemented in the SSD. Everything about the SSD industry is changing extremely fast. One of those things that is changing is the power profile for such storage.
Quote:
It doesn't matter what the GPU is used for, the Haswell chip should perform adequately for it. OpenCL runs on the CPU and in some cases faster than the GPU. I tested a Core i7 with a 650M with LuxMark and the CPU is double the speed of the GPU.
Sadly this is meaningless, as you can always find examples of software that runs to advantage in one place and not another. It is most interesting that the top five OpenCL processors, benchmark-wise, are AMD GPUs. Even with that reality, you have to recognize that benchmarks are often focused on specific types of compute uses. In the end, the lack of Intel OpenCL GPU drivers is a demonstration of a lack of commitment and nothing more. Apps should be free to distribute their compute loads as they see fit.
Quote:
It'll vary between different tests but the Mini has a low power limit by design so they ramp the CPU and GPU up and down to maintain the power limit. They designed it to be a low power machine and you want to make it bigger, more expensive, hotter, noisier and less power efficient because as you've said many times, you want a consumer tower in the $1000-2000 range, which there isn't a market for any more.
Is it really that difficult for you to separate what I say about the Mini from what I'd like to see in an XMac? You constantly do this and I really don't know why. I'm not talking about blowing out the Mini to handle 300 watts of GPU and CPU power; I'm talking about being able to add 10 to 20 watts to its capability. That might not sound like much, but with the likes of Haswell it could be a huge step up in performance.
post #463 of 1392
Thread Starter 
Suppose they got rid of the server last year and went with a dual-core i7 and the 640M with 512 MB of GDDR5 for $799. Could that have been feasible?
post #464 of 1392
Quote:
Originally Posted by Winter View Post

Suppose they got rid of the server last year and went with a dual-core i7 and the 640M with 512 MB of GDDR5 for $799. Could that have been feasible?

Possibly. It really depends upon how the wattages add up. The big problem is that I wouldn't pay $800 these days for a dual-core machine. I look at it this way: they managed to put much more into the MacBook Pros.
post #465 of 1392

So the GPU is really the main factor with this Haswell processor. Interesting thought.
 

post #466 of 1392
Quote:
Originally Posted by hmm View Post

I think the obsession was due to single ssds approaching the bandwidth limit of SATA III.

Only in sequential reads, and in some cases writes. Random reads/writes are nowhere near it. With 500MB/s sequential writes, you'd fill up a 256GB SSD in 8.5 minutes, so there's no urgency to use a faster interface.
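The 8.5-minute figure is straightforward to verify, treating 1 GB as 1000 MB as drive vendors do:

```python
# Time to fill an SSD at full sequential write speed.
capacity_gb = 256
seq_write_mb_per_s = 500   # SATA III-class sequential write rate

seconds = capacity_gb * 1000 / seq_write_mb_per_s   # 512 s
minutes = seconds / 60
print(f"{minutes:.1f} minutes to fill the drive")    # ~8.5
```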
Quote:
Originally Posted by hmm View Post

Even then your results puzzle me. I don't really care for LuxMark or Lux Render (which it's based on), but I still find it surprising. The general strength of GPGPU is in highly parallel computing, like you would have with render buckets, assuming the memory on the video card is enough to fully contain the information being processed. That was a complaint when NVidia released iray (not to mention botched implementation). Dealing with a scene of decent size can generate too much data for most gpu to hold considering that even television productions deal with GB of texture data.

It could just be poor optimisation with LuxRender but the results are:
Core-i7 3615QM : NVidia 650M
basic scene = 5678K rays/sec : 1510K rays/sec
medium scene = 3746K rays/sec : 1387K rays/sec
complex scene = 2231K rays/sec : 1082K rays/sec

The memory never exceeded the GPU memory but in every case, the i7 was at least twice as fast.
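Working out the speedups from the LuxMark figures quoted above shows where the "at least twice as fast" claim comes from:

```python
# CPU-vs-GPU speedup from the LuxMark figures quoted above (Krays/sec),
# pairs are (Core i7 3615QM, NVidia 650M).
results = {
    "basic":   (5678, 1510),
    "medium":  (3746, 1387),
    "complex": (2231, 1082),
}
ratios = {scene: cpu / gpu for scene, (cpu, gpu) in results.items()}
for scene, r in ratios.items():
    print(f"{scene}: i7 is {r:.1f}x the 650M")
# basic ~3.8x, medium ~2.7x, complex ~2.1x: the gap narrows as
# scene complexity rises, but the CPU stays at least 2x ahead.
```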

CLBenchmark (only runs under Windows) showed that in some cases the GPU is ahead and in others, the CPU. I don't know why the raytrace failed on the 650M as it has the latest drivers:



In the OpenCL Galaxies demo, the i7 scores 195FPS/62GFLOPs, the 650M scores 479FPS/161GFLOPs.

Adobe CS apps show massive gains for just the NVidia GPU as the acceleration is CUDA-only, but if it were OpenCL, it would run vector code on the CPU too.

A dedicated GPU is not necessary for decent levels of computation or gaming. As long as the IGP is powerful enough, when it's combined with a powerful CPU it will run OpenCL code just fine. With more computation units, like the 40 in Haswell vs 16 in Ivy Bridge, parallel tasks should run faster.

Power has to be taken into consideration too. A 250W GPU might perform 5x faster than a 95W CPU. When you factor in the power draw, it's not as impressive.

There's no question that GPU computation is better in many cases but it's not as impressive with low power GPUs.
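The performance-per-watt point two paragraphs up can be made concrete. The 5x speedup and the wattages are the post's hypothetical figures, not measurements:

```python
# Hypothetical figures from the post above: a 250W GPU that is
# 5x faster than a 95W CPU in absolute terms.
gpu_speedup = 5.0
gpu_watts = 250
cpu_watts = 95

# Normalize the speedup by the ratio of power budgets.
perf_per_watt_gain = gpu_speedup / (gpu_watts / cpu_watts)
print(f"Per watt, the GPU is only {perf_per_watt_gain:.1f}x faster")  # 1.9x
```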
Quote:
Originally Posted by Wizard69 
the mini being designed with the lowest performance chips of the entire Mac line up each year.

Not true; I don't see a quad-core i7 Air or 13" MacBook Pro. Even the iMac base models are all quad-core i5. The $799 Mini is quad-core i7.
Quote:
Originally Posted by Wizard69 
SSD would not have a significant impact on the Minis price.

You can configure it with an SSD already - a 256GB SSD adds $300.
Quote:
Originally Posted by Wizard69 
the lack of Intel OpenCL GPU drivers is a demonstration of a lack of commitment

Intel had their own compute framework going for a while but they seem to be on board with OpenCL. It was an afterthought with Ivy Bridge but the Haswell IGP should be OpenCL 1.2 compliant and I expect it to ship with OpenCL drivers.
Quote:
Originally Posted by wizard69 
I'm talking about being able to add 10 to 20 watts to its capability. That might not sound like much but with the likes of Haswell it could be a huge step up in performance.

It depends what you consider to be a huge step up. You only get about 5 GFLOPs per watt from a GPU, so 10-20W won't double the performance. You can always justify adding a little more to something, but is it going to make it any more compelling to the target audience? The target audience for the Mini is people who want a low-price desktop and server.
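The rule of thumb above makes the limit easy to see; the 5 GFLOPs/W figure is the post's estimate, not a measured number:

```python
# Extra compute bought by a 10-20W thermal budget increase, using
# the ~5 GFLOPs-per-watt rule of thumb from the post above.
gflops_per_watt = 5
extra_watts = (10, 20)

gains = {w: w * gflops_per_watt for w in extra_watts}
for w, g in gains.items():
    print(f"+{w} W buys roughly {g} GFLOPs")
# 50-100 GFLOPs is an incremental gain, not a doubling, for a GPU
# already in the several-hundred-GFLOP class.
```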

Let's say they made a Mini with a quad i7 and a 750M Kepler GPU with a 105W PSU and it was a bit bigger with accessible drive slots at the back. Would you buy it for $999 and if so, what would you be using the GPU for?
Quote:
Originally Posted by marvfox 
So the GPU is really the main factor with this Haswell processor.

Power saving too as they can shut down parts more efficiently. They ran a GPU test quite well at 8W. During idle or low usage, they'd be able to scale it down pretty low so a laptop might get an extra few hours of battery life.
post #467 of 1392
Quote:
Originally Posted by wizard69 View Post

 

To sum it up the Mini is a stagnate platform, the concept hasn't changed in close to a decade now.  A modern platform demands a rethink of what a base computer ought to be designed around.  

 

Given that the mini was released in 2005 and got an updated form factor in 2010, it's a wonder you don't think the iPhone, being a whole 2 years younger, is also a stagnate [sic] platform, given that the concept hasn't changed in a while now either. 

 

Gee...the MBP must be an even more stagnate [sic] platform too given the laptop concept has been around since the 80's. 

 

Ow.  Rolling your eyes really hard hurts.

 

For everyone that doesn't use the GPU heavily the current mini is an f-ing powerhouse for the form factor.  When the Haswell update happens then the Mini will handle moderate GPU tasks.

 

"Don’t look now, but the new Mac minis are getting comparable to the last gen Xserve and 2010 Mac Pros as far as benchmarks. Tech progress marches on."

 

http://blog.macminicolo.net/post/34175374589/impressions-of-the-2012-mac-mini-updated

 

The biggest limitation is 16GB RAM.  If you need 32+GB and a better GPU then the cheapest option today is a refurb 2010 MP on the Apple site for $1819.

 

Given that a tricked-out mini is $1449 (16GB, Fusion, 2.6GHz), that's not a bad deal even if it is an older Nehalem CPU. It's only $600-$700 more, depending on where you source aftermarket memory and SSDs if you don't buy those from the Apple store... which I wouldn't.

post #468 of 1392
Thread Starter 
Quote:
Originally Posted by nht View Post

For everyone that doesn't use the GPU heavily the current mini is an f-ing powerhouse for the form factor.  When the Haswell update happens then the Mini will handle moderate GPU tasks.

I cannot wait for that. I am so psyched for Haswell. I hope I haven't given the impression that I think the 2012 Mac mini is terrible; I just don't think it's a good enough update for me to replace my current 2011 Mac mini. I feel the Haswell GPU and quad-core processor will be worthy of my money.
post #469 of 1392
Quote:
Originally Posted by Marvin View Post


Only in sequential reads and in some cases writes. Random reads/writes are nowhere near it. With 500MB/s sequential writes, you'd fill up a 256GB SSD in 8.5 minutes so there's no urgency to use a faster interface.

It was an incomplete explanation on my part. I was thinking of my explanation in the thread regarding Mac Pros and the potential for 2TB SSDs. I don't expect them soon, but it made me think of their use for scratch data or files that require quick access, where a larger volume would be essential. I was saying that future SSD generations will further eat away at RAID-on-a-chip solutions as sizes grow appropriate for an increasing number of usage scenarios.

A good internal RAID card can run you $1000. With enterprise or standard drives it's just a matter of how large a volume and what configuration you want, but it can get quite expensive, and you generally don't want to fill more than half their capacity if you need to ensure a given level of performance. SSDs will continue to leverage against such solutions as their capacities increase. This isn't to be confused with longer-term or larger storage solutions, and I guess it's a bit less relevant to the mini.

Since 2008 or so, SSDs have become significantly faster at peak speeds. With HDDs the SATA bus limit was a non-issue for single drives. No one suggested drag racing them, which is really what you're suggesting with that 8.5-minute figure. Okay, I clearly didn't sleep well last night if it's midday and I find that funny.
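For what it's worth, the 8.5-minute figure quoted above does check out (using decimal units, as drive makers quote them):

```python
# Time to fill a 256GB SSD at a sustained 500MB/s sequential write.
capacity_gb = 256
write_mb_per_s = 500

seconds = capacity_gb * 1000 / write_mb_per_s  # 1GB = 1000MB (decimal)
print(f"{seconds / 60:.1f} minutes")           # ~8.5 minutes
```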

Quote:

It could just be poor optimisation with LuxRender but the results are:
Core-i7 3615QM : NVidia 650M
basic scene = 5678K rays/sec : 1510K rays/sec
medium scene = 3746K rays/sec : 1387K rays/sec
complex scene = 2231K rays/sec : 1082K rays/sec

The memory never exceeded the GPU memory but in every case, the i7 was at least twice as fast.

CLBenchmark (only runs under Windows) showed that in some cases the GPU is ahead and in others, the CPU. I don't know why the raytrace failed on the 650M as it has the latest drivers:



In the OpenCL Galaxies demo, the i7 scores 195FPS/62GFLOPs, the 650M scores 479FPS/161GFLOPs.

Adobe CS apps show massive gains for just the NVidia GPU as it's CUDA-only but if it was OpenCL, it would run vector code on the CPU too.

A dedicated GPU is not necessary for decent levels of computation or gaming. As long as the IGP is powerful enough, when it's combined with a powerful CPU, it will run OpenCL code just fine. With more computation units like the 40 in Haswell vs 16 in Ivy Bridge, parallel tasks should run faster.
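Reducing the LuxRender figures above to ratios makes the gap plain (numbers taken straight from the quoted benchmark):

```python
# Core i7-3615QM vs NVidia 650M LuxRender throughput, in Krays/sec,
# from the benchmark figures quoted above.
results = {
    "basic":   (5678, 1510),
    "medium":  (3746, 1387),
    "complex": (2231, 1082),
}

for scene, (cpu, gpu) in results.items():
    print(f"{scene:8s}: CPU {cpu / gpu:.1f}x faster than GPU")
```

In every scene the ratio comes out at 2x or better, matching the "at least twice as fast" claim.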

 

Adobe CS varies. Premiere and After Effects are CUDA; they started migrating those earlier. Photoshop, which moved later, went the OpenCL route. A lot of these things seem to be used to increase responsiveness where real-time interaction would have been impossible with uncached data in an x86 implementation. Apps like Capture One have gone the OpenCL route to counter this, as they moved away from cached low-res previews with version 4 (2007 or 2008). DaVinci Resolve can use OpenCL for playback, but only on Macs; Windows requires CUDA, which is fine as NVidia tends to dominate professional graphics on the Windows side. Mari requires CUDA too. I don't really use that one at all, but I'm familiar with it, and the same goes for Resolve. I still have Resolve Lite installed.

 

Quote:
Power has to be taken into consideration too. A 250W GPU might perform 5x faster than a 95W CPU. When you factor in the power draw, it's not as impressive.
 

Barefeats used to post benchmarks on this kind of stuff. Those aren't really realistic use cases for that software, but I chose an example that is somewhat of a mass-market application. GPU acceleration can be quite significant. Prior to capable GPUs, companies like this were often more conservative about the tools they would build. Capture One is another example, and I suspect Hasselblad will eventually do the same thing with Phocus. Generating high-resolution previews that involve debayering, gamma correction, and a bunch of other adjustments can soak up a lot of CPU cycles.

 

Quote:

Not true, I don't see a quad-core i7 Air or 13" Macbook Pro. Even the iMac base models are all quad-core i5. The $799 Mini is quad-core i7.

I don't think Intel will drop the use of dual cores too soon. They have focused much more on GPU improvements, and core count is likely to be a lower priority in low-TDP chips until the GPU is considered up to snuff in those solutions. I suspect the low end of the discrete GPU market will not last unless this turns out to be a bubble.

 

Quote:
Intel had their own compute framework going for a while but they seem to be on board with OpenCL. It was an afterthought with Ivy Bridge but the Haswell IGP should be OpenCL 1.2 compliant and I expect it to ship with OpenCL drivers.

I wasn't aware of it. OpenCL isn't a bad thing. CUDA is only used in so many things because it was available in a stable form at an earlier date.

 

Quote:
It depends what you consider to be a huge step up. You only get about 5GFLOPs per watt from a GPU so 10-20W won't double the performance. You can always justify adding a little more to something but is it going to make it any more compelling to the target audience? The target audience for the Mini is people who want a low-price desktop and server.


That is a good question. I suspect highly parallel processes will become typically allocated to the gpu in the near future. I'm not even completely focused on the Mini here. I'm more curious what it will mean for things like the ipad, which may be able to take on additional types of applications.

 

Quote:
Let's say they made a Mini with a quad i7 and a 750M Kepler GPU with a 105W PSU and it was a bit bigger with accessible drive slots at the back. Would you buy it for $999 and if so, what would you be using the GPU for?
Power saving too as they can shut down parts more efficiently. They ran a GPU test quite well at 8W. During idle or low usage, they'd be able to scale it down pretty low so a laptop might get an extra few hours of battery life.

I noticed you mentioned Kepler, as Maxwell will be 2014 at the earliest. They could always return to AMD, although I think the Mac Pro needs at least one decent NVidia option given the amount of CUDA software. It doesn't necessarily have to come directly through Apple; it just has to be stable.

post #470 of 1392
Quote:
Originally Posted by hmm View Post

I was saying that future ssd generations would further eat away from the raid on a chip solutions as sizes grow

SATA Express should be in place in plenty of time:

http://www.sata-io.org/technology/sataexpress.asp
http://www.computerworld.com/s/article/9235229/Speedy_8Gbit_16Gbit_SATA_Express_systems_coming_this_year

I reckon non-volatile RAM will be where we end up anyway:

http://www.desktopreview.com/default.asp?newsID=2016&News=Toshiba+reRAM+Non-Volatile+RAM
Quote:
Originally Posted by hmm View Post

Prior to capable gpus, companies like this were often more conservative on the tools they would build.

Certainly the image processing apps seem to have jumped on it in a big way and that's really where the GPU shines with so many units working in parallel. Haswell will hopefully put in a good showing with more than doubling their execution units. With shared memory, Intel might even be able to shuttle the processing to the most appropriate parts of the CPU. They could process the first couple of cycles of a process on both the CPU and IGP and whichever was faster, use it for the rest of the task. OS X should really be able to do that too but context switching won't be as easy without shared memory.
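The idea of running the first cycles on both the CPU and IGP and keeping the winner could be sketched roughly like this (the worker functions are placeholders for the two compute paths, not a real OpenCL dispatcher):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def time_sample(fn, sample):
    """Time fn on a small sample of the workload."""
    start = time.perf_counter()
    fn(sample)
    return time.perf_counter() - start

def race_and_dispatch(cpu_fn, gpu_fn, sample, workload):
    """Try a sample on both devices, then hand the full job to the faster one."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        cpu_time = pool.submit(time_sample, cpu_fn, sample)
        gpu_time = pool.submit(time_sample, gpu_fn, sample)
        winner = cpu_fn if cpu_time.result() <= gpu_time.result() else gpu_fn
    return winner(workload)
```

With shared memory, switching the full workload to the winner costs nothing; without it, the loser's partial state would have to be copied or thrown away, which is the context-switching problem mentioned above.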
Quote:
I wasn't aware of it.

It used to be called Intel Ct but it looks like they might have renamed it to Cilk:

http://software.intel.com/en-us/intel-cilk-plus

A while ago it seemed like Intel wanted to dismiss GPU computation entirely and NVidia made fun of them when they said GPUs were only 14x faster:

http://blogs.nvidia.com/2010/06/gpus-are-only-up-to-14-times-faster-than-cpus-says-intel/

That probably encouraged them to focus more on their IGPs because they really seem to dislike NVidia.
Quote:
I noticed you mentioned Kepler, as Maxwell will be at least 2014. They could always return to AMD

AMD is renaming their GPUs this year too. It's the 7000 series rebranded as the 8000 series:

http://www.techpowerup.com/178404/AMD-Rebrands-Radeon-HD-7000-Series-GPUs-to-HD-8000-for-OEMs.html

That's why I was wondering about the report of the AMD 7000 series drivers for the Mac Pro but it makes sense when they are the same GPUs as last year.
post #471 of 1392

It's worded like a way of using PCIe directly (SATA bandwidth comes from available PCIe lanes anyway) while maintaining backward compatibility. At least that's how your article describes it; I'm not sure that's really the case.

Quote:

The SATA Express specification will enable development of new devices that can utilize the PCIe interface and maintain compatibility with existing SATA applications. The technology, SATA-IO claims, will provide a cost-effective means to increase device interface speeds to 8Gb/s and 16Gb/s.

 

 

Quote:

It might be a little while.

 

 

Quote:
At this point, ReRAM looks to cost several times as much as NAND flash. To make ReRAM memory, you have to take a process as complex as NAND manufacturing and then put another layer on top of that, he said.

Of course RAM has a far higher cost per GB than solid state storage. The two don't have to be in line with one another for this to find a practical use. The first thing that came to mind was Apple's sleep states.

 

Quote:
Certainly the image processing apps seem to have jumped on it in a big way and that's really where the GPU shines with so many units working in parallel. Haswell will hopefully put in a good showing with more than doubling their execution units. With shared memory, Intel might even be able to shuttle the processing to the most appropriate parts of the CPU. They could process the first couple of cycles of a process on both the CPU and IGP and whichever was faster, use it for the rest of the task. OS X should really be able to do that too but context switching won't be as easy without shared memory.
It used to be called Intel Ct but it looks like they might have renamed it to Cilk:

http://software.intel.com/en-us/intel-cilk-plus

Image processing apps have always used things like tiles and bucket systems, so they are a good example. They're also an area of familiarity for me, which is why I tend to reference them. What I would like to see is a really good set of editing tools that can work with data as close to its raw form as possible, in something like 16-bit half float, regardless of whether we're talking about still images or video. Given that these things are often processed in tiles or buckets, they are a good fit for GPU computation. 3D rendering is the same; it's just that the mind-boggling amounts of data can be difficult within the RAM limitations.

 

Quote:
A while ago it seemed like Intel wanted to dismiss GPU computation entirely and NVidia made fun of them when they said GPUs were only 14x faster:

http://blogs.nvidia.com/2010/06/gpus-are-only-up-to-14-times-faster-than-cpus-says-intel/

Intel's claims there sound like posturing, but "only 14 times faster" is a somewhat silly thing to say unless NVidia's 14x-faster solution was much more costly or difficult to implement. NVidia was first to market with a stable solution, and they show up frequently in a number of top-ranked HPC systems; see the Top500 list. It varies, and sometimes they're more prevalent in that list than at other times, but they do provide a viable solution for highly parallel number crunching.

 

Quote:


That probably encouraged them to focus more on their IGPs because they really seem to dislike NVidia.
AMD is renaming their GPUs this year too. It's the 7000 series rebranded as the 8000 series:

http://www.techpowerup.com/178404/AMD-Rebrands-Radeon-HD-7000-Series-GPUs-to-HD-8000-for-OEMs.html

That's why I was wondering about the report of the AMD 7000 series drivers for the Mac Pro but it makes sense when they are the same GPUs as last year.

 

You may remember the litigation between the two companies a few years back; they are clearly in competition. Intel also wants to ensure they don't eventually lose the low end of the market to ARM, so a balanced strategy is obviously needed. I could see the death of a greater range of low-end discrete GPUs. I don't know what will happen to the higher-end ones without the ability to leverage low-margin, high-volume areas to distribute development costs.

post #472 of 1392
Quote:
Originally Posted by hmm View Post

It might be a little while.

"At this point, ReRAM looks to cost several times as much as NAND flash. To make ReRAM memory, you have to take a process as complex as NAND manufacturing and then put another layer on top of that, he said."

Of course ram has a far higher cost per GB than solid state storage.

A hybrid solution would be great though:

http://www.engadget.com/2012/06/14/chou-university-builds-hybrid-nand-reram-unit/

You can imagine having a 256GB SSD blade with 16GB ReRAM connected to a fast memory interface. It would effectively give you 256GB of RAM for about $350 but it would depend on how full the storage was - minimum would be 16GB. When loading data off the SSD, it would be extremely fast and with it being non-volatile, it maintains the exact OS state during restarts.
Quote:
Given that these things are often processed in tiles or buckets, they are a good area for gpu computation. 3D rendering is the same. It's just that the mind boggling amounts of data can be difficult with the ram limitations.

Heterogeneous computing is really the best way forward for everything as it can get rid of the memory limits. Certain kinds of processing work faster on the CPU, as the earlier benchmarks showed, so both need to be used in the right situation. It's good that OpenCL has taken off the way it has. I remember the days of AltiVec and hardly anyone supported it.
Quote:
Originally Posted by hmm View Post

Intel's claims there sound like posturing, but "only 14 times faster" is a somewhat silly thing to say unless NVidia's 14x faster solution was much more costly or difficult to implement.

The paper they wrote is here:

http://www.hwsw.hu/kepek/hirek/2010/06/p451-lee.pdf

Intel said NVidia took what they wrote out of context but they didn't really. They just picked out the most favourable stat. At one point Intel says they tested 14 compute kernels and the 230W GTX 280 was on average 2.5x faster than the 130W Core i7-960. One important point they made is that developers tend to not optimise for CPUs as much as GPUs. This will partly be because they have to write the whole thing in vectorised OpenCL for it to work on a GPU.
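Put another way, the paper's own numbers shrink a lot once power is factored in (TDPs are the published figures for those parts):

```python
# GTX 280 (230W) averaged 2.5x a Core i7-960 (130W), per Intel's paper.
gpu_speedup, gpu_tdp, cpu_tdp = 2.5, 230, 130

per_watt = gpu_speedup * cpu_tdp / gpu_tdp
print(f"GPU advantage per watt: {per_watt:.2f}x")  # ~1.41x, not 2.5x
```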

They talk about the transcendental hardware in GPUs being used to speed up GPU acceleration. This probably led them to implement those in hardware with Sandy Bridge:

http://www.anandtech.com/show/3922/intels-sandy-bridge-architecture-exposed/5

"There are other improvements within the EU. Transcendental math is handled by hardware in the EU and its performance has been sped up considerably. Intel told us that sine and cosine operations are several orders of magnitude faster now than they were in current HD Graphics."

These things all happened around the same time. Intel's push towards faster IGPs, Crystalwell being compared directly with NVidia GPUs rather than AMD, their Phi hardware aimed at the Tesla market, and OpenCL support all seem motivated by a desire to compete with NVidia. Let's hope NVidia doesn't stop challenging them because, as we can see with AMD, without that challenge Intel slows back down and makes very small improvements year after year.

Quote:
That probably encouraged them to focus more on their IGPs because they really seem to dislike NVidia.

AMD is renaming their GPUs this year too. It's the 7000 series rebranded as the 8000 series:

http://www.techpowerup.com/178404/AMD-Rebrands-Radeon-HD-7000-Series-GPUs-to-HD-8000-for-OEMs.html

I don't know what will happen to the higher end GPUs without the ability to leverage low margin high volume areas to distribute development costs.

I think this is Intel's strategy with Haswell. NVidia won over quite a few laptop manufacturers with Kepler. If Haswell matches the 650M while also offering huge power savings, it could be game over for NVidia in the low-end market because NVidia's GPUs wouldn't offer a significant enough advantage. Even the 21.5" iMac just has the 640M and 650M. I could easily see them going with the Haswell IGP.
post #473 of 1392
Thread Starter 
I have mentioned this briefly before, though are we expecting Haswell to match the 640M or even the 650M with DDR3 or GDDR5? If it matches the latter, great. If not, eh... : /
post #474 of 1392
Quote:
Originally Posted by Winter View Post

are we expecting Haswell to match the 640M or even the 650M with DDR3 or GDDR5? If it matches the latter, great. If not, eh... : /

Intel seems to think it will match the 650M otherwise they'd probably compare it with a lower model:



The problem with a demo like Dirt 3 is they are running at 1080p, high quality no AA, which the 650M runs at 80FPS. Even if the Haswell was running at 640M speed ~64FPS, you can't tell the difference. They did get 40FPS in Heaven vs 20FPS on the Ivy Bridge:

http://www.notebookcheck.net/Intel-Keynote-Haswell-Dragon-Assistant-Atom-and-more.81667.0.html

The Ivy Bridge laptop in the video actually looks like a Macbook Air - they taped over the name.

Unigine Heaven can be configured with different settings though and on Notebookcheck, the HD 4000 scores 10FPS with the 640M around 24FPS. Intel claims they've doubled performance so that only really puts it at 640M class as the 650M gets 30FPS.
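Putting those Heaven numbers side by side shows why "doubled" only lands at 640M class:

```python
# Heaven FPS figures quoted above (Notebookcheck settings).
hd4000, gt640m, gt650m = 10, 24, 30
haswell_claim = hd4000 * 2   # Intel's "double the HD 4000" claim

print(f"Claimed Haswell: {haswell_claim} FPS vs 640M: {gt640m}, 650M: {gt650m}")
```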

However, their test was in an Ultrabook with a 17W test chip, which will ship at 15W. So I reckon the 15W Haswell destined for the MBA will match the 640M (around double last year's Air) and the Haswell destined for the Mini and Macbook Pro will come closer to the 650M as it has the higher power limit - the highest ones go up to 57W now but they will likely use the 47W models in the Mini and MBP.

They should only use DDR3 memory for the expanded part of the memory. The Crystalwell part is 128MB of embedded DRAM. It won't be as good as a 650M GT with 512MB-1GB of GDDR5, but the real-world difference should be negligible and it uses less power.

Whatever it comes out to, anything between the 640M and 650M will be a good enough GPU for the entry level machines - Apple uses the 640M on the entry iMac. It should get OpenCL 1.2 support from launch as well as OpenGL 4:

http://www.anandtech.com/show/6355/intels-haswell-architecture/12

"Haswell's processor graphics extends API support to DirectX 11.1, OpenCL 1.2 and OpenGL 4.0."

This might mean the next OS X has OpenGL 4 support as it will be supported on every machine.
post #475 of 1392
Quote:
Originally Posted by Marvin View Post


A hybrid solution would be great though:

http://www.engadget.com/2012/06/14/chou-university-builds-hybrid-nand-reram-unit/

You can imagine having a 256GB SSD blade with 16GB ReRAM connected to a fast memory interface. It would effectively give you 256GB of RAM for about $350 but it would depend on how full the storage was - minimum would be 16GB. When loading data off the SSD, it would be extremely fast and with it being non-volatile, it maintains the exact OS state during restarts.
Heterogeneous computing is really the best way forward for everything as can get rid of the memory limits. Certain kinds of processing work faster on the CPU as the earlier benchmarks showed so both need to be used in the right situation. It's good that OpenCL has taken off the way it has. I remember the days of Altivec and hardly anyone supported it.
 

CPUs are designed for a complex set of tasks. It reminds me of the ARM kool-aid: ARM chips are frequently compared on here to x86 without much mention of chip design or complexity.

The way you word the ReRAM thing just sounds like a type of secondary caching. CPU caches have obviously grown too, to minimize distances, and parts of that SSD could already be allocated as virtual memory. What would be the difference with the proposed method? Fast wake from sleep comes to mind, as OS X stores a lot of information, but that's not really slow with an SSD as it is.

Heterogeneous doesn't mean discrete GPUs would have to disappear; several Macs physically contain two today. What matters is whether it continues to make sense for NVidia to keep designing them and for OEMs to include them in a portion of their lines. The high end would likely be the last to go. The past concern I mentioned was that the development costs of things like Teslas are essentially subsidized by the number of cheap GPUs turned out simultaneously.

 

Quote:
The paper they wrote is here:

http://www.hwsw.hu/kepek/hirek/2010/06/p451-lee.pdf

Intel said NVidia took what they wrote out of context but they didn't really. They just picked out the most favourable stat. At one point Intel says they tested 14 compute kernels and the 230W GTX 280 was on average 2.5x faster than the 130W Core i7-960. One important point they made is that developers tend to not optimise for CPUs as much as GPUs. This will partly be because they have to write the whole thing in vectorised OpenCL for it to work on a GPU.

They talk about the transcendental hardware in GPUs being used to speed up GPU acceleration. This probably led them to implement those in hardware with Sandy Bridge:

http://www.anandtech.com/show/3922/intels-sandy-bridge-architecture-exposed/5

"There are other improvements within the EU. Transcendental math is handled by hardware in the EU and its performance has been sped up considerably. Intel told us that sine and cosine operations are several orders of magnitude faster now than they were in current HD Graphics."

I will read that later, but I'm curious whether they're referring solely to geometric expressions. I'm not aware of what kinds of complex math can be run on bare metal; I understand a small amount of OpenGL, but not much beyond that. I would point out that they compared against a generic gaming GPU, which, especially at that time, was not heavily tuned for computation. Actually, NVidia's biggest claims in marketing their Tesla line were in performance per watt and per dollar of hardware cost. I already linked to the supercomputer list on that one.

 

Quote:

These things all happened around the same time. Intel's push towards faster IGPs, Crystalwell being directly compared to NVidia GPUs not AMD, their Phi hardware aimed at the Tesla market and OpenCL support all seems to be motivated by a desire to compete with NVidia. Let's hope NVidia doesn't stop challenging them because as we can see with AMD without that challenge, Intel slows back down and makes very small improvements year after year.
I think this is Intel's strategy with Haswell. NVidia won over quite a few laptop manufacturers with Kepler. If Haswell matches the 650M while also offering huge power savings, it could be game over for NVidia in the low-end market because NVidia's GPUs wouldn't offer a significant enough advantage. Even the 21.5" iMac just has the 640M and 650M. I could easily see them going with the Haswell IGP.

 

Intel never really had a discrete GPU market to cannibalize, and they have moved toward IGPs in their mainstream lines while growth in x86 computing requirements has been slowing lately. I'm not sure they're aiming solely at NVidia. It's likely that they don't want to see ARM start to threaten the lower end of their market, which, as I suggested, matters at least for distributing fabrication and development costs. It wouldn't surprise me to see them go from a 640M to integrated graphics, but Intel makes some pretty bold claims.

 

Quote:
Originally Posted by Marvin View Post


Intel seems to think it will match the 650M otherwise they'd probably compare it with a lower model:



The problem with a demo like Dirt 3 is they are running at 1080p, high quality no AA, which the 650M runs at 80FPS. Even if the Haswell was running at 640M speed ~64FPS, you can't tell the difference. They did get 40FPS in Heaven vs 20FPS on the Ivy Bridge:

http://www.notebookcheck.net/Intel-Keynote-Haswell-Dragon-Assistant-Atom-and-more.81667.0.html

The Ivy Bridge laptop in the video actually looks like a Macbook Air - they taped over the name.

Unigine Heaven can be configured with different settings though and on Notebookcheck, the HD 4000 scores 10FPS with the 640M around 24FPS. Intel claims they've doubled performance so that only really puts it at 640M class as the 650M gets 30FPS.

However, their test was in an Ultrabook with a 17W test chip, which will ship at 15W. So I reckon the 15W Haswell destined for the MBA will match the 640M (around double last year's Air) and the Haswell destined for the Mini and Macbook Pro will come closer to the 650M as it has the higher power limit - the highest ones go up to 57W now but they will likely use the 47W models in the Mini and MBP.

They should only use DDR3 memory for the expanded part of the memory. The Crystalwell part has 128MB of GDDR5. It won't be as good as a 650M GT with 512MB-1GB GDDR5 but the real-world difference should be negligible and it uses less power.

Whatever it comes out to, anything between the 640M and 650M will be a good enough GPU for the entry level machines - Apple uses the 640M on the entry iMac. It should get OpenCL 1.2 support from launch as well as OpenGL 4:

http://www.anandtech.com/show/6355/intels-haswell-architecture/12

"Haswell's processor graphics extends API support to DirectX 11.1, OpenCL 1.2 and OpenGL 4.0."

This might mean the next OS X has OpenGL 4 support as it will be supported on every machine.

I hesitate to take Intel's demos as hard evidence. They talk things up quite a bit, and seeing how chips test under a variety of circumstances means far more. I wouldn't think this way if it weren't for their past record on these things. Supporting the latest OpenGL and OpenCL standards out of the gate would be a major improvement, though.

post #476 of 1392
Thread Starter 
Yes though that doesn't answer my question. The difference between the 650M with GDDR5 memory and DDR3 memory is kind of substantial. At least 20%...

By the way, the laptop I saw in Best Buy was a Samsung Series 7 17.3" and had a 650M with 2 GB of GDDR5 memory.
post #477 of 1392

Haswell not coming out for another 5 months I heard from someone in the field.
 

post #478 of 1392
Originally Posted by marvfox View Post
Haswell not coming out for another 5 months I heard from someone in the field.

 

This was posted three days ago. And Wikipedia still lists May 27 and June 2 as release dates for performance mobile and the entire desktop line, respectively.

Originally posted by Relic

...those little naked weirdos are going to get me investigated.
Reply

post #479 of 1392
Thread Starter 
Quote:
Originally Posted by Tallest Skil View Post

This was posted three days ago. And Wikipedia still lists May 27 and June 2 as release dates for performance mobile and the entire desktop line, respectively.

That makes me even more psyched.
post #480 of 1392
Quote:
Originally Posted by Tallest Skil View Post

Quote:
Originally Posted by marvfox View Post

Haswell not coming out for another 5 months I heard from someone in the field.

This was posted three days ago. And Wikipedia still lists May 27 and June 2 as release dates for performance mobile and the entire desktop line, respectively.

It might have been speculation due to the USB 3 bug but Intel's just going to ship the chipsets with it and fix it later:

http://www.xbitlabs.com/news/mainboards/display/20130312004932_Intel_to_Fix_USB_3_0_Issues_of_8_Series_Chipset_in_New_Revision.html

There's some info that could affect the Mac Pro too:

http://www.maximumpc.com/article/news/will_intel_skip_over_ivy_bridge_e2013