
Geek Benchmarks for 2.3 Ghz i7 Mac Mini - Late 2012

post #1 of 35
Thread Starter 

I just ordered one of these bad boys and 8GB of RAM.

 

 
Model | Processor | Frequency (MHz) | Cores | Platform | User | Score
Macmini6,2 | Intel Core i7-3615QM | 2300 | 4 | Mac OS X 32-bit | | 11067
Macmini6,2 | Intel Core i7-3615QM | 2300 | 4 | Mac OS X 32-bit | | 10175
Macmini6,2 | Intel Core i7-3615QM | 2300 | 4 | Mac OS X 32-bit | | 10345
Macmini6,2 | Intel Core i7-3615QM | 2300 | 4 | Mac OS X 32-bit | | 10308
Macmini6,2 | Intel Core i7-3615QM | 2300 | 4 | Mac OS X 32-bit | | 10430
Macmini6,2 | Intel Core i7-3770K | 3501 | 4 | Mac OS X 32-bit | | 12043
Macmini6,2 | Intel Core i7-3615QM | 2300 | 4 | Mac OS X 32-bit | | 10201
Macmini6,2 | Intel Core i7-3615QM | 2300 | 4 | Mac OS X 32-bit | | 10461
Macmini6,2 | Intel Core i7-3615QM | 2300 | 4 | Mac OS X 64-bit | dewbage | 11970
Macmini6,2 | Intel Core i7-3615QM | 2300 | 4 | Mac OS X 64-bit | | 11851
Macmini6,2 | Intel Core i7-3615QM | 2300 | 4 | Mac OS X 64-bit | | 11953
Macmini6,2 | Intel Core i7-3615QM | 2300 | 4 | Mac OS X 32-bit | | 10899
Macmini6,2 | Intel Core i7-3615QM | 2300 | 4 | Mac OS X 32-bit | | 10769
Macmini6,2 | Intel Core i7-3615QM | 2300 | 4 | Mac OS X 32-bit | | 10706
Macmini6,2 | Intel Core i7-3615QM | 2300 | 4 | Mac OS X 32-bit | | 11009
Macmini Server (Late 2012) | Intel Core i7-3615QM | 2300 | 4 | Mac OS X 32-bit | thinkbook | 10772
Macmini6,2 | Intel Core i7-3615QM | 2300 | 4 | Mac OS X 32-bit | | 10912
Macmini6,2 | Intel Core i7-3615QM | 2300 | 4 | Mac OS X 64-bit | dewbage | 11583
Macmini6,2 | Intel Core i7-3615QM | 2300 | 4 | Mac OS X 64-bit | dewbage | 11817
Macmini6,2 | Intel Core i7-3615QM | 2300 | 4 | Mac OS X 64-bit | | 10484
Macmini6,2 | Intel Core i7-3615QM | 2300 | 4 | Mac OS X 32-bit | | 10773
Macmini6,2 | Intel Core i7-3615QM | 2300 | 4 | Mac OS X 64-bit | | 11697
Macmini6,2 | Intel Core i7-3615QM | 2300 | 4 | Mac OS X 32-bit | | 10765
Macmini6,2 | Intel Core i7-3615QM | 2300 | 4 | Mac OS X 64-bit | CrouchingDonkey | 11641
Macmini6,2 | Intel Core i5-3570K | 3392 | 4 | Mac OS X 32-bit | | 9278
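As a side note, the 64-bit runs on the stock chip average about 9% higher than the 32-bit ones. A quick check (my own arithmetic, just averaging the i7-3615QM rows above):

# Geekbench scores for the i7-3615QM rows from the table above.
scores_32 = [11067, 10175, 10345, 10308, 10430, 10201, 10461, 10899,
             10769, 10706, 11009, 10772, 10912, 10773, 10765]
scores_64 = [11970, 11851, 11953, 11583, 11817, 10484, 11697, 11641]

mean_32 = sum(scores_32) / len(scores_32)   # ~10640
mean_64 = sum(scores_64) / len(scores_64)   # ~11625
print(round(mean_32), round(mean_64))       # 64-bit is ~9% higher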
 
post #2 of 35
It seems someone got an i7-3770K in theirs. I thought it had the i7-3610QM, but it appears that Intel has made another model with a slightly higher-clocked IGP:

http://ark.intel.com/products/64899/Intel-Core-i7-3610QM-Processor-6M-Cache-up-to-3_30-GHz
http://ark.intel.com/products/64900/Intel-Core-i7-3615QM-Processor-6M-Cache-up-to-3_30-GHz

1.2GHz instead of 1.1GHz. That extra 9% won't make a huge difference to anything but every little helps. You can see the difference here:

http://www.notebookcheck.net/Intel-HD-Graphics-4000-Benchmarked.73567.0.html

Battlefield 3 goes from 24.5fps on low to a whopping 26.9fps. At least the CPU is making up for it.

The Cinebench score should be around 6.2, which puts it about even with the 8-core 2.8GHz 2008 Mac Pro ($2799) and the 4-core 2010 3.2GHz model ($2899) for just $799. Put in 16GB of RAM and an SSD and you've got a neat little workstation.
post #3 of 35
I guess it depends upon what you call a workstation. The lack of strong 3D and OpenCL support makes seeing the Mini as a workstation a bit difficult. Where those features aren't important, though, it should make a nice little Xcode workstation.

As nice as the updated models are, I still see them as a regression simply due to that missing GPU.
Quote:
Originally Posted by Marvin View Post

It seems someone got an i7-3770K in theirs. I thought it had the i7-3610QM, but it appears that Intel has made another model with a slightly higher-clocked IGP:
http://ark.intel.com/products/64899/Intel-Core-i7-3610QM-Processor-6M-Cache-up-to-3_30-GHz
http://ark.intel.com/products/64900/Intel-Core-i7-3615QM-Processor-6M-Cache-up-to-3_30-GHz
1.2GHz instead of 1.1GHz. That extra 9% won't make a huge difference to anything but every little helps. You can see the difference here:
http://www.notebookcheck.net/Intel-HD-Graphics-4000-Benchmarked.73567.0.html
Battlefield 3 goes from 24.5fps on low to a whopping 26.9fps. At least the CPU is making up for it.
The Cinebench score should be around 6.2, which puts it about even with the 8-core 2.8GHz 2008 Mac Pro ($2799) and the 4-core 2010 3.2GHz model ($2899) for just $799. Put in 16GB of RAM and an SSD and you've got a neat little workstation.
post #4 of 35
Quote:
Originally Posted by wizard69 View Post

I guess it depends upon what you call a workstation. The lack of strong 3D and OpenCL support makes seeing the Mini as a workstation a bit difficult.

OpenCL on the GPU is never going to take off if Apple do stupid things with it like break support during an OS update, disable it based on video memory and don't keep it up to date:

http://forum.netkas.org/index.php?topic=2250.0
http://netkas.org/?p=1161
http://forums.macrumors.com/showthread.php?t=1422871

If they're going to do things like that, they need to put someone more responsible in control of the graphics drivers like NVidia/AMD. Then we might have a chance of also getting OpenGL 4 support within a reasonable timeframe compared to other platforms (2.5 years and counting).

Typically the low-end GPUs match the CPU in OpenCL performance so doubling the CPU performance should leave OpenCL the same vs having a dedicated GPU. Still slow though.
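Both the CPU and the HD 4000 are exposed as OpenCL devices, by the way, so you can compare what each one reports directly. A minimal sketch using the pyopencl bindings (my own example, nothing from Apple's docs):

import pyopencl as cl

# List every OpenCL device the OS exposes; on these Minis that should be
# the Core i7 as a CPU device and the HD 4000 as a GPU device.
for platform in cl.get_platforms():
    for device in platform.get_devices():
        kind = "GPU" if device.type & cl.device_type.GPU else "CPU"
        print("%s: %s" % (kind, device.name))
        print("  compute units: %d" % device.max_compute_units)
        print("  global memory: %d MB" % (device.global_mem_size // 2**20))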

In terms of standard GPU usage, the HD4000 got level 2 support in CS6:

http://help.adobe.com/en_US/aftereffects/cs/using/WS3878526689cb91655866c1103a4f2dff7-79f4a.html

It's supported by Apple in FCPX/Motion so it should be ok for most things.
post #5 of 35
Huh? I see the i7-3770K listed next to the Mac mini? Say what?
post #6 of 35
Quote:
Originally Posted by Winter View Post

Huh? I see the i7-3770K listed next to the Mac mini? Say what?

That sort of thing happens with hackintoshes - OS X gets modified to run on custom built PCs.
post #7 of 35
Quote:
Originally Posted by Marvin View Post

That sort of thing happens with hackintoshes - OS X gets modified to run on custom built PCs.

Oh duh! I saw Mac mini 6,2 and then i7-3770K and had to do a double take.
post #8 of 35
Quote:
Originally Posted by Marvin View Post

OpenCL on the GPU is never going to take off if Apple do stupid things with it like break support during an OS update, disable it based on video memory and don't keep it up to date:
http://forum.netkas.org/index.php?topic=2250.0
http://netkas.org/?p=1161
http://forums.macrumors.com/showthread.php?t=1422871
Apple isn't the first developer to have bugs in their app, drivers or what have you. As to keeping up to date, I'm not too certain who is responsible for what at Apple. Beyond that there is the issue of hardware support.
Quote:
If they're going to do things like that, they need to put someone more responsible in control of the graphics drivers like NVidia/AMD. Then we might have a chance of also getting OpenGL 4 support within a reasonable timeframe compared to other platforms (2.5 years and counting).
Apple's issues with drivers started well before OpenCL even existed. Yes, they need to do better, but one shouldn't dismiss OpenCL because Apple is a little slow here or there.
Quote:
Typically the low-end GPUs match the CPU in OpenCL performance so doubling the CPU performance should leave OpenCL the same vs having a dedicated GPU. Still slow though.
The effectiveness of a GPU for computation is highly variable but even if it is at parity with the CPU it is still an alternative execution path that runs in parallel with the CPU.
Quote:
In terms of standard GPU usage, the HD4000 got level 2 support in CS6:
http://help.adobe.com/en_US/aftereffects/cs/using/WS3878526689cb91655866c1103a4f2dff7-79f4a.html
It's supported by Apple in FCPX/Motion so it should be ok for most things.
post #9 of 35
Quote:
Originally Posted by Marvin View Post


OpenCL on the GPU is never going to take off if Apple do stupid things with it like break support during an OS update, disable it based on video memory and don't keep it up to date:
http://forum.netkas.org/index.php?topic=2250.0
http://netkas.org/?p=1161
http://forums.macrumors.com/showthread.php?t=1422871
If they're going to do things like that, they need to put someone more responsible in control of the graphics drivers like NVidia/AMD. Then we might have a chance of also getting OpenGL 4 support within a reasonable timeframe compared to other platforms (2.5 years and counting).
Typically the low-end GPUs match the CPU in OpenCL performance so doubling the CPU performance should leave OpenCL the same vs having a dedicated GPU. Still slow though.
In terms of standard GPU usage, the HD4000 got level 2 support in CS6:
http://help.adobe.com/en_US/aftereffects/cs/using/WS3878526689cb91655866c1103a4f2dff7-79f4a.html
It's supported by Apple in FCPX/Motion so it should be ok for most things.


I've seen a lot of software developers announcing OpenCL support. NVidia supports it on their cards, so available hardware shouldn't be a big deal. On OS updates, this is one of those reasons I tell anyone who uses a computer for work to be very conservative on applying updates or retain a backup.

post #10 of 35
Industry adoption of OpenCL has been very significant; it is probably as successful as LLVM/Clang. Maybe a little lower key, as companies often don't tout their use of OpenCL nor their involvement in its development the way they do with LLVM.

As to updates lately I take the attitude of let the chips fall where they may. If no one updates then no bugs are ever found. Of course I can get away with that right now.

I've been reading about the massive reorganization at Apple which has me wondering how OS development will be refocused. Frankly I think Apple has gone off the quality track with the yearly Mac OS updates. I'd much rather see them go back to longer release cycles and refocus on quality. Some of these OpenCL issues raised here just strike me as the result of rush jobs, as such I don't see evil intent.
Quote:
Originally Posted by hmm View Post


I've seen a lot of software developers announcing OpenCL support. NVidia supports it on their cards, so available hardware shouldn't be a big deal. On OS updates, this is one of those reasons I tell anyone who uses a computer for work to be very conservative on applying updates or retain a backup.
post #11 of 35
Quote:
Originally Posted by wizard69 View Post

Industry adoption of OpenCL has been very significant; it is probably as successful as LLVM/Clang. Maybe a little lower key, as companies often don't tout their use of OpenCL nor their involvement in its development the way they do with LLVM.
As to updates lately I take the attitude of let the chips fall where they may. If no one updates then no bugs are ever found. Of course I can get away with that right now.
I've been reading about the massive reorganization at Apple which has me wondering how OS development will be refocused. Frankly I think Apple has gone off the quality track with the yearly Mac OS updates. I'd much rather see them go back to longer release cycles and refocus on quality. Some of these OpenCL issues raised here just strike me as the result of rush jobs, as such I don't see evil intent.

I get that. I tell people if they use it for work to ensure they have a backup prior to updating. This way if anything crucial breaks, they can revert. The alternative is just watching for any crippling bugs and bug fixes prior to applying updates. If you use something for work, the top priority is that it continues working, and either of those methods address that by minimizing the potential for significant downtime.

 

Apple seems to have consolidated upper management. I'm still weirded out by the retail thing. I figured they knew who they were hiring. He would have presented plans at some point, as I doubt the role of a new executive is completely autonomous. I wouldn't be bothered by annual OS updates if they were entirely stable. That is always a hugely annoying issue. I'm running Lion due to a couple of application issues under Mountain Lion (grumble), although I should check on that. Snow Leopard still seemed like the best to date overall once it stabilized.


Edited by hmm - 10/30/12 at 1:11am
post #12 of 35
Quote:
Originally Posted by hmm 
I get that. I tell people if they use it for work to ensure they have a backup prior to updating. This way if anything crucial breaks, they can revert.

When it comes to important things like graphics drivers, I'd like to see them put multiple versions in the OS that can be switched dynamically via an interface. They can bundle OpenGL 4 drivers but only have OpenGL 3 enabled by default. If you want to use OpenGL 4, switch the driver. The drivers should be developed in a way that when they crash, it doesn't take out the OS too.
post #13 of 35
Quote:
Originally Posted by Marvin View Post


When it comes to important things like graphics drivers, I'd like to see them put multiple versions in the OS that can be switched dynamically via an interface. They can bundle OpenGL 4 drivers but only have OpenGL 3 enabled by default. If you want to use OpenGL 4, switch the driver. The drivers should be developed in a way that when they crash, it doesn't take out the OS too.

I always wished they'd implement the potential for 10-bit drivers, but according to Adobe, Apple stated they have no interest in doing so. It's one of the areas where I'm somewhat jealous of Windows. I'm not sure how such graphics driver options would work out. I just don't know enough about the details of how they're loaded and accessed to give you an informed response there.

post #14 of 35
Quote:
Originally Posted by hmm 
I always wished they'd implement the potential for 10-bit drivers, but according to Adobe, Apple stated they have no interest in doing so. It's one of the areas where I'm somewhat jealous of Windows.

There are other options for 10-bit like:

http://www.blackmagicdesign.com/products/ultrastudio3d/software

But it would only work in certain apps. It would be quite nice to see a shift like we have seen to 64-bit to future-proof things. 12 bits per channel would be better under the assumption that 4K is going to be the standard but only if it's not too resource-intensive. I don't like how we seem to have settled on 8bpc. It's not high enough.

On the subject of the Mac Mini performance, Macworld has benchmarked the new ones:

http://www.macworld.com/article/2013250/lab-tested-2012-mac-mini-gets-a-nice-speed-boost.html

I don't know why they always measure Cinebench by time. It gives you a score so you can compare it with other machines, yet they don't publish the score. Anyway, it shows that for raw rendering, the new middle Mini is double the speed of the old one, which was expected. It's not quite as much for video encoding but a good boost all round.

The GPU scores are a bit higher for the quad i7 vs the entry model; Intel clocks the quad-i7 GPU 9% higher. It actually benchmarks close to the 6630M, and these tests would be for productive real-time graphics, but for gaming the 6630M was still faster. Casual-moderate gaming is fine though:

http://www.youtube.com/watch?v=wZQgVlk7Bvw
post #15 of 35
Quote:
Originally Posted by Marvin View Post


There are other options for 10-bit like:
http://www.blackmagicdesign.com/products/ultrastudio3d/software
But it would only work in certain apps. It would be quite nice to see a shift like we have seen to 64-bit to future-proof things. 12 bits per channel would be better under the assumption that 4K is going to be the standard but only if it's not too resource-intensive. I don't like how we seem to have settled on 8bpc. It's not high enough.
On the subject of the Mac Mini performance, Macworld has benchmarked the new ones:
http://www.macworld.com/article/2013250/lab-tested-2012-mac-mini-gets-a-nice-speed-boost.html
I don't know why they always measure Cinebench by time. It gives you a score so you can compare it with other machines, yet they don't publish the score. Anyway, it shows that for raw rendering, the new middle Mini is double the speed of the old one, which was expected. It's not quite as much for video encoding but a good boost all round.
The GPU scores are a bit higher for the quad i7 vs the entry model; Intel clocks the quad-i7 GPU 9% higher. It actually benchmarks close to the 6630M, and these tests would be for productive real-time graphics, but for gaming the 6630M was still faster. Casual-moderate gaming is fine though:
http://www.youtube.com/watch?v=wZQgVlk7Bvw


10-bit DisplayPort has been around for some time; standards for it are contained within the DisplayPort 1.2 specification. This has nothing to do with resolution. It has to do with the number of values that can be fed to the gpu and display per channel for each pixel. 4K is something entirely different, although it would also be bandwidth intensive. It's supported under Windows assuming the use of a compatible display (which I own). The tendency toward wider gamut displays is what really makes such features relevant, as it reduces the need for dithering. Even then they all dither to some degree.
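To put rough numbers on that (my arithmetic, not anything from the spec): each extra bit per channel doubles the addressable values, so going from 8 to 10 bits slices the same gamut four times as finely on each channel.

# Addressable values per channel and total RGB combinations by bit depth.
for bpc in (8, 10, 12):
    levels = 2 ** bpc        # values per channel
    colors = levels ** 3     # total RGB combinations
    print("%2d bpc: %4d levels/channel, %d colors" % (bpc, levels, colors))

# 8 bpc:   256 levels/channel, 16777216 colors
# 10 bpc: 1024 levels/channel, 1073741824 colors
# 12 bpc: 4096 levels/channel, 68719476736 colors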

 

Your link doesn't suggest gains quite as good as the ones you're suggesting, but they're still very significant. I felt the 6630M was really limiting outside of games. RAM available to the gpu isn't really a common thing to stress over with games, but it can be critical in other areas where applications simply will not run tasks on the gpu below a certain threshold. That's why I never suggested paying for the discrete option last year. Hopefully Intel's progress with gpus will continue. Out of curiosity, does this make the cut to replace your current mini? I recall you planned it that way a long time ago, but I have no idea how well your current one is holding up.

post #16 of 35
Quote:
Originally Posted by hmm 
This has nothing to do with resolution. It has to do with the number of values that can be fed to the gpu and display per channel for each pixel. 4K is something entirely different, although it would also be bandwidth intensive.

It helps avoid banding on higher resolution displays, which wouldn't be as noticeable on lower resolution displays:

http://www.digitalhome.ca/forum/showthread.php?t=156591&page=4

"After seeing native 4k presentation on my 96" 21:9 screen viewed from 10' away, I can truly say that I welcome 4K wholeheartedly, but the colour banding and motion artifacts remain extremely visible"

The colour range advantages are important for HDR photography and other high colour range media as you were saying. That'll likely be the next marketing term: HDR.
Quote:
Originally Posted by hmm 
Your link doesn't suggest gains quite as good as the ones you're suggesting

In what way? Cinebench is double vs the old model, GPU in the quad i7 scores more than 9% vs the base model.
Quote:
Originally Posted by hmm 
Out of curiosity, does this make the cut to replace your current mini? I recall you planned it that way a long time ago, but I have no idea how well your current one is holding up.

If it had a 640M and a quad-i7, it would have been an instant buy. I played Battlefield 3 on my current one no problem and I reckon it would just be too slow with the HD4000.

BF3 on the HD4000 looks like this (choppy on low):
http://www.youtube.com/watch?feature=player_detailpage&v=hNhTyDs0xrk#t=554s
On the 6630M, it looks like this (steady 30FPS+):
http://www.youtube.com/watch?v=d1AkRT80SXc

I'll probably just wait out Haswell. I have a feeling it will arrive around June so not a long wait. I don't want the headache of an OS upgrade right now either. I'm getting concerned that Apple doesn't seem to want to bring feature parity to QuickTime X and seems happy to break things along the way, so I'm overly cautious about adopting the new OSs.
post #17 of 35
Quote:
Originally Posted by Marvin View Post


It helps avoid banding on higher resolution displays, which wouldn't be as noticeable on lower resolution displays:
http://www.digitalhome.ca/forum/showthread.php?t=156591&page=4
"After seeing native 4k presentation on my 96" 21:9 screen viewed from 10' away, I can truly say that I welcome 4K wholeheartedly, but the colour banding and motion artifacts remain extremely visible"
The colour range advantages are important for HDR photography and other high colour range media as you were saying. That'll likely be the next marketing term: HDR.
 

I haven't viewed 4k displays, so I can't really comment there. HDR is interesting, although it's often used in really bad ways. Given that I have a reasonable background in that area, I've been shopping for a decent pano kit. The old one won't support a Canon 1 series body, although I could switch to something lighter. Spherical HDR panos are very useful for producing realistic reflections in renders, especially if you want to place something at a specific location. It's just annoying finding something with the locks/clamps appropriate to support a few pounds of camera.

 

Anyway, back to color gamut. The thing is that a display has a certain matrix of values that can be addressed. It's not one solid shape as it's portrayed if you bring up a gamut preview in ColorSync; it's better to think of the device behavior as a point cloud. Given that a larger volumetric gamut means these points are spread thinner, it's natural to migrate toward the use of more points to at least partially alleviate the need for dithering. Buying a 10-bit display currently isn't an all-encompassing solution to the dithering thing. People have noticed it in DreamColor displays and other super expensive ones in the past few years. It's just that it does need to be adopted if the trend is toward expanding gamuts. Right now they basically cap out at Adobe RGB. This has been possible for at least a decade, but they're much more common now. Apple has kind of ignored this and stuck within sRGB. It's their choice, and it may have been partially motivated by their focus on Thunderbolt, as it doesn't fully support DisplayPort 1.2.

 

 

Quote:
In what way? Cinebench is double vs the old model, GPU in the quad i7 scores more than 9% vs the base model.

I just looked at the links again. I see what you were comparing now. The Cinebench CPU test on the mid-range model came up considerably, as it was bumped to a quad i7. The others aren't showing quite that advantage, although the Mathematica gains are quite impressive.

Quote:

 

2012 Mac mini: Individual application scores

Model | iTunes Encode (seconds) | Cinebench CPU (seconds) | VMware PCMark | MathematicaMark 8
Mac mini/2.3GHz Core i7 (Late 2012) | 100.7 | 73 | 1347.7 | 1.82
Mac mini/2.5GHz Core i5 (Late 2012) | 113.7 | 148 | 1052.7 | 1.06
Mac mini/2.5GHz Core i5 (Mid 2011) | 115 | 154 | 1045.7 | 1.02
Mac mini/2.3GHz Core i5 (Mid 2011) | 123.7 | 186.3 | 855 | 1
Mac mini/2.4GHz Core 2 Duo (Mid 2010) | 161.7 | 309.7 | 707.7 | 0.7
(Lower is better for the two timed tests; higher is better for the two scores.)

 

 

 

Quote:

 

If it had a 640M and a quad-i7, it would have been an instant buy. I played Battlefield 3 on my current one no problem and I reckon it would just be too slow with the HD4000.
BF3 on the HD4000 looks like this (choppy on low):
http://www.youtube.com/watch?feature=player_detailpage&v=hNhTyDs0xrk#t=554s
On the 6630M, it looks like this (steady 30FPS+):
http://www.youtube.com/watch?v=d1AkRT80SXc
I'll probably just wait out Haswell. I have a feeling it will arrive around June so not a long wait. I don't want the headache of an OS upgrade right now either. I'm getting concerned that Apple doesn't seem to want to bring feature parity to QuickTime X and seems happy to break things along the way, so I'm overly cautious about adopting the new OSs.

That makes sense. They did reallocate costs somewhat. I thought the cpu pricing was closer together, but I just looked it up. I checked Intel's site and Wikipedia, specifically because they list launch pricing as opposed to current pricing, and they've always been accurate on this.

 

 

http://ark.intel.com/products/52229

http://ark.intel.com/products/64900/Intel-Core-i7-3615QM-Processor-6M-Cache-up-to-3_30-GHz

 

Recommended customer pricing on the former (the 2011 chip) was $150 cheaper. I have no idea what the cost of a 640m looks like, but these listings are the same as they were at launch. It would have been nice. This is a downside to using mobile innards. Components with equivalent performance come with a pricing premium. If 77W was doable, there are a couple of decent upper-range i5s around $225, which lines up with the cpu cost they were using last year. Unfortunately that wattage is still too high. In my opinion they made it too restrictive in favor of size. Given that Apple maintains a very lean lineup, it would have been nice to see a slightly better range of use cases.

post #18 of 35
Quote:
Originally Posted by hmm 
Spherical HDR panos are very useful for producing realistic reflections in renders, especially if you want to place something at a specific location. It's just annoying finding something with the locks/clamps appropriate to support a few pounds of camera.

Although not as high quality due to the lenses, the iPhone's panorama feature with HDR setting could be good for this if it was combined with saving the source data:

http://www.popphoto.com/2012/04/not-qute-raw-shooting-iphone-app-645-pro-now-available
Quote:
Originally Posted by hmm 
I have no idea what the cost of a 640m looks like, but these listings are the same as they were at launch. It would have been nice. This is a downside to using mobile innards.

PC manufacturers can put a quad-i7 and 640M in Ultrabooks for $750:

http://www.newegg.com/Product/Product.aspx?Item=N82E16834215541

It should have been possible in an $800 Mini. Like I say, the integrated GPU will make things simpler for them going forward and it should be ok from next year onwards but I don't like it when they downgrade some components in a new machine relative to the old one.
post #19 of 35
Quote:
Originally Posted by Marvin View Post


Although not as high quality due to the lenses, the iPhone's panorama feature with HDR setting could be good for this if it was combined with saving the source data:
http://www.popphoto.com/2012/04/not-qute-raw-shooting-iphone-app-645-pro-now-available
 

If that was all I had along at the time, I'd use it. You may be misinterpreting how I'm using these. I need a considerable amount of resolution and detail. Otherwise things tend to look blah. When I said rig, the point is to line everything up as precisely as possible to make stitching easier, as you have to stitch it, seamlessly remove tripod legs and any undesirable elements from the comp on an hdr image. You want it to be seamless, and this tends to be easier if you have really clean data, lens grid to correct distortion where possible, etc. I like as much control as possible, and I plan out time, check weather in the area, check Google Maps, and plan the exact time of day for lighting. If I need dawn light, I show up while it's still dark carrying a Maglite. It's difficult to get everything just right, and getting a really good axis of rotation with a sturdy head would make that a lot easier to line up. Just removing objects from hdr is a lot more annoying than on typical images if you're a perfectionist. I use a number of test curves and things to ensure it is absolutely seamless.

 

 

Quote:
PC manufacturers can put a quad-i7 and 640M in Ultrabooks for $750:
http://www.newegg.com/Product/Product.aspx?Item=N82E16834215541
It should have been possible in an $800 Mini. Like I say, the integrated GPU will make things simpler for them going forward and it should be ok from next year onwards but I don't like it when they downgrade some components in a new machine relative to the old one.

 

I didn't know that, but I previously suggested this had to do with Apple's desired margins. Note that the cpu price referenced against intel's recommended pricing was offset by $150. I don't blame you for being hesitant on it. I'm just not surprised they did this. Hopefully the Haswell version will meet your expectations. As of right now predictions on that are all over the place, so I'm not even going to comment. There are suggestions that Broadwell will be more impressive simply because of the list of changes being made, but I never know with Intel. When it's this far out, they always hype it.

post #20 of 35
Quote:
Originally Posted by hmm 
I need a considerable amount of resolution and detail. Otherwise things tend to look blah. When I said rig, the point is to line everything up as precisely as possible to make stitching easier, as you have to stitch it, seamlessly remove tripod legs and any undesirable elements from the comp on an hdr image.

Are you quick enough to capture the sea seamlessly?



The resolution you get is about 10784 x 2376 pixels (up to 28MPixels or something). The quality will certainly be better from a proper camera but for lighting and reflections, it should be usable for some output. Maybe not print resolution though.
Quote:
Originally Posted by hmm 
There are suggestions that Broadwell will be more impressive simply because of the list of changes being made, but I never know with Intel. When it's this far out, they always hype it.

Intel will support OpenGL 4 and OpenCL 1.2 from the outset with Haswell. The GPU performance will hopefully jump at least 80%.

I don't see Broadwell improving performance as much because they are integrating things into the CPU:

http://news.softpedia.com/news/Intel-Broadwell-CPUs-to-Make-Appearance-in-2014-237178.shtml

I think a lot of people will be content with a $799 Haswell Mini especially if the Fusion drive price is lower by then.
post #21 of 35
Quote:
Originally Posted by Marvin View Post


Are you quick enough to capture the sea seamlessly?

The resolution you get is about 10784 x 2376 pixels (up to 28MPixels or something). The quality will certainly be better from a proper camera but for lighting and reflections, it should be usable for some output. Maybe not print resolution though.
 

 

It's difficult to do so, but some of it can be corrected later. It's just tedious, and most people make a mess of stuff like that. As for resolution, that would still be pretty limiting. It's helpful to start with that extra resolution and then downsample at the end if necessary. Starting with something that is a little over 2K high is really limiting. You'd also need a bit more ground. It's still awesome that you can capture that on your phone.

 

Edit: What I meant was that I'm still using "pro" level cameras (in terms of their target market), just not renting medium format digital, and not upgrading that often at this point. I've been testing out some things I'd like to implement with 10k+ renders. This means I need a lot of image data to keep it smooth. The other problem I've been having is that the renderer I've been using likes to sample reflections at low quality to speed up render times. This angers Morbo, so I may have to see if I can write something to modify this behavior. Combine that with being limited to an awkward method of irradiance caching to make use of IBL nodes, and it's difficult to really get around the CG feel of it, especially with the poor interaction of bump maps. I've been looking at something like vray because of this. Their GI is really nice, but their shader documentation is pretty bad, meaning I'd have a lot of testing to determine certain situational behavior. It takes some tweaks in post regardless, but I like the best starting quality possible as I like things to look as photographic as possible. This means extremely precise models without distorted faces, a lot of shader testing, bump maps for micro-texture, and good IBL interaction with the use of bump maps. I just wanted to add some context, and I already know you're familiar with all the terms used as you've referenced this stuff before.

 

 

Quote:
Intel will support OpenGL 4 and OpenCL 1.2 from the outset with Haswell. The GPU performance will hopefully jump at least 80%.
I don't see Broadwell improving performance as much because they are integrating things into the CPU:
http://news.softpedia.com/news/Intel-Broadwell-CPUs-to-Make-Appearance-in-2014-237178.shtml
I think a lot of people will be content with a $799 Haswell Mini especially if the Fusion drive price is lower by then.

Anandtech seemed unsure about the gpu improvements there.

 

http://www.anandtech.com/show/6355/intels-haswell-architecture/12

Quote:

Haswell's GPU

Although Intel provided a good amount of detail on the CPU enhancements to Haswell, the graphics discussion at IDF was fairly limited. That being said, there's still some to talk about here.

Haswell builds on the same fundamental GPU architecture we saw in Ivy Bridge. We won't see a dramatic redesign/re-plumbing of the graphics hardware until Broadwell in 2014 (that one is going to be a big one).

 

I read the article a few weeks ago. Regardless of that, it should be a boost in functionality for the Mini. OpenCL 1.2 and OpenGL 4 are quite significant. What I find interesting is the pace of iPad improvements at the moment.


Edited by hmm - 11/2/12 at 7:27pm
post #22 of 35
Quote:
Originally Posted by Marvin View Post

Are you quick enough to capture the sea seamlessly?

The resolution you get is about 10784 x 2376 pixels (up to 28MPixels or something). The quality will certainly be better from a proper camera but for lighting and reflections, it should be usable for some output. Maybe not print resolution though.
You guys keep posting pics like that and I might upgrade my phone sooner than planned.
Quote:
Intel will support OpenGL 4 and OpenCL 1.2 from the outset with Haswell. The GPU performance will hopefully jump at least 80%.
There are more stories, rumors and guesses floating around than one knows what to do with. One thing is obvious: Intel has gone silent on the GPU and is not discussing it publicly. I take this as a sign of a major upgrade in performance.
Quote:
I don't see Broadwell improving performance as much because they are integrating things into the CPU:
http://news.softpedia.com/news/Intel-Broadwell-CPUs-to-Make-Appearance-in-2014-237178.shtml
I think a lot of people will be content with a $799 Haswell Mini especially if the Fusion drive price is lower by then.

Haswell may very well be the turning point for Intel integrated graphics. It may be so good that the base model Mini might look appealing. Depending upon what rumor mill you want to believe, the GT3 variant could come with VRAM built into the SoC package, giving performance similar to midrange video cards.

I really think people underestimate Haswell.
post #23 of 35
Quote:
Originally Posted by hmm View Post

It's difficult to do so, but some of it can be corrected later. It's just tedious, and most people make a mess of stuff like that. As for resolution, that would still be pretty limiting. It's helpful to start with that extra resolution and then downsample at the end if necessary. Starting with something that is a little over 2K high is really limiting. You'd also need a bit more ground. It's still awesome that you can capture that on your phone.
This is true. However, I think you miss the point: this tech allows the average iPhone user to take some really nice pics that would otherwise look like crap from a cell phone camera.
Quote:
Edit: What I meant was that I'm still using "pro" level cameras (in terms of their target market), just not renting medium format digital, and not upgrading that often at this point. I've been testing out some things I'd like to implement with 10k+ renders. This means I need a lot of image data to keep it smooth. The other problem I've been having is that the renderer I've been using likes to sample reflections at low quality to speed up render times. This angers Morbo, so I may have to see if I can write something to modify this behavior. Combine that with being limited to an awkward method of irradiance caching to make use of IBL nodes, and it's difficult to really get around the CG feel of it, especially with the poor interaction of bump maps.
Which proves my point above. You are looking at this from the standpoint of a professional, while iPhone users employing this feature are anything but professionals. All they are looking for is an aspect ratio that can't be had with normal cell phone optics.
Quote:
I've been looking at something like vray because of this. Their GI is really nice, but their shader documentation is pretty bad, meaning I'd have a lot of testing to determine certain situational behavior. It takes some tweaks in post regardless, but I like the best starting quality possible as I like things to look as photographic as possible. This means extremely precise models without distorted faces, a lot of shader testing, bump maps for micro-texture, and good IBL interaction with the use of bump maps. I just wanted to add some context, and I already know you're familiar with all the terms used as you've referenced this stuff before.


Anandtech seemed unsure about the gpu improvements there.
I have no respect for Anandtech as he seems hell-bent on promoting Intel. In any event, whatever Intel is up to here, they are keeping it very close to their chest and have virtually eliminated their openness with respect to the GPUs in future products.
Quote:

I read the article a few weeks ago. Regardless of that, it should be a boost in functionality for the Mini. OpenCL 1.2 and OpenGL 4 are quite significant. What I find interesting is the pace of iPad improvements at the moment.

The iPad is amazing! It is almost a perfect storm of hardware technologies all arriving at the right time.

Haswell does have the potential of turning the Mini into a very desirable machine. IF, that is, Apple implements the right chips and actually delivers the $799 machine as a performance machine.
post #24 of 35
Quote:
Originally Posted by hmm 
Anandtech seemed unsure about the gpu improvements there.

http://www.anandtech.com/show/6355/intels-haswell-architecture/12
Quote:

Haswell's GPU



Although Intel provided a good amount of detail on the CPU enhancements to Haswell, the graphics discussion at IDF was fairly limited. That being said, there's still some to talk about here.
Haswell builds on the same fundamental GPU architecture we saw in Ivy Bridge. We won't see a dramatic redesign/re-plumbing of the graphics hardware until Broadwell in 2014 (that one is going to be a big one).

The Intel demo showed a decent improvement:



They are saying that's a mobile reference platform so not just a high TDP part. Running at 1080p is enough to suggest a big improvement. Anandtech suggests Ultrabooks will see a smaller increase but they might be getting lower GT2 parts. The quad-core chips with the 45W TDP (MBP and upper Minis at least) should get GT3.
Quote:
Originally Posted by wizard69 
Haswell may very well be the turning point for Intel integrated graphics. It may be so good that the base model Mini might look appealing. Depending upon what rumor mill you want to believe, the GT3 variant could come with VRAM built into the SoC package, giving performance similar to midrange video cards.

Yes, these articles suggest 64MB of on-die memory that will act like the fast memory on dedicated GPUs:

http://semiaccurate.com/2012/04/02/haswells-gpu-prowess-is-due-to-crystalwell/
http://news.softpedia.com/news/Intel-s-Crystalwell-Technology-Uses-a-Wide-Memory-BUS-291277.shtml

While it's not a lot of memory, every improvement is welcome.
post #25 of 35
Quote:
Originally Posted by Marvin View Post


Although not as high quality due to the lenses, the iPhone's panorama feature with HDR setting could be good for this if it was combined with saving the source data:
http://www.popphoto.com/2012/04/not-qute-raw-shooting-iphone-app-645-pro-now-available
PC manufacturers can put a quad-i7 and 640M in Ultrabooks for $750:
http://www.newegg.com/Product/Product.aspx?Item=N82E16834215541
It should have been possible in an $800 Mini. Like I say, the integrated GPU will make things simpler for them going forward and it should be ok from next year onwards but I don't like it when they downgrade some components in a new machine relative to the old one.


"Yes, but..."

 

You should remember that the PC manufacturers operate at much lower margins. If one were to compare "build cost", the way some of the teardown sites do, most times Apple's pricing will be higher due to their margins. One notable exception, however, appears to be the Surface; I saw an article indicating that it has a 53% margin. By the way, the article was critical of that and posited that it would not meet with public approval.

 

I think enough people would be willing to pay the upcharge for the discrete processor to make it worthwhile for Apple to have it as either a BTO option or, better yet, a ready-to-go model people could pick up at the Apple Store and "take it home today". By doing so, Apple could increase sales, which would actually reduce their costs enough to allow slightly more "realistic" pricing.

 

Anyway, I agree with your point that there is no reason Apple cannot do this somewhere within the range of the current price points. Let's hope they have a change of heart about this.

post #26 of 35
Quote:
Originally Posted by RBR 
there is no reason Apple cannot do this somewhere within the range of the current price points. Let's hope they have a change of heart about this.

At the very least, they can make the $999 model have a dedicated GPU (it could have had the same 512MB 640M in the entry iMac) instead of being a server. Any model can be configured as a server really with BTO options. You might only want the base model with a Fusion drive as a server. It should come with an App Store redeem code so you can install OS X Server on future models.
post #27 of 35
Quote:
Originally Posted by Marvin View Post


At the very least, they can make the $999 model have a dedicated GPU (it could have had the same 512MB 640M in the entry iMac) instead of being a server. Any model can be configured as a server really with BTO options. You might only want the base model with a Fusion drive as a server. It should come with an App Store redeem code so you can install OS X Server on future models.

 

Amen, brother. From your lips to Tim's ears.
post #28 of 35
I second Marvin's suggestion.
post #29 of 35
Quote:
Originally Posted by Marvin View Post


At the very least, they can make the $999 model have a dedicated GPU (it could have had the same 512MB 640M in the entry iMac) instead of being a server. Any model can be configured as a server really with BTO options. You might only want the base model with a Fusion drive as a server. It should come with an App Store redeem code so you can install OS X Server on future models.


The entry imac uses a much less expensive cpu. It's not so much a question of what they could do. It's more what they can do within their desired margins. Apple has never really held gpu performance as a top priority anyway. They like to offer a couple performance options, but even then choosing mobile options makes it much more expensive. If you want that much power within a 100W tdp and small form factor, it tends to be expensive.

post #30 of 35
Quote:
Originally Posted by hmm 
The entry imac uses a much less expensive cpu. It's not so much a question of what they could do. It's more what they can do within their desired margins.

It's hard to compare price to the iMac as it's an entirely different machine. The old Mini had a $200 premium for a dedicated GPU and a faster CPU. They should be able to add a 640M to the $799 Mini for a $200 premium. Right now, for that $200, they give you 2x 1TB 7200rpm drives and OS X Server. I think more people would go for a GPU than hard drives and software they can get from the App Store.

The Acer Ultrabook above has the same spec for $799 and that has a display. If we assume Acer are making 15% margins, cost to build is about $695. Take off $50 for a 17" screen, take off $25 for the DVD drive. Add $100 for a metal chassis and some money for things like Thunderbolt and the OS license to get say $799, add 25% margins and you get $999. Apple typically operates at 25% profit margins.
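Spelled out, the back-of-envelope looks like this (every figure is an assumption from above, not a sourced component cost):

acer_price = 799.0
acer_build_cost = acer_price / 1.15   # ~695 assuming Acer's 15% margin
mini_build_cost = (acer_build_cost
                   - 50               # drop the 17" screen
                   - 25               # drop the DVD drive
                   + 100)             # metal chassis; Thunderbolt and the
                                      # OS license push it to roughly $799
mini_price = 799 * 1.25               # ~$999 at Apple's typical 25% margin
print(round(acer_build_cost), round(mini_build_cost), round(mini_price))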

If Haswell delivers on the GPU, I won't be too bothered though because that $799 Mini would be pretty good and it can't get any worse from that point on. Intel graphics are as low as you can go really. I just don't see the point in offering a Server model. Something around the 1500 mark in the following chart would be a comfortable GPU:

http://www.videocardbenchmark.net/high_end_gpus.html

The HD 4000 is sitting at 570. I'm expecting Haswell to jump 80% to sit around 1000 - this has to be the case if Intel's 1080p demo isn't completely phoney. The 650M is around 1300 and the 640M is about 70% of that so Haswell should be either the same or slightly faster than the 640M next year. It can easily arrive in June. Coupled with a further 10% increase in the CPU and the $799 Mini will be a good buy. If Intel continues like this and offers a 10% increase in CPU, 50% increase in GPU for Broadwell, it should get to the 1500 mark sometime in 2014. Broadwell might not have such a big jump as Haswell as they are making it an SoC but that can bring the price down.
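In numbers, that projection looks like this (the 570 and 1300 scores come from the chart; the 80% and 50% uplifts are my guesses as stated):

hd4000    = 570               # PassMark score from the chart above
gt650m    = 1300
gt640m    = 0.70 * gt650m     # "about 70% of" the 650M -> ~910
haswell   = hd4000 * 1.8      # hoped-for 80% jump -> ~1026
broadwell = haswell * 1.5     # a further 50% -> ~1540, near the 1500 mark
print(round(gt640m), round(haswell), round(broadwell))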

By 2013, NVidia will have a 740M though and I assume that would be at least 50% faster than Haswell so would still be compelling for the $999 model.
post #31 of 35
Quote:
Originally Posted by Marvin View Post


The Acer Ultrabook above has the same spec for $799 and that has a display. If we assume Acer are making 15% margins, cost to build is about $695. Take off $50 for a 17" screen, take off $25 for the DVD drive. Add $100 for a metal chassis and some money for things like Thunderbolt and the OS license to get say $799, add 25% margins and you get $999. Apple typically operates at 25% profit margins.
 

They do seem to use a different cpu. Here is the Acer ultrabook with 640m I located on amazon. Some of the other sites gave it better reviews, although I've never really liked Acer.

 

The cpus are still a bit different in cost. Acer uses a 2467m. Apple uses a 3615qm in the 2.3 quad. The cpu in the 2.6 model is virtually identical in price, so at the $900 price point, it still contributes roughly that to the cost. I felt like they didn't design it for a discrete gpu. The thing runs on the hot side as it is, and they went skimpy on the video ram last time. I know I focus on that point, but for any OpenCL functionality, that tends to be a pivotal factor. They'd actually satisfy a very wide range of needs with a semi-decent gpu, and I feel like that's important for such a machine. It needs to leverage as many new customers as possible. I don't see the desktops as facebook and email machines. You can do that on anything.

post #32 of 35
Quote:
Originally Posted by hmm 
The cpus are still a bit different in cost. Acer uses a 2467m. Apple uses a 3615qm in the 2.3 quad.

The Acer I linked above used the i7-2670QM, same price as the Mini i7.
Quote:
Originally Posted by hmm 
The cpu in the 2.6 model is virtually identical in price, so at the $900 price point, it still contributes roughly that to the cost.

Those are ULV chips. The MBA-style machines cost a bit more for less.
Quote:
Originally Posted by hmm 
I felt like they didn't design it for a discrete gpu. The thing runs on the hot side as it is

The space is definitely tight for two chips and it doesn't help when they blow the air out the same hole the cold air comes in. I was surprised at how quickly it manages to cool down though, and it's a lot better than the old ones - I think they run the fans faster and sooner these days. They used to wait right until the last minute and keep the machines running until they were burning hot.

When we get heterogeneous computing, I think the problems over power budget will be much less important. They will be able to get the best graphics and processing performance out of an overall power budget. Instead of having a very fast CPU coupled with a weak GPU or vice versa, it will be a fast SoC no matter what it's used for. Some sort of dynamic architecture perhaps, where a core isn't just a GPU core or a CPU core; it configures itself based on what type of process is needed.

I bet they've figured this stuff out already and have qubit processors in their labs somewhere. They won't release them until they've milked the growth out of the industry though, drip-feeding minor percentage improvements. It'll come soon enough though.
post #33 of 35
Quote:
Originally Posted by Marvin View Post


The Acer I linked above used the i7-2670QM, same price as the Mini i7.

That was my mistake. I looked for Acer ultrabooks with the correct name. The post where you linked one was further up, so I missed the mismatch with the link there.

 

Quote:
Those are ULV chips. The MBA-style machines cost a bit more for less.

I know about that. They're quite expensive. I've wondered how Apple will eventually budget for some screen upgrades there or if the line will remain much like it is today with the rMBP models simply filling in the older price points within a generation or two.

 

 

Quote:
The space is definitely tight for two chips and it doesn't help when they blow the air out the same hole the cold air comes in. I was surprised at how quickly it manages to cool down though, and it's a lot better than the old ones - I think they run the fans faster and sooner these days. They used to wait right until the last minute and keep the machines running until they were burning hot.

I've never personally examined the cooling design on them. You probably know way more than me about the details of the mini. I read about them, but I've never really used or owned one beyond a few minutes at the Apple Store. They are able to fulfill a reasonable range of requirements. I know quite a few graphic designers. If they were looking for a desktop, I'd suggest one + an NEC screen if they dislike imacs, assuming the work doesn't include motion graphics. For motion graphics, it's still not enough. You really don't want to require the step of making low res proxies to animate simply because the gpu is too slow. Most of that software also tends not to be certified on integrated graphics.

 

Quote:
When we get heterogeneous computing, I think the problems over power budget will be much less important. They will be able to get the best graphics and processing performance out of an overall power budget. Instead of having a very fast CPU coupled with a weak GPU or vice versa, it will be a fast SoC no matter what it's used for. Some sort of dynamic architecture perhaps, where a core isn't just a GPU core or a CPU core; it configures itself based on what type of process is needed.
I bet they've figured this stuff out already and have qubit processors in their labs somewhere. They won't release them until they've milked the growth out of the industry though, drip-feeding minor percentage improvements. It'll come soon enough though.

I was kind of hoping for better OpenCL alignment. I haven't analyzed the difference in shared DDR3 compared to dedicated GDDR5. A lot of people on Apple forums accuse Windows OEMs of pumping vram with a larger amount of DDR3, yet I don't see that frequently when I check specs. OpenCL and CUDA are really driving some interesting developments in gpu leveraging. It's just memory intensive. This is why I laughed when anyone wanted a 2011 discrete gpu mini for anything leveraging OpenCL. It wasn't well aligned with the requirements of these applications, which are totally different from gaming requirements. I agree with you on drip-feeding. If you look at Apple's releases, they try to schedule major updates of their own in years when the updates from Intel are a bit softer. Apple still releases under-the-hood tuning kinds of updates in Intel's stronger years. It's just that fewer people will notice those.

post #34 of 35
Quote:
Originally Posted by hmm 
If they were looking for a desktop, I'd suggest one + an NEC screen if they dislike imacs, assuming the work doesn't include motion graphics. For motion graphics, it's still not enough. You really don't want to require the step of making low res proxies to animate simply because the gpu is too slow. Most of that software also tends not to be certified on integrated graphics.

The standard features work ok on the HD4000 and the fast quad-i7 makes it fine for this stuff. Even dedicated GPUs fall short in some cases:

http://prolost.com/cs6

" My MacBook Pro has an NVIDIA GeForce GT 330M, but that’s not beefy enough for AE6, so I’m stuck with CPU rendering."

When you have to do CPU processing, the quad-i7 in the Mini matches the MBP. When you do have a compatible GPU, the advanced features show big improvements:

http://www.barefeats.com/aecs6.html

Look how slow the one with an AMD GPU is vs the 650M rMBP. They are supposed to have switched some things from CUDA to OpenCL, which should work everywhere but obviously not that feature yet.

The 27" iMac with GTX 680MX should be pretty good though.
post #35 of 35
Quote:
Originally Posted by Marvin View Post



http://www.barefeats.com/aecs6.html
Look how slow the one with an AMD GPU is vs the 650M rMBP. They are supposed to have switched some things from CUDA to OpenCL, which should work everywhere but obviously not that feature yet.
The 27" iMac with GTX 680MX should be pretty good though.

The thing that confuses people with Adobe is that they label the entire thing their Mercury Engine regardless of the frameworks that are used. Mercury leverages OpenGL and either OpenCL or CUDA depending upon the application. Premiere and After Effects use CUDA for certain features; OpenCL isn't implemented in either at this point. Photoshop, Illustrator, and I think Lightroom use OpenCL. They are unable to use CUDA. That hit is because it simply cannot use an AMD gpu there. The 330M is either too old or it doesn't make the minimum vram requirement for CS6. Even Photoshop requires 512MB of vram to use most of its OpenCL-based functions, and in my experience going beyond their requirements is a good idea if you frequently work with larger files. Some of these things like After Effects, Vray RT, iRay, etc. are really designed for the 2GB Kepler and Fermi cards. I say this because they typically have to load the entire thing into video memory prior to running calculations. It's totally different from gaming where people would generally examine memory bandwidth more than the total amount.
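If you want to see where a given gpu stands against that kind of floor, here's a hypothetical check with the pyopencl bindings (the 512MB constant is just illustrative, standing in for requirements like the CS6 one above):

import pyopencl as cl

REQUIRED_MB = 512  # illustrative floor, like the CS6 requirement above

for platform in cl.get_platforms():
    for device in platform.get_devices():
        if not (device.type & cl.device_type.GPU):
            continue  # only GPU devices matter for a vram floor
        mem_mb = device.global_mem_size // 2**20
        verdict = "meets" if mem_mb >= REQUIRED_MB else "falls below"
        print("%s: %d MB %s the %d MB floor"
              % (device.name, mem_mb, verdict, REQUIRED_MB))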

 

By the way, the performance of the desktop cards there is extremely interesting. I'd like to see rendering go that way. It could enable a lot of cool stuff without exotic hardware requirements.

 

I thought I should add, I read your article there in its entirety. The problem is that guy is giving an extremely basic description of point caching and sampling algorithms. There are certain varieties. Some use pre-passes to determine importance relative to what can be seen by the camera. Some allow for different settings on secondary GI. The typical irradiance caching method that he seems to be describing there has historically relied on stochastic algorithms, meaning they're randomized and interpolated. The problem with this is that it takes a certain amount of tuning. I'm not sure what Adobe intended for a workflow there, but the historic problem with stochastic sampling of irradiance is that it has difficulty with high contrast micro-textures or areas of high contrast near the clipping plane. When it comes to areas that have small extreme points, it tends to throw off randomized sampling, especially when calculated frame by frame. You can get a lot of flicker and in extreme cases distortion patterns. They were really developed for a much earlier era. Seeing as Adobe is jumping in now, they're looking at what is practical going forward. Oh, and just like Nuke and Fusion, they had a scanline renderer prior to this.
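To illustrate the flicker point with a toy example (my sketch, nothing to do with Adobe's actual implementation): a small, very bright feature near the horizon is rarely hit by cosine-weighted random samples, so a low sample count makes the estimate jump around from frame to frame.

import math
import random

def incoming_radiance(theta):
    # Hypothetical scene: a bright sliver near the horizon stands in for a
    # high-contrast micro-feature of the kind described above.
    return 50.0 if theta > 1.45 else 1.0

def irradiance_estimate(n_samples):
    # Cosine-weighted hemisphere sampling: pdf = cos(theta)/pi, so each
    # sample contributes pi * L and the cosine term cancels out.
    total = 0.0
    for _ in range(n_samples):
        theta = math.acos(math.sqrt(random.random()))
        total += incoming_radiance(theta)
    return math.pi * total / n_samples

# Fresh random samples each frame make the estimate jitter, which shows up
# as flicker in an animation unless samples are cached or interpolated.
for frame in range(5):
    print("frame %d: %.2f" % (frame, irradiance_estimate(32)))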


Edited by hmm - 11/17/12 at 9:42am