Geekbench Benchmarks for 2.3 GHz i7 Mac Mini - Late 2012


Comments

  • Reply 21 of 34
    wizard69wizard69 Posts: 13,377member
    Marvin wrote: »
    Are you quick enough to capture the sea seamlessly?
    The resolution you get is about 10784 x 2376 pixels (up to 28MPixels or something). The quality will certainly be better from a proper camera but for lighting and reflections, it should be usable for some output. Maybe not print resolution though.
    You guys keep posting pics like that and I might upgrade my phone sooner than planned.
    Intel will support OpenGL 4 and OpenCL 1.2 from the outset with Haswell. The GPU performance will hopefully jump at least 80%.
    There are more stories, rumors and guesses floating around than one knows what to do with. One thing is obvious: Intel has gone silent on the GPU and is not discussing it publicly. I take this as a sign of a major upgrade in performance.
    I don't see Broadwell improving performance as much because they are integrating things into the CPU:
    http://news.softpedia.com/news/Intel-Broadwell-CPUs-to-Make-Appearance-in-2014-237178.shtml
    I think a lot of people will be content with a $799 Haswell Mini especially if the Fusion drive price is lower by then.

    Haswell may very well be the turning point for Intel integrated graphics. It may be so good that the base model Mini might look appealing. Depending upon which rumor mill you want to believe, the GT3 variant could come with VRAM built into the SoC package, giving performance similar to midrange video cards.

    I really think people underestimate Haswell.
  • Reply 22 of 34
    wizard69wizard69 Posts: 13,377member
    hmm wrote: »
    It's difficult to do so, but some of it can be corrected later. It's just tedious, and most people make a mess of stuff like that. As for resolution, that would still be pretty limiting. It's helpful to start with that extra resolution, then downsample at the end if necessary. Starting with something that is a little over 2K high is really limiting. You'd also need a bit more ground. It's still awesome that you can capture that on your phone.
    This is true. However, I think you miss the point: this tech allows the average iPhone user to take some really nice pics that would otherwise look like crap from a cell phone camera.
    Edit: What I meant was that I'm still using "pro" level cameras (in terms of their target market), just not renting medium format digital, and not upgrading that often at this point. I've been testing out some things I'd like to implement with 10k+ renders. This means I need a lot of image data to keep it smooth. The other problem I've been having is that the renderer I've been using likes to sample reflections at low quality to speed up render times. This angers Morbo, so I may have to see if I can write something to modify this behavior. Combine that with being limited to an awkward method of irradiance caching to make use of IBL nodes, and it's difficult to really get around the CG feel of it, especially with the poor interaction of bump maps.
    Which proves my point above. You are looking at this from the standpoint of a professional, while iPhone users employing this feature are anything but professionals. All they are looking for is an aspect ratio that can't be had with normal cell phone optics.
    I've been looking at something like V-Ray because of this. Their GI is really nice, but their shader documentation is pretty bad, meaning I'd have to do a lot of testing to determine certain situational behavior. It takes some tweaks in post regardless, but I like the best starting quality possible, as I like things to look as photographic as possible. This means extremely precise models without distorted faces, a lot of shader testing, bump maps for micro-texture, and good IBL interaction with the use of bump maps. I just wanted to add some context, and I already know you're familiar with all the terms used, as you've referenced this stuff before.


    Anandtech seemed unsure about the GPU improvements there.
    I have no respect for Anandtech, as he seems hell-bent on promoting Intel. In any event, whatever Intel is up to here, they are keeping it very close to their chest and have virtually eliminated their openness with respect to the GPUs in future products.

    I read the article a few weeks ago. Regardless of that, it should be a boost in functionality for the Mini. OpenCL 1.2 and OpenGL 4 are quite significant. What I find interesting is the pace of iPad improvements at the moment.

    The iPad is amazing! It is almost a perfect storm of hardware technologies all arriving at the right time.

    Haswell does have the potential to turn the Mini into a very desirable machine. IF, that is, Apple implements the right chips and actually delivers the $799 machine as a performance machine.
  • Reply 23 of 34
    MarvinMarvin Posts: 15,326moderator
    hmm wrote:
    Anandtech seemed unsure about the GPU improvements there.

    http://www.anandtech.com/show/6355/intels-haswell-architecture/12
    Haswell's GPU

    Although Intel provided a good amount of detail on the CPU enhancements to Haswell, the graphics discussion at IDF was fairly limited. That being said, there's still some to talk about here.
    Haswell builds on the same fundamental GPU architecture we saw in Ivy Bridge. We won't see a dramatic redesign/re-plumbing of the graphics hardware until Broadwell in 2014 (that one is going to be a big one).

    The Intel demo showed a decent improvement:


    [VIDEO]


    They are saying that's a mobile reference platform so not just a high TDP part. Running at 1080p is enough to suggest a big improvement. Anandtech suggests Ultrabooks will see a smaller increase but they might be getting lower GT2 parts. The quad-core chips with the 45W TDP (MBP and upper Minis at least) should get GT3.
    wizard69 wrote:
    Haswell may very well be the turning point for intel integrated graphics. It may be so good that the base model Mini might look appealing. Depending upon what rumor mill you want to believe the GT3 variant could come with VRAM built into the SoC package giving performance similar to midrange video cards.

    Yes, these articles suggest 64MB of on-die memory that will act like the fast memory on dedicated GPUs:

    http://semiaccurate.com/2012/04/02/haswells-gpu-prowess-is-due-to-crystalwell/
    http://news.softpedia.com/news/Intel-s-Crystalwell-Technology-Uses-a-Wide-Memory-BUS-291277.shtml

    While it's not a lot of memory, every improvement is welcome.
  • Reply 24 of 34
    rbrrbr Posts: 631member

    Quote:

    Originally Posted by Marvin View Post





    Although not as high quality due to the lenses, the iPhone's panorama feature with HDR setting could be good for this if it was combined with saving the source data:

    http://www.popphoto.com/2012/04/not-qute-raw-shooting-iphone-app-645-pro-now-available

    PC manufacturers can put a quad-i7 and 640M in Ultrabooks for $750:

    http://www.newegg.com/Product/Product.aspx?Item=N82E16834215541

    It should have been possible in an $800 Mini. Like I say, the integrated GPU will make things simpler for them going forward and it should be ok from next year onwards but I don't like it when they downgrade some components in a new machine relative to the old one.




    "Yes, but..."

    You should remember that the PC manufacturers operate at much lower margins. If one were to compare "build cost" the way some of the teardown sites do, Apple's pricing will usually come out higher due to their margins. One notable exception, however, appears to be the Surface; I saw an article indicating that it carries a 53% margin. By the way, the article was critical of that and posited that it would not meet with public approval.

    I think enough people would be willing to pay the upcharge for a discrete GPU to make it worthwhile for Apple to offer it either as a BTO option or, better yet, as a ready-to-go model people could pick up at the Apple Store and "take home today". By doing so, Apple could increase sales, which would actually reduce their costs enough to allow slightly more "realistic" pricing.

    Anyway, I agree with your point that there is no reason Apple cannot do this somewhere within the range of the current price points. Let's hope they have a change of heart about this.

  • Reply 25 of 34
    MarvinMarvin Posts: 15,326moderator
    rbr wrote:
    there is no reason Apple can not do this somewhere within the range of the current price points. Let's hope they have a change of heart about this.

    At the very least, they can make the $999 model have a dedicated GPU (it could have had the same 512MB 640M in the entry iMac) instead of being a server. Any model can be configured as a server really with BTO options. You might only want the base model with a Fusion drive as a server. It should come with an App Store redeem code so you can install OS X Server on future models.
  • Reply 26 of 34
    rbrrbr Posts: 631member
    Quote:
    Originally Posted by Marvin View Post



    At the very least, they can make the $999 model have a dedicated GPU (it could have had the same 512MB 640M in the entry iMac) instead of being a server. Any model can be configured as a server really with BTO options. You might only want the base model with a Fusion drive as a server. It should come with an App Store redeem code so you can install OS X Server on future models.

     

    Amen, brother.

    From your lips to Tim's ears.
  • Reply 27 of 34
    winterwinter Posts: 1,238member
    I second Marvin's suggestion.
  • Reply 28 of 34
    hmmhmm Posts: 3,405member

    Quote:

    Originally Posted by Marvin View Post





    At the very least, they can make the $999 model have a dedicated GPU (it could have had the same 512MB 640M in the entry iMac) instead of being a server. Any model can be configured as a server really with BTO options. You might only want the base model with a Fusion drive as a server. It should come with an App Store redeem code so you can install OS X Server on future models.




    The entry iMac uses a much less expensive CPU. It's not so much a question of what they could do. It's more what they can do within their desired margins. Apple has never really held GPU performance as a top priority anyway. They like to offer a couple of performance options, but even then, choosing mobile options makes it much more expensive. If you want that much power within a 100W TDP and a small form factor, it tends to be expensive.

  • Reply 29 of 34
    MarvinMarvin Posts: 15,326moderator
    hmm wrote:
    The entry iMac uses a much less expensive CPU. It's not so much a question of what they could do. It's more what they can do within their desired margins.

    It's hard to compare price to the iMac as it's an entirely different machine. The old Mini had a $200 premium for a dedicated GPU and a faster CPU. They should be able to add a 640M to the $799 Mini for a $200 premium. Right now, for that $200, they give you 2x 1TB 7200rpm drives and OS X Server. I think more people would go for a GPU than hard drives and software they can get from the App Store.

    The Acer Ultrabook above has the same spec for $799 and that has a display. If we assume Acer are making 15% margins, cost to build is about $695. Take off $50 for a 17" screen, take off $25 for the DVD drive. Add $100 for a metal chassis and some money for things like Thunderbolt and the OS license to get say $799, add 25% margins and you get $999. Apple typically operates at 25% profit margins.
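
    To sanity-check that arithmetic, here is a minimal sketch of the estimate. Every figure in it is one of the guesses above (the 15% Acer markup, the $50/$25/$100 component swaps, the 25% Apple margin) plus an assumed fudge for Thunderbolt and the OS license; none of it is real BOM data:

```python
# Back-of-the-envelope Mini pricing; all numbers are the post's estimates.
acer_retail = 799
acer_cost = acer_retail / 1.15           # assume Acer marks cost up ~15% -> ~$695
mini_cost = acer_cost - 50 - 25 + 100    # - 17" screen, - DVD drive, + metal chassis
mini_cost += 79                          # assumed extra for Thunderbolt, OS license
mini_retail = mini_cost * 1.25           # Apple's typical ~25% margin
print(round(mini_cost), round(mini_retail))  # build cost, then suggested retail
```

Rounded off, that lands right around the $799 build cost and $999 retail price points the thread keeps coming back to.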

    If Haswell delivers on the GPU, I won't be too bothered though because that $799 Mini would be pretty good and it can't get any worse from that point on. Intel graphics are as low as you can go really. I just don't see the point in offering a Server model. Something around the 1500 mark in the following chart would be a comfortable GPU:

    http://www.videocardbenchmark.net/high_end_gpus.html

    The HD 4000 is sitting at 570. I'm expecting Haswell to jump 80% to sit around 1000 - this has to be the case if Intel's 1080p demo isn't completely phoney. The 650M is around 1300 and the 640M is about 70% of that so Haswell should be either the same or slightly faster than the 640M next year. It can easily arrive in June. Coupled with a further 10% increase in the CPU and the $799 Mini will be a good buy. If Intel continues like this and offers a 10% increase in CPU, 50% increase in GPU for Broadwell, it should get to the 1500 mark sometime in 2014. Broadwell might not have such a big jump as Haswell as they are making it an SoC but that can bring the price down.
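
    Those projections are just compounding the assumed percentages. Laid out as a sketch, with the caveat that the 80%, 70% and 50% factors are the rumored numbers from this thread, not measurements:

```python
# Scores are the PassMark-style figures cited above; growth factors are rumors.
hd4000 = 570
haswell_gpu = hd4000 * 1.8           # assumed ~80% jump for Haswell
gt650m = 1300
gt640m = gt650m * 0.7                # "the 640M is about 70% of that"
broadwell_gpu = haswell_gpu * 1.5    # assumed further ~50% jump for Broadwell
print(round(haswell_gpu), round(gt640m), round(broadwell_gpu))
```

On those assumptions, Haswell lands around 1026, just above the ~910 of the 640M, and Broadwell clears the 1500 mark.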

    By 2013, NVidia will have a 740M though and I assume that would be at least 50% faster than Haswell so would still be compelling for the $999 model.
  • Reply 30 of 34
    hmmhmm Posts: 3,405member

    Quote:

    Originally Posted by Marvin View Post





    The Acer Ultrabook above has the same spec for $799 and that has a display. If we assume Acer are making 15% margins, cost to build is about $695. Take off $50 for a 17" screen, take off $25 for the DVD drive. Add $100 for a metal chassis and some money for things like Thunderbolt and the OS license to get say $799, add 25% margins and you get $999. Apple typically operates at 25% profit margins.

     


    They do seem to use a different CPU. Here is the Acer Ultrabook with a 640M I located on Amazon. Some of the other sites gave it better reviews, although I've never really liked Acer.


     


    The CPUs are still a bit different in cost. Acer uses a 2467M. Apple uses a 3615QM in the 2.3 quad. The CPU in the 2.6 model is virtually identical in price, so at the $900 price point, it still contributes roughly that to the cost. I felt like they didn't design it for a discrete GPU. The thing runs on the hot side as it is, and they went skimpy on the video RAM last time. I know I focus on that point, but for any OpenCL functionality, that tends to be a pivotal factor. They'd actually satisfy a very wide range of needs with a semi-decent GPU, and I feel like that's important for such a machine. It needs to attract as many new customers as possible. I don't see the desktops as Facebook and email machines. You can do that on anything.

  • Reply 31 of 34
    MarvinMarvin Posts: 15,326moderator
    hmm wrote:
    The CPUs are still a bit different in cost. Acer uses a 2467M. Apple uses a 3615QM in the 2.3 quad.

    The Acer I linked above used the i7-2670QM, same price as the Mini i7.
    hmm wrote:
    The CPU in the 2.6 model is virtually identical in price, so at the $900 price point, it still contributes roughly that to the cost.

    Those are ULV chips. The MBA-style machines cost a bit more for less.
    hmm wrote:
    I felt like they didn't design it for a discrete gpu. The thing runs on the hot side as it is

    The space is definitely tight for two chips, and it doesn't help when they blow the air out the same hole the cold air comes in. I was surprised at how quickly it manages to cool down though, and it's a lot better than the old ones - I think they run the fans faster and sooner these days. They used to wait right until the last minute and keep the machines running until they were burning hot.

    When we get heterogeneous computing, I think the problems over power budget will be much less important. They will be able to get the best graphics and processing performance out of an overall power budget. Instead of having a very fast CPU coupled with a weak GPU or vice versa, it will be a fast SoC no matter what it's used for. Some sort of dynamic architecture, perhaps, where a core isn't just a GPU core or a CPU core; it configures itself based on what type of process is needed.

    I bet they've figured this stuff out already and have qubit processors in their labs somewhere. They won't release them until they've milked the growth out of the industry though, drip-feeding minor percentage improvements. It'll come soon enough though.
  • Reply 32 of 34
    hmmhmm Posts: 3,405member

    Quote:

    Originally Posted by Marvin View Post





    The Acer I linked above used the i7-2670QM, same price as the Mini i7.


    That was my mistake. I looked for Acer Ultrabooks with the correct name. The post where you linked one was further up, so I missed the mismatch with the link there.


     


    Quote:


    Those are ULV chips. The MBA-style machines cost a bit more for less.



    I know about that. They're quite expensive. I've wondered how Apple will eventually budget for some screen upgrades there or if the line will remain much like it is today with the rMBP models simply filling in the older price points within a generation or two.


     


     


    Quote:


    The space is definitely tight for two chips, and it doesn't help when they blow the air out the same hole the cold air comes in. I was surprised at how quickly it manages to cool down though, and it's a lot better than the old ones - I think they run the fans faster and sooner these days. They used to wait right until the last minute and keep the machines running until they were burning hot.



    I've never personally examined the cooling design on them. You probably know way more than I do about the details of the Mini. I read about them, but I've never really used or owned one beyond a few minutes at the Apple Store. They are able to fulfill a reasonable range of requirements. I know quite a few graphic designers. If they were looking for a desktop, I'd suggest one plus an NEC screen if they dislike iMacs, assuming the work doesn't include motion graphics. For motion graphics, it's still not enough. You really don't want to require the step of making low-res proxies to animate simply because the GPU is too slow. Most of that software also tends not to be certified on integrated graphics.


     


    Quote:


    When we get heterogeneous computing, I think the problems over power budget will be much less important. They will be able to get the best graphics and processing performance out of an overall power budget. Instead of having a very fast CPU coupled with a weak GPU or vice versa, it will be a fast SoC no matter what it's used for. Some sort of dynamic architecture, perhaps, where a core isn't just a GPU core or a CPU core; it configures itself based on what type of process is needed.

    I bet they've figured this stuff out already and have qubit processors in their labs somewhere. They won't release them until they've milked the growth out of the industry though, drip-feeding minor percentage improvements. It'll come soon enough though.



    I was kind of hoping for better OpenCL alignment. I haven't analyzed the difference between shared DDR3 and dedicated GDDR5. A lot of people on Apple forums accuse Windows OEMs of pumping up VRAM with a larger amount of DDR3, yet I don't see that frequently when I check specs. OpenCL and CUDA are really driving some interesting developments in GPU leveraging. It's just memory intensive. This is why I laughed when anyone wanted a 2011 discrete-GPU Mini for anything leveraging OpenCL. It wasn't well aligned with the requirements of these applications, which are totally different from gaming requirements. I agree with you on drip-feeding. If you look at Apple's releases, they try to schedule major updates of their own in years when the updates from Intel are a bit softer. Apple still releases under-the-hood tuning kinds of updates in Intel's stronger years. It's just that fewer people will notice those.

  • Reply 33 of 34
    MarvinMarvin Posts: 15,326moderator
    hmm wrote:
    If they were looking for a desktop, I'd suggest one plus an NEC screen if they dislike iMacs, assuming the work doesn't include motion graphics. For motion graphics, it's still not enough. You really don't want to require the step of making low-res proxies to animate simply because the GPU is too slow. Most of that software also tends not to be certified on integrated graphics.

    The standard features work ok on the HD4000 and the fast quad-i7 makes it fine for this stuff. Even dedicated GPUs fall short in some cases:

    http://prolost.com/cs6

    " My MacBook Pro has an NVIDIA GeForce GT 330M, but that’s not beefy enough for AE6, so I’m stuck with CPU rendering."

    When you have to do CPU processing, the quad-i7 in the Mini matches the MBP. When you do have a compatible GPU, the advanced features show big improvements:

    http://www.barefeats.com/aecs6.html

    Look how slow the one with an AMD GPU is vs the 650M rMBP. They are supposed to have switched some things from CUDA to OpenCL, which should work everywhere but obviously not that feature yet.

    The 27" iMac with GTX 680MX should be pretty good though.
  • Reply 34 of 34
    hmmhmm Posts: 3,405member

    Quote:

    Originally Posted by Marvin View Post







    http://www.barefeats.com/aecs6.html

    Look how slow the one with an AMD GPU is vs the 650M rMBP. They are supposed to have switched some things from CUDA to OpenCL, which should work everywhere but obviously not that feature yet.

    The 27" iMac with GTX 680MX should be pretty good though.


    The thing that confuses people with Adobe is that they label the entire thing their Mercury Engine regardless of the frameworks that are used. Mercury leverages OpenGL and either OpenCL or CUDA depending upon the application. Premiere and After Effects use CUDA for certain features. OpenCL isn't implemented in either at this point. Photoshop, Illustrator, and I think Lightroom use OpenCL. They are unable to use CUDA. That performance hit is because it simply cannot use an AMD GPU there. The 330M is either too old or it doesn't meet the minimum VRAM requirement for CS6. Even Photoshop requires 512MB of VRAM to use most of its OpenCL-based functions, and in my experience going beyond their requirements is a good idea if you frequently work with larger files. Some of these things like After Effects, V-Ray RT, iray, etc. are really designed for the 2GB Kepler and Fermi cards. I say this because they typically have to load the entire thing into video memory prior to running calculations. It's totally different from gaming, where people generally examine memory bandwidth more than the total amount.
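
    Since these GPU renderers have to hold the whole scene in video memory, the budgeting is just a sum against the card's VRAM. A toy sketch of that check; the scene sizes and the 20% overhead figure are made-up illustrations, not vendor numbers:

```python
def fits_in_vram(scene_mb, vram_mb, overhead=0.2):
    """True if textures + geometry (plus an assumed driver overhead) fit in VRAM."""
    return scene_mb * (1 + overhead) <= vram_mb

# A 1.5GB scene fits on an assumed 2GB Kepler card but not on a 256MB mobile GPU.
print(fits_in_vram(1536, 2048), fits_in_vram(1536, 256))
```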


     


    By the way, the performance of the desktop cards there is extremely interesting. I'd like to see rendering go that way. It could enable a lot of cool stuff without exotic hardware requirements.


     


    I thought I should add, I read your article there in its entirety. The problem is that guy is giving an extremely basic description of point caching and sampling algorithms. There are certain varieties. Some use pre-passes to determine importance relative to what can be seen by the camera. Some allow for different settings on secondary GI. The typical irradiance caching method that he seems to be describing there has historically relied on stochastic algorithms, meaning they're randomized and interpolated. The problem with this is that it takes a certain amount of tuning. I'm not sure what Adobe intended for a workflow there, but the historic problem with stochastic sampling of irradiance is that it has difficulty with high-contrast micro-textures or areas of high contrast near the clipping plane. When it comes to areas that have small extreme points, it tends to throw off randomized sampling, especially when calculated frame by frame. You can get a lot of flicker and, in extreme cases, distortion patterns. These methods were really developed for a much earlier era. Seeing as Adobe is jumping in now, they're looking at what is practical going forward. Oh, and just like Nuke and Fusion, they had a scanline renderer prior to this.
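
    The frame-by-frame flicker is easy to see in a toy model: if each frame draws its own random samples, the per-frame irradiance estimate wanders, and the wander only shrinks as the sample count (and render time) goes up. All numbers here are illustrative; real irradiance caching also interpolates cached points across surfaces, which this deliberately ignores:

```python
import random

def irradiance_estimate(n_samples, rng):
    # Pretend half the hemisphere is bright (radiance 1) and half is dark (0),
    # so the true irradiance term is 0.5; estimate it by random sampling.
    return sum(rng.random() < 0.5 for _ in range(n_samples)) / n_samples

rng = random.Random(1)
few = [irradiance_estimate(16, rng) for _ in range(100)]     # cheap, flickery
many = [irradiance_estimate(1024, rng) for _ in range(100)]  # costly, stable

def spread(frames):
    # Worst-case frame-to-frame swing, i.e. how visible the flicker could be.
    return max(frames) - min(frames)

print(spread(few), spread(many))
```

The low-sample run swings far more between frames than the high-sample one, which is exactly why these methods need tuning when calculated per frame.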
