2014 Mac mini Wishlist


Comments

  • Reply 321 of 1528
    wizard69 Posts: 13,377 member


    Thunderbolt is an issue, but I find it hard to believe that Apple would have signed up for exclusive TB use without covering itself when it comes to implementing the interface on alternative hardware.  You would have to believe, at the very least, that Apple has the option to put TB on ARM hardware.  If they didn't get that written into the contract, somebody at Apple is asleep at the wheel.


    Quote:

    Originally Posted by Marvin View Post





    AMD's graphics are better just now but you also have to think about Thunderbolt. Maybe it would pass certification but it's not clear that it would.



    Haswell might be able to come close enough to AMD's GPUs anyway.


    Close to today's AMD GPUs, sure, but what happens when AMD puts a Southern Islands core in their SoC?  Remember, AMD is developing SoC tech just as fast as, if not faster than, Intel.  Intel has never come close to AMD's Zacate-based chips, for example.  AMD is struggling right now, but that also makes them aggressive.

  • Reply 322 of 1528
    wizard69 Posts: 13,377 member


    Even though the linked article is dated, it still supports my argument: AMD does better than Intel for GPU-focused apps.  The article is dated because compiler technology can still improve Piledriver performance significantly.  On top of all that, AMD's chips will run much cooler and provide a smoother user experience.  Further, Apple isn't exactly using top-of-the-line performance in the Mini anyway, so the CPU gap isn't that great.


     


    In the end it comes down to this: you pay less for AMD's chips and get better video performance, especially in 3D.

  • Reply 323 of 1528
    mjteix Posts: 563 member

    Quote:


    Originally Posted by wizard69


     


    Further Apple isn't exactly using top of the line performance in the Mini anyways so the CPU gap isn't that great.   




    Apple is using exactly the same CPUs in the Mac mini as in the 13" and 15" MBPs. If Apple isn't exactly using top-of-the-line performance in the Mini, they are still using very good Core i5/i7 parts with the best iGPU Intel has to offer. But that doesn't matter; the results are the same: the CPU gap with AMD is HUGE!


     


    Just look at the Geekbench scores of those parts:


    Intel Core i7-3720QM, 2600 MHz (4 cores): 10132
    Intel Core i7-3615QM, 2300 MHz (4 cores): 9037
    Intel Core i5-3210M, 2500 MHz (2 cores): 5742
    Intel Core i5-2410M, 2300 MHz (2 cores): 5052
    AMD A10-4600M, 2300 MHz (4 cores): 4235


     


    Or CPU benchmarks:


    Intel Core i7-3720QM @ 2.60GHz: 8,468
    Intel Core i7-3615QM @ 2.30GHz: 7,481
    Intel Core i5-3210M @ 2.50GHz: 3,818
    Intel Core i5-2410M @ 2.30GHz: 3,196
    AMD A10-4600M APU @ 2.30GHz: 3,072


     


    The CPU in the $799 Mac mini is more than twice as fast as the best mobile APU AMD offers. And that Intel CPU is not even top-of-the-line performance. Even desktop APUs can't match the CPU performance of an Intel mobile quad-core. I'd rather take a small "hit" in graphics than cut the CPU performance in half.
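    To put those gaps in concrete terms, here is a small C sketch (scores copied from the Geekbench list above) computing how each Intel part compares against AMD's best mobile APU:

    #include <stdio.h>

    /* Geekbench scores quoted above (higher is better). */
    struct part { const char *name; int score; };

    int main(void) {
        const struct part parts[] = {
            { "Intel Core i7-3720QM", 10132 },
            { "Intel Core i7-3615QM",  9037 },
            { "Intel Core i5-3210M",   5742 },
            { "Intel Core i5-2410M",   5052 },
            { "AMD A10-4600M",         4235 },
        };
        const double amd = 4235.0;  /* AMD A10-4600M baseline */
        for (int i = 0; i < 5; i++)
            printf("%-22s %5d  (%.2fx the A10-4600M)\n",
                   parts[i].name, parts[i].score, parts[i].score / amd);
        return 0;
    }

    Even the quad-core i7-3615QM in the $799 Mini works out to about 2.1x the A10-4600M, which is where the "more than twice as fast" figure comes from.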


     


    There's a reason why those APUs are cheap: they suck! Even AMD officially compares them to Intel's Core i3 parts, not Core i5, let alone Core i7. Yeah, they have nice graphics, so what?

  • Reply 324 of 1528
    marvfox Posts: 2,275 member


    Nice to know Apple has a bunch of liars, or incompetents, working for them.

     

  • Reply 325 of 1528
    winter Posts: 1,238 member
    If I wanted discrete graphics, then I would pay for discrete graphics, but I think for the cost they should offer some more options. Not giving the 650M 2 GB of VRAM in the Retina MacBook Pro to drive the Retina display was a mistake in my view.

    Haswell will be a decent step up from Ivy Bridge, Broadwell from Haswell, Skylake from Broadwell, and Skymont from Skylake.
  • Reply 326 of 1528
    wizard69 Posts: 13,377 member


    Your numbers below support my point completely: you give up a little bit of CPU performance for a more versatile and better-performing GPU.  For many uses that is the better trade-off.  Since AMD has a deep well of GPU IP, I expect them to keep that differential with Intel.


     


    This is only significant if you look at the integrated solutions; if your focus is on discrete GPUs, then you might as well go for Intel's better CPU performance.


     


    Finally, the numbers below mean nothing without context.


    Quote:

    Originally Posted by mjteix View Post


    Apple is using exactly the same CPUs in the Mac mini as in the 13" and 15" MBPs. If Apple isn't exactly using top-of-the-line performance in the Mini, they are still using very good Core i5/i7 parts with the best iGPU Intel has to offer. But that doesn't matter; the results are the same: the CPU gap with AMD is HUGE!


     


    Just look at the Geekbench scores of those parts:


    Intel Core i7-3720QM, 2600 MHz (4 cores): 10132
    Intel Core i7-3615QM, 2300 MHz (4 cores): 9037
    Intel Core i5-3210M, 2500 MHz (2 cores): 5742
    Intel Core i5-2410M, 2300 MHz (2 cores): 5052
    AMD A10-4600M, 2300 MHz (4 cores): 4235


     


    Or CPU benchmarks:


    Intel Core i7-3720QM @ 2.60GHz: 8,468
    Intel Core i7-3615QM @ 2.30GHz: 7,481
    Intel Core i5-3210M @ 2.50GHz: 3,818
    Intel Core i5-2410M @ 2.30GHz: 3,196
    AMD A10-4600M APU @ 2.30GHz: 3,072


     


    The CPU in the $799 Mac mini is more than twice as fast as the best mobile APU AMD offers. And that Intel CPU is not even top-of-the-line performance. Even desktop APUs can't match the CPU performance of an Intel mobile quad-core. I'd rather take a small "hit" in graphics than cut the CPU performance in half.


     


    There's a reason why those APUs are cheap: they suck! Even AMD officially compares them to Intel's Core i3 parts, not Core i5, let alone Core i7. Yeah, they have nice graphics, so what?


  • Reply 327 of 1528
    winter Posts: 1,238 member
    It's a shame they can't collaborate so you'd have a powerful Intel chip with AMD graphics; that would probably compare to a 650M.
  • Reply 328 of 1528
    wizard69 Posts: 13,377 member


    I see no need for AMD to hook up with Intel.  Right now it is an issue of public perception and a focus on CPU performance that needs to change.  In a modern computer CPU performance isn't everything; for many users a well-performing GPU is more important, even if they don't realize it.


     


    Beyond that, some of these old benchmarks don't reflect the importance of software built with the latest compiler tools, which can enhance Piledriver performance.  To use this AMD hardware without optimizing for it would be a mistake.  Also, it isn't that hard to find tests that show AMD delivering better system behavior than Intel-based hardware.  In the end it is really a question of how much performance you can get for minimal cost.
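    To make the compiler point concrete: Piledriver added instruction sets (FMA3, BMI) that only newer compilers know how to emit. A minimal sketch in C; the flag assumes GCC 4.7 or later, which introduced the bdver2 target:

    /* Build with a Piledriver-aware compiler so the newer instructions
     * actually get used, e.g. with GCC 4.7+:
     *
     *   gcc -O3 -march=bdver2 saxpy.c -o saxpy
     *
     * Older compilers fall back to generic code paths and leave
     * Piledriver performance on the table. */
    void saxpy(int n, float a, const float *x, float *y) {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];  /* can auto-vectorize to FMA on bdver2 */
    }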


     


     


    Quote:

    Originally Posted by Winter View Post



    It's a shame they can't collaborate so you'd have a powerful Intel chip with AMD graphics; that would probably compare to a 650M.

  • Reply 329 of 1528
    hmm Posts: 3,405 member

    Quote:

    Originally Posted by Winter View Post



    …that would probably compare to a 650M.


    Where do you get that idea? Has anyone suggested they have reached that level of performance? It could also have certain requirements in terms of die space and design that would not work with Intel, and of course those two companies have little experience working together. If you want optimal performance, you need some form of discrete graphics today. Integrated can actually be good enough for a number of things compared to where it was a few years ago.

    If discrete graphics die out, I suspect it will be due to insufficient volume to overcome fixed development costs. That is typically how these things go. NVidia was able to leverage out a lot of the graphics workstation vendors in the 1990s, who sold a limited number of units at tens of thousands of dollars each. NVidia used the rise of PC gaming to fund further desktop graphics research. Today that business model is becoming problematic; it may not be sustainable.

    I don't expect Haswell to even approach that level outside of a limited number of synthetic tests. Intel's problems haven't just been ones of raw hardware, but of implementation. There are many use cases where the only thing that makes me occasionally hesitant to suggest them as good enough is the number of annoying gotchas; the HDMI issue comes to mind. In terms of performance, it's fine for a lot of things. It's not great for gaming, but real-time 3D graphics really do push the hardware. They're one of the most demanding non-work-related use cases.

  • Reply 330 of 1528
    mjteix Posts: 563 member

    Quote:

    Originally Posted by wizard69 View Post


    Your numbers below support my point completely: you give up a little bit of CPU performance for a more versatile and better-performing GPU.  For many uses that is the better trade-off.  Since AMD has a deep well of GPU IP, I expect them to keep that differential with Intel.


     


    This is only significant if you look at the integrated solutions; if your focus is on discrete GPUs, then you might as well go for Intel's better CPU performance.


     


    Finally, the numbers below mean nothing without context.



     


    It's funny how the same numbers can "mean nothing" and "support your point completely" at the same time.


     


    You're in denial. I can understand that. You don't really know what little, versatile, and many really mean anymore. You have a hard time understanding how 9037 relates to 4235, or 7481 vs 3072. It happens. It's not getting easier as you're getting old.

  • Reply 331 of 1528
    wizard69 Posts: 13,377 member



    Quote:

    Originally Posted by mjteix View Post

    It's funny how the same numbers can "mean nothing" and "support your point completely" at the same time.


     


    You're in denial. I can understand that. You don't really know what little, versatile, and many really mean anymore. You have a hard time understanding how 9037 relates to 4235, or 7481 vs 3072. It happens. It's not getting easier as you're getting old.




     


    Here is the problem: everybody knows that AMD's CPUs don't perform as well as Intel's.  Nobody can rationally argue with that.  The point I'm making is that it doesn't matter for many users, because GPU performance is more important for the overall experience.  If you can't grasp that, then the discussion won't go anywhere.  Beyond that, hand-picked benchmarks to support your position don't mean a whole hell of a lot, because we can hand-pick benchmarks showing AMD's GPUs running 50 to 100% faster than Intel's.  So yeah, you supported my point, but it really doesn't mean much.


     


    In any event you really should try to address my point: AMD has a better offering for a low-cost machine.  This is especially the case if the user needs good GPU support for their usage patterns.

  • Reply 332 of 1528
    wizard69 Posts: 13,377 member


    It will be very interesting to see how Haswell performs when it hits the market.  


     


    Intel would likely lose more working with AMD than it is worth.  Intel seems to be very willing to have their GPUs publicly trashed at every opportunity.  However, I don't see die space as being an issue with an AMD GPU integrated on an Intel die; there is just an amazing amount of room on today's dies, and most of that room already goes to the integrated GPUs.


     


    The split between integrated and discrete will likely become far more interesting in 2013 and beyond.  At some point things will flip and it will be more of a disadvantage to have a discrete GPU chip.  That will likely happen with unified memory access and very high bandwidth to main memory.  Heterogeneous computing is here to stay but won't really come into its own until more mature hardware is delivered.  Once the GPU is an equal to the CPU on the memory bus, I see a quick decline in discrete GPU support.  As you note, that will dry up funds to support discrete GPU development.


     


    I share your reluctance to recommend Intel-only GPU solutions.  Intel's integrated solutions are nowhere near as good as AMD's and have significant issues that drivers can't correct.  Speaking of drivers, Intel is a mixed bag here; some issues just hang around too long.  By the way, Intel's OpenCL support isn't much better than its 3D support.  In any event, I'm really hoping that Intel can iron out the hardware glitches with Haswell and also squash the driver issues.  If Intel can deliver decent 3D and OpenCL support, then the reservations about recommending them should melt away.
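    As a practical aside, the quickest way to see what OpenCL support a driver stack actually exposes is to enumerate its platforms and devices. A minimal sketch against the standard OpenCL C API (on OS X the header is <OpenCL/opencl.h> and you link with -framework OpenCL instead):

    #include <stdio.h>
    #include <CL/cl.h>

    int main(void) {
        cl_platform_id platforms[8];
        cl_uint nplat = 0;
        clGetPlatformIDs(8, platforms, &nplat);

        for (cl_uint p = 0; p < nplat; p++) {
            char name[256];
            clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof name, name, NULL);
            printf("Platform: %s\n", name);

            /* List every device (CPU, GPU, etc.) the platform exposes. */
            cl_device_id devices[8];
            cl_uint ndev = 0;
            clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &ndev);
            for (cl_uint d = 0; d < ndev; d++) {
                char dname[256];
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof dname, dname, NULL);
                printf("  Device: %s\n", dname);
            }
        }
        return 0;
    }

    If the Intel GPU doesn't show up here, or shows up but misbehaves, no amount of raw hardware helps.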


     


    The problem today is that, because of these issues, Intel is a poor choice for integrated-GPU-only machines.  We can only hope that the driver issues get corrected soon.  Sadly this sucks, given the importance of GPUs in modern systems.  These days GPU acceleration sneaks into everything from web browsers to decompression utilities, so lagging GPU support negatively impacts the user experience.


    Quote:

    Originally Posted by hmm View Post


    Where do you get that idea? Has anyone suggested they have reached that level of performance? It could also have certain requirements in terms of die space and design that would not work with Intel, and of course those two companies have little experience working together. If you want optimal performance, you need some form of discrete graphics today. Integrated can actually be good enough for a number of things compared to where it was a few years ago.

    If discrete graphics die out, I suspect it will be due to insufficient volume to overcome fixed development costs. That is typically how these things go. NVidia was able to leverage out a lot of the graphics workstation vendors in the 1990s, who sold a limited number of units at tens of thousands of dollars each. NVidia used the rise of PC gaming to fund further desktop graphics research. Today that business model is becoming problematic; it may not be sustainable.

    I don't expect Haswell to even approach that level outside of a limited number of synthetic tests. Intel's problems haven't just been ones of raw hardware, but of implementation. There are many use cases where the only thing that makes me occasionally hesitant to suggest them as good enough is the number of annoying gotchas; the HDMI issue comes to mind. In terms of performance, it's fine for a lot of things. It's not great for gaming, but real-time 3D graphics really do push the hardware. They're one of the most demanding non-work-related use cases.


  • Reply 333 of 1528
    hmm Posts: 3,405 member

    Quote:

    Originally Posted by wizard69 View Post


     


    The split between integrated and discrete will likely become far more interesting in 2013 and beyond.  At some point things will flip and it will be more of a disadvantage to have a discrete GPU chip.  That will likely happen with unified memory access and very high bandwidth to main memory.  Heterogeneous computing is here to stay but won't really come into its own until more mature hardware is delivered.  Once the GPU is an equal to the CPU on the memory bus, I see a quick decline in discrete GPU support.  As you note, that will dry up funds to support discrete GPU development.


     



     


    NVidia has been cutting out GPU partners and doing whatever they can to aid margins due to this problem. It could be that high-end systems use two GPUs like we see in MacBook Pros today, but I don't know that this would be sufficient to keep them going. Desktop graphics were initially driven by gaming; integrated graphics are driven by mobile technologies. It's nothing new. The advancements often come from areas where the potential volume ensures that investments in development can be recouped. I could always use the "minicomputers could never displace mainframes" example.

    I think the higher-end GPUs will stick around longer. Even if development stopped today, some workstation GPUs could be sold for several years due to the sheer gap in hardware performance and drivers compared to Intel's offerings. I expect the transition in notebooks will happen sooner. I expect some level of consolidation as computing hardware shakes out, but I'm not entirely sure what will and won't be left yet. Some level of hybridization seems likely in the notebook area. People on here have claimed they're different things; they claimed the same thing about phones and PDAs prior to Handspring and RIM. If you recall, Palm bought Handspring after the Treo.

    I can't see myself going without a large display. Even with slow development in that area today, the designs are still sold. They'll probably just become further commoditized with televisions, binning parts based on performance.


     


    Quote:


    I share your reluctance to recommend Intel-only GPU solutions.  Intel's integrated solutions are nowhere near as good as AMD's and have significant issues that drivers can't correct.  Speaking of drivers, Intel is a mixed bag here; some issues just hang around too long.  By the way, Intel's OpenCL support isn't much better than its 3D support.  In any event, I'm really hoping that Intel can iron out the hardware glitches with Haswell and also squash the driver issues.  If Intel can deliver decent 3D and OpenCL support, then the reservations about recommending them should melt away.



    It's one of those things where I think a portion of users will be limited by driver and implementation issues rather than by the raw potential of the chip hardware. Graphic design is one of those areas that comes up occasionally. Outside of motion graphics, I think the mini works there and opens up a range of display options, if said designer isn't simply looking to dock their notebook to something. The only thing that makes me a little hesitant at times is the potential for bugs, which is why I'll typically suggest buying from a place with a good return policy, just in case. Things like 2D raster or vector graphics and video content aren't a problem for modern GPU hardware, even at the low end. If you're inhibited by something, it's probably more of a bug issue.

  • Reply 334 of 1528
    wizard69 Posts: 13,377 member


    Well, it will be interesting to see what happens to NVidia over the next couple of years.  One thing is for sure: I'm not buying into that company as an investment.  The Mac Pro in a way reflects the high-end GPU market; there just isn't a lot of demand to drive investment, as you have indicated.  NVidia went after the high-performance computing sector, but even there I don't see enough demand to justify development; eventually the price ends up so high that it becomes cheaper to just use more standard processors.


     


    As to selling into the workstation market, that is certainly a possibility, but even there the impact of integrated GPUs should not be underestimated.  Either Haswell or its follow-on will have enough performance to support more modest workstation needs.  Here I'm talking about the fabled midrange machines that sit below Mac Pro class machines to provide better-than-average video performance for workstation-class apps.


    Quote:

    Originally Posted by hmm View Post


     


    NVidia has been cutting out GPU partners and doing whatever they can to aid margins due to this problem. It could be that high-end systems use two GPUs like we see in MacBook Pros today, but I don't know that this would be sufficient to keep them going. Desktop graphics were initially driven by gaming; integrated graphics are driven by mobile technologies. It's nothing new. The advancements often come from areas where the potential volume ensures that investments in development can be recouped. I could always use the "minicomputers could never displace mainframes" example.

    I think the higher-end GPUs will stick around longer. Even if development stopped today, some workstation GPUs could be sold for several years due to the sheer gap in hardware performance and drivers compared to Intel's offerings. I expect the transition in notebooks will happen sooner. I expect some level of consolidation as computing hardware shakes out, but I'm not entirely sure what will and won't be left yet. Some level of hybridization seems likely in the notebook area. People on here have claimed they're different things; they claimed the same thing about phones and PDAs prior to Handspring and RIM. If you recall, Palm bought Handspring after the Treo.

    I can't see myself going without a large display. Even with slow development in that area today, the designs are still sold. They'll probably just become further commoditized with televisions, binning parts based on performance.


     


    It's one of those things where I think a portion of users will be limited by driver and implementation issues rather than by the raw potential of the chip hardware. Graphic design is one of those areas that comes up occasionally. Outside of motion graphics, I think the mini works there and opens up a range of display options, if said designer isn't simply looking to dock their notebook to something. The only thing that makes me a little hesitant at times is the potential for bugs, which is why I'll typically suggest buying from a place with a good return policy, just in case. Things like 2D raster or vector graphics and video content aren't a problem for modern GPU hardware, even at the low end. If you're inhibited by something, it's probably more of a bug issue.



    Drivers can be a real kicker.  Driver issues are more or less universal: it does not matter whether your needs are high end or low end; if you get hung up on a driver issue, your opinion of the hardware is bound to be negative.  With the Mini, though, I have to disagree; the raw performance of the Intel integrated solution is still a problem for many users.  While I agree that 2D performance isn't that bad, once you leave that area the hardware doesn't look that good.

  • Reply 335 of 1528
    hmm Posts: 3,405 member

    Quote:

    Originally Posted by wizard69 View Post


    Well, it will be interesting to see what happens to NVidia over the next couple of years.  One thing is for sure: I'm not buying into that company as an investment.  The Mac Pro in a way reflects the high-end GPU market; there just isn't a lot of demand to drive investment, as you have indicated.  NVidia went after the high-performance computing sector, but even there I don't see enough demand to justify development; eventually the price ends up so high that it becomes cheaper to just use more standard processors.



    Workstations have shown minimal growth, and the market is nowhere near what it was a few years ago. Overall sales in that segment dropped in 2011, but that was partly due to the lack of new hardware. I wouldn't say NVidia focuses solely on the high-end market; they just derive a lot of their profits from it. They still need the cheaper markets for volume. Without cheap GPUs they wouldn't be able to maintain the pricing of Teslas; a big selling point was performance per dollar relative to x86 solutions. Without some way to amortize chip development through higher-volume products, they will be in trouble. I suspect their board has discussed this, though.


    Quote:


    As to selling into the workstation market, that is certainly a possibility, but even there the impact of integrated GPUs should not be underestimated.  Either Haswell or its follow-on will have enough performance to support more modest workstation needs.



    At some point I expect integrated graphics to show up in Xeon EN packages. That will be the start of it. The Mac Pro actually uses EP, but EN is closer to the desktop CPU designs.


     


    Quote:


    Drivers can be a real kicker.  Driver issues are more or less universal: it does not matter whether your needs are high end or low end; if you get hung up on a driver issue, your opinion of the hardware is bound to be negative.  With the Mini, though, I have to disagree; the raw performance of the Intel integrated solution is still a problem for many users.  While I agree that 2D performance isn't that bad, once you leave that area the hardware doesn't look that good.


     




     


    That (drivers) is why I'm not always 100% confident in what to suggest. Apple's price-to-performance ratio on graphics hardware isn't very good. They tend to enforce a high minimum purchase on anything I would consider credible.

  • Reply 336 of 1528
    nht Posts: 4,522 member
    NVidia is betting on Tegra for their future.
  • Reply 337 of 1528
    wizard69 Posts: 13,377 member


    So far that isn't working out too well for them.  


     


    I don't think Tegra is a bad idea; rather, it isn't focused enough to attract a lot of design wins.  That, and NVidia needs to bone up on low-power GPUs.


     


     


    Quote:

    Originally Posted by nht View Post



    NVidia is betting on Tegra for their future.

  • Reply 338 of 1528
    wizard69 Posts: 13,377 member

    Quote:

    Originally Posted by hmm View Post


    Workstations have shown minimal growth, and the market is nowhere near what it was a few years ago. Overall sales in that segment dropped in 2011, but that was partly due to the lack of new hardware. I wouldn't say NVidia focuses solely on the high-end market; they just derive a lot of their profits from it. They still need the cheaper markets for volume. Without cheap GPUs they wouldn't be able to maintain the pricing of Teslas; a big selling point was performance per dollar relative to x86 solutions. Without some way to amortize chip development through higher-volume products, they will be in trouble. I suspect their board has discussed this, though.


    Right now it doesn't look pretty. Tegra is a wise move in concept, but I don't think they are focused on it correctly, as the Tegra solutions released so far are hacks. They really need to deliver a well-integrated 64-bit ARM solution or they will get steamrolled in this market.


    Quote:

    At some point I expect integrated graphics to show up in Xeon EN packages. That will be the start of it. The Mac Pro actually uses EP, but EN is closer to the desktop CPU designs.


     


     


    I'm not too sure; it is a matter of what is important in the markets they sell those chips into.

     


     


    Quote:

    That (drivers) is why I'm not always 100% confident in what to suggest. Apple's price-to-performance ratio on graphics hardware isn't very good. They tend to enforce a high minimum purchase on anything I would consider credible.



     


    Yeah, it kinda sucks, but Intel is getting there. I just don't know when it will reach good enough.

  • Reply 339 of 1528
    winter Posts: 1,238 member
    I'm jumping ahead here, but this topic is all over the place, so who cares. By the time Broadwell hits, will we see DDR4 16 GB SO-DIMMs, so we can have two modules for a total of 32 GB of memory in a mini?
  • Reply 340 of 1528


    Quote:

    Originally Posted by Winter View Post

    By the time Broadwell hits, will we see DDR4 16 GB SO-DIMMs…


     


    Only if Broadwell supports DDR4 across the board, which I don't know that it does. Haswell uses it in just the server parts.






    …so we can have two modules for a total of 32 GB of memory in a mini?



     


    A better question would be whether there is anything actually preventing this now, other than chipset limitations.
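    For what it's worth, seeing what a given machine currently recognizes is easy; the chipset's ceiling is the open question. A minimal sketch in C reading the hw.memsize sysctl, which reports installed physical RAM in bytes on OS X:

    #include <stdio.h>
    #include <stdint.h>
    #include <sys/types.h>
    #include <sys/sysctl.h>

    int main(void) {
        uint64_t mem = 0;
        size_t len = sizeof mem;
        /* hw.memsize: installed physical memory in bytes (OS X only). */
        if (sysctlbyname("hw.memsize", &mem, &len, NULL, 0) == 0)
            printf("Installed RAM: %.0f GB\n",
                   mem / (1024.0 * 1024.0 * 1024.0));
        return 0;
    }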
