2014 Mac mini Wishlist


Comments

  • Reply 121 of 1528


    Originally Posted by Winter View Post

    I feel the base mini should have a few more BTO options. I know Apple wants people to buy the higher priced models but that sort of defeats the purpose of a lower model.


     


    I get what they're doing by not offering the Fusion Drive on either low-end model, but as a consumer I have to say, "Come on."

  • Reply 122 of 1528
    winterwinter Posts: 1,238member
    And by not doing so, they are selling themselves short, because people will either not buy a higher-end model or they will go third-party. If you at least offer the option, it creates more of a money-making opportunity.
  • Reply 123 of 1528
    wizard69wizard69 Posts: 13,377member
    aandcmedia wrote: »
    Apple has this weird fetish with making things as thin and compact as they possibly can. 
    I really don't see it as a fetish. Small compact devices can be extremely handy. On the flip side small devices waste far fewer resources.
    They eventually learned their lesson with the iPod series, i.e. people were buying Shuffles to work out with, and when they made it super tiny it was no longer convenient. The Nanos became a bit larger than the Shuffle, with just a touch screen, and they learned people wanted something smaller than an iPod touch but not super tiny.
    True when it comes to human interaction there is a thing such as too small. However it might have been better for Apple to focus on thin instead of small.
    When will they learn that desktops don't need to be ungodly thin? Probably just a matter of time.
    I don't see thin as the issue so much as the lack of choices. At this point one Mini, except for Apple's artificial constraints, doesn't really offer much over the other models.
    Removing the CD/DVD drive from the iMac is fine, but the Mac mini... leave one of them with it. The Mini being about 1/2 in taller isn't going to bother anyone; add in the CD/DVD drive. Having it as a media PC would be nice, or for when I burn things for my parents or a friend. 

    It might not bother anybody, but if it were 1/2" taller I'd consider putting in an optical drive a serious waste of space. It would be far better to use that space for a beefed-up power supply and a decent GPU implementation.

    The GPU situation in the Mini has always bothered me. Last year's attempt was such a joke that I don't even want to talk about it. Many of us were thinking that if we waited until the next release, Apple would address the issues with the GPU implementation so that it would become a worthwhile investment. Instead the whole idea of a discrete GPU was borked in favor of integrated Intel crap.

    I'm not really sure what Apple's excuse is here. They clearly don't understand the needs of some of the common software apps out there, nor the idea that the Mini is often used as a media/gaming PC. More CPU power doesn't do much for many apps out there, nor does it do a lot for the general user experience. GPU power, though, has never been adequate in Apple's hardware. That may be OK in the base Mini, where it is almost a requirement, but when you get close to $1,000 for a Mini you kinda expect a little better.
  • Reply 124 of 1528
    winterwinter Posts: 1,238member
    wizard69 wrote: »
    That may be OK in the base Mini, where it is almost a requirement, but when you get close to $1,000 for a Mini you kinda expect a little better.

    For $1,000 you should absolutely expect a discrete GPU and a good one at that with enough memory, I agree.

    Edit: Kepler has a 2 GB maximum. 1 GB on the 640M would have been nice even on the iMac.
  • Reply 125 of 1528
    hmmhmm Posts: 3,405member

    Quote:

    Originally Posted by Winter View Post





    Edit: Kepler has a 2 GB maximum. 1 GB on the 640M would have been nice even on the iMac.




    I know I've seen higher than that on some of the desktop cards. It's interesting how the need for memory efficiency never completely went away. Now that we have desktop computers with incredible amounts of RAM, the problem has moved to GPUs rather than becoming a non-issue. Pixar has done some work in that area if you look up OpenSubdiv. I was looking for a SIGGRAPH link, but I can't find one at the moment.


     


    Edit: I found a link. This kind of thing could end up in places like game engines, where subdivision-surface-based geometry has, afaik, historically been avoided.

  • Reply 126 of 1528
    winterwinter Posts: 1,238member
    hmm wrote: »

    I know I've seen higher than that on some of the desktop cards. It's interesting how the need for memory efficiency never completely went away. Now that we have desktop computers with incredible amounts of RAM, the problem has moved to GPUs rather than becoming a non-issue. Pixar has done some work in that area if you look up OpenSubdiv. I was looking for a SIGGRAPH link, but I can't find one at the moment.

    Absolutely. I meant for mobile cards and should have posted links.

    http://www.notebookcheck.net/NVIDIA-GeForce-GT-640M.71579.0.html
    http://www.notebookcheck.net/NVIDIA-GeForce-GT-650M.71887.0.html
    http://www.notebookcheck.net/NVIDIA-GeForce-GTX-660M.71859.0.html
    http://www.notebookcheck.net/NVIDIA-GeForce-GTX-675MX.82580.0.html
    http://www.notebookcheck.net/NVIDIA-GeForce-GTX-680MX.83519.0.html

    Max memory is 2 GB for all of those cards.
  • Reply 127 of 1528
    hmmhmm Posts: 3,405member




    Oh yeah, for notebooks, although even then the mobile workstation cards can go higher. It typically locks you into at least $3k for a notebook, but you can get 4 GB with a Quadro K4000M or K5000M; 2 GB is the cap for the gaming cards. To be fair, those computers are for people who need to be extremely mobile with heavier work. They are somewhat of an edge case.

  • Reply 128 of 1528
    winterwinter Posts: 1,238member
    Yeah I looked at one of those notebooks. Not exactly something I would personally purchase.
  • Reply 129 of 1528
    hmmhmm Posts: 3,405member

    Quote:

    Originally Posted by Winter View Post



    Yeah I looked at one of those notebooks. Not exactly something I would personally purchase.




    I wouldn't either. There's a point where most people are better served by a desktop or workstation, as your money goes further there. Mobile workstation variants aren't as pleasant to transport either. They aren't so bad on their own, but if you're already encumbered with luggage or gear of some kind, it becomes annoying. It's also a factor when you're trying to stay under a given weight limit for carry-on baggage.

  • Reply 130 of 1528
    wizard69wizard69 Posts: 13,377member
    I was over at 9to5mac just a short while ago and they had a blurb on refurbished Minis indicating that Apple has taken another $50 off one model. I can't say if this is true or not because most of the Minis are gone and I don't track them that closely. However, if true, maybe the current Mini has a shorter expected lifespan than we suspect.

    Maybe that is wishful thinking, but it wouldn't be the first time refurb prices have been adjusted before a model upgrade. Then again, this is the 2013 wish list, so I think the best thing to wish for is a respectable Mini upgrade as soon as possible.
  • Reply 131 of 1528
    wizard69wizard69 Posts: 13,377member
    winter wrote: »
    For $1,000 you should absolutely expect a discrete GPU and a good one at that with enough memory, I agree.
    This is one thing I just don't understand about Apple; it is almost as if they go out of their way to produce machines that confirm the idea that the desktop is dying. The 2011 machine, with the discrete GPU, simply wasn't worth the expense Apple was charging for it: not enough VRAM and not enough of a performance delta. I have to wonder if there was a sales issue with the GPU model.
    Edit: Kepler has a 2 GB maximum. 1 GB on the 640M would have been nice even on the iMac.

    Frankly I don't expect 2 GB of VRAM in the Mini, but I do expect enough that it can deliver respectable performance and run a reasonable cross-section of the software out there. In other words, a Mini that can provide that missing mid-range performance.
  • Reply 132 of 1528
    winterwinter Posts: 1,238member
    Yeah it would never happen in the Mini, though 512 MB would be nice and 1 GB for all iMac models (with the 2 GB upgrade).

    Sales could be an issue and maybe heat is as well. Also maybe Intel is making promises to Apple and paying them a huge sum to only use their integrated graphics.

    Anything is possible.
  • Reply 133 of 1528


    Along with a Mini with Haswell, quad-core as a baseline config, a 32 GB RAM option, etc., I'd like to see a cheaper Apple display. Maybe a 23"; $999 is a bit much, and 27", IMO, is overdoing it. I'd like to see a $500 21-23" and a $699-799 27". 

  • Reply 134 of 1528
    wizard69wizard69 Posts: 13,377member
    winter wrote: »
    Yeah it would never happen in the Mini, though 512 MB would be nice and 1 GB for all iMac models (with the 2 GB upgrade).
    That is what I'm thinking for the Mini with today's technology. However, Apple still needs an entry-level iMac. This reality is the other reason I don't understand Apple's attitude towards the Mini: they do maintain what amounts to an entry-level iMac, so by that measure the Mini can't really be seen as their entry-level machine.
    Sales could be an issue and maybe heat is as well. Also maybe Intel is making promises to Apple and paying them a huge sum to only use their integrated graphics.
    Considering Apple's history with GPUs, Intel wouldn't have to pay them much.
    Anything is possible.

    For some reason I have it in my head that both the Mini and the Mac Pro will get overhauled this year. Maybe that is wishful thinking, but I see wishful thinking fitting into this thread. From the engineering standpoint, I realize that in a couple of years or so the need for a discrete GPU will be greatly diminished. Today with Ivy Bridge it is more of a toss-up and depends upon your needs as a user, so I still see the need for a Mini with a decent GPU.

    As to that overhaul itself, they really should address what appears to be a power limitation in the Mini. It just seems, and probably is the case, that this machine is power-limited; thus we get these up-and-down configuration cycles. 20 to 25 watts more capability would do wonders. Some of that extra power should be budgeted for extra TB ports, some for either a GPU or more RAM slots. I also want to see dedicated and standardized internal PCI Express slot(s) for SSD storage cards. We really need to get away from SATA and the old mechanical form factors.
  • Reply 135 of 1528
    wizard69wizard69 Posts: 13,377member
    Along with a Mini, with Haswell, quad-core as a base-line config, 32 GB RAM option, etc.--I'd like to see a cheaper Apple display.
    You and me both. Apple's current pricing scheme might be viable for a pro screen, but my god, really decent monitors can be had for $250 these days. Most Mini users can get by fine with one of those monitors.
    Maybe a 23", I would say $999 is a bit much. And 27", IMO, is over doing it. I'd like to see a $500 21-23" and a $699-799 27". 
    Even those numbers are high. They need a solution that is well under $350, mind you just a very basic monitor. Part of the problem with Apple's one solution is that the monitor isn't just a monitor; it is also a hub for USB and other ports.
  • Reply 136 of 1528
    MarvinMarvin Posts: 14,806moderator
    hmm wrote:
    Now that we have desktop computers with incredible amounts of ram, the problem has moved more to gpus rather than becoming a non issue. Pixar has done some work in that area if you look up Opensubdiv. I was looking for a Siggraph link, but I can't find one at the moment.

    Edit: I found a link. This kind of thing could end up in places like game engines where subdivision surface based geometry has afaik historically been avoided.

    Subdivision is being used in games already thanks to companies who actually bother to implement tessellation (not Apple):

    http://blogs.amd.com/play/2012/11/20/hitman-absolution-in-depth/
    http://blogs.amd.com/play/files/2012/11/TessDisabled.png
    http://blogs.amd.com/play/files/2012/11/TessEnabled.png

    (they don't seem to have turned it on for the clothes, just the head - you can see the difference in the ears)

    I'm surprised that Pixar only has 4 GPU engineers (out of 1200 staff) and as usual the genius is the one who can't speak English well.

    It seems Pixar wants to push Catmull-Clark subdivision so that surfaces don't get subdivided in different ways and take on a different shape from one platform to another.

    They explained that hardware tessellation doesn't store the points in memory, it generates data on the fly based on the viewport and then flushes it once it's rendered. This can save loads of memory as they can turn very simple objects into extremely complex ones with a few points and a procedural noise function or image map. An entire detailed street could be made with under 100 points and instancing.
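    To put rough numbers on that memory saving, here is a hypothetical back-of-the-envelope sketch; the growth factor and per-vertex size are my own assumptions, not figures from Pixar's talk:

    ```python
    # Back-of-the-envelope estimate of why on-the-fly tessellation saves memory.
    # Assumptions (illustrative only): a closed quad mesh roughly quadruples its
    # face count per Catmull-Clark subdivision level, vertex count tracks face
    # count, and each vertex stores position + normal + UV as 8 floats.

    BYTES_PER_VERTEX = 8 * 4  # x,y,z + nx,ny,nz + u,v as 32-bit floats

    def subdivided_vertex_count(base_quads, levels):
        """Approximate vertex count of a quad mesh after `levels` of subdivision."""
        return base_quads * 4 ** levels

    def stored_mesh_bytes(base_quads, levels):
        """Memory needed to keep the fully subdivided mesh resident in VRAM."""
        return subdivided_vertex_count(base_quads, levels) * BYTES_PER_VERTEX

    # A "street" described by 100 control points vs. the same street stored
    # fully pre-subdivided to 5 levels:
    base = stored_mesh_bytes(100, 0)  # control cage only: 3,200 bytes
    full = stored_mesh_bytes(100, 5)  # pre-subdivided: 3,276,800 bytes
    # full / base == 4**5 == 1024, which is the factor the hardware
    # tessellator avoids paying by generating and flushing points per frame.
    ```

    The exact constants don't matter; the point is the geometric growth per subdivision level, which is what makes generate-and-flush so much cheaper than storing the result.
    
    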

    The GPU and CPU should use the same memory eventually:

    http://www.xbitlabs.com/news/cpu/display/20120202102405_AMD_Promises_Full_Fusion_of_CPU_and_GPU_in_2014.html

    so the amount of memory will be a non-issue. Haswell appears to manage to do this:

    http://www.anandtech.com/show/6277/haswell-up-to-128mb-onpackage-cache-ulv-gpu-performance-estimates

    This can work in a similar way to Apple's Fusion drives. It just swaps the immediately needed data into the very fast 128MB cache and everything else can sit in the 4GB+ main memory. You shouldn't need more than 128MB of textures in a given scene. They could actually bundle all their GPUs with a small amount of dedicated memory as long as they get the bandwidth high enough between it and main memory.
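    As a toy illustration of that Fusion-Drive-like swapping, here is a hypothetical LRU sketch; `TieredCache` and all the sizes are made up for illustration and have nothing to do with Intel's actual cache policy:

    ```python
    from collections import OrderedDict

    # Toy model of a small, fast cache (e.g. ~128 MB on-package) in front of
    # larger, slower main memory: hot texture data stays in the fast tier,
    # cold data is evicted in least-recently-used order.

    class TieredCache:
        def __init__(self, capacity_mb=128):
            self.capacity = capacity_mb
            self.used = 0
            self.cache = OrderedDict()  # texture name -> size in MB, LRU order

        def access(self, name, size_mb):
            """Touch a texture; returns True on a cache hit, False on a miss."""
            if name in self.cache:
                self.cache.move_to_end(name)  # mark as most recently used
                return True
            # Miss: evict least-recently-used entries until the new one fits.
            while self.used + size_mb > self.capacity and self.cache:
                _, evicted_size = self.cache.popitem(last=False)
                self.used -= evicted_size
            self.cache[name] = size_mb
            self.used += size_mb
            return False

    cache = TieredCache(capacity_mb=128)
    cache.access("street_albedo", 64)  # miss: pulled from main memory
    cache.access("hero_face", 64)      # miss
    cache.access("street_albedo", 64)  # hit: served from the fast tier
    cache.access("sky_dome", 64)       # miss: evicts hero_face (LRU)
    ```

    The same shape of policy is what makes the Fusion Drive comparison apt: as long as the working set of a scene fits the fast tier, the slow tier's latency mostly disappears.
    
    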

    Given that Apple apparently requested this feature and expressed discontent with Intel's IGP performance, I think Haswell will be a big improvement:

    http://news.cnet.com/8301-13924_3-20125598-64/steve-jobs-knocked-intels-chip-design-inflexibility/

    "There were two reasons we didn't go with them. One was that Intel are just really slow. They're like a steamship, not very flexible. We're used to going pretty fast. Second is that we just didn't want to teach them everything, which they could go and sell to our competitors," Jobs is quoted as saying.

    "At the high-performance level, Intel is the best," Jobs is quoted in the book. "They build the fastest, if you don't care about power and cost."

    Jobs didn't stop there. "We tried to help Intel, but they don't listen much," he said.

    And Jobs also voiced a gripe that many PC game enthusiasts have been leveling at Intel for many years. "We've been telling them for years that their graphics [silicon] suck."

    Optimistically, I'd like to see the 2013 Mini arrive at WWDC in June with quad-core Haswell at $799, a 10-15% CPU boost, and an 80-100% GPU boost over the HD3000, with OpenGL 4 support in the bundled OS. If the memory design works better with soldered RAM, that won't be too good, because they might cap it at 8GB like the 13" rMBP or charge double what 3rd-party RAM providers do for 16GB. At least they offer 16GB on the current Mini, but it's over 3x the price from a 3rd party, which is insane considering how easy it is to install on the Mini.
  • Reply 137 of 1528
    winterwinter Posts: 1,238member
    Haswell is expected to have an increase of 80-100% with their IGP over Sandy Bridge IGP? I am sold!
  • Reply 138 of 1528
    wizard69wizard69 Posts: 13,377member
    Marvin wrote: »
    Subdivision is being used in games already thanks to companies who actually bother to implement tessellation (not Apple):
    Apple is just plain slow when it comes to improving anything to do with GPUs, drivers and such. I hold out hope that this is due to excellent quality control and an attempt to keep APIs stable. The cynic in me just thinks they are slow due to a lack of interest on Apple's part.
    (they don't seem to have turned it on for the clothes, just the head - you can see the difference in the ears)
    I'm surprised that Pixar only has 4 GPU engineers (out of 1200 staff) and as usual the genius is the one who can't speak English well.
    It seems Pixar want to push the Catmull-Clark subdivision so that surfaces don't get subdivided in different ways and take on a different shape from one platform to another.
    They explained that hardware tessellation doesn't store the points in memory, it generates data on the fly based on the viewport and then flushes it once it's rendered. This can save loads of memory as they can turn very simple objects into extremely complex ones with a few points and a procedural noise function or image map. An entire detailed street could be made with under 100 points and instancing.
    The GPU and CPU should use the same memory eventually:
    http://www.xbitlabs.com/news/cpu/display/20120202102405_AMD_Promises_Full_Fusion_of_CPU_and_GPU_in_2014.html
    This is another example of where I see AMD as better aligned with Apple than Intel. Obviously they need to deliver on this objective, but they seem to have met their midterm goals. This is good considering the state they are in financially. Intel still acts as if they are afraid of the GPU, which is strange because they do have their own implementation. That implementation may suck today, but it will look like true crap if AMD delivers complete heterogeneous computing chips. Plus AMD is on the OpenCL bandwagon and isn't looking back.
    so the amount of memory will be a non-issue. Haswell appears to manage to do this:
    http://www.anandtech.com/show/6277/haswell-up-to-128mb-onpackage-cache-ulv-gpu-performance-estimates
    This should make Haswell worth waiting for. The real question, though, is this: would Apple actually implement such a chip in the Mini? They should, but they also have a history of screwing the machine up more than incrementally improving it. On the other hand, if they can offer such a chip in a power profile suitable for the Airs, look out. Such a chip would be a drastic improvement for that machine.
    This can work in a similar way to Apple's Fusion drives. It just swaps the immediately needed data into the very fast 128MB cache and everything else can sit in the 4GB+ main memory. You shouldn't need more than 128MB of textures in a given scene. They could actually bundle all their GPUs with a small amount of dedicated memory as long as they get the bandwidth high enough between it and main memory.
    This highlights a problem that both AMD and Intel now have: integrated GPUs are slowed by the processor memory interface. Performance actually scales with the increase in memory subsystem performance, hence Intel's in-package RAM rumored above. I can see both AMD and Intel using various techniques in the future to deal with this bandwidth issue.
    Given that Apple apparently requested this feature and expressed discontent with Intel's IGP performance, I think Haswell will be a big improvement:
    Not to be a pain here, but who hasn't expressed discontent with Intel's GPU hardware? AMD's hardware from last year can still outperform the Ivy Bridge GPU, and that AMD GPU supports OpenCL really well. I have no idea if AMD can keep it up in their current state, but for many users the APU represents a better, more well-rounded processor.
    http://news.cnet.com/8301-13924_3-20125598-64/steve-jobs-knocked-intels-chip-design-inflexibility/
    "There were two reasons we didn't go with them. One was that Intel are just really slow. They're like a steamship, not very flexible. We're used to going pretty fast. Second is that we just didn't want to teach them everything, which they could go and sell to our competitors," Jobs is quoted as saying.
    "At the high-performance level, Intel is the best," Jobs is quoted in the book. "They build the fastest, if you don't care about power and cost."
    Jobs didn't stop there. "We tried to help Intel, but they don't listen much," he said.
    And Jobs also voiced a gripe that many PC game enthusiasts have been leveling at Intel for many years. "We've been telling them for years that their graphics [silicon] suck."
    Well at least his appraisal is right on the money.
    Optimistically, I'd like to see the 2013 Mini arrive at WWDC in June with quad-core Haswell at $799 with a 10-15% CPU boost and 80-100% GPU boost over the HD3000 with OpenGL 4 support in the bundled OS.
    A CPU boost doesn't mean as much to me right now as getting decent GPU performance in a Mini-class machine. It really looks like Haswell will do that for us. Frankly I'm expecting better than a 100% improvement over the HD3000 in the top-of-the-line Haswell chips. If the GT3 version actually comes with that in-package high-speed VRAM, then such a leap is possible. The gotcha is Apple's willingness to implement such a chip in the Mini.
    If the memory design works better with soldered RAM, that won't be too good because they might cap it at 8GB like the 13" rMBP or charge double 3rd party RAM providers for 16GB. At least they offer 16GB on the current Mini but it's over 3x the price from a 3rd party, which is insane considering how easy it is to install on the Mini.
    The in-package RAM buffer is one way around the congestion to main memory. The other, obviously, is faster main memory. Unfortunately, if faster main memory is required, at some point the only supported standards will be soldered-in RAM. This isn't Apple being evil, but rather engineers having to deal with the physical realities of logic board layouts and timing integrity. So yeah, let's hope they build systems with large memory arrays. All is not completely lost here, though; it may be possible for Apple to build systems with conventional expansion memory sitting on a different bus/channel. But then you are back to the processor actually supporting two independent memory interfaces.

    Technology is moving really fast right now with respect to RAM, solid-state secondary storage, and even GPU integration. It will be interesting to see what the 2013 Mini picks up.
  • Reply 139 of 1528
    wizard69wizard69 Posts: 13,377member
    winter wrote: »
    Haswell is expected to have an increase of 80-100% with their IGP over Sandy Bridge IGP? I am sold!

    Well, don't count your chickens before they are hatched. However, the rumors and preliminary reports are very positive. The extra execution units and some modest tweaks should work out well in the Haswell chips that get the best GPU variant. The thing here is that Intel apparently has three variants of GPUs planned for Haswell, which means Apple could implement a crappy chip in the Mini. Just because Intel delivers doesn't mean Apple will.

    So a 100% improvement or even more is possible. By the way, that is with respect to Sandy Bridge; the improvement might not be as great with respect to Ivy Bridge.

    Hey maybe Intel will have their drivers fixed by then.
  • Reply 140 of 1528
    winterwinter Posts: 1,238member
    Yeah I want to make sure everything works first. I bought my 2011 Mini in October even though it was released in July. I don't want to be a guinea pig.