Mac Studio with M1 Ultra review: A look at the future power of Apple Silicon


Comments

  • Reply 21 of 51
    corp1 Posts: 92 member
    Speaking of crazy machines that Apple will (probably) never build, given that the Apple Studio Display already has an A13 CPU and 64GB of storage, I imagine they could upgrade the CPU/flash a bit and add a multitouch sensor to turn it into the giant desktop iPad of my dreams.

    edited March 2022
  • Reply 22 of 51
    dewme Posts: 5,335 member
    I was so excited to get a Mac Studio until I saw the GPU benchmarks. Very disappointed, and the wait continues for a Mac with great 3D performance. They should really clarify that their performance graphs are for video editors only - this is nowhere near the performance of a 3090 for 3D, and I didn't think it was, but even if it was 70% of the performance of a 3080 I would have got one. Will there ever be a Mac with comparable GPU performance? Probably not; it seems their focus is solely on the video side of things.
    While I think there may be more GPU performance in the M1 Ultra than what we’ve seen in current benchmarks there is a point where you have to recognize that physics still do apply. The Mac Studio with M1 Ultra is still a very small form factor machine that you can run very comfortably right on your desktop next to or underneath your monitors. I’m not a gaming PC person but I can only imagine it would be somewhat difficult to find an RTX 3090 equipped machine with the same form factor, power supply requirements, operating thermals, and audible characteristics (fan noise) as Apple’s Studio systems without losing something on the performance side. 

    I’m very interested to hear about directly comparable machines (as defined above) that demonstrate that Apple has somehow missed the mark on the graphics performance of the M1 Ultra. I don’t discount the possibility of their existence. I’m also expecting that Apple will deliver a new Mac Pro that breaks through some of the constraints imposed by the Studio’s very small form factor and user friendly operating characteristics, i.e., no earplugs required to operate the system 12 inches from your keyboard. 

    At the same time, I do agree that Apple has some 'splaining to do regarding their M1 Ultra launch presentation material that shows their wonderchip running neck and neck with competitive graphics platforms. This includes ones like the RTX 3090 that require small fission reactors and cooling towers to supply them with sufficient power to max out those critical benchmarks and 3D applications that drive some people's purchase decisions. This is where even diehard Apple supporters are feeling like we're not getting the whole story from Apple. What exactly did they mean by those graphs, and what assumptions were they making? The review numbers don't add up and we need to know why.
  • Reply 23 of 51
    michelb76 Posts: 612 member
    It's great for Apple shareholders.
  • Reply 24 of 51
    flydog said: An Ultra is a complete waste of money, as is adding cores. Xcode does not use the extra cores, nor does anything made by Adobe, Blender, or even Final Cut Pro. None of those apps are significantly faster on an Ultra vs Max (or even the most recent 27" iMac). Games may see a large improvement, but even the fastest Ultra is not a gaming PC replacement (nor is it intended to be).
    The current version of Adobe After Effects can make use of all available cores with Multi-Frame Rendering. 

    https://helpx.adobe.com/after-effects/using/multi-frame-rendering.html
  • Reply 25 of 51
    tht Posts: 5,421 member
    Mac Studio vs Mac mini: The aluminum enclosure has the same styling, with the footprint of a rounded square, a flat top with distinct edges, and a black Apple logo on the top. The Mac Studio is more than twice the height of the Mac mini at 3.7 inches versus 1.4 inches, but the two models both have the same width and depth of 7.7 inches.
    Mike, is the radius at the corners identical between the Studio and the mini?

    The OWC miniStack would stack nicely and neatly on top of either. Nice for Time Machine backup and external HDD or SSD storage. If there are enough minis and Studios out in the wild, a nice ecosystem of stackable Mac mini-footprint modules could be made. So, I really hope the M2 Mac mini has the same footprint. It could be thinner (less tall), but hopefully it will keep that footprint.

  • Reply 26 of 51
    Mike Wuerthele Posts: 6,858 administrator
    tht said:
    Mac Studio vs Mac mini: The aluminum enclosure has the same styling, with the footprint of a rounded square, a flat top with distinct edges, and a black Apple logo on the top. The Mac Studio is more than twice the height of the Mac mini at 3.7 inches versus 1.4 inches, but the two models both have the same width and depth of 7.7 inches.
    Mike, is the radius at the corners identical between the Studio and the mini?

    The OWC miniStack would stack nicely and neatly on top of either. Nice for Time Machine backup and external HDD or SSD storage. If there are enough minis and Studios out in the wild, a nice ecosystem of stackable Mac mini-footprint modules could be made. So, I really hope the M2 Mac mini has the same footprint. It could be thinner (less tall), but hopefully it will keep that footprint.

    Looks like.
  • Reply 27 of 51
    neilm Posts: 985 member
    corp1 said:
    Nobody said anything about snap-on modules. There is no sane definition of modular, as it applies to computing, that this machine meets.

    It's a very nice machine. It's just not what you'd call modular.
    I expect by "modular" they simply mean that it isn't an all-in-one machine like the iMac. 

    Which is to say... it doesn't include an integrated display.

    But I agree that it's no Mac Pro with internal modules which can be added, removed, swapped, and upgraded.

    If Apple managed to cram an M1 Mac into a Magic Keyboard/trackpad combo to make a retro sort of computer - a laptop without a display - I wonder if they'd call that "modular" as well, since it wouldn't include an integrated display?
    It's modular in the sense of being a module — in fact the central module — of the modular system of a Mac, multiple monitors, external high speed storage, and so on.
  • Reply 28 of 51
    dewme said:
    I was so excited to get a Mac Studio until I saw the GPU benchmarks. Very disappointed, and the wait continues for a Mac with great 3D performance. They should really clarify that their performance graphs are for video editors only - this is nowhere near the performance of a 3090 for 3D, and I didn't think it was, but even if it was 70% of the performance of a 3080 I would have got one. Will there ever be a Mac with comparable GPU performance? Probably not; it seems their focus is solely on the video side of things.
    While I think there may be more GPU performance in the M1 Ultra than what we’ve seen in current benchmarks there is a point where you have to recognize that physics still do apply. The Mac Studio with M1 Ultra is still a very small form factor machine that you can run very comfortably right on your desktop next to or underneath your monitors. I’m not a gaming PC person but I can only imagine it would be somewhat difficult to find an RTX 3090 equipped machine with the same form factor, power supply requirements, operating thermals, and audible characteristics (fan noise) as Apple’s Studio systems without losing something on the performance side. 

    I’m very interested to hear about directly comparable machines (as defined above) that demonstrate that Apple has somehow missed the mark on the graphics performance of the M1 Ultra. I don’t discount the possibility of their existence. I’m also expecting that Apple will deliver a new Mac Pro that breaks through some of the constraints imposed by the Studio’s very small form factor and user friendly operating characteristics, i.e., no earplugs required to operate the system 12 inches from your keyboard. 

    At the same time, I do agree that Apple has some 'splaining to do regarding their M1 Ultra launch presentation material that shows their wonderchip running neck and neck with competitive graphics platforms. This includes ones like the RTX 3090 that require small fission reactors and cooling towers to supply them with sufficient power to max out those critical benchmarks and 3D applications that drive some people's purchase decisions. This is where even diehard Apple supporters are feeling like we're not getting the whole story from Apple. What exactly did they mean by those graphs, and what assumptions were they making? The review numbers don't add up and we need to know why.
    This, plus the “modularity” spat earlier in this thread, reflects something about the Peek Performance presentation—it’s like the people who designed the Studio handed responsibility over to Marketing and didn’t look back. I guess it probably reflects an attitude that the product speaks for itself. That truly tech-savvy people will recognize a functional marvel when they see it. A good example of this dynamic is the Ars Technica dismantling of the foolish (the nicest word I can think of) YouTube posturing of Luke Miani (mentioned earlier in this thread in a post that has apparently been deservedly vaporized): https://arstechnica.com/gadgets/2022/03/explaining-the-mac-studios-removable-ssds-and-why-you-cant-just-swap-them-out/

    I’d like to see Apple do a better job of defending their design decisions and their pricing, but I suspect they see it as a sort of no-win situation. Better not to get drawn into it. 
  • Reply 29 of 51
    tht Posts: 5,421 member
    dewme said:
    I was so excited to get a Mac Studio until I saw the GPU benchmarks. Very disappointed, and the wait continues for a Mac with great 3D performance. They should really clarify that their performance graphs are for video editors only - this is nowhere near the performance of a 3090 for 3D, and I didn't think it was, but even if it was 70% of the performance of a 3080 I would have got one. Will there ever be a Mac with comparable GPU performance? Probably not; it seems their focus is solely on the video side of things.
    While I think there may be more GPU performance in the M1 Ultra than what we’ve seen in current benchmarks there is a point where you have to recognize that physics still do apply. The Mac Studio with M1 Ultra is still a very small form factor machine that you can run very comfortably right on your desktop next to or underneath your monitors. I’m not a gaming PC person but I can only imagine it would be somewhat difficult to find an RTX 3090 equipped machine with the same form factor, power supply requirements, operating thermals, and audible characteristics (fan noise) as Apple’s Studio systems without losing something on the performance side. 

    I’m very interested to hear about directly comparable machines (as defined above) that demonstrate that Apple has somehow missed the mark on the graphics performance of the M1 Ultra. I don’t discount the possibility of their existence. I’m also expecting that Apple will deliver a new Mac Pro that breaks through some of the constraints imposed by the Studio’s very small form factor and user friendly operating characteristics, i.e., no earplugs required to operate the system 12 inches from your keyboard. 

    At the same time, I do agree that Apple has some 'splaining to do regarding their M1 Ultra launch presentation material that shows their wonderchip running neck and neck with competitive graphics platforms. This includes ones like the RTX 3090 that require small fission reactors and cooling towers to supply them with sufficient power to max out those critical benchmarks and 3D applications that drive some people's purchase decisions. This is where even diehard Apple supporters are feeling like we're not getting the whole story from Apple. What exactly did they mean by those graphs, and what assumptions were they making? The review numbers don't add up and we need to know why.
    This, plus the “modularity” spat earlier in this thread, reflects something about the Peek Performance presentation—it’s like the people who designed the Studio handed responsibility over to Marketing and didn’t look back. I guess it probably reflects an attitude that the product speaks for itself. That truly tech-savvy people will recognize a functional marvel when they see it. A good example of this dynamic is the Ars Technica dismantling of the foolish (the nicest word I can think of) YouTube posturing of Luke Miani (mentioned earlier in this thread in a post that has apparently been deservedly vaporized): https://arstechnica.com/gadgets/2022/03/explaining-the-mac-studios-removable-ssds-and-why-you-cant-just-swap-them-out/

    I’d like to see Apple do a better job of defending their design decisions and their pricing, but I suspect they see it as a sort of no-win situation. Better not to get drawn into it. 
    That Luke Miani video has got to be an SEO-optimized, YouTube-ASMR-styled shit piece. There isn't any honesty in it. It's all acting. It seems ludicrous that any Apple rumorologist - and Miani's channel trades in Apple rumors - doesn't know how Apple's NAND storage works. Apple hasn't used a normal SSD since the T1/T2 Macs came out. Both the 2017 iMac Pro and the 2019 Mac Pro have the same basic storage design as the Mac Studio: dumb NAND on a daughter card, with the storage controller in the T2. Other Macs with the T2 have the dumb NAND soldered onto the logic board instead of a daughter card. With Apple Silicon, the storage controller is in the SoC itself. It's not like Apple hasn't illustrated this multiple times in their videos.

    The M1 Ultra has a 21 TFLOPS (single precision) GPU. It will have some wins over the 3090 because of its memory architecture and TBDR, but yes, it isn't going to compete on most GPU compute tasks against a 35 TFLOPS 3090. The rumored 32+8+128 Apple Silicon package for the Mac Pro will have a ~42 TFLOPS GPU, hopefully at the end of 2022, running at about 250 W. So they are getting there. They might even have raytracing hardware in it.
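
    Making that arithmetic explicit, here is a rough back-of-the-envelope sketch. Hedge: the ALU counts come from public die analyses and the clocks are approximate reported figures, not official specs.

        # Peak FP32 throughput ~ ALUs x 2 FLOPs/cycle (fused multiply-add) x clock
        def tflops(alus, ghz):
            return alus * 2 * ghz / 1000  # GFLOPS -> TFLOPS

        m1_max   = tflops(32 * 128, 1.296)  # 32 GPU cores x 128 ALUs -> ~10.6
        m1_ultra = tflops(64 * 128, 1.296)  # double the cores        -> ~21.2
        rtx_3090 = tflops(10496, 1.695)     # CUDA cores, boost clock -> ~35.6
        print(f"Max ~{m1_max:.1f} / Ultra ~{m1_ultra:.1f} / 3090 ~{rtx_3090:.1f} TFLOPS")

    Same formula on both sides of the fence; the 3090 simply has more ALUs at a higher clock, paid for with a much larger power budget.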

    The GPU really is a platform issue for Apple. Virtually all GPU compute code is optimized for CUDA, and hardly any for Metal. It's a long road ahead. They need to get developers to optimize their GPU compute code for Metal. They need to sell enough units for developers to make money, and those units have to be competitive with the competition. So, a Mac Pro with Apple Silicon needs to be rack mountable, be able to take 4 or 5 of those 128-core GPUs, consume half the power of the Nvidia and AMD equivalents, and Apple needs to have their own developers optimize code for Metal - like with Blender, but for a lot of other open source and closed source software too. Otherwise, it is the same old same old: Apple's high end hardware is for content creation (FCP, Logic, etc.), and not much else.

  • Reply 30 of 51
    lkrupp Posts: 10,557 member
    crowley said:
    lkrupp said:
    flydog said:
    keithw said:
    I'm still trying to decide whether or not to spend the extra $1k for 64 GPU cores instead of 48. I tend to keep machines for at least 5 years (or more), and want to "future proof" as much as possible up front. Sure, I know there will probably be an M2 "Ultra" or M3 or M4 or M5 in the next 5 years, but the "studio" is the Mac I've always wanted. My current 2017 iMac Pro was a compromise since the only thing available at the time was the "trashcan" Mac, and it was obsolete by then. This thing is 2-1/2 times faster than my iMac Pro in multi-core CPU tests. However, it's significantly slower in GPU performance than my AMD RX 6900 XT eGPU.
    An Ultra is a complete waste of money, as is adding cores. Xcode does not use the extra cores, nor does anything made by Adobe, Blender, or even Final Cut Pro. None of those apps are significantly faster on an Ultra vs Max (or even the most recent 27" iMac). Games may see a large improvement, but even the fastest Ultra is not a gaming PC replacement (nor is it intended to be).

    In real world use, my Ultra is actually slower than my old iMac in some tasks, and the actual difference in performance across the average app is more like 15-20% (not the 300-400% that the benchmarks suggest). Xcode builds are 30% faster, and exporting a 5 minute 4K video via Final Cut Pro is about 10% faster. Anything that uses a single core (Safari, Word, Excel) will not be any faster than a Mac mini. On an average workday, that $4,000 Ultra saves maybe 3 minutes.

    Most people will do just fine with a Mac mini. 
    And you don’t think the developers you mention will soon optimize their code to take advantage of the M1 Ultra cores and GPUs? Really? This was a topic on today’s MacBreak Weekly podcast. None of the benchmark software has been optimized yet either. Wait a couple of months and you might change your tune.
    Multi-core CPUs and GPUs are not a new thing. If developers haven’t managed to utilise them before, what makes you think they’re going to be able to now? Parallel processing isn’t just something you can switch on in any given app.
    So what’s your point? That the M1 Ultra’s potential will never be fully realized? That the M1 Ultra is a flop that will never live up to its promise? Apple should just admit the M1 Ultra is a failed attempt, stop production, and go back to Intel? All hail Nvidia?
  • Reply 31 of 51
    crowley Posts: 10,453 member
    lkrupp said:
    crowley said:
    lkrupp said:
    flydog said:
    keithw said:
    I'm still trying to decide whether or not to spend the extra $1k for 64 GPU cores instead of 48. I tend to keep machines for at least 5 years (or more), and want to "future proof" as much as possible up front. Sure, I know there will probably be an M2 "Ultra" or M3 or M4 or M5 in the next 5 years, but the "studio" is the Mac I've always wanted. My current 2017 iMac Pro was a compromise since the only thing available at the time was the "trashcan" Mac, and it was obsolete by then. This thing is 2-1/2 times faster than my iMac Pro in multi-core CPU tests. However, it's significantly slower in GPU performance than my AMD RX 6900 XT eGPU.
    An Ultra is a complete waste of money, as is adding cores. Xcode does not use the extra cores, nor does anything made by Adobe, Blender, or even Final Cut Pro. None of those apps are significantly faster on an Ultra vs Max (or even the most recent 27" iMac). Games may see a large improvement, but even the fastest Ultra is not a gaming PC replacement (nor is it intended to be).

    In real world use, my Ultra is actually slower than my old iMac in some tasks, and the actual difference in performance across the average app is more like 15-20% (not the 300-400% that the benchmarks suggest). Xcode builds are 30% faster, and exporting a 5 minute 4K video via Final Cut Pro is about 10% faster. Anything that uses a single core (Safari, Word, Excel) will not be any faster than a Mac mini. On an average workday, that $4,000 Ultra saves maybe 3 minutes.

    Most people will do just fine with a Mac mini. 
    And you don’t think the developers you mention will soon optimize their code to take advantage of the M1 Ultra cores and GPUs? Really? This was a topic on today’s MacBreak Weekly podcast. None of the benchmark software has been optimized yet either. Wait a couple of months and you might change your tune.
    Multi-core CPUs and GPUs are not a new thing. If developers haven’t managed to utilise them before, what makes you think they’re going to be able to now? Parallel processing isn’t just something you can switch on in any given app.
    So what’s your point? That the M1 Ultra’s potential will never be fully realized? That the M1 Ultra is a flop that will never live up to its promise? Apple should just admit the M1 Ultra is a failed attempt, stop production, and go back to Intel? All hail Nvidia?
    Maybe to the first one; no to all of the others. That really should have been very clear and not in need of any clarification.
    edited March 2022
  • Reply 32 of 51
    lkrupp said:
    crowley said:
    lkrupp said:
    flydog said:
    keithw said:
    I'm still trying to decide whether or not to spend the extra $1k for 64 GPU cores instead of 48. I tend to keep machines for at least 5 years (or more), and want to "future proof" as much as possible up front. Sure, I know there will probably be an M2 "Ultra" or M3 or M4 or M5 in the next 5 years, but the "studio" is the Mac I've always wanted. My current 2017 iMac Pro was a compromise since the only thing available at the time was the "trashcan" Mac, and it was obsolete by then. This thing is 2-1/2 times faster than my iMac Pro in multi-core CPU tests. However, it's significantly slower in GPU performance than my AMD RX 6900 XT eGPU.
    An Ultra is a complete waste of money, as is adding cores. Xcode does not use the extra cores, nor does anything made by Adobe, Blender, or even Final Cut Pro. None of those apps are significantly faster on an Ultra vs Max (or even the most recent 27" iMac). Games may see a large improvement, but even the fastest Ultra is not a gaming PC replacement (nor is it intended to be).

    In real world use, my Ultra is actually slower than my old iMac in some tasks, and the actual difference in performance across the average app is more like 15-20% (not the 300-400% that the benchmarks suggest). Xcode builds are 30% faster, and exporting a 5 minute 4K video via Final Cut Pro is about 10% faster. Anything that uses a single core (Safari, Word, Excel) will not be any faster than a Mac mini. On an average workday, that $4,000 Ultra saves maybe 3 minutes.

    Most people will do just fine with a Mac mini. 
    And you don’t think the developers you mention will soon optimize their code to take advantage of the M1 Ultra cores and GPUs? Really? This was a topic on today’s MacBreak Weekly podcast. None of the benchmark software has been optimized yet either. Wait a couple of months and you might change your tune.
    Multi-core CPUs and GPUs are not a new thing. If developers haven’t managed to utilise them before, what makes you think they’re going to be able to now? Parallel processing isn’t just something you can switch on in any given app.
    So what’s your point? That the M1 Ultra’s potential will never be fully realized? That the M1 Ultra is a flop that will never live up to its promise? Apple should just admit the M1 Ultra is a failed attempt, stop production, and go back to Intel? All hail Nvidia?
    Apple can add more GPU cores within reason, limited by how big they can make their chips and packages, so GPU-based processing scales pretty well; it tends to be reasonably described as “embarrassingly parallel” in nature. That's the easy type of thing to make faster.

    There are very few types of tasks you can do with regular CPUs that scale well, if at all, by throwing more cores at them: most applications can't be implemented in a parallel-processing manner that makes use of more than one core, which is why fewer, faster cores are far more valuable for the majority of tasks and users. The sorts of workloads that need so many cores are server-level tasks, plus a tiny few specialized client-level tasks. As such, the M1 Ultra SoC will not be effectively usable in a regular desktop machine for easily 99% of regular desktop users and their use cases; at a minimum, they'd have no way to push the CPU to even 90% usage in a useful manner.

    For the record, that’s also true for the M1 Max as well.
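
    That ceiling has a name - Amdahl's Law, which dewme invokes below - and a short worked example makes the point (the 95% parallel fraction is picked purely for illustration):

        S(n) = \frac{1}{(1 - p) + p/n} \qquad p = \text{parallel fraction},\ n = \text{cores}

        p = 0.95,\ n = 16: \quad S = \frac{1}{0.05 + 0.95/16} \approx 9.1

        \lim_{n \to \infty} S(n) = \frac{1}{1 - p} = 20

    So even an app that is 95% parallelizable tops out below 10x on the Ultra's 16 performance cores, and at 20x no matter how many cores Apple adds.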
  • Reply 33 of 51
    Mike's review was mostly very solid, but his own numbers don't support one of the key conclusions! "Unlike most multiple chip situations, this practically results in almost exactly double the benchmarking and performance of one chip." The Ultra, at least in real-world tasks, is almost never 2x the speed of the Max. A *few* benchmarks support this, but not many.

    Some GPU benchmarks are showing ~50% speedup. Some other tasks are showing very minimal speedups. So far I haven't seen any reasonable analysis of what's going on.

    Are the benchmarks poorly optimized for this many cores (CPU *and* GPU, both issues exist)? Geekbench scales CPU just fine, but that's only representative of a very few kinds of real-world use cases. So we know the chip can scale at least sometimes... so why not other times? What's the bottleneck?
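
    For anyone who wants to poke at this themselves, a minimal CPU-scaling probe might look like the sketch below (pure CPU-bound busywork with no shared state, so it should scale near-linearly if the scheduler cooperates; the task sizes are arbitrary):

        import time
        from multiprocessing import Pool

        def burn(n):
            # CPU-bound busywork: no shared state, no memory pressure
            acc = 0
            for _ in range(n):
                acc = (acc * 1103515245 + 12345) % (2**31)
            return acc

        def timed(workers, tasks=32, per_task=2_000_000):
            start = time.perf_counter()
            with Pool(workers) as pool:
                pool.map(burn, [per_task] * tasks)
            return time.perf_counter() - start

        if __name__ == "__main__":
            base = timed(1)
            for w in (2, 4, 8, 16, 20):
                print(f"{w:2d} workers: {base / timed(w):4.1f}x speedup")

    If a toy loop like this scales and the real apps don't, the bottleneck is memory, synchronization, or scheduling, not the cores themselves.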

    The Cinebench numbers are very interesting in that they are a nontrivial test case. One thing we know from them, and a couple of other benches, is that each core can continue to run at full speed pretty much indefinitely. This is a huge advantage over Intel and AMD chips, which must throttle back from their max clocks under sustained load, often by quite a lot. Apple did a great job matching cooling to the actual top-end thermal load.

    On the other hand, so far we have no good tests to tell us how well the interposer is working at letting the two Max chips work like one. Is the insane bandwidth (1.25 or 2.5 TB/s, depending on how you read Apple's statements) actually enough to make this work? Is the bandwidth even the issue? If there is an issue, I'm guessing it's latency, and more specifically cache coherency. But we don't know! (Now is when I'm really extra sad that Andrei left Anandtech. There was nobody else in the business doing even half the job he was doing on chip analysis. :-( )
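
    For a sense of scale, here's the bandwidth side of that question in a couple of lines (400 GB/s is Apple's published per-die M1 Max memory bandwidth; the two link readings are the ambiguity above):

        die_dram_gbs = 400  # M1 Max DRAM bandwidth, per Apple's spec sheet
        for label, link_tbs in (("pessimistic", 1.25), ("headline", 2.5)):
            print(f"{label}: {link_tbs * 1000 / die_dram_gbs:.1f}x one die's DRAM bandwidth")
        # -> pessimistic: 3.1x, headline: 6.2x

    Either way the link carries several times one die's full memory bandwidth, which is why latency and coherency traffic look like the likelier suspects.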

    I hope that we'll hear from some Blender or Affinity people eventually, to explain why they think they're not getting anything close to linear speedups. (Blender's a stretch, as I understand it, since Apple contributed the code. But maybe someone else will start working on it.) And possibly the Geekbench people - they have to be wondering why they're only seeing +50% on the Ultra GPU compute test, which should be sufficiently "embarrassingly parallel" to get ~+100%.

    I wish I had a free week or two to dive into this myself, but I don't. And like Mike I have lots of desire and no actual need for this amazing machine, so I'll keep using my MBP for now.

    Best case scenario, Apple revises their OS and things magically speed up. That's not that unlikely! All it takes is a slightly stupid scheduler, broken affinity code, etc., and your performance can nosedive. Having some time with these things out in the wild, with 3rd-party apps running on them, could give Apple more profiling data and more time to discern and resolve their performance issues.

    If that doesn't happen, then I don't really know what happens next. They've built brilliant CPU and GPU cores... they seem to need better interconnect. Of course, they've been building great cores for years, and large-scale interconnects are really new to them - until a few months ago the biggest chip they'd ever made had 8 CPU cores and 8 GPU cores. They have scaled this up a lot. Perhaps they just have one ring and they're stretching it across the interposer? That would explain a lot... I hope that's not it.

    Of course it's possible this is all entirely by design - for AMD and Intel, uncore chews up more than half the power on the big multi-core chips. Perhaps Apple decided that they'd rather give up some performance there in order to save on power? Until we get really careful benchmarks from someone on or near Andrei's level, we're not likely to know. Very frustrating.

    The less optimal possibility is that we're stuck with this on the M1 Ultra. If they really are limited by the interconnect, well, they've got the kick-ass cores, and the interposer tech (which really has stolen a march on the rest of the industry in a dramatic way - this is at least as big a deal as the A7 coming out with full 64-bit support two years ahead of everyone else). So now they need to build a mesh, or better/multiple rings, or whatever they decide to do. This is their *first* many-core chip, and if performance isn't everything it could have been, it's still pretty amazing, especially in that power envelope. They'll get there, and probably pretty quickly.

    This whole performance issue, whether hardware or software is at the root of it, is very likely a (the?) primary reason why we haven't seen the full-blown Mac Pro yet. The fact that they already dropped a giant hint that it's coming soon(ish?) suggests to me that the solution is on the horizon.
  • Reply 34 of 51
    Also, about the SSDs - if I had to bet money, I would bet that they wind up being upgradable by 3rd parties (though perhaps only OWC makes the effort). That didn't happen with the nnMP because it's too niche - and also, it's easy to add a pile of NVMe SSDs with an inexpensive PCIe carrier card. The Studio is a different story. I think it'll sell in large quantities, and the size of that market will drive attention to the upgrade market in a way the nnMP couldn't.

    Of course it's possible that Apple could prevent this. If they're *seriously* serious about preventing it, they can use crypto to make it *really* hard to figure out - basically impossible unless you get hold of an internal Apple tool, which would probably be a major violation of the law (DMCA at least, possibly a Title 18 felony). I doubt they've bothered, but we won't know for a while (if ever).

    Edit: I apparently missed some recent developments. Apple did *not* get serious about preventing it. So it's definitely doable, and if they sell as expected, I figure we'll see upgrade SSDs from OWC within a year (maybe slower than Apple's, going by past experience, but at least they'll be bigger).
    edited March 2022
  • Reply 35 of 51
    dewme Posts: 5,335 member
    lkrupp said:
    crowley said:
    lkrupp said:
    flydog said:
    keithw said:
    I'm still trying to decide whether or not to spend the extra $1k for 64 GPU cores instead of 48. I tend to keep machines for at least 5 years (or more), and want to "future proof" as much as possible up front. Sure, I know there will probably be an M2 "Ultra" or M3 or M4 or M5 in the next 5 years, but the "studio" is the Mac I've always wanted. My current 2017 iMac Pro was a compromise since the only thing available at the time was the "trashcan" Mac, and it was obsolete by then. This thing is 2-1/2 times faster than my iMac Pro in multi-core CPU tests. However, it's significantly slower in GPU performance than my AMD RX 6900 XT eGPU.
    An Ultra is a complete waste of money, as is adding cores. Xcode does not use the extra cores, nor does anything made by Adobe, Blender, or even Final Cut Pro. None of those apps are significantly faster on an Ultra vs Max (or even the most recent 27" iMac). Games may see a large improvement, but even the fastest Ultra is not a gaming PC replacement (nor is it intended to be).

    In real world use, my Ultra is actually slower than my old iMac in some tasks, and the actual difference in performance across the average app is more like 15-20% (not the 300-400% that the benchmarks suggest). Xcode builds are 30% faster, and exporting a 5 minute 4K video via Final Cut Pro is about 10% faster. Anything that uses a single core (Safari, Word, Excel) will not be any faster than a Mac mini. On an average workday, that $4,000 Ultra saves maybe 3 minutes.

    Most people will do just fine with a Mac mini. 
    And you don’t think the developers you mention will soon optimize their code to take advantage of the M1 Ultra cores and GPUs? Really? This was a topic on today’s MacBreak Weekly podcast. None of the benchmark software has been optimized yet either. Wait a couple of months and you might change your tune.
    Multi-core CPUs and GPUs are not a new thing. If developers haven’t managed to utilise them before, what makes you think they’re going to be able to now? Parallel processing isn’t just something you can switch on in any given app.
    So what’s your point? That the M1 Ultra’s potential will never be fully realized? That the M1 Ultra is a flop that will never live up to its promise? Apple should just admit the M1 Ultra is a failed attempt, stop production, and go back to Intel? All hail Nvidia?
    Apple can add more GPU cores within reason, limited by how big they can make their chips and packages, so GPU-based processing scales pretty well; it tends to be reasonably described as “embarrassingly parallel” in nature. That's the easy type of thing to make faster.

    There are very few types of tasks you can do with regular CPUs that scale well, if at all, by throwing more cores at them: most applications can't be implemented in a parallel-processing manner that makes use of more than one core, which is why fewer, faster cores are far more valuable for the majority of tasks and users. The sorts of workloads that need so many cores are server-level tasks, plus a tiny few specialized client-level tasks. As such, the M1 Ultra SoC will not be effectively usable in a regular desktop machine for easily 99% of regular desktop users and their use cases; at a minimum, they'd have no way to push the CPU to even 90% usage in a useful manner.

    For the record, that’s also true for the M1 Max as well.
    I agree with all of the arguments regarding CPU scalability. After all, nothing of significance in computer architecture has been done in many decades that defies Amdahl’s Law for the maximum theoretical speedup of applications that still have a dependency on serialized operations. Even the tiniest amount of serialization dependency crushes the maximum achievable speedup. However, this does not mean that having many cores and hyper-threading is not effective for certain workloads. Having multiple cores and hyper-threading may not be efficient for certain (actually very many) workloads, but there are workloads, even on workstations and more so on servers, that benefit greatly from all those cores.

    From my perspective, the primary beneficiary of adding more processing elements, i.e., more cores and more hyper-threading, has been scaling multiprocessing to allow more work to be done in a pseudo-parallel or time-shared manner on a single computer in a given period of time. This has an enormous benefit even though each individual process (and its threads) isn’t scaling linearly with the number of available computing elements, i.e., cores. The ability of a single server to manage thousands of concurrent clients, or of a single core to manage thousands of concurrent threads, is clear evidence that preemptive multiprocessing/multitasking is very effective in terms of getting a lot of work done for multiple clients all relying on the same limited resource.

    Adding even more cores allows even more work to be done over the same period of time - assuming the operating system is capable of effectively utilizing all of the available cores and hyper-threads. Additionally, the compilers used to build the applications need to structure their output so that the operating system gets the best shot at distributing each workload effectively across the available computing resources, i.e., cores and hyper-threads. If the compilers and operating systems aren’t coordinated to take best advantage of what the hardware platform provides, the applications will still run, but they won’t be nearly as efficient in terms of overall resource utilization.

    The bottom line, for what I’d say is the majority of run-of-the-mill applications, is that systems like the Mac Studio are probably overkill for a large class of users. At the same time, for applications that are specially crafted to utilize every bit of available resource, from CPU/GPU cores to operating system tailoring like thread priorities, processor affinity, and overriding default swapping behaviors, the Mac Studio may be less effective, or underkill, because of its design goals. System architects have to make trade-offs and choices that will benefit the greatest number of users for the use cases they are going after. You may or may not be one of their target users, and thus may end up with a whole lot more, or less, computer than you really need. It may still be very effective, but not very efficient.
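
    A toy model of that latency-versus-throughput distinction (the job size and counts are invented for illustration): more cores don't shrink any single job, but they multiply how many independent jobs finish per unit of time:

        import math

        JOB_SECONDS = 10  # one job's latency; Amdahl says extra cores won't shrink it
        JOBS = 40         # independent jobs: client requests, exports, compile units

        for cores in (1, 4, 10, 20):
            wall = math.ceil(JOBS / cores) * JOB_SECONDS
            print(f"{cores:2d} cores: {wall:3d} s wall clock, {JOBS / wall:.2f} jobs/s")
        # 1 core: 400 s; 20 cores: 20 s - 20x the throughput, same 10 s per job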


  • Reply 36 of 51
    keithw Posts: 140 member
    I was so excited to get a Mac Studio until I saw the GPU benchmarks. Very disappointed, and the wait continues for a Mac with great 3D performance. They should really clarify that their performance graphs are for video editors only - this is nowhere near the performance of a 3090 for 3D, and I didn't think it was, but even if it was 70% of the performance of a 3080 I would have got one. Will there ever be a Mac with comparable GPU performance? Probably not; it seems their focus is solely on the video side of things.
    ...or they could just re-compile the AMD drivers and support eGPUs like they do with the Intel Macs.  My iMac Pro gets really good graphics performance with my AMD RX 6900 XT eGPU.

  • Reply 37 of 51
    crowley Posts: 10,453 member
    keithw said:
    I was so excited to get a Mac Studio until I saw the GPU benchmarks. Very disappointed, and the wait continues for a Mac with great 3D performance. They should really clarify that their performance graphs are for video editors only - this is nowhere near the performance of a 3090 for 3D, and I didn't think it was, but even if it was 70% of the performance of a 3080 I would have got one. Will there ever be a Mac with comparable GPU performance? Probably not; it seems their focus is solely on the video side of things.
    ...or they could just re-compile the AMD drivers and support eGPUs like they do with the Intel Macs.  My iMac Pro gets really good graphics performance with my AMD RX 6900 XT eGPU.
    I'd be super pissed if they did that. Just sold my Razer Core X on eBay, as I figure I'll never be able to use it again.
  • Reply 38 of 51
    maximara Posts: 409 member
    "RAM and SSD prices are borderline extortionate"

    It would be helpful to see some real-world price comparisons to RAM/SSD that have the same spec, both for off-the-shelf components and also for comparable builds offered by other companies.
    Someone compared the LPDDR5 to DDR5 RAM: a Dell Memory Upgrade - 32GB 2RX8 DDR5 UDIMM 4800MHz - is $519.99, which is about $120 more than the $400 Apple charges to go from 32GB to 64GB.

    As for the SSD, on the MacRumors forums F-Train posted this setup: [images removed]

    Want more space? Add on an external drive - this is a desktop setup. Somebody even joked about what a digital packrat's desk would look like: [image removed]
    edited March 2022
  • Reply 39 of 51
    cgWerks Posts: 2,952 member
    "... The M1 Ultra Mac Studio while idle hit 39 dBa idle, and 42 dBa under load. ... While idle, the M1 Mac mini is indistinguishable from ambient when idle, and 40 dBA under load."

    This kind of surprised me. The MaxTech review said the fans didn't move much off of their 'idle' speeds. Something you did must have gotten them to ramp up. Or did you use the utility to set max fan speeds? The tone thing is interesting - and I kind of assumed that would be the case. I find the fans in MacBooks and the mini rather 'shrill' compared to the fan in my Blackmagic eGPU, for example. *If* I ever hear the fans in the Blackmagic (which I don't, aside from initial max load until it catches up), they are much more mellow... more of a 'whoosh' sound, or maybe like the 'warp engine' background noise in Star Trek.

    I was hoping it was going to be effectively silent most of the time, though. Thanks for the focus on the sound; not many reviews do much with that!

    "
    What you do not get, though, is Nvidia 3090-beating speeds, as Apple promised. Apple's graph as it pertains to the Nvidia 3090 card is misleading.  ... So, the bottom line to that conversation is that Apple's M1 Ultra is not a head-to-head performer as it compares to a full-throttle 3090 under maximum load. But, that's also almost exclusively a gaming card, and I find it highly unlikely that anybody will buy a Mac Studio for gaming."

    Yeah, that is disappointing, especially in a machine costing this much (the Ultra; the Max model is a reasonably good deal if you're not expecting higher-end PC graphics performance). It is probably going to depend on what one is doing. With all that RAM potentially available, it might perform surprisingly well for something like handling large scenes in the working interface of a CAD or 3D app. But for stuff like gaming (consider the missing raytracing hardware) or crypto-mining, it isn't even close to a 3090. For example, a Max achieves around 12 MH/s mining Ethereum, so I'm going to guess an Ultra will be around 25 MH/s, maybe as high as 30 (given much more memory bandwidth). But a 3090 is more like 115 MH/s. I'm probably not going to be playing Minecraft RTX with a Mac Studio either.
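
    Spelling that guess out (the 12 and 115 MH/s figures are the ones cited above; the 2x-cores-plus-bandwidth bump is my assumption, not a measured result):

        m1_max_mhs = 12.0              # reported M1 Max Ethereum hashrate
        ultra_low  = m1_max_mhs * 2.0  # straight doubling of GPU cores -> 24
        ultra_high = m1_max_mhs * 2.5  # plus a bump from doubled memory bandwidth -> 30
        rtx_3090_mhs = 115.0
        print(f"Ultra ~{ultra_low:.0f}-{ultra_high:.0f} MH/s; "
              f"3090 still {rtx_3090_mhs / ultra_high:.1f}-{rtx_3090_mhs / ultra_low:.1f}x faster")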

    re: gaming - it isn't so much that I'd buy such a machine as a gaming PC; it is about getting double duty out of one machine. It would be nice if, when I buy a $3000 Mac, I could game on it without having to also buy a $2-3k PC to sit alongside it. Or: my son is currently saving for a new computer. When he gets enough saved up, given that he likes to game, will he pick (or should I recommend) a Mac? Hopefully we see some impressive results once more optimization is done, but I'm still thinking that, at best, it will be similar to a mid-range gaming PC (which might be fine), not a 3080- or 3090-based one.

    "
    Whether or not any particular benchmark we ran directly applies to your use case, Apple's Mac Studio is a brutally fast computer."

    re: benchmarks - I know going outside the standard benchmarks gets tricky. But it would be nice to see more review of 'in-app' use, rather than just final outputs like rendering times or video encoding. I'd love to see a huge scene in Blender or some CAD app being manipulated, or how the interface responds working in a big Logic project. For example, I saw a guy trying to use an MBP with the Max in some modeling app, and it was too slow to be usable. Hopefully that was a matter of the software not being optimized, but things like that often matter more than shaving seconds off a final render.
  • Reply 40 of 51
    cgWerks Posts: 2,952 member
    keithw said:
    I'm still trying to decide whether or not to spend the extra $1k for 64 GPU cores instead of 48. I tend to keep machines for at least 5 years (or more), and want to "future proof" as much as possible up front. Sure, I know there will probably be an M2 "Ultra" or M3 or M4 or M5 in the next 5 years, but the "studio" is the Mac I've always wanted. My current 2017 iMac Pro was a compromise since the only thing available at the time was the "trashcan" Mac, and it was obsolete by then. This thing is 2-1/2 times faster than my iMac Pro in multi-core CPU tests. However, it's significantly slower in GPU performance than my AMD RX 6900 XT eGPU.
    Yeah, this is my concern as well. It isn't so much that I don't understand that whenever I jump in, things are going to continue to improve. But given what I want the machine to do, I wonder if the GPU side especially is going to advance so greatly, with the addition of things like raytracing hardware in the M2 or M3, that it would almost force me to sell and rebuy again. The rest of the machine seems perfect, and I have no doubt the 5+ years thing applies. If eGPUs were an option, then it wouldn't be an issue, as I could just add one down the road if I needed more power.

    It is kind of a question of saving for the Ultra, or just getting the Max, knowing it will likely get replaced in a couple of years anyway. I do kind of want to make the jump into Apple Silicon, though... partly because I'm currently stalled on Mojave (for software reasons). A new machine would allow me to get current on everything, while still keeping my Intel-based mini on Mojave for a few things I need there while I wait for those updates.

    anonconformist said:
    Unless/until I choose to become a YouTube content creator, almost all the codec power of an Ultra would be unused potential, because I don't tend to watch too many 8K videos at any given time 😏
    Yes, there are a few use cases where these things are a no-brainer. I just wonder if, outside those narrow cases, it will end up looking as good. Still impressive for what it is, especially at the power-usage level. But it probably won't be as night-and-day as those certain cases portray.

    dewme said:
    Unfortunately for some, Apple’s desire to deliver systems that excel in ways that we never thought possible places some hard constraints on other qualities that we would like to have available to us, like modifiability and customization.
    It is even worse in this case, because not only are you not able to modify, but you can't even necessarily buy the setup that suits your purpose initially. For example, if you want maximum GPU power, you're getting extra CPU power as well and probably storage. The various aspects are coupled together. I think modularity here means you add your own keyboard, display, mouse/trackpad.

    bobolicious said:
    Will customers need an Ultra to meaningfully transcend a Vega 56 eGPU, while Intel mini RAM and eGPU seem hardware-upgradable if supported...?
    Outside some certain uses, I think the answer is yes. You'll need the top Ultra to match what is now a mid-level 'PC' GPU. Then, on certain metrics, it is closer to the 3090 as advertised. And on maybe some others, it could exceed it (if you needed more graphics RAM).

    ... Will there ever be a Mac with comparable GPU performance? Probably not; it seems their focus is solely on the video side of things.
    Yep. As impressed with Apple Silicon as I've been, this remains the big question. If they added eGPU support, problem solved. But, looks like we're chasing PCs again in this regard.