Rumored Mac Pro & Mac Studio aren't dead -- but neither is now expected at WWDC


Comments

  • Reply 21 of 26
    9secondkox2 Posts: 2,664 member
    macxpress said:
    AniMill said:
    JamesCude said:
    The Mac Pro becomes even more of a niche product with the Mac Studio out there. The Studio offers more than enough power for most use cases and the need for PCI cards is rarer than ever. 

    Too bad- it’s awesome to have gobs of power but with the insane efficiency of Apple Silicon it’s no longer necessary. You can have your cake and eat it already.
    Unfortunately, the Ultra in the Studio is handicapped due to a design flaw. So it doesn't live up to anywhere near its potential. 

    An M2 Ultra is likely too hot to make sense. So it waits for M3. It should get a revision or two prior to the launch of the big-screen iMac and Mac Pro. 
    What design flaw? I own one and it’s been a reliable beast for Redshift and C4D, but Adobe products just don’t feel snappy with it. Please link so I can research. 👍
    A memory buffer bottleneck (Translation Lookaside Buffer). 

    The Ultra is quick, since the Max is quick, but it’s nowhere near as quick as it should/could be. 

    Of course it’s a bummer when you buy something and find this stuff out later. But that’s the way it is. 

    It’s a widely known issue. Just Google it. 

    Thankfully, Apple is aware and has already resolved this in M2. So when the M3 Ultra (and greater) chips arrive, they'll be singing at full potential. 
    Yeah it looked like the M1 series was having issues with scaling. The Ultra wasn't as fast as it should have been. Definitely sounds like a design flaw but oh well it's a 1st gen M series chip so kinda to be expected. M2 at least with Pro and Max series seems to fix that. I'm sure M3 will be even better. 
    Per the article below, the M1 Ultra does have twice the performance of an M1 Max for CPU, while the GPU is more like 40% to 80% faster than the M1 Max depending on use.

    https://www.engadget.com/m1-ultra-benchmarks-upscaled-video-143024262.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAJyvyZn46gzAtB4gfOuQkL-X26_bmsrwLcXz3gT3Y7gnBydg4rHLSaSfwL8QM5gqeHvP1hcJXvMQF1FR21QaVKV39rO8C2mjf3GfOUa9p4t8b1zEg84e_DLzirXx4_hs0R7SET7IPeesl_P75aTLTTtSwt7VQj1SQWBaE3lrhId8

    "The M1 Ultra does best when its hardware accelerators can kick in. These are the parts of the chip built to speed up specific tasks, namely video rendering and AI processing. In a test processing ten 8K video clips at once, the M1 Ultra did the job in just 29 seconds when its accelerators were able to help out. This was about twice as fast as the PC we were testing, despite it having a 16-core AMD 5950X processor and Nvidia RTX 3080 Ti graphics card."
    Both CPU AND GPU have issues with staying “fed” in the Ultra. This is due to the aforementioned TLB bottleneck. Some YouTubers figured this out a long time ago by watching how much the CPU and GPU were actually being utilized in benchmarks and real-world testing. The cores weren’t getting maxed out when they should have been. It was later discovered that this was due to too small a buffer, and there’s nothing that can be done to address that on existing SoCs. Therefore it is a design flaw. 

    Apple figured out how to get two Max-tier SoCs to function as one, so that hurdle is cleared. Next, they figured out how to properly forecast buffer size for such a novel setup. 

    Though the design is all squared away for the Ultra series, we likely won’t see an M2-based chip, since it’s not a massive leap over M1 and it would run “hot” by Apple’s standards. Yet it could still happen if Apple wants to do something to address their corner of the industry-wide PC sales slump. I doubt it, though, since most who want an Ultra likely have the M1 version already and the rest are purposely waiting for the new M3 hotness. 

    An M3 version would be a statement piece, where all the ducks are in a row: the architectural improvements are in place, the 3nm process affords more power while staying cooler, and new GPU features are ready to shine. 

    Not only will the M3 version be notably superior due to architectural improvements, GPU features, higher clocks, and a smaller process, but it will also be using all of its potential, making comparisons with the M1 version look even more pronounced than they already will be. 
  • Reply 22 of 26
    It will be interesting to see what they do.

    I think an M3 this soon is wishful thinking. It will be later, like next year. The first 3nm capacity will have been reserved for the A17.

    Apple has said twice in the past six months that the goal is for every Mac to get every M generation. So the iMac and Mac Studio will get the M2, plus the 15" Air. 

    I also think the M2 Mac Pro is not happening, maybe for the same reasons the M1 didn’t happen. But Tim will make some kind of comment affirming the importance of the Mac Pro and Pro Display technologies. 
  • Reply 23 of 26
    macxpress said:
    AniMill said:
    JamesCude said:
    The Mac Pro becomes even more of a niche product with the Mac Studio out there. The Studio offers more than enough power for most use cases and the need for PCI cards is rarer than ever. 

    Too bad- it’s awesome to have gobs of power but with the insane efficiency of Apple Silicon it’s no longer necessary. You can have your cake and eat it already.
    Unfortunately, the Ultra in the Studio is handicapped due to a design flaw. So it doesn't live up to anywhere near its potential. 

    An M2 Ultra is likely too hot to make sense. So it waits for M3. It should get a revision or two prior to the launch of the big-screen iMac and Mac Pro. 
    What design flaw? I own one and it’s been a reliable beast for Redshift and C4D, but Adobe products just don’t feel snappy with it. Please link so I can research. 👍
    A memory buffer bottleneck (Translation Lookaside Buffer). 

    The Ultra is quick, since the Max is quick, but it’s nowhere near as quick as it should/could be. 

    Of course it’s a bummer when you buy something and find this stuff out later. But that’s the way it is. 

    It’s a widely known issue. Just Google it. 

    Thankfully, Apple is aware and has already resolved this in M2. So when the M3 Ultra (and greater) chips arrive, they'll be singing at full potential. 
    Yeah it looked like the M1 series was having issues with scaling. The Ultra wasn't as fast as it should have been. Definitely sounds like a design flaw but oh well it's a 1st gen M series chip so kinda to be expected. M2 at least with Pro and Max series seems to fix that. I'm sure M3 will be even better. 
    Per the article below, the M1 Ultra does have twice the performance of an M1 Max for CPU, while the GPU is more like 40% to 80% faster than the M1 Max depending on use.

    https://www.engadget.com/m1-ultra-benchmarks-upscaled-video-143024262.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAJyvyZn46gzAtB4gfOuQkL-X26_bmsrwLcXz3gT3Y7gnBydg4rHLSaSfwL8QM5gqeHvP1hcJXvMQF1FR21QaVKV39rO8C2mjf3GfOUa9p4t8b1zEg84e_DLzirXx4_hs0R7SET7IPeesl_P75aTLTTtSwt7VQj1SQWBaE3lrhId8

    "The M1 Ultra does best when its hardware accelerators can kick in. These are the parts of the chip built to speed up specific tasks, namely video rendering and AI processing. In a test processing ten 8K video clips at once, the M1 Ultra did the job in just 29 seconds when its accelerators were able to help out. This was about twice as fast as the PC we were testing, despite it having a 16-core AMD 5950X processor and Nvidia RTX 3080 Ti graphics card."
    Both CPU AND GPU have issues with staying “fed” in the Ultra. This is due to the aforementioned TLB bottleneck. Some YouTubers figured this out a long time ago by watching how much the CPU and GPU were actually being utilized in benchmarks and real-world testing. The cores weren’t getting maxed out when they should have been. It was later discovered that this was due to too small a buffer, and there’s nothing that can be done to address that on existing SoCs. Therefore it is a design flaw. 
    Sounds like you're referring to Vadim Yuryev from Max Tech. He's the one who pushed that theory about the M1 Ultra. But I've seen threads online where people point out all the mistakes and misunderstandings he makes when talking about the subject. He doesn't really understand it well enough to be taken seriously. 

    The link I posted is more likely to be what people can expect from the M1 Ultra: CPU performance that is realistically 2X the M1 Max and GPU performance that is 40-80% greater than the M1 Max depending on use. 
  • Reply 24 of 26
    blastdoor Posts: 3,258 member
    I wonder if Intel might offer Apple a sweetheart deal on Xeon pricing just to keep x86 with a toe in the door on the Mac. 
  • Reply 25 of 26
    macxpress Posts: 5,801 member
    blastdoor said:
    I wonder if Intel might offer Apple a sweetheart deal on Xeon pricing just to keep x86 with a toe in the door on the Mac. 
    I don't think it matters what Intel offers Apple. Apple can't continue to support two platforms, just as they didn't keep PPC and x86 around together any longer than they had to. The current M series is already faster than the Xeon CPUs. There's no reason for Apple to keep Intel around; Apple is pushing developers to make Apple Silicon software, and keeping x86 would only entice developers to drag their feet. 
  • Reply 26 of 26
    programmer Posts: 3,457 member
    saarek said:
    You’re right about performance, the Threadripper will no doubt be faster, but it will also consume far more power for said performance.

    I wonder if Apple is cooking up a method of putting multiple Ultra-class SoCs in the same system. Imagine five daisy-chained M3 Ultras (or, even better, an unannounced Ultra Max) in the Mac Pro.
    Yes, I think this is really their only choice, if they want to compete in that part of the market.  

    They might be able to push clock rates a bit higher, but their chips are primarily tuned for low power mobile applications and retuning everything to compete with the AMD and Intel (and nVidia) blast furnaces doesn't make sense for a machine that is a niche product for Apple.  Plus their whole story is about efficiency.  So how to make a more powerful machine without a huge amount of effort?  Make it modular, put their chips in modules (each module being a full-blown CPU/GPU/Memory/IO unit), and pack them in together.  With lower heat output and power requirements, they can do this better than anyone else.  

    It becomes a software problem then -- how to make use of so many discrete processors in a single box?  If they know what their target markets are (e.g. ML/AI, render farms, Xcode cloud, etc) then they can write software (applications, libraries, drivers) that addresses each of those.  And if they have software that makes use of what are essentially "headless servers" then selling these units into clouds or server farms suddenly becomes sensible, which is a market Apple hasn't had access to for a long time.  

    The real question is whether Apple is actually interested in going after these markets.  That's anybody's guess.  They've been willing to walk away from the very high end of the market before, and haven't done servers in a long time.
