M1 Pro and M1 Max GPU performance versus Nvidia and AMD


Comments

  • Reply 21 of 32
    Marvin Posts: 15,320 moderator
    Can we play Crysis?
    Exactly.  Teraflops don't really matter much.  What matters for games is frames per second and average frame time.  What matters for productivity apps is time to task completion.  What matters for mining is the hash rate.  And heat generation and dissipation matter for all of them.

    I'm reasonably sure that the M1 Pro and Max will measure up favorably to comparable laptops in "real world" comparisons, but a teraflops comparison is really kind of secondary, or maybe even tertiary, where actual application benchmarks are concerned.
    Teraflops give a good idea of the rough performance level, much like GHz in CPUs. Different architectures perform differently, but it's unlikely that a 5 TFLOPS GPU would outperform a 10 TFLOPS GPU, so it puts them in a ballpark. It works for segmenting GPUs into low/mid/high tiers and is useful for comparing GPUs in the same generation.
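
    As a rough sketch of where those teraflops figures come from (the 128 ALUs per core and the ~1.3GHz clock are third-party estimates, not Apple-published specs), theoretical FP32 throughput is just ALU count x 2 ops per FMA x clock:

    # Back-of-the-envelope FP32 TFLOPS estimate.
    # Core counts are Apple's; ALUs-per-core and clock are assumptions.
    def tflops(gpu_cores, alus_per_core=128, clock_ghz=1.3):
        # One fused multiply-add counts as 2 floating-point ops per cycle
        return gpu_cores * alus_per_core * 2 * clock_ghz / 1000

    for name, cores in [("M1", 8), ("M1 Pro", 16), ("M1 Max", 32)]:
        print(f"{name}: ~{tflops(cores):.1f} TFLOPS")
    # M1 ~2.7, M1 Pro ~5.3, M1 Max ~10.6 -- in line with published figures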

    Benchmarks can be pretty unreliable too:

    A desktop 3080 here scores 77,200:
    https://browser.geekbench.com/v5/compute/3553001
    and another here scores 184,824:
    https://browser.geekbench.com/v5/compute/3553543

    One is about 2.4x the other.

    I expect the results for M1 Max in OpenCL to be in the range of 60,000-80,000, which is 4x the M1 (15,000-20,000):

    https://browser.geekbench.com/v5/compute/search?page=1&q=m1

    The upcoming macOS release was reported to have different power modes:

    https://appleinsider.com/articles/21/09/29/apple-might-introduce-new-high-power-mode-for-mac

    They may have a low/standard/high power profile setting.

    Apple said that the M1 Max performance is similar to PCs at 100W less power. Anything within 70% of the performance would arguably qualify as "similar". In general I expect 3060-level raw performance from the M1 Max, and the unified memory will make it perform even better than that in some cases.

    https://www.notebookcheck.net/GeForce-RTX-3060-Laptop-GPU-vs-GeForce-RTX-3080-Laptop-GPU_10478_10474.247598.0.html
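
    A minimal sketch of that perf-per-watt argument; every score and wattage below is an illustrative assumption, not a measured value:

    # Illustrative perf-per-watt comparison; all figures are assumed.
    def points_per_watt(score, watts):
        return score / watts

    rtx3080_laptop = points_per_watt(120_000, 160)  # hypothetical 160W laptop part
    m1_max = points_per_watt(80_000, 60)            # hypothetical, ~100W less
    print(f"{m1_max / rtx3080_laptop:.1f}x")        # ~1.8x the efficiency at ~70% of the raw score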

    The dual-chip for the 27" iMac should exceed a desktop 3080 and the quad chip for either the iMac or Mac Pro should exceed a 3090. The highest end likely comes with over 1TB/s of memory bandwidth and up to 256GB unified memory.
  • Reply 22 of 32
    tht Posts: 5,437 member
    Marvin said:
    https://www.notebookcheck.net/GeForce-RTX-3060-Laptop-GPU-vs-GeForce-RTX-3080-Laptop-GPU_10478_10474.247598.0.html

    The dual-chip for the 27" iMac should exceed a desktop 3080 and the quad chip for either the iMac or Mac Pro should exceed a 3090. The highest end likely comes with over 1TB/s of memory bandwidth and up to 256GB unified memory.
    I don't understand how they, Nvidia and the media, can say with a straight face that these are "Laptop" GPUs. More than half of the systems consume 140 to 180 watts in notebookcheck's game benchmarks for power consumption. I assume that's at-the-wall power, not the GPU itself. And now phrases like gaming PCs having a "built-in UPS" (the battery) have entered the vernacular. They run at about half the power of the desktop PCIe GPU cards, but that just tells me how bonkers it has gotten.

    There are some 15 to 17 inch portable monitors now. If those get thin enough and are TB/USB bus-powered, you can get yourself an SFF PC with a PCIe slot, a small monitor, peripherals, cabling, and a UPS, stuff it all into your bag, and have 90% of the same experience as these laptops.
  • Reply 23 of 32
    mcdave Posts: 1,927 member
    KITA said:
    mcdave said:
    The compute looks disappointing, though it is OpenCL, so Metal should do better:


    The Apple M1 scores ~18,000 in OpenCL (1) and ~20,000 in Metal (2) on Geekbench.

    A 130W version of the RTX 3080 in a laptop scores ~120,000 in OpenCL (3) on Geekbench.

    Apple used a 165W version of the RTX 3080 in their comparison:


    Compute results don't support this graphic, but closer-to-real scenarios may be better...


  • Reply 24 of 32
    fastasleep Posts: 6,417 member
    mcdave said:
    KITA said:
    mcdave said:
    The compute looks disappointing, though it is OpenCL, so Metal should do better:


    The Apple M1 scores ~18,000 in OpenCL (1) and ~20,000 in Metal (2) on Geekbench.

    A 130W version of the RTX 3080 in a laptop scores ~120,000 in OpenCL (3) on Geekbench.

    Apple used a 165W version of the RTX 3080 in their comparison:


    Compute results don't support this graphic, but closer-to-real scenarios may be better...


    where's that from?
  • Reply 25 of 32
    Marvin Posts: 15,320 moderator
    mcdave said:
    KITA said:
    mcdave said:
    The compute looks disappointing, though it is OpenCL, so Metal should do better:

    The Apple M1 scores ~18,000 in OpenCL (1) and ~20,000 in Metal (2) on Geekbench.

    A 130W version of the RTX 3080 in a laptop scores ~120,000 in OpenCL (3) on Geekbench.

    Apple used a 165W version of the RTX 3080 in their comparison:
    Compute results don't support this graphic, but closer-to-real scenarios may be better...
    where's that from?
    That's from GFXBench:

    https://gfxbench.com/result.jsp

    Those results are more impressive. The offscreen tests show very high results. The ones that aren't offscreen are a bit lower because the screen resolution is so high; offscreen tests render at the same resolution on every machine. Typing M1 into the search box will show where the M1 Max sits on each test.

    The M1 Max is sitting among the top desktop GPUs there. The Aztec Ruins High Tier test is like a gaming test, so games like Shadow of the Tomb Raider should run very well:



    Even at native resolution (3456 x 2234) it got around 120 FPS.

    There's also a listing for M1 Pro:
    M1 got 81FPS
    M1 Pro got 165FPS (2x M1)
    M1 Max got 292FPS (1.77x M1 Pro)
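
    A quick check of the scaling in those figures:

    # Ratios computed from the FPS figures quoted above.
    fps = {"M1": 81, "M1 Pro": 165, "M1 Max": 292}
    print(f"{fps['M1 Pro'] / fps['M1']:.2f}x")     # 2.04x for 2x the GPU cores
    print(f"{fps['M1 Max'] / fps['M1 Pro']:.2f}x") # 1.77x, sub-linear vs 2x cores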

    Some tests will be measuring the 24-core M1 Max, possibly the Geekbench tests. These GFXBench tests are definitely the 32-core models.
  • Reply 26 of 32
    cgWerks Posts: 2,952 member
    michelb76 said:
    Indeed. I assume it will use much less power which is great, because pc laptops with those GPU's don't last long on a battery charge. So that's something. But yeah, it seems middle of the road, which is where highend Apple GPU performance always has been.

    Also, I wonder how we are going to benchmark them properly, most GPU heavy games don't run a mac, and the Pro Apps don't run on PC's..Maybe only Cinebench? Adobe apps skew heavily towards the Windows platform.

    In the end maybe it doesn't matter, since most people buying these will using the Pro Apps anyway. 3D and ML has moved to other platforms.
    Yeah, I was hoping Apple would finally break free of meh GPU performance. I think they will do it, but it will be $$$, and won't matter except if you're using a certain few apps. (Unless we can get more developers on board.)

    beowulfschmidt said:
    Exactly.  Teraflops don't really matter much.  What matters for games is frames per second and average frame time.  What matters for productivity apps is time to task completion.  What matters for mining is the hash rate.  And heat generation and dissipation matter for all of them.

    I'm reasonably sure that the M1 Pro and Max will measure up favorably to comparable laptops in "real world" comparisons, but a teraflops comparison is really kind of secondary, or maybe even tertiary, where actual application benchmarks are concerned.
    Yes, and we'll need the games, productivity apps, and miners. We only have a few right now, which is the real problem. It won't much matter if Apple's GPUs are 100x the performance if we don't get the apps.

    Marvin said:
    The dual-chip for the 27" iMac should exceed a desktop 3080 and the quad chip for either the iMac or Mac Pro should exceed a 3090. The highest end likely comes with over 1TB/s of memory bandwidth and up to 256GB unified memory.
    Only, if this pricing trend continues, it will cost as much as or more than a Mac Pro w/ a high-end GPU (and I think some of us were hoping for something more mid-tier at a lower cost). And then there is the matter of a current Mac Pro being able to hold 4x of those GPUs.

    At least with the Intel platform, I can buy a Mac mini, an eGPU case, and stuff a 3080 or 3090 in there (or AMD if needing it under macOS). I was hoping Apple Silicon would bring more flexibility, not less. I guess we'll have to wait and see until they fill out the roadmap, but I'm a bit concerned at this point (at least compared to what I was hoping to see).
  • Reply 27 of 32
    Forget about the performance comparison. I have used the new MacBook Pro 14 for a few days. A while ago, when I used Touch ID to wake it up, the screen froze. Later it rebooted by itself. I have never had this kind of crash before. A crash report was sent to Apple.
  • Reply 28 of 32
    crowley Posts: 10,453 member
    Forget about the performance comparison. I have used the new MacBook Pro 14 for a few days. A while ago, when I used Touch ID to wake it up, the screen froze. Later it rebooted by itself. I have never had this kind of crash before. A crash report was sent to Apple.
    This is a thread about performance comparison. 
  • Reply 29 of 32
    DuhSesame Posts: 1,278 member
    tht said:
    Marvin said:
    https://www.notebookcheck.net/GeForce-RTX-3060-Laptop-GPU-vs-GeForce-RTX-3080-Laptop-GPU_10478_10474.247598.0.html

    The dual-chip for the 27" iMac should exceed a desktop 3080 and the quad chip for either the iMac or Mac Pro should exceed a 3090. The highest end likely comes with over 1TB/s of memory bandwidth and up to 256GB unified memory.
    I don't understand how they, Nvidia and the media, can say with a straight face that these are "Laptop" GPUs. More than half of the systems consume 140 to 180 watts in notebookcheck's game benchmarks for power consumption. I assume that's at-the-wall power, not the GPU itself. And now phrases like gaming PCs having a "built-in UPS" (the battery) have entered the vernacular. They run at about half the power of the desktop PCIe GPU cards, but that just tells me how bonkers it has gotten.

    There are some 15 to 17 inch portable monitors now. If those get thin enough and are TB/USB bus-powered, you can get yourself an SFF PC with a PCIe slot, a small monitor, peripherals, cabling, and a UPS, stuff it all into your bag, and have 90% of the same experience as these laptops.
    Mobile GPUs lag even further behind their desktop counterparts than mobile CPUs do; back then, the mobile variant of the same model could be twice as slow. This shows how good Apple Silicon is right now.

    I'm sure nobody would believe me at this point, but the 16" has the potential for a 2-die configuration, which IMO should match the upcoming iMac Pro. That doesn't mean it will happen, but it would be a missed opportunity if they don't.
  • Reply 30 of 32
    DuhSesame Posts: 1,278 member
    cgWerks said:
    OK, that isn't nearly as positive as I had hoped (and not as good as some previous estimates I've seen).

    That means the Pro is more entry-level dGPU, and the Max is kind of middle-of-the-road dGPU. I guess that's fine in laptops, but then hopefully we'll see more cores in the desktops that are coming (mini 'Pro', bigger iMac, etc.). I'm sure we'll see another 'Uber' version for the new Mac Pro, but I'm disappointed in terms of anything below that level if these same chips will go in them. Maybe there is still hope for eGPUs?

    And then there is still the issue of Metal (vs OpenGL, etc.) and apps kind of emulating until they get ported (if they ever do), which further degrades performance.
    I believe the current 16" is capable of dissipating 120 watts from that silicon; the current 8+2+32 still draws far less than that. I'd say right now it's more like the low-power silicon you'd expect from the previous 13", which just got surpassed by the 12th-gen mobile i7.

    With all that said...

    Right now there are no practical applications that can utilize all that power. There aren't many practical compute tasks for Macs, and if you want gaming, chances are you'd need emulation and workarounds, which will reduce your GPU performance & cancel out the gain. It's great & awkward at the same time. I'd go for a lower GPU just for that and save some extra power/bucks.
  • Reply 31 of 32
    I have used the M1 Max 32-core product for more than 5 weeks. I have tested SketchUp, Rhino 3D, and some other graphics programs including Adobe products. So far, the M1 Max's graphics performance is pretty similar to the Radeon graphics card in the 2016 iMac.
  • Reply 32 of 32
    cgWerks Posts: 2,952 member
    DuhSesame said:
    Right now there are no practical applications that can utilize all that power. There aren't many practical compute tasks for Macs, and if you want gaming, chances are you'd need emulation and workarounds, which will reduce your GPU performance & cancel out the gain. It's great & awkward at the same time. I'd go for a lower GPU just for that and save some extra power/bucks.
    Yes, the apps are the big problem. A couple of the apps I use might be a while: for one, all the shifts, I think, threw the dev schedule off (hopefully this year, though), and for the other, afaik the modeling kernel it uses hasn't been built/updated, so the dev will have to wait on that. Who knows how long all that will take. And then there is the Windows software that maybe can work if the whole Windows-on-ARM thing comes along. Lots of ifs right now. I'll most likely just keep my i7 mini/eGPU for that stuff as the transition happens.

    re: lower GPU - I suppose, but I have to get a better handle on the performance first. Need to see where the Pro falls... is it more like the RX 580 or Vega 56? I hate to buy a brand-new machine and have it be GPU-wise similar to stuff from years ago. That said, if I were buying a PC, the price of the better GPUs pushes them way up there in price as well. It's just kind of the reality of the times. (Heh, hopefully someone will write a Metal crypto-miner, as that would help me make the decision.)

    Nflight79 said:
    I have used the M1 Max 32-core product for more than 5 weeks. I have tested SketchUp, Rhino 3D, and some other graphics programs including Adobe products. So far, the M1 Max's graphics performance is pretty similar to the Radeon graphics card in the 2016 iMac.
    I hope a lot of it just needs optimization first and that is the issue, because that level of performance isn't acceptable otherwise!
    I saw some rather disappointing stuff in some modeling forums the other day too... like pretty poor performance given the cost (mostly unusable on a project that a PC with a mid-tier GPU would handle fine). But if it is running in emulation, that's probably to be expected. It may be a while for those of us in those fields unless we can switch apps when someone updates. The alpha Blender stuff looks promising.