AMD Radeon RX 6700 XT GPU may launch at March 3 event

Posted in Current Mac Hardware
AMD will be expanding its Radeon RX 6000 graphics card line on March 3, with the launch thought to introduce lower-priced options that could eventually be used with the Mac Pro or eGPU solutions.




The first Radeon RX 6000-series GPUs were introduced on October 28, with AMD unveiling the RX 6800, RX 6800 XT, and RX 6900 XT. In an announcement made on Twitter, AMD says it plans to add more cards to the range.

The tweet proclaims "the journey continues for RDNA2" at 11 A.M. Eastern on March 3. The event will be the third episode of AMD's "Where Gaming Begins" virtual event streams, and is confirmed via the tweet to include at least one new card.

On March 3rd, the journey continues for #RDNA2. Join us at 11AM US Eastern as we reveal the latest addition to the @AMD Radeon RX 6000 graphics family. https://t.co/5CFvT9D2SR pic.twitter.com/tUpUwRfpgk

-- Radeon RX (@Radeon)


Current speculation is that AMD's next card will be one that extends the range down toward the value end of the spectrum. Thought to be the Radeon RX 6700 XT, the card was detailed in images and specifications leaked by VideoCardz on Sunday, which indicate Asus is already preparing versions for sale.

The RX 6700 XT is believed to use the Navi 22 XT GPU, with a reduced 40 compute units versus the 60 to 80 units offered by the RX 6800 and RX 6900 XT. The core count will also allegedly be reduced to 2,560, well below the 3,840 of the RX 6800.

The card may also switch out the 16GB of GDDR6 memory for 12GB as a cost-saving measure, and use a 192-bit memory bus instead of 256-bit. Memory bandwidth would shrink accordingly, with the card capable of up to 384GB/s rather than the 512GB/s offered by its stablemates.
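Those figures hang together arithmetically: RDNA 2 designs carry 64 stream processors per compute unit, and GDDR6 bandwidth is simply bus width multiplied by the memory's effective data rate. Below is a minimal sketch of that arithmetic, assuming 16Gbps GDDR6 as used on the RX 6800 series; the RX 6700 XT's actual memory speed has not been confirmed.

```python
# Rough arithmetic behind the rumored RX 6700 XT figures.
# Assumptions: 64 stream processors per RDNA 2 compute unit, and
# 16 Gbps effective GDDR6 (the rate used on the RX 6800 series).

STREAM_PROCESSORS_PER_CU = 64
GDDR6_GBPS_PER_PIN = 16

def shader_count(compute_units: int) -> int:
    """Stream processor ("core") count implied by the compute unit count."""
    return compute_units * STREAM_PROCESSORS_PER_CU

def bandwidth_gb_s(bus_width_bits: int, gbps_per_pin: float = GDDR6_GBPS_PER_PIN) -> float:
    """Peak memory bandwidth in GB/s: bus width (bits) * data rate / 8."""
    return bus_width_bits * gbps_per_pin / 8

print(shader_count(40))     # 2560 -- rumored RX 6700 XT
print(shader_count(60))     # 3840 -- RX 6800
print(bandwidth_gb_s(192))  # 384.0 GB/s on a 192-bit bus
print(bandwidth_gb_s(256))  # 512.0 GB/s on a 256-bit bus
```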

It will likely provide some power savings as well, with a power consumption of up to 230W undercutting the RX 6800's 250W draw and the 300W of the two higher-end cards.

Rumors point to a launch date in March 2021 for the card, and potentially a price below $500. For comparison, the RX 6800 was priced at $579 at launch, while the RX 6800 XT cost $649, and the RX 6900 XT was $999.

The card is a potential future upgrade for Mac Pro users and those with Thunderbolt 3 eGPU enclosures, though not at the moment. macOS drivers do not yet exist for the RX 6000-series cards, but they are anticipated to arrive soon, given how support for previous Radeon generations was added.

Apple's decision to drop Nvidia GPU support in favor of Radeon leaves owners of upgradable Macs with few graphics upgrade options. The launch of more affordable high-end Radeon cards may help press Apple to support them.

Comments

  • Reply 1 of 11
    cloudguy Posts: 323 member
    "with the launch thought to introduce lower-priced options that could eventually be used with the Mac Pro or eGPU solutions."

    Wait what? The Apple switch from Intel CPUs and AMD GPUs to their own integrated CPUs and GPUs lasts two years. It started in November 2020. Meaning that there is only 18 months left. While Apple is rumoured to have one final run of Intel-based Mac Pro and iMac Pro workstations on the way while they work out the bugs with the M2 and M2X CPUs that are capable of replacing the Intel Core i9 and Xeon CPUs that currently inhabit them, you would be absolutely nuts to actually go out and spend all that cash on one. Case in point: the resale value of Intel-based Macs is already plummeting! 
    https://www.zdnet.com/article/the-mac-price-crash-of-2021/ 
    So even the idea of buying an Intel Mac now, using it for a couple of years and then flipping it while still in "like new" status and using the cash for an Apple Silicon Mac Pro in early 2023 makes no sense, and getting the AMD eGPU with it even less so. 

    Apple has finite resources and organizational focus. Realize Macs have been playing second fiddle to iPhones and iPads for a while now because those units bring in far more money. Indeed, these days Macs probably come in fourth behind services (which also generate far more revenue than Macs) and wearables (where the Apple Watch and AirPods give Cupertino rare marketplace dominance). So you would be absolutely nuts to spend $6000 or more on a device running an outdated software platform. Proof: you won't do it. No, you want everyone else to do it so Apple's market share doesn't take a nosedive. You want them to be the ones to "trust Apple" when they spend 10 seconds reading a boilerplate teleprompter speech on how they will continue to offer world class support to their Intel-based Mac customers ... before they pivot to talking about how the future is ARM and everyone on x86 is going to be left behind. (And given that Apple is the only company with viable ARM PC and workstation products and will be for the next 3-5 years ...)

    Yeah, no. Only buy Apple's last batch of Intel-based machines if you are 100% comfortable replacing macOS with Windows 10 Workstation Pro or Ubuntu on them 3 years down the line. The former isn't that much of a problem - Boot Camp will still be supported for a while - but drivers and such will continue to be a big deal for the latter. Otherwise your platform is going to be an afterthought for a company that has the vast majority of its consumers and market share elsewhere (mobile, services, wearables) and regards your obsolete hardware platform as a burden and a chore to develop for. 
  • Reply 2 of 11
    aderutter Posts: 604 member
    Plenty of people buy Macs and write them off through depreciation in less than 3 years, so I’d say “only buy Apple Intel-based machines if you are 100% comfortable seeing them as disposable assets with practically zero resale value in 2 years’ time”.

    Top-of-the-line graphics cards are a dirt-cheap upgrade to some people, but I can’t see Apple considering drivers for AMD cards a priority.

  • Reply 3 of 11
    Right now, no one is talking about Apple's upcoming discrete GPUs. Apple has not announced them but they pretty much have to be released this year. The release of Apple discrete GPUs will be an extremely important event. It is clear that Apple can compete with Intel/AMD in CPUs but how will it compare on GPUs? Apple's embedded GPUs are at best 1/10th the speed of discrete GPUs. That's actually pretty impressive for a mobile GPU built into an iPhone or iPad but it is not going to impress anyone buying an iMac, let alone a Mac Pro. The only other choice Apple has is to write Apple Silicon drivers for an AMD discrete GPU. That seems counterproductive given Apple's stated intention to build its own CPUs and GPUs going forwards.
  • Reply 4 of 11
    auxio Posts: 2,727 member
    cloudguy said:

    Wait what?

    ... a bunch of other "Apple doesn't care about Mac users" nonsense ...

    Yeah, no. Only buy Apple's last batch of Intel-based machines if you are 100% comfortable replacing macOS with Windows 10 Workstation Pro or Ubuntu on them 3 years down the line.
    Put your money where your mouth is.  I'll wager you $10k that Apple supports Intel-based Macs for at least 5 years.

    For professionals (including myself), it's expected that you'll need to upgrade within 3-5 years.  Heck, people don't even keep cars (a purchase around 10x more expensive than the average computer) for much longer than that these days.
  • Reply 5 of 11
    zimmie Posts: 651 member
    Right now, no one is talking about Apple's upcoming discrete GPUs. Apple has not announced them but they pretty much have to be released this year. The release of Apple discrete GPUs will be an extremely important event. It is clear that Apple can compete with Intel/AMD in CPUs but how will it compare on GPUs? Apple's embedded GPUs are at best 1/10th the speed of discrete GPUs. That's actually pretty impressive for a mobile GPU built into an iPhone or iPad but it is not going to impress anyone buying an iMac, let alone a Mac Pro. The only other choice Apple has is to write Apple Silicon drivers for an AMD discrete GPU. That seems counterproductive given Apple's stated intention to build its own CPUs and GPUs going forwards.
    Your "1/10th the speed" statement is incorrect. The M1's GPU can perform 2.6 TFLOPS with eight cores. That's 325 GFLOPS per core.

    The Radeon RX 5700 XT 50th Anniversary can get 10.1 TFLOPS with 40 compute units, so 253 GFLOPS per compute unit. This is actually the best performance per compute unit across the Radeon RX 5000 line.

    The Radeon RX 6000 series is technically released, but they're extremely rare right now. I'm not going to count them until stores can go longer than an hour without selling out. Even so, they get 288 GFLOPS per compute unit.

    GPU compute performance scales linearly with core count. Power performance scales a bit worse because high-core interconnects have to be more complicated, but not hugely so. 32 of Apple's A14/M1 GPU cores would get about 10.4 TFLOPS, beating the best AMD consumer cards you could get last generation and beating the Nvidia RTX 2080 (also 10.1 TFLOPS). That would still have low enough power draw to fit in a laptop, though admittedly, not a laptop Apple is interested in making. An iMac could easily have four times the M1's GPU.
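    For anyone checking the math, the per-core figures and the 32-core projection above reduce to a few divisions. A minimal sketch, using the TFLOPS totals quoted in this post rather than independent measurements:

```python
# Per-core throughput arithmetic, using the TFLOPS totals quoted above.

def gflops_per_core(total_tflops: float, cores: int) -> float:
    """Average GFLOPS per GPU core / compute unit."""
    return total_tflops * 1000 / cores

print(gflops_per_core(2.6, 8))    # 325.0 -- M1 GPU, 8 cores
print(gflops_per_core(10.1, 40))  # 252.5 -- RX 5700 XT, 40 CUs

# Linear-scaling estimate for a hypothetical 32-core Apple GPU:
print(32 * 325 / 1000)            # 10.4 TFLOPS
```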
  • Reply 6 of 11
    elijahg Posts: 2,759 member
    zimmie said:
    Right now, no one is talking about Apple's upcoming discrete GPUs. Apple has not announced them but they pretty much have to be released this year. The release of Apple discrete GPUs will be an extremely important event. It is clear that Apple can compete with Intel/AMD in CPUs but how will it compare on GPUs? Apple's embedded GPUs are at best 1/10th the speed of discrete GPUs. That's actually pretty impressive for a mobile GPU built into an iPhone or iPad but it is not going to impress anyone buying an iMac, let alone a Mac Pro. The only other choice Apple has is to write Apple Silicon drivers for an AMD discrete GPU. That seems counterproductive given Apple's stated intention to build its own CPUs and GPUs going forwards.
    Your "1/10th the speed" statement is incorrect. The M1's GPU can perform 2.6 TFLOPS with eight cores. That's 325 GFLOPS per core.

    The Radeon RX 5700 XT 50th Anniversary can get 10.1 TFLOPS with 40 compute units, so 253 GFLOPS per compute unit. This is actually the best performance per compute unit across the Radeon RX 5000 line.

    The Radeon RX 6000 series is technically released, but they're extremely rare right now. I'm not going to count them until stores can go longer than an hour without selling out. Even so, they get 288 GFLOPS per compute unit.

    GPU compute performance scales linearly with core count. Power performance scales a bit worse because high-core interconnects have to be more complicated, but not hugely so. 32 of Apple's A14/M1 GPU cores would get about 10.4 TFLOPS, beating the best AMD consumer cards you could get last generation and beating the Nvidia RTX 2080 (also 10.1 TFLOPS). That would still have low enough power draw to fit in a laptop, though admittedly, not a laptop Apple is interested in making. An iMac could easily have four times the M1's GPU.
    FLOPS isn't a great measurement of GPU power, in graphics processing at least, due to memory bandwidth etc. being important. But the top-end Nvidia 30x0 GPUs range from 12.7 to 35.7 TFLOPS. AMD GPUs, whilst better than they were a few years ago, are still far behind; the RX 6900 gets 23 TFLOPS. Also texture fill rate is important, and the M1 gets only 82GT/sec vs 460GT/sec for the RTX 3090, and 584GT/sec for the RX 6900. This is mainly due to its memory bandwidth being a fraction of a discrete GPU's, 68GB/sec vs 500GB+/sec. Adding more cores to improve the texture fill rate hits diminishing returns pretty quickly due to shared buses etc., so just using more GPU cores won't help things as much as it initially appears.

    Apple is screwing its own customers by still refusing to use (or permit use of) Nvidia cards. Also, there are plenty of hardware features the Apple GPUs don't support, and unlike with CPUs, the architecture of the card doesn't really matter as the drivers handle the Metal/OpenGL/Vulkan/DirectX translation, so the architecture has much less bearing on performance. Also the M1 relies on the Metal shading language, some parts of which even Unity doesn't support, presumably because the userbase is so small it's not worth it.

    M1 hardware is impressive, but much like Force Touch of yore, if the userbase is too small developers won't put effort into supporting the more obscure parts of it. Hardware is useless without software to go with it, and the software needs developers, who need market share. And Apple's disinterest in market share (stagnant, a symptom of Macs being borderline overpriced) means the support isn't likely to improve anytime soon, unfortunately.
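    The 68GB/sec memory bandwidth figure cited in this post also follows from simple arithmetic. A minimal sketch, assuming the widely reported (but not Apple-published) 128-bit LPDDR4X-4266 configuration for the M1:

```python
# Peak memory bandwidth from bus width and transfer rate.
# Assumption: the M1 pairs a 128-bit LPDDR4X interface with 4266 MT/s memory,
# as widely reported in teardowns; Apple does not publish these figures.

def peak_bandwidth_gb_s(bus_width_bits: int, transfer_rate_mt_s: float) -> float:
    """Bytes per transfer (bus width / 8) times transfers per second."""
    return bus_width_bits / 8 * transfer_rate_mt_s / 1000

print(round(peak_bandwidth_gb_s(128, 4266), 1))  # ~68.3 GB/s -- M1
print(peak_bandwidth_gb_s(256, 16000))           # 512.0 GB/s -- RX 6800/6900 XT-class GDDR6
```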
  • Reply 7 of 11
    mdriftmeyer Posts: 7,503 member
    elijahg said:
    zimmie said:
    Right now, no one is talking about Apple's upcoming discrete GPUs. Apple has not announced them but they pretty much have to be released this year. The release of Apple discrete GPUs will be an extremely important event. It is clear that Apple can compete with Intel/AMD in CPUs but how will it compare on GPUs? Apple's embedded GPUs are at best 1/10th the speed of discrete GPUs. That's actually pretty impressive for a mobile GPU built into an iPhone or iPad but it is not going to impress anyone buying an iMac, let alone a Mac Pro. The only other choice Apple has is to write Apple Silicon drivers for an AMD discrete GPU. That seems counterproductive given Apple's stated intention to build its own CPUs and GPUs going forwards.
    Your "1/10th the speed" statement is incorrect. The M1's GPU can perform 2.6 TFLOPS with eight cores. That's 325 GFLOPS per core.

    The Radeon RX 5700 XT 50th Anniversary can get 10.1 TFLOPS with 40 compute units, so 253 GFLOPS per compute unit. This is actually the best performance per compute unit across the Radeon RX 5000 line.

    The Radeon RX 6000 series is technically released, but they're extremely rare right now. I'm not going to count them until stores can go longer than an hour without selling out. Even so, they get 288 GFLOPS per compute unit.

    GPU compute performance scales linearly with core count. Power performance scales a bit worse because high-core interconnects have to be more complicated, but not hugely so. 32 of Apple's A14/M1 GPU cores would get about 10.4 TFLOPS, beating the best AMD consumer cards you could get last generation and beating the Nvidia RTX 2080 (also 10.1 TFLOPS). That would still have low enough power draw to fit in a laptop, though admittedly, not a laptop Apple is interested in making. An iMac could easily have four times the M1's GPU.
    FLOPS isn't a great measurement of GPU power, in graphics processing at least, due to memory bandwidth etc. being important. But the top-end Nvidia 30x0 GPUs range from 12.7 to 35.7 TFLOPS. AMD GPUs, whilst better than they were a few years ago, are still far behind; the RX 6900 gets 23 TFLOPS. Also texture fill rate is important, and the M1 gets only 82GT/sec vs 460GT/sec for the RTX 3090, and 584GT/sec for the RX 6900. This is mainly due to its memory bandwidth being a fraction of a discrete GPU's, 68GB/sec vs 500GB+/sec. Adding more cores to improve the texture fill rate hits diminishing returns pretty quickly due to shared buses etc., so just using more GPU cores won't help things as much as it initially appears.

    Apple is screwing its own customers by still refusing to use (or permit use of) Nvidia cards. Also, there are plenty of hardware features the Apple GPUs don't support, and unlike with CPUs, the architecture of the card doesn't really matter as the drivers handle the Metal/OpenGL/Vulkan/DirectX translation, so the architecture has much less bearing on performance. Also the M1 relies on the Metal shading language, some parts of which even Unity doesn't support, presumably because the userbase is so small it's not worth it.

    M1 hardware is impressive, but much like Force Touch of yore, if the userbase is too small developers won't put effort into supporting the more obscure parts of it. Hardware is useless without software to go with it, and the software needs developers, who need market share. And Apple's disinterest in market share (stagnant, a symptom of Macs being borderline overpriced) means the support isn't likely to improve anytime soon, unfortunately.
    Those TFLOPS are FP32 single precision, and you're comparing the RTX 3090 against the RX 6900.


    https://www.geeks3d.com/20140305/amd-radeon-and-nvidia-geforce-fp32-fp64-gflops-table-computing/

    The irony with those numbers is that the FP16 and FP64 throughput for AMD stomps the RTX 30 series. There is a reason Nvidia isn't winning gamers over and people want the Radeons, but since we'll soon be passing 10 million custom Zen 2/Radeon SoC APUs for the PS5/Xbox Series X, it's been very difficult to get the newer cards, and when they do arrive they're sold immediately in large orders. AMD prioritized its TSMC allocation to EPYC, game consoles, Zen 3, and then the RX 6000 cards.
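    As a rough illustration of the FP16/FP64 point above, half- and double-precision throughput follows from each architecture's ratio relative to FP32. A minimal sketch, assuming the commonly cited ratios (RDNA 2: 2:1 FP16, 1:16 FP64; Ampere GA102, non-tensor: 1:1 FP16, 1:64 FP64) and the FP32 totals quoted earlier in the thread:

```python
# FP16/FP64 throughput derived from FP32 totals and per-architecture ratios.
# Assumptions: commonly cited ratios -- RDNA 2 (Navi 21): FP16 2:1, FP64 1:16;
# Ampere GA102 (non-tensor): FP16 1:1, FP64 1:64. FP32 totals as quoted in-thread.

def derived_tflops(fp32_tflops: float, ratio: float) -> float:
    """Throughput at another precision, given its ratio to FP32."""
    return fp32_tflops * ratio

RX_6900_XT_FP32 = 23.0   # TFLOPS, as quoted above
RTX_3090_FP32 = 35.7     # TFLOPS, as quoted above

print(derived_tflops(RX_6900_XT_FP32, 2))       # 46.0  TFLOPS FP16 (packed math)
print(derived_tflops(RTX_3090_FP32, 1))         # 35.7  TFLOPS FP16 (non-tensor)
print(derived_tflops(RX_6900_XT_FP32, 1 / 16))  # ~1.44 TFLOPS FP64
print(derived_tflops(RTX_3090_FP32, 1 / 64))    # ~0.56 TFLOPS FP64
```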

    And right now the ones that are available from OEMs are absurdly overpriced, right alongside Nvidia's own top models.

    AMD vowed to sell the RX 6700 XT on its own site, and to start making more non-OEM [reference] models across the RX 6000 series available directly from AMD online.

    The fact is Apple will never have a discrete GPU to touch AMD or Nvidia because of the minefield of IP required to do so. As it stands, they're licensing the vast majority of their IP [including tiling, which both AMD and Nvidia have had for quite some time] from Imagination Technologies. The inventor of the 'tiling' Apple bragged about is at AMD, and it is nothing new; AMD and Nvidia have had it for several years.

    MCM is coming to RDNA 3.0 and CDNA 2.0 cards, and no one, not even Nvidia, will be ready. The Xilinx merger looms even larger for AMD. They are already incorporating Xilinx IP as well [as of now Lisa Su has not said for which product lines], so it would be wise of Apple to extend eGPU support to future M-series Macs.

    We already know MCM is coming to the MI200 this year for CDNA 2.0 HPC/supercomputing with Frontier, and to Zen 4 EPYC Genoa, whose specs have also been partially leaked.

    Of course, if you bought a current M-series Mac you're screwed for eGPU support, and with all this power coming, Apple had better re-enable it in a later version of Big Sur or expect sales to start waning. As it stands, lots of refurbs are on the Store and M-series machines are selling at steep discounts -- two things Apple rarely does -- so I doubt they are booming in sales after the initial push.

    As it stands, the Intel Mac mini is still available for sale: https://www.apple.com/shop/buy-mac/mac-mini/3.0ghz-intel-core-i5-6-core-processor-with-intel-uhd-graphics-630-512gb#

    And if you want an eGPU, that's your option, alongside an Intel MacBook Pro. 

    If you don't know what MCM is, this is the patent; it is quite an interesting read.

    https://pdfaiw.uspto.gov/.aiw?PageNum=0&docid=20200409859&IDKey=8685EBB5F0F4 &HomeUrl=/

    There are roughly another dozen or so covering Neural Engines, machine learning, tensor-calculus-specific operations, ray tracing and more that are of huge interest; you can dig them up on Justia and view the full PDFs at the US Patent Office.
  • Reply 8 of 11
    cloudguy said:
    "with the launch thought to introduce lower-priced options that could eventually be used with the Mac Pro or eGPU solutions."

    Wait what? The Apple switch from Intel CPUs and AMD GPUs to their own integrated CPUs and GPUs lasts two years. It started in November 2020. Meaning that there is only 18 months left. While Apple is rumoured to have one final run of Intel-based Mac Pro and iMac Pro workstations on the way while they work out the bugs with the M2 and M2X CPUs that are capable of replacing the Intel Core i9 and Xeon CPUs that currently inhabit them, you would be absolutely nuts to actually go out and spend all that cash on one. Case in point: the resale value of Intel-based Macs is already plummeting! https://www.zdnet.com/article/the-mac-price-crash-of-2021/ 
    So even the idea of buying an Intel Mac now, using it for a couple of years and then flipping it while still in "like new" status and using the cash for an Apple Silicon Mac Pro in early 2023 makes no sense, and getting the AMD eGPU with it even less so. 

    Apple has finite resources and organizational focus. Realize Macs have been playing second fiddle to iPhones and iPads for a while now because those units bring in far more money. Indeed, these days Macs probably come in fourth behind services (which also generate far more revenue than Macs) and wearables (where the Apple Watch and AirPods give Cupertino rare marketplace dominance). So you would be absolutely nuts to spend $6000 or more on a device running an outdated software platform. Proof: you won't do it. No, you want everyone else to do it so Apple's market share doesn't take a nosedive. You want them to be the ones to "trust Apple" when they spend 10 seconds reading a boilerplate teleprompter speech on how they will continue to offer world class support to their Intel-based Mac customers ... before they pivot to talking about how the future is ARM and everyone on x86 is going to be left behind. (And given that Apple is the only company with viable ARM PC and workstation products and will be for the next 3-5 years ...)

    Yeah, no. Only buy Apple's last batch of Intel-based machines if you are 100% comfortable replacing macOS with Windows 10 Workstation Pro or Ubuntu on them 3 years down the line. The former isn't that much of a problem - Boot Camp will still be supported for a while - but drivers and such will continue to be a big deal for the latter. Otherwise your platform is going to be an afterthought for a company that has the vast majority of its consumers and market share elsewhere (mobile, services, wearables) and regards your obsolete hardware platform as a burden and a chore to develop for. 
    There's a couple of things here - firstly, this might be an upgrade option for people who already own a 2019 Mac Pro, and secondly, many people who are in the market for a Mac Pro have work to do now, and there is no M1 option for them today that gets anywhere near a 6700 level of GPU. Yes, I agree for most people the logic would be to wait, but most people don't drop what they do on a Mac Pro unless they have a compelling need.
  • Reply 9 of 11
    cloudguy said:
    "with the launch thought to introduce lower-priced options that could eventually be used with the Mac Pro or eGPU solutions."

    Wait what? The Apple switch from Intel CPUs and AMD GPUs to their own integrated CPUs and GPUs lasts two years. It started in November 2020. Meaning that there is only 18 months left. While Apple is rumoured to have one final run of Intel-based Mac Pro and iMac Pro workstations on the way while they work out the bugs with the M2 and M2X CPUs that are capable of replacing the Intel Core i9 and Xeon CPUs that currently inhabit them, you would be absolutely nuts to actually go out and spend all that cash on one. Case in point: the resale value of Intel-based Macs is already plummeting! https://www.zdnet.com/article/the-mac-price-crash-of-2021/ 
    So even the idea of buying an Intel Mac now, using it for a couple of years and then flipping it while still in "like new" status and using the cash for an Apple Silicon Mac Pro in early 2023 makes no sense, and getting the AMD eGPU with it even less so. 

    Apple has finite resources and organizational focus. Realize Macs have been playing second fiddle to iPhones and iPads for a while now because those units bring in far more money. Indeed, these days Macs probably come in fourth behind services (which also generate far more revenue than Macs) and wearables (where the Apple Watch and AirPods give Cupertino rare marketplace dominance). So you would be absolutely nuts to spend $6000 or more on a device running an outdated software platform. Proof: you won't do it. No, you want everyone else to do it so Apple's market share doesn't take a nosedive. You want them to be the ones to "trust Apple" when they spend 10 seconds reading a boilerplate teleprompter speech on how they will continue to offer world class support to their Intel-based Mac customers ... before they pivot to talking about how the future is ARM and everyone on x86 is going to be left behind. (And given that Apple is the only company with viable ARM PC and workstation products and will be for the next 3-5 years ...)

    Yeah, no. Only buy Apple's last batch of Intel-based machines if you are 100% comfortable replacing macOS with Windows 10 Workstation Pro or Ubuntu on them 3 years down the line. The former isn't that much of a problem - Boot Camp will still be supported for a while - but drivers and such will continue to be a big deal for the latter. Otherwise your platform is going to be an afterthought for a company that has the vast majority of its consumers and market share elsewhere (mobile, services, wearables) and regards your obsolete hardware platform as a burden and a chore to develop for. 
    I think the thing is, people who are in the market for a Mac Pro probably have work to do now. There is no M1 option today that will give them anywhere near the GPU processing power of a 6700, or a 5700 for that matter. There are also existing Mac Pro owners who would gladly upgrade to a 6700 or 6800 if they get macOS drivers. 3 years is a long time in Internet years.
  • Reply 10 of 11
    Assuming that this card will actually ever be in stock, and not immediately mass-purchased by scalpers' bots like all the rest of the GPUs on the market, that is...
  • Reply 11 of 11
    zimmie Posts: 651 member
    elijahg said:
    zimmie said:
    Right now, no one is talking about Apple's upcoming discrete GPUs. Apple has not announced them but they pretty much have to be released this year. The release of Apple discrete GPUs will be an extremely important event. It is clear that Apple can compete with Intel/AMD in CPUs but how will it compare on GPUs? Apple's embedded GPUs are at best 1/10th the speed of discrete GPUs. That's actually pretty impressive for a mobile GPU built into an iPhone or iPad but it is not going to impress anyone buying an iMac, let alone a Mac Pro. The only other choice Apple has is to write Apple Silicon drivers for an AMD discrete GPU. That seems counterproductive given Apple's stated intention to build its own CPUs and GPUs going forwards.
    Your "1/10th the speed" statement is incorrect. The M1's GPU can perform 2.6 TFLOPS with eight cores. That's 325 GFLOPS per core.

    The Radeon RX 5700 XT 50th Anniversary can get 10.1 TFLOPS with 40 compute units, so 253 GFLOPS per compute unit. This is actually the best performance per compute unit across the Radeon RX 5000 line.

    The Radeon RX 6000 series is technically released, but they're extremely rare right now. I'm not going to count them until stores can go longer than an hour without selling out. Even so, they get 288 GFLOPS per compute unit.

    GPU compute performance scales linearly with core count. Power performance scales a bit worse because high-core interconnects have to be more complicated, but not hugely so. 32 of Apple's A14/M1 GPU cores would get about 10.4 TFLOPS, beating the best AMD consumer cards you could get last generation and beating the Nvidia RTX 2080 (also 10.1 TFLOPS). That would still have low enough power draw to fit in a laptop, though admittedly, not a laptop Apple is interested in making. An iMac could easily have four times the M1's GPU.
    FLOPS isn't a great measurement of GPU power, in graphics processing at least, due to memory bandwidth etc. being important. But the top-end Nvidia 30x0 GPUs range from 12.7 to 35.7 TFLOPS. AMD GPUs, whilst better than they were a few years ago, are still far behind; the RX 6900 gets 23 TFLOPS. Also texture fill rate is important, and the M1 gets only 82GT/sec vs 460GT/sec for the RTX 3090, and 584GT/sec for the RX 6900. This is mainly due to its memory bandwidth being a fraction of a discrete GPU's, 68GB/sec vs 500GB+/sec. Adding more cores to improve the texture fill rate hits diminishing returns pretty quickly due to shared buses etc., so just using more GPU cores won't help things as much as it initially appears.

    Apple is screwing its own customers by still refusing to use (or permit use of) Nvidia cards. Also, there are plenty of hardware features the Apple GPUs don't support, and unlike with CPUs, the architecture of the card doesn't really matter as the drivers handle the Metal/OpenGL/Vulkan/DirectX translation, so the architecture has much less bearing on performance. Also the M1 relies on the Metal shading language, some parts of which even Unity doesn't support, presumably because the userbase is so small it's not worth it.

    M1 hardware is impressive, but much like Force Touch of yore, if the userbase is too small developers won't put effort into supporting the more obscure parts of it. Hardware is useless without software to go with it, and the software needs developers, who need market share. And Apple's disinterest in market share (stagnant, a symptom of Macs being borderline overpriced) means the support isn't likely to improve anytime soon, unfortunately.
    Until you can actually buy RX 6000 or RTX 3000 cards in a store, they’re functionally not released. AMD and Nvidia have been shipping preview-level quantities. Wake me when they are available enough that scalpers aren’t able to sell them for nearly three times MSRP.

    Sure, the M1’s total memory throughput is not great, but it’s talking to two RAM chips with what appears to be a 16-bit bus. An iMac would almost certainly have two 64-bit channels, for ~8x the potential memory throughput.

    As for Apple and Nvidia, that’s as much Nvidia’s fault as it is Apple’s. Nvidia doesn’t let anybody know enough about their cards to write drivers, and Apple doesn’t let anybody else write kernel code (read: drivers) for their operating systems. Nvidia does this in the name of protecting their trade secrets, and Apple does it in the name of system stability, but the result is the same.