AMD Radeon RX 6700 XT GPU may launch at March 3 event
AMD will be expanding its Radeon RX 6000 graphics card line on March 3, with the launch thought to introduce lower-priced options that could eventually be used with the Mac Pro or eGPU solutions.

The first Radeon RX 6000-series GPUs were introduced on October 28, with the RX 6800, RX 6800 XT, and RX 6900 XT unveiled by AMD. In an announcement on Twitter, AMD revealed plans to add more cards to the range.
The tweet proclaims that "the journey continues for RDNA2" at 11 a.m. Eastern on March 3. The event will be the third episode of AMD's "Where Gaming Begins" virtual event streams, and the tweet confirms it will include at least one new card.
On March 3rd, the journey continues for #RDNA2. Join us at 11AM US Eastern as we reveal the latest addition to the @AMD Radeon RX 6000 graphics family. https://t.co/5CFvT9D2SR pic.twitter.com/tUpUwRfpgk
-- Radeon RX (@Radeon)
Current speculation is that AMD's next card will extend the range down toward the value end of the spectrum. The card is thought to be the Radeon RX 6700 XT, and images and details leaked by VideoCardz on Sunday indicate Asus is already preparing versions for sale.
The RX 6700 XT is believed to use the Navi 22 XT GPU with 40 compute units, down from the 60 to 80 offered by the RX 6800 and RX 6900 XT. The stream processor count is also allegedly reduced to 2,560, well below the RX 6800's 3,840.
The card may also swap the 16GB of GDDR6 memory for 12GB as a cost-saving measure, and use a 192-bit memory bus instead of 256-bit. Memory bandwidth may drop accordingly, to as much as 384GB/s rather than the 512GB/s of its stablemates.
It will likely provide some power savings too, with a power consumption of up to 230W undercutting the RX 6800's 250W draw and the 300W of the two higher cards.
Rumors point to a launch date in March 2021 for the card, and potentially a price below $500. For comparison, the RX 6800 was priced at $579 at launch, while the RX 6800 XT cost $649, and the RX 6900 XT was $999.
The card could eventually become an upgrade option for Mac Pro users and those with Thunderbolt 3 eGPU enclosures, though not yet. macOS drivers do not exist for the RX 6000-series cards, but they are anticipated to arrive soon, given historical trends for the addition of support.
Apple's decision to drop Nvidia GPU support in favor of Radeon leaves owners of upgradable Macs with few video card options. The launch of more affordable high-end Radeon cards may help press Apple to support them.
Comments
Wait, what? The Apple transition from Intel CPUs and AMD GPUs to its own integrated CPUs and GPUs is slated to take two years. It started in November 2020, meaning there are only about 18 months left. While Apple is rumored to have one final run of Intel-based Mac Pro and iMac Pro workstations on the way while it works out the bugs with the M2 and M2X chips capable of replacing the Intel Core i9 and Xeon CPUs that currently inhabit them, you would be absolutely nuts to actually go out and spend all that cash on one. Case in point: the resale value of Intel-based Macs is already plummeting! https://www.zdnet.com/article/the-mac-price-crash-of-2021/
So even the idea of buying an Intel Mac now, using it for a couple of years, and then flipping it while still in "like new" condition to put the cash toward an Apple Silicon Mac Pro in early 2023 makes no sense, and getting the AMD eGPU with it makes even less.
Apple has finite resources and organizational focus. Realize that Macs have been playing second fiddle to iPhones and iPads for a while now because those units bring in far more money. Indeed, these days Macs probably come in fourth behind services (which also generate far more revenue than Macs) and wearables (where the Apple Watch and AirPods give Cupertino rare marketplace dominance). So you would be absolutely nuts to spend $6,000 or more on a device running an outdated software platform. Proof: you won't do it. No, you want everyone else to do it so Apple's market share doesn't take a nosedive. You want them to be the ones to "trust Apple" when it spends 10 seconds reading a boilerplate teleprompter speech on how it will continue to offer world-class support to its Intel-based Mac customers ... before pivoting to talking about how the future is ARM and everyone on x86 is going to be left behind. (And given that Apple is the only company with viable ARM PC and workstation products, and will be for the next 3-5 years ...)
Yeah, no. Only buy Apple's last batch of Intel-based machines if you are 100% comfortable replacing macOS with Windows 10 Pro for Workstations or Ubuntu on them three years down the line. The former isn't much of a problem, since Boot Camp will still be supported for a while, but drivers and the like will continue to be a big deal for the latter. Otherwise your platform is going to be an afterthought for a company that has the vast majority of its consumers and market share elsewhere (mobile, services, wearables) and regards your obsolete hardware platform as a burden and a chore to develop for.
Top-of-the-line graphics cards are a dirt-cheap upgrade to some people, but I can't see Apple considering drivers for AMD cards a priority.
For professionals (including myself), it's expected that you'll need to upgrade within 3-5 years. Heck, people don't even keep cars (a purchase around 10x more expensive than the average computer) for much longer than that these days.
The Radeon RX 5700 XT 50th Anniversary can get 10.1 TFLOPS with 40 compute units, so 253 GFLOPS per compute unit. This is actually the best performance per compute unit across the Radeon RX 5000 line.
The Radeon RX 6000 series is technically released, but they're extremely rare right now. I'm not going to count them until stores can go longer than an hour without selling out. Even so, they get 288 GFLOPS per compute unit.
GPU compute performance scales linearly with core count. Power performance scales a bit worse because high-core interconnects have to be more complicated, but not hugely so. 32 of Apple's A14/M1 GPU cores would get about 10.4 TFLOPS, beating the best AMD consumer cards you could get last generation and beating the Nvidia RTX 2080 (also 10.1 TFLOPS). That would still have low enough power draw to fit in a laptop, though admittedly, not a laptop Apple is interested in making. An iMac could easily have four times the M1's GPU.
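Taking the figures in the last few posts at face value, the per-compute-unit and linear-scaling arithmetic works out as sketched below. The 23.04 TFLOPS figure for the RX 6900 XT and the 2.6 TFLOPS figure for the 8-core M1 GPU are my own assumptions from commonly cited boost-clock specs, not numbers from the posts above:

```python
# Back-of-envelope FP32 throughput estimates, using the numbers quoted above.

def gflops_per_cu(tflops: float, compute_units: int) -> float:
    """FP32 GFLOPS delivered per compute unit (or GPU core)."""
    return tflops * 1000 / compute_units

# RX 5700 XT 50th Anniversary: 10.1 TFLOPS across 40 CUs -> ~253 GFLOPS/CU
print(gflops_per_cu(10.1, 40))

# RX 6900 XT (assumed 23.04 TFLOPS boost spec) across 80 CUs -> 288 GFLOPS/CU
print(gflops_per_cu(23.04, 80))

# Linear-scaling projection: an 8-core M1 GPU rated at ~2.6 TFLOPS implies
# 32 such cores would land near 10.4 TFLOPS.
print(2.6 / 8 * 32)
```

Real scaling would come in somewhat under the linear projection, for the interconnect reasons noted above, but it illustrates where the ~10.4 TFLOPS estimate comes from.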
Apple is screwing its own customers by still refusing to use (or permit the use of) Nvidia cards. Also, there are plenty of hardware features the Apple GPUs don't support, and unlike with CPUs, the architecture of the card doesn't really matter, as the drivers handle the Metal/OpenGL/Vulkan/DirectX translation, so the architecture has much less bearing on performance. Also, the M1 relies on the Metal shading language, some parts of which even Unity doesn't support, presumably because the user base is so small it's not worth it.
The M1 hardware is impressive, but much like Force Touch of yore, if the user base is too small, developers won't put effort into supporting the more obscure parts of it. Hardware is useless without software to go with it, and the software needs developers, who need market share. And Apple's disinterest in market share (stagnant, a symptom of Macs being borderline overpriced) means the support isn't likely to improve anytime soon, unfortunately.
https://www.geeks3d.com/20140305/amd-radeon-and-nvidia-geforce-fp32-fp64-gflops-table-computing/
The irony with those numbers is that AMD's FP16 and FP64 throughput stomps the RTX 30 series. There is a reason Nvidia isn't winning outright in gaming and people want the Radeons, but since we'll soon be passing 10 million custom Zen 2/Radeon SoC APUs for the PS5/Xbox Series X, it's been very difficult to get newer cards, and when they arrive they're sold immediately in large orders. AMD prioritized its TSMC allocation for EPYC, game consoles, Zen 3, and then the RX 6000 cards.
And right now the ones that are available from OEMs are absurdly overpriced, right alongside Nvidia's own top models.
AMD vowed to sell the RX 6700 XT on its own site and to start making more non-OEM [reference] models of all RX 6000-series cards available directly from AMD online.
The fact is Apple will never have a discrete GPU that can touch AMD or Nvidia, because of the minefield of IP required to do so. As it stands, Apple licenses the vast majority of its GPU IP [including tiling, which both AMD and Nvidia have had for quite some time] from Imagination Technologies. The inventor of the 'tiling' Apple bragged about is now at AMD, and it is nothing AMD and Nvidia haven't already had for several years.
MCM is coming to RDNA 3.0 and CDNA 2.0 cards, and no one, not even Nvidia, will be ready. The Xilinx merger now looms even larger for AMD. AMD is already incorporating Xilinx IP as well [as of now, Lisa Su has not said for which product lines], so it would be wise of Apple to extend eGPU support to future M-series Macs.
We already know MCM is coming this year to the MI200 for CDNA 2.0 HPC/supercomputing with Frontier, and to Zen 4 EPYC Genoa, whose specs have also been partially leaked.
Of course, if you bought a current M-series Mac you're screwed for eGPU support, and with all this power coming, Apple had better re-enable it in a later version of Big Sur or expect sales to start waning. As it stands, lots of refurbs are on the Store and M-series machines are selling at steep discounts, two things Apple rarely does, so I doubt sales are booming after the initial push.
As it stands, the Intel Mac mini is still available for sale: https://www.apple.com/shop/buy-mac/mac-mini/3.0ghz-intel-core-i5-6-core-processor-with-intel-uhd-graphics-630-512gb#
And if you want an eGPU, that's your option, alongside an Intel MacBook Pro.
If you don't know what MCM is, this is the patent; read up on it, it's quite interesting.
https://pdfaiw.uspto.gov/.aiw?PageNum=0&docid=20200409859&IDKey=8685EBB5F0F4&HomeUrl=/
There are roughly another dozen or so covering Neural Engines, machine learning, tensor-calculus-specific operations, ray tracing, and more that are of huge interest; you can dig them up on Justia and view the full PDFs at the US Patent Office.
Sure, the M1’s total memory throughput is not great, but it’s talking to two RAM chips with what appears to be a 16-bit bus. An iMac would almost certainly have two 64-bit channels, for ~8x the potential memory throughput.
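Taking the commenter's bus figures at face value (a 16-bit effective path on the M1 versus two 64-bit channels; the M1's actual bus width may differ), the ~8x figure is just the ratio of total bus widths at the same transfer rate:

```python
# Peak bandwidth scales with total bus width at a fixed transfer rate.
# These widths are the commenter's assumptions, not confirmed specs.
m1_bus_bits = 16          # claimed effective width on the M1
imac_bus_bits = 2 * 64    # two 64-bit channels

print(imac_bus_bits / m1_bus_bits)  # -> 8.0
```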
As for Apple and Nvidia, that’s as much Nvidia’s fault as it is Apple’s. Nvidia doesn’t let anybody know enough about their cards to write drivers, and Apple doesn’t let anybody else write kernel code (read: drivers) for their operating systems. Nvidia does this in the name of protecting their trade secrets, and Apple does it in the name of system stability, but the result is the same.