AMD to unveil Radeon RX 6000 GPU family on Oct. 28
AMD is set to announce its new line of Radeon RX 6000 graphics chips at an event on Oct. 28, the chipmaker said Thursday.

Credit: AMD
The new GPUs will be based on AMD's new RDNA 2 architecture, which could bring a 50% performance-per-watt boost and "uncompromising 4K gaming."
Exact details and specifications for the upcoming chips are still sparse. However, AMD said that the Oct. 28 event will let users learn more about RDNA 2, Radeon RX 6000 chips, and the company's "deep collaboration with game developers and ecosystem partners."
Along with increased performance and power efficiency, the GPUs will also feature ray-tracing capabilities and variable-rate shading. That'll bring the AMD chips more in line with main rival Nvidia. Rumors from earlier in 2020 also suggest that the RDNA 2 cards could come equipped with up to 16GB of GDDR6 video memory, a 256-bit bus, and more fans for additional cooling.
Leaker @coreteks has also indicated that AMD's goal may be to undercut Nvidia's pricing, though it isn't clear how much the Radeon RX 6000 series will retail for.
Apple presently has drivers in macOS for the Radeon RX 5700, Radeon VII, Vega 64, Vega 56, and most of the 400 and 500 series PCI-E cards. While Mac drivers typically don't arrive day-and-date with the cards' releases, they do arrive within a few months of unveiling, in a macOS update.
AMD cards are the only PCI-E graphics cards that Mac Pro users can install internally, or that Mac-based Thunderbolt 3 eGPU users can use, at the moment. Apple has ditched Nvidia GPU support in favor of Radeon cards, and there are no signs of it returning any time soon.
The AMD announcement event will kick off at 12 p.m. Eastern Time (9 a.m. Pacific) on Wednesday, Oct. 28.

Comments
Anyway, can anyone clarify AMD GPU codenames, both the chips and the graphics API support? It's getting confusing out there: Navi, Navi 14, Navi 12, RDNA, RDNA 2, and so on.
The Polaris chip family implements the Graphics Core Next (GCN) 4 instruction set. The codenames for the individual chips come from the "Arctic Islands" family.
Polaris 10 - RX 470, RX 480
Polaris 11 - RX 460
Polaris 12 - RX 540, RX 550
Polaris 20 - RX 570, RX 580
Polaris 21 - RX 560
Polaris 22 - RX Vega M GH, RX Vega M L
Polaris 30 - RX 590
After that came the GCN 5 instruction set and the Vega chip family:
Vega 10 - RX Vega 56, RX Vega 64
Vega 12 - Pro Vega 16, Pro Vega 20
Vega 20 - Pro Vega II, Radeon VII
After GCN 5 came the RDNA 1 instruction set and the Navi chip family:
Navi 10 - RX 5600, RX 5700
Navi 14 - RX 5300, RX 5500
Yes, it's all gratuitously confusing. The higher the number within a given chip family, the lower the performance.
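For anyone who wants the mapping above in programmatic form, here's a minimal lookup table. The data comes straight from the list; the helper name `family_of` is just an illustration:

```python
# Chip codename -> (instruction-set family, retail cards), taken from
# the generations listed above. Purely illustrative reference data.
CHIP_FAMILIES = {
    "Polaris 10": ("GCN 4", ["RX 470", "RX 480"]),
    "Polaris 11": ("GCN 4", ["RX 460"]),
    "Polaris 12": ("GCN 4", ["RX 540", "RX 550"]),
    "Polaris 20": ("GCN 4", ["RX 570", "RX 580"]),
    "Polaris 21": ("GCN 4", ["RX 560"]),
    "Polaris 22": ("GCN 4", ["RX Vega M GH", "RX Vega M L"]),
    "Polaris 30": ("GCN 4", ["RX 590"]),
    "Vega 10": ("GCN 5", ["RX Vega 56", "RX Vega 64"]),
    "Vega 12": ("GCN 5", ["Pro Vega 16", "Pro Vega 20"]),
    "Vega 20": ("GCN 5", ["Pro Vega II", "Radeon VII"]),
    "Navi 10": ("RDNA 1", ["RX 5600", "RX 5700"]),
    "Navi 14": ("RDNA 1", ["RX 5300", "RX 5500"]),
}

def family_of(card: str):
    """Return (chip codename, instruction-set family) for a retail card."""
    for chip, (arch, cards) in CHIP_FAMILIES.items():
        if card in cards:
            return chip, arch
    return None
```

So `family_of("RX 580")` resolves to Polaris 20 / GCN 4, while `family_of("Radeon VII")` resolves to Vega 20 / GCN 5.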
How Apple could duplicate that in an integrated-graphics SoC is something I doubt right now, even assuming it's something they're looking at. Ray tracing is one of the most difficult things to do in real time; the number of calculations is immense. Apple also uses a small amount of shared RAM, which isn't considered the best way to feed graphics hardware, so we'll see.
As far as iGPUs versus dGPUs, the common distinctions between the two are gradually blurring. If you look at the Xbox Series X SoC, it's basically a 12 TFLOP GPU with a vestigial CPU (an 8-core Zen 2), in terms of the die area the GPU and CPU occupy. And that's fabbed on TSMC 7nm; TSMC 5nm offers roughly 70% more transistors in the same area. So I can see a TSMC 5nm SoC with ray-tracing hardware in it. Intel's Tiger Lake is getting closer to this, with its iGPU taking about 40% of the chip area. This will push dGPUs into higher-performance niches, at commensurately higher wattage tiers. It's going to be interesting to see how the dGPU market shakes out. Maybe server GPUs will account for the vast majority of dGPU sales in a few years, as iGPUs take over even more of the mid-range gaming market.
It's still a very big question how much power Apple is willing to burn on the GPU, and how they get more perf/Watt out of it than competitors do. The obvious way is to run the iGPU at low clocks, use a lot of GPU cores, and feed it with a high-bandwidth memory subsystem. That requires a lot of chip area and transistors, which is a luxury Apple may have: they don't need a 50% margin on their chips, and they're on the densest fab.
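The "wide and slow" argument can be sketched with a toy model: dynamic power scales roughly with cores × frequency × voltage², throughput scales with cores × frequency, and voltage has to rise with clock speed. The numbers below are made-up illustrations, not Apple's figures:

```python
# Toy model of GPU throughput vs. dynamic power.
# throughput ~ cores * freq; dynamic power ~ cores * freq * volts^2.
# All numbers are illustrative assumptions.
def gpu_estimate(cores: int, freq_ghz: float, volts: float):
    throughput = cores * freq_ghz            # arbitrary work units
    power = cores * freq_ghz * volts ** 2    # arbitrary power units
    return throughput, power

# A "narrow and fast" design vs. a "wide and slow" one doing the same
# total work. The lower clock permits a lower supply voltage.
narrow_tp, narrow_w = gpu_estimate(cores=20, freq_ghz=2.0, volts=1.00)
wide_tp, wide_w = gpu_estimate(cores=40, freq_ghz=1.0, volts=0.75)

assert narrow_tp == wide_tp  # same work done...
# ...but the wide design burns ~44% less power (22.5 vs. 40 units),
# at the cost of twice the cores, i.e. twice the die area.
```

That area-for-power trade is exactly the luxury a company with no external chip margin and the densest fab can afford.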
Large graphics RAM sizes are for 3D applications or GPU compute applications. Not a lot of people are doing that on ultrabooks. If you are playing a lot of 3D games, buy a PC desktop. If you do a lot of GPU compute, you should get a more performant computer or an eGPU for your ultrabook. If you are doing any of these things, getting the base amount of non-upgradeable RAM is the last thing you should do.
For web browsing, 8 GB will be fine for most people. Maybe not for heavy web workloads, but fine for light browser duty: school work, office automation, even a lot of programming/engineering tasks.
While Apple’s built-in GPU will compete well against Intel’s new integrated graphics (which is twice as powerful as the current generation it replaces next month), and possibly against low-end and even some lower-mid-range GPU boards, none of that can do ray tracing or other really high-level computations. If Apple manages to come out with a separate GPU, then things could be somewhat different. But Apple has stated that one reason its GPU will be competitive is its “unique” admixture of the other SoC elements, and the software tying it all together. I’ve said that if Apple can figure out how to put those elements on a separate GPU, and give that GPU sufficient bandwidth with proper graphics memory, and a lot of it, within a decent power budget, they could really have something.
But none of us knows what Apple’s plans really are. We can speculate, but that’s about it. Apple told us the basics during the June conference; in fact, they told us more than I expected. I listened very closely and looked at their charts. We don’t see a separate GPU, at least not for the foreseeable future. I suspect we’ll see an SoC that competes with Intel very well at the beginning, likely exceeding the performance of Apple’s comparable previous machines. I don’t expect overwhelming GPU performance; I would expect performance on the level of a lower-end separate GPU.
Anything that exceeds that would thrill me.
Transistor budgets of over 25 billion are not that daunting for Apple Silicon. It's the power budgets that are. They just aren't going to run any of their SoCs at over 150 Watts, and I'd bet they want 100 W or lower. That will be self-limiting, so performance probably won't match these 300 W dGPUs or 200 W CPUs on average.
I expect them to be at parity with, or possibly more performant than, dGPUs in each Watt class, though. The fab advantage will give them about a 30% perf/W edge, and the density advantage is going to be about 100%. They have so many transistors to play with on TSMC N5 that you wonder if they're willing to use it all.
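Taking the numbers in this thread at face value, here's a rough floor for how a power-capped SoC stacks up against a big dGPU, assuming (pessimistically for the SoC) that performance scales linearly with power:

```python
# Back-of-envelope using figures from the comments above: a 100 W SoC
# budget, a 300 W dGPU, and a ~30% perf/Watt process advantage.
# Linear perf-vs-power scaling is an assumption and understates the
# SoC, since high-end cards pay a voltage penalty for their clocks.
soc_watts, dgpu_watts = 100, 300
perf_per_watt_advantage = 1.30

relative_perf = (soc_watts / dgpu_watts) * perf_per_watt_advantage
# relative_perf ~= 0.43: even with the fab edge, a 100 W part lands
# well below a 300 W dGPU, consistent with the comment above that
# performance probably won't match the 300 W cards on average.
```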
Yes. Apple has a chance to do something different from the packaging and designs afforded to them by components from Intel and AMD. It could be a monolithic SoC. It could be chiplets. It could be stacks of chiplets. They could use HBM, GDDR, or whatever custom memory solution uses commodity DRAM. It's going to be interesting, to say the least.
I really don’t care how efficient ARM instructions are, or how good Apple’s engineering is. There are physical laws that everyone has to follow, and it’s not likely that Apple is special here. Not only is Apple not going to run over 150 watts; they likely don’t want to run much over 20 watts. There’s only so much they can do.
But Microsoft still supports 32-bit Windows, and even older 16-bit software. Until they’re forced to, organizations and governments will stubbornly hold on to their old code rather than spend the money on modern software. That’s one advantage Apple has: they don’t have all of those customers to worry about. Though in recent years they’ve been getting further into that area, it’s been with modern code.
People do have to understand that there’s nothing Apple is doing that anyone else can’t also do, if they want to. We’re seeing ARM chips for the server space, and the world’s biggest supercomputer is now built from ARM chips. So it can be done. Qualcomm is interested in contesting the space, and if Apple is successful, we’ll see others follow. Windows on ARM depends on a chip that’s much better than what’s available now, and Microsoft knows it, which is why they’ve partnered with Qualcomm to develop one. Apple needs to look over its shoulder and not become complacent.
The 28-core chip in the Mac Pro consumes ~280-300W and requires a ginormous heat sink. Anything beyond that needs a high-end water-cooling system, which would be disastrous for OEMs...
Also, the A12X/Z can do 20 watts if you stress them enough; the CPU alone will draw 15 watts. Macs should have no problem pushing beyond that. I understand Apple doesn’t need to win a spec war, but there's no reason to constrain themselves relative to competitors.
Maybe they’re targeting the 14-inch Pro and the 24-inch iMac; neither is exactly “high-end.”
Dude is probably not legit.
Even the DTK has 16 GiB, for goodness' sake.
Laptops, however, have a limited power budget; even among the large Pros, no model went beyond 100W. Both the 5500M and 5600M have a TGP of 50W, and Intel's processor draws another ~60-80W. Not pretty. Cooling two chips side by side isn't great for a limited cooling system where the load leans heavily to one side. An SoC could make sense here, and the performance goal isn't hard to reach.