AMD to unveil Radeon RX 6000 GPU family on Oct. 28

Posted in General Discussion (edited September 2020)
AMD is set to announce its new line of Radeon RX 6000 graphics chips at an event on Oct. 28, the chipmaker said Thursday.

Credit: AMD


The new GPUs will be based on AMD's new RDNA 2 architecture, which could bring a 50% performance-per-watt boost and "uncompromising 4K gaming."

Exact details and specifications for the upcoming chips are still sparse. However, AMD said that the Oct. 28 event will let users learn more about RDNA 2, Radeon RX 6000 chips, and the company's "deep collaboration with game developers and ecosystem partners."

Along with increased performance and power efficiency, the GPUs will also feature ray-tracing capabilities and variable-rate shading, bringing AMD's chips more in line with those from main rival Nvidia. Rumors from earlier in 2020 also suggest that the RDNA 2 cards could come equipped with up to 16GB of GDDR6 video memory, a 256-bit bus, and more fans for additional cooling.
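For rough context on what a 256-bit bus would mean for memory bandwidth (the per-pin speed is an assumption, since the rumors don't specify it; typical GDDR6 runs at 14-16 Gbps):

```latex
\text{Peak bandwidth} \approx \frac{256\ \text{bits} \times (14\text{--}16)\ \text{Gbps per pin}}{8\ \text{bits/byte}} = 448\text{--}512\ \text{GB/s}
```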

Leaker @coreteks has also indicated that AMD's goal may be to undercut Nvidia's pricing, though it isn't clear how much the Radeon RX 6000 series will retail for.

Apple presently has drivers in macOS for the Radeon RX 5700, Radeon VII, Vega 64, Vega 56, and most of the 400- and 500-series PCI-E cards. While Mac drivers don't typically arrive day-and-date with a card's release, they do arrive in a macOS update within a few months of its unveiling.

At the moment, AMD cards are the only PCI-E graphics cards that Mac Pro owners can install internally, or that Mac users can run in a Thunderbolt 3 eGPU enclosure. Apple has ditched Nvidia GPU support in favor of Radeon cards, and there are no signs of it returning any time soon.

The AMD announcement event will kick off at 12 p.m. Eastern Time (9 a.m. Pacific) on Wednesday, Oct. 28.

Comments

  • Reply 1 of 30
Many are ASSUMING Apple will eventually support the next AMD series/line, but with the ARM switch it is not a guarantee.  Apple just recently moved to "fully" adopt the 5XXX series across their machines and could easily drag those along through the EOL of the Intel processors, as long as the lingering driver issues are addressed.
  • Reply 2 of 30
tht Posts: 5,421 member
    I'm betting that the A14 GPUs will have hardware support for raytracing. So, the iPhone will be the first Apple device to have hardware raytracing support, not Macs with AMD GPUs. ;)

    Anyways, anyone want to clarify AMD GPU codenames? Both the chip and graphics API support? Getting confusing out there. Navi, Navi 14, Navi 12, RDNA, RDNA2, so on and so forth. 
  • Reply 3 of 30
    tht said:
    I'm betting that the A14 GPUs will have hardware support for raytracing. So, the iPhone will be the first Apple device to have hardware raytracing support, not Macs with AMD GPUs. ;)

    Anyways, anyone want to clarify AMD GPU codenames? Both the chip and graphics API support? Getting confusing out there. Navi, Navi 14, Navi 12, RDNA, RDNA2, so on and so forth. 
    AMD's internal codenames for things are pretty rough, yeah. They have a set of codenames for the instruction family, a different set of codenames for the chip family, and a third set of codenames for the individual chips.

The Graphics Core Next (GCN) 4 instruction family corresponds to the Polaris chip family; the individual chip codenames come from the "Arctic Islands" family.

    Polaris 10 - RX 470, RX 480
    Polaris 11 - RX 460
    Polaris 12 - RX 540, RX 550

    Polaris 20 - RX 570, RX 580
    Polaris 21 - RX 560
    Polaris 22 - RX Vega M GH, RX Vega M L

    Polaris 30 - RX 590

    After that came the GCN 5 instruction set and the Vega chip family:

    Vega 10 - RX Vega 56, RX Vega 64
    Vega 12 - Pro Vega 16, Pro Vega 20
    Vega 20 - Pro Vega II, Radeon VII

After GCN 5 came the RDNA 1 instruction set and the Navi chip family:

    Navi 10 - RX 5600, RX 5700
    Navi 14 - RX 5300, RX 5500



    Yes, it's all gratuitously confusing. The higher the number within a given chip family, the lower the performance.
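To make the three naming layers concrete, here is a small illustrative lookup that simply restates the list above (the structure and helper name are hypothetical, not anything AMD publishes):

```python
# Illustrative mapping of AMD naming layers: ISA generation -> chip codename -> retail products.
# This just restates the list above; it is not an official AMD data source.
AMD_GPU_NAMES = {
    "GCN 4 (Polaris)": {
        "Polaris 10": ["RX 470", "RX 480"],
        "Polaris 11": ["RX 460"],
        "Polaris 12": ["RX 540", "RX 550"],
        "Polaris 20": ["RX 570", "RX 580"],
        "Polaris 21": ["RX 560"],
        "Polaris 22": ["RX Vega M GH", "RX Vega M L"],
        "Polaris 30": ["RX 590"],
    },
    "GCN 5 (Vega)": {
        "Vega 10": ["RX Vega 56", "RX Vega 64"],
        "Vega 12": ["Pro Vega 16", "Pro Vega 20"],
        "Vega 20": ["Pro Vega II", "Radeon VII"],
    },
    "RDNA 1 (Navi)": {
        "Navi 10": ["RX 5600", "RX 5700"],
        "Navi 14": ["RX 5300", "RX 5500"],
    },
}

def chips_for(product: str) -> list[str]:
    """Return the chip codenames that ship under a given retail name."""
    return [chip for family in AMD_GPU_NAMES.values()
            for chip, products in family.items() if product in products]

print(chips_for("RX 5700"))  # ['Navi 10']
```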
  • Reply 4 of 30
melgross Posts: 33,510 member
    tht said:
    I'm betting that the A14 GPUs will have hardware support for raytracing. So, the iPhone will be the first Apple device to have hardware raytracing support, not Macs with AMD GPUs. ;)

    Anyways, anyone want to clarify AMD GPU codenames? Both the chip and graphics API support? Getting confusing out there. Navi, Navi 14, Navi 12, RDNA, RDNA2, so on and so forth. 
Having support doesn't mean having effective support. Nvidia has had support for ray tracing in their top GPUs for two generations, but despite having fast memory, and a lot of it, they weren't really usable for that. Their new generation is supposed to have, for the first time, enough oomph to make it useful.

How Apple could duplicate that in integrated graphics on an SoC is something I doubt right now, even assuming it's something they're looking at. Ray tracing is one of the most difficult things to do in real time. The number of calculations is immense. Apple also uses a small amount of shared RAM, which isn't considered the best way to feed graphics hardware, so we'll see.
  • Reply 5 of 30
tht Posts: 5,421 member
    melgross said:
    tht said:
    I'm betting that the A14 GPUs will have hardware support for raytracing. So, the iPhone will be the first Apple device to have hardware raytracing support, not Macs with AMD GPUs. ;)

    Anyways, anyone want to clarify AMD GPU codenames? Both the chip and graphics API support? Getting confusing out there. Navi, Navi 14, Navi 12, RDNA, RDNA2, so on and so forth. 
Having support doesn't mean having effective support. Nvidia has had support for ray tracing in their top GPUs for two generations, but despite having fast memory, and a lot of it, they weren't really usable for that. Their new generation is supposed to have, for the first time, enough oomph to make it useful.

How Apple could duplicate that in integrated graphics on an SoC is something I doubt right now, even assuming it's something they're looking at. Ray tracing is one of the most difficult things to do in real time. The number of calculations is immense. Apple also uses a small amount of shared RAM, which isn't considered the best way to feed graphics hardware, so we'll see.
Yeah, they may not have enough transistors to do it. If AR is going to be a thing, though, having some properly rendered shadows for virtual objects will be really nice to see! And they appear to be investing rather heavily in VR/AR glasses that will be powered by iPhones, seemingly. So I think there is a driving need for it in some of their upcoming products. Hence, raytracing hardware will be in the SoCs sooner or later.

As far as iGPUs versus dGPUs go, the common distinction between the two is gradually becoming less distinct. If you look at the Xbox Series X SoC, it's basically a 12 TFLOP GPU with a vestigial CPU (an 8-core Zen 2) in terms of the die area the GPU and CPU occupy. And this is fabbed on TSMC 7nm. TSMC 5nm yields about 70% more transistors. So I can see a TSMC 5nm SoC with raytracing hardware in it. Intel with Tiger Lake is getting closer to this, with its iGPU taking about 40% of the chip area. This will drive dGPUs to higher performance niches, and commensurately higher wattage tiers. It's going to be interesting to see how the dGPU market shakes out. Maybe server GPUs will make up the vast majority of dGPU sales in a few years as iGPUs take even more of the midrange of the gaming GPU market.

It's still a very big question how much power Apple is willing to burn on the GPU and how they get more perf/Watt out of it than competitors do. The obvious way is to run the iGPU at low clocks, have a lot of GPU cores, and have a high-bandwidth memory subsystem to feed them. That will require a lot of chip area and transistors, which is a luxury Apple may have, as they don't need a 50% margin on their chips and they are on the densest fab.
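A generic CMOS scaling sketch of why the "wide and slow" approach tends to win on perf/W (standard dynamic-power reasoning, not anything Apple has disclosed):

```latex
P_{\text{dyn}} \approx \alpha\, C\, V^{2} f
```

Since reaching a higher clock frequency f generally requires a higher voltage V, power grows faster than linearly with frequency; spreading the same throughput across more cores at lower f and V costs die area but cuts energy per operation.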
  • Reply 6 of 30
melgross Posts: 33,510 member
    tht said:
    melgross said:
    tht said:
    I'm betting that the A14 GPUs will have hardware support for raytracing. So, the iPhone will be the first Apple device to have hardware raytracing support, not Macs with AMD GPUs. ;)

    Anyways, anyone want to clarify AMD GPU codenames? Both the chip and graphics API support? Getting confusing out there. Navi, Navi 14, Navi 12, RDNA, RDNA2, so on and so forth. 
Having support doesn't mean having effective support. Nvidia has had support for ray tracing in their top GPUs for two generations, but despite having fast memory, and a lot of it, they weren't really usable for that. Their new generation is supposed to have, for the first time, enough oomph to make it useful.

How Apple could duplicate that in integrated graphics on an SoC is something I doubt right now, even assuming it's something they're looking at. Ray tracing is one of the most difficult things to do in real time. The number of calculations is immense. Apple also uses a small amount of shared RAM, which isn't considered the best way to feed graphics hardware, so we'll see.
Yeah, they may not have enough transistors to do it. If AR is going to be a thing, though, having some properly rendered shadows for virtual objects will be really nice to see! And they appear to be investing rather heavily in VR/AR glasses that will be powered by iPhones, seemingly. So I think there is a driving need for it in some of their upcoming products. Hence, raytracing hardware will be in the SoCs sooner or later.

As far as iGPUs versus dGPUs go, the common distinction between the two is gradually becoming less distinct. If you look at the Xbox Series X SoC, it's basically a 12 TFLOP GPU with a vestigial CPU (an 8-core Zen 2) in terms of the die area the GPU and CPU occupy. And this is fabbed on TSMC 7nm. TSMC 5nm yields about 70% more transistors. So I can see a TSMC 5nm SoC with raytracing hardware in it. Intel with Tiger Lake is getting closer to this, with its iGPU taking about 40% of the chip area. This will drive dGPUs to higher performance niches, and commensurately higher wattage tiers. It's going to be interesting to see how the dGPU market shakes out. Maybe server GPUs will make up the vast majority of dGPU sales in a few years as iGPUs take even more of the midrange of the gaming GPU market.

It's still a very big question how much power Apple is willing to burn on the GPU and how they get more perf/Watt out of it than competitors do. The obvious way is to run the iGPU at low clocks, have a lot of GPU cores, and have a high-bandwidth memory subsystem to feed them. That will require a lot of chip area and transistors, which is a luxury Apple may have, as they don't need a 50% margin on their chips and they are on the densest fab.
    Almost anything is possible, but no matter what Apple does, they won’t have a competitor to new top line GPUs. At least, not as an integrated unit. No matter how good Apple is, and how much they can leverage the machine learning cores, the neural engine and the ISP, it’s not going to come close to a 350 watt GPU board with huge amounts of graphics RAM.
  • Reply 7 of 30
tht Posts: 5,421 member
    melgross said:
    tht said:
    melgross said:
    tht said:
    I'm betting that the A14 GPUs will have hardware support for raytracing. So, the iPhone will be the first Apple device to have hardware raytracing support, not Macs with AMD GPUs. ;)

    Anyways, anyone want to clarify AMD GPU codenames? Both the chip and graphics API support? Getting confusing out there. Navi, Navi 14, Navi 12, RDNA, RDNA2, so on and so forth. 
Having support doesn't mean having effective support. Nvidia has had support for ray tracing in their top GPUs for two generations, but despite having fast memory, and a lot of it, they weren't really usable for that. Their new generation is supposed to have, for the first time, enough oomph to make it useful.

How Apple could duplicate that in integrated graphics on an SoC is something I doubt right now, even assuming it's something they're looking at. Ray tracing is one of the most difficult things to do in real time. The number of calculations is immense. Apple also uses a small amount of shared RAM, which isn't considered the best way to feed graphics hardware, so we'll see.
Yeah, they may not have enough transistors to do it. If AR is going to be a thing, though, having some properly rendered shadows for virtual objects will be really nice to see! And they appear to be investing rather heavily in VR/AR glasses that will be powered by iPhones, seemingly. So I think there is a driving need for it in some of their upcoming products. Hence, raytracing hardware will be in the SoCs sooner or later.

As far as iGPUs versus dGPUs go, the common distinction between the two is gradually becoming less distinct. If you look at the Xbox Series X SoC, it's basically a 12 TFLOP GPU with a vestigial CPU (an 8-core Zen 2) in terms of the die area the GPU and CPU occupy. And this is fabbed on TSMC 7nm. TSMC 5nm yields about 70% more transistors. So I can see a TSMC 5nm SoC with raytracing hardware in it. Intel with Tiger Lake is getting closer to this, with its iGPU taking about 40% of the chip area. This will drive dGPUs to higher performance niches, and commensurately higher wattage tiers. It's going to be interesting to see how the dGPU market shakes out. Maybe server GPUs will make up the vast majority of dGPU sales in a few years as iGPUs take even more of the midrange of the gaming GPU market.

It's still a very big question how much power Apple is willing to burn on the GPU and how they get more perf/Watt out of it than competitors do. The obvious way is to run the iGPU at low clocks, have a lot of GPU cores, and have a high-bandwidth memory subsystem to feed them. That will require a lot of chip area and transistors, which is a luxury Apple may have, as they don't need a 50% margin on their chips and they are on the densest fab.
    Almost anything is possible, but no matter what Apple does, they won’t have a competitor to new top line GPUs. At least, not as an integrated unit. No matter how good Apple is, and how much they can leverage the machine learning cores, the neural engine and the ISP, it’s not going to come close to a 350 watt GPU board with huge amounts of graphics RAM.
Yeah, sure, I agree that Apple Silicon isn't going to compete with a 350 W GPU. But I wouldn't eliminate the idea of hardware support for raytracing in an iGPU either. There are a lot of transistors on TSMC N5 for Apple to use.
  • Reply 8 of 30
    Since graphics cards often come with many GB of RAM (typically 4 GB), and the Apple Silicon architecture will be using shared RAM between its CPU and GPU, does that mean that an 8 GB Apple Silicon computer would be a very bad idea? Yesterday "AppleSeeder2020" said that the 14" MacBook Pro to be announced on Sept 15th will have just 8GB of RAM. How can 8 GB be enough on a system which has to split RAM into video and system memory?
  • Reply 9 of 30
tht Posts: 5,421 member
    Since graphics cards often come with many GB of RAM (typically 4 GB), and the Apple Silicon architecture will be using shared RAM between its CPU and GPU, does that mean that an 8 GB Apple Silicon computer would be a very bad idea? Yesterday "AppleSeeder2020" said that the 14" MacBook Pro to be announced on Sept 15th will have just 8GB of RAM. How can 8 GB be enough on a system which has to split RAM into video and system memory?
The current MBP13s don't have dedicated graphics RAM either; they all have iGPUs, and they can support one 5K or two 4K external displays. It won't be a problem users aren't already dealing with today.

    Large graphics RAM sizes are for 3D applications or GPU compute applications. Not a lot of people are doing that on ultrabooks. If you are playing a lot of 3D games, buy a PC desktop. If you do a lot of GPU compute, you should get a more performant computer or an eGPU for your ultrabook. If you are doing any of these things, getting the base amount of non-upgradeable RAM is the last thing you should do.

    For web browsing, 8 GB will be fine for most people. Not for web workers maybe, but fine for light web browser duty, like school stuff, office automation, even a lot of programming/engineering tasks, etc.
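A quick back-of-the-envelope check on why driving large displays by itself needs only a modest slice of that shared memory (illustrative arithmetic, not Apple figures):

```python
# Rough framebuffer-size estimate: width * height * bytes per pixel.
def framebuffer_mib(width: int, height: int, bytes_per_pixel: int = 4) -> float:
    return width * height * bytes_per_pixel / 1024**2

print(f"One 5K buffer:           {framebuffer_mib(5120, 2880):.0f} MiB")      # ~56 MiB
print(f"Two 4K, double-buffered: {4 * framebuffer_mib(3840, 2160):.0f} MiB")  # ~127 MiB
```

Display scan-out is tens to low hundreds of MiB; it's 3D textures and GPU-compute working sets, as noted above, that push into multi-GB territory.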
  • Reply 10 of 30
    tht said:
    Since graphics cards often come with many GB of RAM (typically 4 GB), and the Apple Silicon architecture will be using shared RAM between its CPU and GPU, does that mean that an 8 GB Apple Silicon computer would be a very bad idea? Yesterday "AppleSeeder2020" said that the 14" MacBook Pro to be announced on Sept 15th will have just 8GB of RAM. How can 8 GB be enough on a system which has to split RAM into video and system memory?
The current MBP13s don't have dedicated graphics RAM either; they all have iGPUs, and they can support one 5K or two 4K external displays. It won't be a problem users aren't already dealing with today.

    Large graphics RAM sizes are for 3D applications or GPU compute applications. Not a lot of people are doing that on ultrabooks. If you are playing a lot of 3D games, buy a PC desktop. If you do a lot of GPU compute, you should get a more performant computer or an eGPU for your ultrabook. If you are doing any of these things, getting the base amount of non-upgradeable RAM is the last thing you should do.

    For web browsing, 8 GB will be fine for most people. Not for web workers maybe, but fine for light web browser duty, like school stuff, office automation, even a lot of programming/engineering tasks, etc.
Ok, fair enough. I think I've been using 8 GB for over 6 years and it no longer feels like enough. And pundits urge people to get memory upgrades to at least 16 GB.
  • Reply 11 of 30
melgross Posts: 33,510 member
    tht said:
    melgross said:
    tht said:
    melgross said:
    tht said:
    I'm betting that the A14 GPUs will have hardware support for raytracing. So, the iPhone will be the first Apple device to have hardware raytracing support, not Macs with AMD GPUs. ;)

    Anyways, anyone want to clarify AMD GPU codenames? Both the chip and graphics API support? Getting confusing out there. Navi, Navi 14, Navi 12, RDNA, RDNA2, so on and so forth. 
Having support doesn't mean having effective support. Nvidia has had support for ray tracing in their top GPUs for two generations, but despite having fast memory, and a lot of it, they weren't really usable for that. Their new generation is supposed to have, for the first time, enough oomph to make it useful.

How Apple could duplicate that in integrated graphics on an SoC is something I doubt right now, even assuming it's something they're looking at. Ray tracing is one of the most difficult things to do in real time. The number of calculations is immense. Apple also uses a small amount of shared RAM, which isn't considered the best way to feed graphics hardware, so we'll see.
Yeah, they may not have enough transistors to do it. If AR is going to be a thing, though, having some properly rendered shadows for virtual objects will be really nice to see! And they appear to be investing rather heavily in VR/AR glasses that will be powered by iPhones, seemingly. So I think there is a driving need for it in some of their upcoming products. Hence, raytracing hardware will be in the SoCs sooner or later.

As far as iGPUs versus dGPUs go, the common distinction between the two is gradually becoming less distinct. If you look at the Xbox Series X SoC, it's basically a 12 TFLOP GPU with a vestigial CPU (an 8-core Zen 2) in terms of the die area the GPU and CPU occupy. And this is fabbed on TSMC 7nm. TSMC 5nm yields about 70% more transistors. So I can see a TSMC 5nm SoC with raytracing hardware in it. Intel with Tiger Lake is getting closer to this, with its iGPU taking about 40% of the chip area. This will drive dGPUs to higher performance niches, and commensurately higher wattage tiers. It's going to be interesting to see how the dGPU market shakes out. Maybe server GPUs will make up the vast majority of dGPU sales in a few years as iGPUs take even more of the midrange of the gaming GPU market.

It's still a very big question how much power Apple is willing to burn on the GPU and how they get more perf/Watt out of it than competitors do. The obvious way is to run the iGPU at low clocks, have a lot of GPU cores, and have a high-bandwidth memory subsystem to feed them. That will require a lot of chip area and transistors, which is a luxury Apple may have, as they don't need a 50% margin on their chips and they are on the densest fab.
    Almost anything is possible, but no matter what Apple does, they won’t have a competitor to new top line GPUs. At least, not as an integrated unit. No matter how good Apple is, and how much they can leverage the machine learning cores, the neural engine and the ISP, it’s not going to come close to a 350 watt GPU board with huge amounts of graphics RAM.
Yeah, sure, I agree that Apple Silicon isn't going to compete with a 350 W GPU. But I wouldn't eliminate the idea of hardware support for raytracing in an iGPU either. There are a lot of transistors on TSMC N5 for Apple to use.
The latest Nvidia GPUs have over 25 billion transistors, and an energy budget to support that. There is simply no way Apple can even begin to approach that. It takes Nvidia's highest GPU boards to enable ray tracing in a useful way. Remember, we're talking not about ray tracing static imagery, but dynamically rendered images at a rate of at least 60 fps. That's real-time tracing. It's in addition to every other graphics demand, such as real-time physics, dynamic texture generation, hidden surface elimination, etc. These all additionally require very large amounts of graphics RAM.

While Apple's built-in GPU will compete well against Intel's new integrated graphics, which is twice as powerful as the current generation (due to be replaced next month), and possibly against low-end GPU boards, and possibly even some low-midrange boards, none of those can do ray tracing or other really high-level computations. If Apple manages to come out with a separate GPU, then things could be somewhat different. But Apple has stated that one reason their GPU will be competitive is their "unique" admixture of these other SoC elements, and the software tying it all together. I've stated that possibly, if Apple can figure out how to put those elements on a separate GPU, and give that GPU sufficient bandwidth with proper graphics memory, and a lot of it, with a decent power budget, they could really have something.

But none of us know what Apple's plans really are. We can speculate, but that's about it. And Apple has told us the basics during the June conference. In fact, they told us more than I expected. I listened very closely. I looked at their charts. We don't see a separate GPU, at least not for the foreseeable future. I suspect that we'll see an SoC that competes with Intel very well in the beginning, likely exceeding the performance of Apple's previous, similar machines. I don't expect to see overwhelming GPU performance. I would expect to see performance at the level of a lower-end separate GPU.

Anything that exceeds that would thrill me.
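To put "the number of calculations is immense" in rough figures (illustrative arithmetic only, assuming a single primary ray per pixel before any shadow or bounce rays):

```python
# Rough ray budget for real-time ray tracing at 4K / 60 fps.
width, height, fps = 3840, 2160, 60
primary_rays_per_second = width * height * fps
print(f"{primary_rays_per_second / 1e6:.0f} million primary rays per second")  # ~498 million
# Real renderers add shadow, reflection, and bounce rays (several per pixel),
# and every ray needs many BVH-node and triangle intersection tests, so the
# practical workload runs into billions of intersection tests per second.
```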
  • Reply 12 of 30
tht Posts: 5,421 member
    melgross said:
The latest Nvidia GPUs have over 25 billion transistors, and an energy budget to support that. There is simply no way Apple can even begin to approach that. It takes Nvidia's highest GPU boards to enable ray tracing in a useful way. Remember, we're talking not about ray tracing static imagery, but dynamically rendered images at a rate of at least 60 fps. That's real-time tracing. It's in addition to every other graphics demand, such as real-time physics, dynamic texture generation, hidden surface elimination, etc. These all additionally require very large amounts of graphics RAM.
    The A12Z on TSMC N7 has 10 billion transistors on a 120 mm^2 die. This is for a fanless tablet. I definitely see them hitting 20 to 30 billion transistors on TSMC N5 for the Mac SoCs. A notional A14X for an iPad Pro could be 15 billion transistors just based on the new fab alone. A Mac SoC for an iMac can easily be 2x that at around 30 billion.

    Transistor budgets of over 25 billion are not that daunting for Apple Silicon. It's the power budgets that are. They just aren't going to run any of their SoCs at over 150 Watts, and I'd bet they want 100 W or lower. That will be self-limiting, so performance probably won't match these 300 W dGPUs or 200 W CPUs on average.

I expect them to be at parity with, or possibly more performant than, dGPUs in each Watt class, though. The fab advantage will give them about a 30% perf/W advantage. The density advantage is going to be about 100%. They have so many transistors to play with on TSMC N5 that you wonder if they are willing to use it all.

    melgross said:
But none of us know what Apple's plans really are. We can speculate, but that's about it. And Apple has told us the basics during the June conference. In fact, they told us more than I expected. I listened very closely. I looked at their charts. We don't see a separate GPU, at least not for the foreseeable future. I suspect that we'll see an SoC that competes with Intel very well in the beginning, likely exceeding the performance of Apple's previous, similar machines. I don't expect to see overwhelming GPU performance. I would expect to see performance at the level of a lower-end separate GPU.

Anything that exceeds that would thrill me.
Yes. Apple has a chance to do something different from the packaging and designs afforded to them by components from Intel and AMD. It could be a monolithic SoC. It could be chiplets. It could be stacks of chiplets. They could use HBM, GDDR, or some custom memory solution built on commodity DRAM. It's going to be interesting, to say the least.
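A rough sketch of the transistor-budget arithmetic discussed above, using the A12Z figures quoted in the comment; the N5 density multiplier and the die sizes are assumptions:

```python
# Back-of-the-envelope scaling of transistor counts from TSMC N7 to N5.
a12z_transistors = 10e9   # ~10 billion, per the comment
a12z_area_mm2 = 120       # ~120 mm^2 on TSMC N7, per the comment
n7_density = a12z_transistors / a12z_area_mm2  # ~83 million transistors / mm^2
n5_density = n7_density * 1.8                  # assumed ~1.8x logic-density gain on N5

for area_mm2 in (120, 180, 350):  # hypothetical tablet-, laptop-, and iMac-class dies
    print(f"{area_mm2} mm^2 on N5 -> ~{n5_density * area_mm2 / 1e9:.0f}B transistors")
# 120 mm^2 -> ~18B, 180 mm^2 -> ~27B, 350 mm^2 -> ~52B
```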
  • Reply 13 of 30
melgross Posts: 33,510 member
    tht said:
    melgross said:
The latest Nvidia GPUs have over 25 billion transistors, and an energy budget to support that. There is simply no way Apple can even begin to approach that. It takes Nvidia's highest GPU boards to enable ray tracing in a useful way. Remember, we're talking not about ray tracing static imagery, but dynamically rendered images at a rate of at least 60 fps. That's real-time tracing. It's in addition to every other graphics demand, such as real-time physics, dynamic texture generation, hidden surface elimination, etc. These all additionally require very large amounts of graphics RAM.
    The A12Z on TSMC N7 has 10 billion transistors on a 120 mm^2 die. This is for a fanless tablet. I definitely see them hitting 20 to 30 billion transistors on TSMC N5 for the Mac SoCs. A notional A14X for an iPad Pro could be 15 billion transistors just based on the new fab alone. A Mac SoC for an iMac can easily be 2x that at around 30 billion.

    Transistor budgets of over 25 billion are not that daunting for Apple Silicon. It's the power budgets that are. They just aren't going to run any of their SoCs at over 150 Watts, and I'd bet they want 100 W or lower. That will be self-limiting, so performance probably won't match these 300 W dGPUs or 200 W CPUs on average.

I expect them to be at parity with, or possibly more performant than, dGPUs in each Watt class, though. The fab advantage will give them about a 30% perf/W advantage. The density advantage is going to be about 100%. They have so many transistors to play with on TSMC N5 that you wonder if they are willing to use it all.

    melgross said:
But none of us know what Apple's plans really are. We can speculate, but that's about it. And Apple has told us the basics during the June conference. In fact, they told us more than I expected. I listened very closely. I looked at their charts. We don't see a separate GPU, at least not for the foreseeable future. I suspect that we'll see an SoC that competes with Intel very well in the beginning, likely exceeding the performance of Apple's previous, similar machines. I don't expect to see overwhelming GPU performance. I would expect to see performance at the level of a lower-end separate GPU.

Anything that exceeds that would thrill me.
Yes. Apple has a chance to do something different from the packaging and designs afforded to them by components from Intel and AMD. It could be a monolithic SoC. It could be chiplets. It could be stacks of chiplets. They could use HBM, GDDR, or some custom memory solution built on commodity DRAM. It's going to be interesting, to say the least.
I just have to disagree. Apple has been pretty clear that these Macs would use less power and, according to the chart they showed, have equal performance. But they didn't say which Macs they were talking about. There is no way Apple is going to make that leap in the next year. Maybe, over time, if they are willing to make a much larger chip, and Nvidia's chips are very large indeed, they can do it. But then the efficiency and power draw will be severely compromised. And you're forgetting, I suppose, that that 25 billion transistors is just for the GPU. Apple has to contend with an entire SoC. It's not going to happen.

I really don't care how efficient ARM instructions are, or how good Apple's engineering is. There are physical laws that everyone has to follow, and it's not likely that Apple is special here. Not only is Apple not going to run over 150 watts, they likely don't want to run much over 20 watts. There's only so much they can do.
  • Reply 14 of 30
GG1 Posts: 483 member
    melgross said:
    tht said:
    melgross said:
The latest Nvidia GPUs have over 25 billion transistors, and an energy budget to support that. There is simply no way Apple can even begin to approach that. It takes Nvidia's highest GPU boards to enable ray tracing in a useful way. Remember, we're talking not about ray tracing static imagery, but dynamically rendered images at a rate of at least 60 fps. That's real-time tracing. It's in addition to every other graphics demand, such as real-time physics, dynamic texture generation, hidden surface elimination, etc. These all additionally require very large amounts of graphics RAM.
    The A12Z on TSMC N7 has 10 billion transistors on a 120 mm^2 die. This is for a fanless tablet. I definitely see them hitting 20 to 30 billion transistors on TSMC N5 for the Mac SoCs. A notional A14X for an iPad Pro could be 15 billion transistors just based on the new fab alone. A Mac SoC for an iMac can easily be 2x that at around 30 billion.

    Transistor budgets of over 25 billion are not that daunting for Apple Silicon. It's the power budgets that are. They just aren't going to run any of their SoCs at over 150 Watts, and I'd bet they want 100 W or lower. That will be self-limiting, so performance probably won't match these 300 W dGPUs or 200 W CPUs on average.

I expect them to be at parity with, or possibly more performant than, dGPUs in each Watt class, though. The fab advantage will give them about a 30% perf/W advantage. The density advantage is going to be about 100%. They have so many transistors to play with on TSMC N5 that you wonder if they are willing to use it all.

    melgross said:
But none of us know what Apple's plans really are. We can speculate, but that's about it. And Apple has told us the basics during the June conference. In fact, they told us more than I expected. I listened very closely. I looked at their charts. We don't see a separate GPU, at least not for the foreseeable future. I suspect that we'll see an SoC that competes with Intel very well in the beginning, likely exceeding the performance of Apple's previous, similar machines. I don't expect to see overwhelming GPU performance. I would expect to see performance at the level of a lower-end separate GPU.

Anything that exceeds that would thrill me.
Yes. Apple has a chance to do something different from the packaging and designs afforded to them by components from Intel and AMD. It could be a monolithic SoC. It could be chiplets. It could be stacks of chiplets. They could use HBM, GDDR, or some custom memory solution built on commodity DRAM. It's going to be interesting, to say the least.
I just have to disagree. Apple has been pretty clear that these Macs would use less power and, according to the chart they showed, have equal performance. But they didn't say which Macs they were talking about. There is no way Apple is going to make that leap in the next year. Maybe, over time, if they are willing to make a much larger chip, and Nvidia's chips are very large indeed, they can do it. But then the efficiency and power draw will be severely compromised. And you're forgetting, I suppose, that that 25 billion transistors is just for the GPU. Apple has to contend with an entire SoC. It's not going to happen.

I really don't care how efficient ARM instructions are, or how good Apple's engineering is. There are physical laws that everyone has to follow, and it's not likely that Apple is special here. Not only is Apple not going to run over 150 watts, they likely don't want to run much over 20 watts. There's only so much they can do.
It is technical discussions like yours (Melgross, Tht, and a few others) that really get me excited about ASi. Kudos to you both. Very enjoyable reads!
  • Reply 15 of 30
melgross Posts: 33,510 member
    GG1 said:
    melgross said:
    tht said:
    melgross said:
The latest Nvidia GPUs have over 25 billion transistors, and an energy budget to support that. There is simply no way Apple can even begin to approach that. It takes Nvidia's highest GPU boards to enable ray tracing in a useful way. Remember, we're talking not about ray tracing static imagery, but dynamically rendered images at a rate of at least 60 fps. That's real-time tracing. It's in addition to every other graphics demand, such as real-time physics, dynamic texture generation, hidden surface elimination, etc. These all additionally require very large amounts of graphics RAM.
    The A12Z on TSMC N7 has 10 billion transistors on a 120 mm^2 die. This is for a fanless tablet. I definitely see them hitting 20 to 30 billion transistors on TSMC N5 for the Mac SoCs. A notional A14X for an iPad Pro could be 15 billion transistors just based on the new fab alone. A Mac SoC for an iMac can easily be 2x that at around 30 billion.

    Transistor budgets of over 25 billion are not that daunting for Apple Silicon. It's the power budgets that are. They just aren't going to run any of their SoCs at over 150 Watts, and I'd bet they want 100 W or lower. That will be self-limiting, so performance probably won't match these 300 W dGPUs or 200 W CPUs on average.

I expect them to be at parity with, or possibly more performant than, dGPUs in each Watt class, though. The fab advantage will give them about a 30% perf/W advantage. The density advantage is going to be about 100%. They have so many transistors to play with on TSMC N5 that you wonder if they are willing to use it all.

    melgross said:
But none of us know what Apple's plans really are. We can speculate, but that's about it. And Apple has told us the basics during the June conference. In fact, they told us more than I expected. I listened very closely. I looked at their charts. We don't see a separate GPU, at least not for the foreseeable future. I suspect that we'll see an SoC that competes with Intel very well in the beginning, likely exceeding the performance of Apple's previous, similar machines. I don't expect to see overwhelming GPU performance. I would expect to see performance at the level of a lower-end separate GPU.

Anything that exceeds that would thrill me.
Yes. Apple has a chance to do something different from the packaging and designs afforded to them by components from Intel and AMD. It could be a monolithic SoC. It could be chiplets. It could be stacks of chiplets. They could use HBM, GDDR, or some custom memory solution built on commodity DRAM. It's going to be interesting, to say the least.
I just have to disagree. Apple has been pretty clear that these Macs would use less power and, according to the chart they showed, have equal performance. But they didn't say which Macs they were talking about. There is no way Apple is going to make that leap in the next year. Maybe, over time, if they are willing to make a much larger chip, and Nvidia's chips are very large indeed, they can do it. But then the efficiency and power draw will be severely compromised. And you're forgetting, I suppose, that that 25 billion transistors is just for the GPU. Apple has to contend with an entire SoC. It's not going to happen.

I really don't care how efficient ARM instructions are, or how good Apple's engineering is. There are physical laws that everyone has to follow, and it's not likely that Apple is special here. Not only is Apple not going to run over 150 watts, they likely don't want to run much over 20 watts. There's only so much they can do.
It is technical discussions like yours (Melgross, Tht, and a few others) that really get me excited about ASi. Kudos to you both. Very enjoyable reads!
I hope that people don't take what I say the wrong way. I do believe that Apple's SoCs are going to be great. But we do have to remember that the reason for them has been AMD and Intel's refusal to compromise backward compatibility. There is an awful lot of "wasted" real estate on those chips devoted to ancient 16-bit software, and to newer 32-bit software. If they took the step of getting rid of 16-bit support, as they finally did with the really obsolete 8-bit support a number of years ago, it would free up space and power to be used for a more modern 64-bit chip.

But Microsoft still supports 32-bit Windows, and even older 16-bit software. Until they're forced to, organizations and governments will stubbornly hold on to their old code and not spend the money for modern software. That's one advantage Apple has. They don't have all of those customers to worry about, though in recent years they are getting further into that area, but with modern code.

People do have to understand that there's nothing Apple is doing that anyone else can't also do, if they want to. We're seeing ARM chips for the server space, and the world's biggest supercomputer is now built from ARM chips. So it can be done. Qualcomm is interested in contesting the space. If Apple is successful, then we will see others follow. Windows depends on a chip that's much better than what's available to them now, and Microsoft knows it, which is why they've partnered with Qualcomm to develop one. Apple needs to look over its shoulder and not become complacent.
  • Reply 16 of 30
    tht said:
    melgross said:
The latest Nvidia GPUs have over 25 billion transistors, and an energy budget to support that. There is simply no way Apple can even begin to approach that. It takes Nvidia's highest GPU boards to enable ray tracing in a useful way. Remember, we're talking not about ray tracing static imagery, but dynamically rendered images at a rate of at least 60 fps. That's real-time tracing. It's in addition to every other graphics demand, such as real-time physics, dynamic texture generation, hidden surface elimination, etc. These all additionally require very large amounts of graphics RAM.
    The A12Z on TSMC N7 has 10 billion transistors on a 120 mm^2 die. This is for a fanless tablet. I definitely see them hitting 20 to 30 billion transistors on TSMC N5 for the Mac SoCs. A notional A14X for an iPad Pro could be 15 billion transistors just based on the new fab alone. A Mac SoC for an iMac can easily be 2x that at around 30 billion.

    Transistor budgets of over 25 billion are not that daunting for Apple Silicon. It's the power budgets that are. They just aren't going to run any of their SoCs at over 150 Watts, and I'd bet they want 100 W or lower. That will be self-limiting, so performance probably won't match these 300 W dGPUs or 200 W CPUs on average.

I expect them to be at parity with, or possibly more performant than, dGPUs in each Watt class, though. The fab advantage will give them about a 30% perf/W advantage. The density advantage is going to be about 100%. They have so many transistors to play with on TSMC N5 that you wonder if they are willing to use it all.

    melgross said:
But none of us know what Apple's plans really are. We can speculate, but that's about it. And Apple has told us the basics during the June conference. In fact, they told us more than I expected. I listened very closely. I looked at their charts. We don't see a separate GPU, at least not for the foreseeable future. I suspect that we'll see an SoC that competes with Intel very well in the beginning, likely exceeding the performance of Apple's previous, similar machines. I don't expect to see overwhelming GPU performance. I would expect to see performance at the level of a lower-end separate GPU.

Anything that exceeds that would thrill me.
Yes. Apple has a chance to do something different from the packaging and designs afforded to them by components from Intel and AMD. It could be a monolithic SoC. It could be chiplets. It could be stacks of chiplets. They could use HBM, GDDR, or some custom memory solution built on commodity DRAM. It's going to be interesting, to say the least.
Let's be real: even if you could, how are you going to cool it? It's no big deal for laptops and consoles, since they need to take size into consideration, but it wouldn't make sense for desktops, where keeping the parts separate makes for more efficient cooling.

The 28-core in the Mac Pro consumes ~280-300W and requires a ginormous heatsink. Anything beyond that needs a high-end water-cooling system, which would be disastrous for OEMs...
    edited September 2020
  • Reply 17 of 30
    melgross said:
    tht said:
    melgross said:
The latest Nvidia GPUs have over 25 billion transistors, and an energy budget to support that. There is simply no way Apple can even begin to approach that. It takes Nvidia's highest GPU boards to enable ray tracing in a useful way. Remember, we're talking not about ray tracing static imagery, but dynamically rendered images at a rate of at least 60 fps. That's real-time tracing. It's in addition to every other graphics demand, such as real-time physics, dynamic texture generation, hidden surface elimination, etc. These all additionally require very large amounts of graphics RAM.
    The A12Z on TSMC N7 has 10 billion transistors on a 120 mm^2 die. This is for a fanless tablet. I definitely see them hitting 20 to 30 billion transistors on TSMC N5 for the Mac SoCs. A notional A14X for an iPad Pro could be 15 billion transistors just based on the new fab alone. A Mac SoC for an iMac can easily be 2x that at around 30 billion.

    Transistor budgets of over 25 billion are not that daunting for Apple Silicon. It's the power budgets that are. They just aren't going to run any of their SoCs at over 150 Watts, and I'd bet they want 100 W or lower. That will be self-limiting, so performance probably won't match these 300 W dGPUs or 200 W CPUs on average.

I expect them to be at parity with, or possibly more performant than, dGPUs in each Watt class, though. The fab advantage will give them about a 30% perf/W advantage. The density advantage is going to be about 100%. They have so many transistors to play with on TSMC N5 that you wonder if they are willing to use it all.

    melgross said:
But none of us know what Apple's plans really are. We can speculate, but that's about it. And Apple has told us the basics during the June conference. In fact, they told us more than I expected. I listened very closely. I looked at their charts. We don't see a separate GPU, at least not for the foreseeable future. I suspect that we'll see an SoC that competes with Intel very well in the beginning, likely exceeding the performance of Apple's previous, similar machines. I don't expect to see overwhelming GPU performance. I would expect to see performance at the level of a lower-end separate GPU.

Anything that exceeds that would thrill me.
Yes. Apple has a chance to do something different from the packaging and designs afforded to them by components from Intel and AMD. It could be a monolithic SoC. It could be chiplets. It could be stacks of chiplets. They could use HBM, GDDR, or some custom memory solution built on commodity DRAM. It's going to be interesting, to say the least.
I just have to disagree. Apple has been pretty clear that these Macs would use less power and, according to the chart they showed, have equal performance. But they didn't say which Macs they were talking about. There is no way Apple is going to make that leap in the next year. Maybe, over time, if they are willing to make a much larger chip, and Nvidia's chips are very large indeed, they can do it. But then the efficiency and power draw will be severely compromised. And you're forgetting, I suppose, that that 25 billion transistors is just for the GPU. Apple has to contend with an entire SoC. It's not going to happen.

I really don't care how efficient ARM instructions are, or how good Apple's engineering is. There are physical laws that everyone has to follow, and it's not likely that Apple is special here. Not only is Apple not going to run over 150 watts, they likely don't want to run much over 20 watts. There's only so much they can do.
    It would only make sense for a workstation to have dedicated GPUs.  Apple is replacing four of them, no way they could build a die that large.  How are you going to cool it?

Also, the A12X/Z can do 20 watts if you stress them enough; the CPU alone will output 15 watts. Macs should have no problem pushing beyond that. I understand Apple doesn't need to win a spec war, but there's no reason to constrain themselves for competitors.

Maybe they're pointing at the 14" Pro and the 24" iMac; neither is exactly "high-end".
    edited September 2020
  • Reply 18 of 30
    tht said:
    Since graphics cards often come with many GB of RAM (typically 4 GB), and the Apple Silicon architecture will be using shared RAM between its CPU and GPU, does that mean that an 8 GB Apple Silicon computer would be a very bad idea? Yesterday "AppleSeeder2020" said that the 14" MacBook Pro to be announced on Sept 15th will have just 8GB of RAM. How can 8 GB be enough on a system which has to split RAM into video and system memory?
The current MBP13s don't have dedicated graphics RAM either; they all have iGPUs, and they can support one 5K or two 4K external displays. It won't be a problem users aren't already dealing with today.

    Large graphics RAM sizes are for 3D applications or GPU compute applications. Not a lot of people are doing that on ultrabooks. If you are playing a lot of 3D games, buy a PC desktop. If you do a lot of GPU compute, you should get a more performant computer or an eGPU for your ultrabook. If you are doing any of these things, getting the base amount of non-upgradeable RAM is the last thing you should do.

    For web browsing, 8 GB will be fine for most people. Not for web workers maybe, but fine for light web browser duty, like school stuff, office automation, even a lot of programming/engineering tasks, etc.
Ok, fair enough. I think I've been using 8 GB for over 6 years and it no longer feels like enough. And pundits urge people to get memory upgrades to at least 16 GB.
The 13" models have 16-32 GiB...

    Dude is probably not legit.


Even the DTK has 16 GiB, for goodness' sake.
    edited September 2020
  • Reply 19 of 30
melgross Posts: 33,510 member
    DuhSesame said:
    melgross said:
    tht said:
    melgross said:
The latest Nvidia GPUs have over 25 billion transistors, and an energy budget to support that. There is simply no way Apple can even begin to approach that. It takes Nvidia's highest GPU boards to enable ray tracing in a useful way. Remember, we're talking not about ray tracing static imagery, but dynamically rendered images at a rate of at least 60 fps. That's real-time tracing. It's in addition to every other graphics demand, such as real-time physics, dynamic texture generation, hidden surface elimination, etc. These all additionally require very large amounts of graphics RAM.
    The A12Z on TSMC N7 has 10 billion transistors on a 120 mm^2 die. This is for a fanless tablet. I definitely see them hitting 20 to 30 billion transistors on TSMC N5 for the Mac SoCs. A notional A14X for an iPad Pro could be 15 billion transistors just based on the new fab alone. A Mac SoC for an iMac can easily be 2x that at around 30 billion.

    Transistor budgets of over 25 billion are not that daunting for Apple Silicon. It's the power budgets that are. They just aren't going to run any of their SoCs at over 150 Watts, and I'd bet they want 100 W or lower. That will be self-limiting, so performance probably won't match these 300 W dGPUs or 200 W CPUs on average.

I expect them to be at parity with, or possibly more performant than, dGPUs in each Watt class, though. The fab advantage will give them about a 30% perf/W advantage. The density advantage is going to be about 100%. They have so many transistors to play with on TSMC N5 that you wonder if they are willing to use it all.

    melgross said:
But none of us know what Apple's plans really are. We can speculate, but that's about it. And Apple has told us the basics during the June conference. In fact, they told us more than I expected. I listened very closely. I looked at their charts. We don't see a separate GPU, at least not for the foreseeable future. I suspect that we'll see an SoC that competes with Intel very well in the beginning, likely exceeding the performance of Apple's previous, similar machines. I don't expect to see overwhelming GPU performance. I would expect to see performance at the level of a lower-end separate GPU.

Anything that exceeds that would thrill me.
Yes. Apple has a chance to do something different from the packaging and designs afforded to them by components from Intel and AMD. It could be a monolithic SoC. It could be chiplets. It could be stacks of chiplets. They could use HBM, GDDR, or some custom memory solution built on commodity DRAM. It's going to be interesting, to say the least.
I just have to disagree. Apple has been pretty clear that these Macs would use less power and, according to the chart they showed, have equal performance. But they didn't say which Macs they were talking about. There is no way Apple is going to make that leap in the next year. Maybe, over time, if they are willing to make a much larger chip, and Nvidia's chips are very large indeed, they can do it. But then the efficiency and power draw will be severely compromised. And you're forgetting, I suppose, that that 25 billion transistors is just for the GPU. Apple has to contend with an entire SoC. It's not going to happen.

I really don't care how efficient ARM instructions are, or how good Apple's engineering is. There are physical laws that everyone has to follow, and it's not likely that Apple is special here. Not only is Apple not going to run over 150 watts, they likely don't want to run much over 20 watts. There's only so much they can do.
    It would only make sense for a workstation to have dedicated GPUs.  Apple is replacing four of them, no way they could build a die that large.  How are you going to cool it?

Also, the A12X/Z can do 20 watts if you stress them enough; the CPU alone will output 15 watts. Macs should have no problem pushing beyond that. I understand Apple doesn't need to win a spec war, but there's no reason to constrain themselves for competitors.

Maybe they're pointing at the 14" Pro and the 24" iMac; neither is exactly "high-end".
The 16" MacBook Pro has one, and so do the 27" iMacs. Everyone who games or does any rendering needs one. Most pros use iMacs and MacBook Pros.
  • Reply 20 of 30
    melgross said:
    DuhSesame said:
    melgross said:
    tht said:
    melgross said:
The latest Nvidia GPUs have over 25 billion transistors, and an energy budget to support that. There is simply no way Apple can even begin to approach that. It takes Nvidia's highest GPU boards to enable ray tracing in a useful way. Remember, we're talking not about ray tracing static imagery, but dynamically rendered images at a rate of at least 60 fps. That's real-time tracing. It's in addition to every other graphics demand, such as real-time physics, dynamic texture generation, hidden surface elimination, etc. These all additionally require very large amounts of graphics RAM.
    The A12Z on TSMC N7 has 10 billion transistors on a 120 mm^2 die. This is for a fanless tablet. I definitely see them hitting 20 to 30 billion transistors on TSMC N5 for the Mac SoCs. A notional A14X for an iPad Pro could be 15 billion transistors just based on the new fab alone. A Mac SoC for an iMac can easily be 2x that at around 30 billion.

    Transistor budgets of over 25 billion are not that daunting for Apple Silicon. It's the power budgets that are. They just aren't going to run any of their SoCs at over 150 Watts, and I'd bet they want 100 W or lower. That will be self-limiting, so performance probably won't match these 300 W dGPUs or 200 W CPUs on average.

I expect them to be at parity with, or possibly more performant than, dGPUs in each Watt class, though. The fab advantage will give them about a 30% perf/W advantage. The density advantage is going to be about 100%. They have so many transistors to play with on TSMC N5 that you wonder if they are willing to use it all.

    melgross said:
But none of us know what Apple's plans really are. We can speculate, but that's about it. And Apple has told us the basics during the June conference. In fact, they told us more than I expected. I listened very closely. I looked at their charts. We don't see a separate GPU, at least not for the foreseeable future. I suspect that we'll see an SoC that competes with Intel very well in the beginning, likely exceeding the performance of Apple's previous, similar machines. I don't expect to see overwhelming GPU performance. I would expect to see performance at the level of a lower-end separate GPU.

Anything that exceeds that would thrill me.
Yes. Apple has a chance to do something different from the packaging and designs afforded to them by components from Intel and AMD. It could be a monolithic SoC. It could be chiplets. It could be stacks of chiplets. They could use HBM, GDDR, or some custom memory solution built on commodity DRAM. It's going to be interesting, to say the least.
I just have to disagree. Apple has been pretty clear that these Macs would use less power and, according to the chart they showed, have equal performance. But they didn't say which Macs they were talking about. There is no way Apple is going to make that leap in the next year. Maybe, over time, if they are willing to make a much larger chip, and Nvidia's chips are very large indeed, they can do it. But then the efficiency and power draw will be severely compromised. And you're forgetting, I suppose, that that 25 billion transistors is just for the GPU. Apple has to contend with an entire SoC. It's not going to happen.

I really don't care how efficient ARM instructions are, or how good Apple's engineering is. There are physical laws that everyone has to follow, and it's not likely that Apple is special here. Not only is Apple not going to run over 150 watts, they likely don't want to run much over 20 watts. There's only so much they can do.
    It would only make sense for a workstation to have dedicated GPUs.  Apple is replacing four of them, no way they could build a die that large.  How are you going to cool it?

Also, the A12X/Z can do 20 watts if you stress them enough; the CPU alone will output 15 watts. Macs should have no problem pushing beyond that. I understand Apple doesn't need to win a spec war, but there's no reason to constrain themselves for competitors.

Maybe they're pointing at the 14" Pro and the 24" iMac; neither is exactly "high-end".
The 16" MacBook Pro has one, and so do the 27" iMacs. Everyone who games or does any rendering needs one. Most pros use iMacs and MacBook Pros.
    No doubt desktops are going to have them.  Consoles are an exception where they don't need powerful CPUs (it's 15W) and they're "minor" by comparison.

Laptops, however, have a limited power budget; in the case of the large Pros, no models went beyond 100W. Both the 5500M and 5600M have a TGP of 50W, and Intel's processor drains another ~60-80W. Not pretty. Cooling two chips side by side isn't so beneficial in a limited cooling system, where both rely heavily on one side. An SoC could make sense here, and the performance goal isn't hard to reach.
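Summing the figures in that comment makes the squeeze explicit (the TGP and CPU numbers are the commenter's; the ~100W ceiling is the comment's own figure for the large Pro laptops):

```latex
P_{\text{GPU}} + P_{\text{CPU}} \approx 50\ \text{W} + (60\text{--}80)\ \text{W} = 110\text{--}130\ \text{W} \;>\; \sim 100\ \text{W}
```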