Mojave hints at new & unannounced Vega GPU chips coming to Mac soon

Posted in General Discussion, edited December 2018
AMD may be extending its line of Radeon Pro Vega GPUs with chips that could be used in a future Mac refresh in early 2019, according to identifiers found in a recent macOS Mojave update.

MacBook Pro with Vega 20 GPU


AMD's Vega 16 and Vega 20 GPUs arrived on the MacBook Pro on November 14, offered as discrete graphics upgrades replacing the Radeon Pro 555X and Radeon Pro 560X. While the recent availability of the GPUs from Apple is welcome to those wanting more performance, it appears that Apple may already be looking toward other unannounced Vega GPU variants.

Patches to Linux kernel drivers published on Friday include references to a number of PCI IDs that don't correspond to currently available Vega GPUs, Phoronix reports. One new PCI ID joins five existing Vega 20 PCI IDs in the Linux driver, while six more new IDs are said to relate to Vega 10.

Phoronix notes that the only other references to the PCI IDs in question appear in the most recent macOS Mojave update and in GPUOpen's GFX9 parts list.

The existence of the new PCI IDs strongly implies that AMD is preparing more chips in the notebook-oriented Vega range for the near future, possibly variants or upgraded versions of the existing releases. It is also plausible that the references relate to products undergoing internal compatibility testing rather than ones destined for consumer release.

The new Vega 20 PCI ID is defined as 0x66A4, while the Vega 10 additions are 0x6869, 0x686A, 0x686B, 0x686D, 0x686E, and 0x686F.
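
A minimal, hypothetical sketch (not from the report) of how a device's PCI ID could be checked against the newly spotted entries:

```python
# Newly spotted AMD PCI device IDs, per the Phoronix report.
NEW_VEGA_20_IDS = {0x66A4}
NEW_VEGA_10_IDS = {0x6869, 0x686A, 0x686B, 0x686D, 0x686E, 0x686F}

def identify_vega(device_id: int) -> str:
    """Map a PCI device ID to the Vega family it was added under."""
    if device_id in NEW_VEGA_20_IDS:
        return "new Vega 20 variant"
    if device_id in NEW_VEGA_10_IDS:
        return "new Vega 10 variant"
    return "not one of the newly added IDs"

print(identify_vega(0x66A4))  # new Vega 20 variant
print(identify_vega(0x686D))  # new Vega 10 variant
```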

In Apple's case, it seems unlikely that the company would add another GPU option to the MacBook Pro so soon after refreshing the lineup. Apple has used laptop-centric parts in iMacs in the past, however, and that line hasn't been updated recently.

The Vega 16 and Vega 20 used in the current high-end MacBook Pro models have what AMD describes as "Vega next-generation compute units," with the number in the name corresponding to the number of units each has. The GPUs include a feature called Rapid Packed Math that accelerates workloads in real time and cuts down the resources required for repetitive tasks.

The use of second-generation high bandwidth memory (HBM2) offers benefits over the GDDR5 memory used in other graphics systems, including more memory bandwidth per chip and lower power consumption. Also beneficial to notebook producers is the inclusion of HBM2 on the GPU package itself, giving the GPU a smaller overall footprint and saving space inside the notebook for other components.

Those in the market for a Vega 16- or Vega 20-equipped MacBook Pro can save $225 instantly on every configuration with a coupon.

Comments

  • Reply 1 of 15
    ...I would ask if the 'new' mini would benefit from more balanced internal cpu-gpu options as well...
    edited December 2018
  • Reply 2 of 15
    Mike Wuerthele Posts: 6,858, administrator
    ...I would ask if the 'new' mini would benefit from more balanced internal cpu-gpu options as well...
    I'd think so. The 16/20 line makes sense for the Mac mini form factor.
  • Reply 3 of 15
    tht Posts: 5,421, member
    If the Vega 10 follows the convention where the number indicates the number of compute units, a la Vega 16, 20, 56, and 64, where is this Vega 10 going to go? It’s going to be about as fast as Intel processor graphics, right?

    MBP13TB? (Is there room?)
    iMac smaller display model? (Vega 16, 20 should go here at minimum)
    MBA13 new model? (Is there room?)
    Mac mini? (Is there room?)

    Don’t understand how this will benefit the lineup if it is half as fast as the Vega 20.


  • Reply 4 of 15
    madan Posts: 103, member
    God, I hope not. Vega just needs to go. Keep in mind that the 2200G has a Vega chip on die with 8 CUs and the 2400G has 11 CUs. That 11 CU component matches a 1030 pretty much pound for pound. That means a 16 CU part would trade with a 1050, and the new MBP's 20 CU unit matches up with a 1050 Ti, which also produces low 3s in TF, just like the Vega 20. So it all matches up. What that means is that by calling it "Vega" instead of "Radeon RX", the average uninitiated computer buyer thinks they're getting a much faster "work" part than they actually are. To put things in perspective, the next set of Ryzen APUs will ship with between 11 and 16 CUs. That means a 150 dollar APU part will match the firepower of a 2,500 dollar laptop GPU that Apple is offering right now.
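
    As a back-of-the-envelope check on those numbers, a sketch assuming GCN/Vega's 64 stream processors per CU and an assumed ~1.3 GHz boost clock (my assumption, not a spec-sheet figure):

    ```python
    # Rough FP32 throughput for a GCN/Vega part:
    # TFLOPS = CUs * 64 stream processors * 2 ops (FMA) * clock (GHz) / 1000
    def fp32_tflops(cus: int, clock_ghz: float = 1.3) -> float:
        return cus * 64 * 2 * clock_ghz / 1000.0

    for name, cus in [("Vega 11", 11), ("Vega 16", 16), ("Vega 20", 20)]:
        print(f"{name}: ~{fp32_tflops(cus):.1f} TFLOPS")
    # Vega 20 lands around 3.3 TFLOPS -- the "low 3s in TF" above.
    ```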

    Apple is better off pushing the next-gen 7nm Radeon RX components (3080, 3070 and 3060) across its lineup. 7nm means far lower TDP, as well as higher performance for the cost, especially since Nvidia is stuck on larger process nodes.

    With a Vega 56 being equivalent to a 1070 in both TF and practical, real-world performance, and the 20 CU Vega in the MBP, what possible iMac could they release based on Vega in between those counts? Vega 32? Vega 48? Guess what? AMD already makes those parts. They're called the RX 570 and RX 580. They'd have the SAME PERFORMANCE. The iMac would move laterally in GPU and just hide that fact by couching it with a "pro" Vega name.

    So no...go away Vega. You served your purpose. All hail the HBM2-equipped 3080 series that rumors indicate AMD is waiting to drop in March-April. That would be perfect. You could have a 3050/5 shipping in the low-end iMac 27/32 and a 3070/80 in the high end. The iMacs would catapult from RX 580 performance up to Vega 56 and 64. And then they'd just move the iMac Pro to something else.


  • Reply 5 of 15
    TomE Posts: 172, member
    Look for Vega graphics in a new MacBook Air, as well as some other Macs. Possibly in Feb.

  • Reply 6 of 15
    tht Posts: 5,421, member
    madan said:
    God, I hope not. Vega just needs to go.
    Like with iOS devices, Apple could be going custom GPU on their laptops in the not-so-distant future, and so Vega could indeed be going away.

    A notional T3 (MBA) and T3X (MBP) could have a GPU and PCIe switch in them. Apple could interface a T3 chip with all 16 PCIe lanes coming out of a typical Intel processor, and support the TB3 controllers, the SSD controller, and whatever other I/O through a switch. The switch could give the GPU all 16 lanes' worth of PCIe bandwidth, or give the SSD controller all 16 lanes of PCIe bandwidth.

    It would also make board layout a little bit easier. It is kind of interesting that Apple's T2 laptops do not seem to have Intel PCH chips in them, while the desktops do. Since the laptops only have TB3, all I/O and GPU traffic is bridged from the CPU's PCIe lanes. It's not that far-fetched to think an on-chip GPU is next. Since the T2 desktops have USB-A and Ethernet, that kind of implies the Intel PCH is the cheapest route for those. On-chip support for USB and Ethernet might be included in a future T-series chip too, so that eventually a Mac is just an Apple SoC with an optional Intel processor for those who want to support legacy apps or x86 VMs.

    madan said:
    Keep in mind that the 2200G has a Vega chip on die with 8 CUs and the 2400G has 11 CUs. That 11 CU component matches a 1030 pretty much pound for pound. That means a 16 CU part would trade with a 1050, and the new MBP's 20 CU unit matches up with a 1050 Ti, which also produces low 3s in TF, just like the Vega 20.
    TF = TensorFlow? Team Fortress?

    The 2400g with a Vega 11 scores about 45000 in Geekbench OpenCL compute. A Vega 10 would be about 10% less. That’s A12X GPU performance, except that the A12X is in a 10 W envelope versus a 45 W. Intel processor graphics aren’t that far away from a Vega 10 as well. Do agree that there doesn’t seem to be a role for this type of GPU almost anywhere.
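
    That 10% is just linear scaling with CU count; the arithmetic, as a sketch:

    ```python
    # Linear-with-CU-count scaling from the 2400G's Vega 11 score.
    vega11_score = 45_000  # approx. Geekbench OpenCL compute
    vega10_est = vega11_score * 10 / 11
    print(f"Vega 10 estimate: ~{vega10_est:,.0f}")  # ~40,909, roughly 10% less
    ```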

    madan said:
    With a Vega 56 being equivalent to a 1070 in both TF and practical, real-world performance, and the 20 CU Vega in the MBP, what possible iMac could they release based on Vega in between those counts? Vega 32? Vega 48? Guess what? AMD already makes those parts. They're called the RX 570 and RX 580. They'd have the SAME PERFORMANCE. The iMac would move laterally in GPU and just hide that fact by couching it with a "pro" Vega name.


    So no...go away Vega. You served your purpose. All hail the HBM2-equipped 3080 series that rumors indicate AMD is waiting to drop in March-April. That would be perfect. You could have a 3050/5 shipping in the low-end iMac 27/32 and a 3070/80 in the high end. The iMacs would catapult from RX 580 performance up to Vega 56 and 64. And then they'd just move the iMac Pro to something else.
    iMacs will get 75 W to 120 W GPUs. iMac Pros will get 150 W to 200 W GPUs. So, whichever GPU variants fit in those power constraints will go into those respective machines. This is assuming they keep the same form factor. They may go to a different form factor with different power and noise constraints.

    If Apple can put together a GPU package with 30 of their A12X cores, it’s going to be on the order of 150,000 Geekbench Compute points, something close to a Vega 56, but it’ll be in a 40 W package. Whether they want to do this, who knows.
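
    A sketch of that extrapolation; the 15% derating for imperfect scaling is my assumption:

    ```python
    per_core = 42_000 / 7    # A12X: ~42,000 GB4 compute across 7 GPU cores
    linear = per_core * 30   # ~180,000 if scaling were perfect
    derated = linear * 0.85  # ~153,000 -- "on the order of 150,000"
    print(f"linear: ~{linear:,.0f}, derated: ~{derated:,.0f}")
    ```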
  • Reply 7 of 15
    These could very well be the 7nm Vega 2 Instinct cards for machine learning and more, which would be an obvious fit for the 2019 Mac Pro.

    These would be OEM custom designs of the MI60. These are the PCIe 4.0 variant cards.

    Bus type: PCIe 4.0 x16 / PCIe 3.0 x16
    Cooling: passive

    They are very powerful and run very quiet with passive cooling only. Perfect for Apple.

    Supports dual Infinity Fabric™ Links that connect at a peak Infinity Fabric™ Link bandwidth of 100 GB/s.

    The size of the new 7nm Instinct is amazingly small.
    edited December 2018
  • Reply 8 of 15
    I would rather have an eGPU built inside a monitor for the Mac Mini. Much better than a separate box.  
  • Reply 9 of 15
    ksec Posts: 1,569, member
    tht said:
    iMacs will get 75 W to 120 W GPUs. iMac Pros will get 150 W to 200 W GPUs. So, whichever GPU variants fit in those power constraints will go into those respective machines. This is assuming they keep the same form factor. They may go to a different form factor with different power and noise constraints.
    Really dislike the idea of higher TDP = Pro. They should leave the Xeon, ECC memory, and Radeon Pro on the iMac Pro, and offer an i9 CPU, normal memory, and a fast GPU for the Mac.
  • Reply 10 of 15
    tipoo Posts: 1,141, member
    AMD filed for the name "Vega 2". Probably to differentiate 7nm Vega from the real next gen architecture, Navi, while still showing it's substantially new. 

    Hope they're out in time for the next iMac refresh, which is shaping up to be a good one. 
    edited December 2018
  • Reply 11 of 15
    mcdave Posts: 1,927, member
    tht said:
    If Apple can put together a GPU package with 30 of their A12X cores, it’s going to be on the order of 150,000 Geekbench Compute points, something close to a Vega 56, but it’ll be in a 40 W package. Whether they want to do this, who knows.
    Without this they’re fast becoming an over-priced Intel lackey.
  • Reply 12 of 15
    madan Posts: 103, member
    tht said:
    Like with iOS devices, Apple could be going custom GPU on their laptops in the not-so-distant future, and so Vega could indeed be going away.

    A notional T3 (MBA) and T3X (MBP) could have a GPU and PCIe switch in them. Apple could interface a T3 chip with all 16 PCIe lanes coming out of a typical Intel processor, and support the TB3 controllers, the SSD controller, and whatever other I/O through a switch. The switch could give the GPU all 16 lanes' worth of PCIe bandwidth, or give the SSD controller all 16 lanes of PCIe bandwidth.

    It would also make board layout a little bit easier. It is kind of interesting that Apple's T2 laptops do not seem to have Intel PCH chips in them, while the desktops do. Since the laptops only have TB3, all I/O and GPU traffic is bridged from the CPU's PCIe lanes. It's not that far-fetched to think an on-chip GPU is next. Since the T2 desktops have USB-A and Ethernet, that kind of implies the Intel PCH is the cheapest route for those. On-chip support for USB and Ethernet might be included in a future T-series chip too, so that eventually a Mac is just an Apple SoC with an optional Intel processor for those who want to support legacy apps or x86 VMs.

    TF = TensorFlow? Team Fortress?

    The 2400g with a Vega 11 scores about 45000 in Geekbench OpenCL compute. A Vega 10 would be about 10% less. That’s A12X GPU performance, except that the A12X is in a 10 W envelope versus a 45 W. Intel processor graphics aren’t that far away from a Vega 10 as well. Do agree that there doesn’t seem to be a role for this type of GPU almost anywhere.

    iMacs will get 75 W to 120 W GPUs. iMac Pros will get 150 W to 200 W GPUs. So, whichever GPU variants fit in those power constraints will go into those respective machines. This is assuming they keep the same form factor. They may go to a different form factor with different power and noise constraints.

    If Apple can put together a GPU package with 30 of their A12X cores, it’s going to be on the order of 150,000 Geekbench Compute points, something close to a Vega 56, but it’ll be in a 40 W package. Whether they want to do this, who knows.
    An on-die GPU would not work; Intel isn't building APUs. If you're talking about on-board T4-x chips that would not only handle boot but also deal with bus modulation and serve as a bridge to an Apple-manufactured IGP, I hope not. No IGP is a good option for a power user. I defy anyone to show an IGP option that is as efficient as a discrete option, much less more so. It doesn't exist, even in higher-performance hardware.

    Secondly, Apple-created GPUs may be perfectly acceptable for the iPhone, which still has less than 1.5 TF of power, but certainly not against discrete contemporaries that are primed to have seven times that performance in next calendar year's iteration. Sure, cooling could be better and you could raise the clock frequency in such an Apple-branded GPU, but there's no guarantee you'd get performance comparable to an enthusiast-class AMD/Nvidia part.

    Additionally, Boot Camp would have to go the way of the dinosaur, which would already shrink the Apple computer market, especially for dual-booting power users. They'd also take a further hit as more and more developers would *have* to use Metal 2 in order to utilize the complete computational ability of the Apple GPU. All around, it's a clumsy plan, especially given that Apple has bigger fish to fry with the iPhone, iPad, Watch and other products. It's simply easier to grab an off-the-shelf Intel part and an AMD discrete chipset than to replace entire components wholesale. An ancillary T2/T3 chip is not the same as the massive undertaking of replacing the entire graphics subsystem in a computer. Windows emulation is not a legitimate option for power users.

    TF is teraflops, which, when taken against the memory multiplier, gives you the cleanest summary of comparable chip-to-chip performance. Someone thought Team Fortress was funny. Somewhere. Not me though.

    As for the iMac parts, the top-end iMac discrete GPUs climb well over 120 W. They actually top out at about 150-175 W. So as I said, it's entirely likely that they'll use the 3080 next year, which is slated to have a 150 W TDP. That allows one full iterative leap in performance while keeping the size, thermal envelope and wattage nigh identical to the 580's. AMD's CEO said as much when she talked about the company's 7nm chip/node path. If Apple put that Apple chip together, no software would run on it, because no one codes high-performance applications for that architecture. You'd have to wait years for new statistical apps, desktop publishing and games. They might do it eventually, but it's far too soon today.
    edited December 2018
  • Reply 13 of 15
    tht Posts: 5,421, member
    ksec said:
    Really dislike the idea of higher TDP = Pro. They should leave the Xeon, ECC memory, and Radeon Pro on the iMac Pro, and offer an i9 CPU, normal memory, and a fast GPU for the Mac.
    Maybe Apple will have a change of heart like they did with the Mac mini, but I don’t think so. It’s the way Apple segments the product line so that value is immediately obvious, with the upsell at every price tier. They don’t want to give you a lot of choices. If you want something in particular, they’ll make you pay for it as that is what you value.

    Take a simple thing like storage. I wanted to get >3 TB of internal storage over 4 years ago. The only option for that was the iMac 27. I would have preferred a small form factor headless box with two 3.5” HDDs, but they don’t sell those anymore.

    Then, Apple does not sell computers for gamers, so they won’t sell a 250 W GPU for $300 in a $1500 box. For content creation people who have apps that can take advantage of such a card, well, they can pay $6000 for that plus all the Xeon, ECC RAM, and other expensive bits.

    Btw, if the iMac goes all SSD, it will likely be $300 more expensive. The small display iMac will be $1600 and the large display iMac will be $2200. If you want more perf/$, ARM Macs are really the only hope. A 13” display laptop with 8 GB RAM and A12X at $1300 would be a very competitive machine, or an 8 GB RAM A10X in a really small box for $500 would be a really great machine. Or even a laptop with a 15W Core i5-8265U with an Apple 10-core GPU on a T3 chip for $1500 would be an awesome Mac.
  • Reply 14 of 15
    tipoo said:
    AMD filed for the name "Vega 2". Probably to differentiate 7nm Vega from the real next gen architecture, Navi, while still showing it's substantially new. 

    Hope they're out in time for the next iMac refresh, which is shaping up to be a good one. 
    Vincent Vega approves.
  • Reply 15 of 15
    tht Posts: 5,421, member
    madan said:
    Secondly, Apple-created GPUs may be perfectly acceptable for the iPhone, which still has less than 1.5 TF of power, but certainly not against discrete contemporaries that are primed to have seven times that performance in next calendar year's iteration. Sure, cooling could be better and you could raise the clock frequency in such an Apple-branded GPU, but there's no guarantee you'd get performance comparable to an enthusiast-class AMD/Nvidia part.

    Additionally, Boot Camp would have to go the way of the dinosaur, which would already shrink the Apple computer market, especially for dual-booting power users. They'd also take a further hit as more and more developers would *have* to use Metal 2 in order to utilize the complete computational ability of the Apple GPU. All around, it's a clumsy plan, especially given that Apple has bigger fish to fry with the iPhone, iPad, Watch and other products. It's simply easier to grab an off-the-shelf Intel part and an AMD discrete chipset than to replace entire components wholesale. An ancillary T2/T3 chip is not the same as the massive undertaking of replacing the entire graphics subsystem in a computer. Windows emulation is not a legitimate option for power users.
    I’m talking about laptops. Obviously, for desktops, they can just use 200 W cards from AMD or mercifully get over whatever issue they have with Nvidia and offer both options.

    For laptops, the Vega 20 in the MBP scores about 75,000 in Metal/OpenCL GB4 compute. The A12X scores about 42,000 in Metal GB4 compute. This is the on-chip GPU in the A12X, with 7 cores, sitting inside a fanless device (<10 W). They can make a T3 ARM coprocessor with 12 of those cores and get somewhere around 72,000 in Metal GB4 compute, about the same as a Vega 16 or maybe 20. This would be <20 W. That’s for 13” display devices and the Mac mini. That’s better GB4 compute performance than the 25 W Nvidia MX150.

    They can use 20 of those GPU cores to get over 100,000 Metal GB4 compute points at 30 W, about 30% faster than the Vega 20 while running at about half the power consumption, in a notional T3X for the MBP or even the small-display iMac. That’s pretty good, and they could get rid of the discrete GPU, make a smaller logic board, and add some combination of more battery and more ports, or make it a thinner and lighter device.
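
    A minimal sketch of that linear extrapolation, assuming per-core throughput holds constant (which real designs rarely manage):

    ```python
    # Extrapolating from the A12X: ~42,000 Metal GB4 compute across 7 GPU cores.
    per_core = 42_000 / 7
    for name, cores in [("notional T3", 12), ("notional T3X", 20)]:
        print(f"{name} ({cores} cores): ~{per_core * cores:,.0f} GB4 compute")
    # T3: ~72,000 (Vega 16/20 territory); T3X: ~120,000, comfortably over
    # 100,000 versus ~75,000 for the MBP's Vega 20.
    ```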

    madan said:
    TF is teraflops, which, when taken against the memory multiplier, gives you the cleanest summary of comparable chip-to-chip performance. Someone thought Team Fortress was funny. Somewhere. Not me though.

    Ok. Which type of FLOPS are we talking about here? 8-bit, 16-bit, 32-bit? And what do you mean by memory multiplier? The system bus between the CPU and system RAM, or the GPU-to-graphics-memory bus? And what do you mean by “taken against”?

    The Radeon Pro 580 in the 2017 iMac 27 supposedly has 5.5 TFLOPS of single precision. Does that mean it is TF 5.5? Or do you need to multiply it by something else?
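
    For reference, the single-precision figure falls straight out of the shader count and clock; using the Radeon Pro 580's 2304 stream processors and a ~1.2 GHz clock (figures I'm recalling, not quoting):

    ```python
    # FP32 FLOPS = stream processors * 2 (fused multiply-add per cycle) * clock
    sps, clock_hz = 2304, 1.2e9
    print(f"~{sps * 2 * clock_hz / 1e12:.2f} TFLOPS")  # ~5.53, i.e. the quoted 5.5
    ```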

    madan said:
    As for the iMac parts, the top-end iMac discrete GPUs climb well over 120 W. They actually top out at about 150-175 W. So as I said, it's entirely likely that they'll use the 3080 next year, which is slated to have a 150 W TDP. That allows one full iterative leap in performance while keeping the size, thermal envelope and wattage nigh identical to the 580's. AMD's CEO said as much when she talked about the company's 7nm chip/node path. If Apple put that Apple chip together, no software would run on it, because no one codes high-performance applications for that architecture. You'd have to wait years for new statistical apps, desktop publishing and games. They might do it eventually, but it's far too soon today.
    Wikipedia says the Radeon Pro 580 in the iMac 27 is a 150 W TDP part.

    My issue with 150 W is that the iMac 27” only has a 300 W power supply. So, 90 W for the CPU, 150 W for the GPU, 20 W for memory, then only 40 W left for the display, SSD, audio and I/O ports? A 27” Apple Thunderbolt Display has a 250 W power supply without the computing bits, but likely uses 100 W for MagSafe and 50 W for I/O, leaving 100 W for the display. I can’t see how Apple is letting the GPU run at 150 W. So they are likely doing the usual: decrease performance by about 10%, and reduce power consumption by 30%. That’s always a good trade imo.
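
    The budget arithmetic above, itemized as a sketch:

    ```python
    supply = 300  # W, iMac 27 power supply
    draws = {"CPU": 90, "GPU at full TDP": 150, "memory": 20}
    leftover = supply - sum(draws.values())
    print(f"{leftover} W left for display, SSD, audio, I/O")  # 40 W
    ```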

    The 2019 iMac having the equivalent of a Vega 56/64 from the iMac Pro is about the most you can hope for imo. The iMac Pro (single GPU) and Mac Pro (multi GPU) will be what uses the top-end 150 to 250 W GPUs, whatever they’ll be branded.