AMD Radeon Pro W5500 offered as price-conscious alternative Mac Pro GPU

AMD has launched another professional graphics card that could be used in the Mac Pro: the Radeon Pro W5500, a scaled-back alternative to the high-powered W5700 at half the price of its stablemate.

AMD revealed the Radeon Pro W5700 in November as its first workstation graphics card built around a GPU on a 7-nanometer process. While its high performance makes it extremely suitable for the kinds of workloads the Mac Pro is intended for, the card is also quite expensive at $799, so the launch of a similar card at a lower price point is likely to be welcomed by professional users.

The Radeon Pro W5500 is made using the same 7-nanometer process and the same high-performance RDNA architecture as the W5700, which boasts 25 percent higher performance per clock than previous-generation GCN architecture cards. The line also offers significant multitasking performance improvements, along with a more power-efficient design that AMD says consumes 32 percent less power on average in SolidWorks.

To bring the price down, AMD has reduced the card's specifications as well. Where the W5700 has 36 compute units, 8.89 teraflops of performance, and 8GB of GDDR6 memory, the W5500 has a still-respectable 22 compute units and 1408 stream processors, delivering up to 5.35 teraflops of single-precision performance, or up to 330 gigaflops at double precision.
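As a sanity check, those numbers are internally consistent: RDNA groups 64 stream processors into each compute unit, and each stream processor can retire two single-precision operations per clock via a fused multiply-add. Here is a minimal sketch of the arithmetic; the roughly 1.9GHz boost clock it implies is inferred from these figures rather than quoted in this article.

```python
# Back-of-envelope check of the W5500's quoted specifications.
# Assumptions: 64 stream processors (SPs) per RDNA compute unit,
# and 2 FP32 operations per SP per clock (one fused multiply-add).

compute_units = 22
sps_per_cu = 64                      # RDNA architecture constant
stream_processors = compute_units * sps_per_cu
print(stream_processors)             # 1408, matching AMD's figure

peak_tflops = 5.35                   # AMD's quoted "up to" FP32 number
ops_per_sp_per_clock = 2
implied_clock_ghz = (peak_tflops * 1e12
                     / (stream_processors * ops_per_sp_per_clock) / 1e9)
print(round(implied_clock_ghz, 2))   # ~1.9, i.e. a ~1.9GHz boost clock
```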

Memory is still 8GB of GDDR6, with up to 224GB/s of memory bandwidth, and maximum power consumption is 125 watts. The number of displays it can drive has also gone down from six to four, with the quad DisplayPort 1.4 outputs able to handle four 4K displays or one 8K display at 60Hz.
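That bandwidth figure also follows directly from the memory configuration. A minimal sketch, assuming 14Gbps GDDR6 on a 128-bit bus; both values are typical for this class of GPU but are not stated in the article.

```python
# Memory bandwidth from bus width and per-pin data rate.
# Assumptions: 128-bit memory bus and 14 Gbps GDDR6, neither of
# which is quoted in the article above.

bus_width_bits = 128
data_rate_gbps = 14                  # gigabits per second, per pin
bandwidth_gb_per_s = bus_width_bits * data_rate_gbps / 8  # bits -> bytes
print(bandwidth_gb_per_s)            # 224.0, matching the quoted 224GB/s
```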

Like the W5700, the card supports PCIe 4.0, though it will also work in the PCIe 3.0 slots of the Mac Pro or in eGPU enclosures. It is a single-width, full-height card.

While it is scaled back from the W5700, AMD insists the W5500 is still a strong card for the workplace, including for visualization and VR applications.

A mobile version of the GPU is also offered under the Radeon Pro W5500M name. It is equipped with the same 22 compute units and memory bandwidth, but has 4GB of GDDR6 memory, a maximum power consumption of 85 watts, and up to 4.79 teraflops of performance.

The mobile version stands a chance of being employed in a future MacBook Pro model, as Apple uses AMD GPUs as discrete alternatives to Intel's integrated graphics.

The Radeon Pro W5500 graphics card will be going on sale in mid-February, priced at $399. The W5500M GPU is anticipated to be available in professional mobile workstations later in the spring, though AMD did not state which vendors would use the component.

While the card isn't currently compatible with macOS, the rest of AMD's graphics card lineup is, whereas Nvidia's range isn't. Support arriving in a future macOS update therefore seems very likely, though Apple has not confirmed it.

Comments

  • Reply 1 of 15
lkrupp Posts: 10,557 member
If you are "price-conscious" then you are probably not looking at the Mac Pro to begin with. However, there appear to be quite a few individuals buying the Mac Pro for personal use just because they can. The Apple Discussion Forums have a lot of posts from users who are not professionals.
    watto_cobra
  • Reply 2 of 15
mknelson Posts: 1,125 member
    lkrupp said:
If you are "price-conscious" then you are probably not looking at the Mac Pro to begin with. However, there appear to be quite a few individuals buying the Mac Pro for personal use just because they can. The Apple Discussion Forums have a lot of posts from users who are not professionals.
    I have a client purchasing one to use as a server - the GPU isn't relevant to that function so a lower cost option would be useful if Apple were to offer it.
    watto_cobra
  • Reply 3 of 15
tht Posts: 5,443 member
    Request: Can AI evaluate what workflows are driven by GPU memory? Virtually all these 3rd party cards are 8 GB to 16 GB, while Apple is shipping 32 GB GPU upgrades for the MP, with a 16 GB W5700X coming soon. It seems your basic YouTube videographer and gamers don’t really make use of 16 GB let alone 32 GB. Same deal with main memory, where most YouTubers and gamers don’t need much more than 32 to 64 GB.

    That is, what is the performance uplift for these workflows that need 32 GB GPU memory versus just having 16 GB or 8 GB?
    watto_cobra
  • Reply 4 of 15
ITGUYINSD
lkrupp said:
If you are "price-conscious" then you are probably not looking at the Mac Pro to begin with. However, there appear to be quite a few individuals buying the Mac Pro for personal use just because they can. The Apple Discussion Forums have a lot of posts from users who are not professionals.
    Don't forget the non-Apple "Macs".  Quite a popular option for those that want to build their own Mac.  These cards would be another option for them.
  • Reply 5 of 15
lkrupp Posts: 10,557 member
    Did anyone read the last paragraph? The card is not yet compatible with macOS.
    bsbeamer
  • Reply 6 of 15
    ITGUYINSD said:
    lkrupp said:
If you are "price-conscious" then you are probably not looking at the Mac Pro to begin with. However, there appear to be quite a few individuals buying the Mac Pro for personal use just because they can. The Apple Discussion Forums have a lot of posts from users who are not professionals.
    Don't forget the non-Apple "Macs".  Quite a popular option for those that want to build their own Mac.  These cards would be another option for them.
    How popular? How many of these machines are there?
  • Reply 7 of 15
    tht said:
    Request: Can AI evaluate what workflows are driven by GPU memory? Virtually all these 3rd party cards are 8 GB to 16 GB, while Apple is shipping 32 GB GPU upgrades for the MP, with a 16 GB W5700X coming soon. It seems your basic YouTube videographer and gamers don’t really make use of 16 GB let alone 32 GB. Same deal with main memory, where most YouTubers and gamers don’t need much more than 32 to 64 GB.

    That is, what is the performance uplift for these workflows that need 32 GB GPU memory versus just having 16 GB or 8 GB?
    Given the compute units in W5500 vs RX 5700 XT, it seems like a no-brainer to go with one of those models for better GPU performance at basically the same price.  The W5700X is niche to MP7,1 and will not benefit eGPU users (except for driver improvements).  

Rarely do people max out their VRAM on the Mac unless you're dealing with renders, but even those are usually RAM-cached rather than VRAM-based. Overall it leaves more headroom, but I agree very few applications will be able to use it outside of 3D packages like C4D. Autodesk at least used to be a leader in GPU-driven rendering, but 3DS is no longer macOS compatible at all.
    watto_cobra
  • Reply 8 of 15
    tht said:
    Request: Can AI evaluate what workflows are driven by GPU memory? Virtually all these 3rd party cards are 8 GB to 16 GB, while Apple is shipping 32 GB GPU upgrades for the MP, with a 16 GB W5700X coming soon. It seems your basic YouTube videographer and gamers don’t really make use of 16 GB let alone 32 GB. Same deal with main memory, where most YouTubers and gamers don’t need much more than 32 to 64 GB.

    That is, what is the performance uplift for these workflows that need 32 GB GPU memory versus just having 16 GB or 8 GB?
    "Performance uplift for these workflows" meaning YouTubers and gamers? Because those generally aren't workflows that benefit from high performance/VRAM cards. The workflows that these higher performance cards are 3D/compositing/etc. Davinci Resolve/Fusion users, 3D GPU-based renderers, After Effects/Premiere plugins, realtime game engines, VR, so on. Anything that requires loading and caching lots of high res textures, realtime rendering, etc.
    watto_cobra
  • Reply 9 of 15
tht Posts: 5,443 member
    tht said:
    Request: Can AI evaluate what workflows are driven by GPU memory? Virtually all these 3rd party cards are 8 GB to 16 GB, while Apple is shipping 32 GB GPU upgrades for the MP, with a 16 GB W5700X coming soon. It seems your basic YouTube videographer and gamers don’t really make use of 16 GB let alone 32 GB. Same deal with main memory, where most YouTubers and gamers don’t need much more than 32 to 64 GB.

    That is, what is the performance uplift for these workflows that need 32 GB GPU memory versus just having 16 GB or 8 GB?
    "Performance uplift for these workflows" meaning YouTubers and gamers? Because those generally aren't workflows that benefit from high performance/VRAM cards. The workflows that these higher performance cards are 3D/compositing/etc. Davinci Resolve/Fusion users, 3D GPU-based renderers, After Effects/Premiere plugins, realtime game engines, VR, so on. Anything that requires loading and caching lots of high res textures, realtime rendering, etc.
    Oh definitely not. I said this after all: It seems your basic YouTube videographer and gamers don’t really make use of 16 GB let alone 32 GB.

    Would be nice to see what type of workflow benefits from 32 GB of video memory. The vast majority of Mac Pro related reviews seem to be from FCPX users, or people who run benchmarks that don’t stress video memory capacity. So, yeah, bring on the 1 TB CAD models, GPU compute models, AI training models, numerical modelers, so on and so forth. I’m curious to see how much the increased video memory improves performance versus loading the data from main memory.
    watto_cobra
  • Reply 10 of 15
    mknelson said:
    lkrupp said:
If you are "price-conscious" then you are probably not looking at the Mac Pro to begin with. However, there appear to be quite a few individuals buying the Mac Pro for personal use just because they can. The Apple Discussion Forums have a lot of posts from users who are not professionals.
    I have a client purchasing one to use as a server - the GPU isn't relevant to that function so a lower cost option would be useful if Apple were to offer it.
Yup, some people need CPU, not GPU. For example, compiling iOS apps.
    watto_cobra
  • Reply 11 of 15
cornchip Posts: 1,949 member
I don’t know much about these things. I just bought a Radeon RX 580 to put in my 09 cMP. Hope it’s good enough for the experts :/
    watto_cobra
  • Reply 12 of 15
    So will this work with the Mac Pro 5,1?
    watto_cobra
  • Reply 13 of 15
tht Posts: 5,443 member
    So will this work with the Mac Pro 5,1?
As long as drivers are written for the macOS versions that run on it. Drivers haven't been certified or provided just yet, so who knows.
    watto_cobra
  • Reply 14 of 15
    tht said:
    tht said:
    Request: Can AI evaluate what workflows are driven by GPU memory? Virtually all these 3rd party cards are 8 GB to 16 GB, while Apple is shipping 32 GB GPU upgrades for the MP, with a 16 GB W5700X coming soon. It seems your basic YouTube videographer and gamers don’t really make use of 16 GB let alone 32 GB. Same deal with main memory, where most YouTubers and gamers don’t need much more than 32 to 64 GB.

    That is, what is the performance uplift for these workflows that need 32 GB GPU memory versus just having 16 GB or 8 GB?
    "Performance uplift for these workflows" meaning YouTubers and gamers? Because those generally aren't workflows that benefit from high performance/VRAM cards. The workflows that these higher performance cards are 3D/compositing/etc. Davinci Resolve/Fusion users, 3D GPU-based renderers, After Effects/Premiere plugins, realtime game engines, VR, so on. Anything that requires loading and caching lots of high res textures, realtime rendering, etc.
    Oh definitely not. I said this after all: It seems your basic YouTube videographer and gamers don’t really make use of 16 GB let alone 32 GB.

    Would be nice to see what type of workflow benefits from 32 GB of video memory. The vast majority of Mac Pro related reviews seem to be from FCPX users, or people who run benchmarks that don’t stress video memory capacity. So, yeah, bring on the 1 TB CAD models, GPU compute models, AI training models, numerical modelers, so on and so forth. I’m curious to see how much the increased video memory improves performance versus loading the data from main memory.
    Oh, well in that case you are generally correct. Unless those videographers are filming in 8K and running multiple denoising nodes and things that can eat up VRAM. I guess I misunderstood your post to be asking which kinds of workflows would benefit from higher powered GPUs with more VRAM, when you already know but want to see actual user field tests and benchmarks etc. I would too.
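For a rough sense of scale, here is a minimal back-of-envelope sketch assuming frames are held in VRAM as uncompressed 32-bit float RGBA; real applications use a variety of internal formats, so treat the numbers as illustrative only.

```python
# Rough VRAM cost of caching uncompressed 8K frames on the GPU.
# Assumption: 32-bit float RGBA working format, which is common in
# GPU compositors but by no means universal.

width, height = 7680, 4320           # 8K UHD resolution
channels, bytes_per_channel = 4, 4   # RGBA, float32

frame_bytes = width * height * channels * bytes_per_channel
print(frame_bytes / 1e9)             # ~0.53 GB per frame

# A modest frame cache plus node buffers fills an 8GB card quickly.
print(8e9 // frame_bytes)            # ~15 frames fit in 8GB
```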
    watto_cobra
  • Reply 15 of 15
tht Posts: 5,443 member
    tht said:
    tht said:
    Request: Can AI evaluate what workflows are driven by GPU memory? Virtually all these 3rd party cards are 8 GB to 16 GB, while Apple is shipping 32 GB GPU upgrades for the MP, with a 16 GB W5700X coming soon. It seems your basic YouTube videographer and gamers don’t really make use of 16 GB let alone 32 GB. Same deal with main memory, where most YouTubers and gamers don’t need much more than 32 to 64 GB.

    That is, what is the performance uplift for these workflows that need 32 GB GPU memory versus just having 16 GB or 8 GB?
    "Performance uplift for these workflows" meaning YouTubers and gamers? Because those generally aren't workflows that benefit from high performance/VRAM cards. The workflows that these higher performance cards are 3D/compositing/etc. Davinci Resolve/Fusion users, 3D GPU-based renderers, After Effects/Premiere plugins, realtime game engines, VR, so on. Anything that requires loading and caching lots of high res textures, realtime rendering, etc.
    Oh definitely not. I said this after all: It seems your basic YouTube videographer and gamers don’t really make use of 16 GB let alone 32 GB.

    Would be nice to see what type of workflow benefits from 32 GB of video memory. The vast majority of Mac Pro related reviews seem to be from FCPX users, or people who run benchmarks that don’t stress video memory capacity. So, yeah, bring on the 1 TB CAD models, GPU compute models, AI training models, numerical modelers, so on and so forth. I’m curious to see how much the increased video memory improves performance versus loading the data from main memory.
    Oh, well in that case you are generally correct. Unless those videographers are filming in 8K and running multiple denoising nodes and things that can eat up VRAM. I guess I misunderstood your post to be asking which kinds of workflows would benefit from higher powered GPUs with more VRAM, when you already know but want to see actual user field tests and benchmarks etc. I would too.
Yeah, it is essentially a plea to better characterize the performance of the system, rather than just run the reviewers’ typical workflow. With high-core-count CPUs and GPU compute, including multi-GPU cards, performance can’t be boiled down to a few benchmark runs or a single workflow anymore. Performance in systems like this depends on the application and its architecture just as much as on the hardware itself, and software updates or refinements move even more glacially than hardware updates.

    Maybe it is not that important for reviewers to do a thorough job on a workstation like the Mac Pro, as most customers know and understand what will make their computing tasks go faster, but it is the tip of an arrow. High core count CPUs, GPU compute, and dedicated hardware (like T2 Videotoolbox, AI units) will be coming to cheaper systems soon enough. Like 8-core CPUs will be in small laptops and 16-core CPUs will be in large laptops pretty soon here. It seems the actual utilization of all the cores and GPUs will be done by an ever smaller fraction of buyers.

A large fraction of 6-core and 8-core MBP15 and MBP16 owners probably will never run a task that utilizes all of their cores. For a lot of Apple’s customers, a large-screen laptop with a more efficient, high-performance 4-core chip, lots of storage, a thinner and cooler-running design, and longer battery life would make for a better computing experience. I’m just wondering how the inevitable core-count race is going to play out now that AMD is effectively leveraging its core-count advantage. Intel will respond.
    watto_cobra