Future Mac Pro may use Apple Silicon & PCI-E GPUs in parallel

Posted in Future Apple Hardware
Although Apple Silicon currently works solely with its own on-board GPU cores, Apple is researching how to support more options, such as PCI-E GPUs working in tandem.




One thing Intel Macs had that Apple Silicon ones do not is the ability to use GPUs in external enclosures over Thunderbolt, or internally in a Mac Pro. At present, Apple Silicon simply makes no provision for either.

It might not be an issue that concerns most Mac users, but it is a big deal for some -- particularly Mac Pro buyers.

Now, however, four newly revealed patent applications suggest that Apple is at least considering the issue.

Why Apple abandoned multiple GPU support

Apple Silicon brought dramatic, practically unheard-of performance and capability improvements over the earlier Intel processors. Part of that came from how the new Apple-designed processors cut down on previous bottlenecks.

For instance, rather than using separate RAM chips as in a typical device, the new Unified Memory system places the RAM in the same package as the processor. That means it can't be upgraded later, but it also radically speeds up how quickly the CPU can access that memory.

Apple Silicon processors come with graphics cores built in for similar reasons. To support third-party GPUs as well, Apple would have to achieve several things:

  • Physically include space for GPU cards, or connectors for external GPUs

  • Determine when a task is better served by another GPU

  • Then route data to that GPU

  • Handle how it gets data back from the GPU
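
Taken together, those steps amount to a small dispatch loop: pick the best-suited GPU, send the work over, and collect the result. Here is a minimal, purely illustrative sketch in Python; all names are hypothetical and not drawn from Apple's patents.

```python
from dataclasses import dataclass

@dataclass
class Gpu:
    """Hypothetical stand-in for one GPU, integrated or discrete."""
    name: str
    compute_units: int
    busy_units: int = 0

    def free_units(self) -> int:
        return self.compute_units - self.busy_units

def pick_gpu(gpus, units_needed):
    """Choose the GPU with the most free capacity that can fit the task."""
    candidates = [g for g in gpus if g.free_units() >= units_needed]
    return max(candidates, key=lambda g: g.free_units()) if candidates else None

def run_task(gpus, units_needed, work):
    """Route a task to the chosen GPU, run it, and return the result."""
    gpu = pick_gpu(gpus, units_needed)
    if gpu is None:
        raise RuntimeError("no GPU has enough free capacity")
    gpu.busy_units += units_needed   # route the work to that GPU
    result = work(gpu)               # stand-in for actual GPU execution
    gpu.busy_units -= units_needed   # work complete; get the data back
    return result
```

In a real system the hard parts are exactly the middle steps: judging which GPU is "better" for a task and moving data across the bus, which is what the patent applications discuss.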

The first point is possibly going to be addressed in the forthcoming Mac Pro, or perhaps a later model, as that machine should be expandable.

Everything else in the list is addressed by one or more of the four newly-revealed patent applications.

The benefits of multiple GPU support

"Given their growing compute capabilities, graphics processing units (GPUs) are now being used extensively for large-scale workloads," says Apple in the patent application, "Logical Slot To Hardware Slot Mapping For Graphics Processors."

"APIs such as Metal and OpenCI give software developers an interface to access the compute power of the GPU for their applications," it continues. "In recent times, software developers have been moving substantial portions of their applications to using the GPU."

Apple uses the term "kick" to refer to a discrete unit of graphics work that a GPU may perform. It then says that there is an issue in getting these kicks to the right GPUs.

This diagram is repeated in most of the new patent applications.


"Data master circuitry (e.g., a compute data master, vertex data master, and pixel data master) may distribute work from these kicks to multiple replicated shader cores," it says, "e.g., over a communications fabric."

A graphics card may occupy what Apple calls a "kickslot," which appears to be little more than a PCI-E slot, either internal or external to the computer. There could be two or more of these, with macOS switching between them.
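
The relationship between kicks and kickslots, as described, resembles a queue of work items assigned to whichever slots are free. A loose sketch, with all names mine rather than Apple's:

```python
from collections import deque

class KickSlot:
    """One hypothetical slot; it can execute a single kick at a time."""
    def __init__(self, slot_id):
        self.slot_id = slot_id
        self.current = None  # the kick occupying this slot, if any

def schedule_kicks(kicks, slots):
    """Assign queued kicks to free slots; return assignments and leftovers."""
    pending = deque(kicks)
    assignment = {}
    for slot in slots:
        if not pending:
            break
        if slot.current is None:
            slot.current = pending.popleft()
            assignment[slot.slot_id] = slot.current
    return assignment, list(pending)
```

Kicks that do not fit stay pending until a slot frees up, which is where the scheduling machinery in the remaining patent applications comes in.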

Switching between GPUs

Switching between these graphics cards requires technology broadly similar to Nvidia's old Scalable Link Interface (SLI), which coordinated multiple cards and divided rendering work between them.

Apple's new patent applications include one called "Kickslot Manager Circuitry For Graphics Processors," which is part of achieving the same result.

"Slot manager circuitry may store, using an entry of the tracking slot circuitry, software-specified information for a set of graphics work," says Apple. "The slot manager circuitry may prefetch, from the location and prior to allocating shader core resources for the set of graphics work, configuration register data for the set of graphics work."

Detail from a patent application concerning the scheduling of data being sent to more than one GPU


So two or more GPU cards can work together, but that requires scheduling. Hence Apple's third new patent application, "Affinity-Based Graphics Scheduling."

"Distribution circuitry may receive a software-specified set of graphics work," says Apple in this application, "and a software-indicated mapping of portions of the set of graphics work to groups of graphics processor sub-units."

"This may improve cache efficiency, in some embodiments," notes Apple, "by allowing graphics work that accesses the same memory areas to be assigned to the same group of sub-units that share a cache."

Getting back data from a GPU

So Apple's patent applications cover physically supporting two or more graphics cards, then determining which is the best for a particular task. The patent applications then describe how work can be divided across the available GPUs.

That leaves getting data back from the GPU, which is covered by the more general patent application, "Software Control Techniques For Graphics Hardware That Supports Logical Slots."

This patent application includes descriptions of how control "circuitry may determine mappings between logical slots and distributed hardware slots for different sets of graphics work."

"Various mapping aspects may be software-controlled," it says. "For example, software may specify one or more of the following: priority information for a set of graphics work, to retain the mapping after completion of the work, a distribution rule, a target group of sub-units, a sub-unit mask, a scheduling policy, to reclaim hardware slots from another logical slot, etc."

Detail from the patent applications showing an overview of the process


It appears that every issue raised by the desire to use multiple graphics cards has at least been investigated by Apple.

So that leaves the obvious question about whether Apple will make a Mac that adds multiple GPU support to Apple Silicon -- and when.

When we'll see multiple graphics cards in a Mac

Apple applies for patents constantly, and there is no guarantee that even granted patents will lead directly to products. Patents may also be filed years before Apple can use them.

So despite all of the evidence, it is not guaranteed that Apple will support multiple GPUs in Macs -- and in particular, it can't be presumed that the next Mac Pro that's expected soon will.

But the intention is clearly there, and this is not a chance collection of unrelated patent applications that happen to have been filed at the same time. Three of the four, for instance, name Andrew M. Havlir as an inventor, and three name Steven Fishwick.

Read on AppleInsider

Comments

  • Reply 1 of 35
    keithw Posts: 145
    This would be very useful if it comes to pass! It would give 3D graphics users the "best of both worlds": SOC GPUs and discrete GPUs. My eGPU on my 6-year-old iMac Pro breathed new life into it. While its CPU performance isn't even up to the original M1 performance, the eGPU performance blows away anything currently available on the ASi architecture, including the M2 Max. Perhaps that advantage is short-lived if the M2 Ultra or M3 can increase the graphics performance! For the record, the GB6 Metal performance of my AMD eGPU is 194703.
    edited February 2023
  • Reply 2 of 35
    Pedant's Corner:
    Then root data to that GPU
    That should be "route".
  • Reply 3 of 35
    This follows neatly from the Mac Pro rumor discussion of this question a few weeks ago: New Mac Pro may not Support PCI-E GPUs
  • Reply 4 of 35
    Very interesting.

    Optional GPU expansion card(s) augmenting the M2 Ultra’s integrated GPU is the only way I can see an Apple Silicon Mac Pro truly matching or exceeding the capabilities of the current Intel Mac Pro’s dual Radeon Pro W6900X GPU option. At this point my guess would be such an expansion card would use an Apple GPU rather than AMD, but we’ll see. 

    Thus I think something like this will likely come with the new Mac Pro this year. Very cool to see that indeed Apple is working on this kind of thing. 
    edited February 2023
  • Reply 5 of 35
    No reason to believe this will actually happen until eGPU is actually supported on the latest versions of macOS.
  • Reply 6 of 35
    This seems like a good place to re-post my transcript of Anand Shimpi's comments on Apple Silicon for the Mac in the Apple interview with Andru Edwards. I recently posted it there, but that thread is buried. For those who don't know, Anand is the founder of AnandTech.com, but Apple hired him away from himself in 2014. It is edited for clarity, removing interjections like "I think" and "kind of" and "sort of," but I've retained his rhetorical ", right? ..." habit, as it captures the feel of the dialogue.

    It's a good overview of what Apple is trying to do. If you think about what he's saying carefully, despite what he says about the Mac joining the "cadence" of the iPhone and iPad, you can see Apple isn't going to release silicon just because: "If we’re not able to deliver something compelling, we won’t engage, right? ... We won’t build the chip." 

    [General questions]

    The silicon team doesn’t operate in a vacuum, right? ... When these products are being envisioned and designed, folks on the architecture team, the design team, they’re there, they’re aware of where we’re going, what’s important, both from a workload perspective, as well as the things that are most important to enable in all of these designs.

    I think part of what you’re seeing now is this now decade-plus long, maniacal obsession with power-efficient performance and energy efficiency. If you look at the roots of Apple Silicon, it all started with the iPhone and the iPad. There we’re fitting into a very, very constrained environment, and so we had to build these building blocks, whether it’s our CPU or GPU, media engine, neural engine, to fit in something that’s way, way smaller from a thermal standpoint and a power delivery standpoint than something like a 16-inch MacBook Pro. So the fundamental building blocks are just way more efficient than what you’re typically used to seeing in a product like this. The other thing you’re noticing is, for a lot of tasks that maybe used to be high-powered use cases, on Apple Silicon they actually don’t consume that much power. If you look, compared to what you might find in a competing PC product, depending on the workload, we might be a factor of two or a factor of four times lower power. That allows us to deliver a lot of workloads, that might have been high-power use cases in a different product, in something that actually is a very quiet and cool and long-lasting. The other thing that you’re noticing is that the single-thread performance, the snappiness of your machine, it’s really the same high-performance core regardless of if you’re talking about a MacBook Air or a 14-inch Pro or 16-inch Pro or the new Mac mini, and so all of these machines can accommodate one of those cores running full tilt, again we’ve turned a lot of those usages and usage cases into low-power workloads. You can’t get around physics, though, right? ... So if you light up all the cores, all the GPUs, the 14-inch system just has less thermal capacity than the 16, right? ... So depending on your workload, that might drive you to a bigger machine, but really the chips are across the board incredibly efficient.

    [Battery life question]

    You can look at how chip design works at Apple. You have to remember we’re not a merchant silicon vendor, at the end of the day we ship product. So the story for the chip team actually starts at the product, right? ... There is a vision that the design team, that the system team has that they want to enable, and the job of the chip is to enable those features and enable that product to deliver the best performance within the constraints, within the thermal envelope of that chassis, that is humanly possible. So if you look at what we did going from the M1 family to the M2 Pro and M2 Max, at any given power point, we’re able to deliver more performance. On the CPU we added two more efficiency cores, two more of our e-cores. That allowed us, or was part of what allowed us, to deliver more multi-thread performance, again, at every single power point where the M1 and M2 curves overlap we were able to deliver more performance at any given power point. The dynamic range of operations [is] a little bit longer, a little bit wider, so we do have a slight increase in terms of peak power, but in terms of efficiency, across the range, it is a step forward versus the M1 family, and that directly translates into battery life. The same thing is true for the GPU, it’s counterintuitive, but a big GPU running a modest frequency and voltage, is actually a very efficient way to fill up a box. So that’s been our philosophy dating back to iPhone and iPad, and it continues on the Mac as well.

    But really the thing that we see, that the iPhone and the iPad have enjoyed over the years, is this idea that every generation gets the latest of our IPs, the latest CPU IP, the latest GPU, media engine, neural engine, and so on and so forth, and so now the Mac gets to be on that cadence too. If you look at how we’ve evolved things on the phone and iPad, those IPs tend to get more efficient over time. There is this relationship: if the fundamental chassis doesn’t change, any additional performance you deliver has to be done more efficiently, and so this is the first time the MacBook Pro gets to enjoy that and be on that same sort of cycle.

    On the silicon side, the team doesn’t pull any punches, right? ... The goal across all the IPs is, one, make sure you can enable the vision of the product, that there’s a new feature, a new capability that we have to bring to the table in order for the product to have everything that we envisioned; that’s clearly something that you can’t pull back on. And then secondly, it’s do the best you can, right? ... Get as much done in terms of performance and capability as you can in every single generation. The other thing is, Apple’s not a chip company. At the end of the day, we’re a product company. So we want to deliver, whether it’s features, performance, efficiency. If we’re not able to deliver something compelling, we won’t engage, right? ... We won’t build the chip. So each generation we’re motivated as much as possible to deliver the best that we can.

    [Neural engine question]

    … There are really two things you need to think about, right? ... The first is the tradeoff between a general purpose compute engine and something a little more specialized. So, look at our CPU and GPU, these are big general purpose compute engines. They each have their strengths in terms of the types of applications you’d want to send to the CPU versus the GPU, whereas the neural engine is more focused in terms of the types of operations that it is optimized for. But if you have a workload that’s supported by the neural engine, then you get the most efficient, highest density place on the chip to execute that workload. So that’s the first part of it. The second part of it is, well, what kind of workload are we talking about? Our investment in the neural engine dates back years ago, right? The first time we had a neural engine on an Apple Silicon chip was A11 Bionic, right? ... So that was five-ish years ago on the iPhone. Really, it was the result of us realizing that there were these emergent machine learning models that we wanted to start executing on device, and we brought this technology to the iPhone, and over the years we’ve been increasing its capabilities and its performance. Then, when we made the transition of the Mac to Apple Silicon, it got that IP just like it got the other IPs that we brought, things like the media engine, our CPU, GPU, Secure Enclave, and so on and so forth. So when you’re going to execute these machine learning models, performing these inference-driven models, if the operations that you’re executing are supported by the neural engine, if they fit nicely on that engine, it’s the most efficient way to execute them. The reality is, the entire chip is optimized for machine learning, right? ... So a lot of models you will see executed on the CPU, the GPU, and the neural engine, and we have frameworks in place that kind of make that possible.
    The goal is always to execute it in the highest performance, most efficient place possible on the chip.

    [Nanometer process question]

    ... You’re referring to the transistor. These are the building blocks all of our chips are built out of. The simplest way to think of them is like a little switch, and we integrate tons of these things into our designs. So if you’re looking at M2 Pro and M2 Max, you’re talking about tens of billions of these, and if you think about large collections of them, that’s how we build the CPU, the GPU, the neural engine, all the media blocks, every part of the chip is built out of these transistors. Moving to a new transistor technology is one of the ways in which we deliver more features, more performance, more efficiency, better battery life. So you can imagine, if the transistors get smaller, you can cram more of them into a given area, that’s how you might add things like additional cores, which is the thing you get in M2 Pro and M2 Max—you get more CPU cores, more GPU cores, and so on and so forth. If the transistors themselves use less power, or they’re faster, that’s another method by which you might deliver, for instance, better battery life, better efficiency. Now, I mentioned this is one tool in the toolbox. What you choose to build with them, the underlying architecture, microarchitecture and design of the chip also contribute in terms of delivering that performance, those features, and that power efficiency.

    If you look at the M2 Pro and M2 Max family, you’re looking at a second-generation 5 nanometer process. As we talked about earlier, the chip got more efficient. At every single operating point, the chip was able to deliver more performance at the same amount of power.

    [Media engine question]

    ... Going back to the point about transistors, taking that IP and integrating it on the latest kind of highly-integrated SOC and the latest transistor technology, that lets you run it at a very high speed and you get to extract a lot of performance out of it. The other thing is, and this is one of the things that is fairly unique about Apple Silicon, we built these highly-integrated SOCs, right? ... So if you think about the traditional system architecture, in a desktop or a notebook, you have a CPU from one vendor, a GPU from another vendor, each with their own sort of DRAM, you might have accelerators kind of built into each one of those chips, you might have add-in cards as additional accelerators. But with Apple Silicon in the Mac, it’s all a single chip, all backed by a unified memory system, you get a tremendous amount of memory bandwidth as well as DRAM capacity, which is unusual, right? ... In a machine like this a CPU is used to having large capacity, low bandwidth DRAM, and a GPU might have very low capacity, high bandwidth DRAM, but now the CPU gets access to GPU-like memory bandwidth, while the GPU gets access to CPU-like capacity, and that really enables things that you couldn’t have done before. Really, if you are trying to build a notebook, these are the types of chips that you want to build it out of. And the media engine comes along for the ride, right? ... The technology that we’ve refined over the years, building for iPhone and iPad, these are machines where the camera is a key part of that experience, and being able to bring some of that technology to the Mac was honestly pretty exciting. And it really enabled just a revolution in terms of the video editing and video workflows.

    The addition of ProRes as a hardware accelerated encode and decode engine as a part of the media engine, that’s one of the things you can almost trace back directly to working with the Pro Workflows team, right? ... This is a codec that it makes sense to accelerate to integrate into hardware for our customers that we're expecting to buy these machines. It was something that the team was able to integrate, and for those workflows, there’s nothing like it in the industry, on the market. 
    edited February 2023
  • Reply 7 of 35
    Yeah ... Intel and AMD CPUs have supported both integrated graphics (AMD's RDNA 3 integrated graphics on their Ryzen 7040 laptop chips are comparable to an Nvidia RTX 2050) and discrete graphics through PCIE or Thunderbolt for who knows how long (and now M.2 slots). Intel drivers will even have an Intel Arc discrete GPU and an Intel Iris Xe integrated GPU be seen by the CPU as a single GPU. (AMD considered this idea but abandoned it.) And no, it isn't an x86 thing. Lots of Linux ARM servers use discrete GPUs. MediaTek and Nvidia wanted to bring discrete GPU support to ARM PCs around 2021, but abandoned it because neither Microsoft (who has an exclusive Windows on ARM deal with Qualcomm that is explicitly designed to lock out MediaTek for Qualcomm's sake and ChromeOS/Linux for Microsoft's sake) nor Google (who just isn't very smart when it comes to stuff like this) was interested.

    So, there never has been any reason for Apple Silicon Macs not supporting discrete graphics via M.2, PCIE or Thunderbolt other than Apple simply not wanting to. Which was the same reason why Apple locked Nvidia out of the Mac ecosystem and had people stuck with AMD GPU options only: purely because they wanted to. My guess is that Apple believed that they were capable of creating integrated GPUs that were comparable with Nvidia Ampere and AMD Radeon Pro, especially in the workloads that most Mac Pro buyers use them for. Maybe they are, but the issue may be that it isn't cost-effective to do so for a Mac Pro line that will sell less than a million units a year.
    edited February 2023
  • Reply 8 of 35
    keithw Posts: 145
    thadec said:


    So, there never has been any reason for Apple Silicon Macs not supporting discrete graphics via M.2, PCIE or Thunderbolt other than Apple simply not wanting to. Which was the same reason why Apple locked Nvidia out of the Mac ecosystem and had people stuck with AMD GPU options only: purely because they wanted to. My guess is that Apple believed that they were capable of creating integrated GPUs that were comparable with Nvidia Ampere and AMD Radeon Pro, especially in the workloads that most Mac Pro buyers use them for. Maybe they are, but the issue may be that it isn't cost-effective to do so for a Mac Pro line that will sell less than a million units a year.
    Absolutely 100% accurate! They simply "don't want to."  If my old iMac Pro can get top-notch graphics performance across a TB3 interface, there is simply no reason they couldn't do the same thing with ASi, whether through TB3 or a PCIe bus.

  • Reply 9 of 35
    danox Posts: 3,097
    thadec said:
    Yeah ... Intel and AMD CPUs have supported both integrated graphics (AMD's RDNA 3 integrated graphics on their Ryzen 7040 laptop chips are comparable to an Nvidia RTX 2050) and discrete graphics through PCIE or Thunderbolt for who knows how long (and now M.2 slots). Intel drivers will even have an Intel Arc discrete GPU and an Intel Iris Xe integrated GPU be seen by the CPU as a single GPU. (AMD considered this idea but abandoned it.) And no, it isn't an x86 thing. Lots of Linux ARM servers use discrete GPUs. MediaTek and Nvidia wanted to bring discrete GPU support to ARM PCs around 2021, but abandoned it because neither Microsoft (who has an exclusive Windows on ARM deal with Qualcomm that is explicitly designed to lock out MediaTek for Qualcomm's sake and ChromeOS/Linux for Microsoft's sake) nor Google (who just isn't very smart when it comes to stuff like this) was interested.

    So, there never has been any reason for Apple Silicon Macs not supporting discrete graphics via M.2, PCIE or Thunderbolt other than Apple simply not wanting to. Which was the same reason why Apple locked Nvidia out of the Mac ecosystem and had people stuck with AMD GPU options only: purely because they wanted to. My guess is that Apple believed that they were capable of creating integrated GPUs that were comparable with Nvidia Ampere and AMD Radeon Pro, especially in the workloads that most Mac Pro buyers use them for. Maybe they are, but the issue may be that it isn't cost-effective to do so for a Mac Pro line that will sell less than a million units a year.

    Long-term, there isn’t any reason for Apple to support third-party GPUs, whether from Nvidia, AMD, or any other company. Apple has been there and done that, and it would just hold Apple back down the road. If Apple supports third-party cards in the Mac Pro, it will only be a short-term solution; long-term it does not work for what Apple may want to build in the future. That’s what we’re not seeing. We don’t know exactly what Apple’s long-term roadmap is, but it definitely isn’t based on being dependent on third-party companies holding them back.

    Apple’s lack of AAA games, or games that can take advantage of Apple Silicon to its fullest, is also a short-term problem. It took Apple 13 years to get where they are today with their architecture, so what if it takes another 3 to 5 years to get there graphically? Most people don’t use, or have the need for, 300 W GPUs; that market is also a very small one.

    What will probably happen is that Apple will need to roll up its sleeves again and do something in the gaming area. That doesn’t mean buying another company; it means actually getting involved in at least one or two games to show the true potential of Apple Silicon.

    I do think that Apple should release their entire range (all form factors) each time they upgrade to a new M-series SOC generation; Apple should not hold back. Apple has never been about absolute performance. It’s always been about how everything works together as one in an efficient manner. The PC world, however, works on the idea that the highest wattage and highest megahertz solve every problem; like a Dodge Demon, Mustang, or Corvette, the biggest engine wins. That has never been Apple in the computing world. The fit, finish, efficiency, and overall design of the OS have been more important to Apple.
    edited February 2023
  • Reply 10 of 35
    Apple can do this.  Why? Because they aren’t limited to industry standards that all OEMs, big and small, use. 

    They don’t just take a standard from someone else, accept its limitations, and deal with it. They improve on those things where possible and even invent their own standards. 

    If Apple wants to use Apple Silicon as the base structure and add helper cards, they can definitely do that and it will work. It won’t scale the same as having it all on one SOC, but it will work and it will outperform everyone else.

    A better option would be to introduce desktop-specific SOCs without mobile limitations or power efficiency concerns. The Mac Pro is THE perfect candidate for such a beast. Take the cheese grater case and the massive fan setup and give it something to match. Give it all performance cores, and lots of them, crank up the GHz, pop the new ray tracing GPU cores in there, also lots of them, with jacked-up clock speeds, and let it rip.

    Aside from that, Apple could add an entirely new spin on the modular approach and, instead of adding generic PCI-E slots, throw that old stuff away and build out a light-speed socketed “fabric” that allows multiple Apple Silicon SOCs to be plugged in as the user requires. Need more RAM? It happens to come with more CPU and GPU power also, and vice versa.

    Apple has not usually just done what everyone else thinks is possible. iTunes, the iPod, iPhone, Mac Pro, Apple Silicon, Force Touch trackpads, etc. are all examples of Apple doing things way differently and better than everyone else, paving the way for a better computing landscape. 

    All it takes is for Apple to want to do this.

    Sure Apple COULD do the PCI thing. But that’s the lazy and inefficient way. 

    I could see either a desktop specific SOC and/or a system fabric that connect multiple Apple Silicon SOCs together as needed. I can also see each SOC module coming with its own cooling system. Apple could also have modular power supplies to handle energy needs per configuration and sell those after purchase as well. For example, if you buy an M3 Extreme Mac Pro, it comes with a suitable power supply. But if you add 3 more M3 Extremes later, you could buy the necessary power supply from Apple, replace the old power supply module, and presto, supercomputer. 

    Apple has an opportunity to reinvent the desktop computer here, redefine modularity in meaningful ways, offer a good deal, while also opening up profit margin growth after the initial sale, and leave everyone else in the dust. This would also add continual shine to Apple Silicon as the ultimate platform. 

    This is the Mac Pro: the absolute pinnacle of Apple performance and the best computer Apple could possibly make. It’s their time to shine. Hopefully that’s what they do. Just don’t rush it. Even if it takes another year, get it right and set the tone for the next decade or two.


  • Reply 11 of 35

    danox said:
    Long-term, there isn’t any reason for Apple to support third-party GPUs, whether from Nvidia, AMD, or any other company. Apple has been there and done that, and it would just hold Apple back down the road. If Apple supports third-party cards in the Mac Pro, it will only be a short-term solution; long-term it does not work for what Apple may want to build in the future. That’s what we’re not seeing. We don’t know exactly what Apple’s long-term roadmap is, but it definitely isn’t based on being dependent on third-party companies holding them back.

    Apple’s lack of AAA games, or games that can take advantage of Apple Silicon to its fullest, is also a short-term problem. It took Apple 13 years to get where they are today with their architecture, so what if it takes another 3 to 5 years to get there graphically? Most people don’t use, or have the need for, 300 W GPUs; that market is also a very small one.

    What will probably happen is that Apple will need to roll up its sleeves again and do something in the gaming area. That doesn’t mean buying another company; it means actually getting involved in at least one or two games to show the true potential of Apple Silicon.

    I do think that Apple should release their entire range (all form factors) each time they upgrade to a new M-series SOC generation; Apple should not hold back. Apple has never been about absolute performance. It’s always been about how everything works together as one in an efficient manner. The PC world, however, works on the idea that the highest wattage and highest megahertz solve every problem; like a Dodge Demon, Mustang, or Corvette, the biggest engine wins. That has never been Apple in the computing world. The fit, finish, efficiency, and overall design of the OS have been more important to Apple.
    If these patent applications reflect actual development projects, Apple and/or 3rd party add-on GPUs are certainly on the agenda.  While it's true Wintel will probably always outperform Mac in 3D graphics applications, with this capability they can be more flexible with their offerings.

  • Reply 12 of 35
    cgWerks Posts: 2,952 member
    thadec said:
    … So, there never has been any reason for Apple Silicon Macs not supporting discrete graphics via M.2, PCIE or Thunderbolt other than Apple simply not wanting to. Which was the same reason why Apple locked Nvidia out of the Mac ecosystem and had people stuck with AMD GPU options only: purely because they wanted to. My guess is that Apple believed that they were capable of creating integrated GPUs that were comparable with Nvidia Ampere and AMD Radeon Pro, especially in the workloads that most Mac Pro buyers use them for. Maybe they are, but the issue may be that it isn't cost-effective to do so for a Mac Pro line that will sell less than a million units a year.
    Exactly! And, if this is to be, this pretty much solves Apple’s gaping GPU hole. People who need a fairly good level of GPU performance might be fine ‘out of the box’ and people who need more, could add it. I just hope the eGPU aspect is the case, so it isn’t limited to the Mac Pro.

    keithw said:
    Absolutely 100% accurate! They simply "don't want to."  If my old iMac Pro can get top-notch graphics performance across a TB3 interface, there is simply no reason they couldn't do the same thing with ASi, whether through TB3 or a PCIe bus.

    Yes, for some things the bus matters, but for the most part, the bus isn’t saturated by normal app-to-GPU communication. TB3/4 has been mostly fine, and TB5 is on the way, right?
  • Reply 13 of 35
    The SoC is good enough for most users, even high-end users, but Apple will definitely create some kind of GPU expansion. 3D artists, post-production, and scientific workloads need this expansion, and although the details are unclear, there is no way Apple is going to leave these users out. This is much more critical than RAM expansion. They have users in these fields in-house precisely to give feedback to engineering.
    edited February 2023
  • Reply 14 of 35
    cgWerks said:
    thadec said:
    … So, there never has been any reason for Apple Silicon Macs not supporting discrete graphics via M.2, PCIE or Thunderbolt other than Apple simply not wanting to. Which was the same reason why Apple locked Nvidia out of the Mac ecosystem and had people stuck with AMD GPU options only: purely because they wanted to. My guess is that Apple believed that they were capable of creating integrated GPUs that were comparable with Nvidia Ampere and AMD Radeon Pro, especially in the workloads that most Mac Pro buyers use them for. Maybe they are, but the issue may be that it isn't cost-effective to do so for a Mac Pro line that will sell less than a million units a year.
    Exactly! And, if this is to be, this pretty much solves Apple’s gaping GPU hole. People who need a fairly good level of GPU performance might be fine ‘out of the box’ and people who need more, could add it. I just hope the eGPU aspect is the case, so it isn’t limited to the Mac Pro.

    keithw said:
    Absolutely 100% accurate! They simply "don't want to."  If my old iMac Pro can get top-notch graphics performance across a TB3 interface, there is simply no reason they couldn't do the same thing with ASi, whether through TB3 or a PCIe bus.

    Yes, for some things the bus matters, but for the most part, the bus isn’t saturated by normal app-to-GPU communication. TB3/4 has been mostly fine, and TB5 is on the way, right?
    USB4v2 is on its way. That will be 2-3 times as fast as TB3/USB4, depending on whether it is running in asymmetric mode. It all depends on the workload, but the bus isn’t the bottleneck for key professional workloads. The main thing is that real-time rendering works a lot differently on the SoC (assuming non-Apple IP), so Metal apps like games probably won’t be optimized for those GPUs; this mixed use would be better for compute (raytracing, post-production rendering, scientific work). It might work to some extent in unoptimized apps, but could be a mixed bag. At a minimum, I think upscaling tech probably won’t work at all, so the GPUs will need to work a lot harder. Hopefully any GPU support will also extend to Thunderbolt, but Apple doesn’t necessarily need to allow that.
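    As a quick sanity check on those multiples, here is a tiny Python sketch using the published USB4 Version 2.0 signaling rates (80 Gbps symmetric, 120 Gbps in asymmetric mode) against the 40 Gbps of TB3/USB4; the constants are spec figures, not measured throughput:

```python
# Raw signaling rates in Gbps, per the published specs (not real-world throughput).
TB3_GBPS = 40        # Thunderbolt 3 / USB4
USB4V2_SYM = 80      # USB4 Version 2.0, symmetric mode
USB4V2_ASYM = 120    # USB4 Version 2.0, asymmetric (one-direction-favored) mode

# The "2-3x" claim falls out directly from the ratios.
print(USB4V2_SYM / TB3_GBPS)   # → 2.0
print(USB4V2_ASYM / TB3_GBPS)  # → 3.0
```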
    edited February 2023
  • Reply 15 of 35
    Pedant's Corner:
    Then root data to that GPU
    That should be "route".
    Maybe. Building high-performance computing systems involves thinking about data locality, because all "scale-out" systems now have non-uniform memory performance. That is, when you have multiple processors which each have their own memory controllers and their own memory pools, each processor's access to its own memory pool will be faster than accessing the memory pool of another processor. When discussing NUMA, there actually is a concept of rooting the data to the node where it will be processed, as in that node has the primary instance of that data.

    That said, Apple went to ridiculous lengths with the M1 Ultra to avoid NUMA concerns. I'm not sure I see them bringing these concerns back just for the Mac Pro.
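    The "rooting" idea above can be sketched as a toy locality-aware scheduler: each buffer has a home node that holds its primary copy, and tasks are placed on that node because remote memory access costs more. This is a simplified illustration, not Apple's or any OS's actual NUMA logic, and the cost numbers are made up:

```python
# Toy NUMA model: two nodes, each "rooting" (owning the primary copy of) some buffers.
# Touching a buffer from its home node is cheap; touching it remotely costs more.
LOCAL_COST = 1    # arbitrary illustrative units
REMOTE_COST = 3

# Map each buffer to the node that roots it.
buffer_home = {"frame_a": 0, "frame_b": 1, "frame_c": 0}

def access_cost(node, buffer):
    """Cost for `node` to touch `buffer`, depending on data locality."""
    return LOCAL_COST if buffer_home[buffer] == node else REMOTE_COST

def schedule(buffer):
    """Locality-aware placement: run the task on the node where its data lives."""
    return buffer_home[buffer]

# Placing every task on its data's home node always yields the local cost.
for buf in buffer_home:
    assert access_cost(schedule(buf), buf) == LOCAL_COST
```

The M1 Ultra's uniform-memory design effectively makes `access_cost` constant, which is why this bookkeeping disappears on current Apple Silicon.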
  • Reply 16 of 35
    JinTech Posts: 1,042 member
    bsbeamer said:
    No reason to believe this will actually happen until eGPU is actually supported on the latest versions of macOS.
    Wasn’t there a rumor recently that the Mac Pro was being tested on a version of macOS that is not available to the public, either in beta form or otherwise? Perhaps the version it was being tested on has support for eGPUs.
  • Reply 17 of 35
    mattinoz Posts: 2,388 member
    Given the way the Mac system software works, yes: if they support third-party GPUs, then they will be able to use all the GPU cores in the machine in parallel.
    This has been the case since the MacBook Pros running Snow Leopard and beyond.


  • Reply 18 of 35
    lkrupp Posts: 10,557 member
    Well, all I can say is the proof will be in the pudding. I would venture to think that true professionals don’t really care about slots and GPUs as long as the PERFORMANCE is there. Nvidia this, Radeon that, Apple Silicon, a Quarter Pounder with Cheese, anything that will get that rendering done in record time is what a professional seeks.
  • Reply 19 of 35
    I would love to be totally wrong on this, but after thoroughly reading each patent application, it looks like these are the patents for how the Ultra chip (two Max chips linked together through an interconnect fabric) processes graphics tasks in tandem, and for the fabric link Apple uses to tie both Max chips together, and they have nothing to do with future multiple-dGPU support by Apple.

    Further evidence of this is that the patent filing date is August 2021, and the Ultra chip debuted in March 2022.

    This doesn’t mean that Apple isn’t working on dGPUs, though, and we may very well still see dGPU support come to the new AS Mac Pros. I just think these patent applications cover the Apple Silicon Ultra’s pair of GPUs working with each other in tandem.
  • Reply 20 of 35
    Neither the original post nor any of the subsequent posts mention what is almost certainly the biggest stumbling block to supporting non-Apple GPU hardware: drivers.

    Drivers have always been the biggest issue with Apple GPU support, and it has always been a hot potato tossed back and forth between Apple's OS group and the third-party hardware vendor (including Intel for its integrated GPUs). GPU drivers are terribly complex things, and Apple can't and doesn't use the drivers written by AMD/Intel/Nvidia, and those vendors aren't likely to put much effort into writing drivers for macOS even if Apple were to start shipping their GPUs in Apple products. They never did before; the market is too small. So will Apple write drivers for any third-party devices? Its current direction suggests the answer is a resounding "no," but that's not definitive and could change. Apple still has drivers that work on the Intel-based Macs, and porting them to AArch64 may not be terribly difficult. Keeping up with the moving target of the latest AMD GPUs, though, is a lot of work, on top of supporting Apple's own GPU designs.

    The Apple Silicon hardware is almost certainly compatible with most GPUs from other vendors, thanks to PCIe/Thunderbolt being standardized in its various flavors. So you can physically install any of these devices, but you need drivers to make them interoperate with macOS, and macOS needs to continue to expose the functionality required to do that (which conceivably it may not on Apple Silicon, since the macOS team may be taking advantage of detailed knowledge of the hardware).
