New Mac Pro may not support PCI-E GPUs


Comments

  • Reply 21 of 55
    MacPro Posts: 19,575 member
    So much for modular design and now back to square one. This is going to be the Mac trash can, the 2013 Mac Pro, all over again. Oh brother.
    The Mac Studio is the spiritual successor to the Mac trash can
    Lot of similarities.  I have both side by side.  The main difference is one is cold the other gets red hot just ticking over.  Now if the trash can had modern ports and a few M2 Ultras inside...
  • Reply 22 of 55
    nubus said:
    Apple Silicon is designed as fixed bundles of CPU, GPU, ML, and RAM. It doesn't play well with the modularity that has been with us since the Mac II. And why buy fixed performance when you can scale using the cloud? Workstations made sense 30 years ago with SGI, NeXT, and Sun Microsystems. Apple should make a partnership with AWS (or make a Mac cloud), improve the dull non-design of the Studio, and kill the Pro.
    Cool concept for cloud computing. The downside: if the internet connection is down, the computer is down. A centralized workstation makes sense if everyone is back at the office working. Post-pandemic reality changes all that.
  • Reply 23 of 55
    tht Posts: 4,787 member
    According to Bloomberg's Mark Gurman, the Mac Pro "may" lack upgradeable GPUs, and this adds to its expected lack of modularity and expansion options.
    Gurman is only stating the default point of view for the Apple Silicon Macs, which is that the only GPUs supported are Apple Silicon GPUs, and that the memory model is unified memory. No discrete memory pools to shuttle data around.

    So, saying it may lack dGPUs is only stating what Apple said at WWDC 2020, almost 3 years ago. Apple's job with the Mac Pro is still about the same: provide more CPU, GPU, and media performance than the current Intel Mac Pro. They have CPUs and media accelerators that are already as fast, or that have a path to being faster.

    The big issue: how do they get a class-competitive GPU? It will be interesting to see what they do. The current GPU performance in the M2 Max is intriguing, and if they can get 170k GB5 Metal scores with an M2 Ultra, that's probably enough. But it's probably going to be something like 130k GB5 Metal. Perhaps they will crank the clocks up to ensure it is more performant than the Radeon Pro W6900X.

    After that, I'll be curious about how many slots there are and what is going to fill them.
  • Reply 24 of 55
    I've moved to PC for my workstation projects.  Cheaper, faster, and all of that stuff.  

    It sucks watching Apple continue to cripple itself.

    I'd love to switch back, just not seeing it at the moment.  
  • Reply 25 of 55
    entropys Posts: 3,926 member
    I am moving across to a Windows workstation. Sad, but it had to happen. The spiral starts when users stop buying Macs because, among other things, there is no ability to change configs at low expense. At the point of purchase, anything beyond the base model rapidly gets ruinously expensive, so you need to be sure that is what you need, very sure, or you will find yourself having to buy an entirely new computer just to get more RAM; or you over-buy, and that may not be money you get back.
    Then software companies stop making Mac versions because it’s too hard to do something entirely separate from Wintel, and it costs too much for a diminishing number of people, so people like me have no choice but to move to Windows.  It was the exact same situation in the nineties, until Jobs came back and revitalised the joint with common parts.  This is less of a problem for education and home use these days, but it is rapidly growing worse for specialist software.

    I won’t do a WWSD, if for no other reason than the fact he is just as guilty with the Apple non-upgradeable machine philosophy.
    But I will repeat what he said back then. He said something like “we want to make computers for our friends, and not all our friends are Larry Ellison”. When I look at upgrade costs for any Mac, I think about that. A lot.

    I think in that same presentation he said something about the very great philosophical difference between “making great computers to make money” and “making money to make great computers”.
  • Reply 26 of 55
    Apple Silicon has a PCIe 4.0 controller. It's used for WLAN, Bluetooth, and of course Thunderbolt, which is PCIe over a serial bus. It's impossible to have Thunderbolt without a PCIe controller. The PCIe 4.0 controller showed up on Apple's list of M1 features in the presentation. The lack of eGPU support is more a driver and OS support question than anything to do with hardware.
  • Reply 27 of 55
    MacPro said:
    So much for modular design and now back to square one. This is going to be the Mac trash can, the 2013 Mac Pro, all over again. Oh brother.
    The Mac Studio is the spiritual successor to the Mac trash can
    Lot of similarities.  I have both side by side.  The main difference is one is cold the other gets red hot just ticking over.  Now if the trash can had modern ports and a few M2 Ultras inside...
    I can’t confirm your comparison, but I support your conclusion, adding Thunderbolt 5 [post-modern ports :-)] and made in the USA.
  • Reply 28 of 55
    mattinoz Posts: 2,068 member
    Apple Silicon has a PCIe 4.0 controller. It's used for WLAN, Bluetooth, and of course Thunderbolt, which is PCIe over a serial bus. It's impossible to have Thunderbolt without a PCIe controller. The PCIe 4.0 controller showed up on Apple's list of M1 features in the presentation. The lack of eGPU support is more a driver and OS support question than anything to do with hardware.
    Also, ARM has released reference blocks for PCIe 5 and CXL 2 on top of that. Apple’s architecture license gives them access to those designs if they don’t already have their own.
  • Reply 29 of 55
    In with the new, out with the old. Computers are getting smaller and a heck of a lot faster. Apple is more focused on emerging technologies than stupid desktops. Think massive processing power with low power consumption for lightweight glasses.
  • Reply 30 of 55
    mjtomlin Posts: 2,619 member
    1. If these new Mac Pros do not have support for all the PCIe cards that the current Intel Mac Pro supports, this system will fail. Most users interested in this system will be those looking to upgrade from their current Mac Pro and will want to bring their extremely expensive MPX modules (GPU cards) with them. The advantage of PCIe slots isn't just expandability, it's also portability - moving those cards to another system.

    2. RAM is not on the SoC, it's on-package and can easily be moved to the motherboard. There's no reason Apple cannot do this - yes, there will be a performance hit.
  • Reply 31 of 55
    cgWerks Posts: 2,928 member
    The next Mac Pro had better be something the Mac mini and Mac Studio can't: stack multiple layers of M2 Ultra chips on top of each other with blade fans/thermals sitting in between each chip. Max these out to the height of the current Mac Pro chassis. It maintains the modular concept in terms of adding extra M-series chip sets instead.
    I suppose, but it could do that in a chassis barely bigger than the Mac Studio. There is no need for a huge chassis like the previous model if there is little to no expansion/configuration. I’m fine with that, *if* they can actually compete. But, unless they’ve got some super-secret plan we can’t even imagine right now, it doesn’t look promising. IMO, w/o AMD support in some manner, Apple is going to remain a distant 2nd to the PC for certain pro segments, who’ll have to beg developers for support (in other words, the last how many decades of Apple).

    I think the ASi Mac Pro will be tested by actual professionals just like the current Mac Pro. The design and testing with real professionals has been the hold-up, with Covid affecting the process. As such I think some professionals will still want it. People forget the current Mac Pro was tested by professionals and more importantly, professionals require a wide range of computers. Yet the current Mac Pro was derided as too expensive for some professionals. Now we have a wide range of Mac computing power from the Mini and iMacs, to the Studio and the Mac Pro.
    I suppose, but it isn’t exactly a secret that pros in certain segments need some serious GPU power, which is quite lacking in what we know of the current road-map.

    9secondkox2 said:
    … 2) a new motherboard framework that acts as a “fabric” which ties together multiple SOCs, with slots that you can add more SOCs after purchase. Each SOC would contain CPU, GPU, and RAM and that would be how you expand. Apple would of course charge crazy money, laugh all the way to the bank, and yet it would still enable customers to get what they want. 
    True, though they are lacking a solution for ‘the rest of us’ that is even close to being competitive. I suppose after a couple more iterations of ASi, maybe they’ll get back in the ballpark? We’re kind of getting mixed messages right now, as they seem to be working with software vendors in areas they are way behind in, or would have to drop, if there weren’t some plan. That gives me a bit of hope. I just wish we knew a bit more about it.

    People are missing another obvious possibility: what Apple calls Ultra Fusion or a silicon interposer that allows two M1 Max chip dies to be connected together to make the M1 Ultra.

    Gurman simply took the 38 cores of the M2 Max and doubled it as he thinks the Mac Pro will be using an M2 Ultra.

    Apple has a very high bandwidth interconnect to pair two M1 Max chips. But who says that this interface can only pair two identical chips? Apple could make a companion chip that was simply made up entirely of GPU cores. So you could have an M2 Ultra with 12 CPU cores, the NPU and encoders/decoders from the M2 Max and have an M2 GPU with a whopping 128 (or more) cores.

    Or if Apple went with a 4 die chip (the rumored Extreme version) then 2 x M2 Max along with 2 x M2 GPU and you’d have 24 CPU cores and upwards of 300 GPU cores.

    I don’t know why people assume Apple can only connect two identical chips when they make an Ultra version. 
    I like that idea, but I guess my question is whether it would matter? It would cost quite a bit of money, and I still think it would not really be competitive in certain GPU aspects, even if it exceeded the raw benchmark performance of a good AMD/Nvidia card.
  • Reply 32 of 55
    If this is true what is the point of a bigger Mac Studio Pro? I don’t see a justification.
  • Reply 33 of 55
    People are missing another obvious possibility: what Apple calls Ultra Fusion or a silicon interposer that allows two M1 Max chip dies to be connected together to make the M1 Ultra.

    Gurman simply took the 38 cores of the M2 Max and doubled it as he thinks the Mac Pro will be using an M2 Ultra.

    Apple has a very high bandwidth interconnect to pair two M1 Max chips. But who says that this interface can only pair two identical chips? Apple could make a companion chip that was simply made up entirely of GPU cores. So you could have an M2 Ultra with 12 CPU cores, the NPU and encoders/decoders from the M2 Max and have an M2 GPU with a whopping 128 (or more) cores.

    Or if Apple went with a 4 die chip (the rumored Extreme version) then 2 x M2 Max along with 2 x M2 GPU and you’d have 24 CPU cores and upwards of 300 GPU cores.

    I don’t know why people assume Apple can only connect two identical chips when they make an Ultra version. 
    Makes sense to press upwards on graphics and memory instead of CPU power, if 3rd party cards will not be compatible.
  • Reply 34 of 55
    cgWerks Posts: 2,928 member
    MacPro said:
    Lot of similarities.  I have both side by side.  The main difference is one is cold the other gets red hot just ticking over.  Now if the trash can had modern ports and a few M2 Ultras inside...
    I don’t think the ‘trash can’ Mac Pro was actually a bad idea, if it had been executed properly and kept up to date, especially given the Apple Silicon direction. If they had kept eGPUs current, those could even have been moved external, with just a base-level one inside (kind of like the base MP with the RX 580), then plug in what you need. It could even have been designed to stack on top (ex: Blackmagic eGPU on top of a ‘core’ cylinder CPU/GPU).

    Some disciplines *like* putting everything in a big box, but I don’t see a lot of real need in most cases had Thunderbolt been pushed to keep up with bus needs. Storage is easy. Most other components are lower bandwidth. GPUs are the primary catch.

    ravnorodom said:
    … Cool concept for cloud computing. The downside: if the internet connection is down, the computer is down. A centralized workstation makes sense if everyone is back at the office working. Post-pandemic reality changes all that.
    Yeah, the difference is that it is actually workable now… well, in best cases anyway. But, IMO, the idea it is all moving to the cloud is just silly and historically ignorant. We’ve been here a few times in the past. We do have options for a blend now, but I still think people are going to want the power local for many reasons.

    tht said:
    … The big issue: how do they get a class-competitive GPU? It will be interesting to see what they do. The current GPU performance in the M2 Max is intriguing, and if they can get 170k GB5 Metal scores with an M2 Ultra, that's probably enough. But it's probably going to be something like 130k GB5 Metal. Perhaps they will crank the clocks up to ensure it is more performant than the Radeon Pro W6900X …
    The devil is in the details. I’ve seen M1 Pro/Max do some fairly incredible things in certain 3D apps that match high-end AMD/Nvidia, while at the same time, there are things the top M1 Ultra fails at so miserably, it isn’t usable, and is bested by a low-mid-end AMD/Nvidia PC.

    I suppose if they keep scaling everything up, they’ll kind of get there for the most part. But, remember the previous Mac Pro could have 4x or more of those fast GPUs. Most people don’t need that, so maybe they have no intention of going back there again. But, I hope they have some plan to be realistically competitive with more common mid-to-high end PCs with single GPUs. If they can’t even pull that off, they may as well just throw in the towel and abandon GPU-dependent professional markets.

    entropys said:
    … Then software companies stop making Mac versions because it’s too hard to do something entirely separate from Wintel, and it costs too much for a diminishing number of people, so people like me have no choice but to move to Windows.  It was the exact same situation in the nineties, until Jobs came back and revitalised the joint with common parts.  This is less of a problem for education and home use these days, but it is rapidly growing worse for specialist software. …
    I think our primary hope there is that the rest of the market might start shifting towards non-Wintel solutions, which might force some developer change in focus as well. If Microsoft starts moving towards ARM, or forces a change in Intel’s direction, etc., some of the legacy aspect will begin to be broken and everyone will have to start shifting. But this is a way down the road, so we might be stuck on having a Wintel PC on the side for the next half-decade or more. The good news is that with the right software, I think one can ‘remote control’ a Windows box in a Mac window realistically for even the most demanding work (haven’t tried it yet, but if one can play FPS games using GeForce Now, it must be possible to do CAD or 3D or such that way, too).

    Apple Silicon has a PCIe 4.0 controller. It's used for WLAN, Bluetooth, and of course Thunderbolt, which is PCIe over a serial bus. It's impossible to have Thunderbolt without a PCIe controller. The PCIe 4.0 controller showed up on Apple's list of M1 features in the presentation. The lack of eGPU support is more a driver and OS support question than anything to do with hardware.
    Yep, that is my understanding too. They can do it; it is more a matter of whether they will or not.

    mjtomlin said:
    1. If these new Mac Pros do not have support for all the PCIe cards that the current Intel Mac Pro supports, this system will fail. Most users interested in this system will be those looking to upgrade from their current Mac Pro and will want to bring their extremely expensive MPX modules (GPU cards) with them. The advantage of PCIe slots isn't just expandability, it's also portability - moving those cards to another system. …
    I hear you, but I have to be honest that I don’t think Apple cares about that.
  • Reply 35 of 55
    Hreb Posts: 51 member
    If Apple does release a Mac Pro without expandable memory (as seems likely), look for it to have a second tier of lower-performance memory, analogous to what Intel did with Optane.  There is no point putting an enormous number of cores in a machine if you can't scale the memory proportionally (and in a more cost-efficient manner than just buying more computers!).  But you also don't need all of your system memory to be as high-bandwidth as the 100 GB/s in the M2.

    The claim of "up to 76 cores" is interesting.  Currently the macOS kernel has a hard limit of 64 cores.  Apple could obviously raise that, but I doubt they'd do that outside a major release of macOS.  But macOS 14 could certainly have a higher core limit.
  • Reply 36 of 55
    Marvin Posts: 14,943 moderator
    Hreb said:
    The claim of "up to 76 cores" is interesting.  Currently the macOS kernel has a hard limit of 64 cores.  Apple could obviously raise that, but I doubt they'd do that outside a major release of macOS.  But macOS 14 could certainly have a higher core limit.
    This is for GPU cores. CPU cores in M2 Ultra would be 24.
    If this is true what is the point of a bigger Mac Studio Pro? I don’t see a justification.
    Some people like to have lots of storage; the Ultra only goes to 8TB. The Mac Pro currently has 32TB PCIe modules (4x 8TB drives):

    https://www.apple.com/shop/product/HMUE2ZM/A/promise-pegasus-r4i-32tb-raid-mpx-module-for-mac-pro

    There are 16TB drives now, so probably 64TB per module, and it can take two for 128TB of storage.
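Spelling out that module arithmetic (a sketch only; the 16TB-drive refresh is hypothetical, as noted above):

```python
# Capacity math for a Pegasus R4i-style MPX RAID module in the Mac Pro.
drives_per_module = 4
module_slots = 2  # the Mac Pro can take two such storage modules

shipping_tb = drives_per_module * 8    # 4 x 8TB  = 32TB, the module linked above
refresh_tb = drives_per_module * 16    # 4 x 16TB = 64TB, hypothetical refresh
total_tb = refresh_tb * module_slots   # 128TB across both modules

print(shipping_tb, refresh_tb, total_tb)  # 32 64 128
```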

    There are also network cards for optical IO, to be able to work with the Mac Pro remotely.

    If people need the storage or IO capability and it's similarly priced to the Ultra, it's an easy purchase to make. Most people only buy the mid-range Mac Pros so it doesn't need to reach the highest performance level.

    Mid-range is a 16-core Xeon plus a 15 TFLOPs W6800X ($10k). The M2 Ultra will be faster than the 28-core Xeon, and its GPU will be close to 30 TFLOPs, for $5k.
  • Reply 37 of 55
    netrox Posts: 1,325 member
    It won't happen unless they have re-tooled the memory management to tap non-unified RAM chips as secondary RAM.

    Apple is obviously getting rid of all video cards. All the recent iterations with M Series have eliminated discrete GPU cards altogether. 

    It's unlikely the Mac Pro will be customizable. 
     
  • Reply 38 of 55
    tht Posts: 4,787 member
    cgWerks said:
    tht said:
    … The big issue: how do they get a class-competitive GPU? It will be interesting to see what they do. The current GPU performance in the M2 Max is intriguing, and if they can get 170k GB5 Metal scores with an M2 Ultra, that's probably enough. But it's probably going to be something like 130k GB5 Metal. Perhaps they will crank the clocks up to ensure it is more performant than the Radeon Pro W6900X …
    The devil is in the details. I’ve seen M1 Pro/Max do some fairly incredible things in certain 3D apps that match high-end AMD/Nvidia, while at the same time, there are things the top M1 Ultra fails at so miserably, it isn’t usable, and is bested by a low-mid-end AMD/Nvidia PC.

    I suppose if they keep scaling everything up, they’ll kind of get there for the most part. But, remember the previous Mac Pro could have 4x or more of those fast GPUs. Most people don’t need that, so maybe they have no intention of going back there again. But, I hope they have some plan to be realistically competitive with more common mid-to-high end PCs with single GPUs. If they can’t even pull that off, they may as well just throw in the towel and abandon GPU-dependant professional markets.
    If only they can keep scaling up. Scaling GPU compute performance with more GPU cores has been the Achilles heel of Apple Silicon. I bet not being able to scale GPU performance is the primary reason why the M1 Mac Pro was not shipped or never got to the validation stage. On a per-core basis, the 64 GPU cores in the M1 Ultra perform at a little over half (GB5 Metal 1.5k points per core) of what a GPU core does in an 8-GPU-core M1 (2.6k per core). It's basically half if you compare the Ultra to the A14 GPU core performance. And you can see the scaling efficiency get worse and worse when comparing 4, 8, 14, 16, 24, 32, 48 and 64 cores.

    The GPU team inside Apple is not doing a good job with their predictions of performance. They have done a great job at the smartphone, tablet and even laptop level, but getting the GPU architecture to scale to desktops and workstations has been a failure. Apple was convinced that the Ultra and Extreme models would provide competitive GPU performance. This type of decision isn't based on some GPU lead blustering that the architecture would work. It should have been based on modeled chip simulations showing that it would work and what potential it would have. After that, a multi-billion-dollar decision would be made. So, something is up in the GPU architecture team inside Apple imo. Hopefully they will recover and fix the scaling by the time the M3 architecture ships. The M2 versions have improved GPU core scaling efficiency, but not quite enough to make a 144-GPU-core model worthwhile, if the rumors of the Extreme model being canceled are true (I really hope not).

    If the GPU scaling for the M1 Ultra was say 75% efficient, it would have scored about 125k in GB5 Metal. About the performance of a Radeon Pro 6800. An Extreme version with 128 GPU cores at 60% efficiency would be 200k in GB5 Metal. That's Nvidia 3080 territory, maybe even 3090. Both would have been suitable for a Mac Pro, but alas no. The devil is in the details. The Apple Silicon GPU team fucked up imo.
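To put rough numbers on that scaling argument, here's the back-of-envelope math (a sketch only, using the per-core GB5 Metal figures quoted above: ~2.6k/core for the 8-core M1, ~1.5k/core for the 64-core M1 Ultra):

```python
# Projected GB5 Metal scores (in thousands) at a given GPU-core scaling efficiency.
def projected_score(cores, per_core_base=2.6, efficiency=1.0):
    return cores * per_core_base * efficiency

m1_8core = projected_score(8)                              # ~20.8k, near-perfect scaling baseline
ultra_shipped = projected_score(64, efficiency=1.5 / 2.6)  # ~96k, roughly what the M1 Ultra does
ultra_at_75pct = projected_score(64, efficiency=0.75)      # ~125k, Radeon Pro 6800 class
extreme_at_60pct = projected_score(128, efficiency=0.60)   # ~200k, 3080/3090 territory

for label, score in [("M1 8-core", m1_8core), ("M1 Ultra (shipped)", ultra_shipped),
                     ("Ultra @ 75% scaling", ultra_at_75pct),
                     ("Extreme @ 60% scaling", extreme_at_60pct)]:
    print(f"{label}: ~{score:.0f}k GB5 Metal")
```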
  • Reply 39 of 55
    tht Posts: 4,787 member
    mjtomlin said:
    1. If these new Mac Pros do not have support for all the PCIe cards that the current Intel Mac Pro support, this system will fail. Most users interested in this system will be those looking to upgrade from their current Mac Pro and will want to bring their extremely expensive MPX modules (GPU cards) with them. The advantages of PCI slots isn't just expandability, it's also portability - moving those cards to another system.

    2. RAM is not on the SoC, it's on-package and can easily moved to the motherboard. There's no reason Apple cannot do this - yes, there will be a performance hit.
    1. I think you are going to be disappointed. It's not going to support non-Apple GPUs. Apple has already told developers that Apple Silicon isn't going to support discrete GPU cards, isn't going to support AMD, Nvidia or Intel GPUs. If it were different, they would have supported eGPUs with Apple Silicon machines way back when.

    Now, they aren't holding up their end of the bargain by not offering Apple Silicon GPUs competitive to AMD and Nvidia graphics cards, but they surely know that. Sounds like it hasn't gotten bad enough for them to reverse the decision. The decision surely has huge ramifications for macOS design too.

    For other PCIe cards like IO cards, storage, audio, and whatnot, I bet they will be supported if the Mac Pro has PCIe slots, which I think it will.

    2. You literally stated a reason for Apple not to do it. The GPU is going to have a performance hit with less memory bandwidth. ;) Before Apple started showing their architecture, a lot of people were contemplating how they were going to have system memory feed the GPU. 4 to 8 stacks of HBM2e? Lots of GDDR memory? Their own memory stacking solution (memory cubes et al)? 8 to 12 DDR5 channels? Turns out they decided on a gazillion channels of commodity LPDDR. Perhaps their GPU scaling issue is really a latency issue with LPDDR, and they really need a high-clock memory solution (GDDR) to fix it? I don't know.
  • Reply 40 of 55
    cgWerks Posts: 2,928 member
    tht said:
    If only they can keep scaling up. Scaling GPU compute performance with more GPU cores has been the Achilles heel of Apple Silicon. … And you can see the scaling efficiency get worse and worse when comparing 4, 8, 14, 16, 24, 32, 48 and 64 cores.
    … but getting the GPU architecture to scale to desktops and workstations has been a failure. Apple was convinced that the Ultra and …
    If the GPU scaling for the M1 Ultra was say 75% efficient, it would have scored about 125k in GB5 Metal. About the performance of a Radeon Pro 6800. An Extreme version with 128 GPU cores at 60% efficiency would be 200k in GB5 Metal. That's Nvidia 3080 territory, maybe even 3090. Both would have been suitable for a Mac Pro, but alas no. The devil is in the details. The Apple Silicon GPU team fucked up imo.
    Thanks, and I agree. Great analysis. But, I was referring to other aspects of AMD/Nvidia silicon that are probably missing in Apple’s architecture, like ray-tracing. Even when benchmark numbers look good, in the real world certain aspects fall flat (while others are quite impressive). I’ve seen a really huge ‘scene’ (a high number of components) rotated around and moved through in a CAD app as well as or better than on a 3080, while other operations in a 3D app were so slow it was pretty much unusable, and better on PC laptops with little better than an RX 580. Maybe some software optimization can still be done to improve some of that, but I’m guessing some core architecture pieces are still missing.

    The scaling thing is dead-on, though. With the M1 you just aren’t getting enough more performance to justify the higher-core chips. The M2 is looking quite a bit better from the little I’ve seen so far. Fingers crossed for the Studio Max/Ultra. But, I’m guessing I’ll need to wait for M3/M4 until some of the other aspects are added, and maybe will need a PC in the interim if my Intel mini/eGPU doesn’t cut it. I really want to jump into ASi, but I don’t think it is there yet for me.

    tht said:
    1. I think you are going to be disappointed. It's not going to support non-Apple GPUs. Apple has already told developers that Apple Silicon isn't going to support discrete GPU cards, isn't going to support AMD, Nvidia or Intel GPUs. If it were different, they would have supported eGPUs with Apple Silicon machines way back when.

    Now, they aren't holding up their end of the bargain by not offering Apple Silicon GPUs competitive to AMD and Nvidia…

    2. You literally stated a reason for Apple not to do it. The GPU is going to have a performance hit with less memory bandwidth. ;) Before Apple started showing their architecture, a lot of people were contemplating how they were going to have system memory feed the GPU. 4 to 8 stacks of HBM2e? Lots of GDDR memory? Their own memory stacking solution (memory cubes et al)? 8 to 12 DDR5 channels? Turns out they decided on a gazillion channels of commodity LPDDR. Perhaps their GPU scaling issue is really a latency issue with LPDDR, and they really need a high-clock memory solution (GDDR) to fix it? I don't know.
    Yeah, I think we’re just hopeful there is a ‘plan B’ if they fail at ‘plan A’.

    re: performance hit - I don’t really understand how much of this is to do with the GPU, memory, bus, or language (i.e. Metal vs OpenGL, etc.). I know that with my Intel mini and eGPU, there doesn’t seem to be much of an impact from it being external on a slower bus vs internal. My thought has been that it’s mostly sending ‘commands’, rather than saturating the bandwidth trying to support a bunch of monitors or stuff like that, so I suppose it depends on the use-case. Even stuff I’ve tried like crypto-mining doesn’t seem to be TB/bus limited.

    So, I think for adding real-world GPU performance for apps like 3D & CAD, it really could be an eGPU and the slower bus would be OK.
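For a rough sense of the bus gap in question, here are nominal link rates (a sketch with approximate figures, not measured throughput; Thunderbolt 3/4's PCIe tunnel carries roughly 32 Gbps of its 40 Gbps link):

```python
# Nominal bandwidth of the links behind the eGPU-vs-internal-slot comparison.
links_gbytes_per_s = {
    "Thunderbolt 3/4 PCIe tunnel": 32 / 8,  # ~4 GB/s of PCIe payload
    "PCIe 3.0 x16": 15.75,
    "PCIe 4.0 x16": 31.5,
}

egpu = links_gbytes_per_s["Thunderbolt 3/4 PCIe tunnel"]
slot = links_gbytes_per_s["PCIe 4.0 x16"]
print(f"internal slot: ~{slot / egpu:.1f}x the eGPU link bandwidth")
```

Which is consistent with the point above: command-heavy viewport work barely notices the slower eGPU link, while bandwidth-heavy workloads (lots of displays, heavy texture streaming) would.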