M3 Ultra Mac Studio rumored to debut in mid-2024 -- without a Mac Pro

Comments

  • Reply 41 of 41
    thttht Posts: 5,620 member
    blastdoor said:
    One thing I've wondered -- might they separate the GPU onto a separate 'chiplet' that is connected by some high speed interconnect (ultra fusion or whatever) to the CPU+SOC silicon? Right now, for high-end users, there's not a lot of flexibility in how much CPU vs GPU power you can get. Some workloads skew heavily towards CPU and some towards GPU. If the CPU and GPU weren't on the same die, maybe we could get a bit more flexibility in configurations. For example, I'd take as many CPU cores as they would give me, but I have little use for GPU. I'd be fine with an M3 Max GPU, but I'd like a lot more CPU power. Today, I have to pay for GPU cores I don't need in order to get more CPU cores. So, I'd pick something with, say, 64 CPU cores on one die fused to a GPU die that has, say, 30 GPU cores. 
    I think, based on what others have said, it isn't necessary to use a substrate (like UltraFusion) or a specialized interconnect like AMD's Infinity Fabric (as seen in the 2019 Mac Pro). You'd still have your base unit, Max or Ultra, maybe not Pro now that the relationship between Max and Pro is not as straightforward as it used to be. Anyhow, you'd have the base unit, then add to it via PCIe.

    I think you'll see PCIe 5.0 lanes in the next Mac Pro. Intel is already using it in its 12th generation. How Apple allocates them and what they connect to is up to Apple. Intel uses two types of lanes: those that connect to the CPU and those that connect to the PCH (platform controller hub), but that's just Intel. 
    Some googling indicates that PCIe 5 bandwidth falls well short of the M3 Max's memory bandwidth, so I don’t think Apple could keep a GPU connected via PCIe in the same memory space as a GPU connected via UltraFusion.
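
    For rough context, here is a back-of-the-envelope sketch (Swift) of the gap being described, using published figures only: PCIe 5.0's 32 GT/s per lane with 128b/130b encoding, and Apple's quoted 400 GB/s of memory bandwidth for the M3 Max and 2.5 TB/s for UltraFusion. It is an illustration, not a measurement.

```swift
import Foundation

// Back-of-the-envelope bandwidth comparison using published figures:
// PCIe 5.0 runs at 32 GT/s per lane with 128b/130b encoding; Apple quotes
// 400 GB/s of memory bandwidth for the M3 Max and 2.5 TB/s for UltraFusion.
let gtPerSecondPerLane = 32.0e9
let encodingEfficiency = 128.0 / 130.0
let lanes = 16.0

// Usable PCIe 5.0 x16 bandwidth per direction, in GB/s.
let pcie5x16 = gtPerSecondPerLane * encodingEfficiency * lanes / 8.0 / 1.0e9

let m3MaxMemory = 400.0   // GB/s, Apple's quoted M3 Max figure
let ultraFusion = 2500.0  // GB/s, Apple's quoted UltraFusion figure

print(String(format: "PCIe 5.0 x16: %.0f GB/s per direction", pcie5x16))
print(String(format: "M3 Max memory: %.0f GB/s (%.1fx PCIe 5.0 x16)", m3MaxMemory, m3MaxMemory / pcie5x16))
print(String(format: "UltraFusion: %.0f GB/s (%.1fx PCIe 5.0 x16)", ultraFusion, ultraFusion / pcie5x16))
```

    Even before protocol overhead, a PCIe 5.0 x16 link (~63 GB/s per direction) is several times slower than the M3 Max's unified memory and far slower than UltraFusion.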

    Of course, Apple could choose to support discrete GPUs anyway, but that’s something they could do now, too.

    I’m skeptical that Apple will support discrete GPUs. They have made *such* a big deal about the benefits of unified memory that I don’t think they will go back on that.

    But separate GPU chiplets would allow for greater flexibility while maintaining the super-high bandwidth. So I think that’s a real possibility for the M lineup eventually (probably not going to happen for the A lineup, though). 
    Tiling architectures with silicon bridges are going to be very common for SoCs going forward. The M1 Ultra was Apple's first iteration, using the UltraFusion silicon bridge, and Meteor Lake is Intel's first big iteration.

    So, having an SoC architecture with two types of interchangeable tiles, a CPU-dominant tile and a GPU/NPU-dominant tile, is something Apple could eventually move to. Two sides of each die would have to be silicon bridge interfaces, and Apple will likely want passive ones with zero to one cycle of latency. Not an easy thing, as the market wants performance to scale in at least three ways now: CPU, GPU, and NPU. And SRAM will probably need to be stacked and bridged as well. If anything, SRAM will be what drives the use of silicon bridges.

    Say a base SoC with 8 CPU p-cores and 8 GPU cores, an 8+8 config, and 4 LPDDR channels as the starter chip that goes into iPads, MBAs, and starter MBP configs. Then a GPU die with 40 GPU cores and 4 LPDDR channels. Want a CPU-dominant SoC? Use 2 CPU dies for a 16+16 config, and you can keep adding CPU dies: 4 CPU dies gets you a 32 p-core SoC, and if each die has 4 e-cores, that adds 16 e-cores. Same thing with the GPU die: a CPU die and 3 GPU dies gets you an 8+128 config. Mix and match four dies.
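
    To make that mix-and-match arithmetic concrete, here's a tiny Swift sketch using the hypothetical die specs above (an 8 p-core / 4 e-core / 8 GPU-core base die and a 40 GPU-core die, each with 4 LPDDR channels); none of these are real Apple parts, just the numbers from this thought experiment.

```swift
// Toy model of the mix-and-match tile idea: one "CPU" tile (also the standalone
// starter SoC) and one "GPU" tile, combined into packages of up to four dies.
// All core counts are hypothetical numbers from the comment, not announced parts.
struct Tile {
    let pCores: Int
    let eCores: Int
    let gpuCores: Int
    let lpddrChannels: Int
}

let cpuTile = Tile(pCores: 8, eCores: 4, gpuCores: 8, lpddrChannels: 4)
let gpuTile = Tile(pCores: 0, eCores: 0, gpuCores: 40, lpddrChannels: 4)

// Sum a package of one to four tiles into a single SoC spec string.
func package(_ tiles: [Tile]) -> String {
    precondition((1...4).contains(tiles.count), "this sketch assumes 1 to 4 dies")
    let p = tiles.reduce(0) { $0 + $1.pCores }
    let e = tiles.reduce(0) { $0 + $1.eCores }
    let g = tiles.reduce(0) { $0 + $1.gpuCores }
    let ch = tiles.reduce(0) { $0 + $1.lpddrChannels }
    return "\(p)P+\(e)E CPU, \(g) GPU cores, \(ch) LPDDR channels"
}

print(package([cpuTile]))                            // starter SoC: 8+8
print(package([cpuTile, cpuTile]))                   // CPU-leaning: 16+16
print(package([cpuTile, cpuTile, cpuTile, cpuTile])) // 32 p-cores, 16 e-cores
print(package([cpuTile, gpuTile, gpuTile, gpuTile])) // GPU-leaning: 8+128
```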

    Latency from the first die to the fourth die likely kills this plan for GPUs, though. Traffic would have to traverse 3 bridges, and I'm assuming there will be added latency with that many. It only works if those silicon bridges make the bus performance indistinguishable from a bus on a monolithic chip.
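
    Spelling out the hop count: in a linear chain of four dies, traffic between the end dies crosses three bridges. The per-hop latency below is a placeholder assumption purely for illustration; Apple hasn't published UltraFusion latency figures.

```swift
// Worst-case bridge traversals in a linear chain of dies. The per-hop latency
// is an assumed placeholder, not a published figure.
let dieCount = 4
let worstCaseHops = dieCount - 1                 // die 1 -> die 4 crosses 3 bridges
let assumedNanosecondsPerHop = 5.0               // placeholder assumption
let addedLatency = Double(worstCaseHops) * assumedNanosecondsPerHop
print("Worst-case hops: \(worstCaseHops), added latency ≈ \(addedLatency) ns each way")
```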

    Anyways, Apple's current chip strategy works for their lineup. As long as they are selling the devices they are, they probably shouldn't change it. If they make inroads into the workstation and gaming markets, the chip strategy can change.