The latest Nvidia GPUs have over 25 billion transistors, and an energy budget to support that. There is simply no way Apple can even begin to approach that. It takes Nvidia’s highest-end GPU boards to enable ray tracing in a useful way. Remember, we’re talking not about ray tracing static imagery, but dynamically rendered images at a rate of at least 60 fps. That’s real-time tracing. It’s in addition to every other graphics demand, such as real-time physics, dynamic texture generation, hidden-surface elimination, etc. All of these also require very large amounts of graphics RAM.
The A12Z on TSMC N7 has 10 billion transistors on a 120 mm^2 die. This is for a fanless tablet. I definitely see them hitting 20 to 30 billion transistors on TSMC N5 for the Mac SoCs. A notional A14X for an iPad Pro could be 15 billion transistors just based on the new fab alone. A Mac SoC for an iMac can easily be 2x that at around 30 billion.
Transistor budgets of over 25 billion are not that daunting for Apple Silicon. It's the power budgets that are. They just aren't going to run any of their SoCs at over 150 Watts, and I'd bet they want 100 W or lower. That will be self-limiting, so performance probably won't match these 300 W dGPUs or 200 W CPUs on average.
I expect them to be at parity with, or possibly more performant than, dGPUs at each Watt class though. The fab advantage will give them about a 30% perf/W advantage. The density advantage is going to be about 100%. They have so many transistors to play with on TSMC N5 that you wonder if they are willing to use it all.
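The transistor-budget claim above can be sanity-checked with back-of-envelope arithmetic. This is only a sketch: the baseline figures (A12Z at ~10B transistors on a ~120 mm^2 N7 die) come from this thread, while the ~1.8x N7-to-N5 density gain is an assumed round number, not an official TSMC figure.

```python
# Back-of-envelope transistor budgets for hypothetical N5 Mac SoCs.
# Baseline from the thread: A12Z, ~10B transistors on a ~120 mm^2 N7 die.
# Assumption (mine, hedged): N5 delivers roughly 1.8x the effective density of N7.

a12z_transistors = 10e9
a12z_area_mm2 = 120.0

density_n7 = a12z_transistors / a12z_area_mm2  # ~83M transistors per mm^2
density_n5 = density_n7 * 1.8                  # assumed node-to-node scaling

for area in (120, 150, 200):
    print(f"{area:>3} mm^2 on N5 -> ~{density_n5 * area / 1e9:.0f}B transistors")
```

Even an A12Z-sized die lands around 18B under these assumptions, and a 200 mm^2 die reaches the ~30B ballpark mentioned above.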
But none of us know what Apple’s plans really are. We can speculate, but that’s about it. Apple told us the basics during the June conference. In fact, they told us more than I expected. I listened very closely. I looked at their charts. We don’t see a separate GPU, at least not for the foreseeable future. I suspect that we’ll see an SoC that competes with Intel very well in the beginning, likely exceeding the performance of Apple’s comparable previous machines. I don’t expect to see overwhelming GPU performance. I would expect to see performance on the level of a lower-end separate GPU.
Anything that exceeds that would thrill me.
Yes. Apple has a chance to do something different than the packaging and designs afforded to them with components from Intel and AMD. It could be a monolithic SoC. It could be chiplets. It could be stacks of chiplets. They could use HBM, GDDR, or some custom memory solution built on commodity DRAM. It's going to be interesting, to say the least.
Let’s be real, even if you could, how are you going to cool it? Integration is no big deal for laptops and consoles, which need to take size into consideration, but it wouldn’t make sense for desktops, where separating components allows for efficient cooling.
The 28-core in the Mac Pro consumes ~280-300 W and requires a ginormous heat sink. Anything beyond that needs a high-end water-cooling system, which would be disastrous for OEMs...
Apple is saying they aren't going to have high power consumption devices. So, the question is basically moot. The iMac 5K is running about 250 W worth of components or so. The Apple Silicon version is probably going to be half that. Every Apple Silicon design decision will be in favor of higher perf/Watt. They aren't going to be running their SoC at power levels that Intel, AMD and Nvidia are.
Apple is saying that Apple Silicon will be about as performant as, or a little more performant than, desktops, at much lower wattage. There is a level of performance and Watts they won't chase though. Something like 100 W CPUs and 250 W GPUs, maybe. 250 to 350 W CPUs and GPUs? Probably not. Could they achieve the same performance at 150 W as AMD/Nvidia at 300 W? Who knows. Sounds unlikely, as those GPUs are already using TSMC 7 nm and Samsung 8 nm. Will be interesting to see what they do.
Having a much more advanced architecture and then limiting its potential to a third is at best pathetic and laughable. No workstation limits itself to within 500~700 watts, nor should full-size desktops act like they’re Mac minis. You can, but given an envelope enabling hundreds or even a thousand watts, what’s the purpose?
Pros need performance in a way that matters far more than consumption, even in laptops. When you chase performance, you need tens to hundreds of watts with a proper cooling solution. Combining a high-performance GPU with the CPU is still a challenge, and ASi won’t change that. If Apple matched Intel’s consumption, that would give them unprecedented performance.
By the way, NVIDIA just released the 3080; that’s 320 W total. Imagine four of them on a chip.
There’s probably still a misunderstanding of how efficient these new chips are. The 2020 XPS 13 with the i7-1065G7 runs at ~48 W in Cinebench R15, way beyond its 15 W envelope, and it still throttles and can’t sustain its 3.5 GHz all-core turbo.
By comparison, the A12X/Z can run at 15 W without constraint. We saw how those cores nearly matched the 9900K in single-core tests.
The 12.9” iPad Pro can cool it well. My MacBook Air (8210Y) peaks at 20 W and comes with serious throttling.
Rumors say the 14” will match the 16”, and earlier some even suggested the 24” will have 10700K-level performance at 40 W. So if anyone really believes Apple will only match Intel in performance, that could only suggest Apple’s obsession with thinness is morbid. Imagine keeping the current 16” design but with an ASi inside, instead of making it less than 10 mm thick.
Even AMD is releasing 8-core ultrabooks with performance that nearly matches the 16”, at similar wattage.
Ok, we don’t know what Apple will do with a Mac Pro. That’s it. We. Don’t. Know.
Everything we’ve been talking about relates to laptops, minis and iMacs. It doesn’t even relate to the iMac Pro, much less the Mac Pro. Most likely, those will be the last machines moved over. It could be a year from now, or it could take the full two years.
By then, Apple will have evolved its line of chips. We can be sure that Apple understands the needs of those machines. I highly doubt they would do something stupid that would cripple performance. It’s not something they could hide. They made a mistake with the previous model, and learned their lesson.
Outside of the Mac Pro with Apple Silicon, yes, I think Apple is aiming for 2x perf/Watt relative to x86. They said it in the presentation. They want to be in the upper-left corner of the perf-versus-power illustration. To me, that implies that they will be a little more performant than x86 desktops, but at much lower Watts. For laptops, much higher performance at the same Watts.
So, a 15 W Mac SoC will perform like a 30 W x86 SoC, and a 50 W SoC like a 100 W x86 SoC. Additionally, RAM won't be user-upgradeable. It won't even be like the Mac mini or the iMac Pro, where some surgery is needed to take the machine apart to access DIMM slots. RAM will all be soldered. The GPUs will be competitive with or better than the AMD GPUs they would otherwise be using in the iMacs or MBP16 in 2021, and at lower Watts.
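That 2x perf/W relationship reduces to simple arithmetic. As a sketch only: the 2.0 ratio is the working assumption from the keynote framing above, not a measured figure.

```python
# Illustrative mapping from an Apple SoC power class to the x86 wattage
# needed for equivalent performance, under the assumed ~2x perf/W advantage.

def x86_equivalent_watts(apple_watts: float, perf_per_watt_ratio: float = 2.0) -> float:
    """Wattage an x86 part would need to match the same performance."""
    return apple_watts * perf_per_watt_ratio

for w in (15, 50, 100):
    print(f"{w:>3} W Apple SoC ~ {x86_equivalent_watts(w):.0f} W x86")
```

The same function run in reverse (divide instead of multiply) gives the "half the power of the iMac 5K's ~250 W of components" estimate earlier in the thread.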
For the Mac Pro, who knows? They are saying Big Sur on Apple Silicon will only support Apple GPUs. That might change next year, as they may just be prioritizing work. At face value, if they are keeping the Mac Pro as a product, that means they will be selling Apple GPU cards for the Mac Pro. Maybe it will be a GPU chip that will be in-package by way of a silicon bridge if it goes into a MBP or an iMac, but for the Mac Pro MPX models, they'll have an MCM package with 4 to 8 of them. The MPX module can cool 500 W, so they have headroom.
At some point though, they aren't going to chase wherever Nvidia and AMD are going. 320 to 350 Watts for the Nvidia 3080 and 3090 is getting into bonkers territory. Still acceptable for workstations, but how far are they going to take it? 400+ W for the 4000 series GPUs? If Apple can have the same performance at 200 W, I think that is a huge win. Just keeping parity in GPU but at lower Watts would be great.
Back to transistor densities. The A14 has 11.8b transistors. This is the phone chip. I'm guessing roughly 80 mm^2. An A14X for an iPad Pro and low-end Mac laptop, with 2x the area for CPU and 2x the area for GPU, will be around 15b. Double again for a higher-end Mac SoC and that gets them to 30b transistors on about a 200 mm^2 chip. That's just your average desktop chip. Hmm, 45b transistors for 300 mm^2?
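The die-size arithmetic in that paragraph can be laid out explicitly. All figures are the guesses above (A14 at 11.8B transistors, ~80 mm^2), and linear area-to-transistor scaling on the same node is itself an assumption:

```python
# Rough area -> transistor-count scaling for hypothetical larger A14-family dies,
# assuming transistor count scales linearly with die area on the same N5 node.

a14_transistors = 11.8e9
a14_area_mm2 = 80.0                       # rough guess for the phone chip, per the thread
density = a14_transistors / a14_area_mm2  # ~148M transistors per mm^2

for name, area in [("A14X-class", 100), ("mid-range Mac SoC", 200), ("big Mac SoC", 300)]:
    print(f"{name:>18}: {area} mm^2 -> ~{density * area / 1e9:.0f}B transistors")
```

This lands close to the 15b / 30b / 45b ballparks in the paragraph above.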
Yes, excuse me. I just have problems with the notion that "everything will be like an iPad Pro". As much as I like ASi, Macs won't defeat physics; there will still be trade-offs between size and performance. For a productivity tool, don't redesign just for the sake of it: you want these machines to last at least half a decade. Most rumors set themselves an unrealistic standard and have no regard for performance.
The latest Nvidia GPUs have over 25 billion transistors, and an energy budget to support that. There is simply no way Apple can even begin to approach that. It takes Nvidia’s highest GPU boards to enable ray tracing in a useful way. Remember we’re talking not about ray trace in static imagery, but in dynamically rendered images at a rate of at least 60fps. That’s real time tracing. It’s in addition to every other graphic demand, such as real time physics, dynamic texture generation, hidden image elimination, etc. these all require very large amounts of graphics RAM additionally.
The A12Z on TSMC N7 has 10 billion transistors on a 120 mm^2 die. This is for a fanless tablet. I definitely see them hitting 20 to 30 billion transistors on TSMC N5 for the Mac SoCs. A notional A14X for an iPad Pro could be 15 billion transistors just based on the new fab alone. A Mac SoC for an iMac can easily be 2x that at around 30 billion.
Transistor budgets of over 25 billion are not that daunting for Apple Silicon. It's the power budgets that are. They just aren't going to run any of their SoCs at over 150 Watts, and I'd bet they want 100 W or lower. That will be self-limiting, so performance probably won't match these 300 W dGPUs or 200 W CPUs on average.
I expect them to be at parity or possibly more performant than dGPUs at each Watt class though. The fab advantage will give them about 30% perf/W advantage. The density advantage is going to be about 100%. They have so many transistors to play with on TSMC N5 that you wonder if they are willing use it all.
but none of us know what Apple’s plans really are. We can speculate, but that’s about it. And Apple has told us the basics during the June conference. In fact, they told us more than I expected. I listened very closely. I looked at their charts. We don’t see a separate GPU. At least not for the recognizable future. I suspect that we’ll se an SoC that competes with Intel very well in the beginning, likely exceeding performance at the level of Apple’s similar machines previously. I don’t expect to see overwhelming GPU performance. I would to see performance of a lower level separate GPU.
‘’anything that exceeds that would thrill me.
Yes. Apple has a chance to do something different than the packaging and designs afforded to them with components from Intel and AMD. It could be a monolithic SoC. It could be chiplets. It could be stacks of chiplets. They could use HBM or GDDR or whatever custom memory solution used commodity DRAM. It's going to be interesting to say the least.
Let’s be real, even if you could, how are you going to cool it? No big deal for laptops and consoles, they need to take size into consideration. It wouldn’t make sense for desktops where separation makes efficient cooling.
The 28-Core in the Mac Pro consumes ~280-300W and requires a ginormous heat sink. Anything beyond needs a high-end water cooling system, which will be disastrous for OEMs...
Apple is saying they aren't going to have high power consumption devices. So, the question is basically moot. The iMac 5K is running about 250 W worth of components or so. The Apple Silicon version is probably going to be half that. Every Apple Silicon design decision will be in favor of higher perf/Watt. They aren't going to be running their SoC at power levels that Intel, AMD and Nvidia are.
Apple is saying that Apple Silicon will be about as performant or a little more performant than desktops, at much less Watts. There is a level of performance and Watts they won't chase though. Something like 100 CPUs and 250 W GPUs. 250 to 350 W CPUs and GPUs? Probably not. Could they achieve the same performance at 150 W as an AMD/Nvidia at 300 W? Who knows. Sounds unlikely as those GPUs are using TSMC 7nm and Samsung 8nm. Will be interesting to see what they do.
having a much advanced architecture then limiting its potential to 1/3 is at best pathetic & laughable. No workstations limited themselves within 500~700 of watts, nor full-size desktops that think they’re Mac mini’s. You can, but given all of that envelope enabling one to go hundreds or a thousand watts, what’s the purpose?
Pros need performance in a sense much important than consumption, even laptops. When you do, then it needs to go over tens of hundreds of watts with a proper cooling solution. Combining a high-performance GPU on them is still a challenge, ASi won’t change that. If Apple matched Intel’s consumption, that’ll gives them an unprecedented performance.
By the way, NVIDIA just released their 3080, that’s 320W total, imagine four of them on a chip.
Outside of the Mac Pro with Apple Silicon, yes, I think Apple is aiming for 2x perf/Watt relative to x86. They said it in the presentation. They want to be on the upper left corner of the perf versus power illustration. To me, that implies that they will be little more performant than x86 desktops, but at much lower Watts. For laptops, much higher performance at the same Watts.
So, a 15 W Mac SoC will perform like a 30 W x86 SoC, a 50 W SoC will perform like a 100 SoC. Additionally, RAM won't be user upgradeable. It won't even be like the Mac mini or an iMac Pro where some surgery is needed to take the machines apart to access DIMM slots. RAM will all be soldered. The GPUs will be competitive or better to the AMD GPUs they would be using in the iMacs or MBP16 in 2021, and at lower Watts.
For the Mac Pro, who knows? They are saying Big Sur on Apple Silicon will only support Apple GPUs. That might change next year, as they may just be prioritizing work. At face value, if they are keeping the Mac Pro as a product, that means they will be selling Apple GPU cards for it. Maybe it will be a GPU chip that sits in-package over a silicon bridge when it goes into a MBP or an iMac, while the Mac Pro MPX models get an MCM package with 4 to 8 of them. The MPX module can cool 500 W, so they have headroom.
At some point though, they aren't going to chase wherever Nvidia and AMD are going. 320 to 350 Watts for the Nvidia 3080 and 3090 is getting into bonkers territory. Still acceptable for workstations, but how far are they going to take it? 400+ W for the 4000 series GPUs? If Apple can have the same performance at 200 W, I think that is a huge win. Just keeping parity in GPU but at lower Watts would be great.
Back to transistor densities. The A14 has 11.8b transistors, and that's the phone chip. I'm guessing roughly 80 mm^2. An A14X for an iPad Pro and the low-end Mac laptops, with 2x the area for CPU and 2x the area for GPU, would be around 15b. Doubling again for a higher-end Mac SoC gets them to 30b transistors and about a 200 mm^2 chip. That's just your average desktop chip. Hmm, 45b transistors for 300 mm^2?
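The area math above is easy to sanity-check. Note the ~80 mm^2 die size is my guess from the paragraph above, so the implied density (~148 MTr/mm^2) is an assumption too, not a published number:

```python
# Back-of-envelope transistor scaling from the A14 guess above:
# ~11.8e9 transistors on a guessed ~80 mm^2 die. Both figures are
# assumptions, not published specs.

A14_TRANSISTORS = 11.8e9
A14_AREA_MM2 = 80.0  # guessed die size
DENSITY = A14_TRANSISTORS / A14_AREA_MM2  # transistors per mm^2

def transistors_for_area(area_mm2, density=DENSITY):
    """Scale the assumed A14 density to a larger die."""
    return area_mm2 * density

for area in (80, 200, 300):
    print(f"{area} mm^2 -> ~{transistors_for_area(area) / 1e9:.1f}b transistors")
```

At that assumed density, a 200 mm^2 die lands right around the 30b figure above, and 300 mm^2 comes out near 45b.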
They can, but I don't think cutting it in half would be a good idea. Think about this: the 16" right now can cool a 60-watt CPU, so what if you swapped it for a 60-watt ASi? Would it also be much quieter?
There's no urgent need to redesign every Mac from the ground up. I think with some improvements, all the current designs would serve another half-decade. A mature design is better than something flashy but short-lived, especially since Macs are productivity tools.
There's also no urgent need to solder down RAM on a 300-watt desktop either. Laptops have space limitations and LPDDR doesn't come on a stick; there isn't enough room in an entry iMac, but you have plenty in a 27~32". The iMac Pro is an exception due to its board layout, but it could be redesigned like its consumer sibling.
Integration works on the 16" because you want a smaller cooler while maximizing its capability. You have one chip with two heat sources to cool, and it evens out the thermal path for both CPU & GPU. You don't have to do that on a desktop, where separation makes sense: one unified solution runs hotter, and is thicker and more expensive, than two smaller but efficient coolers.
That, and workstation chips are expensive; we're talking about the iMac, where the CPU + GPU could end up costing $1,000. Processors come in tiers, and sometimes you want a high-end part to match a low-end one; it wouldn't make sense to put in a 10~16-core chip and then disable half of it.
Actually, why not just combine both large iMacs into the iMac Pro? They can share the same cooling and PSU, and both have 10-core versions with similar performance. Just make one model that starts at 8 cores and goes up to 20 or more, then offer an option to include a dedicated GPU.
Isn’t that somewhat similar to an iMac Pro? Not exactly, but closeish.
Yeah, fundamentally it's the same, the difference being power supply & cooling system. I don't see why the iMac couldn't have them.
Intel always offers low-end Xeons that are no faster than an i7; Apple doesn't have to do that with their own silicon.
Apple could choose any Xeon they wanted, but the prices get out of hand for their market.
https://www.notebookcheck.net/Intel-Core-i7-1065G7-Laptop-Processor-Ice-Lake.423851.0.html
The 12.9” iPad Pro could cool a chip like that well. My MacBook Air (8210Y) peaks at 20 watts and comes with serious throttling.
Rumors say the 14” will match the 16”, and even earlier some suggested the 24” will have 10700K-level performance at 40 watts. So if anyone really believes Apple will only match Intel in performance, that could only mean Apple’s obsession with thinness has become morbid. Imagine keeping the current 16” design but with an ASi inside, instead of making it less than 10 mm thick.
Even AMD is releasing 8-core ultrabooks with performance that nearly matches the 16”, at similar watts.
Everything we’ve been talking about relates to laptops, minis, and iMacs. It doesn’t even relate to the iMac Pro, much less the Mac Pro. Most likely, those will be the last machines moved over. It could be a year from now, or it could be the full two years.
By then, Apple will have evolved its line of chips. We can be sure that Apple understands the needs of those machines. I highly doubt they would do something stupid that would cripple performance. It’s not something they could hide. They made a mistake with the previous model, and learned their lesson.