Johny Srouji says Apple's hardware ambitions are limited only by physics


Comments

  • Reply 21 of 27
    wood1208 said:
    Buy a Mac is a good advice. Apple got it right with these new MAC chips. It will drive MAC adoption in professional and enterprise world. Wish Apple could make 14" cheaper so more high school and college students can afford. In fact, 14" can become GO TO laptop for them.
    Why? That’s literally what the Air is for. (And it’s ‘Mac’, non-screamed)
  • Reply 22 of 27
    Marvin · Posts: 15,322 · moderator
    hexclock said:
    docno42 said:
    GG1 said:
    On some other AI thread, a poster mentioned how expensive these M1 Pro/Max SoCs were to make, especially with the variable RAM amounts. So I'm wondering if the forthcoming Mac Pro may just use two or more M1 Maxes on a single motherboard, similar to very high-end dual-Xeon boards.
    Better hope they never go dual socket. The compromises you have to make to have two sockets are numerous, and they're what held back the cheese-grater and trash-can Mac Pros. Heck, there are performance penalties even for "chiplet" packaging like AMD is using with Ryzen and Threadripper. I get why they are doing it - it's WAY more cost-effective for a bunch of reasons. But if Apple can stomach the production of a massive SoC - if you care about performance, that's the real ticket and what I hope they stay focused on.
    mjtomlin said:
    I doubt they'll double up on SoCs. More than likely they'll go with much bigger SoC variants, "M# Ultra" or "M# Extreme". They would be so far ahead in performance that they'd only need to update them half as often as the rest of the M1 family (basically the same as they did with the "X" variant in the A-series).
    Doubling up SoCs simply won't work (at least, well enough to be worth doing, and assuming you're talking about the existing SoCs). There are all sorts of "gated by physics" issues there. But a single large SoC also seems impossible (at least if they stick with their current memory architecture). Whatever they do, it's going to be a lot more interesting. I wrote a bit more about this in comments here: https://forums.appleinsider.com/discussion/224623/compared-new-14-inch-macbook-pro-versus-13-inch-m1-macbook-pro-versus-intel-13-inch-macbo/p2 (see comments 31, 32, 35, 36).

    This is a monumental engineering challenge! If it weren't, we'd already be seeing the new Mac Pros. I fully expect to see a new Pro next year, and that it will be amazing. Don't imagine for a second that these things are easy.
    It’s fun to imagine a Mac Pro with 2 or 4 M1 Max chips, but I remember my G5 1.8 dual getting so hot you could fry an egg on the case, and that thing had 9 fans!
    4x M1 Max would only be in the region of 300W fully maxed out with both CPU and GPU. The 2019 Mac Pro is over 900W. The 2013 cylinder Mac Pro was able to handle over 300W with a single whisper-quiet fan and minimal heat. Fully maxed, the CPUs alone would be under 100W and the GPUs around 240W.

    Not many processes will max out a 40 TFLOPS GPU except raw compute tasks; that's roughly 1.5x an Nvidia 3090. Real-time rendering wouldn't come close most of the time.

    If someone took the same workload from a pre-2019 Mac Pro onto an M1 Max, just one of the 4 chips would handle it. If they took the same workload from the 2019 Pro, it would handle it with less than 1/3 the power usage.
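    The arithmetic behind those wattage estimates can be sanity-checked in a few lines. The per-chip figures below are rough numbers implied by the totals above, not Apple specifications:

```python
# Back-of-envelope power estimate for a hypothetical 4x M1 Max package.
# Per-chip wattages are rough estimates inferred from the discussion,
# not measured or published figures.
M1_MAX_CPU_W = 25   # approx. full-load CPU power per chip (assumption)
M1_MAX_GPU_W = 60   # approx. full-load GPU power per chip (assumption)
CHIPS = 4

cpu_total = M1_MAX_CPU_W * CHIPS   # "under 100W" claim
gpu_total = M1_MAX_GPU_W * CHIPS   # "around 240W" claim
combined = cpu_total + gpu_total   # same ballpark as the ~300W figure

MAC_PRO_2019_W = 900               # cited 2019 Mac Pro system power
print(f"4x M1 Max: ~{combined}W vs 2019 Mac Pro: >{MAC_PRO_2019_W}W")
```

    Even with generous per-chip assumptions, the hypothetical four-chip package lands around a third of the 2019 Mac Pro's power budget.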

    They could fit 12x M1 Max chips into a 2019 Mac Pro enclosure with > 0.5 trillion transistors. This kind of thing is being done for special use cases with shared memory across all the chips:

    https://techxplore.com/news/2020-11-trillion-transistor-chip.html

    "The chip is composed of 84 virtual chips along a single silicon wafer and 4,539 computing cores, which means there are effectively 381,276 computing cores to tackle mathematical processes in parallel. Packed with 18 GB RAM, the cores are connected with a communications fabric called Swarm that runs at 100 petabits per second.

    "This work opens the door for major breakthroughs in scientific computing performance," Cerebras researchers wrote in a blog post. "The CS-1 is the first ever system to demonstrate sufficient performance to simulate over a million fluid cells faster than real-time. This means that when the CS-1 is used to simulate a power plant based on data about its present operating conditions, it can tell you what is going to happen in the future faster than the laws of physics produce that same result."

    "We can solve this problem in an amount of time that no number of GPUs or CPUs can achieve," said Cerebras's CEO, Andrew Feldman. "This means the CS-1 for this work is the fastest machine ever built, and it's faster than any combination of clustering of other processors."

    In a paper distributed at the conference Tuesday, titled "Fast Stencil-Code Computation on a Wafer-Scale Processor," Cerebras and Department of Energy researchers explained that shared memory is the critical difference between the CS-1 and Joule machines."

  • Reply 23 of 27
    hexclock said:
    It’s fun to imagine a Mac Pro with 2 or 4 M1 Max chips, but I remember my G5 1.8 dual getting so hot you could fry an egg on the case, and that thing had 9 fans!
    As I explained in the message you quoted, that won't work. It's not just a heat issue (though that is an extremely important one). Really, it wouldn't be fun, it would be awful. Whatever Apple is doing, it's going to be much more interesting. *Based* on the M1 (or, more likely, M2), but not the M1 itself. Or rather, I should probably say, based on the M1/M2 cores.
  • Reply 24 of 27
    Marvin said:
    4x M1 Max would only be in the region of 300W fully maxed out with both CPU and GPU. [...] Not many processes will max out a 40TFLOP GPU except for raw compute tasks, this is like 1.5x an Nvidia 3090. [...] They could fit 12x M1 Max chips into a 2019 Mac Pro enclosure with > 0.5 trillion transistors. This kind of thing is being done for special use cases with shared memory across all the chips (the Cerebras wafer-scale chip, linked above).

    Your watt analysis is faulty because you're not looking at the whole picture. As I've mentioned elsewhere, the connectivity between cores across multiple chip(let)s is hugely expensive - some estimates put it at over 50% of the entire power budget of modern large-core-count CPUs like Xeons and EPYCs. That number includes memory controllers, so you can't just multiply (M1MaxPower x 4) to get a number, but it gives you a decent place to start.

    Also if you just did a half-assed job lashing together 4x M1Ms, you wouldn't have a 40Tflops (it's not "a tflop", it's "a tflops", singular) GPU. That situation would be like lashing 4x Nvidias or AMD Radeons together (SLI or CrossFire). You do NOT get linear speedups in that situation, most of the time - again, because of latency and bandwidth. You can be certain Apple isn't going to settle for that!

    Lastly, you have *entirely* missed the point of the Cerebras. Why do you think it is a *wafer* and not a bunch of chips cut from a wafer and then connected up?

    It's because that's how they tackle latency and bandwidth.

    Sure, you could get 12x M1Ms into a Mac Pro case. But you couldn't get them talking together fast enough for them to do useful work at anything close to 12x the rate of a single M1M.
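    The "nothing close to 12x" point can be sketched with Amdahl's-law-style arithmetic plus a per-chip communication tax. The parallel fraction and overhead numbers here are illustrative assumptions, not measurements of any real interconnect:

```python
# Illustrative multi-chip scaling model: Amdahl's law with a linear
# communication penalty per extra chip. The 95% parallel fraction and
# 3%-per-chip overhead are assumptions for the sketch only.
def effective_speedup(chips, parallel_frac=0.95, comm_overhead=0.03):
    amdahl = 1.0 / ((1.0 - parallel_frac) + parallel_frac / chips)
    comm_penalty = 1.0 + comm_overhead * (chips - 1)
    return amdahl / comm_penalty

for n in (1, 2, 4, 12):
    print(f"{n:>2} chips -> {effective_speedup(n):.2f}x (ideal: {n}x)")
```

    Even with a generous 95% parallel fraction, the model lands well under 12x at 12 chips, and the gap to ideal widens as chips are added - the latency/bandwidth tax in miniature.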

    However, you have identified one of the tools Apple could use to solve their problem that I was talking about. They could conceivably have SoCs made of four dies that are cut from the wafer all together. They might use TSMC's CoW or similar tech. If they don't fall afoul of Cerebras' patents, anyway, which is not a small concern. Then they would still face issues with memory and heat removal, and perhaps most problematically, defect rates. All those issues seem solvable though.

    Anyone thinking this will yield cheaper Mac Pros should think again. :-/ The issues may be solvable, but not cheaply.
    edited October 2021
  • Reply 25 of 27
    Marvin · Posts: 15,322 · moderator
    Also if you just did a half-assed job lashing together 4x M1Ms, you wouldn't have a 40Tflops (it's not "a tflop", it's "a tflops", singular) GPU. That situation would be like lashing 4x Nvidias or AMD Radeons together (SLI or CrossFire). You do NOT get linear speedups in that situation, most of the time - again, because of latency and bandwidth. You can be certain Apple isn't going to settle for that!
    For compute-heavy tasks you do get near-linear speedups. Nobody would bother buying multi-GPU setups, and manufacturers wouldn't bother selling them, if they didn't:

    https://barefeats.com/pro-w6800x-quadruple.html

    It doesn't necessarily scale as well for real-time or memory-heavy tasks, but it usually doesn't need to, and it's still better than a single GPU. Apple's current Mac Pro has the CPU separate from the GPUs, the GPUs separate from each other, and memory that is separate and not shared. A Mac Pro SoC just has to be in the same class as the 2019 Mac Pro or better, and they can do that very easily and much more cost-effectively than passing $16k+ on to Intel and AMD for their overpriced components.
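    The near-linear scaling for compute-heavy work follows from those jobs being embarrassingly parallel: frames are independent, so GPUs only share a tiny dispatch cost. A minimal sketch of frame-based render distribution (all timings hypothetical):

```python
# Sketch of why offline rendering scales near-linearly across GPUs:
# frames are independent, so each GPU takes a slice of the frame range
# and the only serial cost is job dispatch. Timings are illustrative.
def render_time(frames, gpus, per_frame_s=10.0, dispatch_s=0.5):
    # Distribute frames as evenly as possible; wall-clock time is set
    # by the GPU holding the largest share (ceiling division).
    worst_share = -(-frames // gpus)
    return dispatch_s + worst_share * per_frame_s

one = render_time(240, 1)
four = render_time(240, 4)
print(f"1 GPU: {one:.0f}s, 4 GPUs: {four:.0f}s, speedup {one / four:.2f}x")
```

    The speedup sits just under 4x because only the fixed dispatch cost is serial - which is why render farms (and entirely separate machines) scale so well on this class of workload.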
    However, you have identified one of the tools Apple could use to solve their problem that I was talking about. They could conceivably have SoCs made of four dies that are cut from the wafer all together. They might use TSMC's CoW or similar tech. If they don't fall afoul of Cerebras' patents, anyway, which is not a small concern. Then they would still face issues with memory and heat removal, and perhaps most problematically, defect rates. All those issues seem solvable though.

    Anyone thinking this will yield cheaper Mac Pros should think again. :-/ The issues may be solvable, but not cheaply.
    Given the unit volume for high-end chips, the defect rate would be OK. Mac Pro users number in the tens of thousands versus millions of mainstream users.

    I'm not seeing where any significant additional cost comes in. Solving an engineering challenge isn't, in itself, a significant retail cost. If they bundle 4 chips, then it's the cost to produce 4 chips. If it's faster memory, there's extra cost, but they're not going to use memory that costs something insane like $1000/GB. They'll use a type of mainstream high-bandwidth memory and live with whatever compromises that comes with.

    Apple's not in the server business and doesn't compete with Epyc/Xeon as far as server workloads go. They are in the creative space. Video and effects rendering are highly parallel tasks that don't require syncing data at a fast rate between chips, and they scale extremely well even across entirely separate machines.

    Apple could duct-tape 4x M1 Max MacBook Pros together, plug Thunderbolt 4 cables into each, and it would offer value for a creative workflow over a single machine - and it would be cheaper and faster than a specced-out 2019 Mac Pro. There are companies doing this with Mac minis.

    Apple has demonstrated what they are capable of with their mainstream silicon; they're not going to have problems making a chip to target the same audience that is satisfied by the 2019 Mac Pro.
  • Reply 26 of 27
    hexclock · Posts: 1,253 · member
    As I explained in the message you quoted, that won't work. It's not just a heat issue (though that is an extremely important one). Really, it wouldn't be fun, it would be awful. Whatever Apple is doing, it's going to be much more interesting. *Based* on the M1 (or, more likely, M2), but not the M1 itself. Or rather, I should probably say, based on the M1/M2 cores.
    It is certainly very interesting, and I will keep a close eye on what they do. At some point silicon will be replaced with a new engineered substrate, which will allow another leap in performance. As it stands now, my 2013 Mac Pro is still way more computer than I actually need. You are absolutely right about it running very quietly; it doesn't get too warm unless pushed to the brink. I use Logic quite a bit, and it still handles whatever I can throw at it, besides 20 tracks of Sculpture perhaps.
    Last time I was in the Apple Store, the new Mac Pro was set up but had no mouse attached, so I didn't get to play with it. I didn't have time to ask for one, and they were really busy with the iPhone 13 launch anyway. Might as well wait until the M-series Mac Pro comes out and then check it out.
  • Reply 27 of 27
    robaba · Posts: 228 · member
    Perhaps they will mirror the M1 Pro/Max along two axes to build out the massive chips they seem to need for the desktop Mac Pro. From what I've seen, it looks like they might have been structuring it for this. That being said, I'm no expert by any means.