Apple Silicon iMac & MacBook Pro expected in 2021, 32-core Mac Pro in 2022


Comments

  • Reply 41 of 69
    razorpit said:
    The iMac will come much sooner. It makes no sense for Apple to wait another year and release them at the end of 2021. 
    More likely, it’ll be released before Q3.
    Wondering if they are working on Intel emulators. There are a lot of us out here who still need to run the 64-bit version of Windows 10 Pro for specific reasons and don't want to lose that ability. Maybe these 'higher-end' machines will get that function first? 
    Apple says Microsoft is responsible for that.
  • Reply 42 of 69

    tipoo said:
    Just to add something: GPU "cores" are counted differently by every vendor, so core counts are meaningless across architectures. An Apple GPU core is 128 ALUs, while an Intel one is, say, 8. 

    Seeing what they did with the 8-core GPU in the M1, the prospect of a 128-core Apple GPU is amazingly tantalizing - that's 16,384 unified shaders.
    Man, I’m not sure what to expect from that! Lol!
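    For reference, a quick back-of-the-envelope sketch of that arithmetic in Python, using the per-core ALU counts quoted above (the vendor "core" definitions here are the commenter's figures, not official specs):

    ALUS_PER_CORE = {"apple_gpu_core": 128, "intel_gpu_eu": 8}  # figures quoted above

    def total_alus(vendor_core: str, core_count: int) -> int:
        """Convert a marketing 'core' count into a comparable ALU/shader count."""
        return ALUS_PER_CORE[vendor_core] * core_count

    print(total_alus("apple_gpu_core", 8))    # M1's 8-core GPU  -> 1,024 ALUs
    print(total_alus("apple_gpu_core", 128))  # rumored 128-core -> 16,384 ALUs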

  • Reply 43 of 69
    canukstorm Posts: 2,732 member
    blastdoor said:
    ph382 said:
    rob53 said:
    Why stop at 32 cores. 

    I don't see even 32 happening for anything but the Mac Pro. I saw forum comments recently that games don't and can't use more than six cores. How much RAM (and heat) would you need to feed 32 cores?

    https://www.anandtech.com/show/15044/the-amd-ryzen-threadripper-3960x-and-3970x-review-24-and-32-cores-on-7nm

    The 32 core Threadripper 3970x has a TDP of 280 watts on a 7nm process. It has four DDR4 3200 RAM channels.

    Based on comparisons of the M1 to mobile Ryzen, I would expect an ASi 32 core SOC to have a TDP much lower than 280 watts. 

    I bet a 32 core ASi SOC on a 5nm process could fit within the thermal envelope of an iMac Pro. 
    It has 8, not 4, DDR4-3200 ECC RAM channels, and those are limited by the OEMs. Zen has supported 2TB of DDR4 RAM since Zen 2. And Threadripper is limited to 32 cores presently because they haven't moved to the Zen 4 5nm process with RDNA 3.0/CDNA 2.0-based solutions that are on a unified memory plane.

    Those 32 cores would be a 16/16 big/little split; combine that with their GPU and other co-processors and you have either a much larger SoC or very small cores.

    TR 3 arrives this January along with EPYC 3 "Milan" with 64 cores/128 threads. The next releases, as Lisa Su has stated and their ROCm software has shown, will be integrating Xilinx co-processors into the Zen 4/RDNA 3.0/CDNA 2.0-based solutions and beyond. 

    Both AMD and Xilinx have ARM architecture licenses and have been designing and producing ARM processors for years. Xilinx itself has an arsenal of ARM-based solutions.

    32 cores would only be in the Mac Pro. 8/8 cores in the iMac and 12/12 in the iMac Pro is pushing it.

    In 2022, Jim Keller's CPU designs from Intel hit the market. The upcoming Zen architecture designs will be announced in January 2021 at the virtual CES conference. AMD has already projected that by 2025 its hardware product sales will conservatively be over $22 billion. That's up from this year's just over $8 billion.

    Apple has zero interest in supporting anything beyond their Matrix of hardware options, and people who believe they want to be all solutions to all people don't understand, and never have understood, Apple's mission statement.

    A lot of the R&D in M1 is going into their IoT future products and Automobile products.
    "Apple has zero interest in supporting anything beyond their Matrix of hardware options" =>Obviously.  That's not exactly classified information.

    "A lot of the R&D in M1 is going into their IoT future products and Automobile products." => Apple let go pretty much their entire hardware engineers that were working on a supposed autonomous vehicle.
  • Reply 44 of 69
    killroy Posts: 281 member
    I sure hope the Mac Pros will have PCIe 4 slots. We use 32gig fiber cards in our editing systems.
  • Reply 45 of 69
    canukstorm Posts: 2,732 member
    Flytrap said:
    I presume this article is talking about 32 cores being on the same die. If true, that's good to hear. But I don't see why Apple couldn't put multiple physical packages with N cores each on the same system board. Moreover, Apple should do this as an expansion card for existing Intel Mac Pros so that Intel Mac Pros can run M1 apps natively.
    For anyone paying attention to Apple over the last few years... and for anyone paying attention at their most recent product redesigns... Apple has clearly declared war on end-user replaceable or upgradable modularity. Apple Silicon gives them the ability to make a giant leap towards turning the Mac into a personal computing appliance. They will get close, but, in my opinion, they will never really accomplish that goal because it will be impossible to turn what is currently a general computing operating system that people can run almost anything that they want on, into a locked-down firmware-like OS that can only run apps that Apple has deemed worthy to be allowed.

    Fewer and fewer Apple devices offer end-user upgradable storage, memory, CPU, GPU, etc. None of the latest Apple Silicon Macs offer any means for an end-user to upgrade or replace the CPU, Memory, Storage, GPU... or anything that was not originally ordered with the machine, for that matter. If you have an Intel Mac, you can install and run any recent version of macOS. On an Apple Silicon Mac, you can only install and run macOS Big Sur... macOS is likely to be the only end-user upgradable part of a Mac appliance in future.
    That won't go over well for Mac Pro buyers.
  • Reply 46 of 69
    MplsP Posts: 3,987 member
    It was generally expected and assumed that Apple would switch the Mac Pro over to Apple Silicon at some point. What I wonder is how they are going to handle graphics. Are they going to develop their own graphics processor in-house and eliminate any PCIe interface, and thus any support for 3rd-party graphics cards? Memory is a question, too. My understanding is that the unified memory they used with the M1 is integrated on the chip, which explains why it’s not upgradable. That may be fine for a MacBook, but that’s not going to cut it for your average Mac Pro customer.
  • Reply 47 of 69
    mattinoz Posts: 2,442 member
    dewme said:
    I presume this article is talking about 32 cores being on the same die. If true, that's good to hear. But I don't see why Apple couldn't put multiple physical packages with N cores each on the same system board. Moreover, Apple should do this as an expansion card for existing Intel Mac Pros so that Intel Mac Pros can run M1 apps natively.
    They could, but it would diminish some of the benefits of the M1's unified memory architecture. Yes, you'd still have more cores available for working on highly parallelizable problems, but once the parts of the work are chunked and doled out to different groups of cores in different packages you'd have to designate something, like a "master" package or CPU to reassemble (memory copy) all of the distributed work back to a central point for final processing - and it would have to fit into the memory space of the master.

    Keep in mind that the multiple CPUs on traditional multiple CPU motherboards all shared the same main memory. Additionally, each CPU or CPU core had its own memory caches and all of those caches had to be maintained in a coherent state because they were all referencing shared memory locations. If one CPU/core changed the value of a cached location in shared memory all caches in all CPUs/cores that also hold a reference to that main memory location in their caches have to be updated.  With the M1 Apple can optimize its cache coherency logic much easier within its unified memory architecture than it can with shared off-package memory. If Apple brings in multiple packages each with multiple cores to work on shared problems, how many of these traditional memory management problems come back into the fray? I ask this only because a lot of the architectural benefits that Apple achieves with the M1 are due to tight-coupling between CPU cores, tight-coupling between CPU and GPU cores, and having a unified memory architecture very close to all of the cores. Moving to a multi-package model brings loose coupling back into the picture, which will most definitely limit the overall speedup that is attainable. There are alternate ways to get something in-between tight and loose coupling, as AMD has done, but you want to keep everything as close as possible.

    Two things to keep in mind with all von Neumann-based architectures, which include all current Macs and PCs: 1) copying memory around always sucks from a performance standpoint, and 2) even small amounts of non-parallelizable code have a serious negative impact on the attainable speedup that can be seen with any number of processing units/cores (Amdahl's Law). This doesn't mean that speedups aren't possible; it just means that unless your code is well above 90% parallelizable you're going to max out on performance very quickly regardless of the number of cores. For example, in the 90% case, with 8 cores you'd get a speedup of 5 over a single core, but with 32 cores you're only getting a speedup of 8 over a single core, even though you've quadrupled the number of cores. To double the speedup in the 90% case you'd have to go from 8 cores to around 512 cores. 

    The idea of an M1 coprocessor board for the Mac Pro is not a bad idea, and it's a strategy that's been used since the dawn of personal computers. One of DEC's first PC-compatible computers, the Rainbow 100, had an 8088 and a Z80 processor so it could run both MS-DOS and CP/M (and other) operating systems and applications. I've had PCs, Sun workstations, and industrial PLCs that had plug-in coprocessor boards for doing specialized processing semi-independently of the host CPU. The challenge is always how to integrate the coprocessor with the host system so the co-processor and the host system cooperate in a high-quality and seamless manner. Looking across the tight-coupling to loose-coupling spectrum, these plug-in board solutions tend to be more on the looser side of the spectrum unless the host system (and all of its busses) were purposely designed with the coprocessor in mind, like the relationship between the old 8086/8088 general-purpose CPU and the 8087 numerical coprocessor.  

      
    Could the Unified Memory be used as a bridge between 2 or more chips?

    Could they be planning upgraded systems as multiples of the same chip with a Memory Controller/PCIe hub chip between them?
    That could let them use the 6-core bin of chips to make 12-core clusters, with options for 14 and 16 cores at a premium. Mirror that again to get up to a 32-core beast.
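    A tiny sketch of the binning arithmetic being suggested here (the per-die core counts are the commenter's hypothetical bins, not known Apple parts):

    # Hypothetical per-die CPU core bins paired up into clusters, per the idea above.
    bins = [6, 7, 8]                   # cores that survive binning on one die
    two_die = [2 * b for b in bins]    # [12, 14, 16] -> the 12/14/16-core options
    four_die = [4 * max(bins)]         # [32] -> "mirror again" for a 32-core part
    print(two_die, four_die)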

  • Reply 48 of 69
    dewme Posts: 5,654 member
    mattinoz said:
    dewme said:
    I presume this article is talking about 32 cores being on the same die. If true, that's good to hear. But I don't see why Apple couldn't put multiple physical packages with N cores each on the same system board. Moreover, Apple should do this as an expansion card for existing Intel Mac Pros so that Intel Mac Pros can run M1 apps natively.
    They could, but it would diminish some of the benefits of the M1's unified memory architecture. Yes, you'd still have more cores available for working on highly parallelizable problems, but once the parts of the work are chunked and doled out to different groups of cores in different packages you'd have to designate something, like a "master" package or CPU to reassemble (memory copy) all of the distributed work back to a central point for final processing - and it would have to fit into the memory space of the master.

    Keep in mind that the multiple CPUs on traditional multiple CPU motherboards all shared the same main memory. Additionally, each CPU or CPU core had its own memory caches and all of those caches had to be maintained in a coherent state because they were all referencing shared memory locations. If one CPU/core changed the value of a cached location in shared memory all caches in all CPUs/cores that also hold a reference to that main memory location in their caches have to be updated.  With the M1 Apple can optimize its cache coherency logic much easier within its unified memory architecture than it can with shared off-package memory. If Apple brings in multiple packages each with multiple cores to work on shared problems, how many of these traditional memory management problems come back into the fray? I ask this only because a lot of the architectural benefits that Apple achieves with the M1 are due to tight-coupling between CPU cores, tight-coupling between CPU and GPU cores, and having a unified memory architecture very close to all of the cores. Moving to a multi-package model brings loose coupling back into the picture, which will most definitely limit the overall speedup that is attainable. There are alternate ways to get something in-between tight and loose coupling, as AMD has done, but you want to keep everything as close as possible.

    Two things to keep in mind with all von Neumann-based architectures, which include all current Macs and PCs: 1) copying memory around always sucks from a performance standpoint, and 2) even small amounts of non-parallelizable code have a serious negative impact on the attainable speedup that can be seen with any number of processing units/cores (Amdahl's Law). This doesn't mean that speedups aren't possible; it just means that unless your code is well above 90% parallelizable you're going to max out on performance very quickly regardless of the number of cores. For example, in the 90% case, with 8 cores you'd get a speedup of 5 over a single core, but with 32 cores you're only getting a speedup of 8 over a single core, even though you've quadrupled the number of cores. To double the speedup in the 90% case you'd have to go from 8 cores to around 512 cores. 

    The idea of an M1 coprocessor board for the Mac Pro is not a bad idea, and it's a strategy that's been used since the dawn of personal computers. One of DEC's first PC-compatible computers, the Rainbow 100, had an 8088 and a Z80 processor so it could run both MS-DOS and CP/M (and other) operating systems and applications. I've had PCs, Sun workstations, and industrial PLCs that had plug-in coprocessor boards for doing specialized processing semi-independently of the host CPU. The challenge is always how to integrate the coprocessor with the host system so the co-processor and the host system cooperate in a high-quality and seamless manner. Looking across the tight-coupling to loose-coupling spectrum, these plug-in board solutions tend to be more on the looser side of the spectrum unless the host system (and all of its busses) were purposely designed with the coprocessor in mind, like the relationship between the old 8086/8088 general-purpose CPU and the 8087 numerical coprocessor.  

      
    Could the Unified Memory be used as a bridge between 2 or more chips?

    Could they be planning upgraded systems as multiples of the same chip with a Memory Controller/PCIe hub chip between them?
    That could let them use the 6-core bin of chips to make 12-core clusters, with options for 14 and 16 cores at a premium. Mirror that again to get up to a 32-core beast.

    Based on the descriptions Apple has published, the unified memory pool is in the SoC package and is directly addressable by the CPUs and GPUs to avoid memory copying. This would seemingly make the unified memory address space local to the SoC. If a portion of this memory were shared with another SoC that has its own memory address space, this would seemingly introduce cache-coherency-style requirements into the architectural model. 

    But there’s nothing preventing Apple from defining a range of addresses in unified memory that serve as I/O ports, so to speak, possibly with DMA semantics, to allow high-speed data interchange between SoCs. This would be more like a Thunderbolt bus interchange and less like CPUs on the same SoC sharing access to the same memory address space. 

    If you look at the Mac in a slightly broader sense it is really a system of systems. There’s no doubt that Apple could leverage its Apple Silicon and M-series SoCs in particular as building blocks to compose an Apple SoC powered system of systems in a form factor like we’ve never seen in the markets that Macs currently target. We’re barely past the starting line on this journey. 
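    As a loose software-level analogy for that "shared window for data interchange" idea (and only an analogy - not Apple's actual mechanism), two processes can trade data through one shared region instead of copying a buffer over a pipe or socket. A minimal Python sketch:

    from multiprocessing import Process, shared_memory

    def producer(region_name: str) -> None:
        shm = shared_memory.SharedMemory(name=region_name)  # attach to the existing region
        shm.buf[:5] = b"hello"                               # write directly into shared space
        shm.close()

    if __name__ == "__main__":
        shm = shared_memory.SharedMemory(create=True, size=16)
        worker = Process(target=producer, args=(shm.name,))
        worker.start()
        worker.join()
        print(bytes(shm.buf[:5]))  # b'hello' - read back without an explicit copy over a pipe
        shm.close()
        shm.unlink()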
  • Reply 49 of 69
    zimmie Posts: 651 member
    dewme said:
    mattinoz said:
    dewme said:
    I presume this article is talking about 32 cores being on the same die. If true, that's good to hear. But I don't see why Apple couldn't put multiple physical packages with N cores each on the same system board. Moreover, Apple should do this as an expansion card for existing Intel Mac Pros so that Intel Mac Pros can run M1 apps natively.
    They could, but it would diminish some of the benefits of the M1's unified memory architecture. Yes, you'd still have more cores available for working on highly parallelizable problems, but once the parts of the work are chunked and doled out to different groups of cores in different packages you'd have to designate something, like a "master" package or CPU to reassemble (memory copy) all of the distributed work back to a central point for final processing - and it would have to fit into the memory space of the master.

    Keep in mind that the multiple CPUs on traditional multiple CPU motherboards all shared the same main memory. Additionally, each CPU or CPU core had its own memory caches and all of those caches had to be maintained in a coherent state because they were all referencing shared memory locations. If one CPU/core changed the value of a cached location in shared memory all caches in all CPUs/cores that also hold a reference to that main memory location in their caches have to be updated.  With the M1 Apple can optimize its cache coherency logic much easier within its unified memory architecture than it can with shared off-package memory. If Apple brings in multiple packages each with multiple cores to work on shared problems, how many of these traditional memory management problems come back into the fray? I ask this only because a lot of the architectural benefits that Apple achieves with the M1 are due to tight-coupling between CPU cores, tight-coupling between CPU and GPU cores, and having a unified memory architecture very close to all of the cores. Moving to a multi-package model brings loose coupling back into the picture, which will most definitely limit the overall speedup that is attainable. There are alternate ways to get something in-between tight and loose coupling, as AMD has done, but you want to keep everything as close as possible.

    Two things to keep in mind with all von Neumann-based architectures, which include all current Macs and PCs: 1) copying memory around always sucks from a performance standpoint, and 2) even small amounts of non-parallelizable code have a serious negative impact on the attainable speedup that can be seen with any number of processing units/cores (Amdahl's Law). This doesn't mean that speedups aren't possible; it just means that unless your code is well above 90% parallelizable you're going to max out on performance very quickly regardless of the number of cores. For example, in the 90% case, with 8 cores you'd get a speedup of 5 over a single core, but with 32 cores you're only getting a speedup of 8 over a single core, even though you've quadrupled the number of cores. To double the speedup in the 90% case you'd have to go from 8 cores to around 512 cores. 

    The idea of an M1 coprocessor board for the Mac Pro is not a bad idea, and it's a strategy that's been used since the dawn of personal computers. One of DEC's first PC-compatible computers, the Rainbow 100, had an 8088 and a Z80 processor so it could run both MS-DOS and CP/M (and other) operating systems and applications. I've had PCs, Sun workstations, and industrial PLCs that had plug-in coprocessor boards for doing specialized processing semi-independently of the host CPU. The challenge is always how to integrate the coprocessor with the host system so the co-processor and the host system cooperate in a high-quality and seamless manner. Looking across the tight-coupling to loose-coupling spectrum, these plug-in board solutions tend to be more on the looser side of the spectrum unless the host system (and all of its busses) were purposely designed with the coprocessor in mind, like the relationship between the old 8086/8088 general-purpose CPU and the 8087 numerical coprocessor.  

      
    Could the Unified Memory be used as a bridge between 2 or more chips?

    Could they be planning upgraded systems as multiples of the same chip with a Memory Controller/PCIe hub chip between them?
    That could let them use the 6-core bin of chips to make 12-core clusters, with options for 14 and 16 cores at a premium. Mirror that again to get up to a 32-core beast.

    Based on the descriptions Apple has published, the unified memory pool is in the SoC package and is directly addressable by the CPUs and GPUs to avoid memory copying. This would seemingly make the unified memory address space local to the SoC. If a portion of this memory were shared with another SoC that has its own memory address space, this would seemingly introduce cache-coherency-style requirements into the architectural model. 

    But there’s nothing preventing Apple from defining a range of addresses in unified memory that serve as I/O ports, so to speak, possibly with DMA semantics, to allow high-speed data interchange between SoCs. This would be more like a Thunderbolt bus interchange and less like CPUs on the same SoC sharing access to the same memory address space. 

    If you look at the Mac in a slightly broader sense it is really a system of systems. There’s no doubt that Apple could leverage its Apple Silicon and M-series SoCs in particular as building blocks to compose an Apple SoC powered system of systems in a form factor like we’ve never seen in the markets that Macs currently target. We’re barely past the starting line on this journey. 
    There’s nothing special about the memory being on-package. The special part is that the CPU cores and GPU cores share a memory controller and an address space. The chiplet packaging AMD uses also involves a big IO die in the middle with the processor dies off to the sides. The IO die handles memory management, peripherals like PCIe, and so on. Apple can do the same thing relatively easily, and all cores connected to the IO die—whether CPU or GPU—would work with the same pool of memory.

    The M1’s on-package RAM is LPDDR4X. That might be a challenge to extend to slotted memory, so they could just switch it to normal DDR4. Easy enough. There’s nothing intrinsic to the M1’s design which precludes off-package RAM (such as in the 16” MBP) or slotted RAM.

    I expect most of the four-Thunderbolt machines to go ARM next. 4TB3 13” MBP, 16” MBP, a high-end Mac Mini, and the small iMac could all get the same processor differentiated by cooling solution. They’ll have off-package (but still soldered) RAM.
  • Reply 50 of 69
    jdb8167 Posts: 626 member
    killroy said:
    I sure hope the Mac Pros will have PCIe 4 slots. We use 32gig fiber cards in our editing systems.
    Since the M1 already uses PCIe 4, it seems like any Mac with card slots will necessarily support PCIe 4.
  • Reply 51 of 69
    Slightly off topic, but desperate for some advice. Much appreciated if you could share your opinion on the following: for roughly the same price (..ish) I can pick up a mini 5th gen or an iPad 8th gen - which would you get? The user is my 9-year-old, who will do the usual - homework, watching movies, playing Fortnite, etc. - but is also getting into some programming-for-kids stuff. Appreciate your opinion. Spence
  • Reply 52 of 69
    razorpit Posts: 1,796 member
    Samsonikk said:
    razorpit said:
    The iMac will come much sooner. It makes no sense for Apple to wait another year and release them at the end of 2021. 
    More likely, it’ll be released before Q3.
    Wondering if they are working on Intel emulators. There are a lot of us out here who still need to run the 64-bit version of Windows 10 Pro for specific reasons and don't want to lose that ability. Maybe these 'higher-end' machines will get that function first? 
    Apple says Microsoft is responsible for that.
    Apple says a lot of stuff. The interpretation of what they say is the key. Hopefully they are working with Microsoft on this one.
  • Reply 53 of 69
    SpenceNZ said:
    Slightly off topic, but desperate for some advice. Much appreciated if you could share your opinion on the following: for roughly the same price (..ish) I can pick up a mini 5th gen or an iPad 8th gen - which would you get? The user is my 9-year-old, who will do the usual - homework, watching movies, playing Fortnite, etc. - but is also getting into some programming-for-kids stuff. Appreciate your opinion. Spence
    My young son loves his iPad. He also uses an iMac for online school (bigger screen for class streaming) but never bothers with it after school is out. I would get an iPad with a keyboard for school. By the time he outgrows it you'll be looking for his 1st computer, and he'll likely have an iPhone! Good luck! $$$
  • Reply 54 of 69
    ph382 Posts: 43 member
    dewme said:
    even small amounts of non-parallelizable code have a serious negative impact on the attainable speedup that can be seen with any number of processing units/cores (Amdahl's Law). This doesn't mean that speedups aren't possible; it just means that unless your code is well above 90% parallelizable you're going to max out on performance very quickly regardless of the number of cores. For example, in the 90% case, with 8 cores you'd get a speedup of 5 over a single core

    Perhaps Apple sees a different playing field. My Activity Monitor shows 3020 threads and 794 processes. I have no idea what those all are doing!  Maybe 12 of 16 cores can handle that load, and leave 4 to be purely devoted to my application of the moment.
  • Reply 55 of 69
    MacPro said:
    rob53 said:
    Why stop at 32 cores. The current TOP500 supercomputer is comprised of a ton of Fujitsu A64FX 48-compute-core CPUs based on ARM v8.2-A. Cray is also working on changing to the ARM architecture. The current implementation is only running at 2.2 GHz using a 7nm CMOS FinFET design. This is on one chip! (ref: https://en.wikipedia.org/wiki/Fujitsu_A64FX, also https://www.fujitsu.com/downloads/SUPER/a64fx/a64fx_datasheet.pdf)

    Apple is already fabricating at 5nm and there's no reason why they couldn't build something similar, although I expect it to be a larger SoC than the current M1. The Fujitsu CPU appears to be more or less a standard size package, although it's only the CPU. (ref: https://www.anandtech.com/show/15885/hpc-systems-special-offer-two-a64fx-nodes-in-a-2u-for-40k)

    Apple isn't the only major computer company going ARM. It's good to see Apple finally expand ARM to the Mac line. Everyone used to seeing performance charts with a gentle curve is going to be amazed at how steep the jumps are once Apple really gets going. I ordered an M1 MBA simply because I've ordered version 1 of several Apple Macs before. The crazy thing is this entry-level Mac is faster than my current iMac. Talk about a steep performance curve!

    Note: I keep confusing ARM with AMD so previously I might have commented about the Fugaku supercomputer being AMD-based. If I did, my mistake. 
    Cray will be using Macs to design their computers soon ;)
    You mean HP will be using Macs?
    Hewlett Packard Enterprise to buy supercomputer maker Cray in $1.30 billion deal | Reuters
  • Reply 56 of 69
    blastdoor said:
    ph382 said:
    rob53 said:
    Why stop at 32 cores. 

    I don't see even 32 happening for anything but the Mac Pro. I saw forum comments recently that games don't and can't use more than six cores. How much RAM (and heat) would you need to feed 32 cores?

    https://www.anandtech.com/show/15044/the-amd-ryzen-threadripper-3960x-and-3970x-review-24-and-32-cores-on-7nm

    The 32 core Threadripper 3970x has a TDP of 280 watts on a 7nm process. It has four DDR4 3200 RAM channels.

    Based on comparisons of the M1 to mobile Ryzen, I would expect an ASi 32 core SOC to have a TDP much lower than 280 watts. 

    I bet a 32 core ASi SOC on a 5nm process could fit within the thermal envelope of an iMac Pro. 
    It has 8, not 4, DDR4-3200 ECC RAM channels, and those are limited by the OEMs. Zen has supported 2TB of DDR4 RAM since Zen 2. And Threadripper is limited to 32 cores presently because they haven't moved to the Zen 4 5nm process with RDNA 3.0/CDNA 2.0-based solutions that are on a unified memory plane.

    Those 32 cores would be a 16/16 big/little split; combine that with their GPU and other co-processors and you have either a much larger SoC or very small cores.

    TR 3 arrives this January along with EPYC 3 "Milan" with 64 cores/128 threads. The next releases, as Lisa Su has stated and their ROCm software has shown, will be integrating Xilinx co-processors into the Zen 4/RDNA 3.0/CDNA 2.0-based solutions and beyond. 

    Both AMD and Xilinx have ARM architecture licenses and have been designing and producing ARM processors for years. Xilinx itself has an arsenal of ARM-based solutions.

    32 cores would only be in the Mac Pro. 8/8 cores in the iMac and 12/12 in the iMac Pro is pushing it.

    In 2022, Jim Keller's CPU designs from Intel hit the market. The upcoming Zen architecture designs will be announced in January 2021 at the virtual CES conference. AMD has already projected that by 2025 its hardware product sales will conservatively be over $22 billion. That's up from this year's just over $8 billion.

    Apple has zero interest in supporting anything beyond their Matrix of hardware options, and people who believe they want to be all solutions to all people don't understand, and never have understood, Apple's mission statement.

    A lot of the R&D in M1 is going into their IoT future products and Automobile products.
    Threadripper isn't limited to 32 cores, but to 64. 
    Ryzen™ Threadripper™ 3990X | Desktop PC Processor | AMD
  • Reply 57 of 69
    blastdoor Posts: 3,520 member
    zimmie said:
    There’s nothing special about the memory being on-package.
    I wonder if that’s totally true....

    anandtech’s review seems to show no advantage in terms of latency or bandwidth relative to other PCs (unless I’ve misinterpreted). So that seems to support your statement.

    But might there be an advantage in terms of watts?
  • Reply 58 of 69
    blastdoor said:
    zimmie said:
    There’s nothing special about the memory being on-package.
    I wonder if that’s totally true....

    anandtech’s review seems to show no advantage in terms of latency or bandwidth relative to other PCs (unless I’ve misinterpreted). So that seems to support your statement.

    But might there be an advantage in terms of watts?
    I suspect that the price is also affected (downwardly).
  • Reply 59 of 69
    dewme Posts: 5,654 member
    ph382 said:
    dewme said:
    even small amounts of non-parallelizable code have a serious negative impact on the attainable speedup that can be seen with any number of processing units/cores (Amdahl's Law). This doesn't mean that speedups aren't possible; it just means that unless your code is well above 90% parallelizable you're going to max out on performance very quickly regardless of the number of cores. For example, in the 90% case, with 8 cores you'd get a speedup of 5 over a single core

    Perhaps Apple sees a different playing field. My Activity Monitor shows 3020 threads and 794 processes. I have no idea what those all are doing!  Maybe 12 of 16 cores can handle that load, and leave 4 to be purely devoted to my application of the moment.
    You’re looking at a different aspect of the same thing, which is using more computational units, e.g., multiple cores and/or hyper-threading, to do more work in a given time period. Having more computational units reduces the amount of time individual threads are waiting to run. Instead of having 3000+ threads contending for 8 cores you have 3000+ threads contending for 32 cores. This is a very high-level resource-allocation perspective, because there are a lot of synchronization, scheduling, interrupt, interlocking, etc., issues that have to be managed, and these have a big impact on how efficiently the work that’s been doled out to individual threads gets processed. When we’re talking about speedup we’re looking at a specific processing job, so to speak (for example a benchmark), and how much faster it completes when run on multiple cores versus running on a single core.

    Keep in mind that each core can be assigned to run multiple threads, hundreds or thousands perhaps, to create what appears to be concurrency. But in fact, only one or two threads (in the case of hyper-threading) are actually running on each core at any given point in time. 
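    For reference, the Amdahl's Law figures quoted above can be reproduced with a few lines of Python (a minimal sketch, assuming a fixed 90% parallel fraction):

    def amdahl_speedup(p: float, n: int) -> float:
        """Speedup over one core for a workload with parallel fraction p run on n cores."""
        return 1.0 / ((1.0 - p) + p / n)

    for cores in (8, 32, 512):
        print(cores, round(amdahl_speedup(0.9, cores), 1))
    # 8   -> 4.7  (the "speedup of 5" above)
    # 32  -> 7.8  (the "speedup of 8" above)
    # 512 -> 9.8  (approaching the 1/(1-p) = 10x ceiling)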
  • Reply 60 of 69
    danvm Posts: 1,465 member
    Flytrap said:
    I presume this article is talking about 32 cores being on the same die. If true, that's good to hear. But I don't see why Apple couldn't put multiple physical packages with N cores each on the same system board. Moreover, Apple should do this as an expansion card for existing Intel Mac Pros so that Intel Mac Pros can run M1 apps natively.
    For anyone paying attention to Apple over the last few years... and for anyone paying attention at their most recent product redesigns... Apple has clearly declared war on end-user replaceable or upgradable modularity. Apple Silicon gives them the ability to make a giant leap towards turning the Mac into a personal computing appliance. They will get close, but, in my opinion, they will never really accomplish that goal because it will be impossible to turn what is currently a general computing operating system that people can run almost anything that they want on, into a locked-down firmware-like OS that can only run apps that Apple has deemed worthy to be allowed.

    Fewer and fewer Apple devices offer end-user upgradable storage, memory, CPU, GPU, etc. None of the latest Apple Silicon Macs offer any means for an end-user to upgrade or replace the CPU, Memory, Storage, GPU... or anything that was not originally ordered with the machine, for that matter. If you have an Intel Mac, you can install and run any recent version of macOS. On an Apple Silicon Mac, you can only install and run macOS Big Sur... macOS is likely to be the only end-user upgradable part of a Mac appliance in future.
    If the market doesn't want upgradeable Macs, who are you to order Apple to do otherwise?
    How do you know whether the market wants upgradeable Macs when the Mac Pro is the only upgradable option available from Apple?  
    Second off, if the market wants a locked-down, firmware-like secure OS, who are you to order Apple to do otherwise? Do you oppose freedom of choice? Do you want everyone to be forced by law to follow the Android or Microsoft model?
    I don't see @Flytrap's comment as an order, but as an opinion that, in his POV, would be good for him and for customers. And I don't see how his POV hurts freedom of choice. For example, if Macs were as upgradable as ThinkPads, you would have the choice to upgrade or not. As of today, Macs limit freedom of choice, since you are forced not to upgrade. BTW, what's your opinion of the Mac Pro? Is that a device that opposes freedom of choice, since it's upgradable, or is it maybe less secure?

    And there is a benefit you miss with sealed devices such as Apple notebooks: on-site service. My customers have ThinkPads, and their X1 Carbons and P1s, which are just a little bit thicker than MBPs, have on-site service. With a MBP I have to go to an Apple Store for service. And if you live far from an Apple Store, you have to mail your device. In both cases you'll lose hours or days without your device. My customers just make a call, receive the part in 1-2 days, and an IBM representative installs it at the office. Don't you think that would be nice for Apple notebooks?
    Third off, why do you consider the current crop of Macs to be non-upgradeable since they have Thunderbolt ports that allow plenty of external hardware upgrades?
    If you read @Flytrap's comment, he included CPU and RAM in his upgrade list. As of today, there are no Thunderbolt CPU or RAM expansion options.  
    Fourth off, why aren't you ranting about the lack of upgradeable TVs? Or why aren't you ranting about the lack of TV repair shops? Nobody cares to upgrade or repair their TVs anymore. Who are you to say that people should care? TVs are commodities now. Nobody cares about what's inside them. Same with computers. Nobody cares to upgrade a CPU, despite your desire to do that.
    What if Apple customers cared about upgradable devices? Maybe we will never know until Apple releases upgradable devices and we see how customers receive or reject them.  