Why Apple uses integrated memory in Apple Silicon -- and why it's both good and bad


Comments

  • Reply 41 of 68
    Xed Posts: 3,228 member
    ITGUYINSD said:
    lam92103 said:
    So every single PC or computer manufacturer can use modular RAM. Including servers, workstations, data centers, super computers. 

    But somehow the Apple chips cannot and are trying to convince us that it is not just plain & simple greed??
    Where do I plug RAM into my iPad? iPhone?

    Yeah, not a greed thing. It’s an architecture thing. One line of extremely high performing chips for their product line is just smart. 
    Charging $200 for an 8GB RAM upgrade is not a "greed thing"?  Charging $200 for a 256GB SSD upgrade is not a "greed thing"?  Those are up to 10X the cost a PC user with modular upgrades pays.  It most certainly is part greed.  But then, Apple couldn't have made that $3T valuation, could they?
    You're not a finance guy are you?
    williamlondon, watto_cobra
     2Likes 0Dislikes 0Informatives
  • Reply 42 of 68
    ITGUYINSD Posts: 578 member
    Xed said:
    ITGUYINSD said:
    lam92103 said:
    So every single PC or computer manufacturer can use modular RAM. Including servers, workstations, data centers, super computers. 

    But somehow the Apple chips cannot and are trying to convince us that it is not just plain & simple greed??
    Where do I plug RAM into my iPad? iPhone?

    Yeah, not a greed thing. It’s an architecture thing. One line of extremely high performing chips for their product line is just smart. 
    Charging $200 for an 8GB RAM upgrade is not a "greed thing"?  Charging $200 for a 256GB SSD upgrade is not a "greed thing"?  Those are up to 10X the cost a PC user with modular upgrades pays.  It most certainly is part greed.  But then, Apple couldn't make that $3T valuation, could they?
    You're not a finance guy are you?
    I know when I'm being ripped off.  And, yes, I own lots of Apple gear.  But I'd never pay $200 for $20 worth of product.  Something about a fool and his money...
    muthuk_vanalingam
     1Like 0Dislikes 0Informatives
  • Reply 43 of 68
    williamlondon Posts: 1,523 member
    Xed said:
    ITGUYINSD said:
    lam92103 said:
    So every single PC or computer manufacturer can use modular RAM. Including servers, workstations, data centers, super computers. 

    But somehow the Apple chips cannot and are trying to convince us that it is not just plain & simple greed??
    Where do I plug RAM into my iPad? iPhone?

    Yeah, not a greed thing. It’s an architecture thing. One line of extremely high performing chips for their product line is just smart. 
    Charging $200 for an 8GB RAM upgrade is not a "greed thing"?  Charging $200 for a 256GB SSD upgrade is not a "greed thing"?  Those are up to 10X the cost a PC user with modular upgrades pays.  It most certainly is part greed.  But then, Apple couldn't make that $3T valuation, could they?
    You're not a finance guy are you?
    No, but he's very, very blockable (been on my list for years).
    watto_cobra
     1Like 0Dislikes 0Informatives
  • Reply 44 of 68
    tht Posts: 5,989 member
    sbdude said:
    Genuinely curious, why not include higher capacity memory modules if there's demand for it? I imagine the pin outs are the same for every capacity Apple uses, so why not include more than 192 GB for those tasks that demand greater amounts of memory? Or are M Series chips unable to address that much memory?
    12 GB is the largest LPDDR5 package in mass production. An LPDDR5 channel only supports 16 GB memory capacity (I think). An M2 Max has 8 channels of LPDDR5. 8 x 12 = 96. An M2 Ultra is 2 M2 Max chips; therefore, 16 x 12 = 192 GB. Each of the "RAM packages" you see in the M2 Pro/Max/Ultra has 2 LPDDR5 channels going into it, and inside each RAM package are 2 sets of 12 GB LPDDR5 packages. That's why they look bigger than the RAM packages in the M2.

    Apple, for strategic reasons, has chosen to design around LPDDR memory. This architecture trades capacity for lower power and higher bandwidth. The regular "DDR" memory that is designed to go into DIMMs is slower and uses more power.

    In order for Apple to support more memory in the Max/Pro chips, they would have to design for a DDR5 memory architecture as well as the LPDDR5 one. They aren't going to do that for 2 or 3% of Mac sales. No OEM would.
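
    To spell out that capacity arithmetic, here is a rough sketch in Python. It is only a back-of-the-envelope illustration of the figures quoted above (12 GB packages, 8 channels on an M2 Max, 16 on an M2 Ultra), not official Apple specifications:

        # Back-of-the-envelope LPDDR5 capacity math, using the figures from the post above
        PACKAGE_GB = 12                              # largest LPDDR5 package in mass production
        CHANNELS = {"M2 Max": 8, "M2 Ultra": 16}     # Ultra = two Max dies, so twice the channels

        for chip, channels in CHANNELS.items():
            # one package per channel, so max capacity = channels x package size
            print(f"{chip}: {channels} channels x {PACKAGE_GB} GB = {channels * PACKAGE_GB} GB")

        # M2 Max: 8 channels x 12 GB = 96 GB
        # M2 Ultra: 16 channels x 12 GB = 192 GB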
    mfryd, tenthousandthings, watto_cobra
     2Likes 0Dislikes 1Informative
  • Reply 45 of 68
    Xed Posts: 3,228 member
    ITGUYINSD said:
    Xed said:
    ITGUYINSD said:
    lam92103 said:
    So every single PC or computer manufacturer can use modular RAM. Including servers, workstations, data centers, super computers. 

    But somehow the Apple chips cannot and are trying to convince us that it is not just plain & simple greed??
    Where do I plug RAM into my iPad? iPhone?

    Yeah, not a greed thing. It’s an architecture thing. One line of extremely high performing chips for their product line is just smart. 
    Charging $200 for an 8GB RAM upgrade is not a "greed thing"?  Charging $200 for a 256GB SSD upgrade is not a "greed thing"?  Those are up to 10X the cost a PC user with modular upgrades pays.  It most certainly is part greed.  But then, Apple couldn't make that $3T valuation, could they?
    You're not a finance guy are you?
    I know when I'm being ripped off.  And, yes, I own lots of Apple gear.  But I'd never pay $200 for $20 worth of product.  Something about a fool and his money...
    So you're saying that you get ripped off by Apple and yet willingly make purchases from them? That makes you sound pretty stupid.

    Personally, I go for the best value for my needs, which is why I bought a 16" MBP with 64 GiB of unified RAM, and not some low-rent Dell for $500.
    williamlondon, watto_cobra
     2Likes 0Dislikes 0Informatives
  • Reply 46 of 68
    mjtomlin Posts: 2,699 member
    ITGUYINSD said:
    lam92103 said:
    So every single PC or computer manufacturer can use modular RAM. Including servers, workstations, data centers, super computers. 

    But somehow the Apple chips cannot and are trying to convince us that it is not just plain & simple greed??
    Where do I plug RAM into my iPad? iPhone?

    Yeah, not a greed thing. It’s an architecture thing. One line of extremely high performing chips for their product line is just smart. 
    Charging $200 for an 8GB RAM upgrade is not a "greed thing"?  Charging $200 for a 256GB SSD upgrade is not a "greed thing"?  Those are up to 10X the cost a PC user with modular upgrades pays.  It most certainly is part greed.  But then, Apple couldn't make that $3T valuation, could they?



    The SSD prices are a bit steep, true, but you can simply plug in more storage.

    The RAM "upgrade" prices are a different story because it's not simply a case of "just plug in more RAM chips". Prices scale with volume: low volume, higher cost; high volume, lower cost. Apple produces X amount of M1-8GB and Y amount of M1-16GB SoCs. DRAM prices are dirt cheap because those components are produced in the hundreds of millions; Apple may only produce 10 million M1-16GB parts. That's the cost of customization versus commoditization.
    williamlondon, watto_cobra
     2Likes 0Dislikes 0Informatives
  • Reply 47 of 68
    22july2013 Posts: 3,836 member
    danox said:
    The Mac Pro could be made to allow the use of PCI-based RAM as 2nd-tier memory. Then the SoC RAM would act as a rather huge cache. Alternatively, code could be programmed to perform certain less time-sensitive operations in that 2nd-tier RAM. This would then give users the opportunity to upgrade their RAM with aftermarket hardware.
    Why? Mac computers are not for you. Just buy a PC.
    Anyone who tries to force Apple to sell things that they personally want is actually an Apple fanboy. 
    watto_cobra
     1Like 0Dislikes 0Informatives
  • Reply 48 of 68
    tht Posts: 5,989 member
    mjtomlin said:
    melgross said:
    Ok, so the writer gets it wrong, as so many others have when it comes to the M series RAM packaging. One would think that this simple thing would be well understood by now. So let me make it very clear - the RAM is NOT on the chip. It is NOT “in the CPU itself”. As we should all know by now, it’s in two packages soldered to the substrate, which is the small board that the SoC is itself soldered to. The lines from Apple’s fabric, which everything on the chip is connected with, extend to that substrate, to the RAM chips. Therefore, the RAM chips are separate from the SoC, and certainly not in the CPU itself. As we also know, Apple offers several different levels of RAM for each M series they sell. That means that there is no limit to their ability to decide how much RAM they can offer, up to the number of memory lines that can be brought out. This is no different from any traditional computer. Every CPU and memory controller has a limit as to how much RAM can be used. So, it seems to me that Apple could, if it wanted to, have sockets for those RAM packages, which add no latency, and would allow exchangeable RAM packages. Apple would just have to extend the maximum number of memory lines out to the socket. How many would get used would depend on the amount of RAM in the package. That’s nothing new. That’s how it’s done. Yes, under that scheme you would have to remove a smaller RAM package when getting a larger one, but that's also normal. The iMac had limited RAM slots and we used to do that all the time. Apple could also add an extra two sockets, in addition to the RAM that comes with the machine. So possibly there would be two packages soldered to the substrate, and two more sockets for RAM expansion. Remember that Apple sometimes does something a specific way, not because that’s the way it has to be done, but because they decided that this was the way they were going to do it. We don’t know where Apple is going with this in the future. It’s possible that the M2, which is really just a bump from the M1, is something to fill in the time while we’re waiting for the M3, which with the 3nm process it’s being built on, is expected to be more than just another bump in performance. Perhaps an extended RAM capability is part of that.
    Yes. It is common knowledge that RAM is not part of the actual SoC; it's on-package with the SoC. People use the term "SoC" to describe the whole part ("M1", "M2"), which includes the RAM.

    Being able to control how much RAM is installed allows Apple to guarantee that all memory channels are filled and being utilized, maximizing performance.
    The article says: "Apple's M1 and M2 ARM-based chip designs are similar. They are essentially a SoC design that integrates CPUs, GPUs, main RAM, and other components into a single chip.".

    That ("... into a single chip") is clearly incorrect for RAM. You may be able to squeak by with correct information by saying "main RAM, and other components into an SoC", since most of the time when people say "SoC" it's really shorthand for "SoC package".

    On the whole, mass media articles do a very poor job of delineating a chip from a package. It's not uncommon to read someone saying that Apple silicon machines have the RAM in the "chip". So, it does behoove us to get the media to make this clearer.
    watto_cobra
     1Like 0Dislikes 0Informatives
  • Reply 49 of 68
    killroy Posts: 294 member
    auxio said:
    DuhSesame said:

    The biggest strength from wintel is they’re standardized, but for every standard, they’re limiting themselves.  Intel does want something like Apple’s UMA, but imagine building a PC like that.  No customer will be happy.
    The biggest strength from Wintel is that they maintain backwards compatibility for a very long time (software written for Windows 30 years ago often still runs). This is why enterprise loves them, but it's also what keeps the platform from being used anywhere but traditional PCs/laptops/servers. They're basically the Big Blue of this generation.

    You do know that's one of the reasons PCs running that old software get hacked. People get complacent and won't upgrade or patch that software when it should've been dropped in the first place.
    watto_cobra
     1Like 0Dislikes 0Informatives
  • Reply 50 of 68
    killroy Posts: 294 member
    mjtomlin said:
    sbdude said:
    Genuinely curious, why not include higher capacity memory modules if there's demand for it? I imagine the pin outs are the same for every capacity Apple uses, so why not include more than 192 GB for those tasks that demand greater amounts of memory? Or are M Series chips unable to address that much memory?
    Are you sure they don't already use the highest density memory chips (considering size constraints) on those packages!? They need 8 (24GB) chips to achieve that 800GB/s bandwidth (1 chip per 100GB/s controller). Another thing to consider is the amount of power drawn by the increase in memory. Having that much power draw in such a relatively small area might be limiting.

    With the M2 Max topping out at 96GB (4x 24GB), I'm thinking the 24GB chips they're currently using are probably the highest capacity chips they could fit in the given area.

    The Mac Pro tops out at 192 GB.
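
    Spelled out, the arithmetic in those two posts looks like this. It is a rough sketch only, using the per-package capacity and per-controller bandwidth quoted above, not official Apple figures:

        # Capacity and bandwidth scaling as described above: one 24 GB package per ~100 GB/s controller
        PACKAGE_GB = 24
        CONTROLLER_GBS = 100

        def totals(packages):
            # both capacity and bandwidth scale linearly with the number of packages
            return packages * PACKAGE_GB, packages * CONTROLLER_GBS

        for name, packages in [("M2 Max", 4), ("M2 Ultra / Mac Pro", 8)]:
            capacity, bandwidth = totals(packages)
            print(f"{name}: {packages} x {PACKAGE_GB} GB = {capacity} GB at ~{bandwidth} GB/s")

        # M2 Max: 4 x 24 GB = 96 GB at ~400 GB/s
        # M2 Ultra / Mac Pro: 8 x 24 GB = 192 GB at ~800 GB/s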
    watto_cobra
     1Like 0Dislikes 0Informatives
  • Reply 51 of 68
    chasm Posts: 3,755 member
    Because there is a small but monied market that needs to work with large datasets in RAM, computers that can work with massive amounts of RAM are needed. Weather modelers need all that AND super-powerful GPUs. They pay the kind of prices that would make 98 percent of Apple’s audience faint just hearing them, but it is a very worthwhile investment to those who need it.

    I personally think Apple should have continued to offer the Mac Pro, only in two configurations: Intel and Mx. The Mx one is RAM-limited and graphics-limited by the constraints of their SoC decisions, but has PCI options (limited ones, but they are there) that no other Mac has at all. The Intel version could have played to the strengths of the Intel designs: massive RAM and massive GPU configurability.

    But here’s the thing: Apple doesn’t really want the business they would get from such a tiny, albeit wealthy, segment. That segment that would go for the theoretical Intel-version modern Mac Pro isn’t large enough to justify the cost of manufacturing/R&D/marketing/et al anymore. Mainstream market needs shift, and high-end machines are not as big a deal anymore because mainstream, everyday Macs and PCs are significantly faster than their users need already, particularly the Macs.

    This has happened before, and it will happen again. Apple is responding to where the mainstream audience is for the most part, and the Mac Studio covers the creative professionals at a fraction of the cost of the Mac Pro. Apple is catering to the consumer and creative-pro market, and that means shifting away from the science/engineering/CAD markets, which have always been dominated by PCs anyway. Apple goes where it thinks it can win and be profitable.
    mfryd, williamlondon, watto_cobra
     2Likes 0Dislikes 1Informative
  • Reply 52 of 68
    melgross Posts: 33,705 member
    mfryd said:
    melgross said:
    Ok, so the writer gets it wrong, as so many others have when it comes to the M series RAM packaging. One would think that this simple thing would be well understood by now. So let me make it very clear - the RAM is NOT on the chip. It is NOT “in the CPU itself”. As we should all know by now, it’s in two packages soldered to the substrate, which is the small board that the SoC is itself soldered to. The lines from Apple’s fabric, which everything on the chip is connected with, extend to that substrate, to the RAM chips. Therefore, the RAM chips are separate from the SoC, and certainly not in the CPU itself. As we also know, Apple offers several different levels of RAM for each M series they sell. That means that there is no limit to their ability to decide how much RAM they can offer, up to the number of memory lines that can be brought out. This is no different from any traditional computer. Every CPU and memory controller has a limit as to how much RAM can be used. So, it seems to me that Apple could, if it wanted to, have sockets for those RAM packages, which add no latency, and would allow exchangeable RAM packages. Apple would just have to extend the maximum number of memory lines out to the socket. How many would get used would depend on the amount of RAM in the package. That’s nothing new. That’s how it’s done. Yes, under that scheme you would have to remove a smaller RAM package when getting a larger one, but that's also normal. The iMac had limited RAM slots and we used to do that all the time. Apple could also add an extra two sockets, in addition to the RAM that comes with the machine. So possibly there would be two packages soldered to the substrate, and two more sockets for RAM expansion. Remember that Apple sometimes does something a specific way, not because that’s the way it has to be done, but because they decided that this was the way they were going to do it. We don’t know where Apple is going with this in the future. It’s possible that the M2, which is really just a bump from the M1, is something to fill in the time while we’re waiting for the M3, which with the 3nm process it’s being built on, is expected to be more than just another bump in performance. Perhaps an extended RAM capability is part of that.
    Actually, moving the memory further away from the CPU does add latency.  Every foot of wire adds about a nanosecond of delay.

    Then there is the issue of how many wires you run. When the memory is physically close to the CPU, you can run more wires from the memory to the CPU, which allows you to get data to/from the CPU faster. It's not practical to run a large number of wires to a socket that might be a foot or more of cable run away. That means you transfer less data in each clock cycle.

    Generally socketed memory is on an external bus.  This lets various peripherals directly access memory.  The bus arbitration also adds overhead.


    Traditional CPUs try to overcome these memory bottlenecks by using multiple levels of cache. This can provide a memory bandwidth performance boost for chunks of recently accessed memory. However, tasks that use more memory than will fit in the cache may not benefit from these techniques.

    Apple's "System on a Chip" design really does allow much higher memory bandwidth. Socketing the memory really would reduce performance.
    We’re talking about no change in distance. The socket can be exactly where the RAM packages are now. If extra sockets were added, there would be one on each side. We’re talking about 0.375”.

    The way they do it now is to have one package on each side of the SoC. When they double the packages, they put one slightly further down and the other slightly higher. We’re talking about fractions of an inch from the way they add packages now. There’s nothing really different here. Just substitute low-profile sockets for the soldered-on packages. Not talking about going to DIMMs. The same packages they use now, but with pins instead of solder connections.

    You’re talking about something entirely different, and it doesn’t apply. Sockets won’t change anything here.
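
    For a sense of scale on the latency point quoted above, here is a rough sketch using the ~1 ns per foot figure from that quote. A real PCB trace is somewhat slower, the 3-inch socket distance is purely an assumption for illustration, and this ignores the pin-count, signal-integrity, and bus-arbitration costs also mentioned:

        # Rough one-way propagation delay at ~1 ns per foot (the figure quoted above)
        NS_PER_FOOT = 1.0

        def delay_ns(inches):
            # convert inches to feet, then apply the per-foot delay
            return (inches / 12.0) * NS_PER_FOOT

        for label, inches in [("extra on-package distance (~0.375 in)", 0.375),
                              ("hypothetical socket a few inches away", 3.0),
                              ("a foot of cable", 12.0)]:
            print(f"{label}: ~{delay_ns(inches):.3f} ns")

        # extra on-package distance (~0.375 in): ~0.031 ns
        # hypothetical socket a few inches away: ~0.250 ns
        # a foot of cable: ~1.000 ns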
    watto_cobra
     1Like 0Dislikes 0Informatives
  • Reply 53 of 68
    melgross Posts: 33,705 member
    JP234 said:
    melgross said:
    Ok, so the writer gets it wrong, as so many others have when it comes to the M series RAM packaging. One would think that this simple thing would be well understood by now. So let me make it very clear - the RAM is NOT on the chip. It is NOT “in the CPU itself”. As we should all know by now, it’s in two packages soldered to the substrate, which is the small board that the SoC is itself soldered to. The lines from Apple’s fabric, which everything on the chip is connected with, extend to that substrate, to the RAM chips. Therefore, the RAM chips are separate from the SoC, and certainly not in the CPU itself. As we also know, Apple offers several different levels of RAM for each M series they sell. That means that there is no limit to their ability to decide how much RAM they can offer, up to the number of memory lines that can be brought out. This is no different from any traditional computer. Every CPU and memory controller has a limit as to how much RAM can be used. So, it seems to me that Apple could, if it wanted to, have sockets for those RAM packages, which add no latency, and would allow exchangeable RAM packages. Apple would just have to extend the maximum number of memory lines out to the socket. How many would get used would depend on the amount of RAM in the package. That’s nothing new. That’s how it’s done. Yes, under that scheme you would have to remove a smaller RAM package when getting a larger one, but that's also normal. The iMac had limited RAM slots and we used to do that all the time. Apple could also add an extra two sockets, in addition to the RAM that comes with the machine. So possibly there would be two packages soldered to the substrate, and two more sockets for RAM expansion. Remember that Apple sometimes does something a specific way, not because that’s the way it has to be done, but because they decided that this was the way they were going to do it. We don’t know where Apple is going with this in the future. It’s possible that the M2, which is really just a bump from the M1, is something to fill in the time while we’re waiting for the M3, which with the 3nm process it’s being built on, is expected to be more than just another bump in performance. Perhaps an extended RAM capability is part of that.
    Brevity being the soul of wit, could you explain that in under 25 words? Because my ADHD won't let me get any further.
    I’ve tried that before, but then nobody pays attention. I’ll try for you.

    All I’m saying is that Apple could substitute sockets for the solder pads used now for memory. That’s the simplest way to say it. But that doesn’t say much, and some here don’t seem to understand what I’m saying, so they are making statements that are themselves wrong.
    watto_cobra
     1Like 0Dislikes 0Informatives
  • Reply 54 of 68
    melgross Posts: 33,705 member
    melgross said:
    Ok, so the writer gets it wrong, as so many others have when it comes to the M series RAM packaging. One would think that this simple thing would be well understood by now. So let me make it very clear - the RAM is NOT on the chip. It is NOT “in the CPU itself”. As we should all know by now, it’s in two packages soldered to the substrate, which is the small board that the SoC is itself soldered to. The lines from Apple’s fabric, which everything on the chip is connected with, extend to that substrate, to the RAM chips. Therefore, the RAM chips are separate from the SoC, and certainly not in the CPU itself. As we also know, Apple offers several different levels of RAM for each M series they sell. That means that there is no limit to their ability to decide how much RAM they can offer, up to the number of memory lines that can be brought out. This is no different from any traditional computer. Every CPU and memory controller has a limit as to how much RAM can be used. So, it seems to me that Apple could, if it wanted to, have sockets for those RAM packages, which add no latency, and would allow exchangeable RAM packages. Apple would just have to extend the maximum number of memory lines out to the socket. How many would get used would depend on the amount of RAM in the package. That’s nothing new. That’s how it’s done. Yes, under that scheme you would have to remove a smaller RAM package when getting a larger one, but that's also normal. The iMac had limited RAM slots and we used to do that all the time. Apple could also add an extra two sockets, in addition to the RAM that comes with the machine. So possibly there would be two packages soldered to the substrate, and two more sockets for RAM expansion. Remember that Apple sometimes does something a specific way, not because that’s the way it has to be done, but because they decided that this was the way they were going to do it. We don’t know where Apple is going with this in the future. It’s possible that the M2, which is really just a bump from the M1, is something to fill in the time while we’re waiting for the M3, which with the 3nm process it’s being built on, is expected to be more than just another bump in performance. Perhaps an extended RAM capability is part of that.
    Having the memory as close as possible to the CPU allows the memory to be clocked faster with less latency. It also minimizes other issues that slow down the system, like electromagnetic interference, signal-integrity degradation, and the complexity of routing so many signals on a board. Moreover, every time you turn a trace or pass through a via on a board, or add solder joints from connectors and connections to memory boards, you add more delay and more places where a contact might fail due to corrosion, cracked solder joints, flex from heating and cooling, etc.

    Routing on the package and die mounting allow the most density, faster speed, and greater reliability, and reduce cost since you do not incur additional cost for extra memory packages, large packages with many pins, or complicated board routing, and cooling of the CPU and memory is simpler due to their proximity. However, you give up flexibility for future expansion. Nevertheless, by the time you outgrow your system the CPU is so dated that you end up buying a new computer.
    Yes, you are right. But putting sockets on the substrate does not incur any of those problems. Nothing is actually being changed.
    edited June 2023
    watto_cobra
     1Like 0Dislikes 0Informatives
  • Reply 55 of 68
    tht Posts: 5,989 member
    melgross said:
    So, it seems to me that Apple could, if it wanted to, have sockets for those RAM packages, which add no latency, and would allow exchangeable RAM packages. Apple would just have to extend the maximum number of memory lines out to the socket. How many would get used would depend on the amount of RAM in the package.
    As long as Apple is using standard LPDDR memory, there aren't going to be any slotted RAM modules. The specification limits the maximum capacity to 8 GB or 16 GB per channel, depending on the LPDDR generation. It's basically 1 LPDDR package per channel in every generation, and there is no point in having a slotted module in the specification for a single package.

    Each LPDDR memory channel has a "port" along the edges of the silicon chip. The M1 had 8 channels of LPDDR4 that take up basically 2 edges of the chip. So, they are butting up against how many channels of LPDDR they can really add to the chips. They don't have much more room to add more channels per chip.

    If they could get a next-gen LPDDR version to support, say, 64 GB per channel, they could hit 1 TB of memory on the Ultra by stacking 4 x 16 GB LPDDR packages per channel, or 2 x 32 GB LPDDR packages per channel. It would be kind of like an extension of the specification that reuses existing LPDDR5X or LPDDR6 designs, but it's probably not possible. It's important to get it into the specification so memory vendors can mass-produce such parts for a wider variety of OEMs, instead of it being a custom order for Apple for what is just a commodity part.
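
    A quick sketch of that hypothetical in Python, using the 16-channel Ultra figure from the earlier post; the larger per-channel capacities are pure assumptions, not anything in the LPDDR specification today:

        # Hypothetical Ultra-class capacity if the LPDDR spec allowed more per channel
        ULTRA_CHANNELS = 16     # two Max dies at 8 channels each, per the earlier post

        for per_channel_gb in (12, 16, 32, 64):     # 12 GB reflects today's packages; the rest are assumptions
            print(f"{per_channel_gb} GB per channel -> {ULTRA_CHANNELS * per_channel_gb} GB total")

        # 12 GB per channel -> 192 GB total   (roughly today's M2 Ultra)
        # 64 GB per channel -> 1024 GB total  (~1 TB, the scenario described above)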
    watto_cobra
     1Like 0Dislikes 0Informatives
  • Reply 56 of 68
    mfryd Posts: 258 member
    melgross said:
    mfryd said:
    melgross said:
    Ok, so the writer gets it wrong, as so many others have when it comes to the M series RAM packaging. One would think that this simple thing would be well understood by now. So let me make it very clear - the RAM is NOT on the chip. It is NOT “in the CPU itself”. As we should all know by now, it’s in two packages soldered to the substrate, which is the small board that the SoC is itself soldered to. The lines from Apple’s fabric, which everything on the chip is connected with, extend to that substrate, to the RAM chips. Therefore, the RAM chips are separate from the SoC, and certainly not in the CPU itself. As we also know, Apple offers several different levels of RAM for each M series they sell. That means that there is no limit to their ability to decide how much RAM they can offer, up to the number of memory lines that can be brought out. This is no different from any traditional computer. Every CPU and memory controller has a limit as to how much RAM can be used. So, it seems to me that Apple could, if it wanted to, have sockets for those RAM packages, which add no latency, and would allow exchangeable RAM packages. Apple would just have to extend the maximum number of memory lines out to the socket. How many would get used would depend on the amount of RAM in the package. That’s nothing new. That’s how it’s done. Yes, under that scheme you would have to remove a smaller RAM package when getting a larger one, but that's also normal. The iMac had limited RAM slots and we used to do that all the time. Apple could also add an extra two sockets, in addition to the RAM that comes with the machine. So possibly there would be two packages soldered to the substrate, and two more sockets for RAM expansion. Remember that Apple sometimes does something a specific way, not because that’s the way it has to be done, but because they decided that this was the way they were going to do it. We don’t know where Apple is going with this in the future. It’s possible that the M2, which is really just a bump from the M1, is something to fill in the time while we’re waiting for the M3, which with the 3nm process it’s being built on, is expected to be more than just another bump in performance. Perhaps an extended RAM capability is part of that.
    Actually, moving the memory further away from the CPU does add latency.  Every foot of wire adds about a nanosecond of delay.

    Then there is the issue of how many wires you run. When the memory is physically close to the CPU, you can run more wires from the memory to the CPU, which allows you to get data to/from the CPU faster. It's not practical to run a large number of wires to a socket that might be a foot or more of cable run away. That means you transfer less data in each clock cycle.

    Generally socketed memory is on an external bus.  This lets various peripherals directly access memory.  The bus arbitration also adds overhead.


    Traditional CPUs try to overcome these memory bottlenecks by using multiple levels of cache. This can provide a memory bandwidth performance boost for chunks of recently accessed memory. However, tasks that use more memory than will fit in the cache may not benefit from these techniques.

    Apple's "System on a Chip" design really does allow much higher memory bandwidth. Socketing the memory really would reduce performance.
    We’re talking about no change in distance. The socket can be exactly where the RAM packages are now. If extra sockets were added, there would be one on each side. We’re talking about 0.375”.

    The way they do it now is to have one package on each side of the SoC. When they double the packages, they put one slightly further down and the other slightly higher. We’re talking about fractions of an inch from the way they add packages now. There’s nothing really different here. Just substitute low-profile sockets for the soldered-on packages. Not talking about going to DIMMs. The same packages they use now, but with pins instead of solder connections.

    You’re talking about something entirely different, and it doesn’t apply. Sockets won’t change anything here.
    How many pins would the socket need to accommodate the current chips?  How reliable are those sockets?  Are the current chips available in a socketable version?  
    watto_cobra
     1Like 0Dislikes 0Informatives
  • Reply 57 of 68
    dewme Posts: 6,067 member
    chasm said:
    Because there is a small but monied market that needs to work with large datasets in RAM, computers that can work with massive amounts of RAM are needed. Weather modelers need all that AND super-powerful GPUs. They pay the kind of prices that would make 98 percent of Apple’s audience faint just hearing them, but it is a very worthwhile investment to those who need it.

    I personally think Apple should have continued to offer the Mac Pro, only in two configurations: Intel and Mx. The Mx one is RAM-limited and graphics-limited by the constraints of their SoC decisions, but has PCI options (limited ones, but they are there) that no other Mac has at all. The Intel version could have played to the strengths of the Intel designs: massive RAM and massive GPU configurability.

    But here’s the thing: Apple doesn’t really want the business they would get from such a tiny, albeit wealthy, segment. That segment that would go for the theoretical Intel-version modern Mac Pro isn’t large enough to justify the cost of manufacturing/R&D/marketing/et al anymore. Mainstream market needs shift, and high-end machines are not as big a deal anymore because mainstream, everyday Macs and PCs are significantly faster than their users need already, particularly the Macs.

    This has happened before, and it will happen again. Apple is responding to where the mainstream audience is for the most part, and the Mac Studio covers the creative professionals at a fraction of the cost of the Mac Pro. Apple is catering to the consumer and creative-pro market, and that means shifting away from the science/engineering/CAD markets, which have always been dominated by PCs anyway. Apple goes where it thinks it can win and be profitable.
    This is a very level headed and reasonable comment. Thank you.

    It should not come as a surprise to anyone around here that Apple has been doing rather well doing things "their way." Apple has an unusually loyal customer base that allows them to gobble up the lion's share of profits in pretty much every product category they go after. They carefully pick and choose what to build and who to serve based on their own strengths and ability to serve those customers in spectacular ways. They can't serve everyone, not because they don't want to, but because they recognize where their strengths lie and where they have attained mastery, and that is clearly not every possible market, problem domain, or customer demographic. People still buy Windows PCs in great quantities, but not always in great qualities. 

    Apple is not afraid to step out of their comfort zone. They commit an extraordinary amount of resources to R&D and new product development. But before they pull the trigger on entering new markets or going after new (for them) product categories they make sure they've got something extraordinary that's going to make a big impact with customers, not just something to slap an Apple logo on.

    Apple's engineers are fully capable of playing at any level, as they've shown very clearly and most recently with the A-series and M-series SoCs. They know every nuance and architectural trick to coerce as much performance from the von Neumann architecture as possible. They fully understand that soldering down memory and storage chips or constraining M-series chips to using on-chip memory limits product extensibility. But Apple knows its customers very well and Apple knows how to play the big game for big wins. They've purposely chosen to intensely focus on those areas that deliver the greatest impact for the greatest number of customers to keep those customers delighted with their Apple purchases.

    It's never been a matter of Apple not being able to do any of the technical things that are done on any competing platform. They've simply chosen not to do some of those things because they are not afraid to say "no" to anything that's not hugely impactful or contributing to big wins. I'd bet that for every "helpful suggestion" that we see on this site for what Apple "should be doing" there were more than a few Apple engineers who already brought those points up with their leaders and were summarily dismissed with a solid "no." Apple has not forgotten how to say NO, and this policy seems to be paying off in a big way regardless of what we think about it. They're playing big with real money. We're playing small with no money. The results speak for themselves.
    mfryd, watto_cobra
     2Likes 0Dislikes 0Informatives
  • Reply 58 of 68
    mattinoz Posts: 2,667 member
    sbdude said:
    Genuinely curious, why not include higher capacity memory modules if there's demand for it? I imagine the pin outs are the same for every capacity Apple uses, so why not include more than 192 GB for those tasks that demand greater amounts of memory? Or are M Series chips unable to address that much memory?
    Is there demand for it?
    Would such a pool of memory be significantly faster to access than, say, a big pool of fast SSD storage on PCIe?

    If they have scope to add complexity to allow for a memory pool between packaged RAM and storage, would that complexity be best used creating that pool, or would it be better used to improve one or both of the other pools?

    All design is trade-off; we already know that by knocking out slotted RAM, Apple gets advantages in both processing memory and storage. Those improvements might have already closed the gap to the point where there just isn't room for a slotted RAM pool without making one of them worse.
    killroy, watto_cobra
     2Likes 0Dislikes 0Informatives
  • Reply 59 of 68
    mfryd Posts: 258 member
    melgross said:
    mfryd said:
    melgross said:
    Ok, so the writer gets it wrong, as so many others have when it comes to the M series RAM packaging. One would think that this simple thing would be well understood by now. So let me make it very clear - the RAM is NOT on the chip. It is NOT “in the CPU itself”. As we should all know by now, it’s in two packages soldered to the substrate, which is the small board that the SoC is itself soldered to. The lines from Apple’s fabric, which everything on the chip is connected with, extend to that substrate, to the RAM chips. Therefore, the RAM chips are separate from the SoC, and certainly not in the CPU itself. As we also know, Apple offers several different levels of RAM for each M series they sell. That means that there is no limit to their ability to decide how much RAM they can offer, up to the number of memory lines that can be brought out. This is no different from any traditional computer. Every CPU and memory controller has a limit as to how much RAM can be used. So, it seems to me that Apple could, if it wanted to, have sockets for those RAM packages, which add no latency, and would allow exchangeable RAM packages. Apple would just have to extend the maximum number of memory lines out to the socket. How many would get used would depend on the amount of RAM in the package. That’s nothing new. That’s how it’s done. Yes, under that scheme you would have to remove a smaller RAM package when getting a larger one, but that's also normal. The iMac had limited RAM slots and we used to do that all the time. Apple could also add an extra two sockets, in addition to the RAM that comes with the machine. So possibly there would be two packages soldered to the substrate, and two more sockets for RAM expansion. Remember that Apple sometimes does something a specific way, not because that’s the way it has to be done, but because they decided that this was the way they were going to do it. We don’t know where Apple is going with this in the future. It’s possible that the M2, which is really just a bump from the M1, is something to fill in the time while we’re waiting for the M3, which with the 3nm process it’s being built on, is expected to be more than just another bump in performance. Perhaps an extended RAM capability is part of that.
    Actually, moving the memory further away from the CPU does add latency.  Every foot of wire adds about a nanosecond of delay.

    Then there is the issue of how many wires you run. When the memory is physically close to the CPU, you can run more wires from the memory to the CPU, which allows you to get data to/from the CPU faster. It's not practical to run a large number of wires to a socket that might be a foot or more of cable run away. That means you transfer less data in each clock cycle.

    Generally socketed memory is on an external bus.  This lets various peripherals directly access memory.  The bus arbitration also adds overhead.


    Traditional CPUs try to overcome these memory bottlenecks by using multiple levels of cache. This can provide a memory bandwidth performance boost for chunks of recently accessed memory. However, tasks that use more memory than will fit in the cache may not benefit from these techniques.

    Apple's "System on a Chip" design really does allow much higher memory bandwidth. Socketing the memory really would reduce performance.
    We’re talking about no change in distance. The socket can be exactly where the RAM packages are now. If extra sockets were added, there would be one on each side. We’re talking about 0.375”.

    The way they do it now is to have one package on each side of the SoC. When they double the packages, they put one slightly further down and the other slightly higher. We’re talking about fractions of an inch from the way they add packages now. There’s nothing really different here. Just substitute low-profile sockets for the soldered-on packages. Not talking about going to DIMMs. The same packages they use now, but with pins instead of solder connections.

    You’re talking about something entirely different, and it doesn’t apply. Sockets won’t change anything here.
    Adding sockets reduces reliability and increases support costs. Not all customers are competent. Many customers will break things when adding memory: they will fail to seat chips fully, kill chips with static shocks, or not properly reclose the unit. Any of these dramatically reduces customer satisfaction and drives up costs. There is the business question of whether socketing memory will increase profits by bringing in more customers, or reduce profits by driving up costs and lowering customer satisfaction.

    Remember, the vast majority of customers never open up the computer.   Socketing memory increases costs for these customers with a feature they don't need.
    For customers that demand expandability, merely socketing memory may not be enough.   The market segment that Apple would gain with socketed memory may be very small.

    Remember, if people want socketed memory but buy Macs anyway, then Apple won't gain sales by adding socketed memory. And if customers truly demanded socketed memory, then no one would be buying Macs at all.

    williamlondon, watto_cobra
     2Likes 0Dislikes 0Informatives
  • Reply 60 of 68
    xbit Posts: 408 member
    It's been interesting watching PC gaming news recently. A lot of the big multi-platform releases have suffered from terrible performance on PC because the games have been designed for a unified memory architecture, which PCs don't have. Somewhat ironically, it should be much easier to port console games to Mac than to Windows in the near future. 
    watto_cobra, killroy
     2Likes 0Dislikes 0Informatives