New Intel Technology to put DRAM in the CPU

Posted in Future Apple Hardware, edited January 2014
Intel plans to revolutionize the CPU



So the basic gist is that they intend to couple the system RAM with the CPU, eliminating not-as-dense SRAM cache from the equation. There are plenty of implications for Apple and other computer makers, especially with regard to portables. We've seen in the MacBook Air that Apple has gone so far as to remove RAM expansion slots altogether and solder all the RAM the system will ever use onto the motherboard. Intel's technology goes one step further and integrates it directly with the CPU. This offers two immediate advantages, speed and reduced physical space, and probably simpler motherboard designs as well.
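For a rough sense of the density argument behind replacing SRAM cache with on-die DRAM: a 6T SRAM cell needs six transistors per bit, while a 1T1C DRAM cell needs one transistor and one capacitor. A back-of-envelope sketch (the cell areas and die budget below are illustrative assumptions, not Intel's figures):

```python
# Back-of-envelope density comparison: 6T SRAM vs 1T1C embedded DRAM.
# Cell areas are illustrative assumptions for a ~45nm-class process.
sram_cell_um2 = 0.35   # assumed 6T SRAM cell area (um^2)
dram_cell_um2 = 0.07   # assumed embedded 1T1C DRAM cell area (um^2)

die_budget_mm2 = 50    # assumed die area set aside for on-chip memory
bits_sram = die_budget_mm2 * 1e6 / sram_cell_um2
bits_dram = die_budget_mm2 * 1e6 / dram_cell_um2

print(f"SRAM: ~{bits_sram / 8 / 2**20:.0f} MB in {die_budget_mm2} mm^2")
print(f"DRAM: ~{bits_dram / 8 / 2**20:.0f} MB in {die_budget_mm2} mm^2")
print(f"density advantage: ~{bits_dram / bits_sram:.0f}x")
```

Whatever the exact numbers, the ratio is what matters: the same silicon budget holds several times more DRAM than SRAM, which is the whole appeal.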



The only drawback, which I think will be mitigated, is that you may be restricted when it comes to expanding the computer's system memory. On a laptop or a consumer desktop like the mini, I see this as less of an issue, especially if they integrate at least 4GB of memory. But on a professional workstation this would become an issue, and I think Intel will either have a way for the CPU to access more memory via a high-speed slot or slots on the motherboard, or create a workstation/server-class CPU that has this ability.



What are everyone's thoughts on this?

Comments

  • Reply 1 of 32
    ksec Posts: 1,543 member
    I don't think integrated 2T DRAM will come any time soon. Probably another 3-4 years at least.

    And the die space needed to fit 4GB of RAM is quite a lot, so they will need to use stacks, which Intel revealed a few years ago.



    With that huge amount of bandwidth Intel could fit a proper GPU inside as well. The problem then becomes how to remove heat from each layer.
  • Reply 2 of 32
    This may be for their planned GPU in the CPU. But given the size of DDR2 and DDR3 RAM DIMMs, I don't see you being able to fit that much in a CPU.
  • Reply 3 of 32
    outsider Posts: 6,008 member
    Quote:
    Originally Posted by Joe_the_dragon View Post


    This may be for their planned GPU in the CPU. But given the size of DDR2 and DDR3 RAM DIMMs, I don't see you being able to fit that much in a CPU.



    You realize that they are talking about implementing this on a 45nm, and eventually 32nm, manufacturing process. Also, you can't compare the size of DIMMs to what would be on a CPU. You don't have to deal with chip-to-chip interfaces or the circuitry needed for a typical northbridge. This alone will save a lot of space.
  • Reply 4 of 32
    Marvin Posts: 14,195 moderator
    I think it's a good idea but limiting. It's another component that could possibly fail and render your machine unusable, whereas previously a bad RAM chip could be swapped out. Probably an unlikely scenario, but it implies having limited amounts of RAM too. If they started with 4GB then it would be great for all consumer machines, but for high-end work it would be lacking.



    I'm sure one day we will be able to have 64GB+ of RAM and we can store the entire OS and apps in it so that we get instant launch times.
  • Reply 5 of 32
    futurepastnow Posts: 1,772 member
    I can see this working as a larger, faster cache. It'll never replace upgradeable memory on the motherboard.
  • Reply 6 of 32
    wmf Posts: 1,164 member
    Quote:
    Originally Posted by Outsider View Post


    So the basic gist is that they intend to couple the system RAM with the CPU, eliminating not-as-dense SRAM cache from the equation.



    No, this is about making the cache larger. They can't fit even 1GB of DRAM on a chip, so this won't be used for main memory.



    Also, Intel appears to be behind the state of the art, which is 1T eDRAM.
  • Reply 7 of 32
    wizard69 Posts: 12,667 member
    Quote:
    Originally Posted by Outsider View Post


    Intel plans to revolutionize the CPU



    Interesting, but I wonder how they can call the memory dynamic if there is no capacitor. I wonder if there was some sort of reporting error. The other thing was that they did not go into much detail as to the amount of RAM they were able to squeeze onto a die.



    In any event, I see the first implementation of this on something like ATOM, where lots of real estate is already available. In other words, current process technology would likely make this economical on something like ATOM. Also, one has to think of the markets ATOM is directed at: 256MB of RAM built into a processor headed for an iPod Touch would be very nice indeed.

    Quote:



    So the basic gist is that they intend to couple the system RAM with the CPU, eliminating not-as-dense SRAM cache from the equation. There are plenty of implications for Apple and other computer makers, especially with regard to portables. We've seen in the MacBook Air that Apple has gone so far as to remove RAM expansion slots altogether and solder all the RAM the system will ever use onto the motherboard.



    Cool, isn't it?



    One thing, though: such technology doesn't automatically mean there is no chance of external memory. There are certainly applications where it is not needed, but nothing here implies that external memory is impossible. It would be a lot slower, but that is always the case with external memory.

    Quote:



    Intel's technology goes one step further and integrates it directly with the CPU. This offers two immediate advantages, speed and reduced physical space, and probably simpler motherboard designs as well.

    This would certainly lead to a rather complete SoC and for many apps would be a huge win.

    Quote:



    The only drawback, which I think will be mitigated, is that you may be restricted when it comes to expanding the computer's system memory. On a laptop or consumer desktop, like the mini, I see this as less of an issue, especially if they integrate at least 4GB of memory.



    I don't see 4GB happening any time soon. Still, as noted above, this doesn't necessarily eliminate external memory. What it could do for something like ATOM, though, is make the processor much more competitive against ARM.



    As to how much memory can be integrated on a chip, look at modern DRAM to get an approximate idea of just how much die area will be required. Take into account that modern DRAM is already built on a smaller process size than most CPUs.
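    The suggestion of scaling from modern DRAM can be made concrete with simple arithmetic. A rough sketch (the 1 Gbit per 60 mm^2 figure is an illustrative assumption for a DRAM process of the era, not a measured part):

```python
# Rough scaling from a commodity DRAM die to estimate on-die capacity.
# 1 Gbit in ~60 mm^2 is an illustrative assumption, not a real part.
gbit_die_mm2 = 60.0                        # assumed area per 1 Gbit of DRAM
target_gb = 4                              # the OP's hoped-for 4GB

area_mm2 = target_gb * 8 * gbit_die_mm2    # 4 GB = 32 Gbit
print(f"~{area_mm2:.0f} mm^2 of DRAM cell area for {target_gb}GB")
# A desktop CPU die of the period is on the order of 100-300 mm^2,
# so 4GB on a single die is implausible without stacking, as ksec noted.
```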

    Quote:

    But on a professional workstation this would become an issue and I think that either Intel will have a way for the CPU to access more memory via a high-speed slot or slots on the motherboard, or create a workstation/server class CPU that has this ability.



    On workstations you will still be getting big wins from cores and other processors, so I wouldn't expect such RAM to be used for anything other than special processors, especially considering that the demand for workstation RAM is just going to keep going up and up.

    Quote:

    What are everyone's thoughts on this?



    I think it is great and hope to see implementations soon. It is a huge stretch, though, to expect it to pass as workstation main memory. For an in-order processor like ATOM on a SoC, this would be just the nuts, at least if targeted at limited-memory systems.



    Dave
  • Reply 8 of 32
    mdriftmeyer Posts: 7,195 member
    QA on memory failures and their effect on the CPU's POST would be interesting. Who wants to toss an entire CPU/DRAM combo if a portion fails on boot?
  • Reply 9 of 32
    futurepastnow Posts: 1,772 member
    Quote:
    Originally Posted by FuturePastNow View Post


    I can see this working as a larger, faster cache. It'll never replace upgradeable memory on the motherboard.



    And now that I've gotten bored enough to actually read the article, I see that this really is nothing more than a larger, faster cache on the CPU. It has nothing to do with replacing system RAM as we know it.



    This is fascinating technology but there's nothing to see here for us.
  • Reply 10 of 32
    Marvin Posts: 14,195 moderator
    Quote:
    Originally Posted by FuturePastNow View Post


    And now that I've gotten bored enough to actually read the article, I see that this really is nothing more than larger, faster cache on the CPU. It has nothing to do with replacing system RAM as we know it.



    This is fascinating technology but there's nothing to see here for us.



    I'm not sure, though. What if it was some sort of non-volatile memory that cached important OS files and application binaries? Like I say, near-instant boot times and no more waiting for heavy apps like Maya, Photoshop, or InDesign to load. Even a small amount like 256MB would be enough to cache the most important stuff, and it could learn which apps you use most to make your experience smoother.



    The issue of memory failure could be worked around by a mechanism that ignores a failed component. If the memory storage goes down for whatever reason, you simply lose the cache mechanism and your machine reverts to the old boot/launch times.



    No replacement for main memory but still could be very useful.
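    The "learn what apps you use most" idea is essentially a frequency-based cache admission policy. A toy sketch (the app names, binary sizes, and the 256MB capacity are made-up illustrative values; a real OS policy would be far more involved):

```python
# Toy model of a small non-volatile cache that learns which apps are
# launched most often. All names and sizes below are illustrative.
from collections import Counter

class LaunchCache:
    def __init__(self, capacity_mb):
        self.capacity = capacity_mb
        self.launches = Counter()      # app -> launch count
        self.sizes = {}                # app -> binary size (MB)

    def record_launch(self, app, size_mb):
        self.launches[app] += 1
        self.sizes[app] = size_mb

    def cached_apps(self):
        """Greedily cache the most-launched apps that fit."""
        chosen, used = [], 0
        for app, _ in self.launches.most_common():
            if used + self.sizes[app] <= self.capacity:
                chosen.append(app)
                used += self.sizes[app]
        return chosen

cache = LaunchCache(256)               # hypothetical 256MB on-package cache
launch_counts = {"Photoshop": 5, "Maya": 1, "Mail": 10}
for app, size in [("Photoshop", 150), ("Maya", 200), ("Mail", 30)]:
    for _ in range(launch_counts[app]):
        cache.record_launch(app, size)

print(cache.cached_apps())             # prints ['Mail', 'Photoshop']
```

    Maya gets skipped here because its 200MB binary no longer fits once the two most-launched apps are cached, which is exactly the trade-off a small fixed-size cache forces.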
  • Reply 11 of 32
    Quote:
    Originally Posted by Marvin View Post


    I'm not sure though, what if it was some sort of non-volatile memory that cached important OS files and application binaries? Like I say, near instant boot times and no more waiting for heavy apps like Maya, Photoshop, Indesign to load. Even a small amount like 256MB would be enough to cache the most important stuff and it could learn what apps you use most to make your experience smoother.



    The issue about the memory failure could be gotten round by a mechanism to ignore a failed component. If the memory storage goes down for whatever reason then you simply lose the cache mechanism and your machine just reverts to the old boot/launch times.



    No replacement for main memory but still could be very useful.



    You just described the so-called Hybrid Hard Drives that Intel and Microsoft were pushing a year ago. 256MB of Flash on a hard drive to cache frequently accessed files, but they never hit the market and they probably never will.
  • Reply 12 of 32
    programmer Posts: 3,409 member
    Quote:
    Originally Posted by FuturePastNow View Post


    You just described the so-called Hybrid Hard Drives that Intel and Microsoft were pushing a year ago. 256MB of Flash on a hard drive to cache frequently accessed files, but they never hit the market and they probably never will.



    Actually, I wouldn't be so sure, given better on-chip DRAM capabilities. The incremental cost of adding large caches to drive controllers would be very small (particularly since manufacturing failures in on-chip RAM are relatively easy to work around), and having no additional discrete devices keeps the board design simple.
  • Reply 13 of 32
    vinea Posts: 5,585 member
    Quote:
    Originally Posted by mdriftmeyer View Post


    QA on Memory failures and it effecting the posting of the CPU would be interesting. Who wants to toss an entire CPU/DRAM combo if a portion fails on boot?



    Eh, Intel would just bin it as a lower priced part. Celeron or something.
  • Reply 14 of 32
    hiro Posts: 2,663 member
    Oh ye of little faith! This isn't cache, this is main memory on the die. The fact that it is on the die will make it far more reliable than RAM simply stuck on the bus. We will still see DRAM on the bus, but it will be there as a way to avoid hitting the HD, not duplicated into the cache hierarchy.



    Intel has published a number of white papers over the last couple of years, and a magic number for bus speed has been 200GB/s to be able to avoid L1/L2 caching across several multi-core processing domains. Why do you think they have been testing, speccing and engineering stuff all in the same performance bucket? For shits and giggles? Intel could put ~1GB of this on the next process shrink of a dual-core CPU without much, if any, die size increase. But they really want this tech for their 32-core + SMT chips coming four or so years from now. Then they may be able to put 8-16GB on die, and the RAM provides a wonderful heat sink to spread out the interior heat and eliminate core-to-core reinforced hot spots. Plenty of papers hint at that via spreading the cores out among cache; just project that to the DRAM in a few years.
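    The 200GB/s figure is easy to sanity-check, since bandwidth is just bus width times clock rate. A quick sketch (the 1024-bit width is an illustrative assumption, not a number from Intel's papers):

```python
# Sanity check on the 200GB/s figure: bandwidth = bus width * clock rate.
# The 1024-bit on-die bus width is an illustrative assumption.
target_gb_per_s = 200
bus_bits = 1024
bus_bytes = bus_bits // 8                  # 128 bytes moved per transfer
clock_ghz = target_gb_per_s / bus_bytes    # Gtransfers/s needed = GHz
print(f"A {bus_bits}-bit bus needs ~{clock_ghz:.2f} GHz "
      f"to reach {target_gb_per_s}GB/s")
# On die, a 1024-bit bus is cheap wiring; off package, that many pins
# is not, which is the core argument for putting DRAM on the die.
```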
  • Reply 15 of 32
    programmer Posts: 3,409 member
    Quote:
    Originally Posted by Hiro View Post


    The fact that it is on the die will make it far more reliable than RAM simply stuck on the bus.



    Why is this more reliable than DRAM on separate chips? I don't disagree with the rest of your post, but it's not clear why putting the DRAM on the same or a different chip from the cores impacts its reliability. Or are you simply referring to potential bandwidth improvements from not having to deal with transmission noise between chips?
  • Reply 16 of 32
    outsider Posts: 6,008 member
    I think reliability in the sense of reduced interference, since the memory is now so close to the CPU. This could perhaps also eliminate the need for the sophisticated ECC overhead found in workstation- and server-class CPU/chipset combinations.



    I doubt it would make the memory itself any more reliable than if it were fabricated on a separate chip.
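    For a concrete sense of what "ECC overhead" means: the classic Hamming(7,4) code spends three parity bits per four data bits to correct any single flipped bit (real server SECDED is cheaper, 8 check bits per 64 data bits, ~12.5%). A minimal sketch:

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits, able to
# locate and correct any single-bit error in the 7-bit codeword.
def hamming74_encode(nibble):
    """Encode a 4-bit value into a 7-bit Hamming codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]
    code = [0] * 8                          # 1-indexed; slot 0 unused
    code[3], code[5], code[6], code[7] = d  # data at non-power-of-2 slots
    code[1] = code[3] ^ code[5] ^ code[7]   # parity over positions with bit 1
    code[2] = code[3] ^ code[6] ^ code[7]   # parity over positions with bit 2
    code[4] = code[5] ^ code[6] ^ code[7]   # parity over positions with bit 4
    return code[1:]

def hamming74_correct(bits):
    """Fix up to one flipped bit, then return the data nibble."""
    c = [0] + list(bits)
    syndrome = 0
    for p in (1, 2, 4):                     # recompute each parity check
        s = 0
        for i in range(1, 8):
            if i & p:
                s ^= c[i]
        if s:
            syndrome += p                   # syndrome = error position
    if syndrome:
        c[syndrome] ^= 1
    return c[3] | (c[5] << 1) | (c[6] << 2) | (c[7] << 3)

word = hamming74_encode(0b1011)
word[2] ^= 1                                # flip one bit "in transit"
print(bin(hamming74_correct(word)))         # prints 0b1011
```

    The on-chip argument is that if fewer bits flip in the first place, you can spend less area and latency on machinery like this.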
  • Reply 17 of 32
    programmer Posts: 3,409 member
    Quote:
    Originally Posted by Outsider View Post


    I think reliability in the sense of reduced interference since the memory is so close to where the CPU is now.



    Interesting way of putting it... but perhaps it is better to think of this as Intel getting into the memory business and putting CPU cores on every memory chip.
  • Reply 18 of 32
    backtomac Posts: 4,579 member
    Quote:
    Originally Posted by Programmer View Post


    ... but perhaps it is better to think of this as Intel getting into the memory business and putting CPU cores on every memory chip.



    That seems to be an odd way to view it. Memory is a low-profit commodity business. CPUs are where manufacturers have pricing power due to product differentiation. At least if you're Intel and not AMD.
  • Reply 19 of 32
    hiro Posts: 2,663 member
    Quote:
    Originally Posted by Programmer View Post


    Why is this more reliable than DRAM on separate chips? I don't disagree with the rest of your post, but its not clear why putting the DRAM on the same or a different chip from the cores impacts its reliability? Or are you simply referring to potential bandwidth improvements due to not having to deal with transmission noise between chips?



    Because the weak link is the capacitors in current DRAM. Transistors are pretty rock solid in comparison. Compared to current off-die memory, the on-die memory will also be subject to FAR less potential variation in current or other potential sources of temporary shorts that could cause DRAM transistor failures. Basically, this architecture removes the VAST majority of post-installation failure modes because it is both simpler and more isolated. If a CPU module tests good at the fab after packaging and it's treated properly through installation, its embedded memory should have mean time between failure specs significantly better than the CPU cores themselves.
  • Reply 20 of 32
    programmer Posts: 3,409 member
    Quote:
    Originally Posted by Hiro View Post


    Because the weak link is the capacitors in current DRAM. Transistors are pretty rock solid in comparison. Compared to current off-die memory, the on-die memory will also be subject to FAR less potential variation in current or other potential sources of temporary shorts that could cause DRAM transistor failures.



    Fair enough, but you're comparing today's apples to tomorrow's oranges. If this tech can be used for on-chip RAM, it can be used with off-chip RAM. So your statement is really that memory technology is going to become more reliable because of this. Good stuff!



    Quote:
    Originally Posted by Backtomac


    That seems to be an odd way to view it. Memory is a low profit commodity business. The cpus are where manufacturers have pricing power due to product differentiation. At least if you're Intel and not AMD.



    That's a historical perspective that may not last into the future, or at least not universally. The move to large numbers of simple in-order cores makes processors much more like a commodity than ever before... and embedding them with large on-chip memories even more so. Imagine a future where you buy your tightly coupled memories and cores on DIMMs and plug them into your motherboard. A single heavyweight central (multi-core) CPU may remain, but the majority of the computational power and parallelism will be embodied in the vast array of simple processors coupled tightly with memory (minimizing latency and maximizing bandwidth). Buying DIMMs suddenly gets a whole lot more complicated.



    I'm not saying it will go this way, but the idea of putting computational ability into the memory has been around for a long time... at least ever since Backus bemoaned Von Neumann's bottleneck over 30 years ago. And if we are looking at putting 100+ processors in a single machine, we've got to get serious about addressing this problem.