Intel's new Optane memory technology could lead to 1000 times faster MacBook storage


Comments

  • Reply 21 of 47
    wizard69 Posts: 13,377 member
    bsimpsen said:
    1000x is misleading. Random accesses to bytes distributed far and wide across the memory array will be much faster, as there is no Flash page load latency. But sequential accesses, which are far more common, will go no faster than the processor's memory interface. The real-life speed improvement will be far less than 1000x, and probably far less than 10x (as already shown in the chart).

    We've got half a century of computer system design (both hardware and software) wrapped around the idea of large scale high speed sequential storage. The arrival of large scale high speed random storage doesn't change all of that. It'll take time for CPU designers to adapt cache strategies (or even eliminate cache), and for OS and app designers to adapt algorithms to take advantage of this new technology.

    Sad isn't it! Intel's chart is right there and as such it pretty much tells you that you won't be seeing a real world 1000x benefit.
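    To put rough numbers on why the chart caps real-world gains, here's an Amdahl's-law-style sketch of speeding up only the random-access share of I/O time (the workload fractions are made up for illustration, not Intel's figures):

```python
# Amdahl's-law sketch: a 1000x speedup applied only to the random-access
# share of I/O time helps far less overall. Fractions below are made up.

def overall_speedup(random_fraction, random_speedup):
    """Effective speedup when only the random-access portion gets faster."""
    return 1.0 / ((1.0 - random_fraction) + random_fraction / random_speedup)

# If 10% of I/O time is random-access latency and it gets 1000x faster:
print(overall_speedup(0.10, 1000))  # ~1.11x overall
# Even if half the I/O time is random access:
print(overall_speedup(0.50, 1000))  # ~2x overall
```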
    edited March 2016
  • Reply 22 of 47
    tallest skil Posts: 43,388 member
    lkrupp said:
    The physical spinning hard drive is going the way of the Dodo bird and soon, just like the 3.5mm headphone jack.
    Not until I can get an SSD larger than 2TB for a reasonable amount.
    bestkeptsecret
  • Reply 23 of 47
    mdriftmeyer Posts: 7,503 member
    BS on any claim of such magnitude.
    tallest skil
  • Reply 24 of 47
    revenant Posts: 621 member
    is this anything like IBM's racetrack memory?
  • Reply 25 of 47
    rcfa Posts: 1,124 member
    The new math: 6.4 = 1000
    tallest skil
  • Reply 26 of 47
    ksec Posts: 1,569 member
    I doubt this will even be in any Mac by 2020. It's expensive, and I expect its first commercial rollout to be later this year or early next year on HPC or servers. It will spend a few years there before even rolling out to consumers.

    3D NAND is just starting to take off; the next few years will be 3D NAND years. Interestingly, it is now the controller that can't keep up. We will have PCIe 4.0 shipping in 2018, which further improves sequential IO performance. And since consumer electronics and PCs are not random-IO limited, there is little need to move to Optane.

    And I don't see Optane getting within 2x the price/GB of NAND within the next 2-5 years.

    But don't get me wrong: persistent memory like Optane is BIG, and it will fundamentally change the way we think about computer architecture. But I believe there will need to be lots of research, software engineering, software rewrites, etc. And it is going to take a long time.
    edited March 2016
  • Reply 27 of 47
    mattinoz Posts: 2,319 member
    lkrupp said:
    The physical spinning hard drive is going the way of the Dodo bird and soon, just like the 3.5mm headphone jack.
    Not until I can get an SSD larger than 2TB for a reasonable amount.
    Why don't we see fusion SSDs?
    Have some form of fast NAND for 64GB of storage, then older, cheaper tech to bulk up the storage to higher usable numbers.

    Sure, it's not for people working with big data sets, but it's still good for the everyday user.
    tallest skil
  • Reply 28 of 47
    tallest skil Posts: 43,388 member
    mattinoz said:
    Why don't we see fusion SSDs?
    Does Apple own all the patents to the only viable implementations? That’d be a reason.

    Just checking for the first time in years, I see we have 8TB HDDs now. My feeling is a combination of “took them long enough since 1TB” and “back in the day we were happy with a couple of megabytes...” A 128GB SSD + 8TB HDD would be the ultimate in speed + storage right now. Even just a 16GB SSD to store the OS and applications.
  • Reply 29 of 47
    cnocbui Posts: 3,613 member
    Nothing Intel has a hand in will be affordable to consumers - Thunderbolt, for example.  It will take something from Samsung to make it cost effective enough for the masses.
  • Reply 30 of 47
    loquitur said:
    With mass production "12-18 months away" (http://www.eetimes.com/document.asp?doc_id=1328682), Optane won't be appearing on
    any near-term Mac refresh.
    If I remember right (even with the delays), Intel was going to bring Optane online much earlier than Micron. Originally the plan was, I think, for Intel to have it out by the end of last year or very early this year, while Micron's was at the end of this year... just a vague recollection, but there have been delays.
  • Reply 31 of 47
    Marvin Posts: 15,324 moderator
    rcfa said:
    The new math: 6.4 = 1000
    The 1000x is the latency of the hardware:
    [Intel latency comparison chart]
    It's like saying USB 3 is 10Gbps, but if you attach storage that only writes at 100MB/s, the drive's write speed is the limit. Manufacturers typically advertise the theoretical maximum performance of the separate components. It's a little misleading with this memory because NVMe adds a lot to the latency, but in the DIMM format it's closer to the advertised figure.
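    That "slowest link wins" point can be sketched in a couple of lines (the figures are illustrative, matching the USB 3 example above):

```python
# The advertised number is the fastest component in the chain; the
# delivered number is the slowest. Figures below are illustrative only.

def effective_write_mb_s(*stage_speeds_mb_s):
    """A storage pipeline moves data no faster than its slowest stage."""
    return min(stage_speeds_mb_s)

usb3_link_mb_s = 10_000 / 8 * 0.8  # "10Gbps" link, ~80% left after encoding overhead
drive_write_mb_s = 100             # the attached drive itself writes at 100MB/s

print(effective_write_mb_s(usb3_link_mb_s, drive_write_mb_s))  # 100
```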

    The bandwidth, which is what people usually think of as storage speed, is much lower:

    http://www.kitguru.net/components/memory/anton-shilov/intel-first-3d-xpoint-ssds-will-feature-up-to-6gbs-of-bandwidth/

    A later article says the first implementation would be just over double the 6GB/s bandwidth but again, it depends on where it's connected. The following chart shows where it sits with standard RAM at the top, then XPoint DIMMs, then PCIe XPoint SSD:
    [chart: standard RAM at top, then XPoint DIMMs, then PCIe XPoint SSD]
    The use case mentions servers: it's a cheaper and more compact way to get huge amounts of slower RAM. The likes of IBM could use it for big data systems, which have TBs of RAM:

    https://www.ibm.com/developerworks/community/blogs/IBMRedbooksSystemz/entry/don_t_tell_me_cloud_on_system_z_these_machines_were_already_born_for_cloud?lang=en

    If the computing loads are small, it could allow more virtual machines per server.

    1TB/s memory bandwidth is possible; the next Polaris/Pascal GPUs use High Bandwidth Memory (HBM), but to get 1TB/s, Nvidia attaches the memory directly to the GPU chip. Intel would probably have to do the same to get the best latency numbers out and to get any significant improvement in bandwidth.

    Having standard DIMMs should allow any machine with those slots to use the memory, like the Mac Pro. It won't be cheap, but it'll be faster than an SSD. It can also be used as a cache for lower-end SSDs, improving SSD durability and lowering latency for whatever is in the cache.
    edited March 2016
  • Reply 32 of 47
    staticx57 Posts: 405 member
    As others have pointed out, the current bottleneck in storage read/write speeds is not the storage medium itself but the interconnect between the processor and the storage medium. It's awesome what Intel has done here, but until they either increase the number or speed of the lanes between the two, you're not going to see any real-world performance increase from this, as current SSDs are saturating the heck out of their SATA 6Gb/s or PCI Express connections right now. It's like owning a Ferrari when you live in NYC and never leave the city.
    Yes and no. SSDs are extremely fast when their chips run in parallel, but run a chip or two by themselves and they are relatively slow. Slow compared to full SSDs, but still fast compared to hard drives. What if you could have a one-chip solution AND faster still?
  • Reply 33 of 47
    misa Posts: 827 member
    boeyc15 said:

    Do the 'data bus' speeds come into play here? Or, said another way (it's not too clear to me in the article), what does this translate to in real-life usage?


    Well, assume it is 1000x faster and 1000x more durable than NAND flash, which tops out at around 200MB/s without parallelism tricks (e.g., more than one chip). Then 200MB/s × 1000 = 200,000MB/s, or 200GB/s. That looks more like RAM speed, which is roughly 25GB/s per channel in DDR4.
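    The back-of-the-envelope arithmetic behind that comment (all figures are the commenter's assumptions, not measurements):

```python
# Rough arithmetic for the comment above; assumed figures, not measurements.

single_chip_nand_mb_s = 200   # one NAND chip, no parallelism tricks
claimed_multiplier = 1000     # the headline "1000x" claim

xpoint_mb_s = single_chip_nand_mb_s * claimed_multiplier
xpoint_gb_s = xpoint_mb_s / 1000

print(xpoint_mb_s)  # 200000 MB/s
print(xpoint_gb_s)  # 200.0 GB/s: RAM territory (a DDR4 channel peaks near 25GB/s)
```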


    The positioning of XPoint is between RAM and NAND attached directly to PCIe.

    So the theory here is that you'd replace conventional DDR4 RAM with 3D XPoint, and suddenly we're thrown back to 1996-2006, when PDAs only had RAM and, if the batteries died, your device was effectively reset back to factory defaults. Except now it's in your desktops and laptops, and XPoint memory means you don't need to "boot" your system after it's powered off; you have to actually tell the BIOS to reset the memory. Law enforcement will love this, since they could grab the decryption keys from the XPoint "RAM" by pulling the memory out while it's off.

    What I expect is that 3D XPoint will replace existing NVMe, SATA and SAS drives, but it's up to Intel and AMD to dramatically increase the available PCIe bandwidth in order to do this, because effectively what is necessary is bandwidth equal to that of the RAM. It's likely that early production will have speeds 5-10x faster than NVMe and will just replace PCIe SSDs, while the bottom drops out of the NAND market, where NAND will continue to be used for USB 3.0 drives (currently 16GB drives are "disposable" tier and have effectively replaced CDs and DVDs for throw-away sneakernets).

    If the durability holds up (I have Sony Memory Sticks from 2001 that still work; I don't know about bitrot), it might also get used for cold storage, say for 8K/UHD video management (it's currently impossible to store 4K video losslessly, at least not without a loss of color fidelity). But currently nobody knows its life span. We know the life span of NAND is actually rather short (as little as 2 years), depending on how frequently it's written to. Current cold-storage solutions all consist of "make a new copy every 6 months, keep X copies".

  • Reply 34 of 47
    knowitall Posts: 1,648 member
    Slowest memory is cached in several layers, ending up in a register file.
    Ideally, one bus is used to connect all memory types to the CPU.
    It's waiting for ARM to define inexpensive 'slow' memory; Intel keeps the prices high and the increments small, like Sergey Bubka with the pole vault.
  • Reply 35 of 47
    SpamSandwich Posts: 33,407 member
    lkrupp said:
    The physical spinning hard drive is going the way of the Dodo bird and soon, just like the 3.5mm headphone jack.
    Not until I can get an SSD larger than 2TB for a reasonable amount.
    Manufacturers will keep competing and coming up with different solutions to knock down costs. Storage follows a similar curve to processors in terms of cost-to-power ratio advances over time.
    tallest skil
  • Reply 36 of 47
    tallest skil Posts: 43,388 member
    Storage follows a similar curve to processors in terms of cost-to-power ratio advances over time.
    Does it, really? Because we went from 1GB to 1TB in roughly twice the time it has taken us to go from 1TB to 8TB… I get that HDDs have a physical limitation, same as transistor gaps, but SSDs aren't exactly picking up the slack. You're probably right, but I just haven't noticed this. NAND has made some decent leaps in capacity/price, but for the higher-quality stuff, I don't know.
  • Reply 37 of 47
    nolamacguy Posts: 4,758 member
    cnocbui said:
    Nothing Intel has a hand in will be affordable to consumers - Thunderbolt, for example.  It will take something from Samsung to make it cost effective enough for the masses.
    Yeah, that whole Wintel PC revolution... was it not affordable, or did it not happen?

    Still banging your FUD drum, eh? Apple implementations suck, go Samsung, all that? Go home.

    Me, I love my Thunderbolt external SSD, which was a couple hundred bucks. It does just what I need it to for my work VMs.
    edited March 2016 patchythepirate
  • Reply 38 of 47
    wizard69 said:
    bsimpsen said:
    1000x is misleading. Random accesses to bytes distributed far and wide across the memory array will be much faster,

    Sad isn't it! Intel's chart is right there and as such it pretty much tells you that you won't be seeing a real world 1000x benefit.
    I think it's important for everyone reading this article to understand that in every single metric of performance, XPoint is slower than DRAM: probably about 4x slower in bandwidth, possibly 1000x slower in random access. It may be true that the random-access latency of XPoint is 1,000x faster than flash, but flash has access latencies in tenths of milliseconds; it only seeks about 10x faster than rotating hard drives due to its massive block writing. This is like saying, "we will soon have airplanes that are 1,000 times faster than walking speed." It's the wrong comparison. I don't think they'll even see a 1,000x boost in IOPS. The catch is: if they cost 10x as much per byte as flash but only offer 2x the bandwidth, who will buy?
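    To make the "wrong comparison" point concrete, here is a rough sketch using ballpark latencies (all assumed for illustration, not vendor specs):

```python
# Why a big latency win doesn't translate 1:1 into user-visible speed.
# Latencies below are ballpark illustrations, not vendor specs.

latency_s = {
    "hdd_seek":  8e-3,    # ~8 ms mechanical seek
    "nand_read": 100e-6,  # ~100 us flash page read ("tenths of milliseconds")
    "xpoint":    10e-6,   # optimistic media latency; controller and bus add more
    "dram":      100e-9,  # ~100 ns
}

for name, lat in latency_s.items():
    print(f"{name}: {1 / lat:,.0f} ops/s if latency were the only limit")

# Measured this way, NAND is only ~80x faster than an HDD seek, and
# XPoint ~10x faster than NAND, nowhere near the headline 1000x.
print(latency_s["hdd_seek"] / latency_s["nand_read"])  # ~80
```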

  • Reply 39 of 47
    wizard69 said:
    Sad isn't it! Intel's chart is right there and as such it pretty much tells you that you won't be seeing a real world 1000x benefit.
    I think it's important for everyone reading this article to understand that in every single metric of performance, XPoint is slower than DRAM: probably about 4x slower in bandwidth, possibly 1000x slower in random access. It may be true that the random-access latency of XPoint is 1,000x faster than flash, but flash has access latencies in tenths of milliseconds; it only seeks about 10x faster than rotating hard drives due to its massive block writing. This is like saying, "we will soon have airplanes that are 1,000 times faster than walking speed." It's the wrong comparison. I don't think they'll even see a 1,000x boost in IOPS. The catch is: if they cost 10x as much per byte as flash but only offer 2x the bandwidth, who will buy?

    XPoint is slower than DRAM, so it will not replace DRAM (though you could see different balances).
     - Intel plans to first aim at the consumer market, then later the enterprise market
     - XPoint may be slower than RAM, but it is going to be much faster and higher-bandwidth than current SSD storage. Before, SSD storage was at best hundreds of times slower than memory; now it will be maybe a factor of 10.
     - It will create a different balance point between caching in memory vs. SSD; we will have to see the outcome.
     - Optane will be about 10x denser than volatile memory
     - There will be two versions of Optane modules: ones that use PCIe-based connections, and DIMM modules that connect directly to the memory controller

    What will happen is that computers will move towards having more DIMM slots for a combination of memory types, and there will be changes in how the operating system decides what to use for what. We really don't know the final outcome of what this will mean in reality, but it is one of the more dramatic shifts in recent times.
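    One way to see that "different balance point" is the classic average-access-time formula, with either NAND or XPoint sitting behind a DRAM cache (the latencies are assumed round numbers for illustration):

```python
# Average access time with a DRAM cache in front of slower storage.
# Latencies are assumed round numbers for illustration, not vendor specs.

def avg_access_ns(hit_rate, dram_ns, backing_ns):
    """Classic average-memory-access-time formula for a two-level hierarchy."""
    return hit_rate * dram_ns + (1.0 - hit_rate) * backing_ns

dram_ns = 100
nand_ns = 100_000    # ~100 us flash read
xpoint_ns = 10_000   # ~10 us assumed for an XPoint DIMM path

# With NAND behind DRAM, even rare misses dominate the average:
print(avg_access_ns(0.99, dram_ns, nand_ns))    # ~1099 ns
# With XPoint behind DRAM, the same hit rate gives a much better average:
print(avg_access_ns(0.99, dram_ns, xpoint_ns))  # ~199 ns
```

So with a slower miss penalty an OS needs a very high DRAM hit rate; with XPoint behind it, the same DRAM buys a much better average, which is exactly the rebalancing the comment describes.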


    afrodri
  • Reply 40 of 47
    tallest skil Posts: 43,388 member
    bkkcanuck said:
    What will happen is that computers will move towards having more DIMM slots for a combination and there will be changes in how the operating system decides to use what for what.
    I guess I don’t know of any technical reason this couldn’t happen, but can non-volatile memory be used across DIMMs? I remember reading once about the opposite: a specialized card being made into which RAM could be plugged such that the system would recognize it as a hard drive. This was to increase data transfer and access speeds, of course, but required that the card include a battery to keep the RAM powered should the computer be turned off.

    There’s probably a reason it didn’t catch on.  :p