Intel's X25-E SLC SSD performs like a champ

Posted in Current Mac Hardware, edited January 2014
You won't like the price, but you'll LOVE the performance



Quote:

X25-M/X25-E: Why Two SSDs?



There are two different types of flash memory on the market: multi-level cell (MLC) and single-level cell (SLC). MLC stores multiple bits of data in each flash memory cell, making it less expensive. SLC costs much more, but allows direct access to each bit of data, which enables better performance for random access and write operations.



Let me give you an example: the X25-M, which has been Intel's desktop flash SSD product, reaches a level of 200 MB/s in read throughput, but it only writes at up to 75 MB/s. And although it provides great I/O performance, an SLC-based flash SSD can do much better.



Enterprise Requirements



Enterprise customers typically require as many I/O operations per second as possible in order to sustain the minimum number of transactions per second required by mission-critical applications. In this context, Intel paired its excellent flash controller with SLC memory. The result is amazing, as the X25-E drive simply leaves its competition in the dust.



We compared it to the X25-M, a Samsung 64 GB mainstream flash SSD, server SSDs from Mtron and Memoright, and the two fastest 15,000 RPM hard drives you can get: the Hitachi Ultrastar 15K450 and Seagate's Cheetah 15K.6.



Yes, just what I want to see...the battle between SLC and 15k HDD.
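

If you want to stage that battle yourself, here's a minimal Python sketch of the random-read side of it. Treat everything here as a placeholder, not a proper benchmark: reads go through the OS page cache, so the test file needs to be much larger than RAM for the numbers to say anything about the drive itself.

    import os, random, time

    def random_read_iops(path, block=4096, seconds=10):
        # Hammer a file with random block-sized reads; report reads/sec.
        size = os.path.getsize(path)
        ops, deadline = 0, time.time() + seconds
        with open(path, "rb") as f:
            while time.time() < deadline:
                f.seek(random.randrange(0, size - block))
                f.read(block)
                ops += 1
        return ops / seconds

    # e.g. compare random_read_iops("/Volumes/SSD/bigfile")
    #           vs random_read_iops("/Volumes/HDD/bigfile")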



Quote:

SATA Not Ideal



The X25-E drives still have a weakness that may prevent them from being deployed into true enterprise environments: they are still only based on the Serial ATA interface, while Serial Attached SCSI (SAS) has become the standard in enterprise environments. SAS provides important features, such as expanders and dual data ports, which can be used to maximize performance or redundancy. That said, this isn't a serious issue, as SAS supports STP, the SATA Tunneling Protocol, which allows you to connect Serial ATA hard drives to SAS controllers.



Also, keep in mind that SAS drives can be dual-ported for redundancy, and SAS controllers are full duplex. I imagine we'd naturally see more performance on a SAS controller.
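

Rough back-of-the-envelope math on what the links themselves allow (a sketch assuming 8b/10b line coding, which both SATA and SAS use at 3 Gb/s, and assuming a dual-ported drive can actually stream on both ports rather than reserving one for failover):

    def usable_mb_per_s(link_gbps, ports=1):
        # 8b/10b coding: 10 bits on the wire per byte of payload
        return link_gbps * 1000 / 10 * ports

    print(usable_mb_per_s(3.0))            # SATA II, single port: ~300 MB/s
    print(usable_mb_per_s(3.0, ports=2))   # dual-ported SAS: ~600 MB/s aggregate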



Quote:

Performance Madness



The new device is based on the same controller and cache memory architecture. It does not offer higher maximum throughput than the X25-M (200 MB/s), and it is limited to 32 GB and 64 GB capacities for now. But it offers serious write performance (160 MB/s) thanks to single-level cell flash memory, which the mainstream drive doesn't possess. More importantly, it introduces I/O performance that is 10x to 25x higher than what you can get from the latest 15,000 RPM server hard drives. In almost every I/O benchmark, except the Web server test, the X25-E is three to five times faster than its direct flash SSD competitors.



Revealing the Inefficient





Describing the X25-E as the most efficient server drive would be correct, but I prefer to endorse it as the flash SSD storage product that finally redefines server storage performance and resets the standards for high-I/O devices. It isn't so much that the X25-E is especially efficient; rather, hard drives are simply extremely inefficient when it comes to random workloads.



Sophisticated flash memory technology has reached a level at which a single storage product is capable of delivering performance levels formerly reached only on complex RAID arrays with 6-12 hard drives. Not only does it outperform those good old hard drives, but this single X25-E storage product does it while consuming only a bit more than 1 W, on average, compared to at least 100 W for a RAID array.



This doesn't mean that the hard drive is going to disappear, of course. High-capacity applications and fast throughput remain an undisputed domain of magnetic storage products. But the days of hard drives being used in I/O-intensive server applications are numbered. Hitachi and Seagate had better do their homework before releasing their flash SSD products in late 2009 or 2010, as Intel has set the bar higher than it has ever been before in the server storage market.



Not even the Hulk offers this much green power. When these suckers get down to enthusiast-range pricing, I'm freakin' all over it. Yes, it'll hurt, but 400 MB/s writes with three drives is going to amaze every time.
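

For what it's worth, the three-drive math holds up on paper. A sketch, assuming Intel's 160 MB/s sequential-write spec and near-ideal RAID 0 scaling (the 0.85 overhead factor is pure guesswork):

    X25E_WRITE_MBS = 160  # Intel's sequential-write spec for the X25-E

    def raid0_write_mbs(drives, scaling=0.85):
        # scaling < 1.0 models striping/controller overhead (assumed)
        return drives * X25E_WRITE_MBS * scaling

    print(raid0_write_mbs(3))  # ~408 MB/s: right around that 400 MB/s figure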

Comments

  • Reply 1 of 5
    hmurchison Posts: 12,423, member
    I see the storage market bifurcating along two distinct lines.



    Big storage will move to external arrays or JBOD to house your large data sets. OS partitions will move to SSDs, which excel at read throughput and latency-sensitive apps.



    Current SSDs store 1 bit per cell (SLC) or 2 bits per cell (MLC); future MLC SSDs will store 3 and 4 bits per cell, which will make the SSDs of 2010 more competitive with the 2.5" HDDs selling today. The trick is to add enough smarts to the controller to mitigate the hit on write performance when you start cramming more bits into a cell.



    Shopping for SSDs will soon be just like shopping for HDDs: you'll need to take the bits per cell and the controller into account to make a suitable decision.
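

    To see why bits per cell matter so much for price, here's the naive arithmetic. It's a sketch assuming die cost simply tracks cell count, ignoring controller and packaging costs:

        def relative_cost_per_bit(bits_per_cell):
            # Same cell count, more bits per cell: cost per bit falls as 1/n.
            # The catch: each extra bit per cell hurts write speed and
            # endurance, which is exactly what the controller has to mask.
            return 1.0 / bits_per_cell

        for name, bpc in [("SLC", 1), ("MLC", 2), ("3-bit", 3), ("4-bit", 4)]:
            print(name, relative_cost_per_bit(bpc))  # 1.0, 0.5, 0.33, 0.25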



    http://www.itexaminer.com/sandisk-x4...-revealed.aspx
  • Reply 2 of 5
    dobby Posts: 797, member
    Quote:
    Originally Posted by hmurchison


    Big storage will move to external arrays or JBOD and house your large data sets.





    Really big storage is already on external arrays such as a DMX-4. Using auto-tuning (which isn't that new), it can tune LUNs assigned to an OLTP system on one server and a data warehouse on another.

    It's this type of tech we need for the desktop: 7 SSDs in a stripe, mirrored as a volume, then another 5 of the same, creating 3 or 4 logical volumes we can use for OS/DATA/SWAP, with software that controls and tunes it all in the background for optimum performance.
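

    Modeled roughly in Python (a sketch assuming 64 GB X25-E-class drives, a symmetric mirror of the 7-wide stripe, and ideal scaling, which no real controller will deliver):

        DRIVE_GB, READ_MBS, WRITE_MBS = 64, 200, 160  # per drive, X25-E class

        def striped_mirror(stripe_width):
            # RAID 1+0: usable space is one stripe's worth; reads can be
            # served by either mirror, writes must land on both in parallel.
            return {"usable_gb": stripe_width * DRIVE_GB,
                    "read_mbs": 2 * stripe_width * READ_MBS,
                    "write_mbs": stripe_width * WRITE_MBS}

        print(striped_mirror(7))  # {'usable_gb': 448, 'read_mbs': 2800, 'write_mbs': 1120}

    Numbers like those are exactly why the backplane problem below bites.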



    The problem we run into is that poxy SATA II doesn't perform like a 4/8 Gb HBA, and none of the desktop hardware (Mac or Wintel) has a backplane that can handle the throughput needed to really utilise such storage.



    Um, can't quite remember why I replied, but probably 'cos most of our servers use SAS, which just isn't as fast as people make it out to be.



    Dobby.
  • Reply 3 of 5
    hmurchison Posts: 12,423, member
    SAS really needed the 6 Gbps bump it has just recently got. At 3 Gbps it is technically slower than the Ultra320 SCSI it replaced.



    7 SSDs in a stripe would be scary bandwidth. People are already hitting interface limits with just a few fast SSDs.



    Perhaps the dark horse here could end up being FCoE over 10 Gb Ethernet. The final ratification of Converged Enhanced Ethernet could mean we move most external traffic over a single physical transport (Ethernet) while keeping all the functionality of the SCSI command set and Fibre Channel.



    Let's face it: which is more appealing, segmenting your LAN traffic and storage area network onto separate cabling and separate switches? Or ripping out and replacing your SAS/Fibre Channel/Ethernet network with 10 Gb CNAs feeding Ethernet switches with FCoE, and letting the software and switches keep the data streams in order?



    I imagine blade servers would be ideal: InfiniBand and fiber-optic backplanes go away, and 10 Gb, 40 Gb, and eventually 100 Gb Ethernet become the new roadmap.



    Hey, wasn't Intel supposed to be integrating 10 Gb Ethernet into an ICH?



    I digress. The long and the short of it is that we computing users looking for more performance are going to have to find ways of tiering our data to take advantage.
  • Reply 4 of 5
    ksec Posts: 1,569, member
    The problem with SSDs is getting more speed out of them without adding more capacity (i.e., higher material cost).



    Hopefully the next-generation Intel SLC SSD will do 16 channels, which should give around 400 MB/s.
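

    The scaling arithmetic behind that guess, assuming the X25's 10-channel controller design and that throughput splits evenly across channels:

        per_channel = 200 / 10   # X25-E: ~200 MB/s over 10 flash channels
        print(16 * per_channel)  # 320 MB/s from channel count alone
        # 400 MB/s also needs faster NAND per channel (ONFI 2.x class):
        print(16 * 25)           # ~25 MB/s per channel gets you to 400 MB/s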
  • Reply 5 of 5
    hmurchison Posts: 12,423, member
    Quote:
    Originally Posted by ksec


    The problem with SSDs is getting more speed out of them without adding more capacity (i.e., higher material cost).



    Hopefully the next-generation Intel SLC SSD will do 16 channels, which should give around 400 MB/s.



    Intel is part of ONFI, and they do have a roadmap that has a 400 MB/s interface being ratified in mid-2010.



    Of course, this doesn't mean we won't see custom designs sooner, but the focus of ONFI is to get the heavyweights to standardize NAND interfaces so that there's a saner level of interoperability between vendors.



    The current ONFI 2.1 standard supports a maximum 200 MB/s interface.



    http://onfi.org/about/faq/



    Another vendor to watch is Indilinx. They have a controller called Barefoot that does 200 MB/s reads and 160 MB/s writes on MLC flash. I believe they just got the design win for OCZ's new premium Vertex line.



    They also have a controller called Jet Stream, due for testing in the second half of this year, that will support SATA 6 Gbps and 500 MB/s.