Let's forget for a moment that booting your computer is not something that you need to do particularly often. My rule-of-thumb is that you need a three-fold speed increase in a computer operation to make a substantial perceptual difference. The YouTube video of the boot times shows a 9-second decrease - from 43 seconds to 34 seconds. In relative terms, this is a 21% decrease in boot time / 26% increase in boot speed. You can probably do as well with a faster hard drive and save a buttload of money in the process.
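For anyone checking the arithmetic, the quoted percentages come straight from the 43-second and 34-second times:

```python
# Relative change from a 43-second HDD boot to a 34-second SSD boot.
hdd_boot_s, ssd_boot_s = 43.0, 34.0

time_decrease = (hdd_boot_s - ssd_boot_s) / hdd_boot_s  # fraction of time saved
speed_increase = hdd_boot_s / ssd_boot_s - 1.0          # fractional speedup

print(f"{time_decrease:.0%} decrease in boot time")   # 21% decrease in boot time
print(f"{speed_increase:.0%} increase in boot speed") # 26% increase in boot speed
```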
Booting is not a completely fair test, because a large proportion of the time is spent on non-disk stuff, like hardware tests etc.
The biggest boost is app launching, which is demonstrated in the second video.
Quote:
Originally Posted by StorageResearch
Prediction 1 - Flash SSD throughput and IOPS (in traditional HDD form factors) will more than double every year in the period from 2007 to 2012.
This predicts (in effect) that in 2011 a single 3.5" form factor flash SSD will be able to deliver similar throughput to some of the fastest RAM SSDs available in 2007, with over 2,000MB/s sustainable reads and writes.
So in essence by 2012 we'll be cheering the fact that the world is in fact NOT ending and our SSD drives are moving terabytes of information in mere minutes. Clearly we're going to need some sort of optimized controller to see even a fraction of these speeds. For once, the controller and bus may be the limiting factors.
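For scale, even at the predicted 2,000MB/s a terabyte is a matter of minutes, not seconds - which is still a staggering figure for a single drive:

```python
# Time to move 1TB at the predicted sustained rate of 2,000MB/s.
throughput_mb_s = 2000
terabyte_mb = 1_000_000

seconds = terabyte_mb / throughput_mb_s
print(seconds)       # 500.0 seconds
print(seconds / 60)  # ~8.3 minutes
```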
Quote:
Originally Posted by StorageResearch
Prediction 2 - Rackmount flash SSD throughput and IOPS performance will be a multiple of the performance for a single disk. These factors have already been shown to be scalable in SSD RAID arrays.
This needs little explanation as some of the results are intuitive and we've already published plenty of articles on this subject. However, some of the architectural features which are now used in SSD RAID systems - such as MFT technology - can also be designed into individual SSD disk modules.
Hmmmm so take that gonzo performance of a single SSD and multiply by the number of SSD drives in a system since it scales well enough.
Quote:
Originally Posted by StorageResearch
Prediction 3 - The asymmetry of sustained read to write IOPS will improve from 10 to 1 (the fastest devices available in 2007) - but will never achieve parity (1 to 1).
As this change occurs in the market flash SSD arrays will become viable choices in many enterprise server speedup applications which hitherto had been the exclusive domain of RAM SSDs.
In (typical) database applications with Read:Write ratios of 4:1, an ideal flash SSD with 10:1 R/W IOPS is approximately 3x (2.8x to be exact) slower in overall applications performance than an ideal RAM SSD with similar MB/s throughput.
When flash SSDs improve to 5:1 R/W IOPS - the overall applications ratio will be about 2x slower than RAM SSDs (1.8x to be exact).
This portends big improvements in write performance to close the gap.
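The 2.8x and 1.8x figures fall out of a simple model: reads run at the device's full IOPS rate, writes at the asymmetric fraction of it, and a RAM SSD serves every operation at the full rate. A quick sketch of that arithmetic (idealized devices, no queuing effects):

```python
def flash_vs_ram_slowdown(reads=4.0, writes=1.0, asymmetry=10.0):
    """Relative time to serve a read:write mix on a flash SSD whose
    writes are `asymmetry` times slower than its reads, versus a RAM
    SSD that serves every operation in one time unit."""
    flash_time = reads + writes * asymmetry
    ram_time = reads + writes
    return flash_time / ram_time

print(flash_vs_ram_slowdown(asymmetry=10))  # 2.8
print(flash_vs_ram_slowdown(asymmetry=5))   # 1.8
```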
Quote:
Originally Posted by StorageResearch
Prediction 4 - Latency in flash SSDs will not scale in the same way as throughput, and will always be significantly worse than that in ideal RAM SSDs.
The ratio of read access times for RAM SSDs compared to flash SSDs may improve for a few years (as the gap gets smaller) but then it will hit a brick wall - and may in fact get worse again.
The reason is - that flash SSDs have not yet been optimized for latency - so there is some scope to reduce the latency gap with RAM systems (which have already been highly optimized).
But in future product generations as flash SSDs increase in density - a read or write cycle becomes an increasingly complicated on-chip process - which includes calibration, error correction and address translation all being done by controllers between the memory array and the host interface controller or card data bus.
This series of steps (to do a simple read) will diverge from what happens in a typical RAM to the point where flash and RAM look like completely different species. That's unlike earlier generations of flash in which the read cycle looked the same as a static RAM - but simply took longer.
Keep in mind that a 7,200 RPM HDD has roughly 4.17ms average rotational latency (a 15K drive is closer to 2ms). Intel's X25-M MLC SSD has a latency of 0.085ms (source: SSD-Reviews.com). RAM SSD is far better, but both clobber HDD with no empathy at all.
The latency is one of the reasons why SSDs respond so well to multitasking - they complete many IO operations in the time it takes an HDD to complete far fewer.
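Turning those access times into operations per second shows why the multitasking difference is so stark (using the latency figures above and ignoring seek and transfer time):

```python
hdd_latency_ms = 4.17    # rotational latency figure from the post
ssd_latency_ms = 0.085   # X25-M figure from the post

hdd_iops = 1000 / hdd_latency_ms  # ~240 ops/sec
ssd_iops = 1000 / ssd_latency_ms  # ~11,765 ops/sec

print(f"SSD finishes ~{ssd_iops / hdd_iops:.0f} IOs for every one the HDD does")
```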
Has the SSD problem of limited read/write cycles been overcome? (limited in comparison to HDD)
This limited-write "problem" can be an issue in some applications. It really depends on what your write profile is like. For many uses/users it may never be a problem.
Research continues into the durability problem, so flash is getting better. Micron, for instance, just recently announced an improvement to its flash that makes MLC as durable as old SLC and makes SLC much more durable. The life span, however, is still finite.
The bigger problem with flash is that development may very well be running out of steam. There is a real question out there about flash being able to meet future density needs.
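The finite lifespan is easy to put rough numbers on. The figures below are purely illustrative assumptions (a 60GB drive, a 10,000-cycle MLC rating, perfect wear leveling, 10x write amplification, 10GB of host writes a day), not any vendor's spec:

```python
capacity_gb = 60              # assumed drive size
pe_cycles = 10_000            # assumed program/erase rating per cell
write_amplification = 10      # assumed controller overhead
host_writes_gb_per_day = 10   # assumed workload

total_host_writes_gb = capacity_gb * pe_cycles / write_amplification
years = total_host_writes_gb / host_writes_gb_per_day / 365
print(f"~{years:.0f} years of writes before wear-out")  # ~16 years
```

With these assumptions the wear-out horizon sits well past a typical upgrade cycle, which is why the write profile matters more than the raw cycle count.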
240GB 1.8" good quality (Samsung) SSD for $499. I am going to get one of these as soon as they become available. Woot!
For 2.5-inch form factor SSDs, reads over 200MB/s and writes of at least 160MB/s are a certainty, but the Air uses an esoteric SATA-LIF connector instead of standard micro-SATA, which means the 1.8-inch, 5mm-thick 240GB drive may not be doable. It's a shame though. RunCore is the only company that shows promise for this drive....
But simply introducing Flash as yet another tier of storage in a datacenter isn't the real opportunity - that adds new costs and a set of new management hassles. To truly change the industry, adding Flash would have to be completely transparent to users and operators, alike, with no switching or operational cost. And that's exactly what we're doing with ZFS. ZFS will transparently incorporate Flash into the storage hierarchy of a running system, using the microprocessor cache for the most performance sensitive tasks, DRAM for the next, then Flash, then disk (then ultimately tape). ZFS will allow Flash to join DRAM and commodity disks to form a hybrid pool automatically used by ZFS to achieve the best price, performance and energy efficiency conceivable. Simply put, our storage and server systems will get enormously faster - without any upgrade to the microprocessor. Adding Flash will be like adding DRAM - once it's in, there's no new administration, just new capability.
I've been chatting up SSD as yet another tier in storage lately. It makes total sense. In fact, think of SSD as turning an array into a big iPod. The original iPod was efficient because it loaded songs into RAM and then spun the hard drive down. That's not much different from how SSD would allow arrays to consume less power and perform better by sitting at the front end and intelligently caching data that needs to be delivered with low latency.
ZFS is certainly key here, and I await Apple's response to using ZFS on the OS X desktop.
well, as I understood it, OSX accesses the HDD rather often in its current form. please note it's been a while since I boned up on any of this, but it made me want to hold off until I heard "big" news about this being overcome
of course the rewrite of OSX that is and will be SL will likely take this into account (at the very least, on some level, it should be "improved" for SSD). it's a very forward-looking "Apple" thing to do after all - more SSD thinness in Macs
Re density: over the course of CES reporting, I read at least two companies stating that 2TB was on the horizon (can't remember the time scale), but HDDs haven't quite got to 2TB yet, and how small a particle can you charge on a platter? if anything is running out of steam/space it's HDDs
My only worry is that the 3.5" form factor will be done away with, and I would really like to get plenty of use out of my Drobo with SSD 3.5" drives in the future
I guess though if one were to paraphrase Steve: "SSD is a nascent technology, we have some interesting ideas about what to do with it, and we are keeping an eye on it"
--
I'm holding off on buying a MacBook, and I think I'll opt for an SSD if at all possible, price-to-capacity dependent. If I hear SL has code to "help" SSD along with the read/write thing, then it's a sure thing.
Quote:
I've been chatting up SSD as yet another tier in storage lately. It makes total sense. In fact, think of SSD as turning an array into a big iPod. The original iPod was efficient because it loaded songs into RAM and then spun the hard drive down. That's not much different from how SSD would allow arrays to consume less power and perform better by sitting at the front end and intelligently caching data that needs to be delivered with low latency.
ZFS is certainly key here, and I await Apple's response to using ZFS on the OS X desktop.
was gonna reply to your other post, but then saw ZFS - and who in their right mind could miss the opportunity to post about ZFS, oh yeah, BEING USED BY OSX!
Thanks for the link. It is illuminating. It reinforces my thinking on SSDs. They are faster than HDDs, but they're not fast enough to justify their extra cost. Neither are they fast enough to overcome concerns about their limited read/rewrite cycles.
Verdict: SSDs are getting there, but they aren't there yet.
I bought a Mac Air 1.86GHz with 128GB SSD around a month ago. I decided to buy this over the SATA drive because when I launched Word side-by-side on the two models, it started up noticeably faster on the SSD machine.
I'm using my Air for development, and as part of my build process some 1,000-2,000 files are deleted and regenerated (javadocs). This process is really, really slow on my machine. It used to take 90 seconds on my 5-year-old Dell desktop and takes anywhere between 300-700 seconds on my Air. I attribute this to the slow writes of the SSD and really regret buying this model. I am trying to swap out the SSD with a SATA drive and called Apple, but they said this isn't possible.
So while SSD might be useful for "regular" use, it really sucks for my use. My only option is to sell this 1-month-old Air and buy the model with the SATA disk. Any suggestions on where's the best place to sell? I prefer dealing locally here in Boston (not really into eBay).
Apple's SSD option is decent but not stellar regarding SSD performance. I certainly wouldn't take its performance as the pinnacle of SSD capability.
The write issue has only affected some SSDs with JMicron controllers, and MLC drives in particular. You may want to keep your eyes on the pricing of SLC SSDs, which do not have the write slowdown of MLC SSDs.
Also, you're going to see designs that ameliorate the write slowdowns of MLC SSDs. JMicron has new controllers that do some special things to help, and many SSD vendors are adding DRAM caches to prevent slowdowns as well.
Offering your system the incredible performance of flash-based technology, The OCZ Vertex Series delivers the performance and reliability of SSDs at less price per gigabyte than other high speed offerings currently on the market. The OCZ Vertex Series is the result of all the latest breakthroughs in SSD technology, including new architecture and controller design, blazing 200MB/sec read and 160MB/sec write speeds, and featuring up to 64MB onboard cache. OCZ continues to place solid state technology within reach of the average consumer, and delivers on the promise of SSDs as an alternative to traditional hard drives in consumer targeted mobile applications.
When did you first find out about the write latency issue?
We have been developing SSD technology since 2006 and launched our first generation SSD controller, JMF601A/602A at the end of 2007. It soon attracted the attention of SSD makers because of the feature set and high performance. We found the write latency issue around March, 2008. The issue only happens under a special condition, when the system data is close to full and the host keeps writing data on it. It takes time to do internal garbage collection, data merge and housekeeping.
What did you do to solve it?
We revised the hardware architecture and launched JMF601B/602B in June 2008. JMF601A/602A was the old version after B version was available. Currently, all JMicron customers are using latest version, including ASUS NB/EeePC, OCZ, Super Talent, Transcend, etc. The B version improves the write latency a lot. Besides, JMicron also can reserve more spare blocks to alleviate the issue. Because more spare blocks reservation would decrease the drive capacity, most SSD makers tend to not enlarge the spare size.
What do you have planned for the future?
Some customers have introduced high speed SSDs with JMicron's RAID controller JMB390, plus two JMF602B controllers. The target performance is 233MB/sec on sequential read and 166MB/sec on sequential write. Moving forward, JMicron is developing SSD controllers with DRAM cache and it is expected to be available in Q3 2009. That will totally solve the random read/write performance issue.
Thanks hmurchison. I'll have to do some reading on the points you mention. It appears that only a hardware replacement will help resolve this issue. The Apple guys say that the Air is "closed" and parts like disk cannot be replaced (by them). Do I have any other options? Are there local authorized centers that might be able to help me without causing me to lose warranty?
Sanjiv
I don't know how easy/hard it is to modify the MBA, but there are faster 1.8" SSDs coming.
Quote:
well, as I understood it, OSX accesses the HDD rather often in its current form. please note it's been a while since I boned up on any of this, but it made me want to hold off until I heard "big" news about this being overcome
Most Unix or Unix-like systems access the secondary store frequently. But these are often reads. Some Unix file systems track access times, which also means a write for every read. I'm not up to date on Apple's specifics, but on some systems that feature can be turned off with relative ease. So this particular wear problem can be addressed.
What would take more time is optimizing the part of the I/O subsystem that manages traffic to the physical disks. Many file systems and the associated drivers are designed to work around the latency of the slow magnetic disk drive. Because SSDs deliver data much more quickly than a magnetic drive, the I/O systems have to be reworked to make use of that quickness.
Quote:
of course the rewrite of OSX that is and will be SL will likely take this into account (at the very least, on some level, it should be "improved" for SSD). it's a very forward-looking "Apple" thing to do after all - more SSD thinness in Macs
I would hope and suspect that that is true - that is, SL being optimized for SSD. What the payoff is for any individual user is yet to be determined. I'm not sure every app would benefit noticeably going from a file system with the old architecture to one optimized for SSD.
Quote:
Re density: over the course of CES reporting, I read at least two companies stating that 2TB was on the horizon (can't remember the time scale), but HDDs haven't quite got to 2TB yet, and how small a particle can you charge on a platter? if anything is running out of steam/space it's HDDs
That doesn't jibe with info I have seen. The problem likewise is those pesky bits and the space required to store them. Remember that flash is at a very small process node already. It will become extremely expensive to go to the next node, if a reliable process can even be found at that node. The question is whether one invests in such a production facility for a mature technology or looks at more promising tech. There are a number of potential replacements for flash in the labs right now; it is not a stretch to see flash as limited in life span.
Quote:
My only worry is that the 3.5" form factor will be done away with, and I would really like to get plenty of use out of my Drobo with SSD 3.5" drives in the future
Hey how do you like your Drobo? I was looking into the little beast a week or two ago.
On the other hand I do wish Apple or somebody with some balls would break away from the old magnetic drive formats and shift to something suitable for PC-board-mountable components. Solid state storage should slip into place on a computer much like an expansion card does today. Just look at the space saved. Further, the old disk interfaces are really just too slow for a reasonably fast array of storage. I'd rather see SSD storage skip legacy interfaces and go directly to a fast PCI Express connection.
Imagine a Drobo made up of thin SSD storage cards that slip into the Drobo as simple PC cards. That is, your Drobo becomes a traditional card rack that groups together the storage cards in a very compact format.
Quote:
I guess though if one were to paraphrase Steve "SSD is a nascent technology, we have some interesting ideas about what to do with it, and we are keeping an eye on it"
I don't see this as Apple's position at all. The only thing causing them problems is the cost of flash and its unstable behavior in the market. Besides that, there really isn't much that is new about flash, as it has been around a very long time. The density and price equation just makes flash a possible choice these days.
Quote:
--
I'm holding off on buying a MacBook, and I think I'll opt for an SSD if at all possible, price-to-capacity dependent. If I hear SL has code to "help" SSD along with the read/write thing, then it's a sure thing.
I'd love to see flash exceed notebook hard drive storage capacity in a reasonably well performing device. Ideally that would be where my next drive upgrade will come from. I just don't feel, though, that it will happen as soon as some would like. One can already get 500GB notebook drives that are reasonably priced; SSD is a generation behind and outrageously priced.
I have been using a 60GB HDD as my main hard drive for 3-4 years, and I see no capacity problem.
As our data grows it becomes much more important to back up to external RAID 1 or multiple HDDs.
So let's say a 60GB SSD is enough for the mainstream: 10GB for the OS and 20GB for apps still leaves you 30GB for your frequently accessed files.
Flash isn't going to change much from now till 2010, apart from being cheaper. However, the controller and software - the current limitations of SSD - will get much more refined and improved.
OCZ has already demonstrated the Vertex 2, hitting 550MB/s reads. Micron has stated it could get 800MB/s with its new controller.
All Apple needs to do is snap together the controller and flash... and voilà, an Apple SSD.
We could ditch SATA if it is a limitation. PCI Express 2.0 x2 will do.
Quote:
I have been using a 60GB HDD as my main hard drive for 3-4 years, and I see no capacity problem.
My MBP isn't even a year old yet, and it has a 200GB drive, and I ran out of space on it relatively quickly. That led to the removal of a lot of stuff that I'd rather have on the machine. It is fine that you can get by with 60GB of storage; in my case 60GB was gone in a couple of months. Now some of that was due to big disk hogs like Xcode, NeoOffice and others. Some was also due to iTunes and my more-than-passing interest in photography - it's not hard to end up with a huge Aperture database!
Quote:
As our data grows it becomes much more important to back up to external RAID 1 or multiple HDDs.
Of course backups are important, but that does not make up for the need to have data online.
Quote:
So let's say a 60GB SSD is enough for the mainstream: 10GB for the OS and 20GB for apps still leaves you 30GB for your frequently accessed files.
Let's clear that up first by saying it isn't enough. First, that's not enough bytes for today's user. Second, you don't want all your dynamic data occupying a small section of the SSD for fear it might defeat wear leveling.
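A toy model of that wear-leveling worry (assumed numbers again): if the controller only does dynamic wear leveling - spreading writes over the free blocks rather than also relocating static data - then total endurance scales with the size of the region writes can rotate through.

```python
PE_CYCLES = 10_000  # assumed program/erase rating per block

def endurance_gb(spread_region_gb):
    """Total data writable before wear-out when writes can only be
    spread across `spread_region_gb` of the flash."""
    return spread_region_gb * PE_CYCLES

print(endurance_gb(60))  # 600000 - whole 60GB drive available to the leveler
print(endurance_gb(30))  # 300000 - only 30GB free: half the potential life
```

Controllers with static wear leveling move cold data too, which recovers most of that difference.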
Quote:
Flash isn't going to change much from now till 2010, apart from being cheaper. However, the controller and software - the current limitations of SSD - will get much more refined and improved.
While this is certainly true, it misses one important point: flash can already be much faster than traditional drives when capacity doesn't matter. That is true today, but it does require shopping for that performance.
Quote:
OCZ has already demonstrated the Vertex 2, hitting 550MB/s reads. Micron has stated it could get 800MB/s with its new controller.
Which brings up the question of how they will transfer that data to the main memory. SATA is already outclassed. The biggest problem with flash right now is the attachment to legacy interfaces. What we really need is a PCI Express interface to these mass storage devices.
Quote:
All Apple needs to do is snap together the controller and flash... and voilà, an Apple SSD.
What Apple needs to do here is define a new high speed storage interface that overcomes the limitations of SATA and the legacy disk mechanical form factor. Flash is nothing more than semiconductors and could go on a really cheap expansion card that simply plugs into some form of PCI Express slot. That would give us a very low-cost path to secondary storage. A card rack could be designed to mechanically hold three flash-based SSDs in the place of one 3.5" disk and be much faster.
Quote:
We could ditch SATA if it is a limitation. PCI Express 2.0 2x will do.
It's not that we could but rather that we should! There are already flash-based PCI Express cards that can do 750MB per second, so the idea is sound. What needs to be addressed, though, is the card format, which was never well done on a PC. That is where a new card format comes into play: design it mechanically to deal with the wasted space and the lack of airflow. Going to a new format gives us the opportunity to correct the mistakes of the past.
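The bandwidth arithmetic backs this up. Per direction, after encoding overhead, SATA II carries about 300MB/s while each PCIe 2.0 lane carries about 500MB/s, so even the x2 link suggested earlier clears the 550-800MB/s controller figures quoted above:

```python
sata2_mb_s = 300        # SATA 3Gb/s after 8b/10b encoding
pcie2_lane_mb_s = 500   # per PCIe 2.0 lane, per direction
lanes = 2

pcie_x2_mb_s = pcie2_lane_mb_s * lanes
print(pcie_x2_mb_s)                # 1000 MB/s
print(pcie_x2_mb_s / sata2_mb_s)   # ~3.3x the headroom of SATA II
```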
Quote:
Most Unix or Unix like systems access the secondary store frequently. But these are often reads. Some unix file systems track access times which also mean writes for every read. ......What would take more time is optimizing the part of the I/O subsystem that manages traffic to the physical disks. .........
cheers
Quote:
Originally Posted by wizard69
That doesn't jibe with info I have seen. The problem likewise is those pesky bits and the space required to store them. Remember that flash is at a very small process node already. It will become extremely expensive to go to the next node, if a reliable process can even be found at that node. The question is whether one invests in such a production facility for a mature technology or looks at more promising tech. There are a number of potential replacements for flash in the labs right now; it is not a stretch to see flash as limited in life span.
how many of those micro mini flash cards (little more than a square cm in total) at 16/32GB could you get in a 3.5" case (or 2.5" for that matter)?
answer .. LOTS
but I get your point.
Quote:
Originally Posted by wizard69
Hey how do you like your Drobo? I was looking into the little beast a week or two ago.
some initial probs - never lost any data, but that first week or so was a PAIN! narrowed it down to a faulty drive (a "free" 250GB I had spare; it was busted)
but the Drobo sits via FireWire on a mini, with LOTS of films and episodes for my iTunes library - not a complaint or peep out of it. there have been issues with some 1.5TB drives, although it seems to be fixed, but you may want to avoid the 1.5TBs till the problem ones are flushed out of the supply chain.
1TB WD Greens in mine.
I hope they sell enough to make a go of it (long term, damned recession). its a great little product and I can't see being without one; the fact that the storage can grow EASILY with you is just so "Apple like" that I wouldn't be surprised if Apple snapped them up.
IF it fits your needs, then yeah go for it
Quote:
Originally Posted by wizard69
I'd rather see SSD storage skip legacy interfaces and go directly to a fast PCI-Express connection.
yes and no. Ideally both flavours, at least in transition.
Quote:
Originally Posted by wizard69
Imagine a Drobo made up of thin SSD storage cards that slip into the Drobo as simple PC cards. That is your Drobo becomes a traditional card rack that groups together the storage cards in a very compact format.
mmm, the ZFS "pool of storage". I keep wondering how Cook/Jobs are gonna "Appleize" that, what it means for TM, and how they will "one last thing" it.
and how they would tie that "pool" into "the cloud", cos you just KNOW Steve would think that was "waay cool" when selling it to a packed house
Quote:
Originally Posted by wizard69
I don't see this as Apple's position at all. The only thing causing them problems is the cost of flash and its unstable behavior in the market. Besides that, there really isn't much that is new about flash, as it has been around a very long time. The density and price equation just makes flash a possible choice these days.
I'd love to see flash exceed notebook hard drive storage capacity in a reasonably well performing device. Ideally that would be where my next drive upgrade will come from. I just don't feel, though, that it will happen as soon as some would like. One can already get 500GB notebook drives that are reasonably priced; SSD is a generation behind and outrageously priced.
dave
these last two points are what I meant with the nascent quote. it's pretty NEW that we are considering flash as an HDD replacement, and it just seems that no matter how fast SSD advances in capacity, and costs drop, HDD is still one step (although usually two) ahead. it's as NEARLY THERE as Jobs regards the netbook craze.
Comments
http://www.tomshardware.com/news/asu...ptop,6771.html
Hopefully, Apple will have 512GB SSDs ready for some of their computers by this summer.
Quote:
Has the SSD problem of limited read/write cycles been overcome? (limited in comparison to HDD)
According to manufacturers it has, with "wear leveling" technology.
Still, if data robustness is paramount, going SLC is the way.
BTW I read an interesting prognostication about the roadmap of SSD.
http://www.storagesearch.com/ssd-law-1.html
Prediction 1 - Flash SSD throughput and IOPs (in traditional HDD form factors) will more than double every year in the period from 2007 to 2012.
This predicts (in effect) that in 2011 a single 3.5" form factor flash SSD will be able to deliver similar throughput to some of the fastest RAM SSDs available in 2007, with over 2,000MB/s sustainable reads and writes.
So in essence by 2012 we'll be cheering the fact that the world is in fact NOT ending and our SSD drives are passing Terabytes of information in just a few seconds. Clearly we're going to need some sort of optimized controller to see even a fraction of these speeds. For once the controller and bus may be the limiting factor
Prediction 2 - Rackmount flash SSD throughput and IOPS performance will be a multiple of the performance for a single disk. These factors have already been shown to be scalable in SSD RAID arrays.
This needs little explanation as some of the results are intuitive and we've already published plenty of articles on this subject. However, some of the architectural features which are now used in SSD RAID systems - such as MFT technology - can also be designed into individual SSD disk modules.
Hmmmm so take that gonzo performance of a single SSD and multiply by the number of SSD drives in a system since it scales well enough.
Prediction 3 - The asymmetry of sustained read to write IOPs will improve from 10 to 1 (the fastest devices available in 2007) - but will never achieve parity (1 to 1).
As this change occurs in the market flash SSD arrays will become viable choices in many enterprise server speedup applications which hitherto had been the exclusive domain of RAM SSDs.
In (typical) database applications with Read:Write ratios of 4:1, an ideal flash SSD with 10:1 R/W IOPS is approximately 3x (2.8x to be exact) slower in overall applications performance than an ideal RAM SSD with similar MB/s throughput.
When flash SSDs improve to 5:1 R/W IOPS - the overall applications ratio will be about 2x slower than RAM SSDs (1.8x to be exact).
This portends big improvements in write performance to close the gap.
Prediction 4 - Latency in flash SSDs will not scale in the same way as throughput, and will always be significantly worse than that in ideal RAM SSDs.
The ratio of read access times for RAM SSDs compared to flash SSDs may improve for a few years (as the gap gets smaller) but then it will hit a brick wall - and may in fact get worse again.
The reason is - that flash SSDs have not yet been optimized for latency - so there is some scope to reduce the latency gap with RAM systems (which have already been highly optimized).
But in future product generations as flash SSDs increase in density - a read or write cycle becomes an increasingly complicated on-chip process - which includes calibration, error correction and address translation all being done by controllers between the memory array and the host interface controller or card data bus.
This series of steps (to do a simple read) will diverge from what happens in a typical RAM to the point where flash and RAM look like completely different species. That's unlike earlier generations of flash in which the read cycle looked the same as a static RAM - but simply took longer.
Keep in mind that a 15k HDD drive has rougly 4.17ms latency. Intel's X-25m MLC SSD has a latency of 0.085ms (source: SSD-Reviews.com). RAM SSD is far better but both clobber HDD with no empathy at all.
The latency is one of the reasons why SSD respond so well to multiasking as it handles many IO operations in the same time it takes HDD to issue far fewer.
Has the SSD problem of limited read/write cycles been overcome?
limited in comparison to HDD
This limited write "problem" can be an issue in some applications. It really depends on what your write profile is like. For many uses / users it may never be a problem.
Research continues in the durability problem so flash is getting better. Micron for instance just recently announced an improvement to it's flash that make MLC as durable as old SLC and makes SLC much more durable. The life span however is still finite.
The bigger problem with flash is that development may very well be running out of steam. There is a real question out there about flash being able to meet future density needs.
Dave
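Dave's "finite but rarely a problem" point can be illustrated with some back-of-envelope arithmetic. All the figures below (P/E cycle count, daily writes, write amplification) are illustrative assumptions, not any vendor's spec:

```python
# Rough drive-lifetime estimate under ideal wear leveling.
# All numbers here are illustrative assumptions for the sketch.

def lifetime_years(capacity_gb: float, pe_cycles: int,
                   writes_gb_per_day: float,
                   write_amplification: float = 2.0) -> float:
    """Years until the erase budget is exhausted, assuming the
    controller spreads writes evenly across all cells."""
    total_writable_gb = capacity_gb * pe_cycles / write_amplification
    return total_writable_gb / writes_gb_per_day / 365.0

# Hypothetical 60GB drive rated at 10,000 P/E cycles, 20GB written per day:
print(f"{lifetime_years(60, 10_000, 20):.0f} years")  # prints: 41 years
```

Even with pessimistic write amplification, a typical desktop write profile exhausts the erase budget only after decades; heavy server-style write workloads are where the finite lifespan actually bites.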
I want this in my Air!
240GB 1.8" good quality (Samsung) SSD for $499. I am going to get one of these as soon as they become available. Woot!
For 2.5-inch form factor SSDs, reads over 200MB/s and writes of at least 160MB/s are a certainty, but the Air uses an esoteric SATA-LIF connector instead of standard micro-SATA, which means the 1.8-inch, 5mm-thick 240GB drive may not be doable. It's a shame though. RunCore is the only company that shows promise for this drive....
I vaguely remember an article awhile back about Snow Leopard having some kind of technology
that would optimize data transfers with SSDs. Anyone else remember it?
Yup
http://zaynehumphrey.wordpress.com/2...s-performance/
and
Jonathan Schwartz on ZFS optimizing for SSD
But simply introducing Flash as yet another tier of storage in a datacenter isn't the real opportunity - that adds new costs and a set of new management hassles. To truly change the industry, adding Flash would have to be completely transparent to users and operators, alike, with no switching or operational cost. And that's exactly what we're doing with ZFS. ZFS will transparently incorporate Flash into the storage hierarchy of a running system, using the microprocessor cache for the most performance sensitive tasks, DRAM for the next, then Flash, then disk (then ultimately tape). ZFS will allow Flash to join DRAM and commodity disks to form a hybrid pool automatically used by ZFS to achieve the best price, performance and energy efficiency conceivable. Simply put, our storage and server systems will get enormously faster - without any upgrade to the microprocessor. Adding Flash will be like adding DRAM - once it's in, there's no new administration, just new capability.
I've been chatting up SSD as yet another tier in storage lately. It makes total sense. In fact, think of SSD as turning an array into a big iPod. The original iPod was efficient because it loaded songs into RAM and then spun the hard drive down. There's little difference in how SSDs would allow arrays to consume less power and perform better by sitting at the front end and intelligently caching data that needs to be delivered with low latency.
ZFS is certainly key here, and I await Apple's response to using ZFS on the OS X desktop
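The "SSD as a caching tier" idea described above can be sketched as a simple read-through cache in front of a slower backing store. This is a toy LRU model of the concept, not ZFS's actual implementation:

```python
from collections import OrderedDict

class TieredStore:
    """Toy model: a small fast tier (think SSD) caching a large slow
    tier (think HDD), evicting least-recently-used blocks."""

    def __init__(self, cache_blocks: int):
        self.cache = OrderedDict()      # block_id -> data (fast tier)
        self.backing = {}               # block_id -> data (slow tier)
        self.capacity = cache_blocks
        self.hits = self.misses = 0

    def write(self, block_id, data):
        self.backing[block_id] = data

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)    # refresh LRU position
            self.hits += 1
            return self.cache[block_id]
        self.misses += 1
        data = self.backing[block_id]           # slow-tier access
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)      # evict the LRU block
        return data

store = TieredStore(cache_blocks=2)
for b in ("a", "b", "c"):
    store.write(b, b.upper())
store.read("a"); store.read("a"); store.read("b")
print(store.hits, store.misses)   # prints: 1 2
```

Hot blocks get served from the fast tier while cold data stays on cheap spinning disk, which is exactly the power and latency win the iPod analogy describes.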
This limited write "problem" can be an issue in some applications. It really depends on what your write profile is like. For many uses / users it may never be a problem.
Research into the durability problem continues, so flash is getting better. Micron, for instance, recently announced an improvement to its flash that makes MLC as durable as old SLC and makes SLC much more durable. The lifespan, however, is still finite.
The bigger problem with flash is that development may very well be running out of steam. There is a real question out there about flash being able to meet future density needs.
Dave
Well, as I understood it, OS X accesses the HDD rather often in its current form. Please note it's been a while since I boned up on any of this, but it made me want to hold off until I heard "big" news about this being overcome.
Of course the rewrite of OS X that is (and will be) Snow Leopard will likely take this into account (at the very least, on some level it should be "improved" for SSD). It's a very forward-looking "Apple" thing to do, after all: more SSD thinness in Macs.
Re density: over the course of CES reporting, I read at least two companies stating that 2TB was on the horizon (can't remember the time scale), but HDDs haven't quite got to 2TB yet, and how small a particle can you charge on a platter? If anything is running out of steam/space, it's HDDs.
My only worry is that the 3.5" form factor will be done away with, and I would really like to get plenty of use out of my Drobo with 3.5" SSD drives in the future.
I guess though if one were to paraphrase Steve "SSD is a nascent technology, we have some interesting ideas about what to do with it, and we are keeping an eye on it"
--
I'm holding off on buying a MacBook, and I think I'll opt for an SSD if at all possible, price-to-capacity dependent. If I hear SL has code to "help" SSD along with the read/write thing, then it's a sure thing.
Yup
http://zaynehumphrey.wordpress.com/2...s-performance/
and
Jonathan Schwartz on ZFS optimizing for SSD
I've been chatting up SSD as yet another tier in storage lately. It makes total sense. In fact, think of SSD as turning an array into a big iPod. The original iPod was efficient because it loaded songs into RAM and then spun the hard drive down. There's little difference in how SSDs would allow arrays to consume less power and perform better by sitting at the front end and intelligently caching data that needs to be delivered with low latency.
ZFS is certainly key here, and I await Apple's response to using ZFS on the OS X desktop
was gonna reply to your other post, but then saw ZFS and who in their right mind could miss the opportunity to post about ZFS
do we live in interesting times? HELL YES!
bring ZFS on SSD ASAP
256GB SSD that performs well.
http://www.bit-tech.net/hardware/sto...b-ssd-review/1
256GB SSD that performs well.
Thanks for the link. It is illuminating. It reinforces my thinking on SSDs. They are faster than HDDs, but they are not fast enough to justify their extra cost. Neither are they fast enough to overcome concerns about their limited read/rewrite cycles.
Verdict: SSDs are getting there, but they aren't there yet.
I'm using my Air for development, and as part of my build process some 1000-2000 files are deleted and regenerated (javadocs). This process is really, really slow on my machine. It used to take 90 seconds on my 5-year-old Dell desktop and takes anywhere between 300 and 700 seconds on my Air. I attribute this to the slow writes of the SSD and really regret buying this model. I am trying to swap out the SSD for a SATA drive and called Apple, but they said this isn't possible.
So while SSD might be useful for "regular" use, it really sucks for my use. My only option is to sell this 1-month-old Air and buy the model with the SATA disk.
Thanks,
Sanjiv
The write issue has only affected some SSDs with JMicron controllers, and MLC drives in particular. You may want to keep your eyes on the pricing of SLC SSDs, which do not have the write slowdown of MLC SSDs.
Also, you're going to see designs that ameliorate the write slowdowns of MLC SSDs. JMicron has new controllers that do some special things to help, and many SSD vendors are adding DRAM caches to prevent slowdowns as well.
http://www.ocztechnology.com/product...ata_ii_2_5-ssd
Offering your system the incredible performance of flash-based technology, The OCZ Vertex Series delivers the performance and reliability of SSDs at less price per gigabyte than other high speed offerings currently on the market. The OCZ Vertex Series is the result of all the latest breakthroughs in SSD technology, including new architecture and controller design, blazing 200MB/sec read and 160MB/sec write speeds, and featuring up to 64MB onboard cache. OCZ continues to place solid state technology within reach of the average consumer, and delivers on the promise of SSDs as an alternative to traditional hard drives in consumer targeted mobile applications.
Interview with JMicron
When did you first find out about the write latency issue?
We have been developing SSD technology since 2006 and launched our first generation SSD controller, JMF601A/602A at the end of 2007. It soon attracted the attention of SSD makers because of the feature set and high performance. We found the write latency issue around March, 2008. The issue only happens under a special condition, when the system data is close to full and the host keeps writing data on it. It takes time to do internal garbage collection, data merge and housekeeping.
What did you do to solve it?
We revised the hardware architecture and launched JMF601B/602B in June 2008. JMF601A/602A was the old version after B version was available. Currently, all JMicron customers are using latest version, including ASUS NB/EeePC, OCZ, Super Talent, Transcend, etc. The B version improves the write latency a lot. Besides, JMicron also can reserve more spare blocks to alleviate the issue. Because more spare blocks reservation would decrease the drive capacity, most SSD makers tend to not enlarge the spare size.
What do you have planned for the future?
Some customers have introduced high speed SSDs with JMicron's RAID controller JMB390, plus two JMF602B controllers. The target performance is 233MB/sec on sequential read and 166MB/sec on sequential write. Moving forward, JMicron is developing SSD controllers with DRAM cache and it is expected to be available in Q3 2009. That will totally solve the random read/write performance issue.
2009 should be a watershed year for SSD.
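JMicron's explanation above - that the stalls appear when the drive is nearly full and garbage collection kicks in - can be modelled crudely: the fuller the drive, the more still-valid pages the controller must copy out before it can erase a block. This is a deliberately simplified write-amplification model, not JMicron's actual algorithm:

```python
# Toy model of why a nearly-full SSD writes slowly: to reclaim an
# erase block, the controller first copies out its still-valid pages,
# and a fuller drive has fewer mostly-empty blocks to choose from.

def write_amplification(fullness: float) -> float:
    """Crude greedy-GC approximation: flash writes per host write
    grow sharply as free space shrinks. Illustration only."""
    assert 0.0 <= fullness < 1.0
    return 1.0 / (1.0 - fullness)

for fullness in (0.50, 0.80, 0.95):
    print(f"{fullness:.0%} full -> ~{write_amplification(fullness):.0f}x "
          f"flash writes per host write")
```

This is also why reserving more spare blocks (as the interview mentions) helps: it keeps the effective fullness, and therefore the garbage-collection overhead, down.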
Thanks hmurchison. I'll have to do some reading on the points you mention. It appears that only a hardware replacement will help resolve this issue. The Apple guys say that the Air is "closed" and parts like disk cannot be replaced (by them). Do I have any other options? Are there local authorized centers that might be able to help me without causing me to lose warranty?
Sanjiv
I don't know how easy or hard it is to modify the MBA, but there are faster 1.8" SSDs coming.
http://www.hardmac.com/news/2009-01-23/#9481
I think you may be able to have an Apple Service Center replace the SSD with another drive and keep your warranty.
Well, as I understood it, OS X accesses the HDD rather often in its current form. Please note it's been a while since I boned up on any of this, but it made me want to hold off until I heard "big" news about this being overcome.
Most Unix or Unix-like systems access the secondary store frequently, but these are often reads. Some Unix file systems track access times, which means a write for every read. I'm not up to date on Apple's specifics, but on some systems that feature can be turned off with relative ease. So this particular wear problem can be addressed.
What would take more time is optimizing the part of the I/O subsystem that manages traffic to the physical disks. Many file systems and the associated drivers are designed to work around the latency of the slow magnetic disk drive. Because SSDs deliver data much more quickly than a magnetic drive, the I/O subsystems have to be reworked to make use of that quickness.
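One concrete example of that point: elevator-style IO schedulers reorder requests to minimize head movement, which buys a lot on an HDD and almost nothing on an SSD, where access cost is flat. A toy cost model (block addresses and the linear seek cost are my own simplifying assumptions):

```python
# Toy cost model: HDD access cost grows with seek distance, while SSD
# cost is flat, so sorting the request queue only helps the HDD.

def hdd_cost(requests):
    """Sum of seek distances between consecutive block addresses,
    starting from block 0."""
    pos, total = 0, 0
    for r in requests:
        total += abs(r - pos)
        pos = r
    return total

random_order = [900, 10, 800, 50, 700]
elevator_order = sorted(random_order)      # what an elevator scheduler does

print("HDD cost, random order:  ", hdd_cost(random_order))
print("HDD cost, elevator order:", hdd_cost(elevator_order))
# An SSD pays the same flat cost per request in either order.
```

The sorted queue cuts the modelled seek cost by more than 4x on the HDD, while an SSD gains nothing from the reordering; that machinery is pure overhead once the latency it was built to hide is gone.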
Of course the rewrite of OS X that is (and will be) Snow Leopard will likely take this into account (at the very least, on some level it should be "improved" for SSD). It's a very forward-looking "Apple" thing to do, after all: more SSD thinness in Macs.
I would hope and suspect that that is true, i.e. that SL is being optimized for SSD. What the payoff is for any individual user is yet to be determined. I'm not sure that every app would benefit noticeably when comparing a file system optimized for SSD against one that simply runs an SSD on the old architecture.
Re density: over the course of CES reporting, I read at least two companies stating that 2TB was on the horizon (can't remember the time scale), but HDDs haven't quite got to 2TB yet, and how small a particle can you charge on a platter? If anything is running out of steam/space, it's HDDs.
That doesn't jibe with the info I have seen. The problem likewise is those pesky bits and the space required to store them. Remember that flash is at a very small process node already. It will become extremely expensive to go to the next node, if a reliable process can even be found at that node. The question is: does one invest in such a production facility for a mature technology, or look at more promising tech? There are a number of potential replacements for flash in the labs right now; it is not a stretch to see flash as limited in lifespan.
My only worry is that the 3.5" form factor will be done away with, and I would really like to get plenty of use out of my Drobo with 3.5" SSD drives in the future.
Hey how do you like your Drobo? I was looking into the little beast a week or two ago.
On the other hand, I do wish Apple or somebody with some balls would break away from the old magnetic drive formats and shift to something suitable for PC-board-mountable components. Solid state storage should slip into place on a computer much like an expansion card does today. Just look at the saved space. Further, the old disk interfaces are really just too slow for a reasonably fast array of storage. I'd rather see SSD storage skip legacy interfaces and go directly to a fast PCI-Express connection.
Imagine a Drobo made up of thin SSD storage cards that slip into the Drobo as simple PC cards. That is, your Drobo becomes a traditional card rack that groups together the storage cards in a very compact format.
I guess though if one were to paraphrase Steve "SSD is a nascent technology, we have some interesting ideas about what to do with it, and we are keeping an eye on it"
I don't see this as Apple's position at all. The only thing causing them problems is the cost of flash and its unstable behavior in the market. Besides that, there really isn't much that is new about flash, as it has been around a very long time. The density and price equation just makes flash a possible choice these days.
--
I'm holding off on buying a MacBook, and I think I'll opt for an SSD if at all possible, price-to-capacity dependent. If I hear SL has code to "help" SSD along with the read/write thing, then it's a sure thing.
I'd love to see flash exceed notebook hard drive storage capacity in a reasonably well-performing device. Ideally that is where my next drive upgrade would come from. I just don't feel, though, that it will happen as soon as some would like. One can already get 500GB notebook drives that are reasonably priced; SSD is a generation behind and outrageously priced.
dave
As our data grows, it becomes much more important to back up to an external RAID 1 or multiple HDDs.
So let's say a 60GB SSD is enough for the mainstream: 10GB for the OS and 20GB for apps leaves 30GB for your frequently accessed files.
Flash isn't going to change much from now until 2010, apart from getting cheaper. However, the controller and software (the current limitations of SSDs) will get much more refined and improved.
OCZ has already demonstrated the Vertex 2 hitting 550MB/s reads. Micron has stated it could get 800MB/s with its new controller.
All Apple needs to do is snap up a controller and flash... and voilà, an Apple SSD.
We could ditch SATA if it is a limitation. PCI Express 2.0 x2 will do.
I have been using a 60GB HDD as my main hard drive for 3 to 4 years, and I see no capacity problem.
My MBP isn't even a year old yet and it has a 200GB drive, and I ran out of space on it relatively quickly. That led to the removal of a lot of stuff that I'd rather have on the machine. It is fine that you can get by with 60GB of storage; in my case 60GB was gone in a couple of months. Now some of that was due to big memory hogs like Xcode, NeoOffice and others, and some was due to iTunes and my more-than-passing interest in photography. It's not hard to end up with a huge Aperture database!
As our data grows, it becomes much more important to back up to an external RAID 1 or multiple HDDs.
Of course backups are important, but that does not make up for the need to have data online.
So let's say a 60GB SSD is enough for the mainstream: 10GB for the OS and 20GB for apps leaves 30GB for your frequently accessed files.
Let's clear that up first by saying it isn't enough. First, that is not enough bytes for today's user. Second, you don't want all your dynamic data occupying a small section of the SSD, for fear it might cause wear leveling to fail.
Flash isn't going to change much from now until 2010, apart from getting cheaper. However, the controller and software (the current limitations of SSDs) will get much more refined and improved.
While this is certainly true, it misses one important point: flash can already be much faster than traditional drives when space doesn't matter. That is true today, but it does require shopping around for that performance.
OCZ has already demonstrated the Vertex 2 hitting 550MB/s reads. Micron has stated it could get 800MB/s with its new controller.
Which brings up the question of how they will transfer that data to main memory. SATA is already outclassed. The biggest problem with flash right now is the attachment to legacy interfaces. What we really need is a PCI Express interface to these mass storage devices.
All Apple needs to do is snap up a controller and flash... and voilà, an Apple SSD.
What Apple needs to do here is define a new high-speed storage interface that overcomes the limitations of SATA and the legacy mechanical disk form factor. Flash is nothing more than semiconductors and could go on a really cheap expansion card that simply plugs into some form of PCI Express slot. That would give us a very low-cost path to secondary storage. A card rack could be designed to mechanically hold three flash-based SSDs in the place of one 3.5" disk and be much faster.
We could ditch SATA if it is a limitation. PCI Express 2.0 x2 will do.
It's not that we could but rather that we should! There are already flash-based PCI Express cards that can do 750MB per second, so the idea is sound. What needs to be addressed, though, is the card format, which was never well done on a PC. That is where a new card format comes into play: design it mechanically to deal with the wasted space and the lack of airflow. Going to a new format gives us the opportunity to correct the mistakes of the past.
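The bandwidth arithmetic backs this up. A quick sketch comparing SATA II against a PCI Express 2.0 x2 link, using the standard effective per-lane figures after 8b/10b line-encoding overhead:

```python
# Interface bandwidth comparison (MB/s, after 8b/10b line encoding).
SATA_II = 300          # 3.0 Gbit/s link -> ~300 MB/s effective
PCIE2_PER_LANE = 500   # PCIe 2.0: 5.0 GT/s -> ~500 MB/s per lane

def pcie2_bandwidth(lanes: int) -> int:
    """Aggregate effective bandwidth of a PCIe 2.0 link."""
    return PCIE2_PER_LANE * lanes

print(f"SATA II:      {SATA_II} MB/s")
print(f"PCIe 2.0 x2:  {pcie2_bandwidth(2)} MB/s")
print(f"PCIe 2.0 x4:  {pcie2_bandwidth(4)} MB/s")
```

An x2 link's ~1000MB/s already covers the 550-800MB/s controller figures mentioned upthread, while SATA II's ~300MB/s does not.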
Most Unix or Unix-like systems access the secondary store frequently, but these are often reads. Some Unix file systems track access times, which means a write for every read. ......What would take more time is optimizing the part of the I/O subsystem that manages traffic to the physical disks. .........
cheers
That doesn't jibe with the info I have seen. The problem likewise is those pesky bits and the space required to store them. Remember that flash is at a very small process node already. It will become extremely expensive to go to the next node, if a reliable process can even be found at that node. The question is: does one invest in such a production facility for a mature technology, or look at more promising tech? There are a number of potential replacements for flash in the labs right now; it is not a stretch to see flash as limited in lifespan.
How many of those micro mini flash cards (little more than a square cm in total) at 16/32GB could you fit in a 3.5" case (or a 2.5", for that matter)?
answer .. LOTS
but I get your point.
Hey how do you like your Drobo? I was looking into the little beast a week or two ago.
Some initial problems - never lost any data, but that first week or so was a PAIN! Narrowed it down to a faulty drive (a "free" 250GB I had spare; it was busted).
But the Drobo sits via FireWire on a mini, with LOTS of films and episodes for my iTunes library - not a complaint or peep out of it. There have been issues with some 1.5TB drives, although it seems to be fixed, but you may want to avoid the 1.5TBs till the problem ones are flushed out of the supply chain.
1TB WD Greens in mine.
I hope they sell enough to make a go of it (long term, damned recession); it's a great little product and I can't see being without one. The fact that the storage can grow EASILY with you is just so "Apple-like" that I wouldn't be surprised if Apple snapped them up.
IF it fits your needs, then yeah go for it
I'd rather see SSD storage skip legacy interfaces and go directly to a fast PCI-Express connection.
yes and no. Ideally both flavours, at least in transition.
Imagine a Drobo made up of thin SSD storage cards that slip into the Drobo as simple PC cards. That is, your Drobo becomes a traditional card rack that groups together the storage cards in a very compact format.
Mmm, the ZFS "pool of storage". I keep wondering how Cook/Jobs are gonna "Appleize" that, what it means for TM, and how they will "one last thing" it.
And how they would tie that "pool" into "the cloud", 'cos you just KNOW Steve would think that was "waay cool" when selling it to a packed house.
I don't see this as Apple's position at all. The only thing causing them problems is the cost of flash and its unstable behavior in the market. Besides that, there really isn't much that is new about flash, as it has been around a very long time. The density and price equation just makes flash a possible choice these days.
I'd love to see flash exceed notebook hard drive storage capacity in a reasonably well-performing device. Ideally that is where my next drive upgrade would come from. I just don't feel, though, that it will happen as soon as some would like. One can already get 500GB notebook drives that are reasonably priced; SSD is a generation behind and outrageously priced.
dave
These last two points are what I meant with the nascent quote.