Electronista: Seagate To Ship 37TB Hard Drives By 2010

Posted in General Discussion, edited January 2014
Quote:

Seagate may achieve a dramatic increase in hard disk storage limits in as little as three years' time, according to Seagate researchers speaking with Wired. The storage device maker has revealed that a technology called heat-assisted magnetic recording, which uses a laser to temporarily heat the platter and store more information in a given area, could increase the density of hard drives to just over 6TB per square inch -- allowing full-size, 3.5-inch desktop hard drives to store 37.5TB of data. The increased space would hold the entire Library of Congress catalog in raw form, according to Seagate.



The magazine also reports that Seagate is working on a small, magnetic form of storage codenamed "Probe" that would compete directly against flash memory. No details of its capacity or performance have been revealed, though it too should become available in the next few years.



Comments

  • Reply 1 of 14
    SpamSandwich Posts: 33,408 member
    Let's see if we can make this happen by 2008.
  • Reply 2 of 14
    jvb Posts: 210 member
    I'm good with my 160 gb hard drive, thanks. Maybe video editing gurus will be excited for this one...
  • Reply 3 of 14
    smax Posts: 361 member
    Interesting... When this idea is refined enough, think of what they could do with tiny hard drives that have practical capacities.



    It'll be damn expensive though, and by that time, solid state drives (which, aside from capacity, have many more advantages) will be widely accepted, so it probably won't see too much use.
  • Reply 4 of 14
    ThinkingDifferent
    Do you really want to entrust that amount of data to a single piece of hardware?
  • Reply 5 of 14
    SpamSandwich Posts: 33,408 member
    Quote:
    Originally Posted by ThinkingDifferent View Post


    Do you really want to entrust that amount of data to a single piece of hardware?



    But you could always Time Machine it off to another ginormous capacity drive.
  • Reply 6 of 14
    jvb Posts: 210 member
    Except the majority of people using these drives will not be running OS X. I can imagine disasters if this baby failed. It would be a huge pain to back it up too, if you didn't have a drive of the same capacity. Think about how many of today's drives that would take...
  • Reply 7 of 14
    smax Posts: 361 member
    Quote:
    Originally Posted by ThinkingDifferent View Post


    Do you really want to entrust that amount of data to a single piece of hardware?



    Heh, throw 2 of those in a RAID 0 array and fill 'em up. Ouch.



    I just had another thought, though. Think of the read/write times for something with that kind of platter density.
  • Reply 8 of 14
    jvb Posts: 210 member
    Think about how long it would take to defragment one of those. Like a week?
  • Reply 9 of 14
    jvb Posts: 210 member
    Quote:
    Originally Posted by tonton View Post


    How about setting up the thing as a RAID-in-a-box? Keep the capacity at 10TB or so and have redundancy built-in.



    That's actually not a bad idea. It would be a clean and safe way, with a much smaller risk of lost data.
  • Reply 10 of 14
    bacillus Posts: 313 member
    I guess the question becomes.... are they going to increase write speed to match?



    The write speeds I've seen are about 60 MB/sec tops. For ease of use, let's assume it becomes 100 MB/sec (real world, not 'in theory' write speed).



    37 × 10^12 B ÷ (100 × 10^6 B/sec) = 0.37 × 10^6 sec, or 370,000 seconds.



    So what...about 4 days, assuming you are just adding data all the time at its max speed, just to fill the thing up.



    My math could be wrong, I did not write it out on paper.
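
    The commenter's arithmetic checks out. As a quick sketch (the 100 MB/sec sustained rate is the commenter's assumption, not a Seagate figure):

    ```python
    # Back-of-envelope time to fill a 37 TB drive at a sustained
    # 100 MB/s write rate (the commenter's assumed real-world speed).
    capacity_bytes = 37e12             # 37 TB, decimal terabytes
    write_rate_bps = 100e6             # 100 MB/s in bytes per second

    seconds = capacity_bytes / write_rate_bps
    days = seconds / 86400             # 86,400 seconds per day

    print(f"{seconds:,.0f} seconds ~= {days:.1f} days")
    # prints: 370,000 seconds ~= 4.3 days
    ```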
  • Reply 11 of 14
    feynman Posts: 1,087 member
    Can't wait to see these things in the Xserve RAID! Though I imagine it will be another year and a half after they actually ship! But imagine 518 TB in one setup and 1.04 Petabytes in two 8)
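    Checking the arithmetic: the Xserve RAID held 14 drive bays per chassis, which gives the 518 TB figure; two chassis come to just over a petabyte:

    ```python
    # Capacity check for the Xserve RAID scenario: 14 drive bays per chassis.
    bays = 14
    drive_tb = 37                            # 37 TB per drive, per the article
    chassis_tb = bays * drive_tb             # 518 TB in one Xserve RAID
    two_chassis_pb = 2 * chassis_tb / 1000   # decimal petabytes

    print(chassis_tb, two_chassis_pb)        # prints: 518 1.036
    ```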
  • Reply 12 of 14
    Quote:
    Originally Posted by jvb View Post


    That's actually not a bad idea. It would be a clean and safe way, with a much smaller risk of lost data.



    If you know anything about server redundancy, it's a horrible idea. It's like saying "oh, I'll make two copies of one file on my hard drive, that way I'm safe" -- wrong. If that drive fails, it doesn't matter if you have a million copies on it; you're pretty much out of luck. If you want to back something up, it must be done on another physical drive.
  • Reply 13 of 14
    addabox Posts: 12,660 member
    But remember, anything we have to say now about insanely huge drives and their seek times, cost, backing up, etc., will inevitably be rendered quaintly naive by time.



    From the perspective of a sub-gig drive user just a few short years ago, all the caveats about a multi-terabyte drive applied to what is today an average-size drive of 80 GB or so.