These hard drive capacities are just getting ridiculous!...

Posted in General Discussion · edited January 2014
So, check this out, and get back to me if you have any solutions...



Seeing as how these gargantuan hard drives are now becoming commonplace, there's a fairly good chance of hitting the situation where you want to install one in your older Mac, but it ends up not being able to see the whole capacity of the drive. I see a good opportunity to throw one into my B&W G3, but I am a bit concerned about getting stuck with a drive that I cannot use completely due to some obsolete IDE controller issue. Should I be concerned? If this is an issue, can it be circumvented by just partitioning into smaller sizes that are recognizable?



My real question is: is there some database somewhere that explicitly indicates maximum HD capacities for older Macs?

Comments

  • Reply 1 of 6
    eugene · Posts: 8,254 · member
    137 GB (128 GiB) is the only number you need to know for now. That covers at the very least beige G3s through Quicksilver G4s.



    The old controllers use 28-bit LBA, which limits the number of addressable sectors on a drive to 2^28. Each sector is 512 bytes, so that translates into 512 * 2^28 = 137,438,953,472 bytes.



    The new controllers use 48-bit LBA, so I don't think you'll have to worry about capacity for a while...unless you have a 144 petabyte requirement...
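
    For anyone who wants to double-check the arithmetic, here's a quick throwaway sketch (plain Python, nothing Mac-specific; the function name is just mine) that reproduces both figures from the sector size and the address width:

    ```python
    # Quick check of the LBA arithmetic above.
    SECTOR_BYTES = 512  # classic ATA sector size

    def lba_limit_bytes(address_bits):
        """Maximum capacity addressable with the given LBA width."""
        return SECTOR_BYTES * 2 ** address_bits

    print(lba_limit_bytes(28))  # 137438953472       -> ~137 GB (128 GiB)
    print(lba_limit_bytes(48))  # 144115188075855872 -> ~144 PB
    ```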



    The best way around the issue is to circumvent the built-in ATA controller altogether, since it will limit not only the capacity but also the throughput of these newer drives. Get a new ATA PCI card or an ATA-6-capable FireWire enclosure. I've heard that you can partition a large drive into smaller volumes while it's connected to a capable IDE controller, but you can't do it with the built-in controller without the help of a software driver/hack whose name I've forgotten. Either way, it seems like a risky proposition to be fooling around with a software enabler.
  • Reply 2 of 6
    Wow, thanks for the good info! So a 160 GB HD would be too much HD for my BW G3, and I take it partitioning would not make any difference?





    ...OTOH, my G4 iBook should then be able to read all of it as an external FW drive, no? Hmmm...
  • Reply 3 of 6
    Putting an internal HD larger than 128 GB into a FireWire case will allow your B&W G3 to see all of the space. That said, I would recommend going the PCI ATA controller route, since Rev. A B&W G3s can't support a slave drive without data corruption occurring.
  • Reply 4 of 6
    Did you mean "below 128 GB" instead of "above"? ...or are you suggesting that the FireWire interface itself would circumvent the IDE controller issue? (I never even considered that, actually.) Would USB work pretty much the same way (except not as fast, of course)? Yeah, I know, USB 1 on a B&W would be painfully slow (especially for a drive of that size), but it would only serve as an offline archiving device if I do use it as an external drive. I would also be able to use it as an external drive for my laptop (with USB 2), so I would be able to enjoy higher bandwidth in that scenario.



    Any which way, 128 GB will be about all she's good for if I plan to use the new HD internally (as a replacement for the existing drive), right?



    Getting back to the sentiment of the topic title, it seems to me that it doesn't really matter what interface we are using once we get to doing giant file moves comparable to these hundreds-of-GB HDs (...and even "puny" 40-60 GB drives). You will be doing a lot of waiting. Pushing 1-2 GB, or defragging a 50%-filled 20 GB partition, across FireWire yields "get a bite to eat" kinds of waits (rough numbers below). ATA-66 is surely faster, but these giant HDs aren't necessarily pushing those thresholds much in sustained throughput off the platter. Most of all, it seems the OS and data-verification processes suck out most of that performance in the end.
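
    To put some rough numbers on that, here's a back-of-envelope sketch; the sustained rates below are my assumptions for illustration, not measurements:

    ```python
    # Back-of-envelope copy-time estimate. The sustained throughput
    # figures below are assumptions, not measured numbers.

    def copy_minutes(size_gb, sustained_mb_per_s):
        """Minutes to move size_gb of data at the given sustained rate."""
        return (size_gb * 1024) / sustained_mb_per_s / 60

    # Pushing 2 GB over FireWire 400, assuming ~20 MB/s sustained in practice:
    print(round(copy_minutes(2, 20), 1))   # ~1.7 minutes
    # The same 2 GB over USB 1.1, whose bus tops out around 1.5 MB/s:
    print(round(copy_minutes(2, 1.5), 1))  # ~22.8 minutes
    ```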



    What I'm getting at is that it seems we need a BIG jump in media throughput, and in controller-offloaded CPU resources, to really keep pace with these umpteen-GB HDs, no? The waits are like pulling stuff off a 3.5" floppy (frustration-wise), even though we are dealing with ridiculously bottomless capacities.
  • Reply 5 of 6
    eugene · Posts: 8,254 · member
    Like I said, most FireWire enclosures out there now should handle >137 GB HDDs just fine. The only devices that would be limited by the 28-bit LBA on your built-in IDE controller would be those directly connected to it.



    Your best two options are to buy an internal Fast Ultra-ATA PCI adapter or to get a FireWire enclosure that supports ATA-6.



    For all intents and purposes, consumers shouldn't be worried about throughput issues. I can't really see any application that would require a consumer to saturate a modern HDD's 60 MB/s sustainable throughput.
  • Reply 6 of 6
    randycat99 · Posts: 1,919 · member
    I guess FW and USB 2 are the way to go in this case; it never even occurred to me that they would sidestep the IDE controller limitation.



    (wrt whether 60 MB/s is "enough" for now)

    I don't think that is what we really see in the field when it comes to moving real files around, especially writing to something. I'm talking about real throughput off the media itself, not from a burst cache, and with all the other OS-imposed write-verify and file-security goodies going on. Even if it were that high on a sustained basis, if you are throwing around 10 GB archive files on a 200 GB HD, you will be waiting at least a little while (even if there is absolutely nothing else for you to do until that file copy finishes). I would hazard a guess that a "real" 10 MB/s is about typical for a slowish HD and 20 MB/s for a fast-ish one.
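
    To put that in perspective (back-of-envelope, using the rates above): a 10 GB file is roughly 10,240 MB, so at a "real" 10 MB/s it takes about 17 minutes, at 20 MB/s about 8.5 minutes, and even at a sustained 60 MB/s it's still close to 3 minutes.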



    True, there isn't much the average consumer does that needs that kind of performance, but it bears upon the same wants a casual consumer has when it comes to wanting their CD rip to be done in 30 seconds. It's wonderful to have an entire CD ripped and in your hands in 30 seconds, but will I die if it takes 20 minutes on an "older" system? Of course not, but it can be unbearably slow if I have to wait for it, right?



    Anyway, I just wanted to lament how media throughput hasn't quite scaled with how quickly capacity has gone up.