These hard drive capacities are just getting ridiculous!...
So, check this out and get back to me if you have any solutions...
Seeing as how these gargantuan hard drives are now becoming commonplace, isn't there a fairly good chance of running into the situation where you want to install one in your older Mac, but it ends up not being able to see the drive's whole capacity? I see a good opportunity to throw one into my BW G3, but I am a bit concerned about getting stuck with an HD that I cannot use completely due to some obsolete IDE controller issue. Should I be concerned? If this is an issue, can it be circumvented by just partitioning into smaller sizes that are recognizable?
My real question is: is there a database somewhere that explicitly indicates maximum HD capacities for older Macs?
Comments
The old controllers use 28-bit LBA, which limits the number of addressable sectors on an HDD to 2^28. Each sector is 512 bytes, so that translates to 512 * 2^28 = 137,438,953,472 bytes (about 137 GB, i.e. the oft-quoted 128 GiB limit).
The new controllers use 48-bit LBA, so I don't think you'll have to worry about capacity for a while...unless you have a 144 petabyte requirement...
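A quick back-of-the-envelope check of that math in Python (assuming the standard 512-byte sector size used above):

SECTOR_BYTES = 512

def lba_limit(bits):
    # Maximum addressable capacity, in bytes, for a given LBA address width.
    return (2 ** bits) * SECTOR_BYTES

print(lba_limit(28))          # 137438953472 bytes, ~137 GB decimal
print(lba_limit(28) / 2**30)  # 128.0 GiB, hence the "128 GB barrier"
print(lba_limit(48))          # 144115188075855872 bytes, ~144 PB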
The best way around the issue is to bypass the built-in ATA controller altogether, since it will limit not only the capacity but also the throughput of these newer drives. Get a new ATA PCI card or an ATA-6 capable FireWire enclosure. I've heard that you can partition a large drive into smaller volumes while it's connected to a capable IDE controller, but you can't do it with the built-in controller without the help of a software driver/hack whose name I've forgotten. Either way, it seems like a risky proposition to be fooling around with a software enabler.
...OTOH, my G4 iBook should then be able to read all of it as an external FW drive, no? Hmmm...
Any which way, 128 GB will be about all she's good for if I plan to use the new HD internally (as a replacement to the existing drive), right?
Getting back to the sentiment of the topic title, it seems to me that it doesn't really matter what interface we are using once we get into giant file moves on the scale of these hundreds-of-GB HDs (and even "puny" 40-60 GB drives): you will be doing a lot of waiting. Pushing 1-2 GB, or defragging a 50%-filled 20 GB partition, across FireWire yields "get a bite to eat" kinds of waits. ATA-66 is surely faster, but these giant HDs aren't necessarily pushing that threshold much in sustained throughput off the platter. Most of all, it seems the OS and data-verification processes suck out most of that performance in the end.
What I'm getting at is that it seems we need a BIG jump in media throughput (and controllers that don't eat up CPU resources) to really keep pace with these umpteen-GB HDs, no? The waits feel like pulling stuff off a 3.5" floppy (frustration-wise), even though we are dealing with ridiculously bottomless capacities.
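To put rough numbers on that lament (the sustained rates below are just my guesses for drives of each era, not measurements), here's how long an end-to-end sweep of a whole disk takes:

# Time to read or write an entire drive at an assumed sustained rate.
drives = [
    ("20 GB drive,  ~15 MB/s sustained", 20, 15),
    ("60 GB drive,  ~25 MB/s sustained", 60, 25),
    ("200 GB drive, ~40 MB/s sustained", 200, 40),
]
for label, size_gb, rate_mb_s in drives:
    minutes = (size_gb * 1000) / rate_mb_s / 60
    print(f"{label}: ~{minutes:.0f} min to sweep the whole disk")

Capacity goes up 10x across those rows, but the assumed throughput doesn't even triple, so full-drive jobs just keep getting longer.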
Your best two options are to buy an internal Fast Ultra-ATA PCI adapter or to get a FireWire enclosure that supports ATA-6.
For all intents and purposes, consumers shouldn't be worried about throughput issues. I can't really see any application that would require a consumer to saturate a modern HDD's 60 MB/s sustainable throughput.
(re: whether 60 MB/s is "enough" for now)
I don't think that is what we really see in the field when it comes to moving real files around, especially when writing. I'm talking about real throughput off the media itself, not from a burst cache, and with all the other OS-imposed write-verify and file-security goodies going on. Even if it were that high on a sustained basis, when you are throwing around 10 GB archive files on a 200 GB HD, you will still be waiting at least a little while (assuming there is absolutely nothing else for you to do until that file copy finishes). I would hazard a guess that a "real" 10 MB/s is about typical for a slowish HD and 20 MB/s for a fast-ish one.
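For example, taking my 10 and 20 MB/s guesses at face value (with a 60 MB/s best case thrown in for comparison), copying a single 10 GB archive works out to:

# Minutes to copy a 10 GB file at various assumed sustained write rates.
FILE_MB = 10 * 1000  # 10 GB expressed as 10,000 MB
for rate_mb_s in (10, 20, 60):
    minutes = FILE_MB / rate_mb_s / 60
    print(f"{rate_mb_s} MB/s: ~{minutes:.0f} min")
# prints roughly 17 min, 8 min, and 3 min respectively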
True, there isn't much the average consumer does that needs that kind of performance, but it ties into the same desire a casual consumer has for their CD rip to be done in 30 seconds. It's wonderful to have an entire CD ripped and in your hands in 30 seconds, but will I die if it takes 20 minutes on an "older" system? Of course not, but it can feel unbearably slow if I have to wait on it, right?
Anyway, I just wanted to lament how media throughput hasn't quite scaled with how quickly capacity has gone up.