Originally posted by bauman
Also, it's not just the defragmentation on the fly that keeps HFS+ clean. It, unlike NTFS, tries to find a block that's big enough to write the entire file into by delaying immediate writing.
Where did you get this idea? I was under the impression that NTFS employs that very same technique. When someone describes NTFS as "braindead" relative to HFS+, that gives me doubts as to where that is coming from. They both attempt to do relocated, contiguous writes whenever possible, so what makes one braindead and the other vastly superior? Given this "auto-defragged writes" scenario, how does it come about that Windows machines routinely require defragmenting while OS X does not? Something there is just not adding up for me.
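Just to make the "delayed, contiguous write" idea concrete, here's a minimal sketch of the general technique both sides seem to be describing: buffer the file's data, then search the free-space map for a single extent big enough to hold the whole thing before committing any blocks. This is purely illustrative (the extent list, the best-fit policy, and the numbers are made up for the example); it is not a claim about how HFS+ or NTFS actually implements its allocator.

[code]
# Illustrative sketch only -- not Apple's or Microsoft's actual allocator.
# Delayed allocation idea: know the file's full size before writing, then
# look for one free extent large enough to hold all of it, so the file
# lands contiguously instead of being scattered block by block.

from dataclasses import dataclass

@dataclass
class Extent:
    start: int   # first free block of the run
    length: int  # number of contiguous free blocks

def pick_extent(free_extents, blocks_needed):
    """Return the smallest free extent that can hold the whole file,
    or None if the file would have to be split (i.e. fragmented)."""
    candidates = [e for e in free_extents if e.length >= blocks_needed]
    return min(candidates, key=lambda e: e.length) if candidates else None

# Hypothetical free-space map: runs of free blocks on the volume.
free_map = [Extent(0, 8), Extent(100, 64), Extent(500, 32)]

file_blocks = 20  # whole file is buffered, so its size is known up front
extent = pick_extent(free_map, file_blocks)
if extent:
    print(f"write blocks {extent.start}..{extent.start + file_blocks - 1} contiguously")
else:
    print("no single extent is big enough; the file will be fragmented")
[/code]

If both filesystems do roughly this, the interesting question is what happens when no single extent is big enough, which is where the real-world differences would show up.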
Thanks for the link to the fragmentation scanner; interesting, and I'll check it out...
WRT the paltry 1-2% fragmentation scores, do bear in mind the sheer increase in data sizes and storage media we are dealing with. It's not uncommon to have 10-15 GB of "stuff" consisting of the core OS and closely tied apps (and that isn't even counting your collection of regular apps and documents). That's a lot of data compared to what would fill 1-2 GB of space back in the OS 9 days. Thankfully, most of that stays put, is read-only, and never gets fragged if it wasn't to begin with.

There is still the working set of stuff that gets a lot of read and write-back activity in the course of a given working day, and that working set hasn't really grown much compared to the 10-15 GB of "stuff" sitting around it. So it is entirely possible to get really tiny overall fragmentation scores. However, if the computer is spending most of its time in that working set, and there is fragmentation within it, then the effective impact of fragmentation is likely "larger" than what that 1-2% reflects (hope that made sense).
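To put some rough numbers on that last point, here's a back-of-the-envelope sketch. Every figure in it (14 GB of mostly static files, a 1 GB working set, 20% working-set fragmentation, 90% of the day's I/O hitting the working set) is invented for illustration, not a measurement from any real machine.

[code]
# Rough back-of-the-envelope numbers, not measurements: a big, mostly
# read-only pile of OS/app files that is barely fragmented, next to a
# small working set that sees most of the I/O and is badly fragmented.

os_and_apps_gb, os_frag = 14.0, 0.005   # ~0.5% fragmented, rarely touched
working_set_gb, ws_frag = 1.0, 0.20     # 20% fragmented, heavily used

# Volume-wide score: roughly what a fragmentation scanner would report.
volume_frag = (os_and_apps_gb * os_frag + working_set_gb * ws_frag) / (
    os_and_apps_gb + working_set_gb)

# I/O-weighted score: assume 90% of daily I/O lands in the working set.
io_to_working_set = 0.90
effective_frag = io_to_working_set * ws_frag + (1 - io_to_working_set) * os_frag

print(f"volume-wide fragmentation:  {volume_frag:.1%}")    # ~1.8%
print(f"I/O-weighted fragmentation: {effective_frag:.1%}")  # ~18%
[/code]

Under those made-up assumptions, the scanner reports under 2%, but the files the machine actually spends its day reading and writing look roughly ten times worse.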