Apple Needs A Defragment Program By Apple!

24 Comments

  • Reply 21 of 62
    Check out 'man tunefs' in Terminal; one can set automatic defragmentation options for an internal drive (i.e., space vs speed) and the thresholds thereof. (One may need to boot in single-user mode to actually apply the changes.)
  • Reply 22 of 62
    Quote:

    Originally posted by Barto



    lol... way to modify the quote!
  • Reply 23 of 62
    inkhead Posts: 155 member
    I'm sorry, but after a year, when I go to defrag my disk it takes 6 hours and makes my computer super fast. Repairing permissions, deleting caches, and all sorts of other tricks never do anything comparable.



    Windows has one, and it is useful. You do need to defrag your drives every so often for optimal performance.



    If Windows can have a free one built in, why can't I? Steve said the only area the Mac was behind in was multiple logins, which just isn't true...



    Disks DO need to be defragged from time to time, just not too often.
  • Reply 24 of 62
    big mac Posts: 480 member
    Quote:

    Originally posted by inkhead

    I'm sorry, but after a year, when I go to defrag my disk it takes 6 hours and makes my computer super fast. Repairing permissions, deleting caches, and all sorts of other tricks never do anything comparable.



    Windows has one, and it is useful. You do need to defrag your drives every so often for optimal performance.



    If Windows can have a free one built in, why can't I? Steve said the only area the Mac was behind in was multiple logins, which just isn't true...



    Disks DO need to be defragged from time to time, just not too often.




    If you need it that badly, surely you can spare the money for a utility. I recommend Disk Warrior. The disk optimization isn't important, but you'll be ecstatic when Disk Warrior brings your disk back after a major crash.



    Thank you for the great info, Giant. With so many posts in your count, I wonder why I've never noticed you before.
  • Reply 25 of 62
    inkhead Posts: 155 member
    Obviously I already bought Disk Warrior. It was rather expensive. My PC friend just used the nice disk utilities on his Windows CD to help fix the problem. Everyone who's saying we don't need one is only doing so to defend Apple.



    Look, if Apple made Disk Utility defrag, we'd all be talking about how cool it was that it's included.
  • Reply 26 of 62
    jll Posts: 2,713 member
    Quote:

    Originally posted by inkhead

    Everyone who's saying we don't need one is only doing so to defend Apple.



    I'm so tired of these 'I have an opinion and everyone who doesn't agree with me is an Apple zealot' posts.
  • Reply 27 of 62
    kickaha Posts: 8,760 member
    Quote:

    Originally posted by inkhead

    Obviously I already bought Disk Warrior. It was rather expensive. My PC friend just used the nice disk utilities on his Windows CD to help fix the problem. Everyone who's saying we don't need one is only doing so to defend Apple.



    Look, if Apple made Disk Utility defrag, we'd all be talking about how cool it was that it's included.




    No, I'd be talking about what a waste of resources it was to produce it.



    I'm really sorry that your disk is used such that it *does* fragment badly. (I'm assuming badly... don't suppose you have some numbers?) You are the exception, not the rule. In Windowsland, defragging is *NOT* optional, due to the fscking screwed-up nature of the disk system. *THAT* is why MS provides a defrag utility... because they screwed up, and they know it, and, in classic MS style, it's easier to provide a lame bandage and put out a slick PR piece talking about 'added value' than it is to actually go back and fix the problem.



    Go back and read the above highly informative posts. There are reasons your disk is fragmenting, and other ways to fix it more permanently. (I.e., get a bigger drive, or keep a big chunk free if you can.)
  • Reply 28 of 62
    123 Posts: 278 member
    Quote:

    Originally posted by Kickaha

    No, I'd be talking about what a waste of resources it was to produce it.



    I'm really sorry that your disk is used such that it *does* fragment badly. (I'm assuming badly... don't suppose you have some numbers?) You are the exception, not the rule. In Windowsland, defragging is *NOT* optional, due to the fscking screwed-up nature of the disk system. *THAT* is why MS provides a defrag utility... because they screwed up, and they know it, and, in classic MS style, it's easier to provide a lame bandage and put out a slick PR piece talking about 'added value' than it is to actually go back and fix the problem.



    Go back and read the above highly informative posts. There are reasons your disk is fragmenting, and other ways to fix it more permanently. (I.e., get a bigger drive, or keep a big chunk free if you can.)




    It's getting ridiculous.



    Can you tell me exactly why NTFS is so bad and HFS+ is sooo much better concerning disk fragmentation? I'm sure you can't, because you don't know a thing about file systems. Do you even know that there is more than one Windows file system?
  • Reply 29 of 62
    paul Posts: 5,278 member
    hahahahhaah....



    in before 123 gets his ass handed to him...
  • Reply 30 of 62
    foad Posts: 717 member
    This is getting interesting....
  • Reply 31 of 62
    airsluf Posts: 1,861 member
    Kickaha and Amorph couldn't moderate themselves out of a paper bag. Abdicate responsibility and succumb to idiocy. Two years of letting a member make personal attacks against others, then stepping aside when someone won't put up with it. Not only that, but go ahead and shut down my posting privileges but not the one making the attacks. Not even the common decency to abide by their warning (after three days of absorbing personal attacks with no mods in sight), just shut my posting down and then say it might happen later if a certain line is crossed. Bullshit flag is flying, I won't abide by lying and coddling of liars who go off-site, create accounts differing in a single letter from my handle with the express purpose to deceive and then claim here that I did it. Everyone be warned, kim kap sol is a lying, deceitful poster.



    Now I guess they should have banned me rather than just shut off posting privileges, because kickaha and Amorph definitely aren't going to like being called to task when they thought they had it all ignored *cough* *cough* I mean under control. Just a couple o' tools.



    Don't worry, as soon as my work resetting my posts is done I'll disappear forever.
  • Reply 32 of 62
    cubedude Posts: 1,556 member
    edit: This was a stupid post.
  • Reply 33 of 62
    kickaha Posts: 8,760 member
    Quote:

    Originally posted by 123

    It's getting ridiculous.



    Can you tell me exactly why NTFS is so bad and HFS+ is sooo much better concerning disk fragmentation? I'm sure you can't, because you don't know a thing about file systems. Do you even know that there is more than one Windows file system?




    Heh.



    Oh my. I don't even know where to begin on this one.



    There are multiple file systems under Windows. *Traditionally*, they have been rather idiotic when it comes to file fragmentation, even when there is ample disk space. E.g., my NT4 workstation several years ago would regularly hit 35% fragmentation in a week of development, with the disk being 50% full. Stupid. Absolutely *zero* reason for that situation to ever occur. Hence the *need* for a defrag tool to be shipped with Windows. (For the reasons for that level of fragmenting, look back to the posts giant made - I'm not going to repeat information already presented; that'd be a waste of *my* resources.)



    Today, they've wised up quite a bit... but can you imagine the screaming if they *removed* the defrag tool? (Let's not even get into the issue of whether any defrag tool can actually do better than the now rather sophisticated algorithms within the drives themselves, as already pointed out.) I mean, come on, look at the whining in this thread for a file system that doesn't even *have* that legacy of brokenness.



    So it's there.



    So what.



    To me, it's like moaning that Apple doesn't ship a Registry Editor. We don't need one, it wouldn't do us a blasted bit of good, but golly gee, Windows has one, so why don't we? Silly.



    Yeah, fragmentation happens. It *rarely* is bad enough to cause any issues for 99.9% of the MacOS X users out there. So why should Apple throw resources at something that a) affects a tiny fraction of its user population, b) is solvable with 3rd party tools for those who need it, and c) is solvable long term by just getting a bigger drive, when there are much more pressing issues that affect *100%* of the users out there, such as the Finder??



    If you really, really feel you need a defrag tool, go get one. Chances are you don't, but if you *do*, due to low free disk space, spending that money on a new drive would be a much better use of *your* resources.



    Defragging is just a temporary bandaid.
  • Reply 34 of 62
    123 Posts: 278 member
    Quote:

    Originally posted by AirSluf

    There is a basic lack of understanding at work here--What is fragmentation? It is not a messy file map displayed by Norton or a built in Windows tool, it is physically discontinuous files. [..]

    Optimization is different from de-fragmentation: it is moving files around to best place them for disk access, based on some guess of how they will be accessed in the future.




    It's important to remember that. However, it's a tradeoff. You either have contiguous files all over the disk, or you have a disk with heavy external fragmentation but few or no empty blocks and related files grouped locally. Either way, performance is not optimal and can at least theoretically be increased by optimizing (i.e., distributing files intelligently on the disk, removing space between files, etc.) and defragmenting the disk. A defragmentation tool usually tries to do both. How big is the improvement? I don't know; you'd have to define some usage patterns and then benchmark HFS+ file system snapshots against optimized ones.
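    One way to make that "how big is the improvement" question measurable is to count extents per file. The sketch below is purely illustrative (the `fragmentation_report` helper and the extent lists are invented for the example; a real tool would read extents out of the file system's catalog):

```python
# Hypothetical sketch: quantify fragmentation as extents per file.
# 'files' maps a file name to the list of (start, length) block runs
# (extents) it occupies; a file stored in one extent is unfragmented.

def fragmentation_report(files):
    """Return (percent of files fragmented, average extents per file)."""
    fragmented = sum(1 for extents in files.values() if len(extents) > 1)
    avg_extents = sum(len(e) for e in files.values()) / len(files)
    return 100.0 * fragmented / len(files), avg_extents

files = {
    "a.txt": [(0, 8)],                        # one extent: not fragmented
    "b.mov": [(8, 4), (40, 4)],               # two extents: fragmented
    "c.log": [(16, 2), (50, 1), (90, 3)],     # three extents: fragmented
}
pct, avg = fragmentation_report(files)
print(pct, avg)  # 66.66... percent fragmented, 2.0 extents per file
```

    Benchmarking would then mean replaying a usage pattern against volumes with different reports and timing the reads.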



    However, I actually want to address a few things in giant's post that everybody thinks is so great. I'm now talking only about fragmentation, as this is the topic, not about optimization:



    Quote:

    Originally posted by giant

    First and foremost (and this has always been true), hard drives try to write data in the most efficient manner anyway, all by themselves. Drives try to "de-frag" naturally.




    That's not true: hard drives know nothing about the file system used and thus cannot do anything about fragmentation (internal or external) at all. They are built to provide good performance for sequential reads/writes and can do something about bad sectors and such, but this has nothing to do with fragmentation.



    Quote:



    De-fragging (even in its most effective times) really only made a difference when your drive was over 75% full. Files didn't start getting fragmented until space started running out.




    It depends on the file system and on whether many files grow a lot over time (this is often forgotten), but it's generally true, as that is rarely the case.



    Quote:



    Why de-fragging is reaching ineffective stages:

    1) Contiguous sectors are often not the most efficient way to read a file on modern hard drives! For those of you old enough, do you remember the "speeded up" versions of the Apple II disk operating system, like ProtoDOS and others? They did what many modern hard drives do now.

    Instead of being effective at reading each successive sector, the most effective read pattern for many hard drive mechanisms is: read one sector, process that into cache while skipping 2 sectors, read the 3rd. So a continuous write operation (file-system-level write) will result in a sector pattern of (F1 is this file, fx is other files)

    F1 fx fx fx F1 fx fx fx F1.

    "De-fragging" that and putting each sector next to the other will actually reduce the efficiency of the system instead of improving it.





    Completely wrong. First of all, let's count to three: F1 fx fx? Second, what he is describing here is called interleaving, and it was used many years ago, when disk controllers were slow and the rotational speed was too fast for them. Modern drives don't need this.
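    For anyone who never met interleaving, the layout being argued about can be sketched in a few lines. This is a toy (the `interleave_layout` function and its parameters are invented for illustration): with a 1:3 interleave, consecutive logical sectors land three physical slots apart, giving the read-one-skip-two pattern:

```python
# Classic sector interleaving: place logical sector N 'factor' physical
# slots after sector N-1 on the track, so a slow controller has time to
# process one sector before the next one rotates under the head.

def interleave_layout(sectors_per_track, factor):
    """Map physical slots on one track to the logical sector stored there."""
    layout = [None] * sectors_per_track
    pos = 0
    for logical in range(sectors_per_track):
        while layout[pos % sectors_per_track] is not None:
            pos += 1  # slot already taken: slide to the next free one
        layout[pos % sectors_per_track] = logical
        pos += factor
    return layout

# 8 sectors per track, interleave factor 3:
print(interleave_layout(8, 3))  # [0, 3, 6, 1, 4, 7, 2, 5]
```

    With a factor of 1 the layout degenerates to plain sequential order, which is exactly what modern drives with fast controllers and on-board caches use, as 123 says.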



    Quote:



    All hard drive manufacturers know that de-fragging software is out there, and they know that most de-fragging software does not contain a catalog of optimum sector spreads for every possible hard drive platter/cache combination in the world.

    So, many hard drive mechanisms or drives ignore de-fragging software. There you are, watching your hard drive de-frag for 4 hours. Inside the box, the de-fragging software is telling the hard drive "OK, take this sector and put it on cylinder 3, sector $1A".





    The CHS (Cylinder, Head, Sector) concept is obsolete. You never say "OK, take this sector and put it on cylinder 3, sector $1A". There are two reasons for this:

    - Hard disks used to have a constant number of sectors per track. However, modern (last 5 years at least) disks use ZBR (zone bit recording) and have a variable number of sectors per track. CHS was still used, but only as logical addressing; the drive itself was doing something different.



    - As disks became too big (>8GB), the traditional addressing mode couldn't be used anymore. Since it doesn't reflect the disk's physical layout anyway, a new mode was introduced, called LBA (logical block addressing). Sectors are now just addressed linearly, starting at 0. Apple uses LBA; LBA48 support (>137GB drives) was introduced with 10.2 and ATA100/133 controllers.



    What does this have to do with anything? What the author is describing is simply not a problem. Because you can't tell the disk exactly what to do, a defragmentation program does not even try to do that. It doesn't try to minimize rotational delay, because this cannot be controlled, only guessed at or found out empirically. It can, however, minimize head movement and maximize throughput: disks are optimized for sequential LBA access, so that's what you do - use a contiguous LBA sector cluster to store the file. Depending on the file system, usage pattern, etc., other distributions may be better than that (e.g., for massively parallel access, it's useful to stripe files), but that's up to the defragmentation/optimization program.
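    To make the addressing change concrete, here is the classic logical CHS-to-LBA translation. The geometry (16 heads, 63 sectors per track) is just a common logical geometry picked for the example, and on a modern drive the result is only a logical address anyway:

```python
# Classic CHS-to-LBA translation (purely logical on modern drives):
#   LBA = (C * heads_per_cylinder + H) * sectors_per_track + (S - 1)
# Sector numbers in CHS are historically 1-based; LBA is 0-based.

def chs_to_lba(c, h, s, heads_per_cylinder=16, sectors_per_track=63):
    return (c * heads_per_cylinder + h) * sectors_per_track + (s - 1)

print(chs_to_lba(0, 0, 1))  # 0: the very first sector on the disk
print(chs_to_lba(3, 0, 1))  # 3024: "cylinder 3, sector 1" as a flat address

# The ">137GB" ceiling mentioned above is the 28-bit LBA limit:
print(2**28 * 512)  # 137438953472 bytes with 512-byte sectors
```

    LBA48 widens the sector address to 48 bits, which is why 10.2-era controllers were needed for drives past that ceiling.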



    Quote:



    Think about this part of the hard drive mechanism, also. When you bought your Audio drive, you probably spent a few extra bucks to get a 2 or 8 MByte cache on the hard drive controller board.





    How a cache is supposed to decrease fragmentation is beyond me.



    Quote:



    Well, guess what! If it's working well (and most do), then most of the data reads your OS makes of the drive are being satisfied from the cache, NOT from the hard drive platter itself. The drive is reading ahead and gathering data in the most effective way, and the cache is getting the hits. De-fragging obviously has no effect on a RAM cache.




    In many applications, the disk is the bottleneck. If you want to read 20 MB NOW, a cache won't help if the data is scattered all over the drive. Also, as I've already pointed out, a drive doesn't know anything about files; if you read from a fragmented file, the drive cannot know that it has to read ahead at a completely different place. Instead, it reads and caches sectors that aren't even going to be read.
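    123's read-ahead point can be shown with a toy model. Everything here is invented for illustration (the `cache_hit_rate` function, the read-ahead window, the sector lists) and is far simpler than a real drive cache, but it captures the asymmetry: prefetching the sectors after the one just read only pays off when the file's next sector really is adjacent:

```python
# Toy model of drive read-ahead: after each read, the drive prefetches
# the next few sectors on the platter into its cache. That helps a
# contiguous file and does nothing for a badly fragmented one.

def cache_hit_rate(file_sectors, readahead=8):
    cached, hits = set(), 0
    for s in file_sectors:
        if s in cached:
            hits += 1
        # prefetch the 'readahead' sectors following the one just read
        cached.update(range(s + 1, s + 1 + readahead))
    return hits / len(file_sectors)

contiguous = list(range(100, 164))        # one 64-sector extent
fragmented = [i * 50 for i in range(64)]  # 64 extents, 50 sectors apart
print(cache_hit_rate(contiguous))  # 0.984375: nearly every read is cached
print(cache_hit_rate(fragmented))  # 0.0: prefetch never covers the next read
```

    The fragmented file defeats the cache not because the cache is small but because the drive has no idea where the file continues.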



    Quote:



    There are literally hundreds of platter, arm, and cache designs on the market today, and each will have optimum data transfer patterns that are slightly different. What is a great sector pattern for a 7200 RPM Western Digital drive with an 8 MByte cache is a bad sector pattern for a 5400 RPM IBM Travelstar with no cache. So defrag software will probably get it wrong.




    No, a modern defragmentation program will leave the actual low level data placement to the disk itself.



    Quote:

    OK, let's step up to the OS level now.



    Not now, maybe tomorrow.



    123
  • Reply 35 of 62
    giant Posts: 6,041 member
    It's too bad you didn't post this where the person who said it could respond.
  • Reply 36 of 62
    Quote:

    Originally posted by Paul

    hahahahhaah....



    in before 123 gets his ass handed to him...






    looks like it won't happen anytime soon :-)
  • Reply 37 of 62
    kickaha Posts: 8,760 member
    Which isn't to say he's 100% correct...



    Quote:

    Originally posted by 123

    That's not true: hard drives know nothing about the file system used and thus cannot do anything about fragmentation (internal or external) at all. They are built to provide good performance for sequential reads/writes and can do something about bad sectors and such, but this has nothing to do with fragmentation.



    - As disks became too big (>8GB), the traditional addressing mode couldn't be used anymore. Since it doesn't reflect the disk's physical layout anyway, a new mode was introduced, called LBA (logical block addressing). Sectors are now just addressed linearly, starting at 0. Apple uses LBA; LBA48 support (>137GB drives) was introduced with 10.2 and ATA100/133 controllers.



    What does this have to do with anything? What the author is describing is simply not a problem. Because you can't tell the disk exactly what to do, a defragmentation program does not even try to do that. It doesn't try to minimize rotational delay, because this cannot be controlled, only guessed at or found out empirically. It can, however, minimize head movement and maximize throughput: disks are optimized for sequential LBA access, so that's what you do - use a contiguous LBA sector cluster to store the file. Depending on the file system, usage pattern, etc., other distributions may be better than that (e.g., for massively parallel access, it's useful to stripe files), but that's up to the defragmentation/optimization program.




    Okay, so let me get this straight...



    1) The disk (probably) uses LBA.



    2) The disk optimizes itself for sequential LBA access.



    3) The disk knows nothing about the files that it is being asked to write (the only time fragmentation *happens*).



    4) The disk does, however, write blocks out into the LBA for best read performance. (Let's face it, this is the new definition of 'defragmented'... optimized throughput on read. The blocks may be hither and yon on the drive, we don't care.)



    So... assuming the file system asks the disk to write out a big ol' chunk of blocks as 'contiguous' (where contiguous = optimized for read performance however the drive decides to do it), then the disk handles this for you, yes?



    5) And further... if a defrag program is supposed to decide how best to lay out the disk, yet the disk is going to ignore low-level requests... what's the defrag program doing, a no-op loop?



    Quote:

    In many applications, the disk is the bottleneck. If you want to read 20 MB NOW, a cache won't help if the data is scattered all over the drive. Also, as I've already pointed out, a drive doesn't know anything about files; if you read from a fragmented file, the drive cannot know that it has to read ahead at a completely different place. Instead, it reads and caches sectors that aren't even going to be read.





    No, a modern defragmentation program will leave the actual low level data placement to the disk itself.





    So a defrag program does exactly *what* again, since it seems that the disk handles its own optimization rather intelligently, *and* given a halfway intelligent file system (like HFS+, where files are written into the largest available chunk of space, meaning fragmentation is about nil)? From everything *you* said, fragmentation is going to be essentially non-existent on a drive with any decent amount of space. (A problem, you'll notice, that a user can fix on their own without defragging...)



    I'm failing to see how this means that Apple needs to ship a defrag program, other than to satisfy a very, very small percentage of their market that probably has better solutions out there...



    Bottom line: I'm still not convinced that a) fragmentation on HFS+ drives is a serious problem except for a tiny fraction of the users out there, or b) Apple needs to put resources into this.



    The *easiest* way to really, honestly defrag your drive is to have a spare (that backup thingy), wipe, and copy back. Because HFS+ will request that files be written in the largest possible space, you'll have everything defragged quite nicely.
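    The wipe-and-copy-back trick works because a fresh copy lets the allocator place every file in one piece. Here is a minimal sketch of the "largest free extent" policy kickaha attributes to HFS+; it is a simplification with invented names, not actual HFS+ code:

```python
# Sketch of a largest-free-extent allocator: each new file goes into the
# single biggest run of free blocks, so freshly copied files start out
# contiguous and free space stays in large chunks.

def allocate(free_extents, size):
    """free_extents: list of (start, length). Returns (start, free_extents)."""
    start, length = max(free_extents, key=lambda e: e[1])
    if length < size:
        raise OSError("no single extent large enough: file would fragment")
    free_extents.remove((start, length))
    if length > size:
        # keep the unused tail of the extent as free space
        free_extents.append((start + size, length - size))
    return start, free_extents

free = [(0, 10), (50, 200), (300, 40)]
start, free = allocate(free, 64)       # lands in the 200-block extent
print(start, sorted(free))  # 50 [(0, 10), (114, 136), (300, 40)]
```

    This also shows why keeping a big chunk of free space matters: once no single extent can hold a new file, the allocator has no choice but to split it.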
  • Reply 38 of 62
    airsluf Posts: 1,861 member
  • Reply 39 of 62
    kickaha Posts: 8,760 member
    Thanks, that's pretty much my position on this.



    No defrag program will ever be optimal for every user or use pattern. (Or even most, given the wide range of hardware, software, and uses.)



    HW advances + filesystem advances have made fragmentation *much* less of a problem than it used to be. (And we've always had it pretty good.)



    User level prevention is still the best cure, given a halfway intelligent filesystem, of which most of the current ones are. (Traditionally, that hasn't been the case on the MS side of the fence... it has gotten better though.)



    And, given all that... a free defrag program from Apple would be a trivial, non-marketshare expanding, non-general purpose widget. They've got better things to do.



    You're right, I'm not 100% up on current file systems. I mean, jeez, the last time I wrote one was 1990, and it was a striping system that used a bank of commercial VCRs on a Concurrent 3 mainframe to archive real-time sonar acquisition data for the Navy.
  • Reply 40 of 62
    I hope you guys are getting a kick out of this thread...