Apple Needs A Defragment Program By Apple!


Comments

  • Reply 41 of 62
    123 Posts: 278 member
    Quote:

    Originally posted by Kickaha

    Bottom line: I'm still not convinced that a) fragmentation on HFS+ drives is a serious problem except for a tiny fraction of the users out there, or b) Apple needs to put resources into this.





    I never said that Apple needs to develop such a program, but I also don't think that defragmentation/optimization programs are completely useless. Unless someone actually comes up with some studies.



    What I said:

    - NTFS is much better than the FAT file systems as far as fragmentation is concerned. (You were probably using FAT32 to be able to also boot into a non-NT system)

    - The hardware arguments of this guy are all wrong. (the OS thing is mostly right but the impact can only be guessed, especially as defrag utils can evolve, too)



    Quote:

    1) The disk (probably) uses LBA.



    The disk IS addressed using LBA, unless you only need the first 8GB.



    Quote:

    2) The disk optimizes itself for sequential LBA access.



    It's not really an optimization (maybe my comments were misleading); there's no magic involved here, just a simple address translation. The fastest access is to read/write LBA numbers in sequence, which internally is mapped onto head positions and sectors, that's all. It's no more optimized than it used to be.



    Quote:



    3) The disk knows nothing about the files that it is being asked to write (the only time fragmentation *happens*).



    4) The disk does, however, write blocks out into the LBA for best read performance. (Let's face it, this is the new definition of 'defragmented'... optimized throughput on read. The blocks may be hither and yon on the drive, we don't care.)



    So... assuming the filesystem asks the disk to write out a big ol' chunk of blocks as a 'contiguous' (where contiguous = optimized for read performance however the drive decides to do it), then the disk handles this for you, yes?





    It always was that way, it was always known how data can be read fastest from the disk. Fragmentation was and is a file system issue.



    Quote:



    5) And further... if a defrag program is supposed to decide how best to lay out the disk, yet the disk is going to ignore low level requests... what's the defrag doing, a noop loop?




    The same as it used to do, just that it doesn't have to translate logical numbers to cylinders and such anymore. Do you really think that when the OS had to write a big file with CHS, it didn't use some intelligent addressing but just wrote the file randomly? We really need to clear this one up: fragmentation is something different, and the disk cannot help!



    You write a file f1; let's say it's 5,120 bytes and you write it onto a disk with 512-byte sectors. If you put it on the disk in a way that leads to optimal read performance, you'll have the following logical sector layout (x = free sectors):



    f1 f1 f1 f1 f1 f1 f1 f1 f1 f1 x x x x x x x x x x



    Now you write another file f2 (let's say it's a movie), and you end up like this:

    f1 f1 f1 f1 f1 f1 f1 f1 f1 f1 f2 f2 f2.. a lot...f2 f2 x x x x x x x x x x x



    Now you open your file f1 again and add three sectors' worth of data; depending on how the application writes (append or rewrite), you might end up with fragmentation:

    f1 f1 f1 f1 f1 f1 f1 f1 f1 f1 f2 f2 f2.............f2 f2 f1 f1 f1 x x x x x x x



    A defragmentation tool just puts the file's sectors back together so you can read f1 faster:

    f1 f1 f1 f1 f1 f1 f1 f1 f1 f1 f1 f1 f1 f2 f2 f2.............f2 f2 x x x x x x x



    Let's go back to this state:

    f1 f1 f1 f1 f1 f1 f1 f1 f1 f1 f2 f2 f2.............f2 f2 f1 f1 f1 x x x x x x x



    and add f3:

    f1 f1 f1 f1 f1 f1 f1 f1 f1 f1 f2 f2 f2.............f2 f2 f1 f1 f1 f3 f3 f3 x x x



    Now remove f1:

    x x x x x x x x x x f2 f2 f2.............f2 f2 x x x f3 f3 f3 x x x



    You end up with scattered free space that is too small to be useful for much anymore. This is not what you want, for several reasons (one being that while this is not fragmentation itself, it leads to fragmentation once the disk fills up). Optimization or defragmentation tools will take care of this.
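
    If you want to play with this, here's a minimal Python sketch of the layouts above. It's purely an illustration (a made-up first-fit allocator, not how HFS+ actually allocates space): appending to f1 after f2 exists splits f1 into two fragments, and removing f1 leaves holes behind.

    SECTORS = 26

    def new_disk():
        return [None] * SECTORS              # None = a free sector (the 'x' in the diagrams)

    def write(disk, name, nsectors):
        # First-fit: fill the first free sectors we find, possibly in several pieces.
        need = nsectors
        for i in range(len(disk)):
            if need == 0:
                break
            if disk[i] is None:
                disk[i] = name
                need -= 1

    def remove(disk, name):
        for i, s in enumerate(disk):
            if s == name:
                disk[i] = None

    def fragments(disk, name):
        # Count contiguous runs of `name`; more than one run means the file is fragmented.
        runs, prev = 0, None
        for s in disk:
            if s == name and prev != name:
                runs += 1
            prev = s
        return runs

    disk = new_disk()
    write(disk, "f1", 10)          # f1 in sectors 0-9, rest free
    write(disk, "f2", 8)           # f2 right after f1
    write(disk, "f1", 3)           # append to f1: the new sectors land after f2
    print(fragments(disk, "f1"))   # 2 -> f1 is now fragmented
    remove(disk, "f1")             # leaves a 10-sector hole up front and a 3-sector hole after f2
    write(disk, "f3", 3)           # f3 drops into the front hole; smaller holes remain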



    Quote:



    The *easiest* way to really, honestly, defrag your drive is to have a spare (that backup thingy), wipe, and copy back. Because HFS+ will request files to be written in the largest possible space, you'll have everything defragged quite nicely.




    Yes, BUT: here the old discussion comes in. Because a disk has different read/write speeds at different places, and because head movements cost a lot of time (not just with fragmentation, but also when you have to jump between files), there might be a much better way to place your data on the disk than how the OS does it now if you just copy the files over. Apple's (or anyone else's) optimization tool could be much more advanced than what you're thinking of now. Statistical data could be acquired automatically and the files then arranged accordingly. For example, files that are often accessed in parallel could be striped and written to the same place to decrease head movement, OS files and other frequently accessed files could be grouped together, and so on. I don't think that Apple will ever write something like that; it wouldn't be easy to come up with something good, and the results probably wouldn't be good enough, I don't know. But I do know that data on current drives is certainly not placed optimally, despite OS X and HFS+.
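
    Just to make that idea concrete, here's a rough Python sketch of the "statistical placement" idea. Everything in it is made up for illustration (the access log, the greedy pairing heuristic); it just counts which files show up in the same burst of activity and then lays the most strongly co-accessed ones next to each other.

    from collections import Counter
    from itertools import combinations

    # Hypothetical access log: each entry is one burst of activity (files opened together).
    access_log = [
        ["kernel", "libSystem", "Finder"],
        ["kernel", "libSystem", "Safari"],
        ["kernel", "libSystem", "Finder"],
    ]

    # Count how often each pair of files is accessed in the same burst.
    pair_counts = Counter()
    for burst in access_log:
        for a, b in combinations(sorted(set(burst)), 2):
            pair_counts[(a, b)] += 1

    # Greedy layout: start with the most co-accessed pair, then keep appending the
    # remaining file most strongly tied to anything already placed.
    layout = list(pair_counts.most_common(1)[0][0])
    remaining = {f for burst in access_log for f in burst} - set(layout)
    while remaining:
        best = max(remaining, key=lambda f: sum(
            c for (a, b), c in pair_counts.items()
            if f in (a, b) and (a in layout or b in layout)))
        layout.append(best)
        remaining.remove(best)

    print(layout)   # ['kernel', 'libSystem', 'Finder', 'Safari']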



    To Kickaha:

    b > a doesn't mean that there's no c > b. Nothing needs to be optimal, just better, that's how technology advances.



    To Airsluf:

    I thought disk hardware basically hadn't changed since the '70s, eh? We'll see about the hardware changes that make OS techniques obsolete.
  • Reply 42 of 62
    kickaha Posts: 8,760 member
    Quote:

    Originally posted by 123

    I never said that Apple needs to develop such a program, but I also don't think that defragmentation/optimization programs are completely useless. Unless someone actually comes up with some studies.



    Oh no, they *are* useful... for a very, very small segment of the user population in MacOS X land. For them, sure. It's useful. May not be optimal, but useful.



    Quote:

    What I said:

    - NTFS is much better than the FAT file systems as far as fragmentation is concerned. (You were probably using FAT32 to be able to also boot into a non-NT system)

    - The hardware arguments of this guy are all wrong. (the OS thing is mostly right but the impact can only be guessed, especially as defrag utils can evolve, too)




    Yup, it was FAT32, and it was godawful for fragmenting. Hence the necessity of a defrag tool.



    Quote:

    It always was that way, it was always known how data can be read fastest from the disk. Fragmentation was and is a file system issue.



    *Exactly my point*. FAT32: "Here are 14 blocks corresponding to the file... in 14 write commands," or, "Here is just the last, new block added to that existing 13-block file... put it wherever..." HFS+: "Here are 14 blocks corresponding to the file. Write them out as one request, thank you."



    If the filesystem has *half* a brain, it can request the disk to perform its own internal magic, by giving it the right information in the form of intelligent requests.
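
    To make that concrete, here's a toy allocator in Python. It's not how either filesystem actually works, just an illustration of the difference between the two kinds of request: ask for the whole file as one extent and the allocator can find a single contiguous run; ask block by block and the file lands in whatever holes come first.

    # A free-space map with some holes in it: True = free block, False = in use.
    free_map = [True] * 4 + [False] * 2 + [True] * 6 + [False] * 3 + [True] * 20

    def alloc_extent(free, n):
        # Find one contiguous run of n free blocks (first fit), or None.
        run = 0
        for i, is_free in enumerate(free):
            run = run + 1 if is_free else 0
            if run == n:
                start = i - n + 1
                for j in range(start, i + 1):
                    free[j] = False
                return [(start, n)]
        return None

    def alloc_blockwise(free, n):
        # Grab the first n free blocks one at a time, wherever they happen to be.
        out = []
        for i, is_free in enumerate(free):
            if is_free:
                free[i] = False
                out.append((i, 1))
                if len(out) == n:
                    break
        return out

    print(alloc_extent(list(free_map), 14))     # one 14-block extent: [(15, 14)]
    print(alloc_blockwise(list(free_map), 14))  # 14 one-block pieces spread over three separate runs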



    FAT32 didn't have that half a brain. FAT32 led to massive fragmentation. FAT32 required a free defrag tool to be shipped.



    NTFS is better. NTFS probably doesn't really need that defrag tool to the same extreme... but since it's a feature-list bullet point, MS will never remove it.



    HFS(+) never really had this problem, so there was no overwhelming legacy reason to ship a defrag tool. There still isn't.





    Quote:

    The same as it used to do, just that it doesn't have to translate logical numbers to cylinders and such anymore. Do you really think that when the OS had to write a big file with CHS, it didn't use some intelligent addressing but just wrote the file randomly? We really need to clear this one up: fragmentation is something different, and the disk cannot help!



    You write a file f1; let's say it's 5,120 bytes and you write it onto a disk with 512-byte sectors. If you put it on the disk in a way that leads to optimal read performance, you'll have the following logical sector layout (x = free sectors):



    f1 f1 f1 f1 f1 f1 f1 f1 f1 f1 x x x x x x x x x x



    Now you write another file f2 (let's say it's a movie), and you end up like this:

    f1 f1 f1 f1 f1 f1 f1 f1 f1 f1 f2 f2 f2.. a lot...f2 f2 x x x x x x x x x x x



    Now you open your file f1 again and add three sectors' worth of data; depending on how the application writes (append or rewrite), you might end up with fragmentation:

    f1 f1 f1 f1 f1 f1 f1 f1 f1 f1 f2 f2 f2.............f2 f2 f1 f1 f1 x x x x x x x



    A defragmentation tool just puts the file's sectors back together so you can read f1 faster:

    f1 f1 f1 f1 f1 f1 f1 f1 f1 f1 f1 f1 f1 f2 f2 f2.............f2 f2 x x x x x x x



    Let's go back to this state:

    f1 f1 f1 f1 f1 f1 f1 f1 f1 f1 f2 f2 f2.............f2 f2 f1 f1 f1 x x x x x x x



    and add f3:

    f1 f1 f1 f1 f1 f1 f1 f1 f1 f1 f2 f2 f2.............f2 f2 f1 f1 f1 f3 f3 f3 x x x



    Now remove f1:

    x x x x x x x x x x f2 f2 f2.............f2 f2 x x x f3 f3 f3 x x x



    You end up with scattered free space that is too small to be useful for much anymore. This is not what you want, for several reasons (one being that while this is not fragmentation itself, it leads to fragmentation once the disk fills up). Optimization or defragmentation tools will take care of this.




    Yes, they will. Nice example.



    But... given the HFS+ filesystem, is it something that *most* users are going to see amazing speedups on? Or even moderate speedups? My experience says resoundingly 'no'.



    Quote:

    Yes, BUT: here the old discussion comes in. Because a disk has different read/write speeds at different places, and because head movements cost a lot of time (not just with fragmentation, but also when you have to jump between files), there might be a much better way to place your data on the disk than how the OS does it now if you just copy the files over. Apple's (or anyone else's) optimization tool could be much more advanced than what you're thinking of now. Statistical data could be acquired automatically and the files then arranged accordingly. For example, files that are often accessed in parallel could be striped and written to the same place to decrease head movement, OS files and other frequently accessed files could be grouped together, and so on. I don't think that Apple will ever write something like that; it wouldn't be easy to come up with something good, and the results probably wouldn't be good enough, I don't know. But I do know that data on current drives is certainly not placed optimally, despite OS X and HFS+.



    Absolutely. There are plenty of ways in which the layout could be improved, particularly for seldom-written, often-read system files. No argument there.



    As I said before, I think the word 'fragmentation' has lost much of its meaning, with current disk firmware, filesystem, and application levels all mucking with (or trying to muck with) the layout. Whatever gets you the best throughput *for your use* is your optimum. It's doubtful that it is your neighbor's optimum.



    Quote:

    To Kickaha:

    b > a doesn't mean that there's no c > b. Nothing needs to be optimal, just better, that's how technology advances.




    Agreed, but the users whini^h^h^h^h^hrequesting a defrag tool to be shipped for free, thinking that it's *required* or somehow a necessary part of having a computer, need to be educated that it just ain't so. That's busted thinking on this side of the fence.



    It's a *nicety* with HFS+. A tweaking tool that is merely something to get that last little bit of *oomph* out of your system. It's not a tool that's needed to keep your system running above, oh, 60% of its theoretical norm, as was the case with that &*(%@#@%# NT box I had. (Doing development on a disk that's fragged over 35% is just *painful*.)
  • Reply 43 of 62
    airsluf Posts: 1,861 member
    Kickaha and Amorph couldn't moderate themselves out of a paper bag. Abdicate responsibility and succumb to idiocy. Two years of letting a member make personal attacks against others, then stepping aside when someone won't put up with it. Not only that, but go ahead and shut down my posting privileges but not those of the one making the attacks. Not even the common decency to abide by their warning (after three days of absorbing personal attacks with no mods in sight), just shut my posting down and then say it might happen later if a certain line is crossed. Bullshit flag is flying, I won't abide by lying and coddling of liars who go off-site, create accounts differing in a single letter from my handle with the express purpose to deceive and then claim here that I did it. Everyone be warned, kim kap sol is a lying, deceitful poster.



    Now I guess they should have banned me rather than just shut off posting privileges, because kickaha and Amorph definitely aren't going to like being called to task when they thought they had it all ignored *cough* *cough* I mean under control. Just a couple o' tools.



    Don't worry, as soon as my work resetting my posts is done I'll disappear forever.
  • Reply 44 of 62
    I just wanted to comment on what seems to be a trend recently with Apple that relates to this thread. I remember back in the early '90s, I believe, when Apple either made or had Apple-branded printers, scanners, an internet service provider, digital cameras, PDAs, monitors, keyboards, hard drives, and probably much more than I can remember at this time. One of the criticisms, if I remember correctly, was that Apple was a fat and inefficient company then, with its hands in too many areas. Furthermore, though they generally made good-quality products, these products were never the best in their fields. That is, Apple never made the best digital camera or the best printer or the best scanner. My guess is that Apple was likely doing this just because of a big lack of peripherals and accessories available for Macs at the time.

    Nowadays, however, the situation is much different. So, for example, for disk utilities, Mac users have a choice between Norton Utilities, DiskWarrior 3 and Drive 10. Not only do Mac users have choices, but the Mac disk utility market has healthy competition. However, as Microsoft has illustrated many times, if Apple makes its own disk utility, odds are it will kill the market and give Apple a monopoly, since no Mac user could easily turn down a bundled disk utility, and an Apple-branded one at that. I'm sure that many Mac users automatically made Safari their default web browser, despite its many bugs and flaws, simply because it was made by Apple. Point being, sure an Apple-made disk utility would be great, but it would also take resources, time, and focus away from what I believe is most important, the OS. I say, let Apple continue to fix and optimize OS X for now, rather than make a great disk utility. We already have three companies doing that and this is a good thing. Thanks.
  • Reply 45 of 62
    Quote:

    Originally posted by tspencer83

    Point being, sure an Apple-made disk utility would be great, but it would also take resources, time, and focus away from what I believe is most important, the OS. I say, let Apple continue to fix and optimize OS X for now, rather than make a great disk utility. We already have three companies doing that and this is a good thing. Thanks.



    As a counterpoint, I might say that no third-party disk utility--including Micromat's to-be-released TechTool Pro 4--supports the UFS disk format. This is a serious shortcoming of third-party utilities when one considers OS X's long-standing UFS compatibility.
  • Reply 46 of 62
    giant Posts: 6,041 member
    Hey 123, I was surfing the web and came across more info on the person I quoted, the one you were arguing was wrong:



    Quote:

    CK Haun

    Director, Worldwide Developer Technical Services, Apple Computer Inc.



    C.K. Haun started in the Apple community in 1979, and released his first commercial product in 1980. He spent the '80s writing education and development tools, and software for the Macintosh. He joined Apple in the late '80s as an engineer in the Developer Technical Support section, and moved to managing various DTS teams in the early '90s. In 1994 he moved to the Apple System Software organisation. He was the chief engineer for the Mac OS 8 project, designed the NetBoot system for Mac OS and was part of the team that defined the core of Mac OS X. In 1998 Haun re-joined the Developer Relations team as Director of Worldwide Developer Technical services. His team is responsible for focusing on key developers, assisting them with development efforts and providing rich technical content to Apple's developer website to support the more than 100,000 Macintosh developers worldwide.



    http://auc.uow.edu.au/index2.html?co...html~mainFrame
  • Reply 47 of 62
    Randycat99 Posts: 1,919 member
    Here's my list of comments:



    • NTFS basically does the same "write contiguous blocks whenever possible" thing as HFS+ (plus they get compressed folders capability built-in)



    • once you get contiguous files strewn out across the wide expanse of the HD platter (as would happen for any user over the long term), that will make it only that much tougher (and longer by access time) for it to sift through 60+ GB of fragmented space to find a nice little hole to chuck a contiguous file into -- that could potentially make a BIG difference if you are writing many large files (say over 200 kB) sequentially



    • that will also commandeer its share of CPU resources (however small, but finite) to drive the file system to find that "hole"



    • files strewn across the entire platter, though non-fragmented, will unequivocally take longer to access (by seek time) than the "just newly defragmented" condition where the same cluster of files would be physically closer to each other in a solid contiguous block (all fragmented space squeezed out)



    • don't believe HFS+ can fragment even under the most low-demand conditions (far from capacity-filled)?



    -Look at your caches that get frequent updates. If you got a 20 MB Netscape Internet cache, you will find it 100% fragmented. Yeah, it's only 20 MB, but nevertheless, fragmentation is not impossible in HFS+.



    -Also look at all the files where you added custom icons- fragmented in 2 (unless you happened to physically relocate that file after the fact).



    -Look at all the files you have downloaded from the Net- fragmented many, many times in itty bitty buffer-sized chunks. Naturally, this may be a bigger issue for dial-up users than broadband, but OTOH, broadband users may download typically bigger files as a result of this enhanced capability. So the fragmentation stakes rise with it. It's also not impossible to expect fragmentation to occur in large files you have pulled off a LAN.



    -Look at all your Preference files that have been changed and tweaked over time



    -Depending on how your apps handle automatic backup of documents as you work on them, you may find fragmentation in a lot of your documents



    -Ever seen what hosting a data server can do to a HD over time?



    Suffice it to say, HFS+ is far from fragmentation-resistant. It's just not ridiculously fragmentation-prone, but that isn't exactly saying much.



    • Finally, Apple doesn't really need to consume development resources to come up with a defragmenter. They could do exactly what M$ does: buy a basic module from a third-party vendor. Instead of having Diskeeper or Diskeeper Lite, you just have the Diskeeper engine built into the OS to do the most basic of fragmentation chores. In Apple's case, they could quite easily buy the rights to an engine from Symantec or another vendor. Contrary to projections that such a move would weaken the "defragmenter market", it could likewise improve it... for us, the consumers. Knowing that Apple users have a basic tool to do the chore on their own, third-party vendors might think twice about justifying an extended selling price in the ridiculous $80-100 range. We might actually see decent prices like $25-35 for a full-featured defragmenter product.



    That's my 1024 bits.
  • Reply 48 of 62
    webmail Posts: 639 member
    All file systems need to be defragmented from time to time, some more than others; it often depends on the kind of file work you do on your machine.



    I get really annoyed with people in this thread who say defragmentation isn't needed. YOU ARE WRONG. Don't buy some hype an Apple zealot told you.



    Utilities like Drive 10 exist for a reason; Apple also recommends Drive 10 on their site for, gasp, "routine defragmentation"
  • Reply 49 of 62
    Ra Posts: 623 member
    <adding fuel to the fire>



    *ahem* Hot file adaptive clustering. *cough*
  • Reply 50 of 62
    mr. me Posts: 3,221 member
    Quote:

    Originally posted by webmail

    All file systems need to be defragmented from time to time, some more than others; it often depends on the kind of file work you do on your machine.



    Not true.

    Quote:

    Originally posted by webmail

    I get really annoyed with people in this thread who say defragmentation isn't needed. YOU ARE WRONG. Don't buy some hype an Apple zealot told you.



    I suppose that you are going to "get really annoyed" once again.

    Quote:

    Originally posted by webmail

    Utilities like Drive 10 exist for a reason; Apple also recommends Drive 10 on their site for, gasp, "routine defragmentation."



    Drive 10 has many capabilities other than defragmentation. It is primarily a tool for "repairing drives and recovering data." It is one thing to have an opinion. You have the right to express it. However, it is another thing to make misleading statements to promote your point of view. Apple lists Drive 10 for sale on the Apple Store web site. It is wrong for you to misrepresent the advertising copy accompanying Drive 10 as a recommendation. It is not. If Apple recommended defragmentation software, you can be assured that it would offer its own utility or supply a third-party solution to .Mac users. Apple does this in the case of file backup and virus protection, with its own Backup utility and Virex, respectively.
  • Reply 51 of 62
    kickaha Posts: 8,760 member
    Quote:

    Originally posted by webmail

    All file systems need to be defragmented from time to time, some more than others; it often depends on the kind of file work you do on your machine.



    I get really annoyed with people in this thread who say defragmentation isn't needed. YOU ARE WRONG. Don't buy some hype an Apple zealot told you.



    Utilities like Drive 10 exist for a reason; Apple also recommends Drive 10 on their site for, gasp, "routine defragmentation"




    So... your point is that *ALL* drives need defragmenting, and there is no situation where a drive might, for some reason, not need it.



    Like, say... it wouldn't give the user any perceived or actual speedup because the fragmentation was low to start with?



    Or... the disk system performs defragmentation in the background on an ongoing basis so there really isn't any worth mentioning at any given time?



    Unless you're one of those "Sweet Mary, Mother of God, my drive has 2% fragmentation, if I fix that, I'll get at *least* a 20% boost in speed!" wackos, there's really no *PRACTICAL* reason to *have* to defragment your modern HFS+ drive.



    All drives fragment over time.



    Some (HFS+) fix themselves over time.



    Some (HFS+) leave some amount of fragmentation.



    Most users will never notice any difference between such a drive and a pristine defragged one.



    Some users need absolute efficiency for very real reasons.



    Other users just think they do.



    Don't fall in the last camp, and end up spending money you don't need to, just because somebody starts the FUD train and scares you into buying a utility you'll never see a real benefit from.
  • Reply 53 of 62
    Randycat99 Posts: 1,919 member
    What is this bit about the disk doing "defragmentation in the background"?



    Also, another source of fragmentation (to append my earlier post)- email files.
  • Reply 54 of 62
    Ra Posts: 623 member
    Quote:

    Originally posted by Randycat99

    What is this bit about "disk does defragmentation in the background" bit?





    Quote:

    Originally posted by Ra

    <adding fuel to the fire>



    *ahem* Hot file adaptive clustering. *cough*




    http://macslash.org/article.pl?sid=0...37&mode=thread
  • Reply 55 of 62
    mmmpie Posts: 628 member
    Disks don't optimise file layout. What they do is hide the geometry of the disk (the old CHS system). Modern disks use LBA (logical block addressing), which does not correspond to the real physical structure of the disk. The disk controller translates LBA reads and writes into something that is appropriate to the disk (more like the old CHS). It isn't possible for software at the OS level to optimise disk layout, because it doesn't know where LBA addresses reside on the disk, so it is best not to try, and to let the disk controller handle it. When LBA was introduced it was also possible to set up big disks as CHS. However, the CHS values were fake, and translated by the disk controller as well. The problem is that OSes would try to optimise to the fake cylinder/sector layout, which is useless.
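
    For the curious, the translation looks roughly like this. The geometry numbers below are made up for illustration; a real drive hides its true geometry behind the LBA interface anyway.

    # Hypothetical (fake) geometry, as a BIOS-style CHS scheme would report it.
    HEADS_PER_CYL = 16
    SECTORS_PER_TRACK = 63      # CHS sector numbers are 1-based by convention

    def chs_to_lba(c, h, s):
        return (c * HEADS_PER_CYL + h) * SECTORS_PER_TRACK + (s - 1)

    def lba_to_chs(lba):
        c, rem = divmod(lba, HEADS_PER_CYL * SECTORS_PER_TRACK)
        h, s0 = divmod(rem, SECTORS_PER_TRACK)
        return c, h, s0 + 1

    # Sequential LBA numbers walk along a track, then to the next head, then to the
    # next cylinder -- which is why "read/write LBAs in sequence" is already the fast path.
    print(lba_to_chs(0), lba_to_chs(62), lba_to_chs(63))   # (0, 0, 1) (0, 0, 63) (0, 1, 1)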



    Defrag programs optimise at several levels.

    Firstly, they try to place a file contiguously on the disk. They do this by making sufficient space (in the LBA realm) and then writing the file to it. In any given case this may or may not be effective: any given file may still end up being split across sectors or cylinders, because those are hidden from the defrag program. However, statistically, when you make all your files contiguous, their on-disk contiguity will improve (small files versus large cylinder sizes).

    Secondly, the disk layout tree is reduced in size and complexity, by not having to store more complex file-fragment layouts. This means that more files can fit in the same memory space in the filesystem software, and that when it does have to read more of the layout off of disk, it needs to read less.

    Thirdly, the defrag program may try to optimise file location. It can try to put files from the same directory next to each other, and try to put the files near the directory information (in filesystems that support that, e.g. ext2). This may or may not be useful. But one of the things that the disk controller tries to do is read ahead of file requests to cache data. It reads ahead based on the LBA addresses. This will help when a program is reading through a file at well below the speed of the disk, but software that pushes disk speed won't get any benefit (high-bandwidth media editing, for example).



    Panther automatically defrags small files (less than 20 MB), but doesn't optimise their layout. This should be enough for typical day-to-day usage. Windows doesn't really do anything, and I get 30% fragmentation in XP after a month or so. I have 20% disk space available. Just gotta live with that day-long defrag session.
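
    Roughly, that kind of on-open policy looks like the Python sketch below. Only the ~20 MB threshold comes from this discussion; the other conditions (more than one extent, journaled volume, a large enough contiguous free run) are my assumptions about how such a check would look, not Apple's actual code.

    from dataclasses import dataclass, field

    @dataclass
    class File:
        size: int                                    # bytes
        extents: list = field(default_factory=list)  # list of (start_block, block_count)

    @dataclass
    class Volume:
        journaled: bool = True
        largest_free_extent_blocks: int = 1_000_000
        block_size: int = 4096

        def allocate_contiguous(self, nbytes):
            nblocks = -(-nbytes // self.block_size)  # ceiling division
            return (0, nblocks)                      # pretend we found a run starting at block 0

    TWENTY_MB = 20 * 1024 * 1024

    def maybe_relocate_on_open(f, vol):
        # Only small, actually-fragmented files on a journaled volume are candidates,
        # and only if one free run big enough for the whole file exists.
        if f.size >= TWENTY_MB or len(f.extents) <= 1 or not vol.journaled:
            return False
        if vol.largest_free_extent_blocks * vol.block_size < f.size:
            return False
        f.extents = [vol.allocate_contiguous(f.size)]  # a real FS would copy the blocks here
        return True

    print(maybe_relocate_on_open(File(5 * 1024 * 1024, [(10, 3), (900, 5)]), Volume()))  # True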
  • Reply 56 of 62
    kickaha Posts: 8,760 member
    Quote:

    Originally posted by Randycat99

    Also, another source of fragmentation (to append my earlier post)- email files.



    I'm beginning to think you're working under a different definition of fragmentation than the rest of us... mmmpie outlined several types. Which one are you thinking of? Because email files are tiny (check inside your ~/Library/Mail folder; each message is a file), which is utterly the opposite end of the file-block fragmentation problem. Small files are easy to deal with.



    Are you by any chance thinking of file layout optimization? Because that's a whole other ballgame.
  • Reply 57 of 62
    Randycat99 Posts: 1,919 member
    It really depends on the particular email application and how it stores its data. Sometimes it may store the entries of the various mailboxes (in, out, sent, etc.) in their respective files. Additional entries/changes are appended to the file as you go. So you may end up with an inbox file that is fragmented 6-7 or more times. The file for a large mailbox may be several MB in size.
  • Reply 58 of 62
    In my circumstances, OS X fragments just as quickly, and to the same extent, as Windows. In an ideal world in which disks aren't being whacked around, we might not 'feel' the effects of a fragmented drive.



    The fragmentation on both Windows and OS X is not so bad that I need to defragment every second night, or that the fragmentation I just leave alone causes the system to majorly underperform. But if I take a look at any of these drives on any of these machines, I sure do have one semi-badly fragmented drive.
  • Reply 59 of 62
    kickaha Posts: 8,760 member
    Quote:

    Originally posted by sanity assassin

    In my circumstances, OS X fragments just as quickly, and to the same extent, as Windows. In an ideal world in which disks aren't being whacked around, we might not 'feel' the effects of a fragmented drive.



    The fragmentation on both Windows and OS X is not so bad that I need to defragment every second night, or that the fragmentation I just leave alone causes the system to majorly underperform. But if I take a look at any of these drives on any of these machines, I sure do have one semi-badly fragmented drive.




    Wow... what the heck do you do that causes that level of fragmentation under MacOS X?



    I'm used to 20-30% fragmentation on a Windows box in a week of heavy development use, and under 10% for a Mac after a few months of the same workload.