Originally Posted by BigMac2
I think most people want things to be simpler, but I also recognize that file management is an important aspect of desktop computing that mobile computing wants to get rid of. As a long-time Mac OS user, I'm accustomed to browsing my HD and installing or launching apps from the file manager. But most Windows users have never opened their Program Files directory or even browsed their C: drive. I think this is why Apple, Google and Microsoft choose not to let people manually manage files on their mobile devices.
I'd contend that file management isn't so much an important aspect of desktop computing as a holdover from an era when it was a necessary compromise to make computers usable at all.
The question is: why, in a modern operating system, do we need to "manage" data like that at a high level? If I can fire up an application, and the application can show me all the chunks of data ("files") it knows how to manipulate, and even filter what it presents based on arbitrary criteria, then what advantage is there in my putting one chunk in directory X and another chunk in directory Y?
What we really want (and one thing the notion of "managing" documents in particular directory structures desperately, but incompletely, tries to address) is the ability to apply meaningful meta-characteristics to the data. File systems already do this to a certain extent with the timestamps built into the node structure, or the file extensions applications tack onto file names.
For example, for the most part, Joe User isn't interested in the JPEG-formatted image data in a directory at /users/joe/documents/images/2012/mexico/ - what he really wants is the pictures from his 2012 vacation in Mexico. If he can open up an image viewer and type "Mexico 2012" to get what he's looking for, that is vastly more usable than memorizing an arbitrary pathname (and what's worse, that path may change over time as his collection of files grows). It's irrelevant where the data is physically located on a storage device, or even which storage device it's on - only what the data is.
Maybe Jane Photoshopwhiz is on the job, and needs to retouch the images from client "XYZ" she received on June 30th via email - she isn't interested in ~/desktop/photoshop/XYZ corp/recent/email revisions/*.png. If she can pop open Photoshop and type in "XYZ June 30 email" to get exactly what she needs thanks to intelligent automated tagging (like automatically tagging the PNGs attached to an email from someone@XYZ.com with the received date/time, the fact that the document source was an email attachment, and possibly a cross-reference to the source email file), then she doesn't need to spend her billable hours fiddling with "managing files".
Add in some user-facing tagging abilities, and this gives us much more control in "managing" how we access the data. It's a far more usable solution than trying to shoehorn data chunks into a contrived tree structure.
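To make the idea concrete, here's a minimal sketch of what that kind of tag-based retrieval could look like under the hood. Everything here is hypothetical - the `Document` and `TagIndex` names are made up for illustration, and a real system (Spotlight, for instance) would index far richer metadata and persist it - but it shows how automated plus user-applied tags replace pathnames as the lookup key:

```python
# Hypothetical sketch: documents carry a set of tags (applied
# automatically at import time, or by the user), and queries match
# on tags instead of directory paths.
from dataclasses import dataclass, field

@dataclass
class Document:
    name: str
    tags: set = field(default_factory=set)

class TagIndex:
    def __init__(self):
        self.docs = []

    def add(self, doc):
        self.docs.append(doc)

    def query(self, *tags):
        """Return every document carrying all of the requested tags."""
        wanted = set(tags)
        return [d for d in self.docs if wanted <= d.tags]

index = TagIndex()
# Tags an importer might apply automatically (file type, GPS-derived
# location, photo date) plus a user-applied "vacation" tag:
index.add(Document("IMG_0412.jpg", {"image", "mexico", "2012", "vacation"}))
# Tags derived from an email attachment's envelope (sender domain, source):
index.add(Document("logo_v3.png", {"image", "xyz", "email"}))

hits = index.query("mexico", "2012")   # Joe's "Mexico 2012" search
```

The point isn't the data structure (a real index would be inverted and on disk); it's that the query expresses what the data *is*, not where it lives.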
Another major facet of the file management abuse we've trained ourselves to rely on is versioning and backup. In my line of work, we don't make a change to anything without being able to immediately roll back to a previous version in case something isn't right with the modifications. But manually versioning files by changing file names or directories (a.k.a. "file management") is sloppy, inefficient and prone to user error. Why manually (and possibly wrongly) apply arbitrary file name or directory changes when the alternative is simply a mechanism whereby saving a document/file (in other words, "committing a change") stores the changes as a delta against the previous version (making it more storage efficient) and provides a convenient hook for the aforementioned automated tagging system to work its magic?
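For the curious, the delta mechanism itself is nothing exotic. Here's a toy sketch (the `VersionedDoc` class is invented for illustration, and it assumes text content; a real system would handle binary data and persist to disk) showing how each save can store only a delta, with any older version rebuilt by replaying deltas from the initial snapshot:

```python
# Toy sketch of save-as-commit: keep the first version in full, then
# store each later save as a list of diff opcodes against its
# predecessor. difflib computes the delta for us.
import difflib

class VersionedDoc:
    def __init__(self, text):
        self.base = text    # full initial snapshot
        self.deltas = []    # one opcode list per subsequent commit

    def commit(self, new_text):
        """Save the document, storing only the delta to the last version."""
        old = self.current()
        sm = difflib.SequenceMatcher(None, old, new_text)
        self.deltas.append([(tag, i1, i2, new_text[j1:j2])
                            for tag, i1, i2, j1, j2 in sm.get_opcodes()])

    def version(self, n):
        """Rebuild version n (0 = initial) by replaying deltas in order."""
        text = self.base
        for delta in self.deltas[:n]:
            parts = []
            for tag, i1, i2, replacement in delta:
                # 'equal' spans copy from the old text; everything else
                # ('replace', 'insert', 'delete') takes the new material.
                parts.append(text[i1:i2] if tag == "equal" else replacement)
            text = "".join(parts)
        return text

    def current(self):
        return self.version(len(self.deltas))
```

Every "save" is a cheap, automatic rollback point - no "report_final_FINAL_v2.doc" required - and each commit is exactly the kind of event an automated tagger could hang metadata on.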
What say you on this?