Don't forget that Windows creates hidden files on removable media as well; it's just that they're hidden, so we only see those created by Macs. I have a hard drive with over 2GB used by a hidden "System Volume Information" folder which I do not have permission to remove.
I think it's safe to anticipate this could cause a lot of growing pains, lots of reason to wait some months after release for the first "paying beta testers" to see how it pans out and gets patched.
For those asking, Time Machine can't be done over SMB - there is no decent Windows equivalent of a symbolic link or hard link and certainly no method of producing one over the SMB protocol.
It's a Unix invention that allows many file pointers to point to the same file, making it appear as if a file exists in many locations, when in fact it exists only once with many links to it.
That's how Time Machine is so conservative with disk space.
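A few lines of Python make the space-sharing concrete (a sketch with made-up file names, run on a Unix-like filesystem):

```python
import os
import tempfile

# Small demo of the idea above (file names are hypothetical): hard-link
# a file and confirm both names share one copy of the data on disk.
d = tempfile.mkdtemp()
a = os.path.join(d, "original.txt")
b = os.path.join(d, "link.txt")

with open(a, "w") as f:
    f.write("stored once")

os.link(a, b)  # hard link: a second directory entry for the same file

sa, sb = os.stat(a), os.stat(b)
print(sa.st_ino == sb.st_ino)  # True: both names resolve to one inode
print(sa.st_nlink)             # 2: the inode now has two links
```

Because both names point at the same inode, the data occupies disk space once no matter how many links exist.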
Also, displaying dot files over SMB is optional and configured on the SMB server; the problem is that if you don't show them, they can't be edited. So hiding them silently would require a modification of the SMB client, something Apple may do.
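For reference, on a Samba server the behavior described above is controlled by real smb.conf parameters like `hide dot files` and `veto files`; a minimal share fragment (share name and path are hypothetical):

```ini
[share]
    path = /srv/share
    ; Mark dot files as hidden to SMB clients (they can still be opened):
    hide dot files = yes
    ; Stronger: make matching files invisible AND inaccessible:
    ; veto files = /.DS_Store/
```

This only helps on servers you control, which is exactly why hiding them transparently everywhere would need a change on the client side.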
For OS X 10.7 Lion, Apple wrote its own software for Windows File Sharing under the name "SMBX" to replace Samba, adding initial support for Microsoft's SMB2 at the same time.
Though this has been widely reported, it isn't true. Support for SMB2 didn't exist in SMBX until now. Apple's decision to write their own SMB implementation was due to Samba's licensing change.
I think it's safe to anticipate this could cause a lot of growing pains, lots of reason to wait some months after release for the first "paying beta testers" to see how it pans out and gets patched.
Glad you mentioned that. That's one of my biggest pet peeves.
Why do they keep announcing how many developers they have when it's clearly how many people are paying an extra $100 to test the latest software?
Common sense math would state that if there are 6 million developers, then there should be at least 6 million apps, right? I mean really? ...and for those of you who will say "oh, the Mac!"... no. ...and no. No again!
There are supposedly 900,000 apps on the App Store. That would mean each developer, even counting those who developed more than one app, has made an average of 0.15 apps.
Honestly, the competition could use that 6 million number to embarrass them, if they had enough apps of their own to make it a big deal. Google might. I don't know how many developers they have.
I don't know, but every time I hear the developer count followed by the app count, I see that 0.15-apps-per-developer number. It's hilarious!
I'm not really sure, but if I were Tim and someone told me we have 6 million developers and 900k apps, I might ask a few more questions before I got up on stage and announced it. Someone on the "senior" staff should at least have said, "What? We can't say that!"
...add to this the time when Tallest Skill literally RANKED on a REAL developer who made a revolutionary Passbook app (which never showed up after he was ranked on).
For those asking, Time Machine can't be done over SMB - there is no decent Windows equivalent of a symbolic link or hard link and certainly no method of producing one over the SMB protocol.
Can you please clarify what you mean? You can create symbolic links in Windows, and you can create them over SMB2 and 3.
Can you please clarify what you mean? You can create symbolic links in Windows, and you can create them over SMB2 and 3.
The issue is hard links, not symbolic/soft links. There's a very important distinction which is vital to how Time Machine works - though as I'll explain later, this is all moot anyway.
A symbolic link is essentially a special type of file on the filesystem that points to another file somewhere else in the filesystem, or even somewhere on another filesystem. It's similar to a shortcut in Windows or an alias in Mac OS 9, except that it operates at a slightly lower level in the stack, making it a bit more transparent to software. If you created a symbolic link called B to a file called A, most software would treat B as if it were A. However, if you deleted A (or even just renamed it), B would break.
A hard link is more like a clone of the file, except that the data still resides in the same location. So if you create a hard link called B to a file called A, the entire software stack will treat both as if they're the actual file, but with separate metadata. What's really interesting about this is that if you delete A, B isn't affected at all. It keeps chugging along as if nothing happened because files are just pointers to the data, and the filesystem still sees B as a pointer to that data and leaves it alone.
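The difference in deletion behavior can be demonstrated in a few lines of Python (a sketch with hypothetical paths, on a Unix-like system where unprivileged symlinks are allowed):

```python
import os
import tempfile

# Sketch of the distinction above: delete the original file and see
# which kind of link survives.
d = tempfile.mkdtemp()
a = os.path.join(d, "A")
b_soft = os.path.join(d, "B_soft")
b_hard = os.path.join(d, "B_hard")

with open(a, "w") as f:
    f.write("data")

os.symlink(a, b_soft)  # symbolic link: points at the *name* A
os.link(a, b_hard)     # hard link: a second name for the same data

os.remove(a)  # delete the original name

print(os.path.exists(b_soft))  # False: the symlink now dangles
print(open(b_hard).read())     # 'data': the hard link is unaffected
```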
That's the key to how Time Machine works. When it's doing a backup and finds a file hasn't changed, it just creates a hard link in the new backup to that same file in the previous backup. When the old backup is deleted, it doesn't have to be careful to move data around because the filesystem already takes care of that.
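The strategy above can be sketched in Python. This is illustrative only, not Apple's actual code: unchanged files (judged here by a crude size-and-mtime check) become hard links into the previous backup, and anything else is copied:

```python
import os
import shutil
import tempfile

def incremental_backup(src, prev, new):
    # Toy sketch of the hard-link backup strategy. Unchanged files become
    # hard links into the previous backup; changed or new files are copied.
    # Subdirectories are skipped to keep the sketch short.
    os.makedirs(new)
    for name in os.listdir(src):
        s = os.path.join(src, name)
        if not os.path.isfile(s):
            continue
        p = os.path.join(prev, name)
        unchanged = (os.path.isfile(p)
                     and os.path.getsize(p) == os.path.getsize(s)
                     and os.path.getmtime(p) >= os.path.getmtime(s))
        if unchanged:
            os.link(p, os.path.join(new, name))       # no new data written
        else:
            shutil.copy2(s, os.path.join(new, name))  # copy the data

# Hypothetical demo: back up the same unchanged file twice.
root = tempfile.mkdtemp()
src = os.path.join(root, "src")
os.makedirs(src)
with open(os.path.join(src, "notes.txt"), "w") as f:
    f.write("unchanged")

b1 = os.path.join(root, "backup.1")
b2 = os.path.join(root, "backup.2")
incremental_backup(src, os.path.join(root, "none"), b1)  # first run copies
incremental_backup(src, b1, b2)                          # second run links

print(os.stat(os.path.join(b2, "notes.txt")).st_nlink)   # 2
```

Deleting `backup.1` wholesale would leave `backup.2` fully intact, because the filesystem keeps the data alive as long as any link to it remains.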
The catch here, however, is that any file sharing protocol (except maybe NFS) won't be able to understand the difference between a hard link and the original file anyway because they operate at a higher level than the filesystem. Most importantly, that means they can't create them. So now you should be wondering how Time Machine works over a network.
Abstraction! Time Machine hides all of this complex specialized filesystem work by placing the entire backup into a disk image (technically a sparse bundle, but the distinction isn't important here). All of the nitty gritty stuff is wrapped by the filesystem driver, so by the time it reaches the file sharing protocol it's nothing more than reading and writing data from a file. It doesn't need to support hard links, permissions, resource forks, or anything else like that.
In short, Time Machine's network backups requiring AFP is clearly something Mac-specific (though I'm not sure what), but it isn't hard links.
WTF! If this is true then I'm not switching to newer Mac OS versions in a LOOONG while... OK, they're not removing it, though, sigh... The article is a bit misleading in the heading, though: it defaults to SMB2.
Read this article, see it mention something that happened more than a few years ago, scroll up. Yep, author is DED. Always good for "a brief history of..."
What's wrong with OS X including NTFS write support? It would obviate my need to run a WinXP virtual machine.
Read this article, see it mention something that happened more than a few years ago, scroll up. Yep, author is DED. Always good for "a brief history of..."
What's wrong with OS X including NTFS write support? It would obviate my need to run a WinXP virtual machine.
Not really. You would still need a VM or Boot Camp if you were to use most of the MS file formats.
It doesn't matter if you have NTFS or not, you're still not going to click on a Windows Media File and get it to play. Amongst many others.
Even the new iCloud solution isn't getting around that.
A hard link is more like a clone of the file, except that the data still resides in the same location. So if you create a hard link called B to a file called A, the entire software stack will treat both as if they're the actual file, but with separate metadata. What's really interesting about this is that if you delete A, B isn't affected at all. It keeps chugging along as if nothing happened because files are just pointers to the data, and the filesystem still sees B as a pointer to that data and leaves it alone.
That's the key to how Time Machine works. When it's doing a backup and finds a file hasn't changed, it just creates a hard link in the new backup to that same file in the previous backup. When the old backup is deleted, it doesn't have to be careful to move data around because the filesystem already takes care of that.
[...]
Abstraction! Time Machine hides all of this complex specialized filesystem work by placing the entire backup into a disk image (technically a sparse bundle, but the distinction isn't important here). All of the nitty gritty stuff is wrapped by the filesystem driver, so by the time it reaches the file sharing protocol it's nothing more than reading and writing data from a file. It doesn't need to support hard links, permissions, resource forks, or anything else like that.
In short, Time Machine's network backups requiring AFP is clearly something Mac-specific (though I'm not sure what), but it isn't hard links.
That's why a file system like ZFS with snapshot support would have been crucial. A snapshot creates a duplicate of the entire file system, but with copy-on-write semantics, so it's almost instantaneous. Each Time Machine backup would create a new snapshot, which would truly freeze an instant in time. Then the same snapshot would be made on the backup drive, and only the diffs would be copied over. That gives the same space efficiency as what Time Machine does now, but without the directory hard-link hack, and with no special permissions etc. needed; i.e., the result would be something that could easily be restored with ASR.
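The copy-on-write snapshot idea can be modeled in a toy Python class (this is an illustration of the semantics, not ZFS itself):

```python
# Toy model of copy-on-write snapshots: a snapshot just pins the current
# tree of references; nothing is duplicated until a later write.

class CowFS:
    def __init__(self):
        self.live = {}        # name -> contents ("blocks" shared by reference)
        self.snapshots = {}   # label -> frozen tree

    def write(self, name, data):
        # Copy-on-write: build a new tree that shares all untouched entries,
        # so any pinned snapshot still sees the old tree unchanged.
        self.live = dict(self.live)
        self.live[name] = data

    def snapshot(self, label):
        # Nearly instantaneous: just remember the current tree reference.
        self.snapshots[label] = self.live

    def diff(self, label):
        # What an incremental sync would send: entries changed since `label`.
        old = self.snapshots[label]
        return {k: v for k, v in self.live.items() if old.get(k) != v}

fs = CowFS()
fs.write("report.txt", "v1")
fs.snapshot("monday")          # freeze an instant in time
fs.write("report.txt", "v2")   # a later edit doesn't disturb the snapshot

print(fs.snapshots["monday"]["report.txt"])  # v1
print(fs.diff("monday"))                     # {'report.txt': 'v2'}
```

The `diff` output is the analogue of sending only changed blocks to the backup drive, which is where the space and time savings come from.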
Better yet, while a laptop is on the road, multiple local backups could be made (unlike the paltry mobile TM we have now), which would then simply be synced to the actual backup target once the mobile device is back home. (Of course these mobile backups would only protect against accidental deletion, not drive failure, until they are synced to the backup at home.)
ZFS would also have been error-correcting/detecting and would allow for periodic data integrity checking; none of that is done in either NTFS or HFS+.
Both of these are similarly outdated.
We can only hope that Apple doesn't stop halfway, and takes its LVM concept further to develop a true next-generation file system on top of it.
The statement that "SMB2 is superfast" is a lie. The CIFS/SMB protocol is a clusterfuck of protocol overhead. NFS and AFP have way better bandwidth because they are not as talkative as SMB. Go ahead and test it.
As for being more secure, that also isn't totally true. Features like GSSAPI and rich ACLs (NFSv4-style) also exist in the NFSv4 protocol. But hey... Kerberized NFSv4 doesn't really work on Mountain Lion and is nonexistent in previous versions.
I don't know if saying it twice makes it truthful, but the truth is it really is twice as fast. I just did a direct comparison on a 2012 Mac mini and a 2013 MacBook Air, both running Mountain Lion and connected via a gigabit connection bounced through an Apple Time Capsule to a Dell server; both struggled to break past 50MB per second during large file transfers. I then installed the latest version of Mavericks, and both machines are transferring the same data over the same network at 90MB to 105MB per second. I was really impressed; they have finally made some improvements to inter-OS networking.
I'm not sure I understand your test setup. Are you copying via AFP to that Dell server as well? You can't judge speed against only one platform and say it's the protocol's speed. There are a lot of things in LANs, server/client configuration, and hardware that influence the outcome, and it's not always the protocol's fault.
I love this site.
Originally Posted by FreeRange
NTFS support for read and write is also needed!
Not really, as long as each side has read support.
Quote:
Originally Posted by habi
WTF! If this is true then I'm not switching to newer Mac OS versions in a LOOONG while...
No! No! No! If you have Mountain Lion then this is Mountain Lion the way it was supposed to be. It's literally everything that was missing.
I would suggest getting it. If you have more than one display then I would say it's most definitely needed.
Quote:
Originally Posted by FreeRange
NTFS support for read and write is also needed!
Oh crikey, now you're feeding the trolls.
Oops! One already bit.
Originally Posted by Vadania
Not really. You would still need a VM or Boot Camp if you were to use most of the MS file formats.
It doesn't matter if you have NTFS or not, you're still not going to click on a Windows Media File and get it to play. Amongst many others.
Even the new iCloud solution isn't getting around that.
File system ≠ file format.
I can already interact with "MS file formats," including WMV.
I want the ability to read and write to NTFS drives within the OS X environment. Make sense?
I don't believe this is something MS will license for use as part of a competitor's OS.
Couldn't you just get Paragon NTFS for Mac? It works just fine here on 10.8.4.
I don't really see why it HAS to be Apple that licenses the driver, as long as it's there.
There are also open-source alternatives, but Paragon yields native NTFS speeds.
http://www.paragon-software.com/home/ntfs-mac/