Time Machine backups causing issues for some Apple Silicon Mac users


Comments

  • Reply 21 of 31
    elijahg said:
    darkvader said:
    I don't give a flying fuck if a backup solution is 'efficient'.  I care that it backs up data and makes it easily retrievable.  Time Machine does that better than anything else out there at any price.
    Does it really? Due to the design of TM, storing a disk image on a server, disconnecting a Mac from the network during a backup breaks that disk image's filesystem and therefore the backup. That makes it far from "easily retrievable".

    Many people here seem to think their single data point means everyone else is "holding it wrong" to use the infamous phrase. You apparently fall into that group.
    Not my experience. I have all my Macs (desktops & laptops) backing up to a FreeBSD server running Netatalk, and have had for years. Laptops also come & go during backups. Rarely do I have any issues requiring the Time Machine backups to be verified. When they do need to be verified, Time Machine just seems to "take care of it" without any issues - it's automatic on the scheduled backup, when it's required.
  • Reply 22 of 31
    Guess we'll have to wait for an SVP at Apple to start using Time Machine.
  • Reply 23 of 31
    elijahg Posts: 2,759 member
    nicholfd said:
    elijahg said:
    darkvader said:
    I don't give a flying fuck if a backup solution is 'efficient'.  I care that it backs up data and makes it easily retrievable.  Time Machine does that better than anything else out there at any price.
    Does it really? Due to the design of TM, storing a disk image on a server, disconnecting a Mac from the network during a backup breaks that disk image's filesystem and therefore the backup. That makes it far from "easily retrievable".

    Many people here seem to think their single data point means everyone else is "holding it wrong" to use the infamous phrase. You apparently fall into that group.
    Not my experience. I have all my Macs (desktops & laptops) backing up to a FreeBSD server running Netatalk, and have had for years. Laptops also come & go during backups. Rarely do I have any issues requiring the Time Machine backups to be verified. When they do need to be verified, Time Machine just seems to "take care of it" without any issues - it's automatic on the scheduled backup, when it's required.
    Every time I've had a "backup needs verifying" message it's always resulted in "unable to verify, a new backup needs to be created". Literally right now my MacBook's APFS TM backup is broken, and Disk Utility can't fix it either. TM is unreliable enough for me that I have a backup of the backup. These problems don't seem to occur on wired Macs.

    TM's method of hosting a filesystem structure over a network is a bad idea; it's way too fragile. A mid-backup disconnect is no different to yanking out a USB disk while copying data to it, and you rightly receive a warning for that. Basing it on rsync would be way more sensible, but Apple has a habit of rolling their own solution even though it's often not actually better than what already exists. Rsync would allow the destination OS to deal with the filesystem structure so disconnects wouldn't be a problem, it would support on-the-fly compression, and it streams the data in one long transaction rather than requiring tens of SMB transactions for each file - making it orders of magnitude faster. This is possible now that they no longer back up system files.
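    For what it's worth, here's a minimal sketch of the kind of rsync-based approach I mean - dated snapshot directories with hard links for unchanged files, the way rsnapshot and similar tools already work. The hostname and paths are made up, and it assumes SSH access to a server with rsync installed on both ends:
    ```python
    # Minimal sketch of an rsync-based "Time Machine style" backup: each run creates a
    # new dated snapshot directory on the server, hard-linking unchanged files against
    # the previous run (--link-dest). Hostname and paths are hypothetical.
    import subprocess
    from datetime import datetime

    SOURCE = "/Users/me/"             # trailing slash: copy the contents, not the folder itself
    SERVER = "backup-server"          # assumed SSH host with rsync installed
    DEST_ROOT = "/backups/macbook"    # assumed destination directory on that server

    def run_backup() -> None:
        stamp = datetime.now().strftime("%Y-%m-%d-%H%M%S")
        subprocess.run([
            "rsync",
            "-az",                               # archive mode + on-the-fly compression
            "--delete",                          # mirror deletions into the new snapshot
            f"--link-dest={DEST_ROOT}/latest",   # hard-link unchanged files from the previous snapshot
            SOURCE,
            f"{SERVER}:{DEST_ROOT}/{stamp}/",
        ], check=True)
        # An interrupted run only leaves a partial new directory behind; earlier
        # snapshots stay intact. On success, repoint "latest" at the new snapshot.
        subprocess.run(["ssh", SERVER, "ln", "-sfn",
                        f"{DEST_ROOT}/{stamp}", f"{DEST_ROOT}/latest"], check=True)

    if __name__ == "__main__":
        run_backup()
    ```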

    Of course I only have the experience of myself and a couple of friends to go by, though they have had the occasional issue too. I have, however, read about a lot of people with similar problems.

    Also, not sure if you realised AFP has been deprecated; you can use Samba/SMB now without Netatalk. AFP is definitely slower than modern implementations of Samba, too.
  • Reply 24 of 31
    elijahg said:
    nicholfd said:
    elijahg said:
    darkvader said:
    I don't give a flying fuck if a backup solution is 'efficient'.  I care that it backs up data and makes it easily retrievable.  Time Machine does that better than anything else out there at any price.
    Does it really? Due to the design of TM, storing a disk image on a server, disconnecting a Mac from the network during a backup breaks that disk image's filesystem and therefore the backup. That makes it far from "easily retrievable".

    Many people here seem to think their single data point means everyone else is "holding it wrong" to use the infamous phrase. You apparently fall into that group.
    Not my experience. I have all my Macs (desktops & laptops) backing up to a FreeBSD server running Netatalk, and have had for years. Laptops also come & go during backups. Rarely do I have any issues requiring the Time Machine backups to be verified. When they do need to be verified, Time Machine just seems to "take care of it" without any issues - it's automatic on the scheduled backup, when it's required.
    Every time I've had a "backup needs verifying" message it's always resulted in "unable to verify, a new backup needs to be created". Literally right now my MacBook's APFS TM backup is broken, and Disk Utility can't fix it either. TM is unreliable enough for me that I have a backup of the backup. These problems don't seem to occur on wired Macs.

    TM's method of hosting a filesystem structure over a network is a bad idea; it's way too fragile. A mid-backup disconnect is no different to yanking out a USB disk while copying data to it, and you rightly receive a warning for that. Basing it on rsync would be way more sensible, but Apple has a habit of rolling their own solution even though it's often not actually better than what already exists. Rsync would allow the destination OS to deal with the filesystem structure so disconnects wouldn't be a problem, it would support on-the-fly compression, and it streams the data in one long transaction rather than requiring tens of SMB transactions for each file - making it orders of magnitude faster. This is possible now that they no longer back up system files.

    Of course I only have the experience of myself and a couple of friends to go by, though they have had the occasional issue too. I have, however, read about a lot of people with similar problems.

    Also, not sure if you realised AFP has been deprecated; you can use Samba/SMB now without Netatalk. AFP is definitely slower than modern implementations of Samba, too.
    Yes - I'm aware AFP is deprecated, but it still works and has always worked well for me (Netatalk on FreeBSD, and previously on Solaris x86 - both OSes used for access to ZFS).
  • Reply 25 of 31
    maltz Posts: 454 member
    MacPro said:
    I will have to try TM on my M1 Mac mini, I use CCC these days.

    BTW, has anyone else noticed you can now unplug an external drive without ejecting first, without any warnings, with macOS Monterey (like Windows) on an Intel or an M1 Mac? Or is it just mine?

    Windows disables the write cache on USB drives (since Win10 1809), trading performance for unplugging safety, but you can still end up unplugging a drive while it's being written to and hose your filesystem.  It's just less likely to happen if you unplug when the machine doesn't appear to be writing.  The proper/safe way is ALWAYS to unmount/eject before unplugging, on any operating system, warnings or not.

    https://docs.microsoft.com/en-us/windows/client-management/change-default-removal-policy-external-storage-media
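    If you script anything that writes to external drives, the safe pattern on macOS looks something like this (the disk identifier is made up - check yours with `diskutil list`):
    ```python
    # Minimal sketch: unmount every volume on an external disk before it gets unplugged.
    # "disk4" is a made-up identifier; run `diskutil list` to find the right one.
    import subprocess

    def eject(disk: str = "disk4") -> None:
        # Flushes buffers and unmounts all volumes on the disk - only unplug after this succeeds.
        subprocess.run(["diskutil", "unmountDisk", f"/dev/{disk}"], check=True)

    if __name__ == "__main__":
        eject()
    ```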

  • Reply 26 of 31
    maltz Posts: 454 member
    elijahg said:
    nicholfd said:
    elijahg said:
    darkvader said:
    I don't give a flying fuck if a backup solution is 'efficient'.  I care that it backs up data and makes it easily retrievable.  Time Machine does that better than anything else out there at any price.
    Does it really? Due to the design of TM, storing a disk image on a server, disconnecting a Mac from the network during a backup breaks that disk image's filesystem and therefore the backup. That makes it far from "easily retrievable".

    Many people here seem to think their single data point means everyone else is "holding it wrong" to use the infamous phrase. You apparently fall into that group.
    Not my experience. I have all my Macs (desktops & laptops) backing up to a FreeBSD server running Netatalk, and have had for years. Laptops also come & go during backups. Rarely do I have any issues requiring the Time Machine backups to be verified. When they do need to be verified, Time Machine just seems to "take care of it" without any issues - it's automatic on the scheduled backup, when it's required.
    Every time I've had a "backup needs verifying" message it's always resulted in "unable to verify, a new backup needs to be created". Literally right now my MacBook's APFS TM backup is broken, and Disk Utility can't fix it either. TM is unreliable enough for me that I have a backup of the backup. These problems don't seem to occur on wired Macs.

    TM's method of hosting a filesystem structure over a network is a bad idea; it's way too fragile. A mid-backup disconnect is no different to yanking out a USB disk while copying data to it, and you rightly receive a warning for that. Basing it on rsync would be way more sensible, but Apple has a habit of rolling their own solution even though it's often not actually better than what already exists. Rsync would allow the destination OS to deal with the filesystem structure so disconnects wouldn't be a problem, it would support on-the-fly compression, and it streams the data in one long transaction rather than requiring tens of SMB transactions for each file - making it orders of magnitude faster. This is possible now that they no longer back up system files.

    Of course I only have the experience of myself and a couple of friends to go by, though they have had the occasional issue too. I have, however, read about a lot of people with similar problems.

    Also, not sure if you realised AFP has been deprecated; you can use Samba/SMB now without Netatalk. AFP is definitely slower than modern implementations of Samba, too.

    Time Machine, when it was new, blew my mind.  lol  But after having used ZFS for snapshots and backups on my own stuff for a few years, it's lamentable that APFS can't do more... a LOT more.  I have ZFS snapshots that act as local backups, and then offsite replication (over the internet, not just a LAN!) that acts as the external backup drive.  It's sending incremental block-level data between two snapshots, so it's super space and speed efficient.  If there is a network problem, the replication fails and everything just seamlessly rolls back.  I'm using native ZFS pools on both ends, but I could even make it more SMB friendly and Time Machine-like by just using a binary file on a network share as the remote ZFS "volume", too.  Way better than anything rsync could do, but I do agree that the HFS+ filesystem in an IMG wrapper on a remote network volume is madness.
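    The replication itself is roughly this - pool/dataset names and the remote host are placeholders, and real setups usually wrap it in a tool like syncoid or a cron script:
    ```python
    # Rough sketch of ZFS snapshot + incremental send/receive replication.
    # Dataset names and the remote host are placeholders; error handling is minimal.
    import subprocess
    from datetime import datetime

    DATASET = "tank/home"            # assumed local dataset
    REMOTE = "offsite-host"          # assumed SSH-reachable machine with a "backup" pool
    LAST = "tank/home@2021-12-01"    # the previous snapshot that both sides already have

    def replicate() -> None:
        new_snap = f"{DATASET}@{datetime.now().strftime('%Y-%m-%d')}"
        subprocess.run(["zfs", "snapshot", new_snap], check=True)

        # Incremental stream: only blocks that changed between LAST and new_snap get sent.
        send = subprocess.Popen(["zfs", "send", "-i", LAST, new_snap], stdout=subprocess.PIPE)
        subprocess.run(["ssh", REMOTE, "zfs", "receive", "-F", "backup/home"],
                       stdin=send.stdout, check=True)
        send.stdout.close()
        if send.wait() != 0:
            # A dropped connection aborts the receive; the destination just keeps its
            # previous snapshot - nothing is left half-written.
            raise RuntimeError("zfs send failed; destination keeps its last good snapshot")

    if __name__ == "__main__":
        replicate()
    ```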

    Oh, and ZFS has a thing called a bookmark that can serve as the starting point for an incremental backup/replication without taking up any space on the source side.  So if you haven't synced your backup drive in a year, you can still do an incremental block-level sync without holding all that old data on your source drive.  ZFS is freakin' magic.  :)
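    The bookmark trick looks roughly like this (again, names are placeholders):
    ```python
    # Sketch of the bookmark workflow: keep only a (nearly zero-size) bookmark of the
    # last replicated snapshot on the source, then use it as the incremental source
    # whenever the backup target shows up again. Names are placeholders.
    import subprocess

    # After replicating tank/home@2021-12-01, convert it to a bookmark and free the snapshot:
    subprocess.run(["zfs", "bookmark", "tank/home@2021-12-01", "tank/home#last-sent"], check=True)
    subprocess.run(["zfs", "destroy", "tank/home@2021-12-01"], check=True)

    # Much later: an incremental send from the bookmark still works, even though the old
    # snapshot's data is long gone from the source pool.
    send = subprocess.Popen(["zfs", "send", "-i", "tank/home#last-sent", "tank/home@2022-12-01"],
                            stdout=subprocess.PIPE)
    subprocess.run(["zfs", "receive", "-F", "backup/home"], stdin=send.stdout, check=True)
    send.stdout.close()
    send.wait()
    ```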

  • Reply 27 of 31
    elijahg Posts: 2,759 member
    maltz said:
    elijahg said:
    nicholfd said:
    elijahg said:
    darkvader said:
    I don't give a flying fuck if a backup solution is 'efficient'.  I care that it backs up data and makes it easily retrievable.  Time Machine does that better than anything else out there at any price.
    Does it really? Due to the design of TM, storing a disk image on a server, disconnecting a Mac from the network during a backup breaks that disk image's filesystem and therefore the backup. That makes it far from "easily retrievable".

    Many people here seem to think their single data point means everyone else is "holding it wrong" to use the infamous phrase. You apparently fall into that group.
    Not my experience. I have all my Macs (desktops & laptops) backing up to a FreeBSD server running Netatalk, and have had for years. Laptops also come & go during backups. Rarely do I have any issues requiring the Time Machine backups to be verified. When they do need to be verified, Time Machine just seems to "take care of it" without any issues - it's automatic on the scheduled backup, when it's required.
    Every time I've had a "backup needs verifying" message it's always resulted in "unable to verify, a new backup needs to be created". Literally right now my MacBook's APFS TM backup is broken, and Disk Utility can't fix it either. TM is unreliable enough for me that I have a backup of the backup. These problems don't seem to occur on wired Macs.

    TM's method of hosting a filesystem structure over a network is a bad idea; it's way too fragile. A mid-backup disconnect is no different to yanking out a USB disk while copying data to it, and you rightly receive a warning for that. Basing it on rsync would be way more sensible, but Apple has a habit of rolling their own solution even though it's often not actually better than what already exists. Rsync would allow the destination OS to deal with the filesystem structure so disconnects wouldn't be a problem, it would support on-the-fly compression, and it streams the data in one long transaction rather than requiring tens of SMB transactions for each file - making it orders of magnitude faster. This is possible now that they no longer back up system files.

    Of course I only have the experience of myself and a couple of friends to go by, though they have had the occasional issue too. I have, however, read about a lot of people with similar problems.

    Also, not sure if you realised AFP has been deprecated; you can use Samba/SMB now without Netatalk. AFP is definitely slower than modern implementations of Samba, too.

    Time Machine, when it was new, blew my mind.  lol  But after having used ZFS for snapshots and backups on my own stuff for a few years, it's lamentable that APFS can't do more... a LOT more.  I have ZFS snapshots that act as local backups, and then offsite replication (over the internet, not just a LAN!) that acts as the external backup drive.  It's sending incremental block-level data between two snapshots, so it's super space and speed efficient.  If there is a network problem, the replication fails and everything just seamlessly rolls back.  I'm using native ZFS pools on both ends, but I could even make it more SMB friendly and Time Machine-like by just using a binary file on a network share as the remote ZFS "volume", too.  Way better than anything rsync could do, but I do agree that the HFS+ filesystem in an IMG wrapper on a remote network volume is madness.

    Oh, and ZFS has a thing called a bookmark that can serve as the starting point for an incremental backup/replication without taking up any space on the source side.  So if you haven't synced your backup drive in a year, you can still do an incremental block-level sync without holding all that old data on your source drive.  ZFS is freakin' magic.  :)

    Apple had been playing around with ZFS support between ~10.6 and ~10.8, but apparently abandoned it and took 10 years to roll their own. Unfortunately these days Apple seems to be heading more and more toward proprietary solutions, and oftentimes the open-source equivalent is actually better in almost every way. Maybe that'll change with the new open-source website they just launched.

    ZFS is really an amazing filesystem, it's a real shame they didn't use it. I had no idea it could do half of what you mentioned! APFS is miles ahead of HFS+ but it's still pretty basic in the grand scheme of things. Creating/deleting the thousands of individual files over a network as TM does is sooooo slow because it involves so many network transactions (especially with the SSD-optimised APFS in an image), and the "sparsebundle" images aren't handled well by anything other than macOS - and even the Finder struggles with the absurd number of bands in a single folder. Who the hell thought storing 30,000+ 10 MB files in a single "bands" folder was a good idea?
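    To get a feel for the sprawl, counting the bands is trivial (the .sparsebundle path is hypothetical - point it at your own mounted backup share):
    ```python
    # Count the band files inside a network Time Machine image. The path is hypothetical;
    # point it at a mounted backup share containing a .sparsebundle.
    import os

    bundle = "/Volumes/TimeMachine/MyMac.sparsebundle"
    bands_dir = os.path.join(bundle, "bands")

    bands = os.listdir(bands_dir)
    total = sum(os.path.getsize(os.path.join(bands_dir, b)) for b in bands)
    print(f"{len(bands)} band files, {total / 1e9:.1f} GB, all in one flat directory")
    ```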
  • Reply 28 of 31
    maltz Posts: 454 member
    And I didn't even get into ZFS's native encryption, snapshot cloning and rollbacks, checksums of ALL blocks (not just metadata like APFS), built-in fast compression, RAID-like arrays and mirrors, efficient scrubs that only check used blocks, etc.  Since it goes from the filesystem level all the way down to the bare device, it can even tell you which files are damaged (if any) when it detects a block error.  Like I said, magic.  lol

    ZFS saved my butt when I had a flaky SATA-to-SAS breakout cable that was causing otherwise silent corruption in my raidz2 (RAID6-like) array.  It was a rare occurrence, but amounted to a few dozen KB per month - maybe a few hundred if it was a write-heavy month.  ZFS detected it, fixed it, and went on with its day.  It didn't even have to wait for a scrub; a normal read of a corrupted block would trigger a repair of that block.
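    The whole check-and-repair workflow is basically two commands ("tank" is just an assumed pool name here):
    ```python
    # Sketch of the check-and-repair workflow: scrub the pool, then ask ZFS which
    # files (if any) were found damaged. "tank" is an assumed pool name.
    import subprocess

    subprocess.run(["zpool", "scrub", "tank"], check=True)      # kicks off a background scrub of all used blocks
    status = subprocess.run(["zpool", "status", "-v", "tank"],  # -v lists files with permanent errors, if any
                            capture_output=True, text=True, check=True)
    print(status.stdout)
    ```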
  • Reply 29 of 31
    athempel said:
    Why aren't they retiring this antique approach to back-ups? I mean, they now have a filesystem that supports snapshots
    How would an APFS snapshot on the same physical disk save your data from the failure of said disk?
    An APFS snapshot is a read-only copy of its parent APFS volume, taken at a particular moment in time. An APFS volume can have zero or more associated APFS snapshots. 

    A backup tool could simply take a snapshot of drive A and sync that to drive B (the physical backup, also APFS), with the first snapshot containing the original version of your data and each subsequent snapshot being the ‘delta’ between the last snapshot and the current one.

    This is why duplicating a 10 GB file on the same disk using APFS is nearly instantaneous; it doesn’t actually copy 10 GB, it just makes a new file with a reference to the original file, and it only looks as if it’s copied. Only when you then modify the duplicated file does it store the delta between the current version and the original.
    Try that with Apple’s legacy file systems (which is what Time Machine works with).
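    You can see both behaviours from the Terminal today - a rough sketch (the file names are made up, and this only runs on a Mac):
    ```python
    # Small demo of the two APFS features described above: copy-on-write file clones
    # (instant "copies" that share blocks until modified) and local snapshots.
    # File names are made up; cp -c and tmutil are macOS-only tools.
    import subprocess, time

    # Clone a large file: near-instant, because only new metadata pointing at the
    # original blocks is written - no data is copied until one of the files changes.
    start = time.time()
    subprocess.run(["cp", "-c", "big-10gb-file.dat", "big-10gb-file-copy.dat"], check=True)
    print(f"clone took {time.time() - start:.2f}s")

    # Take a read-only point-in-time snapshot of the APFS volume, then list snapshots.
    subprocess.run(["tmutil", "localsnapshot"], check=True)
    subprocess.run(["tmutil", "listlocalsnapshots", "/"], check=True)
    ```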

    When recovering your data from a backup, all snapshots (individual deltas compared to the original snapshot) can be taken into account to recover the right files. I know I oversimplified it here but that’s the basic idea.
    APFS does this on a file system level and the local physical backup drive could simply have the same file system. 
    Instead, Apple’s Time Machine is ignoring their own beautiful advancements and still backing up like it’s 1999 - not using any of their APFS advancements natively. Time Machine doesn’t operate at file system level; it works at a file/folder level, agnostic of file system.
    edited December 2021
  • Reply 30 of 31
    maltzmaltz Posts: 454member
    athempel said:
    Why aren't they retiring this antique approach to back-ups? I mean, they now have a filesystem that supports snapshots
    How would an APFS snapshot on the same physical disk save your data from the failure of said disk?
    An APFS snapshot is a read-only copy of its parent APFS volume, taken at a particular moment in time. An APFS volume can have zero or more associated APFS snapshots. 

    A backup tool could simply take a snapshot of drive A and sync that to drive B (the physical backup, also APFS), with the first snapshot containing the original version of your data and each subsequent snapshot being the ‘delta’ between the last snapshot and the current one.

    This is why duplicating a 10 GB file on the same disk using APFS is nearly instantaneous; it doesn’t actually copy 10 GB, it just makes a new file with a reference to the original file, and it only looks as if it’s copied. Only when you then modify the duplicated file does it store the delta between the current version and the original.
    Try that with Apple’s legacy file systems (which is what Time Machine works with).

    When recovering your data from a backup, all snapshots (individual deltas compared to the original snapshot) can be taken into account to recover the right files. I know I oversimplified it here but that’s the basic idea.
    APFS does this on a file system level and the local physical backup drive could simply have the same file system. 
    Instead, Apple’s Time Machine is ignoring their own beautiful advancements and still backing up like it’s 1999 - not using any of their APFS advancements natively. Time Machine doesn’t operate at file system level; it works at a file/folder level, agnostic of file system.

    The file/folder level IS the filesystem level.  Snapshots are done at the block level in conjunction with knowledge of the filesystem level, and it's that integration that allows APFS and ZFS to do their magic.

    BTW, since Big Sur, a newly-created Time Machine backup will use APFS' advancements, but existing Time Machine backups are not migrated.  Unless you need the history, I definitely recommend re-creating older Time Machine backups to take advantage of the speed and space savings.
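    If you're not sure what an existing destination is using, something like this will show it (the backup volume path is hypothetical - use your own):
    ```python
    # Show configured Time Machine destinations, then check the backup volume's
    # filesystem with diskutil. The volume path is hypothetical; use your own.
    import subprocess

    subprocess.run(["tmutil", "destinationinfo"], check=True)
    subprocess.run(["diskutil", "info", "/Volumes/Time Machine Backups"], check=True)
    ```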
  • Reply 31 of 31
    j238 Posts: 10 member
    darkvader said:
    What the fuck are you people blathering about?

    I'm not exactly known for being an Apple cheerleader here, but you're out of your minds.

    Time Machine is the best backup solution I've ever used on ANY platform.  Period.  The only thing it's missing is an easy offsite solution.  Everything else about it is perfect.

    I don't give a flying fuck if a backup solution is 'efficient'.  I care that it backs up data and makes it easily retrievable.  Time Machine does that better than anything else out there at any price.

    Apple got this one right.
    Good for you!   Not everyone's been so lucky.  
    On my previous machine, the combination of new OS + old hardware + FileVault made Time Machine crawl (also iPhone backups - one took 35 hours), until I turned off FV.
    Been great for years.  Until Monterey 12.1, not so much.  