10

Having used btrfs before, I was surprised to find out that rolling back a snapshot in ZFS not only changes the “working set” of files, but also requires that any snapshots which are newer than the rollback target must be destroyed as well:

 zfs rollback [-Rfr] snapshot
   Roll back the given dataset to a previous snapshot.  When a dataset is rolled back, all
   data that has changed since the snapshot is discarded, and the dataset reverts to the
   state at the time of the snapshot.  By default, the command refuses to roll back to a
   snapshot other than the most recent one.  In order to do so, all intermediate snapshots
   and bookmarks must be destroyed by specifying the -r option.
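To illustrate the behavior the man page describes, here is a hedged sketch of what happens when rolling back past an intermediate snapshot (the pool, dataset, and snapshot names are hypothetical, made up for this example):

```
# Hypothetical dataset with two snapshots.
zfs snapshot tank/data@monday
zfs snapshot tank/data@tuesday

# Refused: @monday is not the most recent snapshot.
zfs rollback tank/data@monday

# Works, but destroys @tuesday (and any intermediate bookmarks) as a side effect.
zfs rollback -r tank/data@monday
```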

For comparison, here is a description of non-destructively rolling back to a snapshot in btrfs:

btrfs sub snap -r fs snapshot
# ... do things on fs
btrfs sub del fs # at which point you'll lose those things you've done
                 # if you want to preserve them, just rename fs instead
btrfs sub snap snapshot fs # reinstate snapshot as a read+write fs
btrfs sub del snapshot     # delete the no-longer-needed read-only snapshot

and non-destructively rolling back to a snapshot in ZFS:

zfs snapshot pool/project/production@today
zfs clone pool/project/production@today pool/project/beta
# make changes to /pool/project/beta and test them
zfs promote pool/project/beta
zfs rename pool/project/production pool/project/legacy
zfs rename pool/project/beta pool/project/production
# once the legacy version is no longer needed, it can be destroyed
zfs destroy pool/project/legacy

A “snapshot” is obviously a different thing in btrfs and ZFS, but I’m wondering what the advantage of having the destructive zfs rollback operation is, especially since there doesn’t seem to be a self-contained non-destructive rollback command. I would expect that, unless explicitly requested, the most commonly needed rollback operation is one that only affects the current state of files, and not other, unrelated snapshots based on the arbitrary criterion of when they were created.

I can imagine several kinds of reasons (e. g. historical, performance, storage space, simplicity of implementation), although not many which strike me as compelling, so relevant background information would be appreciated!

Socob
  • 331
  • 3
  • 12

1 Answer

0

My understanding of the design of ZFS is that rollback is intended as an immediate undo of all changes since the last snapshot. Since this is destructive, the safety measure is to only allow going back one snapshot by default. However, you could go back one snapshot, then another, and so on until you reach your intended snapshot. Just realize that you will lose all the changes to your dataset since that snapshot.

However, if the intention is to access the dataset's files as they were at a specific snapshot, you can actually do that at any time without running any zfs commands: simply access the snapshot through the file system.

For example:

$ sudo zfs mount cypher-pool/data /data  # mount the dataset onto /data
$ cd /data                               # where the dataset is mounted
$ cd .zfs                                # note: hidden control directory
$ cd snapshot                            # location of all dataset snapshots
$ ls                                     # list the dataset's current snapshots
autosnap_2023-01-01_00:00:01_monthly
autosnap_2023-01-01_00:00:01_yearly
autosnap_2023-02-01_00:00:03_monthly
autosnap_2023-03-01_00:00:01_monthly
autosnap_2023-04-01_00:00:02_monthly
autosnap_2023-05-01_00:00:01_monthly
autosnap_2023-06-01_00:00:01_monthly
autosnap_2023-07-01_00:01:07_monthly
autosnap_2023-07-03_23:30:01_weekly
autosnap_2023-07-10_23:30:19_weekly
autosnap_2023-07-17_23:30:01_weekly
autosnap_2023-07-24_23:30:02_weekly
autosnap_2023-07-31_23:30:02_weekly
autosnap_2023-08-01_00:00:02_monthly
autosnap_2023-08-07_23:30:02_weekly
autosnap_2023-08-09_00:00:02_daily
autosnap_2023-08-10_00:00:04_daily
autosnap_2023-08-11_00:00:01_daily
autosnap_2023-08-12_00:00:03_daily
autosnap_2023-08-13_00:00:01_daily
autosnap_2023-08-14_00:00:01_daily
autosnap_2023-08-14_23:30:50_weekly
autosnap_2023-08-15_00:00:02_daily
autosnap_2023-08-15_07:00:02_hourly
autosnap_2023-08-15_08:00:02_hourly
autosnap_2023-08-15_09:00:02_hourly
autosnap_2023-08-15_10:00:01_hourly
$ cd autosnap_2023-04-01_00:00:02_monthly
$ ls
(all my files in my dataset as of the snapshot on April 1st 2023)

As far as I know, you can view and copy these files just as if they were part of the normal file system, although the snapshot contents themselves are read-only.

If you really want to 'restore' a snapshot's contents to the current, active file system, you can always 'rsync' or 'cp' the files from the snapshot into the active area.

First, take a snapshot of the current state, just in case you run into problems (of course, use the current date and time):

$ zfs snapshot cypher-pool/data@backup_2023-08-21_16:20:42_backup

Then start with something like the following (please confirm the options before using):

$ cd /data/.zfs/snapshot
$ rsync -Pav autosnap_2023-04-01_00:00:02_monthly/ /data/
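If only a few files are needed, a plain cp from the snapshot directory works as well. A minimal sketch, reusing the paths from the example above (the file name report.txt is hypothetical):

```
# Restore a single file from the April snapshot; -p preserves mode,
# ownership, and timestamps where possible.
cp -p /data/.zfs/snapshot/autosnap_2023-04-01_00:00:02_monthly/report.txt /data/report.txt
```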

Note: The example snapshots are named as if created by Sanoid, https://github.com/jimsalterjrs/sanoid .

Iain4D
  • 11
  • 3
  • 2
    `sudo cd` doesn't do anything useful. – muru Aug 22 '23 at 05:59
  • The first paragraph kind of goes into the right direction, although I still don’t see any reason why you’d design it this way? While the rest of the answer is potentially useful, I wasn’t really asking for workarounds. It seems pretty bizarre that you’d have to use an external program to work around the file system, not to mention that all the information is there for it to do the same thing much faster! And finally, I wouldn’t trust `rsync` or any other external program to restore the files to the exact same state, including metadata, hard links etc. – Socob Aug 30 '23 at 08:32