TODO: features that might be nice to add someday.
----
In general, I have left features unimplemented when it is not clear that they would be useful in all circumstances.

maybesame might want to run diff, or some sort of checksum diff. Unfortunately, this process cannot be short-circuited when the files really are the same (both files must be read in full), so it might well add a lot of processing time. On the other hand, maybesame runs against the backup FS, so it is local and quick.

To save execution time in daily incrementals, we could choose not to descend into directories whose access time is newer than the timestamp. The heuristic is that when a file changes, the atime of the directory it is in may change too. Sadly, this is not a very good heuristic.

We could (optionally) always keep one extra version, namely the current one, sort of like a version number of infinity. This guarantees that if one disk dies, there will still be one good version on the backup system. However, I consider the source FS itself to count as one copy: as it stands, you are always safe against one failure, i.e. either the source FS or one backup disk. Note that this would effectively double the amount of disk space needed to back up a file system that changes infrequently (like ours).

One might imagine an option allowing a restore of an entire file system to a state as close as possible to that of a certain date. Unfortunately, d2dbackup is very poorly suited to doing that, since a version can be deleted leaving no evidence that it ever existed. We could, in principle, restore a FS to a state where each file is guaranteed to be from BEFORE a specified date, though consistency between different files would not be guaranteed. In short, do NOT use this system to back up source trees where there are close interdependencies between versions of different files.

A function to copy all versions uniquely stored on one drive in the pool to other drives, in order to decommission a drive without losing historical versions.
This can be done by simply copying the drive verbatim to a new drive, but it might be better to spread the old versions across the other drives in the pool using the normal stochastic distribution algorithm.

re_df checks whether our estimate of free space differs greatly from the actual free space, and throws a warning if so. If you run multiple instances of d2dbackup in parallel, you will get a lot of such warnings, and they are normal. There should be a --nodfwarn option and a $NODFWARN variable to disable them.

For files, the time stamp we care about is the modification time. This means that if the permissions of a source file change but the contents do not, a new version will NOT be written. This is probably the correct default behavior, but one might implement support for optionally using the inode change time (ctime) as well as the modification time. If you do this, think about how diffremove and maybesame relate to it.
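A minimal sketch of what the mtime-vs-ctime decision could look like. This is not the current d2dbackup code; the function name and the timestamp-file convention are assumptions, and it relies on GNU coreutils stat (-c format):

```shell
#!/bin/sh
# Hypothetical sketch: decide whether a source file needs a new version,
# comparing both mtime and ctime against the timestamp file's mtime.
# Assumes GNU coreutils stat (-c format).

needs_new_version() {
    src=$1; stamp=$2
    mtime=$(stat -c %Y "$src")    # content modification time
    ctime=$(stat -c %Z "$src")    # inode change time (permissions, owner, ...)
    last=$(stat -c %Y "$stamp")
    if [ "$mtime" -gt "$last" ]; then
        echo content              # contents changed: write a new version
    elif [ "$ctime" -gt "$last" ]; then
        echo metadata             # metadata-only change: no new version needed
    else
        echo unchanged
    fi
}

# demo: a file whose permissions changed after the stamp was written
tmp=$(mktemp -d)
touch "$tmp/file"
sleep 1
touch "$tmp/stamp"
sleep 1
chmod 600 "$tmp/file"            # bumps ctime but not mtime
result=$(needs_new_version "$tmp/file" "$tmp/stamp")
echo "$result"
rm -rf "$tmp"
```

With the extra ctime check, the permissions change above is noticed without forcing a new version to be written, which is exactly the case the default mtime-only policy silently skips.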
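Returning to the maybesame idea above: a content-level check could be sketched as below. The function name is hypothetical; note that cmp -s stops at the first differing byte, so only files that really are identical force a full read of both copies, which is the cost the TODO entry worries about:

```shell
#!/bin/sh
# Hypothetical sketch of a content-level maybesame check.
# cmp -s exits 0 iff the two files are byte-for-byte identical,
# and short-circuits as soon as it sees a difference.

maybesame_content() {
    if cmp -s "$1" "$2"; then
        echo same
    else
        echo different
    fi
}

# demo on three small files
tmp=$(mktemp -d)
printf 'hello\n' > "$tmp/a"
printf 'hello\n' > "$tmp/b"
printf 'world\n' > "$tmp/c"
r1=$(maybesame_content "$tmp/a" "$tmp/b")
r2=$(maybesame_content "$tmp/a" "$tmp/c")
echo "$r1 $r2"
rm -rf "$tmp"
```

Since maybesame runs against the local backup FS, a byte comparison like this may be cheaper than checksumming both files, because a checksum diff always reads both files in full regardless of where they differ.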