software/hoardy

Find files matching given criteria quickly, find duplicated files and deduplicate them, record file hashes and verify them, etc.

What is hoardy?

hoardy is a tool for digital data hoarding, a Swiss-army-knife-like utility for managing otherwise unmanageable piles of files.

On GNU/Linux, hoardy is pretty well-tested on my files and I find it to be an essentially irreplaceable tool for managing the mess of media files in my home directory, backup snapshots made with rsync, as well as git-annex and hydrus file object stores.

On Windows, however, hoardy is work-in-progress alpha software that is essentially unusable and completely untested.

Data formats and command-line syntax of hoardy are subject to change in future versions. See below for why.

What can hoardy do?

hoardy can

See the “Alternatives” section for more info.

On honesty in reporting of data loss issues

This document mentions data loss, and situations in which it could occur, repeatedly. I realize that this may turn some people off. Unfortunately, the reality is that with modern computing it’s quite easy to screw things up. If a tool can delete or overwrite data, it can lose data. Hence, make backups!

With that said, hoardy tries its very best to make situations where it causes data loss impossible by doing a ton of paranoid checks before doing anything destructive. Unfortunately, the set of situations where it could lose some data even after doing all those checks is not empty. Which is why the “Quirks and Bugs” section documents all of those situations known to me. (So… Make backups!) Meanwhile, “Frequently Asked Questions”, among other things, documents various cases that are handled safely. Most of those are quite non-obvious and not recognized by other tools, which will lose your data where hoardy would not.

As far as I know, hoardy is actually the safest tool for doing what it does, even though this document mentions data loss repeatedly while other tools prefer to stay quiet about it. I’ve read the sources of hoardy’s alternatives to make the comparisons there, and to figure out whether I should change how hoardy does some things, and I became much happier with hoardy’s internals as a result. Just saying.

Also, should I ever find an issue in hoardy that produces loss of data, I commit to fixing it and honestly documenting it all immediately, and then adding new tests to the test suite to prevent such issues in the future. A promise that can be confirmed by the fact that I did exactly that before for the hoardy-web tool, see its tool-v0.18.1 release.

Glossary

See man 7 inode for more info.

Quickstart

Pre-installation

Installation

Deduplicate files in your ~/Downloads

So, as the simplest use case, deduplicate your ~/Downloads directory.

Index your ~/Downloads directory:

hoardy index ~/Downloads

Look at the list of duplicated files there:

hoardy find-dupes ~/Downloads

Deduplicate them by hardlinking each duplicate file to its oldest available duplicate version, i.e. make all paths pointing to duplicate files point to the oldest available inode among those duplicates:

hoardy deduplicate --hardlink ~/Downloads
# or, equivalently
hoardy deduplicate ~/Downloads

The following should produce an empty output now:

hoardy find-dupes ~/Downloads

If it does not (which is unlikely for ~/Downloads), then some duplicates have different metadata (permissions, owner, group, extended attributes, etc), which will be discussed below.

By default, both deduplicate --hardlink and find-dupes run with an implied --min-inodes 2 option. Thus, to see paths that point to the same inodes on disk you’ll need to run the following instead:

hoardy find-dupes --min-inodes 1 ~/Downloads

To delete all but the oldest file among duplicates in a given directory, run

hoardy deduplicate --delete ~/Downloads

in which case --min-inodes 1 is implied by default.

The same result could, of course, have been achieved by running this last command directly, without doing any of the above except for index.

Personally, I have

hoardy index ~/Downloads && hoardy deduplicate --delete ~/Downloads

scheduled in my daily crontab, because I frequently re-download files from local servers while developing things (for testing).
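
For reference, the corresponding crontab entry could look something like this (just a sketch; it assumes hoardy is on cron’s PATH, adjust the schedule to taste):

# run once a day at night; ~ gets expanded by the shell cron uses to run the command
30 3 * * * hoardy index ~/Downloads && hoardy deduplicate --delete ~/Downloads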

Normally, you probably don’t need to run it that often.

Deduplicate rsync snapshots

Assuming you have a bunch of directories that were produced by something like

rsync -aHAXivRyy --link-dest=/backup/yesterday /home /backup/today

you can deduplicate them by running

hoardy index /backup
hoardy deduplicate /backup

(Which will probably take a while.)

Doing this will deduplicate everything by hardlinking each duplicate file to an inode with the oldest mtime while respecting and preserving all file permissions, owners, groups, and user extended attributes. If you run it as super-user, it will also respect all other extended attribute namespaces, like ACLs, trusted extended attributes, etc. See man 7 xattr for more info.

But, depending on your setup and wishes, the above might not be what you’d want to run. For instance, personally, I run

hoardy index /backup
hoardy deduplicate --reverse --ignore-meta /backup

instead.

Doing this hardlinks each duplicate file to an inode with the latest mtime (--reverse) and ignores all file metadata (but not extended attributes), so that the next

rsync -aHAXivRyy --link-dest=/backup/today /home /backup/tomorrow

could re-use those inodes via --link-dest as much as possible again. Without those options the next rsync --link-dest would instead re-create many of those inodes again, which is not what I want, but your mileage may vary.

Also, even with --reverse, the original mtime of each path will be kept in hoardy’s database so that it can be restored later. (Which is pretty cool, right?)

Also, if you have so many files under /backup that deduplicate does not fit into RAM, you can still run it incrementally (while producing the same deduplicated result) via sharding by SHA256 hash digest. See examples for more info.

Deduplicate files in your $HOME

Note, however, that simply running hoardy deduplicate on your whole $HOME directory will probably break almost everything, as many programs depend on file timestamps not moving backwards, use zero-length or similarly short files for various things, overwrite files without copying them first, and expect them to stay independent inodes. Hardlinking different same-data files together on a non-backup filesystem will break all those assumptions.

(If you do screw it up, you can fix it by simply doing cp -a file file.copy ; mv file.copy file for each wrongly deduplicated file.)
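
For example, a tiny hypothetical shell helper (the name and the .copy suffix are made up) that does exactly that for every path given to it:

# undedup: re-split each given path back into its own independent inode,
# preserving its metadata, by making a full copy and renaming it over the original
undedup() {
    for path in "$@"; do
        cp -a -- "$path" "$path.copy" && mv -- "$path.copy" "$path"
    done
}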

However, sometimes deduplicating some files under $HOME can be quite useful, so hoardy implements a fairly safe way to do it semi-automatically.

Index your home directory and generate a list of all duplicated files, matched strictly, like deduplicate would do:

hoardy index ~
hoardy find-dupes --print0 --match-meta ~ > dupes.print0

--print0 is needed here because otherwise file names with newlines and/or weird symbols in them could be parsed as multiple separate paths and/or get mangled. By default, without --print0, hoardy solves this by escaping control characters in its outputs, and, in theory, it could then read back its own outputs in that format. But normal UNIX tools won’t be able to use them, hence --print0, which is almost universally supported.
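
For instance, from bash, such a NUL-separated file can be consumed with a read loop like this (a sketch; it just prints the size and path of each listed file):

while IFS= read -r -d '' path; do
    stat -c '%s %n' -- "$path"
done < dupes.print0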

You can then easily view the resulting file from a terminal with:

cat dupes.print0 | tr '\0' '\n' | less

which, if none of the paths have control symbols in them, will be equivalent to the output of:

hoardy find-dupes --match-meta ~ | less

But you can now use grep or another similar tool to filter those outputs.

Say, for example, you want to deduplicate git objects across different repositories:

grep -zP '/\.git/objects/([0-9a-f]{2}|pack)/' dupes.print0 > git-objects.print0
cat git-objects.print0 | tr '\0' '\n' | less

These are never modified, and so they can be hardlinked together. In fact, git does this silently when it notices, so you might not get a lot of duplicates there, especially if you mostly clone local repositories from each other. But if you have several related repositories cloned from external sources under $HOME, the above output will, most likely, not be empty.

So, you can now pretend to deduplicate all of those files:

hoardy deduplicate --dry-run --stdin0 < git-objects.print0

and then actually do it:

hoardy deduplicate --stdin0 < git-objects.print0

Ta-da! More disk space! For free!

Quirks and Bugs

Known Issues

Situations where hoardy deduplicate could lose data

Frequently Asked Questions

I’m using fdupes/jdupes now, how do I migrate to using hoardy?

I have two identical files, but hoardy deduplicate does not deduplicate them. Why?

By default, files must match in everything but timestamps for hoardy deduplicate to consider them to be duplicates.

In comparison, hoardy find-duplicates considers everything with equal SHA256 hash digests and sizes to be duplicates instead.

It works this way because hoardy find-duplicates is designed to inform you of all the potential things you could deduplicate while hoardy deduplicate is designed to preserve all metadata by default (hoardy deduplicate --hardlink also preserves the original file mtime in the database, so it can be restored later).
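
A quick way to see the difference on a concrete directory is to compare the two commands directly (both are used elsewhere in this document):

# loose matching: everything with equal size and SHA256 gets reported
hoardy find-dupes ~/Downloads
# strict matching: only what deduplicate would actually touch, without touching anything
hoardy deduplicate --dry-run ~/Downloads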

If things like file permissions, owners, and groups are not relevant to you, you can run

hoardy deduplicate --ignore-meta path/to/file1 path/to/file2

to deduplicate files that mismatch in those metadata fields. (If you want to control this more precisely, see deduplicate’s options.)
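
If you are not sure which of those fields differ between two suspected duplicates, plain stat shows them side by side (a sketch; the GNU stat format string here is just an example):

# permissions, owner, group, size, and name of each file
stat -c '%A %u %g %s %n' path/to/file1 path/to/file2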

If even that does not deduplicate your files, and they are actually equal as binary strings, extended file attributes must be different. At the moment, if you are feeling paranoid, you will need to manually do something like

# dump them all
getfattr --match '.*' --dump path/to/file1 path/to/file2 > attrs.txt

# edit the result so that records of both files match
$EDITOR attrs.txt

# write them back
setfattr --restore=attrs.txt

after which hoardy deduplicate --ignore-meta would deduplicate them (if they are indeed duplicates).

(Auto-merging of extended attributes, when possible, is on the “TODO” list.)

What would happen if I run hoardy deduplicate with an outdated index? Would hoardy lose some of my files by wrongly “deduplicating” them?

No, it would not.

hoardy checks that each soon-to-be deduplicated file from its index matches its filesystem counterpart, printing an error and skipping that file and all its apparent duplicates if not.

I have two files with equal SHA256 hash digests and sizes, and yet they are unequal when compared as binary strings. Would hoardy “deduplicate” them wrongly?

No, it would not.

hoardy checks that source and target inodes have equal data contents before hardlinking them.

What would happen if I run hoardy deduplicate --delete with the same directory given in two different arguments? Would it consider those files to be equivalent to themselves and delete them, losing all my data?

Nope, hoardy will notice the same path being processed twice and ignore the second occurrence, printing a warning.

Nope, hoardy will detect this too by resolving all of its inputs first.

Nope, hoardy will detect this and skip all such files too.

Before acting, hoardy deduplicate checks that, if source and target point to the same file on the same device, then its nlinks is not 1. If both source and target point to the same last copy of a file, it will not be acted upon.

Note that hoardy does this check not only in --delete mode, but also in --hardlink mode, since re-linking them will simply produce useless link+rename churn and disk IO.

Actually, if you think about it, this check catches all other possible issues of the “removing the last copy of a file when we should not” kind, so all other similar “What if” questions can be answered by “in the worst case, it will be caught by that magic check and at least one copy of the file will persist”. And that’s the end of that.
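
By the way, the device number, inode number, and nlinks in question are all plainly visible via stat, if you want to inspect them yourself (a sketch using GNU stat):

# device number, inode number, and the number of hardlinks pointing to this inode
stat -c 'dev=%d ino=%i nlinks=%h %n' path/to/file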

As far as I know, hoardy is the only tool in existence that handles this properly.

Probably because I’m unusual in that I like using mount --bind mounts in my $HOME. (They are useful in places where you’d normally want to hardlink directories, but can’t, because POSIX disallows it. For instance, the vendor/kisstdlib directory here is a mount --bind on my system, so that I can ensure all my projects work with its latest version without fiddling with git.) And so I want hoardy to work even while they are all mounted.

Hmm, but hoardy deduplicate’s implementation looks rather complex. What if a bug there causes it to “deduplicate” some files that are not actually duplicates and lose data?

Firstly, a healthy habit to have is to not trust any single tool not to lose your data; make a backup (including of your backups) before running hoardy deduplicate for the first time.

(E.g., if you are feeling very paranoid, you can run rsync -aHAXiv --link-dest=source source copy to make a hardlink-copy or cp -a --reflink=always source copy to make a reflink-copy first. On a modern filesystem these cost very little. And you can later remove them to save the space used by inodes, e.g., after you have run hoardy verify and confirmed that nothing is broken.)

Secondly, I’m pretty sure it works fine as hoardy has quite a comprehensive test suite for this and is rather well-tested on my backups.

Thirdly, the actual body of hoardy deduplicate is written in a rather paranoid way re-verifying all assumptions before attempting to do anything.

Fourthly, by default, hoardy deduplicate runs with the --paranoid option enabled, which checks that source and target have equal contents before doing anything to a pair of supposedly duplicate files, and emits errors if they are not. This could be awfully inefficient, true, but in practice it usually does not matter, since on a reasonably powerful machine with those files living on an HDD the cost of those content re-checks gets eaten by IO latency anyway. Meanwhile, --paranoid prevents data loss even if the rest of the code is completely broken.

With --no-paranoid it still checks file content equality, but only once per new inode, not once for each pair of paths. Eventually --no-paranoid will probably become the default (when I stop editing all that code and fearing I would accidentally break something).

Which, by the way, is the reason why hoardy deduplicate looks rather complex. All those checks are not free.

So, since I’m using this tool extensively myself on my backups, which I very much don’t want to have to restore from their cold backups later, I’m pretty paranoid about ensuring it does not lose any data. It should be fine.

That is, I’ve been using hoardy to deduplicate files inside my backup directories, which contain billions of files spanning decades, since at least 2020.

So far, for me, bugs in hoardy caused zero data loss.

Why does hoardy exist?

Originally, I made hoardy as a replacement for its alternatives so that I could:

“But ZFS/BTRFS solves this!” I hear you say? Well, sure, such filesystems can deduplicate data blocks between different files (though, usually, you have to make a special effort to achieve this as, by default, they do not), but how much space gets wasted to store the inodes? Let’s be generous and say an average inode takes 256 bytes (on modern filesystems it’s usually 512 bytes or more, which, by the way, is usually a good thing, since it allows small files to be stored much more efficiently by inlining them into the inode itself, but it is awful for efficient storage of backups). My home directory has ~10M files in it (most of those are emails and files in source repositories; this is just the minimum I use all the time, I have a bunch more stuff on external drives, but it does not fit onto my SSD), thus a year of naively taken daily rsync-backups would waste (256 * 10**7 * 365) / (1024 ** 3) = 870.22 GiB in inodes alone. Sure, rsync --link-dest will save a bunch of that space, but if you move a bunch of files, they’ll get duplicated.

In practice, the last time I deduplicated a never-before touched pristine rsnapshot hierarchy containing backups of my $HOME it saved me 1.1 TiB of space. Don’t you think you would find a better use for 1.1TiB of additional space than storing useless inodes? Well, I did.

“But fdupes and its forks solve this!” I hear you say? Well, sure, but the experience of using them in the above use cases of deduplicating mostly-read-only files is quite miserable. See the “Alternatives” section for discussion.

Also, I wanted to store the oldest known mtime for each individual path, even when deduplicate-hardlinking all the copies, so that the exact original filesystem tree could be re-created from the backup when needed. AFAIK, hoardy is the only tool that does this. Yes, this feature is somewhat less useful on modern filesystems which support reflinks (Copy-on-Write lightweight copies), but even there, a reflink takes a whole inode, while storing an mtime in a database takes <= 8 bytes.

Also, in general, indexing, search, duplicate discovery, set operations, send-receive from remote nodes, and application-defined storage APIs (like HTTP/WebDAV/FUSE/SFTP) can be combined to produce many useful functions. It’s annoying that there appears to be no tool that can do all of those things on top of a plain file hierarchy. All such tools known to me first slurp all the files into their own object stores, and usually store those files rather less efficiently than I would prefer. See the “Wishlist” for more info.

Development history

This version of hoardy is a minimal viable version of my privately developed tool (referred to as the “bootstrap version” in commit messages), taken at its version circa 2020, cleaned up, rebased on top of kisstdlib, slightly polished, and documented for public display and consumption.

The private version has more features and uses a much more space-efficient database format, but most of those cool new features are unfinished and kind of buggy, so I was actually mostly using the naive-database-formatted bootstrap version in production. So, I decided to finish generalizing the infrastructure stuff into kisstdlib first, chop away everything related to the v4 on-disk format and later ones, and then publish this part first. (Which still took me two months of work. Ridiculous!)

The rest is currently a work in progress.

If you’d like all those planned features from the “TODO” list and the “Wishlist” to be implemented, sponsor them. I suck at multi-tasking, and I need to eat; time spent procuring sustenance money takes away huge chunks of time I could be spending on this and other related projects.

Alternatives

fdupes and jdupes

fdupes is the original file deduplication tool. It walks given input directories, hashes all files, groups them into potential duplicate groups, then compares the files in each group as binary strings, and then deduplicates the ones that match.

jdupes is a fork of fdupes that does duplicate discovery more efficiently by hashing as little as possible, which works really well on an SSD or when your files contain a very small number of duplicates. But in other situations, like a file hierarchy with tons of duplicated files living on an HDD, it works quite miserably, since it generates a lot of disk seeks by doing file comparisons incrementally.

Meanwhile, since the fork, fdupes added hashing into an SQLite database, similar to what hoardy does.

Comparing hoardy, fdupes, and jdupes I notice the following:

In short, hoardy implements almost a union of the features of both fdupes and jdupes, with some more useful features on top, though with some little bits missing here and there; hoardy is also significantly safer to use than either of the other two.

RHash

RHash is a “recursive hasher”.

Basically, you give it a list of directories, it outputs <hash digest> <path> lines (or similar, it’s configurable), and then, later, you can verify files against a file consisting of such lines. It also has some nice features, like hashing with many hash functions simultaneously, skipping of already-hashed files present in the output file, etc.

Practically speaking, its usage is very similar to hoardy index followed by hoardy verify, except

Many years before hoardy was born, I was using RHash quite extensively (and I remember the original forum it was discussed/developed at, yes).

Meta

Changelog?

See CHANGELOG.md.

TODO?

See above, also the bottom of CHANGELOG.md.

License

LGPLv3+ (because it will become a library, eventually).

Contributing

Contributions are accepted both via GitHub issues and PRs, and via pure email. In the latter case I expect to see patches formatted with git-format-patch.

If you want to perform a major change and you want it to be accepted upstream here, you should probably write me an email or open an issue on GitHub first. In the cover letter, describe what you want to change and why. I might also have a bunch of code doing most of what you want in my stash of unpublished patches already.

Usage

hoardy

A thingy for hoarding digital assets.

hoardy index

Recursively walk given INPUTs and update the DATABASE to reflect them.

Algorithm
Options

hoardy find

Print paths of files under INPUTs that match specified criteria.

Algorithm
Options

hoardy find-duplicates

Print groups of paths of duplicated files under INPUTs that match specified criteria.

Algorithm
  1. For each INPUT, walk it recursively (in the DATABASE), for each walked path:

    • get its group, which is a concatenation of its type, sha256 hash, and all metadata fields for which the corresponding --match-* options are set; e.g., with --match-perms --match-uid, this produces a tuple of type, sha256, mode, uid;
    • get its inode_id, which is a tuple of device_number, inode_number for filesystems that report inode_numbers, or a unique integer otherwise;
    • record this inode’s metadata and path as belonging to this inode_id;
    • record this inode_id as belonging to this group.
  2. For each group, for each inode_id in group:

    • sort paths as --order-paths says,
    • sort inodes as --order-inodes says.
  3. For each group, for each inode_id in group, for each path associated to inode_id:

    • print the path.

Also, if you are reading the source code, note that the actual implementation of this command is a bit more complex than what is described above. In reality, there’s also a pre-computation step designed to filter out single-element groups very early, before loading of most of file metadata into memory, thus allowing hoardy to process groups incrementally, report its progress more precisely, and fit more potential duplicates into RAM. In particular, this allows hoardy to work on DATABASEs with hundreds of millions of indexed files on my 2013-era laptop.
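
For intuition only, the hash-grouping part of the above is conceptually similar to the following naive shell pipeline, which, unlike hoardy, re-hashes everything on every run, knows nothing about inodes, metadata, or the DATABASE, and breaks on file names containing newlines:

# hash every file, sort by digest, then keep only the lines whose
# first 64 characters (the SHA256 hex digest) occur more than once
find ~/Downloads -type f -print0 \
  | xargs -0 sha256sum \
  | sort \
  | uniq -w 64 --all-repeated=separate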

Output

With the default verbosity, this command simply prints all paths in resulting sorted order.

With verbosity of 1 (a single --verbose), each path in a group gets prefixed by:

With verbosity of 2, each group gets prefixed by a metadata line.

With verbosity of 3, each path gets prefixed by associated inode_id.

With the default spacing of 1 a new line gets printed after each group.

With spacing of 2 (a single --spaced) a new line also gets printed after each inode.

Options

hoardy deduplicate

Produce groups of duplicated indexed files matching specified criteria, similar to how find-duplicates does, except with much stricter default --match-* settings, and then deduplicate the resulting files by hardlinking them to each other.

Algorithm
  1. Proceed exactly as find-duplicates does in its step 1.

  2. Proceed exactly as find-duplicates does in its step 2.

  3. For each group:

    • assign the first path of the first inode_id as source,
    • print source,
    • for each inode_id in group, for each inode and path associated to that inode_id:
      • check that the inode metadata matches the filesystem metadata of path,
        • if it does not, print an error and skip this inode_id,
      • if path is source, continue with the other paths;
      • if --paranoid is set or if this is the very first path of inode_id,
        • check whether file data/contents of path matches file data/contents of source,
          • if it does not, print an error and skip this inode_id,
      • if --hardlink is set, hardlink source -> path,
      • if --delete is set, unlink the path,
      • update the DATABASE accordingly.
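
For illustration, the --hardlink step for a single source and path pair boils down to something like the following (a sketch only: the temporary name is made up, and hoardy also updates its DATABASE and handles errors far more carefully):

# re-verify that the contents really match, then atomically replace path
# with a hardlink to source via the usual link+rename dance
if cmp -s -- "$source" "$path"; then
    ln -f -- "$source" "$path.tmp" && mv -f -- "$path.tmp" "$path"
else
    echo "error: contents differ, skipping: $path" >&2
fi
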
Output

The verbosity and spacing semantics are similar to the ones used by find-duplicates, except this command starts at verbosity of 1, i.e. as if a single --verbose is specified by default.

Each processed path gets prefixed by:

Options

hoardy verify

Verify that indexed files from under INPUTs that match specified criteria exist on the filesystem and that their metadata and hashes match the filesystem contents.

Algorithm

This command runs with an implicit --match-sha256 option which can not be disabled, so hash mismatches always produce errors.
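
For example, re-checking an already-indexed directory against the DATABASE is just (a usage sketch, mirroring the other subcommands):

hoardy verify ~/Downloads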

Options

hoardy upgrade

Backup the DATABASE and then upgrade it to the latest format.

This exists for development purposes.

You don’t need to call this explicitly as, normally, database upgrades are completely automatic.

Examples

Development: ./test-hoardy.sh [--help] [--wine] [--fast] [default] [(NAME|PATH)]*

Sanity check and test hoardy command-line interface.

Examples