software/hoardy-web/./tool/README.md

Passively capture, archive, and hoard your web browsing history, including the contents of the pages you visit, for later offline viewing, replay, mirroring, data scraping, and/or indexing. Your own personal private Wayback Machine that can also archive HTTP POST requests and responses, as well as most other HTTP-level data.

What is hoardy-web?

hoardy-web is a tool to inspect, search, organize, programmatically extract values and generate static website mirrors from, archive, view, and replay HTTP archives/dumps in WRR (“Web Request+Response”, produced by the Hoardy-Web Web Extension browser add-on, also on GitHub) and mitmproxy (mitmdump) file formats.

How to read this document

The top part of this README file (from here to “Usage”) is designed to be read linearly, not piecemeal.

The “Usage” section can be read and referred to in arbitrary order.

Quickstart

Pre-installation

Installation

Get some archived web data

Install the Hoardy-Web extension and get some archive data by browsing some websites.

Make a website mirror from your archived data

You can then use your archived data to generate a local offline static website mirror that can be opened in a web browser without accessing the Internet, similar to what wget -mpk does.

The invocation is slightly different depending on whether the data was exported via saveAs by the Hoardy-Web extension itself, saved via the hoardy-web-sas simple archiving server, or saved via hoardy-web serve --archive-to (see below):

# for "Export via `saveAs`"
hoardy-web mirror --to ~/hoardy-web/mirror1 ~/Downloads/Hoardy-Web-export-*

# for `hoardy-web-sas` and/or `hoardy-web serve --archive-to`
hoardy-web mirror --to ~/hoardy-web/mirror1 ../simple_server/pwebarc-dump ~/hoardy-web/raw

You can then, e.g. rsync/copy ~/hoardy-web/mirror1 to your e-book reader/phone before hopping on a plane or going on a deep-sea dive, and still be able to read all those pages.
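
For instance, a minimal sketch (the destination path here is hypothetical; substitute your device's mount point):

# copy the generated mirror onto an e-book reader mounted at /run/media/user/EREADER
rsync -rtv ~/hoardy-web/mirror1/ /run/media/user/EREADER/hoardy-web-mirror1/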

The default settings should work for most simple websites, but a section below contains more info and more usage examples.

View/replay your archived data interactively over HTTP

You can also view your archived pages by running hoardy-web in web server mode:

# serve a union of all available archives,
# which are not at all required to use the same file format
hoardy-web serve \
  ~/hoardy-web/raw \
  ../simple_server/pwebarc-dump \
  ~/Downloads/Hoardy-Web-export-* \
  mitmproxy.*.dump

You can then navigate to the server's listening address (http://127.0.0.1:3210 by default) in your browser to browse and replay the archived pages.
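
For example, with the default --host and --port settings, and given the URL templates reported by the /hoardy-web/server-info endpoint described below, individual archived visits can be fetched directly by URL (the page address here is purely illustrative):

# replay the latest archived visit of a page ("+inf" means "newest")
curl 'http://127.0.0.1:3210/web/+inf/https://archiveofourown.org/works/3733123'
# replay the oldest archived visit ("-inf" means "oldest")
curl 'http://127.0.0.1:3210/web/-inf/https://archiveofourown.org/works/3733123'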

This is very reminiscent of the Wayback Machine by design, yes.

You can also use hoardy-web serve to replace hoardy-web-sas simple archiving server by combining both archival and replay:

hoardy-web serve --implicit \
  --archive-to ~/hoardy-web/raw \
  ../simple_server/pwebarc-dump \
  ~/Downloads/Hoardy-Web-export-* \
  mitmproxy.*.dump

See a section below for more info and usage examples.

Glossary

Supported input file formats

At the moment, the hoardy-web tool supports separate WRR files, WRR bundles (optionally compressed sequences of WRR dumps), and mitmproxy stream dumps as inputs.

WARC and built-in HAR support will be added soon-ish, PCAP support will be added eventually.

All sub-commands of hoardy-web except for the few that operate on separate WRR files only (e.g. organize, get, and run; see the “Convert anything to WRR” recipe below)

can take all supported file formats as inputs. So, most examples described below will work fine with any mix of inputs as arguments.

You can, however, force hoardy-web to use a specific loader for all given inputs, e.g.:

hoardy-web mirror --to ~/hoardy-web/mirror1 \
  --load-mitmproxy mitmproxy.*.dump

This is slightly faster than the default --load-any and, for most loaders, produces more specific errors that explain exactly what failed to parse, instead of simply saying that all tried parsers failed to work.

Recipes

Convert anything to WRR

To use hoardy-web organize, get, and run sub-commands on data stored in file formats other than separate WRR files, you will have to import them first:

hoardy-web import bundle --to ~/hoardy-web/raw ~/Downloads/Hoardy-Web-export-*
hoardy-web import mitmproxy --to ~/hoardy-web/mitmproxy ~/mitmproxy/mitmproxy.*.dump

Note that .wrr files can be parsed as single-dump .wrrb files, so the first command above will work even when some of the exported dumps were exported as separate .wrr files by the Hoardy-Web extension (because you configured it to do that, because it exported a bucket with a single dump as a separate file, because it exported a dump that was larger than the set maximum bundle size as a separate file, etc). So, essentially, the first command above is equivalent to

hoardy-web organize --copy --to ~/hoardy-web/raw ~/Downloads/Hoardy-Web-export-*.wrr
hoardy-web import bundle --to ~/hoardy-web/raw ~/Downloads/Hoardy-Web-export-*.wrrb

In fact, internally, hoardy-web import bundle is an alias for hoardy-web organize --copy --load-wrrb --defer-number 0.

Find and filter things

You can search your archive directory by using the hoardy-web find sub-command, which prints the paths of those of its inputs that match the given conditions. For example, to list reqres from ~/hoardy-web/raw that contain complete GET requests with 200 OK responses, you can run:

hoardy-web find --method GET --status-re .200C ~/hoardy-web/raw

To limit the above to responses containing text/html bodies with a (whole) word “Potter” in them:

hoardy-web find --method GET --method DOM --status-re .200C --response-mime text/html \
  --response-body-grep-re "\bPotter\b" ~/hoardy-web/raw

Most other sub-commands also accept the same filtering options. So, for instance, you can pretty-print or generate a static mirror from such files instead:

hoardy-web pprint --method GET --method DOM --status-re .200C --response-mime text/html \
  --response-body-grep-re "\bPotter\b" \
  ~/hoardy-web/raw

# we set `--index-all-inputs` to disable its default input filters
hoardy-web mirror --index-all-inputs \
  --method GET --method DOM --status-re .200C --response-mime text/html \
  --response-body-grep-re "\bPotter\b" \
  --to ~/hoardy-web/mirror-potter ~/hoardy-web/raw

Or, say, you want a list of all domains you ever visited that use CloudFlare:

hoardy-web stream --format=raw -ue hostname \
  --response-headers-grep-re '^server: cloudflare' \
  ~/hoardy-web/raw | sort | uniq

Or, say, you want to get all JSON responses from a certain host, except when they were fetched from CloudFlare and encoded with br, and then feed them to a script:

hoardy-web find -z --url-re 'https://example\.org/.*' --response-mime text/json \
  --not-response-headers-and-grep-re '^server: cloudflare' \
  --not-response-headers-and-grep-re '^content-encoding: br' \
  ~/hoardy-web/raw > found-paths
xargs -0 my-example-org-json-parser < found-paths

See the “Usage” section below for all possible filtering options.

In principle, the possibilities are limitless since hoardy-web has a tiny expression language which you can use to do things not directly supported by the command-line options:

hoardy-web find --and "response.body|eb|len|> 10240" ~/hoardy-web/raw

and, if you are a developer, you can easily add your own custom functions to it.

Merge multiple archive directories

To merge multiple input directories into one you can simply hoardy-web organize them --to a new directory. hoardy-web will automatically deduplicate all the files in the generated result.
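
For instance, a sketch (the ~/hoardy-web/merged destination is just an example name; --copy keeps the inputs intact, while --move, as shown further below, deduplicates and consumes them):

# merge two archive directories into a new one
hoardy-web organize --copy --to ~/hoardy-web/merged ~/hoardy-web/raw ../simple_server/pwebarc-dump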

That is to say, for hoardy-web organize

For example, if you duplicate an input directory via --copy or --hardlink:

hoardy-web organize --copy     --to ~/hoardy-web/copy1 ~/hoardy-web/original
hoardy-web organize --hardlink --to ~/hoardy-web/copy2 ~/hoardy-web/original

(In real-life use different copies usually end up on different backup drives or some such.)

Then, repeating the same commands would be noops:

# noops
hoardy-web organize --copy     --to ~/hoardy-web/copy1 ~/hoardy-web/original
hoardy-web organize --hardlink --to ~/hoardy-web/copy2 ~/hoardy-web/original

And running the opposite command would also be a noop:

# noops
hoardy-web organize --hardlink --to ~/hoardy-web/copy1 ~/hoardy-web/original
hoardy-web organize --copy     --to ~/hoardy-web/copy2 ~/hoardy-web/original

And copying between copies is also a noop:

# noops
hoardy-web organize --hardlink --to ~/hoardy-web/copy2 ~/hoardy-web/copy1
hoardy-web organize --copy     --to ~/hoardy-web/copy2 ~/hoardy-web/copy1

But doing hoardy-web organize --move while supplying directories that have the same data will deduplicate the results:

hoardy-web organize --move --to ~/hoardy-web/all ~/hoardy-web/copy1 ~/hoardy-web/copy2
# `~/hoardy-web/all` will have each file only once
find ~/hoardy-web/copy1 ~/hoardy-web/copy2 -type f
# the output will be empty

hoardy-web organize --move --to ~/hoardy-web/original ~/hoardy-web/all
# `~/hoardy-web/original` will not change iff it is already organized using `--output default`
# otherwise, some files there will be duplicated
find ~/hoardy-web/all -type f
# the output will be empty

Similarly, hoardy-web organize --symlink resolves its input symlinks and deduplicates its output symlinks:

hoardy-web organize --symlink --output hupq_msn --to ~/hoardy-web/pointers ~/hoardy-web/original
hoardy-web organize --symlink --output shupq_msn --to ~/hoardy-web/schemed ~/hoardy-web/original

# noop
hoardy-web organize --symlink --output hupq_msn --to ~/hoardy-web/pointers ~/hoardy-web/original ~/hoardy-web/schemed

I.e. the above will produce ~/hoardy-web/pointers with unique symlinks pointing to each file in ~/hoardy-web/original only once.

Build a file system tree of latest versions of all hoarded URLs

Assuming you keep your WRR dumps in ~/hoardy-web/raw, the following commands will generate a file system hierarchy under ~/hoardy-web/latest organized in such a way that, for each URL in ~/hoardy-web/raw, ~/hoardy-web/latest will contain a symlink pointing to the most recent WRR file in ~/hoardy-web/raw that contains a 200 OK response for that URL:

# import exported extension outputs
hoardy-web import bundle --to ~/hoardy-web/raw ~/Downloads/Hoardy-Web-export-*
# and/or move and rename `hoardy-web-sas` outputs
hoardy-web organize --move --to ~/hoardy-web/raw ../simple_server/pwebarc-dump

# and then organize them
hoardy-web organize --symlink --latest --output hupq --to ~/hoardy-web/latest --status-re .200C ~/hoardy-web/raw

Personally, I prefer the flat_mhs format (see the documentation of --output below), as I dislike deep file hierarchies. Using it also simplifies filtering in my ranger file browser, so I do this:

hoardy-web organize --symlink --latest --output flat_mhs --to ~/hoardy-web/latest --status-re .200C ~/hoardy-web/raw

Update the tree incrementally, in real time

The above commands rescan the whole contents of ~/hoardy-web/raw and so can take a while to complete.

If you have a lot of WRR files and you want to keep your symlink tree updated in near-real-time, you will need to use a two-stage pipeline: feed the output of hoardy-web organize --zero-terminated to hoardy-web organize --stdin0 to perform complex updates.

E.g., the following will move new WRR files from ../simple_server/pwebarc-dump to ~/hoardy-web/raw, renaming them according to --output default (the for loop is there to preserve buckets/profiles):

for arg in ../simple_server/pwebarc-dump/* ; do
  hoardy-web organize --zero-terminated --to ~/hoardy-web/raw/"$(basename "$arg")" "$arg"
done > changes

Then, you can reuse the paths saved in the changes file to update the symlink tree, as above:

hoardy-web organize --symlink --latest --output flat_mhs --to ~/hoardy-web/latest --status-re .200C --stdin0 < changes

Then, optionally, you can reuse the changes file again to symlink all the new files from ~/hoardy-web/raw under ~/hoardy-web/all, keeping all URL versions, by using the --output hupq_msn format:

hoardy-web organize --symlink --output hupq_msn --to ~/hoardy-web/all --stdin0 < changes

Generate a local offline static website mirror, similar to wget -mpk

To render your archived data into a local offline static website mirror containing interlinked HTML files and their requisite resources similar to (but better than) what wget -mpk (wget --mirror --page-requisites --convert-links) does, you need to run something like this:

# separate `WRR` files
hoardy-web mirror --to ~/hoardy-web/mirror1 ~/hoardy-web/raw

# separate `WRR` files and/or `WRR` bundles
hoardy-web mirror --to ~/hoardy-web/mirror1 ~/Downloads/Hoardy-Web-export-*

# `mitmproxy` dumps
hoardy-web mirror --to ~/hoardy-web/mirror1 mitmproxy.*.dump

# any mix of these
hoardy-web mirror --to ~/hoardy-web/mirror1 \
  ~/hoardy-web/raw \
  ~/Downloads/Hoardy-Web-export-* \
  mitmproxy.*.dump

On completion, ~/hoardy-web/mirror1 will contain said newly generated interlinked HTML files, their resource requisites, and everything else available from the given archive files. The set of mirrored files can be limited using several methods described below.

By default, the resulting HTML files will be stripped of all JavaScript and other stuff of various levels of evil. The results should be completely self-contained (i.e., work inside a browser running in “Work offline” mode) and safe to view in a dumb unconfigured browser (i.e., the resulting web pages should not request any page requisites — like images, media, CSS, fonts, etc — from the Internet).

(In practice, though, hoardy-web mirror is not completely free of bugs, and the HTML5 spec is constantly evolving, with new things getting added all the time. So, it is entirely possible that the output of the above hoardy-web mirror invocation will not be completely self-contained. Which is why the Hoardy-Web extension has its own per-tab Work offline mode which, by default, gets enabled for tabs with file: URLs. That feature prevents the outputs of hoardy-web mirror from accessing the Internet regardless of any bugs or missing features in hoardy-web. It also helps with debugging.)

If you are unhappy with the above and, for instance, want to keep JavaScript and produce human-readable HTMLs, you can run the following instead:

hoardy-web mirror \
  -e 'response.body|eb|scrub response &all_refs,+scripts,+pretty' \
  --to ~/hoardy-web/mirror2 ~/hoardy-web/raw

Or, say, you want to produce minimized outputs:

hoardy-web mirror \
  -e 'response.body|eb|scrub response &all_refs,-verbose,-whitespace,-optional_tags' \
  --to ~/hoardy-web/mirror2 ~/hoardy-web/raw

See the documentation for the --remap-* options of mirror sub-command and the options of the scrub function below for more info.

If you instead want a mirror made of raw files without any content censorship or link conversions, run:

# --raw-(re)s(ponse)body
hoardy-web mirror --raw-sbody --to ~/hoardy-web/mirror-raw ~/hoardy-web/raw

The latter command will render your mirror rather quickly, but the other mirror commands use the scrub function, and that can be a bit slow, mostly because html5lib and tinycss2, which hoardy-web uses for paranoid HTML and CSS parsing and filtering, are fairly slow. Under CPython on my 2013-era laptop, hoardy-web mirror manages to render, on average, 1-20 web pages per second, depending on the website. Bunches of small pages that reuse the same CSS files take less time; large pages, pages with a lot of complex HTML, or pages with lots of inlined CSS take more. Though, this is not very characteristic of the overall mirroring speed, since images and other media just get copied around at expected speeds of 300+ files per second.

Also, enabling +indent (or +pretty) in scrub will make HTML scrubbing slightly slower (since it will have to track more stuff) and CSS scrubbing a lot slower (since it will force complete structural parsing, not just tokenization).
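
For instance, a sketch enabling it on top of the human-readable example above:

hoardy-web mirror \
  -e 'response.body|eb|scrub response &all_refs,+indent' \
  --to ~/hoardy-web/mirror2 ~/hoardy-web/raw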

Update your mirror incrementally

By default, hoardy-web mirror runs with an implied --remap-all option which remaps all links in mirrored HTML files to local files, even if source WRR files for those would-be mirrored files are missing. This allows you to easily update your mirror directory incrementally by re-running hoardy-web mirror with the same --to argument on new inputs. For instance:

# render everything archived in 2023
hoardy-web mirror --to ~/hoardy-web/mirror1 ~/hoardy-web/raw/*/2023

# now, add new stuff archived in 2024, keeping already exported files as-is
hoardy-web mirror --skip-existing --to ~/hoardy-web/mirror1 ~/hoardy-web/raw/*/2024

# same, but updating old files
hoardy-web mirror --overwrite-dangerously --to ~/hoardy-web/mirror1 ~/hoardy-web/raw/*/2024

After the first of the above commands, links from pages generated from WRR files of ~/hoardy-web/raw/*/2023 to URLs contained in files from ~/hoardy-web/raw/*/2024 but not in files from ~/hoardy-web/raw/*/2023 will point to not-yet-mirrored, and hence non-existent, files on disk. I.e., those links will be broken. Running the second or the third command from the example above will then mirror additional files from ~/hoardy-web/raw/*/2024, thus fixing some or all of those links.

If you want to treat links pointing to not-yet-hoarded URLs exactly like wget -mpk does, i.e. keep them pointing to their original URLs instead of remapping them to not-yet-existent local files (as the default --remap-all does), you need to run mirror with the --remap-open option:

hoardy-web mirror --remap-open --to ~/hoardy-web/mirror4 ~/hoardy-web/raw

In practice, however, you probably won’t want the exact behaviour of wget -mpk, since opening pages generated that way is likely to make your web browser try to access the Internet to load missing page requisites. To solve this problem, hoardy-web provides --remap-semi option, which does what --remap-open does, except it also remaps unavailable action links and page requisites into void links, fixing that problem:

hoardy-web mirror --remap-semi --to ~/hoardy-web/mirror4 ~/hoardy-web/raw

See the documentation for the --remap-* options below for more info.

Obviously, using --remap-open or --remap-semi will make incremental updates to your mirror impossible.

Mirror a subset of archived data

The simplest way to mirror a subset of your data is to run one of hoardy-web organize --symlink --latest commands described above, and then do something like this:

hoardy-web mirror --to ~/hoardy-web/mirror5 ~/hoardy-web/latest/archiveofourown.org

thus mirroring everything ever archived from https://archiveofourown.org.

… by input filters, --root-*, and --depth

As an alternative to (or in combination with) keeping a symlink hierarchy of latest versions, you can limit the set of files hoardy-web mirror will consider for mirroring by setting some input filters, e.g.:

hoardy-web mirror \
  --to ~/hoardy-web/mirror6 ~/hoardy-web/raw/*/2023 \
  --url-prefix 'https://archiveofourown.org/works/3733123' \
  --url-prefix 'https://archiveofourown.org/works/30186441'

Note, however, that doing this will prevent mirror from processing reqres not accepted by the specified filters, which, in the above example, means most of the requisite resources of those pages will be skipped. When running with --remap-all, as the above does, this can be solved by running hoardy-web mirror repeatedly with different input filters, e.g., to mostly fix the above outputs you could then run:

hoardy-web mirror \
  --to ~/hoardy-web/mirror6 ~/hoardy-web/raw/*/2023 \
  --url-re 'https://archiveofourown\.org/.*\.css'

but this is quite inconvenient, and when running with something other than --remap-all, it will leave many output pages completely broken anyway.

Which is why hoardy-web can instead load (an index of) an assortment of WRR files into memory and then mirror only a subset of those reqres, together with all the requisite resources needed to properly render those pages. This can be achieved by specifying some --root-* filtering options, e.g.:

hoardy-web mirror \
  --to ~/hoardy-web/mirror6 ~/hoardy-web/raw/*/2023 \
  --root-url-prefix 'https://archiveofourown.org/works/3733123' \
  --root-url-prefix 'https://archiveofourown.org/works/30186441'

The --root-* options have exactly the same syntax and semantics as the normal input filtering options, except they start with the --root- prefix, and, instead of making hoardy-web accept reqres satisfying them as inputs, they make hoardy-web mirror queue such reqres for mirroring at the initial depth of 0. And yes, there is also the --depth option, which works similarly to wget’s --level option: it will follow all jump (a href) and action links reachable in no more than --depth browser navigations from the recursion roots given by the --root-* options and then mirror all those URLs and their requisites too.
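
For example, a sketch that also follows links one navigation away from the given root (assuming --depth takes a numeric argument, like wget’s --level):

hoardy-web mirror \
  --to ~/hoardy-web/mirror6 ~/hoardy-web/raw/*/2023 \
  --root-url-prefix 'https://archiveofourown.org/works/3733123' \
  --depth 1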

When using --root-* options, --remap-open works exactly like wget’s --convert-links in that it will only remap the URLs that are going to be mirrored and will keep the rest as-is. Similarly, --remap-semi and --remap-closed will consider only the URLs reachable from the --root-*s in no more than --depth jumps as available.

Unlike most other sub-commands of hoardy-web, which set no default filters, mirror runs with implied --ignore-some-inputs and --skip-some-indexed options, which set some useful default input and root filters. This can be disabled with --index-all-inputs and/or --queue-all-indexed, which can be useful when using mirror to do weird things with custom --exprs. With the default --exprs, however, using these options is likely to produce a broken mirror, unless you add some specific filters manually. See the documentation of all those options below for more info.

Also note that hoardy-web loads (indexes) WRR files pretty fast, so if you are running from an SSD, you can totally feed it years of WRR files and then only mirror a couple of URLs, and it will still finish pretty quickly.

Prioritize some files over others

By default, files are read, queued, and then mirrored in the order they are specified on the command line, or, when an argument is a directory, in lexicographic file system walk order. (See the --paths-* and --walk-* options below if you want to change this.)

However, the above rule does not apply to page requisites: those always get mirrored (with or without --root-*, and regardless of --paths-* and --walk-* options) just after their parent HTML document gets parsed and before that document gets written to disk. I.e., mirror will produce a new file containing an HTML document only after first producing all of its requisites. I.e., when mirroring into an empty directory, if you see that mirror has generated an HTML document, you can be sure that all of its requisites loaded (indexed) by this mirror invocation have been rendered too. Meaning, you can go ahead and open it in your browser, even if mirror has not finished yet.

Moreover, unlike all other sub-commands, mirror handles duplication in its input files in a special way: it remembers the files it has already seen and ignores them when they are given a second time. (All other commands don’t: they will just process the same file a second time, a third time, and so on. This is by design: other commands are designed to handle potentially enormous file hierarchies in constant memory.)

The combination of all of the above means you can prioritize rendering of some documents over others by specifying them earlier on the command line and then, in a later argument, specifying their containing directory to allow mirror to also see their requisites and documents they link to. For instance,

hoardy-web mirror \
  --to ~/hoardy-web/mirror7 \
  ~/hoardy-web/latest/archiveofourown.org/works__3733123*.wrr \
  ~/hoardy-web/latest/archiveofourown.org

will mirror all of ~/hoardy-web/latest/archiveofourown.org, but the web pages contained in files named ~/hoardy-web/latest/archiveofourown.org/works__3733123*.wrr and their requisites will be mirrored first.

This also works with --root-* options. E.g., the following

hoardy-web mirror \
  --to ~/hoardy-web/mirror7 \
  ~/hoardy-web/latest/archiveofourown.org/works__3733123*.wrr \
  ~/hoardy-web/latest/archiveofourown.org \
  --root-url-prefix 'https://archiveofourown.org/works/'

will mirror all pages whose URLs start with https://archiveofourown.org/works/ and all their requisites, but the pages contained in files named ~/hoardy-web/latest/archiveofourown.org/works__3733123*.wrr and their requisites will be mirrored first.

Finally, there is also the --boring option, which allows you to load some input PATHs without queuing them as roots, even when no --root-* options are specified or when the specified --root-* options would otherwise make those reqres roots. E.g., the following

hoardy-web mirror \
  --to ~/hoardy-web/mirror8 \
  --boring ~/hoardy-web/latest/i.imgur.com \
  --boring ~/hoardy-web/latest/archiveofourown.org \
  ~/hoardy-web/latest/archiveofourown.org/works__[0-9]*.wrr

will load (an index of) everything under ~/hoardy-web/latest/i.imgur.com and ~/hoardy-web/latest/archiveofourown.org into memory but will only mirror the contents of ~/hoardy-web/latest/archiveofourown.org/works__[0-9]*.wrr files and their requisites.

Control which versions (visits) get mirrored

By default, hoardy-web mirror runs with the implied --latest option, which renders the latest available version (visit) of each URL. Usually, this is fine, as most modern websites use versioned page requisites to improve caching. But it can produce broken results sometimes. For instance, when two different web pages share an unversioned CSS file and one of those pages was recently revisited while the other was not, then, with the default --latest, only the newest version of that CSS file will be mirrored, which can leave the older page broken.

To fix this, you can run mirror with --latest-hybrid option

hoardy-web mirror \
  --to ~/hoardy-web/mirror8 \
  --root-url-prefix 'https://en.wikipedia.org/wiki/' \
  --latest-hybrid \
  ~/hoardy-web/raw

which will mirror each web page with its date-wise closest available resource requisites. This takes quite a bit of memory, though, since mirror has to index and keep in memory references to all versions of all reqres to produce such hybrid results.

Similarly, you can also mirror the --oldest available version of each URL:

hoardy-web mirror \
  --to ~/hoardy-web/mirror9 \
  --root-url-prefix 'https://archiveofourown.org/works/' \
  --oldest \
  ~/hoardy-web/raw

or a version closest to a certain date:

hoardy-web mirror \
  --to ~/hoardy-web/mirror9 \
  --root-url-prefix 'https://en.wikipedia.org/wiki/' \
  --nearest 2020-10-31 \
  ~/hoardy-web/raw

both of which also have --*-hybrid variants.

There is also --all, which mirrors all available versions of all --root-*s and --depth-reachable URLs. When using --all, you’ll probably want to switch to a time-versioned output format, otherwise those default simply-numbered hupq_n outputs will be impossible to interpret:

hoardy-web mirror \
  --to ~/hoardy-web/mirror9 \
  --root-url-prefix 'https://en.wikipedia.org/wiki/' \
  --all \
  --output hupq_tn \
  ~/hoardy-web/raw

Content-addressed outputs and de-duplication

Note that, by default, hoardy-web mirror runs with the implied --hardlink option, which makes it render and write each mirrored file to <--to>/_content/<hash/based/path>.<ext> first and only then hardlink the result to its <--to>/<output/format/based/path>.<ext> destination. The <hash/based/path> is derived from the sha256 hash of the generated file content.

This trick saves quite a bit of space in many cases: e.g., when pages refer to the same resource requisites by slightly different URLs, when the same images and fonts get distributed via different CDN hosts, when you mirror --all visits to some URLs and many of those are absolutely identical, etc.
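
Since the deduplicated outputs are hardlinks, you can get a rough idea of how much sharing happened with standard tools, e.g. (a sketch):

# count mirrored files whose contents are shared with at least one other path
# (hardlinked files have a link count greater than 1)
find ~/hoardy-web/mirror1 -type f -links +1 | wc -l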

You can change the destination those hash-based paths get written to by specifying --content-to. This allows you to easily share files between different mirrors:

hoardy-web mirror \
  --content-to ~/hoardy-web/shared \
  --to ~/hoardy-web/mirror10 \
  --root-url-prefix 'https://archiveofourown.org/works/' \
  ~/hoardy-web/raw

hoardy-web mirror \
  --content-to ~/hoardy-web/shared \
  --to ~/hoardy-web/mirror11 \
  --root-url-prefix 'https://www.royalroad.com/' \
  ~/hoardy-web/raw

You can also control the path of the generated files by setting --content-output, e.g.:

hoardy-web mirror \
  --content-output 'format:%(content_sha256|take_prefix 1|to_hex)s/%(content_sha256|take_prefix 2|take_suffix 1|to_hex)s/%(content_sha256|to_hex)s' \
  --content-to ~/storage/sha256 \
  --to ~/hoardy-web/mirror12 \
  ~/hoardy-web/raw

hoardy-web mirror never overwrites any files under --content-to. It does, however, check that any existing files it references from there have the contents it expects, and generates errors if they do not. That is, you can set --content-output to anything and give any directory as --content-to, and hoardy-web will still ensure that the results are consistent, even when the --content-to cache is poisoned, or when different file contents compute to the same hash (produce a hash collision).

Also note that, by default, mirror treats jump-links (a href, etc) and links to resource requisites quite differently: it remaps jump-links to their normal --to destination paths, while remapping resource requisites to their hash-based --content-to paths instead. This renders identical HTML and CSS files referencing identical resources into identical results, which also saves quite a bit of space.

Note, however, that all of the above does make mirror slightly slower, since it needs to compute a lot of hashes and check contents of many files on disk. It also requires hardlink support on the target file system. Also, pointing --content-to outside of --to stops the mirrored results in --to from being self-contained.

Which is why you can disable all of this by specifying --copy:

hoardy-web mirror \
  --to ~/hoardy-web/mirror10 \
  --copy \
  ~/hoardy-web/raw

Also, you can make it use --symlink (symbolic links) instead of hardlinks. Though, enabling --symlink also enables the --absolute option by default, because browsers treat file:// URLs pointing to symlinks as redirects.
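
E.g., a sketch of the latter, mirroring the --copy example above:

hoardy-web mirror \
  --to ~/hoardy-web/mirror10 \
  --symlink \
  ~/hoardy-web/raw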

Use hoardy-web serve for archival and replay over HTTP

hoardy-web comes with a built-in web server that can do both archival and replay over HTTP.

In other words, hoardy-web serve is, essentially, a combination of the hoardy-web-sas archiving server and an on-demand hoardy-web mirror that talks over HTTP instead of just dumping rendered documents to disk. For interactive use, this is not only more convenient than hoardy-web mirror, it’s also usually much faster, since the required URL rewrites are much cheaper and no recursive rendering of requisite resources is needed. That is, unlike mirror, serve is pretty snappy even on ancient hardware.

When invoking hoardy-web serve, the argument to the --archive-to option will be used by the archiving server parts, while the positional PATH arguments will be used by the replay server parts. That is,

hoardy-web serve \
  --archive-to ~/hoardy-web/raw \
  ~/hoardy-web/raw/*/2024 \
  ../simple_server/pwebarc-dump \
  ~/Downloads/Hoardy-Web-export-* \
  mitmproxy.*.dump

When the argument to --archive-to and the first PATH are the same, you can specify --implicit (or -i for short) to simplify the invocation:

hoardy-web serve --implicit --archive-to ~/hoardy-web/raw
# which is equivalent to
hoardy-web serve --archive-to ~/hoardy-web/raw ~/hoardy-web/raw
# which can be shortened to
hoardy-web serve -i --to ~/hoardy-web/raw
# or even
hoardy-web serve -i -t ~/hoardy-web/raw

By default, hoardy-web serve runs with an implied --all option, which makes it keep the index of all given archives in memory, allowing arbitrary visits to be replayed.

If you dislike this behaviour, you can run it with the --latest, --oldest, or --nearest options instead

hoardy-web serve --latest -i -t ~/hoardy-web/raw
# or
hoardy-web serve --oldest -i -t ~/hoardy-web/raw
# or
hoardy-web serve --nearest 2024-06-01 -i -t ~/hoardy-web/raw

which, for each URL, will make hoardy-web serve keep and allow replay of the last, the first, or the one closest to the given timestamp, respectively. This greatly reduces resource consumption, but it also has the same caveats as hoardy-web mirror --latest, --oldest, and --nearest (see above).

When running with both --latest and archiving enabled, newly archived WRRs will elide older ones from the index, thus making that hoardy-web serve instance serve only the freshest archived version of each URL.

You can also disable indexing and replay completely by running it with --no-replay

hoardy-web serve --no-replay --to ~/hoardy-web/raw

which will make it essentially equivalent to hoardy-web-sas, except for serve having a customizable --output format.
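
For instance, a sketch of that (assuming serve accepts the same --output formats as organize, e.g. the flat_mhs format mentioned above):

hoardy-web serve --no-replay --output flat_mhs --to ~/hoardy-web/raw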

The listening address and port can be controlled with --host and --port options, exactly the same as hoardy-web-sas:

hoardy-web serve --host 127.0.10.1 --port 4321 --archive-to ~/hoardy-web/raw

Currently enabled features can be queried programmatically from the /hoardy-web/server-info endpoint

curl 'http://127.0.0.1:3210/hoardy-web/server-info'

which returns a JSON object like

{"version": 1, "dump_wrr": "/pwebarc/dump", "index_ideal": null, "replay_oldest": "/web/-inf/{url}", "replay_latest": "/web/+inf/{url}", "replay_any": "/web/{timestamp}/{url}"}

Generate previews for WRR files, listen to them via TTS, open them with xdg-open, etc

See the script sub-directory for examples that show how to use pandoc and/or w3m to turn WRR files into previews and readable plain text that can be viewed or listened to via other tools, or dump them into temporary raw data files that can then be immediately fed to xdg-open for one-click viewing.
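
For a rough idea of what those scripts do, here is a minimal sketch along the same lines; it assumes, as the “How to handle binary data” section below suggests, that hoardy-web get prints the response body by default:

# render an archived HTML page into readable plain text with `w3m`
hoardy-web get ../simple_server/pwebarc-dump/path/to/file.wrr | w3m -T text/html -dump | less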

Usage

hoardy-web

Inspect, search, organize, programmatically extract values and generate static website mirrors from, archive, view, and replay HTTP archives/dumps in WRR (“Web Request+Response”, produced by the Hoardy-Web Web Extension browser add-on) and mitmproxy (mitmdump) file formats.

Glossary: a reqres (Reqres when a Python type) is an instance of a structure representing HTTP request+response pair with some additional metadata.

hoardy-web pprint

Pretty-print given inputs to stdout.

hoardy-web get

Print results produced by evaluating given EXPRessions on a given input to stdout.

Algorithm:

The end.

hoardy-web run

Spawn COMMAND with given static ARGuments and NUM additional arguments generated by evaluating given EXPRessions on given PATHs into temporary files.

Algorithm:

The end.

Essentially, this is hoardy-web get into a temporary file for each given PATH, followed by spawning of COMMAND, followed by cleanup when it finishes.

hoardy-web stream

Stream lists of results produced by evaluating given EXPRessions on given inputs to stdout.

Algorithm:

The end.

Essentially, this is a generalized hoardy-web get.

hoardy-web find

Print paths of inputs matching specified criteria.

Algorithm:

The end.

hoardy-web organize

Programmatically copy/rename/move/hardlink/symlink given input files based on their metadata and/or contents.

Algorithm:

The end.

Operations that could lead to accidental data loss are not permitted. E.g. hoardy-web organize --move will not overwrite any files, which is why the default --output contains %(num)d.

hoardy-web import

Use the specified parser to parse data in each INPUT PATH into (a sequence of) reqres and then generate and place their WRR dumps into separate WRR files under OUTPUT_DESTINATION with paths derived from their metadata. In short, this is hoardy-web organize --copy for INPUT files that use different file formats.

hoardy-web import wrrb

Parse each INPUT PATH as a WRR bundle (an optionally compressed sequence of WRR dumps) and then generate and place their WRR dumps into separate WRR files under OUTPUT_DESTINATION with paths derived from their metadata.

hoardy-web import mitmproxy

Parse each INPUT PATH as a mitmproxy stream dump (by using mitmproxy’s own parser) into a sequence of reqres and then generate and place their WRR dumps into separate WRR files under OUTPUT_DESTINATION with paths derived from their metadata.

hoardy-web mirror

Generate a local offline static website mirror from given inputs, producing results similar to those of wget -mpk.

Algorithm:

The end.

Essentially, this is a combination of hoardy-web organize --copy followed by in-place hoardy-web get which has the advanced URL remapping capabilities of (*|/|&)(jumps|actions|reqs) options available in its scrub function.

hoardy-web serve

Run an archiving server and/or serve given input files for replay over HTTP.

Algorithm:

The end.

Examples

Advanced examples

How to handle binary data

Trying to use response bodies produced by hoardy-web stream --format=json is likely to result in garbled data, as JSON can’t represent raw sequences of bytes; thus, binary data will have to be encoded into Unicode using replacement characters:

hoardy-web stream --format=json -ue . ../simple_server/pwebarc-dump/path/to/file.wrr | jq .

The most generic solution to this is to use --format=cbor instead, which would produce a verbose CBOR representation equivalent to the one used by --format=json but with binary data preserved as-is:

hoardy-web stream --format=cbor -ue . ../simple_server/pwebarc-dump/path/to/file.wrr | less

Or you could just dump raw response bodies separately:

hoardy-web stream --format=raw -ue response.body ../simple_server/pwebarc-dump/path/to/file.wrr | less
hoardy-web get ../simple_server/pwebarc-dump/path/to/file.wrr | less

Development: ./test-cli.sh [--help] [--all|--subset NUM] [--long|--short NUM] PATH [PATH ...]

Sanity check and test hoardy-web command-line interface.

Examples