Passively capture, archive, and hoard your web browsing history, including the contents of the pages you visit, for later offline viewing, mirroring, and/or indexing. Your own personal private Wayback Machine that can also archive HTTP POST requests and responses, as well as most other HTTP-level data.

What is hoardy-web?

hoardy-web is a tool to display, search, manipulate (programmatically extract values from and rename/move/symlink/hardlink files based on their metadata), import, and export Web Request+Response (WRR) files produced by the Hoardy-Web Web Extension browser add-on (also there).

Quickstart

Installation

Supported input file formats

Simple WRR-dumps (*.wrr)

When you use the Hoardy-Web extension together with the hoardy-web-sas archiving server, the latter writes the WRR-dumps the extension generates into separate .wrr files (aka “WRR files”) in its dumping directory. No further action is required to use that data.
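
For instance, assuming you pointed the archiving server at ~/hoardy-web/raw (substitute whichever dumping directory you actually use), you can sanity-check the accumulated dumps with:

hoardy-web pprint ~/hoardy-web/raw | less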

The situation is similar if you instead use the Hoardy-Web extension with the “Export via saveAs” option enabled but the saveAs-bundling option disabled (max bundle size set to zero). The only difference is that the WRR files will be put into ~/Downloads or similar.

ls ~/Downloads/Hoardy-Web-export-*

Bundles of WRR-dumps (*.wrrb)

However, if instead of using any of the above you use the Hoardy-Web extension with both “Export via saveAs” and bundling options enabled, then, at the moment, you will need to import those .wrrb files (aka WRR-bundles) into separate WRR files first:

hoardy-web import bundle --to ~/hoardy-web/raw ~/Downloads/Hoardy-Web-export-*

Note that hoardy-web can parse .wrr files as single-dump .wrrb files, so the above will work even when some of the exported dumps are simple .wrr files (Hoardy-Web generates those when a bucket has only a single dump to export, or when a dump is larger than the set maximum bundle size). So, essentially, the above command is equivalent to

hoardy-web organize --copy --to ~/hoardy-web/raw ~/Downloads/Hoardy-Web-export-*.wrr
hoardy-web import bundle --to ~/hoardy-web/raw ~/Downloads/Hoardy-Web-export-*.wrrb

Other file formats

hoardy-web can also use some other file formats as inputs. See the documentation of the hoardy-web import sub-command below for more info.
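
For instance, mitmproxy stream dumps can be imported directly, as shown in more detail near the end of this document:

hoardy-web import mitmproxy --to ~/hoardy-web/mitmproxy mitmproxy.*.dump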

How to merge multiple archive directories

To merge multiple input directories into one you can simply hoardy-web organize them --to a new directory. hoardy-web will automatically deduplicate all the files in the generated result.
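
For instance, a sketch of merging two hypothetical directories ~/hoardy-web/dir1 and ~/hoardy-web/dir2 into ~/hoardy-web/all (the same pattern is demonstrated step by step below):

hoardy-web organize --move --to ~/hoardy-web/all ~/hoardy-web/dir1 ~/hoardy-web/dir2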

That is to say, hoardy-web organize will not produce duplicates in its destination, regardless of whether --copy, --hardlink, --symlink, or --move is used, and --move will additionally remove source files whose data already exists in the destination.

For example, if you duplicate an input directory via --copy or --hardlink:

hoardy-web organize --copy     --to ~/hoardy-web/copy1 ~/hoardy-web/original
hoardy-web organize --hardlink --to ~/hoardy-web/copy2 ~/hoardy-web/original

(In real-life use, different copies usually end up on different backup drives or some such.)

Then, repeating the same commands would be noops:

# noops
hoardy-web organize --copy     --to ~/hoardy-web/copy1 ~/hoardy-web/original
hoardy-web organize --hardlink --to ~/hoardy-web/copy2 ~/hoardy-web/original

And running the opposite commands would also be noops:

# noops
hoardy-web organize --hardlink --to ~/hoardy-web/copy1 ~/hoardy-web/original
hoardy-web organize --copy     --to ~/hoardy-web/copy2 ~/hoardy-web/original

And copying between copies is also a noop:

# noops
hoardy-web organize --hardlink --to ~/hoardy-web/copy2 ~/hoardy-web/copy1
hoardy-web organize --copy     --to ~/hoardy-web/copy2 ~/hoardy-web/copy1

But doing hoardy-web organize --move while supplying directories that have the same data will deduplicate the results:

hoardy-web organize --move --to ~/hoardy-web/all ~/hoardy-web/copy1 ~/hoardy-web/copy2
# `~/hoardy-web/all` will have each file only once
find ~/hoardy-web/copy1 ~/hoardy-web/copy2 -type f
# the output will be empty

hoardy-web organize --move --to ~/hoardy-web/original ~/hoardy-web/all
# `~/hoardy-web/original` will not change iff it is already organized using `--output default`
# otherwise, some files there will be duplicated
find ~/hoardy-web/all -type f
# the output will be empty

Similarly, hoardy-web organize --symlink resolves its input symlinks and deduplicates its output symlinks:

hoardy-web organize --symlink --output hupq_msn --to ~/hoardy-web/pointers ~/hoardy-web/original
hoardy-web organize --symlink --output shupq_msn --to ~/hoardy-web/schemed ~/hoardy-web/original

# noop
hoardy-web organize --symlink --output hupq_msn --to ~/hoardy-web/pointers ~/hoardy-web/original ~/hoardy-web/schemed

I.e. the above will produce ~/hoardy-web/pointers with unique symlinks pointing to each file in ~/hoardy-web/original only once.

How to build a file system tree of latest versions of all hoarded URLs

Assuming you keep your WRR-dumps in ~/hoardy-web/raw, the following command will generate a file system hierarchy under ~/hoardy-web/latest organized in such a way that, for each URL in ~/hoardy-web/raw, ~/hoardy-web/latest contains a symlink pointing to the most recent WRR file in ~/hoardy-web/raw that contains a 200 OK response for that URL:

hoardy-web organize --symlink --latest --output hupq --to ~/hoardy-web/latest --and "status|~= .200C" ~/hoardy-web/raw

Personally, I prefer the flat_mhs format (see the documentation of the --output option below), as I dislike deep file hierarchies. Using it also simplifies filtering in my ranger file browser, so I do this:

hoardy-web organize --symlink --latest --output flat_mhs --and "status|~= .200C" --to ~/hoardy-web/latest ~/hoardy-web/raw

Update the tree incrementally, in real time

The above commands rescan the whole contents of ~/hoardy-web/raw and so can take a while to complete.

If you have a lot of WRR files and you want to keep your symlink tree updated in near-real-time, you will need to use a two-stage pipeline: feed the output of hoardy-web organize --zero-terminated into hoardy-web organize --stdin0 to perform complex updates.

E.g. the following will rename (move) new reqres from ../simple_server/pwebarc-dump to ~/hoardy-web/raw using --output default (the for loop is there to preserve buckets/profiles):

for arg in ../simple_server/pwebarc-dump/* ; do
  hoardy-web organize --zero-terminated --to ~/hoardy-web/raw/"$(basename "$arg")" "$arg"
done > changes

Then, you can reuse the paths saved in the changes file to update the symlink tree, as above:

hoardy-web organize --stdin0 --symlink --latest --output flat_mhs --and "status|~= .200C" --to ~/hoardy-web/latest ~/hoardy-web/raw < changes

Then, optionally, you can reuse the changes file again to symlink all new files from ~/hoardy-web/raw under ~/hoardy-web/all, showing all URL versions, by using the --output hupq_msn format:

hoardy-web organize --stdin0 --symlink --output hupq_msn --to ~/hoardy-web/all < changes

How to generate a local offline website mirror, similar to wget -mpk

To render all your archived WRR files into a local offline website mirror containing interlinked HTML files and their requisite resources similar to (but better than) what wget -mpk (wget --mirror --page-requisites --convert-links) does, you need to run something like this:

hoardy-web export mirror --to ~/hoardy-web/mirror1 ~/hoardy-web/raw

On completion, ~/hoardy-web/mirror1 will contain a bunch of interlinked HTML files, their requisites, and everything else available from WRR files living under ~/hoardy-web/raw.

The resulting HTML files will be stripped of all JavaScript and other stuff of various levels of evil and then minimized a bit to save space. The results should be completely self-contained (i.e., work inside a browser running in “Work offline” mode) and safe to view in a dumb unconfigured browser (i.e., the resulting web pages should not request any page requisites — like images, CSS, or fonts — from the Internet).

If you are unhappy with the above and, for instance, want to keep JavaScript and produce unminimized human-readable HTMLs, you can run the following instead:

hoardy-web export mirror \
  -e 'response.body|eb|scrub response &all_refs,+scripts,+pretty' \
  --to ~/hoardy-web/mirror2 ~/hoardy-web/raw

See the documentation for the --remap-* options of the export mirror sub-command and the options of the scrub function below for more info.

If you instead want a mirror made of raw files without any content censorship or link conversions, run:

hoardy-web export mirror -e 'response.body|eb' --to ~/hoardy-web/mirror-raw ~/hoardy-web/raw

The latter command will render your mirror pretty quickly, but the other export mirror commands use the scrub function, and that will be pretty slow, mostly because html5lib and tinycss2, which hoardy-web uses for paranoid HTML and CSS parsing and filtering, are fairly slow. Under CPython on my 2013-era laptop, hoardy-web export mirror manages to render, on average, 3 HTML and CSS files per second. Though, this is not very characteristic of the overall exporting speed, since images and other media just get copied around at expected speeds of 300+ files per second.

Also, enabling +pretty (or +indent) in scrub will make HTML scrubbing slightly slower (since it will have to track more stuff) and CSS scrubbing a lot slower (since it requires complete parsing of the structure, not just tokenization).
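
For comparison, a sketch of the mirror2 command above with +indent substituted for +pretty, writing to a hypothetical ~/hoardy-web/mirror2i directory:

hoardy-web export mirror \
  -e 'response.body|eb|scrub response &all_refs,+scripts,+indent' \
  --to ~/hoardy-web/mirror2i ~/hoardy-web/raw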

Handling outputs to the same file

The above commands might fail if the set of WRR-dumps you are trying to export contains two or more dumps with distinct URLs that map to the same --output path. This will produce an error since hoardy-web does not permit file overwrites. With the default --output hupq format this can happen, for instance, when the URLs recorded in the reqres are long and so they end up truncated into the same file system paths.

In this case you can either switch to a more verbose --output format

hoardy-web export mirror --output hupq_n --to ~/hoardy-web/mirror3 ~/hoardy-web/raw

or just skip all reqres that would cause overwrites

hoardy-web export mirror --skip-existing --to ~/hoardy-web/mirror1 ~/hoardy-web/raw

The latter method also allows for incremental updates, discussed in the next section.

Update your mirror incrementally

By default, hoardy-web export mirror runs with the implied --remap-all option, which remaps all links in exported HTML files to local files, even if the source WRR files for those would-be-exported files are missing. This allows you to easily update your mirror directory incrementally by re-running hoardy-web export mirror with the same --to argument but new input paths. For instance:

# render everything archived in 2023
hoardy-web export mirror --to ~/hoardy-web/mirror1 ~/hoardy-web/raw/*/2023

# now, add new stuff archived in 2024, keeping already exported files as-is
hoardy-web export mirror --skip-existing --to ~/hoardy-web/mirror1 ~/hoardy-web/raw/*/2024

# same, but updating old files
hoardy-web export mirror --overwrite-dangerously --to ~/hoardy-web/mirror1 ~/hoardy-web/raw/*/2024

After the first of the above commands, links from pages generated from WRR files of ~/hoardy-web/raw/*/2023 to URLs contained in files from ~/hoardy-web/raw/*/2024 but not in files from ~/hoardy-web/raw/*/2023 will point to non-existent, not-yet-exported files on disk, i.e. those links will be broken. Running the second or the third command from the example above will then export additional files from ~/hoardy-web/raw/*/2024, thus fixing some or all of those links.

If you want to treat links pointing to not-yet-hoarded URLs exactly like wget -mpk does, i.e. keep them pointing to their original URLs instead of remapping them to as-yet non-existent local files (as the default --remap-all does), you need to run export mirror with the --remap-open option:

hoardy-web export mirror --remap-open --to ~/hoardy-web/mirror4 ~/hoardy-web/raw

In practice, however, you probably won’t want the exact behaviour of wget -mpk, since opening pages generated that way is likely to make your web browser try to access the Internet to load missing page requisites. To solve this, hoardy-web provides the --remap-semi option, which does what --remap-open does, except it also remaps unavailable action links and page requisites into void links:

hoardy-web export mirror --remap-semi --to ~/hoardy-web/mirror4 ~/hoardy-web/raw

See the documentation for the --remap-* options below for more info.

Obviously, using --remap-open or --remap-semi will make incremental updates to your mirror impossible.

How to export a subset of archived data

The simplest way to export a subset of your data is to run one of hoardy-web organize --symlink --latest commands described above, and then do something like this:

hoardy-web export mirror --to ~/hoardy-web/mirror5 ~/hoardy-web/latest/archiveofourown.org

thus exporting everything ever archived from https://archiveofourown.org.

… by using --root-* and --depth

As an alternative to (or in combination with) keeping a symlink hierarchy of latest versions, you can load (an index of) an assortment of WRR files into hoardy-web’s memory but then export mirror only select URLs (and all requisites needed to properly render those pages) by running something like:

hoardy-web export mirror \
  --root-url 'https://archiveofourown.org/works/3733123?view_adult=true&view_full_work=true' \
  --root-url 'https://archiveofourown.org/works/30186441?view_adult=true&view_full_work=true' \
  --to ~/hoardy-web/mirror6 ~/hoardy-web/raw/*/2023

or

hoardy-web export mirror \
  --root-url-prefix 'https://archiveofourown.org/works/' \
  --to ~/hoardy-web/mirror6 ~/hoardy-web/raw/*/2023

See the documentation for the --root-* options below for more info and more --root-* variants.

hoardy-web loads (indexes) WRR files pretty fast, so if you are running from an SSD, you can totally feed it years of WRR files and then only export a couple of URLs, and it will take a couple of seconds to finish anyway, since only a couple of files will get scrubbed.

There is also the --depth option, which works similarly to wget’s --level option in that it will follow all jump (a href) and action links accessible with no more than --depth browser navigations from the recursion --root-*s and then export mirror all those URLs (and their requisites) too.
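
For example, a sketch that also exports everything reachable within one navigation from the above roots (assuming --depth takes an integer argument):

hoardy-web export mirror \
  --root-url-prefix 'https://archiveofourown.org/works/' \
  --depth 1 \
  --to ~/hoardy-web/mirror6 ~/hoardy-web/raw/*/2023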

When using --root-* options, --remap-open works exactly like wget’s --convert-links in that it will only remap the URLs that are going to be exported and will keep the rest as-is. Similarly, --remap-closed will consider only the URLs reachable from the --root-*s in no more than --depth jumps as available.
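
For instance, a hedged sketch combining the two, which remaps only the URLs reachable from the roots within --depth jumps and treats everything else as unavailable (hypothetical output directory ~/hoardy-web/mirror8):

hoardy-web export mirror \
  --remap-closed --depth 1 \
  --root-url-prefix 'https://archiveofourown.org/works/' \
  --to ~/hoardy-web/mirror8 ~/hoardy-web/raw/*/2023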

Prioritizing some files over others

By default, files are read, queued, and then exported in the order they are specified on the command line, or, when an argument is a directory, in lexicographic file system walk order. (See the --paths-* and --walk-* options below if you want to change this.)

However, the above rule does not apply to page requisites: those always (with or without --root-*, and regardless of the --paths-* and --walk-* options) get exported just after their parent HTML document gets parsed and before that document gets written to disk. I.e., export mirror will generate a new file containing an HTML document only after all of its requisites have already been written to disk. So, when exporting into an empty directory, if you see that export mirror has generated an HTML document, you can be sure that all of its requisites loaded (indexed) by this export mirror invocation have been rendered too. Meaning, you can go ahead and open it in your browser, even if export mirror has not finished yet.

Moreover, unlike all other sub-commands, export mirror handles duplication in its input files in a special way: it remembers the files it has already seen and ignores them when they are given a second time. (All other commands don’t: they will just process the same file a second time, a third time, and so on. This is by design; the other commands are meant to handle potentially enormous file hierarchies in near-constant memory.)

The combination of all of the above means you can prioritize rendering of some documents over others by specifying them earlier on the command line and then, in a later argument, specifying their containing directory to allow export mirror to also see their requisites and documents they link to. For instance,

hoardy-web export mirror \
  --to ~/hoardy-web/mirror7 \
  ~/hoardy-web/latest/archiveofourown.org/works__3733123*.wrr \
  ~/hoardy-web/latest/archiveofourown.org

will export all of ~/hoardy-web/latest/archiveofourown.org, but the web pages contained in files named ~/hoardy-web/latest/archiveofourown.org/works__3733123*.wrr and their requisites will be exported first.

This also works with --root-* options. E.g., the following

hoardy-web export mirror \
  --to ~/hoardy-web/mirror7 \
  ~/hoardy-web/latest/archiveofourown.org/works__3733123*.wrr \
  ~/hoardy-web/latest/archiveofourown.org \
  --root-url-prefix 'https://archiveofourown.org/works/'

will export all pages whose URLs start with https://archiveofourown.org/works/ and all their requisites, but the pages contained in files named ~/hoardy-web/latest/archiveofourown.org/works__3733123*.wrr and their requisites will be exported first.

How to generate local offline website mirrors like wget -mpk from your old mitmproxy stream dumps

Assuming mitmproxy.001.dump, mitmproxy.002.dump, etc are files that were produced by running something like

mitmdump -w +mitmproxy.001.dump

at some point, you can generate website mirrors from them by first importing them all into WRR files:

hoardy-web import mitmproxy --to ~/hoardy-web/mitmproxy mitmproxy.*.dump

and then running export mirror as above, e.g. to generate mirrors for all URLs:

hoardy-web export mirror --to ~/hoardy-web/mirror ~/hoardy-web/mitmproxy

How to generate previews for WRR files, listen to them via TTS, open them with xdg-open, etc

See the script sub-directory for examples that show how to use pandoc and/or w3m to turn WRR files into previews and readable plain-text that can be viewed or listened to via other tools, or dump them into temporary raw data files that can then be immediately fed to xdg-open for one-click viewing.
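
As a minimal sketch of the idea (assuming w3m is installed; the scripts there do this more carefully), you can render a dump’s HTML response body into readable plain text by piping a decoded body (response.body|eb, as in the export mirror examples above) through w3m:

hoardy-web stream --format=raw -ue 'response.body|eb' ../simple_server/pwebarc-dump/path/to/file.wrr | w3m -T text/html -dump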

Usage

hoardy-web

A tool to display, search, manipulate, import, and export Web Request+Response (WRR) archive files produced by the Hoardy-Web Web Extension browser add-on.

Terminology: a reqres (Reqres when a Python type) is an instance of a structure representing HTTP request+response pair with some additional metadata.

hoardy-web pprint

Pretty-print given WRR files to stdout.

hoardy-web get

Compute output values by evaluating expressions EXPRs on a given reqres stored at PATH, then print them to stdout terminating each value as specified.

hoardy-web run

Compute output values by evaluating expressions EXPRs for each of NUM reqres stored at PATHs, dump the results into newly generated temporary files terminating each value as specified, spawn a given COMMAND with given arguments ARGs and the resulting temporary file paths appended as the last NUM arguments, wait for it to finish, delete the temporary files, and exit with the return code of the spawned process.

hoardy-web stream

Compute given expressions for each of given WRR files, encode them into a requested format, and print the result to stdout.

hoardy-web find

Print paths of WRR files matching specified criteria.

hoardy-web organize

Parse given WRR files into their respective reqres and then rename/move/hardlink/symlink each file to DESTINATION with the new path derived from each reqres’ metadata.

Operations that could lead to accidental data loss are not permitted. E.g. hoardy-web organize --move will not overwrite any files, which is why the default --output contains %(num)d.

hoardy-web import

Use the specified parser to parse the data in each INPUT PATH into (a sequence of) reqres and then generate and place their WRR-dumps into separate WRR files under DESTINATION with paths derived from their metadata. In short, this is hoardy-web organize --copy for INPUT files that use different file formats.

hoardy-web import bundle

Parse each INPUT PATH as a WRR-bundle (an optionally compressed sequence of WRR-dumps) and then generate and place their WRR-dumps into separate WRR files under DESTINATION with paths derived from their metadata.

hoardy-web import mitmproxy

Parse each INPUT PATH as a mitmproxy stream dump (by using mitmproxy’s own parser) into a sequence of reqres and then generate and place their WRR-dumps into separate WRR files under DESTINATION with paths derived from their metadata.

hoardy-web export

Parse given WRR files into their respective reqres, convert to another file format, and then dump the result under DESTINATION with the new path derived from each reqres’ metadata.

hoardy-web export mirror

Parse given WRR files, filter out those that have no responses, transform them, and then dump their response bodies into separate files under DESTINATION with the new path derived from each reqres’ metadata. Essentially, this is a combination of hoardy-web organize --copy followed by in-place hoardy-web get, but with more advanced URL remapping capabilities made available to the scrub function.

In short, this sub-command generates static offline website mirrors, producing results similar to those of wget -mpk.

Examples

Advanced examples

How to handle binary data

Trying to use response bodies produced by hoardy-web stream --format=json is likely to result in garbled data, as JSON can’t represent raw sequences of bytes; thus, binary data will have to be encoded into UNICODE using replacement characters:

hoardy-web stream --format=json -ue . ../simple_server/pwebarc-dump/path/to/file.wrr | jq .

The most generic solution to this is to use --format=cbor instead, which would produce a verbose CBOR representation equivalent to the one used by --format=json but with binary data preserved as-is:

hoardy-web stream --format=cbor -ue . ../simple_server/pwebarc-dump/path/to/file.wrr | less

Or you could just dump raw response bodies separately:

hoardy-web stream --format=raw -ue response.body ../simple_server/pwebarc-dump/path/to/file.wrr | less
hoardy-web get ../simple_server/pwebarc-dump/path/to/file.wrr | less