Passively capture, archive, and hoard your web browsing history, including the contents of the pages you visit, for later offline viewing, replay, mirroring, data scraping, and/or indexing. Your own personal private Wayback Machine that can also archive HTTP POST requests and responses, as well as most other HTTP-level data.

What is Hoardy-Web?

Hoardy-Web is a suite of tools that helps you to passively capture, archive, and hoard your web browsing history. Not just the URLs, but also the contents and the requisite resources (images, media, CSS, fonts, etc.) of the pages you visit. Not just the last 3 months, but from the moment you start using it.

Practically speaking, you install Hoardy-Web’s extension/add-on into your web browser and just browse the web normally while it passively, in the background, captures and archives the HTTP requests and responses your web browser makes in the process. The extension has a lot of configuration options to help you tweak what should or should not be archived, provides indicators that can help you fully capture each page you do want to archive (it can notify you when some parts of a page failed to load in various ways), and has a very low memory footprint, keeping your browsing experience snappy even on ancient hardware (unless you explicitly configure it otherwise to, e.g., minimize writes to disk instead).

You can then view, replay, mirror, scrape, and/or index your archived data later by using Hoardy-Web’s own tool set, by plugging these tools into others, and/or by parsing and processing its outputs with your own tools.

Hoardy-Web was previously known as “Personal Private Passive Web Archive” aka “pwebarc”.

Who is Hoardy-Web for?

Hoardy-Web does this, and more, but mainly this.

The litmus test

If you are running multiple browsers or browser profiles to isolate different browsing sessions from each other, and you now want to introduce some historic persistence into your setup, then Hoardy-Web is for you.

If you are not already isolating browsing sessions, however, then introducing Hoardy-Web into your setup, in the long run, will probably be a liability. In which case, Hoardy-Web is not for you, navigate away, please.

Walkthrough

If you are reading this on GitHub, be aware that this repository is a mirror of a repository on the author’s web site. In the author’s humble opinion, the rendering of the documentation pages there is superior to what can be seen on GitHub (it’s implemented via pandoc there).

With Hoardy-Web, technically speaking, capture, archival, and replay are all independent. This allows Hoardy-Web to be used in rather complex setups. When all the pieces are used together, however, they integrate into a rather smooth workflow, demonstrated below.

So, for illustrative purposes, I added the Hoardy-Web extension to a new browser profile in my Firefox, started a hoardy-web serve archiving server instance, ensured the extension was running in Submit dumps via 'HTTP' mode and that its Server URL setting pointed to my hoardy-web serve instance (as this screenshot of the P&R tab shows), and then visited a Wikipedia page:

Screenshot of Firefox’s viewport with extension’s popup shown.

Also note that, for illustrative purposes, I had enabled limbo mode before visiting it so that Hoardy-Web would capture that page and all its requisite resources and then put them all into “limbo” instead of immediately archiving them, thus allowing me to look at the page first. This is most useful when you are about to visit a new page and you are not yet sure you will want to archive that visit. Or for dynamically generated pages that update all the time, with only some versions deserving to be archived.

So, then, I decided I do want to save that page and its resources. Hence, I pressed the lower of the “In limbo” check-mark buttons there to collect and archive everything from that tab to my hoardy-web serve archiving server instance.

Then, I pressed the “Replay” button to switch to a replay page generated by hoardy-web serve for the above capture (i.e. that button re-navigated that tab to http://127.0.0.1:3210/web/2/https://en.wikipedia.org/wiki/Bibliometrics, which hoardy-web serve then immediately redirected to the latest archived replay version of that URL):

Screenshot of Firefox’s viewport with a replay of the page from the previous screenshot.

… and closed the browser.

(Also, when not doing this for illustrative purposes, in practice, the above series of actions usually takes less than a second, via keyboard shortcuts, which Hoardy-Web has in abundance. Note how the tooltip on the above screenshot shows which shortcut that action is currently bound to.)
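
For the command-line inclined, the replay lookup from this walkthrough can also be sketched with curl. This is a minimal sketch assuming the hoardy-web serve instance is still listening on the address used above (127.0.0.1:3210); only the redirect behaviour described above is implied, not the exact response headers.

```sh
# Ask the local hoardy-web serve instance for the latest archived
# version of the page captured above; /web/2/<URL> is the replay
# endpoint the "Replay" button navigated to in the walkthrough.
curl -s -D - -o /dev/null \
  "http://127.0.0.1:3210/web/2/https://en.wikipedia.org/wiki/Bibliometrics"
# hoardy-web serve answers with a redirect to the latest archived
# replay of that URL, which a browser then follows and renders.
```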

The magic

Then, later, I reopened my browser, restored the last session, and that tab was restored back with zero requests to the Internet.

Now note that Hoardy-Web also has a button (the one with the “eject” symbol on the “Globally” line) which re-navigates all open tabs that do not yet point to replay pages — excluding those for which the Include in global replays per-tab setting is disabled — to their replays.

That is, you can use Hoardy-Web to implement the following browser workflow:

This is simply a superior way to live.

Well, alright, this is kinda nice, but I. Need. More! POWER!

Assuming you’ve been using Hoardy-Web for a while, capturing and archiving a bunch of stuff, you can also use the hoardy-web command-line interface to query and process your archived data in various ways. For instance:
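
(A hedged sketch follows; the subcommand and option names are assumptions inferred from the tool’s description in the next section, and the archive directory path is hypothetical. Consult hoardy-web --help for the actual interface.)

```sh
# Assumed subcommand and option names; check `hoardy-web --help`.

# Search the archive for captures whose URLs match a pattern.
hoardy-web find --url-re 'en\.wikipedia\.org/wiki/' ~/hoardy-web/raw

# Generate a static, offline-browsable website mirror from the archive.
hoardy-web mirror --to ~/hoardy-web/mirror ~/hoardy-web/raw
```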

The possibilities are, essentially, endless.

Parts and pieces

At the moment, the Hoardy-Web tool set consists of the following pieces, all developed simultaneously in this repository.

The Hoardy-Web WebExtensions browser add-on

… which can capture all the HTTP requests and responses (and DOM snapshots, i.e. the contents of the page after all JavaScript was run) your browser fetches, dump them into WRR format, and then archive those dumps

That is, the Hoardy-Web browser extension can be used independently of other tools developed here. You can install it and start saving your browsing history immediately, and then delay learning to use the rest for later.

Also, unless configured otherwise, the extension will dump and archive collected data immediately, both to prevent data loss and to free the used RAM as soon as possible, keeping your browsing experience snappy even on ancient hardware.

The extension can be run under

(See the gallery for screenshots).

Note, however, that while Hoardy-Web works under Chromium-based browsers, users of those browsers will have a worse experience, both with Hoardy-Web and with its alternatives, because

The extension does, however, try its best to collect all web traffic your browser generates. Therefore, it can

all the while

See the “Quirks and Bugs” section of the extension’s Help page for known issues.

Nevertheless, capture-wise, the extension appears to be stable. However, the UI and additional features are being tweaked continuously at the moment. Also, Hoardy-Web is tested much less on Chromium than on Firefox.

The hoardy-web tool

… which does a bunch of stuff, to quote from there:

hoardy-web is a tool to inspect, search, organize, programmatically extract values and generate static website mirrors from, archive, view, and replay HTTP archives/dumps in WRR (“Web Request+Response”, produced by the Hoardy-Web Web Extension browser add-on, also on GitHub) and mitmproxy (mitmdump) file formats.

With the hoardy-web tool, you can view your archived data by:

hoardy-web serve can also play the role of an advanced archiving server for the Hoardy-Web browser extension. I.e., it can do archival, replay, or both at the same time.
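
For example, a minimal sketch of running it as such a server, matching the address used in the walkthrough above (127.0.0.1:3210); whether and how an archive directory is passed on the command line is an assumption here, so check hoardy-web serve --help:

```sh
# Accept new dumps from the extension and serve replays of the archive.
# The directory argument is an assumption; see `hoardy-web serve --help`.
hoardy-web serve ~/hoardy-web/raw
# The extension's Server URL then points at http://127.0.0.1:3210/,
# and replays live under URLs like
# http://127.0.0.1:3210/web/2/<URL>, as in the walkthrough above.
```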

hoardy-web allows you to search your archives

Also note that

The hoardy-web tool is deep in its beta stage. At the moment, it does about 85% of the stuff I want it to do, and it does not yet do the things it does as well as I’d like.

See the TODO list for more info.

The hoardy-web-sas simple archiving server

… which simply dumps everything the Hoardy-Web extension submits to it to disk, one file per HTTP request+response.

This is useful in cases when you can’t or don’t want to use the fully featured hoardy-web serve. E.g., say, you want to stick it onto a Raspberry Pi or something. Or if you are feeling paranoid and want to archive data from a browser which must not have any replay capability. Or if you want archival and replay to be done by separate processes.
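
A hedged sketch of running it follows; the script name and the way the target directory is specified are assumptions made for illustration, so check its --help for the actual interface.

```sh
# Run the simple archiving server: it writes one file per HTTP
# request+response that the Hoardy-Web extension submits to it.
# The script name and the directory argument are assumptions.
python3 hoardy_web_sas.py ~/hoardy-web/raw
```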

The simple archiving server is stable (it’s so simple there could hardly be any bugs in it).

A patch for Firefox

… to allow the Hoardy-Web extension to collect request POST data as-is.

This is not required, and even without that patch Hoardy-Web will collect everything in most cases, but it can be useful if you want to correctly capture POST requests that upload files.

See the “Quirks and Bugs” section of the extension’s Help page for more info.

What is Hoardy-Web most similar to?

In essence, the Hoardy-Web tool set allows you to set up your own personal private Wayback Machine which

Compared to most of its alternatives, Hoardy-Web DOES NOT:

Technically, the Hoardy-Web project is most similar to

In fact, an unpublished and now irrelevant ancestor project of Hoardy-Web was a tool to generate website mirrors from mitmproxy stream captures. (If you want that, the hoardy-web tool can do it for you; it can take mitmproxy dumps as inputs.) But then I got annoyed by all the sites that don’t work under mitmproxy, did some research into the alternatives, decided there were none I wanted to use, and so I started adding stuff to my tool until it became Hoardy-Web.

For more info see the list of comparisons to alternatives.

Does the author eat what he cooks?

Yes. As of December 2024, I have been archiving all of my web traffic with Hoardy-Web, without interruption, since October 2023. Before that, my preferred tool was mitmproxy.

After adding each new feature to the hoardy-web tool, as a rule, I feed at least the last 5 years of my web browsing into it (at the moment, most of it converted from other formats to .wrr, obviously) to see if everything works as expected.

Quickstart

Install Hoardy-Web browser extension/add-on

… check it actually works

Now load any web page in your browser. The extension will report if everything works okay, or tell you where the problem is if something is broken.

… and you are done

Assuming the extension reported success: Congratulations! You are now collecting and archiving all your web browsing traffic originating from that browser. Repeat extension installation for all browsers/browser profiles as needed.

Technically speaking, if you just want to collect everything and don’t have time to figure out how to use the rest of this suite of tools right this moment, you can stop here and return to the rest later.

Except, be sure to see “Setup recommendations” below, since installing Hoardy-Web into a browser where you login into things, and then not configuring it properly, can make you more vulnerable.

When I started using mitmproxy to sporadically collect my HTTP traffic in 2017, it took about 6 months before I had to refer back to previously archived data for the first time. So, I recommend you start collecting immediately and be lazy about the rest.

(Also, I learned a lot about the nefarious things some of the websites I visit do in the background by inspecting the logs Hoardy-Web produces. You’d be surprised how many big websites generate HTTP requests with evil tracking data at the moment you close the containing tab. They do this because such requests can’t be captured and inspected with the browser’s own Network Monitor, so most people are completely unaware.)

… except, you should probably switch to Submit dumps via 'HTTP' mode

In practice, though, you will probably want to install the hoardy-web tool and run a hoardy-web serve archiving server, then switch Hoardy-Web to Submit dumps via 'HTTP' mode, and enjoy safe persistent archival with replay and search, as in the screenshots above.
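
A minimal sketch of that setup, assuming the hoardy-web tool is packaged under the same name on PyPI (an assumption; the project’s own installation instructions are authoritative) and reusing the address shown in the walkthrough above:

```sh
# Install the command-line tool (the package name is an assumption).
pip install hoardy-web

# Run the archiving + replay server, then, in the extension's P&R tab,
# switch to Submit dumps via 'HTTP' and set Server URL to
# http://127.0.0.1:3210/ (the address used in the walkthrough above).
hoardy-web serve
```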

Or, alternatively, you might want to use the hoardy-web-sas simple archiving server instead.

Technically speaking, archiving methods other than Submit dumps via 'HTTP' are all unsafe, since you can lose some or all of your archived data if your disk ever runs out of space, or if you accidentally uninstall the Hoardy-Web extension, or mis-click a button in your browser’s UI.

… or if you are unable or unwilling to do that

Alternatively, you can combine archiving by saving data to the browser’s local storage (the default) with semi-manual export into WRR bundles.

Or, alternatively, you can switch to Export dumps via 'saveAs' mode by default and simply accept the resulting slightly more annoying UI (on Firefox, it can be fixed with a small about:config change) and slight unsafety.

This is most useful when using Hoardy-Web under the Tor Browser or similar.

Alternatively, on a system with Nix package manager

Setup recommendations

After you’ve installed all the parts you want to use, you should read:

Then, to follow the development:

If you are a developer yourself:

Finally, if your questions are still unanswered, then open an issue on GitHub or get in touch otherwise.

Why does Hoardy-Web exist?

So, you wake up remembering something interesting you saw a long time ago. Knowing you won’t find it in your normal browsing history, which only contains the URLs and the titles of the pages you visited in the last 3 months, you try looking it up on Google. You fail. Eventually, you remember the website you saw it on, or maybe you re-discover the link in question in an old message to/from a friend, or maybe a tool like recoll or Promnesia helps you. You open the link… and discover it’s offline/gone/a parked domain. Not a problem! Have no fear! You go to the Wayback Machine and look it up there… and discover they only archived an ancient version of it and the thing you wanted is missing there.

Or, say, you read a cool fanfiction on AO3 years ago, you even wrote down the URL, you go back to it wanting to experience it again… and discover the author made it private… and Wayback Machine saved only the very first chapter.

Or, say, there is a web page that cannot be easily reached via curl/wget (because it is behind a paywall or a complex authentication method that is hard to reproduce outside of a browser), but for accessibility or simple reading comfort reasons, each time you visit that page you want to automatically feed its source to a script that strips and/or modifies its HTML markup in a website-specific way and feeds it into a TTS engine, a Braille display, or a book reader app.

With most modern web browsers you can do TTS either out-of-the-box or by installing an add-on (though, be aware of privacy issues when using most of these), but tools that can do website-specific accessibility without also being website-specific UI apps are very few.

Or, say, there’s a web page/app you use (like a banking app), but it lacks some features you want, and in your browser’s Network Monitor you can see it uses JSON RPC or some such to fetch its data, and you want those JSONs for yourself (e.g., to compute statistics and supplement the app output with them), but the app in question has no public API and scraping it with a script is non-trivial (e.g., the site does complicated JavaScript+multifactor-based auth, tries to detect you are actually using a browser, and bans you immediately if not).

Or, maybe, you want to parse those behind-auth pages with a script, save the results to a database, and then do interesting things with them (e.g., track price changes, manually classify, annotate, and merge pages representing the same product by different sellers, do complex queries, like sorting by price/unit or price/weight, limit results by geographical locations extracted from text labels, etc).

Or, say, you want to fetch a bunch of pages belonging to two recommendation lists on AO3 or GoodReads, get all outgoing links for each fetched page, union sets for the pages belonging to the same recommendation list, and then intersect the results of the two lists to get a shorter list of things you might want to read with higher probability.

Or, more generally, say, you want to tag web pages referenced from a certain set of other web pages with some tag in your indexing software, and update it automatically each time you visit any of the source pages.

Or, say, you want to combine a full-text indexing engine, your browsing and derived web link graph data, your states/ratings/notes from org-mode, messages from your friends, and other archives, so that you could do arbitrarily complex queries over it all, like “show me all GoodReads pages for all books not marked as DONE or CANCELED in my org-mode files, ever mentioned by any of my friends, ordered by undirected-graph Pagerank algorithm biased with my own book ratings (so that books sharing GoodReads lists with the books I finished and liked will get higher scores)”. So, basically, you want a private personalized Bayesian recommendation system.

“If it is on the Internet, it is on Internet forever!” they said. “Everything will have a RESTful API!” they said. They lied!

Things vanish from the Internet, and from Wayback Machine, all the time.

A lot of useful stuff never got RESTful APIs; those RESTful APIs that do exist are frequently buggy; you’ll probably have to scrape data from HTML anyway.

“Semantic Web will allow arbitrarily complex queries spanning multiple data sources!” they said. Well, 25 years later (“RDF Model and Syntax Specification” was published in 1999), there has been almost no progress there; the most commonly used subset of RDF does what indexing systems in the 1970s did, but less efficiently and with a worse UI.

Meanwhile, Hoardy-Web provides tools to help with all of the above.

Technical Philosophy

Hoardy-Web is designed to

To conform to the above design principles

Alternatives

Sorted by similarity to Hoardy-Web, most similar projects first. “Cons” and “Pros” are in comparison to the main workflow of Hoardy-Web.

DownloadNet

A self-hosted web crawler and web replay system written in Node.js.

Of all the tools known to me, DownloadNet is the most similar to the intended workflow of Hoardy-Web. Similarly to the combination of the Hoardy-Web extension and hoardy-web serve, and unlike pywb, heritrix, and the other similar tools discussed below, DownloadNet captures web data directly from the browser’s runtime. The difference is that Hoardy-Web does this using the webRequest WebExtensions API and Chromium’s debugger API, while DownloadNet is actually a web crawler that crawls the web by spawning a Chromium browser instance and attaching to it via its debug protocol (which are not the same thing). This is a bit weird, but it does work, and it allows you to use DownloadNet to archive everything passively as you browse, similarly to Hoardy-Web, since you can simply browse in that debugged Chromium window and it will archive the data it fetches.

Pros:

Cons:

Same issues:

mitmproxy

A man-in-the-middle SSL proxy.

Hoardy-Web was heavily inspired by mitmproxy and, essentially, aims to be an in-browser alternative to it. I.e., unlike the other alternatives discussed here, both Hoardy-Web and mitmproxy capture mostly-raw HTTP traffic, not just web pages. Unlike mitmproxy, however, Hoardy-Web is designed primarily for web archival purposes, not traffic inspection and protocol reverse-engineering, even though you can do some of that with Hoardy-Web too.

Pros:

Cons:

Though, the latter issue can be solved via this project’s hoardy-web tool as it can take mitmproxy dumps as inputs.

But you could just enable request logging in your browser’s Network Monitor and manually save your data as HAR archives from time to time

Cons:

Though, the latter issue can be solved via this project’s hoardy-web tool as it can take HAR dumps as inputs.

But you could set up SSL key dumping, then use Wireshark, or tcpdump, or some such, to capture your web traffic

Pros:

Cons:

And the hoardy-web tool can’t help you with the latter at the moment.

archiveweb.page and replayweb.page

Browser extensions similar to the Hoardy-Web extension in their implementation, though not in their philosophy and intended use.

Overall, Hoardy-Web and archiveweb.page extensions have a similar vibe, but the main difference is that archiveweb.page and related tools are designed for capturing web pages with the explicit aim to share the resulting archives with the public, while Hoardy-Web is designed for private capture of personally visited pages first.

In practical terms, archiveweb.page has a “Record” button, which you need to press to start recording a browsing session in a separate tab into a separate WARC file. In contrast, Hoardy-Web, by default, in the background, captures and archives all successful HTTP requests and their responses from all your open browser tabs.

Pros:

Cons:

Differences in design:

Same issues:

SingleFile and WebScrapBook

Browser add-ons that capture whole web pages by taking their DOM snapshots and saving all requisite resources the captured page references.

Capturing a page with SingleFile generates a single (usually quite large) HTML file with all the resources embedded into it. WebScrapBook saves its captures to the browser’s local storage or to a remote server instead.

Pros:

Cons:

Differences in design:

WorldBrain Memex

A browser extension that implements an alternative mechanism to browser bookmarks. Saving a web page into Memex saves a DOM snapshot of the tab in question into an in-browser database. Memex then implements a full-text search engine for the saved snapshots and PDFs.

Pros:

Cons:

Differences in design:

pywb

A web archive replay system with a built-in web crawler and HTTP proxy. Brought to you by the people behind the Wayback Machine and then adopted by the people behind archiveweb.page.

A tool similar to hoardy-web serve.

Pros:

Cons:

heritrix

The crawler behind the Wayback Machine. It’s a self-hosted web app into which you can feed URLs for them to be archived, so, to make it archive all of your web browsing:

A tool similar to hoardy-web serve.

Pros:

Cons:

ArchiveBox

A web crawler and self-hosted web app into which you can feed the URLs for them to be archived.

Pros:

Cons:

Still, probably the best of the self-hosted web-app-server kind of tools for this ATM.

reminiscence

A system similar to ArchiveBox, but it has a built-in tagging system and archives pages as raw HTML plus a whole-page PNG rendering/screenshot — which is a bit weird, but it has the advantage of not needing any replay machinery at all for re-viewing simple web pages: you only need a plain image viewer. Though it will take a lot of disk space to store those huge whole-page “screenshot” images.

Pros and Cons are almost identical to those of ArchiveBox above, except it has fewer third-party tools around it, so less can be automated easily.

wget -mpk and curl

Pros:

Cons:

wpull

wget -mpk done right.

Pros:

Cons:

grab-site

A simple web crawler built on top of wpull, presented to you by ArchiveTeam, a group associated with the Wayback Machine which appears to be the source of the archives for most of the interesting pages I find there.

Pros:

Cons:

monolith and obelisk

Stand-alone tools doing the same thing the SingleFile add-on does: generate single-file HTMLs with bundled resources viewable directly in the browser.

Pros:

Cons:

single-file-cli

Stand-alone tool based on SingleFile, using a headless browser to capture pages.

A more robust solution to do what monolith and obelisk do, if you don’t mind Node.js and the need to run a headless browser.

Archivy

A self-hosted wiki that archives pages you link to in background.

Others

The ArchiveBox wiki has a long list of related things.

If you like this, you might also like

Perkeep

It’s an awesome personal private archival system adhering to the same philosophy as Hoardy-Web, but it’s basically an abstraction replacing your file system with a content-addressed store that can be rendered into different “views”, including a POSIXy file system.

It can do very little to help you actually archive a web page, but you can start dumping new Hoardy-Web .wrr files with compression disabled, decompress your existing .wrr files, and then feed them all into Perkeep to be stored and automatically replicated to your backup copies forever. (Perkeep already has better compression than what Hoardy-Web currently does and provides a FUSE FS interface for transparent operation, so compressing things twice would be rather counterproductive.)

Meta

Changelog?

See CHANGELOG.md.

TODO?

See the bottom of CHANGELOG.md.

License

GPLv3+, some small library parts are MIT.

Contributing

Contributions are accepted both via GitHub issues and PRs, and via pure email. In the latter case I expect to see patches formatted with git-format-patch.

If you want to perform a major change and you want it to be accepted upstream here, you should probably write me an email or open an issue on GitHub first. In the cover letter, describe what you want to change and why. I might also have a bunch of code doing most of what you want in my stash of unpublished patches already.