software/hoardy-web/./extension/page/help.org

Passively capture, archive, and hoard your web browsing history, including the contents of the pages you visit, for later offline viewing, mirroring, and/or indexing. Your own personal private Wayback Machine that can also archive HTTP POST requests and responses, as well as most other HTTP-level data.

How to read this document

It is highly recommended that you view this page by clicking the Help button in the extension’s own UI. Doing that will make this page interactive: the settings popup will be displayed to the right of this page, and hovering over or clicking on any links pointing to popup.html will highlight the corresponding elements in the popup.

See screenshots if you want to see how it will look.

You can still read this page outside of the extension’s UI, but be prepared for all links pointing to popup.html to be useless. Also, the version hosted on the author’s web site is superior to what GitHub’s web UI renders (this page is written in the org-mode markup language; converting it to GitHub Markdown would make things much harder, since it uses a lot of advanced org-mode markup features to simplify things, and GitHub does not render org-mode files very well at the moment).

What?

Hoardy-Web is a browser extension (add-on) that passively captures and collects dumps of HTTP requests and responses as you browse the web, and then archives them using one or more of the following methods:

To view your archived data, see the accompanying hoardy-web CLI tool (also there).

Glossary

General operation

State Diagram

Reqres change their internal states according to the following state diagram (which is explained below):

(start) -> (request sent) -> (nIO) -> (headers received) -> (nIO) --> (body received)
   |                           |                              |             |
   |                           v                              v             v
   |                     (no_response)                   (incomplete)   (complete)
   |                           |                              |             |
   |                           \                              |             |
   |\---> (canceled) ----\      \                             |             |
   |                      \      \                            \             |
   |\-> (incomplete_fc) ---\      \                            \            v
   |                        >------>---------------------------->-----> (finished)
   |\--> (complete_fc) ----/                                             /  |
   |                      /                                             /   |
   \----> (snapshot) ----/       /- (collected) <--------- (picked) <--/    |
                                /        ^                     |            |
               (stashIO?) <----/         |                     v            v
                   |                     \-- (in_limbo) <- (stashIO?) <- (dropped)
                   v                              |                         |
                (queued) <------------------\     |                         |
                / |  ^ \                     \    \-----> (discarded) <-----/
  (exported) <-/  |  |  \----------------\    \                ^
      |           |  |                    \    \               |
      |       /---/  \-----------------\   \    \              |
      |       |                        |    \    \             |
      |       v                        |     \    \            |
      |\-> (srvIO) -> (stashIO?) -> (failed) |     \           |
      |       |                        ^     /      \          |
      |       v                        |    v        |         |
      |   (submitted) --------------> (saveIO) --> (saved)     | {{!saving}}
      |       \                                                |
      \-------->-----------------------------------------------/

Step 1: Tracking

Hoardy-Web attaches to your browser’s runtime and tracks progress of HTTP requests and their responses, capturing both their request and response headers and data at appropriate times in the browser’s request and response processing pipeline.
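
For the curious, here is a minimal TypeScript sketch of the kind of WebExtensions webRequest hooks such tracking relies on; this is illustrative only, not Hoardy-Web’s actual code, and the in-flight map is a hypothetical stand-in for its internal state:

    // assumes the WebExtensions `browser` global (Firefox-style promises)
    const filter = { urls: ["<all_urls>"] };
    // hypothetical stand-in for the extension's internal per-request state
    const inFlight = new Map<string, { requestHeaders?: unknown; responseHeaders?: unknown }>();

    browser.webRequest.onSendHeaders.addListener((details) => {
        // "request sent": remember the request headers
        inFlight.set(details.requestId, { requestHeaders: details.requestHeaders });
    }, filter, ["requestHeaders"]);

    browser.webRequest.onHeadersReceived.addListener((details) => {
        // "headers received": attach the response headers
        const entry = inFlight.get(details.requestId);
        if (entry !== undefined)
            entry.responseHeaders = details.responseHeaders;
    }, filter, ["responseHeaders"]);

    browser.webRequest.onCompleted.addListener((details) => {
        // the reqres can now proceed towards the "finished" state; on Firefox,
        // response bodies are captured separately via webRequest.filterResponseData
        inFlight.delete(details.requestId);
    }, filter);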

Whether Hoardy-Web will track a given request depends on the Track new reqres toggles in the settings popup, e.g:

Disabling any of these toggles does not stop the tracking of already-initiated requests; it only stops new requests controlled by that toggle from being tracked.

The networking states of the State Diagram

As shown on the above diagram, a new reqres proceeds through the following networking states:

The states after the finished state

In principle, on reaching the finished state, a reqres can be serialized and saved to disk, but Hoardy-Web provides more states and UI for convenience and to work around limitations of various browser APIs (e.g., there is no WebExtensions API function that writes a chunk of data into a file on the local file system while reporting out-of-disk-space errors).

Glossary

Step 2: Classification

On reaching the finished state, Hoardy-Web performs reqres classification controlled by the Pick reqres for archival when they finish and Mark reqres as 'problematic' when they finish settings. The former set of settings decides whether the reqres in question should be picked or dropped, which influences the actions Hoardy-Web will perform in the next step. The latter set decides whether the reqres in question should be marked as problematic.
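
Schematically, and with all option and field names below being hypothetical illustrations rather than Hoardy-Web’s actual internals, the classification step boils down to something like:

    interface Reqres { url: string; statusCode?: number; picked?: boolean; problematic?: boolean; }
    type Predicate = (r: Reqres) => boolean;

    interface ClassificationOptions {
        pickWhenFinished: Predicate[];            // "Pick reqres for archival when they finish"
        markProblematicWhenFinished: Predicate[]; // "Mark reqres as 'problematic' when they finish"
    }

    function classify(reqres: Reqres, options: ClassificationOptions): void {
        // picked vs dropped decides what happens in the next step
        reqres.picked = options.pickWhenFinished.some((pred) => pred(reqres));
        // problematic is a flag, not a state; it does not influence archival
        reqres.problematic = options.markProblematicWhenFinished.some((pred) => pred(reqres));
    }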

Problematic reqres

The problematic reqres status is a flag (NOT a state) that does not influence archival or any of the actions discussed in the later steps. It exists because browsers provide no indication when some parts of a page fail to load properly — they expect you to actually look at the page with your eyes to notice something looking broken (and reload it manually) instead — which is counterproductive when you want to be sure that the whole page with all its resources was archived.

After all, parts of a dynamically loaded page might simply silently fail to be rendered by the associated JavaScript because some of the HTTP requests that JavaScript made in the background failed, or, on a static web page, the layout and `CSS` might have made some of the incompletely loaded parts of the page invisible (by design or by accident).

So, to provide an indicator for such cases, Hoardy-Web keeps the log of problematic reqres and displays the number of elements in the log in its toolbar button’s badge.

By default, HTTP requests that failed to get a response, those with incomplete response bodies, and those for which the browser reported potentially problematic errors but which Hoardy-Web picked anyway, will be marked as problematic.

Problematic errors are errors like

but NOT errors like

(In principle, Hoardy-Web could have been designed to never record the errors of the latter category in the first place, thus simplifying the above a bit, but Hoardy-Web is designed to follow the philosophy of "collect everything as the browser gives it, as raw as possible, do all the post-processing logic separately, and allow for no logic at all, if the user asks for it".)

The raw error strings reported by the browser for each reqres can be seen in the history-log.

If this option is enabled, Hoardy-Web will generate a desktop notification each time a new problematic reqres gets produced. If you don’t care about the problematic flag and it annoys you, you should disable that option, not the options under the Mark reqres as 'problematic' when they finish settings. This way you will still see the number of problematic reqres in the extension’s toolbar button’s badge.
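
Under the hood, such notifications use the standard WebExtensions notifications API; a hedged sketch, with the title, message text, and icon path below being made-up examples:

    // assumes the WebExtensions `browser` global; the texts below are made up
    function notifyProblematic(count: number): void {
        browser.notifications.create({
            type: "basic",
            title: "Hoardy-Web",
            message: `${count} new problematic reqres`,
            iconUrl: browser.runtime.getURL("icon.png"), // hypothetical icon path
        });
    }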

Glossary

Step 3: Collection, Discarding, and Limbo

On exit from the finished state each reqres gets split into

Since those tuples can be reconstructed back into the original reqres structures, the following will continue to refer to them as plain reqres whenever the fact that they are now internally represented by those tuples is not relevant.

Normally, picked reqres proceed to the collected state and get queued for archival while dropped reqres proceed to being discarded from memory.

When Archive 'collected' reqres by toggle is enabled, those queued reqres proceed directly to the next step.

Limbo mode

However, sometimes you might want to actually look at a web page before deciding whether you want to archive it. The naive way to do this would be to load the page with capture disabled first, look at it, and then, if you want to save it, enable capture and reload the page again with the browser’s cache disabled via Control+F5 (and it has to be Control+F5, not just F5, because otherwise, on Firefox, some URLs might produce reqres in the incomplete_fc state, while on Chromium their fetching could be silently skipped).

Obviously, this is both annoying and forces you to fetch everything twice.

Which is why Hoardy-Web implements “limbo mode”. With one of the limbo mode options enabled, Hoardy-Web will instead capture everything as normal, but then, instead of sending the reqres in question to the collected or discarded states immediately, it will put them into the in_limbo state, where they will linger until you collect or discard them manually by pressing the appropriate-buttons, or until the Automatic actions for recently closed tabs options make a decision semi-automatically for you.

A picked reqres will be put into in_limbo when Pick into limbo setting is enabled in the currently active tab or when one-of-the-other settings is enabled for other reqres sources.

Similarly, a dropped reqres will be put into in_limbo when Drop into limbo setting is enabled in the currently active tab or when one-of-the-other settings is enabled for other reqres sources. (This latter option mainly exists for debugging.)

If this option is enabled and there are more than this number of reqres in_limbo, or the total size of all dumps in_limbo is more than this size (in MiB), Hoardy-Web will complain to remind you to collect or discard some of them so that your browser does not waste too much memory (and so that you won’t lose too much data if something crashes while the Stash 'collected' reqres into local storage option discussed below is disabled).

Glossary

Step 3.5: Stashing

The stashed reqres status is, essentially, a flag that says this reqres was temporarily backed up to browser’s local storage.

When Archive 'collected' reqres by option is disabled but Stash 'collected' reqres into local storage option is enabled, instead of archiving newly queued reqres, Hoardy-Web will stash their (loggable, dump) tuples into browser’s local storage.

Similarly, when both the Stash 'collected' reqres into local storage option and the Stash 'in_limbo' reqres option (or one-of-the-other similar options) are enabled, newly generated in_limbo reqres will also get immediately stashed into browser’s local storage.
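
What stashing amounts to, roughly, is a write into the extension’s browser.storage.local area. A hedged sketch follows, where the key scheme and field names are assumptions, not Hoardy-Web’s actual storage format:

    interface Loggable { sessionId: number; reqresId: number; url: string; } // hypothetical fields

    async function stash(loggable: Loggable, dump: Uint8Array): Promise<void> {
        const key = `stash/${loggable.sessionId}/${loggable.reqresId}`; // hypothetical key scheme
        await browser.storage.local.set({
            // raw typed arrays do not survive storage on all browsers, hence the conversion
            [key]: { loggable, dump: Array.from(dump) },
        });
    }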

Moreover, the following section will discuss how Hoardy-Web will also try stashing reqres that failed to archive into browser’s local storage.

In other words, stashing exists to prevent the loss of successfully captured but not yet archived data in situations where

before you have collected or discarded everything from in_limbo, or before Hoardy-Web has successfully archived everything from its archiving queue.

Note, however, that even with stashing enabled, Hoardy-Web will skip disk IO whenever possible: e.g., if both the Archive 'collected' reqres by and Archive 'collected' reqres by > ... submitting them via 'HTTP' options discussed below are enabled, Hoardy-Web will first try to archive each newly collected reqres straight from memory to the archiving server, and only if that fails will it attempt to stash it into local storage instead (see the sketch below).

Meaning that

The above also implies that, technically, stashing is not a silver bullet against data loss. To try and make it such would mean unconditional immediate stashing of all captured data, which would waste a lot of disk IO on most Hoardy-Web configurations.
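
To make the above concrete, the try-from-memory-first behaviour can be sketched as follows, where submitViaHTTP and stashToLocalStorage stand for hypothetical helpers implementing the respective archival and stashing methods:

    type Archiver = (dump: Uint8Array) => Promise<void>;

    async function archiveOrStash(dump: Uint8Array,
                                  submitViaHTTP: Archiver,
                                  stashToLocalStorage: Archiver): Promise<void> {
        try {
            await submitViaHTTP(dump);         // straight from memory, no disk IO on success
        } catch (err) {
            await stashToLocalStorage(dump);   // only failed archivals touch local storage
        }
    }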

When both Archive 'collected' reqres by option and Stash 'collected' reqres into local storage option are disabled, then, after a new reqres gets queued, Hoardy-Web will generate a new desktop notification complaining about it, unless that option is disabled too.

You can also forcefully stash all currently queued, in_limbo, and failed reqres by pressing this button. It stashes everything immediately and unconditionally, ignoring all other stashing settings.

Glossary

Step 3.75: Logging

On entering the collected or discarded state, the loggable metadata of each reqres is copied into the recent reqres history-log and is kept there until the size of the log reaches this many elements, at which point the oldest elements of the log start being elided automatically.
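
In other words, the log behaves like a simple bounded buffer; a minimal sketch, with the type and field names below being illustrative placeholders:

    interface Loggable { url: string; method: string; statusCode?: number; } // illustrative fields

    const historyLog: Loggable[] = [];

    function logReqres(loggable: Loggable, maxEntries: number): void {
        historyLog.push(loggable);
        // once the limit is reached, the oldest entries get elided automatically
        while (historyLog.length > maxEntries)
            historyLog.shift();
    }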

You can also ask Hoardy-Web to forget all history manually by pressing this button, or to forget the history of reqres generated by the currently active tab by pressing that button instead, or do the same by using similar buttons in the-log. The-log also lets you apply reqres filtering options while doing this, allowing you to selectively forget parts of your history.

Note, however, that problematic reqres will not get automatically elided from the log, nor forgotten by using the above buttons. To forget about them, you will have to unset the problematic flag on the respective reqres via this button, or that button, or use similar buttons in the-log.

Step 4: Archival

When the Archive 'collected' reqres by toggle is enabled, Hoardy-Web will pop queued reqres from the archival queue one by one and then perform one or more of the following (in the order they are listed):

You can enable more than one archival method at the same time. For a given loggable, Hoardy-Web will remember and skip previously successful archival methods if the loggable ever returns to the archival queue again (e.g., when one of the archival methods fails and you later ask Hoardy-Web to retry the archival, or when you re-queue a reqres from local storage from the Saved in Local Storage page).
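
Schematically, the skipping of previously successful methods can be pictured like this; the method names and fields below are hypothetical, for illustration only:

    type ArchiveMethod = "exportAs" | "submitHTTP" | "saveLocal"; // hypothetical names

    interface QueuedLoggable {
        archivedVia: Set<ArchiveMethod>; // methods that already succeeded for this loggable
    }

    async function archiveOne(loggable: QueuedLoggable,
                              enabledMethods: ArchiveMethod[],
                              run: (m: ArchiveMethod) => Promise<void>): Promise<void> {
        for (const method of enabledMethods) {
            if (loggable.archivedVia.has(method))
                continue; // this method already succeeded earlier, skip it
            await run(method); // throws on failure, sending the loggable to the failed state
            loggable.archivedVia.add(method);
        }
    }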

Note the difference between stashed and saved reqres:

Buckets

Sometimes you might want to split your archivals into separate buckets to simplify future hoarding and sharing of collected archives. E.g., by default you might want to put everything into the “default” bucket, but put reqres produced by a select tab where you just logged into your personal account into the “private” bucket instead.

To implement this, for each reqres in the archival queue, Hoardy-Web computes a bucket parameter from the appropriate “Bucket” setting, e.g.

Evaluation of the bucket parameter is done just before each archival attempt, so if the queue is not yet empty, and you disable Archive 'collected' reqres by, edit some of the “Bucket” settings, and enable it again, Hoardy-Web will start using the new setting immediately.

When exporting via saveAs, the bucket value will be used in the file name of the generated fake-Download .wrrb file, and the dumps will be split into separate fake-Download files by said bucket. I.e., internally, the bundle discussed above is actually a set of per-bucket bundles.

When submitting to an HTTP server, Hoardy-Web will specify bucket as a query parameter (named “profile”, for historical reasons) to each HTTP POST request.
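
For illustration, such a submission looks roughly like the following; the server URL, endpoint path, and content type here are assumptions (the real Server URL is whatever you configure in the popup), and only the profile query parameter is as described above:

    async function submitDump(dump: Uint8Array, bucket: string): Promise<void> {
        // assumed example URL; configure the actual archiving Server URL in the popup
        const serverUrl = "http://127.0.0.1:3210/pwebarc/dump";
        const url = serverUrl + "?profile=" + encodeURIComponent(bucket);
        const response = await fetch(url, {
            method: "POST",
            headers: { "Content-Type": "application/cbor" }, // content type is an assumption
            body: dump,
        });
        if (!response.ok)
            throw new Error(`archiving server rejected the dump: HTTP ${response.status}`);
    }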

When stashing or saving to local storage, Hoardy-Web will record the value of bucket into each loggable before saving data to disk. If you restart your browser, thus starting a new Hoardy-Web session, Hoardy-Web will use the old stashed/saved bucket values for all new attempted archivals of old reqres generated by previous sessions.

Glossary

Handling of Failures

As noted above, if any of the archival methods fail, the reqres in question will be moved into the failed state.

Submissions of reqres that failed because of networking issues will be retried automatically every 60 seconds. Archivals of reqres rejected by the archiving server, or reqres that failed to be saved to browser’s local storage, will not be retried automatically, as such failures usually happen when there is no space left on the device you are archiving to.
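
The retry behaviour described above amounts to something like the following; all names here are hypothetical, for illustration only:

    type FailureKind = "network" | "rejected" | "storage";

    interface FailedReqres { url: string; failureKind: FailureKind; } // illustrative fields

    const failedQueue: FailedReqres[] = [];
    const RETRY_INTERVAL_MS = 60 * 1000;

    function retryNetworkFailures(requeue: (r: FailedReqres) => void): void {
        for (const reqres of failedQueue.splice(0)) {
            if (reqres.failureKind === "network")
                requeue(reqres);           // networking failures get retried automatically
            else
                failedQueue.push(reqres);  // rejections and out-of-space errors wait for a manual retry
        }
    }

    setInterval(() => retryNetworkFailures((r) => {
        /* put r back into the archival queue */
    }), RETRY_INTERVAL_MS);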

You can retry all failed archivals by pressing this button. You can also use it to nudge the archiving sub-process awake if some things got stuck in the queue by accident. E.g., after the extension got reloaded with a non-empty queue, or if you previously quit your browser before everything was archived.

If this option is enabled and a new reqres has recently moved to the failed state, a new desktop notification will be generated. If this option is enabled, a new desktop notification will be generated when the archival queue becomes empty for the very first time, or when it becomes empty again after any failures.

Glossary

Shortcuts

Hoardy-Web provides a bunch of keyboard and context menu shortcuts to allow you to use it more efficiently.

Keyboard shortcuts

Hoardy-Web provides shortcuts to:

Context menu actions

Hoardy-Web provides context menu actions to:

Quirks and Bugs

Known extension issues

Relevant issues of all browsers

Relevant issues of Firefox-based desktop browsers: Firefox, Tor Browser, LibreWolf, etc

Relevant issues of Firefox-based mobile browsers: Fenix aka Firefox for Android, Fennec, Mull, etc

All of the above apply, moreover:

Relevant issues of Chromium-based desktop browsers: Chromium, Chrome, etc

On Chromium-based browsers, there is no way to get HTTP response data without attaching Chromium’s debugger to the tab from which the request originates. This makes things a bit tricky, for instance:

Moreover, Chromium has the following long-standing issues/bugs making things difficult:

Error messages and codes

Desktop notifications

Errors recorded in reqres, as seen in the-log

Most error codes are produced by attaching one of the following prefixes to the raw error code given by the browser:

In particular, the webRequest::NS_ prefix on Firefox, and the webRequest::net:: and debugger::net:: prefixes on Chromium, signify various issues produced by the networking stacks of those browsers. For instance:

The exception to the above rule of keeping everything as raw as possible are the webRequest::capture:: and debugger::capture:: prefixes, which signify various errors produced by Hoardy-Web itself in its webRequest- or debugger-handling code, respectively. In particular:

Frequently Asked Questions

Does Hoardy-Web send any of my captured web browsing data anywhere?

Hoardy-Web only ever sends your data to the archiving Server URL you specify when the Archive 'collected' reqres by > ... submitting them via 'HTTP' option is enabled.

Nowhere else. Never else.

Does Hoardy-Web collect and send any telemetry anywhere?

For your convenience, Hoardy-Web saves some global stats across restarts (e.g., the Collected, Discarded, Picked, and Dropped lines).

However, none of those are ever sent anywhere and you can reset them at any time.

Will the answers to the above two questions ever change in a future version of Hoardy-Web?

No. I (the author) hate non-consensual data collection.

In fact, as you might have noticed, Hoardy-Web, unlike most other browser extensions, is almost trivial to reproducibly build from source on a POSIX-compliant system with the Nix package manager installed, and it has a privately operated source code mirror.

This is by design: I expect a chunk of Hoardy-Web users to be paranoid enough to only ever build it from source and install the results manually into their LibreWolf or some such, leaving zero telemetry fingerprints anywhere.

Hoardy-Web asks for a lot of permissions, what does it use all those permissions for?

Can I use Hoardy-Web to capture web pages while my browser runs with JavaScript disabled?

Yes.

Can I use Hoardy-Web to capture web pages that use a lot of JavaScript?

In principle, Hoardy-Web will capture everything your browser fetches from the network as you browse the web, except for, at the moment, WebSockets data. So, web pages using only simple UI-related JavaScript code will work fine when you start replaying them “from scratch” via hoardy-web export mirror (also there) or some such.

However, in the most general case, “from scratch” replay of pages dynamically generated via JavaScript is not guaranteed. For example, consider a web page with JavaScript code that generates a random number, queries a remote server with that number, and then renders the result somehow. Obviously, such a web page can not be replayed “from scratch”, since it will generate a new random number and your archive probably won’t have the corresponding server response for it. This is why the DOM-snapshot buttons exist; see the following question.

Can I use Hoardy-Web to capture a web page as it currently is, after all JavaScript was run, not as it was when it was last fetched from the network?

Yes, you can capture DOM (Document Object Model) snapshots of all frames of the currently active tab by pressing this button in the popup.

Doing that will generate and capture snapshots of the raw HTML or XML of each frame contained in the currently active tab. (Reqres-wise, they will be 200 OK responses, but with protocol set to SNAPSHOT and method set to DOM.)
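
For reference, taking such a snapshot of every frame of a tab can be done with standard WebExtensions APIs; here is a minimal Manifest V2-style sketch, not Hoardy-Web’s actual code:

    // assumes the WebExtensions `browser` global (Firefox-style promises)
    async function snapshotTab(tabId: number): Promise<string[]> {
        const results = await browser.tabs.executeScript(tabId, {
            code: "document.documentElement.outerHTML",
            allFrames: true, // snapshot every frame, not just the top-level document
        });
        // each result then gets wrapped into a fake 200 OK reqres with
        // protocol SNAPSHOT and method DOM, as described above
        return results as string[];
    }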

You can also do that for all open tabs at once by pressing that button.

How can I make Hoardy-Web capture a web page completely, especially when parts of it are loaded lazily?

In the most general case, you will have to scroll the page around and click random buttons and media elements.

Hoardy-Web has no “autopilot” for doing this, nor will it ever get one, at least not as part of the Hoardy-Web extension, since “autopiloting” is very website-specific. So, at the moment, the most general semi-automated solution is to run a website-specific UserScript via Tampermonkey or some such, wait until everything finishes loading, and then take a snapshot. (Hoardy-Web will get an integration for automating that, eventually.)

On the other hand, if you

then you can simply go to about:config and toggle dom.image-lazy-loading.enabled to false. All images will start being loaded eagerly after that.

Can I use Hoardy-Web to capture a web page without archiving it, look at it, decide if I want to save it, and archive it only if I do, all without reloading the page a second time?

Yes. This is why Pick into limbo setting exists. See above for more info.

In combination with Automatic actions for recently closed tabs options you can implement any of the following workflows:

Why can pages under https://addons.mozilla.org/ and https://chromewebstore.google.com/ not be captured by Hoardy-Web?

Browsers prevent extensions from running on extension store pages so that extensions can not manipulate ratings, reviews, and other such things. However, you can archive https://addons.mozilla.org/ pages by running Hoardy-Web under Chromium and https://chromewebstore.google.com/ pages by running Hoardy-Web under Firefox.

When running Hoardy-Web under Chromium, a lot of my captures fail with debugger::capture::EMIT_FORCED::BY_DETACHED_DEBUGGER, debugger::capture::NO_RESPONSE_BODY::DETACHED_DEBUGGER, webRequest::capture::CANCELED::NO_DEBUGGER, and similar errors. What do I do?

You are either

Also, Chromium will occasionally detach its debugger at random; it just happens.

When running Hoardy-Web under Firefox, some of my captures fail with webRequest::capture::RESPONSE::BROKEN. What do I do?

This is a rare error caused by a race condition between a webpage’s service/shared worker and the browser’s networking code.

Usually, you can ignore this error, since loading another related page is likely to fetch and capture the same URL.

However, if this happens a lot to you, or if it annoys you, you can go to about:config, toggle dom.serviceWorkers.enabled to false, and restart the browser. Alternatively, you can use NoScript or some such extension to disable JavaScript, and thus the offending service/shared workers, on the page in question.

Why does a (specific) URL or some part of it fail to be properly captured by Hoardy-Web?

Did you read the notes above on the bugs of the browser you are using?

Most notably:

The documentation claims that all Hoardy-Web archival methods except for submission via HTTP are unsafe. Why?

Archival by exporting using saveAs (generation of fake-Downloads) can fail and lose a bit of your collected data at a time if you press a wrong button in your browser’s UI, slightly mis-reconfigure your browser, or your disk unexpectedly runs out of space.

Archival to browser’s local storage (which is what Hoardy-Web does by default) can lose all of your collected data at once if you uninstall the extension by accident.

Meanwhile, archival by submission via HTTP has none of these problems:

Archival to browser’s local storage was added because it was very easy to implement after the stash was added. It is the default because it usually works fine, it properly reports errors, has the most consistent behaviour across all browsers, and does not require the user to install any Python code, which helps with on-boarding.

In the ideal world, browsers would provide a better saveAs API which would have a less annoying UI for the user and would return out-of-disk-space errors to the extension, in which case exporting via saveAs would be the default.

As it is now, the only way to be absolutely sure your data is properly saved to disk forever when the extension reports it as archived is to use submission via HTTP.

When running Hoardy-Web under Firefox, enabling export via saveAs makes the browser’s UI quite annoying. Can it be fixed?

Yes, go to about:config and toggle browser.download.alwaysOpenPanel to false.

This page does not answer my question. What do I do?

If the whole content of this page (not just this section; did you try searching for stuff with Control+F? there is a lot of info here) does not explain your problem, open an issue on GitHub or get in touch otherwise.