
Passively capture, archive, and hoard your web browsing history, including the contents of the pages you visit, for later offline viewing, replay, mirroring, data scraping, and/or indexing. Your own personal private Wayback Machine that can also archive HTTP POST requests and responses, as well as most other HTTP-level data.

How to read this document

It is highly recommended you view this page by clicking the Help button in the extension’s own UI. Doing that will make this page interactive: the settings popup will be displayed to the right of this page, and hovering over or clicking on any links pointing to popup.html will highlight the corresponding elements in the popup.

See screenshots if you want to see how it will look.

You can still read this page outside of the extension’s UI, but be prepared for all links pointing to popup.html to be useless. Also, the version hosted on the author’s web site is superior to what GitHub’s web UI renders (this page is written in the org-mode markup language, and converting it to GitHub Markdown would make things much harder, since it uses a lot of advanced org-mode markup features to simplify things, and GitHub does not render org-mode files very well at the moment).

What is this?

This page is the primary documentation of the Hoardy-Web browser extension (add-on), it

For more information, see the project’s documentation.

Conventions

When you open this document by clicking the Help button in the extension’s UI, this page has two parts: this help text and an iframe with a completely unrolled popup UI in it.

The whole page will switch between single- and two-column layouts depending on available viewport width (which depends on device width and zoom level). In single-column layout the popup UI is placed after the end of the help text. In two-column layout they are placed side-by-side.

In both layouts:

In cases when clicking on a link scrolls this page around or navigates to another page, pressing the “Back” button of your browser will get you back to the exact link you clicked and then highlight it, making it easy to get back to reading from the exact place you left off.

Go forth and try it by clicking one or more of the above links.

The above rules also apply to all other internal pages of Hoardy-Web, e.g. the Changelog page.

General operation

Glossary

State Diagram

Reqres change their internal states according to the following state diagram (which is explained below):

(start) -> (request sent) -> (nIO) -> (headers received) -> (nIO) --> (body received)
   |                           |                              |             |
   |                           v                              v             v
   |                     (no_response)                   (incomplete)   (complete)
   |                           |                              |             |
   |                           \                              |             |
   |\---> (canceled) ----\      \                             |             |
   |                      \      \                            \             |
   |\-> (incomplete_fc) ---\      \                            \            v
   |                        >------>---------------------------->-----> (finished)
   |\--> (complete_fc) ----/                                             /  |
   |                      /                                             /   |
   \----> (snapshot) ----/       /- (collected) <--------- (picked) <--/    |
                                /        ^                     |            |
               (stashIO?) <----/         |                     v            v
                   |                     \-- (in_limbo) <- (stashIO?) <- (dropped)
                   v                              |                         |
                (queued) <--------------------\   |                         |
                / |  ^ \                       \  \-----> (discarded) <-----/
  (exported) <-/  |  |  \-------------------\   \              ^
      |           |  |                       \   \             |
      |       /---/  \-----------------\      \   \            |
      |       |                        |       \   \           |
      |       v                        |        \   \          |
      |\-> (srvIO) -> (stashIO?) -> (unarchived) |   \         |
      |       |                        ^        /    |         |
      |       |                        |    /--/     |         |
      |       v                        |    v        |         |
      |   (submitted) --------------> (saveIO) --> (saved)     | {{!saving}}
      |       \                                                |
      \-------->-----------------------------------------------/
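
If you find it easier to read the same thing as data, here is the capture half of that diagram re-expressed as a JavaScript transition map. This is purely illustrative (the extension’s internals are not literally structured like this); the state names are the ones used in the diagram and explained in the glossaries below.

  // Illustrative only: the capture half of the diagram above as a transition
  // map; the archival half (queued and onwards) is described in Step 4.
  const flow = {
      "start":            ["request sent", "canceled", "incomplete_fc",
                           "complete_fc", "snapshot"],
      "request sent":     ["headers received", "no_response"],  // via nIO
      "headers received": ["body received", "incomplete"],      // via nIO
      "body received":    ["complete"],
      "no_response":      ["finished"],
      "incomplete":       ["finished"],
      "complete":         ["finished"],
      "canceled":         ["finished"],
      "incomplete_fc":    ["finished"],
      "complete_fc":      ["finished"],
      "snapshot":         ["finished"],
      "finished":         ["picked", "dropped"],
      "picked":           ["collected", "in_limbo"],            // via stashIO?
      "dropped":          ["in_limbo", "discarded"],            // via stashIO?
      "in_limbo":         ["collected", "discarded"],
      "collected":        ["queued"],                           // via stashIO?
  };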

Step 1: Tracking

Hoardy-Web attaches to your browser’s runtime and tracks progress of HTTP requests and their responses, capturing both their request and response headers and data at appropriate times in the browser’s request and response processing pipeline.

Whether Hoardy-Web will track a given request depends on the Track new requests toggles in the settings popup, e.g.:

Disabling any of these toggles does not stop tracking of already initiated requests; it only stops new requests controlled by that toggle from being tracked.
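
For intuition, the following is a minimal sketch of the kind of WebExtensions webRequest listeners such tracking is built upon. This is not Hoardy-Web’s actual code, and capturing response bodies requires more machinery than this (webRequest.filterResponseData on Firefox, the debugger API on Chromium; see the “Quirks and Bugs” section below).

  // A minimal, illustrative sketch of observing the request/response
  // lifecycle with the WebExtensions webRequest API.
  const filter = { urls: ["<all_urls>"] };

  browser.webRequest.onSendHeaders.addListener(
      (ev) => console.log("request sent", ev.requestId, ev.url),
      filter, ["requestHeaders"]);

  browser.webRequest.onHeadersReceived.addListener(
      (ev) => console.log("headers received", ev.requestId, ev.statusCode),
      filter, ["responseHeaders"]);

  browser.webRequest.onCompleted.addListener(
      (ev) => console.log("response finished", ev.requestId),
      filter);

  browser.webRequest.onErrorOccurred.addListener(
      (ev) => console.log("request failed", ev.requestId, ev.error),
      filter);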

The networking states of the State Diagram

As shown on the above diagram, a new reqres, i.e. a new HTTP request and response pair, proceeds through the following networking states:

The states after the finished state

In principle, upon reaching the finished state, the primary objective of Hoardy-Web with respect to that reqres is complete, so it could simply be written to disk and forgotten about.

Unfortunately for Hoardy-Web, browsers do not allow web apps and extensions to simply write files to the user’s file system, and the existing browser APIs that do allow for some form of persistence to disk each have different limitations.

Also, it is quite useful to have more states after finished to improve the UI and allow for various conditional workflows.

Which is why Hoardy-Web has more states after finished and more steps after this one.

Glossary

Step 2: Classification

When a reqres reaches the finished state it gets classified using algorithms described below. The results of these computations influence which of the next reqres processing steps get taken for that reqres and what gets displayed to the user.

Problematic reqres

Conventional web browsers provide no explicit indication when a part of a web page fails to load properly. Apparently, you are expected to actually look at the page with your eyes, notice something looking broken, and reload it manually if so. Obviously, this can be quite inconvenient when you want to be sure that the whole page with all of its resources was archived. Especially when parts of a dynamically loaded page might simply silently fail to be rendered by associated JavaScript because some of the HTTP requests that JavaScript did in background failed, or, on a static web page, layout and CSS might have made some of the incompletely loaded parts of the page invisible (by design or by accident).

So, to provide such an indicator, Hoardy-Web keeps track of reqres that fail to load properly and marks them with a problematic flag (NOT a state) which influences

What gets marked as problematic is controlled by Mark reqres as 'problematic' when they finish options.

By default, HTTP requests that failed to get a response, those that have incomplete response bodies, and those for which the browser reported potentially problematic errors but then Hoardy-Web picked them anyway, will be marked as problematic.

Problematic errors are errors like

but NOT errors like

(In principle, Hoardy-Web could have been designed to never record the errors of the latter category in the first place, thus simplifying the above bit, but Hoardy-Web is designed to follow the philosophy of “collect everything as the browser gives it, as raw as possible, do all the post-processing logic separately, allow for no logic at all, if the user asks for it”.)

The raw error strings reported by the browser for each reqres can be seen in the history-log.
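
Roughly speaking, the default rule described above corresponds to the following sketch, in which the reqres field names are illustrative and looksProblematic stands in for a classifier over those raw error strings:

  // A simplified sketch of the default "problematic" rule; the real behaviour
  // is driven by the "Mark reqres as 'problematic' when they finish" options.
  function isProblematicByDefault(reqres, looksProblematic) {
      if (reqres.state === "no_response") return true; // failed to get a response
      if (reqres.state === "incomplete") return true;  // incomplete response body
      // the browser reported a potentially problematic error,
      // but Hoardy-Web picked the reqres anyway
      return reqres.picked && reqres.errors.some(looksProblematic);
  }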

If you don’t care about the problematic flag in a select tab and those notifications annoy you, you should disable this option. If they annoy you in general, you can disable the global one instead. You should probably not, however, disable too many of the options under the Mark reqres as 'problematic' when they finish settings: that way, even with notifications disabled, you can still see the number of problematic reqres in the extension’s toolbar button’s badge.

Note, however, that the problematic flag is purely a UI thing; it does not influence archival or any of the other steps described below in any way.

Picked and Dropped reqres

In contrast to the above, each new finished reqres advances to either the picked or the dropped state, which does influence the actions Hoardy-Web performs in the next steps.

Which of those two states gets selected is decided based on the Pick reqres for archival when they finish options.

By default, all complete and complete_fc reqres get picked, regardless of their HTTP response status codes, while the rest get dropped.
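
In other words, the default classification is essentially the following one-liner (field names are illustrative; the actual decision is controlled by the options above):

  // Default rule: pick reqres that loaded completely, from the network or
  // from the cache, ignoring HTTP status codes; drop everything else.
  function pickedByDefault(reqres) {
      return reqres.state === "complete" || reqres.state === "complete_fc";
  }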

Glossary

Step 3: Collection, Discarding, and Limbo

On exit from the finished state each reqres gets split into

Since those tuples can be reconstructed back into the original reqres structures, the following will continue to refer to them as plain reqres wherever the fact that they are now internally represented by those tuples is not relevant.

Normally, picked reqres proceed to the collected state and get queued for archival while dropped reqres proceed to being discarded from memory.

When Archive 'collected' reqres toggle is enabled, those queued reqres proceed directly to the next step.

“Limbo” mode

However, sometimes you might want to actually look at a web page before deciding if you want to archive it or not. The naive way to do this would be to load the page with capture disabled first, look at it, and then, if you want to save it, enable capture and reload the page again with the browser’s cache disabled via Control+F5 (and it has to be Control+F5, not just F5, because otherwise some URLs, on Firefox, might produce reqres in the incomplete_fc state, while on Chromium, their re-fetching could be silently skipped).

Obviously, this is both annoying and will force you to fetch everything twice.

Which is why Hoardy-Web implements “limbo mode”. With one of the limbo mode options enabled, Hoardy-Web will instead capture everything as normal, but then, instead of sending the newly captured reqres to collected or discarded states immediately, it will put them into in_limbo state where they would linger until you collect or discard them manually by pressing the appropriate-buttons, or until Closed tabs options make a decision semi-automatically for you.

A picked reqres will be put into in_limbo when Pick into limbo setting is enabled in the currently active tab or when one-of-the-other settings is enabled for other reqres sources.

Similarly, a dropped reqres will be put into in_limbo when Drop into limbo setting is enabled in the currently active tab or when one-of-the-other settings is enabled for other reqres sources. (This latter option mainly exists for debugging.)

If this option is enabled and there are more than this number of reqres in_limbo, or the total size of all dumps in_limbo is more than this size (in MiB), Hoardy-Web will complain to remind you to collect or discard some of them so that your browser does not waste too much memory (and so that you won’t lose too much data if something crashes while the Stash 'collected' reqres into local storage option discussed below is disabled).
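
Sketched in code, the routing described in this subsection looks roughly as follows. Everything here is illustrative: the option and field names merely stand for the toggles and limits linked above.

  // Illustrative sketch of limbo routing and the "too much in limbo" reminder.
  const totalDumpSize = (rs) => rs.reduce((total, r) => total + r.dumpSize, 0);

  function routeFinished(reqres, options, inLimbo, archivalQueue) {
      const wantLimbo = reqres.picked ? options.pickIntoLimbo
                                      : options.dropIntoLimbo;
      if (wantLimbo) {
          reqres.state = "in_limbo";
          inLimbo.push(reqres);
          if (inLimbo.length > options.limboMaxNumber
              || totalDumpSize(inLimbo) > options.limboMaxSizeMiB * 1024 * 1024)
              console.warn("too much in limbo, collect or discard some of it");
      } else if (reqres.picked) {
          reqres.state = "collected";
          archivalQueue.push(reqres);  // to be archived in Step 4
      } else {
          reqres.state = "discarded";
      }
  }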

Glossary

Step 3.5: Stashing

The stashed reqres status is, essentially, a flag that says this reqres was temporarily backed up to the browser’s local storage. In other words, stashing exists to prevent loss of successfully captured but not yet archived data in situations where

before you have collected or discarded everything from in_limbo or Hoardy-Web has successfully archived everything from its archiving queue.

In particular:

Moreover, the following section will discuss how Hoardy-Web will try stashing unarchived reqres into browser’s local storage too.

Note, however, that even with stashing enabled, Hoardy-Web will skip disk IO whenever possible: e.g., if both the Archive 'collected' reqres and Submit dumps via 'HTTP' options discussed below are enabled, Hoardy-Web will first try to archive each new collected reqres straight from memory to the archiving server, and only if that process fails will it attempt stashing them to local storage instead.

Meaning that

The above also implies that, technically, stashing is not a silver bullet against data loss. To try and make it such would mean unconditional immediate stashing of all captured data, which would waste a lot of disk IO on most Hoardy-Web configurations.
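
As a minimal sketch of the “skip disk IO whenever possible” behaviour, assuming a hypothetical submitViaHTTP function and using storage.local as a stand-in for whichever persistence API the extension actually uses:

  // Archive straight from memory first; stash to local storage only on failure.
  // `submitViaHTTP` is a hypothetical stand-in for the real submission code.
  async function archiveOrStash(reqres) {
      try {
          await submitViaHTTP(reqres);
      } catch (err) {
          // submission failed: temporarily back the data up to local storage
          // so that it can be retried later without risking its loss
          await browser.storage.local.set({ ["stash/" + reqres.id]: reqres });
      }
  }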

When both Archive 'collected' reqres option and Stash 'collected' reqres into local storage option are disabled, then, after a new reqres gets queued, Hoardy-Web will generate a new notification complaining about it, unless that option is disabled too.

You can also forcefully stash all currently queued, in_limbo, and unarchived reqres by pressing this button. It stashes everything immediately and unconditionally, ignoring all other stashing settings. When reloading the extension via the Reload button or via Auto-reload on updates option, this action will be run automatically.

Glossary

Step 3.75: Logging

On entering collected or discarded state, loggable metadata of each reqres is copied into the recent reqres history-log and is kept there until the size of the log reaches this many elements, at which point the older elements of the log start being elided automatically.

You can also ask Hoardy-Web to forget all history manually by pressing this button, or to forget the history of reqres generated by the currently active tab by pressing that button instead, or do the same by using similar buttons in the-log. Using the-log also lets you apply the reqres filtering options available there, allowing you to selectively forget parts of the history.

Note, however, that problematic reqres will not get automatically elided from the log, nor forgotten by using the above buttons. To forget about them, you will have to first unset the problematic flag on the respective reqres via this button, or that button, or use similar buttons in the-log.
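
The eliding behaviour described above amounts to roughly the following sketch (illustrative; maxSize stands for the log size limit option linked above):

  // Keep the recent history log bounded, but never auto-elide problematic ones.
  function pushLoggable(log, loggable, maxSize) {
      log.push(loggable);
      for (let i = 0; log.length > maxSize && i < log.length; ) {
          if (log[i].problematic) i += 1; // kept until the flag is unset manually
          else log.splice(i, 1);          // elide the oldest non-problematic entry
      }
  }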

Step 4: Archival

When Archive 'collected' reqres toggle is enabled, Hoardy-Web will pop queued reqres from the archival queue one by one and then perform one or more of the following (in order they are listed):

You can enable more than one archival method at the same time. For a given loggable, Hoardy-Web will remember and skip previously successful archival methods if the loggable ever returns to the archival queue again (e.g., when one of the archival methods fails and you later ask Hoardy-Web to retry the archival, or when you re-queue a reqres from local storage from the Saved in Local Storage page).
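
Conceptually, the per-reqres archival loop is therefore something like the following sketch (names are illustrative; methods stands for the enabled archival methods in the order they are listed above):

  // Try each enabled archival method in order, remembering which ones already
  // succeeded so that a re-queued loggable is not archived twice the same way.
  async function archiveOne(loggable, methods) {
      loggable.archivedBy ??= new Set();
      for (const method of methods) {
          if (!method.enabled || loggable.archivedBy.has(method.name)) continue;
          await method.run(loggable); // a failure here sends it to `unarchived`
          loggable.archivedBy.add(method.name);
      }
  }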

Note the difference between stashed and saved reqres:

Buckets (aka collections)

Sometimes you might want to semi-automatically split your collected archives into separate disjoint sets. Say, for instance, you want to split out archives generated by a select tab into a separate set you plan to share with somebody else. In Hoardy-Web such sets are called buckets. WARC-based tools sometimes call these “collections” instead.

To implement this, for each reqres in the archival queue, Hoardy-Web takes a bucket value from a corresponding “Bucket” setting:

Evaluation of bucket is done just before each archival attempt, so if the queue is not yet empty, and you disable Archive 'collected' reqres, edit some of the “Bucket” settings, and enable it again, Hoardy-Web will start using the new setting immediately.

When exporting via saveAs, the bucket value will be used in the file name of the generated fake-Download .wrrb file, and the dumps will be split into separate fake-Download files by said bucket. I.e., internally, the bundle discussed above is actually a set of per-bucket bundles.

When submitting to an HTTP server, Hoardy-Web will specify bucket as a query parameter (named “profile”, for historical reasons) to each HTTP POST request, which will cause the configured archiving server to put those WRR files into a directory with the same name.
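
In other words, each submission is a plain HTTP POST of a single dump with the bucket passed as the profile query parameter; roughly, as in the following sketch (the URL composition is simplified here, and serverURL stands for the configured Server URL):

  // Minimal sketch of submission via HTTP; `dump` is one serialized WRR dump.
  async function submitDump(serverURL, bucket, dump) {
      const url = serverURL + "?profile=" + encodeURIComponent(bucket);
      const response = await fetch(url, { method: "POST", body: dump });
      if (!response.ok)
          throw new Error("archiving server rejected the dump: " + response.status);
  }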

When stashing or saving to local storage, Hoardy-Web will record the value of bucket into each loggable before saving data to disk. If you restart your browser, thus starting a new Hoardy-Web session, Hoardy-Web will use the old stashed/saved bucket values for all new attempted archivals of old reqres generated by previous sessions.

So, for example, if you want to share a subset of your captures, you can

Handling of failures

As noted above, if any of the archival methods fail, the reqres in question will be moved into the unarchived state.

Submissions of reqres that became unarchived because of networking issues will be retried automatically every 60 seconds. Archivals of reqres rejected by the archiving server, or those that failed to be saved to the browser’s local storage, will not be retried automatically, as those failures usually happen when there is no space left on the device you are archiving to.
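
That retry policy is, roughly, the following sketch (illustrative; the real code does more bookkeeping):

  // Every 60 seconds, re-queue only those unarchived reqres whose archival
  // failed because of networking issues; other failures wait for a manual retry.
  function scheduleAutoRetries(unarchived, archivalQueue) {
      setInterval(() => {
          for (let i = unarchived.length - 1; i >= 0; i--) {
              if (unarchived[i].failureReason !== "network") continue;
              archivalQueue.push(unarchived[i]);
              unarchived.splice(i, 1);
          }
      }, 60 * 1000);
  }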

You can retry all archiving failures by pressing one of this or that buttons. You can also use them to nudge the archiving sub-process awake if some things got stuck in the queue by accident. E.g., after the extension got reloaded with a non-empty queue, or if you previously quit your browser before everything was archived.

If this option is enabled and a new reqres recently moved to the unarchived state, a new notification will be generated. If this option is enabled, a new notification will be generated when the archival queue gets empty the very first time or after any failures.

Glossary

Common workflows

Replay integration

When your archiving server supports it and this option is not disabled, Hoardy-Web enables its integration with replay over HTTP.

At the moment, this includes two buttons which re-navigate all tabs or the currently active tab (respectively) to their replay pages as well as keyboard shortcuts and context menu actions described below.

“Work offline” mode

Sometimes, you might want to block a select tab from performing new HTTP requests.

Say, for instance, you opened a URL in a new tab, then you forgot about that tab for a while, but then you returned to it again, and you now want to read that page. But then you discover that the font size is too small for you, and so you want to change that tab’s zoom level. Changing zoom level will change tab’s viewport size, which, if the page uses responsive CSS, will likely force your browser to generate new HTTP requests to fetch data used by previously inactive parts of the layout. Essentially, this will notify the page’s origin server that you are now interacting with that page. Some websites do this on purpose to track users that run with JavaScript disabled.

Meanwhile, normally, when using the hoardy-web tool, pages of static website mirrors generated by its mirror sub-command and HTTP replay pages generated by its serve sub-command remap all URLs of page requisites to point to local files and replay URLs. (Though, this is configurable.) But the HTML5 specification is quite large and gets updated all the time, interactions between remapped pages and some browser extensions can sometimes break things, and hoardy-web can have bugs in its remapping code. So, remapping of some of those URLs can occasionally fail.

Say, however, you want to ensure that

In some cases you might even feel paranoid enough to want to prevent your browser from opening non-remapped jump-links (a href), even when you click them (by accident).

Desktop versions of Firefox-based browsers have a File > Work Offline option that can solve most of this, but it disables all new requests browser-wide, which is quite inconvenient and error-prone if you want to keep some of your tabs offline while not restricting others, and it will break replay over HTTP with hoardy-web serve. Chromium-based browsers do not appear to have such a feature at all.

To solve this issue — and to add an equivalent of File > Work Offline to Chromium-based browsers — Hoardy-Web implements its own Work offline mode controlled via the following toggles:

Unlike the File > Work Offline option of Firefox, enabling any of these toggles:

In the latter case, those newly generated canceled reqres will also be marked as problematic if that option is enabled. So, for convenience, there is also a toggle that controls whether toggling Work offline options (from the popup or with keyboard shortcuts) should also automatically set the corresponding Track new requests option to the opposite value.

Finally, there is also a bunch of options that automatically enable “Work offline” mode in tabs with various classes of URLs. By default, “Work offline” mode is enabled for file: and replay URLs to stop any pages generated by hoardy-web mirror and hoardy-web serve from accessing the Internet.
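
Mechanically, “Work offline” boils down to canceling new requests coming from the tabs it is enabled in, roughly as in the following sketch (illustrative; offlineTabIds is a hypothetical set, and blocking webRequest listeners require the webRequestBlocking permission):

  // Cancel any new request originating from a tab with "Work offline" enabled.
  const offlineTabIds = new Set(); // hypothetical: ids of tabs working offline

  browser.webRequest.onBeforeRequest.addListener(
      (ev) => offlineTabIds.has(ev.tabId) ? { cancel: true } : {},
      { urls: ["<all_urls>"] },
      ["blocking"]);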

Re-archival

If you archived some data by saving it into local storage and you now want to re-archive the same data using another method, do the following:

If, after confirming that everything was properly re-archived, you now want to wipe that re-archived data from local storage, do the following:

Using with Tor Browser

When using Hoardy-Web with Tor Browser, you probably want to configure everything in such a way that all of the machinery of Hoardy-Web is completely invisible to web pages running in your Tor Browser, to prevent fingerprinting.

Mostly convenient, paranoid

So, in the mostly convenient yet sufficiently paranoid setup, you would only ever use the Hoardy-Web extension configured to archive to the browser’s local storage (which is the default) instead of Submit dumps via 'HTTP', and then export your dumps manually at the end of a browsing session; see the re-archival instructions.

Yes, this is slightly annoying, but this is the only absolutely safe way to export data out of Hoardy-Web without using submission via HTTP, and you don’t need to do this at the end of each and every browsing session.

Simpler, but slightly unsafe

You can also simply switch to using Export dumps via 'saveAs' by default instead, disabling the other archiving methods.

I expect this to work fine for 99.99% of the users 99.99% of the time, but, technically speaking, this is unsafe. Also, by default, browser’s UI will be slightly annoying, since Hoardy-Web will be generating new “Downloads” all the time, but that issue can be fixed with a small about:config change.

Most convenient, less paranoid

In theory, running the hoardy-web-sas simple archiving server script listening on a loopback IP address should prevent web pages from accessing it, since browsers disallow cross-origin requests from non-localhost domains to localhost, thus making the normal Submit dumps via 'HTTP' setup quite viable. However, Tor Browser is configured to proxy everything via the Tor network by default, so you need to configure it to exclude the requests to hoardy-web-sas from being proxied.

A slightly more paranoid than normal way to do this is as follows:

Why? When using Tor Browser, you probably don’t want to use 127.0.0.1 and 127.0.1.1, as those are the normal loopback IP addresses used by most things, and you probably don’t want to allow any JavaScript code running in Tor Browser to (potentially, if there are any bugs) access those. Yes, if there are any bugs in the cross-domain check code, with this setup JavaScript could discover you are using Hoardy-Web (and then, in the worst case, DoS your system by flooding your disk with garbage dumps), but it won’t be able to touch the rest of your stuff listening on your other loopback addresses.

So, while this setup is not super-secure if your Tor Browser allows web pages to run arbitrary JavaScript (in which case, let’s be honest, no setup is secure), with JavaScript always disabled, to me, it looks like a completely reasonable thing to do.

Best of both

In theory, you can have the benefits of both invisibility of archival to local storage and convenience, guarantees, and error reporting of archival to an archiving server at the same time:

In practice, doing this manually all the time is prone to errors. Automating this away is on the TODO list.

Then, you can improve on this setup even more by running both the Tor Browser and hoardy-web-sas in separate containers/VMs.

Shortcuts

Hoardy-Web provides a bunch of keyboard and context menu shortcuts to allow using it in more efficient ways.

Keyboard shortcuts

Hoardy-Web provides shortcuts to:

Context menu actions

Hoardy-Web provides context menu actions to:

Error messages and codes

Error messages, as seen in generated notifications

Errors recorded in reqres, as seen in the-log

Most error codes are produced by attaching one of the following prefixes to the raw error code given by the browser:

In particular, webRequest::NS_ prefix on Firefox, and webRequest::net:: and debugger::net:: prefixes on Chromium signify various issues produced by the networking stacks of those browsers. For instance:

The exceptions to the above rule of keeping everything as raw as possible are the webRequest::capture:: and debugger::capture:: prefixes, which signify various errors produced by Hoardy-Web itself in its webRequest- or debugger-handling code, respectively. In particular:

Quirks and Bugs

If you are reading this page outside of the extension’s UI be sure to read the very top of this page first.

Known Hoardy-Web’s own issues

Known issues that are consequences of issues of all supported browsers

Known issues that are consequences of issues of Firefox-based desktop browsers: Firefox, Tor Browser, LibreWolf, etc

Known issues that are consequences of issues of Firefox-based mobile browsers: Fenix aka Firefox for Android, Fennec, Mull, etc

All of the above apply, moreover:

Known issues that are consequences of issues of Chromium-based desktop browsers: Chromium, Chrome, etc

On Chromium-based browsers, there is no way to get HTTP response data without attaching Chromium’s debugger to the tab from which a request originates. This makes things a bit tricky, for instance:

Moreover, Chromium has the following long-standing issues/bugs making things difficult:

Frequently Asked Questions

If you are reading this page outside of the extension’s UI be sure to read the very top of this page first.

General

Does Hoardy-Web send any of my captured web browsing data anywhere?

Hoardy-Web only ever sends your data to the archiving Server URL you specify when the Submit dumps via 'HTTP' option is enabled.

Nowhere else. Never else.

Does Hoardy-Web collect and send any telemetry anywhere?

For your convenience, Hoardy-Web saves some global stats across restarts (e.g., the Collected, Discarded, Picked, and Dropped lines).

However, none of those are ever sent anywhere and you can reset them at any time.

Will the answers to the above two questions ever change in a future version of Hoardy-Web?

No. I (the author) hate non-consensual data collection.

In fact, as you might have noticed, Hoardy-Web, unlike most other browser extensions, is almost trivial to reproducible-build from source on a POSIX-compliant system with a Nix package manager installed, and it has a privately operated source code mirror.

This is by design: I expect a chunk of Hoardy-Web users to be paranoid enough to only ever build it from source and install the results manually into their LibreWolf or some such, leaving zero telemetry fingerprints anywhere.

Hoardy-Web asks for a lot of permissions. What does it use all those permissions for?

Capture

Can I use Hoardy-Web to capture web pages while my browser runs with JavaScript disabled?

Yes.

Can I use Hoardy-Web to capture web pages that use a lot of JavaScript?

This is why the DOM-snapshot buttons exist; see the following question.

In principle, Hoardy-Web will capture everything your browser fetches from the network as you browse the web, except for, at the moment, WebSockets data. So, web pages using only simple UI-related JavaScript code will work fine when you start replaying them “from scratch” via hoardy-web serve, hoardy-web mirror, or some such.

However, in the most general case, “from scratch” replay of pages dynamically generated via JavaScript is not guaranteed. For example, consider a web page with a JavaScript code that generates a random number, then queries a remote server with that number, and then renders the result somehow. Obviously, such a web page can not be replayed “from scratch” since it will generate a new random number and your archive probably won’t have the corresponding server’s response for it.

Can I use Hoardy-Web to capture a web page as it currently is, after all JavaScript was run, not as it was when it was last fetched from the network?

Yes, you can capture DOM (Document Object Model) snapshots of all frames of the currently active tab by pressing this button in the popup.

Doing that will generate and capture snapshots of the raw HTML or XML of each frame contained in the currently active tab. (Reqres-wise, they will be 200 OK responses, but with protocol set to SNAPSHOT and method set to DOM.)

You can also do that for all open tabs for which this setting is enabled all at once by pressing that button.
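
Conceptually, such a snapshot is just the serialized DOM of each frame at that moment. A minimal sketch of the idea, for the top frame of the currently active tab only (the real feature also walks child frames, handles XML documents, and records the result as a reqres):

  // Take a DOM snapshot of the currently active tab's top frame.
  async function snapshotActiveTab() {
      const [html] = await browser.tabs.executeScript({
          code: "document.documentElement.outerHTML",
      });
      return html; // recorded as a 200 OK reqres with protocol SNAPSHOT, method DOM
  }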

How can I make Hoardy-Web capture a web page completely, especially when parts of it are loaded lazily?

In the most general case, you will have to scroll the page around and click random buttons and media elements.

Hoardy-Web has no “autopilot” for doing this, nor will it ever get one, at least as part of Hoardy-Web extension, since “autopiloting” is very website-specific. So, at the moment, the most general semi-automated solution is to run a website-specific UserScript via Tampermonkey or some such, wait until everything finishes loading, and then take a snapshot. (Hoardy-Web will get an integration for automating that, eventually.)

On the other hand, if you

then you can simply go to about:config and toggle dom.image-lazy-loading.enabled to false. All images will start being loaded eagerly after that.

Can I use Hoardy-Web to capture a web page without archiving it, look at it, decide if I want to save it, and archive it only if I do, all without reloading the page a second time?

Yes. This is why Pick into limbo setting exists. See above for more info.

In combination with Closed tabs options you can implement any of the following workflows:

Why can pages under https://addons.mozilla.org/ and https://chromewebstore.google.com/ not be captured by Hoardy-Web?

Browsers prevent extensions from running on extension store pages to prevent them from manipulating ratings, reviews, and other such things. However, you can archive https://addons.mozilla.org/ pages by running Hoardy-Web under Chromium and https://chromewebstore.google.com/ pages by running Hoardy-Web under Firefox.

When running Hoardy-Web under Chromium, a lot of my captures fail with debugger::capture::EMIT_FORCED::BY_DETACHED_DEBUGGER, debugger::capture::NO_RESPONSE_BODY::DETACHED_DEBUGGER, webRequest::capture::CANCELED::NO_DEBUGGER, and similar errors. What do I do?

You are either

Also, Chromium will occasionally detach its debugger at random; it just happens.

When running Hoardy-Web under Firefox, some of my captures fail with webRequest::capture::RESPONSE::BROKEN. What do I do?

This is a rare error caused by a race condition between webpage’s service/shared worker and browser’s networking code.

Usually, you can ignore this error, since loading another related page is likely to fulfill the same URL.

However, if this happens a lot to you, or if it annoys you, you can go to about:config, toggle dom.serviceWorkers.enabled to false, and restart the browser. Alternatively, you can use NoScript or some such extension to disable JavaScript, and thus the offending service/shared workers, on the page in question.

Why does a (specific) URL or some part of it fail to be properly captured by Hoardy-Web?

Did you read the notes on the bugs of the browser you are using?

Most notably:

Archival

The documentation claims that all Hoardy-Web archival methods except for submission via HTTP are unsafe. Why?

Archival by exporting using saveAs (generation of fake-Downloads) can fail and lose a bit of your collected data at a time if you press a wrong button in your browser’s UI, slightly mis-configure your browser, or your disk runs out of space unexpectedly.

Archival to the browser’s local storage (which is what Hoardy-Web does by default) can lose all of your collected data at once if you uninstall the extension by accident.

Meanwhile, archival by submission via HTTP has none of these problems:

Archival to browser’s local storage was added because it was very easy to implement after the-stash was added. It is the default because it usually works fine, it properly reports errors, has the most consistent behaviour across all browsers, and does not require the user to install any Python code, which helps with on-boarding.

In the ideal world, browsers would provide a better saveAs API which would have a less annoying UI for the user and would return out-of-disk-space errors to the extension, in which case exporting via saveAs would be the default.

As it is now, the only way to be absolutely sure your data is properly forever-saved to disk when the extension reports it archived is to use submission via HTTP.

When running Hoardy-Web under Firefox, enabling export via saveAs makes the browser’s UI quite annoying. Can it be fixed?

Yes, go to about:config and toggle browser.download.alwaysOpenPanel to false.

This page does not answer my question. What do I do?

If the whole content of this page (not just this section; did you try searching for stuff with Control+F? there’s a lot of info here) does not explain your problem, open an issue on GitHub or get in touch otherwise.