pwebarc: ./tool/README.md

A suite of tools for mirroring and hoarding web pages you visit for later offline viewing, i.e. your own personal Wayback Machine that can also archive HTTP POST requests and responses, as well as most other HTTP-level data, and that follows the “archive everything now, figure out what to do with it later” philosophy.

git clone https://github.com/Own-Data-Privateer/pwebarc
git clone https://oxij.org/software/pwebarc

What?

wrrarms (pwebarc-wrrarms) is a tool for displaying, programmatically manipulating, organizing, importing, and exporting Personal Private Passive Web Archive (pwebarc) Web Request+Response (WRR) files produced by the pWebArc browser extension.

Quickstart

Installation

How to build a file system tree of latest versions of all hoarded URLs

Assuming you keep your WRR dumps in ~/pwebarc/raw, you can generate a hierarchy of symlinks under ~/pwebarc/latest, one per URL, each pointing to the most recent WRR file for that URL in ~/pwebarc/raw, via:

wrrarms organize --symlink --latest --output hupq --to ~/pwebarc/latest --and "status|== 200C" ~/pwebarc/raw

Personally, I prefer the flat_mhs format (see the documentation of the --output option below), as I dislike deep file hierarchies; it also simplifies filtering in my ranger file browser. So I do this instead:

wrrarms organize --symlink --latest --output flat_mhs --to ~/pwebarc/latest --and "status|== 200C" ~/pwebarc/raw

These commands rescan the whole of ~/pwebarc/raw and so take a while to complete. If you have a lot of WRR files and want to keep your symlink tree updated in real time, you can use a two-stage --stdin0 pipeline, as shown in the examples section below.
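For illustration, such a pipeline could look like the following sketch: the first stage moves freshly dumped files from the archiving server's output directory into ~/pwebarc/raw while printing the resulting paths in zero-terminated form, and the second stage symlinks just those files. The --zero-terminated flag on the first stage is an assumption here, so consult the examples section for the authoritative invocation:

wrrarms organize --move --zero-terminated --output hupq --to ~/pwebarc/raw ../dumb_server/pwebarc-dump | wrrarms organize --stdin0 --symlink --latest --output flat_mhs --to ~/pwebarc/latest --and "status|== 200C"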

How to generate a local offline website mirror like wget -mpk

If you want to render your WRR files into a local offline website mirror containing interlinked HTML files and their resources, à la wget -mpk (wget --mirror --page-requisites --convert-links), run one of the above --symlink --latest commands and then do something like this:

wrrarms export mirror --to ~/pwebarc/mirror1 ~/pwebarc/latest/archiveofourown.org

On completion, ~/pwebarc/mirror1 will contain a bunch of interlinked minimized HTML files, their resources, and everything else available from the WRR files living under ~/pwebarc/latest/archiveofourown.org.

By default, all the links in exported HTML files will be remapped to local files (even if the source WRR files for those would-be exported files are missing under ~/pwebarc/latest/archiveofourown.org), and those HTML files will also be stripped of all JavaScript, CSS, and other stuff of various levels of evil (see the documentation for the scrub function below).

On the plus side, the result will be completely self-contained and safe to view with a dumb unconfigured browser.

If you are unhappy with this behaviour and, for instance, want to keep the CSS and produce human-readable HTML, run the following instead:

wrrarms export mirror -e 'response.body|eb|scrub response +all_refs,-actions,+styles,+pretty' --to ~/pwebarc/mirror2 ~/pwebarc/latest/archiveofourown.org

Note, however, that CSS resource filtering and remapping are not implemented yet.

If you also want links that point to not-yet-hoarded Internet URLs to keep pointing to those URLs in the exported files, instead of pointing to non-existent local files, similarly to what wget -mpk does, run wrrarms export mirror with --remap-open, e.g.:

wrrarms export mirror -e 'response.body|eb|scrub response +all_refs,-actions,+styles,+pretty' --remap-open --to ~/pwebarc/mirror3 ~/pwebarc/latest/archiveofourown.org

Finally, if you want a mirror made of raw files without any content censorship or link conversions, run:

wrrarms export mirror -e 'response.body|eb' --to ~/pwebarc/mirror-raw ~/pwebarc/latest/archiveofourown.org

The latter command will render your mirror pretty quickly, but the other above-mentioned commands will call the scrub function, which is pretty slow (averaging ~5 MiB or ~3 files per second on my 2013-era laptop), mostly because html5lib, which wrrarms uses for paranoid HTML parsing and filtering, is fairly slow.

Using --root and --depth

As an alternative to (or in combination with) keeping a symlink hierarchy of latest versions, you can load (an index of) an assortment of WRR files into wrrarms’s memory and then export mirror only selected URLs (plus all the resources needed to properly render those pages) by running something like:

wrrarms export mirror --to ~/pwebarc/mirror4 \
  --root 'https://archiveofourown.org/works/3733123?view_adult=true&view_full_work=true' \
  --root 'https://archiveofourown.org/works/30186441?view_adult=true&view_full_work=true' \
  ~/pwebarc/raw/*/2023

(wrrarms loads (indexes) WRR files pretty fast, so if you are running from an SSD, you can totally feed it years of WRR files and then export only a couple of URLs, and it will take a couple of seconds to finish anyway.)

There is also the --depth option, which works similarly to wget’s --level option: it will follow all jump (a href) and action links accessible within no more than --depth browser navigations from the recursion --roots, and then export mirror all those URLs (and their resources) too.

When using --root options, --remap-open works exactly like wget’s --convert-links in that it will only remap the URLs that are going to be exported and will keep the rest as-is. Similarly, --remap-closed will consider only the URLs reachable from the --roots in no more than --depth jumps as available.
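For example, to also export everything reachable within one jump or action from one of the above --roots, you could run something like this (the output directory name here is just illustrative):

wrrarms export mirror --to ~/pwebarc/mirror5 \
  --root 'https://archiveofourown.org/works/3733123?view_adult=true&view_full_work=true' \
  --depth 1 \
  ~/pwebarc/raw/*/2023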

How to generate local offline website mirrors like wget -mpk from your old mitmproxy stream dumps

Assuming mitmproxy.001.dump, mitmproxy.002.dump, etc. are files that were produced by running something like

mitmdump -w +mitmproxy.001.dump

at some point, you can generate website mirrors from them by first importing them all into WRR format:

wrrarms import mitmproxy --to ~/pwebarc/mitmproxy mitmproxy.*.dump

and then running export mirror as above, e.g. to generate a mirror for all URLs:

wrrarms export mirror --to ~/pwebarc/mirror ~/pwebarc/mitmproxy

How to generate previews for WRR files, listen to them via TTS, open them with xdg-open, etc.

See the script sub-directory for examples that show how to use pandoc and/or w3m to turn WRR files into previews and readable plain text that can be viewed or listened to via other tools, or dump them into temporary raw data files that can then be immediately fed to xdg-open for one-click viewing.
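As a minimal sketch of the general idea (the scripts under script do this more carefully), you could dump a response body into a temporary file and feed it to w3m or xdg-open; the use of -e with get here mirrors its use with export mirror above and is an assumption:

wrrarms get -e 'response.body|eb' ../dumb_server/pwebarc-dump/path/to/file.wrr > /tmp/reqres-body.html
w3m -T text/html -dump /tmp/reqres-body.html
xdg-open /tmp/reqres-body.html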

Usage

wrrarms

A tool to pretty-print, compute and print values from, search, organize (programmatically rename/move/symlink/hardlink files), import, export, (WIP: check, deduplicate, and edit) pWebArc WRR (WEBREQRES, Web REQuest+RESponse) archive files.

Terminology: a reqres (Reqres when a Python type) is an instance of a structure representing HTTP request+response pair with some additional metadata.

wrrarms pprint

Pretty-print given WRR files to stdout.
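For example:

wrrarms pprint ../dumb_server/pwebarc-dump/path/to/file.wrr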

wrrarms get

Compute output values by evaluating expressions EXPRs on a given reqres stored at PATH, then print them to stdout terminating each value as specified.
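For instance, to print just the potentially-decoded response body of a single file (assuming get accepts -e expressions the same way export mirror does above):

wrrarms get -e 'response.body|eb' ../dumb_server/pwebarc-dump/path/to/file.wrr | less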

wrrarms run

Compute output values by evaluating expressions EXPRs for each of NUM reqres stored at PATHs, dump the results into newly generated temporary files terminating each value as specified, spawn a given COMMAND with the given arguments ARGs and the resulting temporary file paths appended as the last NUM arguments, wait for it to finish, delete the temporary files, and exit with the return code of the spawned process.
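For instance, the following sketch (assuming NUM defaults to 1 when a single PATH is given) would page through the default output of a single reqres:

wrrarms run less ../dumb_server/pwebarc-dump/path/to/file.wrr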

wrrarms stream

Compute given expressions for each of the given WRR files, encode them into a requested format, and print the result to stdout.
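For example, assuming stream, like organize above, also accepts whole directories as inputs:

wrrarms stream --format=cbor -ue . ~/pwebarc/raw | less

See the “How to handle binary data” section below for more on the available formats.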

wrrarms find

Print paths of WRR files matching specified criteria.
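For example, assuming find shares the --and filter syntax used with organize in the quickstart above:

wrrarms find --and "status|== 200C" ~/pwebarc/raw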

wrrarms organize

Parse given WRR files into their respective reqres and then rename/move/hardlink/symlink each file to DESTINATION with the new path derived from each reqres’ metadata.

Operations that could lead to accidental data loss are not permitted. For example, wrrarms organize --move will not overwrite any files, which is why the default --output format contains %(num)d.
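For example, a sketch of moving files out of the archiving server's dump directory and into the main archive, using the default --output format:

wrrarms organize --move --to ~/pwebarc/raw ../dumb_server/pwebarc-dump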

wrrarms import

Use the specified parser to parse data in each INPUT PATH into reqres and dump them under DESTINATION with paths derived from their metadata. In short, this is wrrarms organize --copy but for non-WRR INPUT files.

wrrarms import mitmproxy

Parse each INPUT PATH as a mitmproxy stream dump (using mitmproxy’s own parser) into a sequence of reqres and dump them under DESTINATION with paths derived from their metadata.

wrrarms export

Parse given WRR files into their respective reqres, convert to another file format, and then dump the result under DESTINATION with the new path derived from each reqres’ metadata.

wrrarms export mirror

Parse given WRR files, filter out those that have no responses, transform and then dump their response bodies into separate files under DESTINATION with the new path derived from each reqres’ metadata. In short, this is a combination of wrrarms organize --copy followed by in-place wrrarms get. In other words, this generates static offline website mirrors, producing results similar to those of wget -mpk.

Examples

Advanced examples

How to handle binary data

Trying to use response bodies produced by wrrarms stream --format=json is likely to result in garbled data, as JSON can’t represent raw sequences of bytes; thus, binary data will have to be encoded into Unicode using replacement characters:

wrrarms stream --format=json -ue . ../dumb_server/pwebarc-dump/path/to/file.wrr | jq .

The most generic solution to this is to use --format=cbor instead, which would produce a verbose CBOR representation equivalent to the one used by --format=json but with binary data preserved as-is:

wrrarms stream --format=cbor -ue . ../dumb_server/pwebarc-dump/path/to/file.wrr | less

Or you could just dump raw response bodies separately:

wrrarms stream --format=raw -ue response.body ../dumb_server/pwebarc-dump/path/to/file.wrr | less
wrrarms get ../dumb_server/pwebarc-dump/path/to/file.wrr | less