pwebarc

A suite of tools for mirroring and hoarding web pages you visit for later offline viewing. I.e., your own personal Wayback Machine that can also archive HTTP POST requests and responses, as well as most other HTTP-level data, and that follows the “archive everything now, figure out what to do with it later” philosophy.

git clone https://github.com/Own-Data-Privateer/pwebarc
git clone https://oxij.org/software/pwebarc

Recent changes

Files

Raw Source

What is pwebarc?

Personal Private Passive Web Archive (pwebarc) is a suite of tools to capture, collect, save, mirror, manage archives of (i.e. hoard), and view web pages and whole websites offline.

In short, pwebarc’s main workflow is this: you install an extension/add-on into the browser of your choice (both Firefox- and Chromium-based browsers are supported) and just browse the web while it captures and archives everything your browser fetches from the network to your local file system, in a way that can be used to reconstruct and replay your browsing session later (by default the extension captures everything, but it has lots of options controlling which data from which tabs should and should not be captured).

Screenshots

Screenshot of Firefox’s viewport with extension’s popup shown.
Screenshot of Chromium’s viewport with extension’s popup shown.

See there for more screenshots.

Why does pwebarc exist?

For a relatively layman user

So, you wake up remembering something interesting, you try to look it up on Google, you fail, eventually you remember the website you saw it on (or a tool like recoll or Promnesia helps you), you go there to look it up… and discover the site is offline/gone/a parked domain. Not a problem! Have no fear! You go to the Wayback Machine and look it up there… and discover they only archived an ancient version of it and the thing you wanted is missing from it.

Or, say, you read a cool fanfiction on AO3 years ago, you even wrote down the URL, you go back to it wanting to experience it again… and discover the author made it private… and the Wayback Machine saved only the very first chapter.

“If it is on the Internet, it is on the Internet forever!” they said. They lied!

Things vanish from the Internet all the time. The Wayback Machine is awesome, but

Meanwhile, pwebarc solves all of the above out of the box (though full-text search is currently done by other tools running on top of it).

For a user with accessibility or comfort requirements

Say, there is a web page that cannot be easily reached via curl/wget (because it is behind a paywall or a complex authentication method that is hard to reproduce outside of a browser), but for accessibility (or simple reading comfort) reasons, each time you visit that page, you want to automatically feed its source to a third-party app that strips and/or modifies the HTML markup in a website-specific way and feeds the result into a TTS engine, a Braille display, or a book reader app.

With most modern web browsers you can do TTS either out of the box or by installing an add-on (though be aware of the privacy issues when using most of these), but very few tools can do website-specific accessibility processing without also being a website-specific UI app.

Meanwhile, pwebarc with some scripts can do it.

For a technical user

Say, there’s a web page/app you use (like a banking app), but it lacks some features you want, and in your browser’s Network Monitor you can see it uses JSON RPC or some such to fetch its data, and you want those JSONs for yourself (e.g., to compute statistics and supplement the app output with them), but the app in question has no public API and scraping it with a script is non-trivial (e.g., they do complicated JavaScript+multifactor-based auth, try to detect you are actually using a browser, and they ban you immediately if not).

Or, maybe, you want to parse those behind-auth pages with a script, save the results to a database, and then do interesting things with it (e.g., track price changes, manually classify, annotate, and merge pages representing the same product by different sellers, do complex queries, like sorting by price/unit, limit results by geographical locations extracted from text labels, etc).

Or, say, you want to fetch a bunch of pages belonging to two recommendation lists on AO3 or GoodReads, get all outgoing links for each fetched page, union sets for the pages belonging to the same recommendation list, and then intersect the results of the two lists to get a shorter list of things you might want to read with higher probability.

Or, more generally, say, you want to tag web pages referenced from a certain set of other web pages with some tag in your indexing software, and update it automatically each time you visit any of the source pages.

Or, say, you want to combine a full-text indexing engine, your browsing and derived web link graph data, your states/ratings/notes from org-mode, messages from your friends, and other archives, so that you could do arbitrarily complex queries over it all, like “show me all GoodReads pages for all books not marked as DONE or CANCELLED in my org-mode files, ever mentioned by any of my friends, ordered by undirected-graph Pagerank algorithm biased with my own book ratings (so that books sharing GoodReads lists with the books I finished and liked will get higher scores)”. So, basically, you want a private personalized Bayesian recommendation system.

“Everything will have a RESTful API!” they said. They lied! A lot of useful stuff never got RESTful APIs, those RESTful APIs that do exist are frequently buggy, and you’ll probably have to scrape data from HTML anyway.

“Semantic Web will allow arbitrarily complex queries spanning multiple data sources!” they said. Well, 25 years later (“RDF Model and Syntax Specification” was published in 1999), there has been almost no progress there: the most commonly used subset of RDF does what indexing systems did in the 1970s, but less efficiently and with a worse UI.

Meanwhile, pwebarc provides some of the tools to help you build your own little local data paradise.

Features and technical details

Unlike most of its alternatives, pwebarc’s main workflow is to passively collect and archive HTTP requests and responses directly from your browser as you browse the web, instead of making you ask some tool or web app to snapshot pages for you, or forcing you to explicitly snapshot/record separate browsing sessions/tabs, thus

In other words, pwebarc is your own personal Wayback Machine which passively archives everything you see and, unlike the original Wayback Machine, also archives HTTP POST requests and responses, and most other HTTP-level data.

Technically, pwebarc is most similar to

Or, to summarize it another way, you can view pwebarc as an alternative to mitmproxy that leaves the SSL/TLS layer alone and hooks into the target application’s runtime instead.

In fact, an unpublished and now irrelevant ancestor project of pwebarc was a tool to generate website mirrors from mitmproxy stream captures. (By the way, if you want that, pwebarc’s wrrarms tool can do that for you. It can take mitmproxy dumps as inputs.) But then I got annoyed by all the sites that don’t work under mitmproxy, did some research into the alternatives, decided there were none I wanted to use, and so I made my own.

Highlights of differences when compared to the alternatives

To highlight the main differences to its alternatives, pwebarc DOES NOT:

Parts and pieces

Required

Optional

Technical Philosophy

Firstly, pwebarc is designed to be simple (as in adhering to the Keep It Stupid Simple principle) and efficient (as in running well on ancient hardware):

Secondly, pwebarc is built to follow “capture and archive all the things as they are now, as raw as possible, modify those archives never, convert to other formats and extract values on-demand” philosophy.

Meaning,

pwebarc expects you to treat your older pre-pwebarc archives you want to convert to WRR similarly:

This way, if wrrarms has some unexpected bug, or wrrarms import adds some new feature, you could always re-import them later without losing anything.

Supported use cases

For a relatively layman user

Currently, pwebarc has two main use cases for regular users, in both of which you first capture some data using the add-on and then you either

For a more technical user

Alternatively, you can programmatically access that data by asking wrrarms to dump WRR files into JSONs or verbose CBORs for you, or you can just parse WRR files yourself with readily-available libraries.
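For instance, here is a minimal sketch of the “parse WRR files yourself” route, under the assumption (which is my understanding of the format) that each .wrr file is a single, possibly gzip-compressed, CBOR value; the cbor2 library and the gzip check are assumptions of this illustration, and the exact field layout is just pretty-printed rather than asserted:

    # parse_wrr.py: a minimal sketch of parsing .wrr files yourself.
    # Assumes each .wrr file is a single, possibly gzip-compressed, CBOR value;
    # the exact field layout is not asserted here, it is just pretty-printed.
    import gzip
    import pprint
    import sys

    import cbor2  # a readily-available CBOR library: pip install cbor2

    def load_wrr(path):
        with open(path, "rb") as f:
            data = f.read()
        if data[:2] == b"\x1f\x8b":  # gzip magic bytes
            data = gzip.decompress(data)
        return cbor2.loads(data)

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            pprint.pprint(load_wrr(path))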

Since the whole of pwebarc adheres to the philosophy described above, the simultaneous use of pWebArc and wrrarms helps immensely when developing scrapers for uncooperative websites: you just visit them via your web browser as normal, then, possibly years later, use wrrarms to organize your archives and feed the archived data into your scraper programmatically, without the need to re-fetch anything.

Given how simple the WRR file format is, in principle, you can modify any HTTP library to generate WRR files, thus allowing you to use wrrarms with data captured by other software.

Which is why, personally, I patch some of the commonly available FLOSS website scrapers to dump the data they fetch as WRR files, so that in the future I can write my own better scrapers and indexers and immediately test them on a huge database of already collected inputs.

Also, as far as I’m aware, wrrarms can do more useful stuff with your WRR archives than any other tool can do with any other file format for HTTP dumps, with the sole exception of WARC.

What does it do, exactly? I have questions.

Does the author eat what he cooks?

Yes: as of June 2024, I have been archiving all of my web traffic with pwebarc, without any interruptions, since October 2023. Before that, my preferred tool was mitmproxy.

After adding each new feature to the wrrarms CLI tool, as a rule, I feed at least the last 5 years of my web browsing into it (at the moment, most of it converted from other formats to .wrr, obviously) to see if everything works as expected.

TODO

pWebArc extension

wrrarms tool

Quickstart

On a system with Python installed

… and you are done
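For reference, the extension needs a local archiving server to submit its captures to; assuming you are running things from a checkout of this repository, starting the bundled one on the default 127.0.0.1 address looks like this:

    ./pwebarc_dumb_dump_server.py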

Assuming the extension reported success: Congratulations! You are now collecting and archiving all your web browsing traffic originating from that browser. Repeat extension installation for all browsers/browser profiles as needed.

If you just want to collect everything and don’t have time to figure out how to use the rest of this suite of tools right this moment, you can stop here and figure that out later.

When I started using mitmproxy to sporadically collect my HTTP traffic in 2017, it took about 6 months before I had to refer back to previously archived data for the first time. So, I recommend you start collecting immediately and be lazy about the rest. Also, while doing that I learned a lot about the nefarious things some of the websites I visit do in the background; now you are going to learn the same.

Next, you should read the extension’s “Help” page. It has lots of useful details about how it works and the quirks of different browsers. If you open it by clicking the “Help” button in the extension’s UI, then hovering over or clicking on links in there will highlight the relevant settings.

See the “Setup recommendations” section for best practices on configuring your system and browsers to be used with pwebarc.

How to view archived data

See the docs of the wrrarms tool.

On a system with no Python installed

On a system with Nix package manager

Setup recommendations

Using with Tor Browser

You probably don’t want to use 127.0.0.1 and 127.0.1.1 with the Tor Browser, as those are normal loopback addresses and you probably don’t want to allow stuff running under Tor to access your everyday non-Tor stuff.

Or, you could run both the Tor Browser and ./pwebarc_dumb_dump_server.py in a container/VM and use the default 127.0.0.1 address.

Alternatives and comparisons

“Cons” and “Pros” are in comparison to the main workflow of pwebarc. Most similar and easier to use projects first, harder to use and less similar projects later.

archiveweb.page and replayweb.page

Tools most similar to pwebarc in their implementation, though not in their philosophy and intended use.

Cons:

Pros:

Differences in design:

Same issues:

DiskerNet

A self-hosted web app and web crawler written in Node.js most similar to pwebarc in its intended use.

DiskerNet does its web crawling by spawning a Chromium browser instance and attaching to it via its debug protocol, which is a bit weird, but it does work, and apart from pwebarc it is the only other tool I know of that can archive everything passively as you browse, since you can just browse in that debugged Chromium window and it will archive the data it fetches.

Cons:

Pros:

Same issues:

But you could just enable request logging in your browser’s Network Monitor and manually save your data as HAR archives from time to time.

Cons:

And then you still need something like this suite to look into the generated archives.

mitmproxy

Cons:

Pros:

But you could set up SSL key dumping and then use Wireshark to capture your web traffic.

Cons:

Pros:

And then you still need something like this suite to look into the generated archives.

ArchiveBox

A web crawler and self-hosted web app into which you can feed the URLs for them to be archived.

So to make it archive all of your web browsing like pwebarc does:

Cons:

Pros:

Still, probably the best of the self-hosted web-app-server kind of tools for this ATM.

SingleFile and WebScrapBook

Browser add-ons to capture whole web pages.

Cons:

Pros:

reminiscence

A system similar to ArchiveBox, but with a built-in tagging system; it archives pages as raw HTML + a whole-page PNG rendering/screenshot — which is a bit weird, but has the advantage of not needing any replay machinery at all for re-viewing simple web pages: you only need a plain simple image viewer.

Pros and Cons are almost identical to those of ArchiveBox above.

wget -mpk and curl

Cons:

Pros:

wpull

wget -mpk done right.

Cons:

Pros:

grab-site

A simple web crawler built on top of wpull, presented to you by the ArchiveTeam, a group associated with the Internet Archive that appears to be the source of archives for most of the interesting pages I find there.

Cons:

Pros:

monolith and obelisk

Stand-alone tools doing the same thing SingleFile add-on does: generate single-file HTMLs with bundled resources viewable directly in the browser.

Cons:

Pros:

heritrix

The crawler behind the Internet Archive.

It’s a self-hosted web app into which you can feed the URLs for them to be archived, so to make it archive all of your web browsing:

Cons:

Pros:

Archivy

A self-hosted wiki that archives pages you link to in the background.

Others

The ArchiveBox wiki has a long list of related things.

If you like this, you might also like

Perkeep

It’s an awesome personal private archival system adhering to the same philosophy as pwebarc, but it’s basically an abstraction replacing your file system with a content-addressed store that can be rendered into different “views”, including a POSIXy file system.

It can do very little to help you actually archive a web page, but you can start dumping new pwebarc .wrr files with compression disabled, decompress your existing .wrr files, and then feed them all into Perkeep to be stored and automatically replicated to your backup copies forever. (Perkeep already has better compression than what pwebarc currently does and provides a FUSE FS interface for transparent operation, so compressing things twice would be rather counterproductive.)
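For example, here is a minimal sketch of the “decompress your existing .wrr files” step, under the assumption that compressed dumps are plain gzip (files without the gzip magic bytes are left untouched):

    # decompress_wrr.py: decompress .wrr dumps in place before feeding them
    # to Perkeep, so that its own compression and deduplication can work on
    # the raw data. Assumes compressed dumps are plain gzip.
    import gzip
    import pathlib
    import sys

    for path in pathlib.Path(sys.argv[1]).rglob("*.wrr"):
        data = path.read_bytes()
        if data[:2] == b"\x1f\x8b":  # gzip magic bytes
            path.write_bytes(gzip.decompress(data))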

License

GPLv3+, some small library parts are MIT.