# Data formats used by pwebarc

The file format used by pwebarc shall henceforth be called “Web Request+Response”, aka “WRR”, with the file extension `.wrr`.

Internally, a WRR file is a CBOR (RFC 8949) encoding of the following structure:

```
reqres = reqresV1

reqresV1 = [
    "WEBREQRES/1",
    source,
    protocol,
    requestV1,
    responseV1,
    endTimeStamp,
    optionalData,
]

requestV1 = [
    requestTimeStamp,
    requestMethod,
    requestURL,
    requestHeaders,
    isRequestComplete,
    requestBody,
]

responseV1 = null | [
    responseTimeStamp,
    responseStatusCode,
    responseReason,
    responseHeaders,
    isResponseComplete,
    responseBody,
]

optionalData = <map from str to anything>
```

On disk, the dumb archiving server stores them one request+response per file, compressed with gzip when that reduces the size, and uncompressed otherwise.
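This storage rule is simple enough to sketch in a few lines of Python. This is illustrative only: the function name, signature, and file handling are my assumptions, not pwebarc's actual code.

```python
import gzip

def store_wrr(path: str, cbor_bytes: bytes) -> None:
    """Store one serialized request+response, gzipping only when it helps.

    Illustrative sketch of the storage rule described above; not
    pwebarc's actual writer.
    """
    compressed = gzip.compress(cbor_bytes)
    # Keep whichever representation is smaller on disk.
    data = compressed if len(compressed) < len(cbor_bytes) else cbor_bytes
    with open(path, "wb") as f:
        f.write(data)
```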

All of the above has the advantage of making WRR files easily parsable with readily available libraries in basically any programming language: CBOR is only slightly less widely supported than JSON, but it is much more space-efficient and can represent arbitrary binary data.
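For instance, here is a minimal reader sketch in Python using the third-party `cbor2` library; the function name is hypothetical, and the gzip-magic check follows the storage rule above:

```python
import gzip
import cbor2  # third-party: pip install cbor2

def load_wrr(path: str):
    """Load a .wrr file and unpack it per the reqresV1 grammar above.

    Illustrative sketch; not pwebarc's actual reader.
    """
    with open(path, "rb") as f:
        data = f.read()
    # Files may be gzipped or raw; detect via the gzip magic number.
    if data[:2] == b"\x1f\x8b":
        data = gzip.decompress(data)
    reqres = cbor2.loads(data)
    fmt, source, protocol, request, response, end_ts, optional = reqres
    assert fmt == "WEBREQRES/1"
    # `response` may be None (CBOR null), per the grammar above.
    return source, protocol, request, response, end_ts, optional
```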

## Comparison to other web archival formats

And yet, even with all of it being this simple, directories full of non-de-duplicated `.wrr` files are still more efficient than the alternatives.

After converting all my previous wget, curl, mitmproxy, and HAR archives into this format, and with some yet-unpublished data de-duplication and xdelta compression between same-URL revisions, pwebarc becomes vastly more efficient, more efficient even than WARC.

For me, it uses about 3 GiB per year of browsing on average (roughly 5 years of mostly uninterrupted data collection at the moment), but I use things like uBlock Origin and uMatrix to cut things down, and image boorus and video hosting sites have their own separate pipelines.