Draft: Quirks of Copyright

by Jan Malakhovski, version 0.1.0, created , published , updated

A discussion of the history and quirks of copyright laws.

This article is a work in progress.

When it becomes ready, the news feed will say so.

For now, here are some snippets referenced in the other notes.

Changelog


v0.1.0 -

Table of Contents

Transient copies
Doujinshi in Japan
Literary fanfiction in the West
Musical covers
Web search engines
Digital Archives
    The Internet Archive
    arXiv and Sci-Hub, Google Books and Library Genesis
"AI" circa 2022+: High-tech plagiarism machines

So, you have a situation. The law says that in this kind of situation a certain thing should happen, but in reality, usually, something completely different happens. What do people do about it? Sometimes the law gets fixed. More frequently, however, lawyers and judges start figuring out creative ways to reinterpret the law.

Let’s discuss some examples.

Transient copies

Technically, when you watch a copyrighted video on the Internet, every intermediate node between you and the origin server is making unauthorized copies while transferring data between its networking interfaces, thus violating copyright law as it was originally defined.

This was even more true when HTTP proxies were a thing, before HTTPS took over and made them useless, since those copies were even somewhat persistent.

Similarly, when video files get loaded from an HDD to RAM, and then from RAM to video memory, it’s all unauthorized copies too.
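
For illustration, here is a trivial sketch in Python of the first of those copies (the file name is a made-up placeholder; the decoder and GPU steps are omitted since they depend on the video stack in use):

    # Reading a video file from disk copies its bytes into this process's RAM;
    # handing them to a decoder and then to the GPU copies them yet again.
    with open("movie.mkv", "rb") as f:   # hypothetical file name
        chunk = f.read(1024 * 1024)      # another copy of the work, now in RAM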

There actually was an effort on the part of big copyright holders to make RAM, networking, and video device manufacturers buy licenses for all those copies, or at least to introduce something like a blank media tax on all that RAM. Those initiatives failed, however.

In the US, the modern judicial interpretation of copyright law simply says that all those copies are “transient” and do not count. “Buffering and the Reproduction Right: When is a Copy a Copy?” by Steven Foley (2010) discusses the harrowing history of court battles and legal interpretations on this issue. The whole thing reads like a mystery thriller with a good dose of crack where both the culprit and the motive change every page. As a computer scientist, I couldn’t help but giggle throughout the whole thing (the bold emphasis is mine):

The random access memory (RAM) copy doctrine emerged from the legislative history and subsequent judicial application. The doctrine stands for the questionable notion that every transfer of a work into the volatile temporary memory of a computer makes a copy for copyright purposes. As a result, every use of a work in digital form involves the making of numerous copies.

[…]

When the CONTU Report was written the term “memory” could refer to any type of computer storage, both volatile (RAM) and non-volatile (hard disk). Because the CONTU Report recommended amending § 117 [of the Copyright Act] to permit the rightful possessor of computer software to copy or adapt it as “an essential step” in using the software, the CONTU Report can be seen as referring to disk storage.

Despite this lack of clarity, the Ninth Circuit drew a definitive line in its holding that loading software into RAM constitutes a copy.

[…]

The court held that once the program is transferred to RAM, useful representations of the program can be displayed or printed out almost instantaneously. As a result, the program residing in RAM is stable enough to be a fixed copy.

[… but then …]

Cablevision appealed and the court of appeals found that buffering did not create copies. The primary ingest buffer holds up to 0.1 second of each channel’s programming at any moment. Consequently, the buffer erases and replaces data on the buffer every tenth of a second. In addition, the BMR holds a maximum of 1.2 seconds of programming at any time. In its analysis, the court of appeals first looked at the definition of “copies” and “fixed” in § 101 of the Copyright Act. To be “fixed,” the work must satisfy two conditions: the embodiment requirement (the work is in a medium that enables it to be “perceived, reproduced, or otherwise communicated”) and the duration requirement (the work’s embodiment must last for a “period of more than transitory duration”). Both requirements must be met to consider the material of the copyrighted work in the buffers fixed and therefore a copy of the original work.

[… but then …]

The court refuted the lower court’s analysis in Cablevision I by distinguishing the MAI Sys. Corp. cases and marginalizing the DMCA Report. Generally, these cases concluded a copy is fixed without expressly addressing the duration requirement. However, the Cablevision II court asserted that this does not assume or establish that the duration requirement does not exist. Moreover, in these cases, the duration requirement was not an issue and was therefore distinguishable. The issue in MAI Sys. Corp. was whether loading software into the computer’s RAM created a copy as defined by the Act and this depended on whether the version of software present in the RAM was fixed. The RAM embodiment of the operating software constituted a copy because the technician was able to view the system’s error log and diagnose the problem. In Cablevision II, the court surmised that the parties did not litigate the duration requirement. Besides, the court assumed this analysis of duration was not necessary in the line of RAM copy doctrine cases because the “program was embodied in RAM for at least several minutes.” As a result, duration analysis was not needed and the reasoning is not dispositive to the facts present here

[… but then …]

After dispensing with the sources of the district court’s holding, the court concluded that the definition of fixed imposes both an embodiment and a durational requirement. The court determined that the data present in the buffer meets the embodiment requirement “where every second of an entire work is placed, one second at a time.” However, the buffer stores data for no more than “a fleeting 1.2 seconds.” The court sees this as transitory and failing the duration requirement.

[…]

And that is not even the end of it, but the last paragraph above is what the courts eventually settled on.
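
For the technically inclined, here is a minimal sketch in Python of the kind of fixed-duration ingest buffer those excerpts describe (my own illustration, not Cablevision’s actual system): any given moment of the work survives in it for at most 1.2 seconds before being overwritten.

    from collections import deque

    # A fixed-duration buffer: it never holds more than `max_seconds` of the
    # stream, silently overwriting the oldest data as new data arrives.
    class IngestBuffer:
        def __init__(self, max_seconds, chunk_seconds=0.1):
            # Each stored chunk stands for `chunk_seconds` of programming.
            self.chunks = deque(maxlen=round(max_seconds / chunk_seconds))

        def push(self, chunk):
            # When the deque is full, appending drops the oldest chunk, so the
            # "copy" of any given moment of the work exists for at most
            # `max_seconds` before being overwritten.
            self.chunks.append(chunk)

    buffer = IngestBuffer(max_seconds=1.2)
    for _ in range(600):             # a minute of stream in 0.1-second chunks
        buffer.push(b"\x00" * 1000)  # dummy data standing in for video
    print(len(buffer.chunks))        # prints 12: at most 1.2 s is ever embodied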

I can totally imagine reading some of the arguments in those court cases to a class of CS students as a stand-up comedy performance.

Anyway, in the EU the above was then put into law simply as a copyright exception originating from the same WIPO copyright treaty that introduced anti-circumvention laws. Meanwhile, in the US, these issues were litigated over and over even after the WIPO treaty supposedly “settled” the issue. Which seems a bit hypocritical to me, since Clinton’s Administration and WIPO pushed that “settlement” onto all US trade partners in 1996–1998, yet that “settlement” was not accepted in the US itself for quite a while afterwards. It would have been interesting if all WIPO signatories except the US had ended up using the above interpretation, while the US applied the RAM copy doctrine, or some variant of it, to everything instead.

Doujinshi in Japan

In Japan, since about the 1970s, copyright law with respect to doujinshi — which are fan works using copyrighted characters — has become almost toothless. Doujinshi are made, sold, and bought freely, even though doing so is, technically, against copyright law.

It works that way because, whenever copyright holders tried to enforce copyright against doujinshi authors, fans started boycotting their stuff, again and again. As a result, the doujinshi market grew so much that the twice-yearly Comiket manga market festival now features tens of thousands of “manga circles” — which are little businesses employing a handful of artists — making and selling literal tons of comics featuring copyrighted characters (most of which are porn, yes). And it’s not the only one: some franchises, like Touhou, even have their own dedicated festival markets!

This practice became so common and accepted that Japanese courts eventually started creatively reinterpreting copyright law:

The Supreme Court of Japan addressed this in the Popeye case (July 17, 1997). The Court held that a character, when detached from its specific depiction in a particular installment of a work (in this case, a serialized manga), is an “abstract concept” and not, in itself, a copyrighted work. Instead, each specific, creative depiction of the character within each installment (e.g., each manga panel or animation cel showing Popeye) can be considered a copyrighted work, or part of one. This means that copyright in a character is tied to its particular expressive form.

Some argue that this is why the Japanese media market is so much more vibrant than what the West has: new authors can experiment and become famous with fan works first, not purely as a hobby, but making at least some money doing it, which makes more of them stay independent businesses instead of vanishing into large corporations. So much so that, at one point, Japanese media nearly took over the Western youth media market with fan-translated manga and anime (through a language barrier!); see “Copyright and Comics in Japan: Does Law Explain Why All the Cartoons My Kid Watches are Japanese Imports?” by Salil K. Mehra (2002) for a scholarly reference, but this was also my personal experience when I was a teen.1

Literary fanfiction in the West

Something similar to the abovementioned Japanese doujinshi case happened with literary fan-made fiction works in the West. Before the Internet became popular, publishing a “sequel” to a copyrighted work would get treated as a copyright violation. Today, this is so common nobody blinks at it. Websites like AO3, FFN, and RoyalRoad, as well as many web forums hosting fanfiction, flourish.

I don’t know the exact details of the legal mental gymnastics that allow legal professionals to explain this, but the source of it is quite clear to me: statistically speaking, it’s quite likely that a judge’s teenage daughter is writing and publishing Twilight fanfiction in her spare time.

Additionally, while it’s still uncommon for fanfiction authors to make money from their works, it does happen, usually via one of the following methods:

Musical covers

WIP.

Web search engines

Technically, all web search engines (e.g., Google) copy, cache, and then index web documents, thus violating copyright law as it was originally defined. A website answering an HTTP request does not imply that it also gives the requester a license to re-distribute any part of the result before the copyright on that page expires. Additionally, it could technically be argued that all such systems violate the Computer Fraud and Abuse Act (CFAA) by scraping websites whose Terms of Use forbid it.

But they mostly get away with it via hyperlinking and framing exceptions, which are treated as falling under the fair use and fair dealing doctrines in the US and the UK, and under variants of the “quotation with a specified source” copyright exception in most other jurisdictions.

Additionally, as a defense against accusations of unlawful copying and CFAA violations, websites can opt out of indexing via robots.txt.
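
As an aside, the opt-out mechanism itself is trivially simple. Here is a minimal sketch of how a well-behaved crawler consults it, using Python’s standard urllib.robotparser (the domain and user agent name are placeholders):

    from urllib import robotparser

    # Fetch and parse the site's robots.txt.
    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    # A well-behaved crawler asks this before fetching any page.
    if rp.can_fetch("ExampleBot", "https://example.com/some/article"):
        print("allowed to crawl this path")
    else:
        print("the site has opted out for this path")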

For instance, this appears to be the reason why Aaron Swartz was charged under the CFAA. Allegedly, JSTOR and MIT tried to stop him from downloading data and he worked around their measures with MAC address spoofing. By that logic, all operators of automated web crawlers that ignore robots.txt, which is quite common, could also be charged under the CFAA.

Though the exact minutiae of what is and is not allowed are pretty complex: generating links with an automated system on a user’s request is okay, but if that link points to a news article or a place where users might buy tickets, then it’s complicated, and if you “knowingly” link to a place that violates copyright, that is not okay.

For news, the EU tried to place a “link tax” on Google News via the Directive on Copyright in the Digital Single Market

Germany (with the ancillary copyright for press publishers law) and Spain tested a link tax: it was considered a “complete disaster” which cost them millions of euros. Google shut down Google News in Spain and stopped using linked snippets of German articles completely.

after which that initiative was abandoned. Canada, however, managed to force Google to pay a flat tax of $100M a year for scraping news.

Reading lists of those court cases, I can’t help but notice that they all fight over the little minutiae of where on a website a search engine may link to, or how it should present excerpts and image search results. But the legality of a web search engine that copies all of the Internet into its cache for indexing, as a general concept, is simply assumed.

The argument for their legality under most copyright systems is that a web search engine is like a library catalog/index card system (which falls under various “quotation with a specified source” copyright exceptions), but for the Internet. However, the mechanism by which a web search engine operates is completely different from how a library index card system operates. A library lawfully obtains copies of all its books, then indexes them. A web search engine crawls most of the Internet without asking anyone, saves copies of everything into its cache, and then indexes all those pages.
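
To make that mechanical difference concrete, here is a deliberately naive sketch of the crawl, cache, and index steps in Python (all names are made up for illustration; real search engines are vastly more complex, but the copyright-relevant steps are the same):

    import urllib.request
    from collections import defaultdict

    cache = {}                # url -> verbatim copy of the fetched page
    index = defaultdict(set)  # word -> set of urls containing that word

    def crawl(url):
        # Fetch the page without asking anyone for a license...
        with urllib.request.urlopen(url) as response:
            html = response.read().decode("utf-8", errors="replace")
        cache[url] = html             # ...keep a verbatim copy of it...
        for word in html.lower().split():
            index[word].add(url)      # ...and file it under every word it contains.

    def search(word):
        return index.get(word.lower(), set())

    # crawl("https://example.com/")
    # print(search("example"))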

However, it appears that it does not matter how a tool operates; what matters is how its output looks to a layperson and who benefits. This shall be important below.

Digital Archives

The Internet Archive

The Internet Archive, its Wayback Machine, and other similar archiving services technically violate the copyright of most websites they archive, similarly to the web search engines discussed above. Most web search engines have a function for viewing the cached page; web archives usually have more than one version of each page stashed away. So, for copyright purposes, a web archive is a web search engine turned up to 11. Thus, everything discussed above applies here, but more so.

Note that the library catalog argument does not apply here, as the search function itself is optional. So, how do they get away with it?

Firstly, the robots.txt argument of voluntary inclusion does still apply. In the case of the Wayback Machine, domain owners can also request deletion of previously archived copies, if they so desire.

Secondly, the Internet Archive is a non-profit corporation, which makes its fair use arguments stronger.

Thirdly, the Internet Archive styles itself after a library, and has a pretty building, which looks legitimate.

The fact that I found no better arguments for this tells me that the practical reality, again, comes down to how it looks and who benefits. Most legal professionals use web search engines and web archives and find those services too useful to kill. The Internet Archive even has a formal affidavit request procedure. The fact that it was treated with kid gloves in a legal case where copyright violations were absolutely clear-cut suggests that explanation too.

arXiv and Sci-Hub, Google Books and Library Genesis

arXiv and Sci-Hub, from a copyright standpoint, are effectively the same thing. It’s just that authors publish their own articles to the former before publication in a journal, while the latter grabs the publication from that journal afterwards. Most of the time it’s literally the same content.

Scientific publishers forbidding authors from publishing to arXiv used to be a thing, but it no longer is. It became untenable after the proof of the Poincaré conjecture was attributed to Grigori Perelman because he published it to arXiv, could not be stolen from him because he published it to arXiv, and he then received the Fields Medal and the Millennium Prize for it (neither of which he accepted, but still).

Similarly, Google Books and Library Genesis do essentially the same thing. The former is legal, the latter is not. Though, in the former case, Google went through a lot of litigation to keep the service.

Today, the voluntary inclusion argument would probably be how a lawyer would explain why arXiv and Google Books are legal while Sci-Hub and Library Genesis are not. But I also can’t help but notice that all the legal things mentioned above are run by US organizations and all the illegal things are not.

In the case of Sci-Hub, even if you dismiss Alexandra Elbakyan’s ideological arguments for it

Elbakyan questioned the morality of the publishers’ business and the legality of their methods in regards to the right to science and culture under Article 27 of the Universal Declaration of Human Rights, while maintaining that Sci-Hub should be “perfectly legal”.

there are several good arguments why it should be legal anyway.

The reality, however, is simply that Sci-Hub has already changed scientific publishing, even though the publishers won’t admit it. See there.

“AI” circa 2022+: High-tech plagiarism machines

WIP.


  1. I frequently hear arguments based on the primary types of content of those shows, comics, and books, but I mostly dismiss those.

    Like, sure, they have a ton of shallow and/or conventionally attractive stuff, like harems of teenage girls in serafuku. But they also have a ton of yaoi and yuri titles, a manga and an anime about a boy who spends most of his time being a girl, a manga and an anime about a gay trans/cross-dressing (it’s a bit unclear there) boy, and probably others I’m not aware of. Ranma ½, for example, is one of the more popular titles within the Western Woke fanfiction crowd. Also, they have a ton of simply wild stuff like “The Flowers of Evil” (for which the manga and the anime are also wild in completely different and horrifying ways), “Bougyaku no Kokekko”, etc.

    So, in my view, the Japanese market simply has a high proportion of wild stuff, which is what makes it attractive. And that’s a direct consequence of more independent creators doing their own thing without a boss “guiding” them.↩︎