Blocking Internet Archive Won't Stop AI, but Will Erase Web's Historical Record

431 points - today at 7:30 AM

VladVladikoff today at 1:33 PM
As a site operator who has been battling an influx of extremely aggressive AI crawlers, I'm now wondering if my tactics have accidentally blocked the Internet Archive. I'm totally OK with them scraping my site, and they would likely obey robots.txt, but these days even Facebook ignores it and exceeds my stipulated crawl delay by distributing its traffic across many IPs. (I even have a special nginx rule just for Facebook.)

Blocking certain JA3 hashes has so far been the most effective countermeasure. However, I wish there were an nginx wrapper around hugin-net that could help me do TCP fingerprinting as well, as I do not know Rust and feel terrified of asking an LLM to write it. There is also a race-condition issue with that approach: since the fingerprinting is passive, even the JA4 hashes won't be available for the first connection, and the AI crawlers I've seen make only one request per IP, so you never get a chance to block a second request (it never happens).
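
Roughly, the blocking boils down to something like the following sketch (Python; it assumes the ClientHello fields have already been extracted by some passive capture layer, which is the hard part and is omitted here, and the blocklisted hash is just a placeholder, not a real crawler fingerprint):

    import hashlib

    # Placeholder MD5 hashes of fingerprints observed on abusive crawlers.
    BLOCKED_JA3 = {"e7d705a3286e19ea42f587b344ee6865"}

    def ja3_hash(version, ciphers, extensions, curves, point_formats):
        # JA3 is the MD5 of "version,ciphers,extensions,curves,pointformats",
        # with each list of integers joined by dashes.
        fields = [
            str(version),
            "-".join(map(str, ciphers)),
            "-".join(map(str, extensions)),
            "-".join(map(str, curves)),
            "-".join(map(str, point_formats)),
        ]
        return hashlib.md5(",".join(fields).encode()).hexdigest()

    def should_block(client_hello):
        # client_hello is a dict of fields parsed from the TLS handshake.
        return ja3_hash(client_hello["version"],
                        client_hello["ciphers"],
                        client_hello["extensions"],
                        client_hello["curves"],
                        client_hello["point_formats"]) in BLOCKED_JA3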

catapart today at 2:08 PM
I'm seeing a lot of comments about how we maintain the status quo, but I'm very interested in hearing from anyone who has conceded that there is no way to stop AI scrapers at this point, and who is thinking about what that means for how we maintain public information on the internet in the future.

I don't necessarily believe that we won't find some half-successful solution that allows server hosting to carry on as it currently does, but I'm not at all sure I'll want to participate in whatever schemes come out of it, so I'm thinking more about how I can avoid those schemes than insisting they won't exist or won't work.

The prevailing thought is that, if it isn't already possible, it won't be long before an LLM agent is indistinguishable from a human using a browser. It can start a GUI session, open a browser, navigate to your page, take a snapshot at the OS level and reconstruct your content from the snapshot, or use the browser dev tools or whatever to scrape your page that way. And yes, that would be much slower and more inefficient than what scrapers currently do, but they would only need to do it for the sites that stay on the bleeding edge of anti-AI defenses. Everyone else is in a security race against highly paid interests. So the idea that you can put something on the public internet and still stop people from archiving it (for whatever purpose they want) seems like it will soon be an old-fashioned one.

So, taking it as a given that you can't stop what these people are currently trying to stop (absent a legislative solution and an enforcement mechanism): how can we make scraping less of a burden on individual hosts? Is this going to coalesce around centralized "archiving" authorities that people trust to archive things, and that serve as a much more structured and friendly way for LLMs to scrape? Or is it more likely someone will come up with a way to punish LLMs or their hosts for "bad" behavior? Or am I completely off base? Is anyone actually discussing this? And, if so, what's on the table?

tossandthrow today at 12:47 PM
I think media outlets think way too highly of their contribution to AI.

Had they never existed, it would likely not have made a dent in AI development, just as their being twice as productive would likely not have made a dent in the quality of LLMs.

ashwinnair99 today at 5:55 PM
We're essentially burning the library to punish the arsonist. The arsonist already left.

gzread today at 11:58 AM
This is why archive.is was created. Shouldn't we stop trying to hunt down and punish its creator, and instead support it as the extremely useful project that it is?

alexpotato today at 5:21 PM
As someone who did a lot of work on early spam fighting, only to see it replaced by things like DKIM, I wonder if we are going to start seeing a "taxi medallion"-style approach, but for people connecting to your site.

e.g. IA would publish a key and sign its HTTPS requests with it, so you, as the site owner, can confirm that a request is indeed from them and not from an AI crawler.

Feels like that would be very anti-open-internet, but I'm not sure how else you would prove who is a good actor and who isn't (from your perspective, that is).
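
A rough sketch of what the verification could look like on the site-owner side, in Python. The header names and signed-string format here are invented for illustration (a real scheme would more likely follow RFC 9421 HTTP Message Signatures), and a throwaway key pair is generated so the sketch runs end to end; in reality the archive would hold the private key and publish only the public key:

    import base64

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Throwaway pair for demonstration only.
    _demo_private = Ed25519PrivateKey.generate()
    ARCHIVE_PUBLIC_KEY = _demo_private.public_key()

    def is_from_archive(method, path, headers):
        # "X-Archive-Signature" / "X-Archive-Date" are hypothetical headers.
        sig = headers.get("X-Archive-Signature")
        date = headers.get("X-Archive-Date")
        if not sig or not date:
            return False
        signed = f"{method} {path}\n{date}".encode()
        try:
            ARCHIVE_PUBLIC_KEY.verify(base64.b64decode(sig), signed)
            return True
        except InvalidSignature:
            return False

    # What the archive's crawler would do on its side:
    date = "Tue, 24 Jun 2025 12:00:00 GMT"
    signature = base64.b64encode(
        _demo_private.sign(f"GET /article\n{date}".encode())).decode()
    print(is_from_archive("GET", "/article",
                          {"X-Archive-Signature": signature,
                           "X-Archive-Date": date}))  # True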

stuaxo today at 1:33 PM
The New York Times is awful; I want it to be archived so people can see that in the future.

neilv today at 6:47 PM
I'm now an AI bro, and a long-time fan of the EFF (though they occasionally make a mistake).

I think this EFF piece could be more forthright (rather than reading as political persuasion), since the matter involves balancing multiple public-interest goals that are currently in opposition.

> Organizations like the Internet Archive are not building commercial AI systems.

This NiemanLab article presents evidence that the Internet Archive explicitly encouraged crawling of its data, which was then used to train major commercial AI models:

| News publishers limit Internet Archive access due to AI scraping concerns (niemanlab.org) | 569 points by ninjagoo 34 days ago | 366 comments | https://news.ycombinator.com/item?id=47017138

> [...] over a fight that libraries like the Archive didn't start, and didn't ask for.

They started, or stumbled into, this fight through their own actions. And (ideology?) they also started, and asked for, a related fight about disregard for copyright and exploitation of creators:

| Internet Archive forced to remove 500k books after publishers' court win (arstechnica.com) | 530 points by cratermoon on June 21, 2024 | 564 comments | https://news.ycombinator.com/item?id=40754229

rkwtr1299 today at 3:48 PM
The EFF has a lukewarm stance on AI, but criticizes everyone else. AI is clearly ruining the Internet and the job market.

How about thinking about your mission and taking a hardline anti-AI stance? But I see multiple corporate sponsors that would not be pleased:

https://www.eff.org/thanks

All these so-called freedom organizations, like the OSI and the EFF, have been bought and are entirely irrelevant, if not harmful.

xnx today at 10:45 AM
Does Internet Archive have a distributed residential IP crawler program? I would enthusiastically contribute to that.

There would have to be some mechanism to prevent tampering in such a setup, of course.
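
One possible shape for that tamper check, sketched in Python and entirely hypothetical (not a description of how IA actually works): only accept a snapshot once several independent contributors report the same content hash for the same URL. Dynamic pages would need normalization before hashing for this to be workable.

    import hashlib
    from collections import defaultdict

    QUORUM = 3  # distinct contributors that must agree before acceptance

    # url -> content hash -> set of contributor ids reporting that hash
    reports = defaultdict(lambda: defaultdict(set))

    def submit(url, body, contributor_id):
        # body is the raw response bytes fetched by this contributor.
        digest = hashlib.sha256(body).hexdigest()
        reports[url][digest].add(contributor_id)

    def accepted_hash(url):
        # Return the first hash vouched for by enough distinct contributors.
        for digest, who in reports[url].items():
            if len(who) >= QUORUM:
                return digest
        return None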

user_7832 today at 11:31 AM
> But in recent months The New York Times began blocking the Archive from crawling its website, using technical measures that go beyond the web’s traditional robots.txt rules. That risks cutting off a record that historians and journalists have relied on for decades. Other newspapers, including The Guardian, seem to be following suit.

I'm a bit surprised I never read about this till now; it's disappointing, though unfortunately not surprising.

> The Times says the move is driven by concerns about AI companies scraping news content. Publishers seek control over how their work is used, and several—including the Times—are now suing AI companies over whether training models on copyrighted material violates the law. There’s a strong case that such training is fair use.

I suspect part of it might be these corporations not wanting people to skip the paywall (whether those people would have paid if they had no other access is a different story). But that argument makes no sense for the Guardian, which doesn't have a paywall.

rdiddly today at 3:48 PM
When you disappear from the historical record, that's called becoming irrelevant. The world moves on and pays attention to someone else. Not sure why the Times doesn't seem to see this angle.

b1n today at 1:42 PM
Archive now, make public after X amount of time. That way, maybe both publisher and archiver are happy (or at least less sad).
phendrenad2 today at 6:45 PM
Does IA use a known set of IPs? It should be trivial to let them through. But yeah, news companies aren't technically capable of this kind of finesse; they probably have by-the-hour contractors doing any coding/config changes, and closing the ticket is the goal there.
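
If they do publish ranges, the allowlist really is just a few lines. A sketch in Python, using RFC 5737 documentation ranges as placeholders rather than IA's actual addresses:

    import ipaddress

    # Placeholder ranges (documentation space), not IA's real ones.
    ARCHIVE_RANGES = [ipaddress.ip_network(n) for n in
                      ("198.51.100.0/24", "203.0.113.0/24")]

    def is_archive_ip(remote_addr):
        ip = ipaddress.ip_address(remote_addr)
        return any(ip in net for net in ARCHIVE_RANGES)
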
lich_king today at 4:15 PM
I am really tired of this kind of moralizing. The reality is that every time geeks come up with some utopian ideal, such as that we should publish all our software under free licenses or make all human knowledge freely accessible to anyone, the same geeks later show up and build extractive industries on top of this. Be a part of the open source revolution... so that you do unpaid labor for Facebook. Make a quirky homepage... so that we can bootstrap global-scale face recognition tech. Help us build the modern-day library of Alexandria... so that OpenAI and Anthropic can sell it back to you in a convenient squeezable tube.

Maybe it's time to admit that the techie community has a pretty bad moral compass and that we're not good stewards of the world's knowledge. We turn lofty ideals into amoral money-making schemes whenever we can. I'm not sure that the EFF's role in this is all that positive. They come from a good place, but they ultimately aid a morally bankrupt industry. I don't want archive.org to retain a copy of everyone's online footprint because I know it will be used the same way it always is: to make money off other people's labor and to erode privacy.

Havoc today at 1:19 PM
As someone perpetually online, it's also making me rethink that a bit.

Unless you love walled gardens, doomscrolling, and endless AI slop, it seems like the fun is over.

charcircuit today at 7:25 PM
The EFF is being obtuse. Using archive sites is a well-known way to read news articles for free. Every time a paywalled site is posted, someone posts an archive link so others can read it for free.

>Archiving and Search Are Legal

But giving full articles away for free to everyone is not. Archive.org has the power to make archives private.

SlinkyOnStairs today at 11:29 AM
Devil's advocate: anyone seeking to limit AI scraping doesn't have much of a choice but to block archivists as well.

And it's genuinely not that weird for news organisations to want to stop AI scraping. This is just a repeat of their fight with social media embedding.

Sure, the back catalogue should be as close to public domain as possible; libraries keeping those records is incredibly important for research.

But with current news, that becomes complicated, as taking the articles without paying for a subscription (or viewing the ads) directly takes away the revenue streams that newsrooms rely on to produce the news. Hence the "newspaper trying to ban linking" mess, which was never about the links themselves but about social media sites embedding the headline and a snippet, which in turn made users stop clicking through and "paying" for the article.

Social media relies on those newsrooms (as do, really, most other kinds of websites) to provide a lot of its content. And AI relies on them for all of its training data (remember: "synthetic data" does not appear ex nihilo) and to provide the news that AI users request. We can't just let the newsrooms die. The newsroom itself hasn't been replaced; its revenue has been destroyed.

---

And so, the question of archives pops up. Because yes, you can with some difficulty block out the AI bots, even the social media bots. A paywall suffices.

But this kills archiving. Yet if you whitelist the archives in some way, the AI scrapers will just pull their data out of the archive instead, and the newsrooms still die (which also makes the archiving moot).

A compromise solution might be for archives to accept and publish things on a delay: keep the AI companies from taking current news without paying up, while still granting everyone access to material from decades ago.

There's just major disagreement about what a reasonable delay is. Most major news orgs and other such IP holders are pretty upset about AI firms' "steal first, ask permission later" approach. Several AI firms setting the standard that training data is to be paid for doesn't help here either: in paying for training data they've created a significant market for archives, and a significant incentive not to make them freely accessible to the public.

Why would The Times ever hand over their catalogue to the Internet Archive if Amazon will pay them a significant sum of money for it? The greater good of all humanity? Good luck getting that from a dying industry.

---

Tangent: another annoying wrinkle in the financial incentives here is that not all archiving organisations play fair, which pushes people still further toward obstructing their work.

To cite a HN-relevant example: the source-code archivist Software Heritage has long held a copy of all the source code it can get its hands on, regardless of its license. If it's ever been on GitHub, odds are they're distributing it, even when the license explicitly forbids that. (This is, of course, perfectly legal in the case of actual research and other fair use. But:)

They were notably involved in HuggingFace's "The Stack" project, sharing their archives ... and receiving money from HuggingFace. While the latter is nominally a donation, it is in effect a sale.

---

I find it quite displeasing that the EFF fails to identify the incentives at play here. Simply trying to nag everyone into "doing the thing for the greater good!" is loathsome and doesn't work. Unless we change this incentive structure, the outcome won't change.

ryguz today at 2:32 PM
[flagged]
daliliu today at 1:03 PM
[dead]