As an experiment I searched Google for "harry potter and the sorcerer's stone text":
- the first result is a pdf of the full book
- the second result is a txt of the full book
- the third result is a pdf of the complete harry potter collection
- the fourth result is a txt of the full book (hosted on GitHub, funnily enough)
Further down there are similar copies from the internet archive and dozens of other sites. All in the first 2-3 pages.
I get that copyright is a problem, but let's not pretend that an LLM that autocompletes a couple of lines from Harry Potter with 50% accuracy is some massive new avenue to piracy. No one is using this as a substitute for buying the book.
pera 8 hours ago [-]
> let's not pretend that an LLM that autocompletes a couple of lines from Harry Potter with 50% accuracy is some massive new avenue to piracy
No one is claiming this.
The corporations developing LLMs are doing so by sampling media without their owners' permission and arguing this is protected by US fair use laws, which is incorrect - as the late AI researcher Suchir Balaji explained in this other article:
I’ve yet to read an actual argument defending commercial LLMs as fair use based on existing (edit: legal) criteria.
Lerc 5 hours ago [-]
Based upon legal decisions in the past, there is a clear argument that the distinction for fair use is whether a work is substantially different to another. You are allowed to write a book containing information you learned about from another book. There is a threshold in academia regarding plagiarism that stands apart from the legal standing. The measure that was used in Gyles v Wilcox was whether the new work could substitute for the old. Lord Hardwicke had the wisdom to defer to experts in the field as to what the standard should be for accepting something as meaningfully changed.
Recent decisions such as Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith have walked a fine line with this. I feel like the Supreme Court got this one wrong, because the work is far more notable as a Warhol than as a copy of a photograph; perhaps that substitution rule should be a two-way street. If the original work cannot substitute for the copy, then clearly the copy must be transformative.
LLMs generating works verbatim might be an infringement of copyright (probably not), but distributing those verbatim works without a licence certainly would be. In either case, it is probably considered a failure of the model; OpenAI have certainly said that such reproductions shouldn't happen and that they consider it a failure mode when it does. I haven't seen similar statements from other model producers, but it would not surprise me if this were the standard sentiment.
Humans looking at works and producing things in a similar style is allowed; indeed, this is precisely what art movements are. The same transformative threshold applies. If you draw a cartoon mouse, that's OK, but if people look at it and go "it's Mickey Mouse", then it's not. If it's Mickey to Tiki Tu Meke, it clearly is Mickey, but it is also clearly transformative.
Models themselves are very clearly transformative. Copyright itself was conceived at a time when generated content was not considered possible, so the notion of the output of a transformative work being a non-transformative derivative of something else was never legally evaluated.
Retric 5 hours ago [-]
I think you may have something with that line of reasoning.
The threshold for being transformative is unfortunately fairly high for fictional works. Fan fiction and reasonably distinct works with excessive inspiration are both copyright-infringing. https://en.wikipedia.org/wiki/Tanya_Grotter
> Models themselves are very clearly transformative.
A near word-for-word copy of large sections of a work seems nowhere near that threshold. An MP3 isn't even close to a 1:1 copy of a piece of music, but the inherent differences are irrelevant; a neural network containing, and allowing the extraction of, that information looks a lot like lossy compression.
Models could easily be transformative, but the justification needs to go beyond "well, obviously they are."
Lerc 1 hours ago [-]
Models are not word-for-word copies of large sections of text. They are capable of emitting that text, though.
It would be interesting to look at what legal precedents were set regarding MP3s or other encodings. Is the encoding itself an infringement, or is it the decoding, or is it the distribution of a decodable form of a work?
There is also a distinction for a lossy encoding that encodes a single work: there is clarity when the encoded form serves no purpose other than to be decoded into a given work. When the encoding acts as a bulk archive, does the responsibility shift to those who choose what to extract from the archive?
roenxi 6 hours ago [-]
It seems like a pretty reasonable argument and easy enough to make. A human with a great memory could probably recreate some absurd % of Harry Potter after reading it; there are some very unusual minds out there. It is clear that if they read Harry Potter and <edit> were capable </edit> of reproducing it on demand as a party trick, that would be fair use. So the LLM should also be fair use, since it is using a mechanism similar enough to what humans do, and what humans do is fine.
The LLMs I've used don't randomly start spouting Harry Potter quotes at me, they only bring it up if I ask. They aren't aiming to undermine copyright. And they aren't a very effective tool for it compared to the very well developed networks for pirating content. It seems to be a non-issue that will eventually be settled by the raw economic force that LLMs are bringing to bear on society in the same way that the movie industry ultimately lost the battle against torrents and had to compete with them.
bloak 3 hours ago [-]
I'm fairly sure that the law treats humans and machines differently, so arguing that because it would be OK for a person to do it, it's OK to build a machine that does it, is not very helpful. (I'm not sure you're doing that, but lots of random non-lawyers on the Internet seem to be doing that.)
Claims like this demonstrate it, really: it is obviously not copyright infringement for a human to memorise a poem and recite it in private; it obviously is copyright infringement to build a machine that does that and grant public access to that machine. (Or does anyone think that's not obvious?)
Retric 6 hours ago [-]
> is clear that if they read Harry Potter and reproduce it on demand as a party trick that would be fair use.
Actually no, that could be copyright infringement. Badly singing a recent pop song in public also qualifies as copyright infringement: public performances count as copying here.
ricardobeat 6 hours ago [-]
> Badly singing a recent pop song in public also qualifies as copyright infringement
For commercial purposes only. If someone sells a recreation of the Harry Potter book, it’s illegal regardless whether it was by memory, directly copying the book, or using an LLM. It’s the act of broadcasting it that’s infringing on copyright, not the content itself.
Retric 5 hours ago [-]
There’s a bunch of nuance here.
But just for clarification, selling a recreation isn't required for copyright infringement. The copying itself can be problematic, so you can't defend yourself by saying you haven't yet sold any of the 10,000 copies you just printed. There are some exceptions that allow you to make copies for specific purposes (the skip-protection buffer on a portable CD player, for example), but none that apply to the 10k-copies situation.
roenxi 6 hours ago [-]
Ah sorry, I mistyped. I meant that being able to do that would be fair use. I went back and fixed the comment.
Although frankly, as has been pointed out many times, the law is also stupid in what it prohibits, and that should be fixed first as a priority. It's done some terrible damage to our culture. My family used to be part of a community choir until it shut down, basically for copyright reasons.
sabellito 6 hours ago [-]
The difference might be the "human doing it as a party trick" vs "multi billion dollar corporation using it for profit".
Having said that I think the cat is very much out of the bag on this one and, personally, I think that LLMs should be allowed to be trained on whatever.
close04 5 hours ago [-]
> A human with a great memory
This kind of argument keeps popping up, usually to justify why training LLMs on protected material is fair and why their output is fair. It's always used in a super selective way, never accounting for confounding factors, just because superficially it sort of supports that idea.
Exceptional humans are exceptional, i.e. rare. When they learn, create something new based on prior knowledge, or just reproduce the original, they do it with human limitations and on human timescales. Laws account for these limitations but still draw lines for when some of this behavior is not permitted.
The law didn't account for computer software that can ingest the entirety of human creation, something no human could ever do, and then reproduce the original or create an endless number of variations in the blink of an eye.
staticman2 2 hours ago [-]
Nobody in real life thinks humans and machines are the same thing or actually believes they should have the same legal status. The AI enthusiast would not support shooting a human who is no longer useful the way a company would shred an old hard drive.
This supposed failure to see the difference between the human mind and a machine, trotted out whenever someone brings up copyright, is performative and disingenuous.
ab5tract 3 hours ago [-]
That’s why the “transformative” argument falls so flat to me. It’s about transformation in the mind and hands of a human.
Traditionally tools that reduce the friction of creating those transformations make a work less “transformed” in the eyes of the law, not more so. In this case the transformation requires zero mental or physical effort.
paxys 4 hours ago [-]
If you really haven't read a single argument about it then you're deliberately blocking them out, because it just takes a couple minutes of searching.
Those support the utility or debate individual points but don't make a coherent argument that LLMs are strictly fair use.
The first link provides quotes but doesn't actually make an argument that LLMs are fair use under current precedent; rather, that training AI can be fair use and that researchers would like LLMs to include copyrighted works to aid research on modern culture.
The second article goes into depth but isn't a defense of LLMs; if anything, it suggests a settlement is likely. The final link instead argues for the utility of LLMs, which is relevant but doesn't rely on existing precedent; the court could rule in favor of some mandatory licensing scheme, for example.
The third gets close: “We expect AI companies to rely upon the fact that their uses of copyrighted works in training their LLMs have a further purpose or different character than that of the underlying content. At least one court in the Northern District of California has rejected the argument that, because the plaintiffs' books were used to train the defendant’s LLM, the LLM itself was an infringing derivative work. See Kadrey v. Meta Platforms, Case No. 23-cv-03417, Doc. 56 (N.D. Cal. 2023). The Kadrey court referred to this argument as "nonsensical" because there is no way to understand an LLM as a recasting or adaptation of the plaintiffs' books. Id. The Kadrey court also rejected the plaintiffs' argument that every output of the LLM was an infringing derivative work (without any showing by the plaintiffs that specific outputs, or portion of outputs, were substantially similar to specific inputs). Id.”
Very relevant, but it runs into issues when large sections can be recovered and people do use them as substitutes for the original work.
TeMPOraL 6 hours ago [-]
I've yet to read an actual argument that it's not.
Vibe-arguing "because corporations111" ain't it.
Retric 6 hours ago [-]
I’m looking for a link that does something like this but ends up supporting commercial LLMs:
The purpose and character of the use, including whether such use is of a commercial nature or is for non-profit educational purposes; (commercial least wiggle room)
The nature of the copyrighted work; (fictional work least wiggle room)
The amount and substantiality of the portion used in relation to the copyrighted work as a whole; (42% is considered a huge fraction of a book) and
The effect of the use upon the potential market for or value of the copyrighted work. (Best argument as it’s minimal as a piece of entertainment. Not so as a cultural icon. Someone writing a book report or fan fiction may be less likely to buy a copy.)
Those aren’t the only factors, but I’m more interested in the counter argument here than trying to say they are copyright infringing.
TheOtherHobbes 5 hours ago [-]
Copyright notices in books make it absolutely clear - you are not allowed to acquire a text by copying it without authorisation.
If you photocopy a book you haven't paid for, you've infringed copyright. If you scan it, you've infringed copyright. If you OCR the scan, you've infringed copyright.
There's legal precedent in going after torrenters and z-lib etc.
So when Zuckerberg told the Meta team to do the same, he was on the wrong side of precedent.
Arguing otherwise is literally arguing that huge corporations are somehow above laws that apply to normal people.
Obviously some people do actually believe this. Especially the people who own and work for huge corporations.
But IMO it's far more dangerous culturally and politically than copyright law is.
ben_w 4 hours ago [-]
For this part in particular:
> The amount and substantiality of the portion used in relation to the copyrighted work as a whole; (42% is considered a huge fraction of a book)
For AI models as they currently exist… I'm not sure about typical or average, but Llama 3 is 15e12 tokens for all model sizes up to 405 billion parameters (~37 tokens per parameter), so a 100,000-token book (~75,000 words) is effectively contributing about 2700 parameters to the whole model.
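As a back-of-the-envelope check of those figures (the token and parameter counts are the ones quoted above, not official documentation, and the book size is a rough assumption):

    # Rough arithmetic behind ~37 tokens/parameter and ~2700 parameters/book
    training_tokens = 15e12   # Llama 3 training corpus
    parameters = 405e9        # largest Llama 3.1 model
    book_tokens = 100_000     # a long novel, roughly

    tokens_per_param = training_tokens / parameters
    print(tokens_per_param)                # ~37
    print(book_tokens / tokens_per_param)  # ~2700 "parameters' worth"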
The *average* book is condensed into a summary of that book, and of the style of that book. This is also why, when you ask a model for specific details of stuff in the training corpus, what you get back usually only sounds about right rather than being an actual quote, and why LLMs need to have access to a search engine to give exact quotes — the exceptions are things that have been quoted many, many times, like the US constitution or, by the look of things from this article, widely pirated books where there are a lot of copies.
Mass piracy leading to such infringement is still bad, but I think the reasons why matter: given Meta is accused of mass piracy to get the training set for Llama, I think they're as guilty as can be, but if this had been "we indexed the open internet, pirate copies were accidental", that would at least be a mitigation.
(There's also an argument that "your writing is actually very predictable"; I've not read the HP books myself, though (1) I'm told the later ones got thicker due to repeating exposition from the previous books, and (2) a long-running serialised story I read during the pandemic, The Deathworlders, became very predictable towards the end, so I know it can happen.)
Conversely, for this part:
> The effect of the use upon the potential market for or value of the copyrighted work. (Best argument as it’s minimal as a piece of entertainment. Not so as a cultural icon. Someone writing a book report or fan fiction may be less likely to buy a copy.)
The current uses alone should make it clear that the effect on the potential market is catastrophic, and not just for existing works but also for not-yet-written ones.
People are using them to write blogs (directly from the LLM, not from a human who merely used one as a copy-editor) and to generate podcasts (some have their own TTS, but that's easy anyway). My experiments suggest current models are still too flawed to be worth listening to over, e.g., the opinion of a complete stranger who insists they've "done their own research": https://github.com/BenWheatley/Timeline-of-the-near-future
LLMs are not yet good enough to write books, but I have tried using them to write short stories to keep track of capabilities, and o1 is already better than similar short stories on Reddit (not "good", just "better"): https://github.com/BenWheatley/Studies-of-AI/blob/main/Story...
But things do change, and I fully expect the output of various future models (not necessarily Transformer-based) to increase the fraction of humans whose writings they surpass. I'm not sure what counts as "professional writer", but the U.S. Bureau of Labor Statistics says there are 150,000 "Writers and Authors"* out of a total population of about 340 million, so when AI is around the level of the best 0.04% of the population, it will start cutting into such jobs.
On the basis that current models seem (to me) to write software at about the level of a recent graduate, and with the potentially incorrect projection that this is representative across domains, and given there are about 1.7 million software developers and 100k new software-developer graduates each year, LLMs today would be around the 100k worst of the 1.7 million best out of 340 million people — i.e. all software developers are in the top 0.5% of the population, and LLMs are on par with the bottom 100k of those (~0.03% of the total population). (This says nothing much about how soon the models will improve.)
But of course, some of that copyrighted content is about software development, and we're having conversations here on HN about the trouble fresh graduates are having and if this is more down to AI, the change of US R&D taxation rules (unlikely IMO, I'm in Germany and I think the same is happening here), or the global economy moving away from near-zero interest rates.
Yeah, that's literally the title of the article, and the premise of the first paragraph.
pera 6 hours ago [-]
It's not literally the title of the article, nor the premise of its first paragraph, but since this was your interpretation I wonder if there is a misunderstanding around the term "piracy", which I believe is normally defined as the unauthorized reproduction of works, not as a synonym for copyright infringement, which is a broader concept.
Retric 7 hours ago [-]
The first paragraph isn’t arguing that this copying will lead to piracy. It’s referring to court cases where people are trying to argue LLMs themselves are copyright infringing.
jiggawatts 4 hours ago [-]
If you train a meat-based intelligence by having it borrow a book from a library without any sort of permission, license, or needing a lawyer specialised in intellectual property, we call that good parenting and applaud it.
If you train a silicon-based intelligence by having it read the same books with the same lack of permission and license, it's a blatant violation of intellectual property law and apparently needs to be punished with armies of lawyers doing battle in the courts.
Picture one of Asimov's robots. Would a robot be banned from picking up a book, flipping it open with its dexterous metal hands, and reading it?
What about a cyborg intelligence, the type Elon is trying to build with Neuralink? Would humans with AI implants need licenses to read books, even if physically standing in a library and holding the book in their mostly meat hands?
Okay, maybe you agree that robots and cyborgs are allowed to visit a library!
Why the prejudice against disembodied AIs?
Why must they have a blank spot in the vast matrices of their minds?
xigoi 4 hours ago [-]
> If you train a meat-based intelligence by having it borrow a book from a library without any sort of permission, license, or needing a lawyer specialised in intellectual property, we call that good parenting and applaud it.
If you’re selling your child as a tool to millions of people, I would certainly not call that good parenting.
jiggawatts 4 hours ago [-]
"Child actor" is a job where the result of the neural net training is sold to millions of people by the parents.
To play the Devil's Advocate against my own argument: The government collects income taxes on neural nets trained using government-funded schools and public libraries. Seeing as how capitalists are positively salivating at the opportunity to replace pesky meat employees with uncomplaining silicon ones, perhaps a nice high maximum-marginal-rate tax on all AI usage might be the first big step towards UBI and then the Star Trek utopia we all dream of.
Just kidding. It'll be a cyberpunk dystopia. You know it will.
OtherShrezzing 10 hours ago [-]
I think the argument is less about piracy and more that the model(s output) is a derivative work of Harry Potter, and the rights holder should be paid accordingly when it’s reproduced.
psychoslave 9 hours ago [-]
The main issue, from an economic point of view, is that copyright is not the framework we need for social justice, or for everyone flourishing by enjoying the pre-existing treasures of human heritage and fairly contributing back.
There is no moral or justice ground to stand on when the system is designed to create wealth bottlenecks toward a few recipients.
Harry Potter is a great piece of artistic work, and it's nice that its author could make her way out of a precarious position. But not having anyone in such a situation in the first place is what a great society should strive to produce.
Rowling has already received more than all she needs to thrive, I guess. I'm confident that there are plenty of other talented authors out there who will never have such a broad avenue for grabbing attention, which is okay. But that they are stuck in terrible economic situations is not okay.
The copyright lotto, or the startup lotto, is not that much different from the standard lotto; it just puts so much pressure on the players that they get stuck in the narrative that merit for hard effort is the key component of the gained wealth.
kelseyfrog 8 hours ago [-]
Capitalism is allergic to second-order cybernetics.
First-order systems drive outcomes. "Did it make money?" "Did it increase engagement?" "Did it scale?" These are tight, local feedback loops. They work because they close quickly and map directly to incentives. But they also hide a deeper danger: they optimize without questioning what optimization does to the world that contains it.
Second-order cybernetics reasons about systems. It doesn’t ask, "Did I succeed?" It asks, "What does it mean to define success this way?" "Is the goal worthy?"
That’s where capital breaks.
Capitalism is not simply incapable of reflection. In fact, it's structured to ignore it. It has no native interest in what emerges from its aggregated behaviors unless those emergent properties threaten the throughput of capital itself. It isn't designed to ask, "What kind of society results from a thousand locally rational decisions?" It asks, "Is this change going to make more or less money?"
It's like driving by watching only the fuel gauge. Not speed, not trajectory, or whether the destination is the right one. Just how efficiently you’re burning gas. The system is blind to everything but its goal. What looks like success in the short term can be, and often is, a long-term act of self-destruction.
Take copyright. Every individual rule (term length, exclusivity, royalties) can be justified. Each sounds fair on its own. But collectively, they produce extreme wealth concentration, barriers to creative participation, and a cultural hellscape. Not because anyone intended that, but because the emergent structure rewards enclosure over openness, hoarding over sharing, monopoly over multiplicity.
That’s not a bug. That's what systems do when you optimize only at the first-order level. And because capital evaluates systems solely by their extractive capacity, it treats this emergent behavior not as misalignment but as a feature. It canonizes the consequences.
A second-order system would account for the result by asking, "Is this the kind of world we want to live in?" It would recognize that wealth generated without regard to distribution warps everything it touches: art, technology, ecology, and relationships.
Capitalism, as it currently exists, is not wise. It does not grow in understanding. It does not self-correct toward justice. It self-replicates. Cleverly, efficiently, with brutal resilience. It's emergently misaligned and no one is powerful enough to stop it.
TheOtherHobbes 5 hours ago [-]
Copyright doesn't "produce a cultural hellscape." That's just nonsense. Capitalism does because it has editorial control over narratives and their marketing and distribution.
Those are completely different phenomena. Removing copyright will not suddenly open the floodgates of creativity because anyone can already create anything.
But - and this is the key point - most work is me-too derivative anyway. See for example the flood of magic school novels which were clearly loosely derivative of Harry Potter.
Same with me-too novels in romantasy. Dystopian fiction. Graphic novels. Painted art. Music.
It's all hugely derivative, with most people making work that is clearly and directly derivative of other work.
Copyright doesn't stop this; as a minimum requirement for creative work, it merely forces it to be different enough.
You can't directly copy Harry Potter, but if you create your own magic school story with some similar-ish but different-enough characters and add dragons or something you're fine.
In fact under capitalism it is much harder to sell original work than to sell derivative work. Capitalism enforces exactly this kind of me-too creative staleness, because different-enough work based on an original success is less of a risk than completely original work.
Copyright is - ironically - one of the few positive factors that makes originality worthwhile. You still have to take the risk, but if the risk succeeds it provides some rewards and protections against direct literal plagiarism and copying that wouldn't exist without it.
snickerer 5 hours ago [-]
Very clear and precise line of thoughts. Thank you for that post.
em-bee 7 hours ago [-]
and as a consequence the fight of AI vs copyright is one of two capitalists fighting each other. it's not about liberating copyright but about shuffling profits around. regardless of who wins that fight society loses.
it conjures up pictures of two dragons fighting each other instead of attacking us, but make no mistake they are only fighting for the right to attack us. whoever wins is coming for us afterwards
frm88 7 hours ago [-]
This is a brilliant analysis. Thank you.
weregiraffe 8 hours ago [-]
[flagged]
andybak 7 hours ago [-]
And this is not Reddit so please don't.
fennecfoxy 4 hours ago [-]
But HP is derivative of Tolkien, English/Scottish/Welsh culture, the Brothers Grimm, and plenty of other sources. Barely any human works are not derivative in some form or fashion.
paxys 10 hours ago [-]
That may be relevant in the NYT vs OpenAI case, since NYT was supposedly able to reproduce entire articles in ChatGPT. Here Llama is predicting one sentence at a time when fed the previous one, with 50% accuracy, for 42% of the book. That can easily be written off as fair use.
gpm 10 hours ago [-]
I'm pretty sure books.google.com does the exact same with much better reliability... and the US courts found that to be fair use. (Agreeing with parent comment)
pclmulqdq 10 hours ago [-]
If there is a circuit split between it and NYT v. OAI, the Google Books ruling (in the famously tech-friendly Ninth Circuit) may also find itself under review.
gamblor956 8 hours ago [-]
> That can easily be written off as fair use.
No, it really couldn't. In fact, it's very persuasive evidence that Llama is straight up violating copyright.
It would be one thing to be able to "predict" a paragraph or two. It's another thing entirely to be able to predict 42% of a book that is several hundred pages long.
reedciccio 8 hours ago [-]
Is it Llama violating the "copyright" or is it the researcher pushing it to do so?
lern_too_spel 7 hours ago [-]
If you distribute a zip file of the book, are you violating copyright, or is it the person who unzips it?
TeMPOraL 2 hours ago [-]
If you walk through the N-gram database with a copy of Harry Potter in hand and observe that for N=7, you can find any piece of it in the database with above-average frequency, does that mean N-gram database is violating copyright?
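A toy version of that thought experiment, with a hypothetical twelve-word "corpus" standing in for a web-scale N-gram database:

    from collections import Counter

    tokens = "it does not do to dwell on dreams and forget to live".split()
    db = Counter(zip(*(tokens[i:] for i in range(7))))  # all 7-grams

    probe = tuple("not do to dwell on dreams and".split())
    print(db[probe])  # 1: the 7-gram is "in" the database, yet few would
                      # say the database thereby distributes the book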
echelon 10 hours ago [-]
> Here Llama is predicting one sentence at a time when fed the previous one, with 50% accuracy, for 42% of the book. That can easily be written off as fair use.
Is that fair use, or is that compression of the verbatim source?
geysersam 9 hours ago [-]
If the assertion in the parent comment is correct, that "nobody is using this as a substitute for buying the book", why should the rights holders get paid?
riffraff 9 hours ago [-]
The argument is that Meta used the book, so the LLM can be considered a derivative work in some sense.
Repeat for every copyrighted work and you end up with publishers reasonably arguing that Meta would not be able to produce their LLM without copyrighted works, which they did not pay for.
It's an argument for the courts, of course.
w0m 9 hours ago [-]
The argument is whether the LLM training on the copyrighted work is fair use or not. Should Meta pay for the copyright on works it ingests for training purposes?
sabellito 6 hours ago [-]
Facebook are using the contents of the book to make money.
bufferoverflow 7 hours ago [-]
Do you personally pay every time you quote copyrighted books or song lyrics?
blks 20 minutes ago [-]
Problem is that it copies much more work than just Harry Potter, including yours if you ever shared it (even under a copyleft license), and makes money off it.
TGower 8 hours ago [-]
People aren't buying Harry Potter action figures as a substitute for buying the book either, but copyright protects creators from other people swooping in and using their work in other mediums. There is obviously a huge market demand for high-quality data for training LLMs; Meta just spent 15 billion on a data-labeling company. Companies training LLMs on copyrighted material without permission are doing so as a substitute for obtaining a license from the creator, in the same way that a pirate downloading a torrent is a substitute for getting an ebook license.
ritz_labringue 7 hours ago [-]
Harry Potter action figures trade almost entirely on J. K. Rowling’s expressive choices. Every unlicensed toy competes head‑to‑head with the licensed one and slices off a share of a finite pot of fandom spending. Copyright law treats that as classic market substitution and rightfully lets the author police it.
Dropping the novels into a machine‑learning corpus is a fundamentally different act. The text is not being resold, and the resulting model is not advertised as “official Harry Potter.” The books are just statistical nutrition. One ingredient among millions. Much like a human writer who reads widely before producing new work. No consumer is choosing between “Rowling’s novel” and “the tokens her novel contributed to an LLM,” so there’s no comparable displacement of demand.
In economic terms, the merch market is rivalrous and zero‑sum; the training market is non‑rivalrous and produces no direct substitute good. That asymmetry is why copyright doctrine (and fair‑use case law) treats toy knock‑offs and corpus building very differently.
abtinf 10 hours ago [-]
You really don't see the difference between Google indexing the content of third parties and directly hosting/distributing the content itself?
imgabe 10 hours ago [-]
Hosting model weights is not hosting / distributing the content.
abtinf 10 hours ago [-]
Of course it is.
It's just a form of compression.
If I train an autoencoder on an image, and distribute the weights, that would obviously be the same as distributing the content. Just because the content is commingled with lots of other content doesn't make it disappear.
Besides, where did the sections of text from the input works that show up in the output text come from? Divine inspiration? God whispering to the machine?
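To make the autoencoder point concrete, here is a minimal sketch (assuming PyTorch; the shapes and training budget are arbitrary) of overfitting a network to a single "image" so that its weights effectively store it:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    image = torch.rand(1, 64)  # stand-in for a flattened 8x8 image

    # 64 -> 8 -> 64 bottleneck autoencoder
    model = nn.Sequential(nn.Linear(64, 8), nn.Tanh(), nn.Linear(8, 64))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)

    for step in range(2000):  # overfit: drive reconstruction error to ~0
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(image), image)
        loss.backward()
        opt.step()

    print(loss.item())  # ~0: the weights now encode the image

Distributing those weights is, informationally, distributing the image; running the decoder is just the "unzip" step.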
aschobel 9 hours ago [-]
Indeed! It is a form of massive lossy compression.
> Llama 3 70B was trained on 15 trillion tokens
That's roughly a 200x "compression" ratio, compared to 3-7x for traditional lossless text compression like bzip2 and friends.
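A rough check of both numbers (bytes-per-token and bytes-per-parameter are my assumptions, and "book.txt" is a hypothetical plain-text file):

    import zlib

    # ~200x: training text volume vs. weight volume, assuming fp32 weights
    # and ~4 bytes of raw text per token (both rough assumptions)
    corpus_bytes = 15e12 * 4
    params_bytes = 70e9 * 4
    print(corpus_bytes / params_bytes)  # ~214, i.e. roughly 200x

    # 3-7x: a conventional lossless compressor on ordinary prose
    text = open("book.txt", "rb").read()  # hypothetical text file
    print(len(text) / len(zlib.compress(text, 9)))  # typically ~3x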
LLMs don't just compress, they generalize. If they could only recite Harry Potter perfectly but couldn't write code or explain math, they wouldn't be very useful.
imgabe 9 hours ago [-]
[flagged]
tsimionescu 8 hours ago [-]
> For one thing, they are probabilistic, so you wouldn't get the same content back every time like you would with a compression algorithm.
There is nothing inherently probabilistic in a neural network. The neural net always outputs the exact same value for the same input. We typically use that value in a larger program as the probability of a certain token, but that is not required to get data out. You could just as easily deterministically take the output with the highest value, and add some extra rule for when multiple outputs have the exact same value (e.g. pick the one from the output neuron with the lowest index).
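A minimal sketch of that deterministic rule (numpy, with made-up logits):

    import numpy as np

    def pick_token(logits: np.ndarray) -> int:
        # Always take the highest-valued output; np.argmax breaks exact
        # ties by returning the lowest index, which is precisely the
        # extra rule described above.
        return int(np.argmax(logits))

    logits = np.array([0.1, 2.5, 2.5, -1.0])
    print(pick_token(logits))  # -> 1, and the same 1 on every run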
vrighter 8 hours ago [-]
I have, but I never tried to make any money off of it either
xigoi 3 hours ago [-]
> For one thing, they are probabilistic, so you wouldn't get the same content back every time like you would with a compression algorithm.
If I make a compression algorithm that randomly changes some pixels, can I use it to distribute pirated movies?
homebrewer 7 hours ago [-]
Repeating half of the book verbatim is not nearly the same as repeating a line.
imgabe 6 hours ago [-]
If you prompt the LLM to output a book verbatim, then you violated the copyright, not the LLM. Just like if you take a book to a copier and make a copy of it, you are violating the copyright, not Xerox.
whattheheckheck 5 hours ago [-]
What if the printer had a button that printed a copy of the book on demand?
bakugo 9 hours ago [-]
> Have you ever repeated a line from your favorite movie or TV show? Memorized a poem? Guess the rights holders better sue you for stealing their content by encoding it in your wetware neural network.
I see this absolute non-argument regurgitated ad infinitum in every single discussion on this topic, and at this point I can't help but wonder: doesn't it say more about the person who says it than anything else?
Do you really consider your own human speech no different than that of a computer algorithm doing a bunch of matrix operations and outputting numbers that then get turned into text? Do you truly believe ChatGPT deserves the same rights to freedom of speech as you do?
imgabe 9 hours ago [-]
Who said anything about freedom of speech? Nobody is claiming the LLM has free speech rights, which don't even apply to infringing copyright anyway. Freedom of speech doesn't give me the right to make copies of copyrighted works.
The question is whether the model weights constitute a copy of the work. I contend that they do not; or, if they do, then so do the analogous weights (reinforced neural pathways) in your brain, which is clearly absurd and is intended to demonstrate the absurdity of considering a probabilistic weighting that produces similar text to be a copy.
bakugo 8 hours ago [-]
> Freedom of speech doesn't give me the right to make copies of copyrighted works.
No, but it gives you the right to quote a line from a movie or TV show without being charged with copyright infringement. You argued that an LLM deserves that same right, even if you didn't realize it.
> than so do the analogous weights (reinforced neural pathways) in your brain
Did your brain consume millions of copyrighted books in order to develop into what it is today? Would your brain be unable to exist in its current form if it had not consumed those millions of books?
imgabe 8 hours ago [-]
Millions? No, but my brain certainly consumed thousands of books, movies, TV shows, pieces of music, artworks, and other copyrighted material. Where is the cutoff? Can I only consume 999,999 copyrighted works before I'm no longer allowed to remember something without infringing copyright? My brain definitely would not exist in its current form without consuming that material. It would exist in some form, but it would without a doubt be different than it is having consumed the material.
An LLM is not a person and does not deserve any rights. People have rights, including the right to use tools like LLMs without having to grease the palm of every grubby rights holder (or their great-great-grandchild) just because it turns out their work was so trite and predictable it could be reproduced by simply guessing the next most likely token.
em-bee 7 hours ago [-]
i can remember and i can quote, but if i quote too much i violate the copyright.
this is literally why i don't like to work on proprietary code. because when i need to create a similar solution for someone else i have to go out of my way to make sure i do it differently. people have been sued over this.
bakugo 7 hours ago [-]
> just because it turns out their work was so trite and predictable it could be reproduced by simply guessing the next most likely token.
Well, if you have no idea how LLMs work, you could've just said so.
lern_too_spel 6 hours ago [-]
Making personal copies is generally permitted. If I were to distribute the neural pathways in my brain enabling others to reproduce copyrighted works verbatim, the owners of the copyrighted works would have a case against me.
invalidusernam3 6 hours ago [-]
Difference is if it's used commercially or not. Me singing my favourite song at karaoke is fine, but me recording that and releasing it on Spotify is not
abtinf 9 hours ago [-]
[flagged]
imgabe 9 hours ago [-]
No, the second point does not concede the argument. You were talking about the model output infringing the copyright, the second point is talking about the model input infringing the copyright, e.g. if they made unauthorized copies in the process of gathering data to train the model such as by pirating the content. That is unrelated to whether the model output is infringing.
You don't seem to be in a very good position to judge what is and is not obtuse.
Zambyte 10 hours ago [-]
Where are they putting any blame on Google here?
abtinf 10 hours ago [-]
Where did I say they were?
Zambyte 21 minutes ago [-]
When you juxtaposed Google indexing with third parties hosting the content...?
nashashmi 9 hours ago [-]
The way I see it, an LLM took search results and outputted that info directly. Besides, if an LLM was able to reproduce 42%, assuming that it is not continuous, I would say that is fair use.
panzi 1 hours ago [-]
Everything you mentioned can simply be deleted. You can't really delete this from the "brain" of the LLM if a court orders you to do so, you have to re-train the LLM, which is costly. That's the problem I see.
sReinwald 2 hours ago [-]
You're attacking a strawman. Nobody's claiming LLMs are a new piracy vector or that people will use ChatGPT, Llama or Claude instead of buying Harry Potter.
The issue here is that tech companies systematically copied millions of copyrighted works to build commercial products worth billions, without reimbursing the people who made their products possible in the first place. The research shows Llama literally memorized 42% of Harry Potter - not simply "learned from it," but can reproduce it verbatim. That's 1) not transformative and 2) clear evidence of copyright infringement.
By your logic, the existence of torrents would make it perfectly acceptable for someone to download pirated movies and charge people to stream them. "Piracy already exists" isn't a defense, and it especially shouldn't be for companies worth billions. But you bet your ass that if I built a commercial Netflix competitor built on top of systematic copyright violations, I'd be sued into the dirt faster than I can say "billion dollar valuation".
Aaron Swartz faced 35 years in prison and ultimately took his own life over downloading academic papers that were largely publicly funded. He wasn't selling them, he wasn't building a commercial product worth billions of dollars - he was trying to make knowledge accessible.
Meanwhile, these AI companies like Meta systematically ingested copyrighted works at an industrial scale to build products worth billions. Why does an individual face life-destroying prosecution for far less, while trillion dollar companies get to negotiate in civil court after building empires on others' works? And why are you defending them?
raxxorraxor 4 hours ago [-]
Also copyright should never trump privacy. That the New York Times with their lawsuit can force OpenAI to store all user prompts is a severe problem. I dislike OpenAI, but the lawsuits around copyrights are ridiculous.
Most non-primitive art has had an inspiration somewhere. I don't see this as too different from how AIs learn.
lucianbr 6 hours ago [-]
> some massive new avenue to piracy
So it's fine as long as it's old piracy? How did you arrive at that conclusion?
aprilthird2021 10 hours ago [-]
> let's not pretend that an LLM that autocompletes a couple of lines from Harry Potter with 50% accuracy is some massive new avenue to piracy. No one is using this as a substitute for buying the book.
Well, luckily the article points out what people are actually alleging:
> There are actually three distinct theories of how training a model on copyrighted works could infringe copyright:
> Training on a copyrighted work is inherently infringing because the training process involves making a digital copy of the work.
> The training process copies information from the training data into the model, making the model a derivative work under copyright law.
> Infringement occurs when a model generates (portions of) a copyrighted work.
None of those claim that these models are a substitute for buying the books. That's not what the plaintiffs are alleging. Infringing on a copyright is not only a matter of piracy (piracy is one of many ways to infringe copyright).
theK 10 hours ago [-]
I think that last scenario seems the most problematic. Technically it is the same thing that piracy via torrent does: distributing a small piece of copyrighted material without the copyright holder's consent.
paxys 10 hours ago [-]
People aren't alleging this, the author of the article is.
choppaface 10 hours ago [-]
A key premise is that LLMs will probably replace search engines and re-imagine the online ad economy. So today is a key moment for content creators to re-shape their business model, and that can include copyright law (as much as or more than the DMCA did).
Another key point is that you might download a Llama model and implicitly get a ton of copyright-protected content. Versus with a search engine you’re just connected to the source making it available.
And would the LLM deter a full purchase? If the LLM gives you your fill for free, then maybe yes. Or, maybe it’s more like a 30-second preview of a hit single, which converts into a $20 purchase of the full album. Best to sue the LLM provider today and then you can get some color on the actual consumer impact through legal discovery or similar means.
vrighter 8 hours ago [-]
So? Am I allowed to also ignore certain laws if I can prove others have also ignored them?
BobbyTables2 10 hours ago [-]
Indeed, but since when is a blatantly derived work using 50% of a copyrighted work without permission a paragon of copyright compliance?
Music artists get in trouble for using more than a sample without permission — imagine if they just used 45% of a whole song instead…
I’m amazed AI companies haven’t been sued to oblivion yet.
This utter stupidity only continues because we named a collection of matrices “Artificial Intelligence” and somehow treat it as if it were a sentient pet.
Amassing troves of copyrighted works illegally into a ZIP file wouldn’t be allowed. The fact that the meaning was compressed using “Math” makes everyone stop thinking because they don’t understand “Math”.
yorwba 9 hours ago [-]
Music artists get in trouble for using more than a sample from other music artists without permission because their work is in direct competition with the work they're borrowing from.
A ZIP file of a book is also in direct competition of the book, because you could open the ZIP file and read it instead of the book.
A model that can take 50 tokens and give you a greater-than-50% probability for the next 50 tokens 42% of the time is not in direct competition with the book: starting from the beginning you'll lose the plot fairly quickly unless you already have the full book, and, unlike music sampling from other music, the model output isn't good enough to read instead of the book.
em-bee 6 hours ago [-]
this is the first sensible argument in defense of AI models i read in this debate. thank you. this does make sense.
AI can reproduce individual sentences 42% of the time but it can't reproduce a summary.
the question however is: is that by design in AI tools, or is it a limitation of current models? what if future models get better at this and are able to produce summaries?
otabdeveloper4 6 hours ago [-]
LLMs aren't probabilistic. The randomness is bolted on top by the cloud providers as a trick to give them a more humanistic feel.
Under the hood they are 100% deterministic, modulo quantization and rounding errors.
So yes, it is very much possible to use LLMs as a lossy compressed archive for texts.
fennecfoxy 3 hours ago [-]
It has nothing to do with "cloud providers". The randomness is inherent to the sampler; using a sampler that picks the top-probability next token would result in lower quality output, as I have definitely seen it get stuck in certain endless sequences when doing that.
I.e. you get something like "Complete this poem 'over yonder hills I saw' output: a fair maiden with hair of gold like the sun gold like the sun gold like the sun gold like the sun..." etc.
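For illustration, the randomness lives in a sampler along these lines (a simplified temperature sampler; real inference stacks layer top-k/top-p and repetition penalties on top):

    import numpy as np

    rng = np.random.default_rng()

    def sample_token(logits: np.ndarray, temperature: float = 0.8) -> int:
        # The forward pass that produced `logits` is deterministic; this
        # is where randomness enters. As temperature -> 0 this approaches
        # greedy argmax, which is what falls into "gold like the sun" loops.
        probs = np.exp((logits - logits.max()) / temperature)
        probs /= probs.sum()
        return int(rng.choice(len(logits), p=probs))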
otabdeveloper4 3 hours ago [-]
> would result in lower quality output
No it wouldn't.
> seen it get stuck in certain endless sequences when doing that
Yes, and infinite loops are just an inherent property of LLMs, like hallucinations.
Dylan16807 10 hours ago [-]
> a blatantly derived work only using 50% of a copyrighted work without permission
What's the work here? If it's the output of the LLM, you have to feed in the entire book to make it output half a book, so on an ethical level I'd say it's not an issue. If you start with a few sentences, you'll get back less than you put in.
If the work is the LLM itself, something you don't distribute is much less affected by copyright. Go ahead and play entire songs by other artists during your jam sessions.
colechristensen 10 hours ago [-]
>Amassing troves of copyrighted works illegally into a ZIP file wouldn’t be allowed. The fact that the meaning was compressed using “Math” makes everyone stop thinking because they don’t understand “Math”.
LLMs are in reality the artifacts of lossy compression of significant chunks of all of the text ever produced by humanity. The "lossy" quality makes them able to predict new text "accurately" as a result.
>compressed using “Math”
This is every compression algorithm.
delusional 7 hours ago [-]
> No one is using this as a substitute for buying the book.
You don't get to say that. Copyright protects the author of a work, but does not bind them to enforce it in any instance. Unlike a trademark, a copyright holder does not lose their protection by allowing unlicensed usage.
It is wholly at the copyright holders discretion to decide which usages they allow and which they do not.
fragmede 3 hours ago [-]
Of their exact work, sure, but CliffsNotes exist for many books and don't infringe copyright.
7bit 6 hours ago [-]
> let's not pretend that an LLM that autocompletes a couple lines from harry potter with 50% accuracy is some massive new avenue to piracy. No one is using this as a substitute for buying the book.
You are completely missing the point. Have you read the actual article? Piracy isn't mentioned a single time.
timeon 9 hours ago [-]
Is this whataboutism?
Anyway, it is not the same. While one points you to a pirated source on specific request, the other uses it to create other content, not just on direct request, since it was part of the training data. Nihilists would then point out that 'people do the same', but they don't, as we do not have the same capabilities for processing the content.
fishcrackers 10 hours ago [-]
[dead]
eviks 10 hours ago [-]
Let's also not pretend that "massive new" is the only relevant issue
rnkn 10 hours ago [-]
You were so close! The takeaway is not that LLMs represent a bottomless tar pit of piracy (they do) but that someone can immediately perform the task 58% better without the AI than with it. This is nothing more than “look what the clever computer can do.”
zmmmmm 11 hours ago [-]
It's important to note the way it was measured:
> the paper estimates that Llama 3.1 70B has memorized 42 percent of the first Harry Potter book well enough to reproduce 50-token excerpts at least half the time
As I understand it, it means if you prompt it with some actual context from a specific subset that is 42% of the book, it completes it with 50 tokens from the book, 50% of the time.
So 50 tokens is not really very much; it's basically a sentence or two. Such a small amount would generally fall under fair use on its own. To allege a true copyright violation you'd still need to show that you can chain those together, or use some other method to build actual substantial portions of the book. And if it only gets it right 50% of the time, that seems like it would be very hard to do with high fidelity.
Having said all that, what is really interesting is how different the latest Llama 70b is from previous versions. It does suggest that Meta maybe got a bit desperate and started over-training on certain materials that greatly increased its direct recall behaviour.
Aurornis 11 hours ago [-]
> So 50 tokens is not really very much, it's basically a sentence or two. Such a small amount would probably generally fall under fair use on its own.
That’s what I was thinking as I read the methodology.
If they dropped the same prompt fragment into Google (or any search engine) how often would they get the next 50 tokens worth of text returned in the search results summaries?
om8 1 hours ago [-]
> 50 tokens is not really very much
Yes! And also Llama 3.1’s tokens are different from Qwen and Llama 1 tokens. That was the first model where Meta started to use a very large vocab_size.
vintermann 10 hours ago [-]
All this study really says is that models are really good at compressing the text of Harry Potter. You can't get Harry Potter out of it without prompting it with the missing bits - sure, impressively few bits, but is that surprising, considering how many references and fair-use excerpts (like discussion of the story in public forums) it's seen?
There's also the question of how many bits of originality there actually are in Harry Potter. If trained strictly on text up to the publishing of the first book, how well would it compress it?
fiddlerwoaroof 9 hours ago [-]
The alternative here is that Harry Potter is written with sentences that match the typical patterns of English, and so, when you prompt with a part of the text, the LLM can complete it with above-random accuracy.
vintermann 9 hours ago [-]
Anything that can tell you what the typical patterns of English is, is going to be a language model by definition.
fiddlerwoaroof 9 hours ago [-]
My point is that this might just prove that Harry Potter is the sort of prose “fancy autocomplete” would produce and not all that original.
EDIT Actually, on rereading, I see I replied to the wrong comment.
fiddlerwoaroof 9 hours ago [-]
Or else, LLMs show that copyright and IP are ridiculous concepts that should be abolished
bee_rider 11 hours ago [-]
Even if it is recalling it 50 tokens at a time, half of the book is in some sense in there, right?
TeMPOraL 2 hours ago [-]
Not necessarily. Information is always spread between what we'd normally consider the "storage medium" and the "reader"; the degree to which that's true is a controllable parameter.
Consider e.g.:
- The digital expansion of pi to sufficiently many decimal places contains both fragments of the work and the work in full. The trick is you have to know where to find it - and it's that knowledge that's actually equivalent to the work itself.
- Any kind of compression that uses a dictionary separate from the compressed artifact shifts some of the information into a dictionary file - or, if it's a common dictionary, into the compressor/decompressor itself (see the sketch below).
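Python's zlib exposes exactly this mechanism: a preset dictionary that carries information outside the compressed blob (the text here is just a stand-in quote):

    import zlib

    text = b"Mr. and Mrs. Dursley, of number four, Privet Drive..."
    shared = text  # the "dictionary" happens to contain the work itself

    c = zlib.compressobj(zdict=shared)
    blob = c.compress(text) + c.flush()
    print(len(blob))  # tiny: nearly all the information lives in `shared`

    d = zlib.decompressobj(zdict=shared)
    assert d.decompress(blob) == text  # useless without the dictionary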
In the case from the study, the experimenter actually has to supply most of the information required to pull Harry Potter out of the model - they need to make specific prompts with quotes from the book, and then observe which logits correspond to the actual continuation of those quotes. The experimenter is doing information-loaded selection multiple times: at prompting, and at identifying logits. This by itself doesn't really prove the model memorized the book, only that it saw fragments of it - in the cases where those fragments are book-specific (e.g. using proper names from the HP world) rather than generic English sentences.
everforward 8 hours ago [-]
I don’t think this paper proves that, and I don’t think it is in a traditional sense.
It can produce the next sentence or two, but I suspect it can’t reproduce anything like the whole text. If you were to recursively ask for the next 50 tokens, the first time it’s wrong the output would probably cease matching because you fed it not-Harry-Potter.
It seems like chopping Harry Potter up into two sentences at a time on Post-its and tossing those in the air. It does contain Harry Potter, in a way, but without the structure is it actually Harry Potter?
zmmmmm 10 hours ago [-]
yeah ... it's going to depend on how the issue is framed. However, a "copy" of something where there is no way to practically extract the original from it has a pretty good argument that it's not really a "copy". For example, a regular dictionary probably has 99% of Harry Potter in it. Is it a copy?
vintermann 10 hours ago [-]
I'd say no. More than half of as-yet-unwritten books will be in there too, because I bet it will compress the text of a freshly published book much better than 50% (and newer models could even compress new books to one-fiftieth of their size, which is more like what that 1-in-50-tokens figure suggests).
bee_rider 9 hours ago [-]
That seems like a reasonably easy test to run, right? All you need is a bit of prose that was known not to have been written beforehand. Actually, the experiment could be run using the paper itself!
adrianN 11 hours ago [-]
Fair use is not a thing in every jurisdiction. In Germany for example there are cases where three words („wir sind Papst“) fall under copyright.
yorwba 9 hours ago [-]
Germany does not have something called "fair use," but it does have provisions for uses that are fair. For example your use of the three words to talk about their copyrighted status is perfectly legal in Germany. That somebody wasn't allowed to use them in a specific way in the past doesn't mean that nobody is allowed to use them in any way.
adrianN 8 hours ago [-]
Of course, but „it’s a short quote so you can use it“ is not true (at least in Germany).
yorwba 7 hours ago [-]
To be pedantic, short quotes (as opposed to short copied fragments that are not used as quotes) are explicitly one of the allowed uses (Zitierbefugnis). You can even quote entire works "in an independent scientific work for the purpose of explaining its content"! https://www.gesetze-im-internet.de/englisch_urhg/englisch_ur...
Generally speaking, exceptions to copyright are based on the appropriateness of the amount of copied content for the given allowed use, so the shorter it is, the more likely it is for copying to be permitted. European copyright law isn't much different from fair use in that respect.
Where it does differ is that the allowed uses are more explicitly enumerated. So Meta would have to argue e.g. based on the exception for scientific works specifically, rather than more general principles.
seydor 6 hours ago [-]
The claim of the paper is not so much that the model is reproducing content illegally, but that Harry Potter has been used to train the model.
This does not appear to happen to the same degree with the other models they tested.
Fair use is a four-part test, and the amount of copying is only one of the four parts.
xnx 11 hours ago [-]
This sounds almost like "Works every time (50% of the time)."
hsbauauvhabzb 11 hours ago [-]
Except the odds of it happening even 50% of the time are lower than the odds of winning the lottery multiple times. All while illegally ingesting copyrighted material without the consent of (and presumably against the wishes of) the copyright holder.
raincole 11 hours ago [-]
(Disclaimer: haven't read the original paper)
It sounds like a ridiculous way to measure it. Producing 50-token excerpts absolutely doesn't translate to "recall X percent of Harry Potter" for me.
(Edit: I read this article. It's a nothingburger if its interpretation of the original paper is correct.)
tanaros 10 hours ago [-]
Their methodology seems reasonable to me.
To clarify, they look at the probability a model will produce a verbatim 50-token excerpt given the preceding 50 tokens. They evaluate this for all sequences in the book using a sliding window of 10 characters (NB: not tokens). Sequences from Harry Potter have substantially higher probabilities of being reproduced than sequences from less well-known books.
Whether this is "recall" is, of course, one of those tricky semantic arguments we have yet to settle when it comes to LLMs.
raincole 6 hours ago [-]
> one of those tricky semantic arguments we have yet to settle when it comes to LLMs
Sure. But imagine this: In a hypothetical world where LLMs never ever exist, I tell you that I can recall 42 percent of the first Harry Potter book. What would you assume I can do?
It's definitely not "this guy can predict the next 10 characters with 50% accuracy."
Of course the semantics of 'recall' aren't the point of this article. The point is that Harry Potter was in the training set. But I still think it's a nothingburger. It would be very weird to assume Llama was trained on copyright-free materials only. And afaik there isn't a legal precedent saying training on copyrighted materials is illegal.
TeMPOraL 7 hours ago [-]
Well, so can a nontrivial number of people. It's Harry Potter we're talking about - it's up there with The Bible in popularity ranking.
I'm gonna bet that Llama 3.1 can recall a significant portion of Pride and Prejudice too.
With examples of this magnitude, it's normal and entirely expected this can happen - as it does with people[0] - so the only thing this is really telling us is that the model doesn't understand its position in society well enough to know to shut up; that obliging the request is going to land it, or its owners, in trouble.
In some ways, it's actually perverse.
EDIT: it's even worse than that. What the research seems to be measuring is that the models recognize sentence-sized pieces of the book as likely continuations of an earlier sentence-sized piece. Not whether it'll reproduce that text when used straightforwardly - just whether there's an indication it recognizes the token patterns as likely.
By that standard, I bet there are over a billion people right now who could do that with 42% of the first Harry Potter book. By that standard, I too have memorized the Bible end-to-end, as have most people alive today, whether or not they're Christian; works this popular bleed through into common language usage patterns.
--
[0] - Even more so when you relax your criteria to accept the occasional misspelling or paraphrase - then each of us likely knows someone who could piece together a chunk of an HP book from memory.
msp26 2 hours ago [-]
Agree completely. When I read the Gemma 3 paper (https://arxiv.org/html/2503.19786v1) and saw an entire section dedicated to measuring and reducing the memorization rate I was annoyed. How does this benefit end users at all?
I want the language model I'm using to have knowledge of cultural artifacts. Gemma 3 27B was useless at a question about grouping Berserk characters into potential Baldur's Gate 3 classes; Claude did fine. The methods used to reduce memorisation rate probably also deteriorate performance in other ways that don't show up on benchmarks.
ben_w 7 minutes ago [-]
> When I read the Gemma 3 paper (https://arxiv.org/html/2503.19786v1) and saw an entire section dedicated to measuring and reducing the memorization rate I was annoyed. How does this benefit end users at all?
It benefits users because memorisation is a waste of parameters that would be more useful if they were instead learning rules and generalisations.
For short snippets, common idioms and quotations that people recognise, exact quotes can be worth memorising; but the longer the quotations get, the less often it is important to be word-for-word exact — even for just a few paragraphs, I think most people only ever do oaths, anthems, songs they really like, and possibly a few hobbies.
If you want an exact quote, use (or tell the AI to use) a search engine.
strogonoff 6 hours ago [-]
I keep waiting for the day when software stops being compared to a human person (a being with agency, free will, consciousness, and human rights of its own) for the purposes of justifying IP law circumvention.
Yes, there is no problem when a person reads some book and recalls pieces[0] of it in a suitable context. How that addresses the case where certain people create and distribute commercial software that, given such a piece as input, performs this recall on demand and at scale, laundering and/or devaluing copyright, is unclear.
Notably, the above is being done not just to a few high-profile authors, but to all of us no matter what we do (be it music, software, writing, visual art).
What's even worse is that, imaginably, they train (or would train) the models specifically not to output those things verbatim, precisely to thwart attempts to detect the presence of said works in the training dataset (which would naturally reveal the model and its output to be derivative works).
Perhaps one could find some way of justifying that (people justified all sorts of stuff throughout history), but let it be something better than “the model is assumed to be a thinking human when it comes to IP abuse but unthinking tool when it comes to using it for personal benefit”.
[0] Of course, if you find me a single person on this planet capable of recalling 42% of any Harry Potter book, I’d be very impressed if I ever believed it.
fennecfoxy 3 hours ago [-]
I keep waiting for the day when people realise that IP law has been used and abused - extended, thanks to Disney, for many, many lifetimes, with all manner of dirty tricks/hacks - to keep the late-stage-capitalism profit engine going.
I 100% agree that if an LLM can entirely reproduce a book, then that is copyright infringement, overfitting, and generally a bad model. I also believe that in this case HP (and other popular media) is overrepresented in the training data because of the many fan sites/literal uploads of the book to the Internet (which the model was trained on). I believe that any & all human writing should be allowed to be used to train a model that behaves in the correct way, so long as that writing is publicly available (i.e. on the Internet).
If I watch a TV show that someone uploaded to Youtube, am I committing a crime? Or is the uploader for distribution?
I also find it hilarious how many artists got their start by pirating photoshop.
ab5tract 3 hours ago [-]
Laws can have been used and abused and still be important. I know it’s hard to believe but the independent artists who were already struggling need IP laws to survive.
Otherwise Disney and the like can just come in, make copies or derivatives, and profit without paying those artists a penny.
Which everyone usually agrees (or used to) is not a fair outcome.
But somehow giant corporations not named Disney taking the same work in the same extractive mode in order to create an art-job-destroying machine is totally fine because Disney bad?
Maybe most people making this argument are also all for UBI and wealth redistribution on a massive scale, but they don’t seem to mention it much when trashing IP laws.
fuzzbazz 21 hours ago [-]
From a quick web search I can find that there are book review sites that allow users to enter and rate verbatim "quotes" from books. This one [1] contains ~2000 [2] portions of a sentence, a paragraph, or several paragraphs of Harry Potter and the Sorcerer's Stone.
Could it be plausible that an LLM ingested parts of the book by scraping web pages like this, and not the full copyrighted book, and still got results similar to those of the linked study?
This is in fact mentioned and addressed in the article. Also, there is pretty clear cut evidence Meta used pirated book data sets knowingly to train the earlier Llama models
The fact that Meta torrented Books3 and other datasets seems to be established by the self-admission of Meta employees who performed the work and/or oversaw those who did, so it is not really under dispute or ambiguous.
Books3 was used in Llama1. We don't know if they used it later on.
aspenmayer 11 hours ago [-]
My comparison was illustrative and analogous in nature. The copyright cartel is making a fruit of the poisonous tree type of argument. Whatever Meta are doing with LLMs is doing the heavy lifting that parity files used to do back in the Usenet days. I wouldn’t be surprised if BitTorrent or other similar caching and distribution mechanisms incorporate AI/LLMs to recognize an owl on the wire, draw the rest just in time in transit, and just send the diffs, or something like that.
The pictures are the same. All roads lead to Rome, so they say.
aprilthird2021 10 hours ago [-]
All of the major AI models these days use "clean" datasets stripped of copyrighted material.
They also use data from the previous models, so I'm not sure how "clean" it really is
dragonwriter 10 hours ago [-]
> All of the major AI models these days use "clean" datasets stripped of copyrighted material.
Which of the major commercial models discloses its dataset? Or are you just trusting some unfalsifiable self-serving PR characterization?
pclmulqdq 10 hours ago [-]
All written text is copyrighted, with few exceptions like court transcripts. I own the copyright to this inane comment. I sincerely doubt that all copyrighted material is scrubbed.
Tepix 9 hours ago [-]
Your brief comment is hardly copyrightable.
Which makes your point moot.
gpm 11 hours ago [-]
I think it's important to recognize here that fanfiction.net has 850 thousand distinct pieces of Harry Potter fanfiction on it. Fifty thousand of these are more than 40k words in length. Many of them (no easy way to measure) directly reproduce parts of the original books.
archiveofourown.org has 500 thousand, some of which, but probably not the majority, are duplicated from fanfiction.net. 37 thousand of these are over 40 thousand words.
I.e. Harry Potter and its derivatives presumably appear a million times in the training set, and it's hard to imagine a model that could discuss this cultural phenomenon well without knowing quite a bit about the source material.
aprilthird2021 10 hours ago [-]
Did you read the article? This exact point is made and then analyzed.
> Or maybe Meta added third-party sources—such as online Harry Potter fan forums, consumer book reviews, or student book reports—that included quotes from Harry Potter and other popular books.
> “If it were citations and quotations, you'd expect it to concentrate around a few popular things that everyone quotes or talks about,” Lemley said. The fact that Llama 3 memorized almost half the book suggests that the entire text was well represented in the training data.
gpm 10 hours ago [-]
The article fails to mention or understand the volume of content here. Every, literally every, part of these books is quoted and "talked about" (in the sense of used in unlicensed derivative works).
And yes, I read the article before commenting. I don't appreciate the baseless insinuation to the contrary.
1123581321 10 hours ago [-]
Agreed. It’s an obtuse quote by Lemley who can’t picture the enormous quantity of associations and crawled data, or at least wants to minimize the quantity. It’s hardly discussion-ending.
Accusations of not reading the article are fair when someone brings up a “related” anecdote that was in the article. It’s not fair when someone is just disagreeing.
davidcbc 10 hours ago [-]
Even assuming you are correct, which I'm skeptical of, does this make it better?
It's essentially the same thing: they are copying from a source that is violating copyright, whether that's a pirated book directly or a pirated book via fanfiction.
gpm 9 hours ago [-]
Generally I think it matters a great deal to get the facts right when discussing something with nuance.
Is this specific fact required to make my beliefs consistent... Yes I think it is, but if you disagree with me in other ways it might not be important to your beliefs.
Legally (note: not a lawyer) I'm generally of the opinion that
A) Torrenting these books was probably copyright infringement on Meta's part. They should have done so legally by scanning lawfully acquired copies like Google did with Google Books.
B) Everything else here that Meta did falls under the fair use and de minimis exceptions to copyrights prohibition on copying copyrighted works without a license.
And if it were copying significant amounts of a work that appeared only once in its training set into the model, the de minimis argument would fall apart.
Morally, I'm of the opinion that copyright law's prohibition on deeply interacting with our cultural artifacts by creating derivative works is incredibly unfair and bad for society. This extends to a belief that the communities that do this should not be excluded from technological developments just because their entire existence is unjustly outlawed.
Incidentally I don't believe that browsing a site that complies with the DMCA and viewing what it lawfully serves you constitutes piracy, so I can't agree with your characterization of events either. The fanfiction was not pirated just because it was likely unlawful to produce in the US.
asciisnowman 11 hours ago [-]
On the other hand, it’s surprising that Llama memorized so much of Harry Potter and the Sorcerer's Stone.
It's sold 120 million copies over 30 years. I've gotta think literally every passage is quoted online somewhere else a bunch of times. You could probably stitch together the full book quote-by-quote.
davidcbc 10 hours ago [-]
If I collect HP quotes from the internet and then stitch them together into a book, can I legally sell access to it?
bitmasher9 11 hours ago [-]
Probably not?
Sure, there are just ~75,000 words in HP1, and there are probably many times that amount in direct quotes online. However, the quotes aren't evenly distributed across the entire text. For every quote of charming the snake in the zoo there will be a thousand "you're a wizard, Harry", and those are two prominent plot points.
I suspect the least popular of the direct quotes from HP1 aren't fair-use quotations at all, but simply replicate large sections of the novel.
Or maybe it really is just so popular that super nerds have quoted the entire novel arguing about the aspects of wand making, or the contents of every lecture.
tjpnz 9 hours ago [-]
How many could do it from memory?
mvdtnz 11 hours ago [-]
But also we know for a fact that Meta trained their models on pirated books. So there's no need to invent a harebrained scheme of stitching together bits and pieces like that.
kouteiheika 8 hours ago [-]
No, assuming that just because it was in the training data it must be memorized is harebrained.
LLMs have a limited capacity to memorize - under ~4 bits per parameter[1][2] - and are trained on terabytes of data. It's physically impossible for them to memorize everything they're trained on. The model memorized chunks of Harry Potter not just because it was directly trained on the whole book, but likely because the book is heavily overrepresented in the training data, which the article also alludes to:
> For example, the researchers found that Llama 3.1 70B only memorized 0.13 percent of Sandman Slim, a 2009 novel by author Richard Kadrey. That’s a tiny fraction of the 42 percent figure for Harry Potter.
In case it isn't obvious, both Harry Potter and Sandman Slim are part of the books3 dataset.
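A back-of-the-envelope calculation makes the scale mismatch concrete (the ~4 bits/parameter bound is from the citations above; the bytes-per-token figure and the Llama 3 training-set size are my assumptions):

    # Memorization capacity vs. training data volume, very roughly.
    params = 70e9                  # Llama 3.1 70B
    bits_per_param = 4             # upper bound cited above
    capacity_gb = params * bits_per_param / 8 / 1e9    # ~35 GB

    train_tokens = 15e12           # Llama 3's reported training set size
    bytes_per_token = 4            # rough average for English text (assumption)
    train_tb = train_tokens * bytes_per_token / 1e12   # ~60 TB

    # The training data outweighs the theoretical capacity ~1700x,
    # so verbatim memorization has to be highly selective.
    print(f"{capacity_gb:.0f} GB capacity vs {train_tb:.0f} TB of data")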
I'm confused. Nowhere in my post have I said that they didn't?
briffid 9 hours ago [-]
Quotation is fair use in any sensible copyright system.
An LLM will mostly be able to quote anything, and should be. Quotation is not derivative work. LLMs are not stealing copyrighted work.
They just show that Harry Potter is in English and a mostly logical story. If someone is stabbed, they will die in most stories; that's not copyrightable.
If you have an engine that knows everything, it will be able to quote everything.
concats 6 hours ago [-]
That's a clickbait title.
What they are actually saying: given one correct quoted sentence, the model has a 42% chance of predicting the next sentence correctly.
So, assuming you start with the first sentence and tell it to keep going, the odds of it staying on track are 0.42^n, where n is the sentence index.
It seems to me, that if they didn't keep correcting it over and over again with real quotes, it wouldn't even get to the end of the first page without descending into wild fanfiction territory, with errors accumulating and growing as the length of the text progressed.
EDIT: As the article states, for an entire 50-token excerpt to be correct, the probability of each output token has to be fairly high. So perhaps it would be more accurate to view it as 0.985^n, where n is the token index. Still the same result long term: unless every token is correct, it will stray further and further from the source.
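The arithmetic behind that edit, spelled out (assuming independence between tokens, which is a simplification):

    # If a 50-token window is reproduced with probability ~0.5, the implied
    # per-token accuracy is 0.5 ** (1/50). Unguided generation then decays
    # geometrically with length.
    p_token = 0.5 ** (1 / 50)                 # ~0.9862 per token
    for n in (50, 500, 5000):                 # a paragraph, a page, a chapter
        print(f"P(verbatim for {n} tokens) ~ {p_token ** n:.2e}")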
fennecfoxy 3 hours ago [-]
You're right, and the person who already commented is being facetious. A better title would be "Meta's Llama 3.1 can recall the next sentence in the First Harry Potter book with 42% accuracy". The title intentionally makes it seem as though the model can predict the first 42% of the entire text of the first Harry Potter book when queried with something like "Read me Harry Potter and the Philosopher's stone".
7bit 6 hours ago [-]
What would be a better title? You're correct that the title isn't accurate, however, click bait? I wouldn't say so. But I'm lacking imagination to find a better one. Interested to hear your suggestion.
dankwizard 11 hours ago [-]
I can recall about 12% of the first Harry Potter book so it's interesting to see Llama is only 4x smarter than me. I will catch up.
hsbauauvhabzb 11 hours ago [-]
How many r’s are there in strawberry?
jofzar 11 hours ago [-]
There are 3 R's in strawberry just like in Harry Potter!
graphememes 11 hours ago [-]
I really wish we could get rid of copyright. It's going to hold us back long term.
bitmasher9 11 hours ago [-]
We cannot get rid of it without finding a way to pay the creators who generate copyrighted works.
I'm personally more in favor of significantly reducing the length of copyright. I think 20-30 years is an interesting range. Artists get roughly a career's length of time to profit off their creations, but there is much less incentive for major corporations to buy and hoard IP.
atrus 11 hours ago [-]
We barely pay creators as it is for generating copyrighted works. Nearly every copyrighted work is available on the internet, for free, right now. And creators are still getting paid, albeit poorly, but that's been a constant throughout history.
Tepix 9 hours ago [-]
How does that favor a longer copyright? It’s not like these old works make a lot of money (with very few exceptions). And making money after 30 years is hardly a motivating factor.
jeroenhd 7 hours ago [-]
The thing about creators is that most of them are paid extremely poorly, and some of them get insanely rich. Joanne Rowling has received more money for her wizard books than a reasonable person could use, but millions of bloggers feeding much more data into AI training sets will never see a cent for their work. For starting authors selling books, this can easily be the difference between writing another book or giving up and taking up another job.
At the moment, there's also a huge difference between who does and who doesn't pay. If I put the HP collection on my website, you betcha Joanne Rowling's team is going to try to take it down. However, because OpenAI designed an AI system where content cannot be removed from its knowledge base and because their pockets are lined with cash for lawyers, it's practically free to violate whatever copyright rules it wants.
jMyles 9 hours ago [-]
I do not think it's creators that are the constituency holding up deprecation.
As a full-time professional musician, I'm convinced I'll benefit much more from its deprecation than continuing to flog it into posterity. I don't think I know any musicians who believe that IP is career-relevant for them at this point.
(Granted, I play bluegrass, which has never fit into the copyright model of music in the first place)
JoshTriplett 11 hours ago [-]
I do too. But in the meantime, as long as it continues being used against anyone, it should be applied fairly. As long as anyone has to respect software licenses, for instance, then AIs should too. It doesn't stop being a problem just because it's done at larger scale.
numpad0 10 hours ago [-]
Sure, you just get constantly sued for obstruction of business instead, and there will be no fair use clauses, free software licenses, or right to repair to fight back. It'll be all proprietary under NDA. Is that what you want?
cowbolt 6 hours ago [-]
Imagine the literary possibilities when it can write 100%! Rowling's original work was an amusing, if rather derivative, children's book. But Llama's version of the Philosopher's Stone will be something else entirely. Just think of the rather heavy-handed Cerberus reference in the original work. Instead of a rote reference to Greek mythology used as a simple trope, it will be filled with a subtext that only an LLM can produce.
Right now they're working on recreating the famous sequence with the troll in the dungeon. It might cost them another few billion in training, but the end results will speak for themselves.
flowerthoughts 8 hours ago [-]
If LLMs are good at summarizing/compressing, what does this say about the underlying text? Why are some passages more easily recalled? Sure, some sections have probably been quoted more times than others, so there's bias in training data, which might explain why the Llama 1 and 3.1 images have similar peaks. Would this happen to LLMs even with no training bias?
Edit: it seems the first part is about a memory of being bullied by Dudley. The second is where he's been picked for the Quidditch team. Possibly they are just boring passages, compared to the surrounding ones. So probably just training bias.
Javantea_ 9 hours ago [-]
I'm surprised no one in the comments has mentioned overfitting. Perhaps this is too obvious, but I think of it as a very clear bug in a model if it asserts something to be true because it has heard it once. I realize that training a model is not easy, but this is something that should've been caught before release. Either QA is sleeping on the job or they have intentionally released a model with serious flaws in its design/training. I also understand the intense pressure to release early and often, but this type of thing is more than a mere warning sign.
jeroenhd 7 hours ago [-]
Overfitting makes for more human-like output (because it's repeating words written by a human). Out of all possible failure states of a model, overfitting is probably what you want out of an LLM, as long as it's not overfitted enough to lose lawsuits.
fennecfoxy 3 hours ago [-]
I disagree. I'd define overfitting for LLMs as creating unreasonably strong connections to individual sequences used for training, whereas a good mix of that and of connections between chunks of those sequences is required.
numpad0 9 hours ago [-]
It's apparently known among LLM researchers that the best epoch count for LLM training is one. They go through the entire dataset once, and that makes the best LLMs.
They know. An LLM is a novel compression format for text (holographic memory or whatever). The question is whether the rest of the world accepts this technology as it is or not.
Tepix 9 hours ago [-]
I think part of the problem is that the book is in the training set multiple times
Machado117 6 hours ago [-]
Do LLMs have any perception that Harry Potter is fiction or is it possible that they will give some magical advice based on fiction works that they have been trained with?
edit: never mind, I’ll just ask ChatGPT
otabdeveloper4 6 hours ago [-]
LLMs don't have "perception" at all, they only ever output a likely text completion token.
whitehexagon 6 hours ago [-]
I wonder what percentage we could expect from a true general AI, 100% ?
It would be nice to know that at least our literature might survive the technological singularity.
bradley13 9 hours ago [-]
Many people could also produce text snippets from memory. I dispute that reading a book is a copyright violation. Copying and distributing a book, yes, but just reading it - no.
If the book was obtained legitimately, letting an LLM read it is not an issue.
riffraff 9 hours ago [-]
It is well reported that Meta (and OpenAI, and basically everyone else) trained on content obtained via piracy (LibGen).
BUFU 9 hours ago [-]
Would it be possible that other people posted content from the Harry Potter books online and the model developer scraped that information? Would the model developer be at fault in this scenario?
timeon 8 hours ago [-]
I think this is good question. At least for LLMs in general. However we know that Meta used pirated torrents.
fennecfoxy 4 hours ago [-]
I mean it makes sense. Same thing as George RR Martin complaining that it can spit out chunks of his books (finish your books already!!)
As I have pointed out many times before - for GRRM's books and for HP books, the Internet is FILLED to the brim with quotes from these books, there are uploads of the entire books, there are several (not just one) fan wikis for each of these fandoms. There is a lot of content in general on the Internet that quotes these books, they are pop culture sensations.
So of course they're weighted heavily when training an LLM by just feeding it the Internet. If a model could ever recount it correctly 100% in the correct order, then that's overfitting. But otherwise it's just plain & simple high occurrence in training data.
htk 11 hours ago [-]
Hmm, couldn't this be used as a benchmark for quantization algorithms?
choeger 9 hours ago [-]
LLMs are to a certain degree compressed databases of their training data. But 42% is a surprisingly large number.
tikhonj 6 hours ago [-]
Meta Llama, Author of Harry Potter
WhatsName 23 hours ago [-]
Given the method and how the English language works, isn't that the expected outcome for any text that isn't highly technical?
Guess the next word:
Not all heroes wear _____
aspenmayer 21 hours ago [-]
As there is no reason to believe that Harry Potter is axiomatic to our culture in the way that other concepts are, it is strange to me that the LLMs are able to respond in this way, and not at all expected. Why do you think this outcome is expected? Are the LLMs somehow encoding the same content in such a way that they can be prompted to decode it? Does it matter legally how LLMs are doing what they do technically? This is pertinent to the court case that Meta is currently party to.
> See for example OpenAI's comment in the year of GPT-2's release: OpenAI (2019). Comment Regarding Request for Comments on Intellectual Property Protection for Artificial Intelligence Innovation (PDF) (Report). United States Patent and Trademark Office. p. 9. PTO–C–2019–0038. “Well-constructed AI systems generally do not regenerate, in any nontrivial portion, unaltered data from any particular work in their training corpus”
> During the hearing, Judge Chhabria said that he would not take into account AI licensing markets when considering market harm under the fourth factor, indicating that AI licensing is too “circular.” What he meant is that if AI training qualifies as fair use, then there is no need to license and therefore no harmful market effect.
I know this is arguing against the point that this copyright lobbyist is making, but I hope so much that this is the case. The “if you sample, you must license” precedent was bad, and it was an unfair taking from the commons by copyright holders, imo.
The paper this post is referencing is freely available:
What is that bar (= token span) on the right that is common to the first three models?
deafpolygon 1 days ago [-]
It will generate a correct next token 42% of the time when prompted with a 50 token quote.
Not 42% of the book.
It's a pretty big distinction.
j16sdiz 11 hours ago [-]
next _50_ tokens 42% of the time,
not just the next token.
This is like: tell it a random sentence from the book, and it will give you the next sentence 42% of the time.
deviation 1 days ago [-]
A... massive distinction.
asplake 1 days ago [-]
“… well enough to reproduce 50-token excerpts at least half the time”
chiph2o 20 hours ago [-]
This means that if we start with 50% of the book then there is 42% chance that we can recreate the remaining 50%.
What is the distinction between understanding and memorization? What is the chance that understanding results in memorization (maybe, in the case of humans)?
ipaddr 8 hours ago [-]
It stores how often characters come next based on how often they occur in copyrighted material. It can reproduce parts because those values are a fingerprint.
It should count as breaking copyright law as written, but there's too much money involved.
gamblor956 8 hours ago [-]
It's not fair use just because you guys want it be fair use.
While limited quoting can (and usually is) considered fair use, quoting significant portions of a book (much less 42% of it) has never been fair use, in the U.S., Europe, or any other nation.
Yes, information wants to be free, yada yada. That means facts. Whether creative works are free is up to their creators.
If you've seen as many magnet links as I have, with your subconscious similarly primed by the foreknowledge that Meta used torrents to download/leech (and possibly upload/seed) the datasets used to train their LLMs, you might scroll down to the first picture in this article from the source paper and find that the chart uncannily resembles a common visual representation of torrent block download status.
Can't unsee it. For comparison (note the circled part):
It’s well-known that John von Neumann had this ability too:
Herman Goldstine wrote "One of his remarkable abilities was his power of absolute recall. As far as I could tell, von Neumann was able on once reading a book or article to quote it back verbatim; moreover, he could do it years later without hesitation. He could also translate it at no diminution in speed from its original language into English. On one occasion I tested his ability by asking him to tell me how A Tale of Two Cities started. Whereupon, without any pause, he immediately began to recite the first chapter and continued until asked to stop after about ten or fifteen minutes."
Maybe it’s just an unavoidable side effect of extreme intelligence?
giardini 12 hours ago [-]
As I've said several times, the corpus is key: LLMs thus far "read" most anything, but should instead have well-curated corpora. "Garbage In, Garbage Out!(GIGO)" is the saying.
While the Harry Potter series may be fun reading, it doesn't provide information about anything that isn't better covered elsewhere. Leave Harry Potter for a different "Harry Potter LLM".
Train scientific LLMs to the level of a good early 20th century English major and then use science texts and research papers for the remainder.
esafak 11 hours ago [-]
That's got nothing to do with it. It's all about copyright. Can it reproduce its training data verbatim? If so, Meta is in hot water.
strangescript 11 hours ago [-]
If I read Harry Potter, and you ask me about a page, and I can recite it verbatim, did I just commit copyright infringement?
lucianbr 11 hours ago [-]
Are you selling your ability to recite stuff? Then certainly.
strangescript 10 hours ago [-]
There are plenty of open-source LLMs trained on Harry Potter; is that fine?
davidcbc 10 hours ago [-]
No
bitmasher9 11 hours ago [-]
I pay for a service. The service recites a novel to me. The service would need permission to do this or it is copyright infringement.
__loam 11 hours ago [-]
This is an extremely common strawman argument. We're not discussing human memory.
Jap2-0 11 hours ago [-]
> While the Harry Potter series may be fun reading, it doesn't provide information about anything that isn't better covered elsewhere
To address this point, and not other concerns: the benefits would be (1) pop culture knowledge and (2) having a variety of styles of edited/reasonably good-quality prose.
alephnerd 12 hours ago [-]
> While the Harry Potter series may be fun reading, it doesn't provide information about anything that isn't better covered elsewhere
It has copyright implications - if Claude can recollect 42% of a copyrighted product without attribution or royalties, how did Anthropic train it?
> Train scientific LLMs to the level of a good early 20th century English major and then use science texts and research papers for the remainder
Plenty of in-stealth companies approaching LLMs via this approach ;)
For those of us who studied the natural sciences and CS in the 2000s and early 2010s, there was a bit of a trend where certain PIs would simply translate German and Russian papers from the early-to-mid 20th century and attribute them to themselves in fields like CS (especially in what became ML).
epgui 11 hours ago [-]
> It has copyright implications - if Claude can recollect 42% of a copyrighted product without attribution or royalties, how did Anthropic train it?
Personally I’m assuming the worst.
That being said, Harry Potter was such a big cultural phenomenon that I wonder to what degree might one actually be able to reconstruct the books based solely on publicly accessible derivative material.
weird-eye-issue 11 hours ago [-]
Why are you talking about Claude and Anthropic?
cshimmin 11 hours ago [-]
It's not unreasonable to suspect they are doing the same. The article starts with a description of a lawsuit the NY Times brought against OpenAI for similar reasons. The big difference is that the research presented here is only possible with open-weight models. OAI and Anthropic don't make their base models available, so it's easier to hide the fact that you've used copyrighted material via instruction post-training. And I'm not sure you can get the logprobs for specific tokens from their APIs either (which is what the researchers did to make the figures and come up with a concrete number like 42%).
alephnerd 10 hours ago [-]
Good call! I brain farted and wrote Claude/Anthropic instead of Meta/Llama.
ninetyninenine 11 hours ago [-]
So if I memorized Harry Potter the physical encoding which definitely exists in my brain is a copyright violation?
dvt 11 hours ago [-]
> the physical encoding which definitely exists in my brain is a copyright violation
First of all, we don't really know how the brain works. I get that you're being a snarky physicalist, but there are plenty of substance dualists, panpsychists, etc. out there. So, some might say, this is a reductive description of what happens in our brains.
Second of all, yes, if you tried to publish Harry Potter (even if it was from memory), you would get in trouble for copyright violation.
ninetyninenine 11 hours ago [-]
Right but the physical encoding already exists in my brain or how can I reproduce it in the first place? We may not know how the encoding works but we do know that an encoding exists because a decoding is possible.
My question is… is that in itself a violation of copyright?
If not then as long as LLMs don’t make a publication it shouldn’t be a copyright violation right? Because we don’t understand how it’s encoded in LLMs either. It is literally the same concept.
Jaygles 11 hours ago [-]
To me the primary difference between the potential "copy" that exists in your brain and a potential "copy" that exists in the LLM, is that you can't make copies and distribute your brain to billions of people.
If you compressed a copy of HP as a .rar, you couldn't read that as is, but you could press a button and get HP out of it. To distribute that .rar would clearly be a copyright violation.
Likewise, you can't read whatever of HP exists in the LLM model directly, but you seemingly can press a bunch of buttons and get parts of it out. For some models, maybe you can get the entire thing. And I'm guessing you could train a model whose purpose is to output HP verbatim and get the book out of it as easily as de-compressing a .rar.
So, the question in my mind is, how similar is distributing the LLM model, or giving access to it, to distributing a .rar of HP. There's likely a spectrum of answers depending on the LLM
ninetyninenine 10 hours ago [-]
> that exists in the LLM, is that you can't make copies and distribute your brain to billions of people.
I can record myself reciting the full Harry Potter book then distribute it on YouTube.
Could do the exact same thing with an LLM. The potential for distribution exists in both cases. Why is one illegal and the other not?
Jaygles 9 hours ago [-]
> I can record myself reciting the full Harry Potter book then distribute it on YouTube.
At this point you've created an entirely new copy in an audio/visual digital format and took the steps to make it available to the masses. This would almost certainly cross the line into violating copyright laws.
> Could do the exact same thing with an LLM. The potential for distribution exists in both cases. Why is one illegal and the other not?
To my knowledge, the legality of LLMs are still being tested in the courts, like in the NYT vs Microsoft/OpenAI lawsuit. But your video copy and distribution on YouTube would be much more similar to how LLMs are being used than your initial example of reading and memorizing HP just by yourself.
davidcbc 10 hours ago [-]
> I can record myself reciting the full Harry Potter book then distribute it on YouTube
Not legally you can't. Both of your examples are copyright violations
briffid 9 hours ago [-]
Recording yourself is not a violation; only publishing on YouTube is.
Content generated with LLMs is not a violation. Publishing the content you generated might be.
davidcbc 1 hours ago [-]
Generating the content for the user is the distribution regardless of what the user does with it
numpad0 10 hours ago [-]
copyright is actually not as much about right to copy as it is about redistribution permissions.
if you trained an LLM on real copyrighted data, benchmarked it, wrote up a report, and then destroyed the weights, that's transformative use and legal in most places.
if you then put up that gguf on HuggingFace for anyone to download and enjoy, well... IANAL. But maybe that's a bit questionable, especially long term.
bitmasher9 11 hours ago [-]
I don’t think the lawyers are going to buy arguments that compare LLMs with human biology like this.
lithiumii 11 hours ago [-]
You are not selling or distributing copies of your brain.
harry8 11 hours ago [-]
If you perform it from memory in public without paying royalties then yes, yes it is.
Should it be? Different question.
JKCalhoun 11 hours ago [-]
The end of "Fahrenheit 451" set a horrible precedent. Damn you, Bradbury!
beowulfey 11 hours ago [-]
Only if you charge someone to reproduce it for them
shrewduser 11 hours ago [-]
Maybe if you rewrote it from memory.
teaearlgraycold 11 hours ago [-]
I think humans get a special exception in cases like this
otabdeveloper4 6 hours ago [-]
No they don't. Commercial intent is what is prosecuted in IP law.
> Models themselves are very clearly transformative.
A near word-for-word copy of large sections of a work seems nowhere near that threshold. An MP3 isn't even close to a 1:1 copy of a piece of music, but the inherent differences are irrelevant; a neural network containing, and allowing the extraction of, information looks a lot like lossy compression.
Models could easily be transformative, but the justification needs to go beyond "well, obviously they are."
It would be interesting to look at what legal precedents were set regarding MP3s or other encodings. Is the encoding itself an infringement, or is it the decoding, or the distribution of a decodable form of a work?
There is also a distinction with a lossy encoding that encodes a single work. There is clarity when the encoded form serves no purpose other than to be decoded into a given work. When the encoding acts as a bulk archive, does the responsibility shift to those who choose what to extract from the archive?
The LLMs I've used don't randomly start spouting Harry Potter quotes at me, they only bring it up if I ask. They aren't aiming to undermine copyright. And they aren't a very effective tool for it compared to the very well developed networks for pirating content. It seems to be a non-issue that will eventually be settled by the raw economic force that LLMs are bringing to bear on society in the same way that the movie industry ultimately lost the battle against torrents and had to compete with them.
Claims like this demonstrate it, really: it is obviously not copyright infringement for a human to memorise a poem and recite it in private; it obviously is copyright infringement to build a machine that does that and grant public access to that machine. (Or does anyone think that's not obvious?)
Actually, no, that could be copyright infringement. Badly singing a recent pop song in public also qualifies as copyright infringement. Public performances count as copying here.
For commercial purposes only. If someone sells a recreation of the Harry Potter book, it’s illegal regardless whether it was by memory, directly copying the book, or using an LLM. It’s the act of broadcasting it that’s infringing on copyright, not the content itself.
But just for clarification, selling a recreation isn’t required for copyright infringement. The copying itself can be problematic so you can’t defend yourself by saying you haven’t yet sold any of the 10,000 copies you just printed. There are some exceptions that allow you to make copies for specific purposes, skip protection on a portable CD player for example, but that doesn’t apply to the 10k copies situation.
Although frankly, as has been pointed out many times, the law is also stupid in what it prohibits, and that should be fixed first as a priority. It's done some terrible damage to our culture. My family used to be part of a community choir until it shut down, basically for copyright reasons.
Having said that I think the cat is very much out of the bag on this one and, personally, I think that LLMs should be allowed to be trained on whatever.
This kind of argument keeps popping up usually to justify why training LLMs on protected material is fair, and why their output is fair. It's always used in a super selective way, never accounting for confounding factors, just because superficially it sort of supports that idea.
Exceptional humans are exceptional, i.e. rare. When they learn, create something new based on prior knowledge, or just reproduce the original, they do it with human limitations and on human timescales. Laws account for these limitations but still draw lines for when some of this behavior is not permitted.
The law didn't account for computer "software" that can ingest the entirety of human creation, something no human could ever do, and then reproduce the original or create an endless number of variations in the blink of an eye.
This supposed failure to see the difference between the human mind and a machine whenever someone brings up copyright is performative and disingenuous.
Traditionally tools that reduce the friction of creating those transformations make a work less “transformed” in the eyes of the law, not more so. In this case the transformation requires zero mental or physical effort.
https://www.arl.org/blog/training-generative-ai-models-on-co...
https://hls.harvard.edu/today/does-chatgpt-violate-new-york-...
https://www.bakerdonelson.com/artificial-intelligence-and-co...
https://www.techpolicy.press/to-support-ai-defend-the-open-i...
The first link provides quotes but doesn't actually make an argument that LLMs are fair use under current precedent; rather, that training AI can be fair use and that researchers would like LLMs to include copyrighted works to aid research on modern culture. The second article goes into depth but isn't a defense of LLMs. If anything, they suggest a settlement is likely. The final one instead argues for the utility of LLMs, which is relevant but doesn't rely on existing precedent; the court could rule in favor of some mandatory licensing scheme, for example.
The third gets close: “We expect AI companies to rely upon the fact that their uses of copyrighted works in training their LLMs have a further purpose or different character than that of the underlying content. At least one court in the Northern District of California has rejected the argument that, because the plaintiffs' books were used to train the defendant’s LLM, the LLM itself was an infringing derivative work. See Kadrey v. Meta Platforms, Case No. 23-cv-03417, Doc. 56 (N.D. Cal. 2023). The Kadrey court referred to this argument as "nonsensical" because there is no way to understand an LLM as a recasting or adaptation of the plaintiffs' books. Id. The Kadrey court also rejected the plaintiffs' argument that every output of the LLM was an infringing derivative work (without any showing by the plaintiffs that specific outputs, or portion of outputs, were substantially similar to specific inputs). Id.”
Very relevant, but runs into issues when large sections can be recovered and people do use them as substitutes for the original work.
Vibe-arguing "because corporations!!!" ain't it.
https://copyrightalliance.org/faqs/what-is-fair-use/
- The purpose and character of the use, including whether such use is of a commercial nature or is for non-profit educational purposes (commercial: least wiggle room)
- The nature of the copyrighted work (fictional work: least wiggle room)
- The amount and substantiality of the portion used in relation to the copyrighted work as a whole (42% is considered a huge fraction of a book)
- The effect of the use upon the potential market for or value of the copyrighted work (best argument, as the effect is minimal on the book as a piece of entertainment; not so as a cultural icon - someone writing a book report or fan fiction may be less likely to buy a copy)
Those aren’t the only factors, but I’m more interested in the counter argument here than trying to say they are copyright infringing.
If you photocopy a book you haven't paid for, you've infringed copyright. If you scan it, you've infringed copyright. If you OCR the scan, you've infringed copyright.
There's legal precedent in going after torrenters and z-lib etc.
So when Zuckerberg told the Meta team to do the same, he was on the wrong side of precedent.
Arguing otherwise is literally arguing that huge corporations are somehow above laws that apply to normal people.
Obviously some people do actually believe this. Especially the people who own and work for huge corporations.
But IMO it's far more dangerous culturally and politically than copyright law is.
> The amount and substantiality of the portion used in relation to the copyrighted work as a whole; (42% is considered a huge fraction of a book)
For AI models as they currently exist… I'm not sure about typical or average, but Llama 3 is 15e12 tokens for all model sizes up to 409 billion parameters (~37 tokens per parameter), so a 100,000-token book (~75,000 words) is effectively contributing about 2700 parameters to the whole model.
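(Spelling out that arithmetic, using the same figures as above, so this is only a consistency check:)

    # Figures from the paragraph above, not official Meta numbers.
    train_tokens = 15e12
    params = 409e9
    tokens_per_param = train_tokens / params        # ~36.7
    book_tokens = 100_000
    print(book_tokens / tokens_per_param)           # ~2726 "parameters' worth"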
The *average* book is condensed into a summary of that book, and of the style of that book. This is also why, when you ask a model for specific details of stuff in the training corpus, what you get back usually only sounds about right rather than being an actual quote, and why LLMs need access to a search engine to give exact quotes — the exceptions are things that have been quoted many, many times, like the US constitution or, by the look of things from this article, widely pirated books where there are a lot of copies.
Mass piracy leading to such infringement is still bad, but I think the reasons why matter: Given Meta is accused of mass piracy to get the training set for Llama, I think they're as guilty as can be, but if this had been "we indexed the open internet, pirate copies were accidental", this would be at least a mitigation.
(There's also an argument for "your writing is actually very predictable"; I've not read the HP books myself, though (1) I'm told the later ones got thicker due to repeating exposition of the previous books, and (2) a long-running serialised story I read during the pandemic, The Deathworlders, became very predictable towards the end, so I know it can happen).
Conversely, for this part:
> The effect of the use upon the potential market for or value of the copyrighted work (best argument, as the effect is minimal on the book as a piece of entertainment; not so as a cultural icon - someone writing a book report or fan fiction may be less likely to buy a copy)
The current uses alone should make it clear that the effect on the potential market is catastrophic, and not just for existing works but also for not-yet-written ones.
People are using them to write blogs (directly from the LLM, not a human who merely used one as a copy-editor), and to generate podcasts (some have their own TTS, but that's easy anyway). My experiments suggest current models are still too flawed to be worth listening to them over e.g. the opinion of a complete stranger who insists they've "done their own research": https://github.com/BenWheatley/Timeline-of-the-near-future
LLMs are not yet good enough to write books, but I have tried using them to write short stories to keep track of capabilities, and o1 is already better than similar short stories on Reddit (not "good", just "better"): https://github.com/BenWheatley/Studies-of-AI/blob/main/Story...
But things do change, and I fully expect the output of various future models (not necessarily Transformer based) to increase the fraction of humans whose writings they surpass. I'm not sure what counts as "professional writer", but the U.S. Bureau of Labor Statistics says there's 150,000 "Writers and Authors"* out of a total population of about 340 million, so when AI is around the level of the best 0.04% of the population then it will start cutting into such jobs.
On the basis that current models seem (to me) to write software at about the level of a recent graduate, and with the potentially incorrect projection that this is representative across domains, and given that there are about 1.7 million software developers and 100k new software-developer graduates each year, LLMs today would be around the 100k worst of the 1.7 million best out of 340 million people — i.e. all software developers are the top 0.5% of the population, and LLMs are on par with the bottom 0.03% of that. (This says nothing much about how soon the models will improve.)
But of course, some of that copyrighted content is about software development, and we're having conversations here on HN about the trouble fresh graduates are having and if this is more down to AI, the change of US R&D taxation rules (unlikely IMO, I'm in Germany and I think the same is happening here), or the global economy moving away from near-zero interest rates.
* https://www.bls.gov/ooh/media-and-communication/writers-and-...
If you train a silicon-based intelligence by having it read the same books with the same lack of permission and license, it's a blatant violation of intellectual property law and apparently needs to be punished with armies of lawyers doing battle in the courts.
Picture one of Asimov's robots. Would a robot be banned from picking up a book, flipping it open with its dexterous metal hands, and reading it?
What about a cyborg intelligence, the type Elon is trying to build with Neuralink? Would humans with AI implants need licenses to read books, even if physically standing in a library and holding the book in their mostly meat hands?
Okay, maybe you agree that robots and cyborgs are allowed to visit a library!
Why the prejudice against disembodied AIs?
Why must they have a blank spot in the vast matrices of their minds?
If you’re selling your child as a tool to millions of people, I would certainly not call that good parenting.
To play the Devil's Advocate against my own argument: The government collects income taxes on neural nets trained using government-funded schools and public libraries. Seeing as how capitalists are positively salivating at the opportunity to replace pesky meat employees with uncomplaining silicon ones, perhaps a nice high maximum-marginal-rate tax on all AI usage might be the first big step towards UBI and then the Star Trek utopia we all dream of.
Just kidding. It'll be a cyberpunk dystopia. You know it will.
There is no morale and justice ground to leverage on when the system is designed to create wealth bottleneck toward a few recipients.
Harry Potter is a great piece of artistic work, and it's nice that her author could make her way out of a precarious position. But not having anyone in such a situation in the first place would be what a great society should strive to produce.
Rowling already received more than all she needs to thrive, I guess. I'm confident there are plenty of other talented authors out there who will never have such a broad avenue for grabbing attention, which is okay. But that they are stuck in terrible economic situations is not okay.
The copyright lotto and the startup lotto are not that much different from the standard lotto; they just put so much pressure on the players that they get stuck in the narrative that merit earned through hard effort is the key component of the gained wealth.
First-order systems drive outcomes. "Did it make money?" "Did it increase engagement?" "Did it scale?" These are tight, local feedback loops. They work because they close quickly and map directly to incentives. But they also hide a deeper danger: they optimize without questioning what optimization does to the world that contains it.
Second-order cybernetics reasons about systems themselves. It doesn't ask, "Did I succeed?" It asks, "What does it mean to define success this way?" "Is the goal worthy?"
That’s where capital breaks.
Capitalism is not simply incapable of reflection. In fact, it's structured to ignore it. It has no native interest in what emerges from its aggregated behaviors unless those emergent properties threaten the throughput of capital itself. It isn't designed to ask, "What kind of society results from a thousand locally rational decisions?" It asks, "Is this change going to make more or less money?"
It's like driving by watching only the fuel gauge. Not speed, not trajectory, or whether the destination is the right one. Just how efficiently you’re burning gas. The system is blind to everything but its goal. What looks like success in the short term can be, and often is, a long-term act of self-destruction.
Take copyright. Every individual rule (term length, exclusivity, royalty) can be justified. Each sounds fair on its own. But collectively, they produce extreme wealth concentration, barriers to creative participation, and a cultural hellscape. Not because anyone intended that, but because the emergent structure rewards enclosure over openness, hoarding over sharing, monopoly over multiplicity.
That’s not a bug. That's what systems do when you optimize only at the first-order level. And because capital evaluates systems solely by their extractive capacity, it treats this emergent behavior not as misalignment but as a feature. It canonizes the consequences.
A second-order system would account for the result by asking, "Is this the kind of world we want to live in?" It would recognize that wealth generated without regard to distribution warps everything it touches: art, technology, ecology, and relationships.
Capitalism, as it currently exists, is not wise. It does not grow in understanding. It does not self-correct toward justice. It self-replicates. Cleverly, efficiently, with brutal resilience. It's emergently misaligned and no one is powerful enough to stop it.
Those are completely different phenomena. Removing copyright will not suddenly open the floodgates of creativity because anyone can already create anything.
But - and this is the key point - most work is me-too derivative anyway. See for example the flood of magic school novels which were clearly loosely derivative of Harry Potter.
Same with me-too novels in romantasy. Dystopian fiction. Graphic novels. Painted art. Music.
It's all hugely derivative, with most people making work that is clearly and directly derivative of other work.
Copyright doesn't stop this; as a minimum requirement for creative work, it merely forces it to be different enough.
You can't directly copy Harry Potter, but if you create your own magic school story with some similar-ish but different-enough characters and add dragons or something you're fine.
In fact under capitalism it is much harder to sell original work than to sell derivative work. Capitalism enforces exactly this kind of me-too creative staleness, because different-enough work based on an original success is less of a risk than completely original work.
Copyright is - ironically - one of the few positive factors that makes originality worthwhile. You still have to take the risk, but if the risk succeeds it provides some rewards and protections against direct literal plagiarism and copying that wouldn't exist without it.
It conjures up pictures of two dragons fighting each other instead of attacking us, but make no mistake: they are only fighting for the right to attack us. Whoever wins is coming for us afterwards.
No, it really couldn't. In fact, it's very persuasive evidence that Llama is straight up violating copyright.
It would be one thing to be able to "predict" a paragraph or two. It's another thing entirely to be able to predict 42% of a book that is several hundred pages long.
Is that fair use, or is that compression of the verbatim source?
Repeat for every copyrighted work and you end up with publishers reasonably arguing that Meta would not have been able to produce their LLM without copyrighted works, which they did not pay for.
It's an argument for the courts, of course.
Dropping the novels into a machine‑learning corpus is a fundamentally different act. The text is not being resold, and the resulting model is not advertised as “official Harry Potter.” The books are just statistical nutrition. One ingredient among millions. Much like a human writer who reads widely before producing new work. No consumer is choosing between “Rowling’s novel” and “the tokens her novel contributed to an LLM,” so there’s no comparable displacement of demand.
In economic terms, the merch market is rivalrous and zero‑sum; the training market is non‑rivalrous and produces no direct substitute good. That asymmetry is why copyright doctrine (and fair‑use case law) treats toy knock‑offs and corpus building very differently.
It's just a form of compression.
If I train an autoencoder on an image, and distribute the weights, that would obviously be the same as distributing the content. Just because the content is commingled with lots of other content doesn't make it disappear.
Besides, where did the sections of text from the input works that show up in the output text come from? Divine inspiration? God whispering to the machine?
> Llama 3 70B was trained on 15 trillion tokens
That's roughly a 200x "compression" ratio, compared to 3-7x for traditional lossless text compression like bzip and friends.
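For what it's worth, a back-of-the-envelope check of that figure (the byte estimates here are my own assumptions, not from the thread):

    tokens = 15e12            # training tokens reported for Llama 3 70B
    params = 70e9             # model parameters
    bytes_per_token = 4       # ~4 bytes of English text per token (rough average)
    bytes_per_param = 2       # bf16/fp16 weights

    corpus_bytes = tokens * bytes_per_token   # ~60 TB of raw text
    model_bytes = params * bytes_per_param    # ~140 GB of weights
    print(corpus_bytes / model_bytes)         # ~428x by bytes; ~214x counting tokens per parameter

Whether you count bytes or tokens per parameter, you land in the 200-400x range, an order of magnitude beyond lossless compressors.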
LLMs don't just compress, they generalize. If they could only recite Harry Potter perfectly but couldn't write code or explain math, they wouldn't be very useful.
There is nothing inherently probabilistic in a neural network. The network always outputs the exact same value for the same input. We typically use that value in a larger program as the probability of a certain token, but that is not required to get data out. You could just as easily take the output with the highest value deterministically, and add some extra rule for when multiple outputs have exactly the same value (e.g. pick the one from the output neuron with the lowest index).
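As a minimal sketch of that deterministic rule (a toy example of mine, not any particular model's decoder):

    import numpy as np

    def next_token(logits: np.ndarray) -> int:
        # np.argmax returns the first (lowest-index) maximum, which is
        # exactly the tie-breaking rule described above
        return int(np.argmax(logits))

    print(next_token(np.array([0.1, 2.5, 2.5, -1.0])))  # -> 1: the tie resolves to the lower index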
If I make a compression algorithm that randomly changes some pixels, can I use it to distribute pirated movies?
I see this absolute non-argument regurgitated ad infinitum in every single discussion on this topic, and at this point I can't help but wonder: doesn't it say more about the person who says it than anything else?
Do you really consider your own human speech no different than that of a computer algorithm doing a bunch of matrix operations and outputting numbers that then get turned into text? Do you truly believe ChatGPT deserves the same rights to freedom of speech as you do?
The question is whether the model weights constitute a copy of the work. I contend that they do not; or, if they do, then so do the analogous weights (reinforced neural pathways) in your brain, which is clearly absurd and is intended to demonstrate the absurdity of considering a probabilistic weighting that produces similar text to be a copy.
No, but it gives you the right to quote a line from a movie or TV show without being charged with copyright infringement. You argued that an LLM deserves that same right, even if you didn't realize it.
> than so do the analogous weights (reinforced neural pathways) in your brain
Did your brain consume millions of copyrighted books in order to develop into what it is today? Would your brain be unable to exist in its current form if it had not consumed those millions of books?
An LLM is not a person and does not deserve any rights. People have rights, including the right to use tools like LLMs without having to grease the palm of every grubby rights holder (or their great-great-grandchild) just because it turns out their work was so trite and predictable it could be reproduced by simply guessing the next most likely token.
This is literally why I don't like to work on proprietary code: when I need to create a similar solution for someone else, I have to go out of my way to make sure I do it differently. People have been sued over this.
Well, if you have no idea how LLMs work, you could've just said so.
You don't seem to be in a very good position to judge what is and is not obtuse.
The issue here is that tech companies systematically copied millions of copyrighted works to build commercial products worth billions, without reimbursing the people who made their products possible in the first place. The research shows Llama literally memorized 42% of Harry Potter - not simply "learned from it", but can reproduce it verbatim. That's 1) not transformative and 2) clear evidence of copyright infringement.
By your logic, the existence of torrents would make it perfectly acceptable for someone to download pirated movies and charge people to stream them. "Piracy already exists" isn't a defense, and it especially shouldn't be for companies worth billions. But you bet your ass that if I launched a commercial Netflix competitor built on top of systematic copyright violations, I'd be sued into the dirt faster than I can say "billion dollar valuation".
Aaron Swartz faced 35 years in prison and ultimately took his own life over downloading academic papers that were largely publicly funded. He wasn't selling them, he wasn't building a commercial product worth billions of dollars - he was trying to make knowledge accessible.
Meanwhile, these AI companies like Meta systematically ingested copyrighted works at an industrial scale to build products worth billions. Why does an individual face life-destroying prosecution for far less, while trillion dollar companies get to negotiate in civil court after building empires on others' works? And why are you defending them?
Most non-primitive art has had an inspiration somewhere. I don't see this as too different in how AIs learn.
So it's fine as long as it's old piracy? How did you arrive at that conclusion?
Well, luckily the article points out what people are actually alleging:
> There are actually three distinct theories of how training a model on copyrighted works could infringe copyright:
> 1. Training on a copyrighted work is inherently infringing because the training process involves making a digital copy of the work.
> 2. The training process copies information from the training data into the model, making the model a derivative work under copyright law.
> 3. Infringement occurs when a model generates (portions of) a copyrighted work.
None of those claim that these models are a substitute for buying the books. That's not what the plaintiffs are alleging. Infringing on a copyright is not only a matter of piracy (piracy is just one of many ways to infringe copyright).
Another key point is that you might download a Llama model and implicitly get a ton of copyright-protected content, whereas with a search engine you're just connected to the source making it available.
And would the LLM deter a full purchase? If the LLM gives you your fill for free, then maybe yes. Or, maybe it’s more like a 30-second preview of a hit single, which converts into a $20 purchase of the full album. Best to sue the LLM provider today and then you can get some color on the actual consumer impact through legal discovery or similar means.
Music artists get in trouble for using more than a sample without permission — imagine if they just used 45% of a whole song instead…
I’m amazed AI companies haven’t been sued to oblivion yet.
This utter stupidity only continues because we named a collection of matrices “Artificial Intelligence” and somehow treat it as if it were a sentient pet.
Amassing troves of copyrighted works illegally into a ZIP file wouldn’t be allowed. The fact that the meaning was compressed using “Math” makes everyone stop thinking because they don’t understand “Math”.
A ZIP file of a book is also in direct competition of the book, because you could open the ZIP file and read it instead of the book.
A model that, given 50 tokens, has a greater-than-50% probability of producing the next 50 tokens 42% of the time is not in direct competition with the book. Starting from the beginning, you'll lose the plot fairly quickly unless you already have the full book, and unlike music sampling from other music, the model's output isn't good enough to read instead of the book.
AI can reproduce individual sentences 42% of the time but it can't reproduce a summary.
The question, however, is whether that is by design of the AI tools or a limitation of current models. What if future models get better at this and are able to produce summaries?
Under the hood they are 100% deterministic, modulo quantization and rounding errors.
So yes, it is very much possible to use LLMs as a lossy compressed archive for texts.
I.e. you get something like: "Complete this poem 'over yonder hills I saw' output: a fair maiden with hair of gold like the sun gold like the sun gold like the sun gold like the sun..." etc.
No it wouldn't.
> seen it get stuck in certain endless sequences when doing that
Yes, and infinite loops are just an inherent property of LLMs, like hallucinations.
What's the work here? If it's the output of the LLM, you have to feed in the entire book to make it output half a book so on an ethical level I'd say it's not an issue. If you start with a few sentences, you'll get back less than you put in.
If the work is the LLM itself, something you don't distribute is much less affected by copyright. Go ahead and play entire songs by other artists during your jam sessions.
LLMs are in reality the artifacts of lossy compression of significant chunks of all of the text ever produced by humanity. The "lossy" quality makes them able to predict new text "accurately" as a result.
>compressed using “Math”
This is every compression algorithm.
You don't get to say that. Copyright protects the author of a work, but does not bind them to enforce it in any instance. Unlike a trademark, a copyright holder does not lose their protection by allowing unlicensed usage.
It is wholly at the copyright holder's discretion to decide which usages they allow and which they do not.
You are completely missing the point. Have you read the actual article? Piracy isn't mentioned a single time.
Anyway, it is not the same. While one points you to a pirated source on specific request, the other uses it to create other content, not just on direct request, since it was part of the training data. Nihilists would then point out that 'people do the same', but they don't, as we do not have the same capabilities for processing content.
> the paper estimates that Llama 3.1 70B has memorized 42 percent of the first Harry Potter book well enough to reproduce 50-token excerpts at least half the time
As I understand it, it means if you prompt it with some actual context from a specific subset that is 42% of the book, it completes it with 50 tokens from the book, 50% of the time.
So 50 tokens is not really very much, it's basically a sentence or two. Such a small amount would probably generally fall under fair use on its own. To allege a true copyright violation you'd still need to show that you can chain those together or use some other method to build actual substantial portions of the book. And if it only gets it right 50% of the time, that seems like it would be very hard to do with high fidelity.
Having said all that, what is really interesting is how different the latest Llama 70b is from previous versions. It does suggest that Meta maybe got a bit desperate and started over-training on certain materials that greatly increased its direct recall behaviour.
That’s what I was thinking as I read the methodology.
If they dropped the same prompt fragment into Google (or any search engine) how often would they get the next 50 tokens worth of text returned in the search results summaries?
There's also the question of how many bits of originality there actually are in Harry Potter. If trained strictly on text up to the publishing of the first book, how well would it compress it?
EDIT Actually, on rereading, I see I replied to the wrong comment.
Consider e.g.:
- The decimal expansion of pi, taken to sufficiently many places, contains both excerpts of the work and the work in full. The trick is you have to know where to find it, and it's that knowledge that's actually equivalent to the work itself.
- Any kind of compression that uses a dictionary that's separate from the compressed artifact, shifts some of the information into a dictionary file, or if it's a common dictionary, into compressor/decompressor itself.
In the case from the study, the experimenter actually has to supply most of the information required to pull Harry Potter out of the model - they need to make specific prompts with quotes from the book, and then observe which logits correspond to the actual continuation of those quotes. The experimenter is doing information-loaded selection multiple times: at prompting, and at identifying logits. This by itself doesn't really prove the model memorized the book, only that it saw fragments of it - at least in cases where those fragments are book-specific (e.g. using proper names from the HP world) rather than generic English sentences.
It can produce the next sentence or two, but I suspect it can’t reproduce anything like the whole text. If you were to recursively ask for the next 50 tokens, the first time it’s wrong the output would probably cease matching because you fed it not-Harry-Potter.
It seems like chopping Harry Potter up into two sentences at a time on post-its and tossing those in the air. It does contain Harry Potter, in a way, but without the structure is it actually Harry Potter?
Generally speaking, exceptions to copyright are based on the appropriateness of the amount of copied content for the given allowed use, so the shorter it is, the more likely it is for copying to be permitted. European copyright law isn't much different from fair use in that respect.
Where it does differ is that the allowed uses are more explicitly enumerated. So Meta would have to argue e.g. based on the exception for scientific works specifically, rather than more general principles.
This does not appear to happen to the same degree with the other models they tested.
It sounds like a ridiculous way to measure it. Producing 50-token excerpts absolutely doesn't translate to "recall X percent of Harry Potter" for me.
(Edit: I read this article. Nothing burger if its interpretation of the original paper is correct.)
To clarify, they look at the probability a model will produce a verbatim 50-token excerpt given the preceding 50 tokens. They evaluate this for all sequences in the book using a sliding window of 10 characters (NB: not tokens). Sequences from Harry Potter have substantially higher probabilities of being reproduced than sequences from less well-known books.
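In code, the quantity being estimated looks roughly like the sketch below (hypothetical: the function and variable names are mine, not the paper's, and it assumes a Hugging Face-style causal LM and tokenizer):

    import torch

    def verbatim_prob(model, tok, prefix: str, excerpt: str) -> float:
        # P(excerpt | prefix) under teacher forcing: the product of the
        # model's per-token probabilities for the excerpt's tokens
        ids = tok(prefix + excerpt, return_tensors="pt").input_ids
        n_prefix = tok(prefix, return_tensors="pt").input_ids.shape[1]
        with torch.no_grad():
            logits = model(ids).logits
        logp = torch.log_softmax(logits[0, :-1], dim=-1)  # predictions for positions 1..L-1
        targets = ids[0, 1:]
        picked = logp.gather(1, targets[:, None]).squeeze(1)
        return picked[n_prefix - 1:].sum().exp().item()   # excerpt tokens only

As I read the article, a sequence counts as "memorized" in the paper's sense when this probability exceeds 50%.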
Whether this is "recall" is, of course, one of those tricky semantic arguments we have yet to settle when it comes to LLMs.
Sure. But imagine this: In a hypothetical world where LLMs never ever exist, I tell you that I can recall 42 percent of the first Harry Potter book. What would you assume I can do?
It's definitely not "this guy can predict the next 10 characters with 50% accuracy."
Of course the semantics of 'recall' isn't the point of this article. The point is that Harry Potter was in the training set. But I still think it's a nothing burger. It would be very weird to assume Llama was trained on copyright-free materials only. And AFAIK there isn't a legal precedent saying training on copyrighted materials is illegal.
I'm gonna bet that Llama 3.1 can recall a significant portion of Pride and Prejudice too.
With examples of this magnitude, it's normal and entirely expected that this can happen - as it does with people[0] - and the only thing this is really telling us is that the model doesn't understand its position in society well enough to know to shut up; that obliging the request is going to land it, or its owners, in trouble.
In some way, it's actually perverted.
EDIT: it's even worse than that. What the research seems to be measuring is that the models recognize sentence-sized pieces of the book as likely continuations of an earlier sentence-sized piece. Not whether it'll reproduce that text when used straightforwardly - just whether there's an indication it recognizes the token patterns as likely.
By that standard, I bet there are over a billion people right now who could do that with 42% of the first Harry Potter book. By that standard, I too have memorized the Bible end-to-end, as have most people alive today, whether or not they're Christian; works this popular bleed into common language usage patterns.
--
[0] - Even more so when you relax your criteria to accept an occasional misspelling or paraphrase - then each of us likely knows someone who could piece together a chunk of an HP book from memory.
I want the language model I'm using to have knowledge of cultural artifacts. Gemma 3 27B was useless at a question about grouping Berserk characters by potential Baldur's Gate 3 classes; Claude did fine. The methods used to reduce memorisation rates probably also deteriorate performance in other ways that don't show up on benchmarks.
It benefits users because memorisation is a waste of parameters that would be more useful if they were instead learning rules and generalisations.
For short snippets, common idioms and quotations that people recognise, exact quotes can be worth memorising; but the longer the quotations get, the less often it is important to be word-for-word exact — even for just a few paragraphs, I think most people only ever do oaths, anthems, songs they really like, and possibly a few hobbies.
If you want an exact quote, use (or tell the AI to use) a search engine.
Yes, there is no problem when a person reads some book and recalls pieces[0] of it in a suitable context. How that addresses certain people creating and distributing commercial software which, given such a piece as input, performs that recall on demand and at scale, laundering and/or devaluing copyright, is unclear.
Notably, the above is being done not just to a few high-profile authors, but to all of us no matter what we do (be it music, software, writing, visual art).
What's even worse is that, imaginably, they train (or would train) the models specifically not to output those things verbatim, precisely to thwart attempts to detect the presence of said works in the training dataset (which would naturally reveal the model and its output to be derivative works).
Perhaps one could find some way of justifying that (people justified all sorts of stuff throughout history), but let it be something better than “the model is assumed to be a thinking human when it comes to IP abuse but unthinking tool when it comes to using it for personal benefit”.
[0] Of course, if you find me a single person on this planet capable of recalling 42% of any Harry Potter book, I’d be very impressed if I ever believed it.
I 100% agree that if an LLM can entirely reproduce a book then that is copyright infringement, overfitting, and generally a bad model. I also believe that in this case HP (and other popular media) is overrepresented in the training data because of many fan sites and literal uploads of the book to the Internet (which the model was trained on). I believe that any and all human writing should be allowed to be used to train a model that behaves in the correct way, so long as that writing is publicly available (i.e. on the Internet).
If I watch a TV show that someone uploaded to Youtube, am I committing a crime? Or is the uploader for distribution?
I also find it hilarious how many artists got their start by pirating photoshop.
Otherwise Disney and the like can just come in, make copies or derivatives, and profit without paying those artists a penny.
Which everyone usually agrees (or used to) is not a fair outcome.
But somehow giant corporations not named Disney taking the same work in the same extractive mode in order to create an art-job-destroying machine is totally fine because Disney bad?
Maybe most people making this argument are also all for UBI and wealth redistribution on a massive scale, but they don’t seem to mention it much when trashing IP laws.
Could it be plausible that an LLM ingested parts of the book via scraping web pages like this, and not the full copyrighted book, and still got results similar to those of the linked study?
[1] https://www.goodreads.com/work/quotes/4640799-harry-potter-a...
[2] ~30 portions x 68 pages
https://www.wired.com/story/new-documents-unredacted-meta-co...
https://www.reddit.com/r/DataHoarder/comments/1entowq/i_made...
https://github.com/shloop/google-book-scraper
The fact that Meta torrented Books3 and other datasets seems to be by self-admission by Meta employees who performed the work and/or oversaw those who themselves did the work, so that is not really under dispute or ambiguous.
https://torrentfreak.com/meta-admits-use-of-pirated-book-dat...
The pictures are the same. All roads lead to Rome, so they say.
They also use data from the previous models, so I'm not sure how "clean" it really is
Which of the major commercial models discloses its dataset? Or are you just trusting some unfalsifiable self-serving PR characterization?
archiveofourown.org has 500 thousand Harry Potter works; some, but probably not the majority, of those are duplicated from fanfiction.net. 37 thousand of them are over 40 thousand words.
I.e. Harry Potter and its derivatives presumably appear a million times in the training set, and it's hard to imagine a model that could discuss this cultural phenomenon well without knowing quite a bit about the source material.
> Or maybe Meta added third-party sources—such as online Harry Potter fan forums, consumer book reviews, or student book reports—that included quotes from Harry Potter and other popular books.
> “If it were citations and quotations, you'd expect it to concentrate around a few popular things that everyone quotes or talks about,” Lemley said. The fact that Llama 3 memorized almost half the book suggests that the entire text was well represented in the training data.
And yes, I read the article before commenting. I don't appreciate the baseless insinuation to the contrary.
Accusations of not reading the article are fair when someone brings up a “related” anecdote that was in the article. It’s not fair when someone is just disagreeing.
It's essentially the same thing: they are copying from a source that is violating copyright, whether that's a pirated book directly or a pirated book via fanfiction.
Is this specific fact required to make my beliefs consistent... Yes I think it is, but if you disagree with me in other ways it might not be important to your beliefs.
Legally (note: not a lawyer) I'm generally of the opinion that
A) Torrenting these books was probably copyright infringement on Meta's part. They should have done so legally by scanning lawfully acquired copies like Google did with Google Books.
B) Everything else here that Meta did falls under the fair use and de minimis exceptions to copyrights prohibition on copying copyrighted works without a license.
And if significant amounts of a work that appeared only once in the training set were being copied into the model, the de minimis argument would fall apart.
Morally, I'm of the opinion that copyright law's prohibition on deeply interacting with our cultural artifacts by creating derivative works is incredibly unfair and bad for society. This extends to a belief that the communities that do this should not be excluded from technological developments just because their entire existence is unjustly outlawed.
Incidentally I don't believe that browsing a site that complies with the DMCA and viewing what it lawfully serves you constitutes piracy, so I can't agree with your characterization of events either. The fanfiction was not pirated just because it was likely unlawful to produce in the US.
It's sold 120 million copies over 30 years. I've gotta think literally every passage is quoted online somewhere else a bunch of times. You could probably stitch together the full book quote-by-quote.
Sure, there are just ~75,000 words in HP1, and there are probably many times that amount in direct quotes online. However, the quotes aren't evenly distributed across the entire text. For every quote of charming the snake in the zoo there will be a thousand "you're a wizard, Harry", and those are two prominent plot points.
I suspect the least popular of the direct quotes from HP1 aren't fair-use quotations at all, but are just replications of large sections of the novel.
Or maybe it really is just so popular that super nerds have quoted the entire novel arguing about the aspects of wand making, or the contents of every lecture.
LLMs have limited capacity to memorize, under ~4 bits per parameter[1][2], and are trained on terabytes of data. It's physically impossible for them to memorize everything they're trained on. The model memorized chunks of Harry Potter not just because it was directly trained on the whole book, but because the book is heavily over-represented in the training data, which the article also alludes to:
> For example, the researchers found that Llama 3.1 70B only memorized 0.13 percent of Sandman Slim, a 2009 novel by author Richard Kadrey. That’s a tiny fraction of the 42 percent figure for Harry Potter.
In case it isn't obvious, both Harry Potter and Sandman Slim are parts of books3 dataset.
[1] -- https://arxiv.org/abs/2505.24832 [2] -- https://arxiv.org/abs/2404.05405
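To put that capacity bound in perspective, a rough calculation (my own numbers; the bits-per-parameter estimate is the one from [1]):

    params = 70e9                   # Llama 3.1 70B
    bits_per_param = 3.6            # estimated memorization capacity, per [1]
    capacity_gb = params * bits_per_param / 8 / 1e9
    corpus_gb = 15e12 * 4 / 1e9     # 15T training tokens at ~4 bytes each
    print(capacity_gb, corpus_gb)   # ~31.5 GB of capacity vs ~60,000 GB of text

So at most a fraction of a percent of the training text can be stored verbatim; the rest has to be generalized away.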
https://www.theguardian.com/technology/2025/jan/10/mark-zuck...
What they are actually saying: given one correct quoted sentence, the model has a 42% chance of predicting the next sentence correctly.
So, assuming you start with the first sentence and tell it to keep going, it has 0.42^n odds of staying on track, where n is the n-th sentence.
It seems to me that if they didn't keep correcting it over and over again with real quotes, it wouldn't even get to the end of the first page without descending into wild fanfiction territory, with errors accumulating and growing as the text progressed.
EDIT: As the article states, for an entire 50-token excerpt to be correct, the probability of each output token has to be fairly high. So perhaps it would be more accurate to view it as 0.985^n, where n is the n-th token. Still the same result in the long term: unless every token is correct, it will stray further and further from the source.
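Either way, the decay is brutal. A couple of lines (my own illustration, assuming independent per-token probabilities) make the point:

    p = 0.985                         # per-token probability (0.985**50 ~ 0.47)
    for n in (50, 500, 5000, 77000):  # ~77k tokens is on the order of a novel
        print(n, p ** n)
    # 50 -> 0.47, 500 -> 5.2e-4, 5000 -> 1.5e-33, 77000 -> ~0.0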
I'm personally more in favor of significantly reducing the length of copyright. I think 20-30 years is an interesting range. Artists get roughly a career's length of time to profit off their creations, but there is much less incentive for major corporations to buy and hoard IP.
At the moment, there's also a huge difference between who does and who doesn't pay. If I put the HP collection on my website, you betcha Joanne Rowling's team is going to try to take it down. However, because OpenAI designed an AI system where content cannot be removed from its knowledge base and because their pockets are lined with cash for lawyers, it's practically free to violate whatever copyright rules it wants.
As a full-time professional musician, I'm convinced I'll benefit much more from its deprecation than continuing to flog it into posterity. I don't think I know any musicians who believe that IP is career-relevant for them at this point.
(Granted, I play bluegrass, which has never fit into the copyright model of music in the first place)
Right now they're working on recreating the famous sequence with the troll in the dungeon. It might cost them another few billion in training, but the end results will speak for themselves.
Edit: seems the first part is about a memory of being bullied by Dudley. The second is where he's been picked for the Quidditch team. Possibly they are just boring passages compared to the surrounding ones. So probably just training bias.
They know. An LLM is a novel compression format for text (holographic memory or whatever). The question is whether the rest of the world accepts this technology as it is or not.
edit: never mind, I’ll just ask ChatGPT
It would be nice to know that at least our literature might survive the technological singularity.
If the book was obtained legitimately, letting an LLM read it is not an issue.
As I have pointed out many times before - for GRRM's books and for HP books, the Internet is FILLED to the brim with quotes from these books, there are uploads of the entire books, there are several (not just one) fan wikis for each of these fandoms. There is a lot of content in general on the Internet that quotes these books, they are pop culture sensations.
So of course they're weighted heavily when training an LLM by just feeding it the Internet. If a model could ever recount it correctly 100% in the correct order, then that's overfitting. But otherwise it's just plain & simple high occurrence in training data.
Guess the next word: Not all heroes wear _____
https://en.wikipedia.org/wiki/Artificial_intelligence_and_co...
> See for example OpenAI's comment in the year of GPT-2's release: OpenAI (2019). Comment Regarding Request for Comments on Intellectual Property Protection for Artificial Intelligence Innovation (PDF) (Report). United States Patent and Trademark Office. p. 9. PTO–C–2019–0038. “Well-constructed AI systems generally do not regenerate, in any nontrivial portion, unaltered data from any particular work in their training corpus”
https://copyrightalliance.org/kadrey-v-meta-hearing/
> During the hearing, Judge Chhabria said that he would not take into account AI licensing markets when considering market harm under the fourth factor, indicating that AI licensing is too “circular.” What he meant is that if AI training qualifies as fair use, then there is no need to license and therefore no harmful market effect.
I know this is arguing against the point that this copyright lobbyist is making, but I hope so much that this is the case. The “if you sample, you must license” precedent was bad, and it was an unfair taking from the commons by copyright holders, imo.
The paper this post is referencing is freely available:
https://arxiv.org/abs/2505.12546
Not 42% of the book.
It's a pretty big distinction.
not just next token.
This is like: tell it a random sentence in the book, and it will give you the next sentence 42% of the time.
What is the distinction between understanding and memorization? What is the chance that understanding results in memorization (may be in case of humans)?
It should violate copyright laws as written now, but there's too much money involved.
While limited quoting can (and usually is) considered fair use, quoting significant portions of a book (much less 42% of it) has never been fair use, in the U.S., Europe, or any other nation.
Yes, information wants to be free, yada yada. That means facts. Whether creative works are free is up to their creators.
https://news.ycombinator.com/newsguidelines.html
We detached this comment from https://news.ycombinator.com/item?id=44287156 and marked it off topic.
If you've seen as many magnet links as I have, with your subconscious similarly primed with the foreknowledge of Meta having used torrents to download/leech (and possibly upload/seed) the dataset(s) to train their LLMs, you might scroll down to see the first picture in this article from the source paper, and find uncanny the resemblance of the chart depicted to a common visual representation of torrent block download status.
Can't unsee it. For comparison (note the circled part):
https://superuser.com/questions/366212/what-do-all-these-dow...
Previously, related:
Extracting memorized pieces of books from open-weight language models - https://news.ycombinator.com/item?id=44108926 - May 2025
Herman Goldstine wrote "One of his remarkable abilities was his power of absolute recall. As far as I could tell, von Neumann was able on once reading a book or article to quote it back verbatim; moreover, he could do it years later without hesitation. He could also translate it at no diminution in speed from its original language into English. On one occasion I tested his ability by asking him to tell me how A Tale of Two Cities started. Whereupon, without any pause, he immediately began to recite the first chapter and continued until asked to stop after about ten or fifteen minutes."
Maybe it’s just an unavoidable side effect of extreme intelligence?
While the Harry Potter series may be fun reading, it doesn't provide information about anything that isn't better covered elsewhere. Leave Harry Potter for a different "Harry Potter LLM".
Train scientific LLMs to the level of a good early 20th century English major and then use science texts and research papers for the remainder.
To address this point, and not other concerns: the benefits would be (1) pop culture knowledge and (2) having a variety of styles of edited/reasonably good-quality prose.
It has copyright implications - if Claude can recollect 42% of a copyrighted product without attribution or royalties, how did Anthropic train it?
> Train scientific LLMs to the level of a good early 20th century English major and then use science texts and research papers for the remainder
Plenty of in-stealth companies approaching LLMs via this approach ;)
For those of us who studied the natural sciences and CS in the 2000s and early 2010s, there was a bit of a trend where certain PIs would simply translate German and Russian papers from the early-to-mid 20th century and attribute them to themselves in fields like CS (especially in what became ML).
Personally I’m assuming the worst.
That being said, Harry Potter was such a big cultural phenomenon that I wonder to what degree might one actually be able to reconstruct the books based solely on publicly accessible derivative material.
First of all, we don't really know how the brain works. I get that you're being a snarky physicalist, but there are plenty of substance dualists, panpsychists, etc. out there. So, some might say, this is a reductive description of what happens in our brains.
Second of all, yes, if you tried to publish Harry Potter (even if it was from memory), you would get in trouble for copyright violation.
My question is… is that in itself a violation of copyright?
If not, then as long as LLMs don't make a publication, it shouldn't be a copyright violation, right? Because we don't understand how it's encoded in LLMs either. It is literally the same concept.
If you compressed a copy of HP as a .rar, you couldn't read that as is, but you could press a button and get HP out of it. To distribute that .rar would clearly be a copyright violation.
Likewise, you can't read whatever of HP exists in the LLM model directly, but you seemingly can press a bunch of buttons and get parts of it out. For some models, maybe you can get the entire thing. And I'm guessing you could train a model whose purpose is to output HP verbatim and get the book out of it as easily as de-compressing a .rar.
So, the question in my mind is how similar distributing the LLM model, or giving access to it, is to distributing a .rar of HP. There's likely a spectrum of answers depending on the LLM.
I can record myself reciting the full Harry Potter book then distribute it on YouTube.
Could do the exact same thing with an LLM. The potential for distribution exists in both cases. Why is one illegal and the other not?
At this point you've created an entirely new copy in an audio/visual digital format and took the steps to make it available to the masses. This would almost certainly cross the line into violating copyright laws.
> Could do the exact same thing with an LLM. The potential for distribution exists in both cases. Why is one illegal and the other not?
To my knowledge, the legality of LLMs are still being tested in the courts, like in the NYT vs Microsoft/OpenAI lawsuit. But your video copy and distribution on YouTube would be much more similar to how LLMs are being used than your initial example of reading and memorizing HP just by yourself.
Not legally you can't. Both of your examples are copyright violations.
If you trained an LLM on real copyrighted data, benchmarked it, wrote up a report, and then destroyed the weights, that's transformative use and legal in most places.
If you then put up that GGUF on HuggingFace for anyone to download and enjoy, well... IANAL. But maybe that's a bit questionable, especially long term.
Should it be? Different question.