Who owns the code Claude Code wrote?
437 points - yesterday at 11:24 AM
> The US Copyright Office confirmed this in January 2025, and the Supreme Court declined to disturb it in March 2026 when it turned away the Thaler appeal. Works predominantly generated by AI without meaningful human authorship are not eligible for copyright protection, and that rule is now settled at the highest judicial level available.
Misstates the law. Denial of certiorari can happen for many reasons unrelated to the merits and does not settle the issue nationwide.

I'm concerned about the copyright "washing" this enables though, especially in OSS, and I think the right thing for OSS devs to do is to try to publish resulting code with the strongest copyleft licensing they are comfortable with - https://jackson.dev/post/moral-ai-licensing/
Zarya of the Dawn already settled it for Midjourney output: human-written elements were protected, AI-generated images were not. The character design didn't get copyright even though the human picked, prompted, and curated. Code isn't different. Prompting Claude to produce a function is closer to prompting Midjourney to produce a frame than to writing the function yourself.
The reason it feels different to engineers is that we're used to thinking of the compiler as the analogy. But a compiler is deterministic: same input, same output. An LLM isn't. That's the line the Copyright Office is drawing, and image cases got there first.
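The determinism distinction can be made concrete with a toy sketch (purely illustrative; `compile_source` and `llm_generate` are made-up stand-ins, not real APIs):

```python
import hashlib
import random

def compile_source(source: str) -> str:
    # Stand-in for a compiler: a pure function, so the same
    # source text always maps to the same output "binary".
    return hashlib.sha256(source.encode()).hexdigest()

def llm_generate(prompt: str, seed: int) -> str:
    # Stand-in for an LLM sampling at temperature > 0: the same
    # prompt can yield different completions on different runs.
    rng = random.Random(seed)
    completions = [
        "def f(x): return x + 1",
        "def increment(x):\n    return x + 1",
        "def f(n): n += 1; return n",
    ]
    return rng.choice(completions)

src = "int main(void) { return 0; }"
# Deterministic: repeated compilation is byte-identical.
assert compile_source(src) == compile_source(src)

# Sampled: the same prompt maps to a distribution of outputs.
outputs = {llm_generate("write an increment function", seed=s)
           for s in range(20)}
```

The point of the sketch is only that the compiler side is a function while the LLM side is a distribution; whether that difference should matter legally is exactly what the thread is arguing about.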
This particular AI-ism really encapsulates what annoys me about some AI-isms. I don't mind the delves and the em-dashes that just give away the AI source of what otherwise might be good text. But these structural pieces just feel fundamentally not for the reader. Part of it is blatant pick-me language for the human feedback ("hey look you wanted plain language I did that") and part of it feels like it's just helping the future token stream (thinking-like tokens polluting the actual text).
The not-this-but-that, the sycophancy, the symbolizing-vague-significance, they all have this flavor of serving a process that's no longer there as I now need to read it. It gives a similar sickening feeling to the one I get seeing something designed by committee.
I think the gold rush happening around me right now (my company's EMs forcing me to adopt Claude as fast as possible) shows real short-sightedness on the part of management.
First - I lose my understanding of the codebase by relying too much on Claude Code.
Second - we drop all the good coding practices (like XP, code review etc.) because Claude is reviewing Claude's code.
Third - we take a big smelly dump on teamwork - it's easier and cheaper to let one developer drive the whole change from backend to frontend, even though there are (or were) two different teams - one for FE, one for BE.
Fourth - code comments were passé, because the code is supposed to document itself... unless there's a problem with the context (and there is). When humans were writing the code, not understanding over-engineered code was their own fault. But now we take a step back for our beloved Claude because it has a small context window... It's unfair treatment.
I could go on and on. And all these cultural changes are because of money. So I dub this the "gold rush", open my popcorn and watch what happens next.
Claude Code itself is a trade secret, and it is not open source, so its own copyrightability is moot until you get your hands on a copy of it with clean hands.
Recipes cannot be copyrighted because they are not expressions of human creativity. Software written by AIs is also not an expression of human creativity, so the balance tilts in favor of AI-generated code not being copyrightable.
The Supreme Court or legislation could change this, and I'd guess there will be a movement in that direction, but until something like that succeeds, it isn't so.
This comes up in a few places as a kind of vindictive battle. One example is Oracle suing Google for too closely mimicking their API in Android. Here is an example:
    private static void rangeCheck(int arrayLen, int fromIndex, int toIndex) {
        if (fromIndex > toIndex)
            throw new IllegalArgumentException("fromIndex(" + fromIndex +
                ") > toIndex(" + toIndex + ")");
        if (fromIndex < 0)
            throw new ArrayIndexOutOfBoundsException(fromIndex);
        if (toIndex > arrayLen)
            throw new ArrayIndexOutOfBoundsException(toIndex);
    }

And it was deemed fair use by the Supreme Court. Other times, high-frequency hedge funds have sued exiting employees, sometimes successfully. In America, anyone can sue you for any reason, so sure, you'll have Ellison take a feud with Page and Brin all the way up to the Supreme Court.
In 99.9% of instances, none of this matters. Sure, there's the technical letter of the law, but in practice, and especially now, none of it matters.
After all, is this not what happens with compilers as well? LLM agents are just quite advanced compilers that don't require the specification to be as detailed as with traditional compilers.
1/ Was the pork in my sausage reared on a farm that meets agricultural standards?
2/ Was the food handled safely by the kitchen that cooked my food?
3/ Does the owner of the diner pay kitchen wages in accordance with labor law?
By contrast, I have no idea what went into the models I use, what system prompts have prejudiced it, and whose IP has been exploited in pursuit of my answer.
That's being charitable, really. In practice, the open secret of the AI industry is that the vast majority of training data is, for want of a better word (though it is likely the most precise description), stolen.
Or is it still IP even if it is not copyrightable? That would feel weird: if it's in the public domain, then it's not IP, is it?
The answer is probably "Nobody"!
> The second commit message versus the first is the difference between a defensible authorship claim and a clean "Claude wrote this" record.
That makes no sense to me, as the commit message is probably LLM-generated as well (and even easier to generate, as it doesn't have to compile or pass automated tests).
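For concreteness, the kind of record being contrasted could be sketched like this (a toy example; the repo contents and messages are invented, and the trailer mirrors the one Claude Code appends by default):

```python
import subprocess
import tempfile
from pathlib import Path

repo = tempfile.mkdtemp()

def git(*args):
    # Run a git command inside the throwaway repo.
    subprocess.run(["git", *args], cwd=repo, check=True,
                   capture_output=True)

git("init", "-q")
git("config", "user.email", "dev@example.com")
git("config", "user.name", "Dev")

Path(repo, "limiter.py").write_text("# v1\n")
git("add", "limiter.py")
# Bare record: nothing beyond "the tool did it".
git("commit", "-q", "-m",
    "Add rate limiter\n\nCo-Authored-By: Claude <noreply@anthropic.com>")

Path(repo, "limiter.py").write_text("# v2\n")
git("add", "limiter.py")
# Record of human judgment: design choice and hand review noted.
git("commit", "-q", "-m",
    "Switch limiter to token bucket for burst tolerance\n\n"
    "Chose token bucket over sliding window; reviewed and reworked\n"
    "the generated error handling by hand.")
```

The objection above still applies, of course: nothing in the git history proves that the second message wasn't itself generated.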
Is there any citation for this "legal consensus"? I was not aware there were any evidence-backed positions on this topic yet.
Except if it happens to regurgitate a significant excerpt of some existing work, then the authors of that can assert their copyright; i.e. claim that it infringes.
If I use my own computer, pay for my own subscription, and build my own open source projects, then the code belongs to me.
If I use my company's computer, they pay for my subscription, and we work on the company's projects, then the code belongs to the company.
At any step of the way, if some copyleft or other exotic open source license is violated, who pays for discovery? Is it someone in Russia who created a popular OSS library and is now owed something? How would it be enforced?
Twice in my career the owners of a company have wanted to sue competitors for stealing their "product" after poaching our staff.
Each time, the lawyers came in and basically told us that suing them for copyright infringement would be suicide: it would be nearly impossible to prove, and the money would be better spent in many other areas.
In fact, we ended up suing them (and they settled) for stealing our copyrighted clinical content, which they copied so blatantly they left our own typos and customer support phone number in it.
Go ahead, try to sue over your copyrighted code; 10 years and $100M later you will end up like Google v. Oracle. What if the code is even 5% different? What about elements dictated by external constraints: hardware, industry standards, common programming practices? These aren't copyrightable.
Then you have merger doctrine, how many ways can we really represent the same basic functions?
Same goes for the copyleft argument: "code resembling copyleft" is incredibly vague; it would need to be the code verbatim, not merely resembling it. Then there's the history of copyleft: there have been many abuses of copyleft and only ~10 notable lawsuits. Now, because AI wrote it (which makes it _even harder_ to enforce), we will see a sudden outburst of copyleft cases? I doubt it.
Ultimately anyone can sue you for any reason, nothing is stopping anyone right now from suing you claiming AI stole their copyleft code.
https://www.vice.com/en/article/musicians-algorithmically-ge...
"Who owns the text microsoft word helped you write?"
Claude Code is a software tool, not a legal entity.
But something that is overlooked is that the world is bigger than the US and it's an absolute zoo out there in terms of copyright laws in different countries. Anything you think you might understand about this topic goes out of the window if you have international customers or provide software services outside the US. Or are not actually based there to begin with. And there are treaties between countries to consider as well.
Courts tend to try to be consistent with previous rulings, interpretations, etc. When it comes to copyright, there are a few centuries of such rulings. The commonly held opinions among developers that aren't lawyers are that AI is somehow different. And of course since the law hasn't actually changed, the simple legal question then becomes "How?". And the answer to that seems to involve a lot of different notions.
For example, "AIs are not people, and therefore any content produced by them isn't covered by copyright to begin with" is one of the notions brought up in the article. A lawyer might have some legal nits to pick with that one, but it seems to broadly be the common interpretation. So AIs don't violate copyright by doing what they do, in the same way you can't charge a Xerox machine with copyright infringement. Or Xerox. But you could go after a person using one.
And another notion is that any content distributed by a human can be infringing on somebody else's copyright and that party can try to argue their case in a court and ask for compensation. Note that that sentence doesn't involve the word AI in any way. How the infringing party creates/copies the content is actually irrelevant. Either it infringes or it doesn't. You could be using AI, a Tibetan Monk copying things by hand, trained monkeys hitting the keyboard randomly, a photo copier, or whatever. It does not really matter from a legal point of view. All that matters is that you somehow obtained a copy of an apparently copyrighted work. AI is just yet another way to create copies and not in any way special here.
There are of course lots of legal fine points to make to how models are trained, how training data is handled, etc. But if you break each of those down it boils down to "this large blob of random numbers doesn't really resemble the shape or form of some copyrighted thing" and "Anthropic used dodgy means to get their hands on copies of copyrighted work". I actually received a letter inviting me to claim some money back from them recently, like many other copyright holders.
Who coded the code Claude Code code?
If computer generated code is not copyrightable, ownership cannot be reassigned either.
And I'm worried that once that has been sufficiently normalized, laws and interpretations of them will adapt to whatever best suits those users. Which will mean copyrightwashing of FOSS. My only hope then is that surely if free software can be copyright-washed by the big guys, then so can the little guy copyright-wash the big guys' blockbuster movies or whatever, which might lead to some sort of reckoning.
There's obviously a huge issue with the legitimacy and ownership of training data being fed to LLMs. That seems like an issue between the owners of that IP and the people training the models and selling them as services more than the people using the tool. Isn't this just another flavor of SCO trying to extort money out of companies using Linux?
But AI might in fact do the exact opposite and reverse the privatization trend that the West has been going through for the last 400 years. All of our copyright laws rely on the idea that there is a human consciousness behind the copyright. The more AI has input, the less we can claim ownership. If AI returns everything to the commons, then it results in a much more egalitarian world.
Hilariously, many people, especially artists, see the return of the commons as an assault against them. They're so captured by copyright that they assume any infringement of their copyright is inherently fascist. It's ridiculous. Copyright is a corporation's number one weapon for creating a moat and keeping the masses out.
The original intent of copyright, in fact, was to create an incentive to return ideas to the commons. Experts used to hide their discoveries in order to keep them for themselves. Copyright offered a way to release that knowledge and still profit. There were even several cases establishing that those who claimed copyright could retain it even if the idea had been previously discovered. This created a huge incentive: release the knowledge or risk having your process copyrighted by the opposition. But that system worked because copyright could only last so long (14 years, doubled if renewed).
Now copyright is a lifelong sentence at almost 100 years. The entire purpose of it has been undermined. Corporations own all your childhood, and by the time you can profit off of it, it's outdated.
A world where the mainstream is primarily a commons seems to me like an egalitarian world. Iâd like to live in that world.
Part of the problem with generated works is that they are lower-effort, like the work of someone copying something. It's not an activity that demands special protection the way original authorship does. I believe this is a large part of the reasoning.
Anything else is just bullshit equivocation.
-Claude
(Of course, there's no way to be certain of this, but it's what our software thinks, and the overall pattern is pretty convincing.)
See https://news.ycombinator.com/newsguidelines.html#generated and https://news.ycombinator.com/item?id=47340079
Even steering it with prompts isn't enough. The guy couldn't copyright the image he made with AI; code is no different.
Maybe prompts written by humans are copyrightable.
Can't wait for the Billionaires to entrench in court they can steal everything for these machines and claim it as their own and maybe even reach for anything that it helps produce. Fuck that