Redox OS has adopted a Certificate of Origin policy and a strict no-LLM policy
174 points - today at 8:54 AM
It doesn't really matter what your stance on AI is, the problem is the increased review burden on OSS maintainers.
In the past, the code itself was a sort of proof of effort - you would need to invest some time and effort in your PRs, otherwise they would be easily dismissed at a glance. That is no longer the case, as LLMs can quickly generate PRs that look superficially correct. Effort can still have been put into those PRs, but there is no way to tell without spending time reviewing in more detail.
Policies like this help decrease that review burden by outright rejecting what can be identified as LLM-generated code at a glance. That is probably a fair bit today, but it might get harder over time, so I suspect we will eventually see a shift towards more trust-based models, where you cannot submit PRs unless you have somehow been approved in advance.
Even if we assume LLMs would consistently generate code of good enough quality, code submitted by someone untrusted would still need detailed review for many reasons - so even in that case it would likely be faster for the maintainers to just use the tools themselves, rather than reviewing someone else's use of the same tools.
I think one way to frame the use of LLMs is to compare a dynamically typed language with a functional/statically typed one. Functional programming languages with static typing make it harder to implement a solution without understanding the problem and developing an intuition for it.
But programming languages with dynamic typing will let you create a (partial) solution with a lesser understanding of the problem.
LLMs make it even easier to implement even more partial solutions while understanding even less of the problem (in fact, zero understanding is required).
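To make the analogy concrete, here is a minimal, hypothetical Rust sketch (the names `Status`, `describe`, and `describe_stringly` are my own, not from the thread): the statically typed version will not compile unless every case is handled, while the "stringly typed" version, standing in for dynamic typing, happily ships a partial solution.

```rust
// Hypothetical payment status with three cases.
enum Status {
    Pending,
    Paid,
    Refunded,
}

// Statically typed: the compiler rejects this function unless every
// variant of Status is handled, forcing the author to confront the
// whole problem before the code even builds.
fn describe(s: &Status) -> &'static str {
    match s {
        Status::Pending => "waiting for payment",
        Status::Paid => "payment received",
        Status::Refunded => "payment returned",
        // Omitting any arm above is a compile error, not a runtime surprise.
    }
}

// "Dynamically typed" style, simulated with bare strings: a partial
// solution compiles and runs fine until an unanticipated input arrives.
fn describe_stringly(s: &str) -> String {
    match s {
        "pending" => "waiting for payment".to_string(),
        "paid" => "payment received".to_string(),
        // "refunded" was forgotten; nothing forced us to think about it.
        other => format!("unknown status: {other}"),
    }
}

fn main() {
    // The typed version had to handle the refund case to compile at all.
    assert_eq!(describe(&Status::Refunded), "payment returned");
    // The stringly version silently falls through to the catch-all.
    assert_eq!(describe_stringly("refunded"), "unknown status: refunded");
}
```

The point of the sketch: the type system acts like a reviewer that refuses partial understanding, which is exactly the pressure LLM-generated code routes around.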
If I am a client who wants reliable software, then I want a competent programmer to
1. actually understand the problem,
2. and then come up with a solution.
The first part is really important to me. Using an LLM means I cannot count on 1 being done, so I would not want the contractor to use LLMs.
> any content submitted that is clearly labelled as LLM-generated (including issues, merge requests, and merge request descriptions) will be immediately closed
Note the word "clearly". Weirdly, as a native English speaker, this term makes the policy less strict. What about submarine LLM submissions?
I have no beef with Redox OS. I wish them well. This feels like the newest form of OSS virtue signaling.
For instance, a GPL LLM trained only on GPL code, where the source data is all known and the output is all GPL.
It could be done with a distributed effort.
Are they really that delusional to think that their AI slop has any value to the project?
Do they think acting like a complete prick and increasing the burden for the maintainers will get them a job offer?
I guess interacting with a sycophantic LLM for hours truly rots the brain.
To spell it out: No, your AI generated code has zero value. Actually less than that because generating it helped destroy the environment.
If the problem could be solved by using an LLM and the maintainers wanted to, they could prompt it themselves and get much better results than you do, because they actually know the code. And no, AI will not help you "get into open source". You don't learn shit from spamming open source projects.
Time consuming work can be done quickly at a fraction of the cost or even almost free with open weights LLMs.
I think part of the battle is actually just getting people to identify which LLM made it, to understand whether someone's contribution is good or not. A javascript project with contributions from Opus 4.6 will probably be pretty good, but if someone is using Mistral small via the chat app, it's probably just a waste of time.
P.S. I know this will be downvoted to death but I'll leave it here anyway for folks who want to keep their eyes wide open.
IOW I think this stance is ethically good, but technically irresponsible.
"any content submitted that is clearly labelled as LLM-generated (including issues, merge requests, and merge request descriptions) will be immediately closed"
For example:
- What if a non-native English speaker uses the help of an AI model in the formulation of some issue/task?
- What about having a plugin in your IDE that merely gives syntax and small code-fragment suggestions ("autocomplete on steroids")? Does this policy mean that programmers are also restricted in which IDE and plugins they may have installed if they want to contribute?
What makes sense is that, of course, any LLM-generated code must be reviewed by a good programmer, must be correct and well written, and the AI usage must be precisely disclosed.
What they should ban is people posting AI-generated code without mentioning it or replying "I don't know, the AI did it like that" to questions.