Debian decides not to decide on AI-generated contributions

214 points - today at 2:53 PM

Source

Comments

mr-wendel today at 4:06 PM
My two cents: I've been coding practically my entire life, but a few years back I sustained a pretty significant and lasting injury to my wrists. As such, I have very little tolerance for typing. It's been quite a problem and made full time work impossible.

With the advent of LLMs, AI-autocomplete, and agent-based development workflows, my ability to deliver reliable, high-quality code is restored and (arguably) better. Personally, I love the "hallucinations" as they help me fine-tune my prompts, base instructions, and reinforce intentionality; e.g. is that >really< the right solution/suggestion to accept? It's like peer programming without a battle of ego.

When analyzing problems, I think you have to look at both upsides and downsides. Folks have done well to debate the many, many downsides of AI, and this tends to dominate the conversation. Probably that's a good thing.

But, on the flip side, I personally advocate hard for AI from the point of view of accessibility. I know (more-or-less) exactly what output I'm aiming for and control that obsessively, but it's AI and my voice at the helm instead of my fingertips.

I also think it's incorrect to look at it from the perspective of "does the good outweigh the bad?". Relevant, yes, but utilitarian arguments often lead to counter-intuitive results and end up amplifying the problems they seek to solve.

I'd MUCH rather see a holistic embrace and integration of these tools into our ecosystems. Telling people "no AI!" (even if very well defined on what that means) is toothless against people with little regard for making the world (or just one specific repo) a better place.

vladms today at 3:28 PM
Very reasonable stance. I see reviewing and accepting a PR as a question of trust - you trust the submitter to have done the most they can for the PR to be correct and useful.

Some policy might be required now, as some people might think that just asking an LLM is "the most they can do" - but it's not about using AI, it's about being aware and responsible about using it.

sothatsit today at 4:40 PM
Concerns about the wasting of maintainers' time, onboarding, or copyright are of great interest to me from a policy perspective. But I find some of the debate around the quality of AI contributions to be odd.

Quality should always be the responsibility of the person submitting changes. Whether a person used LLMs should not be a large concern if they are acting in good faith. If they submitted bad code, having used AI is not a valid excuse.

Policies restricting AI-use might hurt good contributors while bad contributors ignore the restrictions. That said, restrictions for non-quality reasons, like copyright concerns, might still make sense.

gorgoiler today at 8:02 PM
Does Debian have a rule that forbids (or a taboo that proscribes) contributors passing off other people’s work as their own? I could believe that such a rule is implied rather than written down. The GR could be about writing it down, and it would surely cover the case of code that came directly from a model. Even if we don’t consider a model to be another person it is certainly not the contributor’s own work.

(If anything, the copyright to model-generated code cannot possibly be said to belong to the human contributor. They… didn’t write it! I’m glad to see that aspect was discussed though I’m surprised it wasn’t the main thrust.)

SamuelAdams today at 3:08 PM
My question on AI generated contributions and content in general: on a long enough timeline, with ever improving advancements in AI, how can people reliably tell the difference between human and AI generated efforts?

Sure, now it is easy, but in 3-10 years AI will get significantly better. It is a lot like the audio quality of an MP3 recording. It is not perfect (lossless audio is better), but for the majority of users it is "good enough".

At a certain point, AI-generated content, PRs, etc. will be good enough for humans to accept it as "human". What happens then, when even the best checks and balances are fooled?

kruffalon today at 7:23 PM
retired today at 5:04 PM
Fork it to Slobian and let the clankers go to town creating, approving and merging pull requests by themselves. Look at the install base to see what people prefer.
Yhippa today at 6:27 PM
This reminds me of the Hacktoberfest situation where maintainers were getting flooded with low-quality PRs. This could be that, but on steroids and constantly, not just one month.
veunes today at 6:21 PM
The quality argument against LLM-generated code has always seemed weak to me. Maintainers already review patches because humans routinely submit bad code. The review process is the filter.
MeteorMarc today at 6:29 PM
Did anyone say it is a risk? What if courts eventually decide that users of products of closed models have to pay some reasonable fee to the owners of the training data?
arjie today at 6:30 PM
In some sense, I think the promise of free software is more real today than before because everyone else's software is replicable for relatively cheap. That's probably a much stronger situation for individual freedom to replicate and run code than in the era of us relying on copyright.
hombre_fatal today at 3:28 PM
Aside, that's a fun read/format, like reading about judges arguing how to interpret a law or debating whether a law is constitutional.
MintPaw today at 5:30 PM
An interesting concept stood out to me: committing the prompts instead of only the resulting code.

Is it really true that LLMs are non-deterministic? I thought if you used the exact same input and seed with the temperature set to 0, you would get the same output. It would actually be interesting to probe the committed prompts to see how slight variants performed.
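(In principle, yes: at temperature 0 the sampling step reduces to an argmax, and a seeded sampler is reproducible. In practice, hosted APIs often don't expose a seed, and batched floating-point kernels can change the logits themselves between runs, which is where the apparent non-determinism usually comes from. A toy sketch of the sampling step only - the `pick_token` helper is hypothetical, not any real API:)

```python
import math
import random

# Toy next-token selection over a tiny vocabulary. Real LLM sampling
# works the same way at this step: temperature 0 means "always take the
# argmax", which is deterministic for identical logits; with a positive
# temperature the choice is random, but reproducible given a fixed seed.
def pick_token(logits, temperature, seed=None):
    if temperature == 0:
        # Greedy decoding: no randomness involved at all.
        return max(logits, key=logits.get)
    rng = random.Random(seed)
    # Temperature-scaled softmax weights, then a weighted draw.
    weights = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    r = rng.random() * sum(weights.values())
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # guard against floating-point rounding

logits = {"cat": 2.0, "dog": 1.5, "fish": 0.1}
print(pick_token(logits, 0))                  # greedy: always "cat"
print(pick_token(logits, 1.0, seed=42) ==
      pick_token(logits, 1.0, seed=42))       # seeded sampling: True
```

So the non-determinism people observe is an implementation property of the serving stack, not something inherent to the decoding rule itself.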

tonymet today at 7:02 PM
Given the 10x+ productivity rate, it would be reasonable to establish a higher quality acceptance bar for AI submissions: 50-100% more performance, correctness, and usability testing, and one round of human review.

If a change used to take a day or two and now requires a few minutes, then it's fair to ask for a couple more hours of prompting to add the additional tangible tests to compensate for any risk of hallucinations or low-quality code sneaking in.

1vuio0pswjnm7 today at 5:52 PM
A title that might make Geddy Lee proud
shevy-java today at 6:55 PM
Soon we can call it debslop!
jaredcwhite today at 6:44 PM
LLM-generated code is incompatible with libre software. It's extremely frustrating to see such a lack of conviction to argue this point forcefully and repeatedly. It's certainly bad enough to see such a widespread embrace of this dangerous and anti-libre technology within proprietary software teams, but when it comes to FLOSS, it should be a no-brainer to formalize an emphatic anti-slop contributor policy.
theptip today at 3:19 PM
> disclosure if "a significant portion of the contribution is taken from a tool without manual modification", and labeling of such contributions with "a clear disclaimer or a machine-readable tag like '[AI-Generated]'.

Quixotic, unworkable, pointless. It's fundamentally impossible (at least without a level of surveillance that would obviously be unacceptable) to prove the "artisanal hand-crafted human code" label.

> contributors should "fully understand" their submissions and would be accountable for the contributions, "including vouching for the technical merit, security, license compliance, and utility of their submissions".

This is in the right direction.

I think the missing link is around formalizing the reputation system; this exists for senior contributors but the on-ramp for new contributors is currently not working.

Perhaps bots should ruthlessly triage un-vouched submissions until the actor has proven a good-faith ability to deliver meaningful results. (Or the principal has staked / donated real money to the foundation to prove they are serious.)

I think the real problem here is the flood of low-effort slop, not AI tooling itself. In the hands of a responsible contributor LLMs are already providing big wins to many. (See antirez’s posts for example, if you are skeptical.)

pessimizer today at 7:48 PM
I don't understand a lot of the anti-LLM venom within this specific context. Debian doesn't have to worry about stealing GPL code, so the copyright argument is nearly nil. There's still the matter of attribution-ware, but Debian includes tons of attribution and I'm sure would happily credit anyone who thinks the models might have been trained on their OSS.

So leaving that aside, it just seems to be the revulsion that programmers feel towards a lot of LLM slop and the aggravation of getting a lot of slop submissions? Something that seems to be universal in the FOSS social environment, but also seems to be indicative of a boundary issue for me:

The fact that machines have started to write reasonable code doesn't mean that you don't have any responsibility to read or review it before you hand it to someone. You could always write shit code and submit it without debugging it or refactoring it sanely, etc. Projects have always had to deal with this, and I suspect they've dealt with this through limiting the people they talk to to their friends, putting arbitrary barriers in front of people who want to contribute, and just being bitchy. While they were doing this, non-corporate FOSS was stagnating and dying because 1) no one would put up with that without being paid, and/or 2) money could buy your way past barriers and bitchiness.

Projects need to groom contributors, not simply pre-filter contributions by identity in order to cut down on their workload. There has to be an onboarding process, and that onboarding process has to include banning and condemning people that give you unreviewed slop, and spreading their names and accounts to other projects that could be targeted. Zero tolerance for people who send you something to read that they didn't bother to read. If somebody is getting AI to work for them, then trust grows in that person, and their contributions should be valued.

I think the AI part is a distraction. AI is better for Debian than almost anyone else, because Debian is copyleft and avoids the problems that copyleft poses for other software. The problem is that people working within Free Software need some sort of structured social/code interaction where there are reputations to be gained and lost that aren't isolated to single interactions over pull requests, or trying to figure out how and where to submit patches. Where all of the information is in one place about how to contribute, and also about who is contributing.

Priority needs to be placed on making all of this stuff clear. Debian is a massive enough project, basically all-encompassing, where it could actually set up something like this for itself and the rest of FOSS could attach itself later. Why doesn't Debian have a "github" that mirrors all of the software it distributes? Aren't they the perfect place? One of the only good, functional examples of online government?

edit: There's no reason that Debian shouldn't be giving attribution to every online FOSS project that could possibly be run on Linux (it will be run on Debian, and hopefully distributed through apt-get.) Maybe a Debian contributor slash FOSS-in-general social network is the way to do that? Isn't debian.org almost that already?

ray023 today at 6:26 PM
The website is absolutely atrocious: dark mode has a pitch-black background with bold 100% white glowing text in the foreground, a shitty font, and way too wide text.

Seriously, how is lwn.net still so popular with such an atrociously unreadable, ugly website? Well yes, I get the irony of asking that on HN (I use an extension to make it better).

3012846 today at 3:04 PM
Again you can see which developers are owned by corporations and which are not. There is no free software any longer.
est31 today at 3:37 PM
I think it's a complicated issue.

A lot of low quality AI contributions arrive using free tiers of these AI models, the output of which is pretty crap. On the other hand, if you max out the model configs, i.e. get "the best money can buy", then those models are actually quite useful and powerful.

OSS should not miss out on the power LLMs can unleash. I'm talking only about the maxed-out versions of the newest models, i.e. stuff like Claude 4.5+ and Gemini 3 - developments of the last 5 months.

But at the same time, maintainers should not have to review code written by a low quality model (and the high quality models, for now, are all closed, although I heard good things about Minmax 2.5 but I haven't tried it).

Given how hard it is to tell which model made a specific output, without doing an actual review, I think it would make most sense to have a rule restricting AI access to trusted contributors only, i.e. maintainers as a start, and maybe some trusted group of contributors where you know that they use the expensive but useful models, and not the cheap but crap models.