AI is breaking two vulnerability cultures
347 points - yesterday at 5:55 PM
The catalyst is the shift towards software transparency: both the radically increased adoption of open source and source-available software, and the radically improved capabilities of reversing and decompilation tools. It has been over a decade since any ordinary off-the-shelf closed-source software was meaningfully obscured from serious adversaries.
This has been playing out in slow motion ever since BinDiff: you can't patch software without disclosing vulnerabilities. We've been operating in a state of denial about this, because there was some domain expertise involved in becoming a practitioner for whom patches were transparently vulnerability disclosures. But AIs have vaporized the pretense.
It is now the case that any time something gets merged into mainline Linux, several different organizations are feeding the diffs through LLM prompts aggressively evaluating whether they fix a vulnerability and generating exploit guidance. That will be the case for most major open source projects (nginx, OpenSSL, Postgres, &c) sooner rather than later.
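A pipeline like the one described is easy to sketch: pull the diffs for recent commits from a local clone, run a cheap keyword pre-filter, and only spend LLM tokens on plausible hits. Everything below is an illustrative assumption - the keyword list, the repo path, and the deferred LLM call - not any organization's actual tooling.

```python
# Illustrative sketch only: pre-filter recent commits in a local clone
# for likely security fixes before spending LLM tokens on triage.
import subprocess

# Words that often mark a quietly-fixed vulnerability (assumed list).
SUSPICIOUS = ("overflow", "out-of-bounds", "use-after-free",
              "double free", "bounds check", "sanitize", "validate", "cve")

def likely_security_fix(diff_text: str) -> bool:
    """Cheap keyword pre-filter: is this diff worth LLM triage?"""
    lowered = diff_text.lower()
    return any(keyword in lowered for keyword in SUSPICIOUS)

def recent_diffs(repo_path: str, since: str = "1 day ago"):
    """Yield (commit_hash, full_diff) for commits newer than `since`."""
    hashes = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}", "--format=%H"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    for commit in hashes:
        diff = subprocess.run(
            ["git", "-C", repo_path, "show", "--format=", commit],
            capture_output=True, text=True, check=True,
        ).stdout
        yield commit, diff

# Hypothetical usage: flag candidates, then hand them to an LLM prompt.
# for commit, diff in recent_diffs("/path/to/linux"):
#     if likely_security_fix(diff):
#         pass  # send `diff` to an LLM for exploitability triage
```

The pre-filter is what makes this "increasingly cheap": token spend stays proportional to plausible hits rather than to total commit volume.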
The norms of coordinated disclosure are not calibrated for this environment. They really haven't been for the last decade.
I'm weirdly comfortable with this, because I think coordinated disclosure norms have always been blinkered, based on the unquestioned premise that delaying disclosure for the operational convenience of system administrators is a good thing. There are reasons to question that premise! The delay also keeps information out of the hands of system operators who have options other than applying patches.
Day -X: Engineer at Alibaba finds the vuln and tells Apache. Patch is pushed to git while a new release is coordinated.
Day -X + 1: A black hat sees the commits fixing the bug. Attacks start happening.
Day 0: Memes start circulating in Minecraft communities of people crashing servers. Some logs are shared on Twitter, especially in China, of people getting pwned.
Day 0 + ~4 hours: My friend DMs me a meme on Twitter. I look up to find the CVE. Doesn't exist. My friend and I reproduce the exploit and write up a blog post about it. (We name it Log4Shell to differentiate it from a different, older log4j RCE vuln)
Day ~1: Media starts picking it up. Apache is forced to release patches faster in response. The CVE is finally published, which lets security scanners properly identify it.
Today: AI makes this happen faster and more consistently. Should patches be kept private until coordinated disclosure - testing done, CVE published - is ready to go?
Hard to say what the right move is, but this is gonna be happening a lot over the next 1-3 years. Lots of companies are going to be getting cooked until AI helps us patch faster than attackers can exploit these fresh 0-days.
people were already diffing kernel commits and figuring out which ones were security fixes long before llms. if a patch lands publicly, the race has basically already started.
also not sure shorter embargoes really help. the orgs that can patch in hours are already fine. everyone else still takes days or weeks.
if anything, cheaper exploit generation probably makes coordinated disclosure more important, not less.
The US is at war. Much of the world is at war at the cyber attack level right now - the US, the EU, most of the Middle East, Israel, Russia. Major services have been attacked and have gone down for days at a time: Ubuntu, GitHub, Let's Encrypt, Stryker. Entire hospital systems have had to partially shut down.
Now, in the middle of this, AI has made attacks much faster to generate. Faster than the defensive side can respond. Zero-day attacks used to be rare. Now they're normal.
It's going to get worse before it gets better. Maybe much worse.
Why?
> Additionally, having AI evaluate each commit as it passes is increasingly cheap and effective
This is the key. With AI, the “people won't notice, with so many changes going past” assumption fails.
Soon, there will be no such thing as a safe way to disclose a vulnerability in an open source project. Centralized SaaS will have a major security advantage here.
Security researchers should report their findings to a committee that includes some big companies (IBM and Oracle seem like trustworthy choices here, but ideally we should find a way to get Microsoft included). Those companies would apply the security patches and distribute binary builds of Linux to their customers. Users fortunate enough to have a business relationship with those companies would be protected immediately. The source would still be published after 90 days for educational purposes and for anyone who doesn't appreciate the security benefits of this approach.
"But even if you could convince people to collaborate like this for the greater good, the GPL makes it legally impossible", you say. Ah, but the GPL only says you have to make the source available for a minimal monetary cost, it doesn't impose a time limit. Traditionally, responding to source code requests with a snail-mailed CD is good enough. No judge in the US is going to rule that a short administrative delay in sending out those CDs - in the name of everyone's security, after all, and 90 days is nothing to the judicial system - violates a nebulous licensing agreement from a different era.
This is an important facet of the problem space: security risks turning into an arms race over who is willing to spend more tokens.
90d seems long too though.
I think ultimately the big AI houses will need to help the core internet infra guys. Running the latest and greatest AI over stuff like nginx and friends makes sense for us all collectively.
We should be able to turn around a bug report to a patched product ready for QA testing in 1 hour. Standardize/open source it, have the whole software supply chain use it (ex. Linux kernel -> distros -> products that use distros -> users). With AI there's no reason we can't do this, we're just slow.
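The supply-chain point can be made concrete with a toy latency model: if each tier waits on the one above it, turnaround times add, so every tier has to be fast for the end-to-end number to stay small. The tier names and hour figures below are invented for illustration, not measured data.

```python
# Toy model of patch propagation down the software supply chain.
# All numbers are illustrative assumptions.
TIERS = [
    ("Linux kernel", 1),        # hours to land, test, and tag the fix
    ("distro", 2),              # rebuild and publish the package
    ("downstream product", 4),  # rebase, re-test, cut a release
    ("end users", 1),           # automated update rollout
]

def time_to_users(tiers):
    """Serial model: each tier starts only after the one above ships."""
    return sum(hours for _, hours in tiers)

print(time_to_users(TIERS))  # 8 hours end-to-end in this toy model
```

Even with generous per-tier numbers, the serial sum dominates - which is why the "whole supply chain" has to standardize and automate, not just the upstream project.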
Not as worrisome in a philosophical way (since it's not a serious culture), but it's a real issue. And just wait for a nation state to start astroturfing helpful little libraries at scale ...
Turns out it did.
Thus we should probably start treating our mental model of computing as a Dark Forest, not a friendly community. That mitigates these risks to some degree.
There will be much wailing and gnashing of teeth around this, because a lot of tech types really resent having to update constantly, but I don't think people will have a choice. If you have a complicated stack where major or even minor version updates are a huge hassle, I'd start working now to try and clear out the cruft and grease those wheels.
The 3rd one is what I practice when giving companies time to fix their issue. Note, I haven't reported anything to FOSS projects, but to several companies I found exploits in. I give them 5 days. If they don't respond at all in the first day, I deduct 1 day - apparently they're either incompetent or don't care. After the 5 days have passed, I make it public. So far they've all fixed the issue on the 3rd or 4th day.
If I were to report something to a FOSS project, I'd give them a bit more, say 8-9 days. Enough time for everyone to wake up, review the vuln, patch it and ship it. Enough time for all the downstream projects to also ship the patch.
90 days is ridiculous, especially for companies. If I report something on Friday 23:30 and they reply Monday 15:00 - what were they doing during the weekend? Did they forget their software is used 24/7? I had one company complain quite a bit, threatening to sue. When they realized there was no one to sue (me being anonymous with my report), they fixed it in less than a day.
Bottom line - if you're a company offering a product or service, you should have a security team 24/7.
If you're a FOSS project - either alert your users to stop using your software or disable the service yourself, if you can.
If it's an extremely important life-or-death service you can't shut off - then fix it quickly. What are you doing with life-or-death stuff when you can't react quickly enough?
Fuck the 90 days standard - it's what companies want us to do because it's easier on them. If security hasn't been your top priority, you have a few days to make it your top priority.
With AI, that makes even more sense now. Bugs won't be able to stay hidden for months. Especially bugs I've reported like IDORs or SQL injections - things everyone tries first.
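As an aside for readers who haven't hit this bug class: SQL injection is among the "things everyone tries first" because the vulnerable pattern is one f-string away from the safe one. A minimal self-contained sketch (sqlite3 stands in for any SQL database; the table and values are invented):

```python
# Minimal illustration: string-built SQL is injectable,
# parameterized queries are not. Data here is made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def lookup_unsafe(name):
    # Vulnerable: attacker-controlled `name` is spliced into the SQL text.
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name):
    # Parameterized: the driver treats the value as data, not SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
assert lookup_unsafe(payload) == [("hunter2",)]  # injection dumps the row
assert lookup_safe(payload) == []                # treated as a literal name
```

The same "one missing check" shape is why IDORs are cheap to probe: incrementing an ID in a URL costs an attacker nothing, so these are exactly the bugs that get found within hours of exposure.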
(and I love Linux, but getting an "Oh noes!" from Anubis at kernel.org because I don't have cookies enabled (I do??) really makes me not want to report anything to the Linux kernel in particular. If I ever did find something, I'd just immediately post it as a HN comment or something like that)
Not just for vulnerabilities: having nice agents|skills|etc.md definitions would encourage new devs to contribute instead of dealing with an overworked maintainer repeating the same thing for the nth time.