Google says criminal hackers used AI to find a major software flaw

184 points - yesterday at 1:20 PM


crazygringo yesterday at 11:39 PM
> “We have high confidence that the actor likely leveraged an A.I. model to support the discovery and weaponization of this vulnerability,” the report said.

I wonder what gives them that "high confidence", as opposed to this being just a traditional zero-day?

I'm not being snarky or critical, I'm genuinely wondering what about an attack could possibly indicate it was discovered with LLM assistance?

Like, unless the attackers' computers have been seized and they've been able to recover the actual LLM transcript history? But nothing in the article indicates that the hackers have been caught, just that a patch was developed.

netdevphoenix today at 8:25 AM
I wonder what the goal is here. If Google Search had been used to find a major software flaw, would it be reported this way? Between Mythos and OpenAI's Mythos equivalent, it's not clear whether there is some interest in keeping the "AI is powerful" trend going, or whether they are trying to indirectly draw attention to the technical capabilities of LLMs in cybersecurity (as a potentially untapped source of revenue).
koiueo today at 6:36 AM
Haven't read the article, but let me guess:

"That's why for your safety we need a scan of your ID and your biometrics to let you use our best models"

zx8080 today at 1:56 AM
It's the narrative "For your own security in the internet (and children's safety), show us your ID now, please".

Tired of this trend.

QuantumNoodle yesterday at 11:03 PM
Okay, when fuzzing techniques came out there was a big surge in discovered and exploited bugs. AI is more general, and I expect there to be a similar surge. However, fuzzing is cheap, while AI compute and techniques can be "owned": the economics of AI are such that unless you pay for it, it is difficult to self-host (expensive hardware, though open-source models are catching up).

State actors + hackers will have more resources to build better offense. What's worse, in my experience AI-produced code is blind to overall system behavior. So I fear the exploits will be either low-hanging, trivial-to-exploit errors or bigger system-level bugs.
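To ground the fuzzing comparison above: a mutation fuzzer really is cheap to sketch, which is part of why that earlier surge happened. Here is a toy illustration; the `parse` target and its planted bug are hypothetical, invented for this sketch, and real fuzzers like AFL or libFuzzer add coverage feedback and much more:

```python
import random

def parse(data):
    # Hypothetical target: "crashes" when the input starts with b"BAD".
    if data[:3] == b"BAD":
        raise ValueError("planted bug reached")

def fuzz(seed, iterations=200_000):
    """Blindly mutate a seed input until the target crashes."""
    rng = random.Random(0)  # fixed seed so the run is reproducible
    for _ in range(iterations):
        data = bytearray(seed)
        # Flip one to three random bytes to random values.
        for _ in range(rng.randint(1, 3)):
            data[rng.randrange(len(data))] = rng.randrange(256)
        try:
            parse(bytes(data))
        except ValueError:
            return bytes(data)  # found a crashing input
    return None

crash = fuzz(b"BAAAAA")
```

With a seed only one byte away from the crashing prefix, random flips stumble onto it quickly. The point is that the harness is the cheap part; the cost is compute, which is exactly where the AI economics differ.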

s3p yesterday at 9:39 PM
>But new A.I. models like Anthropic’s Mythos, which was announced last month, appear to be so good at finding such holes that Anthropic shared it only with a limited number of firms and government agencies in the United States and Britain.

Immediate distrust of the article. GPT 5.5 is out with nearly the same capability. The author might be parroting company marketing, unable to discern that a lot of this is much less complex than it seems. For all we know this group could have had a model examine some obscure line of code thousands of times until it found something.

BatteryMountain today at 7:53 AM
To make an omelette, some eggs need to break, right? These companies released AI to the public and thought it would be all sunshine and roses. There are legit bad actors in the world who hate society and people, and they will use AI to expand on that; is that not clear? We need controls on AI similar to those on any other restricted material (like nuclear stuff).
gman2093 yesterday at 10:02 PM
Black hat hacking seems to be a well-fitting use case for these LLMs. Attackers only need to be right once, so the sometimes-wrongness of the attempts might not matter. This probably devalues stashes of zero-day exploits for those that have been withholding them.
Spacemolte today at 8:09 AM
Phrasing like this immediately makes me wonder what Google is lobbying for..
viktorcode today at 7:31 AM
I expect that only to escalate with time, especially when there'll be more agent-written code deployed.
bouncycastle yesterday at 10:25 PM
Meanwhile, I cannot ask ChatGPT how to pick my own lock, even though this information is available in a book in the library.
nomilk today at 2:36 AM
@dang would be great if the hn link was the 'unlocked' version i.e. instead of

https://www.nytimes.com/2026/05/11/us/politics/google-hacker...

this instead

https://www.nytimes.com/2026/05/11/us/politics/google-hacker...

(can read the article immediately; slightly less fuss)

atrocities yesterday at 10:00 PM
Can we link to the actual google article, instead of these editorialized articles about the article?

https://cloud.google.com/blog/topics/threat-intelligence/ai-...

srcreigh today at 2:04 AM
> Google said in research published Monday

What research? Where is it published?

Jean-Papoulos today at 6:40 AM
If this is true, I hope AI exploit-finding will force the industry to harden itself against supply-chain vulnerabilities.
deleted today at 1:07 AM
nsoonhui today at 12:43 AM
There was a discussion a few days ago on White House considers vetting AI models prior to release (https://news.ycombinator.com/item?id=48013608).
markboo today at 3:54 AM
In past decades, the "firewall" of software was that advanced security and coding knowledge was not easy for just anyone to access; only a few of the smartest people at big-name companies and top orgs had it. Nowadays, that knowledge is accessible to everyone who uses a top LLM, which wipes out the difference. I would say that future public software isn't safe anymore. Maybe the concept of public software (like SaaS) will be dead, and software will only be private instead of public.
skeledrew yesterday at 10:48 PM
Wild that they think restricting access to models will help much. Access to Chinese models will definitely not be restricted, and they have enough capability to find exploits as well.
sowbug yesterday at 9:42 PM
Security will be a wedge to restrict the sophistication of open-weight and local LLMs, just as it's been used to demonize and restrict cypherpunk technologies.
xnx yesterday at 10:42 PM
plexescor today at 6:01 AM
But which AI exactly? There's this new Claude Mythos everyone is talking about. Is it legit, or is it fluff?
CrzyLngPwd yesterday at 9:29 PM
People used LLMs to find flaws in Google software.
wnc3141 yesterday at 10:08 PM
But in exchange we get to also waste vast energy and carbon while depleting job prospects for just about any college grad.
deleted yesterday at 10:45 PM
kuboble today at 5:53 AM
Given how much software everywhere is now being written by LLMs, how is it top headline news that some (albeit malicious) software is being written with an LLM?

The robbers used a CAR in the robbery.

The blackmailer used a TYPEWRITER to write blackmailing letter.

ChrisArchitect today at 3:32 AM
Source: https://cloud.google.com/blog/topics/threat-intelligence/ai-... (https://news.ycombinator.com/item?id=48096712)

Why collect all the news dupes but not put the source up top, OP? Because the source was already submitted?

justsomedev2 today at 3:19 AM
What a surprise, hackers used AI. I mean, why wouldn't they? Every programmer uses it..
skywhopper yesterday at 10:30 PM
Drives me nuts that the NYT just uncritically cites Anthropic’s unverified claims of “thousands of zero-days” without a hint of skepticism.
deleted yesterday at 10:10 PM
SecretDreams yesterday at 9:32 PM
If "bad guy AI" can find flaws, can "good guy AI" patch them faster when backed by trillion dollar companies?
lynx97 today at 7:13 AM
I stopped reading after "Google says". They have destroyed whatever trust I might have had in them years ago.
0xWTF yesterday at 10:05 PM
Wait until the bio version of this shows up.
ppqqrr yesterday at 9:47 PM
...says yet another company hell bent on integrating it into every facet of our lives. This reads like a celebration, if you ask me.
_karie_ today at 12:30 AM
[dead]
huflungdung yesterday at 10:52 PM
[dead]
Predaxia yesterday at 1:30 PM
[flagged]
4128-1228 yesterday at 9:39 PM
The Google Threat Intelligence Group wants to increase its relevance and casually point out that it was not Mythos which found the exploit!

Security "researchers" are overpaid buffoons who hype things for their own salaries and their companies. And the stenographers from the press dutifully copy everything.

This is a despicable game to fool politicians into giving money and favorable AI legislation.

Strangely enough these buffoons never offer their models to open source developers. It is always a select group of highly paid other buffoons that throws some very occasional results over the wall.

simmerup yesterday at 9:32 PM
Can Google please use AI to find bugs, then?

Software is in such a state now. Gmail is so full of bugs around sharing attachments that I have to tell my dad to turn his phone off and on again in order to attach a document.