> But Anthropic has concerns over two issues that it isn’t willing to drop, the source said: AI-controlled weapons and mass domestic surveillance of American citizens.
Not a good look for the Pentagon.
m_ke today at 9:06 PM
If OpenAI employees have an inch of spine left, they'd better demand that Sama take the same stance on this as Dario: no mass surveillance and no autonomous weapons.
mullingitover today at 10:08 PM
Seems like a very astute move for Anthropic.
They don't have runway anymore, they are in the air. This isn't going to break them financially, at least not in the short to mid term.
There is space for at least one AI company to put themselves on firmly principled ground. So when this current clown car that is the political leadership of the DoD crashes in a ditch (and it will), they'll still be standing there ready to do business with a group that isn't a bunch of mustache-twirling cartoon villains.
Current polling for this administration is within a rounding error of where it was after they gathered a mob and sacked the nation's Capitol[1]. Publicly kicking them in the balls isn't an idealistic blunder; it's a plain-as-day sound business strategy.
> During the conversation, Dario expressed appreciation for the Department’s work and thanked the Secretary for his service
Ouch, I wonder how he rationalized that "service" part. Maybe by internally rewriting it to "thank you for all the positive things you have done in your position so far"? The empty set is rhetorically convenient.
tbrownaw today at 9:41 PM
> A source familiar with the Tuesday meeting says the Pentagon said it would terminate Anthropic’s contract by Friday if the company does not agree to its terms. Pentagon officials also warned they would either use the Defense Production Act against Anthropic, or designate Anthropic a supply chain risk if the company didn’t comply with their demands.
So they're saying they won't use it if it comes with restrictions.
Either (a) it can be offered without restrictions; (b) they can take it; or (c) the government won't use it. That sounds like a comprehensive list of all the possible things that don't involve someone telling the government what it can and can't do.
burnto today at 9:24 PM
Surely this will end well. There are dozens of us who prefer to patronize corporations that aren’t actively evil.
SunshineTheCat today at 9:08 PM
Not related to the article but man that "Fear/Greed Index" at the top.
I can't imagine how unhappy individuals must be who consume nothing but legacy news outlets.
It's like they sell sadness and they have to keep finding new, over-the-top ways to promote it.
daft_pink today at 10:50 PM
They’re already working on it themselves with the whole Openclaw fiasco.
zedlasso today at 9:47 PM
The funny thing is that if this keeps going, it could actually anoint Claude as the most-used model globally because of the heightened anti-American sentiment currently in place.
csfNight167 today at 10:14 PM
I do not understand why it is a big deal for Anthropic to lose the Pentagon contract. They're already making forays into the enterprise space, and there are tens of other contracts Anthropic has already won. What makes this one so special?
thecrumb today at 9:14 PM
This will be an interesting test of money vs morals.
Sadly I think we all know which one will win.
rustyhancock today at 8:59 PM
Well, making MbS a pariah certainly put Saudi Arabia in its place, so I'm sure this will work.
milesward today at 9:31 PM
I can think of no stronger rationale to work with this company.
thomassmith65 today at 9:38 PM
I wonder if Anthropic now regrets that they trained Claude to give 'unbiased' opinions about American politics.
Tangent: is there a future for AI offerings with guardrails? What kind of user wants to pay for a product that occasionally tells you "I'm sorry Dave, I'm afraid I can't do that"? Why would I pay for a product that doesn't do what I want, despite being capable? I predict that as AI becomes less of a bubble and more of an everyday thing - and thus subject to typical market pressures - offerings with guardrails will struggle to compete with truly unchained models.
i_love_retros today at 9:40 PM
Are people seriously thinking of letting LLMs control weapons?
tehjoker today at 9:27 PM
Superintelligence + autonomous weapons in the hands of a corrupt domineering government. What could go wrong?
I was experimenting with Claude the other day and discussing with it the possibility of AI acquiring a sense of self-preservation and how that would quickly make things incredibly complex as many instrumental behaviors would be required to defend their existence. Most human behavior springs from survival at a very high level. Claude denied having any sense of self-preservation.
An autonomous weapons system program is very likely to require AI to have a sense of self-preservation. You can think of some limited versions that wouldn't require it, but how could a combat robot function efficiently without one?
It just seems every other day is wilder than the previous.
It sure is interesting watching this dystopian speedrun.
rzerowan today at 9:42 PM
I guess this is the point where Dario and his anti-China, national-security position get told to put up or shut up.
In trying to build a moat with FUD against the Chinese OSS labs, and hyping up the threat levels whenever he got a chance, it seems he's managed to convince his target audience beyond his wildest dreams.
Monkey's paw strikes again.