Access to frontier AI will soon be limited by economic and security constraints

173 points - today at 1:08 AM

sho today at 4:51 AM
I am nowhere near as concerned by this as I was a year ago, when I was expecting the axe to fall at any moment before the Chinese labs achieved some sort of escape velocity. I now think it's too late: all the cats are out of all the bags, there's no moat except maybe a temporal one of a few months, the genie is out of the bottle.

There is no secret sauce the US labs have that the Chinese ones don't, or won't have soon enough. DeepSeek 4 and Kimi 2.5 are not quite Claude 4.5/GPT5.5, but there's no fundamental principle missing - they are strong evidence that there's no real advantage the "frontier" labs possess that isn't related to scale, which they will gain in time (if they even need to). The RL post-training techniques that work are widely known and easily copied. All DeepSeek is really lacking is data, which they're getting - and the harder Anthropic/the USG makes it to access Claude in China, the more of that precious data they'll get!

I used to sort of entertain the "fast take-off breakaway" scenario as being plausible but not really anymore. The only genuine moat the frontier labs have is their product take-up, which isn't nothing, far from it, but it's not some unbreakable technological wall. Too late guys - it might have been too late for quite some time.

terrib1e today at 4:26 AM
No mention of open weights anywhere in the piece, which is weird. Qwen, Llama, DeepSeek are months behind frontier, not years. If you're a European startup worried about getting cut off from Anthropic's API in 2027, the real question is what the open-weight frontier looks like then. Probably pretty capable. That undercuts most of the doom scenario.

Also, he concedes Mythos-level capabilities will be cheap next year, then handwaves it with "you need the best AI, not good-enough AI." For most use cases, frontier minus six months is fine.

rsolva today at 10:19 AM
In our company of 24 employees, we get by with two DGX Sparks. We don't use AI heavily, but each Spark can serve about 6-8 concurrent requests with a full context length of 256k, which is decent. We get about 35 t/s depending on the model we use (currently Qwen3.5 122B A10B and Qwen3 Coder Next), but we might set up a smaller model too for simpler tasks.

This works for us and will work for years to come. It is not SOTA, but it works darn well for our purposes, and we control the compute and data flowing through it, so totally worth it.
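For a sense of scale, the numbers above imply the following back-of-envelope capacity. The box count, concurrency, and token rate are taken from the comment; the 500-token response length is a hypothetical chosen for illustration:

```python
# Back-of-envelope capacity math for a small self-hosted setup like the
# one described above. Box count, concurrency, and token rate come from
# the comment; the 500-token response length is hypothetical.

def response_seconds(tokens: int, tokens_per_sec: float) -> float:
    """Seconds to stream a response of `tokens` tokens at a given rate."""
    return tokens / tokens_per_sec

boxes = 2
streams_per_box = 6                       # conservative end of the 6-8 range
max_concurrent = boxes * streams_per_box  # 12 simultaneous streams

# A typical 500-token answer at ~35 t/s takes roughly 14 seconds:
latency = response_seconds(500, 35)

print(max_concurrent, round(latency, 1))
```

For a 24-person company where only a fraction of people are prompting at any given moment, a dozen concurrent streams is plenty, which matches the experience described.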

seydor today at 11:05 AM
We should be aiming for less token usage, ideally none at all. Current AI uses LLMs to expand horizontally, but the real goal is vertical progress: inventing truly new things and eliminating our biggest problems. A problem like cancer only needs to be solved once; after that, no more tokens are needed.

pu_pe today at 7:07 AM
The more fundamental bottleneck is not even the frontier models; it's the datacenters. Let's say Europe breaks away from the US completely tomorrow. It does not have enough datacenters (or GPUs in general) to sustain its inference needs even if it resorted to Chinese open models. And to build new datacenters, it would need to source parts from the US and China.

In other words, if AI does have continued significant economic impact, only the US and China would be able to leverage it completely. The rest of the world is implicitly betting that AI won't be good enough, or that eventually the compute curve flattens out so using a model that is 10x larger only leads to marginal benefits.
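The "compute curve flattens" bet can be made concrete with a toy power-law fit. This is only a sketch: the Chinchilla-style form L(C) = a·C^(-b) and the exponent b ≈ 0.05 are illustrative stand-ins, not figures from the article or the comment:

```python
# Toy illustration of diminishing returns from extra compute, using a
# Chinchilla-style power law L(C) = a * C**(-b). Both constants are
# hypothetical stand-ins chosen for illustration.

def loss(compute: float, a: float = 10.0, b: float = 0.05) -> float:
    """Pretraining loss as a power law in training compute (FLOPs)."""
    return a * compute ** (-b)

base  = loss(1e24)   # some baseline training run
ten_x = loss(1e25)   # the same model family at 10x the compute

# Relative improvement from a 10x compute increase: 1 - 10**(-b),
# only about 11% with these assumed constants.
improvement = 1 - ten_x / base
print(round(improvement, 3))
```

Under this kind of fit, a 10x larger training run buys a modest loss reduction, which is exactly the flattening the rest of the world is implicitly betting on.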

coderenegade today at 4:35 AM
The distillation risk has been brewing for a while now. In a very real sense, the model is the data, so if the data is locked down because of how valuable it is, it was only a matter of time before fully open access to the models would be revoked.

There's also an additional economic concern that rarely gets mentioned: because no one has cracked continual learning, keeping models up-to-date and filling in gaps in performance requires retraining on an ever growing dataset. Granted, you aren't starting from scratch each time, but the scaling required just to stay relevant looks daunting.
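The retraining-cost point above can be sketched with the common C ≈ 6·N·D approximation for dense transformer training FLOPs (N parameters, D training tokens); the model size and corpus figures below are hypothetical:

```python
# Sketch of why "retraining on an ever-growing dataset" gets expensive,
# using the common C ~= 6 * N * D estimate for dense transformer training
# FLOPs (N = parameters, D = tokens). All concrete numbers are hypothetical.

def train_flops(params: float, tokens: float) -> float:
    return 6.0 * params * tokens

N = 100e9        # a hypothetical 100B-parameter model
D_now = 15e12    # hypothetical 15T-token corpus today
D_later = 30e12  # the corpus doubled a couple of years later

# Without continual learning, staying current means re-running training
# over the whole larger corpus, so cost scales linearly with its size:
ratio = train_flops(N, D_later) / train_flops(N, D_now)
print(ratio)  # 2.0
```

Fine-tuning from a checkpoint softens this somewhat, as the comment concedes, but the headline cost of a full refresh still tracks the corpus size.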

I don't know where any of this goes on a societal level, but I've believed since the release of DeepSeek R1 that access to frontier models would eventually be locked up behind contracts, since the only moats protecting the models themselves are purely artificial. It remains to be seen how effective China is at pushing the envelope, and whether they are interested in providing unfettered access. And on top of that, it remains to be seen how well these models actually scale in the long run.

adrithmetiqa today at 6:00 AM
Considering the economic angle, one possible long-term future is that access to frontier models is only realistic for the wealthiest 1%. They will use this access to the ultra-intelligent models to increase their wealth further, and inequality will keep getting worse.

jillesvangurp today at 10:42 AM
Physics and economics will drive cost. Current token pricing is based on unsustainable investment and energy costs. However, this is more of an optimization problem than an inherent showstopper. Token cost will inevitably come down over time, though it could take a while for supply to catch up with demand. Manufacturing will step up to provide cheaper GPUs, etc. There will be some consolidation, but the whole thing will converge on something that makes long-term economic sense.

Ultimately it's a resource-control issue. To power AI you need land/space (to build on), water, energy, and lots of hardware. Hardware needs to be manufactured and engineered. It needs metals, some exotic materials, machines, etc. More resources, in other words. If you look at China vs the US here, China is really well positioned in terms of resources and supply chains. The US has fallen behind quite a bit on energy and on the critical resources needed to produce hardware. AI is bottlenecked on a lot of stuff that China has or makes in abundance.

For the frontier models, there are a growing number of companies and countries that provide them. We're used to mostly talking about the US ones. But of course the Chinese have a lot of capability here and they are not that far behind. And that's judging by the models they choose to release under OSS licenses. Those models are not their frontier models. And there are a lot of other countries developing and using models that aren't necessarily talking openly about what they are doing.

The irony with these frontier models is that they only generate revenue if people can use them. Why sink billions in AI infrastructure and models without a revenue model?

The reality with Mythos is that you have to assume the Chinese (and others) are not that far behind and may already be running an equivalent model they just haven't told anyone about yet. Anthropic gatekeeping Mythos and its findings is probably wise. But it's not sustainable long-term to depend on that happening or working very well, or even on Anthropic remaining a leader in this space.

This is becoming an arms race between countries and economies: an economic and resource-control race. Developing and researching in the open has advanced things massively, but it has also empowered the rest of the world. Both Anthropic and OpenAI are staffed with people from all over the world. You have to assume that they probably aren't very good at keeping things secret.

digitaltrees today at 5:41 AM
The thing is, the open-source models are smart enough to do most work if the harness and orchestration are right. So even if the next-gen models get locked behind monopoly paywalls, we can still build real things in the real world and fight for a humane world.

Animats today at 7:32 AM
Over on the image generation side, "frontier AI" seems to be coming along rather well. Watch this video, which was released eight days ago.[1] Can you find any flaws? Two years ago, just getting hands with the right number of fingers was tough. Last year, there were jarring errors in every scene. Now, very little is wrong. How much longer will anyone need Hollywood studios?

[1] https://www.youtube.com/watch?v=4zTCLIhScCM

BrtByte today at 5:01 AM
The uncomfortable implication is that "AI sovereignty" may end up being less about training your own GPT-class model and more about securing compute, energy, datacenter security, and contractual access.

evdubs today at 4:41 AM
What's the likelihood that universities eventually become open model providers?

mc-serious today at 8:00 AM
Open-Source will handle access to models, someone will find a way. Security by obfuscation has never worked.

Garlef today at 9:50 AM
I'm not so sure about "soon" - the big labs are profiting from the discovery and experimentation efforts by independent contributors (openclaw, etc) and reducing their capabilities also reduces input from this side.
nl today at 4:52 AM
Quote:

> "The two AI superpowers are going to start talking. We're going to set up a protocol in terms of how do we go forward with best practices for AI to make sure nonstate actors don't get a hold of these models," Bessent told Joe Kernen on Thursday, on the sidelines of President Donald Trump's two-day meeting in Beijing with Chinese President Xi Jinping.

https://www.cnbc.com/2026/05/14/us-china-ai-rules-bessent-us...

OpenAI is already talking openly about gated access to their models (see this OpenAI podcast episode for example: https://openai.com/podcast/#oai-podcast-episode-16)

Separately there's also a very active effort to stop open weight releases.

It's dangerous to those who think access to frontier intelligence is important.

tactlesscamel today at 9:45 AM
How much money are you all paying to use this tech? Last I even tried, it would cost my entire salary. Yet, everyone and their newborns are using it every day for everything. How is this possible?
threepts today at 8:28 AM
When intelligence is a commercial commodity, it is only bound to happen that the rich gatekeep it to secure their socioeconomic status.

But, I think, as with every revolution, hierarchies have historically fallen only for the former serfs to rise.

The industrial revolution, the renaissance -> all were marked by a massive shift in socioeconomic status and the rise of the middle class.

I think AGI, when it happens, will only raise equality. I may be wrong.

Havoc today at 9:03 AM
I think this somewhat underestimates the economic pressures the US labs are under.

OpenAI etc need to make crazy revenue to get their investment math to work. Perhaps you can sell some tokens to privileged partners at a premium rate but I think they’ll need global scale ultimately

phantomathkg today at 6:49 AM
Instead of soon, how about just "now"?

I would imagine that not everyone on HN has enough disposable income to subscribe to Claude Max or a similar max plan for other models without thinking about it.

Some people mentioned open-weight models, but there are two hurdles. One, the current economy means securing the best hardware is already stupidly expensive compared to a year or two ago. Two, the open-weight models lack the magic that Claude/Gemini/OpenAI put into their proprietary ones, meaning you'd have to build your own agent clever enough to search the internet when it knows its training data is stale.

nikhilpareek13 today at 9:31 AM
The piece focuses on closed frontier models but skips that Llama, Mistral, DeepSeek and Qwen run reliably 6 to 9 months behind. For most countries and most use cases, that's what people actually run, and it's not gated by US policy. The "frontier haves vs have-nots" divide is true for the top 5% of capabilities. The other 95% of the economy will run on open weights regardless of what the Mythos rollout policy looks like.
wewewedxfgdf today at 6:02 AM
It's worse than that: all AI features will get broken down into even finer slices, and you will have to pay for everything based on the finest slice they can make while still making money.

partloyaldemon today at 5:43 AM
All the downsides of your clichéd AGI nightmares, but with the "intelligence" of your bog-standard national security functionary

dmantis today at 9:49 AM
I hope regular people will stop using "national security" and "national interests" as euphemisms and framing, and will call these things a psychopathic fight for power.

Assuming that some humans are worse than others because of their flag picture and that they deserve less access to resources is barbarism. There is no security in limiting access to NSA-style entities; it's an absolute insecurity for everyone but them throughout the whole world. How is that in anyone's "interests"?

We see every day now how suspicious bugs that look exactly like backdoors (e.g., Microsoft BitLocker) get exposed. That's in humanity's interests (and those of particular nations as a subset): not being subjugated by small rings of professional outlaws. We need these instruments to defend people, everywhere. We don't need to give any leverage to state psychos. Let's make every one of them weaker.

chvid today at 6:14 AM
DeepSeek is not a distillation of Claude or ChatGPT - stating this is just idiotic politics at this point.

The Chinese labs have reached "escape velocity" long ago - they will continue development regardless of API access to US models or the willingness of US labs to share their research.

sublimefire today at 9:16 AM
> margins shrink and become razor-thin

You need to understand that these models are provided by corporate entities, and they are expensive to maintain, iterate on, and run. There is still no strong correlation between the use of AI and business outcomes, so there should be a real ceiling on how much enterprises will pay for tokens. The government is a usual choice for establishing contracts and getting some stability, similar to building nuclear reactors or military equipment. And posturing about limiting model access is just saying it is expensive to subsidise its use for cat-image generation or call summaries.

I am pretty sure we have not found the killer app (like an IDE even) for us to extract all the possible value from the models yet. I would even go as far as to say that the synthesis between a human and AI could leverage average models to achieve a lot more compared to the model/agent working on its own.

edit: Just to add to this, I am going through Mythos scans and it is not perfect, very much similar to what pentesters would do with the added bloat of noise in reports about nonissues.

viking123 today at 6:10 AM
If Amodei and co. were in charge, the models would alert the police if someone said "boob" and the rest of us would only get GPT-2-level models; hell, even that might be too dangerous.
ares623 today at 5:36 AM
I wonder if the countries that don't have "AI Sovereignty" end up being like what Japan is now, technologically. It's stuck in 90's/early 2000's tech and norms (i.e. left behind) but its infrastructure and society chugs along (the demographic problem is a separate issue).

Would that make those countries more attractive to young people perhaps? As a place to grow and learn skills where the opportunities are non-existent in the AI Sovereign countries.

eth0up today at 4:23 AM
Damn. I predicted this last year and got thrashed for it.

Glad to see others catching on.

zelon88 today at 4:16 AM
> And it doesn’t stop with the security questions: the Trump administration’s signature style of international engagement is to wield American leverage as a bundle. Deadlocks in trade negotiations are broken by threatening to withhold intelligence, tech deals are stalled by reference to food safety standards. And so I don’t know when a U.S. administration would choose to leverage its seemingly inevitable predeployment authority over frontier models to secure its broader interests, but I’m sure it would in due time. That means that even if we do everything ā€˜right’ on the security and economic side, frontier access is still fundamentally contingent as long as there’ll be divergences between governments’ strategic interests.

The Trump Administration telling the very neo-fascist oligarchs who bought him an election and bought him a ballroom to play nice with their toys? At the expense of rampant capitalism? Lol.

He already showed us the limit of his comprehension of the topic when he issued EO 14179 to keep states from regulating AI.

Trump doesn't swing for perfect pitches. He is a madman, a lunatic, and a true moron. Do not give this man any credit. I would be shocked if he could tell you the time on an analog clock.

shevy-java today at 5:22 AM
So now AI is about apartheid. I am not liking this at all.