I agree. I call it my Extended Mind in the spirit of Clark (1).
One thing I realized while working a lot with openClaw over the last weeks is that these agents are becoming an extension of myself. They are tools that quickly became a part of my being. I outsource a lot of work to them; they do stuff for me, help me, and support me, and therefore make my (work-)life easier and more enjoyable. But it's me in the driver's seat.
It's a tool like a linter. It's a fancy tool, but calling it anything more than a tool is hype.
hintymad today at 10:15 PM
In the latest interview with Claude Code's author: https://podcasts.apple.com/us/podcast/lennys-podcast-product..., Boris said that writing code is a solved problem. This brings me to a hypothetical question: if engineers stop contributing to open source, would AI still be powerful enough to learn the knowledge of software development in the future? Or has the field of computer science plateaued to the point that most of what we do is a linear combination of well-established patterns?
datakazkn today at 10:35 PM
The exoskeleton framing resonates, especially for repetitive data work. Parts where AI consistently delivers: pattern recognition, format normalization, first-draft generation. Parts where human judgment is still irreplaceable: knowing when the data is wrong, deciding what 'correct' even means in context, and knowing when to stop iterating.
The exoskeleton doesn't replace instinct. It just removes friction from execution so more cycles go toward the judgment calls that actually matter.
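A rough sketch of that split in Python (the fixed format list below is a stand-in for the model call; a real pipeline would ask the LLM for the normalization proposal):

    from datetime import datetime

    # Stand-in for the model call; a real pipeline would ask the LLM to
    # propose a normalization instead of trying a fixed format list.
    CANDIDATE_FORMATS = ["%Y-%m-%d", "%d/%m/%Y", "%b %d, %Y"]

    def propose_normalization(raw):
        for fmt in CANDIDATE_FORMATS:
            try:
                return datetime.strptime(raw.strip(), fmt).strftime("%Y-%m-%d")
            except ValueError:
                continue
        return None

    def normalize_dates(rows):
        accepted, needs_review = [], []
        for raw in rows:
            candidate = propose_normalization(raw)  # AI: first-draft generation
            if candidate is not None:               # scaffold: mechanical validation
                accepted.append(candidate)
            else:
                needs_review.append(raw)            # human: decides what "correct" means
        return accepted, needs_review

    print(normalize_dates(["2024-03-01", "01/03/2024", "sometime in March"]))

The machine drains the queue of the easy 95%; the `needs_review` pile is where the judgment calls live.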
oxag3n today at 9:54 PM
> We're thinking about AI wrong.
And this write-up is no exception.
Why even bother thinking about AI, when the Anthropic and OpenAI CEOs openly tell us what they want (quote from a recent Dwarkesh interview): "Then further down the spectrum, there's 90% less demand for SWEs, which I think will happen but this is a spectrum."
So save the thinking and listen to the intent: replace 90% of SWEs in the near future (6-12 months, according to Amodei).
finnjohnsen2 today at 9:58 PM
I like this. This is an accurate picture of the state of AI at this very moment, for me. The LLM is (just) a tool that makes me "amplified" for coding and certain tasks.
I will worry about developers being completely replaced when I see something resembling it. Enough people worry about that (or say it to amp stock prices) -- and they like to tell everyone about this future too. I just don't see it.
m_ke today at 9:19 PM
It's the new underpaid employee that you're training to replace you.
People need to understand that we have the technology to train models to do anything that you can do on a computer; the only thing that's missing is the data.
If you can record a human doing anything on a computer, we'll soon have a way to automate it.
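A minimal sketch of that kind of recording in Python, assuming the third-party pynput library (the file name and event schema are made up, and a real dataset would pair these actions with screenshots):

    import json
    import time
    from pynput import keyboard, mouse  # third-party: pip install pynput

    LOG_PATH = "session_log.jsonl"  # hypothetical output file

    def log_event(event):
        # Append one timestamped (action) record per line.
        event["t"] = time.time()
        with open(LOG_PATH, "a") as f:
            f.write(json.dumps(event) + "\n")

    def on_press(key):
        log_event({"type": "key", "key": str(key)})

    def on_click(x, y, button, pressed):
        if pressed:
            log_event({"type": "click", "x": x, "y": y, "button": str(button)})

    if __name__ == "__main__":
        kl = keyboard.Listener(on_press=on_press)
        ml = mouse.Listener(on_click=on_click)
        kl.start()
        ml.start()
        kl.join()  # record until the process is killed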
protocolture today at 10:36 PM
Petition to make "AI is not X, but Y" articles banned or limited in some way.
h4kunamata today at 11:12 PM
Neither. AI is a tool to guide you in improving your process in any way, shape, or form.
The problem is people using AI to do the heavy processing, which makes them dumber.
Technology itself was already making us dumber. I mean, Tesla drivers don't even drive anymore, or know how, because the car does everything.
Look how company after company is either being breached or having major issues in production because of heavy dependency on AI.
delichon today at 8:21 PM
If we find an AI that is truly operating as an independent agent in the economy without a human responsible for it, we should kill it. I wonder if I'll live long enough to see an AI terminator profession emerge. We could call them blade runners.
solarisos today at 11:11 PM
The exoskeleton analogy is much more accurate than the 'agent' or 'coworker' hype. A coworker implies delegation and trust, whereas an exoskeleton implies amplification. The risk with the exoskeleton model, though, is 'atrophy'—if we use AI to do all the structural thinking, we might lose the ability to spot the hallucinations when the suit starts to malfunction.
euroderf today at 11:05 PM
In the language of Lynch's Dune, AI is not an exoskeleton, it is a pain amplifier. Get it all wrong more quickly and deeply and irretrievably.
TrianguloY today at 10:42 PM
I like this analogy, and in fact I have used it for a totally different reason: why I don't like AI.
Imagine someone going to a local gym and using an exoskeleton to do the exercises without effort. Able to lift more? Yes. Run faster? Sure. Exercising and enjoying the gym? ...No, probably not.
I like writing code, even if it's boilerplate. It's fun for me, and I want to keep doing it. Using AI to do that part for me is just...not fun.
Someone going to the gym isn't trying to lift more or run faster; they're there to improve and to enjoy themselves. Not using AI for coding has the same rationale for me.
pavlov today at 9:21 PM
> “The AI handles the scale. The human interprets the meaning.”
Claude is that you? Why haven’t you called me?
yifanl today at 9:47 PM
AI is not an exoskeleton, it's a pretzel: It only tastes good if you douse it in lye.
ottah today at 10:35 PM
Make centaurs, not unicorns. The human is almost always going to be the strongest element in the loop, and the most efficient. Augmenting human skill will always outperform present day SOTA AI systems (assuming a competent human).
random3 today at 10:48 PM
I guess we'll see a lot of analogies and will have to get used to it, although most will be off.
AI can be an exoskeleton. It can be a co-worker, and it can also replace you and your whole team.
The "Office Space" question is what exactly you are within an organization and, concretely, when you'll become the bottleneck preventing your "exoskeleton" from doing its job efficiently and independently.
There's no other question that's relevant, for any practical purpose, to your employer or to your well-being as a person who presumably needs to earn a living based on their utility.
bGl2YW5j today at 9:29 PM
I like the analogy and will ponder it more. But it didn't take long before the article started spruiking Kasava's amazing solution to the problem they just presented.
xlerb today at 9:45 PM
LLMs don't have an internal notion of "fact" or "truth." They generate statistically plausible text.
Reliability comes from scaffolding: retrieval, tools, validation layers. Without that, fluency can masquerade as authority.
The interesting question isn’t whether they’re coworkers or exoskeletons. It’s whether we’re mistaking rhetoric for epistemology.
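The scaffolding point, as a minimal sketch in Python (call_model is a hypothetical stand-in for whatever completion API you use):

    import json

    def call_model(prompt):
        """Hypothetical model call; returns free-form text."""
        raise NotImplementedError

    def validated_json(prompt, required_keys, retries=3):
        """Fluent text isn't authority: parse and schema-check every answer."""
        last_error = None
        for _ in range(retries):
            raw = call_model(prompt)
            try:
                parsed = json.loads(raw)  # structural check: is it even JSON?
            except json.JSONDecodeError as e:
                last_error = e
                continue
            if isinstance(parsed, dict) and set(required_keys) <= parsed.keys():
                return parsed             # schema check passed
            last_error = ValueError(f"missing or malformed keys in: {raw!r}")
        raise RuntimeError(f"No valid answer after {retries} tries: {last_error}")

Nothing the model says is trusted until the validation layer lets it through; the fluency is upstream, the reliability is here.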
acjohnson55 today at 10:08 PM
> Autonomous agents fail because they don't have the context that humans carry around implicitly.
Yet.
This is mostly a matter of data capture and organization. It sounds like Kasava is already doing a lot of this. They just need more sources.
obsidianbases1 today at 10:57 PM
And markdown is like the data streamed from the brain to the exoskeleton.
Exoskeleton dexterity is something like coherence in the markdown stream.
dwheeler today at 9:46 PM
I prefer the term "assistant". It can do some tasks, but today's AI often needs human guidance for good results.
givemeethekeys today at 9:56 PM
Closer to a really capable intern. Lots of potential for good and bad; needs to be watched closely.
hintymad today at 9:51 PM
Or software engineers are not coachmen with AI as the diesel engine to their horses. Instead, software engineers are minstrels -- they disappear if all they do is move knowledge from one place to another.
cranberryturkey today at 10:33 PM
The exoskeleton metaphor is closer than most analogies but it still undersells one thing: exoskeletons augment existing capability along the same axis. AI augments along orthogonal axes too.
Running 17 products as an indie maker, I've found AI is less "do the same thing faster" and more "attempt things you'd never justify the time for." I now write throwaway prototypes to test ideas that would have died as shower thoughts. The bottleneck moved from "can I build this" to "should I build this" — and that's a judgment call AI makes worse, not better.
The real risk of the exoskeleton framing is that it implies AI makes you better at what you already do. In practice it makes you worse at deciding what to do, because the cost of starting is near zero but the cost of maintaining and shipping is unchanged.
ge96 today at 9:29 PM
It's funny developing AI stuff, e.g. RAG tools, and being against AI at the same time -- not drinking the kool-aid, I mean.
But it's fun. I say "Henceforth you shall be known as Jaundice" and it's like "Alright my lord, I am now referred to as Jaundice".
xnx today at 9:23 PM
An electric bicycle for the mind.
lukev today at 9:44 PM
Frankly I'm tired of metaphor-based attempts to explain LLMs.
Stochastic Parrots. Interns. Junior Devs. Thought partners. Bicycles for the mind. Spicy autocomplete. A blurry jpeg of the web. Calculators but for words. Copilot. The term "artificial intelligence" itself.
These may correspond to a greater or lesser degree with what LLMs are capable of, but if we stick to metaphors as our primary tool for reasoning about these machines, we're hamstringing ourselves and making it impossible to reason about the frontier of capabilities, or resolve disagreements about them.
An understanding without metaphors isn't easy -- it requires a grasp of math, computer science, linguistics, and philosophy.
But if we're going to move forward instead of just finding slightly more useful tropes, we have to do it. Or at least to try.
mikkupikku today at 9:33 PM
Exoskeletons sound cool but somebody please put an LLM into a spider tank.
functionmouse today at 9:33 PM
A blogger who fancies themselves an AI vibe-code guru with 12 arms and a 3rd eye, yet can't make a homepage that isn't totally broken.
How typical!
blibble today at 9:26 PM
an exoskeleton made of cheese
sibeliuss today at 10:07 PM
This is utterly boring AI writing. Go, please go away...
filipeisho today at 9:55 PM
By reading the title, I already know you did not try OpenClaw. AI employees are here.