The cult of vibe coding is dogfooding run amok
607 points - last Monday at 6:31 PM
If anything, it’s the exact opposite. It shows that you can build a crazy popular & successful product while violating all the traditional rules about “good” code.
Claude Code is being produced at AI Level 7 (Human specced, bots coded), whereas the author is arguing that AI Level 6 (Bots coded, human understands somewhat) yields substantially better results. I happen to agree, but I'd like to call out that people have wildly different opinions on this; some people say that the max AI Level should be 5 (Bots coded, human understands completely), and of course some people think that you lose touch with the ground if you go above AI Level 2 (Human coded with minor assists).
The PRs that it comes with are rarely even remotely controversial, shrink the codebase, and are likely saving tokens in the end when working on a real feature, because there's less to read and it's more boring. Some patterns are so common you can just write them down and throw them at different repos or sections of a monorepo. It's the equivalent of linting, but at a larger scale. Make the prompt language hesitant enough and it won't act like a steamroller either; it will mostly fix egregious things.
But again, this is the opposite of the "vibe coding" idea, where a feature appears from thin air. Vibe Linting, I guess.
- Shills or people with a financial incentive
- Software devs who either never really liked the craft to begin with or who have become jaded over time and are kind of sick of it.
- New people who are experiencing real, maybe excessive, excitement about being able to build stuff for the first time.
Setting aside the first group, as that one is obvious.
I’ve encountered a heap of group 2. They’re the ones sick of learning new things, for whatever reason. Software work has become a grind for them and vibe coding is actually a relief.
Group 3 I think are mostly the non-coders who are genuinely feeling that rush of being able to will their ideas into existence on a computer. I think AI-assisted coding could actually be a great on-ramp here and we should be careful not to shit on them for it.
As if 97% of web apps aren't just basic CRUD with some integration to another system if you are lucky.
99% of companies won't even have 50k users.
Both of these camps are the loudest voices on the internet, but there is a quiet, extremely productive camp somewhere in the middle with enough optimism and open-mindedness, along with years of engineering experience, to push Claude Code to its limit.
I read somewhere that the difference between vibe coding and "agentic engineering" is whether you know what the code does. Developing a complex website with Claude Code is not very different from managing a team of offshore developers in terms of risk.
Unless you are writing software for medical devices, banking software, fighter jets, etc... you are doing a disservice to your career by actively avoiding using LLMs as a tool in developing software.
I have used around $2,500 in Claude Code credits (measured with `bunx ccusage`) over the last 6 months, and 95% of what was written is never going to run on someone else's computer, yet I have gotten ridiculous value out of it.
Consider this overly simplified process of writing logic to satisfy a requirement:
1. Write code
2. Verify
3. Fix
We humans know the cost of each step is high, so we come up with various ways to improve code quality and reduce cognitive burden. We make code easier to understand for when we have to revisit it.
On the other hand, LLMs can understand** a large piece of code quickly***, and in addition can compile and run it with agentic tools like Claude Code, at the cost of tokens****. Quality does not matter to vibe coders if LLMs can fill in function logic that satisfies the requirement by iterating the aforementioned steps quickly.
I don't agree with this approach and have seen too many things broken by vibe code, but perhaps they are right as LLMs get better.
* Anecdotal
** I see an LLM as just a probabilistic function, so it doesn't "reason" like humans do. It's capable of highly advanced problem solving, yet it also fails at primitive tasks.
*** Relative to a human
**** I believe the cost of tokens is relatively cheap compared to a full-time engineer, and it will get cheaper over time.
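The three-step loop above (write, verify, fix) can be sketched as a small driver. This is a toy sketch, not Claude Code's actual mechanism; `generate`, `verify`, and `fix` are hypothetical callables standing in for model calls.

```python
def iterate(requirement, generate, verify, fix, max_rounds=5):
    """Drive the write/verify/fix loop until the code passes.

    generate, verify, and fix are hypothetical stand-ins for model
    calls; verify returns None on success, else an error message.
    """
    code = generate(requirement)      # 1. write code
    for _ in range(max_rounds):
        error = verify(code)          # 2. verify
        if error is None:
            return code
        code = fix(code, error)       # 3. fix
    raise RuntimeError("requirement not satisfied within budget")

# toy stand-ins: the "model" repairs an off-by-one after one round
gen = lambda req: "return n"
ver = lambda c: None if c == "return n + 1" else "off by one"
fx = lambda c, e: "return n + 1"
print(iterate("increment n", gen, ver, fx))  # → return n + 1
```

The vibe-coding bet is that the per-round cost of this loop (tokens) is low enough that iterating blindly beats careful upfront design.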
Disruption happens when firms are disincentivized to switch to the new thing or address the new customer, because the new thing's current state is bad and its margins are low. Intel missed out on mobile because their existing business was so excellent that making phone chips seemed beneath them.
The funny thing is that these firms are being completely rational. Why leave behind high margins and your excellent full-featured product for this half-working new paradigm?
But then eventually, the new thing becomes good enough and overtakes the old one. Going back to the Intel example, they felt this acutely when Apple switched their desktops to ARM.
For now, Claude Code works. It's already good enough. But unless we've plateaued on AI progress, it'll surpass hand crafted equivalents on most metrics.
Also, to those who say "this is proof that code quality doesn't matter any more", let's have this chat 5 years from now when they're crumbling under the weight of their own technical debt :)
Users like the author must be Claude's most valuable asset, because AI itself isn't a product; people's feedback that shapes its output is.
There's nothing wrong with saying that Claude Code is written shoddily. It definitely is. But I think it should come with the recognition that Anthropic achieved all of its goals despite this. That's pretty interesting, right? I'd love to be talking about that instead.
want code that isn't shit? embrace a coding paradigm and stick to it without flip-flopping and sticking your toe into every pond, use a good VCS, and embrace modularity and decomposability.
the same rules apply when 'writing real code'.
9/10 times when I see an out-of-control vibe-coded project, it sorta-kinda started as OOP before sorta-kinda trying to be functional, and so on. You can literally see the trends change mid-code. That would produce shit regardless of what mechanism used such methods: human, LLM, alien, or otherwise.
Once you have learned enough from playing with sand castles, you can start over and build real castles with real bricks (and steel, if you want to build a skyscraper). Then it is your responsibility to make sure they don't collapse when people move in.
But that isn't the hard part. The hard part is that some people are using the tool versions and some are using the agent versions, so consolidating them one way or another will break someone's workflow, and that incurs a real actual time cost, which means this is now a ticket that needs to be prioritized and scheduled instead of being done for free.
- Brooks' No Silver Bullet: no single technology or management technique will yield a 10-fold productivity improvement in software development within a decade. If we wrote a spec that detailed everything we want, we would write something as specific as code. Currently people seem to believe that a lot of the fundamentals are well covered by existing code, so a vague line like "build me XXX with YYY" can lead to amazing results, because AI successfully transfers the world-class expertise of some engineers into code generated for such a prompt. Most of the complexity turns out to be accidental, and we need far fewer engineers to handle the essential complexity.
- Kernighan's Law, which says debugging is twice as hard as writing the code in the first place. Now people increasingly believe that AI can debug far faster than humans (most likely because other smart people have already done similar debugging). And in the worst case, just ask the AI to rewrite the code.
- Dijkstra on the foolishness of programming in natural language: something along the lines of a system described in natural language becoming exponentially harder to manage as its size increases, whereas a system described in formal symbols grows linearly in complexity relative to its rules. Similar to the above, people believe that the messiness of natural language is not a problem as long as we give detailed enough instructions to the AI, while letting it fill in the gaps with statistical "common sense", or expertise thereof.
- Lehman’s Law, which states that a system's complexity increases as it evolves unless work is done to maintain or reduce it. Similar to the above, people are starting to believe otherwise.
- And, remotely, Coase's Law, which argues that firms exist because the transaction costs of using the open market are often higher than the costs of directing the same work internally through a hierarchy. People are starting to believe that the cost of managing and aligning agents is so low that one-person companies handling large numbers of transactions will appear.
Also, ultimately, Jevons Paradox, as people worry that the advances in AI will strip out so much demand that the market will slash more jobs than it generates. I think this is the ultimate worry of many software engineers. The Luddites were ridiculed, but they were really skilled craftsmen who spent years mastering the art of using those giant 18-pound shears. They were the staff engineers of the 19th-century textile world. Mastering those shears wasn't just a job but an identity, a social status, and a decade-long investment in specialized skills. Yes, Jevons Paradox may bring new jobs eventually, but it may not reduce the blood and tears of ordinary people.
Interesting times.
AI naysayers are heavily incentivized to find fault with it, but in my experience it's pretty rare to see a codebase of that size where it's not easy to pick out "bad code" examples.
Are there any relatively neutral parties who've evaluated the code and found it to be obviously junk?
The ship has sailed. Vibe coding works. It will only work better in the future.
I have been programming for decades now, and I have managed teams of developers. Vibe coding is great, especially in the hands of experts who know what they are doing.
Deal with it because it is not going to stop. In the near future it will be local and 100x faster.
"I have been screaming at my computer this past week dealing with a library that was written by overpaid meatbags with no AI help."
And here we go: The famous "humans do it, too" argument. With the gratuitous "meatbag" propaganda.
Look Bram, if you work on bitcoin bullshit startups, perhaps AI is good enough for you. No one will care.
In the past, which is a foreign country, we would throw away the prototypes.
Nowadays vibe coding just keeps adding to them.
And then you have the source code for quake or doom.
memory created!
IME, AI-native engineering requires a lot of infrastructure to make it viable. Teams who are just opening up Cursor, putting it on "auto", and trying to one-shot features may get stuff that works, but it is indeed slop.
Since the beginning of the year, I've been spearheading a low-stakes AI-native project (an internal tool). No one's written a single line of code, and we've learned so much from the experience. The first rule was that our product manager, who is technical but isn't typically in the weeds, needs to be able to one-shot prompts with Cursor on auto. So many rules stem from there: e2e tests to ensure he doesn't break stuff, custom linters to ensure that code lives in the right place, and architectural spec sheets so the LLM doesn't try to do raw DB queries from the client.
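A "code lives in the right place" linter can be surprisingly small. The sketch below assumes a hypothetical convention that raw SQL is only allowed under `server/db/`; the directory layout and the keyword pattern are illustrative assumptions, not this team's actual rules.

```python
import pathlib
import re

# Hypothetical convention: raw SQL may only appear under server/db/.
DB_PATTERN = re.compile(r"\b(SELECT|INSERT|UPDATE|DELETE)\b\s", re.IGNORECASE)

def find_misplaced_queries(root: str) -> list[str]:
    """Return paths outside server/db/ that contain raw SQL keywords."""
    offenders = []
    for path in pathlib.Path(root).rglob("*.py"):
        rel = path.relative_to(root)
        if rel.parts[:2] == ("server", "db"):
            continue  # queries are allowed here
        if DB_PATTERN.search(path.read_text(errors="ignore")):
            offenders.append(str(rel))
    return sorted(offenders)
```

Running a check like this in CI gives the LLM (and the PM driving it) fast, mechanical feedback instead of relying on a human reviewer to notice architectural drift.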
We're still not there, but we're getting closer and learning and improving every day.
I think the folks who are vibe coding a lot either aren't working in a team, or they are omitting the fact that they have spent a long time building harnesses to ensure the LLM doesn't run amok.
And I think the people who hate vibe coding are likely just asking Claude Code to do X without using Skills that have opinionated ways to do X.
All that said, I don't think we should ignore how the sausage is made at all. Part of what makes me able to move quickly in this project is knowing where stuff lives. I may not understand the line-by-line code, but if I know where to look to find out why I'm missing data that's in the DB, I can move a lot faster than if I have no idea what's going on in the codebase. Then when I find the problematic file or function, I can ask the LLM why it's like X and tell it it should be like Y.
So I set out to build an app with CC just to see what it's like. I currently use Copilot (copilot.money) to track my expenditures, but I've become enamored with sankey diagrams. Copilot doesn't have this charting feature, so I've been manually exporting all my transactions and massaging them into the sankey format. It's a pain in the butt, error-prone, and my python skills are just not good enough to create a conversion script. So I had CC do it. After a few minutes of back and forth, it was working fine. I didn't care about spaghetti code at all.
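The conversion in question is genuinely small, which is why it one-shots well. Here's a sketch of that kind of script, assuming a CSV export with `category` and `amount` columns (the column names are assumptions, not Copilot's actual export schema); sankeymatic takes flow lines of the form `Source [amount] Target`.

```python
import csv
from collections import defaultdict
from io import StringIO

def to_sankeymatic(csv_text: str) -> str:
    """Aggregate transactions per category into sankeymatic flow lines.

    Assumes (hypothetically) columns named 'category' and 'amount';
    sankeymatic expects lines of the form 'Source [amount] Target'.
    """
    totals = defaultdict(float)
    for row in csv.DictReader(StringIO(csv_text)):
        totals[row["category"]] += float(row["amount"])
    return "\n".join(
        f"Income [{amt:.2f}] {cat}" for cat, amt in sorted(totals.items())
    )

sample = "category,amount\nRent,1200\nFood,300\nFood,150\n"
print(to_sankeymatic(sample))
# Income [450.00] Food
# Income [1200.00] Rent
```

For a throwaway personal tool like this, correctness of the output is checkable by eye against the bank statement, which is exactly the situation where not reading the generated code costs little.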
So next I thought, how about having it generate the sankey diagrams (instead of me using sankeymatic's website). 30 minutes later, it had a local website running that was doing what I had been manually doing for months.
Now I was hooked. I started asking it to build a native GUI version (for macOS), and it dutifully cranked out a version using PyObjC etc. After ironing out a few bugs, it was usable in less than 30 minutes. Feature adds consumed all my tokens for the day, and the next day I was brimming with changes. I burned through that day's tokens as well, and after 3 days (I'm on the el cheapo plan), I have an app that does basically what I want in a reasonably attractive and accurate manner.
I have no desire to look at the code. The size is relatively small, and resource usage is small as well. But it solved this one niche problem that I never had the time or skill to solve.
Is this a good thing? Will I be downvoted to oblivion? I don't know. I'm very very concerned about the long term impact of LLMs on society, technology and science. But it's very interesting to see the other side of what people are claiming.
All this to say: vibe coding as no-code, even if the solution can hook up APIs for you, etc.? Nah. At best it should be a gateway to fully understanding and building via agentic development.
I get it, existential threats are scary, but you can't just shit talk them and hope they go away.
Bad code or good code is no longer relevant. What matters is whether or not the AI fulfills the contract for how the application is supposed to work. If the code sucks, you just rerun the prompt and the next iteration will be better. But better doesn't matter, because humans aren't reading the code anymore. I haven't written a line of code since January and I've made very large scale improvements to the products I work on. I've even stopped looking at the code at all, except a cursory look out of curiosity.
Worrying about how the sausage is made is a waste of time because that's how far AI has changed the game. Code doesn't matter anymore. Whether or not code is spaghetti is irrelevant. Cutting and pasting the same code over and over again is irrelevant. If it fulfills the contract, that's all that matters. If there's a bug, you update the contract and rerun it.
2024 - Utter Trash
2025 - Merely hotdog water
2026 - Aaaaaaaaaactually pretty good...
Every forward-leaning platform is building out an MCP interface, I think we're past the point of "soulless fad."
creating a product in a span of mere months that millions of developers use every day is the opposite of ridiculous. we wouldn't even have known about the supposed ridiculousness of the code if it hadn't leaked.
People were given faster typists with incredible search capabilities and decided quality doesn’t matter anymore.
I don’t even mean the code. The product quality is noticeably sub par with so many vibe-coded projects.
This is painful to read. It feels like a rant from a person who does not use version control, testing, or CI.
It is cruel to force the machine into a guessing game with a toddler whose spec is "I do not like it". If you have coding standards and preferences, they should already be distilled and explained somewhere, and applied automatically (like an auto-linter in the not-so-old days). A good start is to find open-source projects you like, let Claude review them, and generate coding rules. Then run it on your code base overnight until it passes the tests and the new coding standards' automated review.
The "vibe coding" is you run several agants in parallel, sometimes multiple agents on the same problem with different approach, and just do coding reviews. It is mistake to have a synchronous conversation with a machine!
This type of works needs severe automation and parallelisation.
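The fan-out described above (several agents on one problem, each with a different approach) can be sketched with a thread pool. `run_agent` here is a hypothetical callable, e.g. a wrapper that shells out to a CLI agent; nothing below reflects any specific tool's API.

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out(problem: str, approaches: list[str], run_agent) -> dict[str, str]:
    """Run one agent per approach in parallel, then review the results.

    run_agent is a hypothetical callable (e.g. wrapping a CLI agent);
    it receives the problem and an approach and returns a patch/summary.
    """
    with ThreadPoolExecutor(max_workers=len(approaches)) as pool:
        futures = {a: pool.submit(run_agent, problem, a) for a in approaches}
        return {a: f.result() for a, f in futures.items()}
```

The human then reviews the candidate results asynchronously and merges the best one, rather than babysitting a single agent turn by turn.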
Set up an AI bot to analyze the code for spaghetti code parts and clean up these parts to turn it into a marvel. :-)