If you thought code writing speed was your problem you have bigger problems
265 points - today at 5:48 PM
Source | Comments
So far there's no obvious change one way or the other, but it hasn't been very long and everyone is in various states of figuring out their new workflows, so I don't think we have enough data for things to average out yet.
We're finding cases where fast coding really does seem to be super helpful though:
* Experimenting with ideas/refactors to see how they'll play out (often the agent can just tell you how it's going to play out)
* Complex tedious replacements (the kind of stuff you can't find/replace because it's contextual)
* Times where the path forward is simple but also a lot of work (tedious stuff)
* Dealing with edge cases after building the happy path
* EDIT: One more huge one I'd add: anywhere the thing you're adding is a close analogue of another branch/PR, the agent seems to do great (which is a "simple but tedious" case)
The single biggest potential productivity gain, though, I think is being able to do something else while the agent is coding: you can go review a PR, then check out what the agent produced when you come back.
I would say we've gone from being extremely skeptical to cautiously excited. I think it's far-fetched that we'll see any order-of-magnitude differences; we're hoping for 2x (which would be huge!).
Why not? Why can't faster typing help us understand the problem faster?
> When you speed up code output in this environment, you are speeding up the rate at which you build the wrong thing.
Why can't we figure out the right thing faster by building the wrong thing faster? Presumably we were gonna build the wrong thing either way in this example, weren't we?
I often build something to figure out what I want, and that's only become more true the cheaper it is to build a prototype version of a thing.
> You will build the wrong feature faster, ship it, watch it fail, and then do a retro where someone says "we need to talk to users more" and everyone nods solemnly and then absolutely nothing changes.
I guess because we're just cynical.
This is the company I (soon no longer) work at (anyone hiring?).
The thing is that they don’t even allow the use of AI. I’ve been assured that the vast majority of the code was human-written. I have my doubts but the timeline does check out.
Apart from that, this article uses a lot of words to completely miss the fact that (A) “use agents to generate code” and “optimize your processes” are not mutually exclusive things; (B) sometimes, for some tickets - particularly ones stakeholders like to slide in unrefined a week before the sprint ends - the code IS the bottleneck, and the sooner you can get the hell off of that trivial but code-heavy ticket, the sooner you can get back to spending time on the actual problems; and (C) doing all of this is a good idea completely regardless of whether you use LLMs or not; and anyone who doesn’t do any of it and thinks the solution is to just hire more devs will run into the exact same roadblocks.
The point specific to software, that it might not even be producing in-spec output, is also very good.
Comments that cite the solo dev/prototype case are of course not what this is getting at, but it's one good use of quick generation.
I would extend this article by saying what The Goal says, namely that the goal of every firm is to make money, and everything is intermediate to that. So whether or not software architecture is grade-A or grade-C, it's only ever in this subservient role to the firm's goal.
I just set up Claude Code tonight. I still read and understand every line, but I don't need to Google things, move things around and write tests myself. I state my low-level intent and it does the grunt work.
I'm not going to 10x my productivity, but it'll free up some time. It's just a labour-saving technology, not a panacea. Just like a dishwasher.
The proper approach to speeding things up would be to ask, "What are the limiting factors that stop us from doing X, Y, Z?"
--
This situation of management expecting things to become fast because of AI is "vibe management". Why think, why understand, why talk to your people, if you've seen an excited presentation of the magic tool and the only thing you need to do is adopt it?
Take the way AI is being developed as an example. People rush to build giant agents in giant datacenters that are aligned to giant corporations and governments. They're building the agentic organism equivalent of machiavellian organizations, even though they'd be better off building digital humans that are aligned to individual humans that run on people's gaming PCs at home. They will find out that the former is the wrong architecture, but the cost of that failed iteration is the future of human civilization, and nobody gets a second try.
Of course, this is an extreme example on one end of the scale. On the other end, it wouldn't matter at all if you're building a small game for yourself as a weekend project with no users to please or societal impacts to consider.
For instance, GCC will inline functions, unroll loops, and myriad other optimizations that we don't care about. But when we review the ASM that GCC generates (we don't) we are not concerned with the "spaghetti" and the "high coupling" and "low cohesion". We care that it works, and is correct for what it is supposed to do. And that it is a faithful representation of the solution that we are trying to achieve.
Source code in a higher-level language is not really different anymore. Agents write the code, maybe we guide them on patterns and correct them when they are obviously wrong, but the code is merely the work-item artifact that comes out of extensive specification, discussion, proposal review, and more review of the reviews.
A well-guided, iterative process and problem/solution description should be able to generate an equivalent implementation whether a human is writing the code or an agent.
Then there's the speedup. A smaller team can now achieve what previously required a larger team. This means less communication overhead, and in theory fewer and/or shorter meetings. Which all translates to me spending more time and more energy on thinking about the solution. Which is what matters.
When a person writes code, the person reasons through the code multiple times, step by step, so that they at least don't make stupid or obvious mistakes. This level of close examination is not covered in code review. And arguably this is why we can trust human-written code more than AI-produced code, even though AI can probably write better code at smaller scale.
In contrast, Amazon asked senior engineers to review AI-generated code before merging it. But the purpose of code review was never about catching all the bugs -- that is the job of test cases, right? Besides, the more senior an engineer is at Amazon, the more meetings they go to, and the less context they have about the code. How can they be effective in code review?
cries in Factorio
Btw: https://playcode.io
So 'writing helper' + 'research helper' + 'task helper' alone is amazing and we are def beyond that.
Even side features like 'do this experiment' where you can burn a ton of tokens to figure things out ... so valuable.
These are cars in the age of horses, it's just a matter of properly characterizing the cars.
factorio ... it's also the most useful engineering homework that's technically a game
The biggest time sink is usually debugging integration issues that only surface after you've connected three services together. Writing the code took 2 hours, figuring out why it doesn't work as expected takes 2 days.
I've found the most impactful investment is in local dev environments that mirror production as closely as possible. Docker Compose with realistic seed data catches more bugs than any amount of unit testing.
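A minimal sketch of that kind of setup, assuming a Postgres-backed app (the service names, image tag, and `./seed` directory are illustrative assumptions, not anyone's actual config):

```yaml
# docker-compose.yml -- app plus a production-like Postgres seeded with realistic data
services:
  app:
    build: .
    environment:
      DATABASE_URL: postgres://dev:dev@db:5432/app
    depends_on:
      db:
        condition: service_healthy   # don't start the app until the DB is ready
  db:
    image: postgres:16               # pin the same major version production runs
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev
      POSTGRES_DB: app
    volumes:
      # *.sql files here are executed by the postgres image on first startup
      - ./seed:/docker-entrypoint-initdb.d
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U dev -d app"]
      interval: 2s
      retries: 15
```

The seed directory is where the "realistic data" lives; the closer it resembles production's shape and volume, the more integration bugs surface before deploy.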
Yeah. I keep seeing this over and over with devs who use LLMs. It's painful to watch.
if you ever played factorio this is pretty clear.
You could write more code, but you also could abstract code more if you know what/how/why.
This same idea extends to business: you can perform more service, or you can try to provide more value with the same amount of work.
It is not about the speed of typing code.
It's about the speed of "creating" code: the boilerplate code, the code patterns, the framework-version-specific code, etc.
A lot of these blogs start from a false premise or a lack of imagination.
In this case, the premise that coding isn't a bulk time sink is faulty and unsubstantiated: just measure the ratio of architects to developers (and yes, LLMs can do debugging, so the other common objection doesn't apply either). The claim that time saved on secondary activities doesn't translate into productivity is also false, or at least reductive, because you gain more time to spend on the bottlenecked activity.
Same goes for the terminal, I like that it allows me to use a large directory tree with many assorted file types as if it was a database. I.e. ad hoc, immediate access to search, filter, bulk edits and so on. This is why one of the first things I try to learn in a new language is how to shell out, so I can program against the OS environment through terminal tooling.
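A minimal sketch of that "directory tree as database" idea (the paths and the `old_name`/`new_name` symbols are made up for illustration; the `sed -i` form shown is GNU sed):

```shell
# Set up a toy tree with two files mentioning a symbol we want to rename
mkdir -p /tmp/tree/src
printf 'old_name()\n' > /tmp/tree/src/a.txt
printf 'old_name()\nunrelated\n' > /tmp/tree/src/b.txt

# "SELECT": which files mention the symbol?
grep -rl 'old_name' /tmp/tree/src

# "UPDATE": bulk-edit every matching file in place (GNU sed; BSD sed needs -i '')
grep -rl 'old_name' /tmp/tree/src | xargs sed -i 's/old_name/new_name/g'

# Verify: succeeds (prints ok) when no occurrence of the old symbol remains
! grep -rq 'old_name' /tmp/tree/src && echo ok
```

The same pipeline shape (search, filter, bulk edit) composes with `find`, `sort`, `awk`, and friends, which is what makes a plain directory tree feel like an ad hoc, queryable database.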
Deciding what and how to edit is typically an important bottleneck, as are the feedback loops. It doesn't matter that I can generate a million lines of code, unless I can also with confidence say that they are good ones, i.e. they will make or save money if it is in a commercial organisation. Then the organisation also needs to be informed of what I do, it needs to give me feedback and have a sound basis to make decisions.
Decision making is hard. This is why many bosses suck. They're bad at identifying what they need to make a good decision, and just can't help their underlings figure out how to supply it. I think most developers who have spent time in "BI" would recognise this, and a lot of the rest of us have been in worthless estimation meetings, retrospectives and whatnot where we ruminate a lot of useless information and watch other people do guesswork.
A neat visualisation of what a system actually contains and how it works is likely of much bigger business value than code generated fast. It's not like big SaaS ERP consultancy shops have historically worried much about how quickly the application code is generated, they worry about the interfaces and correctness so that customers or their consultants can make adequate unambiguous decisions with as little friction as possible.
Please stop making fools of yourselves and go use Claude for a month before writing that “AI coding ain’t nothing special” post.
Ignorance of what Claude can actually do means your arguments have no standing at all.
“I hate it so much I’ll never use it, but I sure am expert enough on it to tell you what it can’t do, and that humans are faster and better.”
I have very much upset a CEO before by bursting his bubble with the fact that how fast you work is so much less important than what you are working on.
Doing the wrong thing quickly has no value. Doing the right thing slowly makes you a 99th percentile contributor.
Expedience is the enemy of quality.
Want proof? Everything built under "move fast and break things" 5-10 years ago is a pile of malfunctioning trash. This is not up for debate.
This is simply an observation. I do not make the rules. See my last submission for some CONSTRUCTIVE reading.
Bye for now.
PS. The tech bros tried to do exactly that to millennials, but accidentally shot boomers instead.
The sentiment that developers shouldn't be writing code anymore means I cannot take you seriously. I see these tools fail on a daily basis and it is sad that everyone is willing to concede their agency.