Some things just take time
347 points - today at 2:46 PM
I’ve been noticing that this simple reality explains almost all of both the good and the bad that I hear about LLM-based coding tools. Using AI for research or to spin up a quick demo or prototype is using it to help plot a course. A lot of the multi-stage agentic workflows also come down to creating guard rails before doing the main implementation so the AI can’t get too far off track. Most of the success stories I hear seem to be in these areas so far. Meanwhile, probably the most common criticism I see is that an AI that is simply given a prompt to implement some new feature or bug fix for an existing system often misunderstands or makes bad assumptions and ends up repeatedly running into dead ends. It moves fast but without knowing which direction to move in.
Consider the idea of trying to determine how quickly an unknown number of timers will go ping. It could be 10,000 timers that go ping when finished or 1,000,000 timers that go ping when finished. I don't know when they are going to go ping, just that all the timers are running at different speeds spread over some distribution.
After one time period, 5,000 pings have been detected. Should you conclude that timers are pinging fairly quickly?
You cannot tell the typical duration of the timers if you don't know how many timers are out there. Your only evidence that a timer exists is its ping, so you cannot tell whether a small population is running at high speed or a large population at moderate speed. In both cases, the data you receive come from the fastest members of the population.
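To see why early pings can't distinguish the two cases, here's a quick simulation (a sketch; the exponential duration distribution and the specific mean durations are my assumptions, chosen so both scenarios produce roughly 5,000 pings in the first period):

```python
import random

random.seed(0)

def pings_by(t, n_timers, mean_duration):
    """Count timers (with exponentially distributed durations) that ping by time t."""
    return sum(1 for _ in range(n_timers)
               if random.expovariate(1.0 / mean_duration) <= t)

# Scenario A: small, fast population -- 10,000 timers, mean duration ~1.44 periods,
# so each has about a 50% chance of pinging within the first period.
small_pop = pings_by(1.0, 10_000, 1.4427)

# Scenario B: large, slow population -- 1,000,000 timers, mean duration ~199.5 periods,
# so each has only about a 0.5% chance of pinging within the first period.
large_pop = pings_by(1.0, 1_000_000, 199.5)

print(small_pop, large_pop)  # both counts land near 5,000
```

Observing ~5,000 pings after one period is consistent with both worlds; only more periods of data (or knowing the population size) breaks the tie.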
In other words we haven't yet seen what the 10 year project made using these tools is like (or even if it exists/will exist), because they haven't been around for 10 years.
Really like the article; I think it is awesome, and I strongly believe AI for coding is here to stay, but I also believe we still need a strong understanding of why we are building things and what they should look like.
I've been using AI to help me write it and I've come to a couple conclusions:
- AI can make working PoCs incredibly quickly
- It can even help me think of story lines, decision paths etc
- Given that, there are still a TON of decisions to be made, e.g. what artwork to use, what makes sense from a story perspective
- Playtesting alone + iterating still occurs at human speed b/c if humans are the intended audience, getting their opinions takes human time, not computer time
I've started using this example more and more as it highlights that, yes, AI can save huge amounts of time. However, as we learned from the Theory of Constraints, there is always another bottleneck somewhere that will slow things down.
An example being the common attitude that [advanced tech] is just a math problem to be solved, and not a process that needs to play itself out in the real world, interacting with it and learning, then integrating those lessons over time.
Another way to put this is: experience is undervalued, and knowledge is overvalued. Probably because experience isn’t fungible and therefore cannot be quantified as easily by market systems.
1. Probably not his original idea, and now that I think about it this is kind of more Hegelian. I’m not familiar enough with Hegel to reference him though.
You fill a jar with sand and there is no space for big rocks.
But if you fill the jar with big rocks, there is plenty of space for sand. Remove one of the rocks and the sand instantly fills that void.
Make sure you fit the rocks first.
Lost me in paragraph three. We pay for those things because they're recognizable status symbols, not because they took a long time to make. It took my grandmother a long time to knit the sweater I'm wearing, but its market value is probably close to zero.
Agentic coding very much feels like a "video game" in the sense that you pull the lever and open the loot box, and sometimes it's an epic +10 agility sword and sometimes it's just grey vendor trash. Whether or not it generates "good" or even "usable" code fades to the background as the thrill of "I just asked for a UI to orchestrate microservices and BLAMMO there it was!" moves to the fore.
Yes, you cannot build years of community and trust in a weekend. But sometimes it's totally sufficient to plant a seed, give it a little water, and leave it to grow on its own. Just ask my father, who now has to deal with a huge maple tree that I planted 30 years ago and never cared for.
Open Source projects sometimes work like this. I created a .NET library for Firebase Messaging in a weekend a few years ago… and it grew on its own, with PRs flowing in. So if your weekend project generates enough interest and continues to grow a community without you, what's the bad thing here? I don't get it.
Sometimes a tree dies and an Open Source project wasn’t able to make it.
That said, I've just finished rewriting four libraries to fix long-standing issues that I hadn't been able to fix for the past 10 years.
It's been great to use Gemini as a sparring partner for reworking the API surface of these libraries, which had been problematic all that time. I was able to validate and invalidate ideas so quickly.
Having once been one of the biggest LLM haters, I have to say that I'm immensely enjoying it right now.
I feel this new world sucks. We have new technology that boosts the productivity of the individual engineer, and we could be doing MUCH better work, instead of just rushed slop to meet quotas.
I feel I'm just building my replacement, to bring the next level of profits to the c-suite. I just wish I wasn't burning out while doing so.
What's slower now are threats to production - even minor regulations take years or decades, and often appear only when workarounds have surfaced.
So what changed in the last 40+ years are the many tools for businesses to shape the conditions of their business: the downstream market, upstream suppliers, and regulatory support/constraints. This is extremely patient work over generations of players, sometimes by individuals, but usually by coalitions of mutual corporate self-interest, where even the largest players couldn't refuse to participate.
It's evolution.
I do wonder if productivity with AI coding has really gone up, or if it just gives the illusion of that, and we take on more projects and burn ourselves out?
Absolutely, although I wonder how different 'trust' will be in the culture of tomorrow. Will it 'matter' as much, be as cherished, as earned over the fullness of time?
I suspect it is a pendulum, and we are back to oak trees at some point. But which way is the pendulum swinging right now?
Vibe slop-ing at supersonic speeds and waiting years to grow aren't the only options, there's something in between where you have enough signal to keep going and enough speed to not waste years on the wrong thing.
I feel that today's VCs have completely disregarded the middle and are focused on getting as big as possible as fast as possible without regard to the effect it's having on the ecosystem.
This is a bad start. Louis XIV at Versailles and Marly famously made whole forests appear or disappear overnight, to the utter dismay of Saint-Simon, the memorialist, who thought this was an unacceptable waste of money and energy.
And this was before the industrial revolution. Today I'm sure many more miracles happen every day.
Refactoring decent-sized components is an order of magnitude easier than it was, but the more important signal is still: why are you refactoring? What changed in your world or your world-view that caused this?
Good things still take time, and you can't slop-AI code your way to a great system. You still need domain expertise (as the EXCELLENT short story from the other day explained, Warranty Void if Regenerated (https://nearzero.software/p/warranty-void-if-regenerated) ). The decrease in friction does definitely allow for more slop, but it also allows for more excellence. It just doesn't guarantee excellence.
But no one wants to go out of their house.
Social connections. Trust. Facetime. All matter more than ever.
Want a moatable software business? Know your customers on a personal level. Have a personal relationship. Know the people that sign the contracts, know their kids names, where they vacationed last winter, their favorite local restaurant.
Get out of the house.
Imagine a world in which the promise of AI was that workers could keep their jobs, at the same compensation as before, but work fewer hours and days per week due to increased productivity.
What could you do with those extra hours and days? Sleep better. Exercise more. Prepare healthy meals. Spend more time with family and friends. The benefits to physical and mental well-being are priceless. Even if you happened to earn extra money for the same amount of work, your time can be infinitely more valuable than money.
Unfortunately, that's not this world. Which is why the "increased productivity" promise doesn't seem to benefit workers at all.
If you look at the technological utopias that people imagined 50, 60+ years ago, they involved lives of leisure. If you would have told them that advances in technology would not reduce our working hours at all, maybe they would have started smashing the machines back then. Now we're supposed to be happy with more "stuff", even if there's no more time to enjoy stuff.
What AI allows us to do are the things we would not have been able to prioritize before: to "write" those extra tests, add that minor feature, or solve that decade-old bug. Things that we would never have been able to prioritize, we are now able to do. It's not perfect, it's sometimes sloppy, but at least it's getting shit done. It doesn't matter if you solve 10% of your problem perfectly if you never have time for the remaining 90%.
I do miss the coding, _a lot_, but productivity is a drug and I will take it.
Oh, I thought it was because they're a way to show off about being rich.
> We require age minimums for driving, voting, and drinking because we believe maturity only comes through lived experience.
Even if she could reach the pedals, my 4yo doesn't have the attention span to drive. This isn't a "lived experience" thing, it's a physical brain development thing. IIRC there are effects in learning math where starting earlier had limited impact on being able to move to certain more advanced topics earlier; i.e. there's more going on than just hours of experience.
The standard age for voting is also the age for being a legal adult. There are sound logical reasons that these ages should match.
The standard drinking age is due to pressure by activists, and AIUI is lower in other countries.
They have spent the last decade building processes and guardrails for getting consistent average performance from people. But now, some talented people who worked at those companies are building their own new companies without the overhead and moving much, much more quickly.
I think what we assume is "vibe slop at inference speed" is not as simple as people make it out to be. From one perspective, I think it might generally be people trying to save their jobs.
I'm seeing more slop come out of larger, older companies than the new ones (with experienced operators).
And the speed is somewhat scary. For a smaller team, it doesn't take as much effort to build a deep, beautiful product anymore.
The bottleneck was never an engineer's ability to code. It was the 16 layers between the customer and the programmer, which have vanished in smaller companies and are forcing larger ones to produce slop.
I'm reading Against The Machine by Paul Kingsnorth, and reading this blog piece it is hard not to make connections with the points of the book: the use of the tree as a counter-argument to the machine's automation credo in the blog post very much aligns with what I've read so far.
Undoubtedly a lot of that comes down to production cost and safety. A plane is far more likely to kill people, and it costs a shitload more to produce than an app (though plenty of software is mission-critical). But now in software we can move quickly enough up front that if we don't start applying some discipline, it's going to bite us in the ass in the long run.
Not true; we do this because 99% of the time it's true. However, there are people who would be perfectly competent and responsible drivers before the age of 16-18. Same with voting: there are people who have a deep understanding of and intelligence about politics at an age younger than suffrage. Equally, there are people who will be reckless drivers at 40 and vote on a whim at 60.
We have these rules not because sophistication only comes through lived experience; we have them because it's strongly correlated and covers most of the error cases.
To take this to AI: run the model enough times at a high enough temperature, and perhaps it can solve your challenges with a high enough quality. Just a thought.
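As a toy illustration of that best-of-N idea (everything here is hypothetical; `sample_solution` just stands in for one model call, with temperature modeled as the spread of outcome quality):

```python
import random

random.seed(1)

def sample_solution(temperature):
    """Hypothetical stand-in for one model call: returns a quality score.
    Higher temperature means more variance -- more duds, but also more gems."""
    return random.gauss(0.5, temperature)

def best_of_n(n, temperature):
    """Sample n candidate solutions independently and keep the best one."""
    return max(sample_solution(temperature) for _ in range(n))

one_shot = sample_solution(0.1)   # a single cautious, low-temperature attempt
best = best_of_n(100, 0.8)        # 100 high-temperature attempts, keep the best
print(one_shot, best)
```

The catch, of course, is that you need a cheap, reliable way to tell the gem from the vendor trash among the N samples, and that selection step is often the hard part.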