Why senior developers fail to communicate their expertise
566 points - yesterday at 3:08 PM
The average person believes that anything can be put into words, and that once the words are said, they mean to the reader what the speaker meant; the only difficulty could come from not knowing the words or from ambiguity. The request to take a dev and "communicate" their expertise to another rests on this belief. And because this belief is wrong, the attempt to communicate expertise never fully succeeds.
Factual knowledge transfers well through words, which is why there is always at least partial success at communicating expertise. But the solidified, interconnected world model of what all your knowledge adds up to cannot be transferred. AI can blow you out of the water at knowing more facts, but it doesn't yet use them in a way that yields surprisingly correct insights, surprisingly often, about what further knowledge probably looks like. That mysterious ability to be right more often comes from the "world model", and that is what "expertise" is. That part cannot be communicated; one can only help others acquire the same expertise.
Communicating expertise is a hint about where to go and what to learn; the reader still needs to put in the effort to internalize it, and they need the right project that provides the opportunity to learn what needs to be learnt. It is not an act of transfer.
> “Do we really need that?”
> “What happens if we don’t do this?”
> “Can we make do for now? Maybe come back to this later when it becomes more important?”
As with experimenters: every system is different, every product is different. If I were building firmware for a CT scanner, my approach to trying out new things would be different from building a CRUD SaaS with 100 clients in a field that could benefit from a fresh perspective.
There are definitely ways for eager/very open seniors to drive systems into hard-to-get-out-of corners. But then there are people who claim PHP5 is all you need.
A rewrite?
I recall a few times when everyone promised: if this gets promoted, then we will rewrite it from zero. It never happened.
The article touches on responsibility and accountability. There is none for the risk taker, by definition. You have a crazy idea, you rush it out, you hope clients bite. You profit. It's not even your problem how to make it work, scale, or not cost more to run than we sell it for.
The loop on the right. There are companies, two of them very popular these days, that took it to an extreme. You ship something fast, and since it only scales linearly, you go raise money. Successful companies, countless users, some of whom even pay. Who's to blame? The senior developer, or simply someone reasonable who asks how that's sustainable and what the way out is? Those people get fired, so whoever's left is a believer.
So it's not like I have nothing to share after 30 years of experience in the industry, I just have nobody to share it with.
It’s not difficult for seasoned engineers with deep technical backgrounds to whiteboard a distributed system in twenty minutes. What takes hundreds of customer discussions, invalidated hypotheses, and years of experience is building the judgment about whether this is the right solution at the right time.
The engineers who compound quickly have usually built their skills in both areas concurrently. Communication of the latter is more challenging due to the judgment-based foundation beneath it.
> Need to build a whole new feature to test it? Have you tried putting a button in the existing UI and seeing if people click it?
Pretty much word for word. It feels like engineers are collectively feeling the pain now that product has decided that engagement of mental faculties is no longer necessary on their part: just build it and figure out the user persona and utility later... if ever. What used to be a process of taking the time to understand the domain, the user, and how the product fits into some process has been tossed out the window; just ship whatever we think some imaginary user wants and experiment until we succeed.
It creates the exact problem that OP talks about: every random feature that gets vibe-coded becomes a source of instability and risk; something that can then only be maintained via more vibe coding because no one has a working mental model of the thing.
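The "button in the existing UI" experiment the quote describes is often called a fake-door test: show the entry point, count the clicks, build nothing yet. A minimal sketch, where `trackEvent`, `makeFakeDoor`, and the event name are all hypothetical stand-ins for whatever analytics plumbing a product already has:

```typescript
// Hypothetical "fake door" test: a button for a feature that doesn't
// exist yet; we only count clicks to gauge demand before building it.
// `trackEvent` stands in for whatever analytics call you already have.
function trackEvent(name: string, sink: string[]): void {
  sink.push(name);
}

// Returns a click handler that records interest and shows a placeholder.
function makeFakeDoor(featureName: string, sink: string[]): () => string {
  return () => {
    trackEvent(`fake_door_click:${featureName}`, sink);
    return "Thanks for your interest! This feature is coming soon.";
  };
}

// One simulated click leaves exactly one demand signal in the sink.
const events: string[] = [];
const onClick = makeFakeDoor("bulk-export", events);
onClick();
console.log(events.length); // 1
```

The point is the asymmetry: wiring up one handler costs minutes, while building the real feature costs weeks, so the click counts pay for the experiment almost immediately.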
There exist people whose jobs depend entirely on rolling out new features, or apps of some sort, and having them show up in some form of company metric. If the senior developer says it's a bad idea, those people won't listen, or won't care. Their job is on the line.
People don't want to be judged in the introduction of an article, based on how they like to approach their literal dayjob. It's a weird jab.
There are other properties too, such as maintainability, scalability, reliability, resilience, anti-fragility, extensibility, versatility, durability, and composability. Not all of them apply.
Being able to talk about tradeoffs in terms of solution spaces, not just along a single dimension, is what I consider one of the differentiators between a senior and a staff+ developer.
As this kind of person, it can be alienating in some teams / companies.
What I've found works best is to convey how the added complexity will affect non-engineers. You have to understand the incentives and trade-offs, though, and sometimes it's better to take the loss.
If you have the fortune of sticking around with the same leaders for a while, a few rounds of being vocal but compromising will work in your favor. When that complexity comes back around to bite them in the way you described, you will earn some trust.
In my experience, the proposed solution will rarely be the less complex one. Quick MVPs have a tendency to stick around. As soon as a customer starts using some product or feature, the cost of pivoting goes up. If you wish to experiment, do it on a segment.
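The "do it on a segment" idea above can be sketched as a deterministic percentage rollout: hash the user id, and only users in a fixed slice ever see the experimental path, so a botched experiment hurts a bounded, known group. Everything here (the hash, the function names) is illustrative, not any particular feature-flag library:

```typescript
// Tiny non-cryptographic hash; stable per user across sessions,
// so the same user always lands in the same bucket.
function hashUserId(userId: string): number {
  let h = 0;
  for (const ch of userId) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return h;
}

// True for roughly `percent`% of users, and always the same users.
function inExperimentSegment(userId: string, percent: number): boolean {
  return hashUserId(userId) % 100 < percent;
}

console.log(inExperimentSegment("alice", 100)); // true: 100% includes everyone
console.log(inExperimentSegment("alice", 0));   // false: 0% includes no one
```

A real rollout would typically mix an experiment name into the hash so that different experiments don't always hit the same unlucky slice of users.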
None of the things I can think of have anything to do with avoiding problems.
To some degree, having 5+ agents working on different projects is similar to leading a team of 5+ people. The skills translate well.
The senior is also able to understand what the agents do, review and challenge it. Juniors often can't.
And finally, the senior has a deeper understanding of what the business and problem domain are, and can therefore guide the AI more effectively towards building the right thing.
“I found this new tool and it’s pretty cool ...”
yup “This company <company totally unlike the one we’re in> does things this way, so …”
agreed “Here, look at this HackerNews post that says this is best practice, we should probably …”
sir/m'lady, we're at war from now on. This is the only reason I come here. Of course I don't take everything uncritically, but the number of experts on this forum is damn high, and this is the only forum in the last 10 years that has helped me grow so much. As you might imagine, a lot of these ideas fell by the wayside, but we had to develop them in full.
And push an insurmountable pile of technical debt onto the successor.
Well, yeah, I understand the idea and I'm all for it: the less code the better, the less changes the better.
However, in certain industries it is no longer the right approach for the job. In modern frontend development, if you don't update your codebase for a couple of months, it falls so far behind that pushing an upgrade becomes far more expensive than daily minor package updates. Yeah, I hate this as much as you do, but this is the pace frontend is moving at, and if you don't follow, you will accumulate technical debt.
I guess the author has never worked on a dog shit system with no tests at all and constant downtime.
I have worked with “complexity averse” engineers who would rather fix the edges over and over again, than roll up their sleeves and just get the job done.
I just don’t believe that using new tools is at odds with avoiding complexity.
Sometimes you have to take it on the chin, and get to use the new shiny thing along the way to move much faster.
I saw this yesterday
https://trinkle23897.github.io/learning-beyond-gradients/
They are very remotely related yet somehow very close.
The company provides (offer | service) to the (market | user) and receives (feedback | payment).
The service IS the offer, the userbase IS the market, and payment IS the feedback signal.
Right?
EDIT - expanded on original comment to add:
The author's point might be lost on me but seems to be that framing things with one of those sets of labels vs the other may correspond to use of "complexity" vs "uncertainty" as the element targeted for reduction, and choosing those labels carefully in turn correlates to "senior" devs' persuasiveness in prioritization battles with product owners. To which my response would be, "maybe?". (shrug)
I'm not a copywriter by trade but I care about words and may have just been nerd-sniped.
AI is actually quite awful for prototyping, because it makes it far too easy to add random crap to your "prototype" without any specific intention. This quickly transforms the prototyping process from something high-level and geared towards building the mental model of the real system into something akin to copy-editing a random piece of software with no coherent mental model involved. Moreover, prompting lets you gloss over some essential complexity of the task without getting any notion of the scope of the effort of actually doing it. In other words, people end up failing to make necessary decisions while simultaneously getting bogged down in unnecessary ones.
In short, fast feedback loops are only useful if there is actual feedback involved.
FWIW though the idea about a "speed" product and a "stability" product isn't new. We used to call it "prototyping". I don't know when/how that disappeared from the collective consciousness. "have a space where we can build things fast with horrible practices" isn't some AI era innovation, it's what smart companies have done for decades.
Bro & I would not get along well =)))) But the article IS good stuff.
Fine, then, I'll keep the experience to myself.
The message that hits for me is that of AI being a destabilizer while simultaneously being an accelerator. The Speed/Scale suggestion won't address this. A codebase no one understands, growing at machine speed won't go away just because you drew a box around it. The fix is likely more mundane stuff like process and role shifts, smaller PRs, tests, tooling, ownership principles.
The thing breaks, the salesperson says "can you check this out?" then disappears and we're back to where we started.
I don't even find this very new: many companies I've been at have tried to spin-off a "fast" team to sell stuff.
The safest answer an engineer can give is "no".
Remember that the first half of this statement, the part listed here, is great. I love playing with new tools.
The only bad part is the implicit bit after the dots: "we should use this in our product." You don't want cool things anywhere near your product, unless the cool thing is that they remove complexity.
But senior devs were also expected to have a compounding effect even pre-AI: writing a single doc, refactoring legacy code to make it extensible, building security frameworks specific to the project, and many more. All of these have a compounding effect on the dev team.
I think the same will happen with agents working on a org specific paved path set by senior devs.
I am really skeptical of arguments based around "I can do things the model can't" because that space of things is not very large and is getting smaller every day.
The opportunity to not merely cling on to what we have another year but to grow is to say "together, the model can manage so much more complexity than before that we can do things that were not previously possible."
We haven't identified too many of those things yet, but I am certain they are coming.
>> “AI agents are the future of software development. We won’t need developers anymore to slow down the progress of a business.”
> And so, to me, a copywriter, what’s happening here is that the same message is meaning two different things to two different audiences.
I couldn't tell whether to parse this as "We will be faster without those slow developers", or more cynically as "We don't need developers to slow us down; We can now be slow with ai agents". I suspect that with creeping complexity the latter reading will hold up better for large projects.
I think the framing started in the right path and then took a slightly wrong turn.
Both loops presented benefit from being tighter, faster. One to take a system to a “stable” (maintainable) setpoint quickly. The other to handle uncertainty.
And the additional insight about splitting the systems to better adapt to AI… we’ve described spikes for years, well before AI went mainstream.
This maps to what we believe on our team: functional vs. non-functional. AI ships functional features fast, but developers are more important than ever in making sure the non-functional aspects are taken care of.
The "speed" loop reminds me a lot of RAD. In fact, AI might be _the_ thing that helps us deliver on RAD's promises from decades ago.
https://www.geeksforgeeks.org/software-engineering/software-...
Honest question: does high velocity / first-mover advantage ever really pay off these days?
I don't feel like getting the first AI slop to market has actually paid off for anyone. Am I wrong? Am I missing something? Am I out of touch?
The way I see it, first movers do a lot of the work of proving the idea, and everyone else swoops in with a better product, or at least a cheaper one.
Beyond that, let's take the company I work for, for example. We have an ingrained and actually relatively happy customer base on a subscription model. I feel like the only thing increased velocity can do is rapidly ruin their experience.
Many vendors seem to be learning (or not learning, but just throwing their weight against it anyway) that hastily-generated AI features cause customer dissatisfaction, as more people brand the features "slop".
In the best case, the users give the company more chances. Infinitely more chances.
In a worse case, the users assume the new feature will always be bad, given their first impression. It's hard for a vendor to make people reconsider a first impression.
The absolute worst case is that AI enables a new market, but the first attempts are so poor that the first movers make people write that market off as a dead end, leading to a lost opportunity.
I feel like this is shooting from the hip, from the single point of view of someone at a semi-large corpo.
This is literally what people thought after Fan Hui (2-dan) was beaten. For humans, software requires ingenuity and creativity. Computers can cheat that; in fact, computers ALWAYS cheat that to beat humans. Next-token prediction (NTP) as a method of cheating is slightly more general than, say, board evaluation, so it's less efficient for the same problem, but scaling laws show that with enough compute NTP can beat humans at chess (or most other arbitrary games, in real time).
Now, with so-called AI, they will mostly slap together something kinda working in a few days and then maybe get hacked or double-invoice some customer from time to time... They will learn of those problems the hard way. Or maybe they won't, because it will be a mostly-working emailing system and nobody will care if it loses 2% of the emails because of some bug.
Nevertheless, either the Stable or Scale version of the software will never happen, or it will be seen as unnecessary, or it will only become a thing after a catastrophic failure.
Anyway, I do not think it will be like that; everybody cares about speed and money, and making money quickly without effort is the ultimate unicorn the entire world is after.
Those complaining developers just stand in the way.
PING
Junior developer: PING is used to check if a host is reachable by a network
Middle developer: PING constructs and sends ICMP packets to an address
Senior developer: what machine, what OS?
Junior manager: Don't care, ask a techie if you need to do something technical
Middle manager: Ask <techies name> about it, I know he has great experience with it
Senior manager: PING is used to check if a host is reachable by a network
Senior developers fail to communicate their expertise, because that expertise is developed and formed by asking more questions than answers, and managers fail to understand the capabilities of "their techies", because managers see question-asking techies as counter-productive, and attempt to route around them. Managers only want answers, developers know the value of asking deep questions.
Thus, AI.
(BTW, PING is a command that produces a distinct sound on the Oric-1/Atmos computers, and it is thus an Onomatopoeia.. I know this, because I am a Senior Oric-1/Atmos Developer who knows what lies at #FA9F, how it works, what the 14 bytes are for, and so on.. because I once asked the question, "how does PING go 'poooinnng' but ZAP go 'zap'?")
AI: <asks billions of questions in a second> PING is ..
This is why part of a senior developer’s job is designing and developing the fast version in a way that, if it goes into production, won’t burn the building down. This is the subtle art of development: recognizing where the line is for “good enough” to ship fast without jeopardizing the long-term health of the company. This is also the part that AI is absolutely atrocious at - vibe code is fast, that’s the pitch, but it’s also basically disposable (or it’s not fast - I see all you “exhaustive spec/comprehensive tests/continuous iteration” types, and I see your timelines, too). If you can convince the org that’s the tradeoff, great, but I had a hell of a time doing it back when code was moving at human speed, and now you just strapped rockets onto the shitty part of the system and are trying to convince leadership that rocket-speed is too fast.
No-one says this.
There's a place for prototyping and experimental features but now agile has cultivated extreme learned helplessness and everything is an A/B test because there's no longer any ability to judge whether something is good or bad based on a holistic vision.
The senior should also start using AI to increase the amount of work done to stabilise the system, in a careful manner. More benchmarks, better testing, better safety net when delivering software, automated security reviews, better instrumentation, and so on.
> And this is how AI affects the two loops
There should be another image illustrating the amount of mitigation done on the senior side, red-team/blue-team style.
1. I am discouraged or forbidden from devoting time to communicating my expertise; they would rather use it. Well, often, they'd rather I did the grunt work to facilitate the use of my expertise.
2. Same, but devoting time to preparing materials which communicate my expertise.
3. A lot of my expertise is a bunch of hunches and intuitions, a "sense of smell" for things. And that's difficult to communicate.
4. My junior colleagues don't get time off their other duties to listen to "expertise sharing", when it does not immediately promote the project they're working on.
5. Many of my junior colleagues lack enough fundamentals (IMNSHO) for me to share all sorts of expertise with them. That is, to share B with them I would need to first teach them A, and knowing A is not much of an expertise; but they're inexperienced, maybe fresh out of university.
6. My expertise may only be partially or very-partially relevant to many of my colleagues; but I can't just divide the expertise up.
7. For good reasons or bad, I have trouble separating my expertise from various ethical/world-view principles, which fundamentally disagree with the way things are done where I'm at. So, such sharing is to some extent a subversive diatribe against the status quo.
8. My expertise on some matters is very partial - and what I know just underlines for me how much I _don't_ know. So, I am apprehensive to talk about what I feel I actually don't know enough about - which may just result in my appearing presumptuous and not knowledgeable enough.
9. My expertise on some matters is very partial - and what I know just underlines for me how much I _don't_ know. So, I try to polish and complete my expertise before sharing it - and that's a path you can walk endlessly, never reaching a point where you feel ready to share.
10. Tried sharing some expertise in the past, few people attended the session, I got demotivated.
11. Tried sharing some expertise in the past, few people were engaged enough to follow what I was saying, I got demotivated.
12. Shared some expertise in the past, got positive feedback, but then those people who seemed to appreciate what I said did not implement/apply any of it, even though they could have and really should have.
But reduction is narrower than management which is narrower than organization.
Also, uncertainty is part of complexity. Being able to isolate what is deemed predictable under clearly identified premises is the best that can be hoped for on that matter. It means that one strategy can then be applied to protect the stable core, and another strategy can be tried on what is unknown (known and unknown unknowns).
Want me to communicate my expertise? Give me some time to actually do it.
An AI agent using a web browser like a human. I used various stealth techniques to achieve this. I set it off on a research task, and it saved me $30 on a purchase by finding the best price. It's Jeff Bezos's worst nightmare: visiting amazon.com and ignoring all the product-placement ads.
It had multiple tabs open, did searches in multiple places, opened products and checked sites... it looked just like a human doing the same task.
This, I can assure you, would not have been possible without my expertise. I had to be very careful to remove all bot signals from the browser, including going to browserscan.net to check. Once that was done, most captchas were never shown to the agent. There is a NodeJS codebase involved that I wrote by hand.
I searched through the code of the browser automation framework I was using, looking for ways to make it look more human. I had AI help with this part, but had to confirm everything and pull the agent up when it suggested bad ideas.
Most of the work was architectural, including making sure my browser was easy for the agent to use.
I'm going to add 2captcha as a next step, to solve the few captchas that it still encounters (as I still do sometimes as a human).
I'm thinking of open-sourcing it, but I'm not sure if it's a good idea: if it became widespread, it might encourage the adoption of even more invasive anti-bot measures.
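For illustration, the "bot signals" the commenter mentions removing are properties of the browser environment that detection scripts inspect, such as `navigator.webdriver` (which is `true` in stock automated browsers). A hypothetical, simplified check of that fingerprint surface, not the commenter's actual code, might look like this; in a real Puppeteer/Playwright setup the patching side would be done with an init script that runs before each page loads:

```typescript
// A small slice of the environment that bot detectors inspect.
interface NavigatorLike {
  webdriver?: boolean;          // true in unpatched automated browsers
  languages?: string[];         // often empty in headless defaults
  plugins?: { length: number }; // often zero-length in headless defaults
}

// Returns the signals a basic bot detector would flag.
function botSignals(nav: NavigatorLike): string[] {
  const signals: string[] = [];
  if (nav.webdriver) signals.push("navigator.webdriver is true");
  if (!nav.languages || nav.languages.length === 0) signals.push("empty navigator.languages");
  if (!nav.plugins || nav.plugins.length === 0) signals.push("no plugins");
  return signals;
}

// A stock headless browser trips all three checks...
const headless = { webdriver: true, languages: [], plugins: { length: 0 } };
// ...while a patched context presents a human-looking surface.
const patched = { webdriver: undefined, languages: ["en-US", "en"], plugins: { length: 3 } };

console.log(botSignals(headless).length); // 3
console.log(botSignals(patched).length);  // 0
```

Real detectors check far more than this (canvas fingerprints, timing, TLS characteristics), which is why tools like browserscan.net are useful for verifying the whole surface rather than individual properties.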
complexity bad
say again:
complexity very bad
you say now:
complexity very, very bad
That includes gate-keeping behaviour such as not handing off knowledge, sham performance reviews to prevent ambitious juniors from overtaking them (even with AI), and being over-critical of others while absent and contrarian when the same is done to them.
That leverage no longer works in the age of AI, as having "expensive" seniors begging for a pay rise can cost the company extra $$$. So it is tempting to lay them off in favour of a yes-person who will accept less.
In the age of AI, I would now expect such experience to include both building and working at a startup instead of being difficult to work with for the sake of a performance review.
Almost all business presidents, CEOs, and owners are thinking this. I guarantee you they are sick and tired of developers taking forever on every project. Now they can create the apps themselves.
My comment isn't meant to debate every nitty-gritty detail about code quality, security, stability, thinking of every aspect of how the code works, does it scale, etc. All of those things are extremely important. However, most leadership never cared about any of that anyways. They only heard those as excuses why developers took so long. Over the last decade they put up with it begrudgingly.
You know all the developers who wanted to complain about IT, cybersecurity, DevOps, and cloud architects for getting in their way, claiming that if they only had administrator access they could get everything done themselves because they're experts in networking and everything else? Well, those developers are about to have the worst day ever when every single person on the planet can generate code and will be "experts" in everything as well.
If only the higher-ups would recognize that. Instead, we see mass layoffs left and right, restructurings, and clueless higher-ups who have clearly drunk not just a bottle of koolaid but a barrel.
> The ‘Speed’ version allows the rest of the business to continue learning from the market, as the senior developers build a trailing version of the system that’s well-reviewed and understandable.
Yeah... that doesn't fly. The beancounters don't care. The "speed" version works, so why invest a single cent in the "scale" version? That's all potential profit that could be distributed to shareholders. And when it (inevitably) all crashes down, the higher-ups will have long since cashed out, leaving the remaining shareholders as bagholders, the employees without employment, and society to pick up the tab. Yet again.