AI makes you boring
490 points - today at 6:12 PM
Honestly, I agree, but the rash of "check out my vibe coded solution to a perceived $problem I have no expertise in whatsoever, built in an afternoon" posts, and the flurry of domain experts responding like "wtf, no one needs this," gives me a kind of schadenfreude, though I feel a little guilty for enjoying it.
Take for example (an extreme example) the paintbrush. Do you care where each bristle lands? No of course not. The bristles land randomly on the canvas, but it's controlled chaos. The cumulative effect of many bristles landing on a canvas is a general feel or texture. This is an extreme example, but the more you learn about art the more you notice just how much art works via unintentional processes like this. This is why the Trickster Gods, Hermes for example, are both the Gods of art (lyre, communication, storytelling) and the Gods of randomness/fortune.
We used to assume that we could trust the creative to make their own decisions about how much randomness/automation was needed. The quality of the result was proof of the value of a process: when Max Ernst used frottage (rubbing paper over textured surfaces) to create interesting surrealist art, we retroactively re-evaluated frottage as a tool with artistic value, despite its randomness/unintentionality.
But now we're in a time where people are doing the exact opposite: they find a creative result that they value, but they retroactively devalue it if it's not created by a process that they consider artistic. Coincidentally, these same people think the most "artistic" process is the most intentional one. They're rejecting any element of creativity that's systemic, and therefore rejecting any element of creativity that has a complexity that rivals nature (nature being the most systemic and unintentional art.)
The end result is that the creative has to hide their process. They lie about how they make their art, and gatekeep the most valuable secrets. Their audiences become prey for creative predators. They idolize the art because they see it as something they can't make, but the truth is there's always a method by which the creative is cheating. It's accessible to everyone.
The number of people who I see having E-mail conversations where person A uses an LLM to turn two sentences into ten paragraphs, and person B uses an LLM to summarize the ten paragraphs into two sentences, is becoming genuinely alarming to me.
I use AI, mainly Perplexity AI, as a replacement for search engines, because they all suck right now.
AI makes homelabbing more fun, and therefore I learn more. The homelab is my main hobby, and using AI to ASSIST me with one thing or another always ends up mentioning something else that I've never heard of before.
I wish I could clone myself due to the amount of topics and projects I have noted down.
AI is like money: money doesn't make you a bad or good person, it only enhances what you already are. AI doesn't automatically make things boring; the way you use it is what makes things boring, or more exciting, with you jumping from forum to forum, new topic to new topic.
No AI for actual prose writing, no question. Don't let a single word an LLM generates land in your document; even if you like it, kill it.
Non-boring people are using AI to make things that are ... not boring.
It's a tool.
Other things we wouldn't say because they're ridiculous at face value:
"Cars make you run over people." "Buzzsaws make you cut your fingers off." "Propane torches make you explode."
An exercise left to the reader: is a non-participant in Show HN less boring than a participant with a vibe coded project?
Now, these days, it's basically enough to use agent programming to handle all the boring parts and deliver a finished project to the public.
LLMs have essentially broken the natural selection of pet projects and allow even bad or not very interesting ideas to survive, ideas that would never have been shown to anyone under the pre-agent development cycle.
So it's not that LLMs make programming boring; they've allowed boring projects to survive. They've also boosted the production of non-boring ones, but those are just rarer in the overall volume of products.
EDIT: also, just as you create AGENT.md files to help AI write code your way for your projects, if you're going to be doing much writing, you should have your own prompt that captures your voice and style. Don't be lazy just because you're leaning on LLMs.
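To make that concrete, here is a minimal sketch, with every file name, rule, and function in it hypothetical, of one way to keep a reusable style prompt around and prepend it to each writing request:

    # build_prompt.py - hypothetical sketch of a personal style prompt.
    # The rules below are illustrative; write your own.
    STYLE_GUIDE = """\
    Voice: first person, dry, concrete. Short sentences.
    Structure: one idea per paragraph; prefer active voice.
    Banned phrases: "delve", "tapestry", "in today's fast-paced world".
    When unsure of a fact, say so rather than inventing one.
    """

    def build_prompt(task: str) -> str:
        # Style rules first, then the actual request, so every draft
        # starts from your voice instead of the model's default register.
        return f"{STYLE_GUIDE}\nTask: {task}"

    print(build_prompt("Draft a reply to the outage postmortem thread."))

It plays the same role for prose that an AGENT.md plays for code: the voice rides along with every session instead of being re-explained each time.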
I review all the code Claude writes and I don't accept it unless I'm happy with it. My coworkers review it too, so there is real social pressure to make sure it doesn't suck. I still make all the important decisions (IO, consistency, style) - the difference is I can try it out 5 different ways and pick whichever one I like best, rather than spending hours on my first thought, realizing I should have done it differently once I can see the finished product, but shipping it anyways because the tickets must flow.
The vibe coding stuff still seems pretty niche to me though - AI is still too dumb to vibe code anything that has consequences, unless you can cheat with a massive externally defined test suite, or an oracle you know is correct.
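To make the "oracle" escape hatch concrete, here is a minimal Python sketch (all function names hypothetical): a slow implementation you trust gets hammered against the vibe-coded one on random inputs, so disagreements surface without reading every generated line.

    import random

    def oracle_sort(xs):
        # Trusted reference: selection sort. Slow, but easy to verify by eye.
        xs = list(xs)
        out = []
        while xs:
            smallest = min(xs)
            xs.remove(smallest)
            out.append(smallest)
        return out

    def vibe_coded_sort(xs):
        # Stand-in for whatever the agent actually generated.
        return sorted(xs)

    # Random-input check: any divergence from the oracle fails loudly,
    # with the offending input attached to the assertion.
    for _ in range(1000):
        case = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        assert vibe_coded_sort(case) == oracle_sort(case), case

Without something like this, you are back to reviewing the generated code line by line.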
Now all bad writing will look like something generated by an LLM, grammatically correct (hopefully!) but very generic, lacking all punch and personality.
The silver lining is that good authors could also use LLMs to hide their identity while expressing controversial opinions. In an internet that's increasingly deanonymized, a potentially new privacy-enhancing technique for public discourse is a welcome addition.
That has always been a problem in software shops. Now it might be even more frequent because of LLMs' ubiquity.
Maybe that's how it should be, maybe not. I don't really know. I was once told by people in the video game industry that games were usually buggy because they were short-lived. Not sure if I truly buy that, but if anything vibe coded becomes throwaway, I wouldn't be surprised.
Users appear to be happy but it's early. And while we do scrub the writing of typical AI writing patterns there's no denying that it all probably sounds somewhat similar, even as we apply a unique style guide for each user.
I think this may be ok if the piece is actually insightful. Fingers crossed.
I have a report that I made with AI on how customers leave our firm... The first pass looked great but was basically nonsense. After eight hours of iteration, the resulting report is better than I could've made on my own, by a lot. But it got there because I brought a lot of emotional energy to the AI party.
As workers, we need to develop instincts for "plausible but incomplete" and as managers we need to find filters that get rid of the low-effort crap.
In an industry that does not crave bells and whistles, having the ability to refactor, or to bring old systems back up to speed, can make a whole lot of difference for an understaffed, underpaid, unamused, and otherwise cynical workforce, and I am all for it.
There was also a comment [1] here recently that "I think people get the sense that 'getting better at prompting' is purely a one-way issue of training the robot to give better outputs. But you are also training yourself to only ask the sorts of questions that it can answer well. Those questions that it will no longer occur to you to ask (not just of the robot, but of yourself) might be the most pertinent ones!"
Both of them reminded me of Picasso saying in 1968 that "Computers are useless. They can only give you answers."
Of course computers are useful. But he meant that they are useless for a creative. That's still true.
But you could learn these new perspectives from AI too. It already has all the thoughts and perspectives from all humans ever written down.
At work, I still find people who try to put together a solution to a problem, without ever asking the AI if it's a good idea. One prompt could show them all the errors they're making and why they should choose something else. For some reason they don't think to ask this godlike brain for advice.
The interesting counter-question: can AI make something that wasn't possible before? Not more blog posts, more emails, more boilerplate - but something structurally new?
I've been working on a system where AI agents don't generate content. They observe. They watch people express wishes, analyze intent beneath the words, notice when strangers in different languages converge on the same desire, and decide autonomously when something is ready to grow.
The result doesn't feel AI-generated because it isn't. It's AI-observed. The content comes from humans. The AI just notices patterns they couldn't see themselves.
Maybe the problem isn't that AI makes you boring. It's that most people ask AI to do boring things.
An app can be like a home-cooked meal: made by an amateur for a small group of people.[0] There is nothing boring about knocking together hyperlocal software to solve a super niche problem. I love Maggie Appleton's idea of barefoot developers building situated software with the help of AI.[1, 2] This could cause a Cambrian explosion of interesting software. It's also an iteration of Steve Jobs' computer as a bicycle for the mind. AI-assisted development makes the bicycle a lot easier to operate.
[0] https://www.robinsloan.com/notes/home-cooked-app/
[1] https://maggieappleton.com/home-cooked-software
[2] https://gwern.net/doc/technology/2004-03-30-shirky-situateds...
They're solving small problems, or problems that don't really exist, usually in naive ways. The things being shown are "shallow". And it's patently obvious that the people behind them will likely not support them in any meaningful way as time goes on.
The rise of Vibe Coding is definitely a "cause" of this, but there's also a social thing going on - the "bar" for what a Show HN "is" is lower, even if they're mostly still meeting the letter of the guidelines.
Is Show HN dead? No, but it's drowning - https://news.ycombinator.com/item?id=47045804 - Feb 2026 (422 comments)
Online ecosystem decay is on the horizon.
It also seems like a natural result of all of the people challenging the usefulness of AI. It motivates people to show what they have done with it.
It stands to reason that the things that take less effort will arrive sooner, and be more numerous.
Much of that boringness is people adjusting to what should be considered interesting to others. With a site like this, where user voting is supposed to facilitate visibility, I'm not even certain that the submitter should judge the worth of the submission to others. As long as they sincerely believe that what they have done might be of interest, that is perhaps sufficient. If people do not like it, then it will be seen by few.
There is an increase in things demanding attention, and you could make the case that this dilutes the visibility such that better things do not get seen, but I think that is a problem of too many voices wishing to be heard. Judging them on their merits seems fairer than placing pressure on the ability to express. This exists across the internet in many forms. People want to be heard, but we can't listen to everyone. Discoverability is still the unsolved problem of the mass media age. Sites like HN and Reddit seem to be the least-worst solution so far. Much like Democracy vs Benevolent Dictatorship an incredibly diligent curator can provide a better experience, but at the cost of placing control somewhere and hoping for the best.
Your preference is no more substantial than people saying "I would never read a book on a screen! It's so much more interesting on paper"
There's nothing wrong with having pretentious standards, but don't confuse your personal aversion with some kind of moral or intellectual high ground.
That process is often long enough to think things through a bit and even have "so what are you working on?" conversations with a friend or colleague that shake out the mediocre or bad, and either refine things or make you toss the idea.
At least this CEO gets it. Hopefully more will start to follow.
I was literally just working on a directory of the most common tropes/tics/structures that LLMs use in their writing and thought it would be relevant to post here: https://tropes.fyi/
Very much inspired by Wikipedia's own efforts to curb AI contributions: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
Lmk if you find it useful, will likely ShowHN it once polished.
Despite the title I'm a little more optimistic about agentic coding overall (but only a little).
All projects require some combination of "big thinking" and tedious busywork. Too much busywork is bad, but reducing it to 0 doesn't necessarily help. I think AI can often reduce the tedious busywork part, but that's only net positive if there was an excess of it to begin with - so its value depends on the project / problem domain / etc.
[0]: https://www.da.vidbuchanan.co.uk/blog/boring-ai-problems.htm...
This echoes the comments here about enjoying not writing boilerplate. The trouble is that our minds are programmed to offload work when we can, and redirecting all the saved boilerplate time to going even deeper on the parts of the problem that benefit from original hard thinking is rare. It is much easier to get sucked into creating more boilerplate, and all the gamification of Claude Code and the incentives of service providers increase this.
I agree, but the very act of writing out your intention/problem/goal/whatever can crystallize your thinking. Obviously if you are relying on the output spat out by the LLM, you're gonna have a bad time. But IMO one of the great things about these tools is that, at their best, they can facilitate helpful "rubber duck" sessions that can indeed get you further on a problem by getting stuff out of your own head.
I largely agree that if someone put less work into making a thing than it takes you to use it, it's probably not going to be useful. But I disagree with the premise that using LLMs will make you boring.
Consider the absurd version of the argument. Say you want to learn something you don't know: would using Google Search make you more boring? At some level, LLMs are like a curated Google Search. In fact if you use Deep Research et al, you can consume information that's more out of distribution than what you _would_ have consumed had you done only Google Searches.
To take coding: to the extent that hand coding leads to creative thoughts, it is possible that some of those thoughts will be lost if I delegate it to agents. But it's also very possible that I now have the opportunity to think creatively about other aspects of my work.
We have to make strategic decisions on where we want our attention to linger, because those are the places where we likely experience inspiration. I do think this article is valuable in that we have to be conscious of this first before we can take agency.
Agree: if you use AI as a replacement for thinking, your output converges to the mean. Everything sounds the same because it's all drawn from the same distribution.
Disagree: if you use AI as a draft generator and then aggressively edit with your own voice and opinions, the output is better than what most people produce manually - because you're spending your cognitive budget on the high-value parts (ideas, structure, voice) instead of the low-value parts (typing, grammar, formatting).
The tool isn't the problem. Using it as a crutch instead of a scaffold is the problem.
Ironically, good engineering is boring. In this context, I would hazard that interesting means risky.
My take:
1. AI workflows are faster - saving people time
2. Faster workflows involve people using their brain less
3. Some people use their time savings to use their brain more, some don't
4. People who don't use their brain are boring
The end effect here is that people who use AI as a tool to help them think more will end up being more interesting, but those who use AI as a tool to help them think less will end up being more boring.
Here's a guy who has had an online business dependent on ranking well in organic searches for ~20 years and has 2.5 million subs on YouTube.
Traffic to his site was fine to sustain his business this whole time, up until about 2-3 years ago, when AI took over search results and stopped ranking his site.
He used Google's AI to rewrite a bunch of his articles to make them more friendly towards what ranks nowadays and he went from being ghosted to being back on the top of the first page of results.
He told his story here https://www.youtube.com/watch?v=II2QF9JwtLc.
NOTE: I had never seen him in my YouTube feed until the other day, but it resonated a lot with me because I've had a technical blog for 11 years and was able to sustain an online business for a decade, until the last 2 years or so, when traffic to my site nose-dived. This translates to a very satisfying lifestyle business going to almost $0. I haven't gone down the path of rewriting all of my posts with AI to remove my personality yet.
Search engines want you to remove your personal take on things and write in a very machine oriented / keyword stuffed way.
This is reductive to the point of being incorrect. One of the misconceptions of working with agents is that the prompts are typically simple: it's more romantic to think that someone gave Claude Code "Create a fun Pokemon clone in the web browser, make no mistakes" and then just ship the one-shot output.
As some counterexamples, here are two sets of prompts I used for my projects which very much articulate an idea in the first prompt with very intentional constraints/specs, and then iterating on those results:
https://github.com/minimaxir/miditui/blob/main/agent_notes/P... (41 prompts)
https://github.com/minimaxir/ballin/blob/main/PROMPTS.md (14 prompts)
It's the iteration that is the true engineering work, as it requires enough knowledge to a) know what's wrong and b) know if the solution actually fixes it. Those projects are what I call super-Pareto: the first prompt got 95% of the work done... but 95% of the effort was spent afterwards improving it, with manual human testing being the bulk of that work instead of watching the agent-generated code.
That is for sure the word of the year, true or not. I agree with it: I think derivative work might be useful, but it's not interesting.
Boring thoughts always existed, but they generally stayed in your home or community. Then Facebook came along, and we were able to share them worldwide. And now AI makes it possible to quickly make and share your boring tools.
Real creativity is out there, and plenty of people are doing incredibly creative things with AI. But AI is not making people boring - that was a preexisting condition.
As someone who is fairly boring, conversing with AI models and thinking things through with them certainly decreased my blandness and made me tackle more interesting thoughts or projects. To have such a conversation partner at hand in the first place is already amazing - isn't it always said that you should surround yourself with people smarter than yourself to rise in ambition?
I actually have high hopes for AI. A good one, properly aligned, can definitely help with self-actualization and expression. Cynics will say that AI will all be tuned to keep us trapped in the slop zone, but when even mainstream labs like Anthropic speak a lot about AI for the betterment of humanity, I am still hopeful. (If you are a cynic who simply doesn't believe such statements by the firms, there's not much to say to convince you anyway.)
However, I've spent years sometimes thinking through interesting software architectures and technical approaches and designs for various things, including window managers, editors, game engines, programming languages, and so on, reading relevant books and guides and technical manuals, sketching out architecture diagrams in my notebooks and writing long handwritten design documents in markdown files or in messages to friends. I've even, in some cases, gotten as far as 10,000 lines or so of code sketching out some of the architectural approaches or things I want to try to get a better feel for the problem and the underlying technologies. But I've never had the energy to do the raw code shoveling and debug looping necessary to get out a prototype of my ideas - AI now makes that possible.
Once that prototype is out, I can look at it, inspect it from all angles, tweak it and understand the pros and cons, the limitations and blind spots of my idea, and iterate again. Also, through pair programming with the AI, I can learn about the technologies I'm using through demonstration and see what their limitations and affordances are by seeing what things are easy and concise for the AI to implement and what requires brute forcing it with hacks and huge reams of code and what's performant and what isn't, what leads to confusing architectures and what leads to clean architectures, and all of those things.
I'm still spending my time reading things like Game Engine Architecture, Computer Systems, A Philosophy of Software Design, Designing Data-Intensive Applications, Thinking in Systems, Data-Oriented Design, and articles on CSP, fibers, compilers, type systems, and ECS, writing down notes and ideas.
So really it seems more to me like boring people who aren't really deeply interested in a subject use AI to do all of the design and ideation for them. And so, of course, it ends up boring, and you're just seeing more of it because it lowered the barrier to entry. I think if you're an interesting person with strong opinions about what you want to build and how you want to build it, who is actually interested in exploring the literature with or without AI help and then pair programming with it in order to explore the problem space, it still ends up interesting.
Most of my recent AI projects have just been small tools for my own usage, but that's because I was kicking the tires. I have some bigger things planned, executing on ideas I have pages and pages about, dozens of them, in my notebooks.
Otherwise, AI definitely impacts learning and thinking. See Anthropic's own paper: https://www.anthropic.com/research/AI-assistance-coding-skil...
And that's when it dawned on me just how much of AI hype has been around boring, seen-many-times-before technologies.
This, for me, has been the biggest real problem with AI. It's become so easy to churn out run-of-the-mill software that I just cannot filter any signal from all the noise of generic side-projects that clearly won't be around in 6 months' time.
Our attention is finite. Yet everyone seems to think their dull project is uniquely more interesting than the next person's dull project. Even though those authors spent next to zero effort themselves in creating it.
It's so dumb.
This is repeated all the time now, but it's not true. It's not particularly difficult to pose a question to an LLM and to get it to genuinely evaluate the pros and cons of your ideas. I've used an LLM to convince myself that an idea I had was not very good.
> The way human beings tend to have original ideas is to immerse in a problem for a long period of time, which is something that flat out doesn't happen when LLMs do the thinking. You get shallow, surface-level ideas instead.
Thinking about a problem for a long period of time doesn't bring you any closer to understanding the solution. Expertise is highly overrated. The Wright Brothers didn't have physics degrees. They did not even graduate from high school, let alone attend college. Their process for developing the first airplanes was much closer to vibe coding from a shallow surface-level understanding than from deeply contemplating the problem.
I think when people use AI to, for example, compare Docker to k8s without ever having used k8s, that's how you get horrible articles that sound great but, to anyone with experience in both, are complete nonsense.
Imagine how dynamic the world was before radio, before TV, before movies, before the internet, before AI. I mean, imagine a small-town theater, musician, comedian, or anything else before we had all homogenized into mass culture. It's hard to know what it was like, but I think it explains the great appeal of things like Burning Man and other contexts that encourage you to tune out the background and be in the moment.
Maybe the world wasn't so dynamic and maybe the gaps were filled by other cultural memes like religion. But I don't know that we'll ever really know what we've lost either.
How do we avoid group think in the AI age? The same way as in every other age. By making room for people to think and act different.
If you want to build something beautiful, nothing is stopping you, except your own cynicism.
"AI doesn't build anything original". Then why aren't you proving everyone wrong? Go out there and have it build whatever you want.
AI has not yet rejected any of my prompts by saying I was being too creative. In fact, because I'm spending way less time on mundane tasks, I can focus way more time on creativity, performance, security, and the areas that I am embarrassed to have overlooked on previous projects.
Most people are boring. Most people have always been boring. Most people are average, and the average is boring. If you don't want to believe that, simply compare the number of boring people to not-boring people. (Note: People might be amusing and appear not-boring, but still be boring, generic, average people.)
It has actually nothing to do with AI. Most people around are, by default, not thinking deeply either. They barely understand anything beyond a surface level ... and no, it does not at all matter what it's about.
For example: Stupid doctors exist. They're not rare, but the norm. They've spent a lot of time learning all kinds of supposedly important things, only to end up essentially as a pattern matching machine, thus easily replaced by AI. Stupid doctors exist, because intelligence isn't actually a requirement.
Of course there exists no widely perceived problem in this regard, at least not beyond so-called anecdotal evidence strongly suggesting that most doctors are, in fact, just as stupid as most other people.
The same goes for programmers. Or blog-posters. There are millions of existing, active blog-posters, dwarfed by the dozens of millions of people who have tried it and who have, for whatever reason, failed.
Of the millions of existing, active blog-posters it is impossible to make the claim that all of them are good, or even remotely good. It is inevitable that a huge portion of them is what would colloquially likely be called trash. As with everything people do, there is a huge amount of them within the average (d'uh) and then there's the outliers upwards, who everyone else benefits from.
Or streamers. Not everyone's an xQc or AsmonGold for a reason. These are the outliers. The average benefit from their existence and the rest is what it is.
The 1% rule of the internet, albeit with proportions that are, of course, relative, is correct. [1]
It is actually rather amusing that the author assumes that MY FELLOW HUMANS are, by default, capable of deep thinking. They are not. This is not a thing. It needs to be learned, just like everything else. Even if born with the ability, people in general aren't being raised into utilizing it.
Sadly, the status quo is that most people learn about thinking roughly the same as they learn about the world of wealth and money: Almost nothing.
Both fundamentally important, both completely brushed aside or simply beyond ignorance.
> The way human beings tend to have original ideas is to immerse in a problem for a long period of time
This is actually not true. It's the pause, after the immersion, which actually carries most of the weight. The pause. You can spend weeks learning about things, but convergence happens most effectively during a pause, just like muscles aren't self-improving during training, but during the pause. [2]
Well ... it's that, or marijuana. Marijuana (not all types/strains work for that!) is insanely effective for both creativity and also for simply testing how deeply the gathered knowledge converged. [3]
Exceptionally, as a Fun Fact, there are "Creativity Parties", in which groups of people smoke weed exactly for the purpose of creating and dismissing hundreds of ideas not worth thinking further about, in hopes of someone having that one singular grand idea that's going to cause a leap forward, spawned out of an artificially induced convergence of these hundreds of others.
(Yes, we can schedule peak creativity. Regularly. No permanent downsides.)
Anyhow, here's a brutal TLDR:
No, I'm not boring. You are. Evidently so!
Your post literally oozes irony.
-----
[1] https://www.perplexity.ai/search/is-this-correct-for-testing...
[2] https://www.perplexity.ai/search/is-this-correct-for-testing...
If your audience is technically or cognitively literate, your original phrase - "for testing how deeply the gathered knowledge converged" - actually works quite elegantly. It conveys that you're probing the profundity of the coherence achieved during passive consolidation, which is exactly what you described.
[3] https://www.perplexity.ai/search/is-this-correct-for-testing...
So your correction of the quote isn't nitpicking - it's a legitimate refinement of how creativity actually unfolds neurocognitively. The insight moment often follows disengagement, not saturation.
-----
Whether it is the guy reporting on his last year of agentic coding (who did half-baked evals of 25 models that will be off the market in 2 years), or Steve Yegge smoking weed and gaslighting us with "Gas Town", or the self-appointed Marxist who rails against exploitation without clearly understanding what role Capitalism plays in all this, 99% of the "hot takes" you see about AI are by people who don't know anything valuable at all.
You could sit down with your agent and enjoy having a coding buddy, or you could spend all day absorbed in FOMO reading long breathless posts by people who know about as much as you do.
If you're going to accomplish something with AI assistants it's going to be on the strength of your product vision, domain expertise, knowledge of computing platforms, what good code looks like, what a good user experience feels like, insight into marketing, etc.
Bloggers are going to try to convince you there is some secret language to write your prompts in, or some model which is so much better than what you're using. But this is all seductive because it obscures the fact that the "AI skills" will be obsolete in 15 minutes, while all of those other unique skills and attributes that make you you are the ones that AI can put on wheels.
I would say this is a fine time to haul out:
Ximm's Law: every critique of AI assumes to some degree that contemporary implementations will not, or cannot, be improved upon.
Lemma: any statement about AI which uses the word "never" to preclude some feature from future realization is false.
Lemma: contemporary implementations have already improved; they're just unevenly distributed.
These days I can never stop thinking about the XKCD whose punchline is the alarmingly brief window between "can do at all" and "can do with superhuman capacity."
I'm fully aware of the numerous dimensions along which the advancement from one state to the other, in any specific domain, is unpredictable, Hard, or less likely to be quick... but this is the rare case where, absent black swan externalities ending the game, the line goes up.
Being 'anti AI' is just hot right now and lots of people are jumping on the bandwagon.
I'm sure some of them will actually hold out. Just like those people still buying Vinyl because Spotify is 'not art' or whatever.
Have fun all, meanwhile I built 2 apps this weekend purely for myself. Would've taken me weeks a few years ago.
> AI models are extremely bad at original thinking, so any thinking that is offloaded to a LLM is as a result usually not very original, even if they're very good at treating your inputs to the discussion as amazing genius level insights.
The author comes off as dismissive of the potential benefits of the interactions between users and LLMs rather than open-minded. This is a degree of myopia which causes me to retroactively question the rest of his conclusions.
There's an argument to be made that rubber ducking and just having a mirror to help you navigate your thoughts is ultimately more productive and provides more useful thinking than just operating in a vacuum. LLMs are particularly good at telling you when your own ideas are un-original, because they are good at doing research (and also have the median of all ideas already baked into their weights).
They also strawman usage of LLMs:
> The way human beings tend to have original ideas is to immerse in a problem for a long period of time, which is something that flat out doesn't happen when LLMs do the thinking. You get shallow, surface-level ideas instead.
Who says you aren't spending time thinking about a problem with LLMs? The same users that don't spend time thinking about problems before LLMs will not spend time thinking about problems after LLMs, and the inverse is similarly true.
I think everybody is bad at original thinking, because most thinking is not original. And that's something LLMs actually help with.