The future of everything is lies, I guess: Where do we go from here?
403 points - today at 1:32 PM
The comparison to the adoption of automobiles is apt, and something I've thought about before as well. Just because a technology can be useful doesn't mean it will have positive effects on society.
That said, I'm more open to using LLMs in constrained scenarios, in cases where they're an appropriate tool for the job and the downsides can be reasonably mitigated. The equivalent position in 1920 would not be telling individuals "don't ever drive a car," but rather extrapolating critically about the negative social and environmental effects (many of which were predictable) and preventing the worst outcomes via policy.
But this requires understanding the actual limits and possibilities of the technology. In my opinion, it's important for technologists who actually see the downsides to stay aware and involved, and even be experts and leaders in the field. I want to be in a position to say "no" to the worst excesses of AI, from a position of credible authority.
This tech is 100% aligned with the goals of the 0.001% who own and control it. Almost all of the negatives cited by Kyle and the like-minded (such as myself) are in fact positives for them, in the context of massive population reduction to eliminate "useless eaters" and technological control over the "NPCs" of the world who remain, since those people will likely be programmed by their paired AI, which will do their thinking for them.
So what to do entirely depends on whether you feel we are responsible to the future generations or not. If the answer is no, then what to do is scoped to the personal concerns. If yes, we need a revolution and it needs to be global.
ML promises to be profoundly weird* - https://news.ycombinator.com/item?id=47689648 - April 2026 (602 comments)
The Future of Everything Is Lies, I Guess: Part 3 – Culture - https://news.ycombinator.com/item?id=47703528 - April 2026 (106 comments)
The future of everything is lies, I guess – Part 5: Annoyances - https://news.ycombinator.com/item?id=47730981 - April 2026 (169 comments)
The Future of Everything Is Lies, I Guess: Safety - https://news.ycombinator.com/item?id=47754379 - April 2026 (180 comments)
The future of everything is lies, I guess: Work - https://news.ycombinator.com/item?id=47766550 - April 2026 (217 comments)
The Future of Everything Is Lies, I Guess: New Jobs - https://news.ycombinator.com/item?id=47778758 - April 2026 (178 comments)
* (That first title was different because of https://news.ycombinator.com/item?id=47695064 - as you can see, I gave up.)
p.s. Normally we downweight subsequent articles in a series because avoiding repetition of any kind is the main thing that keeps HN interesting. But we made an exception in this case. Please don't draw conclusions from that since we'll probably get less series-ey, not more, after this! Better to bundle into one longer article.
> The people who brought us this operating system would have to provide templates and wizards, giving us a few default lives that we could use as starting places for designing our own. Chances are that these default lives would actually look pretty damn good to most people, good enough, anyway, that they'd be reluctant to tear them open and mess around with them for fear of making them worse. So after a few releases the software would begin to look even simpler: you would boot it up and it would present you with a dialog box with a single large button in the middle labeled: LIVE. Once you had clicked that button, your life would begin. If anything got out of whack, or failed to meet your expectations, you could complain about it to Microsoft's Customer Support Department. If you got a flack on the line, he or she would tell you that your life was actually fine, that there was not a thing wrong with it, and in any event it would be a lot better after the next upgrade was rolled out. But if you persisted, and identified yourself as Advanced, you might get through to an actual engineer.
> What would the engineer say, after you had explained your problem, and enumerated all of the dissatisfactions in your life? He would probably tell you that life is a very hard and complicated thing; that no interface can change that; that anyone who believes otherwise is a sucker; and that if you don't like having choices made for you, you should start making your own.
Somehow we got to talking about AI in some depth, and at one point the VC said (about AI): “I don’t know what our kids are going to do for work. I don’t know what jobs there will be to do.”
That same VC invests in AI companies and, from what I’ve heard, has done phenomenally well.
I think about that exchange all the time. Worried about your own kids but acting against their interests. It unsettled me, and Kyle’s excellent articles brought that back to a boiling point in my mind.
Edit: are->our
AI doesn't get most of its value from someone just using it. Here's my personal take on what to do, starting with the most impactful:
* Cut off the low-entropy sources. This includes open source, articles (yes, ones like the one above will feed the machine), and thoughtful feedback (the kind that generates "you are absolutely right" BS).
* Cheer on the slop. After some time fighting slop in my circles, I found it counter-productive: it wastes my resources while (sometimes) helping the slop creators. A few months ago I started cheering as a joke, because I thought the problem was too obvious, but instead the slopper launched a CRM-like app for a local office with client-side authentication and an in-memory (no persistence) backend storage. He was given an award at the local meeting. The more stories like this we have, the better.
* Use AI to reply to, review, or interact with slop in any way. Make it an AI-only exchange by prompting with nothing useful in it. One example was an email, pages and pages of generated text, asking me to collect some data and send it back. My prompt was "You are {X} and got this email; write a reply."
Imagine starting university now... I can't imagine having learned what I did at engineering school without all the time lost on projects and on errors. And I can't honestly believe I would have had the mental strength not to use LLMs on course projects (or side projects) with deadlines and exams coming up, while also wanting to be with friends and enjoy those years of my life.
"Unavailable Due to the UK Online Safety Act [...] Now might be a good time to call your representatives."
So I fired up a VPN, and it appears to be a personal blog. About AI risks.
The geo-block is kind of a shame, as the writing is good and nothing about the site appears to make it subject to the OSA.
and while I know they can do the nitty gritty ui work fine, I feel like I can work just as fast, or faster, on UI without them than I can with them. with them it's a lot of "no, not that, you changed too much/too little/the wrong thing", but without them I just execute because it's a domain I'm familiar with.
So my general idea of them is that they are "90% machines". Great at doing all of the "heavy lifting" bullshit of initial setup or large structural refactoring (that doesn't actually change functionality, just prepares for it) that I never want to do anyway, but not necessary and often unhelpful for filling in that last 10% of the project just the way I want it.
of course, since any good PM knows that 90% of the code written only means 50% of the project finished (at best), it still feels like a hollow win. So I often consider the situation in the same way as that last paragraph. Am I letting the ease of the initial setup degrade my ability to set up projects without these tools? Does it matter, since project setup and refactoring are one-and-done, project-specific, configuration-specific quagmires where the less thought given to fiddly perfect text-matching, the better? Can I keep using these things well (directing them on architecture/structure) if, by using them, I lose a grounded sense of what the underlying work is? Good questions, as far as I'm concerned.
I think that would democratize some of the power. Then again, I haven't been super impressed with humanity lately and wonder if that sort of democratization of power would actually be a good thing. Over the last few years, I've come to realize that a lot of people want to watch the world burn, way more than I had imagined. It is much easier to destroy than to build. If we make it easier for people to build agents, is that a net positive overall?
And that should be the core question. There is a new, emergent technology: should we throw everything away and embrace it, or are there structural reasons why it should carry big warning labels? Avoiding these tools because they do their work too well may make sense as a global, systemic approach, but decision makers optimize locally, for their own budget/productivity/profit. If instead the tools are perceived as risky because they are not good enough, that is another matter.
The reason you can't beat index funds is the people who build the market built a system that benefits them and them alone; the index fund is the pitchfork dividend (what you pay to avoid getting pitchforked). The reason you can't get your congressperson on the line is (mostly) they built a system where the only way to influence them is to enrich them; voting is the pitchfork dividend.
The way to build a society that runs on reality is to build it by whatever means possible, then defend it by any means necessary. The only societies that matter are the ones that survive.
I want to build it. I don't wanna build a fuckin crypto app, a stupid ass agent harness, or yet another insipid analytics platform. I want to build a society that furthers the liberation of humankind from the vicissitudes of nature, the predation of tyranny and the corruption of greed. I believe it is possible, and I want to prove it out.
If we haven't already, we will soon lose the ability to tell whether AI is helping humans (an overwhelming majority of them, not a handful), considering how we are steaming ahead on this path!
That's the rub: if we build it later, our economy crashes in the meantime.
"What do such machines really do? They increase the number of things we can do without thinking. Things we do without thinking-there's the real danger" - Frank Herbert, God Emperor of Dune
Having the "call your representatives" link be to your website as well isn't particularly helpful... I already can't get to it
AI will basically do one of three things: enrich our lives the way the loom did; outright kill the current economic system of the world, which might end poverty altogether; or set off a big collapse in which people suffer at first but which still has a positive outcome in the end.
Humankind has always found a solution in the past, and it will again in the future.
Damaging machinery was made a capital offense, and there were dozens of executions and hundreds of deportations.
At every stage, the steady progress of civilization is fragile and in danger of being suffocated. Its opponents cloak themselves in moral righteousness and call themselves Luddites, the Green Party, or AI-safety rationalists. It's all the same corrosive thing underneath.
Well, yes, the entire world order is currently being upended. The USA is completely unrolling its place in the global order and becoming isolationist (and soon an authoritarian single-party state). The Petrodollar is either dying or being converted to a Northwestern-Hemisphere-Petrodollar, with the Yuan in the ascendancy (so there goes the strong economy powering VC money). China, EU, and Russia are the new global leaders. The Middle East and its oil is being taken over by Israel. Taiwan will fall to China and thus the whole technological world follows. Countries that are friendly with China will have good renewable tech, countries that aren't will be doubling down on oil and coal. Fresh water will become as valuable as oil. A world war will decimate global productivity for decades. Most of the democracies in the world will be gone by the end of the century.
But none of that has to do with AI.
Bad things will always happen in the world. Good things will happen too. But you're only focusing on the bad. That's not good for your health, or others'.
> Refuse to insult your readers: think your own thoughts and write your own words. Call out people who send you slop. Flag ML hazards at work and with friends. Stop paying for ChatGPT at home, and convince your company not to sign a deal for Gemini. Form or join a labor union, and push back against management demands that you adopt Copilot [..] Call your members of Congress and demand aggressive regulation which holds ML companies responsible [..] Advocate against tax breaks for ML datacenters. If you work at Anthropic, xAI, etc., you should think seriously about your role in making the future. To be frank, I think you should quit your job.
He's freaking out, and rejecting AI completely, out of fear. And that's okay; we all get a little freaked out sometimes. But please try not to make other people freaked out as well? Just because you are scared of something doesn't mean the fear is justified or realistic.
What's going to happen now is the same thing that happened during the pandemic. A bunch of irrationally fearful people will decide that the only way they can cope with their fear, is to reject the basis of it. COVID deniers and anti-maskers/anti-vaxxers were essentially so terrified of the loss of control they had, that they refused to acknowledge it. They instead went full-bore in the opposite direction, defying government mandates and health warnings, in order to try to regain some semblance of control over their lives. And it did not go well.
That's what's now gonna happen with AI deniers. They're so freaked out about AI that they're going to reject it en masse, not because it is actually doing anything to them, but because they're afraid it might. And the outcome will be similar: extreme people do extreme things, and the end result isn't good. So please try to rein in the doomerism a bit, for all our sakes.
Do LLMs lie? Of course not; they are just programs. Do they make mistakes or get facts wrong? Of course they do, but not more often than a human does. So what is the point of that article? Why is my future particularly bad now because of LLMs?
To take the car analogy: it matters how we use the car.
The car in itself can be used to save time and energy that would otherwise be used to walk to places. That extra time and energy can be used well, or poorly.
- It can be squandered by having a longer commute that defeats the point
- Alternatively, it can be wasted by sitting on a couch consuming Netflix or TikTok
- Alternatively, it can be used productively, by playing team sports with friends, or chasing your kids through the park, or building a chicken coop in your back yard
It’s all about wise usage. Yes it can be used as a way to destroy your own body and waste your time and attention, but also it can be used as a tool to deploy your resources better, for example in physical activities that are fun and social rather than required drudgery.
I think it’s the same for LLMs. Managers and executives have always delegated the engineering work, and even researching and writing reports. It matters whether we find places to continue to challenge and deploy our cognition, or completely settle back, delegate everything to the LLM and scroll TikTok while it works.