The future of everything is lies, I guess – Part 5: Annoyances
164 points - today at 2:32 PM
Then again, I do think LLMs are an incredible technological achievement. The issue is not so much what they do or that they exist, but how they are utilized. Right now, they are utilized to further the class divide between rich and poor.
Who are we to trust in the future? Not big companies, not the state, not LLMs. Time to organize around groups and collectives that we know we can trust and that we know have our wellbeing in mind.
All that stuff about support, though, is inevitable.
For most simple mainstream questions I just ask AI instead of Googling through shitty results.
Most of the time AI is good enough and often better than the status quo ante.
People don't care if it's a stupid token-prediction machine as long as the job gets done.
Given how many people hate AI in general, I'm surprised there hasn't been anything like this happening. They could even get around the irony of using "AI" themselves; I bet low-tech language models like Markov chains could provide sufficient time-wasting potential (I'd love to see it done with an old-fashioned AIML chatbot). Asymmetric chatbot warfare.
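For what it's worth, the low-tech idea above really is tiny. A toy sketch of a word-level Markov chain babbler, the kind of thing that could keep a support bot talking indefinitely (function names and the corpus are mine, purely illustrative):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each tuple of `order` consecutive words to the words seen after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def babble(chain, length=30, seed=0):
    """Generate plausible-looking filler by randomly walking the chain."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:  # dead end: no observed continuation
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Hypothetical "corpus" of support-menu filler to train on:
corpus = ("press one for sales press two for support "
          "press three to hear this menu again press two for support")
chain = build_chain(corpus, order=2)
```

Feed it transcripts of real support calls and it will happily waste a bot's time with statistically plausible nonsense, no GPU required.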
Remember that the polygraph still exists. Now we will be dealing with a massive portion of decision makers who will treat this as artificial intelligence not in the technical sense we use, but as real intelligence, maybe even super-intelligence.
A fun side effect is that customer service is also an early-warning system for companies, so when you make it harder to get through to a human, you start throwing away info on your users' pain points. Of course, this only matters if people have a choice about whether to use your product, so that's gotta be an upside for insurance companies, etc.
THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION
—IBM internal training, 1979
It took me a while to realise that the premise is the same reason we have so many "Computer says no" experiences today.
The conclusion only follows if you want someone to be accountable.
If you want to avoid being accountable, computers should make all management decisions. This has nothing to do with AI other than it provides another mechanism to do that.
People saying "I'd love to help you but the computer won't let me do that" has been happening for years now.
Websites develop abusive patterns because A/B testing lets a process decide based on the goal you want; it doesn't measure the repercussions, so you never made a decision to allow them.
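The A/B-testing point can be sketched in a few lines: the selection rule only sees the target metric, so any harm that isn't measured simply never enters the "decision" (the variant names and numbers here are made up for illustration):

```python
# Toy illustration: an A/B test "decides" purely on the goal metric.
def pick_winner(variants):
    """variants: {name: {"signups": int, "complaints": int}}.
    Only 'signups' is measured by the test; 'complaints' is invisible to it."""
    return max(variants, key=lambda v: variants[v]["signups"])

variants = {
    "honest_checkout": {"signups": 90,  "complaints": 2},
    "dark_pattern":    {"signups": 110, "complaints": 40},
}
pick_winner(variants)  # → "dark_pattern": the harm was never part of the objective
```

Nobody chose the abusive pattern; the process did, using the only number it was given.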
Management read it as
A COMPUTER CAN NEVER BE HELD ACCOUNTABLE
THEREFORE THERE CAN BE NO LIABILITY IF COMPUTERS MAKE ALL MANAGEMENT DECISIONS
I chat with these friends a lot, but I rarely send articles I suggest they read, so when I sent this one, which I think is profound, I expected them to read it. These are smart people with a history of reading lots of books.
They are both huge AI proponents now and use AI for nearly everything. Debates with them on various topics used to be rich; now they're shallow, and they just send me AI summaries of positions they're clearly already predisposed to. Their attention spans are dwindling.
[1] https://aphyr.com/data/posts/411/the-future-of-everything-is...
For the "bureaucracy has royally fucked up and doesn't want to fix it" case, if it is something that can be fixed with money and isn't time sensitive (e.g. you need a refund rather than getting the airline to actually provide the ticket you already paid for and want to fly this weekend): in countries that have effective small claims courts, these can be a surprisingly convenient way (less hassle than the "talk to the bot" wall of the company!) to resolve this kind of issue.
I hope that these resolution methods become more common - I think the tools to fight enshittification often already exist, we just don't use them enough. A welcome side effect would, of course, be that this would impose a real cost on the enshittifiers, creating an incentive to provide proper support.
The pattern goes something like this:
- this development is bad
- companies will be unrestrained in their use of this development
- there will be no rules so they can do whatever they want
- we are all fucked as a result
But then... propose that we make some laws to put rules around this stuff, also known as regulations, and everybody goes "whoa, hold up, hold up, hold up... I dunno about that part."
Dear friends - America has always been this way. Study your 19th and 20th century history. Companies will exploit the shit out of us unless we put some rules in place to prevent it. Yes, that might mean making less money in the short term as regulations cause friction. But in the long term it means we can have a better and actually livable society.
(For what it's worth I'm an American and not an uppity European or Australian taking potshots from across the pond; no offense to Euros or Aussies intended, love you guys)
Haha, yes. I interacted with a bank once. It was like "press 5 for mortgages" but with a text-to-speech front end.
At the end of the day the LLM can be tricked into doing anything.
"Yes, we cost more, but you get what you pay for" can be a good play.
Most of these annoyances are also things that existed before AI, and will continue to exist after, because of consumerist capitalism. The good little obedient consumers get abused because they don't stand up for themselves. Customer service is an infuriating maze? Yeah, because you voted with your dollars (and political indifference) to allow companies to make customer service (the thing you pay for) worse. We bring these problems on ourselves. It's pointless to complain if you aren't willing to do anything to change it. (And if you think you can't change it, there are other nations to look at, as well as the fact that you live in a democracy - for now - unlike the rest of the world.)
Hell, we already have companies whose sole purpose is to manage your subscriptions for you because you're too lazy to do it yourself. You could look at this and say, man, the world is terrible! Or you could look at this and say, man, how great is my life that I can not only subscribe to a lot of things without going bankrupt, but I have extra cash left over to pay a company to manage my subscriptions?
Don't let the hedonic treadmill and complacency trick you into A) accepting a worse life, or B) convincing yourself your life is bad when it's actually better than most people's.
It's certainly worth discussing the fact that the entire industry is starting to outsource large amounts of our thinking and writing work to non-sentient statistical algorithms, but this discussion needs to honestly confront the extent to which they are successfully completing useful tasks today.
Lots of blaming LLMs, but I think the root cause lies elsewhere. I'm not even sure whether dismissing it as "capitalism" or "profit motives" would do it justice, because in general it feels more like the world we live in lacks humanity.
Even in a capitalist world, a company could take a stance and decide not to purposefully screw people over, but in the world that we live in instead they look for ways to better screw over people and extract more money from them. It doesn’t matter whether your customer support is handled by someone from India, a crappy telephone tree or some voice model, when the incentive is the same - to do the bare minimum for customer “support” (in practice, just getting you to fuck off). Same for handling insurance claims and “dynamic pricing” of things - it doesn’t matter whether it’s some proprietary algorithm or just an LLM making crap up when the goal is to screw you over.
Blaming "AI" for all of this would be barking up the wrong tree (without that tech they'd just find other ways), though one can definitely acknowledge that this technology provides another convenient scapegoat, same as how you can lay employees off and just say it's because of AI when in actuality it's just greed and wanting to make your books look better.
Payment processing is better than it was in 2000, but still not good.
Micropayments: this is obnoxiously expensive to do.
Discovery, and discoverability: again here we have better but not good solutions (and many of the ones that were once good are enshittified).
Pricing: this is a problem everywhere, and frankly we need the law to change in a way that is pro consumer. Publishing prices, disclosure of fees, in both services and for payment processing (that 3 percent back from visa looks a lot less attractive when it's part of a 5 percent mark up).
Customer service: well, there are already companies promoting models where they cut you off and send you into a black hole (Google is a prime example). Good customer service will become a differentiator, and maybe a "paid for" service as well.
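The cashback-vs-markup point in the pricing paragraph is easy to check with two lines of arithmetic (the 3% and 5% figures come from the comment; the function name and base price are mine):

```python
# Toy arithmetic: is 3% cashback still a win when the sticker price
# already carries a 5% card markup?
def net_card_benefit(price, cashback_rate, card_markup_rate):
    """Net gain (negative = loss) of paying the marked-up card price and
    getting cashback, versus paying an unmarked cash price."""
    card_price = price * (1 + card_markup_rate)
    cashback = card_price * cashback_rate
    return cashback - (card_price - price)

net_card_benefit(100, 0.03, 0.05)  # ≈ 3.15 back, but 5.00 extra paid → about -1.85
```

So on a $100 purchase the "3% back" customer is still roughly $1.85 worse off, which is exactly why undisclosed card markups matter.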
It’s already bad. I’m not looking forward to the future. These systems are terrible. It’s a future without people that they want for some reason. I’d rather deal with incompetent, tired, annoyed people than an LLM.
The LLM, when it came out, was perfect as an interface between a system and a normal human.
So many people call customer support for issues they could in theory fix themselves. If that LLM system can understand me well enough, it's an okay interface.
In the worst case you have to escalate anyway. My mum actually told me that she talked to some AI.
And yes, normal systems are also not correct often enough. With AI/LLMs, software will get cheaper, which should increase quality overall.
I don't think AI/LLMs in this case will change anything.
Relevant change will happen due to the fact that humans can be replaced by AI/LLMs. It was not even imaginable a few years back what a good AI system would even look like. Translators lost their jobs, basic artists lost their jobs. Small contracts for basic things are gone. The restaurant poster no one cares about? AI. The website translation for some small business? No one cares.