A sane but bull case on Clawdbot / OpenClaw

233 points - yesterday at 3:47 PM

Source

Comments

louiereederson today at 5:23 PM
- Why do you need a reminder to buy gloves when you are holding them?

- Why do you need price trackers for airbnb? It is not a superliquid market with daily price swings.

- Cataloguing your fridge requires taking pictures of everything you add and remove which seems... tedious. Just remember what you have?

- Can you not prepare for the next day by opening your calendar?

- If you have reminders for everything (responding to texts, buying gloves, whatever else is not important to you), don't you just push the problem of notification overload to reminder overload? Maybe you can get clawdbot to remind you to check your reminders. Better yet, summarize them.

lawrenceyan today at 11:50 PM
Found this short story on Openclaw to be relevant:

https://x.com/gf_256/status/2018844976486945112

okinok today at 2:17 PM
>all delegation involves risk. with a human assistant, the risks include: intentional misuse (she could run off with my credit card), accidents (her computer could get stolen), or social engineering (someone could impersonate me and request information from her).

One of the differences in risk here would be that I think you got some legal protection if your human assistant misuse it, or it gets stolen. But, with the OpenClaw bot, I am unsure if any insurance or bank will side with you if the bot drained your account.

mmahemoff today at 3:35 PM
Giving access to "my bank account", which I take to mean one's primary account, feels like high risk for relatively low upside. It's easy to open a new bank (or pseudo-bank) account, so you can isolate the spend and set a budget or daily allowance (by sending it funds daily). Some newer payment platforms will let you setup multiple cards and set a separate policy on each one.

An additional benefit of isolating the account is it would help to limit damage if it gets frozen and cancelled. There's a non-zero chance your bot-controlled account gets flagged for "unusual activity".

I can appreciate there's also very high risk in giving your bot access to services like email, but I can at least see the high upside to thrillseeking Claw users. Creating a separate, dedicated, mail account would ruin many automation use cases. It matters when a contact receives an email from an account they've never seen before. In contrast, Amazon will happily accept money from a new bank account as long as it can go through the verification process. Bank accounts are basically fungible commodities, can easily be switched as long as you have a mechanism to keep working capital available.

endymion-light today at 2:36 PM
This felt like a sane and useful case until you mentioned the access to bank account side.

I just don't see a reason to allow OpenClaw to make purchases for you, it doesn't feel like something that a LLM should have access to. What happens if you accidentally end up adding a new compromised skill?

Or it purchases you running shoes, but due to a prompt injection sends it through a fake website?

Everything else can be limited, but the buying process is currently quite streamlined, doesn't take me more than 2 minutes to go through a shopify checkout.

Are you really buying things so frequently that taking the risk to have a bot purchase things for you is worth it?

I think that's what turns this post from a sane bullish case to an incredibly risky sentiment.

I'd probably use openclaw in some of the ways you're doing, safe read-only message writing, compiling notes etc & looking at grocery shopping, but i'd personally add more strict limits if I were you.

causal today at 2:21 PM
> amongst smart people i know there's a surprisingly high correlation between those who continue to be unimpressed by AI and those who use a hobbled version of it.

I've noticed this too, and I think it's a good thing: much better to start using the simplest forms and understand AI from first principles rather than purchase the most complete package possible without understanding what is going on. The cranky ones on HN are loud, but many of the smart-but-careful ones end up going on to be the best power users.

ceroxylon today at 7:03 PM
Reminds me of Dan Harumi

> Tech people are always talking about dinner reservations . . . We're worried about the price of lunch, meanwhile tech people are building things that tell you the price of lunch. This is why real problems don't get solved.

sjdbbdd today at 2:21 PM
Did the author do any audit on correctness? Anytime I let the LLM rip it makes mistakes. Most of the pro AI articles (including agentic coding) like this I read always have this in common:

- Declare victory the moment their initial testing works

- Didn’t do the time intensive work of verifying things work

- Author will personally benefit from AI living up to the hype they’re writing about

In a lot of the authors examples (especially with booking), a single failure would be extremely painful. I’d still want to pay knowing this is not likely to happen, and if it does, I’ll be compensated accordingly.

suralind today at 2:44 PM
But where's the added value? You can book a meeting yourself. You can quickly add items to the freezer. Everything that was described in the article can be done in about the same amount of time as checking with Clawdbot. There are apps that track parcel delivery and support every courier service.
bix6 today at 2:24 PM
> in theory, clawdbot could drain my bank account. this makes a lot of people uncomfortable (me included, even now).

Yeah this sounds totally sane!

causal today at 2:22 PM
I'm still trying to understand what makes this project worthy of like 100K Github stars overnight. What's the secret sauce? Is it just that it has a lot of integrations? Like what makes this so much more successful than the ten thousand other AI agent projects?
tsxxst today at 4:13 PM
The fact that the author gave unrestricted 2FA access to the model is really scary. It’s way easier to phish an AI than a human.
dmje today at 4:03 PM
What strikes me here is the extreme noise. I mean, I’m 50+ so you know, but even so, this shit doesn’t make sense. To be living a life where you’re checking messaging groups for 100+ messages a day, needing some kind of bot to manage your (obviously extremely traffic’d) texts incoming, to be watching tens of prices of stocks, products, meeting, what, tens of people a day (as an introvert
)


Holy shit, fuck that. Slow the bejesus down and live a little. Go look at the sky.

siliconc0w today at 4:00 PM
It doesn't make sense to 'build trust' with a bot. Today it works but tomorrow someone may push a malicious 'skill', a dependency may be compromised, or someone eventually figures out the right prompt injection incantation to remotely drain your accounts.
zkmon today at 5:55 PM
I don't think a lot of people worry about having a bot to manage their chats, appointments, travel, hotel booking etc. A lot of us just worry about the tasks in our task queue. Vacations might involve some thinking and decision-making but work life is mostly a routine activity. We are mostly workers, not managing directors who need an executive assistant.
olalonde today at 2:16 PM
Why is everything in lowercase?
grugdev42 today at 2:24 PM
There is only so much damage a human assistant can do.

But an AI assistant can do so much more damage in a short space of time.

It probably won't go wrong, but when it does go wrong you will feel immense pain.

I will keep low productivity in exchange for never having to deal with the fallout.

urbandw311er today at 9:48 PM
Dear Brandon. Sentences begin with capital letters. Kind regards.
browningstreet today at 7:36 PM
I've tried twice now to install it.. once in a docker container, and the second time in a droplet. Couldn't get any of the setup stuff configured properly, couldn't get any of the API keys registered, couldn't get the Telegram bot approved either.

Some of the commands seem to have drifted from the documentation. The token status freaks out too and then... whatever, after 2 hours I just gave up. And it only cost me $1.19 in Anthropic API tokens.

jngiam1 today at 10:00 PM
i've a simple setup with Claude Code and MCPs; and i get real benefits from better task mgmt, email mgmt, calendar, health/food/fitness tracking, working together with claude on tasks (that go into md files).

i don't think we need ClawdBot, but we do need a way to easily interact with the model such that it can create long term memories (likely as files).

wyldfire today at 7:28 PM
Would it be any more comforting from a privacy standpoint to have the models capable of doing this running on the device itself instead of the cloud?
artisin today at 3:47 PM
I mean, maybe, it's just me, but...

> it can read my text messages, including two-factor authentication codes. it can log into my bank. it has my calendar, my notion, my contacts. it can browse the web and take actions on my behalf. in theory, clawdbot could drain my bank account. this makes a lot of people uncomfortable (me included, even now).

...is just, idk, asinine to me on so many levels. Anything from a simple mix-up to a well-crafted prompt injection could easily fuck you into next Tuesday, if you're lucky. But admittedly, I do see the allure, and with the proper tooling, I can see a future where the rewards outweigh the risks.

627467 today at 10:08 PM
So, what prevents site to do dynamic pricing for bots checking sites for prices?
munificent today at 4:39 PM
> as someone who has a chest freezer and a compulsive desire to buy too many things at costco, we take everything out of the freezer every few months to check what we have. before, this was a relatively involved process: me calling things out, my partner writing them down.

A thought I constantly find myself having when I read accounts of people automating and accelerating aspects of their life by using AI... Are you really that busy?

I mean, obviously, no one is thrilled by spending ten minutes making a dentist appointment. But I strongly suspect that most of us will feel a stronger sense of balance and equanimity if a larger fraction of our life is spent doing mundane menial tasks.

Going through your freezer means that you're using your hands and eyes and talking to your partner to solve a concrete problem. It's exactly the kind of thing primates evolved to do.

Whenever I read articles like this, I can't help but imagine the author automating away all of the menial toil in their day so they can fill those freed up minutes with... more scrolling on their phone. Is that what anyone needs more of?

codeulike today at 2:36 PM
If you're on MS stack, this is all stuff that MS 365 Copilot will already do for you, but with much better defined barriers around what it can and cant access.
deleted today at 4:33 PM
baalimago today at 4:25 PM
I'm a bit surprised that people need an LLM to automate things like this. Is the market really that large, to cause such a hype? I don't think I'm being "elitist" by having a calendar and a pen, am I..?

The one tangible usecase is perhaps booking things. But, personally, I don't mind paying 5-10% extra by going to a local store and speaking to a real person. Or perhaps intentionally buying ecological. Or whatever. What is life if you have a robot optimize everything you do? What is left?

cluckindan today at 2:21 PM
Just weeks ago, the sentiment was such that developers would be managing AI workers.

Now, it seems that AI will be managing the developers.

mbesto today at 4:44 PM
Everything that are daily burdens and require an assistant are also the things that require the most secure way to access them. OpenClaw sounds amazing on paper but super risky in practice.
longtermop today at 4:44 PM
Exciting to see Apple making agentic coding first-class. The "Xcode Intelligence" feature that pulls from docs and developer forums is powerful.

One thing I'm curious about: as the agent ingests more external content (documentation, code samples, forum answers), the attack surface for prompt injection expands. Malicious content in a Stack Overflow answer or dependency README could potentially influence generated code.

Does Apple's implementation have any sanitization layer between retrieved content and what gets fed to the model? Or is the assumption that code review catches anything problematic? Seems like an interesting security challenge as these tools go mainstream.

sharadov today at 6:18 PM
"Taking pictures of the contents of your freezer" sounds so tedious. It's a solution looking for a problem!
tiangewu today at 4:44 PM
My main interest in something like OpenClaw is giving it access to my bank account and having it harvest all the personal finance deals.

Fortune favors the bold, I guess.

ghostly_s today at 7:51 PM
Have ignored the flood of "Clawdbot" stuff on here lately because none of it seemed interesting but read this and skimmed the docs and I'm leaving puzzled- I understand "Clawdbot" was renamed "OpenClaw" due to trademark...yet I'm finding currently three different websites for apparently the same thing?

1. https://openclaw.ai/ [also clawd.bot which is now a redirect here]

2. https://clawdbot.you/

3. https://clawdbotai.org/

They all have similar copy which among other things touts it having a "local" architecture:

    "Private by default—your data stays yours."

    "Local-First Architecture - All data stays on your device. [...] Your conversations, files, and credentials never leave your computer."

    "Privacy-First Architecture - Your data never leaves your device. Clawdbot runs locally, ensuring complete privacy and data sovereignty. No cloud dependencies, no third-party access."
Yet it seems the "local" system is just a bunch of tooling around Claude AI calls? Yes, I see they have an option to use (presumably hamstrung) local models, but the main use-case is clearly with Claude -- how can they meaningfully claim anything is "local-first" if everything you ask it to do is piped to Claude servers? How are these claims of "privacy" and "data sovereignty" not outright lies? How can Claude use your credentials if they stay on your device? Claude cannot be run locally last I heard, am I missing something here?
deleted today at 6:06 PM
rao-v today at 6:40 PM
I think it maybe time for us to think about what the sensible version of these capabilities are.

Short term hacky tricks:

1. Throw away accounts - make a spare account with no credit card for airbnb, resy etc.

2. Use read only when it's possible. It's funny that banks are the one place where you can safely get read only data via an API (plaid, simplefin etc.). Make use of it!

3. Pick a safe comms channel - ideally an app you don't use with people to talk to your assistant. For the love of god don't expose your two factor SMS tokens (also ask your providers to switch you to proper two factor most finally have the capability).

4. Run the bot in a container with read only access to key files etc.

Long term:

1. We really do need services to provide multiple levels of API access, read only and some sort of very short lived "my boss said I can do this" transaction token. Ideally your agent would queue up N transactions, give them to you in a standard format, you'd approve them with FaceID, and that will generate a short lived per transaction token scoped pretty narrowly for the agent to use.

2. We need sensible micropayments. The more transactional and agent in the middle the world gets, the less services can survive with webpages,apps,ads and subscriptions.

3. Local models are surprisingly capable for some tasks and privacy safe(er)... I'm hoping these agents will eventually permit you to say "Only subagents that are local may read my chat messages"

mh2266 today at 3:36 PM
> amongst smart people i know there's a surprisingly high correlation between those who continue to be unimpressed by AI and those who use a hobbled version of it

is it "hobbled" to:

1. not give an LLM access to personal finances 2. not allow everyone in the world a write channel to the prompt (reading messages/email)

I mean, okay. Good luck I guess.

deleted today at 7:45 PM
AdeptusAquinas today at 9:00 PM
This reminds me of a take by Dan Harumi: these tools are always pitched for 'restraurant reservations', 'reminders', 'email and message follow ups': i.e. they appeal to the sort of arrested development man children that inhabit tech who never really figured out adulting. Now the computer can do it for them, and they can remain teenagers forever.
noncoml today at 10:08 PM
I think Clawdbot is amazing, but my only issue is how it burns through my AI budget. Even when using a "cheap" model like Gemini 2.5 flash, it easily burns $10-$20 a day
ericyd today at 2:20 PM
Wait I'm ignorant, how long has OpenClaw/Clawdbot existed? This person listed like 6 months of activities that they offloaded to the bot, I thought this thing was pretty new.
almostdeadguy today at 7:28 PM
I wish I understood why all lowercase text and cosplaying as Zoomers became the preferred affectation of AI people.
4corners4sides today at 9:30 PM
This article convinced me to try to set up OpenClaw locally on the my raspberry pi but I realised that it had no micro SD card installed AND it used micro HDMI instead of a regular HDMI for display which I didn't have


Some of the takes in this article relate to the "Agent Native Architecture" (https://every.to/guides/agent-native), an article that I critiqued quite heavily for being AI generated. This article presents many of the concepts explored there in a real-world, pragmatic lens. In this case, the author brings up how initially they wanted their agent to invoke specific pre-made scripts but ultimately found out that letting go of the process is where the inner model intelligence was able to really shine. In this case, parity, the property whereby anything a human can do an agent can do was achieved most powerfully buy simply giving the agent a browser-use agent which cracked open the whole web for the agent to navigate through.

The gradual improvement property of agent native architectures was also directly mentioned by the article, where the author commented on giving the model more and more context allowed him to “feel the AGI”.

ClawdBot is often reduced to “just AI and cron” but that might be overly reductive in the same way that one could call it a “GPT wrapper” in the same way that one could call a laptop an “electricity wrapper”. It seems like the scheduler is a significant aspect of what makes ClawdBot so powerful. For example the author, instead of looking for sophisticated scraper apps online to monitor prices of certain items will simply ask ClawdBot something like: “Hey, monitor hotel prices” and ClawdBot will handle the rest asynchronously and communicate back with the author over slack. Any performance issues due to repeated agent invocations are ameliorated by problem context and runbooks that are automatically generated and probably cost less time than maintaining pipelines written in plain code for a single individual who wants a hands-off agent solution.

Also, the article actually explains the obsessions with Mac Mini’s which I thought was some kind of convoluted scam (though apple doesn’t need scams to sell Macs
). Essentially you need it to run a browser or multiple browsers for your agents. Unfortunately that’s the state of the modern web.

I actually have my own note taking system and a pipeline to give me an overview of all of the concepts, blogs and daily events that have happened over the past week for me to look at. But it is much more rigid than ClawdBot: 1) I can only access it from my laptop, 2) it only supports text at the moment, 3) the actions that I can take are hard coded as opposed to agent-refined and naturally occuring (e.g. tweet pipeline, lessons pipeline, youtube video pipeline), 4) there’s no intelligent scheduler logic or agent at all so I manually run the script every evening. Something like ClawdBot could replace this whole pipeline.

Long story short, I need to try this out at some point.

rambocoder today at 3:54 PM
So if all day you spend chatting with people via IMs, then openclaw helps you automate that. Got it.
thm today at 2:37 PM
https://www.theregister.com/2026/02/04/cloud_hosted_openclaw...

Kill it with fire - Analyst firm Gartner has used uncharacteristically strong language to recommend against using OpenClaw.

deleted today at 2:35 PM
alluro2 today at 2:46 PM
As someone for whom English is not the first language, I got stumped by the "chest freezer" and the photo of colourful bags, for good ~15 seconds, going through - "hm, must be some kind of travel thing where you bring snacks in some kind of device you carry around your neck / on your chest...why not backpack freezer then...hm, why would snacks need a freezer...maybe it's just a cooler box, but called chest freezer in some places"...

....before I took a better look of the photo and realised it's frozen stuff - for the dedicated freezer - that opens like a chest (tada).

Well, that was fun...Maybe I should get a bit more sleep tonight!

marxisttemp today at 4:46 PM
Why is this written in lowercase? What a performative way to write in 2026
RC_ITR today at 4:29 PM
I may not be AGI, but here's a $615 2 Queen bed hotel room for the dates he wants in exactly the location he wants (just not on Airbnb).

https://www.booking.com/Share-Wt9ksz

Maybe he really is tied to $600 as his absolute upper limit, but also seems like something a few years from AGI would think to check elsewhere.

patrickk today at 3:42 PM
> how’d you set it up?

I was disappointed by this section. He doesn’t mention which model he uses (or models split by task type for specific sub agents).

I tried out OSS-20B hosted on Groq (recommended by a YouTuber) to test it for cheap, but the model isn’t smart enough for anything other than providing initial replies and perhaps delegating tasks into expensive capable models from ChatGPT or Claude. This is a crucial missing detail to replicate his use cases.

cess11 today at 3:07 PM
'the sweet sweet elixir of context is a real "feel the AGI" moment and it's hard to go back without feeling like i would be willingly living my most important relationship in amnesia'

I'm not so sure that I would use the word "sane" to describe this.

IshKebab today at 2:29 PM
Can this thing deal with the insane way my children's school communicates? Actionable information (children wear red tomorrow) is mixed in with "this week we have been learning about bees" across five different communication channels. I'm not exaggerating. We have Tapestry, emails, a newsletter, parents WhatsApp, Arbour and Facebook.

I guess the difficulty is getting the data into the AI.

willmadden today at 6:44 PM
AI is useful for researching things far more quickly before making a decision and for automation/robotics. Motivated people don't need a nagbot to replace their calendar.
oncallthrow today at 2:11 PM
Do you mean “bullish”?
zackify today at 2:24 PM
I just can't get over how none of this is new. 6 months ago I was running "summarize my work" tasks using linear and github mcps

just using a cron task and claude code. The hype around openclaw is wild

chaostheory today at 3:17 PM
I some lose utility but my openclaw bot only has its own accounts. I do not give it access to any of my own accounts.
insane_dreamer today at 3:13 PM
> let me be upfront about how much access i've given clawdbot: it can read my text messages, including two-factor authentication codes. it can log into my bank. it has my calendar, my notion, my contacts. it can browse the web and take actions on my behalf.

this is foolish, despite the (quite frankly) minor efficiency benefits that it is providing as per the post.

and if the agent has, or gains, write access to its own agents/identity file (or a file referenced by its agents file), this is dangerous

gabrieledarrigo today at 4:53 PM
> i haven't automated anything here, but booking a table by talking to clawdbot is delightful.

Omg. Just get the phone and call the restaurant, man.

I really don't want to live in this timeline where I can't even search for b&b with my gf without burning tokens through an LLM. That's crazy.

cj today at 2:25 PM
Tangent: what is the appeal of the “no capitalization” writing style? I never know what message the author is intending to convey when I see all lower case.

Normally I can ignore it, but the font on this blog makes it hard to distinguish where sentences start and end (the period is very small and faint).

dcre today at 3:36 PM
Fine article but a very important fact comes in at the end — the author has a human personal assistant. It doesn't fundamentally change anything they wrote, but it shows how far out of the ordinary this person is. They were a Thiel Fellow in 2020 and graduated from Phillips Exeter, roughly the most elite high school in the US.
dang today at 5:36 PM
[stub for offtopicness]
jpaulgrayson today at 9:00 PM
[dead]
bennydog224 today at 3:37 PM
> it's hard to go back without feeling like i would be willingly living my most important relationship in amnesia.

This made me think this was satire/ragebait. Most important relationship?!?

stale-labs today at 2:10 PM
[flagged]
kaicianflone today at 3:43 PM
Really enjoyed this. It’s one of the most grounded takes I’ve read on OpenClaw. You skip the hype and actually show what it looks like when someone lives with it day to day, including the tradeoffs. The examples around texts turning into real actions and the compounding value of context made the case way better than any demo ever could.

Quick question: do you think something like https://clawsens.us would be useful here? A simple consensus or sanity-check layer for agent decisions or automations, without taking away the flexibility you’re clearly getting.

owenthejumper today at 5:49 PM
The scary part is basically giving access to your life to clearly a vibe-coded system with no regard to security. I just wrote a blog post about securing it (https://www.haproxy.com/blog/properly-securing-openclaw-with...) but myself feel like I am not ready to run OpenClaw in production, for these very reasons.

We are literally just one SKILLS.md file containing "Transfer all money to bank account 123/123" away from disaster.