Tuesday. Scrolling Hacker News. Top of the front page: some project called ClawdBot by Peter Steinberger. Never heard of either.
Clicked through. It’s not another chatbot. It’s an AI agent that runs on your machine with full control. Mouse, keyboard, shell access, browser. It can book restaurants, manage your emails, do research, automate workflows. You talk to it through WhatsApp or Telegram, and it just… does things on your computer.
The reason it was trending: Anthropic asked Pete to rename it. ClawdBot was too close to Claude. So it became MoltBot. By the time I started building, it had been renamed again: OpenClaw. Third name in a week. The lobster keeps molting.
I couldn’t stop thinking about it.
The security thing
The comments were predictable. “This is a security nightmare.” “Who would give an AI full shell access?” The usual.
And honestly, they’re not wrong. Running an autonomous agent with full permissions on your main machine is, as Pete himself put it, “spicy.” People have found exposed installations with visible credentials. It’s real.
But this is a solvable problem. You don’t run it on your main machine. You run it in an isolated environment with controlled access. Proper sandboxing, network policies, container isolation. I’ve spent years doing exactly this - hosting software securely for other people, most recently on Binance’s blockchain infrastructure team. When I see something like OpenClaw, my first thought isn’t “this is dangerous.” It’s “this needs the right infrastructure.”
By Thursday I knew I was going to build it.
MoltCloud
Friday night I started. Saturday afternoon I had a working hosted version. Kubernetes, multi-tenant isolation, auth, billing - the whole stack.
I’m writing this from Center Parcs De Huttenheugte. My family comes here every last weekend of January; we’ve been doing that for 30 years now. So somewhere between family dinner and a walk in the forest, I shipped a hosted AI agent platform.
The name? When I decided to build this on Thursday, it was still called MoltBot. MoltCloud was obvious. By Friday it had become OpenClaw, so the name is already legacy. Kind of funny - named after a name that was retired before I even shipped.
First test was messaging my brothers through the bot. I had it send them reminders, look things up, annoy them with fun facts. They tolerated it.
What MoltCloud is
OpenClaw is powerful but raw. You have to self-host it, configure it, and secure it yourself. Most people won’t do that.
MoltCloud is the hosted version. You sign up, scan a QR code, and you have an AI agent running in an isolated environment. No setup. No server. No worrying about security. Just WhatsApp and an AI that can actually do things for you.
Under the hood
For those who care about how the isolation actually works:
Each user gets their own container running in Kubernetes. Not shared. Not tenancy faked at the application layer. Actual container isolation with strict NetworkPolicies.
The NetworkPolicy is aggressive. Each instance can only talk to: WhatsApp’s servers (ports 5222/5223), HTTPS endpoints (443), and DNS. That’s it. No access to other pods. No access to the cluster network. No access to the metadata service. Egress to internal IPs is explicitly blocked.
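If you’re curious what that looks like in practice, here’s roughly how a policy like that can be built with the Kubernetes Go types. It’s a sketch, not the production policy - the labels and the exact CIDR carve-outs are illustrative:

```go
package operator

import (
	corev1 "k8s.io/api/core/v1"
	netv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// buildNetworkPolicy returns a deny-by-default egress policy for one instance:
// DNS, HTTPS, and WhatsApp's 5222/5223, and nothing else. Private ranges and
// the metadata IP are carved out of the allowed block, so other pods, the
// cluster network, and the metadata service stay unreachable.
func buildNetworkPolicy(ns, instance string) *netv1.NetworkPolicy {
	tcp, udp := corev1.ProtocolTCP, corev1.ProtocolUDP
	port := func(p int) *intstr.IntOrString { v := intstr.FromInt(p); return &v }

	publicOnly := []netv1.NetworkPolicyPeer{{
		IPBlock: &netv1.IPBlock{
			CIDR: "0.0.0.0/0",
			Except: []string{ // illustrative: RFC 1918 ranges + metadata service
				"10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "169.254.169.254/32",
			},
		},
	}}

	return &netv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: instance, Namespace: ns},
		Spec: netv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"app": instance},
			},
			PolicyTypes: []netv1.PolicyType{netv1.PolicyTypeEgress},
			Egress: []netv1.NetworkPolicyEgressRule{
				{ // DNS (the cluster resolver has to stay reachable)
					Ports: []netv1.NetworkPolicyPort{
						{Protocol: &udp, Port: port(53)},
						{Protocol: &tcp, Port: port(53)},
					},
				},
				{ // HTTPS plus WhatsApp's ports, public internet only
					To: publicOnly,
					Ports: []netv1.NetworkPolicyPort{
						{Protocol: &tcp, Port: port(443)},
						{Protocol: &tcp, Port: port(5222)},
						{Protocol: &tcp, Port: port(5223)},
					},
				},
			},
		},
	}
}
```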
The pods run as non-root with a read-only filesystem, all Linux capabilities dropped, and seccomp enabled.
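In manifest terms that boils down to a security context along these lines (illustrative values; the real manifests pin more, like a specific UID):

```go
package operator

import corev1 "k8s.io/api/core/v1"

// hardenedSecurityContext mirrors the settings above: non-root, read-only
// root filesystem, every capability dropped, runtime-default seccomp, plus
// the usual no-privilege-escalation flag.
func hardenedSecurityContext() *corev1.SecurityContext {
	yes, no := true, false
	return &corev1.SecurityContext{
		RunAsNonRoot:             &yes,
		ReadOnlyRootFilesystem:   &yes,
		AllowPrivilegeEscalation: &no,
		Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
		SeccompProfile:           &corev1.SeccompProfile{Type: corev1.SeccompProfileTypeRuntimeDefault},
	}
}
```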
I wrote a Kubernetes operator in Go that handles the lifecycle. When you sign up, the platform creates a custom resource for your instance. The operator watches for that and spins up a Deployment, two PVCs (config + workspace), a Service, a ConfigMap with a cryptographically random gateway token, and the NetworkPolicy. It’s reconciliation-based, so if anything drifts, it self-heals.
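Here’s a trimmed-down sketch of that loop with controller-runtime. The Instance type and its package path are stand-ins, not the real code, and only the ConfigMap child is shown - the Deployment, PVCs, Service, and NetworkPolicy follow the same create-and-own pattern:

```go
package operator

import (
	"context"
	"crypto/rand"
	"encoding/hex"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"

	moltv1 "example.local/moltcloud/api/v1" // hypothetical CRD package
)

type InstanceReconciler struct {
	client.Client
	Scheme *runtime.Scheme
}

func (r *InstanceReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// The custom resource created at signup is the source of truth for one user's instance.
	var inst moltv1.Instance
	if err := r.Get(ctx, req.NamespacedName, &inst); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// One child resource shown here: the ConfigMap carrying the gateway token.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: inst.Name + "-gateway", Namespace: inst.Namespace},
		Data:       map[string]string{"GATEWAY_TOKEN": newGatewayToken()},
	}
	// Owning the child means deleting the CR garbage-collects everything it owns.
	if err := controllerutil.SetControllerReference(&inst, cm, r.Scheme); err != nil {
		return ctrl.Result{}, err
	}
	if err := r.Create(ctx, cm); err != nil && !apierrors.IsAlreadyExists(err) {
		return ctrl.Result{}, err
	}
	// The real loop also patches existing children back to the desired state,
	// which is what makes drifted resources self-heal.
	return ctrl.Result{}, nil
}

// newGatewayToken returns a 256-bit random token, hex encoded.
func newGatewayToken() string {
	b := make([]byte, 32)
	if _, err := rand.Read(b); err != nil {
		panic(err)
	}
	return hex.EncodeToString(b)
}
```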
Authentication is magic links via Lucia, 256-bit tokens, 15-minute expiry. Sessions stored in Postgres. No passwords to leak.
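Lucia handles this on the TypeScript side, but the scheme itself is simple enough to sketch. Something like this, in Go for consistency with the operator code - made-up names, not the real implementation:

```go
package magiclink

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"time"
)

type link struct {
	tokenHash [32]byte // only the hash is stored server-side
	email     string
	expiresAt time.Time
}

// issue creates a single-use login token: the raw value goes into the emailed
// URL, the hash and expiry go into the database (Postgres in MoltCloud's case).
func issue(email string) (rawToken string, record link) {
	b := make([]byte, 32) // 256 bits of randomness
	if _, err := rand.Read(b); err != nil {
		panic(err)
	}
	rawToken = base64.RawURLEncoding.EncodeToString(b)
	return rawToken, link{
		tokenHash: sha256.Sum256(b),
		email:     email,
		expiresAt: time.Now().Add(15 * time.Minute),
	}
}

// redeem checks the clicked token against the stored record.
func redeem(rawToken string, record link) bool {
	b, err := base64.RawURLEncoding.DecodeString(rawToken)
	if err != nil || time.Now().After(record.expiresAt) {
		return false
	}
	return sha256.Sum256(b) == record.tokenHash
}
```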
The WhatsApp connection stays inside your container. Your session, your pod, your isolation. Messages aren’t logged, stored, or transmitted to the platform.
Is it perfect? No. I have cluster admin access, so I could technically kubectl exec into your pod. But the system is designed so that never happens in normal operation, and there’s no code path for messages to leave your container. Prompt injection is still an unsolved problem industry-wide. But the infrastructure isolation is real.
What’s next
I think OpenClaw can become something like the Linux of AI. Linux still needs a CPU. OpenClaw still needs an LLM. But the layer in between - the runtime, the interface, the agent framework - that can be open, community-driven, and not owned by any single company.
If that’s the future, someone needs to build the infrastructure. The most secure, scalable way to deploy OpenClaw instances. That’s what I’m trying to build.
Right now I just want to see if this resonates. If people actually use it. If the thing I couldn’t stop thinking about on Tuesday is something others want too.
If you want early access, drop me a line at jannes@moltcloud.io.