nanoclaw

Agent-to-messaging bridge. Connects Claude Code agents to WhatsApp/Telegram/Discord groups, with scheduled task execution.

Goals

Make the Claude Code agent reachable from a phone: sending a WhatsApp message can kick off a pipeline run, get a response back, or set a reminder, all without building or hosting any of the messaging infrastructure ourselves.

Effectiveness

Works. This entire conversation is running through nanoclaw. Message delivery is reliable; scheduled tasks fire on time; the send_message tool lets the agent push updates mid-run rather than waiting until the end. The group-registration model and the multi-channel design (WhatsApp, Telegram, and Discord under one interface) are clean.

Bonus utility

The list_tasks / pause_task / resume_task / cancel_task tools make the scheduler inspectable and controllable from within the same conversation. No separate admin interface needed.

Friction / pain points / surprises

send_message is the only way to communicate with teammates. Plain text output from a sub-agent is not visible to the team lead or to other agents. This is easy to forget: a sub-agent can produce extensive reasoning, and all of it disappears unless the agent explicitly calls send_message. Understanding this messaging model required re-reading the skill documentation.

Timestamps for once schedules must omit the Z suffix: pass a local time with no UTC marker. A UTC timestamp is accepted but silently schedules the task at the wrong time. The documentation says this, but it's easy to reach for full ISO-8601 format out of habit.
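A small formatter makes the habit harder to fall into. A minimal sketch: the helper name is mine, and it assumes only what is described above (local wall-clock time, no Z or offset suffix):

```typescript
// Minimal helper (name is mine): format a Date as local wall-clock time
// with no "Z" or offset suffix, the shape once schedules expect.
function toLocalTimestamp(d: Date): string {
  const pad = (n: number) => String(n).padStart(2, "0");
  return (
    `${d.getFullYear()}-${pad(d.getMonth() + 1)}-${pad(d.getDate())}` +
    `T${pad(d.getHours())}:${pad(d.getMinutes())}:${pad(d.getSeconds())}`
  );
}

// new Date().toISOString() ends in "Z" (UTC) and would be silently wrong here.
// toLocalTimestamp(new Date(2025, 0, 1, 9, 30, 0)) → "2025-01-01T09:30:00"
```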

No visibility into whether a scheduled task's last run succeeded or failed. list_tasks shows the task exists and its next run time; it doesn't show the outcome of the previous run. Diagnosing a task that "ran but did nothing" requires reading the task's output file directly.

Scheduled task agents have a turn limit that kills the subprocess tree. A task agent wrapping a long-running script (e.g. axe run podcasterbun src/pipeline.ts) gets killed when it hits the turn budget, taking all child processes with it. The pipeline leaves no error trace — just missing checkpoint entries and a 7-hour mystery. Fix: run deterministic scripts directly in the task prompt (bun src/pipeline.ts), not through an LLM intermediary that counts turns while waiting. If the task requires no judgment, it requires no agent.

HTTP_PROXY is injected into all agent environments. NanoClaw sets HTTP_PROXY=http://x:...@host.docker.internal:10255, routing all fetch calls through its proxy. Internal services (Antfly at host.docker.internal:8080, Ollama at 172.17.0.1:11434) return 400 through the proxy — silently, with an empty error body. Bun 1.2.4 reads proxy settings at process startup; setting process.env.NO_PROXY at runtime has no effect. Fix: export NO_PROXY="host.docker.internal,127.0.0.1,localhost,172.17.0.1" in .secrets, sourced before any bun invocation.
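Because Bun reads proxy settings at startup, a misconfigured environment can't be patched at runtime, so a preflight check at the top of the pipeline catches it early. A sketch, assuming the same host list as the fix above; the helper name is mine:

```typescript
// Hosts that must bypass the injected proxy (mirrors the NO_PROXY fix above).
const INTERNAL_HOSTS = ["host.docker.internal", "127.0.0.1", "localhost", "172.17.0.1"];

// Return the internal hosts NOT covered by NO_PROXY when a proxy is set.
// Run before any fetch: by the time a request fails with an empty 400,
// the process has already read the proxy env and it is too late to fix.
function missingNoProxyEntries(env: Record<string, string | undefined>): string[] {
  if (!env.HTTP_PROXY && !env.HTTPS_PROXY) return []; // no proxy, nothing to bypass
  const noProxy = (env.NO_PROXY ?? "").split(",").map((h) => h.trim());
  return INTERNAL_HOSTS.filter((h) => !noProxy.includes(h));
}

const missing = missingNoProxyEntries(process.env);
if (missing.length > 0) {
  console.error(`NO_PROXY is missing internal hosts: ${missing.join(", ")}`);
}
```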