SlimClaw: A Personal AI Assistant You Can Set Up in 5 Minutes
I've been running NanoClaw for a while. It's a great project — a minimal Claude assistant on WhatsApp, built in TypeScript with container isolation. But every time I wanted to tweak something or debug an issue, I found myself fighting the Node.js runtime: 100 MB idle, native SQLite addons that break across versions, and a setup flow that assumed you had Claude Code installed.
Around the same time, nanobot showed that a personal AI agent doesn't need a complex architecture — pip install nanobot-ai and you're running. No Docker required, no IDE required, just Python and a config file.
I wanted both: NanoClaw's container isolation and per-group memory, with nanobot's simplicity of setup. So I built SlimClaw.
What is it
SlimClaw is a personal Claude assistant accessible via WhatsApp. You message it from your phone, and it responds using Claude's API, with agent execution happening inside isolated Docker containers. Each group gets its own filesystem, memory, and conversation history.
The entire thing is a single Python process. An asyncio event loop runs three tasks: a message poller (2s), an IPC watcher (1s), and a task scheduler (60s). That's it.
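The loop above can be sketched in a few lines. This is a minimal illustration, not SlimClaw's actual code: the handler names (`poll_messages`, `watch_ipc`, `run_scheduler`) are hypothetical, and the intervals are shortened so the demo finishes instantly.

```python
import asyncio

async def every(seconds: float, handler, *, max_ticks=None):
    """Run `handler` on a fixed interval (bounded here for the demo)."""
    ticks = 0
    while max_ticks is None or ticks < max_ticks:
        await handler()
        ticks += 1
        await asyncio.sleep(seconds)

async def main():
    calls = []
    async def poll_messages(): calls.append("poll")   # 2 s in SlimClaw
    async def watch_ipc():     calls.append("ipc")    # 1 s
    async def run_scheduler(): calls.append("sched")  # 60 s

    # Three periodic tasks sharing one event loop, one process.
    await asyncio.gather(
        every(0.01,  poll_messages, max_ticks=2),
        every(0.005, watch_ipc,     max_ticks=2),
        every(0.02,  run_scheduler, max_ticks=1),
    )
    return calls

result = asyncio.run(main())
print(result)
```

Because everything is cooperative, there are no locks between the three tasks; each handler runs to completion before yielding back to the loop.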
Setup in one command
The thing I cared about most was onboarding. NanoClaw requires Claude Code for setup (/setup skill). nanobot requires editing a JSON config file. I wanted something in between — an interactive wizard that works in any terminal.
pip install slimclaw
slimclaw-setup
The wizard walks you through 13 steps, including: name your bot (default: TARS), choose your app (WhatsApp, or suggest a skill for Telegram/Discord), check dependencies, build the container, authenticate WhatsApp (a QR code opens in your browser), pick a Claude model (Haiku, Sonnet, or Opus), register your main channel, and start the service.
No AI IDE needed. No JSON config files. If you have Claude Code, you can also run /setup for an AI-guided experience that troubleshoots errors for you — but it's optional.
What I learned building it
Bridging Python's asyncio with a Go runtime (neonize/whatsmeow) was the hardest part. Go threads don't respect Python's thread-local storage, so every database call from a callback crashed until I dispatched everything through call_soon_threadsafe. The Go runtime also ignores os._exit() — the only way to terminate the process after authentication is SIGKILL.
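The fix looks roughly like this. A sketch only: the foreign thread here is a plain Python thread standing in for a Go-runtime callback, and the names are illustrative. The point is that the callback never touches the database directly; it hands work to the event loop via `call_soon_threadsafe`.

```python
import asyncio
import sqlite3
import threading

# Connection created on the main thread, where the event loop will run.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE messages (body TEXT)")

async def main():
    loop = asyncio.get_running_loop()
    done = asyncio.Event()

    def store(body):
        # Runs on the event-loop thread, where the connection lives.
        db.execute("INSERT INTO messages VALUES (?)", (body,))
        done.set()

    def foreign_callback(body):
        # Simulates a callback arriving from a foreign (Go) thread:
        # it may NOT touch `db`, only hand work back to the loop.
        loop.call_soon_threadsafe(store, body)

    threading.Thread(target=foreign_callback, args=("hello",)).start()
    await done.wait()
    return db.execute("SELECT body FROM messages").fetchone()[0]

result = asyncio.run(main())
print(result)  # hello
```

`call_soon_threadsafe` is the one asyncio entry point documented as safe to call from another thread, which is why everything funnels through it.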
The concurrency model needed careful thought. asyncio.create_task schedules but doesn't execute immediately, which creates a window where two callers can both see a resource as "free" and spawn duplicate work. The fix is simple once you see it: claim the resource at scheduling time, not execution time. The same pattern showed up three times — in message processing, task scheduling, and container lifecycle management.
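The claim-at-scheduling-time pattern can be shown in miniature. This is a sketch with hypothetical names (`busy`, `process_group`), not SlimClaw's API: the busy set is updated synchronously, before `create_task` returns, so a second caller racing in the same loop iteration cannot double-schedule the resource.

```python
import asyncio

busy: set[str] = set()
runs: list[str] = []

def schedule(group: str):
    if group in busy:          # already claimed: skip duplicate work
        return None
    busy.add(group)            # claim NOW, at scheduling time
    async def process_group():
        try:
            runs.append(group)
            await asyncio.sleep(0)   # simulated work
        finally:
            busy.discard(group)      # release only when done
    return asyncio.create_task(process_group())

async def main():
    # Two callers race to schedule the same group; only one wins.
    t1 = schedule("family-chat")
    t2 = schedule("family-chat")
    assert t1 is not None and t2 is None
    await t1

asyncio.run(main())
print(runs)  # ['family-chat']
```

Had the claim lived inside `process_group`, both callers would have passed the `if group in busy` check before either task ran.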
The onboarding taught me the most. Every assumption I made about what users know was wrong. "Main channel" means nothing without explaining it's the private chat where you admin the bot. JIDs are internal identifiers that should never be shown. "Channel" should be "app." The best setup flow is one where the user never has to edit a config file or run SQL.
Design decisions
Skills over Features
Instead of building every integration into the codebase, SlimClaw uses Claude Code skills — markdown files in .claude/skills/ that teach Claude Code how to transform the project. Want Telegram? Run /add-telegram and the AI agent writes the actual integration code. If a skill doesn't exist, the setup wizard tells you how to contribute one. The core stays small.
Andrej Karpathy noticed this pattern in NanoClaw and called it "a new, AI-enabled approach to preventing config mess and if-then-else monsters." His framing: "the implied new meta is to write the most maximally forkable repo and then have skills that fork it into any desired more exotic configuration." That's exactly what SlimClaw is designed around — a minimal kernel that grows through skills, not feature flags.
Modular app system
SlimClaw started as WhatsApp-only. Now the app system is fully modular — adding support for any messaging platform (Telegram, Discord, Slack, Signal) means creating a single Python file in channels/. No changes to main.py or core code. The registry auto-discovers app classes at startup using pkgutil.iter_modules, and apps with missing dependencies are silently skipped.
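A discovery loop of this shape can be built on `pkgutil.iter_modules` plus a tolerant import. This is a self-contained sketch, assuming each module in `channels/` exposes an `App` class; the names are illustrative, and the demo fabricates a throwaway package so it can run anywhere.

```python
import importlib
import os
import pkgutil
import sys
import tempfile

def discover_apps(package):
    """Import every module in `package`; skip ones whose deps are missing."""
    apps = {}
    for info in pkgutil.iter_modules(package.__path__):
        try:
            mod = importlib.import_module(f"{package.__name__}.{info.name}")
        except ImportError:
            continue  # optional dependency not installed: skip silently
        cls = getattr(mod, "App", None)
        if cls is not None:
            apps[info.name] = cls
    return apps

# --- demo: build a throwaway channels/ package on disk ---
root = tempfile.mkdtemp()
pkg = os.path.join(root, "channels")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "whatsapp.py"), "w") as f:
    f.write("class App:\n    name = 'whatsapp'\n")
with open(os.path.join(pkg, "broken.py"), "w") as f:
    f.write("import definitely_not_installed\n")  # missing optional dep

sys.path.insert(0, root)
import channels
found = discover_apps(channels)
print(sorted(found))  # ['whatsapp'] -- 'broken' was skipped
```

The silent skip is what makes dependencies optional: a Telegram module can `import telethon` at the top, and machines without it simply never see a Telegram app.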
There's even an LLM-readable skill (/add-app-template) that teaches any AI agent how to create a new app integration — the Channel protocol, JID convention, message handler pattern, credential handling, and optional dependency setup. The goal: someone can instruct an LLM to add support for any app without reading the source.
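For a flavor of what such a protocol might look like, here is a hypothetical sketch using `typing.Protocol`. The member names are invented for illustration; the real interface is defined in SlimClaw's source and the `/add-app-template` skill.

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Channel(Protocol):
    """Hypothetical shape of an app integration (names illustrative)."""
    name: str

    async def send(self, jid: str, text: str) -> None:
        """Deliver a message to the chat identified by `jid`."""
        ...

# A new app just has to match the shape -- no registration call needed.
class WhatsAppChannel:
    name = "whatsapp"
    async def send(self, jid: str, text: str) -> None:
        pass  # real implementation would talk to the bridge

ok = isinstance(WhatsAppChannel(), Channel)  # structural check
print(ok)  # True
```

Structural typing is a good fit for auto-discovery: the registry only needs classes to match the shape, not to inherit from a base class.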
Group management
When someone mentions @TARS in a WhatsApp group that isn't registered, SlimClaw notifies your main channel instead of silently ignoring it. You reply "join Family Chat" and the group is live. Want to remove a group? Say "unregister Family Chat" — handled through the same file-based IPC with authorization checks (only the main channel can register or unregister groups).
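The authorization gate on that IPC path is simple in principle. Below is a minimal sketch under the assumption that commands arrive as JSON files in a spool directory; the paths, field names, and `MAIN_CHANNEL` value are all illustrative, not SlimClaw's actual format.

```python
import json
import tempfile
from pathlib import Path

MAIN_CHANNEL = "me@s.whatsapp.net"  # illustrative JID

def handle_command(path: Path, registered: set[str]) -> str:
    cmd = json.loads(path.read_text())
    if cmd["sender"] != MAIN_CHANNEL:
        return "denied"  # only the main channel may (un)register groups
    if cmd["action"] == "register":
        registered.add(cmd["group"])
    elif cmd["action"] == "unregister":
        registered.discard(cmd["group"])
    path.unlink()  # consume the command file
    return "ok"

# demo: the main channel registers a group via a spooled command file
spool = Path(tempfile.mkdtemp())
groups: set[str] = set()
f = spool / "cmd1.json"
f.write_text(json.dumps({"sender": MAIN_CHANNEL,
                         "action": "register",
                         "group": "Family Chat"}))
status = handle_command(f, groups)
print(status, sorted(groups))  # ok ['Family Chat']
```

Checking the sender before acting means an unregistered group can trigger a notification but never mutate state itself.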
Benchmarks
| Metric | SlimClaw | NanoClaw |
|---|---|---|
| Idle RSS | 30.2 MB | 100.3 MB |
| Final RSS | 54.1 MB | 138.9 MB |
| SQLite insert (10K) | 482 ms | 52 ms |
| SQLite query (10K) | 35 ms | 7.6 ms |
| Dependencies | 6 | 9 |
| Source lines | 4,860 | 6,650 |
Python uses about a third of the memory at idle and well under half after warm-up. Node.js is several times faster at SQLite (better-sqlite3 is a native C++ addon). Both run the same Docker containers for agent execution.
Try it
Requirements: Python 3.11+, Docker, macOS or Linux. The setup wizard handles everything else.
If you're running OpenClaw or NanoClaw and want something lighter, or if you tried nanobot and want container isolation, SlimClaw sits in between. It's ~4,900 lines of Python, 6 dependencies, a modular app system that any LLM can extend, and a setup flow that doesn't assume you have an AI IDE installed.