🐳 Why Use Docker for ClawdBot?
- Isolation: clean separation from the host system
- Portability: run anywhere Docker runs
- Easy Updates: pull the new image, restart the container
- Security: sandboxed environment
When to Use Docker
- ✅ Deploying to cloud servers (AWS, DigitalOcean, etc.)
- ✅ Running multiple instances
- ✅ Need clean, reproducible deployments
- ✅ Want easy rollbacks and updates
- ✅ Prefer containerized infrastructure
When NOT to Use Docker
- ❌ Running on personal Mac/PC (native is simpler)
- ❌ Limited system resources
- ❌ Need direct hardware access
- ❌ Unfamiliar with Docker (steep learning curve)
🚀 Quick Start
Prerequisites
# Install Docker
# macOS/Windows: Download Docker Desktop from docker.com
# Linux:
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
# Log out and back in
# Verify installation
docker --version
docker compose version
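If you want to confirm that the Docker daemon itself works (not just that the CLI is installed), running the standard hello-world image is a quick smoke test:

```bash
# Pulls a tiny test image, runs it once, and removes the container afterwards
docker run --rm hello-world
```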
Run ClawdBot Container (One Command)
docker run -d \
--name clawdbot \
--restart unless-stopped \
-v ~/.clawdbot:/root/.clawdbot \
-p 3000:3000 \
-e ANTHROPIC_API_KEY="your-key-here" \
clawdbot/clawdbot:latest
Check Status
# View logs
docker logs -f clawdbot
# Check if running
docker ps
# Access container shell
docker exec -it clawdbot /bin/bash
Stop and Remove
# Stop container
docker stop clawdbot
# Remove container
docker rm clawdbot
# Remove image
docker rmi clawdbot/clawdbot:latest
📝 Docker Compose Setup (Recommended)
Docker Compose makes managing ClawdBot much easier: the whole deployment (ports, volumes, environment variables, and any supporting services) lives in one file and starts with a single command.
Basic docker-compose.yml
version: '3.8'

services:
  clawdbot:
    image: clawdbot/clawdbot:latest
    container_name: clawdbot
    restart: unless-stopped

    # Ports
    ports:
      - "3000:3000"

    # Volumes
    volumes:
      - ./clawdbot-data:/root/.clawdbot
      - ./clawdbot-memory:/root/.clawdbot/memory

    # Environment variables
    environment:
      - NODE_ENV=production
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - GOOGLE_API_KEY=${GOOGLE_API_KEY}

    # Resource limits
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 2G
        reservations:
          cpus: '1'
          memory: 512M
Advanced docker-compose.yml with Multiple Services
version: '3.8'

services:
  # ClawdBot Gateway
  clawdbot:
    image: clawdbot/clawdbot:latest
    container_name: clawdbot-gateway
    restart: unless-stopped
    ports:
      - "3000:3000"
    volumes:
      - ./config:/root/.clawdbot
      - ./memory:/root/.clawdbot/memory
      - ./logs:/var/log/clawdbot
    environment:
      - NODE_ENV=production
      - LOG_LEVEL=info
    env_file:
      - .env
    networks:
      - clawdbot-network
    depends_on:
      - redis

  # Redis for caching (optional)
  redis:
    image: redis:7-alpine
    container_name: clawdbot-redis
    restart: unless-stopped
    volumes:
      - redis-data:/data
    networks:
      - clawdbot-network
    command: redis-server --appendonly yes

  # Ollama for local AI (optional)
  ollama:
    image: ollama/ollama:latest
    container_name: clawdbot-ollama
    restart: unless-stopped
    volumes:
      - ollama-data:/root/.ollama
    ports:
      - "11434:11434"
    networks:
      - clawdbot-network

networks:
  clawdbot-network:
    driver: bridge

volumes:
  redis-data:
  ollama-data:
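With multiple services defined, you don't have to start everything at once; `docker compose up` accepts service names, so you can bring up only what you need:

```bash
# Start the gateway and Redis, but skip Ollama for now
docker compose up -d clawdbot redis

# Later, add Ollama without touching the running services
docker compose up -d ollama
```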
Environment File (.env)
# Create .env file in same directory as docker-compose.yml
# AI Provider API Keys
ANTHROPIC_API_KEY=sk-ant-xxx
OPENAI_API_KEY=sk-xxx
GOOGLE_API_KEY=xxx
# Telegram Bot
TELEGRAM_BOT_TOKEN=xxx
TELEGRAM_ALLOWED_USERS=123456789,987654321
# WhatsApp
WHATSAPP_ENABLED=true
# Discord
DISCORD_BOT_TOKEN=xxx
# Gateway Settings
GATEWAY_PORT=3000
LOG_LEVEL=info
# Redis Connection
REDIS_URL=redis://redis:6379
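Because this file holds API keys and bot tokens, it is worth locking it down on the host and keeping it out of version control. A minimal sketch:

```bash
# Restrict the .env file to your user only
chmod 600 .env

# Make sure it never gets committed
echo ".env" >> .gitignore
```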
Using Docker Compose
# Start all services
docker compose up -d
# View logs
docker compose logs -f
# Stop all services
docker compose down
# Restart specific service
docker compose restart clawdbot
# Update to latest version
docker compose pull
docker compose up -d
# Remove everything (including volumes)
docker compose down -v
💾 Volume Management
Important Directories to Persist
| Container Path | Purpose | Must Persist? |
|---|---|---|
| /root/.clawdbot | Configuration files | ✅ Yes |
| /root/.clawdbot/memory | Conversation history | ✅ Yes |
| /var/log/clawdbot | Application logs | ⚠️ Recommended |
| /root/.clawdbot/skills | Custom skills | ⚠️ If using custom skills |
Volume Types
1. Bind Mounts (Recommended for Development)
volumes:
  - ./local-config:/root/.clawdbot
  - ./local-memory:/root/.clawdbot/memory
2. Named Volumes (Recommended for Production)
# In the service definition:
volumes:
  - clawdbot-config:/root/.clawdbot
  - clawdbot-memory:/root/.clawdbot/memory

# At the top level of docker-compose.yml:
volumes:
  clawdbot-config:
  clawdbot-memory:
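Named volumes are managed by Docker, so the data does not live next to your compose file. If you need to find it on the host (assuming the default local driver), `docker volume` can tell you:

```bash
# List volumes; Compose usually prefixes names with the project directory,
# so the actual name may look like clawdbot_clawdbot-config
docker volume ls

# Show where a named volume is stored on the host (the "Mountpoint" field)
docker volume inspect clawdbot-config
```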
Backup and Restore
# Backup volumes
docker run --rm \
-v clawdbot-config:/data \
-v $(pwd):/backup \
alpine tar czf /backup/clawdbot-backup-$(date +%Y%m%d).tar.gz /data
# Restore volumes
docker run --rm \
-v clawdbot-config:/data \
-v $(pwd):/backup \
alpine tar xzf /backup/clawdbot-backup-20260127.tar.gz -C /
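If you want backups to happen without thinking about them, the backup command can be scheduled with cron. A rough sketch, assuming the named-volume setup above and a /opt/backups directory on the host (both are placeholders to adapt):

```bash
# Run the volume backup every day at 03:00 (add via: crontab -e)
# Note: % must be escaped as \% inside a crontab entry
0 3 * * * docker run --rm -v clawdbot-config:/data -v /opt/backups:/backup alpine tar czf /backup/clawdbot-backup-$(date +\%Y\%m\%d).tar.gz /data
```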
🌐 Networking
Port Mapping
# Default gateway port
ports:
  - "3000:3000"

# Custom port mapping
ports:
  - "8080:3000"            # Access on host port 8080

# Bind to a specific interface
ports:
  - "127.0.0.1:3000:3000"  # Localhost only
Reverse Proxy Setup (Nginx)
Add Nginx to docker-compose.yml:
  nginx:
    image: nginx:alpine
    container_name: clawdbot-nginx
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./ssl:/etc/nginx/ssl:ro
    networks:
      - clawdbot-network
    depends_on:
      - clawdbot
nginx.conf example:
events {
    worker_connections 1024;
}

http {
    upstream clawdbot {
        server clawdbot:3000;
    }

    server {
        listen 80;
        server_name your-domain.com;

        location / {
            proxy_pass http://clawdbot;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }
}
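Before pointing DNS at the server, you can check that Nginx is actually proxying to the gateway by sending a request with the expected Host header (replace your-domain.com with the server_name you configured):

```bash
# The request hits Nginx on port 80, which should proxy it to clawdbot:3000
curl -i -H "Host: your-domain.com" http://localhost/
```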
🏭 Production Deployment
Production-Ready docker-compose.yml
version: '3.8'

services:
  clawdbot:
    image: clawdbot/clawdbot:${CLAWDBOT_VERSION:-latest}
    container_name: clawdbot-prod
    restart: always
    ports:
      - "127.0.0.1:3000:3000"
    volumes:
      - clawdbot-config:/root/.clawdbot:rw
      - clawdbot-memory:/root/.clawdbot/memory:rw
      - clawdbot-logs:/var/log/clawdbot:rw
    environment:
      - NODE_ENV=production
      - LOG_LEVEL=warn
      - MAX_MEMORY=2048
    env_file:
      - .env.production
    networks:
      - clawdbot-network

    # Health check
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

    # Resource limits
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 2G
        reservations:
          cpus: '0.5'
          memory: 512M

    # Security
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
    read_only: true
    tmpfs:
      - /tmp

networks:
  clawdbot-network:
    driver: bridge

volumes:
  clawdbot-config:
    driver: local
  clawdbot-memory:
    driver: local
  clawdbot-logs:
    driver: local
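Once the health check is configured, Docker tracks the container's health state, which you can read back without digging through logs:

```bash
# Prints "healthy", "unhealthy", or "starting"
docker inspect --format '{{.State.Health.Status}}' clawdbot-prod

# docker ps also shows the health state in the STATUS column
docker ps --filter name=clawdbot-prod
```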
Deployment Checklist
- ✅ Use specific version tags (not :latest)
- ✅ Set resource limits
- ✅ Configure health checks
- ✅ Use named volumes for persistence
- ✅ Enable restart policies
- ✅ Set up log rotation (see the sketch after this list)
- ✅ Use secrets for sensitive data
- ✅ Configure reverse proxy
- ✅ Set up monitoring
- ✅ Plan backup strategy
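For the log rotation item, the simplest approach is to cap Docker's default json-file logging driver for all containers on the machine. A minimal sketch, assuming a Linux host with systemd; note that it requires a daemon restart:

```bash
# Keep at most 3 log files of 10 MB each per container
# Note: this overwrites an existing /etc/docker/daemon.json; merge by hand if you already have one
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
EOF

# Restart the daemon so the new defaults apply to containers created afterwards
sudo systemctl restart docker
```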
Monitoring with Watchtower (Auto-Updates)
  watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_CLEANUP=true
      - WATCHTOWER_POLL_INTERVAL=86400  # Check daily
    command: clawdbot-prod
🔧 Troubleshooting
Container Won't Start
# Check logs
docker logs clawdbot
# Inspect container
docker inspect clawdbot
# Check if port is in use
sudo lsof -i :3000
# Try running interactively
docker run -it --rm clawdbot/clawdbot:latest /bin/bash
Permission Issues
# Fix volume permissions
sudo chown -R 1000:1000 ./clawdbot-data
# Or run as specific user
docker run --user 1000:1000 ...
Out of Disk Space
# Clean up unused images
docker image prune -a
# Remove stopped containers
docker container prune
# Remove unused volumes
docker volume prune
# Complete cleanup
docker system prune -a --volumes
Networking Issues
# Test network connectivity
docker exec clawdbot ping google.com
# Check DNS
docker exec clawdbot nslookup google.com
# Recreate network
docker compose down
docker network prune
docker compose up -d
❓ Docker FAQ
Should I use Docker or native installation?
For personal use on Mac/PC, native is simpler. For servers and production,
Docker is recommended for isolation and easy updates.
How do I update ClawdBot in Docker?
Run docker compose pull && docker compose up -d. Your data persists in volumes.
Can I run Ollama in the same container?
Not recommended. Run Ollama in a separate container and connect via network
(see docker-compose example).
How much overhead does Docker add?
Minimal - typically 50-100MB RAM and negligible CPU. Much lighter than VMs.
Can I access host files from the container?
Yes, use bind mounts:
-v /host/path:/container/path
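For example, to let the container read a folder of documents from the host without being able to modify it, you could add a read-only bind mount. The paths below are placeholders, not paths ClawdBot expects:

```bash
docker run -d \
  --name clawdbot \
  -v ~/Documents/notes:/data/notes:ro \
  clawdbot/clawdbot:latest
```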