# Docker Single Node
This guide covers deploying Astromesh as a single Docker container or with Docker Compose, using the pre-built image. No source code checkout is required.
## What and Why

The Docker single-node deployment packages Astromesh into a container that starts with sensible defaults and can be configured entirely through environment variables and volume mounts. This is the right choice when you want:
- A containerized deployment without Kubernetes complexity
- Quick setup with no build step
- Isolated runtime with reproducible behavior
- Easy integration with existing Docker infrastructure
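The same image can also run as a single container without Compose. A minimal sketch (the flags mirror the Compose file later in this guide, and assume an Ollama server reachable from the container at `host.docker.internal`):

```sh
# Minimal single-container run (no Compose); assumes Ollama on the host
docker run -d \
  --name astromesh \
  -p 8000:8000 \
  -e ASTROMESH_ROLE=full \
  -e OLLAMA_HOST=http://host.docker.internal:11434 \
  -v astromesh-data:/var/lib/astromesh \
  --restart unless-stopped \
  ghcr.io/monaccode/astromesh:0.10.0
```

The Compose setup below is preferred for anything beyond a quick trial, since it also manages the Ollama sidecar.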
## Prerequisites

| Requirement | Version | Check command |
|---|---|---|
| Docker | 24.0+ | `docker --version` |
| Docker Compose | v2.20+ | `docker compose version` |
| Network | Outbound access to an LLM provider, or a local Ollama server | — |
## Step-by-step Setup

### 1. Create a project directory

```sh
mkdir astromesh-deploy && cd astromesh-deploy
```
### 2. Create a Docker Compose file

Create `docker-compose.yml`:

```yaml
# Astromesh Single-Node Deployment
services:
  astromesh:
    image: ghcr.io/monaccode/astromesh:0.10.0
    ports:
      - "8000:8000"
    environment:
      - ASTROMESH_ROLE=full
      - OLLAMA_HOST=http://ollama:11434
    depends_on:
      ollama:
        condition: service_started
    volumes:
      - astromesh-data:/var/lib/astromesh
    restart: unless-stopped

  ollama:
    image: ollama/ollama:latest
    volumes:
      - ollama-models:/root/.ollama
    restart: unless-stopped

volumes:
  astromesh-data:
  ollama-models:
```
### 3. Start the stack

```sh
docker compose up -d
```

Expected output:

```
[+] Running 3/3
 ✔ Network astromesh-deploy_default        Created
 ✔ Container astromesh-deploy-ollama-1     Started
 ✔ Container astromesh-deploy-astromesh-1  Started
```
### 4. Pull a model into Ollama

```sh
docker compose exec ollama ollama pull llama3.1:8b
```

Expected output:

```
pulling manifest
pulling 8eeb52dfb3bb... 100% |████████████████████| 4.7 GB
verifying sha256 digest
writing manifest
success
```
### 5. Verify

```sh
curl http://localhost:8000/health
```

Expected output:

```json
{
  "status": "healthy",
  "version": "0.10.0"
}
```
## Configuration

### Environment variables

The Astromesh container image includes an entrypoint that generates configuration from environment variables at startup, so you can configure the entire runtime without mounting config files.
| Variable | Default | Description |
|---|---|---|
| `ASTROMESH_ROLE` | `full` | Service profile: `full`, `gateway`, `worker`, `inference` |
| `ASTROMESH_PORT` | `8000` | API server port |
| `ASTROMESH_LOG_LEVEL` | `info` | Log level: `debug`, `info`, `warning`, `error` |
| `OLLAMA_HOST` | `http://localhost:11434` | Ollama endpoint URL |
| `OPENAI_API_KEY` | — | OpenAI API key |
| `OPENAI_ENDPOINT` | `https://api.openai.com/v1` | OpenAI-compatible endpoint |
| `DATABASE_URL` | — | PostgreSQL connection string |
| `REDIS_URL` | — | Redis connection string |
| `ASTROMESH_AUTO_CONFIG` | `true` | Generate config from env vars on startup |
### Entrypoint config generation

When `ASTROMESH_AUTO_CONFIG=true` (the default), the container entrypoint:

- Reads `ASTROMESH_ROLE` to select which services to enable
- Detects provider env vars (`OLLAMA_HOST`, `OPENAI_API_KEY`) and generates `providers.yaml`
- Generates `runtime.yaml` with the selected profile
- Starts `astromeshd` with the generated config

This means a minimal deployment needs only `ASTROMESH_ROLE` and a provider connection.
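The provider-detection step can be pictured as a small mapping from environment variables to a `providers.yaml`-style structure. This is an illustrative Python sketch of the idea, not the actual entrypoint code; the real key names and file layout may differ:

```python
def generate_providers(env):
    """Sketch of auto-config: derive a providers mapping from env vars.

    Illustrative only -- the real entrypoint's output format is not
    documented here, but the detection logic follows the list above.
    """
    providers = {}
    if "OLLAMA_HOST" in env:
        providers["ollama"] = {"type": "ollama", "host": env["OLLAMA_HOST"]}
    if "OPENAI_API_KEY" in env:
        providers["openai"] = {
            "type": "openai",
            "api_key": env["OPENAI_API_KEY"],
            # OPENAI_ENDPOINT falls back to its documented default
            "endpoint": env.get("OPENAI_ENDPOINT", "https://api.openai.com/v1"),
        }
    return providers
```

With no provider variables set, the result is empty, which is why a minimal deployment must supply at least one provider connection.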
### Adding providers via environment variables

Ollama:

```yaml
environment:
  - OLLAMA_HOST=http://ollama:11434
```

OpenAI:

```yaml
environment:
  - OPENAI_API_KEY=sk-...
```

Both (with fallback):

```yaml
environment:
  - OLLAMA_HOST=http://ollama:11434
  - OPENAI_API_KEY=sk-...
```

When both are configured, Astromesh uses the model router's `cost_optimized` strategy by default, preferring the local Ollama provider and falling back to OpenAI.
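The local-first fallback can be sketched as a preference-ordered selection loop. This is a hypothetical illustration of a `cost_optimized` strategy, not Astromesh's actual router implementation:

```python
def route(providers, preference=("ollama", "openai")):
    """Pick the first configured provider in preference order.

    Hypothetical sketch: local Ollama is cheapest, so it comes first;
    OpenAI serves as the fallback when Ollama is not configured.
    """
    for name in preference:
        if name in providers:
            return name
    raise RuntimeError("no provider configured")
```

A real router would also consider model availability and health, but the ordering principle is the same.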
### Custom agents (volume mount)

To deploy your own agent definitions, mount a directory of YAML files:

```yaml
services:
  astromesh:
    image: ghcr.io/monaccode/astromesh:0.10.0
    volumes:
      - ./agents:/etc/astromesh/agents:ro
    environment:
      - ASTROMESH_ROLE=full
      - OLLAMA_HOST=http://ollama:11434
```

Create `agents/support-agent.agent.yaml`:

```yaml
apiVersion: astromesh/v1
kind: Agent
metadata:
  name: support-agent
  namespace: default
spec:
  identity:
    display_name: "Support Agent"
    description: "Handles customer support queries"
  model:
    primary:
      provider: ollama
      model: llama3.1:8b
    routing:
      strategy: cost_optimized
  prompts:
    system: |
      You are a helpful support agent.
      Answer questions clearly and concisely.
  orchestration:
    pattern: react
    max_iterations: 5
    timeout_seconds: 30
```
### Ollama connection

When running Ollama as a sibling container, use the Docker service name as the host:

```yaml
environment:
  - OLLAMA_HOST=http://ollama:11434
```

When using Ollama running on the host machine:

```yaml
environment:
  - OLLAMA_HOST=http://host.docker.internal:11434
```
### Persistent data

Use Docker volumes to persist data across container restarts:

```yaml
volumes:
  - astromesh-data:/var/lib/astromesh   # Memory DBs, FAISS indices
  - ollama-models:/root/.ollama         # Downloaded models
```
### Custom config (advanced)

For full control, mount your own configuration files and disable auto-generation:

```yaml
services:
  astromesh:
    image: ghcr.io/monaccode/astromesh:0.10.0
    volumes:
      - ./config/runtime.yaml:/etc/astromesh/runtime.yaml:ro
      - ./config/providers.yaml:/etc/astromesh/providers.yaml:ro
      - ./config/channels.yaml:/etc/astromesh/channels.yaml:ro
      - ./config/agents:/etc/astromesh/agents:ro
    environment:
      - ASTROMESH_AUTO_CONFIG=false
      - OPENAI_API_KEY=sk-...
```

When `ASTROMESH_AUTO_CONFIG=false`, the entrypoint skips config generation and starts the daemon directly with the mounted files.
## Full stack with infrastructure

For a complete deployment with PostgreSQL (with pgvector) and Redis:

```yaml
services:
  astromesh:
    image: ghcr.io/monaccode/astromesh:0.10.0
    ports:
      - "8000:8000"
    environment:
      - ASTROMESH_ROLE=full
      - OLLAMA_HOST=http://ollama:11434
      - DATABASE_URL=postgresql://astromesh:astromesh@postgres:5432/astromesh
      - REDIS_URL=redis://redis:6379
    depends_on:
      ollama:
        condition: service_started
      postgres:
        condition: service_started
      redis:
        condition: service_started
    volumes:
      - astromesh-data:/var/lib/astromesh
    restart: unless-stopped

  ollama:
    image: ollama/ollama:latest
    volumes:
      - ollama-models:/root/.ollama
    restart: unless-stopped

  postgres:
    image: pgvector/pgvector:pg16
    environment:
      POSTGRES_DB: astromesh
      POSTGRES_USER: astromesh
      POSTGRES_PASSWORD: astromesh
    volumes:
      - postgres-data:/var/lib/postgresql/data
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    volumes:
      - redis-data:/data
    restart: unless-stopped

volumes:
  astromesh-data:
  ollama-models:
  postgres-data:
  redis-data:
```
## Verification

### Check all services are running

```sh
docker compose ps
```

Expected output:

```
NAME                           STATUS         PORTS
astromesh-deploy-astromesh-1   Up 2 minutes   0.0.0.0:8000->8000/tcp
astromesh-deploy-ollama-1      Up 2 minutes   11434/tcp
astromesh-deploy-postgres-1    Up 2 minutes   5432/tcp
astromesh-deploy-redis-1       Up 2 minutes   6379/tcp
```
### Health check

```sh
curl http://localhost:8000/health
```

Expected output:

```json
{
  "status": "healthy",
  "version": "0.10.0"
}
```
### List agents

```sh
curl http://localhost:8000/v1/agents
```

Expected output:

```json
{
  "agents": [
    {
      "name": "default",
      "description": "Default assistant agent",
      "model": "ollama/llama3.1:8b",
      "pattern": "react"
    }
  ]
}
```
### Run an agent

```sh
curl -X POST http://localhost:8000/v1/agents/default/run \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello, what can you do?"}'
```
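The same call can be made from Python using only the standard library. A sketch assuming the request shape shown in the curl command; the `run_agent` helper is hypothetical, not part of an official SDK:

```python
import json
import urllib.request

def build_run_request(base_url, agent, query):
    """Build the URL and JSON body for the agent run endpoint."""
    url = f"{base_url}/v1/agents/{agent}/run"
    body = json.dumps({"query": query}).encode("utf-8")
    return url, body

def run_agent(base_url, agent, query):
    """POST the query and decode the JSON response (requires a running stack)."""
    url, body = build_run_request(base_url, agent, query)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (only works while the stack is up):
# print(run_agent("http://localhost:8000", "default", "Hello, what can you do?"))
```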
Section titled “View logs”# All servicesdocker compose logs
# Astromesh only, followdocker compose logs -f astromesh
# Last 50 linesdocker compose logs --tail 50 astromeshCommon Operations
## Common Operations

### Stop and start

```sh
docker compose stop      # Stop without removing
docker compose start     # Start again
docker compose restart   # Restart all services
```

### Update to a new version

```sh
# Update the image tag in docker-compose.yml, then:
docker compose pull astromesh
docker compose up -d astromesh
```

### Remove everything

```sh
docker compose down      # Stop and remove containers
docker compose down -v   # Also remove volumes (deletes data)
```
## Troubleshooting

### Container exits immediately

Check the logs:

```sh
docker compose logs astromesh
```

Config error:

```
ERROR: Failed to parse /etc/astromesh/runtime.yaml
```

Check your mounted config files for YAML syntax errors.

Provider unreachable:

```
ERROR: Cannot connect to Ollama at http://ollama:11434
```

Ensure the Ollama container is running and on the same network:

```sh
docker compose ps ollama
```
### Cannot connect from host

Verify the port mapping:

```sh
docker compose port astromesh 8000
```

Expected output:

```
0.0.0.0:8000
```

If the port is not mapped, check that your `docker-compose.yml` has `ports: ["8000:8000"]`.
Models not persisted after restart
Section titled “Models not persisted after restart”Ensure you have a volume for Ollama models:
volumes: - ollama-models:/root/.ollamaWithout this volume, models are downloaded fresh on every container restart.
Out of disk space
Section titled “Out of disk space”Check Docker disk usage:
docker system dfClean up unused images and volumes:
docker system prune -fdocker volume prune -fOllama on host machine
### Ollama on host machine

If Ollama is running directly on the host (not in Docker), use `host.docker.internal`:

```yaml
environment:
  - OLLAMA_HOST=http://host.docker.internal:11434
```

On Linux, you may need to add `extra_hosts`:

```yaml
services:
  astromesh:
    extra_hosts:
      - "host.docker.internal:host-gateway"
    environment:
      - OLLAMA_HOST=http://host.docker.internal:11434
```