# Standalone (from source)

This guide walks you through running Astromesh directly from the source repository. This is the fastest way to get started for development, experimentation, and CI pipelines.
## What and Why

Running from source gives you direct access to the codebase, hot-reload on changes, and the full test suite. There is no Docker, no systemd, and no packaging layer between you and the runtime. This is the right choice when you are:
- Developing Astromesh itself or writing custom providers/tools
- Experimenting with agent configurations before deploying
- Running agents in a CI pipeline
- Learning how the runtime works
## Prerequisites

| Requirement | Version | Check command |
|---|---|---|
| Python | 3.12+ | `python3 --version` |
| uv | latest | `uv --version` |
| Git | any | `git --version` |
| Ollama (optional) | latest | `ollama --version` |
Install uv if you do not have it:

```shell
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Expected output:

```
Downloading uv...
Installing to /home/user/.local/bin
uv installed successfully.
```

## Step-by-step Setup
### 1. Clone the repository

```shell
git clone https://github.com/monaccode/astromesh.git
cd astromesh
```

Expected output:

```
Cloning into 'astromesh'...
remote: Enumerating objects: 1234, done.
remote: Counting objects: 100% (1234/1234), done.
Receiving objects: 100% (1234/1234), 256.00 KiB | 2.56 MiB/s, done.
```

### 2. Install dependencies
Base install (API server + core runtime):

```shell
uv sync
```

Expected output:

```
Resolved 42 packages in 1.2s
Prepared 42 packages in 3.4s
Installed 42 packages in 0.8s
```

For production use, or to enable all backends, install with extras:

```shell
uv sync --extra all
```

#### Available extras
| Extra | What it adds | When you need it |
|---|---|---|
| `redis` | Redis memory backend (hiredis) | Conversational memory with Redis |
| `postgres` | AsyncPG driver | PostgreSQL episodic memory, pgvector |
| `sqlite` | aiosqlite driver | Lightweight local memory |
| `chromadb` | ChromaDB client | ChromaDB vector store |
| `qdrant` | Qdrant client | Qdrant vector store |
| `faiss` | FAISS CPU | Local FAISS vector search |
| `embeddings` | sentence-transformers | Local embedding models |
| `onnx` | ONNX Runtime | ONNX model inference |
| `ml` | PyTorch | GPU/CPU ML workloads |
| `observability` | OpenTelemetry + Prometheus | Tracing and metrics |
| `mcp` | Model Context Protocol | MCP tool servers |
| `cli` | Typer + Rich | `astromeshctl` CLI |
| `daemon` | sdnotify | `astromeshd` systemd integration |
| `mesh` | psutil | Multi-node mesh support |
| `all` | Everything above | Full installation |
You can combine extras:

```shell
uv sync --extra redis --extra postgres --extra observability
```

### 3. Configure a provider
Astromesh needs at least one LLM provider. The two easiest options are Ollama (local) or an OpenAI API key.

#### Option A: Ollama (local, no API key needed)

Install and start Ollama, then pull a model:

```shell
ollama serve &
ollama pull llama3.1:8b
```

Expected output:

```
pulling manifest
pulling 8eeb52dfb3bb... 100% |████████████████████| 4.7 GB
verifying sha256 digest
writing manifest
success
```

The default provider configuration in `config/providers.yaml` already points to `http://localhost:11434`.
#### Option B: OpenAI API key

Set your API key as an environment variable:

```shell
export OPENAI_API_KEY="sk-..."
```

Edit `config/providers.yaml` to enable the OpenAI provider, or run the init wizard (next step).
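As a rough illustration of what a provider file covering both options could look like (the exact keys and layout are assumptions — check the sample config shipped in the repository rather than copying this verbatim):

```yaml
# config/providers.yaml — illustrative sketch only; verify key names against the repo
providers:
  ollama:
    type: ollama
    endpoint: http://localhost:11434   # default local Ollama endpoint
    enabled: true
  openai:
    type: openai
    api_key: ${OPENAI_API_KEY}         # resolved from the environment
    enabled: false
```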
### 4. Run the init wizard (optional)

The init wizard generates configuration files interactively:

```shell
uv run astromeshctl init
```

Expected output:

```
🔧 Astromesh Init Wizard
? Select provider: Ollama (local)
? Ollama endpoint: http://localhost:11434
? Select model: llama3.1:8b
? Enable memory? Yes
? Memory backend: SQLite (local)
✅ Configuration written to config/
  - config/runtime.yaml
  - config/providers.yaml
  - config/agents/default.agent.yaml
```

### 5. Start the server
#### Option A: uvicorn directly (recommended for development)

```shell
uv run uvicorn astromesh.api.main:app --host 0.0.0.0 --port 8000 --reload
```

Expected output:

```
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO: Started reloader process [12345] using StatReload
INFO: Started server process [12346]
INFO: Waiting for application startup.
INFO: Application startup complete.
```

The `--reload` flag watches for file changes and restarts the server automatically.
#### Option B: astromeshd daemon (for testing daemon behavior)

```shell
uv run astromeshd --config ./config/ --port 8000
```

Expected output:

```
INFO astromeshd starting (dev mode)
INFO Loading config from ./config/
INFO Loaded 1 agent(s): default
INFO Providers: ollama (healthy)
INFO API server listening on 0.0.0.0:8000
INFO Ready.
```

## Verification
### Health check

```shell
curl http://localhost:8000/health
```

Expected output:

```json
{
  "status": "healthy",
  "version": "0.10.0"
}
```

### List loaded agents
```shell
curl http://localhost:8000/v1/agents
```

Expected output:

```json
{
  "agents": [
    {
      "name": "default",
      "description": "Default assistant agent",
      "model": "ollama/llama3.1:8b",
      "pattern": "react"
    }
  ]
}
```

### Run an agent
```shell
curl -X POST http://localhost:8000/v1/agents/default/run \
  -H "Content-Type: application/json" \
  -d '{"query": "What is the capital of France?"}'
```

Expected output:

```json
{
  "response": "The capital of France is Paris.",
  "agent": "default",
  "model": "ollama/llama3.1:8b",
  "tokens": {
    "prompt": 24,
    "completion": 8,
    "total": 32
  }
}
```
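If you prefer calling the API from Python rather than curl, a minimal stdlib-only client sketch follows. The endpoint path and payload shape are taken from the curl example above; the function names are our own invention, not part of Astromesh:

```python
import json
import urllib.request


def build_run_request(query: str, agent: str = "default",
                      base_url: str = "http://localhost:8000") -> urllib.request.Request:
    """Build a POST request for the /v1/agents/<agent>/run endpoint."""
    payload = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/v1/agents/{agent}/run",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def run_agent(query: str, agent: str = "default",
              base_url: str = "http://localhost:8000") -> dict:
    """Send the request to a running server and return the parsed JSON body."""
    with urllib.request.urlopen(build_run_request(query, agent, base_url)) as resp:
        return json.loads(resp.read())
```

With the server from step 5 running, `run_agent("What is the capital of France?")` should return the same JSON structure shown above.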
## Configuration

### Project structure

```
config/
├── runtime.yaml        # Server settings (host, port, defaults)
├── providers.yaml      # LLM provider connections
├── channels.yaml       # Channel adapters (WhatsApp, etc.)
└── agents/
    └── *.agent.yaml    # Agent definitions
```

### Environment variables
Secrets are passed via environment variables that are referenced from the YAML config:

```shell
export OPENAI_API_KEY="sk-..."
export WHATSAPP_TOKEN="EAAx..."
export DATABASE_URL="postgresql://user:pass@localhost/astromesh"
```
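One common way such references are resolved is `${VAR}` interpolation at config-load time. The sketch below is a generic illustration of that pattern using the Python standard library — it is not necessarily how Astromesh itself implements it:

```python
import os
import string


def resolve_env_refs(value: str) -> str:
    """Substitute ${VAR} references in a config value from the environment.

    Generic illustration of env-var interpolation; raises KeyError if a
    referenced variable is not set.
    """
    return string.Template(value).substitute(os.environ)


os.environ["OPENAI_API_KEY"] = "sk-example"
print(resolve_env_refs("api_key: ${OPENAI_API_KEY}"))  # → api_key: sk-example
```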
## Common Operations

### Development workflow with hot-reload

Start the server with `--reload` so changes take effect immediately:

```shell
uv run uvicorn astromesh.api.main:app --reload
```

Edit any Python file or YAML config file, and uvicorn restarts automatically.
### Running tests

```shell
# All tests
uv run pytest -v

# Single file
uv run pytest tests/test_api.py

# Single test
uv run pytest tests/test_api.py -k "test_health"

# With coverage
uv run pytest --cov=astromesh
```

Expected output (all tests):

```
========================= test session starts ==========================
collected 47 items

tests/test_api.py::test_health PASSED
tests/test_api.py::test_list_agents PASSED
tests/test_api.py::test_run_agent PASSED
...
========================= 47 passed in 3.21s ===========================
```

### Linting and formatting
```shell
# Check for lint errors
uv run ruff check astromesh/ tests/

# Auto-format code
uv run ruff format astromesh/ tests/
```

### Building Rust native extensions (optional)
Rust extensions provide a 5-50x speedup on CPU-bound paths. They are optional; without them, the pure-Python fallback is used automatically.

```shell
pip install maturin
maturin develop --release
```

To verify the Rust extensions are loaded:

```shell
python -c "import astromesh._native; print('Rust extensions loaded')"
```

To force Python-only mode:

```shell
export ASTROMESH_FORCE_PYTHON=1
```
## Troubleshooting

### Port 8000 already in use

```
ERROR: [Errno 98] Address already in use
```

Find and stop the process using port 8000:

```shell
lsof -i :8000
kill <PID>
```

Or start on a different port:

```shell
uv run uvicorn astromesh.api.main:app --port 8001
```

### Ollama not running
```
ConnectionError: Cannot connect to http://localhost:11434
```

Start Ollama:

```shell
ollama serve
```

Check that it is running:

```shell
curl http://localhost:11434/api/tags
```

Expected output:

```json
{
  "models": [
    {"name": "llama3.1:8b", "size": 4661224960}
  ]
}
```

### Wrong Python version
```
ERROR: This project requires Python >=3.12 but the running Python is 3.11.x
```

Install Python 3.12+ and make sure uv uses it:

```shell
uv python install 3.12
uv sync
```

### Import errors after install
```
ModuleNotFoundError: No module named 'redis'
```

You are missing an optional dependency. Install the extra you need:

```shell
uv sync --extra redis
```

Or install everything:

```shell
uv sync --extra all
```

### Config file not found
```
FileNotFoundError: config/runtime.yaml not found
```

Make sure you are running from the repository root. The dev-mode server looks for config in `./config/` relative to the current working directory. Run the init wizard to generate a default config:

```shell
uv run astromeshctl init
```
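The cwd-relative lookup that produces this error can be sketched as follows. This is an illustration of the behavior described above, not Astromesh's actual loader code:

```python
import os
import tempfile
from pathlib import Path


def find_runtime_config(config_dir: str = "config") -> Path:
    """Resolve runtime.yaml relative to the current working directory."""
    path = Path.cwd() / config_dir / "runtime.yaml"
    if not path.is_file():
        raise FileNotFoundError(f"{config_dir}/runtime.yaml not found")
    return path


# Demonstrate: the same call succeeds or fails depending only on the cwd.
workdir = Path(tempfile.mkdtemp())
(workdir / "config").mkdir()
(workdir / "config" / "runtime.yaml").write_text("host: 0.0.0.0\n")
os.chdir(workdir)
print(find_runtime_config())  # resolves because cwd now contains config/runtime.yaml
```

This is why running commands from anywhere other than the repository root fails even when the repository itself is intact.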