
Standalone (from source)

This guide walks you through running Astromesh directly from the source repository. This is the fastest way to get started for development, experimentation, and CI pipelines.

Running from source gives you direct access to the codebase, hot-reload on changes, and the full test suite. There is no Docker, no systemd, no packaging layer between you and the runtime. This is the right choice when you are:

  • Developing Astromesh itself or writing custom providers/tools
  • Experimenting with agent configurations before deploying
  • Running agents in a CI pipeline
  • Learning how the runtime works
| Requirement | Version | Check command |
| --- | --- | --- |
| Python | 3.12+ | `python3 --version` |
| uv | latest | `uv --version` |
| Git | any | `git --version` |
| Ollama (optional) | latest | `ollama --version` |

Install uv if you do not have it:

```sh
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Expected output:

```
Downloading uv...
Installing to /home/user/.local/bin
uv installed successfully.
```

Clone the repository:

```sh
git clone https://github.com/monaccode/astromesh.git
cd astromesh
```

Expected output:

```
Cloning into 'astromesh'...
remote: Enumerating objects: 1234, done.
remote: Counting objects: 100% (1234/1234), done.
Receiving objects: 100% (1234/1234), 256.00 KiB | 2.56 MiB/s, done.
```

Base install (API server + core runtime):

```sh
uv sync
```

Expected output:

```
Resolved 42 packages in 1.2s
Prepared 42 packages in 3.4s
Installed 42 packages in 0.8s
```

For production use or to enable all backends, install with extras:

```sh
uv sync --extra all
```
| Extra | What it adds | When you need it |
| --- | --- | --- |
| `redis` | Redis memory backend (hiredis) | Conversational memory with Redis |
| `postgres` | AsyncPG driver | PostgreSQL episodic memory, pgvector |
| `sqlite` | aiosqlite driver | Lightweight local memory |
| `chromadb` | ChromaDB client | ChromaDB vector store |
| `qdrant` | Qdrant client | Qdrant vector store |
| `faiss` | FAISS CPU | Local FAISS vector search |
| `embeddings` | sentence-transformers | Local embedding models |
| `onnx` | ONNX Runtime | ONNX model inference |
| `ml` | PyTorch | GPU/CPU ML workloads |
| `observability` | OpenTelemetry + Prometheus | Tracing and metrics |
| `mcp` | Model Context Protocol | MCP tool servers |
| `cli` | Typer + Rich | astromeshctl CLI |
| `daemon` | sdnotify | astromeshd systemd integration |
| `mesh` | psutil | Multi-node mesh support |
| `all` | Everything above | Full installation |

You can combine extras:

```sh
uv sync --extra redis --extra postgres --extra observability
```

Astromesh needs at least one LLM provider. The two easiest options are Ollama (local) or an OpenAI API key.

Option A: Ollama (local, no API key needed)

Install and start Ollama, then pull a model:

```sh
ollama serve &
ollama pull llama3.1:8b
```

Expected output:

```
pulling manifest...
pulling 8eeb52dfb3bb... 100% |████████████████████| 4.7 GB
verifying sha256 digest
writing manifest
success
```

The default provider configuration in config/providers.yaml already points to http://localhost:11434.
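The actual schema is defined by the files the repository ships (or the init wizard generates); purely as an illustrative sketch with hypothetical key names, an entry pointing at that default endpoint might look like:

```yaml
# Hypothetical sketch — field names are illustrative, not the real schema.
providers:
  ollama:
    endpoint: http://localhost:11434   # the default shown above
    default_model: llama3.1:8b
```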

Option B: OpenAI API key

Set your API key as an environment variable:

```sh
export OPENAI_API_KEY="sk-..."
```

Edit config/providers.yaml to enable the OpenAI provider, or run the init wizard (next step).

The init wizard generates configuration files interactively:

```sh
uv run astromeshctl init
```

Expected output:

```
🔧 Astromesh Init Wizard
? Select provider: Ollama (local)
? Ollama endpoint: http://localhost:11434
? Select model: llama3.1:8b
? Enable memory? Yes
? Memory backend: SQLite (local)
✅ Configuration written to config/
  - config/runtime.yaml
  - config/providers.yaml
  - config/agents/default.agent.yaml
```

Start the API server.

Option A: uvicorn directly (recommended for development)

```sh
uv run uvicorn astromesh.api.main:app --host 0.0.0.0 --port 8000 --reload
```

Expected output:

```
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO: Started reloader process [12345] using StatReload
INFO: Started server process [12346]
INFO: Waiting for application startup.
INFO: Application startup complete.
```

The --reload flag watches for file changes and restarts automatically.

Option B: astromeshd daemon (for testing daemon behavior)

```sh
uv run astromeshd --config ./config/ --port 8000
```

Expected output:

```
INFO astromeshd starting (dev mode)
INFO Loading config from ./config/
INFO Loaded 1 agent(s): default
INFO Providers: ollama (healthy)
INFO API server listening on 0.0.0.0:8000
INFO Ready.
```

Check the health endpoint:

```sh
curl http://localhost:8000/health
```

Expected output:

```json
{
  "status": "healthy",
  "version": "0.10.0"
}
```
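The health endpoint lends itself to a small readiness poll, for example before kicking off CI tests. This is a stdlib-only sketch; the endpoint path and the `"healthy"` status come from the output above, the rest is assumption:

```python
import json
import time
import urllib.error
import urllib.request

def wait_for_health(url: str = "http://localhost:8000/health",
                    timeout: float = 30.0) -> bool:
    """Poll the health endpoint until it reports 'healthy' or we time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if json.load(resp).get("status") == "healthy":
                    return True
        except (urllib.error.URLError, OSError):
            pass  # server not accepting connections yet
        time.sleep(0.5)
    return False
```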
List the configured agents:

```sh
curl http://localhost:8000/v1/agents
```

Expected output:

```json
{
  "agents": [
    {
      "name": "default",
      "description": "Default assistant agent",
      "model": "ollama/llama3.1:8b",
      "pattern": "react"
    }
  ]
}
```
Run the default agent:

```sh
curl -X POST http://localhost:8000/v1/agents/default/run \
  -H "Content-Type: application/json" \
  -d '{"query": "What is the capital of France?"}'
```

Expected output:

```json
{
  "response": "The capital of France is Paris.",
  "agent": "default",
  "model": "ollama/llama3.1:8b",
  "tokens": {
    "prompt": 24,
    "completion": 8,
    "total": 32
  }
}
```
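The same call is easy to drive from Python. A stdlib-only sketch; the URL and payload shape come from the curl example above, the function names are mine:

```python
import json
import urllib.request

def build_run_request(query: str, agent: str = "default",
                      base_url: str = "http://localhost:8000") -> urllib.request.Request:
    """Build the POST request for the /v1/agents/<agent>/run endpoint."""
    return urllib.request.Request(
        f"{base_url}/v1/agents/{agent}/run",
        data=json.dumps({"query": query}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def run_agent(query: str, agent: str = "default") -> dict:
    """Send the query and decode the JSON response shown above."""
    with urllib.request.urlopen(build_run_request(query, agent)) as resp:
        return json.load(resp)
```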
Configuration lives in the config/ directory:

```
config/
├── runtime.yaml        # Server settings (host, port, defaults)
├── providers.yaml      # LLM provider connections
├── channels.yaml       # Channel adapters (WhatsApp, etc.)
└── agents/
    └── *.agent.yaml    # Agent definitions
```
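The agent fields returned by `/v1/agents` suggest the shape of an agent definition. Purely as an illustrative sketch (the keys are hypothetical; consult the wizard-generated file for the real schema), `config/agents/default.agent.yaml` might resemble:

```yaml
# Illustrative only — the real schema is whatever the repository defines.
name: default
description: Default assistant agent
model: ollama/llama3.1:8b
pattern: react
```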

Secrets are passed via environment variables referenced in YAML config:

```sh
export OPENAI_API_KEY="sk-..."
export WHATSAPP_TOKEN="EAAx..."
export DATABASE_URL="postgresql://user:pass@localhost/astromesh"
```
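Config loaders commonly expand such `${VAR}` references with a small substitution pass. The following is a stdlib sketch of that general pattern, not Astromesh's actual loader:

```python
import os
import re

# Matches ${VAR}-style references such as ${OPENAI_API_KEY}.
_ENV_REF = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def resolve_env_refs(value: str) -> str:
    """Expand ${VAR} references from the process environment, failing loudly."""
    def _sub(match: re.Match) -> str:
        name = match.group(1)
        try:
            return os.environ[name]
        except KeyError:
            raise KeyError(f"config references unset environment variable {name}") from None
    return _ENV_REF.sub(_sub, value)
```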

Start the server with --reload so changes take effect immediately:

```sh
uv run uvicorn astromesh.api.main:app --reload
```

Edit any Python file or YAML config, and uvicorn restarts automatically.

Run the test suite:

```sh
# All tests
uv run pytest -v

# Single file
uv run pytest tests/test_api.py

# Single test
uv run pytest tests/test_api.py -k "test_health"

# With coverage
uv run pytest --cov=astromesh
```

Expected output (all tests):

```
========================= test session starts ==========================
collected 47 items

tests/test_api.py::test_health PASSED
tests/test_api.py::test_list_agents PASSED
tests/test_api.py::test_run_agent PASSED
...
========================= 47 passed in 3.21s ===========================
```

Lint and format with ruff:

```sh
# Check for lint errors
uv run ruff check astromesh/ tests/

# Auto-format code
uv run ruff format astromesh/ tests/
```

Building Rust native extensions (optional)


Rust extensions provide 5-50x speedup for CPU-bound paths. They are optional; Python fallback is used automatically without them.

```sh
pip install maturin
maturin develop --release
```

To verify Rust extensions are loaded:

```sh
python -c "import astromesh._native; print('Rust extensions loaded')"
```

To force Python-only mode:

```sh
export ASTROMESH_FORCE_PYTHON=1
```
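An automatic fallback like the one described above is typically implemented as an import guard. A hypothetical sketch — the `astromesh._native` module name appears in the verification command above, but the function names here are invented:

```python
import os

def _checksum_py(data: bytes) -> int:
    """Pure-Python fallback used when the Rust extension is unavailable."""
    return sum(data) % 65521

try:
    if os.environ.get("ASTROMESH_FORCE_PYTHON") == "1":
        raise ImportError("forced Python-only mode")
    from astromesh._native import checksum  # Rust-backed, if built
except ImportError:
    checksum = _checksum_py  # transparent fallback, same signature
```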
Troubleshooting

```
ERROR: [Errno 98] Address already in use
```

Find and stop the process using port 8000:

```sh
lsof -i :8000
kill <PID>
```

Or start on a different port:

```sh
uv run uvicorn astromesh.api.main:app --port 8001
```
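A quick way to script the same port check from Python (a stdlib sketch, my own helper rather than anything Astromesh ships):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1.0)
        # connect_ex returns 0 when the connection succeeds, i.e. port taken.
        return sock.connect_ex((host, port)) == 0
```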
```
ConnectionError: Cannot connect to http://localhost:11434
```

Start Ollama:

```sh
ollama serve
```

Check it is running:

```sh
curl http://localhost:11434/api/tags
```

Expected output:

```json
{
  "models": [
    {"name": "llama3.1:8b", "size": 4661224960}
  ]
}
```

```
ERROR: This project requires Python >=3.12 but the running Python is 3.11.x
```

Install Python 3.12+ and ensure uv uses it:

```sh
uv python install 3.12
uv sync
```

```
ModuleNotFoundError: No module named 'redis'
```

You are missing an optional dependency. Install the extra you need:

```sh
uv sync --extra redis
```

Or install everything:

```sh
uv sync --extra all
```

```
FileNotFoundError: config/runtime.yaml not found
```

Make sure you are running from the repository root directory. The dev mode server looks for config in ./config/ relative to the current working directory. Run the init wizard to generate default config:

```sh
uv run astromeshctl init
```