Runtime Configuration

The runtime configuration controls global settings for the Astromesh platform: which services are enabled, network binding, peer connections, mesh networking, and default agent behavior. It is the first file the runtime reads at startup.

Runtime configuration lives at config/runtime.yaml (development) or /etc/astromesh/runtime.yaml (production).

apiVersion: astromesh/v1
kind: RuntimeConfig
metadata:
  name: default

Below is a complete runtime.yaml with all available fields:

apiVersion: astromesh/v1
kind: RuntimeConfig
metadata:
  name: production
spec:
  api:
    host: "0.0.0.0"
    port: 8000
  services:
    api: true
    agents: true
    inference: true
    memory: true
    tools: true
    channels: true
    rag: true
    observability: true
  peers:
    - name: worker-1
      url: http://worker-1:8000
      services: [agents, tools, memory, rag]
    - name: inference-1
      url: http://inference-1:8000
      services: [inference]
  mesh:
    enabled: true
    node_name: gateway
    bind: "0.0.0.0:8000"
    seeds: []
    heartbeat_interval: 5
    gossip_interval: 2
    gossip_fanout: 3
    failure_timeout: 15
    dead_timeout: 30
  defaults:
    orchestration:
      pattern: react
      max_iterations: 10

spec.api

The api block controls the HTTP server binding.

| Field | Default | Description |
| --- | --- | --- |
| host | "0.0.0.0" | IP address to bind. Use "0.0.0.0" to listen on all interfaces, "127.0.0.1" for localhost only. |
| port | 8000 | TCP port for the API server. |

spec:
  api:
    host: "0.0.0.0"
    port: 8000

spec.services

Eight boolean toggles control which subsystems are active on this node. Disabling a service means the node will not run that subsystem; requests for that service are either forwarded to peers or rejected.

| Service | Description |
| --- | --- |
| api | The FastAPI HTTP/WebSocket server. Almost always true; disable only for headless worker nodes that receive work via mesh. |
| agents | The agent runtime engine. Loads agent YAML definitions and executes agent queries. Disable on gateway-only or inference-only nodes. |
| inference | LLM provider connections and model routing. Disable on nodes that delegate inference to dedicated inference peers. |
| memory | Memory backends (Redis, PostgreSQL, SQLite) for conversational, semantic, and episodic memory. Disable on nodes that do not manage state. |
| tools | The tool registry for internal, MCP, webhook, and RAG-as-tool execution. Disable on nodes that do not run tools. |
| channels | Channel adapters for external messaging platforms (WhatsApp, etc.). Enable on gateway or standalone nodes that receive external messages. |
| rag | RAG pipeline execution: document chunking, embedding, vector search, and reranking. Disable on nodes that do not serve RAG queries. |
| observability | OpenTelemetry tracing, metrics, and logging. Recommended to keep enabled on all nodes for operational visibility. |

spec:
  services:
    api: true
    agents: true
    inference: true
    memory: true
    tools: true
    channels: true
    rag: true
    observability: true

When all services are true, the node operates as a standalone deployment. For multi-node setups, disable services that run on other nodes and configure peers or mesh to route requests.
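For example, a node that delegates inference to a dedicated peer disables only that service and declares the peer that provides it. This is a sketch; the peer name and URL are illustrative:

```yaml
spec:
  services:
    api: true
    agents: true
    inference: false   # delegated to the inference-1 peer below
    memory: true
    tools: true
    channels: true
    rag: true
    observability: true
  peers:
    - name: inference-1
      url: http://inference-1:8000
      services: [inference]
```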

spec.peers

Static peer configuration for multi-node deployments without Maia mesh networking. Each peer declares its URL and which services it provides.

spec:
  peers:
    - name: worker-1
      url: http://worker-1:8000
      services: [agents, tools, memory, rag]
    - name: inference-1
      url: http://inference-1:8000
      services: [inference]

| Field | Description |
| --- | --- |
| name | A human-readable identifier for the peer. Used in logs and health check reporting. |
| url | The peer's API endpoint URL. Must be reachable from this node. |
| services | List of services the peer provides. The runtime routes requests for these services to this peer. |

Peers are checked periodically for health. If a peer becomes unreachable, it is temporarily removed from the routing pool.
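The health-gated routing described here could be sketched as follows. This is a hypothetical illustration of the behavior, not the runtime's actual implementation; the class and method names are invented:

```python
class PeerPool:
    """Health-gated routing over a static peer list (illustrative sketch)."""

    def __init__(self, peers):
        # peers: dicts like {"name": ..., "url": ..., "services": [...]},
        # matching the spec.peers entries above.
        self.peers = peers
        self.unhealthy = set()

    def mark_down(self, name):
        # Called when a periodic health check fails for this peer:
        # the peer is temporarily removed from the routing pool.
        self.unhealthy.add(name)

    def mark_up(self, name):
        # Called when the peer passes a health check again.
        self.unhealthy.discard(name)

    def route(self, service):
        # Return the URL of the first healthy peer advertising the service,
        # or None if no healthy peer provides it.
        for peer in self.peers:
            if peer["name"] not in self.unhealthy and service in peer["services"]:
                return peer["url"]
        return None
```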

Use peers when you have a fixed number of nodes with known addresses. For dynamic environments where nodes join and leave, use Maia mesh networking instead.

spec.mesh

Maia mesh networking configuration. When enabled, nodes discover each other automatically using a gossip protocol instead of static peer lists.

spec:
  mesh:
    enabled: true
    node_name: gateway
    bind: "0.0.0.0:8000"
    seeds:
      - http://gateway:8000
    heartbeat_interval: 5
    gossip_interval: 2
    gossip_fanout: 3
    failure_timeout: 15
    dead_timeout: 30

| Field | Default | Description |
| --- | --- | --- |
| enabled | false | Enable Maia mesh networking on this node. |
| node_name | — | Unique name for this node in the mesh. Used in gossip protocol messages. |
| bind | "0.0.0.0:8000" | Address and port for mesh protocol communication. |
| seeds | [] | List of seed node URLs to contact when joining the mesh. Leave empty on the first node (it becomes the seed). |
| heartbeat_interval | 5 | Seconds between heartbeat broadcasts to announce this node is alive. |
| gossip_interval | 2 | Seconds between gossip protocol rounds for state synchronization. |
| gossip_fanout | 3 | Number of random peers to contact during each gossip round. |
| failure_timeout | 15 | Seconds without a heartbeat before a node is marked as suspected failed. |
| dead_timeout | 30 | Seconds without a heartbeat before a node is marked as dead and removed from the mesh. |

The seeds list bootstraps mesh membership. When a node starts, it contacts each seed to join the mesh and receive the current member list. The seed node itself should have an empty seeds list — it is the initial contact point.
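Putting the bootstrap rule together: the first node starts with empty seeds, and every later node lists it. A second node joining the mesh above might look like this (the node name is illustrative):

```yaml
spec:
  mesh:
    enabled: true
    node_name: worker-1
    bind: "0.0.0.0:8000"
    seeds:
      - http://gateway:8000   # contact the seed node to join and fetch the member list
```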

spec.defaults

Default values applied to agents that do not specify these fields in their own YAML.

spec:
  defaults:
    orchestration:
      pattern: react
      max_iterations: 10

| Field | Default | Description |
| --- | --- | --- |
| orchestration.pattern | react | Default orchestration pattern for agents without an explicit spec.orchestration.pattern. |
| orchestration.max_iterations | 10 | Default maximum iterations for agents without an explicit spec.orchestration.max_iterations. |

Agent-level settings always override these defaults.
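For instance, an agent that needs more iterations can set the field in its own definition, which wins over the runtime default. This is a sketch: only the spec.orchestration fields are taken from this page; the kind and metadata of the agent file are assumed:

```yaml
apiVersion: astromesh/v1
kind: Agent            # assumed kind for an agents/*.agent.yaml file
metadata:
  name: researcher
spec:
  orchestration:
    pattern: react
    max_iterations: 25   # overrides the runtime default of 10
```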

You can change where the runtime looks for configuration files by setting the ASTROMESH_CONFIG_DIR environment variable:

# Point to a custom config directory
ASTROMESH_CONFIG_DIR=/opt/myapp/config uv run uvicorn astromesh.api.main:app
# Or export it
export ASTROMESH_CONFIG_DIR=/opt/myapp/config

The runtime loads all configuration files (runtime.yaml, providers.yaml, channels.yaml, agents/*.agent.yaml, rag/*.rag.yaml) from the specified directory.
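As an illustration of that lookup, the discovery logic could be sketched like this. The function name, fallback directory, and file ordering are assumptions, not the runtime's actual code:

```python
import os
from pathlib import Path

# The config files listed above, relative to the config directory.
PATTERNS = [
    "runtime.yaml",
    "providers.yaml",
    "channels.yaml",
    "agents/*.agent.yaml",
    "rag/*.rag.yaml",
]

def discover_config_files(config_dir=None):
    # Honour ASTROMESH_CONFIG_DIR, else fall back to ./config (assumed default).
    base = Path(config_dir or os.environ.get("ASTROMESH_CONFIG_DIR", "config"))
    found = []
    for pattern in PATTERNS:
        found.extend(sorted(base.glob(pattern)))
    return found
```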

Use separate config directories for different environments:

# Development
ASTROMESH_CONFIG_DIR=./config/dev uv run uvicorn astromesh.api.main:app
# Staging
ASTROMESH_CONFIG_DIR=./config/staging uv run uvicorn astromesh.api.main:app
# Production
ASTROMESH_CONFIG_DIR=/etc/astromesh uv run uvicorn astromesh.api.main:app

A typical project structure for multi-environment configs:

config/
├── dev/
│   ├── runtime.yaml
│   ├── providers.yaml
│   └── agents/
├── staging/
│   ├── runtime.yaml
│   ├── providers.yaml
│   └── agents/
└── prod/
    ├── runtime.yaml
    ├── providers.yaml
    └── agents/

When running in Docker, mount your config directory into the container:

docker-compose.override.yaml
services:
  astromesh:
    volumes:
      - ./my-configs:/app/config

Or set the environment variable in your docker-compose.yaml:

services:
  astromesh:
    environment:
      - ASTROMESH_CONFIG_DIR=/app/config
    volumes:
      - ./my-configs:/app/config