Profiles Reference

Profiles are pre-built `runtime.yaml` files that configure Astromesh for common deployment roles. Instead of writing a runtime configuration from scratch, select a profile that matches your node’s purpose and customize it from there.

Each profile is a complete RuntimeConfig YAML file stored in `config/profiles/`. It sets `spec.services` to enable only the services relevant to a specific role and, for mesh profiles, includes the Maia networking configuration.

There are seven profiles, organized in three groups:

  • Standalone profiles (3) — for static multi-node deployments with explicit peer configuration
  • Mesh profiles (3) — for dynamic multi-node deployments with Maia gossip-based discovery
  • Full profile (1) — all services enabled, for single-node and development use

The following table shows which services are enabled in each profile:

| Service       | full | gateway | worker | inference | mesh-gateway | mesh-worker | mesh-inference |
|---------------|------|---------|--------|-----------|--------------|-------------|----------------|
| api           | yes  | yes     | yes    | yes       | yes          | yes         | yes            |
| agents        | yes  |         | yes    |           |              | yes         |                |
| inference     | yes  |         |        | yes       |              |             | yes            |
| memory        | yes  |         | yes    |           |              | yes         |                |
| tools         | yes  |         | yes    |           |              | yes         |                |
| channels      | yes  | yes     |        |           | yes          |             |                |
| rag           | yes  |         | yes    |           |              | yes         |                |
| observability | yes  | yes     | yes    | yes       | yes          | yes         | yes            |
| mesh          |      |         |        |           | yes          | yes         | yes            |

**File:** `config/profiles/full.yaml`
**Use when:** Running everything on a single node — development, testing, or small-scale production.

```yaml
apiVersion: astromesh/v1
kind: RuntimeConfig
metadata:
  name: full
spec:
  api:
    host: "0.0.0.0"
    port: 8000
  services:
    api: true
    agents: true
    inference: true
    memory: true
    tools: true
    channels: true
    rag: true
    observability: true
  peers: []
  defaults:
    orchestration:
      pattern: react
      max_iterations: 10
```

All services are enabled and no peers or mesh networking are configured. This is the default profile generated by `astromeshctl init --dev` when you select the “standalone” role.

**File:** `config/profiles/gateway.yaml`
**Use when:** This node is the entry point for external traffic (API requests, WhatsApp webhooks). It routes agent and inference work to backend workers.

```yaml
apiVersion: astromesh/v1
kind: RuntimeConfig
metadata:
  name: gateway
spec:
  api:
    host: "0.0.0.0"
    port: 8000
  services:
    api: true
    agents: false
    inference: false
    memory: false
    tools: false
    channels: true
    rag: false
    observability: true
  peers:
    - name: worker-1
      url: http://worker:8000
      services: [agents, tools, memory, rag]
```

The gateway handles API serving, channel integrations, and observability. Agent execution, inference, memory, tools, and RAG are disabled locally and forwarded to the configured peers. Update the peers list with the actual URLs of your worker nodes.
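For instance, a gateway fronting two workers would list both in `peers`; each entry declares which services that peer provides. The node names and URLs below are placeholders for your own hosts:

```yaml
peers:
  - name: worker-1
    url: http://worker-1:8000
    services: [agents, tools, memory, rag]
  - name: worker-2
    url: http://worker-2:8000
    services: [agents, tools, memory, rag]
```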

**File:** `config/profiles/worker.yaml`
**Use when:** This node runs agents, tools, memory, and RAG pipelines. It delegates LLM inference to a separate inference node.

```yaml
apiVersion: astromesh/v1
kind: RuntimeConfig
metadata:
  name: worker
spec:
  api:
    host: "0.0.0.0"
    port: 8000
  services:
    api: true
    agents: true
    inference: false
    memory: true
    tools: true
    channels: false
    rag: true
    observability: true
  peers:
    - name: inference-1
      url: http://inference:8000
      services: [inference]
```

Workers handle the agent runtime, tool execution, memory management, and RAG queries. Inference requests are forwarded to the configured inference peers. Channels are disabled because the gateway handles external traffic.

**File:** `config/profiles/inference.yaml`
**Use when:** This node is a dedicated LLM inference server. It serves model requests and nothing else.

```yaml
apiVersion: astromesh/v1
kind: RuntimeConfig
metadata:
  name: inference
spec:
  api:
    host: "0.0.0.0"
    port: 8000
  services:
    api: true
    agents: false
    inference: true
    memory: false
    tools: false
    channels: false
    rag: false
    observability: true
  peers: []
```

Only the API, inference, and observability services are active. This profile is for GPU nodes that run LLM providers (Ollama, vLLM, TGI) and receive inference requests from worker peers.

**File:** `config/profiles/mesh-gateway.yaml`
**Use when:** Same role as gateway, but with Maia mesh networking enabled for automatic peer discovery. This node is typically the seed node.

```yaml
apiVersion: astromesh/v1
kind: RuntimeConfig
metadata:
  name: mesh-gateway
spec:
  api:
    host: "0.0.0.0"
    port: 8000
  services:
    api: true
    agents: false
    inference: false
    memory: false
    tools: false
    channels: true
    rag: false
    observability: true
  mesh:
    enabled: true
    node_name: gateway
    bind: "0.0.0.0:8000"
    seeds: []
    heartbeat_interval: 5
    gossip_interval: 2
    gossip_fanout: 3
    failure_timeout: 15
    dead_timeout: 30
```

The seeds list is empty because the mesh-gateway is the first node in the mesh — it serves as the seed that other nodes contact to join. Workers and inference nodes point their seeds to this node’s address.
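The timing fields in the `mesh` block drive failure detection: a peer that misses heartbeats is first marked failed, then declared dead and removed. A minimal sketch of how the two thresholds above might interact — the state names and transition rule are assumptions about Maia's behaviour, not taken from its source:

```python
# Thresholds mirror the mesh profile above.
FAILURE_TIMEOUT = 15  # seconds without a heartbeat before a peer is "failed"
DEAD_TIMEOUT = 30     # seconds before a peer is "dead" and dropped

def peer_state(seconds_since_heartbeat: float) -> str:
    """Classify a peer by how long ago its last heartbeat arrived."""
    if seconds_since_heartbeat >= DEAD_TIMEOUT:
        return "dead"
    if seconds_since_heartbeat >= FAILURE_TIMEOUT:
        return "failed"
    return "alive"

print([peer_state(t) for t in (4, 20, 31)])  # → ['alive', 'failed', 'dead']
```

With `heartbeat_interval: 5`, a peer must miss roughly three consecutive heartbeats before it is considered failed.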

**File:** `config/profiles/mesh-worker.yaml`
**Use when:** Same role as worker, but with Maia mesh networking enabled. Joins the mesh via the gateway seed node.

```yaml
apiVersion: astromesh/v1
kind: RuntimeConfig
metadata:
  name: mesh-worker
spec:
  api:
    host: "0.0.0.0"
    port: 8000
  services:
    api: true
    agents: true
    inference: false
    memory: true
    tools: true
    channels: false
    rag: true
    observability: true
  mesh:
    enabled: true
    node_name: worker
    bind: "0.0.0.0:8000"
    seeds:
      - http://gateway:8000
    heartbeat_interval: 5
    gossip_interval: 2
    gossip_fanout: 3
    failure_timeout: 15
    dead_timeout: 30
```

The worker contacts http://gateway:8000 to join the mesh. Once connected, the gossip protocol handles peer discovery — the worker automatically learns about inference nodes and other workers.
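With `gossip_fanout: 3`, each informed node relays membership updates to three peers per round, so news about a new node spreads exponentially. A back-of-the-envelope model — idealised, ignoring duplicate deliveries and message loss:

```python
def rounds_to_reach(n_nodes: int, fanout: int = 3) -> int:
    """Rounds of gossip until all n_nodes have heard an update,
    assuming every informed node tells `fanout` new peers each round."""
    reached, rounds = 1, 0
    while reached < n_nodes:
        reached *= 1 + fanout  # each informed node recruits `fanout` more
        rounds += 1
    return rounds

print(rounds_to_reach(27))  # → 3
```

At `gossip_interval: 2` seconds per round, even a few dozen nodes converge on new membership within seconds.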

**File:** `config/profiles/mesh-inference.yaml`
**Use when:** Same role as inference, but with Maia mesh networking enabled. Joins the mesh via the gateway seed node.

```yaml
apiVersion: astromesh/v1
kind: RuntimeConfig
metadata:
  name: mesh-inference
spec:
  api:
    host: "0.0.0.0"
    port: 8000
  services:
    api: true
    agents: false
    inference: true
    memory: false
    tools: false
    channels: false
    rag: false
    observability: true
  mesh:
    enabled: true
    node_name: inference
    bind: "0.0.0.0:8000"
    seeds:
      - http://gateway:8000
    heartbeat_interval: 5
    gossip_interval: 2
    gossip_fanout: 3
    failure_timeout: 15
    dead_timeout: 30
```

The inference node joins the mesh through the gateway and advertises its inference service. Workers in the mesh automatically discover it and route inference requests to it.

The Docker entrypoint automatically selects a profile based on two environment variables:

| ASTROMESH_MESH_ENABLED | ASTROMESH_ROLE | Profile loaded               |
|------------------------|----------------|------------------------------|
| false                  | full           | `profiles/full.yaml`           |
| false                  | gateway        | `profiles/gateway.yaml`        |
| false                  | worker         | `profiles/worker.yaml`         |
| false                  | inference      | `profiles/inference.yaml`      |
| true                   | gateway        | `profiles/mesh-gateway.yaml`   |
| true                   | worker         | `profiles/mesh-worker.yaml`    |
| true                   | inference      | `profiles/mesh-inference.yaml` |

Example in `docker-compose.yaml`:

```yaml
services:
  gateway:
    image: astromesh:latest
    environment:
      - ASTROMESH_ROLE=gateway
      - ASTROMESH_MESH_ENABLED=true
  worker:
    image: astromesh:latest
    environment:
      - ASTROMESH_ROLE=worker
      - ASTROMESH_MESH_ENABLED=true
  inference:
    image: astromesh:latest
    environment:
      - ASTROMESH_ROLE=inference
      - ASTROMESH_MESH_ENABLED=true
```

The entrypoint script resolves the profile path:

  • When `ASTROMESH_MESH_ENABLED=false` (or unset): loads `profiles/{ASTROMESH_ROLE}.yaml`
  • When `ASTROMESH_MESH_ENABLED=true`: loads `profiles/mesh-{ASTROMESH_ROLE}.yaml`
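The resolution rule can be sketched in shell. This is an illustration of the two rules above, not the actual entrypoint script, which may differ in its defaults and error handling:

```shell
#!/bin/sh
# Map ASTROMESH_ROLE and ASTROMESH_MESH_ENABLED to a profile path.
resolve_profile() {
  role="${ASTROMESH_ROLE:-full}"
  if [ "${ASTROMESH_MESH_ENABLED:-false}" = "true" ]; then
    echo "profiles/mesh-${role}.yaml"
  else
    echo "profiles/${role}.yaml"
  fi
}

ASTROMESH_ROLE=worker ASTROMESH_MESH_ENABLED=true resolve_profile  # prints profiles/mesh-worker.yaml
```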

To create a custom profile, copy an existing profile and modify the services, peers, or mesh sections:

```sh
# Start from the worker profile
cp config/profiles/worker.yaml config/profiles/custom-worker.yaml
```

Edit the file to match your requirements. For example, a worker that also handles channels:

```yaml
apiVersion: astromesh/v1
kind: RuntimeConfig
metadata:
  name: custom-worker
spec:
  api:
    host: "0.0.0.0"
    port: 8000
  services:
    api: true
    agents: true
    inference: false
    memory: true
    tools: true
    channels: true # Added: this worker also receives external messages
    rag: true
    observability: true
  peers:
    - name: inference-1
      url: http://inference:8000
      services: [inference]
```
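A common mistake when hand-editing profiles is quoting a service flag (`"true"` instead of `true`). A quick sanity check over the parsed profile can catch this; the sketch below works on a plain dict (in practice you would load the YAML first, e.g. with PyYAML), and `check_profile` is a hypothetical helper, not part of Astromesh:

```python
# Dict mirrors the custom-worker profile above, as a YAML parser would emit it.
profile = {
    "apiVersion": "astromesh/v1",
    "kind": "RuntimeConfig",
    "metadata": {"name": "custom-worker"},
    "spec": {
        "services": {
            "api": True, "agents": True, "inference": False,
            "memory": True, "tools": True, "channels": True,
            "rag": True, "observability": True,
        },
    },
}

def check_profile(cfg):
    """Return the service flags that are not plain booleans."""
    assert cfg["kind"] == "RuntimeConfig", "unexpected kind"
    services = cfg["spec"]["services"]
    return [name for name, flag in services.items() if not isinstance(flag, bool)]

print(check_profile(profile))  # → []
```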

To use a custom profile with Docker, set the `ASTROMESH_PROFILE` environment variable to the profile file path:

```yaml
services:
  worker:
    image: astromesh:latest
    environment:
      - ASTROMESH_PROFILE=/app/config/profiles/custom-worker.yaml
```
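If the custom profile is not baked into the image, it also has to exist at that container path. One way — a sketch, with paths taken from the example above — is a read-only bind mount:

```yaml
services:
  worker:
    image: astromesh:latest
    volumes:
      # Mount the host-side profile at the path ASTROMESH_PROFILE points to
      - ./config/profiles/custom-worker.yaml:/app/config/profiles/custom-worker.yaml:ro
    environment:
      - ASTROMESH_PROFILE=/app/config/profiles/custom-worker.yaml
```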