# Profiles Reference
Profiles are pre-built runtime.yaml files that configure Astromesh for common deployment roles. Instead of writing a runtime configuration from scratch, select a profile that matches your node’s purpose and customize it from there.
## What Profiles Are

Each profile is a complete RuntimeConfig YAML file stored in `config/profiles/`. It sets `spec.services` to enable only the services relevant to a specific role and, for mesh profiles, includes the Maia networking configuration.
There are seven profiles organized in three groups:

- Standalone profiles (3) — for static multi-node deployments with explicit peer configuration
- Mesh profiles (3) — for dynamic multi-node deployments with Maia gossip-based discovery
- Full profile (1) — all services enabled, for single-node and development use
## Service Matrix

The following table shows which services are enabled in each profile:
| Service | full | gateway | worker | inference | mesh-gateway | mesh-worker | mesh-inference |
|---|---|---|---|---|---|---|---|
| api | yes | yes | yes | yes | yes | yes | yes |
| agents | yes | — | yes | — | — | yes | — |
| inference | yes | — | — | yes | — | — | yes |
| memory | yes | — | yes | — | — | yes | — |
| tools | yes | — | yes | — | — | yes | — |
| channels | yes | yes | — | — | yes | — | — |
| rag | yes | — | yes | — | — | yes | — |
| observability | yes | yes | yes | yes | yes | yes | yes |
| mesh | — | — | — | — | yes | yes | yes |
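As a quick sanity check against this matrix, you can grep a profile file for its service flags. This is a minimal sketch (the `enabled_services` helper is ours, not part of Astromesh) that assumes the indented `service: true|false` layout used by the profile files shown below:

```shell
# enabled_services: print the service flags declared in a profile file.
# (Helper sketch, not part of Astromesh; assumes the indented
# "service: true|false" layout of the bundled profiles.)
enabled_services() {
  grep -E '^ +(api|agents|inference|memory|tools|channels|rag|observability): +(true|false)' "$1"
}
```

For example, `enabled_services config/profiles/worker.yaml` should list `agents: true` but `inference: false`, matching the worker column above.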
## Profile: full

File: `config/profiles/full.yaml`

Use when: Running everything on a single node — development, testing, or small-scale production.

```yaml
apiVersion: astromesh/v1
kind: RuntimeConfig
metadata:
  name: full
spec:
  api:
    host: "0.0.0.0"
    port: 8000
  services:
    api: true
    agents: true
    inference: true
    memory: true
    tools: true
    channels: true
    rag: true
    observability: true
  peers: []
  defaults:
    orchestration:
      pattern: react
      max_iterations: 10
```

All services are enabled and no peers or mesh networking are configured. This is the default profile generated by `astromeshctl init --dev` when you select the "standalone" role.
## Profile: gateway

File: `config/profiles/gateway.yaml`

Use when: This node is the entry point for external traffic (API requests, WhatsApp webhooks). It routes agent and inference work to backend workers.

```yaml
apiVersion: astromesh/v1
kind: RuntimeConfig
metadata:
  name: gateway
spec:
  api:
    host: "0.0.0.0"
    port: 8000
  services:
    api: true
    agents: false
    inference: false
    memory: false
    tools: false
    channels: true
    rag: false
    observability: true
  peers:
    - name: worker-1
      url: http://worker:8000
      services: [agents, tools, memory, rag]
```

The gateway handles API serving, channel integrations, and observability. Agent execution, inference, memory, tools, and RAG are disabled locally and forwarded to the configured peers. Update the `peers` list with the actual URLs of your worker nodes.
## Profile: worker

File: `config/profiles/worker.yaml`

Use when: This node runs agents, tools, memory, and RAG pipelines. It delegates LLM inference to a separate inference node.

```yaml
apiVersion: astromesh/v1
kind: RuntimeConfig
metadata:
  name: worker
spec:
  api:
    host: "0.0.0.0"
    port: 8000
  services:
    api: true
    agents: true
    inference: false
    memory: true
    tools: true
    channels: false
    rag: true
    observability: true
  peers:
    - name: inference-1
      url: http://inference:8000
      services: [inference]
```

Workers handle the agent runtime, tool execution, memory management, and RAG queries. Inference requests are forwarded to the configured inference peers. Channels are disabled because the gateway handles external traffic.
## Profile: inference

File: `config/profiles/inference.yaml`

Use when: This node is a dedicated LLM inference server. It serves model requests and nothing else.

```yaml
apiVersion: astromesh/v1
kind: RuntimeConfig
metadata:
  name: inference
spec:
  api:
    host: "0.0.0.0"
    port: 8000
  services:
    api: true
    agents: false
    inference: true
    memory: false
    tools: false
    channels: false
    rag: false
    observability: true
  peers: []
```

Only the API, inference, and observability services are active. This profile is for GPU nodes that run LLM providers (Ollama, vLLM, TGI) and receive inference requests from worker peers.
## Profile: mesh-gateway

File: `config/profiles/mesh-gateway.yaml`

Use when: Same role as gateway, but with Maia mesh networking enabled for automatic peer discovery. This node is typically the seed node.

```yaml
apiVersion: astromesh/v1
kind: RuntimeConfig
metadata:
  name: mesh-gateway
spec:
  api:
    host: "0.0.0.0"
    port: 8000
  services:
    api: true
    agents: false
    inference: false
    memory: false
    tools: false
    channels: true
    rag: false
    observability: true
  mesh:
    enabled: true
    node_name: gateway
    bind: "0.0.0.0:8000"
    seeds: []
    heartbeat_interval: 5
    gossip_interval: 2
    gossip_fanout: 3
    failure_timeout: 15
    dead_timeout: 30
```

The `seeds` list is empty because the mesh-gateway is the first node in the mesh — it serves as the seed that other nodes contact to join. Workers and inference nodes point their `seeds` to this node's address.
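One plausible reading of how the mesh timing knobs relate to each other is sketched below; the comments are our interpretation of the field names, not confirmed semantics, and the exact behavior depends on the Maia implementation:

```yaml
mesh:
  heartbeat_interval: 5   # assumed: seconds between heartbeats sent to known peers
  gossip_interval: 2      # assumed: seconds between gossip rounds
  gossip_fanout: 3        # assumed: number of peers contacted per gossip round
  failure_timeout: 15     # assumed: seconds without a heartbeat before a peer is marked failed
  dead_timeout: 30        # assumed: seconds without a heartbeat before a peer is dropped from the mesh
```

Under this reading, `failure_timeout` should be a multiple of `heartbeat_interval` so a single delayed heartbeat does not mark a healthy peer as failed.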
## Profile: mesh-worker

File: `config/profiles/mesh-worker.yaml`

Use when: Same role as worker, but with Maia mesh networking enabled. Joins the mesh via the gateway seed node.

```yaml
apiVersion: astromesh/v1
kind: RuntimeConfig
metadata:
  name: mesh-worker
spec:
  api:
    host: "0.0.0.0"
    port: 8000
  services:
    api: true
    agents: true
    inference: false
    memory: true
    tools: true
    channels: false
    rag: true
    observability: true
  mesh:
    enabled: true
    node_name: worker
    bind: "0.0.0.0:8000"
    seeds:
      - http://gateway:8000
    heartbeat_interval: 5
    gossip_interval: 2
    gossip_fanout: 3
    failure_timeout: 15
    dead_timeout: 30
```

The worker contacts `http://gateway:8000` to join the mesh. Once connected, the gossip protocol handles peer discovery — the worker automatically learns about inference nodes and other workers.
## Profile: mesh-inference

File: `config/profiles/mesh-inference.yaml`

Use when: Same role as inference, but with Maia mesh networking enabled. Joins the mesh via the gateway seed node.

```yaml
apiVersion: astromesh/v1
kind: RuntimeConfig
metadata:
  name: mesh-inference
spec:
  api:
    host: "0.0.0.0"
    port: 8000
  services:
    api: true
    agents: false
    inference: true
    memory: false
    tools: false
    channels: false
    rag: false
    observability: true
  mesh:
    enabled: true
    node_name: inference
    bind: "0.0.0.0:8000"
    seeds:
      - http://gateway:8000
    heartbeat_interval: 5
    gossip_interval: 2
    gossip_fanout: 3
    failure_timeout: 15
    dead_timeout: 30
```

The inference node joins the mesh through the gateway and advertises its inference service. Workers in the mesh automatically discover it and route inference requests to it.
## Docker Entrypoint Profile Selection

The Docker entrypoint automatically selects a profile based on two environment variables:
| ASTROMESH_MESH_ENABLED | ASTROMESH_ROLE | Profile loaded |
|---|---|---|
| false | full | profiles/full.yaml |
| false | gateway | profiles/gateway.yaml |
| false | worker | profiles/worker.yaml |
| false | inference | profiles/inference.yaml |
| true | gateway | profiles/mesh-gateway.yaml |
| true | worker | profiles/mesh-worker.yaml |
| true | inference | profiles/mesh-inference.yaml |
Example in `docker-compose.yaml`:
```yaml
services:
  gateway:
    image: astromesh:latest
    environment:
      - ASTROMESH_ROLE=gateway
      - ASTROMESH_MESH_ENABLED=true

  worker:
    image: astromesh:latest
    environment:
      - ASTROMESH_ROLE=worker
      - ASTROMESH_MESH_ENABLED=true

  inference:
    image: astromesh:latest
    environment:
      - ASTROMESH_ROLE=inference
      - ASTROMESH_MESH_ENABLED=true
```

The entrypoint script resolves the profile path:
- When `ASTROMESH_MESH_ENABLED=false` (or unset): loads `profiles/{ASTROMESH_ROLE}.yaml`
- When `ASTROMESH_MESH_ENABLED=true`: loads `profiles/mesh-{ASTROMESH_ROLE}.yaml`
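The resolution logic above can be sketched as a small POSIX shell function. This is a hypothetical helper, not the actual entrypoint script, and the `full` fallback for an unset `ASTROMESH_ROLE` is an assumption:

```shell
# Sketch of the entrypoint's profile resolution (hypothetical helper,
# not the shipped entrypoint; the "full" default role is assumed).
resolve_profile() {
  role="${ASTROMESH_ROLE:-full}"
  if [ "${ASTROMESH_MESH_ENABLED:-false}" = "true" ]; then
    echo "profiles/mesh-${role}.yaml"
  else
    echo "profiles/${role}.yaml"
  fi
}
```

With `ASTROMESH_ROLE=worker` and `ASTROMESH_MESH_ENABLED=true`, this yields `profiles/mesh-worker.yaml`, matching the table above.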
## Custom Profiles

To create a custom profile, copy an existing profile and modify the `services`, `peers`, or `mesh` sections:

```shell
# Start from the worker profile
cp config/profiles/worker.yaml config/profiles/custom-worker.yaml
```

Edit the file to match your requirements. For example, a worker that also handles channels:
```yaml
apiVersion: astromesh/v1
kind: RuntimeConfig
metadata:
  name: custom-worker
spec:
  api:
    host: "0.0.0.0"
    port: 8000
  services:
    api: true
    agents: true
    inference: false
    memory: true
    tools: true
    channels: true  # Added: this worker also receives external messages
    rag: true
    observability: true
  peers:
    - name: inference-1
      url: http://inference:8000
      services: [inference]
```

To use a custom profile with Docker, set the `ASTROMESH_PROFILE` environment variable to the profile file path:
```yaml
services:
  worker:
    image: astromesh:latest
    environment:
      - ASTROMESH_PROFILE=/app/config/profiles/custom-worker.yaml
```