# Quick Start

This guide gets Astromesh Node installed and running on your system. Pick your platform and follow the steps.
## Step 1: Install

**Debian/Ubuntu**

```sh
# Download the latest release
curl -LO https://github.com/monaccode/astromesh/releases/latest/download/astromesh_latest_amd64.deb

# Install
sudo apt install ./astromesh_latest_amd64.deb
```

**Fedora/RHEL**

```sh
# Download the latest release
curl -LO https://github.com/monaccode/astromesh/releases/latest/download/astromesh_latest_x86_64.rpm

# Install
sudo rpm -i astromesh_latest_x86_64.rpm
# or on Fedora/RHEL 8+:
sudo dnf install ./astromesh_latest_x86_64.rpm
```

**macOS**

```sh
# Download and extract
curl -LO https://github.com/monaccode/astromesh/releases/latest/download/astromesh_latest_darwin.tar.gz
tar -xzf astromesh_latest_darwin.tar.gz

# Run the installer
cd astromesh_latest_darwin
sudo ./install.sh
```

**Windows**

```powershell
# Download and extract
Invoke-WebRequest -Uri https://github.com/monaccode/astromesh/releases/latest/download/astromesh_latest_windows.zip -OutFile astromesh.zip
Expand-Archive -Path astromesh.zip -DestinationPath astromesh

# Run the installer (as Administrator)
cd astromesh
.\install.ps1
```

Verify the installation:
```sh
astromeshctl version
```

Expected output:

```
Astromesh Node v0.18.0
Daemon: /opt/astromesh/bin/astromeshd
CLI: /opt/astromesh/bin/astromeshctl
Python: 3.12.x
Platform: linux/amd64
```
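If `astromeshctl` is not found after installing, the install directory may be missing from your PATH. A minimal sketch, assuming the `/opt/astromesh/bin` location shown in the version output above:

```sh
# Append the Astromesh bin directory to PATH for the current shell session
# if the CLI is not already resolvable.
if ! command -v astromeshctl >/dev/null 2>&1; then
  export PATH="$PATH:/opt/astromesh/bin"
fi
```

For a permanent fix, add the same export to your shell profile (e.g. `~/.profile`).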
## Step 2: Initialize

Run the interactive configuration wizard. This generates your `runtime.yaml`, `providers.yaml`, and a default agent:
```sh
sudo astromeshctl init --profile full
```

The wizard prompts for your LLM provider (Ollama, OpenAI, Anthropic, etc.) and memory backend. For a quick start with a local model:
```
? Select deployment profile: full
? Select provider: Ollama (local)
? Ollama endpoint: http://localhost:11434
? Select model: llama3.1:8b
? Enable memory? Yes
? Memory backend: SQLite (local)
? Enable observability? No

Configuration written to /etc/astromesh/
```

To skip the interactive wizard, pass the profile and provider flags directly:

```sh
sudo astromeshctl init --profile full --provider ollama --model llama3.1:8b --non-interactive
```
## Step 3: Start the Service

**Linux (systemd)**

```sh
sudo systemctl enable astromeshd
sudo systemctl start astromeshd
```

**macOS**

```sh
sudo launchctl load /Library/LaunchDaemons/com.astromesh.astromeshd.plist
```

**Windows**

```powershell
Start-Service AstromeshDaemon
Set-Service AstromeshDaemon -StartupType Automatic
```
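Starting the service returns immediately, but the daemon may take a moment to come up. A small sketch for provisioning scripts that polls `astromeshctl status` (the command used in the verification step below) and greps for the `Running` state, with a bounded number of retries:

```sh
# Poll until the daemon reports Running, giving up after 5 attempts.
tries=0
ready=no
while [ "$tries" -lt 5 ]; do
  if astromeshctl status 2>/dev/null | grep -q "Running"; then
    ready=yes
    break
  fi
  tries=$((tries + 1))
  sleep 1
done
echo "daemon ready: $ready"
```

This avoids a race when a script starts the service and immediately calls the API.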
## Step 4: Verify

Check that the daemon is running and agents are loaded:
```sh
astromeshctl status
```

Expected output:

```
┌──────────────────────────────────────┐
│ Astromesh Status                     │
├──────────────┬───────────────────────┤
│ Status       │ ● Running             │
│ Version      │ 0.18.0                │
│ Uptime       │ 0h 0m 12s             │
│ Profile      │ full                  │
│ PID          │ 4521                  │
│ Agents       │ 1 loaded              │
│ Providers    │ 1 healthy, 0 degraded │
│ Memory       │ 128.0 MB              │
└──────────────┴───────────────────────┘
```

Run a full diagnostics check:

```sh
astromeshctl doctor
```
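In provisioning or CI scripts, the diagnostics command can gate subsequent steps. A sketch assuming, as is conventional for CLI tools but not confirmed by this guide, that `astromeshctl doctor` exits non-zero when a check fails:

```sh
# Record overall health based on the doctor command's exit code.
if astromeshctl doctor >/dev/null 2>&1; then
  health=ok
else
  health=failed
fi
echo "astromesh health: $health"
```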
## Step 5: Run Your First Agent

With the service running, call the API:
```sh
curl -s http://localhost:8000/v1/agents/default/run \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello! What can you do?"}' | jq .
```

Expected response:

```json
{
  "agent": "default",
  "response": "Hello! I'm your Astromesh assistant. I can answer questions, help with research...",
  "session_id": "sess_abc123",
  "tokens_used": 42
}
```
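When scripting against the API, individual fields can be pulled out with `jq` (already used for pretty-printing above). A sketch using the documented response shape as sample data; the field names `session_id` and `tokens_used` come from the expected response, and everything else is illustrative:

```sh
# Sample payload copied from the expected response shape above.
response='{"agent":"default","session_id":"sess_abc123","tokens_used":42}'

# Extract fields for use in scripts, e.g. to reuse the session later.
session_id=$(printf '%s' "$response" | jq -r .session_id)
tokens=$(printf '%s' "$response" | jq -r .tokens_used)
echo "session=$session_id tokens=$tokens"
```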
## Next Steps

- **Configuration** — Customize `runtime.yaml`, profiles, and environment variables
- **CLI Reference** — Full `astromeshctl` command reference
- **Troubleshooting** — Common issues and `astromeshctl doctor`
- **Your First Agent** — Define a custom agent in YAML