Studio Guide

Astromesh Studio is the visual agent builder at studio.astromesh.io. It provides a 5-step wizard for creating and deploying agents without writing YAML or code. Under the hood, it generates the same WizardConfig that you can also POST directly to the Cloud API.

  1. Configure the agent’s name and which LLM it uses.

    Fields:

    • Agent name — URL-safe identifier (used in API paths and runtime ID)
    • Display name — Human-readable label shown in Studio
    • Primary model — The LLM to use for responses
    • Fallback model — Used if the primary model fails or is unavailable

    Available models:

    Provider     Models
    OpenAI       gpt-4o, gpt-4o-mini, gpt-4-turbo
    Anthropic    claude-3-5-sonnet, claude-3-haiku
    Groq         llama3-8b-8192, mixtral-8x7b-32768
    Ollama       llama3, mistral, phi3 (platform-hosted, no key required)

    To use OpenAI or Anthropic models, you need to add a provider key first (see Authentication).
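    The choices from this step end up in the generated WizardConfig. As a rough sketch of what that fragment might look like (the field names here are assumptions, not the documented schema; see the API Reference for the real shape):

    ```python
    # Hypothetical step-1 fragment of a WizardConfig (field names are illustrative).
    agent_config = {
        "name": "support-bot",          # URL-safe identifier, used in API paths
        "display_name": "Support Bot",  # human-readable label shown in Studio
        "model": {
            "primary": "gpt-4o",   # requires an OpenAI provider key
            "fallback": "llama3",  # platform-hosted Ollama model, no key required
        },
    }
    print(agent_config["model"]["fallback"])
    ```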

  2. Write the system prompt that defines your agent’s behavior and personality.

    Fields:

    • System prompt — The main instruction set for the agent. Supports Jinja2 templating.
    • Persona — Short label used in Studio (e.g. “Support Agent”, “Data Analyst”)

    Jinja2 variables available in prompts:

    Variable                 Description
    {{ session_id }}         Current session identifier
    {{ context }}            Context dict passed in the run request
    {{ context.user_id }}    Example of nested context access
    {{ today }}              Current date

    Example system prompt:

    You are a helpful support agent for Acme Corp.
    You are speaking with user {{ context.user_id | default('a customer') }}.
    Always be concise, friendly, and professional.
    If you cannot answer a question, direct the user to support@acme.com.
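    The `default` filter in the example above keeps the prompt rendering even when `context.user_id` is missing. To see that fallback behavior in isolation, here is a stdlib-only stand-in (a toy regex renderer, not the Jinja2 engine the platform actually uses):

    ```python
    import re

    def render_prompt(template: str, context: dict) -> str:
        """Toy renderer for "{{ context.KEY | default('X') }}" placeholders.
        Illustrates the fallback semantics only; real prompts go through Jinja2."""
        pattern = re.compile(
            r"\{\{\s*context\.(\w+)\s*\|\s*default\('([^']*)'\)\s*\}\}"
        )
        # Use the context value if present, otherwise the filter's default.
        return pattern.sub(lambda m: str(context.get(m.group(1), m.group(2))), template)

    prompt = "You are speaking with user {{ context.user_id | default('a customer') }}."
    print(render_prompt(prompt, {"user_id": "u-42"}))  # -> ...user u-42.
    print(render_prompt(prompt, {}))                   # -> ...user a customer.
    ```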
  3. Select which tools the agent can call during a conversation.

    Available tools:

    Tool                Description                               Status
    calculator          Arithmetic and math expressions           Available
    web_search          Search the web for current information    Available
    weather             Get current weather by location           Available
    datetime            Get current date/time in any timezone     Available
    code_interpreter    Execute Python code snippets              Coming soon
    file_reader         Read uploaded files                       Coming soon
    sql_query           Query a connected database                Coming soon
    http_request        Call external REST APIs                   Coming soon

    Tools are managed by the platform’s ToolRegistry. The agent automatically decides when to call a tool based on the user’s query — you don’t need to configure invocation logic.

    MCP Tools: Custom tools via Model Context Protocol (MCP) are supported in the core runtime. Studio support for MCP tool configuration is coming in a future release.
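    Conceptually, a registry like the ToolRegistry maps tool names to callables the agent can invoke. A minimal sketch under that assumption (the class and method names here are illustrative, not the platform's actual API):

    ```python
    from typing import Any, Callable, Dict

    class ToolRegistry:
        """Minimal sketch: named tools the agent may call during a conversation."""

        def __init__(self) -> None:
            self._tools: Dict[str, Callable[..., Any]] = {}

        def register(self, name: str, fn: Callable[..., Any]) -> None:
            self._tools[name] = fn

        def call(self, name: str, **kwargs: Any) -> Any:
            if name not in self._tools:
                raise KeyError(f"unknown tool: {name}")
            return self._tools[name](**kwargs)

    registry = ToolRegistry()
    # A calculator-style tool: evaluate a bare arithmetic expression.
    registry.register(
        "calculator",
        lambda expression: eval(expression, {"__builtins__": {}}),
    )
    print(registry.call("calculator", expression="2 + 3 * 4"))  # -> 14
    ```

    In the real runtime the model, not your code, decides when a registered tool gets called.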

  4. Configure how the agent remembers information across turns in a conversation.

    Memory type:

    Type              Description
    conversational    Stores recent chat history (recommended for most agents)
    semantic          Vector embeddings for long-term knowledge retrieval
    episodic          Event log of past interactions

    Memory strategy (for conversational memory):

    Strategy          Description
    sliding_window    Keep the last N messages. Fast, predictable cost.
    summary           Summarize older messages to stay within token limits.
    token_budget      Keep messages until a token budget is reached.

    Window size: Number of messages to retain (for sliding_window). Default: 20.

    Memory is scoped to session_id. Reusing the same session_id across calls gives the agent continuity. Using a new session_id starts fresh.
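    The sliding_window strategy and session scoping described above can be sketched like this (class and method names are illustrative, not the platform's internals):

    ```python
    from collections import deque
    from typing import Dict, List

    class SlidingWindowMemory:
        """Sketch of sliding_window memory: keep only the last N messages,
        scoped per session_id."""

        def __init__(self, window_size: int = 20) -> None:  # default matches the docs
            self.window_size = window_size
            self._sessions: Dict[str, deque] = {}

        def add(self, session_id: str, message: str) -> None:
            # deque(maxlen=N) silently drops the oldest message once full.
            bucket = self._sessions.setdefault(
                session_id, deque(maxlen=self.window_size)
            )
            bucket.append(message)

        def history(self, session_id: str) -> List[str]:
            return list(self._sessions.get(session_id, []))

    mem = SlidingWindowMemory(window_size=3)
    for i in range(5):
        mem.add("sess-1", f"msg-{i}")
    print(mem.history("sess-1"))  # only the last 3 messages survive
    print(mem.history("sess-2"))  # a new session_id starts fresh: []
    ```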

  5. Add safety filters and resource constraints.

    Input guardrails (applied to user messages before the model sees them):

    Guardrail           Description
    pii_filter          Detect and redact personally identifiable information
    prompt_injection    Block prompt injection attempts
    topic_filter        Restrict to allowed topics (requires configuration)

    Output guardrails (applied to model responses before returning to the user):

    Guardrail              Description
    content_safety         Filter harmful or inappropriate content
    hallucination_check    Flag responses with low confidence
    format_validator       Ensure output matches the expected format

    Limits:

    • Max tokens — Maximum response length. Default: 1000.
    • Max iterations — Maximum tool call loops per request. Default: 5.
    • Timeout — Request timeout in seconds. Default: 30.
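    To give a feel for what an input guardrail like pii_filter does, here is a rough stdlib-only sketch that redacts email addresses before the message reaches the model. The platform's real filter covers far more PII categories; the regex and function name here are purely illustrative:

    ```python
    import re

    # Naive email matcher; a real pii_filter would cover phones, IDs, etc.
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

    def pii_filter(text: str) -> str:
        """Redact email addresses from a user message before the model sees it."""
        return EMAIL.sub("[REDACTED_EMAIL]", text)

    print(pii_filter("Contact me at jane.doe@example.com please"))
    # -> Contact me at [REDACTED_EMAIL] please
    ```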

After completing the 5-step wizard, click Deploy to transition the agent from draft to deployed. Studio calls POST /orgs/{slug}/agents/{name}/deploy on your behalf.

Once deployed:

  • The agent gets a public endpoint you can copy from Studio
  • You can run test queries in the Studio playground (uses POST .../test)
  • Usage metrics appear on the org dashboard

To edit a deployed agent in Studio:

  1. Open the agent
  2. Click Pause — this deregisters the agent from the runtime
  3. Make your changes in the wizard
  4. Click Deploy to bring it back online

Changes to the system prompt, tools, or model take effect immediately on the next deploy.
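The edit flow above implies a simple agent lifecycle: draft to deployed, deployed to paused for editing, then back to deployed. A toy model of those transitions, with state names taken from the text (the runtime's real state handling is not documented here):

```python
class AgentLifecycle:
    """Toy model: deploy brings draft/paused agents online; pause
    deregisters a deployed agent so it can be edited in the wizard."""

    def __init__(self) -> None:
        self.state = "draft"

    def deploy(self) -> None:
        if self.state not in ("draft", "paused"):
            raise RuntimeError(f"cannot deploy from {self.state}")
        self.state = "deployed"

    def pause(self) -> None:
        if self.state != "deployed":
            raise RuntimeError(f"cannot pause from {self.state}")
        self.state = "paused"

agent = AgentLifecycle()
agent.deploy()  # draft -> deployed
agent.pause()   # deployed -> paused (make wizard changes here)
agent.deploy()  # paused -> deployed again
print(agent.state)  # -> deployed
```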

Everything Studio does maps directly to Cloud API calls. If you prefer working in code or CI:

# Create an agent (equivalent to completing the wizard)
curl -X POST "https://api.astromesh.io/api/v1/orgs/$ORG/agents" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{ ... WizardConfig ... }'

# Deploy (equivalent to clicking Deploy)
curl -X POST "https://api.astromesh.io/api/v1/orgs/$ORG/agents/my-bot/deploy" \
  -H "Authorization: Bearer $TOKEN"

# Test in the playground (equivalent to the Studio playground)
curl -X POST "https://api.astromesh.io/api/v1/orgs/$ORG/agents/my-bot/test" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"query": "Hello", "session_id": "test-1"}'

See the API Reference for complete documentation.