
Registry Guide — Prompts, Tools, RAG, Memory & Agents

Step-by-step workflows for creating, editing, testing, versioning, and registering every resource type in AgentBreeder.


Overview

AgentBreeder's registry is a shared catalog of reusable resources. Instead of copy-pasting configs between agents, you register a resource once and reference it by name everywhere.

| Resource | What it is | Reference syntax |
|---|---|---|
| Prompt | System/user prompt template with variables | `prompts/support-system-v3` |
| Tool | Function, API, or MCP server an agent can call | `tools/zendesk-lookup` |
| Knowledge Base | Vector-indexed documents for RAG retrieval | `kb/product-docs` |
| Memory | Conversation history / state storage config | `memory/session-buffer` |
| MCP Server | Model Context Protocol server with discoverable tools | `mcp/slack-server` |
| Agent | A deployed AI agent | `agents/support-agent` |

The workflow for every resource is the same:

Create (YAML or API)  →  Validate  →  Register  →  Reference in agent.yaml  →  Deploy
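Registry references follow a consistent `type/name` convention. As a minimal sketch (the real resolver is internal to AgentBreeder, so this is illustrative only), a reference splits into a resource type and a resource name:

```python
# Illustrative only: split a registry reference like "prompts/support-system-v3"
# into its resource type and resource name.
def parse_ref(ref: str) -> tuple[str, str]:
    resource_type, _, name = ref.partition("/")
    if not name:
        raise ValueError(f"not a registry reference: {ref!r}")
    return resource_type, name

print(parse_ref("prompts/support-system-v3"))  # ('prompts', 'support-system-v3')
```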

Prompts

What is a Prompt?

A prompt is a versioned template with variable placeholders ({{variable}}). Prompts live in the registry so multiple agents can share and version them independently.
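Conceptually, rendering substitutes each `{{variable}}` with a supplied value, falling back to the declared default. This is a minimal sketch of that behavior, not AgentBreeder's actual renderer:

```python
import re

# Sketch of {{variable}} substitution with defaults (not the real renderer).
def render(template: str, variables: dict, defaults: dict) -> str:
    def sub(match: re.Match) -> str:
        name = match.group(1)
        if name in variables:
            return str(variables[name])
        if name in defaults:
            return str(defaults[name])
        raise KeyError(f"missing required variable: {name}")
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", sub, template)

print(render(
    "You are a support agent for {{company_name}}. Language: {{language}}.",
    {"company_name": "Acme Corp"},
    {"language": "English"},
))
```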

Step 1: Create a Prompt YAML File

Create prompt.yaml:

spec_version: v1
name: support-system-v3
version: 1.0.0
description: "System prompt for tier-1 customer support agents"
team: customer-success
owner: alice@company.com
tags: [support, system-prompt, production]

# Inline content (for short prompts)
content: |
  You are a helpful customer support agent for {{company_name}}.

  Your responsibilities:
  - Answer questions about {{product_name}}
  - Look up orders using the order-lookup tool
  - Escalate billing issues to a human

  Tone: Professional but friendly.
  Language: {{language}}

  If you don't know the answer, say so. Never make up information.

# Define variables with defaults
variables:
  - name: company_name
    description: "The company name to use in responses"
    default: "Acme Corp"
    required: true
  - name: product_name
    description: "Primary product name"
    default: "AcmeBot"
  - name: language
    description: "Response language"
    default: "English"

# Optional hints for the model
metadata:
  model_hint: claude-sonnet-4
  max_tokens: 4096
  temperature: 0.7

For long prompts, use a separate file:

# prompt.yaml — reference external content
name: support-system-v3
version: 1.0.0
content_ref: ./prompts/support-system.md   # Path to .md file
variables:
  - name: company_name
    required: true

Step 2: Validate the Prompt

agentbreeder validate prompt.yaml

Expected output:

✅ YAML syntax valid
✅ JSON Schema valid (prompt.schema.json)
✅ All required fields present
✅ Variable names are valid identifiers

Step 3: Register the Prompt in the Registry

=== "CLI (Git workflow)"

# Submit creates a review branch and opens a PR
agentbreeder submit prompt support-system-v3 \
  --message "Initial system prompt for support agents"

Output:

✅ Created branch: draft/alice/prompt/support-system-v3
✅ PR opened: #42 — Update prompt/support-system-v3
   Files changed: 1 added
   Reviewers: auto-assigned from team customer-success

=== "API (direct registration)"

curl -X POST http://localhost:8000/api/v1/registry/prompts \
  -H "Content-Type: application/json" \
  -d '{
    "name": "support-system-v3",
    "version": "1.0.0",
    "content": "You are a helpful customer support agent for {{company_name}}...",
    "description": "System prompt for tier-1 customer support agents",
    "team": "customer-success"
  }'

=== "Dashboard"

  1. Go to Registry → Prompts → Create New
  2. Fill in the name, version, team, and content
  3. Click Register

Step 4: Test the Prompt

Test how your prompt renders with variables and how the model responds:

=== "API"

curl -X POST http://localhost:8000/api/v1/prompts/test \
  -H "Content-Type: application/json" \
  -d '{
    "prompt_text": "You are a helpful support agent for {{company_name}}. Product: {{product_name}}.",
    "variables": {
      "company_name": "Acme Corp",
      "product_name": "AcmeBot"
    },
    "model_name": "claude-sonnet-4",
    "temperature": 0.7
  }'

Response:

{
  "data": {
    "rendered_prompt": "You are a helpful support agent for Acme Corp. Product: AcmeBot.",
    "response_text": "Hello! I'm here to help you with AcmeBot...",
    "model_name": "claude-sonnet-4",
    "input_tokens": 42,
    "output_tokens": 128,
    "total_tokens": 170,
    "latency_ms": 850
  }
}

=== "Dashboard"

  1. Go to Registry → Prompts → your prompt
  2. Click the Test tab
  3. Fill in variables and click Run Test
  4. See the rendered prompt and model response side-by-side

Step 5: Reference the Prompt in an Agent

# agent.yaml
name: support-agent
framework: langgraph
model:
  primary: claude-sonnet-4

prompts:
  system: prompts/support-system-v3    # ← registry reference

Or inline for simple agents:

prompts:
  system: "You are a helpful assistant."

Edit an Existing Prompt

=== "CLI"

# View current content
agentbreeder describe prompt support-system-v3

=== "API — Simple update"

curl -X PUT http://localhost:8000/api/v1/registry/prompts/{prompt_id} \
  -H "Content-Type: application/json" \
  -d '{
    "content": "Updated prompt content with {{new_variable}}...",
    "description": "Updated description"
  }'

=== "API — Update with version snapshot"

# This auto-creates a version snapshot for rollback
curl -X PUT http://localhost:8000/api/v1/registry/prompts/{prompt_id}/content \
  -H "Content-Type: application/json" \
  -d '{
    "content": "Updated prompt content...",
    "change_summary": "Added escalation instructions",
    "author": "alice@company.com"
  }'

Version History and Diff

# List all versions
curl http://localhost:8000/api/v1/registry/prompts/{prompt_id}/versions/history

# Compare two versions
curl http://localhost:8000/api/v1/registry/prompts/{prompt_id}/versions/history/{v1_id}/diff/{v2_id}

Response (diff):

{
  "data": {
    "version_a": { "version": "1.0.0", "content": "..." },
    "version_b": { "version": "1.1.0", "content": "..." },
    "diff": [
      "--- v1.0.0",
      "+++ v1.1.0",
      "@@ -3,2 +3,4 @@",
      " Your responsibilities:",
      "+- Escalate billing issues to a human",
      "+- Log all interactions"
    ]
  }
}
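The `diff` array follows standard unified-diff conventions. For reference, Python's `difflib` produces the same shape locally, if you ever want to compare two prompt versions yourself before submitting:

```python
import difflib

# Compare two prompt bodies the same way the registry's diff endpoint renders them.
v1 = "Your responsibilities:\n- Answer questions\n"
v2 = "Your responsibilities:\n- Answer questions\n- Escalate billing issues to a human\n"

diff = list(difflib.unified_diff(
    v1.splitlines(), v2.splitlines(),
    fromfile="v1.0.0", tofile="v1.1.0", lineterm="",
))
print("\n".join(diff))
```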

Duplicate a Prompt

# Create a copy as a new version (useful for A/B testing)
curl -X POST http://localhost:8000/api/v1/registry/prompts/{prompt_id}/duplicate

Review and Publish (Git Workflow)

# List pending reviews
agentbreeder review list --status submitted --type prompt

# Show PR details
agentbreeder review show {pr_id}

# Approve
agentbreeder review approve {pr_id}

# Publish to registry (merges the PR)
agentbreeder publish prompt support-system-v3 --version 1.0.0

Tools

What is a Tool?

A tool is something an agent can call — a function, an API endpoint, or an MCP server. Tools have typed input/output schemas so the agent knows how to use them.

Tool Types

| Type | Description | When to use |
|---|---|---|
| `function` | Python/TypeScript function bundled with the agent | Simple, self-contained logic |
| `api` | External HTTP API endpoint | Calling third-party services |
| `mcp` | Model Context Protocol server | Rich tool ecosystems, shared tool servers |

Step 1: Create a Tool YAML File

Create tool.yaml:

spec_version: v1
name: order-lookup
version: 1.0.0
description: "Look up customer orders by order ID or email"
team: customer-success
owner: alice@company.com
tags: [orders, support, api]

type: function

input_schema:
  type: object
  properties:
    order_id:
      type: string
      description: "Order ID (e.g., ORD-12345)"
    email:
      type: string
      format: email
      description: "Customer email address"
  oneOf:
    - required: [order_id]
    - required: [email]

output_schema:
  type: object
  properties:
    order_id:
      type: string
    status:
      type: string
      enum: [pending, shipped, delivered, cancelled]
    items:
      type: array
      items:
        type: object
        properties:
          name: { type: string }
          quantity: { type: integer }
          price: { type: number }

implementation:
  language: python
  entrypoint: handler.py:run
  dependencies:
    - httpx>=0.25
    - pydantic>=2.0

timeout_seconds: 30
network_access: true
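The `oneOf` constraint above means a caller must supply exactly one of `order_id` or `email` (both present satisfies both branches, which `oneOf` rejects). A full validator would use a JSON Schema library; this simplified stand-in just illustrates the rule:

```python
# Simplified stand-in for the oneOf constraint in input_schema above:
# exactly one of order_id / email must be present.
def check_lookup_input(data: dict) -> bool:
    return ("order_id" in data) != ("email" in data)

print(check_lookup_input({"order_id": "ORD-12345"}))                     # True
print(check_lookup_input({"order_id": "ORD-1", "email": "a@b.com"}))     # False
```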

Step 2: Write the Implementation

Create handler.py:

import httpx


async def run(input_data: dict) -> dict:
    """Look up an order by ID or email."""
    async with httpx.AsyncClient() as client:
        if "order_id" in input_data:
            resp = await client.get(
                f"https://api.internal/orders/{input_data['order_id']}"
            )
        else:
            resp = await client.get(
                "https://api.internal/orders",
                params={"email": input_data["email"]},
            )
        resp.raise_for_status()
        return resp.json()

Step 3: Test in the Sandbox

Before registering, test your tool in an isolated sandbox:

curl -X POST http://localhost:8000/api/v1/tools/sandbox/execute \
  -H "Content-Type: application/json" \
  -d '{
    "code": "async def run(input_data):\n    return {\"order_id\": input_data[\"order_id\"], \"status\": \"shipped\", \"items\": [{\"name\": \"Widget\", \"quantity\": 2, \"price\": 9.99}]}",
    "input_json": {"order_id": "ORD-12345"},
    "timeout_seconds": 30,
    "network_enabled": false
  }'

Response:

{
  "data": {
    "execution_id": "exec-abc123",
    "output": {
      "order_id": "ORD-12345",
      "status": "shipped",
      "items": [{"name": "Widget", "quantity": 2, "price": 9.99}]
    },
    "stdout": "",
    "stderr": "",
    "exit_code": 0,
    "duration_ms": 45,
    "timed_out": false
  }
}

Step 4: Register the Tool

=== "CLI"

agentbreeder submit tool order-lookup \
  --message "Order lookup tool for support agents"

=== "API"

curl -X POST http://localhost:8000/api/v1/registry/tools \
  -H "Content-Type: application/json" \
  -d '{
    "name": "order-lookup",
    "description": "Look up customer orders by order ID or email",
    "tool_type": "function",
    "schema_definition": {
      "input": {
        "type": "object",
        "properties": {
          "order_id": {"type": "string"},
          "email": {"type": "string", "format": "email"}
        }
      },
      "output": {
        "type": "object",
        "properties": {
          "order_id": {"type": "string"},
          "status": {"type": "string"}
        }
      }
    },
    "source": "manual"
  }'

Step 5: Reference the Tool in an Agent

# agent.yaml
name: support-agent
framework: langgraph
model:
  primary: claude-sonnet-4

tools:
  - ref: tools/order-lookup           # ← registry reference
  - ref: tools/zendesk-mcp

  # Or define inline for simple tools:
  - name: calculator
    type: function
    description: "Perform arithmetic calculations"
    schema:
      input:
        type: object
        properties:
          expression: { type: string }

Check Which Agents Use a Tool

curl http://localhost:8000/api/v1/registry/tools/{tool_id}/usage
{
  "data": [
    {"agent_id": "uuid-1", "agent_name": "support-agent", "agent_status": "running"},
    {"agent_id": "uuid-2", "agent_name": "sales-agent", "agent_status": "stopped"}
  ]
}

MCP Servers

What is an MCP Server?

An MCP (Model Context Protocol) server exposes a set of tools over a standard protocol. Instead of defining each tool individually, you register one MCP server and all its tools become available to your agents.

Step 1: Register an MCP Server

=== "API"

curl -X POST http://localhost:8000/api/v1/mcp-servers \
  -H "Content-Type: application/json" \
  -d '{
    "name": "slack-server",
    "endpoint": "http://mcp-slack:3000",
    "transport": "sse"
  }'

=== "CLI (scan for MCP servers)"

# Auto-discover MCP servers from your environment
agentbreeder scan

Transport options:

| Transport | Description | Example endpoint |
|---|---|---|
| `stdio` | Standard I/O (local process) | `npx -y @modelcontextprotocol/server-slack` |
| `sse` | Server-Sent Events over HTTP | `http://mcp-slack:3000` |
| `streamable_http` | Streamable HTTP | `http://mcp-slack:3000/mcp` |

Step 2: Test Connectivity

curl -X POST http://localhost:8000/api/v1/mcp-servers/{server_id}/test
{
  "data": {
    "success": true,
    "latency_ms": 45,
    "error": null
  }
}

Step 3: Discover Available Tools

curl -X POST http://localhost:8000/api/v1/mcp-servers/{server_id}/discover
{
  "data": {
    "tools": [
      {
        "name": "send_message",
        "description": "Send a message to a Slack channel",
        "schema_definition": {
          "type": "object",
          "properties": {
            "channel": {"type": "string"},
            "text": {"type": "string"}
          }
        }
      },
      {
        "name": "list_channels",
        "description": "List all Slack channels",
        "schema_definition": {}
      }
    ],
    "total": 2
  }
}

Step 4: Test a Tool on the Server

curl -X POST "http://localhost:8000/api/v1/mcp-servers/{server_id}/execute?tool_name=list_channels"

Step 5: Reference in an Agent

# agent.yaml
mcp_servers:
  - ref: mcp/slack-server
    transport: sse

# Or with inline tools from agent.yaml
tools:
  - ref: tools/slack-send-message    # If registered as individual tool

Manage MCP Servers

# List all registered servers
curl http://localhost:8000/api/v1/mcp-servers

# Update a server
curl -X PUT http://localhost:8000/api/v1/mcp-servers/{server_id} \
  -H "Content-Type: application/json" \
  -d '{"endpoint": "http://new-host:3000", "status": "active"}'

# Delete a server
curl -X DELETE http://localhost:8000/api/v1/mcp-servers/{server_id}

Knowledge Bases (RAG / Vector DB)

What is a Knowledge Base?

A knowledge base is a collection of documents indexed in a vector database for retrieval-augmented generation (RAG). When an agent needs to answer a question, it searches the knowledge base for relevant context before generating a response.

Step 1: Create a Knowledge Base YAML

Create rag.yaml:

spec_version: v1
name: product-docs
version: 1.0.0
description: "Product documentation and FAQ for support agents"
team: customer-success
owner: alice@company.com
tags: [docs, support, rag]

backend: pgvector                # pgvector or in_memory

embedding_model:
  provider: openai
  name: text-embedding-3-small
  dimensions: 1536

chunking:
  strategy: recursive            # recursive or fixed_size
  chunk_size: 512                # tokens per chunk
  chunk_overlap: 50              # overlap between chunks

sources:
  - type: file
    path: ./docs/product-guide.pdf
  - type: file
    path: ./docs/faq.md
  - type: url
    url: https://docs.company.com/api-reference

search:
  hybrid: true                   # Combine vector + keyword search
  vector_weight: 0.7             # Weight for vector similarity
  text_weight: 0.3               # Weight for keyword matching
  default_top_k: 5               # Number of results to return
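To make `chunk_size` and `chunk_overlap` concrete, here is a sketch of fixed-size chunking with overlap (the `fixed_size` strategy; `recursive` additionally prefers natural boundaries like paragraphs first). Sizes here are in characters for simplicity, whereas the config above counts tokens:

```python
# Sketch of fixed-size chunking with overlap. Each chunk starts
# (chunk_size - chunk_overlap) characters after the previous one, so
# consecutive chunks share chunk_overlap characters of context.
def chunk(text: str, chunk_size: int = 512, chunk_overlap: int = 50) -> list[str]:
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

sample = "".join(str(i % 10) for i in range(1200))
pieces = chunk(sample, chunk_size=512, chunk_overlap=50)
print(len(pieces))  # 3
```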

Step 2: Create a Vector Index

curl -X POST http://localhost:8000/api/v1/rag/indexes \
  -H "Content-Type: application/json" \
  -d '{
    "name": "product-docs",
    "description": "Product documentation for support agents",
    "embedding_model": "text-embedding-3-small",
    "chunk_strategy": "recursive",
    "chunk_size": 512,
    "chunk_overlap": 50,
    "source": "manual"
  }'

Response:

{
  "data": {
    "id": "idx-abc123",
    "name": "product-docs",
    "description": "Product documentation for support agents",
    "embedding_model": "text-embedding-3-small",
    "chunk_strategy": "recursive",
    "chunk_size": 512,
    "chunk_overlap": 50,
    "dimensions": 1536,
    "doc_count": 0,
    "chunk_count": 0,
    "created_at": "2026-04-10T12:00:00Z"
  }
}

Step 3: Ingest Documents

Upload files into your index:

curl -X POST http://localhost:8000/api/v1/rag/indexes/{index_id}/ingest \
  -F "files=@docs/product-guide.pdf" \
  -F "files=@docs/faq.md" \
  -F "files=@docs/return-policy.txt"

Supported file formats: PDF, TXT, MD, CSV, JSON

Response:

{
  "data": {
    "id": "job-xyz789",
    "index_id": "idx-abc123",
    "status": "processing",
    "total_files": 3,
    "processed_files": 0,
    "total_chunks": 0,
    "embedded_chunks": 0,
    "progress_pct": 0
  }
}

Step 4: Monitor Ingestion Progress

curl http://localhost:8000/api/v1/rag/indexes/{index_id}/ingest/{job_id}
{
  "data": {
    "id": "job-xyz789",
    "status": "completed",
    "total_files": 3,
    "processed_files": 3,
    "total_chunks": 247,
    "embedded_chunks": 247,
    "progress_pct": 100
  }
}

Step 5: Test Search

Run a hybrid search to verify your index returns relevant results:

curl -X POST http://localhost:8000/api/v1/rag/search \
  -H "Content-Type: application/json" \
  -d '{
    "index_id": "idx-abc123",
    "query": "What is the return policy for electronics?",
    "top_k": 5,
    "vector_weight": 0.7,
    "text_weight": 0.3
  }'

Response:

{
  "data": {
    "index_id": "idx-abc123",
    "query": "What is the return policy for electronics?",
    "top_k": 5,
    "results": [
      {
        "chunk_id": "chunk-001",
        "text": "Electronics can be returned within 30 days of purchase with original packaging...",
        "source": "return-policy.txt",
        "score": 0.92,
        "metadata": {"page": 3, "section": "Electronics"}
      },
      {
        "chunk_id": "chunk-042",
        "text": "All returns require a receipt or order confirmation email...",
        "source": "faq.md",
        "score": 0.85,
        "metadata": {}
      }
    ],
    "total": 2
  }
}
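Hybrid ranking blends the two signals using the configured weights. The actual scoring runs server-side; this sketch just shows how `vector_weight` and `text_weight` combine:

```python
# Illustration of hybrid scoring: a weighted sum of the vector-similarity
# score and the keyword-match score, using the weights from the config.
def hybrid_score(vector_score: float, text_score: float,
                 vector_weight: float = 0.7, text_weight: float = 0.3) -> float:
    return vector_weight * vector_score + text_weight * text_score

print(round(hybrid_score(0.9, 0.5), 2))  # 0.78
```

Raising `vector_weight` favors semantically similar chunks even without exact word overlap; raising `text_weight` favors exact keyword matches.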

Step 6: Reference in an Agent

# agent.yaml
name: support-agent
framework: langgraph
model:
  primary: claude-sonnet-4

knowledge_bases:
  - ref: kb/product-docs           # ← registry reference
  - ref: kb/return-policy

Manage Indexes

# List all indexes
curl http://localhost:8000/api/v1/rag/indexes

# Get index details (doc count, chunk count)
curl http://localhost:8000/api/v1/rag/indexes/{index_id}

# Delete an index
curl -X DELETE http://localhost:8000/api/v1/rag/indexes/{index_id}

Tuning Tips

| Parameter | Default | When to change |
|---|---|---|
| `chunk_size` | 512 | Increase for long documents, decrease for Q&A-style content |
| `chunk_overlap` | 50 | Increase if search misses context at chunk boundaries |
| `vector_weight` | 0.7 | Increase for semantic/conceptual queries |
| `text_weight` | 0.3 | Increase for keyword-heavy/exact-match queries |
| `top_k` | 5 | Increase if agent needs more context, decrease for speed |

Memory

What is Memory?

Memory gives agents the ability to remember previous conversations and maintain state across interactions. Without memory, every message is independent.

Memory Types

| Type | Description | Use case |
|---|---|---|
| `buffer_window` | Keeps last N messages | Most chatbots (default) |
| `buffer` | Keeps all messages | Short conversations that need full history |
| `summary` | Summarizes old messages | Long conversations with token limits |
| `entity` | Tracks entities mentioned | CRM-style agents that track people/things |
| `semantic` | Retrieves by similarity | Agents that need to recall specific past topics |
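The default `buffer_window` type has simple semantics: only the most recent N messages survive. A minimal sketch (not AgentBreeder's implementation) using a bounded deque:

```python
from collections import deque

# Minimal sketch of buffer_window semantics: a bounded deque silently
# drops the oldest message once max_messages is exceeded.
class BufferWindowMemory:
    def __init__(self, max_messages: int = 20):
        self.messages = deque(maxlen=max_messages)

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})

    def history(self) -> list[dict]:
        return list(self.messages)

mem = BufferWindowMemory(max_messages=3)
for i in range(5):
    mem.add("user", f"message {i}")
print([m["content"] for m in mem.history()])  # ['message 2', 'message 3', 'message 4']
```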

Memory Backends

| Backend | Description | When to use |
|---|---|---|
| `in_memory` | Stored in process memory | Development, testing |
| `postgresql` | Stored in PostgreSQL | Production, persistence across restarts |
| `redis` | Stored in Redis | High-throughput, TTL-based expiration |

Step 1: Create a Memory Config YAML

Create memory.yaml:

spec_version: v1
name: support-session
version: 1.0.0
description: "Session memory for support conversations"
team: customer-success
owner: alice@company.com
tags: [memory, support]

backend: postgresql
memory_type: buffer_window

config:
  max_messages: 20                      # Keep last 20 messages
  ttl_seconds: 86400                    # Expire after 24 hours
  namespace_pattern: "{agent_id}:{session_id}"

scope: agent                            # agent, team, or global

Step 2: Create via API

curl -X POST http://localhost:8000/api/v1/memory/configs \
  -H "Content-Type: application/json" \
  -d '{
    "name": "support-session",
    "backend_type": "postgresql",
    "memory_type": "buffer_window",
    "max_messages": 20,
    "namespace_pattern": "{agent_id}:{session_id}",
    "scope": "agent",
    "description": "Session memory for support conversations"
  }'

Step 3: Store and Retrieve Messages

Store a message:

curl -X POST http://localhost:8000/api/v1/memory/configs/{config_id}/messages \
  -H "Content-Type: application/json" \
  -d '{
    "session_id": "session-abc123",
    "role": "user",
    "content": "What is your return policy?",
    "agent_id": "support-agent",
    "metadata": {"channel": "web-chat"}
  }'

Retrieve conversation history:

curl "http://localhost:8000/api/v1/memory/configs/{config_id}/messages?session_id=session-abc123"

Step 4: Reference in an Agent

# agent.yaml
name: support-agent
framework: langgraph
model:
  primary: claude-sonnet-4

memory:
  ref: memory/support-session          # ← registry reference

Monitor Memory Usage

# Get memory stats
curl http://localhost:8000/api/v1/memory/configs/{config_id}/stats
{
  "data": {
    "config_id": "mem-abc123",
    "backend_type": "postgresql",
    "memory_type": "buffer_window",
    "message_count": 1542,
    "session_count": 89,
    "storage_size_bytes": 524288,
    "linked_agent_count": 3
  }
}

Agents

Step 1: Create an Agent

=== "Interactive wizard"

agentbreeder init

The wizard asks 5 questions:

  1. Framework — LangGraph, OpenAI Agents, Claude SDK, CrewAI, Google ADK, or Custom
  2. Cloud target — Local, AWS, GCP, or Kubernetes
  3. Agent name — lowercase with hyphens (e.g., support-agent)
  4. Team — your team name
  5. Owner email — who is responsible

It generates:

support-agent/
├── agent.yaml          # Configuration
├── agent.py            # Working agent code
├── requirements.txt    # Dependencies
├── .env.example        # Environment template
└── README.md           # Getting started

=== "Manual YAML"

Create agent.yaml:

name: support-agent
version: 1.0.0
description: "Tier-1 customer support agent"
team: customer-success
owner: alice@company.com
tags: [support, production]

framework: langgraph

model:
  primary: claude-sonnet-4
  fallback: gpt-4o
  temperature: 0.7
  max_tokens: 4096

prompts:
  system: prompts/support-system-v3

tools:
  - ref: tools/order-lookup
  - ref: tools/zendesk-mcp

knowledge_bases:
  - ref: kb/product-docs

mcp_servers:
  - ref: mcp/slack-server
    transport: sse

guardrails:
  - pii_detection
  - hallucination_check
  - content_filter

deploy:
  cloud: gcp
  runtime: cloud-run
  region: us-central1
  scaling:
    min: 1
    max: 10
    target_cpu: 70
  resources:
    cpu: "1"
    memory: "2Gi"
  secrets:
    - OPENAI_API_KEY
    - ZENDESK_API_KEY

access:
  visibility: team
  allowed_callers:
    - team:customer-success
    - team:engineering

=== "API (from YAML)"

curl -X POST http://localhost:8000/api/v1/agents/from-yaml \
  -H "Content-Type: application/json" \
  -d '{
    "yaml_content": "name: support-agent\nversion: 1.0.0\nframework: langgraph\n..."
  }'

=== "API (structured)"

curl -X POST http://localhost:8000/api/v1/agents \
  -H "Content-Type: application/json" \
  -d '{
    "name": "support-agent",
    "version": "1.0.0",
    "description": "Tier-1 customer support agent",
    "team": "customer-success",
    "owner": "alice@company.com",
    "framework": "langgraph",
    "model_primary": "claude-sonnet-4",
    "model_fallback": "gpt-4o",
    "tags": ["support", "production"]
  }'

Step 2: Validate

agentbreeder validate agent.yaml
✅  YAML syntax valid
✅  JSON Schema valid (agent.schema.json)
✅  Framework "langgraph" is supported
✅  Team "customer-success" exists in registry
✅  All tool references resolve
✅  All prompt references resolve
✅  All knowledge base references resolve

You can also validate via API:

curl -X POST http://localhost:8000/api/v1/agents/validate \
  -H "Content-Type: application/json" \
  -d '{"yaml_content": "name: support-agent\n..."}'
{
  "data": {
    "valid": true,
    "errors": [],
    "warnings": ["Consider adding guardrails for production use"]
  }
}

Step 3: Deploy

# Deploy locally
agentbreeder deploy agent.yaml --target local

# Deploy to GCP Cloud Run
agentbreeder deploy agent.yaml --target cloud-run --region us-central1

# Deploy to AWS ECS
agentbreeder deploy agent.yaml --target aws --region us-east-1

The 8-step atomic pipeline runs:

✅  YAML parsed & validated
✅  RBAC check passed
✅  Dependencies resolved (tools, prompts, KBs from registry)
✅  Container built (langgraph runtime)
✅  Deployed to Cloud Run
✅  Health check passed
✅  Registered in org registry
✅  Endpoint returned: https://support-agent-xyz.run.app

If any step fails, the entire deploy rolls back.

Step 4: Verify and Interact

# Check status
agentbreeder status

# Tail logs
agentbreeder logs support-agent --follow

# Chat with the agent
agentbreeder chat support-agent

Edit an Agent

# Update via API
curl -X PUT http://localhost:8000/api/v1/agents/{agent_id} \
  -H "Content-Type: application/json" \
  -d '{
    "version": "1.1.0",
    "description": "Updated support agent with billing tools",
    "tags": ["support", "billing", "production"]
  }'

Clone an Agent

curl -X POST http://localhost:8000/api/v1/agents/{agent_id}/clone \
  -H "Content-Type: application/json" \
  -d '{
    "name": "support-agent-v2",
    "version": "2.0.0"
  }'

Teardown

agentbreeder teardown support-agent

Registry Search

Search across all resource types from one endpoint:

# Search everything
curl "http://localhost:8000/api/v1/registry/search?q=support"
{
  "data": [
    {"entity_type": "agent", "id": "...", "name": "support-agent", "description": "Tier-1 support"},
    {"entity_type": "prompt", "id": "...", "name": "support-system-v3", "description": "System prompt"},
    {"entity_type": "tool", "id": "...", "name": "zendesk-lookup", "description": "Zendesk tickets"}
  ]
}

Search specific resource types:

# Search only prompts
curl "http://localhost:8000/api/v1/registry/prompts?team=customer-success"

# Search only tools
curl "http://localhost:8000/api/v1/registry/tools?tool_type=mcp_server"

# Search only models
curl "http://localhost:8000/api/v1/registry/models?provider=anthropic"

The Git Review Workflow

For teams that want change control over registry resources:

Create resource  →  Submit (opens PR)  →  Review  →  Approve  →  Publish (merges PR)

Submit a Change

agentbreeder submit <type> <name> --message "description"

Where <type> is one of: agent, prompt, tool, model, knowledge-base

Review Pending Changes

# List all pending reviews
agentbreeder review list --status submitted

# Filter by type
agentbreeder review list --type prompt

# View details
agentbreeder review show {pr_id}

# Add a comment
agentbreeder review comment {pr_id} --message "Looks good, but add error handling"

# Approve
agentbreeder review approve {pr_id}

# Reject with reason
agentbreeder review reject {pr_id} --message "Missing test coverage"

Publish an Approved Change

# Merge the PR and register in the catalog
agentbreeder publish prompt support-system-v3 --version 1.0.0

Output:

✅ Merged PR #42
✅ Tagged: prompt/support-system-v3@1.0.0
✅ Registered in registry
   Usage: prompts/support-system-v3

LLM Models in the Registry

Register a Model

curl -X POST http://localhost:8000/api/v1/registry/models \
  -H "Content-Type: application/json" \
  -d '{
    "name": "claude-sonnet-4",
    "provider": "anthropic",
    "description": "Fast, intelligent model for most tasks",
    "context_window": 200000,
    "max_output_tokens": 8192,
    "input_price_per_million": 3.0,
    "output_price_per_million": 15.0,
    "capabilities": ["function_calling", "vision", "streaming"]
  }'

Compare Models

curl "http://localhost:8000/api/v1/registry/models/compare?ids=uuid-1,uuid-2"

Check Which Agents Use a Model

curl http://localhost:8000/api/v1/registry/models/{model_id}/usage
{
  "data": [
    {"agent_id": "...", "agent_name": "support-agent", "agent_status": "running", "usage_type": "primary"},
    {"agent_id": "...", "agent_name": "sales-agent", "agent_status": "running", "usage_type": "fallback"}
  ]
}

Quick Reference: All Resource YAML Schemas

| Resource | Required fields | YAML file |
|---|---|---|
| Prompt | `name`, `version` | `prompt.yaml` |
| Tool | `name`, `version`, `description`, `type` | `tool.yaml` |
| Knowledge Base | `name`, `version` | `rag.yaml` |
| Memory | `name`, `version` | `memory.yaml` |
| Agent | `name`, `version`, `team`, `owner`, `framework`, `model.primary`, `deploy.cloud` | `agent.yaml` |

All resources share these common fields:

spec_version: v1                    # Always v1
name: my-resource                   # Lowercase, hyphens, 2-63 chars
version: 1.0.0                      # Semantic versioning
description: "What this does"       # Max 500 chars
team: my-team                       # Team that owns this
owner: me@company.com               # Responsible person
tags: [tag1, tag2]                  # For discovery
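A quick local check of the naming rule stated above ("lowercase, hyphens, 2-63 chars") can save a round-trip to `agentbreeder validate`. The exact server-side pattern may differ; this regex is an assumption based on the stated rule:

```python
import re

# Assumed pattern for resource names: starts with a lowercase letter,
# then lowercase letters, digits, or hyphens; 2-63 chars; no trailing hyphen.
NAME_RE = re.compile(r"^[a-z][a-z0-9-]{1,62}$")

def is_valid_name(name: str) -> bool:
    return bool(NAME_RE.fullmatch(name)) and not name.endswith("-")

print(is_valid_name("support-agent"))   # True
print(is_valid_name("Support Agent"))   # False
```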

Next Steps

| What | Where |
|---|---|
| All CLI commands | CLI Reference |
| Every agent.yaml field | agent.yaml Reference |
| Multi-agent orchestration | How-To Guide |
| Common workflows | How-To Guide |
| API versioning | API Stability |
