How-To Guide
Practical recipes for common AgentBreeder workflows. Each section is self-contained — jump to what you need.
Install AgentBreeder
Option 1: PyPI (recommended)
# Full CLI + API server + engine
pip install agentbreeder
# Verify
agentbreeder --help
Option 2: Homebrew (macOS / Linux)
brew tap rajitsaha/agentbreeder
brew install agentbreeder
# Verify
agentbreeder --help
Option 3: Docker (for CI/CD pipelines)
# CLI image — use in CI/CD, no Python needed
docker pull rajits/agentbreeder-cli
docker run rajits/agentbreeder-cli --help
# API server — run the platform
docker pull rajits/agentbreeder-api
docker run -p 8000:8000 rajits/agentbreeder-api
# Dashboard — visual agent builder
docker pull rajits/agentbreeder-dashboard
docker run -p 3001:3001 rajits/agentbreeder-dashboard
Option 4: SDK only (for programmatic use)
Python:
pip install agentbreeder-sdk
from agenthub import Agent
agent = Agent("my-agent", version="1.0.0", team="eng")
print(agent.to_yaml())
TypeScript / JavaScript:
npm install @agentbreeder/sdk
import { Agent } from "@agentbreeder/sdk";
const agent = new Agent("my-agent", { version: "1.0.0", team: "eng" });
console.log(agent.toYaml());
Option 5: From source (for contributors)
git clone https://github.com/rajitsaha/agentbreeder.git
cd agentbreeder
python -m venv venv && source venv/bin/activate
pip install -e ".[dev]"
Build Your First Agent
The fastest way to scaffold a new agent project is the AI Agent Architect:
# Run this in Claude Code
/agent-build
It asks you 6 questions (or recommends the best stack for your use case) and generates a complete, production-ready project. See the full walkthrough →
Prefer to scaffold manually? The steps below walk through each file.
Step 1: Scaffold
agentbreeder init
The wizard asks 5 questions and generates a ready-to-run project:
| Question | Example | Notes |
|---|---|---|
| Framework | LangGraph | Choose from 6 frameworks |
| Cloud target | Local | Where to deploy |
| Agent name | support-agent | Lowercase, hyphens allowed |
| Team | engineering | Your team for RBAC and cost tracking |
| Owner email | alice@company.com | Who is responsible |
Step 2: Review the generated files
cd support-agent
cat agent.yaml
name: support-agent
version: 0.1.0
description: "support-agent — powered by langgraph"
team: engineering
owner: alice@company.com
tags:
- langgraph
- generated
framework: langgraph
model:
  primary: gpt-4o
deploy:
  cloud: local
Step 3: Customize
Edit agent.yaml to match your needs:
name: support-agent
version: 1.0.0
description: "Handles tier-1 customer support tickets"
team: customer-success
owner: alice@company.com
framework: langgraph
model:
  primary: claude-sonnet-4   # Change the model
  fallback: gpt-4o           # Add a fallback
tools:                       # Add tools from the registry
  - ref: tools/zendesk-mcp
  - ref: tools/order-lookup
prompts:
  system: "You are a helpful customer support agent for Acme Corp."
guardrails:
  - pii_detection    # Strip PII from outputs
  - content_filter   # Block harmful content
deploy:
  cloud: local
  scaling:
    min: 1
    max: 5
Step 4: Validate
agentbreeder validate agent.yaml
Step 5: Deploy
agentbreeder deploy agent.yaml --target local
Step 6: Test
# Interactive chat
agentbreeder chat support-agent
# Check status
agentbreeder status
# View logs
agentbreeder logs support-agent --follow
Use the Agent Architect (/agent-build)
/agent-build is a Claude Code skill that acts as an AI Agent Architect. Run it inside Claude Code at the root of any directory where you want to scaffold a new agent project.
It supports two paths:
- Fast Path — you know your stack. Six quick questions, then scaffold.
- Advisory Path — you describe your use case. It recommends the best framework, model, RAG, memory, MCP/A2A, deployment, and eval setup — with reasoning — before scaffolding begins.
Fast Path
$ /agent-build
Do you already know your stack, or would you like me to recommend?
(a) I know my stack — I'll ask 6 quick questions and scaffold your project
(b) Recommend for me — ...
> a
What should we call this agent?
> support-agent
What will this agent do?
> Handle tier-1 customer support tickets
Which framework?
1. LangGraph 2. CrewAI 3. Claude SDK 4. OpenAI Agents 5. Google ADK 6. Custom
> 1
Where will it run?
1. Local 2. AWS 3. GCP 4. Kubernetes (planned)
> 2
What tools should this agent have?
> zendesk lookup, knowledge base search
Team name and owner email? [engineering / you@company.com]
> (enter)
┌─────────────────────────────────────┐
│ Framework LangGraph │
│ Cloud AWS (ECS Fargate) │
│ Model gpt-4o │
│ Tools zendesk, kb-search │
│ Team engineering │
└─────────────────────────────────────┘
Look good? I'll generate your project. > yes
✓ 10 files generated in support-agent/
Advisory Path
$ /agent-build
> b
What problem does this agent solve, and for whom?
> Reduce tier-1 support tickets for our SaaS by deflecting common questions
What does the agent need to do, step by step?
> User sends ticket → search knowledge base → look up order status →
respond if found, escalate to human if not
Does your agent need: (a) loops/retries (b) checkpoints (c) human-in-the-loop
(d) parallel branches (e) none
> a, c
Primary cloud provider? (a) AWS (b) GCP (c) Azure (d) Local
Language preference? (a) Python (b) TypeScript (c) No preference
> a a
What data does this agent work with?
(a) Unstructured docs (b) Structured DB (c) Knowledge graph
(d) Live APIs (e) None
> a, d
Traffic pattern?
(a) Real-time interactive (b) Async batch
(c) Event-driven (d) Internal/low-volume
> a
── Recommendations ───────────────────────────────
Framework LangGraph — Full Code
Model claude-sonnet-4-6
RAG Vector (pgvector)
Memory Short-term (Redis)
MCP MCP servers
Deploy ECS Fargate
Evals deflection-rate, CSAT, escalation-rate
Override anything, or proceed? > proceed
✓ 19 files generated in support-agent/
What gets generated
| File / Directory | Purpose | Generated by |
|---|---|---|
| agent.yaml | AgentBreeder config — framework, model, deploy, tools, guardrails | Both paths |
| agent.py | Framework entrypoint | Both paths |
| tools/ | Tool stub files, one per tool named in the interview | Both paths |
| requirements.txt | Framework + provider dependencies | Both paths |
| .env.example | Required API keys and env vars | Both paths |
| Dockerfile | Multi-stage container image | Both paths |
| deploy/ | docker-compose.yml or cloud deploy config | Both paths |
| criteria.md | Eval criteria | Both paths |
| README.md | Project overview + quick-start | Both paths |
| memory/ | Redis / PostgreSQL setup | Advisory (if recommended) |
| rag/ | Vector or Graph RAG index + ingestion scripts | Advisory (if recommended) |
| mcp/servers.yaml | MCP server references | Advisory (if recommended) |
| tests/evals/ | Eval harness + use-case criteria | Advisory |
| ARCHITECT_NOTES.md | Reasoning behind every recommendation | Advisory |
| CLAUDE.md | Agent-specific Claude Code context | Advisory |
| AGENTS.md | AI skill roster for iterating on this agent | Advisory |
| .cursorrules | Framework-specific Cursor IDE rules | Advisory |
| .antigravity.md | Hard constraints for this agent | Advisory |
Next steps after scaffolding
cd support-agent/
# Validate the generated agent.yaml
agentbreeder validate
# Deploy locally first
agentbreeder deploy --target local
# Chat with your agent
agentbreeder chat
# When ready, deploy to cloud
agentbreeder deploy
Deploy to Different Targets
Local (Docker Compose)
agentbreeder deploy agent.yaml --target local
No cloud credentials needed. Starts a Docker Compose stack on your machine.
GCP Cloud Run
# Prerequisites: gcloud CLI authenticated, project set
gcloud auth login
gcloud config set project my-project
# Deploy
agentbreeder deploy agent.yaml --target cloud-run --region us-central1
Your agent.yaml should specify GCP:
deploy:
  cloud: gcp
  region: us-central1
  scaling:
    min: 0    # Scale to zero when idle
    max: 10
  secrets:
    - OPENAI_API_KEY   # Must exist in GCP Secret Manager
AWS ECS Fargate (planned)
# Coming soon
agentbreeder deploy agent.yaml --target aws --region us-east-1
Use Different Frameworks
AgentBreeder is framework-agnostic. Your agent.yaml specifies which framework to use, and the engine builds the right container.
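Each framework section in this guide names the object the engine discovers in your entrypoint file. Collected as a sketch — the mapping below is assembled from the examples in this guide (the LangGraph and OpenAI Agents entries are inferred from those examples), and `find_entrypoint` is purely illustrative, not the engine's actual code:

```python
# Export name(s) the runtime looks for in agent.py / crew.py, per framework.
# Assembled from this guide's examples; treat as illustrative, not authoritative.
ENTRYPOINT_EXPORTS = {
    "langgraph": ["app"],                          # app = graph.compile()
    "openai_agents": ["agent"],                    # agent = Agent(...)
    "claude_sdk": ["agent"],                       # agent = client
    "google_adk": ["root_agent", "agent", "app"],  # any of the three
    "crewai": ["crew"],                            # crew = Crew(...)
    "custom": ["run"],                             # def run(user_message) -> str
}

def find_entrypoint(namespace: dict, framework: str):
    """Return the first matching export, mimicking engine discovery."""
    for name in ENTRYPOINT_EXPORTS[framework]:
        if name in namespace:
            return namespace[name]
    raise LookupError(f"no entrypoint export found for '{framework}'")
```

If none of the expected names is exported, deployment has nothing to wrap, so name your top-level object accordingly.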
LangGraph
framework: langgraph
model:
  primary: gpt-4o
# agent.py
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list, add_messages]

def chatbot(state: State) -> dict:
    # Call your model here and return updated messages
    return {"messages": []}

graph = StateGraph(State)
graph.add_node("chatbot", chatbot)
graph.add_edge(START, "chatbot")
graph.add_edge("chatbot", END)
app = graph.compile()
OpenAI Agents
framework: openai_agents
model:
  primary: gpt-4o
# agent.py
from agents import Agent, Runner

agent = Agent(
    name="support-agent",
    instructions="You are a helpful assistant.",
)
result = Runner.run_sync(agent, "Hello!")
Claude SDK
framework: claude_sdk
model:
  primary: claude-sonnet-4-6
# agent.py
import anthropic

client = anthropic.AsyncAnthropic()

# Export as `agent` — AgentBreeder discovers it automatically
agent = client
The runtime wraps your client and handles routing, tool injection, and streaming automatically.
Enable adaptive thinking
Adaptive thinking lets Claude reason through complex problems before answering. Enable it with the optional claude_sdk: block:
framework: claude_sdk
model:
  primary: claude-sonnet-4-6
claude_sdk:
  thinking:
    type: adaptive   # Activates thinking when beneficial
    effort: high     # "low" | "medium" | "high"
Enable prompt caching
Prompt caching reduces latency and cost when the system prompt is long and reused across many requests:
framework: claude_sdk
model:
  primary: claude-sonnet-4-6
claude_sdk:
  prompt_caching: true   # Cache system prompts ≥8192 chars (Sonnet) or ≥16384 chars (other)
Set AGENT_SYSTEM_PROMPT at deploy time to provide the system prompt that will be cached.
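Whether a given prompt clears those thresholds is easy to check up front. A hypothetical helper — the character counts come from the comment above; the function itself is illustrative and not part of the SDK:

```python
def cache_eligible(system_prompt: str, model: str) -> bool:
    # Per the config above: caching applies to system prompts of at least
    # 8192 chars on Sonnet models, 16384 chars on other models.
    threshold = 8192 if "sonnet" in model else 16384
    return len(system_prompt) >= threshold
```

For example, a 9,000-character system prompt qualifies on claude-sonnet-4-6 but not on other models.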
Google ADK
framework: google_adk
model:
  primary: gemini-2.0-flash
# agent.py
from google.adk.agents import LlmAgent

root_agent = LlmAgent(
    name="my-agent",
    model="gemini-2.0-flash",
    instruction="You are a helpful assistant.",
)
Export the agent as root_agent, agent, or app.
Use Vertex AI session and memory backends
For production deployments, persist session state and memory in Google Cloud:
framework: google_adk
model:
  primary: gemini-2.0-flash
google_adk:
  session_backend: vertex_ai       # Persist sessions in Vertex AI
  memory_service: vertex_ai_bank   # Persist memory in Vertex AI Memory Bank
Requires the GOOGLE_CLOUD_PROJECT env var. Set it in deploy.env_vars:
deploy:
  cloud: gcp
  env_vars:
    GOOGLE_CLOUD_PROJECT: my-project
    GOOGLE_CLOUD_LOCATION: us-central1
Use database session storage
For non-GCP deployments, use PostgreSQL for session persistence:
google_adk:
  session_backend: database
  session_db_url: ""   # Falls back to DATABASE_URL env var
CrewAI
framework: crewai
model:
  primary: claude-sonnet-4-6
# crew.py
from crewai import Agent, Crew, Task

researcher = Agent(role="Researcher", goal="Research the topic", backstory="...")
writer = Agent(role="Writer", goal="Write the report", backstory="...")
research_task = Task(description="Research {topic}", agent=researcher)
write_task = Task(description="Write a report on {topic}", agent=writer)
crew = Crew(agents=[researcher, writer], tasks=[research_task, write_task])
Export the crew as crew. The runtime automatically propagates AGENT_MODEL and AGENT_TEMPERATURE (from the top-level model: block) into each agent's LLM — you don't need to set them manually.
Use hierarchical process with a manager
from crewai import Agent, Crew, Task, Process

manager = Agent(role="Manager", goal="Coordinate the team", backstory="...", allow_delegation=True)
analyst = Agent(role="Analyst", goal="Analyze data", backstory="...")
task = Task(description="Analyze {dataset}", agent=analyst)
crew = Crew(
    agents=[analyst],
    tasks=[task],
    manager_agent=manager,
    process=Process.hierarchical,
)
Custom (bring your own)
framework: custom
model:
  primary: any-model
# agent.py — whatever you want
def run(user_message: str) -> str:
    # Your custom agent logic here
    return "response"
Stream Agent Responses
All deployed agents expose a /stream endpoint that returns Server-Sent Events. Use it when you want to display partial responses in real time.
Stream with curl
curl -N -X POST https://<agent-endpoint>/stream \
  -H "Content-Type: application/json" \
  -d '{"input": "Write a detailed report on renewable energy"}'
Stream with JavaScript (browser or Node.js)
const response = await fetch("https://<agent-endpoint>/stream", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ input: "Write a detailed report on renewable energy" }),
});
const reader = response.body.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  const chunk = decoder.decode(value);
  for (const line of chunk.split("\n")) {
    if (line.startsWith("data: ")) {
      const data = line.slice(6);
      if (data === "[DONE]") break;
      const event = JSON.parse(data);
      if (event.text) process.stdout.write(event.text); // Claude SDK
      if (event.output) console.log("Final:", event.output); // CrewAI result
    }
  }
}
Stream with Python
import json

import httpx

with httpx.stream("POST", "https://<agent-endpoint>/stream",
                  json={"input": "Write a detailed report"}) as response:
    for line in response.iter_lines():
        if line.startswith("data: "):
            data = line[6:]
            if data == "[DONE]":
                break
            event = json.loads(data)
            if "text" in event:
                print(event["text"], end="", flush=True)
SSE event format by framework
| Framework | Event type | Payload |
|---|---|---|
| Claude SDK | data: | {"text": "..."} — one event per text chunk |
| CrewAI (step) | event: step | {"description": "...", "result": "..."} |
| CrewAI (final) | event: result | {"output": "..."} |
| Google ADK | data: | {"text": "...", "is_final": false} |
| All | data: | [DONE] — end of stream |
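The table above can be collapsed into a single dispatch helper when you consume the stream yourself. A minimal sketch — the helper name and return convention are illustrative, not part of any SDK:

```python
import json

def handle_sse_line(line: str):
    """Parse one SSE line; return displayable text, or None.

    None is returned for "event:" lines, blank keep-alives, and the
    [DONE] end-of-stream sentinel.
    """
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):]
    if payload == "[DONE]":
        return None
    event = json.loads(payload)
    if "text" in event:       # Claude SDK / Google ADK chunk
        return event["text"]
    if "output" in event:     # CrewAI final result
        return event["output"]
    if "result" in event:     # CrewAI step result
        return event["result"]
    return None
```

Feed it each line from the stream and print whatever non-None text comes back.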
Use Local Models with Ollama
No cloud API keys required. Run everything locally.
Step 1: Install and start Ollama
# macOS
brew install ollama
# Start the server
ollama serve &
# Pull a model
ollama pull llama3
Step 2: Configure your agent
# agent.yaml
model:
  primary: ollama/llama3
  gateway: ollama
Step 3: Deploy
agentbreeder deploy agent.yaml --target local
The engine routes all LLM calls through your local Ollama instance. No data leaves your machine.
Configure LLM Providers
Add a provider
# Add OpenAI
agentbreeder provider add openai --api-key sk-...
# Add Anthropic
agentbreeder provider add anthropic --api-key sk-ant-...
# Add Google
agentbreeder provider add google --credentials-file sa.json
# Add Ollama (local)
agentbreeder provider add ollama --base-url http://localhost:11434
List providers
agentbreeder provider list
Use fallback chains
If the primary model is unavailable, AgentBreeder automatically falls back:
model:
  primary: claude-sonnet-4   # Try this first
  fallback: gpt-4o           # Fall back to this
  gateway: litellm           # Route through LiteLLM for 100+ models
Use LiteLLM gateway
Route all model calls through LiteLLM for unified access to 100+ models:
model:
  primary: claude-sonnet-4
  gateway: litellm
# Start LiteLLM proxy
litellm --model claude-sonnet-4
# Or set the base URL
export LITELLM_BASE_URL=http://localhost:4000
Manage Secrets
AgentBreeder supports four secrets backends. Your agents reference secrets by name — the backend handles storage.
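Conceptually, the lookup works the same regardless of backend: the agent asks for a name, the configured backend supplies the value. A sketch of the default env backend only — the function is illustrative, and the aws/gcp/vault branches would call their provider SDKs where the NotImplementedError is raised:

```python
import os

def resolve_secret(name: str, backend: str = "env") -> str:
    """Resolve a secret by name, as an agent reference would."""
    if backend == "env":
        value = os.environ.get(name)
        if value is None:
            raise KeyError(f"secret '{name}' is not set in the environment")
        return value
    # "aws", "gcp", and "vault" would be handled by their respective SDKs
    raise NotImplementedError(f"backend '{backend}' is not sketched here")
```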
Environment variables (default)
# .env file
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
# agent.yaml
deploy:
  secrets:
    - OPENAI_API_KEY
    - ANTHROPIC_API_KEY
AWS Secrets Manager
# Store a secret
agentbreeder secret set OPENAI_API_KEY --backend aws --value sk-...
# List secrets
agentbreeder secret list --backend aws
GCP Secret Manager
agentbreeder secret set OPENAI_API_KEY --backend gcp --value sk-...
HashiCorp Vault
agentbreeder secret set OPENAI_API_KEY --backend vault --value sk-...
Orchestrate Multiple Agents
Build multi-agent pipelines with 6 strategies.
Strategy overview
| Strategy | Use case | How it works |
|---|---|---|
| router | Triage + routing | One agent classifies, routes to specialists |
| sequential | Pipeline | Agents execute in order, passing state |
| parallel | Fan-out | Multiple agents run simultaneously |
| hierarchical | Management | Manager delegates to worker agents |
| supervisor | Quality control | Supervisor reviews and corrects |
| fan_out_fan_in | Map-reduce | Fan out to workers, aggregate results |
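For intuition, the simplest of these strategies — sequential — is just a fold of state through the agents in order. A toy sketch, with lambdas standing in for deployed agents:

```python
def run_sequential(agents, user_input):
    # Sequential strategy: each agent receives the previous agent's output.
    state = user_input
    for agent in agents:
        state = agent(state)
    return state

# researcher -> writer -> editor, mirroring the sequential pipeline example
result = run_sequential(
    [lambda s: s + " [researched]",
     lambda s: s + " [written]",
     lambda s: s + " [edited]"],
    "topic",
)
```

The other strategies differ only in how the next agent (or agents) is chosen and how their outputs are merged.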
Example: Router pipeline
# orchestration.yaml
name: support-pipeline
version: "1.0.0"
team: customer-success
strategy: router
agents:
  triage:
    ref: agents/triage-agent
    routes:
      - condition: billing
        target: billing
      - condition: technical
        target: technical
      - condition: default
        target: general
  billing:
    ref: agents/billing-agent
  technical:
    ref: agents/technical-agent
  general:
    ref: agents/general-agent
shared_state:
  type: session_context
  backend: redis
deploy:
  target: local
# Validate
agentbreeder orchestration validate orchestration.yaml
# Deploy
agentbreeder orchestration deploy orchestration.yaml
# Chat with the pipeline
agentbreeder orchestration chat support-pipeline
Example: Sequential pipeline
strategy: sequential
agents:
  researcher:
    ref: agents/researcher
    order: 1
  writer:
    ref: agents/writer
    order: 2
  editor:
    ref: agents/editor
    order: 3
Programmatic orchestration (Full Code SDK)
from agenthub import Orchestration

pipeline = (
    Orchestration("support-pipeline", strategy="router", team="eng")
    .add_agent("triage", ref="agents/triage-agent")
    .add_agent("billing", ref="agents/billing-agent")
    .add_agent("general", ref="agents/general-agent")
    .with_route("triage", condition="billing", target="billing")
    .with_route("triage", condition="default", target="general")
    .with_shared_state(state_type="session_context", backend="redis")
)
pipeline.deploy()
Use the Python SDK
The SDK is for programmatic agent definitions — the "Full Code" tier.
Install
pip install agentbreeder-sdk
Define an agent
from agenthub import Agent

agent = (
    Agent("support-agent", version="1.0.0", team="engineering")
    .with_model(primary="claude-sonnet-4", fallback="gpt-4o")
    .with_tools(["tools/zendesk-mcp", "tools/order-lookup"])
    .with_prompt(system="You are a helpful customer support agent.")
    .with_deploy(cloud="gcp", min_scale=1, max_scale=10)
)
Export to YAML
# Generate agent.yaml
yaml_str = agent.to_yaml()
print(yaml_str)
# Save to file
agent.save("agent.yaml")
Load from YAML
agent = Agent.from_yaml_file("agent.yaml")
print(agent.config.name) # support-agent
print(agent.config.team)   # engineering
Deploy programmatically
agent.deploy()
Migrate from Another Framework
Already have agents built with another framework? Wrap them in agent.yaml without rewriting your code.
From LangGraph
framework: langgraph
Your existing agent.py with StateGraph works as-is. See full migration guide.
From OpenAI Agents
framework: openai_agents
Your existing Agent + Runner code works as-is. See full migration guide.
From CrewAI
framework: crewai
See full migration guide.
From AutoGen
See full migration guide.
From custom code
framework: custom
See full migration guide.
Eject from YAML to Full Code
Start with YAML config, eject to SDK code when you need more control.
# Generate Python SDK code from your agent.yaml
agentbreeder eject agent.yaml --language python
# Generate TypeScript SDK code
agentbreeder eject agent.yaml --language typescript
This creates an agent.py (or agent.ts) that uses the SDK and can be customized freely. Your original agent.yaml is preserved.
Tier mobility: No Code (visual builder) → Low Code (YAML) → Full Code (SDK). Move freely between tiers — no lock-in at any level.
Use MCP Servers
AgentBreeder has native MCP (Model Context Protocol) support. MCP servers are injected as sidecar containers alongside your agent.
Reference MCP servers from the registry
# agent.yaml
tools:
  - ref: tools/zendesk-mcp   # MCP server from org registry
  - ref: tools/slack-mcp
Discover available MCP servers
# Scan for MCP servers on your network
agentbreeder scan
# List registered MCP servers
agentbreeder list tools
Register a custom MCP server
agentbreeder submit tools/my-custom-mcp --type mcp
Manage Teams and RBAC
Every deploy is governed by RBAC. Agents belong to teams, and teams control who can deploy what.
Configure access in agent.yaml
# agent.yaml
team: customer-success       # Required — who owns this agent
owner: alice@company.com     # Required — individual responsible
access:
  visibility: team           # public | team | private
  allowed_callers:           # Who can invoke this agent
    - team:engineering
    - team:customer-success
  require_approval: false    # If true, deploys need admin approval
What happens at deploy time
- AgentBreeder checks if the deploying user belongs to the agent's team
- If require_approval: true, the deploy is queued for admin review
- Cost is attributed to the team
- An audit entry is written with who, what, when, where
There is no way to bypass this. Governance is structural.
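The flow above can be sketched as a pure function. The types and the source of team membership are illustrative — actual enforcement happens inside the engine:

```python
from dataclasses import dataclass

@dataclass
class DeployDecision:
    allowed: bool
    queued_for_approval: bool
    reason: str

def check_deploy(user_teams: set, agent_team: str,
                 require_approval: bool) -> DeployDecision:
    # Step 1: team membership gate
    if agent_team not in user_teams:
        return DeployDecision(False, False,
                              f"user is not a member of team '{agent_team}'")
    # Step 2: optional admin-approval queue
    if require_approval:
        return DeployDecision(True, True, "queued for admin review")
    # Step 3: deploy proceeds; cost attribution and auditing happen alongside
    return DeployDecision(True, False, "deploy proceeds")
```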
Track Costs
Every deploy tracks cost attribution automatically.
View costs
# Costs by team
agentbreeder list costs --group-by team
# Costs by agent
agentbreeder list costs --group-by agent
# Costs by model
agentbreeder list costs --group-by model
How cost tracking works
Every LLM call made by a deployed agent is logged with:
- Token count (input + output)
- Model used (including fallback)
- Team and agent attribution
- Timestamp
No configuration needed — this happens automatically for every deployed agent.
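Grouping works the way you would expect from the --group-by flags above. A sketch with hypothetical call records — the record shape and cost figures are assumptions for illustration:

```python
from collections import defaultdict

# Hypothetical per-call records carrying the fields listed above
calls = [
    {"team": "eng", "agent": "support-agent", "model": "gpt-4o", "cost_usd": 0.0105},
    {"team": "eng", "agent": "support-agent", "model": "claude-sonnet-4", "cost_usd": 0.0094},
    {"team": "sales", "agent": "quote-agent", "model": "gpt-4o", "cost_usd": 0.0040},
]

def costs_by(records, key):
    # Mirrors `agentbreeder list costs --group-by <key>`
    totals = defaultdict(float)
    for record in records:
        totals[record[key]] += record["cost_usd"]
    return dict(totals)
```

costs_by(calls, "team") sums per team; swap the key for "agent" or "model" to get the other two views.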
Use the Git Workflow
AgentBreeder has a built-in review workflow for changes to agents, tools, and prompts.
Submit a change for review
# Create a PR for your agent changes
agentbreeder submit agent.yaml --title "Update support agent prompt"
Review submissions
# List pending reviews
agentbreeder review list
# Show a specific PR
agentbreeder review show 42
# Approve
agentbreeder review approve 42
# Reject with feedback
agentbreeder review reject 42 --comment "Needs guardrails for PII"
Publish approved changes
agentbreeder publish 42
This merges the PR, bumps the version, and updates the registry.
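The version bump follows the three-part version strings used throughout this guide. A sketch of a patch-level bump — whether publish bumps the patch, minor, or major component is an assumption here:

```python
def bump_patch(version: str) -> str:
    # "1.0.0" -> "1.0.1": increment the last (patch) component
    major, minor, patch = version.split(".")
    return f"{major}.{minor}.{int(patch) + 1}"
```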
Run Evaluations
Test your agents against golden datasets before deploying to production.
# Run evals
agentbreeder eval run --agent support-agent --dataset golden-test-cases.json
# View results
agentbreeder eval results --agent support-agent
Use Agent Templates
Start from pre-built templates instead of blank scaffolds.
# List available templates
agentbreeder template list
# Create from a template
agentbreeder template use customer-support --name my-support-agent
# Publish your agent as a template
agentbreeder template create --from agent.yaml --name "My Template"
Teardown a Deployed Agent
# Remove with confirmation prompt
agentbreeder teardown support-agent
# Force remove (no confirmation)
agentbreeder teardown support-agent --force
This stops the agent, removes the container, and archives the registry entry. The audit log is preserved.
Use AgentBreeder in CI/CD
GitHub Actions
# .github/workflows/deploy-agent.yml
name: Deploy Agent
on:
  push:
    paths: ['agents/support-agent/**']
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Validate
        run: |
          docker run --rm -v $PWD:/work -w /work \
            rajits/agentbreeder-cli validate agents/support-agent/agent.yaml
      - name: Deploy
        run: |
          docker run --rm -v $PWD:/work -w /work \
            -e GOOGLE_APPLICATION_CREDENTIALS=/work/sa.json \
            rajits/agentbreeder-cli deploy agents/support-agent/agent.yaml --target cloud-run
GitLab CI
# .gitlab-ci.yml
deploy-agent:
  image: rajits/agentbreeder-cli:latest
  script:
    - agentbreeder validate agents/support-agent/agent.yaml
    - agentbreeder deploy agents/support-agent/agent.yaml --target cloud-run
Run the Platform with Docker Compose
Run the full AgentBreeder stack (API + Dashboard + Database) with Docker:
# Using pre-built images
docker compose -f deploy/docker-compose.yml up -d
Or create a custom compose file:
# docker-compose.yml
services:
  api:
    image: rajits/agentbreeder-api:latest
    ports:
      - "8000:8000"
    environment:
      DATABASE_URL: postgresql+asyncpg://agentbreeder:agentbreeder@db:5432/agentbreeder
      REDIS_URL: redis://redis:6379
      SECRET_KEY: change-me-in-production
    depends_on:
      - db
      - redis
  dashboard:
    image: rajits/agentbreeder-dashboard:latest
    ports:
      - "3001:3001"
    depends_on:
      - api
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: agentbreeder
      POSTGRES_PASSWORD: agentbreeder
      POSTGRES_DB: agentbreeder
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7
volumes:
  pgdata:
docker compose up -d
# Dashboard: http://localhost:3001
# API: http://localhost:8000
# API Docs: http://localhost:8000/docs
Troubleshooting
"agentbreeder: command not found"
# Check if it's installed
pip show agentbreeder
# If installed but not in PATH, use python -m
python -m cli.main --help
# Or reinstall
pip install agentbreeder
"Validation failed: unknown framework"
Supported frameworks: langgraph, openai_agents, claude_sdk, crewai, google_adk, custom
Check your agent.yaml:
framework: langgraph   # Must be one of the above
"RBAC check failed"
The deploying user must belong to the team specified in agent.yaml:
team: engineering   # Your user must be a member of this team
"Container build failed"
# Check Docker is running
docker info
# Try building manually
docker build -t my-agent .
# Check the generated Dockerfile
agentbreeder deploy agent.yaml --target local --dry-run
"Deploy rolled back"
The pipeline is atomic — if any of the 8 steps fails, everything rolls back. Check which step failed:
agentbreeder logs my-agent
agentbreeder status my-agent
Dashboard won't start
The dashboard container needs the API to be running:
# Start API first
docker run -d -p 8000:8000 rajits/agentbreeder-api
# Then dashboard
docker run -d -p 3001:3001 rajits/agentbreeder-dashboard