Migrate from Microsoft AutoGen to AgentBreeder

Time to migrate: ~30 minutes
Difficulty: Moderate
What changes: You add an agent.yaml file and restructure your entry point slightly. Your AutoGen agent logic stays the same.


Before You Start

  • You have an existing AutoGen agent or multi-agent system
  • Your agent code uses pyautogen (v0.2+) or autogen-agentchat
  • Python 3.11+ is installed
  • Docker is installed and running
  • You have installed AgentBreeder: pip install agentbreeder

The Big Picture

AutoGen is designed around conversable agents and group chat patterns. AgentBreeder does not replace AutoGen's conversation engine. It wraps your AutoGen code in a production container and adds governance, multi-cloud deploy, and org-wide discoverability.

The main migration effort is structuring your AutoGen code so AgentBreeder's server wrapper can call it. This usually means wrapping your GroupChat or ConversableAgent in a callable function.


Before & After

Before: Raw AutoGen

my-autogen-agents/
  agent.py            # GroupChat + agents
  config_list.json    # OAI config
  requirements.txt
  # No deploy infrastructure -- you run it locally with python agent.py

After: AutoGen + AgentBreeder

my-autogen-agents/
  agent.py            # MODIFIED (minor: add a callable entry point)
  requirements.txt    # UNCHANGED
  agent.yaml          # NEW

Step-by-Step Migration

Step 1: Understand the entry point contract

AgentBreeder's server wrapper expects to call your agent with an input message and get a response. For AutoGen, this means wrapping your conversation flow in a function.

Before (typical AutoGen pattern):

# agent.py -- runs as a script
import os
import autogen

config_list = [{"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"]}]

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# This blocks -- runs a conversation
user_proxy.initiate_chat(assistant, message="Write a Python function to sort a list")

After (AG-compatible):

# agent.py -- export an 'agent' callable
import os
import autogen

config_list = [{"model": "gpt-4o", "api_key": os.environ.get("OPENAI_API_KEY", "")}]

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    code_execution_config={"work_dir": "/tmp/coding", "use_docker": False},
)


class AutoGenAgent:
    """Wrapper that makes AutoGen compatible with AgentBreeder's server."""

    def __init__(self):
        self.assistant = assistant
        self.user_proxy = user_proxy

    async def invoke(self, message: str) -> str:
        """Run a conversation and return the final response."""
        # NOTE: initiate_chat is synchronous and blocks the event loop;
        # see Troubleshooting below for an asyncio.to_thread variant.
        chat_result = self.user_proxy.initiate_chat(
            self.assistant,
            message=message,
        )
        # Extract the last assistant message
        if chat_result and hasattr(chat_result, "chat_history"):
            for msg in reversed(chat_result.chat_history):
                if msg.get("role") == "assistant" or msg.get("name") == "assistant":
                    return msg.get("content", "")
        return "No response generated."


# Export for AgentBreeder
agent = AutoGenAgent()
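The chat-history extraction above is easy to get wrong, so it helps to check it in isolation. A minimal sketch with a stubbed chat result (the SimpleNamespace stand-in and the sample history here are hypothetical, not real AutoGen output):

```python
from types import SimpleNamespace


def extract_last_assistant_message(chat_result) -> str:
    """Same extraction logic as AutoGenAgent.invoke, isolated for testing."""
    if chat_result and hasattr(chat_result, "chat_history"):
        for msg in reversed(chat_result.chat_history):
            if msg.get("role") == "assistant" or msg.get("name") == "assistant":
                return msg.get("content", "")
    return "No response generated."


# Stub standing in for the object initiate_chat returns
stub = SimpleNamespace(chat_history=[
    {"name": "user_proxy", "role": "user", "content": "Write a sort function"},
    {"name": "assistant", "role": "assistant", "content": "def sort(xs): return sorted(xs)"},
    {"name": "user_proxy", "role": "user", "content": "TERMINATE"},
])

print(extract_last_assistant_message(stub))  # → def sort(xs): return sorted(xs)
```

Walking the history in reverse returns the most recent assistant turn and skips the trailing user-proxy message.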

Step 2: Create agent.yaml

name: autogen-coder
version: 1.0.0
description: "Code generation agent using AutoGen"
team: engineering
owner: you@company.com
tags: [autogen, coding, code-gen]

framework: custom

model:
  primary: gpt-4o
  fallback: gpt-4o-mini

deploy:
  cloud: local
  resources:
    cpu: "1"
    memory: "2Gi"
  secrets:
    - OPENAI_API_KEY

Note: Use framework: custom for AutoGen since AgentBreeder does not have a dedicated AutoGen runtime (yet). The custom framework works with any Python agent that exposes an invoke method or a callable.
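As a sketch of that contract (assuming, per the note above, that the custom runtime looks for an async invoke method or a plain callable), either of these shapes should satisfy it:

```python
import asyncio


# Shape 1: an object exposing an async invoke(message) -> str method
class EchoAgent:
    async def invoke(self, message: str) -> str:
        return f"echo: {message}"


agent = EchoAgent()


# Shape 2: a plain async callable
async def agent_fn(message: str) -> str:
    return f"echo: {message}"


print(asyncio.run(agent.invoke("ping")))  # → echo: ping
```

The AutoGen wrapper in Step 1 uses the first shape.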

Step 3: Create a simple server wrapper

Since we are using framework: custom, create a server.py that wraps your agent:

# server.py
from fastapi import FastAPI
from pydantic import BaseModel

from agent import agent

app = FastAPI()


class InvokeRequest(BaseModel):
    input: dict


class InvokeResponse(BaseModel):
    output: str


@app.get("/health")
async def health():
    return {"status": "healthy"}


@app.post("/invoke")
async def invoke(request: InvokeRequest):
    message = request.input.get("message", "")
    result = await agent.invoke(message)
    return InvokeResponse(output=result)

Step 4: Update requirements.txt

pyautogen>=0.2.0
fastapi>=0.110.0
uvicorn[standard]>=0.27.0
httpx>=0.27.0
pydantic>=2.0.0

Step 5: Validate and deploy

agentbreeder validate agent.yaml
agentbreeder deploy agent.yaml --target local

Step 6: Test

curl -X POST http://localhost:8080/invoke \
  -d '{"input": {"message": "Write a Python function to sort a list using quicksort"}}' \
  -H 'Content-Type: application/json'

Concept Mapping: AutoGen to AgentBreeder

AutoGen Concept        | AgentBreeder Equivalent                                   | Notes
AssistantAgent         | Individual agent in agent.yaml                            | Wrapped in a custom callable
UserProxyAgent         | Server wrapper handles the "user" role                    | AG server sends messages as the user
GroupChat              | orchestration.yaml with strategy: supervisor or parallel  | AG can orchestrate at platform level
GroupChatManager       | Supervisor agent in orchestration.yaml                    | AG supervisor handles delegation
ConversableAgent       | Base agent class in your code (unchanged)                 | AG wraps it
config_list            | model.primary + deploy.secrets                            | Declarative model config + secret refs
code_execution_config  | deploy.env_vars + custom Dockerfile                       | Sandboxed execution in container
human_input_mode       | access.require_approval                                   | Deploy-level human-in-the-loop
AutoGen Studio         | AgentBreeder Dashboard                                    | Visual builder for agents

Mapping GroupChat to AG Orchestration

AutoGen's GroupChat is its multi-agent coordination primitive. Here is how to map it to AgentBreeder orchestration:

AutoGen GroupChat (Before)

import os
import autogen

config_list = [{"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"]}]

researcher = autogen.AssistantAgent(
    name="researcher",
    system_message="You research topics thoroughly.",
    llm_config={"config_list": config_list},
)

critic = autogen.AssistantAgent(
    name="critic",
    system_message="You critically evaluate research quality.",
    llm_config={"config_list": config_list},
)

writer = autogen.AssistantAgent(
    name="writer",
    system_message="You write clear, engaging content.",
    llm_config={"config_list": config_list},
)

user_proxy = autogen.UserProxyAgent(
    name="admin",
    human_input_mode="NEVER",
)

groupchat = autogen.GroupChat(
    agents=[user_proxy, researcher, critic, writer],
    messages=[],
    max_round=12,
)

manager = autogen.GroupChatManager(
    groupchat=groupchat,
    llm_config={"config_list": config_list},
)

AgentBreeder Orchestration (After)

Option A: Keep GroupChat inside one agent (simple)

Deploy the entire GroupChat as a single AG agent with framework: custom. This is the fastest migration:

# agent.yaml
name: research-group
framework: custom
model:
  primary: gpt-4o
deploy:
  cloud: local
  secrets:
    - OPENAI_API_KEY

Option B: Split into AG orchestration (advanced)

Deploy each AutoGen agent independently and use AG to coordinate:

# orchestration.yaml
name: research-team
version: 1.0.0
team: research
owner: you@company.com
strategy: supervisor

supervisor_config:
  supervisor_agent: manager
  max_iterations: 4

agents:
  manager:
    ref: agents/research-manager
  researcher:
    ref: agents/researcher
    fallback: general-researcher
  critic:
    ref: agents/critic
  writer:
    ref: agents/writer

deploy:
  target: local
  resources:
    cpu: "2"
    memory: "4Gi"

GroupChat pattern to AG strategy mapping

AutoGen Pattern                  | AG Strategy                        | When to use
GroupChat with GroupChatManager  | strategy: supervisor               | Manager decides who speaks next
Sequential agent calls           | strategy: sequential               | Fixed chain: A then B then C
All agents answer, pick best     | strategy: parallel + custom merge  | Competitive evaluation
Nested chats                     | strategy: hierarchical             | Multi-level delegation
Two-agent chat                   | strategy: sequential (2 agents)    | Simple back-and-forth
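For instance, the two-agent chat pattern might map to a minimal sequential orchestration like this (agent names and refs are illustrative):

```yaml
# orchestration.yaml -- two-agent chat as a fixed chain
name: drafter-reviewer
version: 1.0.0
strategy: sequential

agents:
  drafter:
    ref: agents/drafter
  reviewer:
    ref: agents/reviewer
```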

Handling AutoGen-Specific Features

Code Execution

AutoGen's code_execution_config lets agents execute code. In the container, set up a writable directory:

deploy:
  env_vars:
    AUTOGEN_WORK_DIR: "/tmp/coding"

In your agent code:

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config={
        "work_dir": os.environ.get("AUTOGEN_WORK_DIR", "/tmp/coding"),
        "use_docker": False,  # already in a container
    },
)

OAI Config Lists

Replace config_list.json with environment-based configuration:

# Before
config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")

# After
config_list = [
    {
        "model": os.environ.get("PRIMARY_MODEL", "gpt-4o"),
        "api_key": os.environ.get("OPENAI_API_KEY"),
    }
]
And the matching agent.yaml entries:

deploy:
  env_vars:
    PRIMARY_MODEL: gpt-4o
  secrets:
    - OPENAI_API_KEY

Teachable Agents

AutoGen's TeachableAgent stores learnings in a local database. For persistence in AG:

knowledge_bases:
  - ref: kb/agent-learnings

deploy:
  resources:
    memory: "4Gi"  # teachable agents need more memory

What You Gain

Feature           | AutoGen Only             | AutoGen + AgentBreeder
Multi-agent chat  | GroupChat                | GroupChat + AG orchestration
Deploy            | Manual (python agent.py) | agentbreeder deploy agent.yaml
Multi-cloud       | Not available            | One-line change
Code execution    | Local Python             | Sandboxed in container
RBAC              | Not available            | Automatic
Cost tracking     | Not built-in             | Per-agent, per-model
Agent registry    | AutoGen Studio           | Org-wide registry
Health checks     | Not available            | Automatic
Model fallback    | Manual config_list       | Declarative + automatic
Guardrails        | Not built-in             | Declarative

What Stays the Same

  • Your AssistantAgent and UserProxyAgent definitions
  • Your GroupChat and GroupChatManager configuration
  • Your ConversableAgent subclasses
  • Your code execution capabilities
  • Your tool/function registrations

Troubleshooting

AutoGen's synchronous API in async container

AutoGen's initiate_chat() is synchronous. In the async server wrapper, use asyncio.to_thread:

import asyncio

class AutoGenAgent:
    async def invoke(self, message: str) -> str:
        result = await asyncio.to_thread(
            self.user_proxy.initiate_chat,
            self.assistant,
            message=message,
        )
        return self._extract_response(result)
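The same pattern is runnable without AutoGen installed, with a stub standing in for initiate_chat:

```python
import asyncio
import time


def blocking_chat(message: str) -> str:
    """Stand-in for the synchronous initiate_chat call."""
    time.sleep(0.1)  # simulate a blocking conversation
    return f"reply to: {message}"


async def invoke(message: str) -> str:
    # asyncio.to_thread runs the blocking call in a worker thread,
    # so the server's event loop stays free to handle other requests.
    return await asyncio.to_thread(blocking_chat, message)


print(asyncio.run(invoke("hello")))  # → reply to: hello
```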

Long-running conversations

AutoGen GroupChats can run for many rounds. Set appropriate timeouts:

deploy:
  env_vars:
    AUTOGEN_MAX_ROUNDS: "12"
  resources:
    cpu: "2"
    memory: "4Gi"

Missing config_list.json

Do not bundle config_list.json (it contains API keys). Use environment variables instead:

deploy:
  secrets:
    - OPENAI_API_KEY
    - AZURE_OPENAI_API_KEY

Container runs out of disk space

AutoGen code execution creates files. Mount a tmp volume or increase container resources:

deploy:
  resources:
    memory: "4Gi"
  env_vars:
    AUTOGEN_WORK_DIR: "/tmp/coding"

Full Example

agent.py:

import asyncio
import os
import autogen

config_list = [
    {
        "model": os.environ.get("PRIMARY_MODEL", "gpt-4o"),
        "api_key": os.environ.get("OPENAI_API_KEY", ""),
    }
]

assistant = autogen.AssistantAgent(
    name="assistant",
    system_message=(
        "You are a helpful coding assistant. Write clean, well-documented "
        "Python code. Always include type hints and docstrings."
    ),
    llm_config={"config_list": config_list},
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=5,
    code_execution_config={
        "work_dir": os.environ.get("AUTOGEN_WORK_DIR", "/tmp/coding"),
        "use_docker": False,
    },
)


class AutoGenAgent:
    """Wrapper for AgentBreeder compatibility."""

    def __init__(self):
        self.assistant = assistant
        self.user_proxy = user_proxy

    async def invoke(self, message: str) -> str:
        result = await asyncio.to_thread(
            self.user_proxy.initiate_chat,
            self.assistant,
            message=message,
        )
        if result and hasattr(result, "chat_history"):
            for msg in reversed(result.chat_history):
                if msg.get("name") == "assistant":
                    return msg.get("content", "")
        return "No response generated."


agent = AutoGenAgent()

requirements.txt:

pyautogen>=0.2.0
fastapi>=0.110.0
uvicorn[standard]>=0.27.0
httpx>=0.27.0
pydantic>=2.0.0

server.py:

from fastapi import FastAPI
from pydantic import BaseModel

from agent import agent

app = FastAPI()


class InvokeRequest(BaseModel):
    input: dict


@app.get("/health")
async def health():
    return {"status": "healthy"}


@app.post("/invoke")
async def invoke(request: InvokeRequest):
    message = request.input.get("message", "")
    result = await agent.invoke(message)
    return {"output": result}

agent.yaml:

name: autogen-coder
version: 1.0.0
description: "Code generation assistant using AutoGen"
team: engineering
owner: dev@company.com
tags: [autogen, coding, custom]

framework: custom

model:
  primary: gpt-4o
  fallback: gpt-4o-mini

guardrails:
  - content_filter

deploy:
  cloud: local
  resources:
    cpu: "1"
    memory: "2Gi"
  env_vars:
    AUTOGEN_WORK_DIR: "/tmp/coding"
    PRIMARY_MODEL: "gpt-4o"
  secrets:
    - OPENAI_API_KEY

access:
  visibility: team

Deploy:

agentbreeder deploy agent.yaml
