Speaker: Oluwasogo Ogundowole
Talk: Autogen 101: Building Your First Multi-Agent System
This repository contains beginner-friendly demos showcasing how to build multi-agent systems using Microsoft AutoGen. Each demo illustrates core concepts like agent roles, tool calling, triage/planning, and collaborative workflows.
- Python 3.10+
- Poetry (for dependency management)
- OpenAI API key or compatible LLM endpoint
- Install Poetry (if not already installed):

  ```bash
  curl -sSL https://install.python-poetry.org | python3 -
  ```

- Install dependencies:

  ```bash
  poetry install
  ```

- Configure your LLM by creating a `.env` file in the root directory:

  ```
  OPENAI_API_KEY=your_api_key_here
  # Or use a local model endpoint
  # OPENAI_API_BASE=http://localhost:11434/v1
  ```

Location: demo1_content_pipeline/
A multi-agent system for collaborative content creation with role-based specialization.
- Planner (Triage): Breaks down content requests into actionable steps
- Researcher: Gathers relevant information using knowledge base tools
- Writer: Creates content based on research findings
- Critic: Reviews content for accuracy, clarity, and quality
- ✅ Simple, no external API dependencies (uses local knowledge base)
- ✅ Clear role separation and triage pattern
- ✅ Tool calling with function schemas
- ✅ Human-in-the-loop for final approval
- ✅ Message protocol with structured turn-taking
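The tool-calling feature above can be sketched as a plain Python function whose type annotations AutoGen turns into a function schema. The knowledge base and entries below are hypothetical stand-ins for what `tools/knowledge_tools.py` might contain:

```python
from typing import Annotated

# Hypothetical in-memory knowledge base standing in for tools/knowledge_tools.py.
KNOWLEDGE_BASE = {
    "async": "async/await lets a single Python thread interleave I/O-bound tasks.",
    "asyncio": "asyncio is the standard-library event loop that schedules coroutines.",
}

def search_knowledge_base(
    query: Annotated[str, "Topic or keywords to look up"],
) -> str:
    """Read-only tool: return knowledge-base entries matching the query."""
    hits = [text for topic, text in KNOWLEDGE_BASE.items() if topic in query.lower()]
    return "\n".join(hits) if hits else "No matching entries found."
```

With pyautogen, such a function can be exposed to the Researcher via `autogen.register_function(search_knowledge_base, caller=researcher, executor=user_proxy, description=...)`, which derives the JSON schema from the annotations.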
```bash
poetry run python demo1_content_pipeline/main.py
```

The system will:
- Accept a content topic (e.g., "Python async/await basics")
- Planner decomposes the task and assigns to Researcher
- Researcher queries knowledge base for relevant facts
- Writer drafts content based on research
- Critic reviews and suggests improvements (if needed)
- System outputs final content with audit trail
Location: demo2_customer_support/
A multi-agent customer support system that routes inquiries to specialized agents.
Each agent has a clear responsibility, policy (LLM + rules), and tools.
The Planner agent acts as a router, decomposing tasks and assigning to appropriate specialists.
- Read-only tools: Safe knowledge lookups (e.g., `search_knowledge_base`)
- Action tools: Side-effect operations with approval gates (e.g., `publish_content`)
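One library-free way to sketch an approval gate is a decorator that demands explicit confirmation before any side effect runs; in the demos, AutoGen's human-in-the-loop `UserProxyAgent` can play this role instead. The `publish_content` body below is a hypothetical placeholder:

```python
import functools

def approval_gate(fn):
    """Wrap an action tool so its side effect requires explicit human approval."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        answer = input(f"Approve {fn.__name__}{args}? [y/N] ")
        if answer.strip().lower() != "y":
            return f"{fn.__name__} rejected by human reviewer."
        return fn(*args, **kwargs)
    return wrapper

@approval_gate
def publish_content(title: str) -> str:
    # Hypothetical side-effect operation; a real tool would call a CMS API.
    return f"Published: {title}"
```

Read-only tools like `search_knowledge_base` skip the gate entirely, so agents can call them freely during research.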
GroupChatManager orchestrates who speaks next based on:
- Message content and context
- Agent capabilities and current state
- Custom selection policies
- Max rounds to prevent infinite loops
- Human-in-the-loop for critical decisions
- Structured message protocols for predictability
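pyautogen's `GroupChat` accepts a callable as `speaker_selection_method`; a deterministic round-robin over the demo roles might look like the sketch below (an illustration, not the demo's actual policy):

```python
ORDER = ["Planner", "Researcher", "Writer", "Critic"]

def select_next_speaker(last_speaker, groupchat):
    """Custom selection policy: fixed Planner -> Researcher -> Writer -> Critic cycle."""
    by_name = {agent.name: agent for agent in groupchat.agents}
    if last_speaker.name not in ORDER:
        return by_name["Planner"]  # anyone else (e.g., the user) hands control to the Planner
    nxt = ORDER[(ORDER.index(last_speaker.name) + 1) % len(ORDER)]
    return by_name[nxt]

# Passed as: GroupChat(agents=..., speaker_selection_method=select_next_speaker, max_round=12)
```

Combined with `max_round`, a policy like this bounds the conversation even if agents never reach a natural stopping point.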
```
PYCONNG_2025/
├── pyproject.toml              # Poetry dependency management
├── README.md                   # This file
├── .env                        # LLM configuration (create this)
├── demo1_content_pipeline/     # Demo 1: Content creation
│   ├── main.py                 # Entry point
│   ├── config.py               # LLM and agent configuration
│   ├── agents/                 # Agent definitions
│   │   ├── planner.py
│   │   ├── researcher.py
│   │   ├── writer.py
│   │   └── critic.py
│   └── tools/                  # Tool implementations
│       └── knowledge_tools.py
└── demo2_customer_support/     # Demo 2 (future)
```
Run `poetry install` to install dependencies, then use `poetry run python ...` or `poetry shell` to activate the environment.
- Reduce `max_consecutive_auto_reply` in agent configs
- Add delays between turns
- Use a local model (Ollama, LM Studio) to avoid API costs
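The local-model tip above can be expressed as an OpenAI-compatible `config_list` entry; the model name and placeholder key below are assumptions (Ollama ignores the key):

```python
# Assumes an Ollama server exposing its OpenAI-compatible API on the default port.
local_llm_config = {
    "config_list": [
        {
            "model": "llama3",                        # any model pulled into Ollama
            "base_url": "http://localhost:11434/v1",  # matches OPENAI_API_BASE in .env
            "api_key": "ollama",                      # placeholder; the server ignores it
        }
    ]
}
```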
- Check agent system messages for clarity
- Review speaker selection logic in GroupChatManager
- Ensure message protocols are consistent
MIT License - Feel free to use for learning and teaching!
Questions? Reach out to Oluwasogo Ogundowole at [email] or [@handle]