🤖 Agent Farm

Agent Farm License Python Next.js Docker Kubernetes

AI agent management platform with visual editor, execution tracking, and container orchestration.

🚀 Quick Start · 📖 Documentation · 🎯 Features · 🏗️ Architecture · 🤝 Contributing

🎯 Overview

Agent Farm is a comprehensive No-Code AI Agent Development Platform that empowers users to build, manage, and orchestrate complex AI workflows through an intuitive visual interface. By simply dragging and dropping pre-built 'nodes' (such as LLM, RAG, and Memory), users can design and deploy their own customized AI agents. Built with modern technologies and designed for scalability, it supports multiple LLM providers, real-time collaboration, advanced RAG pipelines, and a sophisticated multi-layered memory system.

🌟 Key Highlights

  • No-Code Visual Workflow Editor: Intuitive drag-and-drop interface powered by React Flow for rapid AI agent prototyping and deployment.
  • Multi-LLM & Multi-Provider Support: Seamless integration with leading LLM providers like OpenAI, Anthropic, and Google Gemini, with an extensible architecture for custom models.
  • Advanced RAG Pipeline: Features like re-ranking, query transformation, and LangGraph orchestration for highly accurate and contextually relevant responses from vast knowledge bases.
  • Multimodal RAG: Supports both text and image embeddings, enabling comprehensive search and retrieval across diverse data types using models like clip-ViT-B-32.
  • Multi-layered Agent Memory System: Incorporates Conversation Buffer (short-term), Summary (mid-term), Vector (long-term associative), and Entity (long-term factual) memories for intelligent and context-aware agent behavior.
  • Agent-to-Agent Communication (A2A): Implements the Model Context Protocol (MCP) for seamless and type-safe communication between agents.
  • Real-time Execution & Monitoring: WebSocket-based progress tracking and live execution logs for transparent workflow management.
  • Production Ready: Designed for scalability and reliability, with Kubernetes deployment options and comprehensive monitoring capabilities.

🎯 Features

Core Features

  • 🎨 Visual Workflow Designer

    • Drag-and-drop node-based interface
    • Support for LLM, Tool, RAG, Agent, and Conditional nodes
    • Real-time workflow validation
    • Undo/redo functionality
  • 🤖 Multi-LLM Integration

    • Seamless integration with leading LLM providers:
      • OpenAI (GPT-4o, GPT-4o Mini, GPT-4 Turbo, GPT-4, GPT-3.5 Turbo)
      • Anthropic (Claude Opus, Sonnet, Haiku)
      • Google (Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 1.5 Pro, Gemini 1.5 Flash)
    • Support for local models via custom API endpoints (e.g., Ollama, LM Studio, Text Gen WebUI)
    • Extensible architecture for new providers and custom models
  • 📚 Knowledge Base Management (Advanced RAG)

    • Multi-Collection Strategy: Create and manage independent knowledge bases (vector collections) for different topics, ensuring precise information retrieval.
    • File Upload & Processing: Supports various document formats (PDF, TXT, DOCX) with automatic chunking and embedding.
    • Multimodal RAG: Integrates clip-ViT-B-32 for embedding and searching both text and images in a unified vector space.
    • Advanced RAG Pipeline: Implements a sophisticated 5-stage RAG pipeline:
      • Query Transformation: Utilizes LLMs (e.g., GPT-4 Turbo) to automatically transform user queries into multiple, more specific sub-queries for improved search.
      • Retrieval: Semantic similarity and keyword-based search across vector stores.
      • Re-ranking: Employs lightweight re-ranker models to refine the relevance of retrieved documents based on the original query.
      • Orchestration: Leverages LangGraph to manage complex RAG flows, including conditional logic for query transformation and re-ranking.
      • Citation & Hallucination Check: Ensures answers are grounded in retrieved documents and minimizes factual errors.
    • Performance Evaluation: Integrates Ragas and ARES frameworks for automated evaluation of RAG pipeline metrics (Faithfulness, Answer Relevancy).
  • 🧠 Multi-layered Agent Memory System

    • Conversation Buffer Memory (Short-term): Retains the raw content of recent conversations.
    • Summary Memory (Mid-term): Periodically summarizes long conversations using LLMs (e.g., GPT-4 Turbo) to capture core themes.
    • Vector Memory (Long-term Associative): Embeds conversation history to retrieve semantically similar past memories.
    • Entity Memory (Long-term Factual): Extracts and stores key information from conversations in a Key-Value format for precise factual recall.
  • 👥 Team Collaboration

    • Project-based access control
    • Role-based permissions (Owner, Editor, Viewer)
    • Real-time collaboration features
    • Activity tracking and audit logs
  • 🔄 Real-time Execution

    • WebSocket-based progress monitoring
    • Live execution logs
    • Error handling and recovery
    • Batch execution support
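The 5-stage RAG pipeline described above can be sketched end-to-end in plain Python. This is a toy sketch only: keyword matching stands in for semantic vector search, word-overlap counting stands in for the reranker model, and all function names are hypothetical; the real pipeline uses LLM query rewriting and LangGraph orchestration.

```python
# Minimal sketch of the 5-stage RAG pipeline (hypothetical stand-ins).

def transform_query(query: str) -> list[str]:
    # Stage 1: an LLM would rewrite the query into focused sub-queries;
    # here we just pass the original through.
    return [query]

def retrieve(sub_queries: list[str], store: dict[str, str]) -> list[str]:
    # Stage 2: keyword match standing in for semantic vector search.
    hits = []
    for q in sub_queries:
        for doc_id, text in store.items():
            if any(word in text.lower() for word in q.lower().split()):
                hits.append(doc_id)
    return list(dict.fromkeys(hits))  # dedupe, keep order

def rerank(query: str, doc_ids: list[str], store: dict[str, str]) -> list[str]:
    # Stage 3: a reranker model would score relevance; here, word overlap.
    def score(d: str) -> int:
        return sum(w in store[d].lower() for w in query.lower().split())
    return sorted(doc_ids, key=score, reverse=True)

def answer_with_citations(query: str, doc_ids: list[str],
                          store: dict[str, str]) -> dict:
    # Stages 4-5: generate a grounded answer and keep citations so a
    # hallucination check can verify every claim against a source.
    return {"answer": store[doc_ids[0]], "citations": doc_ids}

store = {"doc1": "Vector memory enables long-term recall.",
         "doc2": "LangGraph orchestrates conditional RAG flows."}
query = "vector recall"
result = answer_with_citations(
    query, rerank(query, retrieve(transform_query(query), store), store), store)
print(result["citations"])  # → ['doc1']
```

The orchestration stage is what makes the conditional logic possible: in the real pipeline, LangGraph decides at runtime whether query transformation or re-ranking is worth the extra LLM calls.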
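The four memory layers compose naturally into one structure. Below is a minimal sketch (a hypothetical class, not the project's implementation): string concatenation stands in for LLM summarization, and raw text stands in for vector embeddings.

```python
from collections import deque

class AgentMemory:
    """Sketch of the four memory layers."""

    def __init__(self, buffer_size: int = 4):
        self.buffer = deque(maxlen=buffer_size)  # short-term: raw turns
        self.summary = ""                        # mid-term: rolling summary
        self.vector_store = []                   # long-term associative
        self.entities = {}                       # long-term factual (key-value)

    def add_turn(self, turn: str) -> None:
        if len(self.buffer) == self.buffer.maxlen:
            # Oldest turn is about to fall out of the buffer: fold it into
            # the summary (an LLM would do real summarization here).
            self.summary += self.buffer[0] + " "
        self.buffer.append(turn)
        self.vector_store.append(turn)  # real system stores an embedding

    def remember_fact(self, key: str, value: str) -> None:
        self.entities[key] = value      # entity memory: precise recall

mem = AgentMemory(buffer_size=2)
for turn in ["hi", "my name is Ana", "what's the weather?"]:
    mem.add_turn(turn)
mem.remember_fact("user_name", "Ana")
print(list(mem.buffer), "| summary:", mem.summary.strip())
```

Note how each layer trades precision for capacity: the buffer is exact but tiny, the summary is lossy but compact, and the entity store keeps only the facts worth recalling verbatim.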

Advanced Features

  • 🔗 Agent-to-Agent Communication (A2A)

    • MCP (Model Context Protocol) implementation
    • Session-based context sharing
    • Automatic message routing
    • TTL-based session management
  • 🛠️ Extensibility

    • Custom tool integration
    • Plugin system for new node types
    • API-first architecture
    • Webhook support
  • 📊 Monitoring & Analytics

    • Execution metrics and statistics
    • Performance monitoring
    • Usage analytics
    • Error tracking
  • 🔐 Security & Compliance

    • JWT-based authentication
    • OAuth 2.0 integration (Kakao)
    • API key management
    • PIPA compliance (Korean Personal Information Protection Act)
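The TTL-based session management behind A2A communication can be sketched with a monotonic clock. This is a hypothetical in-memory stand-in; a production server would more likely lean on Redis key expirations, which are already part of the stack.

```python
from __future__ import annotations

import time

class SessionStore:
    """Sketch of TTL-based A2A session management: each shared
    context expires `ttl` seconds after it is stored."""

    def __init__(self, ttl: float = 60.0):
        self.ttl = ttl
        self._sessions: dict[str, tuple[float, dict]] = {}

    def put(self, session_id: str, context: dict) -> None:
        self._sessions[session_id] = (time.monotonic(), context)

    def get(self, session_id: str) -> dict | None:
        entry = self._sessions.get(session_id)
        if entry is None:
            return None
        created, context = entry
        if time.monotonic() - created > self.ttl:
            del self._sessions[session_id]  # expired: drop it
            return None
        return context

store = SessionStore(ttl=0.05)
store.put("agent-a->agent-b", {"topic": "weather"})
assert store.get("agent-a->agent-b") == {"topic": "weather"}
time.sleep(0.1)
print(store.get("agent-a->agent-b"))  # expired → None
```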

🛠️ Tech Stack

Backend

  • Framework: FastAPI + SQLAlchemy + Alembic
  • Language: Python 3.11+
  • AI/ML: LangChain, LangGraph, Pydantic, OpenAI, Anthropic, Google GenAI, Sentence-Transformers, Lightweight Reranker, Ragas, ARES
  • Database: MySQL 8.0 (production), SQLite (development), ChromaDB (vector store)
  • Cache: Redis 7
  • Storage: MinIO (S3-compatible)
  • Real-time: WebSocket, Redis Pub/Sub

Frontend

  • Framework: Next.js 15 + React 19 + TypeScript
  • State Management: TanStack Query, Zustand
  • UI Components: Tailwind CSS, Radix UI
  • Workflow: React Flow (@xyflow/react)
  • Internationalization: next-intl (Korean/English)

Infrastructure

  • Development: Docker Compose
  • Production: Kubernetes
  • CI/CD: GitLab CI + ArgoCD
  • Monitoring: Grafana, Prometheus
  • Container Registry: Harbor

🏗️ Architecture

System Architecture

┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   Frontend      │    │   Backend       │    │   Database      │
│   (Next.js)     │◄──►│   (FastAPI)     │◄──►│   (MySQL)       │
│   Port: 3000    │    │   Port: 8000    │    │   Port: 3306    │
└─────────────────┘    └─────────────────┘    └─────────────────┘
         │                       │                       │
         │                       │                       │
         ▼                       ▼                       ▼
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   MCP Server    │    │   Redis Cache   │    │   MinIO Storage │
│   (Python)      │    │   (Cache/Queue) │    │   (File Storage)│
│   Port: 9000    │    │   Port: 6379    │    │   Port: 9100    │
└─────────────────┘    └─────────────────┘    └─────────────────┘

Project Structure

agent-farm/
├── api/                    # FastAPI Backend
│   ├── app/
│   │   ├── api/v1/        # REST API endpoints
│   │   ├── core/          # Core configuration
│   │   ├── models/        # SQLAlchemy models
│   │   ├── schemas/       # Pydantic schemas
│   │   ├── services/      # Business logic
│   │   └── utils/         # Utilities
│   ├── alembic/           # Database migrations
│   └── tests/             # Backend tests
├── app/                   # Next.js Frontend
│   ├── src/app/           # App Router
│   ├── src/components/    # React components
│   ├── src/hooks/         # Custom hooks
│   ├── src/lib/           # Utilities
│   └── src/types/         # TypeScript types
├── mcp-server/            # MCP Protocol Server
├── landing/               # Landing page
├── k8s/                   # Kubernetes manifests
├── knowledge/             # Project documentation
└── compose.yml            # Docker Compose

Database Design

  • No Foreign Key Constraints: All relationships are maintained at the application level for maximum flexibility and scalability.
  • Application-Level Relationships: Referential integrity is managed within the application layer, not at the database level.
  • JSON Fields: Extensive use of JSON columns for flexible configuration storage (e.g., agent config, node data, canvas data).
  • UUID Support: Execution tracking and other key entities utilize UUIDs for distributed system compatibility.
  • Enum Types: Strongly typed status and type fields using SQLAlchemy enums for enhanced data integrity.
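The design above translates into models roughly like the following. These dataclasses are hypothetical stand-ins for the project's SQLAlchemy models, but they show where application-level referential integrity lives when the database enforces no foreign keys:

```python
import enum
import uuid
from dataclasses import dataclass, field

class ExecutionStatus(enum.Enum):
    PENDING = "pending"
    RUNNING = "running"
    COMPLETED = "completed"

@dataclass
class Agent:
    id: str = field(default_factory=lambda: str(uuid.uuid4()))
    config: dict = field(default_factory=dict)   # stored as a JSON column

@dataclass
class Execution:
    agent_id: str                                # no DB foreign key
    status: ExecutionStatus = ExecutionStatus.PENDING
    id: str = field(default_factory=lambda: str(uuid.uuid4()))

def create_execution(agent_id: str, agents: dict[str, Agent]) -> Execution:
    # Referential integrity is enforced here, in the service layer,
    # rather than by the database.
    if agent_id not in agents:
        raise ValueError(f"unknown agent {agent_id}")
    return Execution(agent_id=agent_id)

agents = {a.id: a for a in [Agent(config={"model": "gpt-4o"})]}
ex = create_execution(next(iter(agents)), agents)
print(ex.status.value)  # → pending
```

The trade-off is deliberate: dropping database-level constraints simplifies sharding and schema evolution, at the cost of requiring every write path to validate references itself.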

🚀 Quick Start

Prerequisites

  • Docker and Docker Compose
  • Git

1. Clone the Repository

git clone https://github.com/say828/agent-farm.git
cd agent-farm

2. Environment Setup

# Copy environment example
cp .env.example .env

# Edit environment variables (optional for development)
# Add your API keys for OpenAI, Anthropic, etc.

Important Environment Variables:

  • OPENAI_API_KEY: Your OpenAI API key for GPT models
  • ANTHROPIC_API_KEY: Your Anthropic API key for Claude models
  • GOOGLE_API_KEY: Your Google AI API key for Gemini models
  • KAKAO_CLIENT_ID & KAKAO_CLIENT_SECRET: For OAuth authentication
  • JWT_SECRET_KEY: For JWT token signing (change in production)
  • SECRET_KEY: Application secret key (change in production)
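A small startup check (a hypothetical helper, not part of the codebase) illustrates how these variables interact: the two security keys are always required, while at least one provider key is needed for AI nodes to work.

```python
REQUIRED = ["JWT_SECRET_KEY", "SECRET_KEY"]
PROVIDER_KEYS = ["OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GOOGLE_API_KEY"]

def check_env(env: dict[str, str]) -> list[str]:
    """Return a list of problems; an empty list means the config is usable."""
    problems = [f"missing {name}" for name in REQUIRED if not env.get(name)]
    if not any(env.get(name) for name in PROVIDER_KEYS):
        problems.append("no LLM provider key set; AI nodes will fail")
    return problems

# Example: a development .env with one provider configured
env = {"JWT_SECRET_KEY": "x" * 32, "SECRET_KEY": "y" * 32,
       "OPENAI_API_KEY": "sk-..."}
print(check_env(env))  # → []
```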

Service-Specific Configuration: Each service has its own .env.example file:

  • api/.env.example: Backend API configuration
  • app/.env.example: Frontend configuration
  • mcp-server/.env.example: MCP server configuration

3. Start the Development Environment

# Start all services
docker-compose up -d

# Check service status
docker-compose ps

# View logs
docker-compose logs -f api app

4. Access the Application

  • Frontend: http://localhost:3000
  • Backend API: http://localhost:8000
  • Interactive API docs (Swagger UI): http://localhost:8000/docs

📦 Installation

Option 1: Docker Compose (Recommended)

This is the easiest way to get started with Agent Farm.

# Clone and start
git clone https://github.com/say828/agent-farm.git
cd agent-farm
docker-compose up -d

# Stop and cleanup
docker-compose down -v

Option 2: Manual Setup

For development or custom deployments:

Backend Setup

cd api
pip install poetry
poetry install
poetry run alembic upgrade head
poetry run uvicorn app.main:app --reload --host 0.0.0.0 --port 8000

Frontend Setup

cd app
npm install -g pnpm
pnpm install
pnpm dev

Database Setup

# MySQL (recommended for production)
docker run -d --name mysql \
  -e MYSQL_ROOT_PASSWORD=root \
  -e MYSQL_DATABASE=agent_farm \
  -e MYSQL_USER=user \
  -e MYSQL_PASSWORD=user \
  -p 3306:3306 mysql:8.0

# Redis
docker run -d --name redis -p 6379:6379 redis:7

# MinIO
docker run -d --name minio \
  -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=admin \
  -e MINIO_ROOT_PASSWORD=admin123 \
  minio/minio server /data --console-address ":9001"

🎮 Usage

1. Create Your First Agent with Advanced RAG

  1. Navigate to http://localhost:3000
  2. Sign up or log in (OAuth with Kakao supported)
  3. Create a new project.
  4. Use the visual editor to design your workflow:
    • Add LLM nodes for AI processing.
    • Connect Tool nodes for external integrations.
    • Utilize RAG nodes for advanced knowledge retrieval from your custom knowledge bases.
    • Implement Conditional nodes for dynamic decision-making.
    • Orchestrate Agent nodes for Agent-to-Agent (A2A) communication.

2. Configure Knowledge Base and Memory

  1. Knowledge Bases: Go to the Knowledge Bases section to upload documents (PDF, TXT, DOCX). Documents are automatically processed, chunked, and indexed for RAG.
  2. Memory Systems: Configure memory settings within LLM and RAG nodes to leverage:
    • Conversation Buffer: For short-term context.
    • Summary Memory: For mid-term conversation summaries.
    • Vector Memory: For long-term associative recall.
    • Entity Memory: For factual information extraction and storage.

3. Execute Workflows and Monitor

  1. Click "Execute" in the agent editor to run your workflow.
  2. Monitor real-time progress and live execution logs via WebSockets.
  3. View detailed execution results, including RAG citations and memory interactions.
  4. Export results or share with team members.

4. Team Collaboration

  1. Create or join a project.
  2. Invite team members with appropriate roles (Owner, Editor, Viewer).
  3. Share agents, knowledge bases, and collaborate on workflow design.
  4. Track team activity and changes through audit logs.

📚 API Documentation

REST API

The API is fully documented with OpenAPI/Swagger:

  • Swagger UI: http://localhost:8000/docs
  • ReDoc: http://localhost:8000/redoc

WebSocket API

Real-time features use WebSocket connections:

// Connect to execution updates for a given execution ID
const executionId = 'your-execution-id';
const ws = new WebSocket(`ws://localhost:8000/ws/execution/${executionId}`);

ws.onmessage = (event) => {
  const data = JSON.parse(event.data);
  console.log('Execution update:', data);
};

Key Endpoints

  • GET /api/v1/agents - List agents
  • POST /api/v1/agents - Create agent
  • POST /api/v1/agents/{id}/execute - Execute agent
  • GET /api/v1/executions/{id} - Get execution details
  • POST /api/v1/knowledge-bases - Create knowledge base
  • POST /api/v1/knowledge-bases/{id}/documents - Upload documents
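As an illustration of the endpoint shapes, the following builds (but does not send) an execute call using only Python's standard library. The `inputs` payload, agent ID, and token are hypothetical placeholders; the authoritative request schemas live in the Swagger documentation.

```python
import json
import urllib.request

BASE = "http://localhost:8000"  # assumes the dev stack from Quick Start

def build_execute_request(agent_id: str, inputs: dict) -> urllib.request.Request:
    # Constructs the agent-execution call; the payload shape here is
    # illustrative, not the documented schema.
    return urllib.request.Request(
        f"{BASE}/api/v1/agents/{agent_id}/execute",
        data=json.dumps({"inputs": inputs}).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer <your-jwt>"},
        method="POST",
    )

req = build_execute_request("1234", {"question": "Hello?"})
print(req.method, req.full_url)
```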

🚀 Deployment

Docker Compose (Development)

# Start development environment
docker-compose up -d

# Production-like setup
docker-compose -f compose.prod.yml up -d

Kubernetes (Production)

Prerequisites

  • Kubernetes cluster
  • kubectl configured
  • Helm (optional)

Quick Deploy

cd k8s

# Setup Kind cluster (for local testing)
./scripts/setup-kind.sh

# Build and load images
./scripts/build-images.sh

# Deploy all services
./scripts/deploy.sh

Manual Deploy

# Deploy to specific namespace
kubectl apply -k k8s/manifests/ab/

# Check deployment status
kubectl get pods -n ab

# Access services
kubectl port-forward -n ab svc/app 3000:3000

Production Considerations

  • TLS Certificates: Configure Let's Encrypt or provide your own
  • Resource Limits: Set appropriate CPU/memory limits
  • Persistent Storage: Configure persistent volumes for MySQL and MinIO
  • Monitoring: Deploy Grafana and Prometheus
  • Backup: Set up database and file storage backups

Environment Variables

Root Configuration (.env)

# External API Keys (Required for AI functionality)
OPENAI_API_KEY=sk-your-openai-api-key-here
ANTHROPIC_API_KEY=sk-ant-your-anthropic-api-key-here
GOOGLE_API_KEY=your-google-genai-api-key-here

# OAuth Configuration
KAKAO_CLIENT_ID=your-kakao-app-key-here
KAKAO_CLIENT_SECRET=your-kakao-client-secret-here

# Security Settings
JWT_SECRET_KEY=your-jwt-secret-key-change-in-production-min-32-chars
SECRET_KEY=change-this-in-production-super-secret-key-min-32-chars

# Service URLs
NEXT_PUBLIC_API_URL=http://localhost:8000
API_BASE_URL=http://localhost:8000

# Database Configuration
MYSQL_HOST=mysql
MYSQL_USER=user
MYSQL_PASSWORD=user
MYSQL_DATABASE=agent_farm

# Redis Configuration
REDIS_HOST=redis
REDIS_PORT=6379

Service-Specific Configuration

Backend API (api/.env.example):

  • Database connection settings
  • External API keys
  • Security configuration
  • CORS settings

Frontend (app/.env.example):

  • API endpoint configuration
  • Public environment variables
  • Feature flags

MCP Server (mcp-server/.env.example):

  • MCP protocol settings
  • Tool execution configuration
  • Virtual environment settings

🧪 Testing

Backend Tests

cd api
poetry run pytest
poetry run pytest --cov=app --cov-report=html

Frontend Tests

cd app
pnpm test
pnpm test:coverage

Integration Tests

# Start test environment
docker-compose -f compose.test.yml up -d

# Run integration tests
cd api
poetry run pytest tests/test_integration.py

🔧 Development

Code Quality

# Backend
cd api
poetry run black .          # Format code
poetry run isort .          # Sort imports
poetry run flake8           # Lint
poetry run mypy .           # Type check

# Frontend
cd app
pnpm lint                   # ESLint
pnpm lint:fix               # Auto-fix
pnpm type-check             # TypeScript check

Database Migrations

cd api

# Create migration
poetry run alembic revision --autogenerate -m "description"

# Apply migrations
poetry run alembic upgrade head

# Rollback
poetry run alembic downgrade -1

Adding New Features

  1. Follow the existing code structure
  2. Add tests for new functionality
  3. Update documentation
  4. Submit pull request

🤝 Contributing

We welcome contributions from the community! Here's how to get started:

1. Fork the Repository

git clone https://github.com/yourusername/agent-farm.git
cd agent-farm
git checkout -b feature/your-feature-name

2. Development Setup

# Start development environment
docker-compose up -d

# Make your changes
# Test your changes
# Submit pull request

3. Contribution Guidelines

  • Follow the existing code style
  • Add tests for new features
  • Update documentation
  • Ensure all tests pass
  • Write clear commit messages

4. Issues and Feature Requests

  • Use GitHub Issues for bug reports
  • Use Discussions for feature requests
  • Check existing issues before creating new ones

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

MIT License

Copyright (c) 2025 Agent Farm

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

🙏 Acknowledgments

  • LangChain for AI framework
  • React Flow for visual workflow editor
  • FastAPI for backend framework
  • Next.js for frontend framework
  • All contributors and the open-source community

⬆ Back to Top

Made with ❤️ by say828
