Agent Launchpad Template

🚀 Get up and running with a local conversational agent in minutes.

This project is a minimal agentic template designed for experimenting with local LLMs. It combines Chainlit for the chat interface, Ollama for local model inference, and LangGraph for custom agentic workflows.

Use this as a starting point to build your own custom AI personas and agents.

Features

  • Instant Setup: Clone, install, and chat.
  • 🏗️ Template Structure: Clean, modular code ready for customization.
  • 💬 Interactive UI: Polished chat interface out-of-the-box.
  • 🧠 Extended Thinking: Visualizes the agent's reasoning process using Chainlit steps.
  • 🔗 Native LangGraph: Custom graph implementation (Reasoning -> Response) for full control.
  • 📊 Telemetry Ready: Integrated with Langfuse for observability.
  • 🔒 100% Local: Privacy-first using Llama 3.1 via Ollama.

Prerequisites

  1. Python 3.10 or higher
  2. Ollama running locally.

Setup Ollama

Install Ollama and pull the Llama 3.1 model:

ollama pull llama3.1

Start the Ollama server:

ollama serve

Installation

  1. Clone the repository:

    git clone https://github.com/Yn0t-studio/Agent-Launchpad.git
    cd Agent-Launchpad
  2. Create and activate a virtual environment:

    python -m venv .venv
    source .venv/bin/activate  # Windows: .venv\Scripts\activate
  3. Install dependencies:

    pip install -r requirements.txt

Usage

  1. Ensure Ollama is running (ollama serve).

  2. Run the application:

    chainlit run app.py -w
  3. Open http://localhost:8000 to chat.

Configuration

Telemetry (Langfuse)

To enable tracing and observability with Langfuse:

  1. Create a .env file in the root directory.

  2. Add your Langfuse keys:

    LANGFUSE_PUBLIC_KEY=pk-lf-...
    LANGFUSE_SECRET_KEY=sk-lf-...
    LANGFUSE_HOST=https://cloud.langfuse.com # or your self-hosted instance
  3. Restart the application. Telemetry will automatically be enabled if keys are present.
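The "enabled if keys are present" check can be expressed as a minimal sketch; the helper name `telemetry_enabled` is illustrative, not taken from the project's code:

```python
import os

def telemetry_enabled() -> bool:
    """Telemetry is active only when both Langfuse keys are set in the environment."""
    return bool(os.getenv("LANGFUSE_PUBLIC_KEY")) and bool(os.getenv("LANGFUSE_SECRET_KEY"))
```

If either key is missing or empty, the app would simply skip Langfuse initialization and run without tracing.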

Changing the Model or Tools

Edit agent.py to configure the agent:

from langchain_ollama import ChatOllama

def get_agent():
    # ...
    # Swap "llama3.1" for any other model you have pulled via Ollama.
    model = ChatOllama(model="llama3.1", base_url="http://localhost:11434")

    # Define custom nodes and workflow
    # See agent.py for the full graph definition

    return workflow.compile()
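To see what the Reasoning -> Response flow does conceptually, here is a hedged sketch in plain Python (it mimics the two-node pattern without the LangGraph dependency; the node names, state keys, and stubbed logic are assumptions, not the project's actual implementation):

```python
from typing import Callable

def reasoning_node(state: dict) -> dict:
    # First node: produce the agent's internal reasoning (stubbed here;
    # the real template would call the Ollama-backed model).
    state["thoughts"] = f"Consider the question: {state['input']}"
    return state

def response_node(state: dict) -> dict:
    # Second node: turn the reasoning into the user-facing reply (stubbed).
    state["response"] = f"Answer based on reasoning: {state['thoughts']}"
    return state

def run_pipeline(state: dict, nodes: list[Callable[[dict], dict]]) -> dict:
    # Sequential execution mirrors the single Reasoning -> Response edge.
    for node in nodes:
        state = node(state)
    return state

result = run_pipeline({"input": "What is Ollama?"}, [reasoning_node, response_node])
print(result["response"])
```

In the real template, each node would invoke the model and the intermediate "thoughts" state is what Chainlit renders as a reasoning step.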
