LaunchDarkly Model Router for Langflow

A custom Langflow component that dynamically routes requests to different AI models based on LaunchDarkly feature flag configuration. This enables A/B testing, gradual rollouts, and user-targeted model selection without code changes.

Flow Overview

Features

  • Dynamic Model Routing: Route requests to different LLMs based on LaunchDarkly AI Config flags
  • User Targeting: Target specific users or segments with different models using LaunchDarkly's targeting rules
  • System Prompt Management: Configure system prompts directly in LaunchDarkly
  • Automatic Metrics Tracking: Track token usage, duration, and success/failure metrics
  • Fallback Support: Automatically fall back to a default model if routing fails
  • Multiple Model Support: Connect multiple language models and let LaunchDarkly decide which one to use

Prerequisites

  • Docker and Docker Compose
  • A LaunchDarkly account with a server-side SDK key
  • An AI Config feature flag configured in LaunchDarkly

Quick Start

1. Clone and Start the Services

```shell
docker-compose up -d
```

This starts:

  • Ollama on port 11434 - Local LLM server
  • Langflow on port 7860 - Flow builder UI with LaunchDarkly SDK pre-installed

2. Install Ollama Models (Optional)

If you want to use local Ollama models, pull them after the services are running:

```shell
docker exec -it ollama ollama pull llama3.2:3b
docker exec -it ollama ollama pull qwen3:8b
```

Note: Model downloads can take several minutes depending on your internet connection. The models are persisted in the ./ollama_models directory.

Alternatively, you can skip Ollama models and use cloud-based LLMs instead (Anthropic Claude, OpenAI, Google Gemini, etc.) by configuring the appropriate model components in Langflow with your API keys.

3. Access Langflow

Open http://localhost:7860 in your browser.

4. Use the Auto-Loaded Flow

The flow AIConfig-Model-Selector-Chatbot.json is automatically loaded when Langflow starts. Simply open Langflow and you'll find it in your flows list.

Note: Flows in the ./flows directory are auto-loaded via the LANGFLOW_LOAD_FLOWS_PATH environment variable.

5. Create a Flow Manually (Alternative)

Alternatively, you can build the flow from scratch:

  1. Create a new flow

    • Click "New Flow" in the Langflow dashboard
    • Select "Blank Flow"
  2. Add model components

    You can use local Ollama models or cloud-based LLMs:

    Option A: Ollama (local models)

    • In the sidebar, go to Models → Ollama
    • Drag two Ollama components onto the canvas
    • Configure each with Base URL http://ollama:11434 and your desired model names (e.g., llama3.2:3b, qwen3:8b)

    Option B: Cloud LLMs (Anthropic, OpenAI, Google)

    • In the sidebar, go to Models and select your provider (e.g., Anthropic, OpenAI, Google Generative AI)
    • Drag model components onto the canvas
    • Configure with your API key and desired model (e.g., claude-sonnet-4-20250514, gpt-4o, gemini-pro)
  3. Add the LD Model Router component

    • In the sidebar, go to Custom Components → LD Model Router
    • Drag it onto the canvas
    • Configure the component:
      • Set LaunchDarkly SDK Key to your server-side SDK key
      • Set Flag Key to your AI Config flag key (e.g., model-router)
  4. Add Chat Input and Output

    • Go to Inputs → Chat Input and drag it onto the canvas
    • Go to Outputs → Chat Output and drag it onto the canvas
  5. Connect the components

    • Connect both model components (Ollama or your cloud LLMs) to the Language Models input on the LD Model Router
    • Connect Chat Input to the Input Message input on the LD Model Router
    • Connect the Output from LD Model Router to Chat Output
  6. Save and run

    • Click the save icon to save your flow
    • Use the Playground to test your flow

Component in Sidebar

Component Configuration

Inputs

| Input | Required | Description |
| --- | --- | --- |
| Language Models | Yes | Connect one or more Language Model components (e.g., Ollama, OpenAI) |
| Input Message | Yes | Connect a Chat Input component |
| LaunchDarkly SDK Key | Yes | Your server-side SDK key from LaunchDarkly |
| Flag Key | Yes | The feature flag key for model routing |
| User Key Override | No | Override the user key for targeting (defaults to sender_name from chat input) |
| User Attributes | No | Additional attributes for targeting rules |
| Fallback to First Model | No | Use the first connected model as fallback when routing fails (default: true) |
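
The User Key Override and User Attributes inputs feed the LaunchDarkly evaluation context. Here is a minimal sketch of that mapping as a plain dict mirroring the context JSON shape; the `"anonymous"` default and the example attribute names are assumptions for illustration, not the component's actual code:

```python
def build_context(user_key=None, attributes=None):
    """Assemble a LaunchDarkly-style evaluation context dict.

    Falls back to a generic key when no user key (or sender_name)
    is available -- an assumption made for this sketch.
    """
    context = {"kind": "user", "key": user_key or "anonymous"}
    # Extra attributes become targetable fields on the context.
    context.update(attributes or {})
    return context

# Targeting rules in LaunchDarkly could then match on "plan" or "region":
ctx = build_context("user-42", {"plan": "beta", "region": "eu"})
```

In the real component these values would typically be passed through the LaunchDarkly server SDK's context builder rather than a raw dict.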

Outputs

| Output | Description |
| --- | --- |
| Output | The response message from the selected model |
| Selected Model Info | Data object with model name, index, and LD config details |
| Routing Decision | Human-readable explanation of why a particular model was selected |

LaunchDarkly AI Config Setup

  1. Create an AI Config flag in LaunchDarkly with the key matching your component's Flag Key (e.g., model-router)
  2. Configure variations with different model names and system prompts
  3. Set up targeting rules to route users to different model variations

See Create AI Configs for details.
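
For illustration only, an AI Config variation generally bundles a model reference with one or more messages. The field names below follow the common shape but are a sketch, so confirm the exact schema against the LaunchDarkly AI Configs documentation:

```json
{
  "model": {
    "name": "qwen3:8b",
    "parameters": { "temperature": 0.7 }
  },
  "messages": [
    { "role": "system", "content": "You are a concise assistant." }
  ]
}
```

A targeting rule could then serve this variation to, say, a beta segment while everyone else receives a variation pointing at llama3.2:3b.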

Docker Compose Configuration

The included docker-compose.yml provides a complete development environment:

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ./ollama_models:/root/.ollama

  langflow:
    image: langflowai/langflow:latest
    ports:
      - "7860:7860"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
      - LANGFLOW_COMPONENTS_PATH=./custom_components
      - LANGFLOW_LOAD_FLOWS_PATH=/app/flows
    volumes:
      - ./custom_components:/app/custom_components
      - ./flows:/app/flows
```

Key configurations:

  • Custom components are mounted from ./custom_components
  • Flows in ./flows are automatically loaded on startup
  • LaunchDarkly SDKs are installed automatically on startup
  • Ollama is accessible to Langflow via the internal network

Volume Mounts

The Docker containers use volumes mounted relative to the directory where docker-compose is invoked:

| Host Path | Container Path | Purpose |
| --- | --- | --- |
| ./ollama_models | /root/.ollama | Persists downloaded Ollama models between container restarts |
| ./custom_components | /app/custom_components | Custom Langflow components (includes the LD Model Router) |
| ./flows | /app/flows | Langflow flows that are auto-loaded on startup |

This means after running docker-compose up, you'll see these directories created in your project folder:

  • ollama_models/ - Contains all downloaded LLM models (can be several GB)
  • custom_components/ - Contains the ld_model_router.py component used by Langflow
  • flows/ - Contains exported Langflow flows (auto-loaded via LANGFLOW_LOAD_FLOWS_PATH)

Example Flow

  1. Add two Ollama components configured with different models (e.g., llama3.2:3b and qwen3:8b)
  2. Add a Chat Input component
  3. Add the LD Model Router component
  4. Connect both Ollama models to the "Language Models" input
  5. Connect Chat Input to the "Input Message" input
  6. Add your LaunchDarkly SDK key and flag key
  7. Connect the Output to a Chat Output component

How Model Selection Works

  1. The component builds a LaunchDarkly context using the sender name (or override) and any custom attributes
  2. It evaluates the AI Config flag to get the target model configuration
  3. It matches the configured model name against connected models (case-insensitive, partial match)
  4. The matched model receives the request along with any system prompts from the flag
  5. Metrics (tokens, duration, success) are automatically tracked back to LaunchDarkly
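
The matching in step 3 can be sketched in simplified form. This is illustrative only, not the component's actual implementation; the model names and model objects below are stand-ins:

```python
def select_model(configured_name, connected, fallback_to_first=True):
    """Pick the first connected model whose name contains the
    configured name (case-insensitive partial match)."""
    target = (configured_name or "").strip().lower()
    if target:
        for index, (name, model) in enumerate(connected):
            if target in name.lower():
                return index, model
    # Routing failed: optionally fall back to the first connected model,
    # mirroring the "Fallback to First Model" input.
    if fallback_to_first and connected:
        return 0, connected[0][1]
    raise ValueError(f"No connected model matches {configured_name!r}")

# Two connected models, as in the example flow:
models = [("llama3.2:3b", "ollama-llama"), ("qwen3:8b", "ollama-qwen")]
index, chosen = select_model("QWEN3", models)  # partial, case-insensitive
```

Because the match is partial, an AI Config that names the model simply `qwen3` still resolves to the connected `qwen3:8b` component.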

Troubleshooting

Component not appearing in Langflow?

  • Ensure the custom_components volume is correctly mounted
  • Check that LANGFLOW_COMPONENTS_PATH is set in the environment
  • Restart the Langflow container

LaunchDarkly client not initializing?

  • Verify your SDK key is correct (server-side key, not client-side)
  • Check container logs: docker-compose logs langflow

Model not being matched?

  • The component uses partial, case-insensitive matching
  • Check the "Routing Decision" output for debugging info
  • Ensure the model name in LaunchDarkly matches one of your connected models

License

MIT
