A custom Langflow component that dynamically routes requests to different AI models based on LaunchDarkly feature flag configuration. This enables A/B testing, gradual rollouts, and user-targeted model selection without code changes.
- Dynamic Model Routing: Route requests to different LLMs based on LaunchDarkly AI Config flags
- User Targeting: Target specific users or segments with different models using LaunchDarkly's targeting rules
- System Prompt Management: Configure system prompts directly in LaunchDarkly
- Automatic Metrics Tracking: Track token usage, duration, and success/failure metrics
- Fallback Support: Automatically fall back to a default model if routing fails
- Multiple Model Support: Connect multiple language models and let LaunchDarkly decide which one to use
- Docker and Docker Compose
- A LaunchDarkly account with a server-side SDK key
- An AI Config feature flag configured in LaunchDarkly
```shell
docker-compose up -d
```

This starts:

- Ollama on port `11434` - Local LLM server
- Langflow on port `7860` - Flow builder UI with the LaunchDarkly SDK pre-installed
If you want to use local Ollama models, pull them after the services are running:
```shell
docker exec -it ollama ollama pull llama3.2:3b
docker exec -it ollama ollama pull qwen3:8b
```

Note: Model downloads can take several minutes depending on your internet connection. The models are persisted in the `./ollama_models` directory.
Alternatively, you can skip Ollama models and use cloud-based LLMs instead (Anthropic Claude, OpenAI, Google Gemini, etc.) by configuring the appropriate model components in Langflow with your API keys.
Open http://localhost:7860 in your browser.
The flow `AIConfig-Model-Selector-Chatbot.json` is automatically loaded when Langflow starts. Simply open Langflow and you'll find it in your flows list.
Note: Flows in the `./flows` directory are auto-loaded via the `LANGFLOW_LOAD_FLOWS_PATH` environment variable.
Alternatively, you can build the flow from scratch:
- Create a new flow
  - Click "New Flow" in the Langflow dashboard
  - Select "Blank Flow"
- Add model components

  You can use local Ollama models or cloud-based LLMs:

  Option A: Ollama (local models)
  - In the sidebar, go to Models → Ollama
  - Drag two Ollama components onto the canvas
  - Configure each with Base URL `http://ollama:11434` and your desired model names (e.g., `llama3.2:3b`, `qwen3:8b`)

  Option B: Cloud LLMs (Anthropic, OpenAI, Google)
  - In the sidebar, go to Models and select your provider (e.g., Anthropic, OpenAI, Google Generative AI)
  - Drag model components onto the canvas
  - Configure with your API key and desired model (e.g., `claude-sonnet-4-20250514`, `gpt-4o`, `gemini-pro`)
- Add the LD Model Router component
  - In the sidebar, go to Custom Components → LD Model Router
  - Drag it onto the canvas
  - Configure the component:
    - Set LaunchDarkly SDK Key to your server-side SDK key
    - Set Flag Key to your AI Config flag key (e.g., `model-router`)
- Add Chat Input and Output
  - Go to Inputs → Chat Input and drag it onto the canvas
  - Go to Outputs → Chat Output and drag it onto the canvas
- Connect the components
  - Connect both model components to the Language Models input on the LD Model Router
  - Connect Chat Input to the Input Message input on the LD Model Router
  - Connect the Output from the LD Model Router to Chat Output
- Save and run
  - Click the save icon to save your flow
  - Use the Playground to test your flow
| Input | Required | Description |
|---|---|---|
| Language Models | Yes | Connect one or more Language Model components (e.g., Ollama, OpenAI) |
| Input Message | Yes | Connect a Chat Input component |
| LaunchDarkly SDK Key | Yes | Your server-side SDK key from LaunchDarkly |
| Flag Key | Yes | The feature flag key for model routing |
| User Key Override | No | Override the user key for targeting (defaults to sender_name from chat input) |
| User Attributes | No | Additional attributes for targeting rules |
| Fallback to First Model | No | Use the first connected model as fallback when routing fails (default: true) |
| Output | Description |
|---|---|
| Output | The response message from the selected model |
| Selected Model Info | Data object with model name, index, and LD config details |
| Routing Decision | Human-readable explanation of why a particular model was selected |
- Create an AI Config flag in LaunchDarkly with the key matching your component's Flag Key (e.g., `model-router`)
- Configure variations with different model names and system prompts
- Set up targeting rules to route users to different model variations
See Create AI Configs for details.
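As a rough illustration, a variation of the AI Config flag might serve a payload along these lines. The field names here are illustrative only, not the authoritative schema; consult LaunchDarkly's AI Config documentation for the exact format:

```json
{
  "model": {
    "name": "llama3.2:3b"
  },
  "messages": [
    { "role": "system", "content": "You are a concise, helpful assistant." }
  ]
}
```

The router reads the model name from the served variation and matches it against the connected model components; a system prompt, if present, is forwarded to the selected model.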
The included `docker-compose.yml` provides a complete development environment:

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ./ollama_models:/root/.ollama
  langflow:
    image: langflowai/langflow:latest
    ports:
      - "7860:7860"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
      - LANGFLOW_COMPONENTS_PATH=./custom_components
      - LANGFLOW_LOAD_FLOWS_PATH=/app/flows
    volumes:
      - ./custom_components:/app/custom_components
      - ./flows:/app/flows
```

Key configurations:
- Custom components are mounted from `./custom_components`
- Flows in `./flows` are automatically loaded on startup
- LaunchDarkly SDKs are installed automatically on startup
- Ollama is accessible to Langflow via the internal network
The Docker containers use volumes mounted relative to the directory where docker-compose is invoked:
| Host Path | Container Path | Purpose |
|---|---|---|
| `./ollama_models` | `/root/.ollama` | Persists downloaded Ollama models between container restarts |
| `./custom_components` | `/app/custom_components` | Custom Langflow components (includes the LD Model Router) |
| `./flows` | `/app/flows` | Langflow flows that are auto-loaded on startup |
This means after running `docker-compose up`, you'll see these directories created in your project folder:

- `ollama_models/` - Contains all downloaded LLM models (can be several GB)
- `custom_components/` - Contains the `ld_model_router.py` component used by Langflow
- `flows/` - Contains exported Langflow flows (auto-loaded via `LANGFLOW_LOAD_FLOWS_PATH`)
- Add two Ollama components configured with different models (e.g., `llama3.2:3b` and `qwen3:8b`)
- Add a Chat Input component
- Add the LD Model Router component
- Connect both Ollama models to the "Language Models" input
- Connect Chat Input to the "Input Message" input
- Add your LaunchDarkly SDK key and flag key
- Connect the Output to a Chat Output component
- The component builds a LaunchDarkly context using the sender name (or override) and any custom attributes
- It evaluates the AI Config flag to get the target model configuration
- It matches the configured model name against connected models (case-insensitive, partial match)
- The matched model receives the request along with any system prompts from the flag
- Metrics (tokens, duration, success) are automatically tracked back to LaunchDarkly
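The matching and fallback steps above can be sketched in plain Python. This is a simplified illustration under stated assumptions, not the component's actual code; `match_model`, `route`, and the model list are hypothetical names:

```python
def match_model(configured_name: str, connected_models: list[str]):
    """Match the model name from the LaunchDarkly config against the
    connected models using partial, case-insensitive matching."""
    target = configured_name.lower().strip()
    for index, name in enumerate(connected_models):
        candidate = name.lower()
        # Partial match in either direction, e.g. "llama3.2" matches "llama3.2:3b"
        if target in candidate or candidate in target:
            return index, name
    return None, None


def route(configured_name: str, connected_models: list[str],
          fallback_to_first: bool = True):
    """Pick a connected model; optionally fall back to the first one
    when no name matches (mirrors the Fallback to First Model input)."""
    index, name = match_model(configured_name, connected_models)
    if index is None and fallback_to_first and connected_models:
        return 0, connected_models[0], f"fallback: no match for {configured_name!r}"
    return index, name, f"matched {configured_name!r} to {name!r}"


models = ["llama3.2:3b", "qwen3:8b"]
print(route("llama3.2", models))  # matches the first model
print(route("gpt-4o", models))    # no match, falls back to the first model
```

The string returned as the third element plays the role of the "Routing Decision" output: a human-readable explanation you can inspect when a model is not matched as expected.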
Component not appearing in Langflow?
- Ensure the `custom_components` volume is correctly mounted
- Check that `LANGFLOW_COMPONENTS_PATH` is set in the environment
- Restart the Langflow container
LaunchDarkly client not initializing?
- Verify your SDK key is correct (server-side key, not client-side)
- Check container logs: `docker-compose logs langflow`
Model not being matched?
- The component uses partial, case-insensitive matching
- Check the "Routing Decision" output for debugging info
- Ensure the model name in LaunchDarkly matches one of your connected models
MIT