Version: 0.1.29

# Wger MCP Server + A2A Agent

Wger Workout Manager — exercise database, workout routines, nutrition plans, body measurements, and progress tracking.

This repository is actively maintained; contributions are welcome!
The MCP server can be run in two modes: `stdio` (for local testing) or `http` (for networked access). Both modes read two environment variables:
- `WGER_INSTANCE`: The URL of the target wger instance.
- `WGER_ACCESS_TOKEN`: The API token or access token.
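As an illustration, a minimal Python sketch of validating these two settings before starting anything (a hypothetical helper, not part of the `wger-agent` package):

```python
import os

def load_wger_config(env=None):
    """Validate the two required settings; hypothetical helper, not the package's API."""
    env = os.environ if env is None else env
    instance = env.get("WGER_INSTANCE", "").rstrip("/")
    token = env.get("WGER_ACCESS_TOKEN", "")
    if not instance or not token:
        raise RuntimeError("WGER_INSTANCE and WGER_ACCESS_TOKEN must both be set")
    return {"instance": instance, "token": token}

# Example with explicit values instead of the real environment:
config = load_wger_config({
    "WGER_INSTANCE": "http://localhost:8080/",
    "WGER_ACCESS_TOKEN": "your_token",
})
print(config["instance"])  # trailing slash stripped for clean URL joining
```

Failing fast on missing settings gives a clearer error than letting the first API call return a 401.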
Run the MCP server over stdio:

```bash
export WGER_INSTANCE="http://localhost:8080"
export WGER_ACCESS_TOKEN="your_token"
wger-mcp --transport "stdio"
```

Run the MCP server over HTTP:

```bash
export WGER_INSTANCE="http://localhost:8080"
export WGER_ACCESS_TOKEN="your_token"
wger-mcp --transport "http" --host "0.0.0.0" --port "8000"
```

Run the A2A agent:

```bash
export WGER_INSTANCE="http://localhost:8080"
export WGER_ACCESS_TOKEN="your_token"
wger-agent --provider openai --model-id gpt-4o --api-key sk-...
```

Build the Docker image:

```bash
docker build -t wger-agent .
```

Run the container:

```bash
docker run -d \
  --name wger-agent \
  -p 8000:8000 \
  -e TRANSPORT=http \
  -e WGER_INSTANCE="http://your-service:8080" \
  -e WGER_ACCESS_TOKEN="your_token" \
  knucklessg1/wger-agent:latest
```

Or with Docker Compose:

```yaml
services:
  wger-agent:
    image: knucklessg1/wger-agent:latest
    environment:
      - HOST=0.0.0.0
      - PORT=8000
      - TRANSPORT=http
      - WGER_INSTANCE=http://your-service:8080
      - WGER_ACCESS_TOKEN=your_token
    ports:
      - 8000:8000
```

Register the server with an MCP client:

```json
{
  "mcpServers": {
    "wger": {
      "command": "uv",
      "args": ["run", "--with", "wger-agent", "wger-mcp"],
      "env": {
        "WGER_INSTANCE": "http://your-service:8080",
        "WGER_ACCESS_TOKEN": "your_token"
      }
    }
  }
}
```

Install from PyPI with pip:

```bash
python -m pip install wger-agent
```

or with uv:

```bash
uv pip install wger-agent
```

This agent uses pydantic-graph orchestration for intelligent routing and optimal context management.
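For orientation, a sketch of what the tools call under the hood: wger exposes a REST API under `/api/v2/` that accepts token auth via an `Authorization: Token <key>` header. The instance URL, token, and `exercise` endpoint below are placeholder examples, and the actual requests the package issues may differ.

```python
import urllib.request

WGER_INSTANCE = "http://localhost:8080"  # placeholder instance URL
WGER_ACCESS_TOKEN = "your_token"         # placeholder token

def build_request(path):
    """Build an authenticated request against a wger REST API endpoint."""
    return urllib.request.Request(
        f"{WGER_INSTANCE}/api/v2/{path}",
        headers={"Authorization": f"Token {WGER_ACCESS_TOKEN}"},
    )

req = build_request("exercise/?limit=5")
# Against a running instance you would then fetch the paginated JSON:
# import json
# with urllib.request.urlopen(req) as resp:
#     data = json.load(resp)
#     print([item["id"] for item in data["results"]])
```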
```mermaid
---
title: Wger Agent Graph Agent
---
stateDiagram-v2
    [*] --> RouterNode: User Query
    RouterNode --> DomainNode: Classified Domain
    RouterNode --> [*]: Low confidence / Error
    DomainNode --> [*]: Domain Result
```
- **RouterNode**: A fast, lightweight LLM (e.g., `nvidia/nemotron-3-super`) that classifies the user's query into one of the specialized domains.
- **DomainNode**: The executor node. For the selected domain, it dynamically sets environment variables to temporarily enable ONLY the tools relevant to that domain, creating a highly focused sub-agent (e.g., `gpt-4o`) to complete the request. This preserves LLM context and prevents tool hallucination.
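The router-then-executor flow above can be sketched in a few lines. This is a toy stand-in, not the package's implementation: a keyword table replaces the router LLM, and a single environment variable (`ENABLED_TOOL_DOMAIN`, an invented name) stands in for the dynamic tool-enabling step.

```python
import os

# Illustrative domain table; the real agent's domains may differ.
DOMAIN_KEYWORDS = {
    "workout": ["routine", "workout", "exercise"],
    "nutrition": ["meal", "nutrition", "calorie"],
    "measurements": ["weight", "measurement", "body"],
}

def route(query):
    """RouterNode stand-in: classify a query, or return None on low confidence."""
    text = query.lower()
    for domain, keywords in DOMAIN_KEYWORDS.items():
        if any(k in text for k in keywords):
            return domain
    return None

def run_domain(query):
    """DomainNode stand-in: gate tools to one domain, then handle the query."""
    domain = route(query)
    if domain is None:
        return "Sorry, I could not classify that request."
    # Enable only this domain's tools for the focused sub-agent.
    os.environ["ENABLED_TOOL_DOMAIN"] = domain
    return f"[{domain}] handling: {query}"

print(run_domain("Plan a workout routine for Monday"))
```

The point of the two stages is that the executor only ever sees one domain's tool definitions, keeping its context small and leaving it nothing irrelevant to hallucinate against.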