Option 1 — One command (recommended):

```bash
mkdir my-bot && cd my-bot
npx @ngirchev/open-daimon
```

The wizard will:

- Configure `.env` with your credentials
- Let you choose an AI provider (OpenRouter or Ollama)
- For Ollama — check the connection and pull `qwen2.5:3b` automatically
- Generate ready-to-run `docker-compose.yml` and `application-local.yml`
- Offer to start the stack immediately
1. Docker (required) — install Docker Desktop and start it. Node.js 18+ required for the npx wizard.
2. Ollama (optional — local AI models) — install from ollama.com. The wizard checks the connection and pulls a model automatically.
3. OpenRouter (optional — cloud AI, free models available):
- Sign up at openrouter.ai (GitHub OAuth or email)
- Go to openrouter.ai/keys → Create Key → copy the key (starts with `sk-or-v1-...`) — this is your `OPENROUTER_KEY`
You need at least one of Ollama or OpenRouter. Both can be active simultaneously.
Telegram bot — see setup-telegram.md: get a token from @BotFather and your user ID from @userinfobot.
After the wizard completes, check that the app started:
```bash
docker compose logs -f opendaimon-app
```

Option 2 — Manual setup (after git clone): see Quick start below.
OpenDaimon (formerly ai-bot) is a multi-module Java platform for building AI-powered chat agents and chatbots. It connects to various AI providers via Spring AI (OpenRouter, Ollama) and exposes them through Telegram, REST API, and Web UI. Use it as a library to assemble your own pipelines and integrations, or run the full app as a private, self-hosted chat assistant.
Java/Spring teams building conversational AI or internal bots; developers who want one backend with Telegram, REST, and Web UI; users who prefer to run a chat agent on their own infrastructure with local or OpenRouter models and no external subscriptions; anyone who needs trusted group access (e.g. family or team) without per-user signups elsewhere.
- Spring AI as a library — Integrate conversational AI into your apps with agent-style capabilities; plug in only the modules you need (Telegram, REST, UI, Spring AI).
- Easy to customize for business — Configure the chat agent (prompts, roles, memory, RAG) via properties and optional extensions; no need to fork the whole project.
- Resilience and prioritization — Built-in bulkhead (Resilience4j) and two user tiers: VIP and regular (plus admin), with configurable concurrency and wait limits.
- Custom dialog summarization — Long conversations are summarized automatically; context window and triggers are configurable.
- Open, modular architecture — Spring Boot auto-configurations let you enable/disable features and replace components without touching core code.
- Ready-made interfaces — Telegram bot, REST API, and Web UI out of the box; two UI languages supported; *default and custom system roles* for the assistant.
- Foundation for pipelines — Solid base for building pipelines and integrations with various systems and AI providers for chatbots and automation.
- Your data stays with you — Run the agent on your own machine or server. Use OpenRouter or Ollama (local models); all conversations are stored locally in your database. No need to send private data to third-party APIs or pay for external chat subscriptions.
- Trusted Telegram groups — Add Telegram groups (e.g. family, friends) as trusted; members get access without signing up on other services and without dealing with per-user limits on external platforms.
- Streaming — SSE for REST and Web UI; Telegram receives replies as they are generated (chunk-by-chunk).
- OpenRouter intelligence — Automatic retry with model switch on rate limits (429) or errors; capability-based model selection (chat, tool calling, web, vision); optional free-model rotation with scheduled registry refresh so VIP/regular users can use free OpenRouter models without manual switching.
- Multimodal — Images from Telegram (or REST) stored in MinIO and sent to vision-capable models; optional RAG pipeline for PDFs (chunking, embeddings, similarity search).
- Production-ready — Published to Maven Central; CI (GitHub Actions), SonarCloud, Testcontainers, Flyway migrations, Docker Compose; API keys only in environment variables (no secrets in config files).
- Observability — Micrometer, Prometheus, Grafana, optional Elasticsearch/Kibana; custom metrics for request timing, bulkhead usage, and OpenRouter stream retries.
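The retry-with-model-switch idea from the highlights above can be sketched in plain Java. This is an illustration only — the real logic lives in the `opendaimon-spring-ai` module and also filters models by capability; `ChatFn`, `askWithRotation`, and the model names are made up for the example:

```java
import java.util.List;

// Hedged sketch of "retry with model switch": on a rate-limit (429) or other
// error, fall through to the next model in the rotation list. Illustrative
// only — not the project's actual OpenRouter client.
public class ModelRotationSketch {
    interface ChatFn { String call(String model, String prompt) throws Exception; }

    static String askWithRotation(List<String> models, String prompt, ChatFn fn) throws Exception {
        Exception last = null;
        for (String model : models) {
            try {
                return fn.call(model, prompt);       // first model that answers wins
            } catch (Exception e) {                  // e.g. HTTP 429 from the provider
                last = e;                            // remember and try the next model
            }
        }
        throw last != null ? last : new IllegalStateException("no models configured");
    }

    public static void main(String[] args) throws Exception {
        // Simulated provider: first model is rate-limited, second succeeds.
        ChatFn fake = (model, prompt) -> {
            if (model.equals("model-a")) throw new RuntimeException("429 Too Many Requests");
            return "answer from " + model;
        };
        System.out.println(askWithRotation(List.of("model-a", "model-b"), "hi", fake));
        // -> answer from model-b
    }
}
```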
- Quick Setup — npx wizard
- Who it's for
- Why OpenDaimon? — For developers, For end users, Technical highlights
- Features
- User Priorities and Bulkhead
- Requirements
- Tech stack
- Modules
- Quick start — Running the app (no Java experience)
- Build and run
- Server deployment
- Useful links
- Testing
- Monitoring and debugging
- Troubleshooting
- Documentation
- Project structure
- Additional commands
- License
- Multiple interfaces: Telegram bot, REST API, Web UI
- Spring AI integration: OpenRouter, Ollama, chat memory, optional RAG; OpenRouter retry and free-model rotation
- Streaming: SSE (REST/UI) and chunk-by-chunk replies in Telegram
- Multimodal: image uploads (MinIO + vision models), optional PDF RAG (embeddings, similarity search)
- Modular architecture: enable only the modules you need; extensible via Spring auto-configurations
- Request prioritization: bulkhead (ADMIN/VIP/REGULAR) and per-user concurrency; trusted Telegram groups for shared access
- Dialog summarization: configurable long-conversation summarization and context window
- Roles and i18n: default and custom system roles; two UI languages
- Observability: Prometheus, Grafana, Elasticsearch, Kibana; custom metrics
- Distribution: Maven Central, Docker images, CI and SonarCloud
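The dialog-summarization feature listed above boils down to collapsing older messages once the history outgrows the context window. A minimal, dependency-free sketch — the window size, message shape, and summarizer function here are assumptions for illustration; in the app the summary is produced by the model and the triggers are configurable:

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual sketch: once history exceeds the window, the older prefix is
// replaced by a single summary entry, keeping the most recent messages intact.
public class SummarizationSketch {
    static List<String> compact(List<String> history, int window,
                                java.util.function.Function<List<String>, String> summarize) {
        if (history.size() <= window) return history;          // nothing to do yet
        List<String> old = history.subList(0, history.size() - window);
        List<String> result = new ArrayList<>();
        result.add("[summary] " + summarize.apply(old));       // collapsed prefix
        result.addAll(history.subList(history.size() - window, history.size()));
        return result;
    }

    public static void main(String[] args) {
        List<String> history = List.of("m1", "m2", "m3", "m4", "m5");
        // Keep the last 2 messages; summarize the older 3 (fake summarizer here).
        List<String> compacted = compact(history, 2, old -> old.size() + " older messages");
        System.out.println(compacted); // [[summary] 3 older messages, m4, m5]
    }
}
```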
The system uses a Bulkhead pattern to manage AI request limits based on user priority.
| Priority | Description | Max Concurrent Requests | Max Wait Time |
|---|---|---|---|
| ADMIN | Bot administrators | 10 (configurable) | 1s |
| VIP | Paid users or channel members | 5 (configurable) | 1s |
| REGULAR | Free users in whitelist | 1 (configurable) | 500ms |
| BLOCKED | Not in whitelist — access denied | 0 | — |
Priority is checked in this order (first match wins):
- ADMIN — in config list (`admin.ids` or `admin.channels`) OR `isAdmin = true` in database
- BLOCKED — not in whitelist, not in any configured channel
- VIP — in config list (`vip.ids`) OR `isPremium = true` (Telegram Premium) OR in `vip.channels`
- REGULAR — all other users in whitelist
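That first-match-wins order can be expressed in a few lines of plain Java. This is a simplified illustration, not the actual `TelegramUserPriorityService` code — channel-membership checks are omitted and the parameter names are invented:

```java
import java.util.Set;

// Simplified sketch of first-match-wins priority resolution.
// adminIds/vipIds/whitelist stand in for the configured lists; isAdminInDb and
// isPremium stand in for the database flags. Channel checks are omitted.
public class PrioritySketch {
    enum Priority { ADMIN, VIP, REGULAR, BLOCKED }

    static Priority resolve(long userId, boolean isAdminInDb, boolean isPremium,
                            Set<Long> adminIds, Set<Long> vipIds, Set<Long> whitelist) {
        if (adminIds.contains(userId) || isAdminInDb) return Priority.ADMIN;   // 1. ADMIN
        if (!whitelist.contains(userId))              return Priority.BLOCKED; // 2. BLOCKED
        if (vipIds.contains(userId) || isPremium)     return Priority.VIP;     // 3. VIP
        return Priority.REGULAR;                                               // 4. REGULAR
    }

    public static void main(String[] args) {
        Set<Long> admins = Set.of(1L);
        Set<Long> vips = Set.of(2L);
        Set<Long> whitelist = Set.of(2L, 3L);
        System.out.println(resolve(1L, false, false, admins, vips, whitelist)); // ADMIN
        // Premium but not whitelisted -> BLOCKED wins, because it is checked before VIP:
        System.out.println(resolve(9L, false, true, admins, vips, whitelist));  // BLOCKED
        System.out.println(resolve(2L, false, false, admins, vips, whitelist)); // VIP
        System.out.println(resolve(3L, false, false, admins, vips, whitelist)); // REGULAR
    }
}
```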
User access is configured via environment variables (not hardcoded in YAML):
```bash
# Admin users by Telegram ID
TELEGRAM_ACCESS_ADMIN_IDS=123456789,987654321

# Admin channel (members get ADMIN)
TELEGRAM_ACCESS_ADMIN_CHANNELS=-1000000000000,@admins

# VIP users by Telegram ID
TELEGRAM_ACCESS_VIP_IDS=111111111,222222222

# VIP channels (members get VIP)
TELEGRAM_ACCESS_VIP_CHANNELS=-1002000000000,@vipgroup

# Regular users by Telegram ID
TELEGRAM_ACCESS_REGULAR_IDS=333333333

# Regular channels (members get REGULAR)
TELEGRAM_ACCESS_REGULAR_CHANNELS=-1003000000000,@community
```

```bash
# Admin emails
[email protected]

# VIP emails
[email protected],[email protected]

# Regular emails
[email protected],[email protected]
```

Edit `application.yml` to change request limits:
```yaml
open-daimon:
  common:
    bulkhead:
      enabled: true
      instances:
        ADMIN:
          maxConcurrentCalls: 10
          maxWaitDuration: 1s
        VIP:
          maxConcurrentCalls: 5
          maxWaitDuration: 1s
        REGULAR:
          maxConcurrentCalls: 1
          maxWaitDuration: 500ms
```

- Add admin: set the `TELEGRAM_ACCESS_ADMIN_IDS` or `REST_ACCESS_ADMIN_EMAILS` env variable
- Add VIP: set the `TELEGRAM_ACCESS_VIP_IDS` or `REST_ACCESS_VIP_EMAILS` env variable
- Add to whitelist (REGULAR): use TelegramWhitelistService or the `telegram_whitelist` DB table
- Database fields: `isAdmin`, `isPremium` in user tables (legacy; config takes priority)
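The limits above implement the bulkhead pattern (the project uses Resilience4j for this). As a JDK-only illustration of the semantics — at most `maxConcurrentCalls` in flight, callers wait up to `maxWaitDuration`, then get rejected — not the project's actual executor:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// JDK-only sketch of the bulkhead semantics configured in application.yml.
// A Semaphore caps concurrency; tryAcquire with a timeout models maxWaitDuration.
public class BulkheadSketch {
    private final Semaphore permits;
    private final long maxWaitMillis;

    BulkheadSketch(int maxConcurrentCalls, long maxWaitMillis) {
        this.permits = new Semaphore(maxConcurrentCalls);
        this.maxWaitMillis = maxWaitMillis;
    }

    <T> T execute(java.util.concurrent.Callable<T> call) throws Exception {
        if (!permits.tryAcquire(maxWaitMillis, TimeUnit.MILLISECONDS)) {
            throw new IllegalStateException("Bulkhead full: request rejected");
        }
        try {
            return call.call();      // run the AI request while holding a permit
        } finally {
            permits.release();       // always free the slot
        }
    }

    public static void main(String[] args) throws Exception {
        // REGULAR tier: 1 concurrent call, 500 ms max wait.
        BulkheadSketch regular = new BulkheadSketch(1, 500);
        System.out.println(regular.execute(() -> "ok")); // ok
    }
}
```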
Startup initialization of direct users: On application startup, all users listed in `REST_ACCESS_*_EMAILS` and `TELEGRAM_ACCESS_*_IDS` (admin, vip, regular) are created or updated in the database with flags set by level. If a user appears in more than one level, the highest level wins (ADMIN > VIP > REGULAR). Groups/channels are not used for this; only the direct ids/emails from config are initialized. For Telegram, when the bot is available, the initializer calls the getChat API for each configured id to fetch the real username, first name, and last name; new users are then created with these values instead of a placeholder (e.g. `id_<telegramId>`). If getChat fails (e.g. the user has never chatted with the bot), the placeholder is used.
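The highest-level-wins merge can be sketched like this (the ids `admin@x`, `vip@x`, `user@x` are hypothetical; the real initializer works on the configured emails and Telegram ids):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the "highest level wins" merge applied at startup when the same
// id/email appears in several access lists. Enum order encodes precedence.
public class StartupInitSketch {
    enum Level { REGULAR, VIP, ADMIN } // ordinal() grows with privilege

    static Map<String, Level> merge(Map<Level, java.util.List<String>> config) {
        Map<String, Level> result = new HashMap<>();
        config.forEach((level, ids) -> ids.forEach(id ->
            // On conflict, keep whichever level has the higher ordinal.
            result.merge(id, level, (a, b) -> a.ordinal() >= b.ordinal() ? a : b)));
        return result;
    }

    public static void main(String[] args) {
        Map<Level, java.util.List<String>> config = Map.of(
            Level.ADMIN,   java.util.List.of("admin@x"),
            Level.VIP,     java.util.List.of("admin@x", "vip@x"),
            Level.REGULAR, java.util.List.of("user@x"));
        // "admin@x" appears as both ADMIN and VIP -> ADMIN wins.
        System.out.println(merge(config).get("admin@x")); // ADMIN
    }
}
```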
- `UserPriority.java` — enum with priority levels
- `TelegramUserPriorityService.java` — Telegram priority logic
- `RestUserPriorityService.java` — REST priority logic
- `PriorityRequestExecutor.java` — bulkhead execution
- `application.yml` — bulkhead limits
- `TelegramProperties.java`, `RestProperties.java` — access configuration
- Java 21 (LTS)
- Maven 3.6+
- Docker & Docker Compose (for PostgreSQL, Prometheus, Grafana; optional Elasticsearch, Kibana)
- Java 21 (LTS), Spring Boot 3.3.3
- PostgreSQL 17.0 with Flyway migrations
- Prometheus + Grafana for metrics, Elasticsearch + Kibana for logging
You can add only the modules you need. All modules use groupId `io.github.ngirchev`; set `opendaimon.version` in your POM or use a concrete version.
```mermaid
graph TD
    common[opendaimon-common]
    telegram[opendaimon-telegram] --> common
    rest[opendaimon-rest] --> common
    ui[opendaimon-ui] --> rest
    springai[opendaimon-spring-ai] --> common
    mock[opendaimon-gateway-mock] --> common
```
| Module | Description | Depends on |
|---|---|---|
| opendaimon-common | Core: entities, services, request prioritization | — |
| opendaimon-telegram | Telegram Bot interface | opendaimon-common |
| opendaimon-rest | REST API (controllers, Swagger) | opendaimon-common |
| opendaimon-ui | Web UI (Thymeleaf) | opendaimon-rest |
| opendaimon-spring-ai | Spring AI (OpenRouter, Ollama, chat memory, RAG) | opendaimon-common |
| opendaimon-gateway-mock | Mock AI provider for tests | opendaimon-common |
Minimal setup for a Telegram bot with AI:
```xml
<dependency>
    <groupId>io.github.ngirchev</groupId>
    <artifactId>opendaimon-telegram</artifactId>
    <version>${opendaimon.version}</version>
</dependency>
<dependency>
    <groupId>io.github.ngirchev</groupId>
    <artifactId>opendaimon-spring-ai</artifactId>
    <version>${opendaimon.version}</version>
</dependency>
```

No Telegram; REST and browser UI only:

```xml
<dependency>
    <groupId>io.github.ngirchev</groupId>
    <artifactId>opendaimon-ui</artifactId>
    <version>${opendaimon.version}</version>
</dependency>
<dependency>
    <groupId>io.github.ngirchev</groupId>
    <artifactId>opendaimon-spring-ai</artifactId>
    <version>${opendaimon.version}</version>
</dependency>
```

Use the assembled application module (includes Telegram, REST, UI, Spring AI, gateway-mock):

```xml
<dependency>
    <groupId>io.github.ngirchev</groupId>
    <artifactId>opendaimon-app</artifactId>
    <version>${opendaimon.version}</version>
</dependency>
```

Pull and run the latest published image — no build needed:
```bash
# Pull the image
docker pull ghcr.io/ngirchev/open-daimon:latest

# Run with your environment variables
docker run -p 8080:8080 --env-file .env ghcr.io/ngirchev/open-daimon:latest
```

Specific version: `docker pull ghcr.io/ngirchev/open-daimon:1.2.3`

Note: The app requires PostgreSQL, MinIO, and other services. Use `docker-compose.yml` for a full local setup (see below).
If you are new to Java, follow these steps. You will need a terminal (command line): on Windows use PowerShell or Command Prompt; on macOS/Linux use Terminal.
1. Install Java 21
The app runs on Java (a runtime). You need Java 21 specifically.
- Windows / macOS / Linux: download and install from Eclipse Temurin (Adoptium) — choose your OS and install the JDK 21.
- After installation, open a new terminal and run `java -version`. You should see something like `openjdk version "21.x.x"`.
2. Install Docker
The app uses PostgreSQL (a database). The easiest way is to run it in Docker.
- Install Docker Desktop (includes Docker Compose). Start Docker so it is running in the background.
3. Prepare configuration
- In the project folder, copy the example config: copy `.env.example` to a new file named `.env`.
- Open `.env` in a text editor and set at least: `TELEGRAM_USERNAME`, `TELEGRAM_TOKEN`, `OPENROUTER_KEY`, `POSTGRES_PASSWORD`. Do not commit `.env` (it contains secrets).
4. Start the database
In the terminal, from the project folder:

```bash
docker-compose up -d postgres prometheus grafana
```

5. Build and run
- If you have the source code and want to build yourself: install Maven (a build tool for Java), then in the project folder run:

```bash
mvn clean install
java -jar opendaimon-app/target/opendaimon-app-1.0.0-SNAPSHOT.jar
```

- If someone gave you a ready JAR file: put the JAR in a folder, put your `.env` in the same folder (or set the same variables in the environment), then run:

```bash
java -jar opendaimon-app-1.0.0-SNAPSHOT.jar
```
The app will start. You can open the Web UI or use the Telegram bot according to your configuration. For more options (e.g. running everything in Docker), see the sections below.
Create a .env file in the project root (do not commit it; add .env to .gitignore).
Use .env.example as a template:
```bash
cp .env.example .env
# Edit .env and set TELEGRAM_USERNAME, TELEGRAM_TOKEN, OPENROUTER_KEY, POSTGRES_PASSWORD, etc.
```

For a local run without Docker Compose you can also export the variables in your shell.
1. Start infrastructure:

   ```bash
   docker-compose up -d postgres prometheus grafana
   ```

2. Build the project:

   ```bash
   mvn clean install
   ```

3. Run the application:

   ```bash
   mvn spring-boot:run -pl opendaimon-app
   ```
1. Create `.env` from `.env.example` and set required values (see Environment variables above).

2. Create `application-local.yml` for app overrides (optional but recommended):

   ```bash
   cp application-local.yml.example application-local.yml
   ```

3. Build the project:

   ```bash
   mvn clean package -DskipTests
   ```

4. Start all services:

   ```bash
   docker-compose up -d
   ```

   Or with image rebuild:

   ```bash
   docker-compose up -d --build
   ```

5. Check status:

   ```bash
   docker-compose ps
   docker-compose logs -f opendaimon-app
   ```
- Java 21: `java -version`
- Maven 3.6+: `mvn -version`
- Docker (for DB and monitoring): `docker --version`
```bash
# PostgreSQL, Prometheus, Grafana, Elasticsearch, Kibana
docker-compose up -d
docker-compose ps
```

```bash
mvn clean install
mvn clean install -DskipTests                 # without tests
mvn clean install -pl opendaimon-telegram     # single module
mvn clean install -pl opendaimon-app -am      # module and dependencies
```

Option 1: Maven (development)

```bash
mvn spring-boot:run -pl opendaimon-app
```

Option 2: Run the built JAR

After `mvn clean install` (or `mvn clean package -pl opendaimon-app -am`), run the executable JAR. Set environment variables or use a `.env` file in the current directory (see Environment variables).

```bash
java -jar opendaimon-app/target/opendaimon-app-1.0.0-SNAPSHOT.jar
```

The JAR name follows the project version from the parent POM (e.g. `1.0.0-SNAPSHOT`). Use Java 21: `java -version`.

```bash
mvn flyway:migrate
mvn flyway:info
mvn flyway:clean   # use with caution
```

Detailed production deployment guide: DEPLOYMENT.md
After starting the application:
| Service | URL |
|---|---|
| Swagger UI | http://localhost:8080/swagger-ui/index.html |
| Actuator Health | http://localhost:8080/actuator/health |
| Prometheus metrics | http://localhost:8080/actuator/prometheus |
| Prometheus UI | http://localhost:9090 |
| Grafana | http://localhost:3000 (admin/admin123456) |
| Kibana | http://localhost:5601 |
```bash
mvn test
mvn test -pl opendaimon-common
mvn test -pl opendaimon-telegram
```

```bash
# Example from README
mvn test -Dtest=repository.telegram.io.github.ngirchev.opendaimon.common.TelegramUserRepositoryTest -pl opendaimon-app

# Specific method
mvn test "-Dtest=repository.telegram.io.github.ngirchev.opendaimon.common.TelegramUserRepositoryTest#whenSaveUser_thenUserIsSaved" -pl opendaimon-app

# SpringAIGatewayIT (streaming)
mvn test -pl opendaimon-spring-ai -Dtest=SpringAIGatewayIT
```

- `mvnw.cmd` requires `JAVA_HOME` (JDK 21). Common path: `C:\Users\<user>\.jdks\corretto-21.0.10` (IDEA) or File → Project Structure → SDKs.
- PowerShell from the project root (replace `<user>` and the path with your JDK and project location):

```powershell
$env:JAVA_HOME = "C:\Users\<user>\.jdks\corretto-21.0.10"; cd c:\path\to\open-daimon; .\mvnw.cmd test -pl opendaimon-spring-ai -Dtest=SpringAIGatewayIT
```

- If a single-module test fails with "Could not find artifact opendaimon-common", run `.\mvnw.cmd install -DskipTests` first, then the `test` command.
- From IntelliJ IDEA: right-click `SpringAIGatewayIT` → Run 'SpringAIGatewayIT'.
Uses Testcontainers for PostgreSQL:
- Docker container with PostgreSQL is started automatically
- Flyway migrations are applied
- Container is removed after tests
- TelegramMockGatewayIntegrationTest — main test for the Telegram part
- SpringAIGatewayOpenRouterIntegrationTest — main test for the Spring AI part
- SpringAIGatewayIT — streaming test (no Ollama, mocked Flux with delays)
- Swagger UI: http://localhost:8080/swagger-ui/index.html
- Actuator Metrics: http://localhost:8080/actuator/metrics/telegram.message.processing.time
- Prometheus: http://localhost:9090/query
- Grafana: http://localhost:3000/ (admin/admin123456)
- Kibana: http://localhost:5601/
- Elasticsearch: http://localhost:9200/
Logs are sent to Elasticsearch via Logstash (TCP on port 5044). Index pattern: opendaimon-logs-*.
The application also writes logs to a local file: logs/opendaimon.log (overwritten on every app start).
You can override the file path with environment variable LOG_FILE_PATH.
Quick check for file logs:
```bash
tail -f logs/opendaimon.log
```

To view logs in Kibana:
- Open Kibana (http://localhost:5601)
- Stack Management → Data Views → Create data view
- Configure:
  - Name: `opendaimon-logs`
  - Index pattern: `opendaimon-logs-*`
  - Timestamp field: `@timestamp`
- Save, then go to Observability → Logs
Query logs via Dev Tools:

```
GET opendaimon-logs-*/_search?size=10
```

Check log count:

```bash
curl "http://localhost:9200/opendaimon-logs-*/_count"
```

Metrics are sent to Prometheus and visualized in Grafana. See the Monitoring and debugging section above.
```bash
# Check status
mvn flyway:info

# Force apply
mvn flyway:migrate

# Baseline if needed
mvn flyway:baseline
```

- Ensure Docker is running
- Testcontainers starts PostgreSQL automatically
- Check logs: `docker logs open-daimon-postgres`
On Windows, Docker Desktop may return 400 over npipe and Testcontainers cannot connect. Enable TCP access to the daemon:
- Docker Desktop → Settings → General → enable "Expose daemon on tcp://localhost:2375 without TLS" → Apply & Restart.
- Before running tests, set (PowerShell): `$env:DOCKER_HOST = "tcp://localhost:2375"`
- Run tests: `.\mvnw.cmd verify -q`

Or in one line:

```powershell
$env:DOCKER_HOST = "tcp://localhost:2375"; .\mvnw.cmd verify -q
```
```bash
# Rebuild with dependencies
mvn clean install -am
```

Refresh IDE (IntelliJ IDEA): File → Invalidate Caches / Restart

- Check Prometheus: http://localhost:9090/targets
- Ensure the app exports metrics: http://localhost:8080/actuator/prometheus
- Restart Grafana: `docker-compose restart grafana`
- Verify Elasticsearch has logs: `curl "http://localhost:9200/opendaimon-logs-*/_count"`
- Create a Data View in Kibana (see Kibana Setup for Logs)
- Check Logstash is running: `docker compose logs logstash`
- docs/setup-telegram.md — Create a Telegram bot and get your user ID
- docs/setup-serper.md — Enable web search (optional)
- AGENTS.md — Detailed documentation for AI agents (architecture, module structure, code style)
- CONTRIBUTING.md — How to contribute (setup, code style, testing, PR requirements)
- SECURITY.md — How to report security vulnerabilities
- DEPLOYMENT.md — Server deployment guide
- MODULAR_MIGRATIONS.md — Flyway modular migrations
```
open-daimon/
├── opendaimon-common/         # Core module with shared logic
├── opendaimon-telegram/       # Telegram Bot interface
├── opendaimon-rest/           # REST API interface
├── opendaimon-ui/             # Web UI interface
├── opendaimon-spring-ai/      # Spring AI integration
├── opendaimon-gateway-mock/   # Mock provider for tests
└── opendaimon-app/            # Main application module
```
```bash
docker run -d \
  --name open-webui \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  ghcr.io/open-webui/open-webui:main
```

```bash
ssh -N -L 23750:/var/run/docker.sock [email protected]
docker-compose -H tcp://localhost:23750 down -v
docker-compose -H tcp://localhost:23750 up -d
```

See LICENSE file for details.

