
feat(ai): local LLM support via Ollama — zero cloud dependency for self-hosters#57

Open
nymulinfoinlet wants to merge 1 commit into main from feat/ollama-support

Conversation

@nymulinfoinlet
Contributor

Summary

  • Add AI_PROVIDER env var to switch between openrouter (default), ollama, and openai backends
  • When AI_PROVIDER=ollama, the OpenAI SDK client points at Ollama's local OpenAI-compatible API (http://localhost:11434/v1) with a dummy API key, so no data leaves the machine (see the sketch after this list)
  • Embedding service also respects the provider switch, using nomic-embed-text (768-dim) for Ollama
  • Existing OpenRouter behavior is completely unchanged when AI_PROVIDER is unset or set to openrouter
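
For context, a minimal sketch of what the provider switch could look like using the OpenAI Node SDK. The createAiClient helper and exact structure are illustrative, not copied from ai.service.ts:

import OpenAI from "openai";

type AiProvider = "openrouter" | "ollama" | "openai";

// Hypothetical helper: builds the right client for the configured provider.
function createAiClient(): OpenAI {
  const provider = (process.env.AI_PROVIDER ?? "openrouter") as AiProvider;
  switch (provider) {
    case "ollama":
      // Ollama exposes an OpenAI-compatible API; the SDK requires an apiKey,
      // but Ollama ignores it, so any placeholder value works.
      return new OpenAI({
        baseURL: process.env.OLLAMA_BASE_URL ?? "http://localhost:11434/v1",
        apiKey: "ollama",
      });
    case "openai":
      return new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
    default:
      // OpenRouter is the default and is also OpenAI-compatible.
      return new OpenAI({
        baseURL: "https://openrouter.ai/api/v1",
        apiKey: process.env.OPENROUTER_API_KEY,
      });
  }
}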

What changed

  • backend/.env.example: Added AI_PROVIDER, OLLAMA_BASE_URL, OLLAMA_MODEL, OLLAMA_EMBEDDING_MODEL
  • backend/src/modules/ai/ai.service.ts: Multi-provider client initialization and model routing
  • backend/src/modules/ai/embedding.service.ts: Provider-aware base URL, API key, model, and vector dimension (sketched below)
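
The embedding side follows the same pattern. A rough sketch, assuming the same SDK; the config shape and embed helper are invented for illustration, and the 768-dim figure comes from the summary above:

import OpenAI from "openai";

// Illustrative provider config for embeddings; the real shape in
// embedding.service.ts may differ.
const ollamaEmbeddings = {
  baseURL: process.env.OLLAMA_BASE_URL ?? "http://localhost:11434/v1",
  apiKey: "ollama", // placeholder; Ollama does not validate keys
  model: process.env.OLLAMA_EMBEDDING_MODEL ?? "nomic-embed-text",
  dimension: 768, // nomic-embed-text produces 768-dim vectors
};

async function embed(text: string): Promise<number[]> {
  const client = new OpenAI({
    baseURL: ollamaEmbeddings.baseURL,
    apiKey: ollamaEmbeddings.apiKey,
  });
  const res = await client.embeddings.create({
    model: ollamaEmbeddings.model,
    input: text,
  });
  return res.data[0].embedding;
}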

New env vars

AI_PROVIDER=ollama              # openrouter | ollama | openai
OLLAMA_BASE_URL=http://localhost:11434/v1
OLLAMA_MODEL=llama3.2
OLLAMA_EMBEDDING_MODEL=nomic-embed-text

How to test

  1. Install Ollama: https://ollama.com
  2. Pull a model: ollama pull llama3.2 && ollama pull nomic-embed-text
  3. Set AI_PROVIDER=ollama in backend/.env
  4. Start the backend; all LLM calls now go to localhost (a standalone smoke test is sketched below)
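
As an extra check, independent of the backend, a hypothetical standalone smoke test (not part of this PR) against the local endpoint:

import OpenAI from "openai";

// Talks directly to a local Ollama instance via its OpenAI-compatible API.
const client = new OpenAI({
  baseURL: "http://localhost:11434/v1",
  apiKey: "ollama", // placeholder; Ollama ignores the key
});

async function main() {
  const res = await client.chat.completions.create({
    model: "llama3.2",
    messages: [{ role: "user", content: "Reply with one short sentence." }],
  });
  console.log(res.choices[0].message.content);
}

main().catch(console.error);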

Honesty contract

  • I have verified that npx tsc --noEmit passes with 0 errors
  • No frontend changes were made
  • Default behavior (OpenRouter) is preserved when AI_PROVIDER is unset

Closes #53

🤖 Generated with Claude Code
