
feat: add Ollama local LLM provider support #7

Merged

MrSibe merged 1 commit into develop from feat/add-ollama-provider on Dec 17, 2025
Conversation

MrSibe (Owner) commented on Dec 17, 2025

  • Add Ollama to builtin providers with OpenAI-compatible API support
  • Support both chat and embedding capabilities
  • Default base URL: http://localhost:11434/v1
  • Add Ollama model list endpoint handler with custom baseUrl support
  • Fix model name truncation issue in ProviderManager
    • Preserve full model IDs including version numbers (e.g., qwen3:0.6b)
    • Affects both chat and embedding model parsing
  • Add Ollama to frontend provider settings UI
    • Display in provider list with configuration panel
    • Support cached model loading and fetching
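The truncation fix described above can be sketched as follows. The PR does not show the actual ProviderManager parsing code, so the format and function name here are assumptions: suppose model references look like `<provider>:<modelId>`, where the model ID itself may contain colons (Ollama version tags such as `qwen3:0.6b`). A naive `split(":")` would then truncate the tag; splitting on the first colon only preserves it.

```typescript
// Hypothetical sketch of the truncation fix (names and reference format
// are assumptions, not taken from the actual ProviderManager source).
// Model references are assumed to look like "<provider>:<modelId>",
// where the modelId may itself contain colons ("qwen3:0.6b").
function splitModelRef(ref: string): { provider: string; modelId: string } {
  const sep = ref.indexOf(":"); // split on the FIRST colon only
  if (sep === -1) {
    throw new Error(`invalid model reference: ${ref}`);
  }
  return {
    provider: ref.slice(0, sep),
    // keep everything after the first colon, preserving version tags
    modelId: ref.slice(sep + 1),
  };
}
```

With this, `splitModelRef("ollama:qwen3:0.6b")` yields provider `"ollama"` and the full model ID `"qwen3:0.6b"`, which matters for both chat and embedding model parsing since Ollama encodes the variant and quantization in the tag.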

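The model-list endpoint with custom baseUrl support could work along these lines. This is a minimal sketch with assumed names (the actual handler is not shown in the PR): Ollama's OpenAI-compatible API serves `GET /v1/models`, so the handler mainly needs to normalize the configured baseUrl before appending the path, falling back to the default local endpoint.

```typescript
// Hedged sketch of the baseUrl handling for the model-list endpoint.
// The helper name and normalization rules are assumptions; only the
// default URL and the /models path come from the PR description and
// Ollama's OpenAI-compatible API.
const DEFAULT_OLLAMA_BASE_URL = "http://localhost:11434/v1";

function modelListUrl(baseUrl?: string): string {
  // Fall back to the default local Ollama endpoint and strip any
  // trailing slashes so the appended path is never doubled.
  const base = (baseUrl ?? DEFAULT_OLLAMA_BASE_URL).replace(/\/+$/, "");
  return `${base}/models`;
}
```

For example, `modelListUrl()` gives `http://localhost:11434/v1/models`, while a custom `http://192.168.1.5:11434/v1/` is normalized to `http://192.168.1.5:11434/v1/models`.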
MrSibe self-assigned this on Dec 17, 2025
MrSibe added the enhancement (New feature or request) label on Dec 17, 2025
MrSibe merged commit fdf9062 into develop on Dec 17, 2025
MrSibe deleted the feat/add-ollama-provider branch on December 17, 2025 at 11:06
MciG-ggg pushed a commit to MciG-ggg/KnowNote that referenced this pull request Dec 17, 2025
