Commit 04e382f

Version v7 - Preview Version

1 parent 90f1639

15 files changed: 4137 additions & 137 deletions

File tree

.gitignore

Lines changed: 1 addition & 0 deletions

@@ -95,3 +95,4 @@ real_data/
 # Claude Code session data
 .claude/
+eeee.txt
CHANGELOG.md

Lines changed: 34 additions & 0 deletions

@@ -7,6 +7,40 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

---
## [0.7.0] — 2026-03-21

### Added — Self-Aligned Context Engine (Phase 1)

#### `feather_db.providers` — LLM Provider Abstraction

- **`LLMProvider`** (ABC) — minimal `complete(messages, max_tokens, temperature) -> str` interface shared by all providers
- **`ClaudeProvider`** — Anthropic Claude via the `anthropic` SDK. Default: `claude-haiku-4-5-20251001`. Separates `system` from conversation messages per the Anthropic API spec.
- **`OpenAIProvider`** — OpenAI Chat Completions API and any compatible endpoint (Groq, Mistral, Together AI, vLLM, LM Studio). `json_mode=True` sets `response_format=json_object`.
- **`OllamaProvider`** — Ollama local server. Subclass of `OpenAIProvider`; default `base_url="http://localhost:11434/v1"`. No API key required.
- **`GeminiProvider`** — Google Gemini via the `google-genai` SDK. Uses `response_mime_type="application/json"` for native JSON mode.
- All providers default to `temperature=0.0` for deterministic JSON output.
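The provider interface above is small enough to sketch. The `complete(messages, max_tokens, temperature) -> str` signature comes from this changelog; the `EchoProvider` subclass below is a hypothetical stand-in, included only to show how a concrete provider would plug in.

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Minimal provider interface; concrete classes wrap a specific SDK."""

    @abstractmethod
    def complete(self, messages: list[dict], max_tokens: int = 1024,
                 temperature: float = 0.0) -> str:
        """Return the model's text completion for a chat-style message list."""
        ...


class EchoProvider(LLMProvider):
    """Toy offline provider (hypothetical): echoes the last user message."""

    def complete(self, messages, max_tokens=1024, temperature=0.0):
        user = [m for m in messages if m.get("role") == "user"]
        return user[-1]["content"] if user else ""
```

Keeping the ABC this thin is what lets `OllamaProvider` reuse `OpenAIProvider` wholesale: only the transport differs, not the interface.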

#### `feather_db.engine` — ContextEngine

- **`ContextEngine`** — self-aligned ingestion pipeline that wraps `DB` with LLM-powered classification.
- **`ingest(text, hint) -> int`** — 10-step pipeline: embed → sample context → LLM classify → apply hint → store → link → episode → watch triggers → contradiction check → auto-save.
- **`ingest_batch(texts, hints) -> list[int]`** — batch ingestion helper.
- LLM JSON schema: `{entity_type, importance, confidence, ttl, namespace, episode_id, suggested_links}`
- **`_heuristic_classify(text)`** — built-in keyword-based fallback, activated when `provider=None` or the LLM call fails. Fully offline, zero latency.
- 5-stage JSON extraction — direct parse → strip code fences → balanced-brace scan → trailing-comma repair → heuristic fallback. Robust on small/local models.
- Node ID: SHA-256 of `text[:200] + timestamp + PID` mod 2^50.
- Integrates with `WatchManager`, `EpisodeManager`, `ContradictionDetector` (all optional).
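The 5-stage extraction above can be sketched roughly as follows. The stage order comes from this changelog; the function name `extract_json` and the exact regexes are assumptions, and the brace scan deliberately ignores braces inside JSON strings to stay short.

```python
import json
import re


def extract_json(raw: str):
    """Best-effort JSON extraction from an LLM reply, in decreasing order of trust."""
    # Stage 1: direct parse.
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    # Stage 2: strip markdown code fences (``` or ```json) and retry.
    stripped = re.sub(r"^```(?:json)?\s*|\s*```$", "", raw.strip())
    try:
        return json.loads(stripped)
    except json.JSONDecodeError:
        pass
    # Stage 3: balanced-brace scan -- take the first complete {...} span.
    start = stripped.find("{")
    if start != -1:
        depth = 0
        for i, ch in enumerate(stripped[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    candidate = stripped[start:i + 1]
                    try:
                        return json.loads(candidate)
                    except json.JSONDecodeError:
                        # Stage 4: remove trailing commas before } or ], retry.
                        repaired = re.sub(r",\s*([}\]])", r"\1", candidate)
                        try:
                            return json.loads(repaired)
                        except json.JSONDecodeError:
                            pass
                    break
    # Stage 5: signal the caller to fall back to _heuristic_classify.
    return None
```

Each stage only fires when the previous one fails, so well-behaved models pay nothing beyond a single `json.loads`, while small local models that wrap output in prose or fences still parse.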

#### New exports in `feather_db`

- `LLMProvider`, `ClaudeProvider`, `OpenAIProvider`, `OllamaProvider`, `GeminiProvider`, `ContextEngine`
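The node-ID recipe listed under `ContextEngine` fits in a few lines. Only the SHA-256 / `text[:200]` / timestamp / PID / mod 2^50 ingredients come from this changelog; the serialization below (plain string concatenation, `time.time()` as the timestamp) is an assumption.

```python
import hashlib
import os
import time


def make_node_id(text: str) -> int:
    """Derive a 50-bit node ID from text prefix, wall clock, and process ID.

    Sketch only: the concatenation format and timestamp source are assumed,
    not taken from feather_db's actual implementation.
    """
    payload = f"{text[:200]}{time.time()}{os.getpid()}".encode()
    digest = hashlib.sha256(payload).hexdigest()
    return int(digest, 16) % 2**50
```

Mixing in the timestamp and PID means two processes ingesting identical text still get distinct IDs, while the mod 2^50 keeps IDs inside a compact integer range.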

#### New example

- `examples/context_engine_demo.py` — auto-detects provider from env vars (Claude → OpenAI → Gemini → Ollama → heuristic), ingests 6 records, runs semantic search and context chain.
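The demo's provider auto-detection might look roughly like this. The fallback order is from this changelog; the environment-variable names and the `detect_provider` helper are assumptions, not the demo's actual code.

```python
import os

# Assumed env-var names; the shipped demo may check different ones.
_PROVIDER_ENV = [
    ("claude", "ANTHROPIC_API_KEY"),
    ("openai", "OPENAI_API_KEY"),
    ("gemini", "GEMINI_API_KEY"),
]


def detect_provider(env=None) -> str:
    """Pick the first provider whose key is set, mirroring the demo's
    Claude -> OpenAI -> Gemini -> Ollama -> heuristic fallback order."""
    env = os.environ if env is None else env
    for name, var in _PROVIDER_ENV:
        if env.get(var):
            return name
    if env.get("OLLAMA_HOST"):  # local server, no API key required
        return "ollama"
    return "heuristic"  # ContextEngine with provider=None
```

Ending the chain at `"heuristic"` means the demo always runs, even on a machine with no keys and no local model server.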

### Changed

- `feather_db.__version__` → `"0.7.0"`
- `README.md` — updated to the v0.7.0 architecture; added Self-Aligned Context Engine, LLM agent connectors, MCP server, and LangChain/LlamaIndex sections.

---

## [0.6.1] — 2026-03-14

### Fixed