All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
### Added

- Structured plan-then-execute reasoning for `jam run`: the agent now generates a typed `ExecutionPlan` (ordered steps with success criteria) before acting, replacing the free-text ReAct loop
- Read-before-write gate: write tools are automatically blocked until the target file has been read, preventing silent overwrites of unread files
- Post-write shrinkage guard: if a `write_file` call produces a file suspiciously smaller than the original, the file is auto-restored from git and the model is redirected
- `--yes` flag on `jam run` for non-interactive auto-approval of all write operations
- `StepVerifier` to validate each plan step before execution
- Working memory + tool-result caching for the agent loop
- Critic evaluation and correction pass after the tool loop completes
- Past-session search and symbol index builder for richer context injection
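The read-before-write gate above can be sketched roughly as follows. This is an illustrative TypeScript sketch, not Jam's actual implementation; the `ToolGate` class and its method names are hypothetical:

```typescript
// Minimal sketch of a read-before-write gate: write tools are refused
// until the same path has been read in the current agent session.
// (Hypothetical illustration; not Jam's actual code.)
class ToolGate {
  private readPaths = new Set<string>();

  recordRead(path: string): void {
    this.readPaths.add(path);
  }

  // Returns an error string (fed back to the model) instead of performing
  // the write when the target file was never read, preventing silent
  // overwrites of unread files.
  checkWrite(path: string): string | null {
    if (!this.readPaths.has(path)) {
      return `Refusing to write ${path}: read the file first`;
    }
    return null; // write may proceed
  }
}

const gate = new ToolGate();
console.log(gate.checkWrite("src/run.ts")); // blocked: file not read yet
gate.recordRead("src/run.ts");
console.log(gate.checkWrite("src/run.ts")); // null: write allowed
```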
### Fixed

- `--provider` CLI flag no longer inherits `baseUrl` from the active profile when switching providers (e.g. `--provider openai` no longer accidentally hits `localhost:11434`)
- Removed unnecessary type assertions in `run.ts`
- Removed unnecessary escape characters in `agent.ts`
### Added

- Embedded provider (`--provider embedded`): run SmolLM2-1.7B fully in-process via `node-llama-cpp`; no external server needed
- Default embedded model upgraded to `smollm2-1.7b-instruct-q4_k_m` (1.7B, q4_k_m) with an 8192-token context window
- One-time model download from GitHub releases with progress reporting
- `jam commit --provider embedded`: commit message generation works offline, with a diff-stat fallback for large diffs
- `supportsTools`/`contextWindow` fields on `ProviderInfo` for capability-aware routing
- Lean system prompt path for small models that cannot follow tool-call JSON schemas
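Capability-aware routing on `ProviderInfo` might look like the sketch below. The `supportsTools`/`contextWindow` fields come from the entry above; the `name` field and the `promptPathFor` helper are assumptions for illustration:

```typescript
// Sketch of capability-aware routing; not Jam's actual implementation.
interface ProviderInfo {
  name: string;             // assumed field for illustration
  supportsTools: boolean;   // can the model follow tool-call JSON schemas?
  contextWindow: number;    // max tokens the model accepts
}

// Pick the lean prompt path for small models that cannot do tool calls,
// and the full tool-calling agent loop otherwise.
function promptPathFor(info: ProviderInfo): "tool-loop" | "lean" {
  return info.supportsTools ? "tool-loop" : "lean";
}

const embedded: ProviderInfo = {
  name: "embedded",
  supportsTools: false,
  contextWindow: 8192, // smollm2-1.7b-instruct-q4_k_m
};
console.log(promptPathFor(embedded)); // "lean"
```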
### Fixed

- Lint errors in embedded provider download stream handler (`Unsafe array destructuring` / `Unsafe member access`)
### Added

- Initial release of Jam CLI
- `jam ask`: one-shot AI questions with streaming output
- `jam chat`: interactive multi-turn chat REPL (Ink/React TUI)
- `jam explain`: AI-powered code explanation
- `jam search`: codebase search with ripgrep (JS fallback)
- `jam diff`: git diff review with AI analysis
- `jam patch`: AI-generated unified diffs with validation and apply
- `jam run`: agentic task workflow with tool-calling loop
- `jam auth`: provider authentication management
- `jam config`: configuration management (init, show)
- `jam models list`: list available models from the provider
- `jam history`: chat session history (list, show)
- `jam completion install`: shell completion for bash/zsh
- `jam doctor`: system diagnostics and health checks
- Ollama provider with NDJSON streaming
- Pluggable provider architecture (adapter pattern)
- Layered configuration (global → repo → CLI flags)
- Named profiles for multiple provider/model configs
- Secure secrets via OS keychain (keytar) with env var fallback
- Model-callable tools: `read_file`, `list_dir`, `search_text`, `git_status`, `git_diff`, `write_file`, `apply_patch`
- Tool permission enforcement (`ask_every_time`, allowlist, never)
- Chat session persistence (JSON files)
- Log redaction for sensitive patterns
- Markdown rendering in terminal (marked + marked-terminal)
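The layered configuration above (global → repo → CLI flags) amounts to a last-layer-wins merge, sketched here with assumed field names; `mergeConfig` and the config shape are hypothetical:

```typescript
// Sketch of layered config resolution; field names are assumptions.
interface JamConfig {
  provider?: string;
  model?: string;
  baseUrl?: string;
}

// Later layers win: CLI flags override repo config, which overrides global.
function mergeConfig(...layers: JamConfig[]): JamConfig {
  return layers.reduce((acc, layer) => ({ ...acc, ...layer }), {});
}

const globalCfg: JamConfig = { provider: "ollama", baseUrl: "http://localhost:11434" };
const repoCfg: JamConfig = { model: "qwen2.5-coder" }; // hypothetical model name
const flagCfg: JamConfig = { provider: "openai" };     // from --provider
console.log(mergeConfig(globalCfg, repoCfg, flagCfg));
// provider comes from the flag, model from the repo layer, baseUrl from global
```

Note that a naive spread merge like this is exactly how a stale `baseUrl` can leak across layers when `--provider` changes, which is the class of bug addressed by the `--provider`/`baseUrl` fix noted earlier in this file.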
### Added

- Initial project setup
- Core CLI framework with Commander.js
- Ollama integration
- Basic tool system
- Configuration with Zod schema validation
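Ollama's NDJSON streaming sends one JSON object per line, with lines possibly split across network chunks. A minimal reassembly sketch follows; the field names (`response`, `done`) match Ollama's `/api/generate` stream format, while `parseNdjsonChunks` is a hypothetical helper, not Jam's actual code:

```typescript
// Minimal NDJSON accumulator: each complete line is a JSON object carrying
// a token in `response`, with `done: true` on the final line.
function parseNdjsonChunks(chunks: string[]): string {
  let buffer = "";
  let text = "";
  for (const chunk of chunks) {
    buffer += chunk;
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep any partial trailing line for later
    for (const line of lines) {
      if (line.trim() === "") continue;
      const obj = JSON.parse(line) as { response?: string; done?: boolean };
      text += obj.response ?? "";
    }
  }
  return text;
}

// A token split across two network chunks is still reassembled correctly.
const stream = ['{"response":"Hel"}\n{"resp', 'onse":"lo"}\n{"done":true}\n'];
console.log(parseNdjsonChunks(stream)); // "Hello"
```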