The fastest and most efficient code intelligence engine for AI coding agents. Fully indexes an average repo in seconds and the Linux kernel in 3 minutes. Tree-sitter + LSP hybrid parsing (Go, C, C++ — more coming). Single static binary — download, install, done.
Built-in 3D graph visualization (UI variant) — explore your knowledge graph at localhost:9749
Static binary for macOS, Linux, and Windows — no Docker, no runtime dependencies. One install command configures all 8 agents: Claude Code, Codex CLI, Gemini CLI, Zed, OpenCode, Antigravity, Aider, KiloCode.
AI coding agents explore codebases by reading files one at a time. Every structural question triggers a cascade of grep → read file → grep again → read more files.
Five structural questions about a real codebase cost ~412,000 tokens via file-by-file search, versus ~3,400 tokens via knowledge graph queries.
The 121x reduction isn't about fitting in the context window. It's about cost ($3-15/M tokens adds up), latency (<1ms vs. seconds), and accuracy (less noise means better answers and no "lost in the middle" problem).
| Question type | Graph (tokens) | File-by-file (tokens) | Savings |
|---|---|---|---|
| Find function by pattern | ~200 | ~45,000 | 225x |
| Trace call chain (depth 3) | ~800 | ~120,000 | 150x |
| Dead code detection | ~500 | ~85,000 | 170x |
| List all routes | ~400 | ~62,000 | 155x |
| Architecture overview | ~1,500 | ~100,000 | 67x |
| Total | ~3,400 | ~412,000 | 121x |
Tested across 31 languages with agent-vs-agent methodology (372 questions). Full benchmark report →
Python, Go, JS, TS, TSX, Rust, Java, C++, C#, C, PHP, Ruby, Kotlin, Scala, Zig, Elixir, Haskell, OCaml, Swift, Dart, MATLAB, Lean 4, Wolfram, and 41 more via vendored tree-sitter grammars.
RAM-first pipeline: LZ4 compression, in-memory SQLite, fused Aho-Corasick pattern matching. Linux kernel (28M LOC) indexed in 3 minutes.
Built-in 3D graph UI (optional). Explore nodes, edges, and clusters visually at localhost:9749. Ships as a separate binary variant.
Trace callers and callees across files and packages. Import-aware, type-inferred resolution. BFS traversal up to depth 5.
One command configures Claude Code, Codex CLI, Gemini CLI, Zed, OpenCode, Antigravity, Aider, and KiloCode with MCP configs, instructions, and hooks.
Find functions with zero callers, with smart filtering that excludes entry points (route handlers, main(), framework decorators).
Discovers REST routes and matches them to HTTP call sites across services with confidence scoring.
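One plausible shape for such matching — an illustrative heuristic, not the tool's actual scorer — is to compare a route template segment by segment against a concrete call-site URL, discounting confidence for each wildcard segment:

```go
package main

import (
	"fmt"
	"strings"
)

// matchRoute scores how well a concrete call-site URL matches a route
// template like "/users/:id". Literal segments must match exactly; each
// ":param" segment matches anything but lowers confidence. The 0.9
// per-wildcard discount is an arbitrary illustrative choice.
func matchRoute(template, url string) float64 {
	t := strings.Split(strings.Trim(template, "/"), "/")
	u := strings.Split(strings.Trim(url, "/"), "/")
	if len(t) != len(u) {
		return 0
	}
	conf := 1.0
	for i := range t {
		switch {
		case strings.HasPrefix(t[i], ":"):
			conf *= 0.9 // wildcard segment: slightly less certain
		case t[i] != u[i]:
			return 0 // literal mismatch rules the route out
		}
	}
	return conf
}

func main() {
	fmt.Println(matchRoute("/users/:id", "/users/42"))  // 0.9
	fmt.Println(matchRoute("/users/:id", "/orders/42")) // 0
}
```

A confidence score rather than a yes/no lets the agent rank candidate cross-service links instead of silently dropping ambiguous ones.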
Background watcher detects git changes. Optional auto-indexing on session start. No manual reindex needed.
search_graph, trace_call_path, detect_changes, query_graph, get_architecture, manage_adr, get_code_snippet, and 7 more.
| Feature | codebase-memory-mcp | GitNexus |
|---|---|---|
| Languages | 64 | 8-11 |
| Runtime | Single static binary | Node.js (npx) |
| Runtime dependency | None | Node.js |
| Stress test | Linux kernel (2.1M nodes, 3 min) | Not published |
| Embedded LLM | No (uses your MCP client) | Yes (extra API key + cost) |
| Published benchmarks | Yes (31 langs, 372 questions) | No |
| Auto-sync | Yes | No |
| MCP tools | 14 | 7 |
| Agents supported | 8 | 1-2 |
| Cross-service HTTP linking | Yes | No |
| Cypher queries | Yes | No |
| Incremental reindex | Yes (<1ms no-op) | No |
| Pre-tool hooks | Yes (agents prefer graph tools) | No |
| Visual web UI | Yes (3D graph) | Yes |
| Graph RAG / embeddings | Not needed* | Yes |
*Graph RAG and semantic embeddings solve a human problem — fuzzy "find something similar" queries. MCP agents don't need this: they make precise structural queries via tool calls and synthesize results themselves. codebase-memory-mcp is purpose-built for agents — exact patterns, call chain tracing, and structural search at machine speed.