Give your AI agent
permanent codebase knowledge

Architecture, dependencies, risk areas, hidden coupling. Pre-built and served via MCP so your agent skips the cold start. ~50% fewer tokens. 2.5x fewer tool calls. Same quality answers.

Understands 27 languages natively

TypeScript
Python
Go
JavaScript
C++
Java
Swift
$ codecortex init
Step 1/6: Discovering project structure...
Found 44 files in 7 modules
Step 2/6: Extracting symbols with tree-sitter...
Extracted 976 symbols, 131 imports, 2489 call edges
Step 3/6: Building dependency graph...
7 modules, 14 external deps
Step 4/6: Analyzing git history...
49 hotspots, 26 coupling pairs
Step 5/6: Writing knowledge files...
Step 6/6: Generating constitution...
CodeCortex initialized!
Symbols: 976 | Modules: 7 | Hidden deps: 14

Agent Agnostic

One knowledge layer.
Every agent.

Switch between Claude Code, Cursor, Codex, and Gemini without losing context. CodeCortex speaks MCP, the universal protocol for AI tools. Your knowledge persists no matter which agent you use.

Why It Works

Knowledge that compounds, not context that burns

Every session your agent spends re-discovering architecture is a cold start. CodeCortex builds persistent knowledge that eliminates it.

27 Languages

Native tree-sitter extraction across TypeScript, Python, Go, Rust, C, Java, and 21 more. Every symbol, every import, every edge.

13 MCP Tools

8 read + 5 write tools via the Model Context Protocol. CodeCortex is not middleware sitting between your agent and your code: your agent queries the knowledge, then reads source files directly.

Temporal Intelligence

Some files are secretly coupled. Zero imports between them, but they always change together. Only git history reveals what code structure hides.
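As an illustration, a temporal coupling entry might look like the sketch below. The field names and file paths are hypothetical, not CodeCortex's documented schema; the point is that co-change frequency is recorded even when no import connects the files.

```json
{
  "couplingPairs": [
    {
      "files": ["src/auth/session.ts", "src/api/middleware.ts"],
      "coChanges": 23,
      "sharedImports": 0,
      "confidence": 0.87
    }
  ]
}
```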

~50% Fewer Tokens

Tested on a 6,400-file codebase: 43K tokens vs 79K without CodeCortex. 2.5x fewer tool calls. Same answer quality.

Deep Understanding

A living map
of your codebase

Six layers of knowledge, from raw structure to architectural decisions, stored as flat files in .codecortex/. No database. No cloud. Commit it to git.

.codecortex/ (10 files)
Structural: graph.json
Semantic: modules/*.md
Temporal: temporal.json
Decisions: decisions/*.md
Patterns: patterns.md
Sessions: sessions/*.md
cortex.yaml
constitution.md
overview.md
graph.json
symbols.json
temporal.json
modules/
decisions/
sessions/
patterns.md

Battle-Tested

We threw the Linux kernel at it

93,000 files. 5.3 million symbols. 30 years of git history. One of the world's largest open-source codebases, understood in 90 seconds. Zero crashes.

Zero Config

One prompt. That's it.

Paste this into your AI agent. It installs CodeCortex, analyzes your codebase, and wires up MCP. From the next session on, your agent skips the cold start.

Setup prompt - works with any LLM
Install codecortex-ai globally, then initialize it on this project. After init completes, add the MCP server config so I can use CodeCortex tools in future sessions.

Steps:
1. Run: npm install -g codecortex-ai
2. Run: codecortex init
3. Add this to my MCP config (claude_desktop_config.json or .cursor/mcp.json):
{
  "mcpServers": {
    "codecortex": {
      "command": "codecortex",
      "args": ["serve"]
    }
  }
}
4. Confirm it worked by running: codecortex status

Quick Start

From amnesia to understanding in three commands

1

Analyze your codebase

Discovers files, extracts symbols with tree-sitter, builds the dependency graph, and analyzes git history for hidden coupling.

$ codecortex init
2

Start the MCP server

Starts a local MCP server over stdio. Any MCP-compatible AI agent can connect and read your codebase knowledge.

$ codecortex serve
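Under the hood, MCP clients speak JSON-RPC to the server over stdio. A request to one of the risk tools might look like this sketch (the `hotspots` tool name comes from the tool list; its argument names are illustrative, not a documented schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "hotspots",
    "arguments": { "limit": 10 }
  }
}
```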
3

Connect your agent

Add the MCP config to Claude Code, Cursor, or any agent. It starts every session already knowing the codebase.

{
  "mcpServers": {
    "codecortex": {
      "command": "codecortex",
      "args": ["serve"]
    }
  }
}

Frequently
asked questions

Which AI agents work with CodeCortex?
Any agent that supports MCP (Model Context Protocol). That includes Claude Code, Cursor, Codex, Windsurf, Gemini CLI, Zed, OpenCode, and more. The MCP server communicates over stdio, so any MCP-compatible client can connect and start using your codebase knowledge immediately.

Is CodeCortex middleware between my agent and my code?
No. It's pre-built knowledge your agent loads to skip the cold start: architecture maps, dependency graphs, hidden temporal coupling (files that always change together despite zero imports), risk scores, and hotspots. Your agent still reads actual source files directly; CodeCortex just means it knows where to look and what's risky before it starts.

Does my code leave my machine?
No. Everything runs locally. The structural extraction (symbols, imports, call graph, temporal analysis) uses native tree-sitter parsing. The semantic layer (module analysis, decisions, patterns) is written by your AI agent during normal coding sessions. No API keys, no cloud, no data leaves your machine.

Which languages are supported?
27 languages with native tree-sitter grammars: TypeScript, JavaScript, Python, Go, Rust, C, C++, Java, Kotlin, Scala, C#, Swift, Dart, Ruby, PHP, Lua, Bash, Elixir, OCaml, Elm, Solidity, Vue, Zig, and more. Each language has a dedicated extraction strategy optimized for its syntax and patterns.

Does it scale to large repositories?
Yes. CodeCortex uses streaming JSON writers for symbols and dependency graphs, so it can handle repositories with millions of symbols without hitting Node.js memory limits. It has been tested on large open-source projects like the Linux kernel (5.3M symbols) and OpenClaw (129K symbols).

What are the 13 MCP tools?
4 navigation tools (overview, search, symbol lookup, module context), 4 risk tools (edit briefing, hotspots, change coupling, dependency graph), and 5 memory tools (session briefing, decision history, record decision, update patterns, record observation).

Doesn't loading all this knowledge burn extra tokens?
The opposite. In testing on a 6,400-file codebase, agents with CodeCortex used ~50% fewer tokens and 2.5x fewer tool calls while achieving the same answer quality (23/25 vs 23/25). Agents navigate faster because they start with a map instead of exploring blindly.

Where does the knowledge live?
Everything lives in a .codecortex/ folder at the root of your project as flat files (JSON, Markdown, YAML). No database, no binary formats. You can commit it to git, inspect it manually, or delete it anytime. Fully portable between machines and agents.

Is it free?
Yes. CodeCortex is MIT licensed and completely free to use. Install it with npm install -g codecortex-ai and run codecortex init in any project to get started.