
What is Graph Memory?

Graph Memory is an MCP (Model Context Protocol) server that turns any project directory into a queryable semantic knowledge base. It indexes your markdown docs, source code, and files into interconnected graphs, then exposes them as 70 MCP tools and a full-featured web UI.

[Screenshot: Graph Memory Dashboard]

Who is it for?

  • Developers who want their AI assistant (Claude, Cursor, Windsurf) to deeply understand their codebase
  • Teams that need a persistent knowledge base that AI assistants can read and write to
  • Anyone tired of AI losing context between conversations

What it does

| Feature | Description |
| --- | --- |
| Docs indexing | Parses markdown into heading-based chunks, with cross-file links and code block extraction |
| Code indexing | Extracts functions, classes, and interfaces via tree-sitter AST parsing (TypeScript/JavaScript) |
| File index | Indexes all project files with metadata, language detection, and directory hierarchy |
| Knowledge graph | Persistent notes and facts with typed relations and cross-graph links |
| Task management | Kanban workflow with priorities, assignees, due dates, and cross-graph context |
| Skills | Reusable recipes with steps, triggers, and usage tracking |
| Hybrid search | BM25 keyword scoring plus vector cosine similarity, with graph expansion |
| Real-time | File watching plus WebSocket push to the UI |
| Multi-project | One process manages multiple projects from a single config |
| Web UI | Dashboard, kanban board, code browser, search, prompt builder |
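To make the "hybrid search" row concrete, here is a minimal sketch of how BM25 keyword scores and embedding cosine similarity can be blended into one ranking. This is illustrative only: the doc does not show Graph Memory's internals, so all names, the `alpha` blend weight, and the max-normalization of BM25 scores are assumptions, and the graph-expansion step is omitted here.

```typescript
// Hypothetical hybrid ranking sketch; not Graph Memory's actual implementation.
type Doc = { id: string; text: string; embedding: number[] };

// Naive word tokenizer.
const tokenize = (s: string): string[] => s.toLowerCase().match(/[a-z0-9]+/g) ?? [];

// BM25 score of a tokenized query against one tokenized doc (standard k1/b defaults).
function bm25(
  query: string[], doc: string[], avgLen: number,
  df: Map<string, number>, n: number, k1 = 1.2, b = 0.75,
): number {
  let score = 0;
  for (const term of query) {
    const tf = doc.filter((t) => t === term).length;
    if (tf === 0) continue;
    const d = df.get(term) ?? 0;
    const idf = Math.log(1 + (n - d + 0.5) / (d + 0.5));
    score += (idf * tf * (k1 + 1)) / (tf + k1 * (1 - b + (b * doc.length) / avgLen));
  }
  return score;
}

// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  return dot / (Math.hypot(...a) * Math.hypot(...b));
}

// Blend max-normalized BM25 with cosine similarity, weighted by alpha.
function hybridRank(queryText: string, queryVec: number[], docs: Doc[], alpha = 0.5): Doc[] {
  const toks = docs.map((d) => tokenize(d.text));
  const avgLen = toks.reduce((s, t) => s + t.length, 0) / toks.length;
  const df = new Map<string, number>();
  for (const t of toks) for (const term of new Set(t)) df.set(term, (df.get(term) ?? 0) + 1);
  const q = tokenize(queryText);
  const kw = toks.map((t) => bm25(q, t, avgLen, df, docs.length));
  const maxKw = Math.max(...kw, 1e-9);
  return docs
    .map((d, i) => ({ d, s: alpha * (kw[i] / maxKw) + (1 - alpha) * cosine(queryVec, d.embedding) }))
    .sort((x, y) => y.s - x.s)
    .map((x) => x.d);
}
```

In a graph-backed store, the top hits from a ranker like this would then be expanded along their relations to pull in connected context.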

How it works

Your Project  →  Graph Memory  →  AI Assistant
     │               │                 │
   files,          graphs,        70 MCP tools
   docs,         embeddings,       for search,
   code            web UI         CRUD, linking
  1. Point Graph Memory at your project directory
  2. It indexes docs, code, and files into interconnected graphs
  3. It embeds every node locally using an embedding model (~560 MB, no API calls)
  4. AI assistants query the graphs through 70 MCP tools
  5. You manage knowledge, tasks, and skills through MCP tools or the web UI
  6. File mirror syncs notes/tasks/skills to .notes/, .tasks/, .skills/ folders for IDE editing
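For step 4, an MCP client has to be told how to launch the server. The fragment below follows the `mcpServers` convention used by clients such as Claude Desktop; the exact file name and key names depend on your client, and only the `graphmemory serve` command itself comes from this page's install instructions.

```json
{
  "mcpServers": {
    "graph-memory": {
      "command": "graphmemory",
      "args": ["serve"]
    }
  }
}
```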

Key principles

  • Everything is local — embeddings run on your machine, no data leaves your network
  • Zero config to start — `npm install -g @graphmemory/server && graphmemory serve`
  • Graphs, not chunks — structured graphs with relationships, not flat vector stores
  • AI-native — designed for MCP clients, not just humans
  • Multi-project — one server, many projects, shared workspaces
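A small sketch of what "graphs, not chunks" means in practice: nodes carry typed relations to other nodes, so a search hit can be expanded along its links instead of returning an isolated text chunk. The node shape and relation names here are hypothetical, not Graph Memory's actual schema.

```typescript
// Illustrative node/relation shapes; the real Graph Memory schema is not shown in this doc.
type Relation = { type: string; target: string }; // e.g. "implemented-by", "references"

interface GraphNode {
  id: string;
  kind: "note" | "task" | "doc-chunk" | "code-symbol";
  text: string;
  relations: Relation[];
}

// A note linking a design decision to the code symbol that implements it.
const nodes: GraphNode[] = [
  {
    id: "note:auth-design",
    kind: "note",
    text: "Use JWT for session auth",
    relations: [{ type: "implemented-by", target: "code:verifyToken" }],
  },
  { id: "code:verifyToken", kind: "code-symbol", text: "function verifyToken(...)", relations: [] },
];

// Graph expansion: follow a hit's relations to pull in connected context.
function expand(hit: GraphNode, all: Map<string, GraphNode>): GraphNode[] {
  return [hit, ...hit.relations.flatMap((r) => all.get(r.target) ?? [])];
}
```

With a flat vector store, a query matching the note would return only its text; here, expanding the hit also surfaces the linked code symbol.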