Every conversation starts from zero. Users repeat themselves. Context is lost. Anura Memory gives your AI agents persistent memory — a knowledge graph for structured facts and markdown file storage with semantic search. Sign up, get an API key, and your agents remember everything.
Two memory systems in one platform: GraphRag for structured facts with human review, and FilesRag for markdown documents with semantic search. Nothing to deploy.
Vector databases give you fuzzy similarity. Anura Memory gives you structured facts and searchable files.
Sign up, generate a key from the dashboard. No credit card required. Takes 30 seconds.
Paste the MCP config into Claude/Cursor, or install the SDK (npm or pip) for custom agents. One line of config.
Facts are extracted into a knowledge graph. Documents are stored as searchable markdown files. Both are recalled on future queries. Automatically.
MCP for Claude & Cursor. TypeScript & Python SDKs for custom agents. Same API underneath.
```json
{
  "mcpServers": {
    "anura-memory": {
      "url": "https://anuramemory.com/api/mcp",
      "headers": {
        "X-API-KEY": "gm_your_key_here"
      }
    }
  }
}
```

12 tools, including remember, search, and get_context for facts, plus write_file, read_file, and search_files for documents. Your AI learns from every conversation.
```typescript
import { GraphMem } from '@anura-gate/anura-graph';

const mem = new GraphMem({
  apiKey: 'gm_your_key_here',
});

// Your agent learns
await mem.remember("Alice is VP of Eng at Acme");

// Your agent recalls
const ctx = await mem.getContext("Alice");
// => alice --works_at--> acme, alice --has_role--> vp of eng
```

```python
from graphmem import GraphMem

mem = GraphMem(api_key="gm_your_key_here")

# Your agent learns
mem.remember("Alice is VP of Eng at Acme")

# Your agent recalls
ctx = mem.get_context("Alice")
# => alice --works_at--> acme, alice --has_role--> vp of eng
```

Anura Memory is a memory layer, not a storage engine. Graph memory and file memory, out of the box.
Call remember() with raw text. An LLM extracts structured facts, deduplicates entities, and builds the graph. You write zero extraction code.
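Conceptually, the extract-and-dedupe step works like the toy sketch below. The real pipeline uses an LLM server-side; here a simple normalization rule stands in for entity deduplication, and the triple format and function names are illustrative assumptions, not the actual implementation.

```python
# Toy sketch of extraction + entity deduplication (illustrative only).

def normalize(name: str) -> str:
    """Collapse case and whitespace so 'Alice' and 'alice' merge into one entity."""
    return " ".join(name.lower().split())

def add_fact(graph: dict, subject: str, relation: str, obj: str) -> None:
    """Insert a (relation, object) edge, deduplicating entities by normalized name."""
    s, o = normalize(subject), normalize(obj)
    graph.setdefault(s, []).append((relation, o))

graph: dict[str, list[tuple[str, str]]] = {}
add_fact(graph, "Alice", "works_at", "Acme")
add_fact(graph, "alice", "has_role", "VP of Eng")

# Both facts attach to the single deduplicated entity "alice"
print(graph["alice"])  # [('works_at', 'acme'), ('has_role', 'vp of eng')]
```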
Store markdown files and search them semantically. Your agent can write_file, read_file, and search_files — long-form context that persists across sessions.
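The three file operations behave roughly as in this local stand-in. The hosted service stores markdown remotely and searches with embeddings; keyword overlap substitutes for semantic search here, and the class is a sketch, not the SDK's API.

```python
# In-memory stand-in for the file-memory tools (write_file, read_file, search_files).

class FileMemory:
    def __init__(self):
        self.files: dict[str, str] = {}

    def write_file(self, path: str, content: str) -> None:
        self.files[path] = content

    def read_file(self, path: str) -> str:
        return self.files[path]

    def search_files(self, query: str) -> list[str]:
        # Rank paths by how many query words appear in the file body
        # (a crude proxy for the real semantic search).
        words = set(query.lower().split())
        scored = [
            (sum(w in body.lower() for w in words), path)
            for path, body in self.files.items()
        ]
        return [path for score, path in sorted(scored, reverse=True) if score]

fm = FileMemory()
fm.write_file("notes/acme.md", "# Acme\nAlice is VP of Engineering.")
fm.write_file("notes/todo.md", "- ship the demo")
print(fm.search_files("who is VP at Acme"))  # ['notes/acme.md']
```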
Graph traversal returns exact relationships, not "top-k similar chunks." Your AI knows Alice works at Acme — not that some paragraph mentions both names.
Extracted facts land in a pending queue. You approve what enters the graph. No hallucination drift, no garbage accumulation.
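The review flow amounts to a two-stage queue like the sketch below. Class and method names are made up for illustration; the point is that nothing reaches the graph without an explicit approve.

```python
# Hedged sketch of a pending-review queue for extracted facts.

class ReviewQueue:
    def __init__(self):
        self.pending: list[tuple[str, str, str]] = []  # awaiting human review
        self.graph: list[tuple[str, str, str]] = []    # approved facts only

    def propose(self, fact: tuple[str, str, str]) -> None:
        self.pending.append(fact)

    def approve(self, fact: tuple[str, str, str]) -> None:
        self.pending.remove(fact)
        self.graph.append(fact)

    def reject(self, fact: tuple[str, str, str]) -> None:
        self.pending.remove(fact)  # rejected facts never enter the graph

q = ReviewQueue()
q.propose(("alice", "works_at", "acme"))
q.propose(("alice", "born_on", "mars"))   # a hallucinated fact
q.approve(("alice", "works_at", "acme"))
q.reject(("alice", "born_on", "mars"))
print(q.graph)    # [('alice', 'works_at', 'acme')]
print(q.pending)  # []
```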
Community detection, LLM summaries, hybrid search (graph + vector + communities). Three retrieval lanes, one API call.
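Merging the three lanes can be pictured as the sketch below: each lane returns candidates, and results surfaced by more lanes rank higher. The weighting scheme and lane contents are invented for illustration; the real scoring is not documented here.

```python
# Sketch of merging three retrieval lanes into one ranked list.

def hybrid_search(lanes: dict[str, list[str]]) -> list[str]:
    # Count how many lanes surfaced each result; more lanes = higher rank.
    # Ties keep first-seen order (Python's sort is stable).
    scores: dict[str, int] = {}
    for results in lanes.values():
        for r in results:
            scores[r] = scores.get(r, 0) + 1
    return sorted(scores, key=lambda r: -scores[r])

# In the real system each lane would be computed from the query;
# here they are hard-coded for the example.
lanes = {
    "graph":     ["alice --works_at--> acme"],
    "vector":    ["alice --works_at--> acme", "acme raised a Series B"],
    "community": ["acme raised a Series B"],
}
print(hybrid_search(lanes))
# ['alice --works_at--> acme', 'acme raised a Series B']
```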
We host everything — graph engine, file storage, vector embeddings, and the API. You get an API key and start building.
| Feature | Anura Memory | Vector DB | Build It Yourself |
|---|---|---|---|
| LLM extraction | ✓ | ✗ | you build |
| Entity deduplication | ✓ | ✗ | you build |
| Markdown file memory | ✓ | ✗ | you build |
| Semantic file search | ✓ | ✗ | you build |
| Human review queue | ✓ | ✗ | you build |
| Provenance tracking | ✓ | ✗ | you build |
| Graph + vector + community search | ✓ | vector only | you build |
| MCP server (12 tools) | ✓ | ✗ | you build |
| TypeScript + Python SDK | ✓ | ✗ | you build |
| Setup time | 60 seconds | minutes | weeks |
| Infrastructure | hosted for you | managed service | varies |
Start free. Upgrade when your agents need more.