Connect your tools. Tell it what matters. Hand off the work.
Open source. Self-hostable. Your data, your control.
You shouldn't be managing your inbox, chasing updates, or repeating context to every tool you use.
CORE is a personal butler — an always-on AI that knows your tools, your preferences, and your people. You hand something off once and it handles it today, tomorrow, and next week, without being asked again.
CORE reads every email with the context of your relationships. It knows who this person is, how you've handled them before, and what you'd want to say. It drafts responses, flags what needs you, and handles the rest silently.
Scheduling conflicts, follow-up reminders, meeting prep. CORE manages your time the way you'd expect someone who knows your work style to manage it.
A Sentry alert fires. CORE checks it, creates a GitHub issue, assigns the right engineer based on git blame, and posts a summary to your team's Slack — without you touching any of it. 200+ actions across 50+ apps, coordinated from a single prompt.
CORE monitors what's happening across your tools and surfaces only what needs your attention. New PR merges, Linear status changes, Slack threads you're mentioned in — it watches everything so you don't have to.
The things that happen every day that shouldn't require you. CORE monitors events — emails, GitHub alerts, calendar changes — and evaluates them against what it knows about you. It acts proactively. You only see what requires a decision.
Gmail, Slack, GitHub, Calendar, Linear — connect once. CORE scans them to understand your context, your relationships, and how you work. Your butler starts with real knowledge from day one.
Your preferences, your people, your rules. Every conversation, decision, and directive is stored in a temporal knowledge graph — not raw text but classified facts that surface exactly when they're needed. Ask about something from weeks ago and the full context is there.
CORE acts. Proactively. Across everything. It understands your intent, routes to the right tools and memory, coordinates multi-step workflows, and can spawn other agents (Claude Code sessions, browser agents) to get things done. You only see what requires you.
CORE is built on three layers:
- Memory: A temporal knowledge graph that stores episodes, entities, and classified facts. Every piece of information is categorized (preference, decision, directive, goal, etc.) and connected over time. This is what makes retrieval intent-driven instead of keyword-based. Docs →
- Toolkit: A unified actions layer for any MCP-compatible agent. Connect your apps once (GitHub, Linear, Slack, Gmail, Calendar, etc.) and every connected AI tool gets access to 200+ actions through a single endpoint. Docs →
- CORE Agent: The orchestrator that ties memory and toolkit together. It understands intent, searches memory, picks tools, spawns other agents, and acts proactively based on triggers and rules. Docs →
Your butler doesn't require you to be at a desk.
Talk to CORE from WhatsApp, email, Slack, or the web dashboard — same context, same memory, wherever you are. A message from WhatsApp can spin up a Claude Code session to fix a bug. An email can trigger a multi-step workflow across five apps. The interface changes; the intelligence doesn't.
Give Claude Code, Cursor, and other AI tools the same memory and toolkit your butler uses. Your coding agent remembers project architecture, past decisions, and preferences across sessions. No more re-explaining context. One connection point — every agent gets smarter.
Claude Code (Recommended: Plugin)
npm install -g @redplanethq/corebrain
Then in Claude Code:
/plugin marketplace add redplanethq/core
/plugin install core_brain
Restart Claude Code and run /mcp to authenticate.
The plugin auto-loads your persona (preferences, rules, decisions) at every session start and ingests conversations into memory when you're done.
Claude Code (Manual MCP)
claude mcp add --transport http --scope user core-memory "https://app.getcore.me/api/v1/mcp?source=Claude-Code"
Then type /mcp and open core-memory for authentication.
OpenClaw
openclaw plugins install @redplanethq/openclaw-corebrain
Set your API key via environment variable or config:
export CORE_API_KEY=your_api_key_here
Get your API key from app.getcore.me → Settings → API Key.
Claude Desktop
- Copy the MCP URL: https://app.getcore.me/api/v1/mcp?source=Claude
- Navigate to Settings → Connectors → Add custom connector
- Click "Connect" and grant Claude permission to access CORE
30+ more providers — Windsurf, VS Code, Cline, Codex, Gemini CLI, Copilot, and more. See all setup guides →
Sync your ChatGPT and Gemini conversations into CORE via browser extension. Searchable, reusable, and available to every connected agent.
- Sign up at app.getcore.me
- Connect Gmail & Calendar — CORE scans them so your butler starts with real context from day one
- Name your butler — give it a name, set your preferences, tell it what matters
- Hand off the work — talk to CORE from the web, email, WhatsApp, or Slack
Quick Deploy
Or with Docker
git clone https://github.com/RedPlanetHQ/core.git
cd core
# Configure AI provider settings in `core/hosting/docker/.env` (at minimum `OPENAI_API_KEY`).
# If using an OpenAI-compatible proxy, also set `OPENAI_BASE_URL` and `OPENAI_API_MODE=chat_completions`.
docker-compose up -d
View complete self-hosting guide →
One workspace per context — work, personal, client. Fully isolated. Open source and self-hostable so your data never leaves your control.
Building AI agents? Offload memory and integrations to CORE so you can focus on your agent's logic.
- Offload memory — Use CORE's temporal knowledge graph as your agent's long-term memory. Store conversations, retrieve context with intent-driven search, and let your agent build knowledge over time without managing your own vector DB or graph.
- Offload integrations — Connect apps once in CORE, and your agent gets MCP tools for all of them. No OAuth flows to build, no API maintenance, no per-integration code.
- Build via MCP or API — Connect your agent to CORE via MCP (single endpoint) or use the REST API directly.
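For the MCP route, everything goes through the single endpoint shown in the setup sections above, with a `source` query parameter identifying the connecting agent. A small sketch of building that URL (the helper name and the `source` value are ours; only the base endpoint and parameter come from this document):

```python
from urllib.parse import urlencode

# Base endpoint taken from the setup instructions above.
MCP_BASE = "https://app.getcore.me/api/v1/mcp"

def mcp_url(source: str) -> str:
    """Build the single MCP endpoint URL, tagging the request with an
    agent name via the `source` query parameter."""
    return f"{MCP_BASE}?{urlencode({'source': source})}"

print(mcp_url("my-agent"))  # https://app.getcore.me/api/v1/mcp?source=my-agent
```

Any MCP-compatible client can then connect to that URL over HTTP and discover the available memory and toolkit actions; for direct REST access, see the API Reference linked below.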
Example Projects
- core-cli — Task manager agent with memory and Linear/GitHub sync
- holo — Turn your CORE memory into a personal website with chat
CORE achieves 88.24% average accuracy on the LoCoMo benchmark across single-hop, multi-hop, open-domain, and temporal reasoning tasks.
View benchmark methodology and results →
- CASA Tier 2 Certified — third-party audited to meet Google's OAuth requirements
- 88.24% on LoCoMo benchmark — across single-hop, multi-hop, open-domain, and temporal reasoning · View results →
- TLS 1.3 in transit · AES-256 at rest
- Workspace-based isolation, role-based permissions
- Your data is never used for AI model training
- Self-hosting option for full data isolation
Security Policy → · Vulnerability Reporting: [email protected]
- Welcome — Introduction to CORE
- Concepts — Memory, Agent, and Toolkit explained
- Connect — Channels and AI providers
- Toolkit — Actions and integrations
- Open Source — Local setup, contributing, self-hosting
- API Reference — REST API and endpoints
- Changelog — Product updates
- Discord: Join core-support channel
- Documentation: docs.getcore.me
- Email: [email protected]