RedPlanetHQ/core


CORE

Every great person has someone who handles the rest.

Add to Claude Code · Add to Cursor · Add to OpenClaw · Deploy on Railway

Website · Docs · Discord


Connect your tools. Tell it what matters. Hand off the work.

Open source. Self-hostable. Your data, your control.


The Handoff

You shouldn't be managing your inbox, chasing updates, or repeating context to every tool you use.

CORE is a personal butler — an always-on AI that knows your tools, your preferences, and your people. You hand something off once and it handles it today, tomorrow, and next week. Without being asked again.


What You Can Hand Off

Your Inbox

CORE reads every email with the context of your relationships. It knows who this person is, how you've handled them before, and what you'd want to say. It drafts responses, flags what needs you, and handles the rest silently.

Learn more about Memory →

Your Calendar

Scheduling conflicts, follow-up reminders, meeting prep. CORE manages your time the way you'd expect someone who knows your work style to manage it.

Your Dev Workflow

A Sentry alert fires. CORE checks it, creates a GitHub issue, assigns the right engineer based on git blame, and posts a summary to your team's Slack — without you touching any of it. 200+ actions across 50+ apps, coordinated from a single prompt.

Learn more about Toolkit →

Your Team Updates

CORE monitors what's happening across your tools and surfaces only what needs your attention. New PR merges, Linear status changes, Slack threads you're mentioned in — it watches everything so you don't have to.

Your Recurring Ops

The routine work that happens every day and shouldn't require you. CORE monitors events — emails, GitHub alerts, calendar changes — and evaluates them against what it knows about you. It acts proactively. You only see what requires a decision.

Learn more about Concepts →
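The monitor → evaluate → act loop described above can be sketched roughly as follows. This is an illustrative sketch only — the event and rule shapes here are assumptions, not CORE's internals:

```python
# Hypothetical sketch of a monitor -> evaluate -> act loop.
# Event and rule shapes are illustrative, not CORE's actual data model.

def triage(event: dict, rules: list[dict]) -> str:
    """Return 'act' (handle silently), 'surface' (needs your decision), or 'ignore'."""
    for rule in rules:
        if rule["event_type"] == event["type"]:
            return "surface" if rule.get("needs_decision") else "act"
    return "ignore"

rules = [
    {"event_type": "sentry_alert", "needs_decision": False},
    {"event_type": "contract_email", "needs_decision": True},
]

print(triage({"type": "sentry_alert"}, rules))    # handled without you
print(triage({"type": "contract_email"}, rules))  # surfaced for a decision
```

The point of the sketch: most events resolve to "act" or "ignore", so only decisions reach you.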


How It Works

1. Connect Your Tools

Gmail, Slack, GitHub, Calendar, Linear — connect once. CORE scans them to understand your context, your relationships, and how you work. Your butler starts with real knowledge from day one.

2. Tell It What Matters

Your preferences, your people, your rules. Every conversation, decision, and directive is stored in a temporal knowledge graph — not raw text but classified facts that surface exactly when they're needed. Ask about something from weeks ago and the full context is there.

3. Walk Away

CORE acts. Proactively. Across everything. It understands your intent, routes to the right tools and memory, coordinates multi-step workflows, and can spawn other agents (Claude Code sessions, browser agents) to get things done. You only see what requires you.
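To make step 2 concrete, here is a minimal sketch of what a classified fact in a temporal knowledge graph could look like, and why category-based retrieval differs from keyword search. The field names and categories shown are illustrative assumptions, not CORE's actual schema:

```python
# Illustrative sketch of a classified fact -- not CORE's actual schema.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Fact:
    subject: str          # entity the fact is about
    category: str         # e.g. "preference", "decision", "directive", "goal"
    statement: str        # the classified fact itself
    valid_from: datetime  # when the fact became true (the "temporal" part)
    source: str           # episode or conversation it was extracted from

def recall(facts: list[Fact], category: str) -> list[Fact]:
    """Intent-driven retrieval: filter by category, newest first, no keywords."""
    matches = [f for f in facts if f.category == category]
    return sorted(matches, key=lambda f: f.valid_from, reverse=True)

facts = [
    Fact("user", "preference", "Prefers short, direct emails",
         datetime(2024, 1, 5, tzinfo=timezone.utc), "email-thread-12"),
    Fact("user", "decision", "Chose Linear over Jira for issue tracking",
         datetime(2024, 2, 9, tzinfo=timezone.utc), "slack-thread-3"),
]

print(recall(facts, "preference")[0].statement)
```

Because each fact carries a category and a timestamp, "what are my email preferences?" can be answered by filtering, even weeks later, without the query sharing any keywords with the stored text.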


Architecture

CORE is built on three layers:

  • Memory: A temporal knowledge graph that stores episodes, entities, and classified facts. Every piece of information is categorized (preference, decision, directive, goal, etc.) and connected over time. This is what makes retrieval intent-driven instead of keyword-based. Docs →

  • Toolkit: A unified actions layer for any MCP-compatible agent. Connect your apps once (GitHub, Linear, Slack, Gmail, Calendar, etc.) and every connected AI tool gets access to 200+ actions through a single endpoint. Docs →

  • CORE Agent: The orchestrator that ties memory and toolkit together. It understands intent, searches memory, picks tools, spawns other agents, and acts proactively based on triggers and rules. Docs →
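How the three layers fit together can be sketched in a few lines. The names and shapes below are hypothetical, not CORE's actual interfaces — the point is only the flow: memory supplies context, the toolkit supplies actions, and the agent routes between them or escalates to you:

```python
# Hypothetical sketch of the orchestrator tying memory and toolkit together.
# All names and shapes here are illustrative, not CORE's actual interfaces.

def handle(intent: str, memory: dict, toolkit: dict) -> str:
    """Search memory for context, then dispatch a tool or surface to the user."""
    context = memory.get(intent, "no stored context")
    action = toolkit.get(intent)
    if action is None:
        return f"surface to user: {intent}"
    return action(context)

memory = {"triage_alert": "assign via git blame; summarize to Slack"}
toolkit = {"triage_alert": lambda ctx: f"created GitHub issue ({ctx})"}

print(handle("triage_alert", memory, toolkit))
print(handle("approve_contract", memory, toolkit))
```

An intent with a matching tool is handled end to end; an intent with no matching tool is the one thing that reaches you.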


Reach It From Anywhere

Your butler doesn't require you to be at a desk.

Talk to CORE from WhatsApp, email, Slack, or the web dashboard — same context, same memory, wherever you are. A message from WhatsApp can spin up a Claude Code session to fix a bug. An email can trigger a multi-step workflow across five apps. The interface changes; the intelligence doesn't.

Get started →

Supercharge Your AI Agents

Give Claude Code, Cursor, and other AI tools the same memory and toolkit your butler uses. Your coding agent remembers project architecture, past decisions, and preferences across sessions. No more re-explaining context. One connection point — every agent gets smarter.

Get started →

Claude Code (Recommended: Plugin)
npm install -g @redplanethq/corebrain

Then in Claude Code:

/plugin marketplace add redplanethq/core
/plugin install core_brain

Restart Claude Code and run /mcp to authenticate.

The plugin auto-loads your persona (preferences, rules, decisions) at every session start and ingests conversations into memory when you're done.

Claude Code (Manual MCP)
claude mcp add --transport http --scope user core-memory https://app.getcore.me/api/v1/mcp?source=Claude-Code

Then type /mcp and open core-memory for authentication.

Cursor

Install MCP Server

OpenClaw
openclaw plugins install @redplanethq/openclaw-corebrain

Set your API key via environment variable or config:

export CORE_API_KEY=your_api_key_here

Get your API key from app.getcore.me → Settings → API Key.

Claude Desktop
  1. Copy MCP URL: https://app.getcore.me/api/v1/mcp?source=Claude
  2. Navigate to Settings → Connectors → Add custom connector
  3. Click "Connect" and grant Claude permission to access CORE

30+ more providers — Windsurf, VS Code, Cline, Codex, Gemini CLI, Copilot, and more. See all setup guides →

Turn AI Chats into Memory

Sync your ChatGPT and Gemini conversations into CORE via browser extension. Searchable, reusable, and available to every connected agent.

Get started →


Quick Start

Cloud

  1. Sign up at app.getcore.me
  2. Connect Gmail & Calendar — CORE scans them so your butler starts with real context from day one
  3. Name your butler — give it a name, set your preferences, tell it what matters
  4. Hand off the work — talk to CORE from the web, email, WhatsApp, or Slack

Self-Host

Quick Deploy

Deploy on Railway

Or with Docker

git clone https://github.com/RedPlanetHQ/core.git
cd core
# Configure AI provider settings in `core/hosting/docker/.env` (at minimum `OPENAI_API_KEY`).
# If using an OpenAI-compatible proxy, also set `OPENAI_BASE_URL` and `OPENAI_API_MODE=chat_completions`.
docker-compose up -d

View complete self-hosting guide →


Yours. Completely.

One workspace per context — work, personal, client. Fully isolated. Open source and self-hostable so your data never leaves your control.


For Agent Builders

Building AI agents? Offload memory and integrations to CORE so you can focus on your agent's logic.

  • Offload memory — Use CORE's temporal knowledge graph as your agent's long-term memory. Store conversations, retrieve context with intent-driven search, and let your agent build knowledge over time without managing your own vector DB or graph.
  • Offload integrations — Connect apps once in CORE, and your agent gets MCP tools for all of them. No OAuth flows to build, no API maintenance, no per-integration code.
  • Build via MCP or API — Connect your agent to CORE via MCP (single endpoint) or use the REST API directly.
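As a rough sketch of the REST path, an agent might assemble a request like the one below to store a conversation. The endpoint path and payload field names here are assumptions for illustration — consult the API reference for the real shapes:

```python
# Hypothetical sketch of calling CORE's REST API from your own agent.
# The endpoint path and payload fields are assumptions -- check the API reference.
import json

def build_ingest_request(base_url: str, api_key: str, text: str) -> dict:
    """Assemble (but do not send) an HTTP request that stores a conversation."""
    return {
        "method": "POST",
        "url": f"{base_url}/api/v1/ingest",  # hypothetical path
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"text": text, "source": "my-agent"}),  # hypothetical fields
    }

req = build_ingest_request("https://app.getcore.me", "YOUR_API_KEY", "User prefers dark mode")
print(req["url"])
```

Everything integration-specific (auth refresh, per-app APIs) stays on CORE's side; your agent only ever speaks one endpoint.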

Example Projects

  • core-cli — Task manager agent with memory and Linear/GitHub sync
  • holo — Turn your CORE memory into a personal website with chat

API Reference → · SDK Docs →


Benchmark

CORE achieves 88.24% average accuracy on the LoCoMo benchmark across single-hop, multi-hop, open-domain, and temporal reasoning tasks.

benchmark

View benchmark methodology and results →


Built to Be Trusted

  • CASA Tier 2 Certified — third-party audited to meet Google's OAuth requirements
  • 88.24% on LoCoMo benchmark — across single-hop, multi-hop, open-domain, and temporal reasoning · View results →
  • TLS 1.3 in transit · AES-256 at rest
  • Workspace-based isolation, role-based permissions
  • Your data is never used for AI model training
  • Self-hosting option for full data isolation

Security Policy → · Vulnerability Reporting: [email protected]

