
HippocampAI — Enterprise Memory Engine for Intelligent AI Systems


HippocampAI is a production-ready, enterprise-grade memory engine that transforms how AI systems remember, reason, and learn from interactions. It provides persistent, intelligent memory capabilities that enable AI agents to maintain context across sessions, understand user preferences, detect behavioral patterns, and deliver truly personalized experiences.

The name "HippocampAI" draws inspiration from the hippocampus, the brain region responsible for memory formation and retrieval, reflecting our mission to give AI systems human-like memory capabilities.

Current Release: v0.5.0 — Intelligent memory features: knowledge graph, graph-aware retrieval, feedback loops, triggers, procedural memory, and embedding migration.


Package Structure

HippocampAI is organized into two main components for flexibility:

Package | Description | Use Case
hippocampai.core | Core memory engine (no SaaS dependencies) | Library integration, embedded use
hippocampai.platform | SaaS platform (API, auth, Celery, monitoring) | Self-hosted SaaS deployment
# Core library only (lightweight)
from hippocampai.core import MemoryClient, Memory, Config

# SaaS platform features
from hippocampai.platform import run_api_server, AutomationController

# Or use the main package (includes everything, backward compatible)
from hippocampai import MemoryClient

Quick Start

Installation

# Core library (lightweight - 10 dependencies)
pip install hippocampai

# With SaaS features (API, auth, background tasks)
pip install "hippocampai[saas]"

# With specific LLM providers
pip install "hippocampai[openai]"     # OpenAI support
pip install "hippocampai[anthropic]"  # Anthropic Claude
pip install "hippocampai[groq]"       # Groq support

# Everything (development, all features)
pip install "hippocampai[all,dev]"

Your First Memory (30 seconds)

from hippocampai import MemoryClient

# Initialize client
client = MemoryClient()

# Store a memory
memory = client.remember(
    "I prefer oat milk in my coffee and work remotely on Tuesdays",
    user_id="alice",
    type="preference"
)

# Recall memories
results = client.recall("work preferences", user_id="alice")
print(f"Found: {results[0].memory.text}")

That's it! You now have intelligent memory for your AI application.


Key Features

Feature | Description | Learn More
Intelligent Memory | Hybrid search, importance scoring, semantic clustering | Features Guide
High Performance | 50-100x faster with Redis caching, 500-1000+ RPS | Performance
Advanced Search | Vector + BM25 + reranking, temporal queries | Search Guide
Analytics | Pattern detection, habit tracking, behavioral insights | Analytics
AI Integration | Works with OpenAI, Anthropic, Groq, Ollama, local models | Providers
Session Management | Conversation tracking, summaries, hierarchical sessions | Sessions
SaaS Platform | Multi-tenant auth, rate limiting, background tasks | SaaS Guide
Memory Quality | Health monitoring, duplicate detection, quality tracking | Memory Management
Background Tasks | Celery-powered async operations, scheduled jobs | Celery Guide
Memory Consolidation ⭐ NEW | Sleep-phase architecture with intelligent compaction | Sleep Phase
Multi-Agent Collaboration ⭐ NEW | Shared memory spaces for agent coordination | Collaboration
React Dashboard ⭐ NEW | Full-featured UI with analytics and visualization | Frontend
Predictive Analytics ⭐ NEW | Memory usage predictions and pattern forecasting | New Features
Auto-Healing ⭐ NEW | Automatic detection and repair of memory issues | New Features
Knowledge Graph NEW | Real-time entity/relationship extraction on every remember() | Features
Graph-Aware Retrieval NEW | 3-way RRF fusion: vector + BM25 + graph | Features
Relevance Feedback NEW | User feedback loop with exponential decay scoring | Features
Memory Triggers NEW | Event-driven webhook, WebSocket, and log actions | Features
Procedural Memory NEW | Self-optimizing prompts via learned behavioral rules | Features
Embedding Migration NEW | Safe model migration with Celery background processing | Features
Plugin System | Custom processors, scorers, retrievers, filters | New Features
Memory Namespaces | Hierarchical organization with permissions | New Features
Export/Import | Portable formats (JSON, Parquet, CSV) for backup | New Features
Offline Mode | Queue operations when the backend is unavailable | New Features
Tiered Storage | Hot/warm/cold storage tiers for efficiency | New Features
Framework Integrations | LangChain & LlamaIndex adapters | New Features
Bi-Temporal Facts | Track facts with validity periods and time-travel queries | Bi-Temporal Guide
Context Assembly | Automated context pack generation with token budgeting | Context Assembly
Custom Schemas | Define entity/relationship types without code changes | Schema Guide
Benchmarks | Reproducible performance benchmarks | Benchmarks
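The "Graph-Aware Retrieval" row refers to Reciprocal Rank Fusion (RRF) over three ranked lists. As a minimal, illustrative sketch (the standard RRF formula applied to toy data, not HippocampAI's internal code), fusing vector, BM25, and graph rankings looks like this:

```python
# Standard 3-way Reciprocal Rank Fusion (RRF) over toy ranked lists.
# Illustrates the technique named in the feature table; NOT the library's
# actual implementation.

def rrf_fuse(rankings, k=60):
    """Fuse ranked lists of document IDs: score(d) = sum of 1 / (k + rank)."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["m3", "m1", "m7"]   # nearest-neighbor order
bm25_hits   = ["m1", "m3", "m9"]   # keyword-match order
graph_hits  = ["m1", "m7", "m2"]   # entity-graph order

fused = rrf_fuse([vector_hits, bm25_hits, graph_hits])
print(fused[0])  # "m1" wins: it ranks highly in all three lists
```

Because RRF works on ranks rather than raw scores, the three retrievers never need their scores calibrated against each other.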

Why Choose HippocampAI?

vs. Traditional Vector Databases

  • Built-in Intelligence: Pattern detection, insights, behavioral analysis
  • Memory Types: Facts, preferences, goals, habits, events (not just vectors)
  • Temporal Reasoning: Native time-based queries and narratives
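The kind of time-based query the last bullet refers to can be sketched in plain Python (a concept illustration with made-up data, not the library's API, which exposes this through recall and the bi-temporal fact store):

```python
# Concept sketch of a time-windowed memory query: filter timestamped
# memories to a date range. Plain Python with toy data.
from datetime import datetime

memories = [
    {"text": "Started new job", "at": datetime(2024, 1, 15)},
    {"text": "Moved to Berlin", "at": datetime(2024, 6, 2)},
    {"text": "Adopted a cat",   "at": datetime(2025, 3, 9)},
]

def recall_between(items, start, end):
    """Return memories whose timestamp falls inside [start, end)."""
    return [m for m in items if start <= m["at"] < end]

in_2024 = recall_between(memories, datetime(2024, 1, 1), datetime(2025, 1, 1))
print([m["text"] for m in in_2024])  # ['Started new job', 'Moved to Berlin']
```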

vs. Other Memory Platforms

  • 5-100x Faster: Redis caching, optimized retrieval
  • Deployment Flexibility: Local, self-hosted, or SaaS
  • Full Control: Complete source access and customization

vs. Building In-House

  • Ready in Minutes: pip install hippocampai
  • 102+ Methods: Complete API covering all use cases
  • Production-Tested: Battle-tested in real applications

See detailed comparison →


Documentation

Complete documentation is available in the docs/ directory.

Quick Links

What do you want to do? → Go here
Get started in 5 minutes → Getting Started Guide | Quickstart
Try interactive demo → Chat Demo Guide
See all 102+ functions → API Reference | Library Reference
Deploy as SaaS platform → SaaS Platform Guide ⭐ NEW
Monitor memory quality → Memory Management ⭐ NEW
Set up background tasks → Celery Guide ⭐ NEW
Deploy to production → User Guide | Deployment
Configure settings → Configuration Guide | Providers
Monitor & observe → Monitoring | Telemetry
Troubleshoot issues → Troubleshooting
Use new features → New Features Guide ⭐ NEW
View all documentation → Documentation Hub

Documentation Index

Complete Documentation Index - Browse all 26 documentation files organized by topic



Configuration

Local Development

# .env file
QDRANT_URL=http://localhost:6333
LLM_PROVIDER=ollama
LLM_MODEL=qwen2.5:7b-instruct

Cloud/Production

from hippocampai import MemoryClient
from hippocampai.adapters import GroqLLM

client = MemoryClient(
    llm_provider=GroqLLM(api_key="your-key"),
    qdrant_url="https://your-qdrant-cluster.com",
    redis_url="redis://your-redis:6379"
)

See all configuration options →


Deployment Options

Local Development

docker run -d -p 6333:6333 qdrant/qdrant
pip install hippocampai

Production Stack

git clone https://github.com/rexdivakar/HippocampAI.git
cd HippocampAI
docker-compose up -d  # Includes Qdrant, Redis, API, Celery, Monitoring

Includes:

  • FastAPI server (port 8000)
  • React Dashboard (port 3001)
  • Celery workers with Beat scheduler
  • Flower monitoring (port 5555)
  • Prometheus metrics (port 9090)
  • Grafana dashboards (port 3000)

React Dashboard (New in v0.4.0)

cd frontend
npm install
npm run dev  # Development server on port 5173

Production deployment guide →


Use Cases

AI Agents & Chatbots

  • Personalized assistants with context across sessions
  • Customer support with interaction history
  • Educational tutoring that adapts to students
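"Context across sessions" means memories are keyed to the user rather than to a single conversation, so a new session can recall what earlier sessions stored. A toy stand-in for client.remember()/client.recall() (not HippocampAI code) makes the pattern concrete:

```python
# Toy illustration of cross-session memory: store per user, recall from
# any session. A dict stands in for the real vector store.
from collections import defaultdict

store = defaultdict(list)  # user_id -> list of (session_id, text)

def remember(user_id, session_id, text):
    store[user_id].append((session_id, text))

def recall(user_id, keyword):
    """Search across ALL of the user's sessions, not just the current one."""
    return [t for (_s, t) in store[user_id] if keyword.lower() in t.lower()]

# Session 1: the assistant learns a preference.
remember("alice", "s1", "Prefers vegetarian restaurants")

# Session 2, days later: the preference is still available.
hits = recall("alice", "vegetarian")
print(hits)  # ['Prefers vegetarian restaurants']
```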

Enterprise Applications

  • Knowledge management for teams
  • CRM enhancement with interaction intelligence
  • Compliance monitoring and audit trails

Research & Analytics

  • Behavioral pattern analysis
  • Long-term trend detection
  • User experience personalization

More use cases →


Performance

Metric | Performance
Query Speed | 50-100x faster with caching
Throughput | 500-1000+ requests/second
Latency | 1-2ms (cached), 5-15ms (uncached)
Availability | 99.9% uptime
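The cached/uncached latency gap comes from the read-through cache pattern: serve repeated queries from fast storage and fall through to the full search only on a miss. A minimal sketch of the idea (HippocampAI uses Redis; a dict stands in here):

```python
# Read-through cache sketch: repeated queries skip the expensive backend.
import time

_cache = {}  # query -> (expires_at, result)

def cached_search(query, backend_search, ttl=60.0):
    """Serve repeated queries from memory; fall through on miss or expiry."""
    now = time.monotonic()
    hit = _cache.get(query)
    if hit and hit[0] > now:
        return hit[1]                      # cache hit: microseconds
    result = backend_search(query)         # cache miss: full search
    _cache[query] = (now + ttl, result)
    return result

calls = []
def slow_backend(q):
    calls.append(q)
    return [f"result for {q}"]

cached_search("work preferences", slow_backend)
cached_search("work preferences", slow_backend)  # served from cache
print(len(calls))  # 1 -- the backend ran only once
```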

See benchmarks →



Examples

Code Examples

Over 25 working examples in the examples/ directory:

# Basic operations
python examples/01_basic_usage.py

# Advanced features
python examples/11_intelligence_features_demo.py
python examples/13_temporal_reasoning_demo.py
python examples/14_cross_session_insights_demo.py

# New v0.4.0 features
python examples/20_collaboration_demo.py      # Multi-agent collaboration
python examples/21_predictive_analytics_demo.py  # Predictive analytics
python examples/22_auto_healing_demo.py       # Auto-healing pipeline
python examples/consolidation_demo.py         # Memory consolidation

View all examples →


Contributing

We welcome contributions! See our Contributing Guide for details.

git clone https://github.com/rexdivakar/HippocampAI.git
cd HippocampAI
pip install -e ".[dev]"
pytest

License

Apache 2.0 - Use freely in commercial and open-source projects.


Star History

If you find HippocampAI useful, please star the repo! It helps others discover the project.


Built by the HippocampAI team


Quick Reference Card

from hippocampai import MemoryClient

client = MemoryClient()

# Core operations
memory = client.remember("text", user_id="alice")
results = client.recall("query", user_id="alice", k=5)
client.update_memory(memory.id, text="new text")
client.delete_memory(memory.id)

# Intelligence
facts = client.extract_facts("John works at Google")
entities = client.extract_entities("Elon Musk founded SpaceX")
patterns = client.detect_patterns(user_id="alice")

# Analytics
habits = client.detect_habits(user_id="alice")
changes = client.track_behavior_changes(user_id="alice")
stats = client.get_memory_statistics(user_id="alice")

# Sessions
session = client.create_session(user_id="alice", title="Planning")
client.complete_session(session.id, generate_summary=True)

# Bi-Temporal Facts (NEW)
from datetime import datetime
from hippocampai.models.bitemporal import BiTemporalQuery
fact = client.store_bitemporal_fact(
    user_id="alice",
    subject="alice",
    predicate="works_at",
    object_value="Acme Corp",
    valid_from=datetime(2024, 1, 1),
)
facts = client.query_bitemporal_facts(BiTemporalQuery(
    user_id="alice",
    valid_at=datetime(2024, 6, 1),
))

# Context Assembly (NEW)
from hippocampai.context.models import ContextConstraints
context = client.assemble_context(
    user_id="alice",
    query="What are Alice's work preferences?",
    constraints=ContextConstraints(token_budget=4000),
)
print(context.final_context_text)

# Custom Schema Validation (NEW)
from hippocampai.schema import SchemaRegistry
registry = SchemaRegistry()
result = registry.validate_entity("person", {"name": "Alice"})

# Relevance Feedback (NEW v0.5.0)
client.rate_recall(
    memory_id=results[0].memory.id,
    user_id="alice",
    query="coffee preferences",
    feedback_type="relevant"
)

# See docs/LIBRARY_COMPLETE_REFERENCE.md for the full method reference

Full API Reference | REST API Reference