Inspiration
Modern AI chatbots are stateless by default — they forget context, repeat questions, and fail to explain why they respond the way they do. We were inspired by this gap between how humans remember (events, preferences, decisions) and how most AI systems operate today. Our goal was to build an AI agent with transparent, persistent memory that can reason over past interactions instead of treating every conversation as a blank slate.
What it does
Our project is an AI agent with structured, persistent memory powered by a graph database.
It:
- Stores conversations, user preferences, and actions as a memory graph
- Distinguishes between short-term, long-term (episodic), and semantic memory
- Uses Neo4j to model relationships between users, messages, topics, and decisions
- Allows the agent to explain why it answered something by showing the memory path
- Prevents repeated questions and maintains continuity across sessions
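The memory graph described above can be sketched as a minimal in-memory model. The names here (`MemoryType`, `MemoryNode`, `DERIVED_FROM`, and so on) are illustrative assumptions for the sketch, not the project's actual Neo4j schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class MemoryType(Enum):
    SHORT_TERM = "short_term"  # current-session context
    EPISODIC = "episodic"      # long-term record of events
    SEMANTIC = "semantic"      # distilled facts and preferences

@dataclass
class MemoryNode:
    id: str
    kind: MemoryType
    content: str

@dataclass
class MemoryGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (src_id, relation, dst_id)

    def add(self, node: MemoryNode) -> None:
        self.nodes[node.id] = node

    def relate(self, src: str, relation: str, dst: str) -> None:
        self.edges.append((src, relation, dst))

# A semantic fact linked back to the episodic event it was derived from,
# mirroring how a graph store keeps answers explainable.
g = MemoryGraph()
g.add(MemoryNode("pref1", MemoryType.SEMANTIC, "user prefers Python"))
g.add(MemoryNode("msg1", MemoryType.EPISODIC, "user said 'I mostly write Python'"))
g.relate("pref1", "DERIVED_FROM", "msg1")
```

In the real system these nodes and edges would live in Neo4j; the point of the sketch is the shape of the data, not the storage.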
How we built it
- Neo4j as the core memory store (entities + relationships)
- MemMachine-style memory layer to classify and manage memory types
- LLM API for reasoning and natural language understanding
- Backend (Python) to handle memory writes, reads, and promotion rules
- Graph RAG to retrieve structured context instead of flat text chunks
- Docker for local deployment and reproducibility
Memory is written after each interaction and selectively retrieved before each response, ensuring relevance without context overload.
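The "selectively retrieved" step can be illustrated with a toy relevance scorer. This token-overlap ranking is a stand-in assumption; the actual system retrieves structured context via graph queries (Graph RAG) rather than scoring flat strings:

```python
def retrieve(memories: list[str], query: str, k: int = 3) -> list[str]:
    """Rank stored memory strings by token overlap with the query
    and return the top k, so the prompt stays relevant and small."""
    q = set(query.lower().split())
    scored = sorted(
        memories,
        key=lambda m: len(q & set(m.lower().split())),
        reverse=True,
    )
    return scored[:k]

mems = [
    "user prefers dark mode",
    "user asked about pricing yesterday",
    "user's project uses Neo4j",
]
print(retrieve(mems, "which database does the user's project use", k=1))
# → ["user's project uses Neo4j"]
```

Capping retrieval at `k` items is what prevents context overload: only the highest-relevance memories reach the model.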
Challenges we ran into
- Designing a memory schema that avoids duplication and memory bloat
- Deciding what should be remembered vs what should be forgotten
- Ensuring graph queries stay fast as memory grows
- Avoiding hallucinations by grounding responses strictly in stored memory
- Managing Docker containers, images, and persistence correctly under time pressure
Accomplishments that we're proud of
- Built a working persistent memory system, not just session history
- Visualized agent memory as an explainable graph
- Implemented clear separation between episodic and semantic memory
- Demonstrated multi-hop reasoning using graph traversal
- Delivered a clean, demo-ready system within hackathon constraints
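The multi-hop reasoning above amounts to finding a path through the memory graph and surfacing it as the explanation. A minimal sketch with breadth-first search over `(src, relation, dst)` triples; the entity names are made up for illustration:

```python
from collections import deque

def explain_path(edges, start, goal):
    """BFS over (src, relation, dst) triples; returns the hop-by-hop
    memory path justifying an answer, or None if no path exists."""
    adj = {}
    for src, rel, dst in edges:
        adj.setdefault(src, []).append((rel, dst))
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for rel, nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [f"-{rel}->", nxt]))
    return None

edges = [
    ("alice", "ASKED", "q1"),
    ("q1", "ABOUT", "neo4j"),
    ("neo4j", "CHOSEN_IN", "decision7"),
]
print(explain_path(edges, "alice", "decision7"))
# → ['alice', '-ASKED->', 'q1', '-ABOUT->', 'neo4j', '-CHOSEN_IN->', 'decision7']
```

In Neo4j the same traversal is a single variable-length Cypher match, with the returned path rendered directly as the "why" behind an answer.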
What we learned
- Memory is more than storage — it’s about policy, structure, and lifecycle
- Graph databases are extremely powerful for explainable AI systems
- Most “AI memory” problems are actually system design problems
- Proper memory management dramatically improves user trust and experience
What's next for Untitled
- Memory decay and confidence scoring
- Multi-agent shared memory graphs
- Fine-grained access control per user
- Deeper integrations with tools and external data sources
- Production-ready memory debugging and observability tools
Built With
- memmachine
- neo4j
- openai
- strands