Introduction
Imagine you’re using multiple AI agents in the same workspace — one for personal journaling and another for scheduling meetings. If all agents can freely access shared stateful memory, your scheduling agent could accidentally read private thoughts or personal data stored by your journaling agent.
By encrypting and restricting access, our system ensures that each agent only sees what it’s meant to. This protects sensitive data (like messages, credentials, or private notes) and prevents unintended data leakage between agents — a crucial step as agent ecosystems grow and start working together on shared platforms.
Inspiration
Inspired by CS 161: Computer Security at UC Berkeley, we noticed a gap in how memory is managed for agents in Letta. Memory blocks storing user conversations can contain sensitive data, yet the current implementation allows any agent to access all stateful memory in the system, even memory that was never shared with it. This was a perfect opportunity to apply cryptographic techniques in an innovative way: tackling this problem let us combine security concepts with a novel use case for agent memory.
What it does
Our project securely stores memory blocks created during a user’s conversation with an agent via encryption and authentication, preventing access by other agents. Memory can be shared with a specific agent using hybrid encryption, ensuring that only the intended recipient agent(s) can access the data.
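The "encryption and authentication" step can be sketched as encrypt-then-MAC. The snippet below is a minimal, illustrative stand-in using only the Python standard library (a SHA-256 counter-mode keystream plus HMAC); a production system would instead use an AEAD cipher such as AES-GCM from a standard cryptographic library. Function names like `encrypt_block` are our own illustration, not Letta APIs.

```python
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Counter-mode keystream derived from SHA-256 (illustrative only;
    # real code would use AES-GCM or ChaCha20-Poly1305).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_block(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    """Encrypt a memory block, then MAC it so tampering is detectable."""
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()  # encrypt-then-MAC
    return nonce + ct + tag

def decrypt_block(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    """Verify the MAC first; only then decrypt the block contents."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed: wrong key or tampered block")
    return bytes(a ^ b for a, b in zip(ct, _keystream(enc_key, nonce, len(ct))))
```

An agent without the right keys cannot read the block, and any modification of the stored ciphertext is caught by the MAC check before decryption.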
How we built it
We used Letta as our primary platform, building custom tooling that enables agent communication and memory sharing. Memory blocks are encrypted with symmetric (secret-key) encryption, and we maintain a keystore to manage cryptographic keys and agent identities, tracking which agent owns which memory block. To share memory securely between agents, we implemented hybrid encryption: the block is encrypted under a symmetric key, which is in turn encrypted under the recipient agent's public key. All encryption and key management are handled through standard cryptographic libraries.
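The hybrid-encryption flow described above could look roughly like this sketch, assuming the `cryptography` package (the specific choice of Fernet for the symmetric layer and RSA-OAEP for key wrapping is our assumption, not confirmed project detail):

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Each agent holds an RSA key pair; a keystore maps agent IDs to public keys.
recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def share_block(plaintext: bytes, recipient_pub) -> tuple[bytes, bytes]:
    """Hybrid encryption: a fresh symmetric key encrypts the memory block,
    and the recipient's public key wraps that symmetric key."""
    sym_key = Fernet.generate_key()
    ciphertext = Fernet(sym_key).encrypt(plaintext)
    wrapped_key = recipient_pub.encrypt(sym_key, OAEP)
    return ciphertext, wrapped_key

def receive_block(ciphertext: bytes, wrapped_key: bytes, recipient_priv) -> bytes:
    """Only the holder of the matching private key can unwrap and decrypt."""
    sym_key = recipient_priv.decrypt(wrapped_key, OAEP)
    return Fernet(sym_key).decrypt(ciphertext)
```

Because only the intended recipient's private key can unwrap the symmetric key, other agents in the same workspace see only opaque ciphertext.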
Challenges we ran into
We encountered challenges implementing message passing between agents: making sure they could use the tools correctly, send and receive memory blocks, and perform encryption and decryption so the block data was recovered intact. The design itself was also intricate given the nature of the problem, and finding a way to implement it securely was a fun challenge!
Accomplishments that we’re proud of
We are proud of our design process, iterating and refining to tackle cryptographic challenges applied to an exciting problem of agent memory sharing. Learning to use Letta for the first time was an exciting challenge, and we enjoyed building a solution that addresses gaps in current agent capabilities. Above all, we are proud of our team’s collaboration and the fun we had creating this project together. It was satisfying to see how we were able to use what we learned in school to build such a cool solution to a very real-world problem.
What we learned
We gained hands-on experience building custom tools in Letta and interfacing them with agents. This included designing secure cryptographic sharing and implementing reliable message passing to enable agents to communicate and share blocks effectively. We learned to communicate and work together as a team (apparently explaining complicated design ideas is quite hard).
What’s next for Secure Shared Memory for Agents
The next step for our project is implementing revocation: currently, once a memory block has been shared, access cannot be taken back. By rotating the encryption keys and re-encrypting the block, we can ensure that only currently authorized agents can access it going forward, adding an important layer of control and security.
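Revocation by re-keying could be modeled as follows. This is a toy access-control sketch (class and method names are hypothetical): revoking an agent rotates the block's key and redistributes the new key only to the agents that remain authorized, so a cached copy of the old key no longer grants access.

```python
import secrets

class MemoryBlockACL:
    """Toy model of revocation via key rotation. A real system would also
    re-encrypt the block under the new key and re-wrap it per agent."""

    def __init__(self, owner: str):
        self.key_version = 1
        self.current_key = secrets.token_bytes(32)
        self.authorized = {owner: self.current_key}  # agent id -> key copy

    def grant(self, agent_id: str) -> None:
        self.authorized[agent_id] = self.current_key

    def revoke(self, agent_id: str) -> None:
        self.authorized.pop(agent_id, None)
        # Rotate the key so any cached copy of the old key is useless
        # for content encrypted from this point on.
        self.key_version += 1
        self.current_key = secrets.token_bytes(32)
        for agent in self.authorized:
            self.authorized[agent] = self.current_key

    def can_read(self, agent_id: str) -> bool:
        return self.authorized.get(agent_id) == self.current_key
```

One caveat of this lazy approach: a revoked agent may retain anything it decrypted before revocation, so rotation protects future writes rather than past reads.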
Built With
- agents
- ai
- cryptography
- gpt
- letta
- python
