LocalBrain is the protocol to give AI apps your life's context.
Whether it's an agent like Poke or a chat app like Claude, next-gen AI apps rely on accurate personal context. This creates a problem on both sides: for the AI app and for the user.
For AI apps:
- Building and maintaining a system to gather and use personal context eats up engineering time, works only passably in practice, and pulls focus away from shipping the core product.
For users:
- Linking all of your connectors (Gmail, Slack, iMessage, etc) to every AI app you use is high-friction, a privacy risk, and leaves your own context fragmented and inaccessible.
LocalBrain bridges this gap; it automatically organizes personal context from all your connectors into a local, readable knowledge base that any AI app can query to safely understand you.
Inspiration
We were inspired by products like Beeper and Plaid. Both turn an annoying, scattered process into a simple product that makes life a lot better. Every AI app functions best when it has all of your context, but providing your context to every AI app is a scattered, high-friction, and privacy-compromising process. We thought this was an important issue to solve: we're moving toward a world built around AI, and this problem will only grow as AI evolves.
What it does
LocalBrain finds relevant info about you from your online presence via connector plugins, and uses that info to turn your life into an organized, folder-based journal of text files. This journal, a local "knowledge base" of your life, can then be queried by any AI app you allow via MCP or API, and our retrieval reaches a 90% score on LongMemEval.
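To make the folder-based journal idea concrete, here is a minimal sketch of what writing and querying such a knowledge base could look like. The file layout, naming scheme, and keyword-based lookup are illustrative assumptions, not LocalBrain's actual schema or retrieval method:

```python
from pathlib import Path

def write_entry(root: Path, topic: str, date: str, text: str) -> Path:
    """Store one piece of personal context as a plain-text journal file.
    Topic folders and date-named files are an assumed layout."""
    entry = root / topic / f"{date}.md"
    entry.parent.mkdir(parents=True, exist_ok=True)
    entry.write_text(text, encoding="utf-8")
    return entry

def query(root: Path, keyword: str) -> list[str]:
    """Naive retrieval: return relative paths of entries mentioning
    the keyword, which any AI app could then read for context."""
    return [
        str(p.relative_to(root))
        for p in sorted(root.rglob("*.md"))
        if keyword.lower() in p.read_text(encoding="utf-8").lower()
    ]
```

Because the knowledge base is just readable text files on disk, any tool that can list and read files (an MCP server, an HTTP API, or the user themselves) can inspect exactly what an AI app would see.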
How we built it
To turn connector data into a readable knowledge bank, we modeled our ingestion pipeline on SoTA coding agents. Since we're working with agentic file edits and needle-in-a-haystack retrieval, a lot of their functionality carries over. Knowing this, we analyzed how these agents work, which led us to implement techniques like structured prompt chaining, ripgrep-based file retrieval, fuzzy section matching, validation loops, and targeted context windows, and the results turned out to be really good.
Challenges we ran into
The MCP integration was really tricky to figure out. Getting the local MCP server running so that it could execute changes on the knowledge base through the protocol was tedious, but proxying the protocol over a remote HTTP server was the real time sink. We couldn't get the tunnel between local function execution and the remote server working for a long time; we eventually solved it by reading a lot of MCP documentation and, at one point, completely restarting our MCP setup, since we had deeply integrated some incorrect fragments.
Accomplishments that we're proud of
We're really proud of how much work we poured into this, and of how we were able to collaborate through it all. The technical side of this project was pretty tricky, so the fact that we hacked it all together into a working MVP is something we're also proud to show. Also, one of our teammates stayed up for 48 hours straight. Shoutout Taymur.
What we learned
We learned that a good way to make something work is to study an existing product whose core functionality aligns with yours, and to understand how and why it was built that way. Once you understand why something was done before, you gain insight into what's likely to work and what isn't. Combined with some independent tests of your own, it's a good way to iterate quickly.
What's next for LocalBrain
We want to continue developing LocalBrain as an open-source project, and to solve this same problem at scale in the enterprise domain. We think this is genuinely a real problem that will have to be addressed, and we're confident we can take it on.
Built With
- electron
- fastapi
- figma
- mcp
- next.js
- ngrok
- node.js
- python
- tailwind
- typescript
- websockets