Inspiration

Emotional abuse and relationship manipulation are often completely invisible. Unlike physical harm, they leave no visible marks, making them incredibly hard to recognize, especially for the person experiencing them. We realized that many people in unhealthy relationships (romantic, familial, or professional) don't have access to safe, private tools that help them reflect on their reality. Traditional resources like hotlines can feel like too high a barrier, or might not be safe to access openly if a partner is checking their phone.

Thus, we had the idea to create Dear Diary: a relationship safety companion disguised as a cozy, aesthetically warm personal journal. We wanted to build a tool that feels "like a warm hug," but has the technical power to gently identify red flags, teach users about healthy boundaries, and provide support without blowing their cover.

What it does

(づ  ̄ ³ ̄)づ~♥ Semantic Pattern Recognition: Users log traditional journal entries or paste conversations. Under the hood, the app analyzes the text for subtle emotional manipulation tactics (gaslighting, love bombing, minimizing, DARVO, isolation).

(ღ˘⌣˘ღ) Empathetic AI Responses: It doesn't just flag abuse like a clinical robot. It leads with validation, gently names the manipulation tactic, explains why it is harmful, and provides a reflection question.

♪♪(O*゜∇゜)O~♪♪ Emotion-Aware Voice Output: Using ElevenLabs, the app speaks its responses in a custom voice cloned from our three team members. The voice parameters dynamically adjust based on the user's detected distress level: sounding calm and grounding during emergencies, or warm and gentle during normal reflections.
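As a rough sketch of how that adjustment works, the detected distress level maps onto ElevenLabs `voice_settings` (the level names and exact values here are illustrative, not our production tuning):

```javascript
// Illustrative sketch: map a detected distress level to ElevenLabs
// voice_settings. Higher stability sounds steadier and more grounding;
// lower stability (and higher style) sounds more expressive and warm.
// The level names and numbers are hypothetical, not our exact tuning.
function voiceSettingsFor(distressLevel) {
  switch (distressLevel) {
    case "crisis": // emergency: calm, steady, grounding
      return { stability: 0.9, similarity_boost: 0.8, style: 0.1 };
    case "elevated": // noticeably upset: warm but measured
      return { stability: 0.7, similarity_boost: 0.8, style: 0.3 };
    default: // everyday reflection: warm and gentle
      return { stability: 0.5, similarity_boost: 0.75, style: 0.5 };
  }
}
```

The returned object is sent as the `voice_settings` payload alongside the response text in the text-to-speech request.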

(⊙O⊙)! Longitudinal Memory (RAG): The AI doesn't forget between sessions. It remembers your past entries to identify long-term patterns of manipulation across weeks or months.

How we built it

We built a React/Vite frontend utilizing Tailwind CSS to create our soft, non-clinical design language, paired with smooth GSAP animations to bring the cozy aesthetic to life.

The heavy lifting happens in our Node.js/Express backend, which is dockerized and deployed to Google Cloud Run for auto-scaling. We implemented a Semantic RAG (Retrieval-Augmented Generation) pipeline to give our AI long-term memory.

  • We use Google's text-embedding-004 model to turn every journal entry into a 768-dimensional embedding vector.
  • We store these vectors in a MongoDB Atlas database.
  • When a user writes a new entry, we use Atlas $vectorSearch to instantly retrieve their most semantically relevant past memories.
  • We inject this context into Gemini 1.5 Flash, allowing the LLM to detect long-term emotional abuse patterns rather than just analyzing a single event.
  • Finally, the AI's response is passed to the ElevenLabs API for dynamic text-to-speech generation before returning to the frontend.
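The retrieval step above boils down to one aggregation call against Atlas. Here's a minimal sketch, assuming an index named `journal_embeddings` over an `embedding` field (our actual index and field names may differ):

```javascript
// Sketch of the memory-retrieval step: build the aggregation pipeline
// that fetches a user's most semantically relevant past entries via
// Atlas $vectorSearch. Index/field names are illustrative assumptions.
function buildMemoryPipeline(userId, queryVector, limit = 5) {
  return [
    {
      $vectorSearch: {
        index: "journal_embeddings", // name of the Atlas vector index
        path: "embedding",           // field holding the 768-dim vector
        queryVector,                 // embedding of the new entry
        numCandidates: limit * 20,   // ANN candidate pool size
        limit,                       // how many memories to return
        filter: { userId },          // only this user's memories
      },
    },
    // surface the similarity score alongside each entry
    {
      $project: {
        text: 1,
        createdAt: 1,
        score: { $meta: "vectorSearchScore" },
      },
    },
  ];
}
```

The resulting documents are concatenated into the Gemini prompt as "past context" before the new entry is analyzed.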

Challenges we ran into

Our biggest technical hurdle was implementing the Semantic RAG pipeline. Giving the AI long-term memory by sending the user's entire journal history to the Gemini API every time was too slow and consumed too many tokens. We had to learn how to create mathematical embeddings, properly configure vector indexes inside MongoDB Atlas, and write complex aggregation pipelines to perform Approximate Nearest Neighbor (ANN) searches.
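The vector index configuration was the fiddly part. A sketch of what the definition looks like (assuming our embedding field is named `embedding`; the index itself is created through the Atlas UI or CLI, not application code):

```javascript
// Sketch of an Atlas Vector Search index definition. numDimensions
// must match text-embedding-004's 768-dim output; cosine similarity
// suits normalized text embeddings. The filter field lets
// $vectorSearch restrict the ANN search to a single user's entries.
const vectorIndexDefinition = {
  fields: [
    {
      type: "vector",
      path: "embedding",      // assumed field name for the stored vector
      numDimensions: 768,     // must equal the embedding model's size
      similarity: "cosine",
    },
    { type: "filter", path: "userId" }, // enables per-user pre-filtering
  ],
};
```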

Additionally, we tried deploying on DigitalOcean, doing plenty of research and documentation reading along the way. However, something was wrong with the DigitalOcean signup link: it refused every payment method we tried (all types of cards, for all three of us), so we couldn't integrate it directly into our project. But we do have the setup files ready for whenever DigitalOcean works and we're able to get our credits!

We also faced challenges protecting our API quotas for the demo, which we solved by writing custom Express middleware for IP-based rate limiting and request timeouts that abort hanging upstream calls.
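A minimal sketch of that middleware (window size and limit are illustrative, not our demo values; hung upstream calls can additionally be cut off with `AbortSignal.timeout`):

```javascript
// Sketch of IP-based rate limiting as Express middleware. Tracks
// request timestamps per IP in memory and rejects bursts with 429.
// Values are illustrative; a production setup would also evict stale
// IP entries rather than grow the Map forever.
const hits = new Map(); // ip -> array of recent request timestamps

function rateLimit({ windowMs = 60_000, max = 10 } = {}) {
  return (req, res, next) => {
    const now = Date.now();
    const recent = (hits.get(req.ip) || []).filter((t) => now - t < windowMs);
    if (recent.length >= max) {
      return res.status(429).json({ error: "Too many requests" });
    }
    recent.push(now);
    hits.set(req.ip, recent);
    next();
  };
}
```

Mounted with something like `app.use("/api", rateLimit({ max: 10 }))`, this keeps one demo visitor from burning through the shared Gemini and ElevenLabs quotas.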

Accomplishments that we're proud of

We are incredibly proud of how seamless our Cloud architecture ended up being. Watching a frontend request hit Vercel, pass to Google Cloud Run, retrieve vector data from MongoDB Atlas, fetch an LLM response from Gemini, synthesize audio through ElevenLabs, and render beautifully on screen, all in under a few seconds, was a massive technical victory.

But our greatest accomplishment is seeing the emotional impact of the product. We are incredibly proud of how deeply humanized our AI agent's responses are; it truly feels like talking to an empathetic therapist without having to pay hundreds of dollars per session. It honestly feels like our younger selves are healing just from interacting with DD.

What we learned

We learned an incredible amount across the entire stack this weekend:

  • Semantic RAG Implementation: We learned how to effectively implement Retrieval-Augmented Generation to give our AI long-term memory: translating text into high-dimensional vector embeddings and running vector searches in MongoDB Atlas, vastly improving our AI's accuracy and context.
  • Prompt Engineering & Empathy: We learned how to deeply refine our Gemini system prompts so the AI could recognize nuanced human emotions, output reliable JSON data, and tailor its responses to be highly humanized (e.g., smoothly shifting between a "casual friend" persona and a "grounding therapist" persona).
  • Voice Synthesis: We learned how to utilize the ElevenLabs API to make our AI feel truly human, including dynamically adjusting voice stability based on the user's detected distress level.
  • Advanced UI Animations: We learned how to build dynamic, interactive, and aesthetic frontend animations using GSAP (GreenSock) in our React/Vite app to create the soft, comforting "warm hug" vibe we were aiming for. And most importantly, we learned that sleep and having fun are the two most crucial things for coding well!

What's next for DD: Dear Diary

If we continue development, our roadmap includes:

  • Advanced UI & Interactive Elements: We want to make our interface even more intuitive and engaging by adding interactive 3D elements to the website, pushing our warm, cozy aesthetic even further.
  • Clinical-Grade AI Refinement: We plan to refine our AI agent significantly with advanced prompt engineering and fine-tuning, allowing it to understand human emotions and analyze conversational nuance with the accuracy of a licensed therapist/doctor.
  • Advanced RAG Architecture: We want to research and implement more robust ways to handle our Retrieval-Augmented Generation. Pure vectorization can sometimes lead to context loss, so we plan to explore building a ReAct agent using LangChain and implementing BM25 (hybrid search) for our RAG pipeline to ensure zero loss of critical user context.
  • Deployed App: We want to ship DD as an app on both iOS and Android. None of us had money this weekend (D;), so we couldn't afford to get the app actually deployed to the stores; instead, we built a responsive website so everyone has a working link to test out (yes, accessible!!). In the future, we definitely want to make DD even more accessible and available on all app stores.
  • Safety Features: Expanding our "Covert Mode" toggle, allowing the app to instantly reskin itself to look like an unrelated productivity or recipe app to ensure total safety for survivors whose devices might be monitored. We truly hope DD can become a real, impactful safety resource for women everywhere.
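For the hybrid-search item on the roadmap, one candidate for merging BM25 keyword results with vector results is reciprocal rank fusion (RRF), which combines ranked lists without needing their scores to be comparable. A minimal sketch:

```javascript
// Sketch of reciprocal rank fusion (RRF): combine multiple ranked
// lists of document ids (e.g. one from BM25, one from vector search)
// into a single ranking. k = 60 is the conventional damping constant.
function reciprocalRankFusion(rankings, k = 60) {
  const scores = new Map();
  for (const ranked of rankings) {
    ranked.forEach((id, i) => {
      // documents ranked highly in any list accumulate more score
      scores.set(id, (scores.get(id) || 0) + 1 / (k + i + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}
```

A document that appears in both the keyword ranking and the vector ranking rises to the top, which is exactly the behavior we'd want to stop critical user context from being lost by pure vectorization.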
