Inspiration
Today, more people are facing emotional challenges, but fewer are able to recognize them. Nearly 1 in 8 people globally live with a mental health condition, and numbers continue to rise rapidly. Yet many people lack the tools or language to understand their feelings, let alone seek support. Emotional clarity is a missing layer in everyday mental health. Toweel was created to fill this gap, offering an accessible space to pause, reflect, and understand emotions.
What it does
Toweel is a self-guided tool that helps users pause, reflect, and better understand their emotions. Using an interactive Emotion Wheel, gentle AI-guided conversation, and a growing library of Emotion Cards, Toweel creates a space for thoughtful emotional exploration and support.
Here’s how it works
Users share whatever they’re feeling or thinking, and Toweel responds with step-by-step prompts to help them dig deeper. Based on their input, Toweel draws three emotion cards — each offering a clear emotional insight, body cues, and suggested coping strategies. Users can save or download these sessions as part of their personal emotion journal, empowering them to build emotional clarity before negative emotions become overwhelming.
How we built it
To build Toweel, we used the following technologies:
- Backend
- FastAPI: Served as the backend framework, chosen for its speed and compatibility with async tasks, ideal for a lightweight API service.
- Google Cloud Run: Used to deploy our FastAPI backend as a serverless container, allowing us to scale efficiently without managing infrastructure.
- Google Vertex AI: Integrated text-embedding-005 for vectorization and Gemini 2.0 for tasks like content evaluation, emotion reasoning, and generating reflective feedback via RAG.
- Google Cloud VPC and network settings: Set up a VPC connector and NAT gateway to provide a static IP, ensuring secure, whitelisted access to MongoDB Atlas.
- MongoDB Atlas: Hosted our cloud databases, with one collection for the vectorized dataset and another for storing user queries.
- MongoDB Vector Search: Enabled semantic retrieval of emotion-labelled text by comparing vectorized user input against the GoEmotions dataset.
- Frontend
- React & TailwindCSS: Built a responsive, component-based UI using React and styled efficiently with utility-first TailwindCSS. React Hooks were used extensively for managing state, lifecycle, and animation triggers.
- Firebase Hosting: Used to deploy the frontend with seamless CI/CD integration for quick updates and reliable delivery.
- Framer Motion: Powered fluid, micro-interaction SVG animations and transitions to bring emotion cards and modals to life with smooth UI motion.
- Web Speech API: Implemented native voice-to-text functionality to allow users to input emotional reflections via speech, enhancing accessibility and user engagement.
- React-pdf API: Used to generate downloadable summary reports that reflect user emotion insights in a clean, readable format.
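The retrieval step described above (embed the user's text, then run a semantic search against the vectorized GoEmotions collection) can be sketched roughly as follows. The index name `emotion_index`, the collection/field names, and the oversampling factor are illustrative assumptions, not Toweel's actual schema:

```python
# Sketch of the semantic-retrieval step: build a MongoDB Atlas
# $vectorSearch aggregation pipeline for a pre-computed query vector.
# Index name, field names, and numbers are illustrative guesses.

def build_vector_search_pipeline(query_vector, limit=30):
    """Return an Atlas Vector Search aggregation pipeline."""
    return [
        {
            "$vectorSearch": {
                "index": "emotion_index",    # assumed index name
                "path": "embedding",         # assumed vector field
                "queryVector": query_vector,
                "numCandidates": limit * 5,  # oversample for recall
                "limit": limit,
            }
        },
        {
            "$project": {
                "text": 1,
                "emotion": 1,
                "score": {"$meta": "vectorSearchScore"},
            }
        },
    ]

# Against a live Atlas cluster this would run as (shape only):
# results = db.goemotions.aggregate(build_vector_search_pipeline(vec, 30))
```

Keeping the pipeline as a plain function makes it easy to unit-test the query shape without a database connection.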
Challenges we ran into
- Seamlessly Integrating AI into the User Journey: We wanted Toweel to act as a quiet, human-like, supportive guide, not a chatbot. Designing AI interactions without drawing attention to the "AI-ness" of the tool was a significant challenge, especially as we balanced two distinct modes of interaction: card exploration and conversational guidance.
- Handling large-scale vectorized data within memory limits: The GoEmotions dataset is very large, even after cleaning and deduplication. Vectorizing all of it resulted in extremely high memory usage, far beyond the limits of MongoDB Atlas’ M0 tier, especially with vector search enabled. We tested multiple embedding models and went through many iterations before settling on Google’s text-embedding-005, because it allows reducing vector size without losing accuracy.
- Effective prompts for Gemini: As first-time builders of an AI-integrated backend, one of the biggest challenges we faced was crafting effective prompts for Gemini. Unlike casual use of LLMs, production scenarios require precise, structured prompting to guide the AI towards consistent and interpretable outputs. It took many iterations to get the AI to return professional and effective guidance and advice.
- Emotion Data Flow & Session Handling: From a frontend perspective, coordinating the data flow in a component-based React framework proved complex. Since multiple UI elements rely on shared input and analysis results, we had to carefully manage how information moved between components without breaking the user experience. It was also challenging to keep each conversation tied to the same session_id, especially when users switched between voice and text input; we had to make sure the session stayed consistent across different parts of the app.
- Animation Timing & Scroll Sync: One more tricky part on the frontend was syncing the wheel spin and card draw animations with page behaviour. After a dialogue session ends, the page needs to auto-scroll down to reveal the emotion cards, but ensuring this happens at the right moment, after animations are ready and content has rendered, was challenging. We had to carefully time the scroll with the component updates to avoid cutting off the animation or showing blank space.
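One way to tame the memory problem above is to store shorter vectors: text-embedding-005 can return reduced-dimension embeddings, and a truncated vector can be re-normalized before cosine comparison. A minimal pure-Python illustration of that idea (the toy 4-to-2 reduction is an assumption for demonstration, not our real dimensions):

```python
import math

def truncate_and_normalize(vec, dims):
    """Keep the first `dims` components and re-normalize to unit length,
    so cosine similarity still behaves sensibly after truncation."""
    head = vec[:dims]
    norm = math.sqrt(sum(x * x for x in head)) or 1.0
    return [x / norm for x in head]

def cosine(a, b):
    # Dot product; assumes both inputs are unit vectors.
    return sum(x * y for x, y in zip(a, b))

full = [0.6, 0.8, 0.0, 0.0]              # toy 4-d "embedding"
small = truncate_and_normalize(full, 2)  # pretend 4 -> 2 reduction
print(small)  # [0.6, 0.8], already unit length after truncation
```

Halving the dimensions roughly halves the index's memory footprint, which is what made the free M0 tier workable for us.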
Accomplishments that we're proud of
- Emotionally Inviting UI on a Tight Timeline: We built a custom set of emotion cards with animations and visual assets that made emotional reflection approachable, playful, and trustworthy, all within a limited design and development window.
- Integration of Vector Search and AI: Despite having no prior experience deploying AI models in production, we managed to integrate Gemini into a real-time feedback loop. The semantic search with MongoDB Vector Search worked better than expected, giving us high-quality emotional analogues from the GoEmotions dataset.
- Interactive and Responsive layout: Built with React and TailwindCSS, the app delivers a clean, responsive interface that works smoothly across devices, with fluid animations and state transitions.
- Auto-Generated Emotion Reports: Users receive a downloadable PDF summary of their emotional session, a meaningful takeaway that turns abstract feelings into tangible insights.
What we learned
- Prompt Engineering: This project taught us a great deal about working with LLMs in real-world applications. Prompt engineering turned out to be a key skill. It’s not just about asking questions, but about guiding the model’s reasoning and structure.
- Frontend–Backend Coordination: We gained hands-on experience using console.log() and the Google Cloud console to trace API responses, monitor port activity, and inspect data structures. This helped us better understand how the frontend and backend communicate, align request formats, and quickly catch mismatches.
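The kind of structured prompting we converged on can be illustrated like this; the wording and the JSON schema here are illustrative, not our production prompt:

```python
# Illustrative structured prompt for an LLM: constrain the output to a
# fixed JSON shape so the backend can parse it reliably.
PROMPT_TEMPLATE = """You are a supportive emotional-reflection guide.

User reflection:
{user_input}

Similar labelled examples (for grounding only):
{retrieved_examples}

Return ONLY valid JSON with this exact shape:
{{"emotions": [{{"name": "...", "percent": 0}}, ...]}}
List exactly 3 emotions whose percents sum to 100."""

def build_prompt(user_input, retrieved_examples):
    """Fill the template with the user's text and retrieved snippets."""
    return PROMPT_TEMPLATE.format(
        user_input=user_input.strip(),
        retrieved_examples="\n".join(retrieved_examples),
    )
```

Pinning the output to a schema ("ONLY valid JSON", "exactly 3 emotions") was the single biggest step toward consistent, interpretable responses.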
What's next for Toweel
While Toweel was built as a hackathon prototype, we see strong potential for turning it into a long-term product.
User Accounts
The first thing we want to add is user accounts, so people can track their emotional patterns over time and across devices. Right now, the demo stores queries in local storage only, which is great for privacy, but limits continuity.
Voice Input
We also plan to extend voice input beyond just transcription. By analyzing vocal tone and pitch using datasets such as RAVDESS, we aim to incorporate richer emotional signals. This would enable Toweel to respond not just to what users say, but also to how they sound.
Mobile Responsiveness
We plan to refine the mobile experience further by optimising layout scaling, touch interactions, and animation performance on smaller screens. This will ensure users can fully engage with voice input, emotion cards, and reports seamlessly, whether on a desktop or mobile device.
System Flow Overview
[User Input]
|
|---> [Frontend (React + Firebase Hosting)]
|
|---> Input Capture & Processing:
| • Text input (JavaScript event handling)
| • Speech-to-text (Web Speech API)
| • UI styling & layout (Tailwind CSS)
|        • Input animations (Framer Motion)
|
|---> Sends processed input to backend
↓
[Backend API (FastAPI on Cloud Run)]
|
|---> Step 1: Quality Check (Gemini API - Vertex AI)
| - Emotional depth
| - Clarity
| - Contextual completeness
| - Personal voice
|
|---> Step 2: If passed, vectorize input
| - Use text-embedding-005
|
|---> Step 3: Vector Search in MongoDB Atlas
| - Search top 30 similar sentences
|
|---> Step 4: RAG (Gemini API again)
| - Combine user input + retrieved docs
| - Use VA model filtering
| - Output 3 emotions + percentages
|
|---> Step 5: Final Report Generation (Gemini)
| - Emotion definitions
| - Evidence from user input
| - Reflection + Action Suggestions
|
↓
[Response Sent to Frontend]
|
|---> Result Rendering & Display:
| • Show 3 Emotion Cards on Wheel (React components)
|        • Wheel animations & transitions (Framer Motion)
| • Summary Report layout (Tailwind CSS)
| • User interaction handling (JavaScript)
| • Save input in MongoDB (query collection)
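The five backend steps above can be stitched together in a small orchestrator. In this sketch every step is a stub standing in for the real Gemini and Atlas calls, so the control flow is runnable on its own; the stub logic (word count as a quality gate, fixed emotions) is purely illustrative:

```python
# Skeleton of the five-step backend flow; each stub marks where a real
# implementation would call Vertex AI or MongoDB Atlas.

def quality_check(text):            # Step 1: Gemini quality gate (stubbed)
    return len(text.split()) >= 3   # toy rule standing in for the LLM

def embed(text):                    # Step 2: text-embedding-005 (stubbed)
    return [float(len(text))]

def vector_search(vec, k=30):       # Step 3: Atlas $vectorSearch (stubbed)
    return [{"text": "sample", "emotion": "calm"}] * k

def rag_emotions(text, docs):       # Step 4: RAG + VA filtering (stubbed)
    return [("calm", 50), ("hope", 30), ("worry", 20)]

def final_report(emotions):         # Step 5: report generation (stubbed)
    return {"emotions": emotions, "summary": "stub report"}

def run_pipeline(text):
    if not quality_check(text):
        return {"error": "Please share a little more detail."}
    docs = vector_search(embed(text))
    return final_report(rag_emotions(text, docs))

result = run_pipeline("I feel a bit overwhelmed today")
```

Keeping each step behind its own function made it easy to swap stubs for real service calls one at a time during development.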

