Inspiration
In a world of constant digital noise, we rarely see how we actually come across in conversations. We built Empathy Mirror to help people understand their emotional presence in real time — to make communication more mindful, kinder, and more authentic. It’s inspired by research in affective computing, active listening therapy, and the idea that small awareness loops can shift behavior faster than long feedback sessions.
What it does
Empathy Mirror uses your camera and microphone to sense emotional cues — facial affect, voice tone, and speech rhythm — and reflects them back through a gentle, real-time overlay. When you speak, the system shows live emotional feedback (“calm”, “tense”, “warm”), summaries of conversational tone, and LLM-generated micro-insights like “You sound more open than before”. It runs locally or in the cloud with full privacy controls — no raw video leaves your device.
How we built it
• Backend: FastAPI + WebSocket core for realtime streaming, orchestrating three microservices:
  • Affect API (facial & vocal emotion inference)
  • LLM Overlay (language model that translates affect vectors into human-readable reflections)
  • MCP Server (message coordination + mirror loop integration)
• Frontend: React + Electron desktop app using WebRTC and a lightweight HUD for the overlay.
• Data Flow: Affect → Core Backend → LLM Overlay → WebSocket → UI.
• Infra: Dockerized microservices, Redis for pub/sub, optional Postgres, Prometheus telemetry.
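To make the LLM Overlay step in the data flow concrete, here is a minimal sketch of how an affect vector might be collapsed into one of the human-readable labels shown in the HUD. The function name, dimension names, and thresholds are illustrative assumptions, not the actual service code.

```python
# Hypothetical pre-processing step for the LLM Overlay: collapse a raw
# affect vector (scores in [0, 1]) into a display label like the ones the
# HUD shows. Dimension names and thresholds are illustrative assumptions.

def affect_to_label(affect: dict[str, float]) -> str:
    """Map an affect vector to a human-readable label."""
    arousal = affect.get("arousal", 0.0)
    valence = affect.get("valence", 0.0)
    if arousal > 0.7 and valence < 0.4:
        return "tense"
    if valence > 0.6:
        return "warm"
    return "calm"
```

In the real pipeline this label would be passed to the language model alongside recent conversation context to produce micro-insights, rather than shown raw.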
Challenges we ran into
• Making the overlay feel helpful instead of judgmental; wording and colors mattered a lot.
• Integrating three services cleanly with WebSocket back-pressure and error handling.
• Debugging cross-process device access (camera/mic permissions).
• Designing privacy defaults that still allow useful analytics.
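One common way to handle the WebSocket back-pressure mentioned above is to keep only the most recent affect frames and drop stale ones rather than queueing without bound, so a slow UI consumer always sees near-realtime data. This bounded-buffer sketch is an assumption about the approach, not the project's actual implementation; the class name, capacity, and frame shape are hypothetical.

```python
from collections import deque

# Sketch of back-pressure handling for the affect stream: a bounded buffer
# that silently evicts the oldest frames when the consumer falls behind.
# Capacity and frame shape are illustrative assumptions.

class FrameBuffer:
    def __init__(self, capacity: int = 3) -> None:
        # deque(maxlen=...) evicts the oldest entry automatically when full.
        self._frames: deque = deque(maxlen=capacity)

    def push(self, frame: dict) -> None:
        """Add a new affect frame, dropping the oldest if at capacity."""
        self._frames.append(frame)

    def drain(self) -> list[dict]:
        """Return all buffered frames (newest last) and clear the buffer."""
        frames = list(self._frames)
        self._frames.clear()
        return frames
```

Dropping stale frames is acceptable here because each frame supersedes the last; for non-supersedable messages a blocking or acknowledged queue would be needed instead.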
Accomplishments that we’re proud of
• Created a visually clean overlay that runs smoothly even on low-end laptops.
• Demonstrated how empathy feedback can be both real-time and respectful.
What’s next for Empathy Mirror
• Conversational coaching mode: LLM gives subtle prompts (“try pausing longer,” “mirror back what they said”).
• Open API for researchers studying emotion, bias, and communication patterns.
• Accessibility features: captions, voice feedback, haptics.