Inspiration

Studying alone is hard. Distractions are constant, motivation fluctuates, and most productivity tools feel either too passive or too controlling. We were inspired by the idea of “body doubling” and the Turing City theme, where AGI exists not to replace humans but to quietly support them in everyday life. We asked a simple question: what if an AI could sit next to you while you study, notice when you drift, and help you refocus in a way that feels human, optional, and even a little funny? That question became "Still there?"

What it does

"Still there?" is an AI productivity system built for individual students. During a study session, the user can optionally enable their camera so the system can estimate focus from posture and visual-attention signals. If attention drops, the AI sends light, non-intrusive messages; depending on user settings, these range from calm reminders to humorous or more aggressive nudges.
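As an illustrative sketch of how tone-dependent nudges might be wired up (the tone names and message strings below are made up for this example, not the actual product copy):

```typescript
// Hypothetical nudge picker: the user chooses a tone, and the system draws
// a message from that tone's pool. Tones and messages are illustrative.
type NudgeTone = "calm" | "humorous" | "aggressive";

const NUDGE_MESSAGES: Record<NudgeTone, string[]> = {
  calm: ["Gently checking in: still with it?", "Take a breath and come back when ready."],
  humorous: ["Your notes miss you already.", "The ceiling is not on the exam."],
  aggressive: ["Eyes back on the page.", "Break's over. Focus."],
};

// Cycle through the pool for the configured tone so repeats feel less canned.
function pickNudge(tone: NudgeTone, index = 0): string {
  const pool = NUDGE_MESSAGES[tone];
  return pool[index % pool.length];
}
```

Keeping message selection separate from attention detection means new tones can be added without touching the vision pipeline.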

The system also supports structured study techniques like Pomodoro, voice-based goal setting at the start of a session, and spoken reflections at the end. Everything is customizable. Monitoring is optional, tone is adjustable, and the goal is accountability without pressure.
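The Pomodoro support can be modeled as a simple phase scheduler. The durations and four-block cycle below are the classic Pomodoro defaults, assumed here for illustration rather than taken from our actual settings:

```typescript
// Hypothetical Pomodoro scheduler: 25-minute focus blocks separated by short
// breaks, with a longer break after every fourth block. Numbers are assumptions.
interface PomodoroPhase {
  kind: "focus" | "shortBreak" | "longBreak";
  minutes: number;
}

function nextPhase(
  completedFocusBlocks: number,
  current: PomodoroPhase["kind"],
): PomodoroPhase {
  // Any break is always followed by a focus block.
  if (current !== "focus") return { kind: "focus", minutes: 25 };
  // After a focus block: long break every 4th block, otherwise a short one.
  return completedFocusBlocks % 4 === 0
    ? { kind: "longBreak", minutes: 15 }
    : { kind: "shortBreak", minutes: 5 };
}
```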

How we built it

We built "Still there?" as a web-based application designed around real-time feedback and user control. We use Overshoot AI to analyze posture and visual attention in real time, and we implemented a lightweight attention-state system that favors forgiveness over strict enforcement. Voice input and output are powered by ElevenLabs, enabling spoken goals, reflections, and optional voice nudges.
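The forgiveness-first attention state can be sketched as a tiny state machine with a grace window: a distraction only counts once it persists, and any return to focus resets the clock. The class name, states, and 10-second default are illustrative assumptions, not our exact implementation:

```typescript
// Hypothetical attention-state tracker: per-frame attentive/inattentive
// signals only escalate to "distracted" after a grace window elapses,
// so brief glances away are forgiven.
type AttentionState = "focused" | "drifting" | "distracted";

class AttentionTracker {
  private driftStart: number | null = null;

  constructor(private graceMs: number = 10_000) {}

  // Feed one per-frame signal with its timestamp; returns the current state.
  update(attentive: boolean, nowMs: number): AttentionState {
    if (attentive) {
      this.driftStart = null; // any return to focus resets the grace clock
      return "focused";
    }
    if (this.driftStart === null) this.driftStart = nowMs;
    return nowMs - this.driftStart >= this.graceMs ? "distracted" : "drifting";
  }
}
```

Keeping the tracker as pure state-plus-timestamps (rather than timers) makes it easy to unit-test and to rerun against recorded sessions.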

We designed the system to be modular: attention detection, browser monitoring, voice feedback, and study techniques can all be enabled or disabled independently.
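That modularity can be captured in a settings object of independent flags. The flag names mirror the modules listed above, but the actual configuration shape and defaults are assumptions for this sketch:

```typescript
// Hypothetical session settings: each module toggles independently, and
// nothing camera-related runs unless the user has explicitly opted in.
interface SessionSettings {
  attentionDetection: boolean;
  browserMonitoring: boolean;
  voiceFeedback: boolean;
  pomodoro: boolean;
}

const DEFAULT_SETTINGS: SessionSettings = {
  attentionDetection: false, // camera use is strictly opt-in
  browserMonitoring: false,
  voiceFeedback: true,
  pomodoro: true,
};

// List the modules that are currently switched on.
function enabledModules(settings: SessionSettings): string[] {
  return (Object.keys(settings) as (keyof SessionSettings)[])
    .filter((key) => settings[key]);
}
```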

Challenges we ran into

One of the biggest challenges was avoiding a “surveillance” feeling. Camera-based attention tracking can easily feel invasive if handled poorly, so we spent significant time designing opt-in flows, adjustable sensitivity, and supportive language.

Another challenge was accuracy. Humans naturally move, stretch, or look away while still being productive. Balancing sensitivity with forgiveness required careful tuning and thoughtful defaults. From a technical perspective, coordinating real-time video analysis, AI feedback, and UI responsiveness within hackathon constraints was also challenging.
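One common way to tolerate natural movement, sketched here as an assumption rather than our exact tuning, is to smooth per-frame attention scores with an exponential moving average so a quick stretch or glance away barely moves the estimate. The 0-to-1 scoring scale and the alpha value are illustrative:

```typescript
// Hypothetical smoothing step: an exponential moving average over per-frame
// attention scores in [0, 1]. A small alpha makes the estimate slow to react,
// so momentary dips (stretching, looking away) barely register.
function smoothAttention(scores: number[], alpha = 0.1): number {
  if (scores.length === 0) return 1; // assume focused until there is evidence
  let ema = scores[0];
  for (const s of scores.slice(1)) {
    ema = alpha * s + (1 - alpha) * ema;
  }
  return ema;
}
```

A thresholding step downstream would then compare the smoothed value, not the raw per-frame signal, against the sensitivity the user chose.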

Accomplishments that we're proud of

  • Building a real-time AI system that feels supportive rather than controlling
  • Successfully integrating live video, computer vision, and voice feedback
  • Creating a fully customizable experience where the user stays in control
  • Designing AI interactions that can be serious, gentle, or funny depending on preference
  • Delivering a polished, demo-ready product under tight time constraints

What we learned

We learned that productivity tools are as much about psychology as they are about technology. Small, well-timed nudges are more effective than constant enforcement. We also learned how important transparency and user control are when AI operates in personal spaces. From a technical standpoint, we gained experience integrating real-time systems, monitoring AI behavior, and designing AI interactions that fail gracefully. Most importantly, we learned how to build AI that supports humans without trying to dominate their workflow.

What's next for Still there?

Next, we want to improve personalization by adapting feedback styles over time based on what actually helps each student stay focused. We also plan to refine distraction detection, expand study techniques, and explore deeper insights into focus patterns, always keeping privacy and user control at the center.

In the long term, Still there? represents how AI can exist in a futuristic city like Turing City: quietly present, respectful of boundaries, and focused on helping people show up for themselves.
