Inspiration
Parents who are separated from their children because of work, divorce, immigration, or military deployment often rely on video calls to stay connected. But while these tools are good for a quick chat, they don't support familiar, comforting routines like reading a bedtime story together. The passive nature of a standard video call struggles to hold a young child's attention. This inspired us to ask: how could we transform that passive screen time into an active, shared experience that fosters connection and learning? We envisioned a tool that would help a parent who is away remain a loving, present part of their child's daily life.
What it does
We created On The Go Storyteller, an interactive reading platform that bridges the distance between parents and children. When a parent and child log in, they enter a shared virtual reading room with a live, peer-to-peer video call. The parent's screen acts as a control panel, allowing them to guide the narrative line by line.
As the parent reads, the story text appears in real-time on both screens. When a story character has a line of dialogue, their avatar on the child's screen animates and "speaks" the line using text-to-speech, bringing the story to life. This creates a magical, shared experience that keeps the child engaged and makes storytime a fun, collaborative activity, no matter the miles between them.
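To give a flavor of how each character gets a recognizable voice, here is a minimal sketch of a per-character voice table for the browser's Web Speech API. The character names and pitch/rate values are hypothetical, chosen only for illustration; the commented browser calls show roughly how the child's client would use them.

```javascript
// Hypothetical per-character voice settings (names and values are
// illustrative, not the actual story data).
const VOICES = {
  fox:      { pitch: 1.6, rate: 1.1 }, // quick, high-pitched
  bear:     { pitch: 0.7, rate: 0.9 }, // slow and deep
  narrator: { pitch: 1.0, rate: 1.0 }, // neutral default
};

function voiceFor(character) {
  // Fall back to the narrator voice for unknown speakers.
  return VOICES[character] || VOICES.narrator;
}

// In the browser, the child's client would do roughly:
//   const u = new SpeechSynthesisUtterance(line.text);
//   Object.assign(u, voiceFor(line.speaker));
//   speechSynthesis.speak(u);
```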
How we built it
Our stack was chosen for performance, real-time interactivity, and simplicity.
- WebRTC: At the core of our application is a direct, peer-to-peer video and audio connection powered by WebRTC. This ensures a low-latency, high-quality call without routing media through our server.
- Node.js & Socket.IO for Real-Time Events: We used a lightweight Node.js and Express server with Socket.IO for two critical functions. First, it acts as the "signaling server" that allows the parent's and child's browsers to find each other and negotiate the WebRTC connection. Second, it serves as a real-time message broker: when the parent advances the story, a `story_line` event is sent to the server, which instantly broadcasts it to the child's client, keeping both views perfectly synchronized.
- Client-Side Text-to-Speech: To make the characters feel alive, we used the browser's built-in Web Speech API on the child's client. This voices the character dialogue with distinct pitches and rates, adding a layer of immersion without extra server load.
- Two Synchronized Views: We built the entire frontend with vanilla HTML, CSS, and JavaScript. This allowed us to craft two distinct user interfaces—a control-focused view for the parent and an engaging, visual-first view for the child—and manage the state between them precisely.
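The `story_line` relay described above can be sketched with a tiny stand-in event broker. This is not our actual Socket.IO code; the `Room` class, `broadcast` method, and payload shape are hypothetical, but the pattern is the same: the parent's action is sent to the server, which forwards it to every other client in the room.

```javascript
// Minimal stand-in for the Socket.IO room relay (hypothetical names).
class Room {
  constructor() { this.clients = []; }
  join(handler) { this.clients.push(handler); }
  // Forward an event to every client except the sender.
  broadcast(event, payload, sender) {
    for (const c of this.clients) {
      if (c !== sender) c(event, payload);
    }
  }
}

const room = new Room();
const childView = []; // lines mirrored on the child's screen

const childHandler = (event, payload) => {
  if (event === 'story_line') childView.push(payload.text);
};
room.join(childHandler);

// The parent advances the story; the server relays it to the child.
const parentSends = (text, speaker) =>
  room.broadcast('story_line', { text, speaker }, null);

parentSends('Once upon a time...', 'narrator');
parentSends('"Hello!" said the fox.', 'fox');
console.log(childView.length); // 2 — both lines arrived at the child's view
```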
Challenges we ran into
Implementing WebRTC from scratch was our biggest challenge. The signaling process—orchestrating the exchange of session descriptions (SDP) and network candidates (ICE) before a peer-to-peer connection can be established—is incredibly complex. Debugging why a connection failed required a deep understanding of the WebRTC lifecycle and careful event handling on both the client and server.
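One ordering bug we chased is common enough to be worth illustrating: ICE candidates can arrive over the signaling channel before the remote session description has been set, and they must be buffered rather than dropped. The sketch below models only that ordering logic with hypothetical names; real code would call `RTCPeerConnection.setRemoteDescription` and `addIceCandidate` instead of mutating a plain object.

```javascript
// Simplified model of signaling-message ordering (hypothetical sketch,
// not our production code).
function createPeerState() {
  return { remoteDesc: null, pendingIce: [] };
}

// Returns the list of ICE candidates that can be applied after this message.
function onSignal(state, msg) {
  if (msg.type === 'offer' || msg.type === 'answer') {
    state.remoteDesc = msg.sdp;
    // Candidates that arrived early can be applied now.
    return state.pendingIce.splice(0);
  }
  if (msg.type === 'ice') {
    if (!state.remoteDesc) {
      // Classic failure mode: a candidate arriving before the remote
      // description must be buffered, not dropped.
      state.pendingIce.push(msg.candidate);
      return [];
    }
    return [msg.candidate];
  }
  return [];
}

const peer = createPeerState();
console.log(onSignal(peer, { type: 'ice', candidate: 'c1' })); // [] (buffered)
console.log(onSignal(peer, { type: 'offer', sdp: 'sdp-a' }));  // ['c1'] released
```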
Accomplishments that we're proud of
We are incredibly proud of creating two distinct yet seamlessly connected user interfaces. The parent's UI is functional and empowering, giving them full control of the narrative pace. The child's UI is immersive and delightful, designed to capture their imagination with animated avatars and spoken dialogue. Engineering these two views to communicate and stay in perfect sync was a major technical achievement and is central to the product's magic.
What we learned
This project was a deep dive into the world of real-time communication on the web. We learned the intricate details of the WebRTC API and came to appreciate the critical role a signaling server plays in a peer-to-peer architecture. More importantly, we learned how to manage application state across two different clients, a fundamental challenge in building any collaborative software.
What's next for On The Go Storyteller
The current platform is a solid foundation, and we're excited about the future. Our next steps include:
- Integrating High-Quality AI Voices: We've already set up an endpoint for the ElevenLabs API on our server and plan to fully integrate it to provide rich, emotive, and unique voices for each character.
- Adding More Interactivity: We want to build in comprehension questions, vocabulary pop-ups, and even "choose your own adventure" story paths to make the experience even more engaging.
- Creating a Parent Dashboard: A dedicated space for parents to track their child's reading habits and see which stories they love the most.
- Expanding the Story Library: Adding more stories and eventually allowing parents to upload their own.