Inspiration

The inspiration behind EchoTales came from a desire to create a truly engaging and nurturing reading experience for children. We wanted to combine the magic of storytelling with the power of AI to make stories not only more interactive but also emotionally supportive. We aim to create a comforting, personalized environment that adapts to each child’s unique reactions, fostering a love for reading and emotional growth.

What it does

EchoTales lets users not only hear a story but also interact with it: they can talk to the characters, create voices for their favorite characters, and even jump to their favorite parts of a story.

  1. Dynamic Character Voices EchoTales uses AI to simulate character voices that change in tone, pitch, and emotion based on the story's plot and context. As the narrative progresses, characters express a range of emotions, making the story more vivid and engaging for children. This feature allows the app to deliver an immersive experience, bringing characters to life and capturing the child’s imagination.

  2. Voice Cloning EchoTales includes a voice cloning option, enabling parents or friends to narrate stories in familiar voices. This feature uses LMNT’s technology to create realistic voice replicas, making bedtime stories more comforting and personal. By hearing the voices of loved ones, children can feel more connected to the story, which helps create a sense of security and warmth during storytelling sessions.

  3. Character Interaction and Story Pacing Users can interact with the characters in the story, whether to catch up on the current events of the plot or to understand the thinking behind a character's actions. They can also talk to the narrator directly and ask the narrator to jump to a particular section of the text.
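Rendering each line of dialogue in the right character's voice first requires splitting the story into speaker-tagged segments. A minimal sketch of that idea (the `Segment` type, the attribution regex, and the function name are our illustrative assumptions, not EchoTales internals):

```typescript
// Hypothetical segmenter: splits a story into narrator text and quoted
// dialogue, tagging each quote with the speaker named in the attribution.
interface Segment {
  speaker: string; // "Narrator" or a character name
  text: string;
}

// Matches simple `"dialogue," said Alice` style attributions.
const DIALOGUE = /"([^"]+)"\s*,?\s*(?:said|asked|cried)\s+(\w+)/g;

function segmentStory(story: string): Segment[] {
  const segments: Segment[] = [];
  let cursor = 0;
  let match: RegExpExecArray | null;
  while ((match = DIALOGUE.exec(story)) !== null) {
    // Everything before the quote is narration.
    const before = story.slice(cursor, match.index).trim();
    if (before) segments.push({ speaker: "Narrator", text: before });
    // The quote itself belongs to the attributed character.
    segments.push({ speaker: match[2], text: match[1] });
    cursor = DIALOGUE.lastIndex;
  }
  const rest = story.slice(cursor).trim();
  if (rest) segments.push({ speaker: "Narrator", text: rest });
  return segments;
}
```

A real pipeline would handle richer attribution patterns, but even this split is enough to route each segment to a different voice model.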

How we built it

We developed EchoTales using TypeScript and React Native for the front end, ensuring a responsive and intuitive user interface for young users. The backend integrates LMNT for voice cloning and Hume AI for voice-to-voice interaction and narration, enabling dynamic voice adjustments and personalized audio interactions. This combination of technologies allows for a seamless, adaptive audiobook experience tailored to each child's emotional response.

Challenges we ran into

One challenge was figuring out how to apply multiple voices within a single story. For instance, if Character A speaks a line and Character B speaks next, the second line requires a different voice model. We fixed this by determining the full set of characters in a story up front, so that all of the base voices are created at the beginning.
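The fix described above amounts to enumerating every distinct character before narration starts and creating one base voice per character. A sketch of that approach (the registry shape and `createVoice` callback are hypothetical; the actual voice creation, e.g. an LMNT call, is stubbed out):

```typescript
// Deduplicate the speakers found in a story's dialogue.
function uniqueSpeakers(speakers: string[]): string[] {
  return [...new Set(speakers)];
}

// Hypothetical registry mapping each character to a pre-built voice ID.
// `createVoice` stands in for whatever service call produces a voice.
function buildVoiceRegistry(
  speakers: string[],
  createVoice: (name: string) => string
): Map<string, string> {
  const registry = new Map<string, string>();
  for (const name of uniqueSpeakers(speakers)) {
    registry.set(name, createVoice(name)); // one base voice per character
  }
  return registry;
}
```

Because every character already has a voice ID in the registry, switching from Character A's line to Character B's is just a lookup rather than an on-the-fly model creation.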

Accomplishments that we're proud of

We're very proud of applying tools that were new to us to our idea! And of course, we're proud of working as a team and planning the overall workflow effectively.

What we learned

We learned many new tools throughout this project and explored how to integrate various APIs, such as Hume AI and LMNT.

What's next for EchoTales

We plan on adding an emotion sense technology that can determine the reactions of the reader throughout the story. The idea would be to integrate Hume AI’s emotion-sensing technology to detect children’s facial expressions during storytelling. If the system detects confusion, disinterest, or drowsiness, EchoTales can adjust the pacing, tone, or level of detail in real time. For example, if a child looks confused, the app pauses and offers a simpler explanation, while signs of drowsiness prompt a slower, more soothing narration. This adaptive feature ensures that the child remains engaged and receives appropriate emotional support throughout the story.
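The adaptive behavior described above can be sketched as a simple mapping from a detected reader state to narration adjustments; the detection itself (e.g. Hume AI's emotion sensing) is out of scope here, and all type and field names are our illustrative assumptions:

```typescript
// Hypothetical reader states that an emotion-sensing layer might report.
type ReaderState = "confused" | "drowsy" | "disinterested" | "engaged";

interface NarrationSettings {
  speed: number;          // 1.0 = normal pace
  pauseForRecap: boolean; // pause and offer a simpler explanation
  tone: "neutral" | "soothing" | "lively";
}

// Map each detected state to the adjustments described in the text:
// confusion pauses for a recap, drowsiness slows and soothes narration.
function adaptNarration(state: ReaderState): NarrationSettings {
  switch (state) {
    case "confused":
      return { speed: 0.9, pauseForRecap: true, tone: "neutral" };
    case "drowsy":
      return { speed: 0.8, pauseForRecap: false, tone: "soothing" };
    case "disinterested":
      return { speed: 1.1, pauseForRecap: false, tone: "lively" };
    default:
      return { speed: 1.0, pauseForRecap: false, tone: "neutral" };
  }
}
```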
