Inspiration

We chose to develop an app for children with lisps because we saw a gap in how speech therapy is delivered and experienced. Traditional speech therapy is often limited to short, scheduled sessions with a specialist, which can make progress slow and inconsistent. For children especially, practicing outside of therapy can feel repetitive and discouraging, and there are few resources designed to help.

Our idea came from asking: What if we could make speech practice fun, accessible, and personalized—so kids actually wanted to do it every day?

That’s why we built an AI-powered tutor combined with a gamified learning experience. The AI acts as a patient, always-available companion that listens to a child’s pronunciation and provides real-time feedback. The gamification layer turns practice into play: rewarding effort, celebrating progress, and framing challenges as achievements. This approach not only keeps children engaged but also reduces the frustration and stigma that can sometimes come with speech difficulties.

Ultimately, we wanted to empower children to build confidence in their voices at their own pace, in a supportive and playful environment. By blending AI technology with the principles of gamification, we’re making speech therapy more engaging, consistent, and effective for kids with lisps.

Fun fact: All of the accessories in the app are hand-drawn!

What it does

Our app is designed to help children with lisps improve their speech in a fun, interactive way. When a child first uses the app, they read through an initial sentence test so ElevenLabs can capture how they speak. This lets the AI tutor identify what type of lisp they have and personalize the learning journey for them.

From there, the child enters the home screen, where their avatar lives. The avatar can travel to different “planet levels,” each with its own set of verbal challenges guided by an AI voice companion. The companion gives real-time feedback, encouragement, and tips to help the child practice specific sounds and words.

As children complete levels, they earn points. These points can be spent on costumes, accessories, and upgrades for their avatar, adding a gamified reward system that keeps them motivated to practice consistently.

How we built it

We built the UI and all of the frontend with React Native and TypeScript, using Expo to create, build, and deploy the application. The accessories and aliens featured in the app were hand-drawn, giving it a unique, personalized style. On the backend, we used FastAPI to expose endpoints from our Python server, which handled all the machine learning logic. The server used Hugging Face transformers to classify different types of lisps in speech and determine the appropriate responses. It also managed API calls to ElevenLabs to generate voice outputs based on the classification results, seamlessly connecting the AI-powered analysis to the interactive app experience.
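The classify-then-respond step of the backend can be sketched as below. This is a simplified illustration, not our actual server code: the label names, tips, and practice sentences are hypothetical stand-ins, and in the real app the label comes from a Hugging Face audio-classification model served behind FastAPI rather than a hard-coded table.

```python
# Hypothetical sketch: map a lisp-classification label to the feedback
# payload the app displays and reads aloud. All labels and text here are
# illustrative; the real backend gets the label from an ML model.

FEEDBACK = {
    "frontal": {
        "tip": "Keep the tip of your tongue behind your teeth for the 's' sound.",
        "practice": ["The sun is shining", "Seven silly snakes"],
    },
    "lateral": {
        "tip": "Try sending air over the middle of your tongue, not the sides.",
        "practice": ["Sally sells seashells", "Sixty small stones"],
    },
    "none": {
        "tip": "Great job! Your 's' sounds clear.",
        "practice": [],
    },
}

def build_response(label: str) -> dict:
    """Turn a classifier label into the tip and sentences sent to the app."""
    entry = FEEDBACK.get(label, FEEDBACK["none"])  # unknown labels fall back
    return {"lisp_type": label, "tip": entry["tip"], "practice": entry["practice"]}
```

The returned dictionary is what a FastAPI endpoint would serialize as JSON and what gets handed to the text-to-speech step for the voice companion.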

Challenges we ran into

  • Limited training data: very few audio recordings of children speaking with lisps are publicly available.

  • Difficulty converting lisped audio into lisp-free audio while preserving the child's own voice.

Accomplishments that we're proud of

  • Designed a gamified experience with planets, levels, and avatar customization that makes practicing speech fun instead of repetitive.

  • Made therapy more accessible by building a solution children can use at home, reducing reliance on limited in-person sessions.

  • Integrated AI voice feedback to provide real-time corrections and encouragement, simulating the guidance of a speech tutor.

What we learned

This hackathon pushed us to explore and learn a lot of new technologies in a short time. We figured out how to use Hugging Face Transformers to classify different types of lisps, which gave us a deeper understanding of how NLP models can be applied beyond text. We also learned how to integrate the ElevenLabs API, giving our app realistic AI voice feedback that made the speech tutor feel more engaging and human-like.
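The ElevenLabs integration boils down to a single authenticated POST per piece of feedback. The sketch below follows the shape of ElevenLabs' public REST API, but the voice ID and API key are placeholders, and the real backend passes additional voice parameters; treat it as an outline rather than our production code.

```python
# Sketch of an ElevenLabs text-to-speech request using only the standard
# library. VOICE_ID and API key are placeholders for illustration.
import json
import urllib.request

API_BASE = "https://api.elevenlabs.io/v1"

def build_tts_request(text: str, voice_id: str, api_key: str) -> urllib.request.Request:
    """Construct the POST request that turns feedback text into spoken audio."""
    url = f"{API_BASE}/text-to-speech/{voice_id}"
    body = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"xi-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

# Actually sending it requires a real key and network access:
# with urllib.request.urlopen(build_tts_request("Great job!", VOICE_ID, KEY)) as r:
#     audio_bytes = r.read()  # audio of the spoken feedback
```

Separating request construction from sending made the integration easier to test during the hackathon, since the URL and payload can be checked without spending API credits.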

For several of our team members, this was the first time working with React Native, and we quickly picked up the framework to build a cross-platform mobile app from scratch. Along the way, we practiced troubleshooting issues with dependencies, GitHub merges, and API integrations—all under the fast pace of a hackathon.

What's next for Phoniverse

In the future, we plan to expand the app with:

  • More Levels & Worlds – New planet environments, challenges, and themes to keep children excited and engaged as they progress.

  • Personalized Learning Tracks – Tailored lesson paths that adapt more deeply to the specific type of lisp (frontal, lateral, palatal, etc.), so every child gets the support they need.

  • Smarter AI Feedback – Even more natural, encouraging, and precise real-time corrections powered by advances in speech recognition.

  • Expanded Rewards & Customization – More costumes, accessories, and creative avatar options so kids stay motivated to practice regularly.

Built With

  • Expo

  • ElevenLabs

  • FastAPI

  • Hugging Face Transformers

  • Python

  • React Native

  • TypeScript