💡 Inspiration

I’ve always struggled with turning my goals into action. I would write down dreams, set intentions, try productivity apps — but none of them helped with the real problem: starting. I realized what I needed wasn’t a better app — it was a better conversation. Something (or someone) that could listen, help me unpack my goals, break them into steps, and keep me emotionally invested.

The idea for InfiniteGame came from this gap — the longing for a guide.

A character. A coach. A friend.

Not just a to-do list, but a dialogue with your future self.


🧠 What it does

InfiniteGame is an MVP for an AI to-do app powered by conversation.

  • You speak your goals aloud to a lifelike character.
  • The character responds with an encouraging message and breaks your goals into actionable tasks using GPT-4.
  • These tasks are scheduled and shown in a task list with a dynamic calendar view.
  • You can view, complete, and manage your to-dos in an intuitive, gamified interface.

It’s not just productivity — it’s emotional momentum.
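The task list and calendar flow above can be sketched with a hypothetical task shape; the field names here are my assumption for illustration, not the app's actual schema:

```typescript
// Hypothetical shape of a task produced by the assistant.
interface Task {
  id: string;
  title: string;
  date: string;      // ISO date, e.g. "2025-06-28"
  completed: boolean;
}

// Group tasks by date so a calendar view can render one bucket per day.
function groupByDate(tasks: Task[]): Map<string, Task[]> {
  const byDate = new Map<string, Task[]>();
  for (const task of tasks) {
    const bucket = byDate.get(task.date) ?? [];
    bucket.push(task);
    byDate.set(task.date, bucket);
  }
  return byDate;
}

const tasks: Task[] = [
  { id: "1", title: "Outline chapter 1", date: "2025-06-28", completed: false },
  { id: "2", title: "Write 500 words", date: "2025-06-28", completed: false },
  { id: "3", title: "Edit draft", date: "2025-06-29", completed: false },
];
const grouped = groupByDate(tasks);
```

A grouping like this is all the dynamic calendar view needs to place tasks under the right day.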


🛠️ How I built it

  • Frontend: Built with Vite/React, Tailwind CSS, and shadcn/ui for fast, reactive UI.
  • Speech Recognition: ElevenLabs for capturing user input via microphone and performing Speech-To-Text.
  • AI Logic: OpenAI GPT-4.1 processes speech and returns two outputs: a friendly spoken message and a structured JSON with tasks.
  • Speech Generation: ElevenLabs performs Text-To-Speech and responds to the user.
  • Backend: Supabase handles authentication, task storage, and real-time updates.
  • Deployment: Deployed to Netlify with Bolt's integration.
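The AI Logic step above hinges on the model returning two things at once: a friendly spoken message and structured tasks. A minimal, defensive parser for that contract might look like the sketch below; the JSON shape is an assumption for illustration, not the app's actual schema:

```typescript
// Assumed response contract: the model is prompted to return a single
// JSON object with a spoken message (sent to TTS) and a task list.
interface AssistantReply {
  message: string;
  tasks: { title: string; date: string }[];
}

// Parse defensively: LLM output can drift from the requested shape.
function parseReply(raw: string): AssistantReply | null {
  try {
    const parsed = JSON.parse(raw);
    if (typeof parsed.message !== "string" || !Array.isArray(parsed.tasks)) {
      return null; // shape drifted; caller can re-prompt or show an error
    }
    return parsed as AssistantReply;
  } catch {
    return null; // model returned non-JSON text
  }
}

const raw =
  '{"message":"Great goal! Here is a plan.","tasks":[{"title":"Sign up for a gym","date":"2025-06-28"}]}';
const reply = parseReply(raw);
```

Returning `null` instead of throwing keeps the voice loop resilient: a malformed reply can trigger a re-prompt rather than crash the conversation.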

⚔️ Challenges I ran into

  • Overthinking and paralysis: I spent weeks thinking through every option — from animation pipelines to hosting — and almost never started.
  • Time crunch: Although I ideated on May 30th, I only started building 2 days before the deadline. That made shipping even a basic MVP a sprint against time.
  • Stack overload: Tools like ElectricSQL were evaluated and dropped to keep the scope realistic. I also discovered that Remix, the framework I was planning to use, was actually transformed into Next.js upon request, and Next.js didn't have Supabase integration support in Bolt.new, so staying with the standard Vite/React/TypeScript starter was essential.
  • Overly complicated MVP: The initial MVP was supposed to use a 3D avatar created with Ready Player Me and animated with Mixamo, voiced by an Azure Text-To-Speech service that provides viseme data for lip-sync. This proved far too complicated to implement in time, so I stuck with a simple pulsating orb that does the talking.
  • Working with Bolt.new: It's generally a tedious back-and-forth process, and debugging why something wouldn't build was sometimes quite complicated.

🏆 Accomplishments that I'm proud of

  • Built a working full-stack MVP for an AI-powered assistant in under 48 hours.
  • Seamlessly combined cutting-edge voice tech and LLMs into a single experience.
  • Created a meaningful emotional arc — not just a product, but a presence that people connect with.
  • Stayed focused on the experience, not just the features.

📚 What I learned

  • Start before you’re ready — clarity comes from doing, not thinking.
  • It’s better to have one feature that feels magical than ten that feel meh.
  • LLMs are great at planning when you give them structure (JSON outputs, tagged blocks).
  • People respond strongly to emotion; it's probably one of the main things that influences us. We like to think we're rational, but in fact we're highly emotional.
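As a concrete illustration of "giving the LLM structure": spelling out the exact JSON you want is what makes planning reliable. The `buildPlannerPrompt` helper and its wording below are hypothetical, not the prompt InfiniteGame actually uses:

```typescript
// Build a planner prompt that pins down the output format explicitly.
// Wording is illustrative only; anchoring today's date helps the model
// produce absolute, schedulable dates instead of vague ones.
function buildPlannerPrompt(goal: string, today: string): string {
  return [
    "You are Nova, an encouraging productivity coach.",
    `Today's date is ${today}.`,
    "Respond ONLY with a JSON object of the form:",
    '{"message": "<one encouraging sentence>",',
    ' "tasks": [{"title": "<short action>", "date": "<YYYY-MM-DD>"}]}',
    "Break the user's goal into 3-5 concrete tasks.",
    `User goal: ${goal}`,
  ].join("\n");
}

const prompt = buildPlannerPrompt("run a 5k", "2025-06-28");
```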

🚀 What’s next for InfiniteGame

  • First of all, implement a subscription plan: the product currently runs on the ElevenLabs Creator Plan and my $100 OpenAI credit, and costs can mount quickly.
  • Implementing the original vision for the product:
    • 3D Avatar: Created using Ready Player Me, animated with Mixamo, and rendered in Three.js.
    • The avatar lip-syncs your assistant’s voice in real-time and celebrates when you complete tasks.
    • Lip Sync & Voice: Azure Text-to-Speech with viseme support for real-time lip-syncing.
  • Improve the overall to-do app experience and make it more robust; it's currently still in the MVP phase.
  • Add memory: let the assistant learn from your patterns and past completions.
  • Improve character emotional range and facial animations.
  • Allow natural edits to tasks via voice (“Move that to tomorrow”).
  • Add more celebration animations and streak tracking.
  • Explore mobile and AR versions of the experience; currently only the desktop version is optimised.
  • Long-term: let the assistant evolve with you — as a co-pilot for personal growth, not just productivity.

Challenge Compliance

  • Deploy Challenge (Use Netlify to deploy your full-stack Bolt.new application): I successfully deployed InfiniteGame as a full-stack application using Netlify; the provided link showcases the deployment: https://sage-twilight-13f7da.netlify.app/
  • Custom Domain Challenge (Use Entri to get an IONOS Domain Name and publish your Bolt.new app on the domain): I bought the custom domain infinitegame.site for the project. The name is partially inspired by Simon Sinek's book The Infinite Game, and I see self-improvement as one of those games you can play to infinity.
  • Voice AI Challenge (Use ElevenLabs to make your Bolt.new app conversational): The voice function is the main driver of the app, and I used the Jessica voice from ElevenLabs for all interactions. I also used ElevenLabs for Speech-To-Text.
  • Startup Challenge (Use Supabase to prep your Bolt.new project to scale to millions): I used Supabase as the database, auth, and storage provider for the project.

Built With

  • bolt.new
  • elevenlabs
  • netlify
  • radixui
  • react
  • shadcn-ui
  • supabase
  • vite

Updates


Hello, I've made a small change: in some places the agent's name was Aria instead of Nova (the LLM added that on its own), so I fixed it, and I've added some tracking with Umami to see where people are coming from on the site. I also discovered a bug with the avatar uploads, but just as I started working on it I read the email saying we're not supposed to make changes to the product, so it's still there; I'll resolve it after judging is done. Another known issue is that the LLM, gpt-4o-mini, doesn't properly generate tasks across multiple days: it only schedules them for today. I'll investigate how to improve task generation after judging. Thank you all!
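One possible client-side mitigation for the single-day scheduling issue mentioned above: if every task comes back dated today, a fallback could spread them over consecutive days. This is purely a sketch; `spreadTasks` is a hypothetical helper, not part of the app:

```typescript
// Re-date tasks so that at most `perDay` land on each day, starting
// from `startDate`. Useful as a fallback when the model dates
// everything "today".
function spreadTasks(
  tasks: { title: string; date: string }[],
  startDate: string,
  perDay: number,
): { title: string; date: string }[] {
  return tasks.map((task, i) => {
    const d = new Date(startDate + "T00:00:00Z");
    d.setUTCDate(d.getUTCDate() + Math.floor(i / perDay));
    return { ...task, date: d.toISOString().slice(0, 10) };
  });
}

const spread = spreadTasks(
  [
    { title: "a", date: "2025-06-28" },
    { title: "b", date: "2025-06-28" },
    { title: "c", date: "2025-06-28" },
  ],
  "2025-06-28",
  2,
);
```

A prompt-side fix (anchoring today's date and asking for explicit dates) would be cleaner; this is just a safety net.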
