I-nteractive — Prompt-to-Video that Talks Back
Inspiration
Short-form video usually feels one-way. We wanted a loop that talks back—you type a prompt, get a tailored video, respond again, and the system adapts. Think “chat,” but the replies are cinematic clips instead of text.
What it does
- Prompt → Video: Enter a natural-language prompt and receive a short, thematically consistent video.
- Customizable Generation: Choose settings, characters, and themes before rendering.
- Guided Scripts: Optionally include an auto-generated script so users can follow along or re-shoot.
- Conversational Loop: Each new prompt can build on the last video, creating an iterative story or tutorial.
How we built it
Frontend: Next.js (App Router), React, Tailwind CSS
Backend/ML: Python, PyTorch
Services: Luma AI (video generation), Fetch.ai (orchestration/agents), ElevenLabs (voice), OpenAI (planning/prompting)
Database: TinyDB
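TinyDB keeps its data as JSON documents in a single file, which was enough for hackathon-scale job tracking. A minimal stdlib stand-in showing the kind of record we persist (field names are illustrative, not our exact schema):

```python
import json
import os
import tempfile

# Illustrative generation-job record; the real schema may differ.
job = {
    "prompt": "explain RC circuits in 30 seconds",
    "status": "queued",  # queued -> processing -> done/failed
    "video_url": None,
    "voiceover_url": None,
}

# TinyDB-style persistence boils down to JSON on disk.
path = os.path.join(tempfile.gettempdir(), "jobs.json")
with open(path, "w") as f:
    json.dump({"jobs": [job]}, f)

with open(path) as f:
    loaded = json.load(f)["jobs"][0]
```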
High-level flow:
- User submits prompt + options from the Next.js UI.
- Backend orchestrates generation (Fetch.ai/OpenAI for planning → Luma for video → ElevenLabs for voice-over), with status polling.
- Electrical section: Fetch.ai determines what type of circuit is needed → Google Gemini generates the netlist and .asc file → PyLTSpice simulates it and produces a circuit diagram.
- PyTorch VAE model for query search.
- The resulting asset is returned to the UI for playback and iteration.
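The flow above can be sketched as a single pipeline function. The service calls are stubbed out here (the real versions are HTTP requests to Fetch.ai/OpenAI, Luma, and ElevenLabs, and the function names are our own, not those APIs); the shape that matters is plan → start render → poll status → attach voice-over:

```python
import time

# --- Illustrative stubs for the external services (not the real APIs) ---
def plan_scenes(prompt):
    """Planning step (Fetch.ai/OpenAI in our stack)."""
    return [f"scene: {prompt}"]

def start_video_job(scenes):
    """Kick off an async render (Luma in our stack); returns a job handle."""
    return {"id": "job-1", "ticks": 0}

def poll_video_job(job):
    """Fake async job that finishes after three polls."""
    job["ticks"] += 1
    return "done" if job["ticks"] >= 3 else "processing"

def synthesize_voiceover(scenes):
    """Voice-over step (ElevenLabs in our stack)."""
    return b"audio-bytes"

def generate(prompt, poll_interval=0.0):
    """Plan -> render video (with status polling) -> add voice-over."""
    scenes = plan_scenes(prompt)
    job = start_video_job(scenes)
    while poll_video_job(job) != "done":  # status polling loop
        time.sleep(poll_interval)
    audio = synthesize_voiceover(scenes)
    return {"video_job": job["id"], "scenes": scenes, "audio": audio}
```

The polling loop is the piece the UI hooks into for progress states; each iteration maps to a status update sent back to the frontend.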
Challenges we ran into
- Latency & Throughput: Generation queues caused noticeable delays.
- Duration vs. Cost vs. Quality: Faster, higher-quality, longer clips spike costs—hard to optimize during a hackathon.
- Tooling Glue: Wiring multiple services reliably under time pressure (auth, retries, and fallbacks).
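The retries-and-fallbacks glue came down to one wrapper reused around every service call. A sketch of the pattern, assuming transient upstream errors surface as exceptions (`with_retries` and its parameters are our own, illustrative names):

```python
import time

def with_retries(fn, attempts=3, base_delay=0.0, fallback=None):
    """Call fn; retry with exponential backoff, then use fallback if given."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                if fallback is not None:
                    return fallback()  # last resort, e.g. a cached asset
                raise
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff

# Example: a transient failure that succeeds on the third try.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient upstream error")
    return "ok"

result = with_retries(flaky)  # "ok" after two retries
```

A real deployment would catch narrower exception types and cap the backoff, but even this version turned most queue hiccups into invisible retries instead of user-facing errors.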
Accomplishments we’re proud of
- Shipped an end-to-end, multi-service pipeline in hackathon time.
- Built a usable UI that hides complexity and lets users focus on creative iteration.
- Established a scalable orchestration pattern we can harden after the event.
What we learned
- Scope ruthlessly: Clear MVP boundaries kept us shipping.
- Design for failure: Retries, progress states, and timeouts are table stakes for generative pipelines.
- Team comms matter: Fast decisions and clear ownership beat polishing non-essentials.
What’s next for I-nteractive
- Smoother UX: Better progress feedback, cancel/modify mid-render, and more intelligent defaults.
- Efficiency: Model/pricing tiers, caching, and partial re-use to cut both latency and cost.
- Quality: More consistent characters and styles across the conversational loop.
- Safety & Rights: Clear guardrails on inputs/outputs and attribution for generated assets.
Quick FAQ
- Why now? Short-form creation is exploding, but iteration still feels clunky. We make it truly conversational.
- Who’s it for? Creators, educators, support teams—anyone who wants rapid video iteration without editing timelines.
