Inspiration

I've always felt the gap between "textbook language" and real, living language is too big. Music is emotional, repetitive, and contextual, which makes it perfect for learning, so I wanted an app where practicing feels as natural as replaying your favorite chorus.

Hackathon story (real talk): I found the hackathon about two hours before the deadline. I started with Google and the CLI, trying to understand deployment and setup, then used Claude to build the core fast. My Claude tokens ran out midway, so I continued in manual copy-paste mode (Ctrl+C / Ctrl+V) to finish wiring deploys and config. Under extreme time pressure (and constantly running out of tokens), I learned to ignore small rough edges, stay calm, and focus on the main goal: ship a working, interesting product. Thank you, it was fun.

One more honest detail: I didn't write a single line of code completely by myself. This project was built through prompting, reviewing, integrating, and debugging AI-generated code, plus a lot of manual glue work under deadline.
What it does

- Search & import songs (LRCLIB: metadata + lyrics)
- Instant line analysis: translation, short grammar notes, vocabulary hints
- Word-by-word / interlinear mode to understand sentence structure
- Personal vocabulary: save words with context and languages, listen to words, view translations in other languages
- Exercises: check your translation and get feedback
- Text-to-speech for lines/words with audio caching in S3 storage
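The last bullet hides a small design decision: generated audio is cached so each line or word is synthesized only once. A minimal sketch of that cache-aside idea (the class, function names, and in-memory store below are hypothetical stand-ins; the real app synthesizes with ElevenLabs and stores objects in a Vultr S3-compatible bucket):

```python
import hashlib

def audio_cache_key(text: str, voice: str, lang: str) -> str:
    """Deterministic object key: identical requests map to one cached file."""
    digest = hashlib.sha256(f"{voice}:{lang}:{text}".encode("utf-8")).hexdigest()
    return f"tts/{lang}/{digest}.mp3"

class CachedTTS:
    """Cache-aside wrapper: check storage first, synthesize only on a miss."""

    def __init__(self, synthesize, storage=None):
        self.synthesize = synthesize  # callable: text -> audio bytes (e.g. a TTS API call)
        self.storage = storage if storage is not None else {}  # dict standing in for S3
        self.misses = 0

    def get_audio(self, text: str, voice: str = "default", lang: str = "en") -> bytes:
        key = audio_cache_key(text, voice, lang)
        if key not in self.storage:  # against real S3: a head_object/get_object check
            self.misses += 1
            self.storage[key] = self.synthesize(text)  # against real S3: put_object
        return self.storage[key]
```

Because the key is a hash of the request, replaying the same line of a chorus never triggers a second paid synthesis call.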
How I built it

- Frontend: React + Vite + Tailwind CSS, React Router, Zustand, TanStack React Query, Axios
- Backend: FastAPI + SQLAlchemy (async) + Alembic, HTTPX
- Database: PostgreSQL (Render)
- Deployment: Vercel (frontend) + Render (backend)
- Audio storage: Vultr Object Storage (S3-compatible)
- External APIs:
  - Cerebras: lyric analysis, interlinear mode, translation checking
  - ElevenLabs: text-to-speech
  - LRCLIB: song search + lyrics

Challenges I ran into

- Deploying a monorepo under pressure (Vercel/Render settings, root directory, environment variables)
- Keeping database migrations reliable in production
- Integrating TTS + S3 storage (public URLs, permissions, caching, retries)
- Making language settings understandable (separating the UI language from learning/translation languages)
- Shipping despite hard time limits and AI token constraints

Accomplishments that I'm proud of

- Delivered a full-stack product end to end in a very short time window
- Built a fast learning loop (hover analysis + interlinear mode + one-click saving to vocabulary)
- Got production deployments working (frontend + backend + Postgres + object storage)

What I learned

- How to prioritize ruthlessly: ship the core experience first, polish later
- Practical deployment and debugging under pressure (logs, env vars, migrations)
- How to keep momentum when tooling is limited (including manual workflows)
- How to "program by integration": evaluate AI output, connect systems, and debug quickly

What's next for Song2learn

- Better discovery (genre/language "surprise me" playlists, popularity filters)
- Richer interlinear mode (toggle punctuation, multi-word phrases, highlight alignments)
- Spaced repetition and daily review for saved vocabulary
- UI localization for the full interface (not only language settings)
- More pronunciation options (voices, speed, per-word audio, offline caching)
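The LRCLIB integration listed under External APIs is essentially one search call returning track metadata plus plain and synced lyrics. A rough sketch (the app itself uses HTTPX asynchronously; the endpoint and response field names below follow LRCLIB's public API as I understand it, so treat them as assumptions):

```python
import json
import urllib.parse
import urllib.request

LRCLIB_SEARCH = "https://lrclib.net/api/search"  # assumed public search endpoint

def build_search_url(query: str) -> str:
    """URL for a free-text song search."""
    return f"{LRCLIB_SEARCH}?{urllib.parse.urlencode({'q': query})}"

def parse_results(payload: list) -> list:
    """Keep only the fields the app needs for import and display."""
    return [
        {
            "title": item.get("trackName"),
            "artist": item.get("artistName"),
            "lyrics": item.get("plainLyrics"),
            "synced": item.get("syncedLyrics"),
        }
        for item in payload
    ]

def search_songs(query: str) -> list:
    """Blocking variant for illustration; the backend would use HTTPX async."""
    with urllib.request.urlopen(build_search_url(query), timeout=10) as resp:
        return parse_results(json.load(resp))
```

From there, each returned lyric line feeds the Cerebras analysis and the interlinear view.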
Built With
- axios
- fastapi
- caching (cachetools TTLCache)
- docker compose (local dev)
- httpx
- postgresql (hosted on Render)
- vercel (frontend)
- render (backend + Postgres)
- javascript (frontend)
- python (backend)
- react + vite
- pydantic
- rate limiting (slowapi)
- react router
- vultr object storage (S3-compatible)
- lrclib (song search + lyrics)
- cerebras (AI analysis/translation features)
- elevenlabs (text-to-speech)
- jwt auth
- sqlalchemy (async) + alembic
- structured logging (structlog)
- tailwind css
- tanstack react query
- uvicorn
- zustand