Hide Your CLB
A visual, anatomy-based Mandarin pronunciation trainer for Singaporeans who want to level up fast.
Inspiration
I built this for all my fellow bananas (and anyone else) who panic every CNY when it’s time to bai nian and suddenly realise… we can’t speak Chinese. I wanted something that teaches the way a friend would: sound by sound, syllable by syllable, with an extra visual layer that finally makes everything click.
What it does
Hide Your CLB is a visual Chinese-pronunciation trainer that teaches Mandarin using real-time mouth and tongue feedback.
- Shows exact articulatory positions (tongue, mouth shape)
- Gives instant scoring as you speak
- Progresses from individual sounds → words → phrases
Perfect for visual learners who need more than "just repeat after me."
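The write-up doesn't specify how "instant scoring" is computed, but since the backend extracts formants (below), one plausible shape is a distance-to-target score. This is a hypothetical sketch: the target values, tolerance, and falloff are illustrative assumptions, not the app's real data.

```python
# Hypothetical formant-distance scoring sketch (not the project's actual algorithm).
# Target formant values and the tolerance/falloff constants are illustrative.

def formant_score(measured: dict, target: dict, tolerance: float = 150.0) -> float:
    """Score 0-100 based on how close measured F1/F2 are to a target vowel."""
    total = 0.0
    for key in ("F1", "F2"):
        diff = abs(measured[key] - target[key])
        # Full credit within the tolerance band, linear falloff beyond it.
        total += max(0.0, 1.0 - max(0.0, diff - tolerance) / (4 * tolerance))
    return round(50.0 * total, 1)

# Example: a measured vowel close to an assumed /a/ target (values in Hz)
score = formant_score({"F1": 780, "F2": 1250}, {"F1": 850, "F2": 1200})
```

A simple per-formant distance like this keeps feedback cheap enough to run on every chunk, which matters for the low-latency loop described below.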
How we built it
Frontend:
- React 18 + TypeScript (Vite)
- Flow: intro → breakdown → practice screens → completion
- Web Audio API for mic capture
- SVG-based mouth/tongue visualisation
Backend:
- FastAPI + WebSockets
- Parselmouth (Praat) for real-time analysis (F1/F2/F3 formants, pitch, intensity)
- Processes audio in ~32ms chunks for low-latency feedback
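The ~32 ms chunk size implies a fixed buffer tied to the audio sample rate. As a sketch of the arithmetic, assuming 16 kHz mono 16-bit PCM (the actual rate and encoding aren't stated in the project):

```python
# Sketch: relating the ~32 ms chunk duration to buffer sizes.
# 16 kHz mono 16-bit PCM is an assumption, not confirmed by the project.

SAMPLE_RATE = 16_000      # Hz (assumed)
CHUNK_MS = 32             # target analysis-chunk duration
BYTES_PER_SAMPLE = 2      # 16-bit PCM

samples_per_chunk = SAMPLE_RATE * CHUNK_MS // 1000        # 512 samples
bytes_per_chunk = samples_per_chunk * BYTES_PER_SAMPLE    # 1024 bytes

def chunks(pcm: bytes, size: int = bytes_per_chunk):
    """Yield fixed-size chunks from a raw PCM byte stream, dropping the tail."""
    for i in range(0, len(pcm) - size + 1, size):
        yield pcm[i:i + size]
```

Each chunk would then be handed to the Parselmouth analysis before the next one arrives, which is what keeps feedback latency near the chunk duration.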
Challenges
- Finding a riggable mouth model online was almost impossible
- Spent 4+ hours trying to build/rig my own
- Settled on a clean 2D SVG grid system that stretches shapes based on articulatory estimates
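Stretching SVG shapes from "articulatory estimates" likely leans on a standard acoustic-phonetics relationship: F1 correlates inversely with tongue height, and F2 with tongue frontness. A minimal sketch of that mapping onto a unit grid follows; the Hz ranges are rough adult-speaker assumptions, not the project's calibrated values.

```python
# Sketch: mapping F1/F2 to a normalised tongue position for a 2D grid.
# F1 correlates inversely with tongue height; F2 with tongue frontness.
# The Hz ranges are rough adult-speaker assumptions, not calibrated data.

def clamp01(x: float) -> float:
    return min(1.0, max(0.0, x))

def tongue_position(f1: float, f2: float) -> tuple[float, float]:
    """Return (frontness, height) in [0, 1] from the first two formants."""
    height = clamp01(1.0 - (f1 - 250) / (900 - 250))   # higher F1 -> lower tongue
    frontness = clamp01((f2 - 700) / (2500 - 700))     # higher F2 -> fronter tongue
    return (frontness, height)

# /i/ ("ee"): low F1, high F2 -> tongue front and high
front, height = tongue_position(300, 2300)
```

Coordinates in [0, 1] like these can drive the SVG grid directly, e.g. by interpolating control points between a "low back" and a "high front" tongue outline.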
Accomplishments
- First solo hackathon project
- Two teammates flaked the morning of, but I still completed the entire build in time
- Shipped every core feature I planned
What we learned
- Documenting pronunciation is extremely hard
- Non-audio descriptions barely capture the nuance of how sounds are physically formed
- Visualising articulation bridges that gap more than expected
What’s next
- 11Labs integration — speed up content generation without manually trimming videos in CapCut 😭
- More languages — the system can scale, but I need fluent speakers to help validate pronunciations and scoring