Inspiration
In the age of Sora 2 and Google Veo 3, deepfakes have blurred the line between human and AI-generated faces. Our inspiration for Freak-cha came from iOS 26’s accessibility tongue-scrolling feature — an innovative way to interact with technology using subtle gestures. We wanted to take that idea further by designing a verification system that AI simply couldn’t replicate. Freak-cha is a tongue-based biometric authentication system that challenges users to prove their humanity through facial expressions and tongue gestures, turning security into a playful, interactive experience.
What it does
We built Freak-cha as a multi-stage CAPTCHA. We show a series of true-or-false questions, which the user answers with their tongue: flicking it up and down for true, and flicking it to the side for false. These micro-expressions and gestures are processed in real time to verify that the user is truly human — not a deepfake or an AI-generated clone.
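The answer-decoding step can be sketched as a simple heuristic. This is a minimal illustration, not our production model: it assumes a tracker (e.g. our YOLOv8 detector) already emits a short sequence of normalized tongue-tip coordinates per challenge, and the threshold value is an arbitrary placeholder.

```python
def classify_flick(points, threshold=0.05):
    """Classify a tongue flick from a sequence of (x, y) tongue-tip
    positions, normalized to [0, 1] frame coordinates.

    Returns "true" for a predominantly vertical flick, "false" for a
    predominantly horizontal one, or None if movement is too small
    to count as a deliberate gesture.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    dx = max(xs) - min(xs)  # total horizontal travel
    dy = max(ys) - min(ys)  # total vertical travel
    if max(dx, dy) < threshold:
        return None
    return "true" if dy > dx else "false"


# Example: an up-down flick reads as "true", a sideways one as "false".
print(classify_flick([(0.50, 0.40), (0.50, 0.60), (0.50, 0.42)]))  # true
print(classify_flick([(0.40, 0.50), (0.60, 0.50), (0.42, 0.50)]))  # false
```

In practice the real pipeline smooths detections over many frames before deciding, but the dominant-axis idea is the same.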
How we built it
Our tech stack combines modern web technologies with state-of-the-art computer vision tools. The frontend was built using Next.js and Tailwind CSS, while tRPC handled type-safe communication between frontend and backend. We used Supabase for authentication and to store user embeddings, leveraging pgvector in PostgreSQL for vector similarity search. On the AI side, we trained a custom YOLOv8 model in Python using proprietary tongue detection datasets to improve the accuracy of facial and tongue recognition.
Challenges we ran into
The biggest challenge we faced was data labeling. Annotating hundreds of tongue positions and facial expressions across different data points (we had to ask people to wiggle their tongues) was time-intensive and mentally exhausting, yet crucial for model accuracy. Balancing accuracy and user engagement required constant iteration and creative problem-solving.
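Each annotation ends up in the standard YOLO label format: one line per bounding box, `class cx cy w h`, with all coordinates normalized to [0, 1]. A small validator like the sketch below (a hypothetical helper, not from our codebase) catches the out-of-range boxes that slip in during a long labeling session.

```python
def parse_yolo_label(line):
    """Parse one YOLO-format annotation line ('class cx cy w h') and
    sanity-check that every coordinate is normalized to [0, 1]."""
    cls, cx, cy, w, h = line.split()
    box = {
        "cls": int(cls),
        "cx": float(cx),
        "cy": float(cy),
        "w": float(w),
        "h": float(h),
    }
    for key in ("cx", "cy", "w", "h"):
        if not 0.0 <= box[key] <= 1.0:
            raise ValueError(f"{key} out of range in label: {line!r}")
    return box


# Example: a tongue box (class 0) centered slightly below mid-frame.
print(parse_yolo_label("0 0.51 0.62 0.14 0.09"))
```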
Accomplishments that we're proud of
We’re incredibly proud of how Freak-cha evolved from a wild idea into a fully functional prototype within just 36 hours. We successfully integrated real-time facial recognition and tongue gesture detection into a web-based interface, proving that biometric verification can be both secure and fun. One of our biggest accomplishments was training a custom YOLOv8 model capable of detecting micro tongue movements — something that, to our knowledge, hasn’t been widely explored in computer vision.
We also managed to create a seamless, privacy-conscious authentication flow powered by Supabase and pgvector, enabling real-time similarity checks between live facial embeddings and stored user profiles. Beyond the technical success, we’re proud that Freak-cha makes security more human. As AI gets more advanced, it's crucial to build something that makes people laugh and think critically about the future of identity verification.
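The similarity check behind that flow boils down to cosine distance, which is what pgvector's `<=>` operator computes server-side. Here is a pure-Python sketch of the same comparison; the 0.3 acceptance threshold is an illustrative assumption, not our tuned value.

```python
import math


def cosine_distance(a, b):
    """Cosine distance between two embeddings, matching pgvector's
    <=> operator: 1 - cos(angle between a and b)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)


def is_same_user(live, stored, max_distance=0.3):
    """Accept the live facial embedding if it lies within the
    distance threshold of the stored profile embedding."""
    return cosine_distance(live, stored) <= max_distance


# Identical embeddings have distance ~0; orthogonal ones have distance 1.
print(cosine_distance([1.0, 0.0], [1.0, 0.0]))  # ~0.0
print(is_same_user([1.0, 0.0], [0.0, 1.0]))     # False
```

In production the comparison runs inside PostgreSQL (e.g. `ORDER BY embedding <=> $1 LIMIT 1`) so the nearest stored profile is found without pulling every embedding to the client.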
What we learned
Through this project, we learned how to train and fine-tune computer vision models, label data effectively, and design secure authentication mechanisms. Building a system capable of interpreting micro facial gestures gave us valuable insight into how models perceive subtle human expressions — and how uniquely difficult it is for AI to fake them. We also learned the importance of optimizing latency and data pipelines to achieve real-time feedback from a webcam feed.
What's next for Freak-cha
Freak-cha reimagines identity verification for an AI-saturated world — a system that’s secure, human-centered, and just a little freaky. We hope to inspire more freaky ways for humans to stay human — in an increasingly robotic world.
Built With
- next.js
- python
- supabase
- tailwind
- trpc
- yolov8