Inspiration
As AI-generated content becomes increasingly indistinguishable from reality, non-consensual deepfakes pose a serious threat to digital privacy and identity. The problem is especially alarming for young women, whose photos are misused in harmful and abusive ways, and that is what inspired us to build Cloaked. We wanted to create a cloak: a protection layer for your photos that makes them invisible to AI manipulators while remaining perfectly visible to humans. Our goal was to make accessible a technology that typically requires expensive hardware and otherwise stays locked away in research tools.
The field of "adversarial machine learning" for identity protection is growing, but most tools today are academic or aimed at professional artists rather than everyday social media users. The gap in the market is that most current solutions are desktop-first or enterprise-only. Some notable ones we found through our research:
• Fawkes/Glaze: Fawkes is one of the first tools to use "cloaking" to prevent facial recognition systems from building a model of your face, and Glaze is specifically designed for artists to prevent AI models from "learning" their style or using their work for training. But both require running heavy software on a PC/Mac. There is no one-tap mobile app for everyday use.
• C2PA (Content Credentials): This is the industry standard (Adobe, Google, Microsoft), but it focuses on proving a photo is real (metadata), not preventing it from being used. It doesn't stop an AI from scraping your face; it just proves the deepfake isn't the original.
• Invisible Watermarking (SynthID, Steg.ai): These are mostly B2B tools for companies to protect their IP. They aren't consumer-facing "vaccines" for your personal identity.
We are building a consumer-first, mobile-first tool that poisons the data stream rather than just verifying the source.
What it does
Cloaked is a deepfake protection shield for your photos. When you upload an image, our system applies "adversarial perturbations" (subtle noise patterns that are imperceptible to the human eye but confusing to AI models).
Key features include:
Adversarial Cloaking: We use Projected Gradient Descent (PGD) to shift image embeddings away from their true representation, preventing deepfake models (like Stable Diffusion or face-swappers) from successfully manipulating the face.
Smart Targeting: The system detects faces and applies targeted protection, ensuring the most sensitive parts of the image are secured.
Proof of Protection: A verification feature that runs the same attack against both your original and your protected image, demonstrating that the shield works when the protected version resists manipulation.
Cross-Platform Access: Available as a web platform and mobile app for on-the-go protection. Our mobile app lets you upload and download images directly to and from your gallery, and our web application provides interactive dashboards and landing pages for a seamless cloaking experience.
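The cloaking idea above can be sketched in miniature. This is a toy NumPy illustration of PGD, with a fixed random linear map standing in for a real image encoder such as CLIP; all names and parameter values here are illustrative, not Cloaked's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image encoder: a fixed random linear map.
# In Cloaked, this role is played by a real embedding model (e.g. CLIP).
W = rng.standard_normal((8, 64))

def embed(x):
    return W @ x

def pgd_cloak(image, eps=0.05, alpha=0.01, steps=40):
    """Push embed(x) away from embed(image) while keeping the
    perturbation inside an L-infinity ball of radius eps."""
    target = embed(image)
    # Random start breaks the zero-gradient symmetry at x == image.
    x = np.clip(image + rng.uniform(-alpha, alpha, image.shape),
                image - eps, image + eps)
    for _ in range(steps):
        # Gradient of ||embed(x) - target||^2 w.r.t. x; ascend it to
        # increase the embedding distance.
        grad = 2.0 * W.T @ (embed(x) - target)
        x = x + alpha * np.sign(grad)
        # Projection step of PGD: stay within eps of the original pixels.
        x = np.clip(x, image - eps, image + eps)
    return x

image = rng.random(64)
cloaked = pgd_cloak(image)
max_change = np.max(np.abs(cloaked - image))          # bounded by eps
shift = np.linalg.norm(embed(cloaked) - embed(image))  # embedding moved
```

The key trade-off is visible even in the toy: `eps` caps how much any pixel may change (invisibility to humans), while the ascent steps maximize how far the embedding moves (confusion for AI).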
How we built it
Backend: We used FastAPI and Python to power our "Digital Witchcraft" engine.
Core AI: The protection logic uses PyTorch. We implemented Projected Gradient Descent (PGD) attacks, leveraging CLIP (Contrastive Language-Image Pre-training) models to generate transferable adversarial examples that work against a wide range of diffusion models. We also integrated InsightFace for precise face detection to enable targeted attacks.
Frontend: The web interface is built with Next.js, React, and TypeScript, styled with Tailwind CSS. We used Framer Motion to create a premium, interactive user experience (like our animated camera interface!).
Infrastructure: We use Supabase for database needs and Google Gen AI for auxiliary processing.
Mobile: Expo to extend protection to phone cameras and for ease of use.
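The transferability claim above (attack surrogate encoders so the perturbation also disturbs unseen models) is commonly achieved by averaging gradients over an ensemble. A toy NumPy sketch under that assumption, with random linear maps standing in for CLIP variants and for an unseen deepfake model's encoder:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear encoders standing in for an ensemble of surrogate models;
# held_out plays the role of an unseen model we never attack directly.
surrogates = [rng.standard_normal((8, 64)) for _ in range(3)]
held_out = rng.standard_normal((8, 64))

def ensemble_grad(x, x0):
    # Average the embedding-distance gradient across surrogates; averaging
    # over several models is a standard way to encourage transfer.
    g = np.zeros_like(x)
    for W in surrogates:
        g += 2.0 * W.T @ (W @ x - W @ x0)
    return g / len(surrogates)

def pgd(x0, eps=0.05, alpha=0.01, steps=40):
    x = np.clip(x0 + rng.uniform(-alpha, alpha, x0.shape),
                x0 - eps, x0 + eps)
    for _ in range(steps):
        x = np.clip(x + alpha * np.sign(ensemble_grad(x, x0)),
                    x0 - eps, x0 + eps)
    return x

img = rng.random(64)
adv = pgd(img)
surrogate_shift = np.linalg.norm(surrogates[0] @ adv - surrogates[0] @ img)
transfer_shift = np.linalg.norm(held_out @ adv - held_out @ img)
```

In the real system the surrogate gradients come from backpropagating through CLIP in PyTorch rather than from closed-form linear algebra, but the projection-and-ascend loop has the same shape.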
Challenges we ran into
UI/UX: We went through many iterations and designs for the UI before settling on a Polaroid/vintage camera theme. We believe this theme not only allows for plenty of interactivity and a polished UX, but also conveys a sense of calm, ease, and relatability for our target audience. Since our main goal is to help women, especially younger women whose photos are most often targeted by deepfakes, we wanted a theme that enhances their experience with the application.
Performance Optimization: Running iterative PGD attacks is computationally expensive. We had to optimize our PyTorch pipeline to run efficiently, adding support for MPS (Metal Performance Shaders) for Mac users to ensure fast local development and processing.
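Picking the fastest available backend at startup is the usual pattern for supporting MPS alongside CUDA and CPU. A minimal sketch (the function name is ours; the fallback order is the conventional one, not necessarily Cloaked's exact logic):

```python
import importlib.util

def pick_device() -> str:
    """Prefer CUDA, then Apple's MPS backend, then plain CPU."""
    if importlib.util.find_spec("torch") is None:
        return "cpu"  # PyTorch not installed in this environment
    import torch
    if torch.cuda.is_available():
        return "cuda"
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"

device = pick_device()
```

Tensors and models are then moved once with `.to(device)`, so the same PGD loop runs unchanged on a CUDA GPU, an Apple Silicon Mac, or a CPU-only server.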
Accomplishments that we're proud of
We are pretty proud of our UI, especially certain animations like the shutter flash that imitates a real camera and the transitions between elements. We are also really stoked about our mobile application that provides convenience and can also sync user data between web and mobile for a cohesive experience.
What we learned
Perceptual Tuning: Finding the sweet spot where protection is effective but remains undetectable to the human eye is incredibly nuanced. We learned a lot about how human perception differs from machine vision.
Complexities of AI Deployment: Deploying heavy ML models (like CLIP and InsightFace) alongside a responsive web app requires careful architecture. We gained deep experience in optimizing Python backends for near real-time processing.
Mobile App Development: We learned how to develop mobile applications!
What's next for Cloaked
Browser Extension: A "Cloak as you Upload" browser extension that automatically protects images before they are posted to social media platforms like Instagram or LinkedIn.
Adaptive Defense: Implementing a reinforcement learning system that evolves our protection methods as new deepfake generators are released, ensuring our shield stays one step ahead.
Try it out
Live demo: hackviolet2026.vercel.app
Source code: https://github.com/urjitc/hackviolet2026