Inspiration
Artists have been complaining for years about AI companies scraping their work without permission to train image generation models. Tools like Midjourney and Stable Diffusion are trained on billions of images pulled from the internet, and most of them were never licensed for that. We wanted to give artists an actual technical way to fight back, not just a petition or some terms of service complaint, but something that works at the pixel level. Using adversarial machine learning against the very systems that take advantage of creative work felt like the right move. We built ViperProtection so any artist can protect their images before posting them online, even if they have zero technical background.
What it does
ViperProtection lets artists upload any image and apply invisible adversarial protection before sharing it publicly. The main feature, Viper Poison, adds tiny pixel-level changes that disrupt AI models when they try to train on the image. It works by backpropagating through an image encoder and using the gradients to perturb the pixel values. To a person looking at it the image looks essentially the same, but any AI that trains on it ends up learning garbage patterns instead of the actual content. On top of poisoning, users can apply a viper watermark or a custom watermark drawn directly on the image in a canvas editor, or hide sensitive details with blur and pixelate filters. The tool is free, requires no account, and everything processes in seconds through a simple drag-and-drop interface. The site also has built-in scraping prevention: a robots.txt file tells crawlers not to scrape, and if a scraper ignores it, a honeypot baits it with an invisible link that leads into an endless loop of garbage data.
How we built it
The frontend is a React single page app styled with Tailwind CSS. We used Framer Motion for all the animations, including scroll triggered entry animations, the three state upload flow with smooth transitions between the dropzone, processing, and success screens, and the sliding settings panel. The upload page has a full canvas based watermark editor built with the HTML5 Canvas API, with brush controls, color picker, eraser, and undo/redo history. Files get composited on a hidden canvas before being sent to the backend as a multipart form upload.
The backend is a FastAPI app deployed on DigitalOcean. It handles file ingestion, stores originals in a PostgreSQL database, and routes requests to the right processing pipeline. Blur, pixelate, and watermark are processed directly with Pillow. For Viper Poison, the image is forwarded to a separate adversarial engine running locally on a GPU and exposed via ngrok; we weren't able to access cloud GPU resources during the hackathon, but in the future we'd like to migrate this to DigitalOcean GPU Droplets for a fully cloud-hosted solution. The engine is built on PyTorch and uses a dual-loss optimization loop that combines a VAE-style loss with a CLIP semantic loss. It runs gradient steps with momentum, computing perturbations that maximize how much they disrupt the way models like Stable Diffusion perceive and encode the image. The epsilon parameter controls how strong the perturbation is, from subtle to aggressive.
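A simplified sketch of that optimization loop. Two tiny frozen networks stand in for the real Stable Diffusion VAE and CLIP encoders so the example is self-contained, and the hyperparameter values are illustrative:

```python
# Momentum PGD sketch: push the image's features away from the clean
# image in both a VAE-like space and a CLIP-like embedding space, while
# keeping the perturbation inside an L-infinity ball of radius eps.
import torch
import torch.nn as nn

# Stand-ins for the frozen VAE and CLIP image encoders (inputs: 1x3x64x64).
vae_stub = nn.Sequential(nn.Conv2d(3, 4, 3, stride=2, padding=1)).eval()
clip_stub = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 16)).eval()
for p in list(vae_stub.parameters()) + list(clip_stub.parameters()):
    p.requires_grad_(False)

def poison(image, steps=50, eps=8 / 255, alpha=2 / 255, mu=0.9,
           w_vae=1.0, w_clip=1.0):
    """Maximize feature distance from the clean image in both spaces."""
    with torch.no_grad():
        vae_ref = vae_stub(image)
        clip_ref = clip_stub(image)
    # Random start inside the epsilon ball so the first gradient is nonzero.
    delta = torch.empty_like(image).uniform_(-eps, eps)
    momentum = torch.zeros_like(image)
    for _ in range(steps):
        delta.requires_grad_(True)
        adv = (image + delta).clamp(0, 1)
        loss = (w_vae * (vae_stub(adv) - vae_ref).pow(2).mean()
                + w_clip * (1 - torch.cosine_similarity(
                    clip_stub(adv), clip_ref).mean()))
        (grad,) = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            # Momentum-accumulated, normalized gradient ascent step,
            # re-projected onto the epsilon ball each iteration.
            momentum = mu * momentum + grad / (grad.abs().mean() + 1e-12)
            delta = (delta + alpha * momentum.sign()).clamp(-eps, eps)
    return (image + delta).clamp(0, 1).detach()

protected = poison(torch.rand(1, 3, 64, 64), steps=5)
```

In the real engine, eps is the user-facing strength knob: a larger ball allows stronger (but more visible) perturbations.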
The entire infrastructure runs on DigitalOcean, the React frontend is deployed as a static site on App Platform, the FastAPI backend runs as a web service on App Platform, data is stored on a DigitalOcean Managed PostgreSQL database, and processed images are hosted on DigitalOcean Spaces with CDN for fast global delivery.
Challenges we ran into
The biggest challenge was definitely the adversarial poisoning algorithm. Getting the perturbations strong enough to actually hurt model training while keeping them invisible to the human eye took a lot of careful tuning of the loss weights, step size, momentum, and epsilon range. Too weak and the protection doesn't do anything; too strong and the image starts looking visibly different. We also had to balance processing time, since running 50 gradient steps on a GPU isn't instant and users expect things to be fast. On the frontend side, getting the canvas watermark editor to feel smooth and professional while correctly compositing the drawing onto the original image at full resolution before sending it to the backend was tricky; it required careful handling of coordinate scaling and image data. Getting the three-state UI synced up so animations never conflict and the layout doesn't randomly jump around took a lot of iteration too.
Accomplishments that we're proud of
We're really proud that the core protection actually works. The adversarial perturbations we generate measurably disrupt VAE feature representations and CLIP semantic embeddings, which are exactly what image generation models rely on during training. We also built a metrics system that returns a protection score, a similarity score, and an "AI view" of the original vs. poisoned image, so users can see what the model sees compared to what humans see. That visualization makes the invisible protection feel real and understandable. On the product side, we're proud of how polished the frontend turned out given the time we had, and that the whole tool is completely free with no account needed. In addition, the scraping prevention stops AI companies from downloading our watermarked images, which they could otherwise use to develop countermeasures against the watermarking.
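One plausible way to compute scores like these (the exact formulas ViperProtection uses aren't shown here): PSNR between the original and poisoned pixels as a similarity score, and cosine distance between their feature embeddings as a protection score.

```python
# Illustrative metric sketch, not the production formulas.
import numpy as np

def similarity_score(original: np.ndarray, poisoned: np.ndarray) -> float:
    """PSNR in dB over images scaled to [0, 1]; higher = more visually similar."""
    mse = float(np.mean((original - poisoned) ** 2))
    return float("inf") if mse == 0 else 10 * np.log10(1.0 / mse)

def protection_score(feat_orig: np.ndarray, feat_poisoned: np.ndarray) -> float:
    """Cosine distance between embeddings; higher = the model's view of
    the image has drifted further from the original."""
    cos = np.dot(feat_orig, feat_poisoned) / (
        np.linalg.norm(feat_orig) * np.linalg.norm(feat_poisoned)
    )
    return 1.0 - float(cos)
```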
What we learned
We learned a ton about adversarial machine learning, specifically how to build perturbations using gradient-based optimization against a frozen model. Working with the VAE encoder from Stable Diffusion as an attack target taught us a lot about how these models encode visual information internally. On the engineering side, we learned how to set up a full-stack app where a React frontend, a FastAPI orchestration layer, and a separate GPU compute service all talk to each other cleanly. We also figured out how to handle the UX challenges of async processing, keeping the interface responsive and informative while the backend does heavy computation in the background.
What's next for ViperProtection
The most important next step is running proper tests to measure how much our poisoning actually degrades downstream model fine tuning, and publishing those results openly. We also want to look into batch processing so artists can protect whole portfolios at once instead of one image at a time. Adding browser extension support would let artists protect images directly on platforms like ArtStation or DeviantArt without leaving the site. Long term the goal is to make this a community maintained open source tool that artists can actually trust and build on.
Built With
- digitalocean
- framermotion
- lucide-icons
- python
- react
- tailwind

