🎨 Poisoned Beauty: Protecting Artists in the Age of AI

  • (AI in Cybersecurity Track)

🚀 About the Project

As a team of technologists deeply immersed in the world of AI, we’ve always been captivated by its potential, but also wary of its unintended consequences. Over the past year, we watched as many of our close friends (illustrators, digital painters, photographers) saw their work scraped, repurposed, and fed into massive AI models without their consent, credit, or compensation.

We wanted to change that.

💡 The Idea

What if there were a way for artists to fight back, not with takedown notices or lawsuits, but with code?

What if you could cloak your artwork, making it toxic to AI training models while remaining beautiful and untouched to human viewers?
What if you could track where your art spreads online, get proof of ownership, and even pursue compensation when companies misuse your work?

That’s the tool we built: a platform that empowers artists with AI-based image protection, adversarial cloaking, and internet-wide tracking, all rolled into one.


💡 Inspiration

The inspiration came from a mix of empathy and outrage. Watching our artist friends feel helpless as their work appeared in Midjourney outputs or in LLM-generated marketing copy was deeply frustrating. We realized that despite all the conversations around AI ethics, few actual tools were being built to protect the creators themselves.

We also saw that the impact goes beyond individual artists. Big media companies like Getty, Adobe, and news outlets have started to realize that their image libraries are being mined for free. There’s a growing hunger not just for ethics, but for enforceable control and monetization.

This tool offers both.


🧱 How We Built It

We started with a simple principle: build an MVP that actually poisons AI models, and then scale the vision from there.

🔐 Key Components:

  • Adversarial Image Cloaking
    Using TensorFlow and adversarial attack techniques (FGSM, PGD), we generate imperceptible pixel-level noise that prevents models from learning anything useful. Human viewers see the same image; models get garbage.

  • Vision Embedding + Search Agent
    Using CLIP and OpenCLIP, we built a system to monitor the web for similar images, even if cropped, color-shifted, or filtered. It lets artists know where their art is being used.

  • LLM-Powered Context Analysis
    We integrated an LLM (GPT-3.5 for the hackathon; we plan to swap in our own model afterward) to analyze the text surrounding found images and determine whether each hit is a credited repost, an AI generation, or a case of theft.

  • Auth-Backed Ownership Tagging
    Every uploaded image gets a cryptographic signature + metadata fingerprint to prove original ownership and usage intent.

  • Artist Dashboard
    A clean, actionable dashboard that shows where your art is, how it’s being used, and lets you prepare takedown notices or replacement uploads.
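The FGSM step at the heart of the cloaking component can be sketched in a few lines. This toy uses numpy and a random stand-in gradient; in the real pipeline the gradient of a training loss with respect to the image would come from TensorFlow (e.g. `tf.GradientTape`), and epsilon is tuned per image:

```python
import numpy as np

def fgsm_cloak(image, grad, epsilon=0.01):
    """One FGSM step: nudge each pixel by epsilon in the direction
    that increases the model's loss (the sign of the gradient),
    then clip back to the valid pixel range."""
    perturbed = image + epsilon * np.sign(grad)
    return np.clip(perturbed, 0.0, 1.0)

# Toy example: a random "image" and a stand-in gradient.
rng = np.random.default_rng(0)
image = rng.random((4, 4, 3))
grad = rng.standard_normal((4, 4, 3))

cloaked = fgsm_cloak(image, grad, epsilon=0.01)
# The perturbation is bounded: no pixel moves by more than epsilon.
print(float(np.max(np.abs(cloaked - image))) <= 0.01 + 1e-9)  # True
```

PGD, the other technique we use, is essentially this step applied iteratively with a projection back into the epsilon-ball after each iteration.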
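The search agent's matching logic reduces to cosine similarity over embedding vectors. A minimal sketch with toy 3-dimensional vectors (real embeddings are high-dimensional vectors from OpenCLIP's image encoder, and the 0.9 threshold here is illustrative, not our tuned value):

```python
import numpy as np

def cosine_similarity(a, b):
    """CLIP-style match score between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_matches(query, candidates, threshold=0.9):
    """Return indices of candidate embeddings similar to the query.
    Crops, color shifts, and filters move an embedding only slightly,
    so near-duplicates stay above the threshold."""
    return [i for i, c in enumerate(candidates)
            if cosine_similarity(query, c) >= threshold]

query = np.array([1.0, 0.0, 0.0])
candidates = [
    np.array([0.98, 0.1, 0.0]),   # near-duplicate (e.g. a cropped copy)
    np.array([0.0, 1.0, 0.0]),    # unrelated image
]
print(find_matches(query, candidates))  # [0]
```

In production, lowering the threshold catches more transformed copies but produces the false positives mentioned under Challenges below.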
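The ownership tag can be sketched as a hash fingerprint plus an HMAC signature over the fingerprint and metadata. This is a simplified stand-in for our actual signing scheme; the key handling and metadata fields shown are illustrative:

```python
import hashlib
import hmac
import json

def tag_image(image_bytes, metadata, secret_key):
    """Fingerprint = SHA-256 of the raw pixels; signature = HMAC over
    the fingerprint plus metadata, tying the work to its registrant."""
    fingerprint = hashlib.sha256(image_bytes).hexdigest()
    payload = json.dumps({"fingerprint": fingerprint, "meta": metadata},
                         sort_keys=True).encode()
    signature = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return {"fingerprint": fingerprint, "metadata": metadata,
            "signature": signature}

def verify_tag(image_bytes, tag, secret_key):
    """Recompute the tag and compare signatures in constant time."""
    expected = tag_image(image_bytes, tag["metadata"], secret_key)
    return hmac.compare_digest(expected["signature"], tag["signature"])

key = b"platform-secret"  # hypothetical key; real keys live in a vault
tag = tag_image(b"raw image bytes", {"artist": "jane", "license": "no-AI"}, key)
print(verify_tag(b"raw image bytes", tag, key))   # True
print(verify_tag(b"tampered bytes", tag, key))    # False
```

Any change to the image bytes or the metadata invalidates the signature, which is what makes the tag usable as proof of original ownership and usage intent.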


🧠 What We Learned

  • Adversarial image cloaking is wildly effective but tricky: the perturbations must stay invisible to humans while remaining strong enough to poison training.
  • CLIP-based similarity search can match images even across heavy transformations, making it a killer tool for tracking.
  • Artists want control, but they also want it to be easy, so we learned how to package hard tech in a way that feels simple and empowering.
  • Business-wise, this tech is much bigger than indie artists: it has legs in the enterprise space, licensing, and even copyright law enforcement.

⚔️ Challenges We Faced

  • Speed: Running adversarial cloaking on many images can be slow, so we had to optimize the pipeline and batch operations.
  • Similarity detection: Filtering false positives from web crawls required fine-tuning both embedding distance thresholds and LLM prompts.
  • Ethical considerations: We had long internal discussions about how to build this defensively; we want to protect creators, not break models recklessly.
  • Web limitations: Not all platforms allow easy replacement of uploaded images, so we had to build workarounds and browser-based enhancement tools.

🌍 The Vision

This tool is more than a hackathon project. It’s a blueprint for an ecosystem that:

  • Protects artists from unauthorized AI training
  • Enables businesses to monetize their media archives more ethically
  • Pressures AI companies to license content fairly
  • Changes the default narrative from “scraping everything is fine” to “creators have power”

This isn’t just a defensive tool; it’s a market signal: a visible, viral, creator-led pushback that forces AI developers to think ethically and economically before using copyrighted material.


💼 Business Potential

  • Sell enterprise licenses to Getty, Adobe, Shutterstock, news orgs
  • Offer subscription tiers for independent artists
  • Develop a plugin layer for social platforms and CMS tools
  • Partner with legal firms to offer built-in takedown generation services
  • API for AI model developers to license clean data, flipping the power dynamic

❤️ Final Thoughts

We didn’t just build this because it was technically cool. We built it because the people we care about are being hurt by a system that forgot them.

Now, we want to give them a sword, something that says,

“Your art has value. Your voice matters. And now, you have power.”
