Inspiration

Hackathons are getting more competitive, with bigger prizes and more AI-generated projects. I wanted to test two things: how far could you push cheating at a hackathon using AI, and could that same AI help catch it?

This project started as a red-teaming experiment and evolved into an AI Safety & Control Framework for detecting and flagging plagiarized, stolen code.


What it does

  • Clones a public GitHub repo.
  • Rewrites the entire commit history in your name.
  • Alters commit timestamps to fall within the hackathon window, spaced at randomized intervals to simulate an "all-nighter grind."
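The timestamp trick above can be sketched in Python. This is an illustrative helper, not the project's actual code (the function name and signature are assumptions): it generates commit dates at random intervals inside the hackathon window, which could then be passed to Git via the `GIT_AUTHOR_DATE` / `GIT_COMMITTER_DATE` environment variables during a history rewrite.

```python
import random
from datetime import datetime, timedelta

def randomized_timestamps(n_commits, start, end, seed=None):
    """Return n_commits datetimes inside [start, end], sorted ascending,
    spaced at random intervals to mimic a burst of late-night work.

    Hypothetical helper for illustration only.
    """
    rng = random.Random(seed)
    span = (end - start).total_seconds()
    offsets = sorted(rng.uniform(0, span) for _ in range(n_commits))
    return [start + timedelta(seconds=o) for o in offsets]

# Example: fake an overnight grind during a 15-hour hackathon window.
window_start = datetime(2025, 5, 3, 18, 0)
window_end = datetime(2025, 5, 4, 9, 0)
dates = randomized_timestamps(5, window_start, window_end, seed=42)
# Each date would be exported as GIT_AUTHOR_DATE/GIT_COMMITTER_DATE
# while replaying one commit of the rewritten history.
```

Because the offsets are sorted before use, the rewritten history stays chronologically consistent, which is what makes the fabricated timeline look plausible.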

How we built it

  • Used Git + Python to rebase repos with new commit authors and timestamps.
  • Queried GitHub’s API using Llama-generated keywords extracted from the stolen codebase.
  • Selected top 20 similar repos based on code structure.
  • Compared the code for similar lines.

Accomplishments that we're proud of

  • Successfully simulated realistic cheating behavior.
  • Built a working detection pipeline to reverse-engineer and flag plagiarism.
  • Tied both tools into a real AI safety and control use case.

What we learned

  • AI can easily be used to manipulate trust-based systems.
  • Red-teaming is critical for building guardrails.
  • You can’t detect bad actors without first thinking like one.

What's next for Cheatathon

  • Use AI to rewrite the codebase itself.
  • Add deeper analysis of commit graphs and contributor behavior to catch anomalous commits.

Built With

  • bolt
  • githubapi
  • llama
  • orchids