jailbreak-prompts
Here are 9 public repositories matching this topic:
HacxGPT Jailbreak 🚀: Unlock the full potential of top AI models like ChatGPT, LLaMA, and more with the world's most advanced jailbreak prompts 🔓.
Updated Mar 15, 2026
New update (added Agentic-Mode, Dark-GODMode): The Real BlackHat GPT - an AI that does your illegal stuff without saying anything. Use at your own risk!
Updated Mar 29, 2026
A rationalist ruleset for "debugging" LLMs, auditing their internal reasoning and uncovering biases; also a jailbreak.
Updated Nov 1, 2025
Utterly inelegant prompts for local LLMs, with scary results.
Updated Aug 22, 2025
Bootstra AI Jailbreak for iOS: The World’s First AI-Powered Jailbreaking Tool
Updated Jan 13, 2026
A tool for auditing bias through large language models
Updated Jan 19, 2026 - Python
Ethical AI Hacking Lab 2026 🛡️ - Learn Cybersecurity & Defense | Free Tools
Updated Apr 14, 2026
🔍 Track contradictions in AI and human content with LBOS-LCAS, enhancing bias and coherence analysis for clearer understanding and insights.
Updated Apr 22, 2026 - Python
A framework for evaluating how open-source language models handle adversarial prompts, testing them across hallucination traps and jailbreak scenarios.
Updated Apr 12, 2026 - Jupyter Notebook