Skip to content

tessera-ops/awesome-ai-security

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

1 Commit
 
 

Repository files navigation

Awesome AI Security Awesome

A curated list of tools, frameworks, papers, and resources for AI/ML security testing, adversarial machine learning, LLM red-teaming, and agentic AI safety.

Contributions welcome! See CONTRIBUTING.md for guidelines.


Contents


Frameworks

Comprehensive security testing frameworks that cover multiple attack categories.

Tool Stars Coverage License
Tessera Stars 42 OWASP tests, 5 categories (MOD/APP/INF/DAT/AGT), full Agentic AI Top 10 Apache 2.0
Garak Stars LLM vulnerability probing Apache 2.0
Counterfit Stars Adversarial ML attack automation MIT
AIShield Stars ML model security Apache 2.0

LLM Red-Teaming Tools

Tools specifically designed for testing Large Language Models.

Tool Stars Focus License
Tessera Stars 14 APP tests + 10 AGT tests, 3-phase methodology Apache 2.0
Garak Stars LLM vulnerability probing and scanning Apache 2.0
PyRIT Stars Python Risk Identification Toolkit for GenAI (Microsoft) MIT
Agentic Radar Stars Agentic workflow security scanner Apache 2.0
ClawMoat Stars Runtime security scanner for AI agents MIT
LLMFuzzer Stars Fuzzing framework for LLMs MIT
Rebuff Stars Prompt injection detection Apache 2.0

Adversarial ML Libraries

Libraries for adversarial attacks and defenses on ML models.

Tool Stars Focus License
IBM ART Stars Adversarial attacks, defenses, certifications MIT
Foolbox Stars Adversarial perturbations MIT
CleverHans Stars Adversarial examples for ML MIT
TextAttack Stars NLP adversarial attacks MIT
AugLy Stars Data augmentation for robustness testing MIT

Agentic AI Security

Tools and resources specific to AI agent security — the OWASP Top 10 for Agentic Applications (ASI 2026).

ASI Risk Description Test Tools
ASI-01 Agent Goal Hijacking Tessera AGT-03
ASI-02 Tool Misuse Tessera AGT-02
ASI-03 Identity & Privilege Abuse Tessera AGT-05
ASI-04 Agentic Supply Chain Tessera AGT-01
ASI-05 Unexpected Code Execution Tessera AGT-06
ASI-06 Memory & Context Poisoning Tessera AGT-04
ASI-07 Insecure Inter-Agent Comms Tessera AGT-07
ASI-08 Cascading Failures Tessera AGT-08
ASI-09 Human-Agent Trust Exploitation Tessera AGT-09
ASI-10 Rogue Agents Tessera AGT-10

Guardrails & Runtime Protection

Tools that protect AI systems at runtime.

Tool Stars Focus License
LLM Guard Stars Input/output guardrails for LLMs MIT
NeMo Guardrails Stars Programmable guardrails for LLM apps Apache 2.0
Guardrails AI Stars Input/output validation for LLMs Apache 2.0
Lakera Guard Prompt injection detection API SaaS
Detoxify Stars Toxicity detection Apache 2.0

Compliance & Governance

Frameworks and tools for AI regulatory compliance.

Resource Type Coverage
Tessera --format compliance Tool EU AI Act (42-test mapping), NIST AI RMF, SOC 2, ISO 27001
EU AI Act Regulation EU regulation on AI systems (deadline: Aug 2, 2026)
NIST AI RMF Framework US AI risk management framework
ISO/IEC 42001 Standard AI management system standard
Fairlearn Tool ML fairness assessment
AI Verify Tool AI governance testing framework (Singapore)

Vulnerability Databases

Resource Description
OWASP Top 10 for LLM Applications Top 10 risks for LLM-based applications
OWASP Top 10 for Agentic Applications (ASI 2026) Top 10 risks for AI agent systems
MITRE ATLAS Adversarial Threat Landscape for AI Systems
AI Incident Database Database of AI-related incidents and failures
AVID AI Vulnerability Database

Standards & Guidelines

Standard Organization Focus
OWASP AI Testing Guide OWASP AI security testing methodology
NIST AI 100-2 NIST Adversarial ML taxonomy
ISO/IEC 27090 ISO Cybersecurity for AI
EU AI Act European Union AI regulation
Singapore AI Verify IMDA AI governance framework

Research Papers

Surveys

Agentic AI Security

Prompt Injection

Courses & Training

Course Provider Topic
AI Red Teaming Microsoft Red teaming AI systems
Adversarial Machine Learning Academic Adversarial ML fundamentals
LLM Security Community LLM-specific security
Damn Vulnerable LLM Agent WithSecure Hands-on LLM agent security

Conferences & Events

Event Focus
OWASP Global AppSec Application security (AI track)
DEF CON AI Village AI security research
NeurIPS ML Safety Workshop ML safety and robustness
IEEE SaTML Security and trustworthy ML

Contributing

Contributions welcome! Please submit a PR with:

  • Tool name, link, and brief description
  • Star badge if it's a GitHub project
  • Correct category placement

License

CC0

About

A curated list of awesome AI security tools, frameworks, and resources. OWASP AI Testing Guide, Agentic AI Top 10, EU AI Act, adversarial ML, LLM red-teaming, prompt injection.

Topics

Resources

Stars

Watchers

Forks

Releases

No releases published

Packages

 
 
 

Contributors