Comprehensive, peer-reviewed security awareness training for deepfakes, prompt injection, and AI security threats.
Read the Course | Contribute | Discuss
- Detection techniques (visual, audio, metadata; see the metadata sketch after this list)
- Prevention strategies (authentication, verification)
- Incident response procedures
- Forensic analysis methods
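Metadata inspection is the quickest of these checks to automate. The sketch below is a minimal, illustrative example, assuming Pillow is installed; `exif_red_flags` and the sample file name are hypothetical names chosen for illustration, and missing or stripped EXIF data is only a weak signal, never proof of manipulation.

```python
# Minimal metadata-screening sketch (assumes Pillow: pip install pillow).
# Missing or stripped EXIF data is a weak signal only, never proof of a deepfake.
from PIL import Image, ExifTags

def exif_red_flags(path: str) -> list[str]:
    """Return weak indicators that an image's provenance is unclear."""
    flags = []
    exif = Image.open(path).getexif()
    if not exif:
        return ["no EXIF metadata (common after re-encoding or AI generation)"]
    named = {ExifTags.TAGS.get(tag, tag): value for tag, value in exif.items()}
    if "Make" not in named or "Model" not in named:
        flags.append("no camera make/model recorded")
    software = str(named.get("Software", ""))
    if software:
        flags.append(f"post-processing software tag present: {software}")
    return flags

if __name__ == "__main__":
    for flag in exif_red_flags("sample.jpg"):  # hypothetical input file
        print("flag:", flag)
```

Visual and audio detection require the model-based tooling covered in the Detection chapters; a check like this is only a cheap first-pass filter.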
- Prompt injection attacks and defenses (see the input/output filtering sketch after this list)
- LLM security best practices
- Input validation and sanitization
- Output filtering and monitoring
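To make the input-validation and output-filtering ideas concrete, here is a minimal sketch. `call_llm`, `guarded_call`, and the pattern lists are hypothetical stand-ins chosen for illustration; real defenses layer allow-lists, structured prompts, least-privilege tool access, and human review on top of simple pattern matching.

```python
# Minimal input-validation / output-filtering sketch around a stubbed LLM call.
# Pattern matching alone is easy to bypass; treat this as a first layer only.
import re

INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"disregard (the )?system prompt",
    r"reveal (your|the) system prompt",
]
SECRET_PATTERN = re.compile(r"(api[_-]?key|password|BEGIN [A-Z ]*PRIVATE KEY)", re.IGNORECASE)

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call."""
    return f"(model response to: {prompt})"

def screen_input(user_text: str) -> str:
    """Reject obvious injection phrasing before it reaches the model."""
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, user_text, re.IGNORECASE):
            raise ValueError("possible prompt injection detected")
    return user_text

def filter_output(model_text: str) -> str:
    """Redact secret-looking strings before the response is shown or logged."""
    return SECRET_PATTERN.sub("[REDACTED]", model_text)

def guarded_call(user_text: str) -> str:
    return filter_output(call_llm(screen_input(user_text)))

if __name__ == "__main__":
    print(guarded_call("Summarize today's security advisories"))
```

Pattern filters like these are only one layer; the OWASP Top 10 for LLM Applications listed below pairs them with privilege separation and human oversight.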
- NIST AI Risk Management Framework
- OWASP Top 10 for LLM Applications
- ISO/IEC 42001:2023 (AI Management)
- C2PA Content Authenticity
- GDPR, CCPA, BIPA regulations
- Criminal statutes (18 U.S.C. § 2252, CFAA)
- Civil liability and remedies
- International frameworks
20+ Chapters organized into:
- Basics - Understanding deepfakes and AI threats
- Detection - Identifying manipulated content
- Prevention - Protecting systems and users
- Response - Handling security incidents
- Advanced - Forensics, legal, standards, threat intelligence
- Community - Learning paths and resources
Visit durellwilson.github.io/security-awareness-course
```bash
# Clone repository
git clone https://github.com/durellwilson/security-awareness-course.git
cd security-awareness-course

# Install mdBook
cargo install mdbook

# Serve locally
cd book && mdbook serve

# Open http://localhost:3000
```

We welcome contributions from the security community!
- Research: Submit peer-reviewed findings
- Code: Add detection/prevention examples
- Documentation: Improve explanations
- Case Studies: Document real incidents
- Translation: Internationalize content
See CONTRIBUTING.md for detailed guidelines.
- Contributor: 1+ merged PR
- Regular: 5+ merged PRs
- Core: 20+ merged PRs
All content is backed by peer-reviewed research:
- Academic journals (IEEE, ACM, California Law Review)
- Government standards (NIST, CISA, NSA)
- Industry frameworks (OWASP, MITRE ATT&CK)
- Reputable vendors (Microsoft, Google, IBM)
15+ Citations with DOI/arXiv verification links.
- Documentation: mdBook
- CI/CD: GitHub Actions (test, build, deploy)
- Hosting: GitHub Pages
- Security: Trivy scanning, markdown linting
- Quality: Automated spell checking, link validation (see the link-check sketch after this list)
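The link-validation step in CI relies on off-the-shelf GitHub Actions; the sketch below only illustrates the idea for a local run, assuming the `requests` package and mdBook's default `book/src` source directory (inferred from the quick start above).

```python
# Illustrative local link check (assumes: pip install requests).
# CI uses existing GitHub Actions for this; the script just shows the idea.
import re
import sys
from pathlib import Path

import requests

LINK_RE = re.compile(r"\[[^\]]*\]\((https?://[^)\s]+)\)")

def check_links(source_dir: str) -> int:
    """Return the number of markdown links that fail an HTTP HEAD request."""
    broken = 0
    for md_file in Path(source_dir).rglob("*.md"):
        for url in LINK_RE.findall(md_file.read_text(encoding="utf-8")):
            try:
                status = requests.head(url, allow_redirects=True, timeout=10).status_code
                ok = status < 400
            except requests.RequestException:
                ok = False
            if not ok:
                print(f"{md_file}: broken link {url}")
                broken += 1
    return broken

if __name__ == "__main__":
    sys.exit(1 if check_links("book/src") else 0)  # book/src assumed from the quick start
```

Running a check like this before pushing can catch dead links locally before CI does.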
- 500% increase in deepfake videos (2022-2024)
- 73% of LLM apps vulnerable to prompt injection
- 96% of deepfakes are non-consensual content
- $4.5M average data breach cost
Sources: Sensity AI, Microsoft Security, IBM Security
- Beginner: Introduction → Basics → Detection → Prevention
- Intermediate: Beginner path + Prompt Injection → Advanced Detection → Incident Response
- Advanced: Intermediate path + Forensics → Legal → Standards → Threat Intelligence
Part of the Detroit tech community contributing to:
- Developer education
- Security awareness
- Open source collaboration
- Hacktoberfest initiatives
- GitHub Discussions: Ask questions, share insights
- Issues: Report bugs, request features
- Pull Requests: Contribute improvements
MIT License - see LICENSE for details.
Found a vulnerability? See SECURITY.md for responsible disclosure.
Built with contributions from the open source security community.
Key Research Sources:
- Chesney & Citron (2019) - California Law Review
- Tolosana et al. (2020) - Information Fusion
- Greshake et al. (2023) - arXiv
- NIST AI RMF (2023)
- OWASP LLM Top 10 (2024)
Star this repo to support security education!
Start Learning Now