# Security Awareness Course

Comprehensive, peer-reviewed security awareness training for deepfakes, prompt injection, and AI security threats.

📚 Read the Course | 🤝 Contribute | 💬 Discuss


## 🎯 What You'll Learn

### Deepfake Security

  • Detection techniques (visual, audio, metadata)
  • Prevention strategies (authentication, verification)
  • Incident response procedures
  • Forensic analysis methods
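
To make the metadata side of detection concrete, here is a minimal, stdlib-only triage sketch. The function name `metadata_flags` and the marker list are illustrative assumptions, not course code; missing EXIF data or an editor tag is a weak signal that justifies closer forensic review, never proof of manipulation.

```python
# Heuristic metadata triage for image files. Illustrative only: absence
# of EXIF data or presence of an editor marker is a weak signal.

SUSPICIOUS_MARKERS = (b"Photoshop", b"GIMP", b"Stable Diffusion")  # hypothetical deny-list

def metadata_flags(path):
    """Return a list of human-readable warning flags for an image file."""
    with open(path, "rb") as f:
        head = f.read(128 * 1024)  # metadata sits near the start of the file
    flags = []
    # JPEG files begin with FF D8; cameras normally embed an Exif block.
    if head[:2] == b"\xff\xd8" and b"Exif\x00\x00" not in head:
        flags.append("JPEG with no EXIF block (common after AI generation or re-encoding)")
    for marker in SUSPICIOUS_MARKERS:
        if marker in head:
            flags.append("editing-software marker found: " + marker.decode())
    return flags
```

A real pipeline would pair a check like this with visual and audio analysis and with provenance standards such as C2PA, covered later in the course.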

### AI Security

  • Prompt injection attacks and defenses
  • LLM security best practices
  • Input validation and sanitization
  • Output filtering and monitoring
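
As a sketch of the input-validation bullet, here is a minimal deny-list screen in Python. The pattern list and `screen_user_input` are illustrative assumptions; pattern matching alone is easily bypassed, so real deployments layer it with privilege separation, output filtering, and monitoring as the course describes.

```python
import re

# Illustrative deny-list; real injection attempts are far more varied,
# so treat a match as a signal for logging and review, not a complete defense.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"you are now in developer mode",
]

def screen_user_input(text):
    """Return (is_suspicious, matched_patterns) for one user message."""
    hits = [p for p in INJECTION_PATTERNS if re.search(p, text, re.IGNORECASE)]
    return (bool(hits), hits)
```

Flagged messages can be logged and routed to stricter handling rather than rejected outright, which keeps false positives from blocking legitimate users.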

### Industry Standards

  • NIST AI Risk Management Framework
  • OWASP Top 10 for LLM Applications
  • ISO/IEC 42001:2023 (AI Management)
  • C2PA Content Authenticity

### Legal & Compliance

  • GDPR, CCPA, BIPA regulations
  • Criminal statutes (18 U.S.C. §2252, CFAA)
  • Civil liability and remedies
  • International frameworks

## 📖 Course Structure

20+ Chapters organized into:

  1. Basics - Understanding deepfakes and AI threats
  2. Detection - Identifying manipulated content
  3. Prevention - Protecting systems and users
  4. Response - Handling security incidents
  5. Advanced - Forensics, legal, standards, threat intelligence
  6. Community - Learning paths and resources

## 🚀 Quick Start

### Read Online

Visit https://durellwilson.github.io/security-awareness-course

### Run Locally

```bash
# Clone the repository
git clone https://github.com/durellwilson/security-awareness-course.git
cd security-awareness-course

# Install mdBook (requires a Rust toolchain with cargo)
cargo install mdbook

# Serve locally with live reload
cd book && mdbook serve
# Open http://localhost:3000
```

## 🤝 Contributing

We welcome contributions from the security community!

### Ways to Contribute

  • 📊 Research: Submit peer-reviewed findings
  • 💻 Code: Add detection/prevention examples
  • 📝 Documentation: Improve explanations
  • 🔍 Case Studies: Document real incidents
  • 🌍 Translation: Internationalize content

See CONTRIBUTING.md for detailed guidelines.

### Recognition Levels

  • 🌱 Contributor: 1+ merged PR
  • 🌿 Regular: 5+ merged PRs
  • 🌳 Core: 20+ merged PRs

## 🔬 Research Quality

All content is backed by peer-reviewed research:

  • ✅ Academic journals (IEEE, ACM, California Law Review)
  • ✅ Government standards (NIST, CISA, NSA)
  • ✅ Industry frameworks (OWASP, MITRE ATT&CK)
  • ✅ Reputable vendors (Microsoft, Google, IBM)

15+ citations with DOI/arXiv verification links.

πŸ› οΈ Technical Stack

  • Documentation: mdBook
  • CI/CD: GitHub Actions (test, build, deploy)
  • Hosting: GitHub Pages
  • Security: Trivy scanning, markdown linting
  • Quality: Automated spell checking, link validation

## 📊 Statistics

  • 500% increase in deepfake videos (2022-2024)
  • 73% of LLM apps vulnerable to prompt injection
  • 96% of deepfakes are non-consensual content
  • $4.5M average data breach cost

Sources: Sensity AI, Microsoft Security, IBM Security

## 🎓 Learning Paths

### Beginner (2-4 weeks)

Introduction → Basics → Detection → Prevention

### Intermediate (4-8 weeks)

Beginner path + Prompt Injection → Advanced Detection → Incident Response

### Advanced (8-12 weeks)

Intermediate path + Forensics → Legal → Standards → Threat Intelligence

## 🌟 Community

### Detroit Open Source

Part of the Detroit tech community contributing to:

  • Developer education
  • Security awareness
  • Open source collaboration
  • Hacktoberfest initiatives

### Connect

  • GitHub Discussions: Ask questions, share insights
  • Issues: Report bugs, request features
  • Pull Requests: Contribute improvements

## 📜 License

MIT License - see LICENSE for details.

## 🔒 Security

Found a vulnerability? See SECURITY.md for responsible disclosure.

πŸ™ Acknowledgments

Built with contributions from the open source security community.

Key Research Sources:

  • Chesney & Citron (2019) - California Law Review
  • Tolosana et al. (2020) - Information Fusion
  • Greshake et al. (2023) - arXiv
  • NIST AI RMF (2023)
  • OWASP LLM Top 10 (2024)

⭐ Star this repo to support security education!

🚀 Start Learning Now
