This repository holds resources gathered during my research and study of AI security.
Over time, I've collected a wide range of materials that I aim to organize and maintain here. These include foundational and advanced resources that support my ongoing exploration of security challenges in artificial intelligence systems such as LLMs, AI agents, and MCP (Model Context Protocol) servers.
- Curated posts and personal notes on AI security topics, vulnerabilities, and real-world incidents.
- Key academic and industry papers covering prompt injection, model inversion, data poisoning, secure model training, red teaming, and related areas.
- Open-source tools and frameworks for analyzing, red-teaming, and securing AI models, including MCP scanners.
Contributions are welcome!
If you'd like to add a new tool, paper, or blog post, please see CONTRIBUTING.md for formatting and submission guidelines.