AI-Powered Vulnerability Scanner with LLM Explanations
VulnScanner-LLM is an advanced security scanning tool that combines traditional SAST (Static Application Security Testing) with AI-powered explanations and remediation suggestions.
- Multi-Scanner Support: Integrates Semgrep, Bandit, CodeQL, and Safety
- AI Explanations: LLM-powered detailed vulnerability explanations
- Smart Remediation: Automated fix suggestions with code examples
- SARIF Format: Full SARIF 2.1.0 support for CI/CD integration
- REST API: FastAPI-based API for integration
- CLI Tool: Rich command-line interface
- Async Architecture: High-performance async scanning
- Type Safety: Full type hints and mypy compliance
Author: Ayi NEDJIMI

- Website: ayinedjimi-consultants.fr
- HuggingFace: AYI-NEDJIMI
- Contact: [email protected]
VulnScanner-LLM is a production-ready vulnerability scanner designed by cybersecurity expert Ayi NEDJIMI. It combines the power of multiple security tools with cutting-edge AI to provide comprehensive vulnerability assessment and remediation guidance.
```bash
# Clone the repository
git clone https://github.com/AYI-NEDJIMI/VulnScanner-LLM.git
cd VulnScanner-LLM

# Install dependencies
pip install -r requirements.txt

# Install the package
pip install -e .

# Install scanning tools
pip install semgrep bandit
```

Requirements:

- Python 3.11+
- OpenAI API key (for LLM features)
- Semgrep (optional)
- Bandit (optional)
- CodeQL (optional)
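Since Semgrep, Bandit, and CodeQL are optional, it can be useful to check which ones are actually on the PATH before scanning. A minimal sketch using only the standard library (the helper name is illustrative, not part of the package):

```python
import shutil

def available_scanners(candidates=("semgrep", "bandit", "codeql")):
    """Report which optional scanner binaries are installed on PATH."""
    return {name: shutil.which(name) is not None for name in candidates}

print(available_scanners())
```

On a machine with only Bandit installed this would print something like `{'semgrep': False, 'bandit': True, 'codeql': False}`.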
Create a `.env` file:

```
OPENAI_API_KEY=your_api_key_here
OPENAI_MODEL=gpt-4
```

CLI usage:

```bash
# Basic scan
vulnscanner scan /path/to/code

# Scan with specific scanner
vulnscanner scan /path/to/code --scanner semgrep

# Scan with AI explanations
vulnscanner scan /path/to/code --explain --remediate

# Export to SARIF
vulnscanner scan /path/to/code --output results.sarif --format sarif

# Filter by severity
vulnscanner scan /path/to/code --severity high
```

Python API usage:

```python
from vulnscanner_llm import VulnerabilityScanner, ScanConfig, ScannerType

# Configure scan
config = ScanConfig(
    target_path="/path/to/code",
    scanners=[ScannerType.SEMGREP, ScannerType.BANDIT],
    severity_threshold="medium",
)

# Run scan
with VulnerabilityScanner(config) as scanner:
    results = scanner.scan()

# Display results
print(f"Found {len(results.vulnerabilities)} vulnerabilities")
for vuln in results.vulnerabilities:
    print(f"{vuln['severity']}: {vuln['title']} in {vuln['file']}")
```
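A severity threshold like the one passed to `ScanConfig` can also be applied to results client-side. A minimal sketch, assuming the ordering and helper below (both illustrative, not the library's API):

```python
# Illustrative severity scale; not necessarily the library's internal ordering.
SEVERITY_ORDER = ["info", "low", "medium", "high", "critical"]

def filter_by_severity(vulnerabilities, threshold="medium"):
    """Keep findings at or above the given severity threshold."""
    minimum = SEVERITY_ORDER.index(threshold)
    return [
        v for v in vulnerabilities
        if SEVERITY_ORDER.index(v["severity"]) >= minimum
    ]

findings = [
    {"title": "Debug mode enabled", "severity": "low"},
    {"title": "SQL injection", "severity": "critical"},
    {"title": "Weak hash", "severity": "medium"},
]
print([v["title"] for v in filter_by_severity(findings, "medium")])
# ['SQL injection', 'Weak hash']
```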
```bash
# Start API server
uvicorn vulnscanner_llm.api:app --reload

# Use the API
curl -X POST http://localhost:8000/scan \
  -H "Content-Type: application/json" \
  -d '{"target_path": "/path/to/code"}'
```

Project structure:

```
VulnScanner-LLM/
├── src/vulnscanner_llm/
│   ├── scanner.py        # Core scanning engine
│   ├── llm_explainer.py  # AI explanation generator
│   ├── sarif_parser.py   # SARIF format handler
│   ├── remediation.py    # Remediation engine
│   ├── api.py            # FastAPI REST API
│   └── cli.py            # Command-line interface
├── tests/                # Test suite
├── examples/             # Usage examples
└── docs/                 # Documentation
```
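The curl call shown above can also be issued from Python with only the standard library. A sketch (the endpoint and payload come from the curl example; nothing else is assumed about the API):

```python
import json
import urllib.request

# Build the same request as the curl example.
payload = json.dumps({"target_path": "/path/to/code"}).encode("utf-8")
request = urllib.request.Request(
    "http://localhost:8000/scan",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# With the server running, send it and read the JSON response:
# with urllib.request.urlopen(request) as response:
#     results = json.load(response)

print(request.get_method())  # POST
```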
| Scanner | Language Support | Features |
|---|---|---|
| Semgrep | Multi-language | Fast, customizable rules |
| Bandit | Python | Security-focused |
| CodeQL | Multi-language | Deep semantic analysis |
| Safety | Python | Dependency vulnerabilities |
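Each scanner above emits its own native report format, so multi-scanner support implies a normalization step that maps every finding to one common shape. A sketch for Bandit's JSON output (the target field names are hypothetical, not the library's actual schema):

```python
def normalize_bandit(issue):
    """Map one Bandit JSON issue to a common vulnerability dict (illustrative)."""
    return {
        "id": issue["test_id"],
        "title": issue["issue_text"],
        "severity": issue["issue_severity"].lower(),
        "file": issue["filename"],
        "line": issue["line_number"],
        "scanner": "bandit",
    }

bandit_issue = {
    "test_id": "B303",
    "issue_text": "Use of insecure MD5 hash function.",
    "issue_severity": "MEDIUM",
    "filename": "app/auth.py",
    "line_number": 42,
}
print(normalize_bandit(bandit_issue)["severity"])  # medium
```

One adapter per scanner keeps the explanation and remediation layers independent of any single tool's format.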
AI explanations:

```python
from vulnscanner_llm import VulnerabilityScanner, LLMExplainer, ScanConfig

# Scan code
config = ScanConfig(target_path="./myapp")
scanner = VulnerabilityScanner(config)
results = scanner.scan()

# Get AI explanations
explainer = LLMExplainer()
for vuln in results.vulnerabilities[:5]:  # First 5
    explanation = explainer.explain({
        "vulnerability_id": vuln["id"],
        "title": vuln["title"],
        "severity": vuln["severity"],
        "description": vuln["description"],
        "code_snippet": vuln.get("code", ""),
    })
    print(f"\n=== {vuln['title']} ===")
    print(explanation.summary)
    print(f"\nImpact: {explanation.security_impact}")
```

SARIF parsing:

```python
from vulnscanner_llm import SARIFParser

# Parse SARIF file
parser = SARIFParser()
parsed = parser.parse_file("results.sarif")

print(f"Found {parsed.result_count} results")
print(f"Errors: {parsed.error_count}")
print(f"Warnings: {parsed.warning_count}")

# Convert to vulnerabilities
vulnerabilities = parser.to_vulnerabilities(parsed)
```
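SARIF 2.1.0 is plain JSON, so the counts the parser reports can be illustrated with only the standard library. A minimal sketch over a hand-written document (results live under `runs[*].results`, and the spec's default for a missing `level` is `"warning"`):

```python
import json

# A tiny hand-written SARIF 2.1.0 document.
sarif = {
    "version": "2.1.0",
    "runs": [{
        "tool": {"driver": {"name": "semgrep"}},
        "results": [
            {"ruleId": "sql-injection", "level": "error"},
            {"ruleId": "weak-hash", "level": "warning"},
        ],
    }],
}

def count_levels(doc):
    """Count SARIF results by level, applying the spec default of 'warning'."""
    counts = {}
    for run in doc.get("runs", []):
        for result in run.get("results", []):
            level = result.get("level", "warning")
            counts[level] = counts.get(level, 0) + 1
    return counts

print(count_levels(sarif))  # {'error': 1, 'warning': 1}
```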
```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=vulnscanner_llm --cov-report=html

# Run specific test
pytest tests/test_scanner.py -v
```

Performance:

- Scans 10,000+ files/minute
- Async architecture for parallel scanning
- Optimized SARIF parsing
- Batch LLM processing for efficiency
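The async, batched LLM processing can be sketched roughly as follows (names are hypothetical; a real explainer would await an API call where this stub sleeps):

```python
import asyncio

async def explain_one(vuln):
    # Stand-in for an async LLM API call.
    await asyncio.sleep(0)
    return f"explained:{vuln}"

async def explain_batched(vulns, batch_size=2):
    """Process findings in fixed-size batches, each batch concurrently."""
    explanations = []
    for i in range(0, len(vulns), batch_size):
        batch = vulns[i:i + batch_size]
        explanations += await asyncio.gather(*(explain_one(v) for v in batch))
    return explanations

print(asyncio.run(explain_batched(["a", "b", "c"])))
# ['explained:a', 'explained:b', 'explained:c']
```

Batching keeps concurrency bounded so a large scan does not fire thousands of simultaneous LLM requests.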
Related projects:

- ThreatIntel-GPT - Threat intelligence with GPT
- LogParser-AI - AI log analysis
- ComplianceBot - Compliance automation
- KVortex - Knowledge management
MIT License - see LICENSE file
Copyright (c) 2024 Ayi NEDJIMI
- Website: ayinedjimi-consultants.fr
- HuggingFace: huggingface.co/AYI-NEDJIMI
- GitHub: github.com/AYI-NEDJIMI
For professional inquiries, consulting, or support:
Ayi NEDJIMI Email: [email protected] Website: https://ayinedjimi-consultants.fr
Built with:
- OpenAI GPT-4 for AI capabilities
- Semgrep for security scanning
- FastAPI for API framework
- Rich for beautiful CLI output
Made with ❤️ by Ayi NEDJIMI | ayinedjimi-consultants.fr