DeepShield is a comprehensive AI-powered system designed to detect and defend against deepfake multimedia and fake news. It leverages cutting-edge technologies to tackle the challenges posed by manipulated image, audio, video, and textual content.
- Deepfake Image Detection: Detect inconsistencies and manipulations in images.
- Deepfake Voice Detection: Identify tampered audio using advanced RNN and transformer models.
- Deepfake Video Detection: Analyze frames and audio together to identify deepfake videos.
- Fake News Detection: Evaluate credibility using NLP techniques like sentiment analysis and entity recognition.
- Adversarial Attack and Defense: Test model robustness with evasion attacks and build defenses using adversarial training.
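As a minimal illustration of the fake-news pipeline described above, the sketch below trains a TF-IDF + logistic regression text classifier on a tiny hand-made corpus. The headlines, labels, and model choice here are illustrative assumptions for the sketch, not the project's actual pipeline (which layers NLP techniques such as sentiment analysis and entity recognition on top):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus -- real training would use a dataset such as WELFake.
headlines = [
    "Scientists publish peer-reviewed study on climate trends",
    "Government releases official unemployment statistics",
    "Miracle pill cures all diseases overnight, doctors stunned",
    "Shocking secret they don't want you to know about vaccines",
]
labels = [0, 0, 1, 1]  # 0 = credible, 1 = fake (assumed labeling convention)

# TF-IDF features feeding a linear classifier; a full system would add
# sentiment and named-entity features alongside these.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(headlines, labels)

pred = clf.predict(["Officials publish annual statistics report"])
print(pred)
```

The same fit/predict pattern scales to a real corpus; only the feature extraction and model grow in sophistication.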
deepfake-detection/
├── data/
│ ├── images/
│ ├── audio/
│ ├── videos/
│ └── fake_news/
├── models/
│ ├── image_detection/
│ ├── voice_detection/
│ ├── video_detection/
│ ├── fake_news_detection/
│ └── adversarial_defense/
├── notebooks/
│ ├── image_detection.ipynb
│ ├── voice_detection.ipynb
│ ├── video_detection.ipynb
│ ├── adversarial_attack_defense.ipynb
│ └── fake_news_detection.ipynb
├── requirements.txt
└── README.md
- Clone the repository:
git clone https://github.com/HarshaBeth/deepshield-detection.git
- Navigate to the project directory:
cd deepshield-detection
- Install the required dependencies:
pip install -r requirements.txt
Explore the interactive notebooks in the notebooks/ folder for experimentation:
- image_detection.ipynb
- voice_detection.ipynb
- video_detection.ipynb
- fake_news_detection.ipynb
- adversarial_attack_defense.ipynb
- Images: 140k Real and Fake Faces
- Audio: ASVspoof 2019
- Fake news: WELFake
- Videos: Deep Fake Detection (DFD)
Because of their large size, the trained models are stored on Google Drive; you can download them from the Models link.
- Adversarial Robustness Toolbox (ART) for adversarial testing and defense.
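ART automates evasion attacks such as the Fast Gradient Sign Method (FGSM). To show the underlying idea without any framework dependency, the sketch below crafts an FGSM adversarial example against a toy logistic-regression "detector" in plain NumPy; the model, its weights, and the epsilon value are made-up assumptions for illustration only:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "detector": logistic regression with fixed, made-up weights.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict_proba(x):
    # Probability the sample is classified as "real" (label 1).
    return sigmoid(x @ w + b)

def input_gradient(x, y):
    # Gradient of binary cross-entropy loss w.r.t. the input x.
    return (predict_proba(x) - y) * w

def fgsm(x, y, eps=0.3):
    # FGSM: step in the sign of the input gradient to increase the loss.
    return x + eps * np.sign(input_gradient(x, y))

x = np.array([0.2, -0.4, 0.8])  # a sample with true label y = 1
y = 1.0
x_adv = fgsm(x, y)

# The perturbed input lowers the model's confidence in the true label.
print(predict_proba(x), predict_proba(x_adv))
```

Adversarial training, the defense strategy mentioned above, mixes such perturbed samples back into the training set so the model learns to resist them.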