DeepShield: AI Ecosystem for Detecting and Defending Against Deepfakes

DeepShield is a comprehensive AI-powered system designed to detect and defend against deepfake multimedia and fake news. This ecosystem leverages cutting-edge technologies to tackle the challenges posed by manipulated visual, audio, video, and textual content.

Features

  • Deepfake Image Detection: Detect inconsistencies and manipulations in images.
  • Deepfake Voice Detection: Identify tampered audio using advanced RNN and transformer models.
  • Deepfake Video Detection: Analyze frames and audio together to identify deepfake videos.
  • Fake News Detection: Evaluate credibility using NLP techniques like sentiment analysis and entity recognition.
  • Adversarial Attack and Defense: Test model robustness with evasion attacks and build defenses using adversarial training.
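To give a flavor of the video-detection idea above (analyzing frames and audio together), here is a minimal late-fusion sketch. All names (`video_deepfake_score`, the weights, the scores) are illustrative assumptions, not the repository's actual interface:

```python
import numpy as np

def video_deepfake_score(frame_scores, audio_score, audio_weight=0.3):
    """Late-fusion sketch (illustrative, not the repo's API): combine
    per-frame fake probabilities from an image detector with a
    fake probability from a voice detector into one video-level score."""
    visual_score = float(np.mean(frame_scores))  # average over sampled frames
    return (1 - audio_weight) * visual_score + audio_weight * audio_score

# Hypothetical per-frame detector outputs plus one audio-track score
frames = [0.9, 0.8, 0.95, 0.85]
print(video_deepfake_score(frames, audio_score=0.7))  # → 0.8225
```

A real pipeline would replace the averaged frame scores with a temporal model (and possibly learn the fusion weights), but the shape of the computation is the same.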

Project Structure

deepfake-detection/
├── data/
│   ├── images/
│   ├── audio/
│   ├── videos/
│   └── fake_news/
├── models/
│   ├── image_detection/
│   ├── voice_detection/
│   ├── video_detection/
│   ├── fake_news_detection/
│   └── adversarial_defense/
├── notebooks/
│   ├── image_detection.ipynb
│   ├── voice_detection.ipynb
│   ├── video_detection.ipynb
│   ├── adversarial_attack_defense.ipynb
│   └── fake_news_detection.ipynb
├── requirements.txt
└── README.md

Installation

  1. Clone the repository:
    git clone https://github.com/HarshaBeth/deepshield-detection.git
  2. Navigate to the project directory:
    cd deepshield-detection
  3. Install the required dependencies:
    pip install -r requirements.txt

Usage

Explore interactive notebooks in the notebooks/ folder for experimentation:

  • image_detection.ipynb
  • voice_detection.ipynb
  • video_detection.ipynb
  • fake_news_detection.ipynb
  • adversarial_attack_defense.ipynb
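As a toy illustration of the kind of signal a fake-news notebook might start from, here is a rule-based credibility heuristic. It is purely illustrative; the word list, weights, and function name are assumptions, and the actual notebook's NLP techniques (sentiment analysis, entity recognition) go far beyond this:

```python
import re

# Hypothetical list of sensational trigger words (illustrative only)
SENSATIONAL = {"shocking", "unbelievable", "miracle", "exposed", "secret"}

def credibility_score(headline):
    """Toy heuristic: penalize sensational vocabulary and all-caps
    shouting, two shallow cues that often correlate with low-quality
    headlines. Score is clamped to the range [0, 1]."""
    words = re.findall(r"[A-Za-z']+", headline)
    score = 1.0
    score -= 0.2 * sum(w.lower() in SENSATIONAL for w in words)
    score -= 0.2 * sum(w.isupper() and len(w) > 1 for w in words)
    return max(score, 0.0)

print(credibility_score("SHOCKING secret miracle cure EXPOSED"))  # → 0.0
print(credibility_score("Local council approves budget"))         # → 1.0
```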

Datasets

Models

The trained models are hosted on Google Drive due to their large size; download them via the Models link.

Acknowledgments

  • Adversarial Robustness Toolbox (ART) for adversarial testing and defense.
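The evasion attacks that ART automates can be sketched without any dependency beyond NumPy. Below is a minimal Fast Gradient Sign Method (FGSM) example against a toy logistic-regression "detector"; the weights and function name are made up for illustration, and a real setup would use ART's attack classes against the trained models:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps=0.1):
    """FGSM sketch: perturb input x by eps in the direction that
    increases the binary cross-entropy loss, pushing a logistic
    model sigmoid(w.x + b) toward a wrong prediction."""
    p = sigmoid(x @ w + b)   # model's probability of class 1
    grad_x = (p - y) * w     # d(loss)/dx for binary cross-entropy
    return x + eps * np.sign(grad_x)

# Toy detector and a sample it classifies correctly as class 1
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

x_adv = fgsm_attack(x, y, w, b, eps=0.5)
print(sigmoid(x @ w + b))      # ≈ 0.82 (confident, correct)
print(sigmoid(x_adv @ w + b))  # 0.5  (confidence destroyed)
```

Adversarial training, the defense side, amounts to generating such perturbed samples during training and including them (with their true labels) in the training set.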
