Reality Check, an instance segmentation detection model.
With the latest advances in deepfake technologies and video production, it has become more challenging to distinguish between real and AI-generated images and videos. As a consequence, malicious and harmful media can now be created and disseminated online with relative ease, causing significant harm to an individual's mental health, reputation, and character. To address this issue, we created "Reality Check," an image detection model that locates instances of AI manipulation within an image to determine whether it is authentic or deepfaked, rather than relying on whole-image classification alone.
After downloading the dataset from Kaggle, we labeled instances of AI manipulation in the "fake images" folder and instances of natural human facial proportions in the "real images" folder. We then uploaded the labeled dataset to Google Colab to begin pre-processing and training the model. To get the most out of the labeled data for training, validation, and testing, we split it using a 70-20-10 ratio. After training, we tested the model against the training and validation datasets, and a summary of the results can be referenced below.
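The 70-20-10 split described above can be sketched in plain Python; the function name, seed, and file names below are illustrative assumptions, not the project's actual code:

```python
import random

def split_dataset(paths, seed=42):
    """Shuffle labeled image paths and split them 70/20/10 into
    train/validation/test sets (the ratio used by Reality Check).

    This is a minimal sketch: the seed and helper name are
    assumptions for illustration only.
    """
    paths = list(paths)
    random.Random(seed).shuffle(paths)  # reproducible shuffle
    n = len(paths)
    n_train = int(n * 0.7)
    n_val = int(n * 0.2)
    train = paths[:n_train]
    val = paths[n_train:n_train + n_val]
    test = paths[n_train + n_val:]
    return train, val, test

# Example with 100 hypothetical image files:
train, val, test = split_dataset([f"img_{i}.jpg" for i in range(100)])
# 70 training, 20 validation, 10 test images
```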
In our tests, the model can perform segmentation, analysis, and classification of an image in as little as 2.0 ms, with 75-80% accuracy.
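Per-image latency figures like the one above can be measured with a small timing harness. The sketch below is illustrative only: `time_inference` and the stub classifier are assumptions standing in for the trained Reality Check model, which is loaded separately.

```python
import time

def time_inference(model_fn, images):
    """Run model_fn over each image and return the predictions plus
    the average per-image latency in milliseconds.

    model_fn is any callable taking one image and returning a
    (label, confidence) pair -- here a stub, not the real model.
    """
    start = time.perf_counter()
    results = [model_fn(img) for img in images]
    elapsed_ms = (time.perf_counter() - start) * 1000 / len(images)
    return results, elapsed_ms

# Stub classifier standing in for the trained detector:
stub = lambda img: ("real", 0.78)
results, ms_per_image = time_inference(stub, [object()] * 50)
```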
Dataset - https://www.kaggle.com/datasets/manjilkarki/deepfake-and-real-images
- Python
- Google Colaboratory
- Heroku
- Roboflow
- OpenCV
- Flask
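Since the stack above includes Flask (and Heroku for hosting), the model can be exposed behind a small HTTP endpoint. The route name, response fields, and `classify_image` placeholder below are assumptions for illustration, not the project's actual service:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def classify_image(data: bytes) -> dict:
    # Placeholder for the trained Reality Check model; the real app
    # would decode the image bytes and run the detector on them.
    return {"label": "real", "confidence": 0.78}

@app.route("/predict", methods=["POST"])
def predict():
    # Expect raw image bytes in the request body and return JSON.
    result = classify_image(request.get_data())
    return jsonify(result)
```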
This project was developed by:
- Mark-Anthony Delva ( @MrkAnthony )
- Nmesoma Duru ( @nmesosphere )
- Micheal Johnson ( @SolaMike )
