This project implements a real-time Facial Expression Recognition (FER) system using a Convolutional Neural Network (CNN) based on ResNet-18, trained on the FER-2013 dataset. It classifies seven basic emotions from facial images and supports both single-image prediction and real-time webcam-based emotion detection.
Features:

- Real-time emotion classification using a webcam
- Trained on FER-2013 dataset
- ResNet-18 with custom classifier head
- Mixed-precision training with PyTorch AMP
- Supports single image inference
- Evaluation metrics: Accuracy, Confusion Matrix, Classification Report
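The mixed-precision training step can be sketched as below; the function and variable names are illustrative, not the project's actual training script. `autocast` runs the forward pass in float16 where safe, and `GradScaler` rescales the loss so small float16 gradients don't underflow:

```python
import torch
import torch.nn as nn
from torch.cuda.amp import GradScaler, autocast

def train_step(model, batch, labels, optimizer, scaler, criterion, use_amp):
    optimizer.zero_grad()
    # Forward pass in mixed precision (no-op when use_amp is False)
    with autocast(enabled=use_amp):
        logits = model(batch)
        loss = criterion(logits, labels)
    # Scale the loss before backward to avoid float16 gradient underflow
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.item()

use_amp = torch.cuda.is_available()  # AMP needs a CUDA device
scaler = GradScaler(enabled=use_amp)
```

With `enabled=False`, both `autocast` and `GradScaler` pass through unchanged, so the same loop runs on CPU for debugging.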
The seven emotion classes are:

- Angry
- Disgust
- Fear
- Happy
- Neutral
- Sad
- Surprise
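The model outputs class indices, which must be mapped back to these names. A minimal mapping, assuming the classes are ordered as listed above (alphabetical, as torchvision's `ImageFolder` would produce from sorted class folders):

```python
# Class names in the order listed above; this must match the ordering
# used during training, or predictions will be mislabeled.
EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Neutral", "Sad", "Surprise"]

def index_to_emotion(scores):
    """Map a sequence of 7 class scores (logits or probabilities) to a name."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return EMOTIONS[best]
```

For example, `index_to_emotion([0.1, 0.0, 0.0, 2.3, 0.0, 0.0, 0.0])` returns `"Happy"`.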
Per-class evaluation results:

| Emotion | Precision | Recall | F1-Score | Support |
|---|---|---|---|---|
| Angry | 0.59 | 0.60 | 0.59 | 945 |
| Disgust | 0.71 | 0.62 | 0.66 | 111 |
| Fear | 0.54 | 0.52 | 0.53 | 1024 |
| Happy | 0.88 | 0.87 | 0.88 | 1774 |
| Neutral | 0.61 | 0.63 | 0.62 | 1233 |
| Sad | 0.56 | 0.55 | 0.56 | 1247 |
| Surprise | 0.79 | 0.83 | 0.81 | 831 |
- Overall accuracy: 68%
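Metrics of this form can be produced with scikit-learn once predictions are collected; a sketch with illustrative names (the project's evaluation code may differ):

```python
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Neutral", "Sad", "Surprise"]

def evaluate(y_true, y_pred):
    """Print accuracy and the per-class report for two arrays of class indices,
    and return the 7x7 confusion matrix."""
    print(f"Accuracy: {accuracy_score(y_true, y_pred):.2%}")
    print(classification_report(y_true, y_pred, target_names=EMOTIONS))
    return confusion_matrix(y_true, y_pred)
```

`classification_report` produces the precision/recall/F1/support table above; the confusion matrix can then be plotted with matplotlib.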
Requirements:

- Python >= 3.7
- PyTorch
- torchvision
- OpenCV
- scikit-learn
- tqdm
- matplotlib
- Pillow (PIL)