
Efficient-FIQA


πŸ† πŸ₯‡ Winner Solution for ICCV VQualA 2025 Face Image Quality Assessment Challenge

Official Implementation of "Efficient Face Image Quality Assessment via Self-training and Knowledge Distillation"

πŸ“– Paper | πŸ€— Demo | πŸ“Š Challenge Results



🎯 Introduction

Face Image Quality Assessment (FIQA) is crucial for various face-related applications such as face recognition, face detection, and biometric systems. While significant progress has been made in FIQA research, the computational complexity remains a key bottleneck for real-world deployment.

This repository presents Efficient-FIQA, a novel approach that achieves state-of-the-art performance with extremely low computational overhead through:

  • πŸ”¬ Self-training Strategy: Enhances teacher model capacity using pseudo-labeled data
  • πŸŽ“ Knowledge Distillation: Transfers knowledge from powerful teacher to lightweight student
  • ⚑ Efficient Architecture: Student model achieves comparable performance with minimal computational cost
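For quality-score regression, knowledge distillation reduces to fitting the teacher's predicted score alongside the human label. The sketch below illustrates that idea in plain Python; the weight `alpha` and the squared-error form are illustrative assumptions, not the repository's exact loss.

```python
def distillation_loss(student_pred, teacher_pred, gt_score, alpha=0.5):
    """Blend ground-truth supervision with a teacher-matching term.

    alpha weighs the supervised (hard-label) term against the
    distillation (soft-label) term. Squared error is used for both
    terms here as an illustrative choice.
    """
    hard = (student_pred - gt_score) ** 2      # fit the human label
    soft = (student_pred - teacher_pred) ** 2  # mimic the teacher's score
    return alpha * hard + (1.0 - alpha) * soft
```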

πŸ† Key Achievements

  • πŸ₯‡ 1st Place in ICCV VQualA 2025 FIQA Challenge

πŸ† Challenge Results

| Rank | Team | Score | GFLOPs | Params (M) |
|:----:|------|:-----:|:------:|:----------:|
| πŸ₯‡ 1 | ECNU-SJTU VQA Team (Ours) | 0.9664 | 0.3313 | 1.1796 |
| 2 | MediaForensics | 0.9624 | 0.4687 | 1.5189 |
| 3 | Next | 0.9583 | 0.4533 | 1.2224 |
| 4 | ATHENAFace | 0.9566 | 0.4985 | 2.0916 |
| 5 | NJUPT-IQA-Group | 0.9547 | 0.4860 | 3.7171 |
| 6 | ECNU VIS Lab | 0.9406 | 0.4923 | 3.2805 |

Score = (SRCC + PLCC) / 2

For more results on the ICCV VQualA 2025 FIQA Challenge, please refer to the challenge report.
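The challenge score averages Spearman rank-order correlation (SRCC) and Pearson linear correlation (PLCC) between predicted and ground-truth quality scores. A dependency-free sketch of the metric (in practice you would use `scipy.stats.spearmanr` and `scipy.stats.pearsonr`):

```python
def _ranks(xs):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def plcc(x, y):
    """Pearson linear correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def srcc(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    return plcc(_ranks(x), _ranks(y))

def challenge_score(pred, gt):
    return (srcc(pred, gt) + plcc(pred, gt)) / 2
```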


πŸ“¦ Installation

Requirements

  • Python >= 3.9
  • PyTorch >= 1.13
  • CUDA >= 11.0 (for GPU training)

Environment Setup

# Create and activate conda environment
conda create -n EfficientFIQA python=3.9
conda activate EfficientFIQA

# Install dependencies
pip install -r requirements.txt

πŸ“Š Dataset

Download Links

| File | Google Drive | Baidu Yun |
|------|--------------|-----------|
| Training Dataset | Download | Download (Password: 3epx) |
| Ground Truth Scores | Download | Download (Password: bhat) |
| Validation Dataset | Download | Download (Password: jsdi) |

Dataset Structure

dataset/
β”œβ”€β”€ train/
β”‚   β”œβ”€β”€ image1.jpg
β”‚   β”œβ”€β”€ image2.jpg
β”‚   └── ...
β”œβ”€β”€ val/
β”‚   β”œβ”€β”€ image1.jpg
β”‚   β”œβ”€β”€ image2.jpg
β”‚   └── ...
└── train.csv  # Format: image_name,score
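`train.csv` pairs each image name with its quality score. A minimal parser for this format might look like the sketch below; the optional header row is an assumption, so adjust if your file has none.

```python
import csv

def load_annotations(csv_path):
    """Parse an `image_name,score` file into (name, float score) pairs.

    Skips an optional `image_name,score` header row if present.
    """
    pairs = []
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            if not row or row[0] == "image_name":
                continue
            pairs.append((row[0], float(row[1])))
    return pairs
```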

πŸ”§ Training

Step 1: Train Teacher Model

  1. Configure paths in config_SwinB.py:
# Data paths
data_dir: str = '/path/to/your/training/images'
csv_path: str = 'data_file/train.csv'

# Model save path
model_save_dir: str = '/path/to/save/teacher/model'
  2. Start training:
python train_teacher_model.py

Step 2: Train Student Model

The teacher model is first used to generate pseudo-labels for unlabeled images to enhance training data. Since we cannot provide the original unlabeled images due to copyright restrictions, we use GFIQA-20K as a representative example dataset.

  1. Generate pseudo-labels using teacher model:
# Configure in test_unlabeled_images.py
image_dir = '/path/to/GFIQA/images'
output_csv = 'data_file/gfiqa_results.csv'
python test_unlabeled_images.py
  2. Configure student training in config_Edgenet.py:
# Data paths
data_dir: str = '/path/to/original/training/images'
gfiqa_data_dir: str = '/path/to/GFIQA/images'
csv_path: str = 'data_file/train.csv'
gfiqa_csv_path: str = 'data_file/gfiqa_results.csv'

# Model save path
model_save_dir: str = '/path/to/save/student/model'
  3. Start student training:
python train_student_model.py
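The pseudo-labeling step above can be sketched as follows. Here `teacher_fn` is a hypothetical stand-in for the teacher model's forward pass (any callable mapping an image path to a float score); the output matches the `image_name,score` format that student training consumes.

```python
import csv

def write_pseudo_labels(image_paths, teacher_fn, output_csv):
    """Score each unlabeled image with the teacher and save CSV rows.

    teacher_fn is a placeholder for the real teacher model's inference;
    scores are written with four decimal places.
    """
    with open(output_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image_name", "score"])
        for path in image_paths:
            writer.writerow([path, f"{teacher_fn(path):.4f}"])
```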

πŸ§ͺ Testing

Pre-trained Models

Test on your images

python test.py \
    --model_name FIQA_EdgeNeXt_XXS \
    --model_weights_file ckpts/EdgeNeXt_XXS_checkpoint.pt \
    --image_file your_image.jpg \
    --image_size 352 \
    --gpu_ids 0

Usage Examples

# Test with student model (recommended)
python test.py \
    --model_name FIQA_EdgeNeXt_XXS \
    --model_weights_file ckpts/EdgeNeXt_XXS_checkpoint.pt \
    --image_file demo_images/z06399.png \
    --image_size 352 \
    --gpu_ids 0

# Test with teacher model
python test.py \
    --model_name FIQA_Swin_B \
    --model_weights_file ckpts/Swin_B_plus_checkpoint.pt \
    --image_file demo_images/z06399.png \
    --image_size 448 \
    --gpu_ids 0

# CPU inference
python test.py \
    --model_name FIQA_EdgeNeXt_XXS \
    --model_weights_file ckpts/EdgeNeXt_XXS_checkpoint.pt \
    --image_file demo_images/z06399.png \
    --image_size 352 \
    --gpu_ids cpu

Command Line Options

| Option | Description | Default |
|--------|-------------|---------|
| `--model_name` | Model architecture (`FIQA_EdgeNeXt_XXS` or `FIQA_Swin_B`) | - |
| `--model_weights_file` | Path to model weights | - |
| `--image_size` | Input image size (352 for EdgeNeXt, 448 for Swin-B) | - |
| `--image_file` | Path to input image | - |
| `--gpu_ids` | GPU IDs or `"cpu"` | `"0"` |

🌐 Online Demo

Try our online demo on Hugging Face Spaces.


πŸ“š Citation

If you find this work useful for your research, please cite our paper:

@inproceedings{sun2025efficient,
  title={Efficient Face Image Quality Assessment via Self-training and Knowledge Distillation},
  author={Sun, Wei and Zhang, Weixia and Cao, Linhan and Jia, Jun and Zhu, Xiangyang and Zhu, Dandan and Min, Xiongkuo and Zhai, Guangtao},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops},
  pages={1-9},
  year={2025}
}

⭐ Star this repository if you find it helpful!
