DEFFA-UNet is a state-of-the-art deep learning model for automated retinal vessel segmentation in fundus images. This repository implements the architecture described in our research paper, which addresses key challenges in retinal vessel segmentation, a task that supports the diagnosis of ocular and cardiovascular diseases.
- Key Features
- Architecture Overview
- Installation
- Dataset Information
- Results & Performance
- Contributing
- Citation
- Dual Encoding Architecture: Enhanced feature extraction with domain-invariant processing
- Feature Filtering Fusion (FFF) Module: Precise feature filtering with channel and spatial attention
- Feature Reconstructing Fusion (FRF) Module: Attention-guided reconstruction replacing traditional skip connections
- JESB Data Balancing: Jaccard-Enhanced Synthetic Balancing for addressing data imbalance
- SOTA-CSA Augmentation: State-of-the-art color space augmentation techniques
- Cross-Dataset Generalization: Superior performance across multiple retinal datasets
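The channel and spatial attention used by the FFF module follows a widely used gating pattern. The sketch below is a generic illustration, not the paper's exact module: `w1`, `w2`, `k_avg`, and `k_max` are hypothetical learned parameters, and the structure shown is the common squeeze-and-excitation / channel-pooling style of attention.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """feat: (C, H, W). Squeeze-and-excitation style channel gating:
    global-average-pool each channel, pass the result through a tiny
    MLP, and rescale channels by the resulting [0, 1] weights."""
    squeeze = feat.mean(axis=(1, 2))                    # (C,) pooled descriptor
    gate = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))  # (C,) per-channel gate
    return feat * gate[:, None, None]

def spatial_attention(feat, k_avg, k_max):
    """feat: (C, H, W). Gate each pixel using channel-pooled maps,
    so spatially informative regions are emphasized."""
    gate = sigmoid(k_avg * feat.mean(axis=0) + k_max * feat.max(axis=0))  # (H, W)
    return feat * gate[None, :, :]
```

Applying the channel gate followed by the spatial gate filters features along both axes, which is the general idea behind attention-based feature filtering in the fusion modules.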
| Component | Requirement |
|---|---|
| Python | 3.7+ |
| PyTorch | 1.9.0+ |
| GPU | NVIDIA GPU with CUDA (recommended) |
| RAM | 16GB+ |
- Create environment

  ```bash
  conda create --name DEFFA-Unet python=3.7 -c conda-forge
  ```

- Activate environment

  ```bash
  conda activate DEFFA-Unet
  ```

- Install PyTorch (choose based on your system)

  ```bash
  # For CPU only:
  conda install pytorch torchvision torchaudio cpuonly -c pytorch

  # For CUDA (if you have an NVIDIA GPU):
  conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
  ```

- Install other ML packages

  ```bash
  conda install numpy opencv matplotlib scikit-learn pillow tqdm -c conda-forge
  ```

- Install additional dependencies using pip

  ```bash
  pip install pyyaml tensorboard wandb albumentations
  ```

Our model was evaluated on five public retinal vessel segmentation datasets:
| Dataset | Images | Resolution | Characteristics | Source |
|---|---|---|---|---|
| DRIVE | 40 | 565×584 | Standard benchmark, consistent quality | Link |
| CHASE_DB1 | 28 | 999×960 | High resolution, challenging | Link |
| STARE | 20 | 700×605 | Includes pathological cases | Link |
| HRF | 45 | 3504×2336 | High-resolution, variable quality | Link |
| IOSTAR | 30 | 1024×1024 | SLO imaging technique | Link |
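The datasets above range from 565×584 up to 3504×2336 pixels, so a common way to feed them to a single network is fixed-size patch extraction. The snippet below is a general preprocessing sketch, not necessarily the exact pipeline used in the paper; the patch size of 48 is an illustrative choice.

```python
import numpy as np

def extract_patches(image, patch_size=48, stride=48):
    """Split a 2-D image into (patch_size x patch_size) tiles.
    The image is reflect-padded so its dimensions become multiples
    of the stride, ensuring full coverage of the border."""
    h, w = image.shape
    pad_h = (-h) % stride
    pad_w = (-w) % stride
    padded = np.pad(image, ((0, pad_h), (0, pad_w)), mode="reflect")
    patches = []
    for y in range(0, padded.shape[0] - patch_size + 1, stride):
        for x in range(0, padded.shape[1] - patch_size + 1, stride):
            patches.append(padded[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)
```

With stride equal to the patch size the tiles are non-overlapping; using a smaller stride at inference time and averaging the overlapping predictions is a common way to reduce seam artifacts.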
| Dataset | Accuracy | AUC | DSC | IoU | MCC |
|---|---|---|---|---|---|
| DRIVE | 0.9701 | 0.9861 | 0.8347 | 0.7154 | 0.8079 |
| CHASE_DB1 | 0.9712 | 0.9823 | 0.8156 | 0.6891 | 0.7892 |
| STARE | 0.9689 | 0.9834 | 0.8234 | 0.7012 | 0.7945 |
| HRF | 0.9723 | 0.9845 | 0.8289 | 0.7089 | 0.8012 |
| IOSTAR | 0.9698 | 0.9829 | 0.8178 | 0.6923 | 0.7856 |
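The Accuracy, DSC, IoU, and MCC values above are standard pixel-wise metrics on binary vessel masks. They can be reproduced from the confusion-matrix counts as follows; this is a minimal reference implementation, independent of the repository's evaluation code.

```python
import math

def binary_metrics(pred, truth):
    """Pixel-wise accuracy, DSC (Dice), IoU (Jaccard), and MCC for
    two flat binary masks of equal length (1 = vessel, 0 = background)."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    acc = (tp + tn) / len(pred)
    dsc = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 1.0
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"accuracy": acc, "dsc": dsc, "iou": iou, "mcc": mcc}
```

Note that DSC and IoU are monotonically related (DSC = 2·IoU / (1 + IoU)), while MCC also accounts for true negatives, which matters for vessel masks where background pixels dominate.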
| Method | DRIVE AUC | CHASE_DB1 AUC | STARE AUC | HRF AUC |
|---|---|---|---|---|
| U-Net | 0.9752 | 0.9781 | 0.9726 | 0.9792 |
| Attention U-Net | 0.9807 | 0.9801 | 0.9783 | 0.9812 |
| CSG-Net | 0.9821 | 0.9823 | 0.9801 | 0.9834 |
| DEFFA-UNet | 0.9861 | 0.9823 | 0.9834 | 0.9845 |
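AUC in these tables is the area under the ROC curve over per-pixel vessel probabilities. For reference, it equals the Mann-Whitney pair statistic, which can be computed directly; the O(n²) sketch below is for illustration only, and real evaluations would use an optimized routine such as scikit-learn's `roc_auc_score`.

```python
def auc_score(scores, labels):
    """Rank-based AUC: the probability that a randomly chosen vessel
    pixel receives a higher score than a randomly chosen background
    pixel, with ties counted as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need both positive and negative labels")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means every vessel pixel outscores every background pixel; 0.5 is chance level, which is why the values in the 0.97 to 0.99 range above indicate near-perfect pixel ranking.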
Our model demonstrates superior cross-dataset performance without requiring target dataset fine-tuning:
- DRIVE → CHASE_DB1: AUC improvement of 3.2% over baseline
- STARE → HRF: DSC improvement of 4.1% over existing methods
- Consistent vessel connectivity: 15% better preservation compared to traditional U-Net
We welcome contributions from the research community! Here's how you can participate:
- Bug Reports: Help us identify and fix issues
- Feature Requests: Suggest improvements and new capabilities
- Model Extensions: Adapt the architecture for other medical imaging tasks
- Dataset Integration: Add support for additional retinal vessel datasets
- Documentation: Improve tutorials and documentation
- Fork the repository
- Create your feature branch (`git checkout -b feature/YourFeature`)
- Commit your changes (`git commit -m 'Add some feature'`)
- Push to the branch (`git push origin feature/YourFeature`)
- Open a Pull Request
If you use this code in your research, please cite our paper:
@article{islam2025dual,
title={Dual encoding feature filtering generalized attention UNET for retinal vessel segmentation},
author={Islam, Md Tauhidul and Da-Wen, Wu and Qing-Qing, Tang and Kai-Yang, Zhao and Teng, Yin and Yan-Fei, Li and Wen-Yi, Shang and Jing-Yu, Liu and Hai-Xian, Zhang},
journal={arXiv preprint arXiv:2506.02312},
year={2025}
}
