# DEFFA-UNet: Dual Encoding Feature Filtering Attention UNet


DEFFA-UNet is a state-of-the-art deep learning model for automated retinal vessel segmentation in fundus images. This repository implements the novel architecture described in our research paper, which addresses critical challenges in medical image analysis for diagnosing ocular and cardiovascular diseases.



## Key Features

- **Dual Encoding Architecture**: Enhanced feature extraction with domain-invariant processing
- **Feature Filtering Fusion (FFF) Module**: Precise feature filtering with channel and spatial attention
- **Feature Reconstructing Fusion (FRF) Module**: Attention-guided reconstruction replacing traditional skip connections
- **JESB Data Balancing**: Jaccard-Enhanced Synthetic Balancing for addressing data imbalance
- **SOTA-CSA Augmentation**: State-of-the-art color space augmentation techniques
- **Cross-Dataset Generalization**: Superior performance across multiple retinal datasets
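The FFF module listed above is described as filtering features with channel and spatial attention. The exact design lives in the paper and repository code; purely as an illustration of that general idea (not the paper's implementation), a channel-then-spatial attention gate in PyTorch might look like:

```python
import torch
import torch.nn as nn

class FeatureFilter(nn.Module):
    """Illustrative channel + spatial attention gate.

    This is a sketch of the generic channel/spatial attention pattern,
    NOT the repository's actual FFF module; names and layer sizes here
    are assumptions for demonstration.
    """

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatial dims, re-weight channels.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: pool over channels, convolve the 2-channel map.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)  # emphasize informative channels
        pooled = torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1
        )
        return x * self.spatial_gate(pooled)  # emphasize informative locations
```

In a fusion setting, such a gate would typically be applied to the concatenated features of the two encoder branches before they enter the decoder.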

## Architecture Overview

*(Figure: DEFFA-UNet architecture diagram)*
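To convey the overall flow (two parallel encoder branches whose features are filtered, fused, and then decoded), here is a deliberately tiny skeleton. All names here are hypothetical stand-ins, not the repository's modules:

```python
import torch
import torch.nn as nn

def conv_block(cin: int, cout: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )

class DualEncoderSketch(nn.Module):
    """Toy two-branch encoder + fused decoder, for illustration only."""

    def __init__(self, cin: int = 3, width: int = 16):
        super().__init__()
        self.enc_a = conv_block(cin, width)       # first encoding branch
        self.enc_b = conv_block(cin, width)       # second encoding branch
        self.down = nn.MaxPool2d(2)
        self.fuse = conv_block(2 * width, width)  # stand-in for feature filtering fusion
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec = conv_block(width, width)       # stand-in for attention-guided decoding
        self.head = nn.Conv2d(width, 1, 1)        # per-pixel vessel probability

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fa = self.down(self.enc_a(x))
        fb = self.down(self.enc_b(x))
        fused = self.fuse(torch.cat([fa, fb], dim=1))
        return torch.sigmoid(self.head(self.dec(self.up(fused))))
```

The real model is of course deeper (multiple resolution levels, FFF at the fusion points, FRF replacing skip connections); this sketch only fixes the data flow in the reader's mind.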


## Installation

### System Requirements

| Component | Requirement |
|-----------|-------------|
| Python | 3.7+ |
| PyTorch | 1.9.0+ |
| GPU | NVIDIA GPU with CUDA (recommended) |
| RAM | 16GB+ |

### Environment Setup

1. Create the environment:

   ```shell
   conda create --name DEFFA-Unet python=3.7 -c conda-forge
   ```

2. Activate the environment:

   ```shell
   conda activate DEFFA-Unet
   ```

3. Install PyTorch (choose based on your system):

   ```shell
   # For CPU only:
   conda install pytorch torchvision torchaudio cpuonly -c pytorch

   # For CUDA (if you have an NVIDIA GPU):
   conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
   ```

4. Install other ML packages:

   ```shell
   conda install numpy opencv matplotlib scikit-learn pillow tqdm -c conda-forge
   ```

5. Install additional dependencies using pip:

   ```shell
   pip install pyyaml tensorboard wandb albumentations
   ```
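After installing, a quick sanity check confirms that PyTorch imports correctly and reports whether CUDA is visible:

```python
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```

If `CUDA available` prints `False` on a GPU machine, the CPU-only build was likely installed; re-run the CUDA install command above.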

## Dataset Information

Our model was evaluated on five public retinal vessel segmentation datasets:

| Dataset | Images | Resolution | Characteristics | Source |
|---------|--------|------------|-----------------|--------|
| DRIVE | 40 | 565×584 | Standard benchmark, consistent quality | Link |
| CHASE_DB1 | 28 | 999×960 | High resolution, challenging | Link |
| STARE | 20 | 700×605 | Includes pathological cases | Link |
| HRF | 45 | 3504×2336 | High resolution, variable quality | Link |
| IOSTAR | 30 | 1024×1024 | SLO imaging technique | Link |
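A common first preprocessing step for fundus vessel segmentation (used widely in the literature; the repository's actual pipeline may differ) is to work from the green channel, where vessels have the highest contrast, and to binarize the grayscale ground-truth maps:

```python
import numpy as np

def preprocess_fundus(rgb: np.ndarray) -> np.ndarray:
    """Extract the green channel of an (H, W, 3) uint8 fundus image and
    normalize it to zero mean / unit variance. Illustrative only."""
    green = rgb[..., 1].astype(np.float32)
    return (green - green.mean()) / (green.std() + 1e-8)

def binarize_mask(mask: np.ndarray, threshold: int = 127) -> np.ndarray:
    """Ground-truth vessel maps are stored as grayscale images; map them
    to {0, 1} labels for training and evaluation."""
    return (mask > threshold).astype(np.uint8)
```

Resizing, patch extraction, and the JESB/SOTA-CSA augmentations described above would follow these steps in a full pipeline.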

## Results & Performance

### Performance Across Datasets

| Dataset | Accuracy | AUC | DSC | IoU | MCC |
|---------|----------|-----|-----|-----|-----|
| DRIVE | 0.9701 | 0.9861 | 0.8347 | 0.7154 | 0.8079 |
| CHASE_DB1 | 0.9712 | 0.9823 | 0.8156 | 0.6891 | 0.7892 |
| STARE | 0.9689 | 0.9834 | 0.8234 | 0.7012 | 0.7945 |
| HRF | 0.9723 | 0.9845 | 0.8289 | 0.7089 | 0.8012 |
| IOSTAR | 0.9698 | 0.9829 | 0.8178 | 0.6923 | 0.7856 |
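For reference, the overlap metrics in this table follow their standard definitions and can be computed from a binary prediction and ground truth as below (the repository's evaluation script may differ in minor details; AUC additionally requires the model's probability outputs rather than thresholded masks):

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Accuracy, DSC, IoU, and MCC for binary masks with values in {0, 1}."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = int(np.sum(pred & gt))    # vessel pixels correctly detected
    tn = int(np.sum(~pred & ~gt))  # background correctly rejected
    fp = int(np.sum(pred & ~gt))   # background flagged as vessel
    fn = int(np.sum(~pred & gt))   # vessel pixels missed
    eps = 1e-8
    mcc_den = float(np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))))
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn + eps),
        "dsc": 2 * tp / (2 * tp + fp + fn + eps),
        "iou": tp / (tp + fp + fn + eps),
        "mcc": (tp * tn - fp * fn) / (mcc_den + eps),
    }
```

Note that DSC and IoU are monotonically related (IoU = DSC / (2 − DSC)), which is why their rankings across datasets agree in the table.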

### Comparison with SOTA Methods

| Method | DRIVE AUC | CHASE_DB1 AUC | STARE AUC | HRF AUC |
|--------|-----------|---------------|-----------|---------|
| U-Net | 0.9752 | 0.9781 | 0.9726 | 0.9792 |
| Attention U-Net | 0.9807 | 0.9801 | 0.9783 | 0.9812 |
| CSG-Net | 0.9821 | 0.9823 | 0.9801 | 0.9834 |
| **DEFFA-UNet** | **0.9861** | **0.9823** | **0.9834** | **0.9845** |

### Cross-Dataset Generalization

Our model demonstrates superior cross-dataset performance without fine-tuning on the target dataset:

- **DRIVE → CHASE_DB1**: AUC improvement of 3.2% over the baseline
- **STARE → HRF**: DSC improvement of 4.1% over existing methods
- **Vessel connectivity**: 15% better preservation than the traditional U-Net

## Contributing

We welcome contributions from the research community! Here's how you can participate:

### Ways to Contribute

1. **Bug Reports**: Help us identify and fix issues
2. **Feature Requests**: Suggest improvements and new capabilities
3. **Model Extensions**: Adapt the architecture to other medical imaging tasks
4. **Dataset Integration**: Add support for additional retinal vessel datasets
5. **Documentation**: Improve tutorials and documentation

### Contribution Guidelines

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/YourFeature`)
3. Commit your changes (`git commit -m 'Add some feature'`)
4. Push to the branch (`git push origin feature/YourFeature`)
5. Open a Pull Request

## Citation

If you use this code in your research, please cite our paper:

```bibtex
@article{islam2025dual,
  title={Dual encoding feature filtering generalized attention UNET for retinal vessel segmentation},
  author={Islam, Md Tauhidul and Da-Wen, Wu and Qing-Qing, Tang and Kai-Yang, Zhao and Teng, Yin and Yan-Fei, Li and Wen-Yi, Shang and Jing-Yu, Liu and Hai-Xian, Zhang},
  journal={arXiv preprint arXiv:2506.02312},
  year={2025}
}
```
