This repository contains the official implementation of our paper: Domain-Specific Pretraining and Fine-Tuning with Contrastive Learning for Fluorescence Microscopic Image Segmentation
- Domain-specific pretraining: Vision Transformer pretrained on fluorescence microscopy images.
- Cross-image foreground-background contrastive learning: Improves semantic boundary recognition and cross-dataset generalization (a toy sketch of this objective follows the feature list below).
- State-of-the-art performance: Significant IoU and Dice gains over baselines, including on unseen biomarkers.
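The actual objective lives in the fine-tuning code; purely as an illustration of the idea, a cross-image foreground-background contrastive loss can be sketched as below, assuming dense per-pixel embeddings and ground-truth foreground masks. The function name, sampling scheme, and temperature are hypothetical and are not the paper's settings.

```python
import torch
import torch.nn.functional as F

def fg_bg_contrastive_loss(feats_a, feats_b, mask_a, mask_b, temperature=0.1):
    """Toy cross-image foreground-background contrastive loss (illustration only).

    feats_a, feats_b: (C, H, W) pixel embeddings from two different images.
    mask_a, mask_b:   (H, W) binary foreground masks.
    Foreground pixels of image A act as queries, foreground pixels of image B
    as positives, and background pixels of both images as negatives.
    """
    def gather(feats, mask, foreground):
        sel = mask.bool() if foreground else ~mask.bool()
        # (C, H*W) -> select pixels -> (N, C), L2-normalized embeddings
        return F.normalize(feats.flatten(1)[:, sel.flatten()].t(), dim=1)

    q   = gather(feats_a, mask_a, foreground=True)
    pos = gather(feats_b, mask_b, foreground=True)
    neg = torch.cat([gather(feats_a, mask_a, foreground=False),
                     gather(feats_b, mask_b, foreground=False)], dim=0)

    logits = torch.cat([q @ pos.t(), q @ neg.t()], dim=1) / temperature  # (Nq, Np+Nn)
    log_prob = F.log_softmax(logits, dim=1)
    return -log_prob[:, :pos.shape[0]].mean()  # pull queries toward cross-image positives
```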
We provide a Jupyter Notebook (`Notebook_v1`) that allows you to quickly test the model on your own fluorescence microscopy images.
- Place your fluorescence microscopy image files into the specified input directory.
- Open and run `quick_start.ipynb`.
- Follow the instructions inside the notebook to:
  - Load your image
  - Perform segmentation
  - Save the predicted mask
Within a few steps, you can generate segmentation results for your own data.
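If you prefer to run the same steps as a plain script, a minimal sketch looks like the following. It assumes the MMSegmentation 0.x inference API; the config, checkpoint, and image paths are placeholders, and the notebook remains the reference.

```python
import numpy as np
from PIL import Image
from mmseg.apis import init_segmentor, inference_segmentor

# Placeholder paths -- point these at your own config, checkpoint, and image.
config_file = 'configs/our/small_upernet_test.py'
checkpoint_file = '/path/to/checkpoint.pth'
image_file = '/path/to/input/your_fluorescence_image.png'

model = init_segmentor(config_file, checkpoint_file, device='cuda:0')  # load the fine-tuned model
result = inference_segmentor(model, image_file)                        # list with one (H, W) label map
mask = result[0].astype(np.uint8)

# Save the predicted mask (scaled to 0/255 for a binary foreground class).
Image.fromarray(mask * 255).save('/path/to/output/your_fluorescence_image_mask.png')
```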
```
FMI-ViT/
├── pretrain/       # Code for self-supervised pretraining
├── fine-tuning/    # Code for fine-tuning the model
└── README.md       # Project description and usage instructions
```
Prepare fluorescence microscopy datasets as described in the paper.
- FMI-ViT Pretrain Data and VO Data: Public access authorization is in progress.
- Cell Tracking Challenge: Download Link
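Note on layout: the upstream DINO training script loads pretraining images with torchvision's `ImageFolder`, so if the pretraining code here keeps that behavior, `--data_path` should point to a directory containing at least one class sub-folder. The folder and file names below are only an example:

```text
/path/to/dataset/
└── images/            # a single dummy "class" folder holding all pretraining images
    ├── 000001.png
    ├── 000002.png
    └── ...
```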
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python -m torch.distributed.launch --nproc_per_node=4 main_dino.py \
    --arch vit_small \
    --batch_size_per_gpu 400 \
    --data_path /path/to/dataset \
    --output_dir /path/to/save_model_dir
```
```bash
bash tools/train4.sh configs/our/small_upernet_our1.py \
    --work-dir /path/to/save_dir
```
```bash
bash tools/test.sh configs/our/small_upernet_test.py \
    /path/to/checkpoint.pth \
    --show-dir /path/to/output_visualization \
    --work-dir /path/to/output_results \
    --out /path/to/output_predictions
```
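The paper reports IoU and Dice. If you want to score saved predictions against your own ground-truth masks, the binary versions of both metrics reduce to a few lines; the helper below is a sketch for convenience and is not part of the repository.

```python
import numpy as np

def iou_and_dice(pred, gt):
    """Binary IoU and Dice between a predicted mask and a ground-truth mask."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return iou, dice
```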
You can choose to download only the pretrained teacher backbone weights for downstream tasks, or the full checkpoint containing the backbone and projection head weights for both the student and teacher networks. We also provide the pretrained teacher backbone weights converted to MMSegmentation format. Pretrained weights and fine-tuned models can be downloaded here:
| arch | params | download |
|---|---|---|
| ViT-S/16 | 21M | full ckpt / teacher backbone only / teacher backbone only (mmseg) |
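If you download the full checkpoint but only need the teacher backbone (for example to fine-tune it yourself), upstream DINO checkpoints store the teacher under a `teacher` key with `backbone.` and `head.` prefixes. Assuming this repository keeps that layout, the backbone can be split out as sketched below; the paths are placeholders.

```python
import torch

ckpt = torch.load('/path/to/full_checkpoint.pth', map_location='cpu')
teacher = ckpt['teacher']  # teacher = backbone + projection head

# Drop the DDP 'module.' prefix, discard the projection head, strip 'backbone.'.
teacher = {k.replace('module.', ''): v for k, v in teacher.items()}
backbone_only = {k.replace('backbone.', ''): v
                 for k, v in teacher.items() if not k.startswith('head.')}

torch.save(backbone_only, '/path/to/teacher_backbone_only.pth')
```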
More pretrained weights for additional architectures will be released gradually...
If you use this repository or our pretrained weights, please cite:
```bibtex
@inproceedings{yourbibkey2025,
  title={Domain-Specific Pretraining and Fine-Tuning with Contrastive Learning for Fluorescence Microscopic Image Segmentation},
  author={Wu, Yunheng and others},
  booktitle={Proceedings of ...},
  year={2025}
}
```
This repository is built upon the excellent works of:
- DINO — Pretraining
- MMSegmentation — Fine-tuning
We sincerely thank the authors for releasing their code and making this research possible.
