Fuzzy Adaptive Rebalancing and Contrastive Uncertainty Learning for Semi-Supervised Semantic Segmentation
Ebenezer Tarubinga, Jenifer Kalafatovich, Seong-Whan Lee Department of Artificial Intelligence, Korea University, Seoul, Korea
Published in Neural Networks, 2026.
FARCLUSS is a unified framework for semi-supervised semantic segmentation that transforms prediction uncertainty into a learning asset through four components:
- Fuzzy Pseudo-Labeling — Preserves soft class distributions from top-K teacher predictions instead of discarding uncertain pixels
- Uncertainty-Aware Dynamic Weighting — Modulates pixel-wise loss contributions via normalized entropy
- Adaptive Class Rebalancing — Dynamically adjusts loss weights based on per-batch pseudo-label frequencies
- Lightweight Contrastive Regularization — Prototype-based contrastive loss using class centroids
Built on DeepLabV3+ with ResNet-50/101 backbones and a mean-teacher EMA architecture.
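The first two components can be sketched in a few lines. This is a minimal illustration of top-K fuzzy pseudo-labels and entropy-based pixel weighting, not the repository's API; the function names are ours, and only `k=2` follows the paper's default.

```python
import math
import torch

def fuzzy_pseudo_labels(teacher_logits, k=2):
    """Keep the top-K class probabilities per pixel and renormalize them
    into a soft target, instead of collapsing to a hard argmax label.

    teacher_logits: (B, C, H, W) raw teacher outputs.
    """
    probs = teacher_logits.softmax(dim=1)           # (B, C, H, W)
    topk_vals, topk_idx = probs.topk(k, dim=1)      # (B, K, H, W)
    soft = torch.zeros_like(probs).scatter_(1, topk_idx, topk_vals)
    return soft / soft.sum(dim=1, keepdim=True)     # renormalize over the K classes

def entropy_weights(probs, eps=1e-8):
    """Pixel-wise weights from normalized entropy: confident pixels
    (low entropy) keep weight near 1; ambiguous pixels are down-weighted.

    probs: (B, C, H, W) teacher class probabilities.
    """
    num_classes = probs.size(1)
    ent = -(probs * (probs + eps).log()).sum(dim=1)            # (B, H, W)
    return (1.0 - ent / math.log(num_classes)).clamp(0.0, 1.0)
```

The resulting weights multiply the per-pixel unsupervised loss, so uncertain regions still contribute gradient signal instead of being dropped by a hard confidence threshold.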
```bash
pip install -r requirements.txt
```

Requires PyTorch >= 1.12 and torchvision >= 0.13.
Pascal VOC 2012

```
data/VOCdevkit/VOC2012/
├── JPEGImages/
├── SegmentationClass/
├── SegmentationClassAug/    # for Blended/SBD setup
└── ImageSets/Segmentation/
    ├── train.txt
    ├── val.txt
    └── trainaug.txt         # for Blended/SBD setup
```
Cityscapes

```
data/cityscapes/
├── leftImg8bit/
│   ├── train/
│   └── val/
└── gtFine/
    ├── train/
    └── val/
```
```bash
# Pascal VOC Classic, 1/8 labeled, ResNet-101
python train.py --dataset pascal_voc --split classic --labeled_ratio 0.125 \
    --backbone resnet101 --data_root ./data/VOCdevkit/VOC2012

# Pascal VOC Blended (SBD-augmented), 1/4 labeled
python train.py --dataset pascal_voc --split blended --labeled_ratio 0.25 \
    --backbone resnet101 --data_root ./data/VOCdevkit/VOC2012

# Cityscapes, 1/8 labeled
python train.py --dataset cityscapes --labeled_ratio 0.125 \
    --backbone resnet101 --data_root ./data/cityscapes

# Resume from checkpoint
python train.py --dataset pascal_voc --resume checkpoints/farcluss_best.pth
```

| Ratio | Flag |
|---|---|
| 1/16 | --labeled_ratio 0.0625 |
| 1/8 | --labeled_ratio 0.125 |
| 1/4 | --labeled_ratio 0.25 |
| 1/2 | --labeled_ratio 0.5 |
```bash
python evaluate.py --checkpoint checkpoints/farcluss_best.pth \
    --dataset pascal_voc --data_root ./data/VOCdevkit/VOC2012
```

| Backbone | 1/16 | 1/8 | 1/4 | 1/2 |
|---|---|---|---|---|
| ResNet-50 | 72.90 | 76.18 | 76.50 | 77.69 |
| ResNet-101 | 76.4 | 78.2 | 79.0 | 80.3 |

| Backbone | 1/16 | 1/8 | 1/4 | 1/2 |
|---|---|---|---|---|
| ResNet-50 | 75.20 | 77.50 | 78.00 | 79.60 |
| ResNet-101 | 77.2 | 78.8 | 80.0 | 81.0 |
```
├── train.py                     # Training entry point
├── evaluate.py                  # Evaluation script
├── requirements.txt
└── farcluss/
    ├── config.py                # Experiment configurations
    ├── trainer.py               # Training loop (Algorithm 1)
    ├── model/
    │   ├── encoder.py           # ResNet-50/101 with dilated convolutions
    │   ├── aspp.py              # Atrous Spatial Pyramid Pooling
    │   ├── deeplabv3plus.py     # DeepLabV3+ architecture
    │   └── projection_head.py   # 128-D projection for contrastive loss
    ├── losses/
    │   └── farcluss_losses.py   # All loss components (Eq. 2-12)
    ├── dataset/
    │   ├── augmentation.py      # Weak & strong augmentations
    │   ├── pascal_voc.py        # Pascal VOC 2012 (Classic & Blended)
    │   └── cityscapes.py        # Cityscapes dataset
    └── utils/
        ├── ema.py               # EMA teacher update
        ├── lr_scheduler.py      # Polynomial LR decay
        └── metrics.py           # mIoU evaluation
```
| Parameter | Value |
|---|---|
| EMA momentum (α) | 0.99 |
| Top-K for fuzzy labels | 2 |
| λ_u (unsupervised weight) | 0.5 |
| λ_c (contrastive weight) | 0.1 |
| Contrastive confidence threshold | 0.5 |
| Projection dimension | 128 |
| Optimizer | SGD (momentum=0.9, weight_decay=1e-4) |
| LR schedule | Polynomial decay (power=0.9) |
| Initial LR | 0.001 |
| Pascal VOC epochs | 80 |
| Cityscapes epochs | 240 |
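The prototype-based contrastive term can be sketched as follows, using the 128-D projection features and the 0.5 confidence threshold from the table above. This is a hedged illustration: the function name, the temperature `tau`, and the cross-entropy-over-prototypes formulation are our assumptions, not necessarily the paper's exact Eq. 2-12.

```python
import torch
import torch.nn.functional as F

def prototype_contrastive_loss(embeddings, pseudo_labels, confidence,
                               num_classes, tau=0.1, conf_thresh=0.5):
    """Pull each confident pixel embedding toward its class centroid
    (prototype) and away from other centroids, via a softmax over
    cosine similarities.

    embeddings:    (N, D) pixel features from the projection head
    pseudo_labels: (N,)   hard pseudo-label of each pixel
    confidence:    (N,)   max teacher probability per pixel
    """
    mask = confidence > conf_thresh
    if mask.sum() == 0:
        return embeddings.new_zeros(())          # no confident pixels this batch
    z = F.normalize(embeddings[mask], dim=1)
    y = pseudo_labels[mask]
    # Class centroids (prototypes) averaged over confident pixels.
    protos = torch.zeros(num_classes, z.size(1), device=z.device)
    protos.index_add_(0, y, z)
    counts = torch.bincount(y, minlength=num_classes).clamp(min=1).unsqueeze(1)
    protos = F.normalize(protos / counts, dim=1)
    logits = z @ protos.t() / tau                # (M, num_classes)
    return F.cross_entropy(logits, y)
```

In training this term would be scaled by λ_c = 0.1 and added to the supervised and unsupervised losses.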
```bibtex
@article{tarubinga2026farcluss,
  title={FARCLUSS: Fuzzy Adaptive Rebalancing and Contrastive Uncertainty Learning for Semi-Supervised Semantic Segmentation},
  author={Tarubinga, Ebenezer and Kalafatovich, Jenifer and Lee, Seong-Whan},
  journal={Neural Networks},
  year={2026}
}
```

This research was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant, funded by the Korea government (MSIT).
