# [NeurIPS 2025 Spotlight] ReCon: Region-Controllable Data Augmentation with Rectification and Alignment for Object Detection
This is the official repository of ReCon: Region-Controllable Data Augmentation with Rectification and Alignment for Object Detection.
**Abstract:** The scale and quality of datasets are crucial for training robust perception models. However, obtaining large-scale annotated data is both costly and time-consuming. Generative models have emerged as a powerful tool for data augmentation by synthesizing samples that adhere to desired distributions. Yet current generative approaches often rely on complex post-processing or extensive fine-tuning on massive datasets to achieve satisfactory results, and they remain prone to content–position mismatches and semantic leakage. To overcome these limitations, we introduce ReCon, a novel augmentation framework that enhances the capacity of structure-controllable generative models for object detection. ReCon integrates region-guided rectification into the diffusion sampling process, using feedback from a pre-trained perception model to correct misgenerated regions during sampling. We further propose region-aligned cross-attention to enforce spatial–semantic alignment between image regions and their textual cues, thereby improving both semantic consistency and overall image fidelity. Extensive experiments demonstrate that ReCon substantially improves the quality and trainability of generated data, achieving consistent performance gains across various datasets, backbone architectures, and data scales.
- We propose ReCon, a novel region-controllable data augmentation method that enhances the regional control capabilities of existing models without requiring additional training.
- We introduce region-guided rectification and region-aligned cross-attention mechanisms to improve control ability during the diffusion sampling process.
- Extensive experiments show that ReCon generates high-quality augmented data and substantially improves detection performance compared to both traditional augmentation techniques and current generative approaches.
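As a rough illustration of the first mechanism (a toy sketch, not the paper's implementation), region-guided rectification can be pictured as a denoising loop in which a perception model scores each controlled region after every step, and low-confidence regions are re-noised so that later steps regenerate them. The names `denoise_fn` and `detector_score` below are hypothetical placeholders for a diffusion step and a detector-feedback callback:

```python
import numpy as np

def sample_with_rectification(x, region_mask, denoise_fn, detector_score,
                              n_steps=50, thresh=0.5, rng=None):
    """Toy denoising loop with region-guided rectification.

    x              -- current (noisy) image array
    region_mask    -- boolean mask marking the controlled region
    denoise_fn     -- one denoising step: array -> array (placeholder)
    detector_score -- perception-model confidence for the region (placeholder)
    """
    rng = rng or np.random.default_rng(0)
    for _ in range(n_steps):
        x = denoise_fn(x)
        # Feedback: if the detector is unconvinced the target object is
        # present in its region, re-noise only that region, leaving the
        # rest of the image intact so later steps can regenerate it.
        if detector_score(x, region_mask) < thresh:
            x = np.where(region_mask, rng.standard_normal(x.shape), x)
    return x
```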
Please refer to the paper for more technical details.
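The second mechanism, region-aligned cross-attention, can likewise be sketched as a mask that restricts which text tokens each image token may attend to (a simplified illustration; the argument layout and coordinate convention here are assumptions, not the repository's API):

```python
import numpy as np

def region_aligned_mask(h, w, boxes, phrase_spans, n_text_tokens):
    """Toy region-aligned cross-attention mask.

    Returns a boolean (h*w, n_text_tokens) array: image token i may attend
    to text token j only if i lies inside the box whose phrase covers j.
    Image tokens outside every box keep full attention to the prompt.
    boxes        -- list of (x0, y0, x1, y1) in latent-grid coordinates
    phrase_spans -- list of (start, end) text-token indices, one per box
    """
    mask = np.zeros((h * w, n_text_tokens), dtype=bool)
    covered = np.zeros(h * w, dtype=bool)
    for (x0, y0, x1, y1), (s, e) in zip(boxes, phrase_spans):
        inside = np.zeros((h, w), dtype=bool)
        inside[y0:y1, x0:x1] = True
        inside = inside.reshape(-1)
        mask[inside, s:e] = True   # region tokens see only their own phrase
        covered |= inside
    mask[~covered, :] = True       # background tokens see the whole prompt
    return mask
```

Restricting each region's image tokens to its own phrase is what prevents a phrase's semantics from leaking into other regions.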
## Installation

Create the environment:

```shell
conda create -n recon python=3.10.6 -y
conda activate recon
```

Install the necessary Python libraries:

```shell
pip install -r requirements.txt
```

## Data Preparation
The main dataset used in our work is COCO; it should be placed under `./data/` with the following structure:

```
./data
└── coco
    ├── annotations
    ├── train2017
    └── val2017
```

## Model Preparation
Download the Grounding-DINO model `IDEA-Research/grounding-dino-tiny` and the SAM model weights `sam_vit_h_4b8939.pth` into the `./ckpts/` directory. The directory structure should look like this:

```
./ckpts
├── grounding-dino-tiny
└── sam_vit_h_4b8939.pth
```

### 1) Generate Dataset
- Taking the ControlNet-ReCon model as an example, generate data with:

```shell
python generate.py controlnet_recon --dataset_config_name configs/data/coco_512x512.py
```

- Generated data will be saved in `outputs/`.
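Assuming the generated annotations follow COCO's json layout (the exact filename under `outputs/` depends on your run configuration), a quick sanity check of the generated dataset can be done with the standard library alone. The helper name below is hypothetical:

```python
import json
from pathlib import Path

def summarize_annotations(ann_path):
    """Return basic counts for a COCO-style annotation file.

    Assumes the generated data follows COCO's json layout with
    top-level "images", "annotations", and "categories" lists.
    """
    data = json.loads(Path(ann_path).read_text())
    return {
        "images": len(data.get("images", [])),
        "annotations": len(data.get("annotations", [])),
        "categories": len(data.get("categories", [])),
    }
```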
### 2) Training and Evaluation
- After data augmentation, use the MMDetection framework to train and evaluate object detectors on the generated dataset.
## Citation

If you find our work helpful in your research, please cite:
```bibtex
@article{zhu2025recon,
  title={ReCon: Region-Controllable Data Augmentation with Rectification and Alignment for Object Detection},
  author={Haowei Zhu and Tianxiang Pan and Rui Qin and Jun-Hai Yong and Bin Wang},
  year={2025},
  eprint={2510.15783},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2510.15783},
}
```
