The Hong Kong University of Science and Technology (HKUST)
- ☑ Our paper is now available on arXiv.
- ☑ CARE-Edit has been accepted to CVPR 2026. Code will be released soon.
Existing unified diffusion editors suffer from task interference and cannot dynamically handle conflicting conditions, leading to color bleeding, identity drift, and unpredictable behavior. We propose CARE-Edit, a unified editor that routes diffusion tokens to four specialized experts via a lightweight condition-aware router.
CARE-Edit introduces condition-aware specialized experts within the frozen DiT backbone. Given multimodal conditions, inputs are tokenized and projected into heterogeneous expert branches. The router assigns confidence scores and selects the Top-K experts for each token. Expert outputs are normalized, modulated, and fused through the Latent Mixture module, yielding denoised representations that are further refined by the Mask Repaint module.
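The routing step above can be sketched as follows. This is a minimal NumPy illustration of condition-aware Top-K routing, not the released implementation: the function names (`route_tokens`, `softmax`), the tensor shapes, and the renormalization-over-selected-experts detail are assumptions for exposition.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the expert axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def route_tokens(tokens, router_w, experts, k=2):
    """Illustrative Top-K token routing (shapes assumed for this sketch).

    tokens:   (n, d) token features
    router_w: (d, n_experts) lightweight router projection
    experts:  list of callables, each mapping (1, d) -> (1, d)
    """
    logits = tokens @ router_w                      # (n, n_experts)
    scores = softmax(logits)                        # per-expert confidence
    topk = np.argsort(scores, axis=-1)[:, -k:]      # Top-K expert indices per token
    out = np.zeros_like(tokens)
    for i, tok in enumerate(tokens):
        sel = topk[i]
        w = scores[i, sel] / scores[i, sel].sum()   # renormalize over selected experts
        for weight, e in zip(w, sel):
            out[i] += weight * experts[e](tok[None, :])[0]
    return out
```

In this sketch each token's output is a confidence-weighted mixture of only its Top-K expert branches, so unselected experts contribute nothing, which is the property that lets specialized experts avoid interfering on conflicting conditions.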
Code coming soon! Stay tuned for the full release.
If CARE-Edit is helpful for your research, please cite:
@inproceedings{wang2026careedit,
title={CARE-Edit: Condition-Aware Routing of Experts for Contextual Image Editing},
author={Yucheng Wang and Zedong Wang and Yuetong Wu and Yue Ma and Dan Xu},
booktitle={The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2026}
}

If you have any questions, please email [email protected].
Appreciate the following works for their great contributions:
- UNO: Serves as the inspiration for our project.
- OmniControl: Foundational conditioning approaches that motivate our routing design.