[ICLR 2026] MeanCache: From Instantaneous to Average Velocity for Accelerating Flow Matching Inference

1Data Science & Artificial Intelligence Research Institute, China Unicom,
2Unicom Data Intelligence, China Unicom,
3National Key Laboratory for Novel Software Technology, Nanjing University
(* Corresponding author.)

🎬 Demo Video

MeanCache_720p.mp4

Introduction

In Flow Matching inference, existing caching methods primarily reuse the Instantaneous Velocity or its feature-level proxies. However, we observe that instantaneous velocity often fluctuates sharply across timesteps, which leads to severe trajectory deviations and cumulative errors, especially as the cache interval grows. Inspired by MeanFlow, we propose MeanCache, which caches the Average Velocity instead: compared to the unstable instantaneous velocity, average velocity is significantly smoother and more robust over time. By shifting the caching perspective from a single "point" to an "interval," MeanCache effectively mitigates trajectory drift under high acceleration ratios.
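The point-versus-interval idea can be illustrated on a toy 1-D flow. This is a hedged sketch, not the released implementation: the velocity field `v`, the step counts, and the quadrature used to approximate the average velocity are all illustrative assumptions (in MeanCache proper the average velocity comes from a MeanFlow-style estimate, not extra model calls).

```python
import math

def v(x, t):
    """Toy instantaneous velocity field standing in for the expensive
    model call. The sin(8t) term makes it fluctuate sharply across
    timesteps, mimicking the instability described above."""
    return math.sin(8.0 * t) + 0.1 * x

def reference(x0, n_steps=4000):
    """Fine-grained Euler integration: the 'no caching' ground truth."""
    x, dt = x0, 1.0 / n_steps
    for i in range(n_steps):
        x += dt * v(x, i * dt)
    return x

def instant_cache(x0, n_steps=40, interval=4):
    """Baseline strategy: reuse the last *instantaneous* velocity
    (a single 'point') for `interval` steps."""
    x, dt, cached = x0, 1.0 / n_steps, 0.0
    for i in range(n_steps):
        if i % interval == 0:
            cached = v(x, i * dt)   # fresh "model call"
        x += dt * cached            # reuse a possibly stale velocity
    return x

def mean_cache(x0, n_steps=40, interval=4, fine=16):
    """Interval strategy: cache the *average* velocity over the whole
    upcoming interval, u(t, t+D) ~ (1/D) * integral of v dtau,
    approximated here by a fine left-point quadrature."""
    x, dt, i = x0, 1.0 / n_steps, 0
    while i < n_steps:
        t, span = i * dt, min(interval, n_steps - i)
        u = sum(v(x, t + k * span * dt / fine) for k in range(fine)) / fine
        for _ in range(span):
            x += dt * u             # hold the averaged velocity
        i += span
    return x
```

Against the fine-grained reference trajectory, holding the averaged velocity tracks the flow noticeably better than holding a stale instantaneous sample at the same cache interval, which is the trajectory-drift argument above in miniature.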

Latest News

  • [2026/02/05] Community contribution: ComfyUI-MeanCache-Z is now available, thanks to @facok!
  • [2026/02/04] Supported Z-Image and released the MeanCache vs. LeMiCa comparative study.
  • [2026/02/02] Supported Qwen-Image and released the inference code!

MeanCache vs. LeMiCa

This benchmark evaluates the performance of MeanCache against LeMiCa using the Qwen-Image-2512 model as the base.

πŸš€ Efficiency

Baseline latency (original Qwen-Image-2512): 32.8 s

| Constraint | Method | Latency | Speedup | Time Reduction |
|---|---|---|---|---|
| $B=25$ | LeMiCa | 18.83 s | 1.74x | - |
| $B=25$ | MeanCache | 17.13 s | 1.91x | 9.0% |
| $B=17$ | LeMiCa | 14.35 s | 2.29x | - |
| $B=17$ | MeanCache | 11.67 s | 2.81x | 18.7% |
| $B=10$ | LeMiCa | 10.41 s | 3.15x | - |
| $B=10$ | MeanCache | 6.95 s | 4.72x | 33.2% |

🎨 Quality

| Constraint | Method | PSNR (↑) | SSIM (↑) | LPIPS (↓) |
|---|---|---|---|---|
| $B=25$ | LeMiCa | 29.20 | 0.945 | 0.065 |
| $B=25$ | MeanCache | 29.46 | 0.944 | 0.057 |
| $B=17$ | LeMiCa | 24.31 | 0.835 | 0.176 |
| $B=17$ | MeanCache | 26.49 | 0.907 | 0.104 |
| $B=10$ | LeMiCa | 17.80 | 0.637 | 0.368 |
| $B=10$ | MeanCache | 19.44 | 0.767 | 0.237 |

Demo

Z-Image

| Model | Z-Image-base | MeanCache (B=25) | MeanCache (B=20) | MeanCache (B=15) | MeanCache (B=13) |
|---|---|---|---|---|---|
| Latency | 18.07 s | 9.15 s | 7.36 s | 5.58 s | 4.85 s |

Qwen-Image-2512

| Method | Qwen-Image-2512 | MeanCache (B=25) | MeanCache (B=17) | MeanCache (B=10) |
|---|---|---|---|---|
| Latency | 32.8 s | 17.13 s | 11.67 s | 6.95 s |
| T2I | Qwen-Image-2512 | Meancache_b25 | Meancache_b17 | Meancache_b10 |

Qwen-Image

| Method | Qwen-Image | MeanCache (B=25) | MeanCache (B=17) | MeanCache (B=10) |
|---|---|---|---|---|
| Latency | 33.13 s | 17.04 s | 11.63 s | 6.92 s |
| T2I | Qwen-Image | Meancache_b25 | Meancache_b17 | Meancache_b10 |

License

The majority of this project is released under the Apache 2.0 license as found in the LICENSE file.

πŸ“– Citation

If you find MeanCache useful in your research or applications, please consider giving us a star ⭐ and citing it with the following BibTeX entry:

@inproceedings{gao2025meancache,
  title     = {MeanCache: From Instantaneous to Average Velocity for Accelerating Flow Matching Inference},
  author    = {Huanlin Gao and Ping Chen and Fuyuan Shi and Ruijia Wu and Yantao Li and Qiang Hui and Yuren You and Ting Lu and Chao Tan and Shaoan Zhao and Zhaoxiang Liu and Fang Zhao and Kai Wang and Shiguo Lian},
  booktitle = {International Conference on Learning Representations (ICLR)},
  year      = {2026},
  url       = {https://arxiv.org/abs/2601.19961}
}
