
🚀 LinearSR: Unlocking Linear Attention for Stable and Efficient Image Super-Resolution

ICLR 2026


This repository provides the official inference code and model checkpoints for LinearSR.


LinearSR restores fine details while keeping linear-time and linear-FLOPs scaling, unlike quadratic vanilla attention.
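The linear-time claim rests on the standard linear-attention identity: replacing the softmax kernel with a feature map `phi` lets the product be regrouped as `phi(Q) @ (phi(K)^T V)`, avoiding the N×N attention matrix. The NumPy sketch below is ours, not the repository's code, and the ELU+1 feature map is an assumption chosen for illustration; it shows that the O(N) grouping and the O(N²) grouping produce the same output.

```python
import numpy as np

def phi(x):
    # Positive feature map, ELU(x) + 1 (an illustrative assumption,
    # not necessarily the map LinearSR uses)
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    # O(N) in sequence length: the (d x d_v) state kv is built once
    qp, kp = phi(Q), phi(K)                       # (N, d)
    kv = kp.T @ V                                 # (d, d_v)
    z = qp @ kp.sum(axis=0, keepdims=True).T      # (N, 1) normalizer
    return (qp @ kv) / z

def quadratic_equivalent(Q, K, V):
    # Same result, but materializes the full (N, N) attention matrix
    A = phi(Q) @ phi(K).T
    return (A / A.sum(axis=1, keepdims=True)) @ V
```

Because `phi` is strictly positive, the normalizer never vanishes, and the two groupings agree to floating-point precision while only the first scales linearly in N.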


📰 News

  • [2026.03] Official inference code and official model release.

🎬 Overview


Integrated LinearSR framework: TAG-guided SNR-MoE on a linear-attention backbone, stabilized by ESGF at the knee point.

🛠️ Environment Setup

Use the provided environment file:

```shell
conda env create -f environment.yml
conda activate linearsr
```

📦 Downloads

| Component | Required for | Link |
| --- | --- | --- |
| LinearSR official model checkpoints | `CKPT1`..`CKPT4` (or `--model-1`..`--model-4`) | Download |
| Text encoder (`google/gemma-2-2b-it`) | `TEXT_ENCODER_NAME=gemma-2-2b-it`, `TEXT_ENCODER_PATH` | Download |
| VAE (`mit-han-lab/dc-ae-f32c32-sana-1.0`) | `VAE_PRETRAINED` | Download |
| RAM weights (`ram_swin_large_14m.pth`) | `RAM_WEIGHTS` (only when TAG is enabled) | Download |
| DAPE weights (`DAPE.pth`) | `RAM_COND` (only when TAG is enabled) | Download |
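For the two components that are plain Hugging Face repos (the text encoder and the VAE), fetching can be scripted with `huggingface_hub.snapshot_download`. The helper below is a hypothetical convenience, not an official script; the target directories under `weights/` are our assumption, and the LinearSR checkpoints, RAM, and DAPE weights are omitted because their download links are not Hub repo IDs.

```python
try:
    from huggingface_hub import snapshot_download  # optional dependency
except ImportError:
    snapshot_download = None

# Repo IDs taken from the table above; local target dirs are assumptions
COMPONENTS = {
    "google/gemma-2-2b-it": "weights/gemma-2-2b-it",
    "mit-han-lab/dc-ae-f32c32-sana-1.0": "weights/dc-ae-f32c32-sana-1.0",
}

def fetch_all(components=COMPONENTS, download=snapshot_download):
    """Fetch each Hugging Face repo into its target directory.

    `download` is injectable so the mapping can be exercised
    without network access or huggingface_hub installed.
    """
    if download is None:
        raise RuntimeError("huggingface_hub is required: pip install huggingface_hub")
    return {repo: download(repo_id=repo, local_dir=dest)
            for repo, dest in components.items()}
```

Point `TEXT_ENCODER_PATH` and `VAE_PRETRAINED` at the resulting directories.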

☕ Quick Inference

Use the provided template script; all paths are supplied via environment variables, so no files need editing:

```shell
CKPT1=/path/to/expert_1.pth \
CKPT2=/path/to/expert_2.pth \
CKPT3=/path/to/expert_3.pth \
CKPT4=/path/to/expert_4.pth \
TEXT_ENCODER_NAME=gemma-2-2b-it \
TEXT_ENCODER_PATH=/path/to/google--gemma-2-2b-it \
VAE_PRETRAINED=/path/to/dc-ae-f32c32-sana-1.0 \
RAM_WEIGHTS=/path/to/ram_swin_large_14m.pth \
RAM_COND=/path/to/DAPE.pth \
INPUT_DIR=/path/to/lr_images \
OUTPUT_DIR=/path/to/output \
LOCAL_FILES_ONLY=true \
bash run_inference_local_test_template.sh
```
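A common failure mode with long env-var invocations like the one above is a single mistyped path. The pre-flight check below is a hypothetical helper, not part of the official template script; it verifies that every input path variable is set and exists before launching (`OUTPUT_DIR` is deliberately excluded, since it may not exist yet).

```python
import os

# Input-path variables from the command above (assumption: all are required;
# RAM_WEIGHTS/RAM_COND could be dropped if TAG is disabled)
REQUIRED = ["CKPT1", "CKPT2", "CKPT3", "CKPT4",
            "TEXT_ENCODER_PATH", "VAE_PRETRAINED",
            "RAM_WEIGHTS", "RAM_COND", "INPUT_DIR"]

def missing_paths(required=REQUIRED, env=None):
    """Return the variables that are unset or point at nonexistent paths."""
    env = os.environ if env is None else env
    return [v for v in required
            if v not in env or not os.path.exists(env[v])]

if __name__ == "__main__":
    bad = missing_paths()
    if bad:
        raise SystemExit(f"fix these before running inference: {bad}")
```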

❤️ Acknowledgement

Our work builds upon strong open-source foundations in generative modeling and super-resolution. We especially thank the authors of Sana, a foundational open-source text-to-image model.

📑 Citation

```bibtex
@article{li2025linearsr,
  title={LinearSR: Unlocking Linear Attention for Stable and Efficient Image Super-Resolution},
  author={Li, Xiaohui and Zhuang, Shaobin and Cao, Shuo and Yang, Yang and Pu, Yuandong and Qin, Qi and Luo, Siqi and Fu, Bin and Liu, Yihao},
  journal={arXiv preprint arXiv:2510.08771},
  year={2025}
}
```
