The PyTorch 0.4.1 implementation of our NTIRE 2019 model. Unfortunately, this model performs poorly on the real-world single-image super-resolution task. This may be caused by the gap between bicubic downsampling and this new challenge, or by weaknesses in the network design, e.g., where the attention module is added. We hope that OISR modules can be incorporated into the winning models of this competition to further improve the state of the art.
Since we sent our FACT_SHEETs to only one of the organizers (they should have been sent to all of them), our submission was regarded as invalid. Please do not directly compare our model with other participants' on PSNR and SSIM. Sorry for the inconvenience.
- Python 3.7
- PyTorch >= 0.4.0
- numpy
- skimage
- imageio
- matplotlib
- tqdm
- Download the pre-processed training and validation sets from Baidu Cloud (code: `fb0z`) or from OneDrive.
- Unzip the images to the given folders:

```bash
unzip /your/download/data.zip -d /your/NTIRE2019/
```

- Training from scratch:

```bash
cd /your/NTIRE2019/OISR/src
bash train.sh
```

- Fine-tuning on patches: download the pre-processed training patches and validation sets from Baidu Cloud (code: `mu0a`) or from OneDrive:

```bash
unzip /your/download/data2.zip -d /your/NTIRE2019/
cd /your/NTIRE2019/OISR/src/
cp ../experiment/OISR/model/model_best.pt ./
bash train2.sh
cp ../experiment/OISR/model/model_best.pt ./
bash train3.sh
```

- Evaluation on the test set: download `Test_LR.zip` from Baidu Cloud (code: `fwnh`) or from OneDrive:

```bash
unzip /your/download/Test_LR.zip -d /your/NTIRE2019/data/benchmark/B100/
cd /your/NTIRE2019/OISR/src
cp ../experiment/OISR/model/model_best.pt ./  # or move the pre-trained model to ./
bash test.sh
python test_SRimages_rename.py  # SR images can be found in ../experiment/test/results-B100
```
In this case, we apply the channel attention module (similar to SE/CBAM) to the RK-3 block.
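For reference, channel attention in the SE/CBAM style works by global-average-pooling the feature map over its spatial dimensions, passing the result through a small bottleneck MLP, and using a sigmoid gate to rescale each channel. A minimal NumPy sketch (the reduction ratio and random weights below are illustrative, not the exact configuration of our model):

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation style channel attention.

    x  : feature map of shape (C, H, W)
    w1 : weights of shape (C // r, C) -- squeeze (reduction) layer
    w2 : weights of shape (C, C // r) -- excitation (expansion) layer
    """
    # Squeeze: global average pooling over spatial dims -> (C,)
    s = x.mean(axis=(1, 2))
    # Excitation: bottleneck MLP with ReLU, then sigmoid gate in (0, 1)
    z = np.maximum(w1 @ s, 0.0)
    g = 1.0 / (1.0 + np.exp(-(w2 @ z)))
    # Rescale each channel of the input by its gate value
    return x * g[:, None, None]

# Example: 64-channel feature map, reduction ratio 16
C, H, W = 64, 8, 8
rng = np.random.default_rng(0)
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // 16, C)) * 0.1
w2 = rng.standard_normal((C, C // 16)) * 0.1
y = channel_attention(x, w1, w2)
print(y.shape)  # (64, 8, 8)
```

In the full model, this rescaling is applied to the output of the RK-3 block before the residual connection.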
- We use the following MATLAB script to create patches:

```matlab
%% LR image augmentation
% NOTE: the GT/LR pair must have the same filename.
clc;
clear;
%%
% src_dir = './HR';
src_dir_LR = './LR';
target_dir_LR = './new_LR';
src_dir_HR = './GT';
target_dir_HR = './new_GT';
src_filenames = dir(fullfile(src_dir_LR, '*.png'));
N = length(src_filenames);
for i = 1:N
    try
        I = imread([src_dir_LR, '/', src_filenames(i).name]);
        src_new_filename = [target_dir_LR, '/', src_filenames(i).name];
        degree = randi(360);
        degree = degree - mod(degree, 90); % comment this line out to create more challenging patches
        J = imrotate(I, degree, 'bicubic', 'crop');
        [~, rect] = imcrop(J); % interactive: select the crop region by hand
        rect = round(rect);
        J = imcrop(J, rect);
        imwrite(J, src_new_filename);
        % Apply the same rotation and crop to the paired GT image
        I = imread([src_dir_HR, '/', src_filenames(i).name]);
        src_new_filename = [target_dir_HR, '/', src_filenames(i).name];
        J = imrotate(I, degree, 'bicubic', 'crop');
        J = imcrop(J, rect);
        imwrite(J, src_new_filename);
    catch
        disp('skip.');
    end
end
```

- We trim the cropped patches to 100N x 100N:
```matlab
%% Trim
clc;
clear;
%%
target_dir_LR = './new_LR';
target_dir_HR = './new_GT';
src_filenames = dir(fullfile(target_dir_LR, '*.png'));
N = length(src_filenames);
for i = 1:N
    I = imread([target_dir_LR, '/', src_filenames(i).name]);
    src_new_filename = [target_dir_LR, '/', src_filenames(i).name];
    [H, W, ~] = size(I); % size returns rows (height) first
    H = H - mod(H, 100);
    W = W - mod(W, 100);
    imwrite(I(1:H, 1:W, :), src_new_filename);
    I = imread([target_dir_HR, '/', src_filenames(i).name]);
    src_new_filename = [target_dir_HR, '/', src_filenames(i).name];
    [H, W, ~] = size(I);
    H = H - mod(H, 100);
    W = W - mod(W, 100);
    imwrite(I(1:H, 1:W, :), src_new_filename);
end
```

Inspired by the smooth L1 loss used in object detection, we use the smooth L1 loss in this competition.
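Smooth L1 behaves like L2 near zero and like L1 for large residuals, which makes training less sensitive to outlier pixels. A minimal NumPy sketch of the standard formulation (the threshold `beta=1.0` is the common default and may differ from our exact training setting):

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1: 0.5 * d^2 / beta if |d| < beta, else |d| - 0.5 * beta."""
    d = np.abs(pred - target)
    loss = np.where(d < beta, 0.5 * d ** 2 / beta, d - 0.5 * beta)
    return loss.mean()

# Small residual uses the quadratic branch, large one the linear branch
print(smooth_l1(np.array([0.5, 3.0]), np.zeros(2)))  # (0.125 + 2.5) / 2 = 1.3125
```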
- Qualitative results: five LR/SR image pairs (images not shown here).
| model (2x) | Param | Set5 | Set14 | B100 | Urban100 |
|---|---|---|---|---|---|
| OISR-NTIRE19 | 58M | 38.21 | 33.94 | 32.31 | 32.75 |
| OISR-RK3 | 42M | 38.21 | 33.94 | 32.36 | 33.03 |
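The PSNR values above are derived from the mean squared error between the SR output and the ground truth. A minimal NumPy sketch for 8-bit images (benchmark protocols typically also convert to the Y channel and crop borders, which this sketch omits):

```python
import numpy as np

def psnr(sr, hr, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((sr.astype(np.float64) - hr.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a constant error of 10 gray levels
hr = np.zeros((4, 4), dtype=np.uint8)
sr = np.full((4, 4), 10, dtype=np.uint8)
print(round(psnr(sr, hr), 2))  # 20 * log10(255 / 10) = 28.13
```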
Our model here serves only as a toy example of an attention-based design, and not a particularly good one.