LIIF
This repository contains the official implementation for LIIF introduced in the following paper:
Learning Continuous Image Representation with Local Implicit Image Function
Yinbo Chen, Sifei Liu, Xiaolong Wang
CVPR 2021 (Oral)
The project page with video is at https://yinboc.github.io/liif/.

Citation
If you find our work useful in your research, please cite:
@inproceedings{chen2021learning,
title={Learning continuous image representation with local implicit image function},
author={Chen, Yinbo and Liu, Sifei and Wang, Xiaolong},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={8628--8638},
year={2021}
}
Environment
- Python 3
- PyTorch 1.6.0
- TensorboardX
- yaml, numpy, tqdm, imageio
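The dependencies can be installed with pip; the package names below are a sketch for the listed requirements (only the PyTorch version is pinned by the authors):

pip install torch==1.6.0 tensorboardX pyyaml numpy tqdm imageio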
Quick Start
- Download a DIV2K pre-trained model.
Model | File size | Download
---|---|---
EDSR-baseline-LIIF | 18M | Dropbox / Google Drive
RDN-LIIF | 256M | Dropbox / Google Drive
- Convert your image to LIIF and render it at a given resolution (on GPU 0; [MODEL_PATH] denotes the .pth file):
python demo.py --input xxx.png --model [MODEL_PATH] --resolution [HEIGHT],[WIDTH] --output output.png --gpu 0
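For example, to render input.png at 1080x1920 (height, width); the input and model file names here are placeholders:

python demo.py --input input.png --model edsr-baseline-liif.pth --resolution 1080,1920 --output output.png --gpu 0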
Reproducing Experiments
Data
mkdir load for putting the dataset folders.
- DIV2K: mkdir and cd into load/div2k. Download HR images and bicubic validation LR images from the DIV2K website (i.e. Train_HR, Valid_HR, Valid_LR_X2, Valid_LR_X3, Valid_LR_X4). unzip these files to get the image folders (see the shell sketch after this list).
- benchmark datasets: cd into load/. Download and tar -xf the benchmark datasets (provided by this repo) to get a load/benchmark folder with sub-folders Set5/, Set14/, B100/, Urban100/.
- celebAHQ: mkdir load/celebAHQ and cp scripts/resize.py load/celebAHQ/, then cd load/celebAHQ/. Download and unzip data1024x1024.zip from the Google Drive link (provided by this repo). Run python resize.py to get image folders 256/, 128/, 64/, 32/. Download the split.json.
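A minimal shell sketch of the DIV2K step; the zip filenames and URLs follow the official DIV2K naming and are assumptions that may need adjusting:

# create the dataset folder and fetch HR images plus bicubic validation LR images
mkdir -p load/div2k && cd load/div2k
wget http://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_train_HR.zip
wget http://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_valid_HR.zip
wget http://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_valid_LR_bicubic_X2.zip
wget http://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_valid_LR_bicubic_X3.zip
wget http://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_valid_LR_bicubic_X4.zip
# unpack all archives into image folders
unzip 'DIV2K_*.zip'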
Running the code
0. Preliminaries
- For train_liif.py or test.py, use --gpu [GPU] to specify the GPUs (e.g. --gpu 0 or --gpu 0,1).
- For train_liif.py, the save folder defaults to save/_[CONFIG_NAME]. Use --name to specify a different name if needed (see the combined example below).
- For dataset args in configs, cache: in_memory pre-loads the dataset into memory (which may require large memory, e.g. ~40GB for DIV2K), cache: bin creates binary files (in a sibling folder) on first use, and cache: none loads images directly from disk. Modify this setting according to your hardware resources before running the training scripts.
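As a combined example of these flags, using the DIV2K config from the next section (the run name is arbitrary):

python train_liif.py --config configs/train-div2k/train_edsr-baseline-liif.yaml --gpu 0,1 --name my-liif-run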
1. DIV2K experiments
Train: python train_liif.py --config configs/train-div2k/train_edsr-baseline-liif.yaml (with the EDSR-baseline backbone; for RDN, replace edsr-baseline with rdn). We use 1 GPU for training EDSR-baseline-LIIF and 4 GPUs for RDN-LIIF.
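For example, the corresponding 4-GPU RDN run (config name inferred from the substitution above):

python train_liif.py --config configs/train-div2k/train_rdn-liif.yaml --gpu 0,1,2,3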
Test: bash scripts/test-div2k.sh [MODEL_PATH] [GPU] for the DIV2K validation set, or bash scripts/test-benchmark.sh [MODEL_PATH] [GPU] for the benchmark datasets. [MODEL_PATH] is the path to a .pth file; we use epoch-last.pth in the corresponding save folder.
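For instance, evaluating the EDSR-baseline run on GPU 0, assuming the save folder follows the save/_[CONFIG_NAME] convention noted above:

bash scripts/test-div2k.sh save/_train_edsr-baseline-liif/epoch-last.pth 0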
2. celebAHQ experiments
Train: python train_liif.py --config configs/train-celebAHQ/[CONFIG_NAME].yaml.
Test: python test.py --config configs/test/test-celebAHQ-32-256.yaml --model [MODEL_PATH] (or test-celebAHQ-64-128.yaml for the other task). We use epoch-best.pth in the corresponding save folder.
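For instance, keeping the [CONFIG_NAME] placeholder for whichever training config was used:

python test.py --config configs/test/test-celebAHQ-64-128.yaml --model save/_[CONFIG_NAME]/epoch-best.pth --gpu 0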