RALF
[CVPR24 Oral] Official repository for RALF: Retrieval-Augmented Layout Transformer for Content-Aware Layout Generation
Retrieval-Augmented Layout Transformer for Content-Aware Layout Generation
Daichi Horita¹, Naoto Inoue², Kotaro Kikuchi², Kota Yamaguchi², Kiyoharu Aizawa¹
¹The University of Tokyo, ²CyberAgent
CVPR 2024 (Oral)
Content-aware graphic layout generation aims to automatically arrange visual elements according to given content, such as an e-commerce product image. This repository aims to provide an all-in-one package for content-aware layout generation. If you like this repository, please give it a star!
In this paper, we propose Retrieval-Augmented content-aware layout generation. We retrieve nearest neighbor examples based on the input image and use them as a reference to augment the generation process.
Contents
- Setup
- Dataset splits
- Pre-processing Dataset
- Training
- Inference & Evaluation
- Inference using a canvas
Overview of Benchmark
We provide not only our method (RALF / Autoreg Baseline) but also other state-of-the-art methods for content-aware layout generation. The following methods are included in this repository:
- Autoreg Baseline [Horita+ CVPR24]
- RALF [Horita+ CVPR24]
- CGL-GAN [Zhou+ IJCAI22]
- DS-GAN [Hsu+ CVPR23]
- ICVT [Cao+ ACMMM22]
- LayoutDM [Inoue+ CVPR23]
- MaskGIT [Chang+ CVPR22]
- VQDiffusion [Gu+ CVPR22]
Setup
We recommend using Docker to easily try our code.
1. Requirements
- Python 3.9+
- PyTorch 1.13.1
We recommend using Poetry (all settings and dependencies are specified in pyproject.toml).
2. How to install
Local environment
- Install poetry (see official docs).
curl -sSL https://install.python-poetry.org | python3 -
- Install dependencies (this may take a while)
poetry install
Docker environment
- Build a Docker image
bash scripts/docker/build.sh
- Attach the container to your shell.
bash scripts/docker/exec.sh
- Install dependencies in the container
poetry install
3. Set up global environment variables
Some variables must be set. Please create scripts/bin/setup.sh on your own and set at least the following. If you downloaded the provided zip, you can skip this setup.
DATA_ROOT="./cache/dataset"
Other variables, such as OMP_NUM_THREADS, can optionally be set there.
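For reference, a minimal scripts/bin/setup.sh might look like the sketch below; only DATA_ROOT comes from the description above, and OMP_NUM_THREADS is an optional example value you may want to tune.
# Minimal sketch of scripts/bin/setup.sh (create this file yourself)
export DATA_ROOT="./cache/dataset"   # location of the preprocessed datasets
export OMP_NUM_THREADS=4             # optional; adjust to your machine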
4. Check checkpoints and experimental results
The checkpoints and generated layouts of the Autoreg Baseline and our RALF for the unconstrained and constrained tasks are available at Google Drive. After downloading it, please run unzip cache.zip in this directory. Note that the file size is 13GB.
The cache directory contains:
- the preprocessed CGL dataset in cache/dataset,
- the weights of the layout encoder and ResNet50 in cache/PRECOMPUTED_WEIGHT_DIR,
- the pre-computed layout features of CGL in cache/eval_gt_features,
- the relationships of elements for the relationship task in cache/pku_cgl_relationships_dic_using_canvas_sort_label_lexico.pt,
- the checkpoints and evaluation results of both the Autoreg Baseline and our RALF in cache/training_logs.
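After unpacking, the layout can be sanity-checked with a quick listing; the expected top-level entries shown in the comment simply restate the contents listed above.
ls cache
# dataset  eval_gt_features  PRECOMPUTED_WEIGHT_DIR
# pku_cgl_relationships_dic_using_canvas_sort_label_lexico.pt  training_logs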
Dataset splits
Train / Test / Val / Real data splits
We preprocess the PKU and CGL datasets by partitioning the training set into validation and test subsets, as elaborated in Section 4.1. The CGL dataset, as distributed, is already segmented into these divisions. To allow reproduction of our results, we provide the filenames of each split in the data_splits/splits/<DATASET_NAME> directory. We encourage you to use these predefined splits when conducting experiments based on our setting and when comparing against our reported scores, such as those of CGL-GAN and DS-GAN.
IDs of retrieved samples
We use the training split as a retrieval source. For example, when RALF is trained on PKU, the training split of PKU is used for both training and evaluation. We provide the pre-computed correspondences, obtained with DreamSim [Fu+ NeurIPS23], in data_splits/retrieval/<DATASET_NAME>. The data structure is as follows:
FILENAME:
- FILENAME top1
- FILENAME top2
...
- FILENAME top16
You can load an image from <IMAGE_ROOT>/<FILENAME>.png.
Pre-processing Dataset
We highly recommend pre-processing the datasets so that you can run your experiments as quickly as possible!
Each script can be used for processing both PKU and CGL by specifying --dataset_type (pku|cgl).
Dataset setup
Folder names with parentheses will be generated by this pipeline.
<DATASET_ROOT>
| - annotation
| | (for PKU)
| | - train_csv_9973.csv
| | - test_csv_905.csv (download: https://drive.google.com/file/d/19BIHOdOzVPBqf26SZY0hu1bImIYlRqVd/view?usp=sharing)
| | (for CGL)
| | - layout_train_6w_fixed_v2.json
| | - layout_test_6w_fixed_v2.json
| | - yinhe.json
| - image
| | - train
| | | - original: image with layout elements
| | | - (input): image without layout elements (by inpainting)
| | | - (saliency)
| | | - (saliency_sub)
| | - test
| | | - input: image without layout elements
| | | - (saliency)
| | | - (saliency_sub)
Image inpainting
poetry run python image2layout/hfds_builder/inpainting.py --dataset_root <DATASET_ROOT>
Saliency detection
poetry run python image2layout/hfds_builder/saliency_detection.py --input_dir <INPUT_DIR> --output_dir <OUTPUT_DIR> (--algorithm (isnet|basnet))
Aggregate data and dump to HFDS
poetry run python image2layout/hfds_builder/dump_dataset.py --dataset_root <DATASET_ROOT> --output_dir <OUTPUT_DIR>
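Putting the three steps together for PKU might look like the sketch below; the concrete paths are placeholders based on the folder layout above, and passing --dataset_type to every script is an assumption based on the note at the start of this section.
# Sketch: preprocessing pipeline for PKU (paths are placeholders)
DATASET_ROOT="./cache/dataset/pku"
poetry run python image2layout/hfds_builder/inpainting.py --dataset_root $DATASET_ROOT --dataset_type pku
poetry run python image2layout/hfds_builder/saliency_detection.py --input_dir $DATASET_ROOT/image/train/input --output_dir $DATASET_ROOT/image/train/saliency --algorithm isnet
poetry run python image2layout/hfds_builder/dump_dataset.py --dataset_root $DATASET_ROOT --output_dir ./cache/dataset/pku_hfds --dataset_type pku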
Training
Tips
configs/<METHOD>_<DATASET>.sh contains the hyperparameters and settings for each method and dataset. Please refer to the file for details. In particular, please check whether the debugging mode is enabled (DEBUG=True or DEBUG=False).
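For example, a quick way to confirm the flag before launching a long run (the file name follows the configs/<METHOD>_<DATASET>.sh pattern described above):
grep DEBUG configs/autoreg_cgl.sh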
Autoreg Baseline with CGL
Please run
bash scripts/train/autoreg_cgl.sh <GPU_ID> <TASK_NAME>
# If you want to run training and evaluation, please run
bash scripts/run_job/end_to_end.sh <GPU_ID e.g. 0> autoreg cgl <TASK_NAME e.g. uncond>
where TASK_NAME indicates an unconstrained or constrained task.
Please refer to the task list below (an example end-to-end command follows the list):
- uncond: Unconstrained generation
- c: Category → Size + Position
- cwh: Category + Size → Position
- partial: Completion
- refinement: Refinement
- relation: Relationship
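For instance, to train and evaluate the Autoreg Baseline on the Category → Size + Position task, the wrapper can be invoked as follows (a usage sketch based on the command pattern shown above):
bash scripts/run_job/end_to_end.sh 0 autoreg cgl c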
RALF with CGL
This task uses the dataset with inpainting.
Please run
bash scripts/train/ralf_cgl.sh <GPU_ID> <TASK_NAME>
# If you want to run training and evaluation, please run
bash scripts/run_job/end_to_end.sh <GPU_ID e.g. 0> ralf cgl <TASK_NAME e.g. uncond>
Other methods
For example, the following scripts are helpful. end_to_end.sh is a wrapper script for training, inference, and evaluation.
# DS-GAN with CGL dataset
bash scripts/run_job/end_to_end.sh 0 dsgan cgl uncond
# LayoutDM with CGL dataset
bash scripts/run_job/end_to_end.sh 2 layoutdm cgl uncond
# CGL-GAN + Retrieval Augmentation with CGL dataset
bash scripts/run_job/end_to_end.sh 2 cglgan_ra cgl uncond
Inference & Evaluation
Experimental results are provided in cache/training_logs. For example, the directory autoreg_c_cgl, which contains the results of the Autoreg Baseline on the Category → Size + Position task, includes:
- test_<SEED>.pkl: the generated layouts
- layout_test_<SEED>.png: the rendered layouts, in which the top sample is the ground truth and the bottom sample is a predicted sample
- gen_final_model.pt: the final checkpoint
- scores_test.tex: summarized quantitative results
Annotated split
Please see and run
bash scripts/eval_inference/eval_inference.sh <GPU_ID> <JOB_DIR> <COND_TYPE> cgl
For example,
# Autoreg Baseline with Unconstraint generation
bash scripts/eval_inference/eval_inference.sh 0 "cache/training_logs/autoreg_uncond_cgl" uncond cgl
Unannotated split
This uses the dataset with the real canvas, i.e., without inpainting.
Please see and run
bash scripts/eval_inference/eval_inference_all.sh <GPU_ID>
Inference using a canvas
Please run
bash scripts/run_job/inference_single_data.sh <GPU_ID> <JOB_DIR> cgl <SAMPLE_ID>
where SAMPLE_ID can optionally be set to a dataset index.
For example,
bash scripts/run_job/inference_single_data.sh 0 "./cache/training_logs/ralf_uncond_cgl" cgl
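To render a specific example, a dataset index can be appended as SAMPLE_ID; the index 0 below is an arbitrary choice for illustration.
bash scripts/run_job/inference_single_data.sh 0 "./cache/training_logs/ralf_uncond_cgl" cgl 0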
Inference using your personal data
Please customize image2layout/train/inference_single_data.py to load your data.
Citation
If you find our work useful in your research, please consider citing:
@inproceedings{horita2024retrievalaugmented,
title={{Retrieval-Augmented Layout Transformer for Content-Aware Layout Generation}},
author={Daichi Horita and Naoto Inoue and Kotaro Kikuchi and Kota Yamaguchi and Kiyoharu Aizawa},
booktitle={CVPR},
year={2024}
}