

Oral

Retrieval-Augmented Layout Transformer for Content-Aware Layout Generation

Daichi Horita · Naoto Inoue · Kotaro Kikuchi · Kota Yamaguchi · Kiyoharu Aizawa

Summit Flex Hall AB Oral #2
Orals 1B: Vision and Graphics
Wed 19 Jun 9:18 a.m. — 9:36 a.m. PDT

Abstract:

Content-aware graphic layout generation aims to automatically arrange visual elements alongside given content, such as an e-commerce product image. In this paper, we argue that current layout generation approaches suffer from limited training data for the high-dimensional layout structure. We show that a simple retrieval augmentation can significantly improve generation quality. Our model, the Retrieval-Augmented Layout Transformer (RALF), retrieves nearest-neighbor layout examples based on an input image and feeds these results into an autoregressive generator. Our model can apply retrieval augmentation to various controllable generation tasks and yields high-quality layouts within a unified architecture. Our extensive experiments show that RALF successfully generates content-aware layouts in both constrained and unconstrained settings and significantly outperforms the baselines.
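The retrieval step the abstract describes can be sketched as follows: embed the input image, look up the k database layouts whose images have the most similar features, and pass them to the generator as conditioning. This is a minimal illustration only, not RALF's implementation; the feature extractor, database contents, and layout representation here are all hypothetical placeholders.

```python
# Hypothetical sketch of nearest-neighbor layout retrieval by image feature.
# In the paper, retrieved layouts condition an autoregressive generator;
# here we stop at the retrieval step.
import numpy as np

def retrieve_nearest_layouts(query_feat, db_feats, db_layouts, k=2):
    """Return the k database layouts whose image features are closest
    (by cosine similarity) to the query image feature."""
    q = query_feat / np.linalg.norm(query_feat)
    d = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    sims = d @ q                   # cosine similarity to each database entry
    top = np.argsort(-sims)[:k]    # indices of the k most similar images
    return [db_layouts[i] for i in top]

# Toy example: 4 database images with 3-d features and dummy layout labels.
db_feats = np.array([[1.0, 0.0, 0.0],
                     [0.9, 0.1, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])
db_layouts = ["layout_a", "layout_b", "layout_c", "layout_d"]
query = np.array([1.0, 0.05, 0.0])

retrieved = retrieve_nearest_layouts(query, db_feats, db_layouts, k=2)
print(retrieved)  # → ['layout_a', 'layout_b']
```

In the full model, the retrieved layouts would be encoded and fed to the generator as additional context tokens rather than printed.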
