

Poster

LayoutFormer: Hierarchical Text Detection Towards Scene Text Understanding

Min Liang · Jia-Wei Ma · Xiaobin Zhu · Jingyan Qin · Xu-Cheng Yin

Arch 4A-E Poster #102
Thu 20 Jun 5 p.m. PDT — 6:30 p.m. PDT

Abstract:

Existing scene text detectors generally focus on accurately detecting single-level (i.e., word-level, line-level, or paragraph-level) text entities without exploring the relationships among different levels of text entities. To comprehensively understand scene texts, it is critical to detect multi-level texts while exploring their contextual information. To this end, we propose a unified framework (dubbed LayoutFormer) for hierarchical text detection, which simultaneously conducts multi-level text detection and predicts geometric layouts to promote scene text understanding. In LayoutFormer, we propose three decoders, WordDecoder, LineDecoder, and ParaDecoder, responsible for word-level, line-level, and paragraph-level text prediction, respectively. Meanwhile, WordDecoder and ParaDecoder adaptively learn word-line and line-paragraph relationships, respectively. In addition, we propose a Prior Location Sampler that operates on multi-scale features to adaptively select a small set of representative foreground features for updating text queries. It improves hierarchical detection performance while significantly reducing the computational cost. Comprehensive experiments verify that our method achieves state-of-the-art performance on both single-level and hierarchical text detection.
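To make the architecture concrete, below is a minimal PyTorch sketch of the components the abstract names: a Prior Location Sampler that keeps only top-scoring foreground features, and three query-based decoders for word-, line-, and paragraph-level prediction. This is an illustration under assumed design choices (a DETR-style decoder, the specific scoring rule, query counts, and the way line features are injected into WordDecoder and ParaDecoder are all assumptions), not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class PriorLocationSampler(nn.Module):
    """Scores every multi-scale feature location and keeps the top-k
    foreground features as decoder memory (scoring rule is an assumption)."""
    def __init__(self, dim, num_samples=300):
        super().__init__()
        self.score_head = nn.Linear(dim, 1)  # per-location foreground score
        self.num_samples = num_samples

    def forward(self, feats):
        # feats: list of flattened feature maps, each (B, N_l, C)
        tokens = torch.cat(feats, dim=1)                  # (B, N, C)
        scores = self.score_head(tokens).squeeze(-1)      # (B, N)
        topk = scores.topk(self.num_samples, dim=1).indices
        idx = topk.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
        return tokens.gather(1, idx)                      # (B, k, C)

class LevelDecoder(nn.Module):
    """One text-level decoder (word / line / paragraph): learnable
    queries cross-attend to the sampled foreground features."""
    def __init__(self, dim, num_queries, num_layers=3, heads=8):
        super().__init__()
        self.queries = nn.Embedding(num_queries, dim)
        layer = nn.TransformerDecoderLayer(dim, heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.box_head = nn.Linear(dim, 4)  # one box per text entity

    def forward(self, memory, context=None):
        q = self.queries.weight.unsqueeze(0).expand(memory.size(0), -1, -1)
        if context is not None:            # inject cross-level context
            q = q + context.mean(dim=1, keepdim=True)
        hs = self.decoder(q, memory)
        return hs, self.box_head(hs).sigmoid()

class LayoutFormerSketch(nn.Module):
    """Hierarchical detector: LineDecoder runs first; WordDecoder and
    ParaDecoder reuse its outputs so word-line and line-paragraph
    relationships can be learned (the interaction is an assumption)."""
    def __init__(self, dim=256):
        super().__init__()
        self.sampler = PriorLocationSampler(dim)
        self.line_dec = LevelDecoder(dim, num_queries=100)
        self.word_dec = LevelDecoder(dim, num_queries=300)
        self.para_dec = LevelDecoder(dim, num_queries=50)

    def forward(self, feats):
        memory = self.sampler(feats)
        line_hs, line_boxes = self.line_dec(memory)
        _, word_boxes = self.word_dec(memory, context=line_hs)
        _, para_boxes = self.para_dec(memory, context=line_hs)
        return {"word": word_boxes, "line": line_boxes, "para": para_boxes}

# Usage: three feature scales, each flattened to (batch, locations, channels)
feats = [torch.randn(2, n, 256) for n in (1024, 256, 64)]
out = LayoutFormerSketch()(feats)
print({k: v.shape for k, v in out.items()})
```

Because all three decoders attend only to the k sampled features rather than every multi-scale location, attention cost scales with k instead of the full feature count, which is consistent with the abstract's claim of reduced computation.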
