Poster

HumanBench: Towards General Human-Centric Perception With Projector Assisted Pretraining

Shixiang Tang · Cheng Chen · Qingsong Xie · Meilin Chen · Yizhou Wang · Yuanzheng Ci · Lei Bai · Feng Zhu · Haiyang Yang · Li Yi · Rui Zhao · Wanli Ouyang

West Building Exhibit Halls ABC 130
[ Abstract ] [ Project Page ]
[ Paper PDF ] [ Slides ] [ Poster ]
Thu 22 Jun 4:30 p.m. PDT — 6 p.m. PDT

Abstract:

Human-centric perception encompasses a variety of vision tasks with widespread industrial applications, including surveillance, autonomous driving, and the metaverse. It is desirable to have a general pretrained model for versatile human-centric downstream tasks. This paper forges ahead along this path from the aspects of both benchmark and pretraining methods. Specifically, we propose HumanBench, built on existing datasets, to comprehensively evaluate, on common ground, the generalization abilities of different pretraining methods across 19 datasets from 6 diverse downstream tasks: person ReID, pose estimation, human parsing, pedestrian attribute recognition, pedestrian detection, and crowd counting. To learn both coarse-grained and fine-grained knowledge of human bodies, we further propose a Projector AssisTed Hierarchical pretraining method (PATH) that learns diverse knowledge at different granularity levels. Comprehensive evaluations on HumanBench show that PATH achieves new state-of-the-art results on 17 downstream datasets and on-par results on the other 2 datasets. The code will be made publicly available at https://github.com/OpenGVLab/HumanBench.
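The core idea the abstract describes, a shared backbone paired with task-specific projectors during pretraining, can be sketched in miniature. The following is a hypothetical NumPy illustration, not the paper's actual architecture: the dimensions, task names, and the `forward` helper are all invented for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
IN_DIM, FEAT_DIM, PROJ_DIM = 16, 8, 4  # illustrative sizes, not from the paper

# Shared backbone weights: a stand-in for a large vision encoder that all
# pretraining tasks reuse.
W_enc = rng.standard_normal((IN_DIM, FEAT_DIM)) * 0.1

# One lightweight projector per pretraining task (hypothetical task names).
# Only the projector differs across tasks; the encoder is shared, so the
# backbone must capture knowledge useful at every granularity.
tasks = ["reid", "pose", "parsing"]
projectors = {t: rng.standard_normal((FEAT_DIM, PROJ_DIM)) * 0.1 for t in tasks}

def forward(x, task):
    """Encode with the shared backbone, then apply the task's projector."""
    feat = np.tanh(x @ W_enc)        # shared representation
    return feat @ projectors[task]   # task-specific projection

x = rng.standard_normal((2, IN_DIM))  # a toy batch of two samples
outs = {t: forward(x, t) for t in tasks}
print({t: o.shape for t, o in outs.items()})
```

At downstream time, the projectors can be discarded and only the shared encoder transferred, which is the usual motivation for this kind of design.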
