

Poster

Self-Supervised Representation Learning from Arbitrary Scenarios

Zhaowen Li · Yousong Zhu · Zhiyang Chen · Zongxin Gao · Rui Zhao · Chaoyang Zhao · Ming Tang · Jinqiao Wang

Arch 4A-E Poster #334
Fri 21 Jun 10:30 a.m. PDT — noon PDT

Abstract:

Current self-supervised methods can primarily be categorized into contrastive learning and masked image modeling. Extensive studies have demonstrated that combining these two approaches can achieve state-of-the-art performance. However, these methods essentially reinforce the global consistency of contrastive learning without accounting for the conflict between the two approaches, which hinders their generalizability to arbitrary scenarios. In this paper, we theoretically prove that MAE serves as patch-level contrastive learning, where each patch within an image is treated as a distinct category. This conflicts directly with global-level contrastive learning, which treats all patches in an image as an identical category. To resolve this conflict, this work abandons the non-generalizable global-level constraints and proposes explicit patch-level contrastive learning as a solution. Specifically, this work employs the encoder of MAE to generate dual-branch features, which then perform patch-level learning through a decoder. In contrast to the global-level data augmentation of contrastive learning, our approach leverages patch-level feature augmentation to mitigate interference from global-level learning. Consequently, our approach can learn heterogeneous representations from a single image while avoiding the conflicts encountered by previous methods. Extensive experiments affirm the potential of our method for learning from arbitrary scenarios.
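The abstract describes patch-level contrastive learning in which an encoder produces two branches of patch features, a decoder projects them, and each patch is treated as its own category. Below is a minimal PyTorch sketch of that idea, not the authors' implementation: the decoder depth, the feature-dropout augmentation, the temperature, and the assumption that `encoder` returns per-patch tokens of shape (B, N, dim) from full (unmasked) images are all illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchLevelContrast(nn.Module):
    """Sketch of patch-level contrastive learning: each patch is its own class."""

    def __init__(self, encoder, dim=768, proj_dim=256, temperature=0.2, drop_p=0.1):
        super().__init__()
        self.encoder = encoder                      # e.g. a ViT returning (B, N, dim) patch tokens (assumed)
        self.decoder = nn.Sequential(               # lightweight decoder / projection head (assumed)
            nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, proj_dim))
        self.feat_drop = nn.Dropout(drop_p)         # patch-level feature augmentation (assumed choice)
        self.t = temperature

    def forward(self, images):
        tokens = self.encoder(images)               # (B, N, dim) patch features
        # Two branches from the same features, each with independent feature-level augmentation.
        z1 = F.normalize(self.decoder(self.feat_drop(tokens)), dim=-1)
        z2 = F.normalize(self.decoder(self.feat_drop(tokens)), dim=-1)

        B, N, _ = z1.shape
        # Pairwise similarities between all patches of the two branches within one image;
        # the same patch index is the positive, every other patch is a negative.
        logits = torch.einsum('bnd,bmd->bnm', z1, z2) / self.t        # (B, N, N)
        targets = torch.arange(N, device=logits.device).expand(B, N)  # positive = matching patch
        return F.cross_entropy(logits.reshape(B * N, N), targets.reshape(-1))
```

In this sketch the "augmentation" acts on features rather than on the input image, mirroring the abstract's contrast with global-level data augmentation; any encoder that yields one token per patch could be plugged in.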
