Object Detection With Self-Supervised Scene Adaptation

Zekun Zhang · Minh Hoai

West Building Exhibit Halls ABC 094
Thu 22 Jun 4:30 p.m. PDT — 6 p.m. PDT


This paper proposes a novel method to improve the performance of a trained object detector on scenes with fixed camera perspectives through self-supervised adaptation. Given a specific scene, the trained detector is adapted using pseudo-ground-truth labels generated by the detector itself and an object tracker in a cross-teaching manner. Because the camera perspective is fixed, our method can exploit the equivariance of the static background in two ways: artifact-free object mixup as a means of data augmentation, and accurate background extraction as an additional input modality. We also introduce a large-scale and diverse dataset for the development and evaluation of scene-adaptive object detection. Experiments on this dataset show that our method improves the average precision of the original detector, outperforming previous state-of-the-art self-supervised domain-adaptive object detection methods by a large margin. Our dataset and code are published at
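To illustrate the idea of artifact-free object mixup under a fixed camera, here is a minimal sketch, not the authors' implementation: it assumes a simple per-pixel temporal-median background model, and the function names, box format, and threshold value are all hypothetical. Because the source and destination frames share the same static background, only pixels that deviate from the background model (the object itself) are copied, avoiding the rectangular seams of naive copy-paste augmentation.

```python
import numpy as np

def estimate_background(frames):
    # Per-pixel temporal median over frames of a static-camera scene:
    # a common, simple background model when the camera does not move.
    return np.median(np.stack(frames), axis=0).astype(frames[0].dtype)

def mixup_object(src_frame, dst_frame, src_box, background, diff_thresh=25):
    """Paste the object inside src_box from src_frame into dst_frame.

    src_box is (x1, y1, x2, y2). Only pixels whose summed channel
    difference from the background exceeds diff_thresh are copied,
    so the shared static background is left untouched.
    """
    x1, y1, x2, y2 = src_box
    src_patch = src_frame[y1:y2, x1:x2].astype(np.int16)
    bg_patch = background[y1:y2, x1:x2].astype(np.int16)
    # Foreground mask: pixels that deviate from the background model.
    mask = np.abs(src_patch - bg_patch).sum(axis=-1) > diff_thresh
    out = dst_frame.copy()
    out[y1:y2, x1:x2][mask] = src_frame[y1:y2, x1:x2][mask]
    return out
```

In practice the foreground mask would come from a more robust background-subtraction or segmentation step, but the principle is the same: the fixed camera makes the background a reliable reference for separating object pixels.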
