Visual understanding at different levels of granularity has been a longstanding problem in the computer vision community. These tasks span image-level tasks (e.g., image classification, image-text retrieval, image captioning, and visual question answering), region-level localization tasks (e.g., object detection and phrase grounding), and pixel-level grouping tasks (e.g., image instance/semantic/panoptic segmentation). Until recently, most of these tasks were tackled separately with specialized model designs, preventing the synergy among tasks of different granularities from being exploited.
In light of the versatility of transformers and inspired by large-scale vision-language pre-training, the computer vision community is now witnessing a growing interest in building general-purpose vision systems, also called vision foundation models, that can learn from and be applied to a variety of downstream tasks, ranging from image-level and region-level to pixel-level vision tasks.
In this tutorial, we will cover the most recent approaches and principles at the frontier of learning and applying vision foundation models, including (1) learning vision foundation models from natural language supervision, with applications to open-vocabulary image classification and retrieval, object detection, segmentation, and multimodal understanding; (2) learning vision foundation models via masked image modeling, with its extensions to multimodal pre-training; and (3) vision foundation model architecture design with transformers and beyond.
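To make the first topic concrete, the sketch below illustrates how natural language supervision enables open-vocabulary (zero-shot) classification: class names are turned into text prompts, both image and text are mapped into a shared embedding space, and the predicted class is the one whose text embedding is most similar to the image embedding. This is only a minimal illustration; the encoders here are replaced with random stand-in vectors, and the prompt template and class names are hypothetical.

```python
# Minimal sketch of zero-shot classification from natural language
# supervision (CLIP-style). Real systems use trained image and text
# encoders; here both are replaced with random stand-in embeddings.
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # embedding dimension (illustrative only)

def normalize(x):
    # Project embeddings onto the unit sphere so the dot product
    # below equals cosine similarity.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Hypothetical open-vocabulary label set, phrased as prompts such as
# "a photo of a {class}" and encoded by a text encoder (stand-in here).
class_names = ["cat", "dog", "car"]
text_embeddings = normalize(rng.standard_normal((len(class_names), dim)))

# Stand-in for an image encoder's output for one input image.
image_embedding = normalize(rng.standard_normal(dim))

# Zero-shot prediction: compare the image embedding against every
# class prompt embedding and pick the most similar one.
logits = text_embeddings @ image_embedding
probs = np.exp(logits) / np.exp(logits).sum()
prediction = class_names[int(np.argmax(logits))]
print(prediction, probs.round(3))
```

Because the label set is just a list of strings, new classes can be added at inference time by encoding new prompts, with no retraining, which is what makes the classification "open-vocabulary."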