

Paper in Workshop: Mechanistic Interpretability for Vision

Analyzing Hierarchical Structure in Vision Models with Sparse Autoencoders

Matthew L. Olson · Musashi Hinck · Neale Ratzlaff · Changbai Li · Phillip Howard · Vasudev Lal · Shao-Yen Tseng


Abstract:

The ImageNet hierarchy provides a structured taxonomy of object categories, offering a valuable lens through which to analyze the representations learned by deep vision models. In this work, we conduct a comprehensive analysis of how vision models encode the ImageNet hierarchy, leveraging Sparse Autoencoders (SAEs) to probe their internal representations. SAEs have been widely used to interpret large language models (LLMs), where they enable the discovery of semantically meaningful features. Here, we extend their use to vision models to investigate whether learned representations align with the ontological structure defined by the ImageNet taxonomy. Our results show that SAEs uncover hierarchical relationships in model activations, revealing an implicit encoding of taxonomic structure. We analyze the consistency of these representations across layers of the popular vision foundation model DINOv2 and provide insights into how deep vision models internalize hierarchical category information, showing that information accumulates in the class token layer by layer. Our study establishes a framework for systematic hierarchical analysis of vision model representations and highlights the potential of SAEs as a tool for probing semantic structure in deep networks.
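
To make the general recipe concrete, the sketch below shows one way to train a sparse autoencoder on class-token activations from a vision transformer layer, which is the kind of probe the abstract describes. This is not the authors' code: the SAE architecture (single hidden layer with ReLU and an L1 penalty), the hidden width, the learning rate, and the use of placeholder activations in place of real DINOv2 class-token outputs are all illustrative assumptions.

import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Single-hidden-layer SAE with ReLU codes; sparsity comes from an L1 penalty."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        codes = torch.relu(self.encoder(x))   # sparse feature activations
        recon = self.decoder(codes)           # reconstruction of the input
        return recon, codes

def train_sae(activations: torch.Tensor, d_hidden: int = 4096,
              l1_coeff: float = 1e-3, epochs: int = 10, lr: float = 1e-3):
    """activations: (num_images, d_model) class-token vectors from one layer."""
    sae = SparseAutoencoder(activations.shape[1], d_hidden)
    opt = torch.optim.Adam(sae.parameters(), lr=lr)
    for _ in range(epochs):
        recon, codes = sae(activations)
        loss = ((recon - activations) ** 2).mean() + l1_coeff * codes.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return sae

if __name__ == "__main__":
    # Placeholder tensor standing in for DINOv2 class-token activations,
    # which would normally be gathered with forward hooks on each block.
    fake_cls_tokens = torch.randn(1000, 768)   # 1000 images, ViT-B width
    sae = train_sae(fake_cls_tokens)
    _, codes = sae(fake_cls_tokens)
    print("fraction of active features:", (codes > 0).float().mean().item())

In a hierarchy analysis of the kind described above, the learned features would then be compared against ImageNet synsets, for example by checking whether a feature's most activating images share a common WordNet ancestor, and the same procedure would be repeated per layer to track how class-token structure changes with depth.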
