
Paper in Workshop: The 4th Explainable AI for Computer Vision (XAI4CV) Workshop

Disentangling Visual Transformers: Patch-level Interpretability for Image Classification

Guillaume Jeanneret · Loïc Simon · Frederic Jurie


Abstract:

Visual transformers have achieved remarkable performance in image classification tasks, but this gain has come at the cost of interpretability. One of the main obstacles to interpreting transformers is the self-attention mechanism, which mixes visual information across the whole image in a complex way. In this paper, we propose the Hindered Transformer (HiT), a novel interpretable-by-design architecture inspired by visual transformers. Our architecture rethinks the design of transformers to better disentangle patch influences at the classification stage. Ultimately, HiT can be interpreted as a linear combination of patch-level information. We show that the interpretability advantages of our approach come with a reasonable trade-off in performance, making it an attractive alternative for applications where interpretability is paramount.
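To make the "linear combination of patch-level information" idea concrete, here is a minimal sketch of a classification head whose logits are, by construction, a sum of per-patch linear scores, so each patch's additive contribution to each class is exact. The class name LinearPatchHead, the tensor shapes, and the sum-over-patches design are illustrative assumptions, not the paper's actual HiT architecture.

```python
# Hypothetical sketch (not the authors' released code): if a model's logits
# are a linear combination of per-patch features, patch-level attributions
# fall out directly from the head itself.
import torch
import torch.nn as nn

class LinearPatchHead(nn.Module):
    """Classification head that keeps patch contributions disentangled.

    Assumes `patch_feats` has shape (batch, num_patches, dim); the logit for
    each class is the sum over patches of a per-patch linear score.
    """
    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.score = nn.Linear(dim, num_classes)

    def forward(self, patch_feats: torch.Tensor):
        per_patch_logits = self.score(patch_feats)  # (B, N, C)
        logits = per_patch_logits.sum(dim=1)        # (B, C)
        return logits, per_patch_logits             # second output: per-patch attributions

# Usage: per_patch_logits[b, :, c] gives each patch's additive contribution
# to class c's logit, so the explanation is exact by construction.
feats = torch.randn(2, 196, 768)  # e.g. 14x14 patches at ViT-Base feature width
head = LinearPatchHead(768, 1000)
logits, contrib = head(feats)
assert torch.allclose(logits, contrib.sum(dim=1), atol=1e-5)
```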
