Poster

Adaptive Random Feature Regularization on Fine-tuning Deep Neural Networks

Shin'ya Yamaguchi · Sekitoshi Kanai · Kazuki Adachi · Daiki Chijiwa

Arch 4A-E Poster #384
Poster Session: Fri 21 Jun, 10:30 a.m. to noon PDT

Abstract:

While fine-tuning is the de facto standard method for training deep neural networks, it still suffers from overfitting when the target dataset is small. Previous methods improve fine-tuning performance by maintaining knowledge of the source datasets or by introducing regularization terms such as contrastive loss. However, these methods require auxiliary source information (e.g., source labels or datasets) or heavy additional computation. In this paper, we propose a simple method called adaptive random feature regularization (AdaRand). AdaRand helps the feature extractors of training models adaptively change the distribution of feature vectors for downstream classification tasks, without auxiliary source information and at reasonable computation cost. To this end, AdaRand minimizes the gap between feature vectors and random reference vectors sampled from class-conditional Gaussian distributions. Furthermore, AdaRand dynamically updates the conditional distributions so that they follow the currently updated feature extractor and balance the distances between classes in the feature space. Our experiments show that AdaRand outperforms other fine-tuning regularization methods that require auxiliary source information and heavy computation costs.
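To make the mechanism described above concrete, the following is a minimal PyTorch sketch of the two ingredients the abstract names: a regularizer that pulls features toward random reference vectors drawn from class-conditional Gaussians, and a running update of the class-conditional means so the distributions track the evolving feature extractor. The class name `AdaRandSketch`, the unit-variance Gaussians, the EMA momentum, and the squared-L2 gap are all illustrative assumptions not taken from the paper, and the abstract's inter-class distance balancing is omitted here.

```python
import torch
import torch.nn.functional as F


class AdaRandSketch:
    """Illustrative sketch of adaptive random feature regularization.

    Assumptions not stated in the abstract: unit-variance Gaussians,
    an EMA update for the class means, and a squared-L2 "gap". The
    paper's actual formulation may differ.
    """

    def __init__(self, num_classes: int, feat_dim: int, momentum: float = 0.9):
        # One Gaussian mean per class (covariance fixed to identity here).
        self.means = torch.zeros(num_classes, feat_dim)
        self.momentum = momentum

    def loss(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Sample a random reference vector from N(mu_y, I) for each
        # example's class y and penalize the gap to the feature vector.
        means = self.means.to(features.device)
        refs = means[labels] + torch.randn_like(features)
        return F.mse_loss(features, refs.detach())

    @torch.no_grad()
    def update(self, features: torch.Tensor, labels: torch.Tensor) -> None:
        # Move each class mean toward the current batch features so the
        # conditional distribution follows the evolving feature extractor.
        feats, labels = features.detach().cpu(), labels.cpu()
        for c in labels.unique():
            batch_mean = feats[labels == c].mean(dim=0)
            self.means[c] = (self.momentum * self.means[c]
                             + (1 - self.momentum) * batch_mean)
```

In a fine-tuning loop, the total objective would then presumably combine the task loss with a weighted regularizer, e.g. `loss = F.cross_entropy(logits, labels) + lam * reg.loss(features, labels)`, with `reg.update(features, labels)` called once per step.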
