Train-Once-for-All Personalization

Hong-You Chen · Yandong Li · Yin Cui · Mingda Zhang · Wei-Lun Chao · Li Zhang

West Building Exhibit Halls ABC 342
Wed 21 Jun 10:30 a.m. PDT — noon PDT


We study the problem of how to train a “personalization-friendly” model such that, given only a task description, the model can be adapted to different end-users’ needs, e.g., for accurately classifying different subsets of objects. One baseline approach is to train a “generic” model for classifying a wide range of objects, followed by class selection. In our experiments, however, we found this suboptimal, perhaps because the model’s weights are kept frozen without being personalized. To address this drawback, we propose Train-once-for-All PERsonalization (TAPER), a framework that is trained just once and can later customize a model for different end-users given their task descriptions. TAPER learns a set of “basis” models and a mixer predictor, such that given the task description, the weights (not the predictions!) of the basis models can be combined on the fly into a single “personalized” model. Via extensive experiments on multiple recognition tasks, we show that TAPER consistently outperforms the baseline methods, achieving higher personalized accuracy. Moreover, we show that TAPER can synthesize a much smaller model that achieves performance comparable to a huge generic model, making it “deployment-friendly” for resource-limited end devices. Interestingly, even without end-users’ task descriptions, TAPER can still be specialized to the deployed context based on its past predictions, making it even more “personalization-friendly”.
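The core idea of combining the *weights* of basis models, rather than their predictions, can be illustrated with a minimal sketch. All names here are illustrative, not the authors' code: we assume K basis models sharing one architecture, and a mixer that maps a task description to K coefficients; the personalized model's weights are then the coefficient-weighted sum of the basis weights.

```python
import math

def softmax(xs):
    # Normalize mixer logits into coefficients that sum to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def mix_weights(basis_weights, coeffs):
    """basis_weights: list of K flat weight vectors (lists of floats).
    Returns one personalized weight vector: sum_k coeffs[k] * W_k."""
    dim = len(basis_weights[0])
    return [sum(c * w[i] for c, w in zip(coeffs, basis_weights))
            for i in range(dim)]

# Toy example: K = 3 basis models, each with 4 parameters.
basis = [[1.0, 0.0, 0.0, 0.0],
         [0.0, 1.0, 0.0, 0.0],
         [0.0, 0.0, 1.0, 1.0]]

# In TAPER the mixer predictor would produce these logits from the
# end-user's task description; here they are hard-coded for illustration.
coeffs = softmax([2.0, 0.0, 0.0])
personalized = mix_weights(basis, coeffs)
```

Because mixing happens in weight space, only the single synthesized model needs to be deployed and run at inference time, rather than an ensemble of K models whose predictions would have to be averaged.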
