Poster

Model Inversion Robustness: Can Transfer Learning Help?

Sy-Tuyen Ho · Koh Jun Hao · Keshigeyan Chandrasegaran · Ngoc-Bao Nguyen · Ngai-Man Cheung

Arch 4A-E Poster #248
[ Paper PDF ]
Thu 20 Jun 10:30 a.m. PDT — noon PDT

Abstract:

Model Inversion (MI) attacks aim to reconstruct private training data by abusing access to machine learning models. Contemporary MI attacks have achieved impressive attack performance, posing serious threats to privacy. Meanwhile, all existing MI defense methods rely on regularization that is in direct conflict with the training objective, resulting in noticeable degradation of model utility. In this work, we take a different perspective and propose a novel, simple method based on transfer learning (TL) to render models MI-robust. In particular, by leveraging TL, we limit the number of layers that encode sensitive information from the private training dataset, thereby degrading the performance of MI attacks. We conduct an analysis using Fisher Information to justify our method. Our defense is remarkably simple to implement. Without bells and whistles, we show in extensive experiments that our method achieves state-of-the-art (SOTA) MI robustness. Our code, pre-trained models, demo, and inverted data are included in the Supplementary Material.
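A minimal PyTorch sketch of the core idea: pre-train a backbone on public data, then fine-tune only the last few layers on the private dataset, so that fewer layers ever encode sensitive information. The ResNet-18 backbone, the choice to unfreeze only the final block and classifier head, and the names num_private_classes and private_loader are illustrative assumptions, not the authors' exact setup.

import torch
import torch.nn as nn
from torchvision import models

# Backbone pre-trained on a public dataset (ImageNet weights stand in here
# for the public pre-training stage the abstract describes).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze every parameter so the early layers are never updated on,
# and hence never encode, the private training data.
for p in model.parameters():
    p.requires_grad = False

# Unfreeze only the final residual block and replace the classifier head;
# these are the only parts fine-tuned on the private dataset.
model.layer4.requires_grad_(True)
num_private_classes = 100  # hypothetical size of the private task
model.fc = nn.Linear(model.fc.in_features, num_private_classes)

# Optimize only the trainable (unfrozen) parameters.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),
    lr=0.01,
    momentum=0.9,
)
criterion = nn.CrossEntropyLoss()

# Standard fine-tuning loop over the private data (private_loader assumed):
# for x, y in private_loader:
#     optimizer.zero_grad()
#     criterion(model(x), y).backward()
#     optimizer.step()

The sketch only captures the training-side mechanics; where exactly to split frozen and fine-tuned layers is the question the paper's Fisher Information analysis addresses.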
