

Poster

MMCert: Provable Defense against Adversarial Attacks to Multi-modal Models

Yanting Wang · Hongye Fu · Wei Zou · Jinyuan Jia

Arch 4A-E Poster #42
Fri 21 Jun 5 p.m. PDT — 6:30 p.m. PDT

Abstract:

Unlike a unimodal model, whose input comes from a single modality, a multi-modal model takes input (called a multi-modal input) from multiple modalities such as images, 3D points, audio, and text. Many existing studies show that, like unimodal models, multi-modal models are vulnerable to adversarial perturbation: an attacker can add small perturbations to all modalities of a multi-modal input so that the model makes an incorrect prediction for it. Existing certified defenses are mainly designed for unimodal models, and our experimental results show that they achieve sub-optimal certified robustness guarantees when extended to multi-modal models. In this work, we aim to bridge this gap. In particular, we propose MMCert, the first certified defense against adversarial attacks on multi-modal models. We derive a lower bound on the performance of MMCert under arbitrary adversarial attacks with bounded perturbations to both modalities (e.g., in the context of autonomous driving, we bound the number of changed pixels in both the RGB image and the depth image). We evaluate MMCert on two benchmark datasets: one for the multi-modal road segmentation task and the other for the multi-modal emotion recognition task. Moreover, we compare MMCert with a state-of-the-art certified defense extended from unimodal models. Our experimental results show that MMCert outperforms the baseline.
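To make the threat model described above concrete, the hedged Python sketch below counts modified pixels in each modality and checks them against per-modality budgets. The function names, array shapes, and budget values (`r_img`, `r_depth`) are illustrative assumptions, not taken from the paper, and the snippet illustrates only the perturbation bound, not MMCert's certification procedure.

```python
import numpy as np

# Illustrative sketch of the per-modality perturbation bound: the attacker may
# change at most r_img pixels of the RGB image and at most r_depth pixels of
# the depth image. All names and budgets here are hypothetical.

def num_changed_pixels(clean, perturbed):
    """Count pixels that differ between a clean and a perturbed image.

    A pixel counts as changed if any of its channels differs.
    """
    diff = clean != perturbed
    if diff.ndim == 3:                    # H x W x C -> collapse channels
        diff = diff.any(axis=-1)
    return int(diff.sum())

def within_budget(clean_rgb, adv_rgb, clean_depth, adv_depth,
                  r_img=20, r_depth=20):
    """Check that a perturbed multi-modal input respects both per-modality bounds."""
    return (num_changed_pixels(clean_rgb, adv_rgb) <= r_img and
            num_changed_pixels(clean_depth, adv_depth) <= r_depth)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rgb = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
    depth = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

    # Perturb 10 random pixel locations in each modality by flipping the
    # lowest bit, so every touched pixel is guaranteed to change.
    adv_rgb, adv_depth = rgb.copy(), depth.copy()
    idx = rng.choice(64 * 64, size=10, replace=False)
    adv_rgb.reshape(-1, 3)[idx] ^= 1
    adv_depth.reshape(-1)[idx] ^= 1

    print(within_budget(rgb, adv_rgb, depth, adv_depth))  # True under r=20 budgets
```

A certified defense in this setting would guarantee a lower bound on task performance (e.g., segmentation accuracy) over every perturbed input for which a check like `within_budget` holds.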
