

Poster

Querying as Prompt: Parameter-Efficient Learning for Multimodal Language Model

Tian Liang · Jing Huang · Ming Kong · Luyuan Chen · Qiang Zhu

Arch 4A-E Poster #261
[ Paper PDF ]
Fri 21 Jun 5 p.m. PDT — 6:30 p.m. PDT

Abstract:

Recent advancements in language models pre-trained on large-scale corpora have significantly propelled developments in the NLP domain and advanced progress in multimodal tasks. In this paper, we propose a Parameter-Efficient multimodal language model learning strategy, named QaP (Querying as Prompt). Its core innovation is a novel modality-bridging method that allows a set of modality-specific queries to be input as soft prompts into a frozen pre-trained language model. Specifically, we introduce an efficient Text-Conditioned Resampler that is easy to incorporate into language models and enables adaptive injection of text-related multimodal information at different levels of the model through query learning. This approach effectively bridges multimodal information to the language model while fully leveraging its token fusion and representation potential. We validated our method on four datasets across three distinct multimodal tasks. The results demonstrate that our QaP multimodal language model achieves state-of-the-art performance on various tasks while training only 4.6% of the parameters.
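The core idea of the abstract — learnable, modality-specific queries that cross-attend to multimodal features under text conditioning, with the resulting tokens fed as soft prompts to a frozen language model — can be sketched roughly as follows. This is a minimal illustrative sketch in plain numpy, not the authors' implementation; the class name `TextConditionedResampler`, the additive text conditioning, and all dimensions are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

class TextConditionedResampler:
    """Hypothetical sketch of the paper's resampler idea: a small set of
    learnable queries attends over modality features, conditioned on a
    pooled text representation. The resampled tokens would then be
    prepended to the frozen language model's input as soft prompts."""

    def __init__(self, num_queries, d_model, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(d_model)
        self.queries = rng.normal(0.0, s, (num_queries, d_model))  # learnable
        self.W_q = rng.normal(0.0, s, (d_model, d_model))
        self.W_k = rng.normal(0.0, s, (d_model, d_model))
        self.W_v = rng.normal(0.0, s, (d_model, d_model))
        self.d = d_model

    def __call__(self, modality_feats, text_summary):
        # Condition the queries on the text summary (one simple choice:
        # additive conditioning), then cross-attend to modality features.
        q = (self.queries + text_summary) @ self.W_q
        k = modality_feats @ self.W_k
        v = modality_feats @ self.W_v
        attn = softmax(q @ k.T / np.sqrt(self.d))
        return attn @ v  # (num_queries, d_model): the soft-prompt tokens

# Usage: resample 32 visual tokens into 8 soft-prompt tokens.
rng = np.random.default_rng(1)
resampler = TextConditionedResampler(num_queries=8, d_model=16)
visual_tokens = rng.normal(size=(32, 16))
text_summary = rng.normal(size=(16,))
prompts = resampler(visual_tokens, text_summary)
print(prompts.shape)  # (8, 16)
```

Because only the queries and projection matrices would be trained while the language model stays frozen, the trainable-parameter count remains a small fraction of the full model, consistent with the 4.6% figure reported in the abstract.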
