

Poster

Would Deep Generative Models Amplify Bias in Future Models?

Tianwei Chen · Yusuke Hirota · Mayu Otani · Noa Garcia · Yuta Nakashima

Arch 4A-E Poster #114
Thu 20 Jun 10:30 a.m. PDT — noon PDT

Abstract:

This paper investigates the impact of recent deep generative models, such as Stable Diffusion, on potential biases in upcoming models. As the internet witnesses an increasing influx of images generated by these models, concerns arise about the inherent biases that may accompany them, potentially leading to the dissemination of harmful content. The primary question explored in this study is whether a detrimental feedback loop, resulting in the amplification of bias, would ensue if these generated images were used as training data for future models. To address this concern, we conduct simulations by progressively substituting the original images in the COCO and CC3M datasets with images generated by Stable Diffusion. These modified datasets are then used to train image captioning and CLIP models, and the resulting models are evaluated with both quality and bias metrics. Contrary to expectations, our findings indicate that introducing generated images during training does not uniformly amplify bias. Instead, we observe instances of bias mitigation on specific tasks. Our exploration extends to identifying factors that influence bias changes, such as characteristics of the image generation process (e.g., producing blurry faces) and pre-existing biases in the original dataset.
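To make the substitution procedure concrete, below is a minimal sketch in Python of how original images could be progressively replaced with Stable Diffusion generations conditioned on their captions. It assumes the Hugging Face diffusers library; the model checkpoint, data layout, and function names are illustrative assumptions, not the authors' released code.

```python
"""
Minimal sketch (not the authors' code) of the dataset-substitution simulation
described in the abstract: a fraction of original (image, caption) pairs is
replaced with images regenerated from their captions via Stable Diffusion.
"""
import random
from pathlib import Path

import torch
from diffusers import StableDiffusionPipeline

# Illustrative checkpoint; any Stable Diffusion model id would work here.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")


def substitute_images(samples, ratio, out_dir="generated"):
    """Replace a `ratio` fraction of samples' images with caption-conditioned generations.

    `samples` is a list of dicts with "image_path" and "caption" keys
    (a simplified stand-in for COCO / CC3M annotations). Returns a new
    list in which the chosen samples point to the generated images.
    """
    Path(out_dir).mkdir(exist_ok=True)
    chosen = set(random.sample(range(len(samples)), int(ratio * len(samples))))
    mixed = []
    for i, sample in enumerate(samples):
        if i in chosen:
            # Generate a synthetic replacement image from the original caption.
            image = pipe(sample["caption"]).images[0]
            new_path = str(Path(out_dir) / f"{i}.png")
            image.save(new_path)
            mixed.append({"image_path": new_path, "caption": sample["caption"]})
        else:
            mixed.append(sample)
    return mixed


# Example usage: build training sets with increasing proportions of generated
# images, which would then be used to train captioning / CLIP models and to
# compare quality and bias metrics against the all-original baseline.
# for ratio in (0.0, 0.25, 0.5, 0.75, 1.0):
#     train_set = substitute_images(coco_samples, ratio)
```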
