Multimodal Data Augmentation for Image Captioning using Diffusion Models

Published in ACMMM LGM3A, 2023

Recommended citation: Changrong Xiao, Sean Xin Xu, and Kunpeng Zhang. 2023. Multimodal Data Augmentation for Image Captioning using Diffusion Models. In Proceedings of the 1st Workshop on Large Generative Models Meet Multimodal Applications (LGM3A 2023). Association for Computing Machinery, New York, NY, USA, 23–33. https://doi.org/10.1145/3607827.3616839 https://dl.acm.org/doi/10.1145/3607827.3616839

Abstract

Image captioning, an important vision-language task, often requires a tremendous number of finely labeled image-caption pairs to learn the underlying alignment between images and texts. In this paper, we propose a multimodal data augmentation method that leverages a recent text-to-image model, Stable Diffusion, to expand the training set with high-quality generated image-caption pairs. Extensive experiments on the MS COCO dataset demonstrate the advantages of our approach over several benchmark methods, with a particularly significant boost when fewer training instances are available. In addition, models trained on our augmented datasets outperform prior unpaired image captioning methods by a large margin. Finally, further gains in training efficiency and effectiveness can be obtained by filtering the generated data based on quality assessment.
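
Below is a minimal sketch of the kind of pipeline the abstract describes: captions are fed to Stable Diffusion to synthesize new images, and each generated pair is kept only if an image-text similarity score (here, CLIP cosine similarity) exceeds a threshold. The model IDs, the `augment` function, and the threshold value are illustrative assumptions, not the exact configuration or quality metric used in the paper.

```python
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"

# Text-to-image generator (Stable Diffusion) and a CLIP model used as a
# stand-in quality scorer for generated image-caption pairs.
sd_pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"
).to(device)
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")


def augment(captions, clip_threshold=0.3):
    """Generate one synthetic image per caption and keep only the pairs
    whose CLIP image-text similarity passes a (hypothetical) threshold."""
    augmented_pairs = []
    for caption in captions:
        # Synthesize an image conditioned on the caption.
        image = sd_pipe(caption).images[0]  # PIL.Image

        # Score the generated pair with CLIP cosine similarity.
        inputs = clip_processor(
            text=[caption], images=image, return_tensors="pt", padding=True
        ).to(device)
        with torch.no_grad():
            out = clip_model(**inputs)
            img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
            txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
            score = (img * txt).sum().item()

        # Quality-based filtering: discard low-alignment pairs.
        if score >= clip_threshold:
            augmented_pairs.append((image, caption))
    return augmented_pairs
```

In practice, the filtered pairs would simply be appended to the original training set before training the captioning model; the paper's results suggest that such filtering improves both training efficiency and final captioning quality.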

Download paper here