COMIC: Toward A Compact Image Captioning Model With Attention
Document Type
Article
Publication Date
1-1-2019
Abstract
Recent works in image captioning have shown very promising raw performance. However, we realize that most of these encoder-decoder style networks with attention do not scale naturally to large vocabulary sizes, making them difficult to deploy on embedded systems with limited hardware resources. This is because the sizes of the word and output embedding matrices grow proportionally with the size of the vocabulary, adversely affecting the compactness of these networks. To address this limitation, this paper tackles the compactness of image captioning models, a problem that has hitherto been unexplored. We show that our proposed model, named COMIC for COMpact Image Captioning, achieves results comparable to state-of-the-art approaches on five common evaluation metrics on both the MS-COCO and InstaPIC-1.1M datasets, despite having an embedded vocabulary size that is 39x-99x smaller.
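As an illustration of the scaling issue described in the abstract, the following minimal sketch (not from the paper; the hidden size and vocabulary sizes are hypothetical) counts the parameters contributed by the input word-embedding and output-projection matrices of a typical encoder-decoder captioner, showing that this count grows linearly with vocabulary size.

```python
# Illustrative sketch: parameters in the word-embedding and output-projection
# matrices of an encoder-decoder captioner grow linearly with vocabulary size V.
# The hidden size and the example vocabulary sizes below are assumptions.

def embedding_params(vocab_size: int, hidden_size: int = 512) -> int:
    """Parameters in the V x d input embedding plus the d x V output projection."""
    word_embedding = vocab_size * hidden_size     # input lookup table (V x d)
    output_projection = hidden_size * vocab_size  # softmax projection (d x V)
    return word_embedding + output_projection

for vocab in (1_000, 10_000, 40_000):
    print(f"vocab={vocab:>6}: {embedding_params(vocab):,} embedding parameters")
```

Running this shows the embedding-related parameter count rising in direct proportion to the vocabulary, which is the growth that a compact captioning model such as COMIC aims to curb.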
Keywords
deep compression network, deep learning, image captioning
Divisions
fsktm,fac_eng
Funders
UM Frontier Research Grant FG002-17AFR from the University of Malaya
Publication Title
IEEE Transactions on Multimedia
Volume
21
Issue
10
Publisher
Institute of Electrical and Electronics Engineers (IEEE)