
Deep Hierarchical Encoder–Decoder Network for Image Captioning


Abstract:

Encoder-decoder models have been widely used in image captioning, and most of them are built on a single long short-term memory (LSTM) network. The capacity of such a single-layer network, in which the encoder and decoder are integrated together, is limited for a task as complex as image captioning. Moreover, how to effectively increase the "vertical depth" of the encoder-decoder remains unsolved. To address these problems, a novel deep hierarchical encoder-decoder network is proposed for image captioning, in which a deep hierarchical structure separates the functions of the encoder and the decoder. This model efficiently exploits the representational capacity of deep networks to fuse high-level visual and linguistic semantics when generating captions. Specifically, visual representations at the top levels of abstraction are considered simultaneously, and each of these levels is associated with one LSTM. The bottom-most LSTM serves as the encoder of the textual inputs, while the middle layer enhances the decoding ability of the top-most LSTM. Furthermore, by introducing a semantic-enhancement module for image features and a distribution-combination module for text features, several architectural variants of the model are constructed to explore the impacts of, and mutual interactions among, the visual representation, the textual representations, and the output of the middle LSTM layer. In particular, the framework is trained with a reinforcement learning method, using policy-gradient optimization to address the exposure bias between training and testing. Qualitative analyses illustrate how the model "translates" an image into a sentence, and further visualizations present the evolution of the hidden states of the different hierarchical LSTMs over time. Extensive experiments demonstrate that the model outperforms current state-of-the-art models on three benchmark datasets: Flickr8K, Flickr30K, and MSCOCO.
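The layered wiring described in the abstract — a bottom LSTM encoding the word sequence, a middle layer fusing that state with one level of visual features, and a top LSTM fusing the middle state with a second visual level before predicting the next word — can be sketched as follows. This is a minimal illustration only: simple tanh recurrent cells stand in for LSTMs, the dimensions, random weights, and greedy decoding loop are all assumptions, and the semantic-enhancement and distribution-combination modules are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

class SimpleCell:
    """Toy tanh recurrent cell standing in for an LSTM (illustration only)."""
    def __init__(self, in_dim, hid_dim):
        self.W = rng.normal(0, 0.1, (hid_dim, in_dim + hid_dim))
        self.b = np.zeros(hid_dim)
        self.hid_dim = hid_dim

    def step(self, x, h):
        # One recurrent step: new hidden state from input x and previous h.
        return np.tanh(self.W @ np.concatenate([x, h]) + self.b)

class HierarchicalDecoder:
    """Three-level stack mirroring the paper's hierarchy (all wiring assumed):
    bottom cell encodes words, middle cell fuses the bottom state with one
    level of visual features, top cell fuses the middle state with a second
    visual level and predicts the next word."""
    def __init__(self, vocab, emb=16, hid=32, vis=32):
        self.emb = rng.normal(0, 0.1, (vocab, emb))   # word embeddings
        self.bottom = SimpleCell(emb, hid)            # textual encoder
        self.middle = SimpleCell(hid + vis, hid)      # fuses visual level 1
        self.top = SimpleCell(hid + vis, hid)         # fuses visual level 2
        self.out = rng.normal(0, 0.1, (vocab, hid))   # output projection
        self.hid = hid

    def decode(self, v_mid, v_top, start=0, steps=5):
        # Greedy decoding: feed the previous word up through the hierarchy.
        h1 = h2 = h3 = np.zeros(self.hid)
        word, caption = start, []
        for _ in range(steps):
            h1 = self.bottom.step(self.emb[word], h1)
            h2 = self.middle.step(np.concatenate([h1, v_mid]), h2)
            h3 = self.top.step(np.concatenate([h2, v_top]), h3)
            word = int(np.argmax(self.out @ h3))      # most likely next word
            caption.append(word)
        return caption

dec = HierarchicalDecoder(vocab=10)
v_mid, v_top = rng.normal(size=32), rng.normal(size=32)
print(dec.decode(v_mid, v_top))  # a list of 5 token ids
```

In the actual model each cell would be an LSTM trained end-to-end (and fine-tuned with policy-gradient reinforcement learning); the sketch only shows how two levels of visual abstraction enter the stack at different depths.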
Published in: IEEE Transactions on Multimedia ( Volume: 21, Issue: 11, November 2019)
Page(s): 2942 - 2956
Date of Publication: 09 May 2019

I. Introduction

As one of the vision-language problems, image captioning is a challenging task in computer vision and machine learning that has attracted increasing attention from researchers [1]–[6]. The objective of image captioning is to generate a natural language description of a given image; it is essentially a translation between two disparate modalities of information. Compared with conventional computer vision tasks, image captioning is more difficult because it requires not only capturing the information contained in an image, but also relating the captured visual information semantically to the relevant language expressions.
