TY - JOUR
T1 - From Deterministic to Generative: Multimodal Stochastic RNNs for Video Captioning
AU - Song, Jingkuan
AU - Guo, Yuyu
AU - Gao, Lianli
AU - Li, Xuelong
AU - Hanjalic, Alan
AU - Shen, Heng Tao
N1 - Accepted Author Manuscript
PY - 2018
Y1 - 2018
AB - Video captioning is, in essence, a complex natural process affected by various uncertainties stemming from video content, subjective judgment, and so on. In this paper, we build on recent progress in using the encoder-decoder framework for video captioning and address what we find to be a critical deficiency of existing methods: most decoders propagate deterministic hidden states, and such complex uncertainty cannot be modeled efficiently by deterministic models. We propose a generative approach, referred to as the multimodal stochastic recurrent neural network (MS-RNN), which models the uncertainty observed in the data using latent stochastic variables. As a result, MS-RNN improves the performance of video captioning and can generate multiple sentences to describe a video under different random factors. Specifically, a multimodal long short-term memory (LSTM) is first proposed to interact with both visual and textual features to capture a high-level representation. Then, a backward stochastic LSTM is proposed to support uncertainty propagation by introducing latent variables. Experimental results on two challenging data sets, Microsoft Video Description (MSVD) and Microsoft Research Video-to-Text (MSR-VTT), show that the proposed MS-RNN approach outperforms state-of-the-art video captioning methods.
KW - Recurrent neural network (RNN)
KW - uncertainty
KW - video captioning
UR - http://www.scopus.com/inward/record.url?scp=85051822079&partnerID=8YFLogxK
UR - http://resolver.tudelft.nl/uuid:953b6795-560d-470e-809e-0dda843ecc68
DO - 10.1109/TNNLS.2018.2851077
M3 - Article
AN - SCOPUS:85051822079
SN - 2162-237X
SP - 1
EP - 12
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
ER -