Evaluating automatically generated phoneme captions for images

Justin van der Hout, Zoltán D’Haese, Mark Hasegawa-Johnson, Odette Scharenborg

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

3 Citations (Scopus)
30 Downloads (Pure)

Abstract

Image2Speech is the relatively new task of generating a spoken description of an image. This paper investigates how this task should be evaluated. To this end, an Image2Speech system was first implemented that generates image captions consisting of phoneme sequences; it outperformed the original Image2Speech system on the Flickr8k corpus. These phoneme captions were then converted into sentences of words, and human evaluators rated the captions on how well they described the image. Finally, the scores of several objective metrics were correlated with these human ratings. Although BLEU4 does not correlate perfectly with human ratings, it obtained the highest correlation among the investigated metrics and is thus the best currently existing metric for the Image2Speech task. Current metrics are limited by the assumption that their input consists of words; a more appropriate metric for the Image2Speech task should instead take parts of words, i.e. phonemes, as input.
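
The evaluation pipeline described in the abstract can be illustrated with a small sketch. The snippet below is not the authors' code: it uses NLTK's sentence_bleu and SciPy's spearmanr as illustrative tool choices, hypothetical phoneme captions and human ratings, and Spearman rank correlation as one possible way to relate metric scores to human judgments. It shows BLEU-4 computed directly over phoneme tokens, which is the setting the paper's conclusion is concerned with.

```python
# Minimal sketch (assumed tooling, hypothetical data): BLEU-4 over phoneme
# tokens, then correlation of per-caption metric scores with human ratings.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from scipy.stats import spearmanr

# Hypothetical phoneme-level captions, with each phoneme treated as a token.
references = [["DH", "AH", "D", "AO", "G", "R", "AH", "N", "Z"]]          # "the dog runs"
hypothesis = ["AH", "D", "AO", "G", "IH", "Z", "R", "AH", "N", "IH", "NG"]  # "a dog is running"

# BLEU-4: uniform weights over 1- to 4-gram precisions; smoothing avoids
# zero scores on short sequences.
bleu4 = sentence_bleu(references, hypothesis,
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU-4 over phoneme tokens: {bleu4:.3f}")

# Hypothetical per-caption metric scores and human goodness ratings; the
# strength of their correlation is what the paper uses to compare metrics.
metric_scores = [0.41, 0.12, 0.33, 0.27, 0.55]
human_ratings = [4.0, 1.5, 3.0, 2.5, 4.5]
rho, p = spearmanr(metric_scores, human_ratings)
print(f"Correlation with human ratings: rho={rho:.2f} (p={p:.3f})")
```

Treating phonemes as the BLEU tokens highlights the mismatch the abstract points out: n-gram metrics designed for words are being applied to sub-word units.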
Original language: English
Title of host publication: Proceedings of Interspeech 2020
Publisher: ISCA
Pages: 2317-2321
Number of pages: 5
DOIs
Publication status: Published - 2020
Event: INTERSPEECH 2020 - Shanghai, China
Duration: 25 Oct 2020 - 29 Oct 2020

Publication series

Name: Interspeech 2020
Publisher: ISCA
ISSN (Print): 1990-9772

Conference

Conference: INTERSPEECH 2020
Country/Territory: China
City: Shanghai
Period: 25/10/20 - 29/10/20

Keywords

  • Image captioning
  • Speech
  • Unwritten languages
