S2IGAN: Speech-to-Image Generation via Adversarial Learning

Xinsheng Wang, Tingting Qiao, Jihua Zhu, Alan Hanjalic, Odette Scharenborg

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

3 Citations (Scopus)
33 Downloads (Pure)

Abstract

An estimated half of the world’s languages do not have a written form, making it impossible for these languages to benefit from any existing text-based technologies. In this paper, a speech-to-image generation (S2IG) framework is proposed which translates speech descriptions into photo-realistic images without using any text information, thus allowing unwritten languages to potentially benefit from this technology. The proposed S2IG framework, named S2IGAN, consists of a speech embedding network (SEN) and a relation-supervised densely-stacked generative model (RDG). SEN learns the speech embedding under the supervision of the corresponding visual information. Conditioned on the speech embedding produced by SEN, the proposed RDG synthesizes images that are semantically consistent with the corresponding speech descriptions. Extensive experiments on the CUB and Oxford-102 datasets demonstrate the effectiveness of the proposed S2IGAN in synthesizing high-quality, semantically consistent images from the speech signal, yielding good performance and a solid baseline for the S2IG task.
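To make the two-component pipeline in the abstract concrete, below is a minimal PyTorch sketch of a speech encoder feeding a stacked, upsampling generator. All module names, layer choices, and dimensions here are illustrative assumptions; this is not the authors' S2IGAN implementation, and it omits the relation supervision and discriminators described in the paper.

```python
# Minimal sketch of a speech-to-image pipeline: a speech embedding network
# (SEN-like) followed by a densely-stacked, coarse-to-fine generator
# (RDG-like). Shapes and layers are assumptions for illustration only.
import torch
import torch.nn as nn

class SpeechEmbeddingNetwork(nn.Module):
    """Maps a speech feature sequence (e.g. log-mel frames) to a fixed-size
    embedding; in the paper this embedding is supervised with visual info."""
    def __init__(self, n_mels=40, embed_dim=256):
        super().__init__()
        self.rnn = nn.GRU(n_mels, embed_dim, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * embed_dim, embed_dim)

    def forward(self, mel):                  # mel: (B, T, n_mels)
        _, h = self.rnn(mel)                 # h: (2, B, embed_dim)
        h = torch.cat([h[0], h[1]], dim=-1)  # join forward/backward states
        return self.proj(h)                  # (B, embed_dim)

class StackedGenerator(nn.Module):
    """Stack of upsampling stages conditioned on the speech embedding and a
    noise vector, producing an image coarse-to-fine (4x4 up to 64x64)."""
    def __init__(self, embed_dim=256, z_dim=100):
        super().__init__()
        self.fc = nn.Linear(embed_dim + z_dim, 128 * 4 * 4)
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Upsample(scale_factor=2),
                          nn.Conv2d(128, 128, 3, padding=1),
                          nn.BatchNorm2d(128), nn.ReLU())
            for _ in range(4)                # 4 doublings: 4x4 -> 64x64
        ])
        self.to_rgb = nn.Conv2d(128, 3, 3, padding=1)

    def forward(self, speech_emb, z):
        x = self.fc(torch.cat([speech_emb, z], dim=-1)).view(-1, 128, 4, 4)
        for stage in self.stages:
            x = stage(x)
        return torch.tanh(self.to_rgb(x))    # (B, 3, 64, 64) in [-1, 1]

# Usage: synthesize an image batch from a batch of speech features.
sen, gen = SpeechEmbeddingNetwork(), StackedGenerator()
mel = torch.randn(2, 120, 40)                # 2 utterances, 120 frames each
img = gen(sen(mel), torch.randn(2, 100))
print(img.shape)                             # torch.Size([2, 3, 64, 64])
```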
Original language: English
Title of host publication: Proceedings of Interspeech 2020
Publisher: ISCA
Pages: 2292-2296
Number of pages: 5
DOIs
Publication status: Published - 2020
Event: INTERSPEECH 2020 - Shanghai, China
Duration: 25 Oct 2020 - 29 Oct 2020

Publication series

Name: Interspeech 2020
Publisher: ISCA
ISSN (Print): 1990-9772

Conference

Conference: INTERSPEECH 2020
Country/Territory: China
City: Shanghai
Period: 25/10/20 - 29/10/20

Keywords

  • Adversarial learning
  • Multimodal modelling
  • Speech embedding
  • Speech-to-image generation
