S2IGAN: Speech-to-Image Generation via Adversarial Learning

Xinsheng Wang, Tingting Qiao, Jihua Zhu, Alan Hanjalic, Odette Scharenborg

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

4 Citations (Scopus)
35 Downloads (Pure)


An estimated half of the world's languages do not have a written form, making it impossible for these languages to benefit from any existing text-based technologies. In this paper, a speech-to-image generation (S2IG) framework is proposed which translates speech descriptions into photo-realistic images without using any text information, thus allowing unwritten languages to potentially benefit from this technology. The proposed S2IG framework, named S2IGAN, consists of a speech embedding network (SEN) and a relation-supervised densely-stacked generative model (RDG). SEN learns the speech embedding under the supervision of the corresponding visual information. Conditioned on the speech embedding produced by SEN, the proposed RDG synthesizes images that are semantically consistent with the corresponding speech descriptions. Extensive experiments on the CUB and Oxford-102 datasets demonstrate the effectiveness of the proposed S2IGAN in synthesizing high-quality, semantically consistent images from the speech signal, yielding good performance and a solid baseline for the S2IG task.
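To make the two-stage structure described in the abstract concrete, the following is a minimal conceptual sketch of an S2IG-style pipeline: a speech embedding network maps a sequence of acoustic frames to a fixed-size embedding, and a generator conditions on that embedding (plus noise) to produce an image. All class names, dimensions, and layer choices here are illustrative placeholders, not the actual S2IGAN architecture or training procedure (which involves adversarial and relation-supervised losses).

```python
import numpy as np

rng = np.random.default_rng(0)

class SpeechEmbeddingNet:
    """Stand-in for SEN: maps a variable-length sequence of speech
    frames to a fixed-size embedding (here, mean-pooling + linear).
    The real SEN is trained with visual supervision."""
    def __init__(self, feat_dim=40, embed_dim=128):
        self.W = rng.standard_normal((feat_dim, embed_dim)) * 0.01

    def __call__(self, frames):          # frames: (T, feat_dim)
        pooled = frames.mean(axis=0)     # temporal mean-pooling
        return pooled @ self.W           # (embed_dim,)

class Generator:
    """Stand-in for the generator in RDG: maps a speech embedding
    plus a noise vector to an image tensor. The real RDG stacks
    several generators at increasing resolutions."""
    def __init__(self, embed_dim=128, noise_dim=100, img_size=64):
        self.W = rng.standard_normal(
            (embed_dim + noise_dim, img_size * img_size * 3)) * 0.01
        self.img_size = img_size

    def __call__(self, embedding, noise):
        x = np.concatenate([embedding, noise])
        img = np.tanh(x @ self.W)        # pixel values in [-1, 1]
        return img.reshape(self.img_size, self.img_size, 3)

# Forward pass: speech frames -> embedding -> conditioned image.
sen = SpeechEmbeddingNet()
gen = Generator()
speech = rng.standard_normal((200, 40))   # 200 frames of 40-dim features
image = gen(sen(speech), rng.standard_normal(100))
print(image.shape)                        # (64, 64, 3)
```

In the actual system, both components are deep networks trained adversarially, with the discriminator supervising semantic consistency between the generated image and the speech description; the sketch above only shows the data flow.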
Original language: English
Title of host publication: Proceedings of Interspeech 2020
Pages: 2292 - 2296
Number of pages: 5
Publication status: Published - 2020
Event: INTERSPEECH 2020 - Shanghai, China
Duration: 25 Oct 2020 - 29 Oct 2020

Publication series

Name: Interspeech 2020
ISSN (Print): 1990-9772


Conference: INTERSPEECH 2020


Keywords

  • Adversarial learning
  • Multimodal modelling
  • Speech embedding
  • Speech-to-image generation


