Learning fine-grained semantics in spoken language using visual grounding

Xinsheng Wang, Tian Tian, Jihua Zhu, Odette Scharenborg

Research output: Conference contribution (chapter in conference proceedings), scientific, peer-reviewed

1 Citation (Scopus)
27 Downloads (Pure)


In the case of unwritten languages, acoustic models cannot be trained in the standard way, i.e., using speech and textual transcriptions. Recently, several methods have been proposed to learn speech representations using images, i.e., using visual grounding. Existing studies have focused on scene images. Here, we investigate whether fine-grained semantic information, reflecting the relationship between attributes and objects, can be learned from spoken language. To this end, we propose a Fine-grained Semantic Embedding Network (FSEN) for learning semantic representations of spoken language grounded in fine-grained images. For training, we propose an efficient objective function, which combines a matching constraint, an adversarial objective, and a classification constraint. The learned speech representations are evaluated on two tasks: speech-image cross-modal retrieval and speech-to-image generation. On the retrieval task, FSEN outperforms other state-of-the-art methods on a scene-image dataset as well as on two fine-grained datasets. The image generation task shows that the learned speech representations can be used to generate high-quality, semantically consistent fine-grained images. Learning fine-grained semantics from spoken language via visual grounding is thus possible.
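The abstract describes a training objective that combines three terms: a matching constraint, an adversarial objective, and a classification constraint. The paper itself defines the exact formulation; the following is only a minimal, dependency-free sketch of how such a weighted combination could look. All function names, the hinge-ranking form of the matching term, the non-saturating GAN form of the adversarial term, and the weights are illustrative assumptions, not the authors' implementation.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def matching_loss(speech, image, wrong_image, margin=0.2):
    # Assumed hinge ranking loss: a matched speech-image pair should score
    # at least `margin` higher than a mismatched pair.
    return max(0.0, margin - cosine(speech, image) + cosine(speech, wrong_image))

def classification_loss(logits, label):
    # Cross-entropy over class logits (numerically stable log-sum-exp).
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_z - logits[label]

def adversarial_loss(d_fake):
    # Assumed non-saturating generator-side GAN term: -log D(fake).
    return -math.log(d_fake)

def total_loss(speech, image, wrong_image, logits, label, d_fake,
               w_match=1.0, w_cls=1.0, w_adv=1.0):
    # Weighted sum of the three constraints named in the abstract.
    return (w_match * matching_loss(speech, image, wrong_image)
            + w_cls * classification_loss(logits, label)
            + w_adv * adversarial_loss(d_fake))
```

In practice the embeddings would come from the speech and image encoders and the discriminator score from a GAN branch; the relative weights would be tuned on a validation set.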

Original language: English
Title of host publication: 2021 IEEE International Symposium on Circuits and Systems (ISCAS)
Place of publication: Piscataway
Number of pages: 5
ISBN (Electronic): 978-1-7281-9201-7
Publication status: Published - 2021
Event: 53rd IEEE International Symposium on Circuits and Systems, ISCAS 2021 - Virtual at Daegu, Korea, Republic of
Duration: 22 May 2021 - 28 May 2021


Conference: 53rd IEEE International Symposium on Circuits and Systems, ISCAS 2021
Country/Territory: Korea, Republic of
City: Virtual at Daegu

Bibliographical note

Accepted author manuscript


Keywords

  • Image generation
  • Multimodal modelling
  • Semantic retrieval
  • Speech representation learning
  • Visual grounding


