Learning to recognise words using visually grounded speech

Sebastiaan Scholten, Danny Merkx, Odette Scharenborg

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review



We investigated word recognition in a Visually Grounded Speech model. The model has been trained on pairs of images and spoken captions to create visually grounded embeddings, which can be used for speech-to-image retrieval and vice versa. We investigate whether such a model can be used to recognise words by embedding isolated words and using them to retrieve images of their visual referents. We investigate the time-course of word recognition using a gating paradigm and perform a statistical analysis to see whether well-known word competition effects in human speech processing influence word recognition. Our experiments show that the model is able to recognise words. The gating paradigm further reveals that words can be recognised from partial input, and that recognition is negatively influenced by competition from the word-initial cohort.
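A minimal sketch of the retrieval-based recognition step described above, with hypothetical precomputed embeddings standing in for the model's outputs: an isolated (or gated, i.e. truncated) word is embedded, images are ranked by cosine similarity to that embedding, and the word counts as recognised if its visual referent appears among the top-ranked images. All names and data here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def cosine_sim(query, matrix):
    # Cosine similarity between a query vector and each row of a matrix.
    query = query / np.linalg.norm(query)
    matrix = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    return matrix @ query

def recognise(word_emb, image_embs, image_labels, word_label, top_k=5):
    # Rank all images by similarity to the embedded word; the word is
    # recognised if any of the top-k retrieved images depicts its referent.
    ranking = np.argsort(-cosine_sim(word_emb, image_embs))
    return word_label in {image_labels[i] for i in ranking[:top_k]}

# Toy data: embeddings of images depicting a dog cluster near the embedding
# of the spoken word "dog", while other images are random.
rng = np.random.default_rng(0)
dog_dir = rng.normal(size=32)
word_emb = dog_dir + 0.1 * rng.normal(size=32)
image_embs = np.vstack(
    [dog_dir + 0.2 * rng.normal(size=32) for _ in range(5)]
    + [rng.normal(size=32) for _ in range(50)]
)
image_labels = ["dog"] * 5 + ["other"] * 50

print(recognise(word_emb, image_embs, image_labels, "dog"))
```

In the gating paradigm, the same retrieval step would be repeated on embeddings of progressively longer word prefixes, tracking at which gate the referent first reaches the top of the ranking.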

Original language: English
Title of host publication: 2021 IEEE International Symposium on Circuits and Systems (ISCAS)
Place of Publication: Piscataway
Number of pages: 5
ISBN (Electronic): 978-1-7281-9201-7
Publication status: Published - 2021
Event: 53rd IEEE International Symposium on Circuits and Systems, ISCAS 2021 - Virtual at Daegu, Korea, Republic of
Duration: 22 May 2021 - 28 May 2021


Conference: 53rd IEEE International Symposium on Circuits and Systems, ISCAS 2021
Country: Korea, Republic of
City: Virtual at Daegu

Bibliographical note

Accepted author manuscript


Keywords

  • Analysis
  • Flickr8k
  • Recurrent neural network
  • Visually grounded speech


