Exploiting visual-based intent classification for diverse social image retrieval

Bo Wang, Martha Larson

Research output: Chapter in Book/Conference proceedings/Edited volume · Conference contribution · Scientific · peer-reviewed



In the 2017 MediaEval Retrieving Diverse Social Images task, we (the TUD-MMC team) propose a novel method, namely an intent-based approach, for social image search result diversification. The underlying assumption is that the visual appearance of social images is shaped by the underlying photographic act, i.e., why the images were taken. Better understanding the rationale behind the photographic act could benefit social image search result diversification. To investigate this idea, we employ a manual content analysis approach to create a taxonomy of intent classes. Our experiments show that a CNN classifier is able to capture the visual differences between the classes in the intent taxonomy. We cluster images of the Flickr baseline by predicted intent class and generate a re-ranked list by alternating images from different clusters. Our results reveal that, compared to conventional diversification strategies, intent-based search result diversification brings a considerable improvement in terms of cluster recall, with several extra benefits.
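The re-ranking step described above — group the baseline ranking by predicted intent class, then alternate across the resulting clusters — can be sketched as a round-robin interleave. This is a minimal illustration, not the paper's implementation; the function and variable names are hypothetical:

```python
def interleave_by_cluster(ranked_images, predicted_intent):
    """Re-rank a relevance-ordered list by alternating across
    predicted intent clusters (hypothetical sketch, not the
    authors' exact code).

    ranked_images: list of image ids, best-first (e.g. the Flickr baseline).
    predicted_intent: dict mapping image id -> predicted intent class.
    """
    # Group images by predicted intent class, preserving rank order
    # within each cluster.
    clusters = {}
    for img in ranked_images:
        clusters.setdefault(predicted_intent[img], []).append(img)

    # Round-robin over the clusters: take the best remaining image
    # from each cluster in turn until all images are emitted.
    queues = [list(c) for c in clusters.values()]
    reranked = []
    while queues:
        for q in list(queues):
            reranked.append(q.pop(0))
            if not q:
                queues.remove(q)
    return reranked


# Example: two "view" shots, two "people" shots, one "detail" shot.
ranking = ["a", "b", "c", "d", "e"]
intent = {"a": "view", "b": "view", "c": "people", "d": "people", "e": "detail"}
print(interleave_by_cluster(ranking, intent))  # ['a', 'c', 'e', 'b', 'd']
```

The interleave trades a small amount of relevance ordering for diversity: consecutive results come from different intent clusters, which is what drives the cluster-recall improvement the abstract reports.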

Original language: English
Title of host publication: Working Notes Proceedings of the MediaEval 2017 Workshop
Editors: Guillaume Gravier, Benjamin Bischke, Claire-Hélène Demarty, Maia Zaharieva, Michael Riegler, Emmanuel Dellandrea, Dmitry Bogdanov, Richard Sutcliffe, Gareth J.F. Jones, Martha Larson
Number of pages: 3
Publication status: Published - 2017
Event: MediaEval 2017: Multimedia Benchmark Workshop - Dublin, Ireland
Duration: 13 Sep 2017 - 15 Sep 2017

Publication series

Name: CEUR Workshop Proceedings
ISSN (Print): 1613-0073


Conference: MediaEval 2017


