Towards Cross-Modal Point Cloud Retrieval for Indoor Scenes

Fuyang Yu, Zhen Wang, Dongyuan Li, Peide Zhu, Xiaohui Liang*, Xiaochuan Wang, Manabu Okumura

*Corresponding author for this work

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

Abstract

Cross-modal retrieval, an important emerging foundational information retrieval task, benefits from recent advances in multimodal technologies. However, current cross-modal retrieval methods mainly focus on the interaction between textual information and 2D images and lack research on 3D data, especially scene-level point clouds, despite the increasing role point clouds play in daily life. Therefore, in this paper, we propose a cross-modal point cloud retrieval benchmark that focuses on using text or images to retrieve point clouds of indoor scenes. Given the high cost of obtaining point clouds compared to text and images, we first designed a pipeline to automatically generate a large number of indoor scenes and their corresponding scene graphs. Based on this pipeline, we collected a balanced dataset called CRISP, which contains 10K point cloud scenes along with their corresponding scene images and descriptions. We then used state-of-the-art models to design baseline methods on CRISP. Our experiments demonstrated that point cloud retrieval accuracy is much lower than that of cross-modal 2D image retrieval, especially for textual queries. Furthermore, we propose ModalBlender, a tri-modal framework that greatly improves Text-PointCloud retrieval performance. Through extensive experiments, CRISP proved to be a valuable dataset that merits further research. (The dataset can be downloaded at https://github.com/CRISPdataset/CRISP.)
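To make the retrieval setting in the abstract concrete, the sketch below shows how cross-modal retrieval accuracy is typically scored: query embeddings (from text or images) are compared against point-cloud gallery embeddings by cosine similarity, and Recall@K counts how often the ground-truth scene appears among the top-K results. This is a minimal illustration with made-up embeddings, not the paper's ModalBlender model or its evaluation code; the function name and dimensions are assumptions for demonstration only.

```python
import numpy as np

def recall_at_k(query_emb, gallery_emb, gt_indices, k=5):
    """Recall@K for cross-modal retrieval.

    query_emb:   (num_queries, d) text or image embeddings
    gallery_emb: (num_gallery, d) point-cloud embeddings
    gt_indices:  ground-truth gallery index for each query
    """
    # L2-normalize so the dot product equals cosine similarity
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    g = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)
    sims = q @ g.T                              # (num_queries, num_gallery)
    topk = np.argsort(-sims, axis=1)[:, :k]     # indices of the k most similar scenes
    hits = (topk == np.asarray(gt_indices)[:, None]).any(axis=1)
    return hits.mean()

# Toy example: 3 queries against a gallery of 10 point-cloud scenes (32-dim embeddings)
rng = np.random.default_rng(0)
queries = rng.normal(size=(3, 32))
gallery = rng.normal(size=(10, 32))
print(recall_at_k(queries, gallery, gt_indices=[0, 4, 7], k=5))
```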

Original language: English
Title of host publication: MultiMedia Modeling - 30th International Conference, MMM 2024, Proceedings
Editors: Stevan Rudinac, Marcel Worring, Cynthia Liem, Alan Hanjalic, Björn Þór Jónsson, Yoko Yamakata, Bei Liu
Place of Publication: Cham
Publisher: Springer
Pages: 89-102
Number of pages: 14
ISBN (Electronic): 978-3-031-53302-0
ISBN (Print): 978-3-031-53301-3
DOIs
Publication status: Published - 2024
Event: 30th International Conference on MultiMedia Modeling, MMM 2024 - Amsterdam, Netherlands
Duration: 29 Jan 2024 – 2 Feb 2024

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 14557 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 30th International Conference on MultiMedia Modeling, MMM 2024
Country/Territory: Netherlands
City: Amsterdam
Period: 29/01/24 – 02/02/24

Bibliographical note

Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care
Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work publicly available.

Keywords

  • Cross-modal Retrieval
  • Indoor Scene
  • Point Cloud
