RANRAC: Robust Neural Scene Representations via Random Ray Consensus

Benno Buschmann*, Andreea Dogaru, Elmar Eisemann, Michael Weinmann, Bernhard Egger

*Corresponding author for this work

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

Abstract

Learning-based scene representations such as neural radiance fields or light field networks, which rely on fitting a scene model to image observations, commonly encounter challenges in the presence of inconsistencies within the images caused by occlusions, inaccurately estimated camera parameters, or effects like lens flare. To address this challenge, we introduce RANdom RAy Consensus (RANRAC), an efficient approach to eliminate the effect of inconsistent data, taking inspiration from classical RANSAC-based outlier detection for model fitting. In contrast to down-weighting the effect of outliers based on robust loss formulations, our approach reliably detects and excludes inconsistent perspectives, resulting in clean images without floating artifacts. For this purpose, we formulate a fuzzy adaptation of the RANSAC paradigm, enabling its application to large-scale models. We interpret the minimal number of samples needed to determine the model parameters as a tunable hyperparameter, investigate the generation of hypotheses with data-driven models, and analyse the validation of hypotheses in noisy environments. We demonstrate the compatibility and potential of our solution both for photo-realistic robust multi-view reconstruction from real-world images based on neural radiance fields and for single-shot reconstruction based on light field networks. In particular, the results indicate significant improvements compared to state-of-the-art robust methods for novel-view synthesis on both synthetic and captured scenes with various inconsistencies, including occlusions, noisy camera pose estimates, and unfocused perspectives. The results further indicate significant improvements for single-shot reconstruction from occluded images.
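The abstract describes a RANSAC-style consensus loop over ray observations: repeatedly sample a minimal set, fit a model hypothesis, score it by the number of observations it explains, and keep the best consensus set. The sketch below illustrates that generic paradigm only; it is a minimal, hypothetical toy (a scalar "model" fit by averaging), not the authors' implementation, and all names (`ransac_ray_consensus`, `fit`, `residual`, `n_min`) are illustrative assumptions.

```python
import random

def ransac_ray_consensus(observations, fit, residual, n_min,
                         threshold, iterations, rng=None):
    """Generic RANSAC-style consensus over per-ray observations.

    observations : per-ray samples, some possibly inconsistent (e.g. occluded)
    fit          : fits a model hypothesis from a subset of observations
    residual     : residual of one observation under a hypothesis
    n_min        : minimal sample size -- treated here, as in the paper's
                   framing, as a tunable hyperparameter
    """
    rng = rng or random.Random(0)
    best_inliers = []
    for _ in range(iterations):
        subset = rng.sample(observations, n_min)      # minimal random sample
        model = fit(subset)                           # hypothesis generation
        inliers = [o for o in observations            # hypothesis validation
                   if residual(model, o) < threshold]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    if not best_inliers:
        return None, []
    # refit on the full consensus set for the final model
    return fit(best_inliers), best_inliers

# Toy data: most rays agree on a value near 1.0; two "occluded" rays read ~5.
rays = [1.0, 0.9, 1.1, 1.05, 0.95, 5.0, 5.2, 1.0]
mean = lambda xs: sum(xs) / len(xs)
model, inliers = ransac_ray_consensus(
    rays, fit=mean, residual=lambda m, o: abs(m - o),
    n_min=2, threshold=0.5, iterations=50)
```

In this toy run the two occluded readings fall outside the consensus set, so the final model is refit from the six consistent rays only; the actual method applies the same idea with data-driven scene models (NeRFs, light field networks) in place of the scalar mean.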
Original language: English
Title of host publication: Computer Vision – ECCV 2024
Subtitle of host publication: 18th European Conference, Milan, Italy, September 29 – October 4, 2024, Proceedings, Part LXXVI
Editors: Aleš Leonardis, Elisa Ricci, Stefan Roth, Olga Russakovsky, Torsten Sattler, Gül Varol
Place of Publication: Cham
Publisher: Springer
Pages: 126-143
Number of pages: 18
ISBN (Electronic): 978-3-031-73116-7
ISBN (Print): 978-3-031-73115-0
DOIs
Publication status: Published - 2025
Event: European Conference on Computer Vision – ECCV 2024, MiCo Milano, Milan, Italy
Duration: 29 Sept 2024 – 4 Oct 2024
https://eccv.ecva.net/

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 15134 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: European Conference on Computer Vision – ECCV 2024
Abbreviated title: ECCV 2024
Country/Territory: Italy
City: Milan
Period: 29/09/24 – 4/10/24
Internet address: https://eccv.ecva.net/

Bibliographical note

Green Open Access added to TU Delft Institutional Repository as part of the Taverne project 'You share, we take care!' (https://www.openaccess.nl/en/you-share-we-take-care).
Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work public.

Keywords

  • light-field networks
  • neural radiance fields
  • neural rendering
  • neural scene representations
  • RANSAC
  • robust estimation
