Detecting hidden confounding in observational data using multiple environments

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

Abstract

A common assumption in causal inference from observational data is that there is no hidden confounding. Yet it is, in general, impossible to verify this assumption from a single dataset. Under the assumption of independent causal mechanisms underlying the data-generating process, we demonstrate a way to detect unobserved confounders given multiple observational datasets coming from different environments. We present a theory for testable conditional independencies that are absent only when there is hidden confounding, and examine cases where its assumptions are violated: degenerate and dependent mechanisms, and faithfulness violations. Additionally, we propose a procedure to test these independencies and study its empirical finite-sample behavior using simulation studies and semi-synthetic data based on a real-world dataset. In most cases, the proposed procedure correctly predicts the presence of hidden confounding, particularly when the confounding bias is large.
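The abstract's core idea can be illustrated with a toy experiment. The sketch below is not the paper's actual procedure: it assumes a hypothetical linear Gaussian model with two environments, picks one specific conditional independence (outcome Y independent of the environment indicator E given cause X, which holds when Y's mechanism is invariant and unconfounded), and tests it with a simple residual-correlation check. When a hidden confounder U, whose distribution shifts across environments, drives both X and Y, that independence fails and the test detects it.

```python
import numpy as np
from scipy import stats

def simulate_env(rng, n, shift, confounded):
    """One environment: the mechanism generating X shifts via `shift`.
    If `confounded`, a hidden U (whose mean also shifts) drives both X and Y."""
    u = rng.normal(shift, 1.0, n)
    x = (u if confounded else rng.normal(shift, 1.0, n)) + rng.normal(0, 0.5, n)
    y = 1.5 * x + (1.0 * u if confounded else 0.0) + rng.normal(0, 0.5, n)
    return x, y

def independence_pvalue(x0, y0, x1, y1):
    """Crude test of Y independent of environment E given X:
    regress pooled Y on X, then correlate the residuals with E."""
    x = np.concatenate([x0, x1])
    y = np.concatenate([y0, y1])
    e = np.concatenate([np.zeros_like(x0), np.ones_like(x1)])
    coeffs = np.polyfit(x, y, 1)            # pooled linear fit of Y on X
    resid = y - np.polyval(coeffs, x)
    _, p = stats.pearsonr(resid, e)         # small p => dependence on E
    return p

rng = np.random.default_rng(0)
# No hidden confounding: residuals are pure noise, independence holds.
p_clean = independence_pvalue(*simulate_env(rng, 2000, 0.0, False),
                              *simulate_env(rng, 2000, 2.0, False))
# Hidden confounding: residuals track the environment, p-value is tiny.
p_conf = independence_pvalue(*simulate_env(rng, 2000, 0.0, True),
                             *simulate_env(rng, 2000, 2.0, True))
print(f"unconfounded p = {p_clean:.3f}, confounded p = {p_conf:.2e}")
```

The residual-correlation test and the linear functional forms are simplifying assumptions for illustration; the paper's procedure operates under weaker assumptions (independent causal mechanisms and faithfulness) rather than linearity.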
Original language: English
Title of host publication: Proceedings of the 37th Annual Conference on Neural Information Processing Systems
Publication status: Published - 2023
Event: 37th Annual Conference on Neural Information Processing Systems - New Orleans, United States
Duration: 10 Dec 2023 - 16 Dec 2023
Conference number: 37

Conference

Conference: 37th Annual Conference on Neural Information Processing Systems
Abbreviated title: NeurIPS 2023
Country/Territory: United States
City: New Orleans
Period: 10/12/23 - 16/12/23
