Abstract
A common assumption in causal inference from observational data is that there is no hidden confounding. Yet it is, in general, impossible to verify this assumption from a single dataset. Under the assumption of independent causal mechanisms underlying the data-generating process, we demonstrate a way to detect unobserved confounders given multiple observational datasets coming from different environments. We present a theory of testable conditional independencies that are absent only when there is hidden confounding, and we examine cases where its assumptions are violated: degenerate and dependent mechanisms, and faithfulness violations. Additionally, we propose a procedure to test these independencies and study its empirical finite-sample behavior using simulation studies and semi-synthetic data based on a real-world dataset. In most cases, the proposed procedure correctly predicts the presence of hidden confounding, particularly when the confounding bias is large.
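The core idea can be made concrete with a toy sketch (this is an illustrative construction, not the authors' actual procedure or model): when the environment `E` only shifts the mechanism generating `X`, the conditional independence `Y ⊥ E | X` holds, but a hidden confounder `U` driving both `X` and `Y` breaks it. A simple partial-correlation check makes the difference visible:

```python
# Toy illustration (not the paper's procedure): with data from two
# environments, Y _||_ E | X holds when X -> Y is unconfounded, but fails
# when a hidden confounder U drives both X and Y. All model equations and
# coefficients below are assumptions chosen for the demonstration.
import numpy as np

def partial_corr(a, b, cond):
    """Correlation of a and b after regressing each on cond (with intercept)."""
    Z = np.column_stack([np.ones_like(cond), cond])
    ra = a - Z @ np.linalg.lstsq(Z, a, rcond=None)[0]
    rb = b - Z @ np.linalg.lstsq(Z, b, rcond=None)[0]
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(0)
n = 20000
e = rng.integers(0, 2, n).astype(float)   # environment indicator

# No hidden confounding: the environment only shifts X's mechanism.
x0 = 2.0 * e + rng.normal(size=n)
y0 = x0 + rng.normal(size=n)

# Hidden confounder U affects both X and Y.
u = rng.normal(size=n)
x1 = 2.0 * e + u + rng.normal(size=n)
y1 = x1 + 2.0 * u + rng.normal(size=n)

print(abs(partial_corr(e, y0, x0)))  # near zero: Y _||_ E | X holds
print(abs(partial_corr(e, y1, x1)))  # clearly nonzero: independence fails
```

In the confounded case, conditioning on `X` induces dependence between `E` and `U`, and `U` in turn affects `Y`, so the conditional independence is destroyed; a standard conditional independence test applied to pooled multi-environment data can therefore flag hidden confounding.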
Original language | English |
---|---|
Title of host publication | Proceedings of the 37th Annual Conference on Neural Information Processing Systems |
Publication status | Published - 2023 |
Event | 37th Annual Conference on Neural Information Processing Systems, New Orleans, United States. Duration: 10 Dec 2023 → 16 Dec 2023. Conference number: 37 |
Conference
Conference | 37th Annual Conference on Neural Information Processing Systems |
---|---|
Abbreviated title | NeurIPS 2023 |
Country/Territory | United States |
City | New Orleans |
Period | 10/12/23 → 16/12/23 |