Causal Inference for Infectious Disease Intervention Effects

Date of Award

Spring 1-1-2024

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Public Health

First Advisor

Crawford, Forrest

Abstract

Causal inference is central to evaluating treatments and interventions for their causal impact on downstream outcomes. It begins by defining causal estimands that answer the research question of interest, addresses challenges in the identification and estimation of causal effects, and yields conclusions that serve as empirical evidence for decision-making. Infectious diseases, owing to the unique structure of their data-generating processes, pose structural difficulties for the causal evaluation of both individual-level treatments and population-level interventions. We investigate these difficulties and the pitfalls that arise in drawing causal conclusions from data on infectious disease processes and interventions, addressing both randomized and observational research designs.

The first chapter of this dissertation deals with the structural challenge of interference when evaluating the effect of an individual-level treatment on its recipient. We show that two popular but divergent definitions of the direct effect under interference can yield contradictory conclusions. We prove a new theoretical result that formalizes the distinction between the two direct effects under general dependent randomized study designs. Under different assumptions on the data-generating process, we propose two perspectives that justify the two direct effects and clarify their causal interpretations, and we discuss their identification and estimation in empirical applications, depending on the randomized study design.

The second chapter studies time-varying confounding bias in the identification of population-level epidemic intervention effects. We illustrate the source, mechanism, and consequences of this bias when policymakers implement and evaluate interventions both as responses to previous disease evolution and as measures intended to reduce future transmission. We prove a formal result on the direction of the bias, which can lead to substantial underestimation of adverse outcomes, and verify via simulation that it typically arises in the counterfactual control scenario in which no intervention was implemented. This scenario arises often in causal analyses of infections averted by an intervention program, so the bias could mislead policymakers into insufficient implementation and promotion of effective measures that mitigate disease transmission.

The third chapter addresses selection bias in infectious disease studies using causal graphical rules. We investigate two important cases of selection in randomized trials in which the average treatment effect is not recovered by existing simple graphical rules. We propose two identification strategies, based on the g-formula and on inverse probability weighting, for identifying the treatment effect of interest. The approach uses external information to adjust for post-treatment variables and addresses selection bias without incurring confounding bias, and it extends flexibly to complex observational settings. We also propose and outline procedures for statistical estimation and inference of the treatment effect by inverse probability weighting. Simulation studies verify the favorable performance of the proposed methods, which correct the serious bias and misleading conclusions that would arise from a complete-case, selected-sample analysis.
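To illustrate the selection-bias phenomenon the third chapter targets, here is a minimal simulation sketch (not the dissertation's estimator): treatment is randomized, but inclusion in the analyzed sample depends on a post-treatment variable, so a complete-case comparison is biased while weighting selected units by the inverse probability of selection recovers the average treatment effect. All variable names and the data-generating model are illustrative assumptions, and the selection probability is treated as known rather than estimated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Randomized treatment assignment
z = rng.binomial(1, 0.5, n)
# Post-treatment variable (e.g., symptom status) affected by treatment
m = rng.binomial(1, 0.3 + 0.4 * z)
# Outcome depends on treatment directly and through the post-treatment variable
y = 1.0 * z + 2.0 * m + rng.normal(0.0, 1.0, n)
# Selection into the analyzed sample depends on the post-treatment variable
p_sel = 0.9 - 0.6 * m
s = rng.binomial(1, p_sel).astype(bool)

# True average treatment effect: direct effect plus effect through M
true_ate = 1.0 + 2.0 * 0.4

# Complete-case (selected-sample) contrast: biased by selection on M
cc_est = y[s & (z == 1)].mean() - y[s & (z == 0)].mean()

# IPW: weight each selected unit by 1 / P(S = 1 | M), then contrast arms
w = 1.0 / p_sel[s]
zs, ys = z[s], y[s]
ipw_est = (np.sum(w * zs * ys) / np.sum(w * zs)
           - np.sum(w * (1 - zs) * ys) / np.sum(w * (1 - zs)))
```

In this toy setting the IPW contrast is close to the true effect of 1.8, while the complete-case contrast is noticeably attenuated; in practice the selection probability would itself be estimated, which is where the chapter's proposed estimation and inference procedures come in.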

