Date of Award

Spring 2022

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Economics

First Advisor

Kitamura, Yuichi

Abstract

This dissertation presents two essays on econometric methods for program evaluation and policy choice. In Chapter 1, I develop a statistically optimal way of using data to make policy decisions when the performance of counterfactual policies is only partially identified. Specifically, I consider a class of statistical decision problems in which a policy maker must choose between two alternative policies to maximize social welfare (e.g., the population mean of an outcome) based on a finite sample. The central assumption is that the underlying, possibly infinite-dimensional parameter lies in a known convex set, potentially leading to partial identification of the welfare effect. An example of such a restriction is smoothness of the counterfactual outcome functions. As the main theoretical result, I obtain a finite-sample decision rule (i.e., a function that maps data to a decision) that is optimal under the minimax regret criterion. This rule is easy to compute, yet achieves optimality among all decision rules; no ad hoc restrictions are imposed on the class of decision rules. I then apply my results to the problem of whether to change a policy eligibility cutoff in a regression discontinuity setup, with an empirical application to the Burkinabé Response to Improve Girls' Chances to Succeed program, a school construction program in Burkina Faso in which villages were selected to receive schools based on scores computed from their characteristics. Under reasonable smoothness restrictions on the counterfactual outcome function, the optimal decision rule implies that it is not cost-effective to expand the program. I also compare the empirical performance of the optimal decision rule with that of alternative decision rules.

In Chapter 2, which is joint work with Yusuke Narita, we show how to use data obtained from algorithmic decision making for impact evaluation. Machine learning and other algorithms produce a growing share of decisions and recommendations in both policy and business. We first highlight a valuable aspect of such algorithmic decisions: because the algorithms make decisions based only on observable input variables, algorithmic decisions are natural experiments (conditionally quasi-randomly assigned instruments). We then use this observation to develop a treatment-effect estimator for a class of stochastic and deterministic decision-making algorithms. Our estimator is consistent and asymptotically normal for well-defined causal effects; a key special case is a multidimensional regression discontinuity design. We apply the estimator to evaluate the effect of the Coronavirus Aid, Relief, and Economic Security (CARES) Act, under which hundreds of billions of dollars' worth of relief funding were allocated to hospitals via an algorithmic rule. Our estimates suggest that the relief funding had little effect on COVID-19-related hospital activity levels, whereas naive OLS and IV estimates exhibit substantial selection bias.
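The intuition behind Chapter 2 can be illustrated with a toy simulation (this is not code from the dissertation, and all variable names and numbers below are hypothetical): a deterministic algorithm that assigns treatment whenever a score built from observables crosses a cutoff creates quasi-random assignment near that cutoff, so a local comparison around the cutoff recovers the treatment effect even when a naive comparison of treated and untreated units is confounded.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a deterministic algorithm assigns treatment
# (e.g., relief funding) whenever an observable score crosses a cutoff.
n = 100_000
score = rng.uniform(0, 1, n)      # observable input to the algorithm
cutoff = 0.5
treated = score >= cutoff         # deterministic algorithmic rule

# Potential outcomes: the true treatment effect is 2.0, but outcomes
# also depend smoothly on the score, so treated units differ from
# untreated units for reasons other than treatment.
effect = 2.0
outcome = 3.0 * score + effect * treated + rng.normal(0.0, 1.0, n)

# Naive difference in means is biased upward by the score dependence.
naive = outcome[treated].mean() - outcome[~treated].mean()

# Local comparison in a small bandwidth around the cutoff: the simplest
# one-dimensional regression discontinuity estimate, exploiting the
# quasi-random assignment generated by the algorithmic rule.
h = 0.02
near = np.abs(score - cutoff) < h
local = outcome[near & treated].mean() - outcome[near & ~treated].mean()

print(f"naive: {naive:.2f}, local RD: {local:.2f}")
```

In this sketch the naive comparison is inflated by the score's direct effect on outcomes, while the local estimate lands close to the true effect of 2.0; the estimator developed in Chapter 2 extends this logic to multidimensional inputs and to stochastic decision rules.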
