Date of Award

Spring 2021

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Economics

First Advisor

Andrews, Donald

Abstract

This dissertation presents four essays on robust methods in econometrics.

The first chapter, "Optimal Shrinkage Estimation of Fixed Effects in Linear Panel Data Models," proposes a shrinkage estimator for the fixed effects in linear panel data models whose risk properties are robust to violations of the distributional assumptions commonly imposed. Shrinkage methods are frequently used to estimate fixed effects, but the risk properties of existing estimators are sensitive to violations of the underlying distributional assumptions. I develop an estimator for the fixed effects that attains the best possible mean squared error (MSE) within a class of shrinkage estimators. This class includes conventional estimators, and the optimality does not require distributional assumptions. Importantly, the fixed effects are allowed to vary with time and to be serially correlated, in which case the shrinkage optimally incorporates the underlying correlation structure. In this setting, I also provide a method to forecast fixed effects one period ahead. A simulation study shows that the proposed estimator substantially reduces the MSE relative to conventional methods when the distributional assumptions of the conventional methods are violated, and loses very little when those assumptions are met. Using administrative data on the public schools of New York City, I estimate a teacher value-added model and show that the proposed estimator makes an empirically relevant difference.
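As a point of reference, the following is a minimal sketch of the shrinkage logic the chapter builds on, in a deliberately simplified setting: a balanced panel with time-invariant effects, shrinkage toward the grand mean, and a crude plug-in weight. All variable names and parameter values here are hypothetical; the chapter's estimator is more general, choosing the shrinkage to minimize MSE without distributional assumptions and allowing time-varying, serially correlated effects.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulation: a balanced panel where each unit's outcome is a
# time-invariant fixed effect plus i.i.d. noise. The chapter's estimator is
# far more general (time-varying, serially correlated effects; no normality).
N, T = 500, 8
alpha = rng.normal(0.0, 0.5, size=N)                  # true fixed effects
y = alpha[:, None] + rng.normal(0.0, 1.0, size=(N, T))

alpha_hat = y.mean(axis=1)                            # raw within-unit means
noise_var = y.var(axis=1, ddof=1).mean() / T          # variance of each mean
grand_mean = alpha_hat.mean()

# Plug-in shrinkage weight: estimated signal variance over total variance.
signal_var = max(alpha_hat.var(ddof=1) - noise_var, 0.0)
w = signal_var / (signal_var + noise_var)
alpha_shrunk = grand_mean + w * (alpha_hat - grand_mean)

print(f"MSE, raw means: {np.mean((alpha_hat - alpha) ** 2):.4f}")
print(f"MSE, shrunk:    {np.mean((alpha_shrunk - alpha) ** 2):.4f}")
```

Even this crude plug-in weight lowers the MSE of the raw unit means in the simulation above; the chapter's contribution concerns making such gains optimal within a whole class of shrinkage rules, and robust to the data-generating process rather than reliant on the normality used here.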
In the second chapter, "Inference in Moment Inequality Models That Is Robust to Spurious Precision under Model Misspecification" (with Donald W.K. Andrews), we propose an inference procedure for the moment inequality model that is robust to misspecification in a specific sense. Standard tests and confidence sets in the moment inequality literature are not robust to model misspecification in the sense that they exhibit spurious precision when the identified set is empty. This paper introduces tests and confidence sets that provide correct asymptotic inference for a pseudo-true parameter in such scenarios and hence do not suffer from spurious precision. The pseudo-true parameter is defined as the parameter value that satisfies the minimally relaxed moment inequalities.

The last two chapters address the problem of constructing confidence intervals (CIs) in nonparametric settings. The CIs provided are robust in the sense that they account for (worst-case) finite-sample bias and thus have uniform coverage over the underlying parameter space. In the third chapter, "Inference in Regression Discontinuity Designs under Monotonicity" (with Koohyun Kwon), we provide an inference procedure for the sharp regression discontinuity design (RDD) under monotonicity. Specifically, we consider the case where the true regression function is monotone with respect to (all or some of) the running variables and is assumed to lie in a Lipschitz smoothness class. Such a monotonicity condition is natural in many empirical contexts, and the Lipschitz constant has an intuitive interpretation. We propose a minimax two-sided confidence interval (CI) and an adaptive one-sided CI. For the two-sided CI, the researcher is required to choose a Lipschitz constant that defines the smoothness class in which she believes the true regression function to lie. This is the only tuning parameter, and the resulting CI has uniform coverage and attains the minimax optimal length. The one-sided CI can be constructed to maintain coverage over all monotone functions, providing maximum credibility with respect to the choice of the Lipschitz constant. Moreover, monotonicity makes it possible for the (excess) length of the CI to adapt to the true Lipschitz constant of the unknown regression function. Overall, the proposed procedures make it easy to see under what conditions on the underlying regression function the given estimates are significant, which can add transparency to research using RDD methods.

In the fourth chapter, "Adaptive Inference in Multivariate Nonparametric Regression Models Under Monotonicity" (with Koohyun Kwon), we consider the problem of adaptive inference on a regression function at a point in a multivariate nonparametric regression setting. The regression function belongs to a Hölder class and is assumed to be monotone with respect to some or all of its arguments. We derive the minimax rate of convergence for CIs that adapt to the underlying smoothness, and we provide an adaptive inference procedure that attains this rate. The procedure differs from that of Cai and Low (2004) and is intended to yield shorter CIs under practically relevant specifications. The proposed method applies to general linear functionals of the regression function and is shown to have favorable performance compared to existing inference procedures.
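The mechanism shared by these last two chapters, widening a CI by an explicit worst-case bias budget so that coverage holds uniformly over the function class, can be illustrated with a one-dimensional sketch. Everything below is a simplified, hypothetical construction: the noise level is treated as known, the estimator is a plain local average at an interior point, and the bias is bounded using only the Lipschitz condition. The dissertation's procedures are sharper and additionally handle monotonicity, adaptation, the RDD cutoff, and the multivariate case.

```python
import numpy as np

rng = np.random.default_rng(1)


def f_true(t):
    # Some Lipschitz(1) regression function, unknown to the CI procedure.
    return np.minimum(1.0, np.abs(t))


# Hypothetical setup: estimate f(0) when f is only known to satisfy
# |f(x) - f(x')| <= C |x - x'|, where C is an upper bound on the true
# Lipschitz constant, and the noise level sigma is treated as known.
n, C, sigma = 400, 2.0, 1.0
x = rng.uniform(-1.0, 1.0, n)
y = f_true(x) + rng.normal(0.0, sigma, n)

h = 0.2                              # window width (a tuning choice)
w = np.abs(x) <= h                   # observations near the point 0
est = y[w].mean()                    # local average estimate of f(0)

# Worst-case bias over the Lipschitz class:
# |E[est] - f(0)| <= mean_i |f(x_i) - f(0)| <= C * mean_i |x_i|.
bias_bound = C * np.abs(x[w]).mean()
se = sigma / np.sqrt(w.sum())

# Honest (if conservative) 95% CI: widen the usual CI by the bias budget.
half = bias_bound + 1.96 * se        # 1.96 = standard normal 0.975 quantile
print(f"f(0) = {f_true(0.0):.3f}, CI = [{est - half:.3f}, {est + half:.3f}]")
```

Simply adding the bias bound to the normal quantile is valid but conservative; roughly speaking, the chapters obtain shorter intervals through sharper critical values, and monotonicity caps the bias from one direction, which is what makes the adaptive one-sided CI possible.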
