Essays in Econometrics: Nonparametrics and Robustness
Author(s)
Deaner, Ben
Advisor
Newey, Whitney
Mikusheva, Anna
Hausman, Jerry
Abstract
This thesis consists of three chapters. Each chapter considers a problem in econometrics with implications for applied research and proposes a solution.
In Chapter 1 I consider the task of inferring causal effects when only `proxy controls' are available. Proxy controls are informative proxies for unobserved confounding factors. For example, suppose we wish to estimate the causal impact of holding students back a grade on their future test scores. Academic ability is likely a confounding factor. While ability is not observed, early test scores may be used to proxy for ability. Under suitable conditions, nonparametric identification and estimation of treatment effects are possible in this setting. I present novel nonparametric identification results that motivate simple and `well-posed' nonparametric estimation and inference methods for use with proxy controls.
My analysis applies to cross-sectional settings but is particularly well-suited to panel models. In panel settings, proxy control methods provide a novel approach to the difficult problem of identification with non-separable, general heterogeneity and fixed T. In panels, observations from different periods serve as proxies for unobserved heterogeneity, and my key identifying assumptions follow from restrictions on the serial dependence structure.
I derive convergence rates for my estimator and construct uniform confidence bands with asymptotically correct size. I apply my methodology to two empirical settings. I estimate causal effects of grade retention on cognitive performance and I estimate consumer demand counterfactuals using panel data.
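To fix ideas, here is a stylized sketch of the proxy-control identification argument, in generic notation drawn from the proximal causal inference literature; the exact conditions in Chapter 1 may differ. Let Y denote the outcome, X the treatment, U the unobserved confounder, and let Z and W be two proxies for U satisfying suitable exclusion and completeness conditions. If a `bridge' function h solves the integral equation

E[Y | X, Z] = E[h(X, W) | X, Z],

then average counterfactual outcomes are identified by

E[Y(x)] = E[h(x, W)],

so causal effects can be recovered without ever observing U.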
In Chapter 2 I show that nonparametric instrumental variables (NPIV) estimators are highly sensitive to misspecification: an arbitrarily small deviation from instrumental validity can lead to large asymptotic bias for a broad class of estimators. One can mitigate the problem by placing strong restrictions on the structural function in estimation. If the true function does not obey the restrictions, then imposing them imparts bias. Therefore, there is a trade-off between the sensitivity to invalid instruments and the bias from imposing excessive restrictions. In response, I present a method that allows researchers to empirically assess the sensitivity of their findings to misspecification. I apply my procedure to the empirical demand setting of Blundell (2007) and Horowitz (2011).
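As a rough sketch of the sensitivity problem, again in generic notation rather than the thesis' own: let T denote the conditional expectation operator (Tg)(z) = E[g(X) | Z = z]. Exact instrumental validity requires E[Y - g_0(X) | Z] = 0, so the structural function solves T g_0 = E[Y | Z]. Under a small violation, E[Y - g_0(X) | Z] = \delta(Z) with ||\delta|| small, a function g that instead fits the data satisfies

T(g - g_0) = \delta.

Because T typically has an unbounded inverse in nonparametric settings (the problem is ill-posed), even a small \delta can correspond to a large error g - g_0 unless the class of candidate functions is restricted, which is the trade-off described above.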
In Chapter 3 I consider methods for inference in dynamic discrete choice models that are robust to approximation error in the solution to the dynamic decision problem. Estimation and inference in dynamic discrete choice models often rely on approximation to lower the computational burden of dynamic programming. If it is not accounted for, the use of approximation can impart substantial bias in estimation and result in invalid confidence sets. I present a method for set estimation and inference that explicitly accounts for the use of approximation and is thus valid regardless of the approximation error. I show how one can account for the error from approximation at low computational cost. My methodology allows researchers to assess the estimation error due to approximation and thus more effectively manage the trade-off between bias and computational expedience. I provide simulation evidence to demonstrate the practicality of my approach.
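As a generic illustration of why the approximation error can be controlled at low cost (a standard contraction-mapping bound, not necessarily the exact construction in Chapter 3): suppose the true value function V solves the Bellman fixed point V = \Gamma(V), where \Gamma is a contraction with modulus \beta < 1 (the discount factor), and let \tilde{V} be any computed approximation. Then

||\tilde{V} - V|| \le ||\tilde{V} - \Gamma(\tilde{V})|| / (1 - \beta).

The right-hand side involves only the one-step Bellman residual of \tilde{V}, which can be evaluated directly, so the error from approximation can be bounded without solving the dynamic program exactly; bounds of this kind are the natural input to set estimates that remain valid whatever the approximation error.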
Date issued
2021-06
Department
Massachusetts Institute of Technology. Department of Economics
Publisher
Massachusetts Institute of Technology