Document Type
Discussion Paper
Publication Date
5-1-2020
CFDP Number
2235
CFDP Pages
59
Abstract
We present an approach to analyze learning outcomes in a broad class of misspecified environments, spanning both single-agent and social learning. Our main results provide general criteria to determine, without the need to explicitly analyze learning dynamics, when beliefs in a given environment converge to some long-run belief either locally or globally (i.e., from some or all initial beliefs). The key ingredient underlying these criteria is a novel “prediction accuracy” ordering over subjective models that refines existing comparisons based on Kullback-Leibler divergence. We show that these criteria can be applied, first, to unify and generalize various convergence results in previously studied settings. Second, they enable us to identify and analyze a natural class of environments, including costly information acquisition and sequential social learning, where, unlike in most settings the literature has focused on so far, long-run beliefs can fail to be robust to the details of the true data-generating process or agents’ perception thereof. In particular, even if agents learn the truth when they are correctly specified, vanishingly small amounts of misspecification can lead to extreme failures of learning.
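As a point of reference for the comparison the abstract describes, the standard Kullback-Leibler divergence criterion from the misspecified-learning literature can be sketched as follows; the notation here (true distribution p*, subjective model p_θ) is illustrative and not taken from the paper itself. For a candidate parameter θ, the divergence of the subjective model from the true data-generating process is

K(\theta) \;=\; \mathbb{E}_{p^{*}}\!\left[\log \frac{p^{*}(y)}{p_{\theta}(y)}\right] \;=\; \sum_{y} p^{*}(y)\,\log \frac{p^{*}(y)}{p_{\theta}(y)},

and classical results (Berk-style arguments) show that Bayesian beliefs concentrate on the minimizers of K(θ). The paper’s “prediction accuracy” ordering is described as refining comparisons based on this quantity.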
Recommended Citation
Frick, Mira; Iijima, Ryota; and Ishii, Yuhta, "Stability and Robustness in Misspecified Learning Models" (2020). Cowles Foundation Discussion Papers. 12.
https://elischolar.library.yale.edu/cowles-discussion-paper-series/12