Document Type: Discussion Paper
Publication Date: 5-1-2020
CFDP Number: 2235R2
CFDP Revision Date: 12-4-2021
CFDP Pages: 55
Abstract
We present an approach to analyze learning outcomes in a broad class of misspecified environments, spanning both single-agent and social learning. We introduce a novel “prediction accuracy” order over subjective models, and observe that this makes it possible to partially restore standard martingale convergence arguments that apply under correctly specified learning. Based on this, we derive general conditions to determine when beliefs in a given environment converge to some long-run belief either locally or globally (i.e., from some or all initial beliefs). We show that these conditions can be applied, first, to unify and generalize various convergence results in previously studied settings. Second, they enable us to analyze environments where learning is “slow,” such as costly information acquisition and sequential social learning. In such environments, we illustrate that even if agents learn the truth when they are correctly specified, vanishingly small amounts of misspecification can generate extreme failures of learning.
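As background for the martingale argument the abstract refers to, the following standard derivation (our notation, not the paper's) shows why Bayesian posterior beliefs form a martingale under the agent's own predictive measure when the model is correctly specified. Let $\mu_t(\theta)$ be the posterior probability of model $\theta$ after $t$ signals, $f(s\mid\theta)$ the signal density under $\theta$, and $\bar f_t(s)=\sum_{\theta'}\mu_t(\theta')f(s\mid\theta')$ the predictive density. Bayes' rule gives $\mu_{t+1}(\theta)=\mu_t(\theta)f(s_{t+1}\mid\theta)/\bar f_t(s_{t+1})$, so

```latex
\mathbb{E}\bigl[\mu_{t+1}(\theta)\mid\mathcal{F}_t\bigr]
  = \int \frac{\mu_t(\theta)\, f(s\mid\theta)}{\bar f_t(s)}\,\bar f_t(s)\,ds
  = \mu_t(\theta)\int f(s\mid\theta)\,ds
  = \mu_t(\theta).
```

Since $\mu_t(\theta)$ is a bounded martingale, Doob's martingale convergence theorem implies it converges almost surely; under correct specification this underlies convergence to the truth. Under misspecification the predictive measure need not agree with the true data-generating measure, which is what breaks this argument and motivates the paper's "prediction accuracy" order.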
Recommended Citation: Frick, Mira; Iijima, Ryota; and Ishii, Yuhta, "Belief Convergence under Misspecified Learning: A Martingale Approach" (2020). Cowles Foundation Discussion Papers. 2667.
https://elischolar.library.yale.edu/cowles-discussion-paper-series/2667