We present an approach to analyze learning outcomes in a broad class of misspecified environments, spanning both single-agent and social learning. Our main results provide general criteria to determine—without the need to explicitly analyze learning dynamics—when beliefs in a given environment converge to some long-run belief either locally or globally (i.e., from some or all initial beliefs). The key ingredient underlying these criteria is a novel "prediction accuracy" ordering over subjective models that refines existing comparisons based on Kullback-Leibler divergence. We show that these criteria can be applied, first, to unify and generalize various convergence results in previously studied settings. Second, they enable us to identify and analyze a natural class of environments, including costly information acquisition and sequential social learning, where, unlike in most settings the literature has focused on so far, long-run beliefs can fail to be robust to the details of the true data-generating process or agents' perception thereof. In particular, even if agents learn the truth when they are correctly specified, vanishingly small amounts of misspecification can lead to extreme failures of learning.
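For reference, the standard comparison of subjective models that the paper's "prediction accuracy" ordering refines is based on the Kullback-Leibler divergence between the true distribution and a model's predicted distribution. A generic statement (the notation below is illustrative, not the paper's own) is:

```latex
D_{\mathrm{KL}}\!\left(P \,\middle\|\, Q_\theta\right)
\;=\; \sum_{x} P(x)\,\log \frac{P(x)}{Q_\theta(x)},
```

where $P$ denotes the true data-generating distribution and $Q_\theta$ the distribution predicted by subjective model $\theta$; KL-based criteria favor models $\theta$ minimizing this divergence.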
Frick, Mira; Iijima, Ryota; and Ishii, Yuhta, "Stability and Robustness in Misspecified Learning Models" (2020). Cowles Foundation Discussion Papers. 12.