Document Type
Discussion Paper
Publication Date
10-23-2025
CFDP Number
2472
CFDP Pages
98
Abstract
We propose SLIM (Stochastic Learning and Inference in overidentified Models), a scalable stochastic approximation framework for nonlinear GMM. SLIM forms iterative updates from independent mini-batches of moments and their derivatives, producing unbiased directions that ensure almost-sure convergence. It requires neither a consistent initial estimator nor global convexity and accommodates both fixed-sample and random-sampling asymptotics. We further develop an optional second-order refinement and inference procedures based on random scaling and plug-in methods, together with plug-in, debiased plug-in, and online versions of the Sargan–Hansen J-test tailored to stochastic learning. In Monte Carlo experiments based on a nonlinear EASI demand system with 576 moment conditions, 380 parameters, and n = 10^5, SLIM solves the model in under 1.4 hours, whereas full-sample GMM in Stata on a powerful laptop converges only after 18 hours. The debiased plug-in J-test delivers satisfactory finite-sample inference, and SLIM scales smoothly to n = 10^6.
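The key device in the abstract — drawing the moments and their derivatives from independent mini-batches so that the update direction is unbiased — can be sketched as follows. This is a minimal illustration under assumed specifics (a simple overidentified linear IV model with a fixed identity weighting matrix and a Robbins–Monro step size), not the authors' implementation:

```python
import numpy as np

# Simulate an overidentified linear IV model y = x*theta0 + e,
# with 4 instruments z and a scalar parameter theta.
rng = np.random.default_rng(0)
n, theta0 = 100_000, 2.0
z = rng.normal(size=(n, 4))
x = z @ np.array([1.0, 0.5, 0.5, 0.25]) + rng.normal(size=n)
y = x * theta0 + rng.normal(size=n)

W = np.eye(4)            # weighting matrix (identity, for simplicity)
theta, batch = 0.0, 64   # starting value need not be consistent

for t in range(1, 5001):
    # Two *independent* mini-batches: one for the moments g,
    # one for the Jacobian G, so that E[G_hat' W g_hat] = G' W g
    # and the update direction is unbiased.
    i = rng.integers(n, size=batch)
    j = rng.integers(n, size=batch)
    g_hat = (z[i] * (y[i] - x[i] * theta)[:, None]).mean(axis=0)
    G_hat = -(z[j] * x[j][:, None]).mean(axis=0)
    direction = G_hat @ W @ g_hat
    theta -= (1.0 / t**0.6) * direction   # decaying Robbins-Monro step

print(theta)  # should be close to theta0 = 2.0
```

Had a single mini-batch been used for both g_hat and G_hat, the product G_hat' W g_hat would be biased for the population gradient; the independence across the two draws is what restores unbiasedness, which the abstract ties to almost-sure convergence.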
Recommended Citation
Chen, Xiaohong; Kim, Min Seong; Lee, Sokbae; Seo, Myung Hwan; and Song, Myunghyun, "SLIM: Stochastic Learning and Inference in Overidentified Models" (2025). Cowles Foundation Discussion Papers. 2891.
https://elischolar.library.yale.edu/cowles-discussion-paper-series/2891