Document Type

Discussion Paper

Publication Date

1-1-1997

CFDP Number

1144

CFDP Pages

32

Abstract

This paper provides an algorithm for computing policies for dynamic economic models whose state vectors evolve as ergodic Markov processes. The algorithm can be described as a simple learning process (one that agents might actually use). It has two features that break the relationship between its computational requirements and the dimension of the model's state space. First, the integral over future states needed to determine policies is never calculated; rather, it is estimated by a simple average of past outcomes. Second, the algorithm never computes policies at all points: each iteration is defined by a location, and only the policies at that location are computed. Random draws from the distribution determined by those policies determine the next location. As a result, the iterates repeatedly visit only the recurrent class of points, a subset of the feasible set whose cardinality is not directly tied to the dimension of the state space. Our motivating example is Markov Perfect Equilibria (a leading model of industry dynamics; see Maskin and Tirole, 1988). Though estimators for the primitive parameters of these models are often available, computational problems have made it difficult to use them in applied analysis. We provide numerical results showing that our algorithm can be several orders of magnitude faster than standard algorithms in this case, opening up new possibilities for applied work.
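To make the two features concrete, the following is a minimal Python sketch in a single-agent setting (the paper's application is to multi-agent Markov Perfect Equilibria, which this does not reproduce). The primitives actions, payoff, draw_next, and the discount factor beta are hypothetical placeholders, not the paper's model. The loop illustrates both ideas: the expectation over future states is replaced by a running average of simulated outcomes, and only the currently visited location is updated, with the next location drawn from the distribution induced by the chosen policy.

    import random
    from collections import defaultdict

    # Hypothetical primitives for illustration only (not the paper's model).
    beta = 0.95

    def actions(s):
        return [0, 1]  # hypothetical binary choice

    def payoff(s, a):
        return -abs(s - 5) + a  # hypothetical one-period return

    def draw_next(s, a):
        # Hypothetical stochastic transition on a bounded state space.
        return max(0, min(10, s + a + random.choice([-1, 0, 1])))

    V = defaultdict(float)  # value estimates, created only at visited states
    n = defaultdict(int)    # visit counts used for the running averages

    s = 0                   # start anywhere in the feasible set
    for _ in range(100_000):
        # Feature 1: no integral over future states. For each action, draw one
        # successor and use the current value estimate there as a noisy sample
        # of the continuation value.
        samples = {a: payoff(s, a) + beta * V[draw_next(s, a)] for a in actions(s)}
        a_star = max(samples, key=samples.get)

        # Feature 2: only the current location is updated, via a simple
        # average of past outcomes (stochastic-approximation step size 1/n).
        n[s] += 1
        V[s] += (samples[a_star] - V[s]) / n[s]

        # The chosen policy determines the next location, so the iterates
        # concentrate on the recurrent class rather than the whole state space.
        s = draw_next(s, a_star)

Because updates occur only at simulated locations, per-iteration cost is independent of the size of the state space; states outside the recurrent class are visited rarely and never need accurate estimates.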
