Calibrating RBC models - A simple example illustrating the method of successive approximations

Indian Growth and Development Review

ISSN: 1753-8254

Article publication date: 18 April 2008

Citation

Ghate, C. (2008), "Calibrating RBC models - A simple example illustrating the method of successive approximations", Indian Growth and Development Review, Vol. 1 No. 1, pp. 119-124. https://doi.org/10.1108/igdr.2008.35001aab.002

Publisher: Emerald Group Publishing Limited

Copyright © 2008, Emerald Group Publishing Limited



Article Type: Education briefing. From: Indian Growth and Development Review, Volume 1, Issue 1.

Introduction

Since the seminal contribution of Kydland and Prescott (1982), dynamic stochastic general equilibrium (DSGE) models have become the main framework for studying real business cycles (RBC) in advanced economies. The methodological contribution of Kydland and Prescott's approach has had profound implications for the evaluation of macroeconomic models. A "good" model is one that generates data that are consistent with a set of facts that the model is trying to explain. Macroeconomists now take the goodness of fit criterion very seriously when thinking about the accuracy of quantitative models.

It is widely recognized, however, that it is not generally possible to compute equilibria of business cycle models analytically. This led Kydland and Prescott (1982) to consider a structure in which equilibria can be computed. Their insight was that a structure with a quadratic objective function (generated by taking a second-order Taylor approximation of the objective function of the representative agent), linear constraints, and a technology shock following a first-order linear vector auto-regressive process can lead to the computation of equilibria even when there are many state variables. Chapter 2 in Cooley (1995), by Gary Hansen and Edward Prescott, outlines how this method works. For economies in which the second welfare theorem holds (a competitive equilibrium allocation can be attained by a social planner), the social planning problem can be converted into a standard linear quadratic dynamic programming problem and then solved using the method of successive approximations[1].
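To fix ideas, the iteration can be sketched in a few lines for a scalar linear-quadratic problem. The example below is a toy problem of my own (quadratic return, linear law of motion k′ = k + u), not the Kydland-Prescott economy; it illustrates how a quadratic guess for the value function is preserved at every step of the method of successive approximations:

```python
# Toy scalar linear-quadratic problem (an illustration, not the
# Kydland-Prescott model): maximize sum_t beta^t * -(k_t^2 + q*u_t^2)
# subject to k_{t+1} = k_t + u_t. Guessing V_0(k) = 0 and iterating on
# the Bellman equation keeps every V_n(k) = -P_n * k^2 quadratic, with
#   P_{n+1} = 1 + beta * P_n * q / (q + beta * P_n).
beta, q = 0.95, 2.0

P = 0.0
for _ in range(200):
    P_next = 1.0 + beta * P * q / (q + beta * P)
    if abs(P_next - P) < 1e-12:
        break
    P = P_next

# At convergence, P solves the fixed-point (Riccati-type) equation,
# so V(k) = -P * k^2 is the value function of the toy problem.
assert abs(P - (1.0 + beta * P * q / (q + beta * P))) < 1e-10
```

Because the return is quadratic and the constraint linear, each Bellman update maps a quadratic value function into another quadratic, which is exactly why the successive-approximation scheme is computable.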

My experience suggests that a simple analytical example showing how future state variables and decision variables are eliminated using the method of successive approximations greatly helps students understand the general procedure outlined by Hansen and Prescott. Using a 3 × 3 version of their procedure, I work out the mechanism behind: (1) the elimination of a future state variable (such as the future capital stock, k′) and (2) the elimination of a decision variable (such as hours worked, h). The value added in performing this exercise is that students can clearly see how the mechanism behind the procedure, which is quite intricate, actually works.

In what follows, I assume that the matrix R[η(x)], where η(x) denotes the dimension of R, has dimension 3 × 3; hence, η(x) = 3[2]. I use the same notation as Cooley (1995, chapter 2). My discussion of quadratic forms follows Simon and Blume (1994, pp. 289-91). I first discuss some preliminaries[3].

Definition 1 (Simon and Blume, 1994, p. 289). A quadratic form on $\mathbb{R}^n$ is a real-valued function of the form:

$$Q(x_1, \ldots, x_n) = \sum_{i \le j} a_{ij} x_i x_j.$$

We write the general quadratic form $a_{11}x_1^2 + a_{12}x_1x_2 + a_{22}x_2^2$ in $\mathbb{R}^2$ in matrix form as:

$$Q(x_1, x_2) = \begin{bmatrix} x_1 & x_2 \end{bmatrix} \begin{bmatrix} a_{11} & a_{12}/2 \\ a_{12}/2 & a_{22} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}.$$

Similarly, the general quadratic form on $\mathbb{R}^3$ is given by:

$$Q(x_1, x_2, x_3) = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix} \begin{bmatrix} a_{11} & a_{12}/2 & a_{13}/2 \\ a_{12}/2 & a_{22} & a_{23}/2 \\ a_{13}/2 & a_{23}/2 & a_{33} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}.$$

This indicates a basic theorem about representing quadratic forms on $\mathbb{R}^n$.

Theorem 2 (Simon and Blume, 1994, Theorem 13.3, p. 291). The general quadratic form

$$Q(x_1, \ldots, x_n) = \sum_{i \le j} a_{ij} x_i x_j$$

can be written as:

$$Q(x) = x^T A x,$$

where A is a (unique) symmetric matrix. Conversely, if A is a symmetric matrix, then the real-valued function, Q(x) = x^T A x, as above, is a quadratic form.
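Theorem 2 is easy to check numerically. The sketch below (an illustration with arbitrary coefficients of my own, not part of Simon and Blume's text) evaluates a quadratic form on $\mathbb{R}^2$ both directly and through its symmetric matrix representation:

```python
import numpy as np

# Quadratic form Q = a11*x1^2 + a12*x1*x2 + a22*x2^2 on R^2, with
# arbitrary coefficients; in the unique symmetric matrix, each
# off-diagonal entry carries half of the cross-term coefficient.
a11, a12, a22 = 2.0, -3.0, 5.0
A = np.array([[a11,     a12 / 2],
              [a12 / 2, a22    ]])

x = np.array([1.5, -0.4])
direct = a11 * x[0]**2 + a12 * x[0] * x[1] + a22 * x[1]**2

# Both evaluations agree, as Theorem 2 asserts.
assert np.isclose(x @ A @ x, direct)
```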

An example

Eliminating a future state variable

We now utilize the above discussion on quadratic forms to work out how the method of successive approximations eliminates some future state variable and some decision variable. To motivate the discussion, suppose that we want to eliminate k′, the future capital stock. The law of motion of k′ is given by:

$$k' = (1 - \delta)k + i. \qquad (1)$$

The variable notations are standard: δ ∈ [0, 1] denotes the depreciation rate on capital, k′ is the period t + 1 capital stock, and i denotes the amount of investment undertaken by the representative agent in the economy. As described by Hansen and Prescott (1995, pp. 46-50), we can express k′ as a linear combination of the current states (the technology shock, z, and the capital stock, k) and the current decision variables (h, denoting hours worked if the labour-leisure choice is endogenous, and i). In our case we have:

$$k' = 0 \cdot z + (1 - \delta) \cdot k + 0 \cdot h + 1 \cdot i.$$

More generally, denote x to be the vector of both current and future state as well as decision variables, where xj is the jth component of x. If xj is some future state variable (from Step 3, Cooley, 1995, p. 49), we can write:

$$x_j = \sum_{i < j} c_i x_i,$$

or as a linear combination solely of the current states and current decision variables. Now assume that x is a 1 × 3 vector, i.e. x = [x1, x2, x3]. The quadratic form is then[4]:

$$Q(x) = a_{11}x_1^2 + a_{22}x_2^2 + a_{33}x_3^2 + 2a_{12}x_1x_2 + 2a_{13}x_1x_3 + 2a_{23}x_2x_3, \qquad (2)$$

where $a_{ih}$ denotes the $(i, h)$ entry of the symmetric matrix $R[3]$. Since j = 3, we have:

$$x_3 = c_1 x_1 + c_2 x_2. \qquad (3)$$

We are interested in eliminating the last component of the vector x, x3, which we assume to be some future state variable. Using Equation (3), we simply substitute the expression for x3 into the quadratic form in Equation (2) and consolidate terms. This reduces the dimension of the quadratic form by one, since we have now gotten rid of x3 and have everything in terms of x1 and x2:

$$Q = b_{11}x_1^2 + 2b_{12}x_1x_2 + b_{22}x_2^2,$$

where:

$$b_{11} = a_{11} + 2a_{13}c_1 + a_{33}c_1^2,$$
$$b_{12} = a_{12} + a_{13}c_2 + a_{23}c_1 + a_{33}c_1c_2,$$
$$b_{22} = a_{22} + 2a_{23}c_2 + a_{33}c_2^2.$$

Thus, we have reduced the dimension of the matrix R from [3 × 3] to [2 × 2]. It remains to be checked whether the expressions for b11, b12, and b22 are identical to the general formulas given by Hansen and Prescott in Step 3 (Cooley, 1995, p. 48, equation (17)):

$$b_{ih} = a_{ih} + a_{ij} c_h + c_i a_{jh} + c_i a_{jj} c_h, \qquad i, h = 1, \ldots, j - 1.$$

For b22, we have i = 2, h = 2, j − 1 = 2, and j = 3. This gives us,

$$b_{22} = a_{22} + a_{23} c_2 + c_2 a_{32} + c_2 a_{33} c_2 = a_{22} + 2a_{23} c_2 + a_{33} c_2^2.$$

Similarly, for b11 we have,

$$b_{11} = a_{11} + a_{13} c_1 + c_1 a_{31} + c_1 a_{33} c_1 = a_{11} + 2a_{13} c_1 + a_{33} c_1^2.$$

Likewise, one can work out the case for b12. Note that each of the above expressions for $b_{ih}$, $i, h = 1, 2$, is identical to the expression obtained when we substituted out for x3 (in terms of x1 and x2) above. Now that x3 is eliminated, the R matrix has dimension [2 × 2]. This is essentially the mechanism behind the elimination of a future state. Hansen and Prescott also suggest (see Equation (18), p. 48) that there is a simpler way of eliminating a future state and reducing R from a [η(x) × η(x)] matrix to a [(η(x) − 1) × (η(x) − 1)] matrix. This involves defining:

$$R[j - 1] = \Gamma^T R[j]\, \Gamma,$$

where:

$$\Gamma = \begin{bmatrix} I_{j-1} \\ c \end{bmatrix}, \qquad c = \begin{bmatrix} c_1 & \cdots & c_{j-1} \end{bmatrix},$$

$I_{j-1}$ is a (j − 1)-dimensional identity matrix, and R[j] corresponds to a [j × j] dimensional matrix. Assuming that j = 3 (the third element, x3, needs to be eliminated), this gives us R[2] = Γ^T R[3] Γ. We also have,

$$\Gamma = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ c_1 & c_2 \end{bmatrix},$$

which implies,

$$\Gamma^T = \begin{bmatrix} 1 & 0 & c_1 \\ 0 & 1 & c_2 \end{bmatrix}.$$

Then,

$$\Gamma^T R[3]\, \Gamma = \begin{bmatrix} 1 & 0 & c_1 \\ 0 & 1 & c_2 \end{bmatrix} \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{12} & a_{22} & a_{23} \\ a_{13} & a_{23} & a_{33} \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ c_1 & c_2 \end{bmatrix},$$

which, after some algebra, can be shown to yield:

$$R[2] = \begin{bmatrix} b_{11} & b_{12} \\ b_{12} & b_{22} \end{bmatrix},$$

where $b_{ih}$, $i, h = 1, 2$, are defined above. Once again, we reduce R from a [3 × 3] to a [2 × 2] matrix.
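The equivalence of the two reductions is easy to verify numerically. In the sketch below (the numerical values of R[3] and of c1, c2 are arbitrary choices of mine), Γ is formed for j = 3 and Γ^T R[3] Γ is checked against the b coefficients obtained by direct substitution:

```python
import numpy as np

# An arbitrary symmetric R[3] (entries a_ih) and coefficients c1, c2
# expressing the future state as x3 = c1*x1 + c2*x2.
a = np.array([[2.0, 0.5,  1.0],
              [0.5, 3.0,  0.2],
              [1.0, 0.2, -4.0]])
c1, c2 = 0.7, 0.3

# Gamma stacks the identity I_{j-1} on top of the row [c1, c2].
Gamma = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [c1,  c2 ]])

# One-step reduction of R from 3 x 3 to 2 x 2.
R2 = Gamma.T @ a @ Gamma

# The same coefficients obtained by substituting x3 out by hand.
b11 = a[0, 0] + 2 * a[0, 2] * c1 + a[2, 2] * c1**2
b12 = a[0, 1] + a[0, 2] * c2 + a[1, 2] * c1 + a[2, 2] * c1 * c2
b22 = a[1, 1] + 2 * a[1, 2] * c2 + a[2, 2] * c2**2

assert np.allclose(R2, np.array([[b11, b12], [b12, b22]]))
```

The matrix product Γ^T R[3] Γ and the hand substitution produce the same [2 × 2] matrix, which is the point of Hansen and Prescott's shortcut.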

Eliminating a decision variable

Now suppose that the variable to be eliminated, xj, is a decision variable. Once again, we assume that j = 3, although x3 is now a decision variable. Hansen and Prescott (1995, p. 49, Equation (19)) provide the formula for the optimal value of xj implied by the relevant first-order condition:

$$x_j = -\frac{1}{a_{jj}} \sum_{i \ne j} a_{ji} x_i,$$

which implies,

$$x_3 = -\frac{a_{13} x_1 + a_{23} x_2}{a_{33}}. \qquad (4)$$

However, it may not be clear to the reader where the closed-form expression for x3 comes from. To see this, simply take the partial derivative of Q( · ) in Equation (2) with respect to x3 and set it equal to zero:

$$\frac{\partial Q}{\partial x_3} = 2\left(a_{13} x_1 + a_{23} x_2 + a_{33} x_3\right) = 0,$$

which gives Equation (4). Note that the second-order condition is satisfied if a33 < 0. If the return function is strictly concave, then this condition will be satisfied.
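The first-order condition can also be checked numerically. In the sketch below (again with arbitrary numbers of my own, chosen so that a33 < 0), the closed-form choice of x3 indeed maximizes the quadratic form over the decision variable:

```python
import numpy as np

# Arbitrary symmetric R[3] with a33 < 0, so the quadratic form is
# strictly concave in the decision variable x3.
a = np.array([[2.0, 0.5,  1.0],
              [0.5, 3.0,  0.2],
              [1.0, 0.2, -4.0]])
x1, x2 = 1.5, -0.8  # arbitrary values of the current state

def Q(x3):
    x = np.array([x1, x2, x3])
    return x @ a @ x

# Closed-form solution of the first-order condition for x3.
x3_star = -(a[0, 2] * x1 + a[1, 2] * x2) / a[2, 2]

# Any perturbation away from x3_star lowers the quadratic form.
assert Q(x3_star) > Q(x3_star + 0.1)
assert Q(x3_star) > Q(x3_star - 0.1)
```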

In sum, the value added in performing this exercise is to see clearly how the mechanism behind the method of successive approximations works. We do this by considering a 3 × 3 case of the procedure worked out by Hansen and Prescott (1995). We focus our attention, however, just on the elimination of a future state variable and a decision variable. My experience is that working out this example as a supplement to describing the method of successive approximations greatly enhances a student's comprehension of the procedure outlined by Hansen and Prescott. It also helps students clearly understand the computer code when calibrating RBC models using this technique[5].

Chetan Ghate, Planning Unit, Indian Statistical Institute, New Delhi, India. cghate@isid.ac.in

Notes

  1.

    Two excellent books that cover a variety of approaches to calibrating and estimating DSGE models are Heer and Maussner (2005) and DeJong and Dave (2007).

  2.

    The R[η(x)] matrix contains the terms obtained from the Taylor approximation of the return function (evaluated at the steady-state values of the current state and decision variables) and the initial guess for the value function. See Cooley (1995, chapter 2, p. 47) for more details.

  3.

    The reader is referred to Cooley (1995, chapter 2) for a detailed discussion of the definition of variables and notation that follows.

  4.

    The quadratic form is given by $x^T R[3]\, x$.

  5.

    A useful site with detailed listings of computer code for RBC and other quantitative macroeconomic models is http://dge.repec.org/codes.html

References

Heer, B. and Maussner, A. (2005), Dynamic General Equilibrium Modeling: Computational Methods and Applications, Springer-Verlag, Berlin.

Cooley, T. (Ed.) (1995), Frontiers of Business Cycle Research, Princeton University Press, Princeton, NJ.

DeJong, D.N. and Dave, C. (2007), Structural Macroeconometrics, Princeton University Press, Princeton, NJ.

Hansen, G. and Prescott, E. (1995), "Recursive methods for computing equilibria of business cycle models", in Cooley, T. (Ed.), Frontiers of Business Cycle Research, Princeton University Press, Princeton, NJ, pp. 39-64.

Kydland, F. and Prescott, E. (1982), "Time to build and aggregate fluctuations", Econometrica, Vol. 50, No. 6, pp. 1345-70.

Simon, C. and Blume, L. (1994), Mathematics for Economists, W.W. Norton and Company, Inc., New York, NY.