Measurement Error: Consequences, Applications and Solutions: Volume 24

Table of contents

(17 chapters)

In the late 1980s, I attended a briefing for the Federal Reserve's Board of Governors prior to a meeting of its Federal Open Market Committee. Although employed by the St. Louis Fed, I was spending a week “behind the scenes,” observing how information was assembled and delivered by staff in Washington. During that briefing, one of the senior staff mentioned that the most recent unemployment figure had changed by one-tenth of one percent. Manley Johnson, the Fed's Vice Chairman, then asked an obvious question: “What's the standard error of that measurement?” Some junior member of the staff said “point six” or, in other words, that any change in the unemployment rate within six-tenths of one percent would fall within the measurement error of the raw data. After an appropriate amount of chuckling rippled through the room at Governor Johnson's important insight, everyone went back to discussing how the recent 0.1 change in the measured unemployment rate should affect the looming monetary policy decision. And so it goes in the world of empirical macroeconomics and the sausage factory that is policy making at the Fed.

While many empirical studies have claimed that political futures markets can forecast better than the polls, it is unclear which price the forecast should be based on. Standard practice suggests using the closing price of the market, as a reflection of the continuous process of information revelation and aggregation, but it is not clear that this practice carries over to thin markets. In this chapter, we propose a number of reconstructions of the price series and use the closing price of these reconstructed series as the forecast. We then test these ideas by comparing their forecasting performance with that of the closing price of the original series. We find that forecasting accuracy improves when the closing price is taken from a smoothed series rather than the original series. However, there is no clear advantage to using more sophisticated smoothing techniques, such as wavelets, or to using external information, such as trading volume and duration time. The results show that the median, the simplest smoothing technique, performs rather well compared with all of these more elaborate alternatives.
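
A minimal sketch of the core idea, using entirely synthetic prices (the window length and noise scale are illustrative assumptions, not the chapter's specification): take the closing value of a rolling-median-smoothed series as the forecast instead of the raw close.

```python
# Sketch: median-smoothed closing price as a forecast for a thin market.
# All data are synthetic; the window length is an illustrative assumption.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic thin-market price path: a "true" event probability plus noisy trades.
true_prob = 0.55
prices = pd.Series((true_prob + 0.08 * rng.standard_normal(200)).clip(0, 1))

raw_forecast = prices.iloc[-1]                          # closing price of the raw series
smooth = prices.rolling(window=15, min_periods=1).median()
smoothed_forecast = smooth.iloc[-1]                     # closing price of the smoothed series

for label, f in [("raw close", raw_forecast), ("median-smoothed close", smoothed_forecast)]:
    print(f"{label}: {f:.3f}  abs. error vs truth: {abs(f - true_prob):.3f}")
```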

This chapter examines factors that cause violations of regularity conditions and biases in estimates of substitution. In the context of the Fourier demand system, failing to impose curvature restrictions while correcting for serial correlation results in few violations of the curvature conditions. In contrast, imposing curvature restrictions without correcting for serial correlation biases substitution estimates and can cause violations of monotonicity. For serially correlated data, the results suggest that correcting for serial correlation may be more important than imposing curvature. Furthermore, artificially break-adjusted data that are inconsistent with consumer optimization can severely bias estimates. Results from the Bank of England's (BOE) preferred non-break-adjusted data establish that money and goods are substitutes in demand.
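
As a generic illustration of what a curvature check involves (not the chapter's Fourier estimates), one can test whether an estimated Slutsky substitution matrix is negative semidefinite at a data point; the matrix below is made up.

```python
# Sketch: checking the curvature condition at one observation by testing
# whether an estimated Slutsky substitution matrix is negative semidefinite.
# The matrix is a hypothetical example, not estimates from the chapter.
import numpy as np

S = np.array([[-0.9,  0.3,  0.2],
              [ 0.3, -0.7,  0.1],
              [ 0.2,  0.1, -0.5]])   # symmetric substitution matrix

eigenvalues = np.linalg.eigvalsh(S)
curvature_ok = np.all(eigenvalues <= 1e-8)   # negative semidefinite up to rounding
print("eigenvalues:", np.round(eigenvalues, 3), "curvature satisfied:", curvature_ok)
```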

In this chapter, we analyze the information content of data on inflationary expectations derived from the Israeli bond market. The results indicate that these expectations are unbiased and efficient with respect to the variables considered. In other words, we cannot reject the hypothesis that these expectations are rational.

The existence of continuous data of this type, which are unique to the Israeli economy, enables us to test a number of hypotheses concerning the nature of price adjustment. The study found that expected inflation is a primary factor in explaining current inflation. This result is in agreement with the neo-Keynesian approach, according to which the adjustment of prices is costly and, as a result, price increases in the present are determined primarily by expectations of future price increases. It was also found that inflation in Israel is better explained by the neo-Keynesian approach than by the classical approach or the “lack of information” approach, according to which current inflation is determined by past, rather than current, inflationary expectations.

Another issue examined in this study is whether inflationary inertia existed in Israel during the 1990s. Conventional estimation of an inflation equation (i.e., using future inflation as a proxy for expectations) gives the impression that there was strong inflationary inertia during this period. However, when data on inflationary expectations from the bond market were used in the estimation, this inertia (i.e., the coefficient on lagged inflation) became negative (and insignificant). This finding raises the possibility that the inflationary inertia found elsewhere is not a structural phenomenon but an outcome of the lack of reliable data on inflationary expectations.
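
A minimal sketch of the unbiasedness and efficiency test described in this abstract: regress realized inflation on the market-implied expectation and jointly test an intercept of zero and a slope of one. The data here are simulated, not the Israeli bond-market series.

```python
# Sketch: the standard rationality (unbiasedness) regression
#   pi_t = a + b * E[pi_t] + e_t,  jointly testing a = 0, b = 1.
# All figures are simulated under the null of rational expectations.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
expected = 2.0 + rng.standard_normal(120)            # bond-market-implied expectations
actual = expected + 0.5 * rng.standard_normal(120)   # realized inflation

X = sm.add_constant(expected)
fit = sm.OLS(actual, X).fit()
print(fit.params)                        # intercept should be near 0, slope near 1
print(fit.f_test("const = 0, x1 = 1"))   # joint rationality test
```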

The problem of measurement error in the national accounts has been recognized for a long time. The errors arise chiefly from the various source data and from the timing of the flow of data received from providers. This chapter first discusses the types of measurement error confronted by statistical agencies. Second, it presents a model of agency behavior that illustrates the trade-offs that must be made in dealing with such errors. Third, the chapter discusses how the quality of the estimates can be gauged given measurement error and the inability to conduct standard statistical tests. Although the focus is on the production of U.S. Gross Domestic Product, the principles are applicable to all national statistical agencies.
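
One common way to gauge such estimates, sketched here with simulated figures (this is a standard "news vs. noise" revision regression in the spirit of Mankiw and Shapiro, not necessarily the chapter's method): if the preliminary estimate is an efficient forecast, the subsequent revision should be uncorrelated with it.

```python
# Sketch: regress the revision on the preliminary estimate. A significant
# slope points to noise (measurement error) rather than news. Simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
final = 2.5 + rng.standard_normal(80)                 # "final" GDP growth
preliminary = final + 0.4 * rng.standard_normal(80)   # noisy early estimate
revision = final - preliminary

fit = sm.OLS(revision, sm.add_constant(preliminary)).fit()
print(fit.summary().tables[1])
```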

A new nonparametric procedure is developed to evaluate the significance of violations of weak separability. The procedure correctly detects weak separability with high probability in simulated data whose violations of weak separability are caused by added measurement error. Results are not very sensitive to misspecification of the amount of measurement error by the researcher. The methodology also correctly rejects weak separability for nonseparable simulated data. We fail to reject weak separability for a monetary and consumption data set that has violations of revealed preference, which suggests that measurement error may be the source of the observed violations.
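
The building block behind such revealed-preference procedures is a GARP check. A Varian-style sketch with toy prices and quantities (not the chapter's monetary data):

```python
# Sketch of a nonparametric GARP check: build the direct revealed-preference
# relation, take its transitive closure, and look for cycles with a strict leg.
import numpy as np

p = np.array([[1.0, 2.0], [2.0, 1.0], [1.5, 1.5]])  # prices, T x goods
q = np.array([[3.0, 1.0], [1.0, 3.0], [2.0, 2.0]])  # chosen bundles

T = len(p)
cost = p @ q.T                          # cost[t, s] = cost of bundle s at prices t
own = np.diag(cost)                     # cost of each chosen bundle
direct = cost <= own[:, None] + 1e-12   # t directly revealed preferred to s

# Transitive closure (Warshall) gives the full revealed-preference relation.
R = direct.copy()
for k in range(T):
    R |= R[:, [k]] & R[[k], :]

# GARP: if t R s, then s must not be strictly directly preferred to t.
strict = cost < own[:, None] - 1e-12
violations = [(t, s) for t in range(T) for s in range(T) if R[t, s] and strict[s, t]]
print("GARP violations:", violations or "none")
```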

We study the effect of errors-in-variables (EV) on cointegration tests and cointegrating regressions. It turns out that the rate of convergence of static ordinary least squares (OLS) estimators is not affected by EV, whereas the limiting distribution does change. However, procedures accounting for short-run dynamics correct for EV at the same time and hence are robust to measurement errors. This is established asymptotically, and the relevance of our findings for finite samples is confirmed through computer experiments. Although our analysis is restricted to selected procedures, we indicate how our results extend to related statistical techniques.
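
A sketch of the kind of computer experiment involved (sample size and noise scale are illustrative choices): simulate a cointegrated pair, contaminate the regressor with stationary measurement error, and compare static OLS slopes.

```python
# Sketch: superconsistency of static OLS survives errors-in-variables.
import numpy as np

rng = np.random.default_rng(3)
T = 5000
x = np.cumsum(rng.standard_normal(T))         # I(1) regressor
y = 1.0 + 2.0 * x + rng.standard_normal(T)    # cointegrated with slope 2
x_obs = x + 3.0 * rng.standard_normal(T)      # stationary measurement error

for name, reg in [("true x", x), ("noisy x", x_obs)]:
    slope = np.polyfit(reg, y, 1)[0]
    print(f"static OLS slope with {name}: {slope:.3f}")
# Both slopes converge to 2; EV changes the limiting distribution, and
# hence inference, rather than the rate of convergence.
```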

Weak separability is an important concept in many fields of economic theory. This chapter uses Monte Carlo experiments to investigate the performance of newly developed nonparametric revealed preference tests for weak separability. A main finding is that the bias of the sequentially implemented test for weak separability proposed by Fleissig and Whitney (2003) is low. The theoretically unbiased Swofford and Whitney (1994) test performs better than all sequentially implemented test procedures but suffers from an empirical bias, most likely because of the complexity of executing the test procedure. As a further source of information, we also perform sensitivity analyses on the nonparametric revealed preference tests; the Fleissig and Whitney test appears to be sensitive to measurement error in the data.

In this chapter the author studies the capital market efficiency hypothesis and checks whether stock price adjustment dynamics are instantaneous, continuous, and linear. In particular, the author analyzes the evolution of stock prices while taking into account the presence of transaction costs, the coexistence of heterogeneous investors, and the interdependence between stock markets. On the one hand, he provides strong evidence that the efficiency hypothesis is rejected. On the other hand, he shows that the stock index adjustment is discontinuous, asymmetrical, and nonlinear. Using threshold cointegration techniques, he proposes a new nonlinear model of the CAC40 adjustment dynamics that not only replicates the French market's adjustment dynamics in the presence of market frictions but also captures the interdependence between the French and American stock markets, highlighting the reaction of French shareholders to changes in the behavior of American speculators.
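
A minimal sketch of the threshold idea with a simulated equilibrium error (the band width and grid are illustrative, not the CAC40 estimates): the deviation mean-reverts only when it is large, mimicking transaction costs, and the threshold is estimated by grid search.

```python
# Sketch: two-regime threshold autoregression on an equilibrium error,
# the core of threshold cointegration. Data are simulated.
import numpy as np

rng = np.random.default_rng(4)
T, tau = 1000, 1.0
z = np.zeros(T)
for t in range(1, T):
    rho = 0.6 if abs(z[t - 1]) > tau else 1.0   # correction only outside the band
    z[t] = rho * z[t - 1] + 0.3 * rng.standard_normal()

# Grid-search the threshold by minimizing the SSR of the piecewise AR(1).
best = None
for c in np.quantile(np.abs(z[:-1]), np.linspace(0.15, 0.85, 50)):
    outside = np.abs(z[:-1]) > c
    ssr = 0.0
    for mask in (outside, ~outside):
        if mask.sum() > 10:
            rho_hat = np.dot(z[:-1][mask], z[1:][mask]) / np.dot(z[:-1][mask], z[:-1][mask])
            ssr += np.sum((z[1:][mask] - rho_hat * z[:-1][mask]) ** 2)
    if best is None or ssr < best[0]:
        best = (ssr, c)
print(f"estimated threshold: {best[1]:.2f} (true band: {tau})")
```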

Revealed preference axioms provide a simple way of testing data from consumers or firms for consistency with optimizing behavior. The resulting nonparametric tests are very attractive, since they do not require any ad hoc functional form assumptions. A weakness of such tests, however, is that they are nonstochastic. In this chapter, we provide a detailed analysis of two nonparametric approaches that can be used to derive statistical tests for utility maximization that account for random measurement errors in the observed data. These same approaches can also be used to derive tests for separability of the utility function.

In this chapter, I examine the problems created by incorrectly using a simple sum monetary aggregate (SSUM) to measure the monetary stock. Specifically, I show that the SSUM confounds the current stock of money (CSM) with the investment stock of money (ISM) and that this confounding leads the SSUM to report an artificially smooth monetary stock. This smoothing causes important information about the dynamic movements of the monetary stock to be lost, which may offer at least a partial explanation of why so many studies find that money has little economic relevance. To that end, I conclude the chapter by examining a reduced-form, backward-looking IS equation to determine whether monetary aggregates contain information about the real GDP gap. This chapter differs from previous work on monetary aggregation in that it focuses on the smoothing of the monetary stock data caused by the simple sum methodology, whereas previous work focuses on the bias exhibited by SSUMs.
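
For contrast, a sketch of the mechanics of a simple sum aggregate versus a Divisia (Törnqvist) index for two components; the quantities and user costs are made up purely to show the computation, not to reproduce the chapter's results.

```python
# Sketch: simple sum vs. Divisia aggregation of two monetary components.
import numpy as np

rng = np.random.default_rng(5)
T = 40
m = np.abs(100 + np.cumsum(rng.standard_normal((T, 2)) * [1.0, 4.0], axis=0))  # quantities
u = np.abs(0.02 + 0.01 * rng.standard_normal((T, 2)))                          # user costs

ssum = m.sum(axis=1)   # simple sum: treats components as perfect substitutes

shares = (u * m) / (u * m).sum(axis=1, keepdims=True)    # expenditure shares
avg_shares = 0.5 * (shares[1:] + shares[:-1])
divisia_growth = np.sum(avg_shares * np.diff(np.log(m), axis=0), axis=1)

print("std of simple-sum growth:", np.diff(np.log(ssum)).std().round(4))
print("std of Divisia growth:   ", divisia_growth.std().round(4))
```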

This chapter presents a model of distribution dynamics in the presence of measurement error in the underlying data. Studies of international growth convergence generally ignore the fact that per capita income data from the Penn World Table (PWT) are not only continuous variables but are also measured with error. Together with short-time-scale fluctuations, measurement error makes inferences potentially unreliable. When first-order, time-homogeneous Markov models are fitted to continuous data with measurement error, a bias towards excess mobility is introduced into the estimated transition probability matrix. This chapter evaluates different methods of accounting for this error. An EM algorithm is used for parameter estimation, and the methods are illustrated using data from the PWT Mark 6.1. Measurement error in income data is found to have quantitatively important effects on distribution dynamics. For instance, purging the data of measurement error reduces estimated transition intensities by between one-fifth and four-fifths and more than halves the observed mobility of countries.
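
A sketch of the excess-mobility bias itself, with a simulated panel (the chain, bins, and noise level are illustrative, not the PWT calibration): the same transition matrix is estimated from latent and noise-contaminated data.

```python
# Sketch: observation noise pushes mass off the diagonal of an estimated
# transition matrix, overstating mobility. Simulated income panel.
import numpy as np

rng = np.random.default_rng(6)
N, T = 2000, 30
income = np.cumsum(0.1 * rng.standard_normal((N, T)), axis=1)  # latent log income
noisy = income + 0.15 * rng.standard_normal((N, T))            # measured with error

def transition_matrix(panel, n_bins=3):
    edges = np.quantile(panel, np.linspace(0, 1, n_bins + 1)[1:-1])
    states = np.digitize(panel, edges)
    P = np.zeros((n_bins, n_bins))
    for a, b in zip(states[:, :-1].ravel(), states[:, 1:].ravel()):
        P[a, b] += 1
    return P / P.sum(axis=1, keepdims=True)

for label, panel in [("latent", income), ("noisy", noisy)]:
    P = transition_matrix(panel)
    print(f"{label}: mean staying probability = {np.diag(P).mean():.3f}")
```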

This chapter introduces a mechanism for generating a series of rules that characterize the money-price relationship for the United States, defined as the relationship between the rate of growth of the money supply and inflation. Monetary Services Indicator (MSI) component data are used to train a selection of candidate feedforward neural networks. The selected network is mined for rules, expressed in human-readable and machine-executable form. Rule and network accuracy are compared, and expert commentary is offered on the readability and reliability of the extracted rule set. The ultimate goal of this research is to produce rules that meaningfully and accurately describe inflation in terms of the MSI component dataset. (Paper cleared for public release, AFRL/WS–07–0848.)
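
One common rule-extraction recipe, sketched with synthetic data (the MSI component inputs and the chapter's rule format are not reproduced; the surrogate-tree step is a standard pedagogical technique, not necessarily the chapter's): fit a feedforward network, then train a shallow decision tree on the network's own predictions and read the tree off as human-readable rules.

```python
# Sketch: feedforward network + decision-tree surrogate for rule extraction.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(7)
money_growth = rng.uniform(0, 10, size=(500, 1))
inflation = 0.8 * money_growth.ravel() - 1.0 + 0.5 * rng.standard_normal(500)

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(money_growth, inflation)

surrogate = DecisionTreeRegressor(max_depth=2, random_state=0)
surrogate.fit(money_growth, net.predict(money_growth))   # mimic the network
print(export_text(surrogate, feature_names=["money_growth"]))
```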

DOI: 10.1108/S0731-9053(2009)24
Publication date: 2009
Book series: Advances in Econometrics
Editors:
Series copyright holder: Emerald Publishing Limited
ISBN: 978-1-84855-902-8
eISBN: 978-1-84855-903-5
Book series ISSN: 0731-9053