Search results
Douglas Miller and Sang-Hak Lee
Abstract
In this chapter, we use the minimum cross-entropy method to derive an approximate joint probability model for a multivariate economic process based on limited information about the marginal quasi-density functions and the joint moment conditions. The modeling approach is related to joint probability models derived from copula functions, but we note that the entropy approach has some practical advantages over copula-based models. Under suitable regularity conditions, the quasi-maximum likelihood estimator (QMLE) of the model parameters is consistent and asymptotically normal. We demonstrate the procedure with an application to the joint probability model of trading volume and price variability for the Chicago Board of Trade soybean futures contract.
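The exponential-tilting structure behind such minimum cross-entropy models can be illustrated with a small discrete sketch. This is not the chapter's model: the variables, support points, and moment target below are hypothetical, and only a single cross-moment constraint is imposed, so fitting reduces to one-dimensional root finding.

```python
import numpy as np

def min_cross_entropy_joint(x, px, y, py, target_xy, tol=1e-10):
    """Joint pmf closest (in KL divergence) to the independence product
    px ⊗ py subject to the cross-moment constraint E[XY] = target_xy.
    The minimizer has the exponential-tilting form
        p_ij ∝ px_i * py_j * exp(lam * x_i * y_j),
    so only the scalar lam must be found; the tilted moment is monotone
    in lam, so bisection suffices."""
    def tilted(lam):
        w = np.outer(px, py) * np.exp(lam * np.outer(x, y))
        w /= w.sum()
        return (w * np.outer(x, y)).sum(), w

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        m, _ = tilted(mid)
        if m < target_xy:
            lo = mid
        else:
            hi = mid
    return tilted(0.5 * (lo + hi))[1]
```

With more moment conditions the same structure holds, with one tilting parameter per condition solved jointly rather than by scalar bisection.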
Abstract
Sampling units for the 2013 Methods-of-Payment survey were selected through an approximate stratified two-stage sampling design. To compensate for nonresponse and noncoverage and ensure consistency with external population counts, the observations are weighted through a raking procedure. We apply bootstrap resampling methods to estimate the variance, allowing for randomness from both the sampling design and raking procedure. We find that the variance is smaller when estimated through the bootstrap resampling method than through the naive linearization method, where the latter does not take into account the correlation between the variables used for weighting and the outcome variable of interest.
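A stripped-down sketch of the bootstrap idea (hypothetical data; a single post-stratification margin stands in for the full raking procedure): redo the weighting step inside every bootstrap replicate, so the variance estimate reflects randomness from both the sampling and the weighting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical survey: outcome y, post-stratification cell g (e.g. an age
# group), with known population shares for each cell from external counts.
pop_share = np.array([0.5, 0.3, 0.2])
g = rng.integers(0, 3, size=400)
y = 2.0 + 0.5 * g + rng.normal(size=400)

def poststrat_mean(y, g, pop_share):
    """Re-weight each cell to its known population share (a one-margin
    raking step), then take the weighted mean."""
    w = np.zeros_like(y)
    for c, share in enumerate(pop_share):
        idx = g == c
        w[idx] = share / idx.mean()   # weight = population share / sample share
    return np.average(y, weights=w)

est = poststrat_mean(y, g, pop_share)

# Bootstrap: resample units, then redo the weighting inside each replicate.
B = 500
reps = np.empty(B)
for b in range(B):
    idx = rng.integers(0, len(y), size=len(y))
    reps[b] = poststrat_mean(y[idx], g[idx], pop_share)
boot_var = reps.var(ddof=1)
```

A naive variance that treats the weights as fixed would miss the correlation between the weighting variables and the outcome; repeating the weighting per replicate is what captures it.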
Andrew H. Chen, James A. Conover and John W. Kensinger
Abstract
Analysis of Information Options offers new tools for evaluating investments in research, mineral exploration, logistics, energy transmission, and other information operations. With Information Options, the underlying assets are information assets and the rules governing exercise are based on the realities of the information realm (infosphere). Information Options can be modeled as options to “purchase” information assets by paying the cost of the information operations involved. Information Options arise at several stages of value creation. The initial stage involves observation of physical phenomena with accompanying data capture. The next refinement is to organize the data into structured databases. Then bits of information are selected from storage and synthesized into an information product (such as a management report). Next, the information product is presented to the user via an efficient interface that does not require the user to be a field expert. Information Options are similar in concept to real options but substantially different in their details, since real options have physical objects as the underlying assets and the rules governing exercise are based on the realities of the physical world. Also, while exercising a financial option typically kills the option, Information Options may include multiple exercises. Information Options may involve high volatility or jump processes as well, further enhancing their value. This chapter extends several important real option applications into the information realm, including jump process models and models for valuing options to synthesize any of n information items into any of m output assets.
Stephen M. Stohs and Jeffrey T. LaFrance
Abstract
A common feature of certain kinds of data is a high level of statistical dependence across space and time. This spatial and temporal dependence contains useful information that can be exploited to significantly reduce the uncertainty surrounding local distributions. This chapter develops a methodology for inferring local distributions that incorporates these dependencies. The approach accommodates active learning over space and time, and from aggregate data and distributions down to disaggregate individual data and distributions. We combine two data sets on Kansas winter wheat yields: annual county-level yields from 1947 through 2000 for all 105 counties in the state, and 20,720 individual farm-level sample moments based on ten years of reported actual production histories for farmers participating in the United States Department of Agriculture Federal Crop Insurance Corporation Multiple Peril Crop Insurance Program in each of the years 1991–2000. We derive a learning rule that combines statewide, county, and local farm-level data using Bayes’ rule to estimate the moments of individual farm-level crop yield distributions. Information theory and the maximum entropy criterion are then used to estimate farm-level crop yield densities from these moments. These posterior densities are found to substantially reduce the bias and volatility of crop insurance premium rates.
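The learning-rule step can be sketched with a conjugate normal update (a simplification of the chapter's rule; all numbers are hypothetical): shrink a farm's sample mean yield toward the county-level prior in proportion to precision.

```python
def posterior_mean_moment(prior_mean, prior_var, sample_mean, sample_var, n):
    """Normal-normal Bayes update: combine a county-level prior for mean
    yield with a farm's n-year sample mean, weighting each by its
    precision. (A simplified stand-in for the chapter's learning rule.)"""
    prec_prior = 1.0 / prior_var
    prec_data = n / sample_var            # precision of the sample mean
    post_var = 1.0 / (prec_prior + prec_data)
    post_mean = post_var * (prec_prior * prior_mean + prec_data * sample_mean)
    return post_mean, post_var
```

On the density step: given only a mean and variance, the maximum-entropy density matching those two moments is the normal with exactly those moments; with higher-order moments, as used in the chapter, the maximum-entropy solution is a richer exponential-family density.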
Andrew H. Chen, James A. Conover and John W. Kensinger
Abstract
Option models have provided insight into the value of flexibility to switch from one state to another (such as switching a mine or refinery from operating to closed status). More complex flexible processes offer multiple possibilities for switching states. A fabrication facility, for example, may offer options to shift from the current status to any of several alternatives (reflecting reconfiguration of basic facilities to accommodate different operating processes with different outputs). New algorithms enable practical application of complex option pricing models to flexible facilities, improving analysts’ ability to draw sound conclusions about the effects of flexibility and innovativeness on share value. Such models also apply for options with information items as the underlying assets. Information organizations such as oil exploration and development companies may include options to shift from the current capability to any of several alternatives reflecting added abilities to handle new information sources or apply the organization’s talents in new ways. In the case of either physical or information processing, careful attention to estimating the matrix of correlations among the values of potential alternative states allows explicit integration of financial analysis and strategic analysis – especially the influence of substitutes and the anticipated reactions of competitors, suppliers, and potential new entrants.
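The simplest version of such a switching model can be sketched by backward induction on a binomial price tree: each period the facility may switch between idle and operating at a cost, then earns the operating margin if open. All parameter values below are illustrative, and a real application would use more states and a calibrated price process.

```python
import numpy as np

# Two-state switching option (0 = idle, 1 = operating) on a binomial tree.
T, u, d, p, r = 20, 1.1, 1 / 1.1, 0.5, 0.02
p0, unit_cost, switch_cost = 10.0, 9.0, 2.0
disc = 1.0 / (1.0 + r)

# V[t, j, s]: value at time t, node j (j up-moves so far), current state s;
# row T + 1 stays zero as the terminal condition.
V = np.zeros((T + 2, T + 2, 2))
for t in range(T, -1, -1):
    for j in range(t + 1):
        price = p0 * u**j * d**(t - j)
        cont = disc * (p * V[t + 1, j + 1] + (1 - p) * V[t + 1, j])
        for s in (0, 1):
            # choose next state s2: pay the switch cost if s2 != s, then
            # collect the operating margin if s2 == 1
            V[t, j, s] = max(
                (price - unit_cost) * s2 - switch_cost * (s2 != s) + cont[s2]
                for s2 in (0, 1)
            )

value_if_operating = V[0, 0, 1]
value_if_idle = V[0, 0, 0]
```

With n alternative configurations instead of two, the inner maximization runs over an n-by-n matrix of switching costs, which is where the correlation structure among alternative states enters.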
Whayoung Jung and Ji Hyung Lee
Abstract
This chapter studies the dynamic responses of conditional quantiles and their applications in macroeconomics and finance. The authors build a multi-equation autoregressive conditional quantile model and propose a new construction of quantile impulse response functions (QIRFs). The QIRF tool set traces out the detailed distributional evolution of an outcome variable in response to economic shocks. The authors show that the left tail of economic activity is the most responsive to monetary policy and financial shocks. The impacts of the shocks on Growth-at-Risk (the 5% quantile of economic activity) during the Global Financial Crisis are assessed. The authors also examine how the economy responds to a hypothetical financial distress scenario.
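A minimal Monte Carlo illustration of a quantile impulse response (not the authors' estimated model; the data-generating process, impulse size, and horizon are hypothetical): simulate a nonlinear autoregression with and without a one-time shock, using common random numbers, and compare quantiles of the h-step-ahead distribution. Downside moves raise volatility, so the left tail responds more than the median, mirroring the qualitative finding.

```python
import numpy as np

phi, h, n_paths = 0.8, 8, 20000

def simulate(impulse, seed=1):
    """Simulate n_paths h-step trajectories of a nonlinear AR(1) whose
    volatility rises when the state is below zero, with an optional
    one-time impulse at t = 0. A fixed seed gives common random numbers
    across the shocked and baseline runs."""
    rng = np.random.default_rng(seed)
    y = np.zeros(n_paths)
    for t in range(h):
        eps = rng.standard_normal(n_paths)
        vol = 1.0 + 0.5 * np.maximum(-y, 0.0)   # downside risk channel
        y = phi * y + vol * eps + (impulse if t == 0 else 0.0)
    return y

base = simulate(0.0)
shocked = simulate(-1.0)
# QIRF at horizon h: response of each quantile to the shock
qirf_05 = np.quantile(shocked, 0.05) - np.quantile(base, 0.05)
qirf_50 = np.quantile(shocked, 0.50) - np.quantile(base, 0.50)
```

Here `qirf_05` plays the role of a Growth-at-Risk response; in the chapter the quantile paths come from an estimated multi-equation conditional quantile model rather than a known process.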
Abstract
Knowledge of the dependence structure between financial assets is crucial for improving performance in financial risk management. It is known that the copula completely summarizes the dependence structure among multiple variables. We propose a multivariate exponential series estimator (ESE) to estimate copula densities nonparametrically. The ESE has an appealing information-theoretic interpretation and attains the optimal rate of convergence for nonparametric density estimation established in Stone (1982). More importantly, it overcomes the boundary bias of conventional nonparametric copula estimators. Our extensive Monte Carlo studies show that the proposed estimator outperforms the kernel and log-spline estimators in copula estimation. They also demonstrate that two-step density estimation through an ESE copula often outperforms direct estimation of joint densities. Finally, the ESE copula provides superior estimates of tail dependence compared to the empirical tail index coefficient. An empirical examination of the Asian financial markets using the proposed method is provided.
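A one-term toy version of an exponential series copula estimator can be sketched as follows (a deliberate simplification: the chapter's estimator uses a growing multivariate basis, whereas a single basis function reduces fitting to monotone one-dimensional root finding). The basis function and grid resolution below are illustrative choices.

```python
import numpy as np

def ese_copula_fit(U, V, grid=200, tol=1e-8):
    """One-term exponential series copula density on [0,1]^2:
        c(u, v; theta) ∝ exp(theta * phi(u, v)),  phi(u, v) = (2u-1)(2v-1).
    Maximum likelihood reduces to matching the model moment E_theta[phi]
    to the sample mean of phi, solved here by bisection on a grid."""
    g = (np.arange(grid) + 0.5) / grid
    GU, GV = np.meshgrid(g, g)
    phi_grid = (2 * GU - 1) * (2 * GV - 1)
    target = np.mean((2 * U - 1) * (2 * V - 1))

    def model_moment(theta):
        w = np.exp(theta * phi_grid)
        return (w * phi_grid).sum() / w.sum()

    lo, hi = -30.0, 30.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if model_moment(mid) < target:
            lo = mid
        else:
            hi = mid
    theta = 0.5 * (lo + hi)
    w = np.exp(theta * phi_grid)
    return theta, w / w.mean()   # density normalized to integrate to 1
```

Because the exponential form is strictly positive on the whole unit square, the estimate has no boundary bias by construction, which is the property the abstract emphasizes.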
Jan F. Kiviet and Jerzy Niemczyk
Abstract
IV estimation is examined when some instruments may be invalid. This is relevant because the initial just-identifying orthogonality conditions are untestable, whereas their validity is required when testing the orthogonality of additional instruments by so-called overidentification restriction tests. Moreover, these tests have limited power when samples are small, especially when instruments are weak. Distinguishing between conditional and unconditional settings, we analyze the limiting distribution of inconsistent IV and examine normal first-order asymptotic approximations to its density in finite samples. For simple classes of models we compare these approximations with their simulated empirical counterparts over almost the full parameter space, which is expressed in measures of model fit, simultaneity, instrument invalidity, and instrument weakness. Our major finding is that instrument weakness is much more detrimental to the accuracy of large-sample asymptotic approximations than instrument invalidity. Also, IV estimators obtained from strong but possibly invalid instruments are usually much closer to the true parameter values than those obtained from valid but weak instruments.
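The last finding is easy to reproduce in a simulation in its spirit (all parameter values are illustrative, not the chapter's designs): compare the absolute estimation error of IV under a strong but slightly invalid instrument against a valid but weak one.

```python
import numpy as np

rng = np.random.default_rng(7)

# True beta = 1; u and v are correlated, so OLS is inconsistent and an
# instrument is needed.
n, beta, reps = 2000, 1.0, 300

def iv_estimate(pi, gamma):
    """pi = first-stage strength; gamma = direct effect of the instrument
    on the structural error. gamma != 0 makes the instrument invalid, and
    the IV estimator then converges to beta + gamma / pi."""
    z = rng.standard_normal(n)
    v = rng.standard_normal(n)
    u = 0.5 * v + gamma * z + rng.standard_normal(n)   # simultaneity via v
    x = pi * z + v
    y = beta * x + u
    return (z @ y) / (z @ x)   # just-identified IV: cov(z,y) / cov(z,x)

err_strong_invalid = [abs(iv_estimate(1.0, 0.1) - beta) for _ in range(reps)]
err_weak_valid = [abs(iv_estimate(0.05, 0.0) - beta) for _ in range(reps)]
```

The strong-but-invalid estimator is biased (centered near 1.1 here) but tightly concentrated, while the valid-but-weak estimator is centered correctly yet so dispersed that its typical error is larger.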