Search results

1 – 10 of over 8000
Book part
Publication date: 25 July 1997

Les Gulko

Details

Applying Maximum Entropy to Econometric Problems
Type: Book
ISBN: 978-0-76230-187-4

Book part
Publication date: 12 December 2003

Douglas Miller and Sang-Hak Lee

Abstract

In this chapter, we use the minimum cross-entropy method to derive an approximate joint probability model for a multivariate economic process based on limited information about the marginal quasi-density functions and the joint moment conditions. The modeling approach is related to joint probability models derived from copula functions, but we note that the entropy approach has some practical advantages over copula-based models. Under suitable regularity conditions, the quasi-maximum likelihood estimator (QMLE) of the model parameters is consistent and asymptotically normal. We demonstrate the procedure with an application to the joint probability model of trading volume and price variability for the Chicago Board of Trade soybean futures contract.
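
The cross-entropy tilting idea described here can be illustrated numerically. The snippet below is a minimal sketch: it takes the product of two marginal densities as the reference model and tilts it to satisfy a single joint moment condition. The standard normal marginals, the target cross moment E[XY] = 0.3, and the grid discretisation are illustrative assumptions, not specifications from the chapter.

```python
# Minimal sketch of minimum cross-entropy tilting of a product-of-marginals
# reference density to satisfy one joint moment condition (illustrative only).
import numpy as np
from scipy import stats, optimize

x = np.linspace(-4, 4, 201)
y = np.linspace(-4, 4, 201)
X, Y = np.meshgrid(x, y, indexing="ij")

# Reference density: product of the two marginal quasi-densities.
q = stats.norm.pdf(X) * stats.norm.pdf(Y)

m_xy = 0.3  # target cross moment E[XY] (illustrative)

def tilted(lam):
    """Cross-entropy-minimising density relative to q under E[XY] = m_xy."""
    p = q * np.exp(lam * X * Y)
    return p / np.trapz(np.trapz(p, y, axis=1), x)

def moment_gap(lam):
    p = tilted(lam)
    exy = np.trapz(np.trapz(p * X * Y, y, axis=1), x)
    return exy - m_xy

lam_star = optimize.brentq(moment_gap, -2.0, 2.0)   # solve for the tilting parameter
p_joint = tilted(lam_star)
print("tilting parameter:", lam_star)
```

With Gaussian marginals this tilt reproduces a bivariate normal with the implied correlation, which makes it easy to check the numerics before applying the same construction to non-Gaussian quasi-densities.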

Details

Maximum Likelihood Estimation of Misspecified Models: Twenty Years Later
Type: Book
ISBN: 978-1-84950-253-5

Article
Publication date: 1 September 1995

Zuu‐Chang Hong, Ching Lin and Ming‐Hua Chen

Abstract

A transport equation for the one‐point velocity probability density function (pdf) of turbulence is derived, modelled and solved. The new pdf equation is obtained in two modelling steps. In the first step, a dynamic equation for the fluid elements is proposed in terms of the fluctuating part of the Navier‐Stokes equations. A transition probability density function (tpdf) is extracted from the modelled dynamic equation, and a pdf equation of Fokker‐Planck type is then obtained from the tpdf. In the second step, the Fokker‐Planck-type pdf equation is modified by Lundgren's formal pdf equation to ensure that it properly describes the intrinsic mechanism of turbulence. With the new pdf equation, turbulent plane Couette flow is solved by a direct finite difference method coupled with dimensionality reduction and the QUICKER scheme. A simple boundary treatment is proposed so that the near‐wall solution is tractable and no refined grid is required. The calculated mean velocity, friction coefficient and turbulence structure are in good agreement with available experimental data. Away from the centre of the flow field, the contours of the joint pdf of V1 and V2 are very similar to the experimental results for channel flow. These agreements show the validity of the new pdf model and the suitability of the boundary treatment and the QUICKER scheme for solving turbulent plane Couette flow.
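
To give a feel for what solving a pdf transport equation by finite differences involves, the toy below integrates a generic one-dimensional Fokker-Planck equation with linear drift and constant diffusion using explicit central differences. It is only a stand-in under those assumptions; it does not reproduce the paper's modelled velocity-pdf equation, the QUICKER scheme, or the boundary treatment.

```python
# Toy explicit finite-difference solver for dp/dt = -d(a(v) p)/dv + D d^2p/dv^2
# with drift a(v) = -v and constant diffusion D (illustrative, not the paper's model).
import numpy as np

nv, dv = 201, 0.05
v = (np.arange(nv) - nv // 2) * dv
D, dt, nsteps = 0.5, 1e-4, 20000          # dt respects the diffusive stability limit dv^2/(2D)

a = -v                                     # drift coefficient
p = np.exp(-(v - 1.0) ** 2)                # initial pdf (unnormalised Gaussian off-centre)
p /= np.trapz(p, v)

for _ in range(nsteps):
    flux = a * p                                        # advective flux a(v) p
    dflux = np.gradient(flux, dv)                       # central difference of the flux
    d2p = np.gradient(np.gradient(p, dv), dv)           # second derivative of p
    p = p + dt * (-dflux + D * d2p)
    p[0] = p[-1] = 0.0                                  # crude far-field boundaries
    p = np.clip(p, 0.0, None)
    p /= np.trapz(p, v)                                 # keep total probability at 1

# The steady state should approach a Gaussian with variance D.
print("variance of steady-state pdf:", np.trapz(v ** 2 * p, v))
```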

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 5 no. 9
Type: Research Article
ISSN: 0961-5539

Book part
Publication date: 17 January 2009

Frenck Waage

Abstract

Assume that we generate forecasts from a model y=cx+d+ξ. The constants “c” and “d” are placement parameters estimated from observations on x and y, and ξ is the residual error variable.

Our objective is to develop a method for accurately measuring and evaluating the risk profile of a forecasted variable y. To do so, it is necessary first to obtain an accurate representation of the histogram of a forecasting model's residual errors. That is not always easy, because the histogram of the residual ξ may be symmetric, or it may be skewed either to the left or to the right of its mode. We introduce the probability density function (PDF) family of functions because it is versatile enough to fit any residual's locus, be it skewed to the left, symmetric about the mean, or skewed to the right. Once we have measured the residual's density, we show how to correctly calculate the risk profile of the forecasted variable y from the density of the residual using the PPD function, thereby obtaining the accurate risk profile for y that we seek. We conclude the chapter by discussing how the universally followed paradigm of too freely using the symmetric Gauss–normal function instead of the PPD function leads to misstating the risk profile and to wrongheaded decisions. We expect that this chapter will open up many new avenues of progress for econometricians.
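
The workflow described here can be sketched in a few lines: estimate c and d, fit a flexible (possibly skewed) density to the residuals, and read the forecast risk profile off its quantiles. The chapter's own PPD family is not reproduced below; the skew-normal from scipy and the simulated data are stand-ins used purely for illustration.

```python
# Minimal sketch: fit a skewed residual density and derive forecast quantiles.
# The skew-normal is an assumed stand-in for the chapter's density family.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 500)
xi = stats.skewnorm.rvs(a=4, scale=2, size=500, random_state=rng)   # skewed residuals
y = 1.5 * x + 3.0 + xi                                              # y = c x + d + xi

# Step 1: estimate the placement parameters c and d by least squares.
c_hat, d_hat = np.polyfit(x, y, 1)
resid = y - (c_hat * x + d_hat)

# Step 2: fit a flexible density to the residuals instead of assuming normality.
a_hat, loc_hat, scale_hat = stats.skewnorm.fit(resid)

# Step 3: risk profile of a forecast at x0 = quantiles of c x0 + d + residual.
x0 = 8.0
point = c_hat * x0 + d_hat
for q in (0.05, 0.50, 0.95):
    print(q, point + stats.skewnorm.ppf(q, a_hat, loc_hat, scale_hat))
```

Comparing these quantiles with those from a symmetric normal fit to the same residuals shows how the Gauss-normal shortcut misstates the risk profile when the residuals are skewed.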

Details

Advances in Business and Management Forecasting
Type: Book
ISBN: 978-1-84855-548-8

Article
Publication date: 1 August 2006

Marco Fabio Delzio

Abstract

Purpose

To propose a new methodology to infer the risk‐neutral default probability curve of a generic firm XYZ from equity options prices.

Design/methodology/approach

It is assumed that the market is arbitrage‐free, and the "market" probability measure implied by the equity options prices is applied to the pricing of credit‐risky assets. First, the equity probability density function of XYZ is inferred from a set of quoted equity options with different strikes and maturities. This function is then transformed into the probability density function of the XYZ assets, and the term structure of the "option implied" XYZ default probabilities is calculated. These default probabilities can be used to price corporate bonds and, more generally, single‐name credit derivatives as "exotic" equity derivatives.
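
One standard way to carry out the first step, inferring the equity density from quoted options, is the Breeden-Litzenberger relation, in which the risk-neutral density is the discounted second derivative of the call price with respect to strike. The sketch below uses synthetic Black-Scholes prices as placeholders for quotes; it is an assumed illustration of that first step only, and the paper's subsequent mapping from the equity density to the asset density is not reproduced.

```python
# Sketch of recovering a risk-neutral equity density from call prices via
# f(K) = exp(r T) * d^2 C / dK^2 (Breeden-Litzenberger). Inputs are synthetic.
import numpy as np
from scipy import stats

S0, r, T, sigma = 100.0, 0.02, 1.0, 0.25
K = np.linspace(40, 200, 321)

def bs_call(S0, K, r, T, sigma):
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * stats.norm.cdf(d1) - K * np.exp(-r * T) * stats.norm.cdf(d2)

C = bs_call(S0, K, r, T, sigma)                                  # placeholder call quotes
density = np.exp(r * T) * np.gradient(np.gradient(C, K), K)      # second derivative in K
density = np.clip(density, 0.0, None)                            # remove numerical noise

print("integrates to ~1:", np.trapz(density, K))
```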

Findings

Equity derivatives and credit derivatives ultimately have the same (unobservable) underlying, the value of the XYZ assets. A model that treats any security issued by XYZ as a derivative on the firm's assets can be used to price these securities consistently with one another and/or to detect relative-value opportunities.

Originality/value

The paper offers both a pricing tool for traded single‐name credit‐risky assets and a relative-value tool in liquid markets.

Details

The Journal of Risk Finance, vol. 7 no. 4
Type: Research Article
ISSN: 1526-5943

Book part
Publication date: 14 July 2006

Duangkamon Chotikapanich and William E. Griffiths

Abstract

Hypothesis tests for dominance in income distributions have received considerable attention in the recent literature. See, for example, Barrett and Donald (2003a, b), Davidson and Duclos (2000) and references therein. Such tests are useful for assessing progress towards eliminating poverty and for evaluating the effectiveness of various policy initiatives directed towards welfare improvement. To date the focus in the literature has been on sampling theory tests. Such tests can be set up in various ways, with dominance as the null or alternative hypothesis, and with dominance in either direction (X dominates Y or Y dominates X). The result of a test is expressed as rejection of, or failure to reject, a null hypothesis. In this paper, we develop and apply Bayesian methods of inference to problems of Lorenz and stochastic dominance. The result from a comparison of two income distributions is reported in terms of the posterior probabilities for each of the three possible outcomes: (a) X dominates Y, (b) Y dominates X, and (c) neither X nor Y is dominant. Reporting results about uncertain outcomes in terms of probabilities has the advantage of being more informative than a simple reject/do-not-reject outcome. Whether a probability is sufficiently high or low for a policy maker to take a particular action is then a decision for that policy maker.

The methodology is applied to data for Canada from the Family Expenditure Survey for the years 1978 and 1986. We assess the likelihood of dominance from one time period to the next. Two alternative assumptions are made about the income distributions – Dagum and Singh-Maddala – and in each case the posterior probability of dominance is given by the proportion of times a relevant parameter inequality is satisfied by the posterior observations generated by Markov chain Monte Carlo.
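
The final step, turning posterior draws into dominance probabilities, can be illustrated directly. The sketch below assumes Singh-Maddala income distributions and checks the first-order dominance inequality of the CDFs on a grid for each draw, reporting the proportion of draws in each of the three outcomes. The "posterior draws" are simulated placeholders, not output from the paper's actual Markov chain Monte Carlo sampler.

```python
# Sketch: posterior probability of first-order dominance from (placeholder) MCMC
# draws of Singh-Maddala parameters for two income distributions X and Y.
import numpy as np

rng = np.random.default_rng(1)

def sm_cdf(x, a, b, q):
    """Singh-Maddala (Burr XII) cumulative distribution function."""
    return 1.0 - (1.0 + (x / b) ** a) ** (-q)

grid = np.linspace(1.0, 200.0, 400)      # income grid (illustrative units)
n_draws = 5000

# Placeholder posterior draws for the two periods being compared.
aX, bX, qX = rng.normal(2.8, 0.05, n_draws), rng.normal(40, 1, n_draws), rng.normal(1.2, 0.05, n_draws)
aY, bY, qY = rng.normal(2.8, 0.05, n_draws), rng.normal(44, 1, n_draws), rng.normal(1.2, 0.05, n_draws)

x_dom = y_dom = neither = 0
for i in range(n_draws):
    FX = sm_cdf(grid, aX[i], bX[i], qX[i])
    FY = sm_cdf(grid, aY[i], bY[i], qY[i])
    if np.all(FX <= FY):
        x_dom += 1                        # X first-order dominates Y
    elif np.all(FY <= FX):
        y_dom += 1                        # Y first-order dominates X
    else:
        neither += 1

print("P(X dom Y), P(Y dom X), P(neither):",
      x_dom / n_draws, y_dom / n_draws, neither / n_draws)
```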

Details

Dynamics of Inequality and Poverty
Type: Book
ISBN: 978-0-76231-350-1

Article
Publication date: 20 February 2007

J.P. Noonan and Prabahan Basu

Abstract

Purpose

In many problems involving decision‐making under uncertainty, the underlying probability model is unknown but partial information is available. In some approaches to this problem, the available prior information is used to define an appropriate probability model for the system uncertainty through a probability density function. When the prior information is available as a finite sequence of moments of the unknown probability density function (PDF) defining the appropriate probability model for the uncertain system, the maximum entropy (ME) method derives a PDF from an exponential family to define an approximate model. This paper aims to investigate some optimality properties of the ME estimates.
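
The ME construction referred to here admits a compact numerical sketch: given m moment constraints E[x^j] = mu_j, the ME density has the exponential-family form p(x) proportional to exp(sum_j lambda_j x^j), and the multipliers can be found by minimising the convex dual on a grid. The target moments below are illustrative assumptions, not values from the paper.

```python
# Sketch: maximum entropy density subject to m moment constraints, solved via
# the convex dual (log partition function minus lambda . mu) on a grid.
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-6, 6, 1201)
mu = np.array([0.0, 1.0])              # target first and second moments (illustrative)
powers = np.arange(1, len(mu) + 1)
features = x[:, None] ** powers        # (grid, m) matrix of the moment functions x^j

def dual(lam):
    """Convex dual: log partition function minus lam . mu."""
    log_z = np.log(np.trapz(np.exp(features @ lam), x))
    return log_z - lam @ mu

lam = minimize(dual, x0=np.zeros(len(mu)), method="BFGS").x

p = np.exp(features @ lam)
p /= np.trapz(p, x)                    # ME density exp(sum_j lam_j x^j) / Z
print("fitted moments:", [float(np.trapz(x**j * p, x)) for j in powers])
# With just these two constraints the ME solution is the standard normal density.
```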

Design/methodology/approach

For n > m, the exact model can at best be approximated by one of an infinite number of unknown PDFs from an n-parameter exponential family. The upper bound on the divergence distance between any PDF from this family and the m-parameter exponential-family PDF defined by the ME method is derived. A measure of the adequacy of the model defined by the ME method is thus provided.

Findings

These results may be used to establish confidence intervals on the estimate of a function of the random variable when the ME approach is employed. Additionally, it is shown that when working with large samples of independent observations, a probability density function (PDF) can be defined from an exponential family to model the uncertainty of the underlying system with measurable accuracy. Finally, a relationship with maximum likelihood estimation for this case is established.

Practical implications

The so‐called known moments problem addressed in this paper has a variety of applications in learning, blind equalization and neural networks.

Originality/value

An upper bound on the error in approximating an unknown density function f(x) by its ME estimate, obtained as a PDF p(x, α) from an m-parameter exponential family based on m moment constraints, is derived. The error bound helps one decide whether the number of moment constraints is adequate for modeling the uncertainty in the system under study. In turn, this allows one to establish confidence intervals on an estimate of some function of the random variable X, given the known moments. It is also shown how, when working with a large sample of independent observations instead of precisely known moment constraints, a density from an exponential family can be defined that models the uncertainty of the underlying system with measurable accuracy. In this case, a relationship to ML estimation is established.

Details

Kybernetes, vol. 36 no. 1
Type: Research Article
ISSN: 0368-492X

Details

Fundamentals of Transportation and Traffic Operations
Type: Book
ISBN: 978-0-08-042785-0

Article
Publication date: 23 January 2019

Rakesh Ranjan, Subrata Kumar Ghosh and Manoj Kumar

Abstract

Purpose

The probability distributions of the major length and aspect ratio (major length/minor length) of wear debris collected from gear oil used in a planetary gear drive were analysed and modelled. The paper aims to find an appropriate probability distribution model to forecast the kind of wear particles present at different running hours of the machine.

Design/methodology/approach

Used gear oil was drained out of the planetary gear box of a slab caster, which was then charged with fresh oil of grade EP-460. Six chronological oil samples were collected at different time intervals between 480 and 1,992 h of machine running. The oil samples were filtered to separate the wear particles, and a microscopic study of the wear debris was carried out at 100X magnification. Statistical modelling of the wear debris distributions was done using Weibull and exponential probability distribution models. A comparison was made among the actual, Weibull and exponential probability distributions of the major length and aspect ratio of the wear particles.
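
The distribution comparison described here follows a simple recipe: fit both candidate models to the measured sizes and compare the quality of fit. The sketch below does this with scipy and a Kolmogorov-Smirnov statistic on a simulated sample; the actual major-length and aspect-ratio measurements from the oil samples are not reproduced.

```python
# Sketch: fit exponential and Weibull models to particle sizes and compare fits.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
major_length = stats.expon.rvs(scale=25.0, size=300, random_state=rng)  # placeholder sample

# Fit both candidate distributions (location fixed at zero for sizes).
expon_params = stats.expon.fit(major_length, floc=0)
weib_params = stats.weibull_min.fit(major_length, floc=0)

ks_expon = stats.kstest(major_length, "expon", args=expon_params)
ks_weib = stats.kstest(major_length, "weibull_min", args=weib_params)

print("exponential fit: KS =", ks_expon.statistic)
print("Weibull fit:     KS =", ks_weib.statistic)
# The distribution with the smaller KS statistic fits the observed sizes better.
```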

Findings

The distribution of the major length of the wear particles was found to be closer to the exponential probability density function, whereas the Weibull probability density function fitted the distribution of the aspect ratio better.

Originality/value

The developed model can be used to analyse the distributions of the major length and aspect ratio of wear debris present in the planetary gear box of a slab caster machine.

Details

Industrial Lubrication and Tribology, vol. 71 no. 2
Type: Research Article
ISSN: 0036-8792

Book part
Publication date: 24 April 2023

Saraswata Chaudhuri, Eric Renault and Oscar Wahlstrom

Abstract

The authors discuss the econometric underpinnings of Barro (2006)'s defense of the rare disaster model as a way to bring back an asset pricing model "into the right ballpark for explaining the equity-premium and related asset-market puzzles." Arbitrarily low-probability economic disasters can restore the validity of model-implied moment conditions only if the amplitude of disasters may be arbitrarily large in due proportion. The authors prove an impossibility theorem that, in the case of potentially unbounded disasters, there is no such thing as a population empirical likelihood (EL)-based model-implied probability distribution. That is, one cannot identify belief distortions for which the EL-based implied probabilities in sample, as computed by Julliard and Ghosh (2012), could be a consistent estimator. This may lead one to consider alternative statistical discrepancy measures to avoid the problem with EL. Indeed, the authors prove that, under sufficient integrability conditions, power divergence Cressie-Read measures with positive power coefficients properly define a unique population model-implied probability measure. However, when this computation is useful because the reference asset pricing model is misspecified, each power divergence will deliver a different model-implied beliefs distortion. One way to provide economic underpinnings for the choice of a particular belief distortion is to see it as the endogenous result of the investor's choice when optimizing a recursive multiple-priors utility à la Chen and Epstein (2002). Jeong et al. (2015)'s econometric study confirms that this way of accommodating ambiguity aversion may help to address the equity premium puzzle.
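
A small numerical sketch of the "model-implied probabilities" being contrasted here: given sample moment functions g_i from a possibly misspecified asset pricing model, the observations are reweighted so that the weighted moment condition holds exactly, once with empirical likelihood and once with exponential tilting as one member of the Cressie-Read family. The returns and pricing kernel below are simulated placeholders, not the data or the specific divergences analysed by the authors.

```python
# Sketch: EL and exponential-tilting implied probabilities for E[m R - 1] = 0.
import numpy as np
from scipy.optimize import brentq, minimize_scalar

rng = np.random.default_rng(3)
n = 500
R = 1.06 + 0.15 * rng.standard_normal(n)       # gross returns (placeholder)
m = 0.97 * np.ones(n)                           # candidate pricing kernel (placeholder)
g = m * R - 1.0                                 # moment function

# Empirical likelihood: pi_i = 1 / (n (1 + lam g_i)), with lam solving sum pi_i g_i = 0.
def el_foc(lam):
    return np.mean(g / (1.0 + lam * g))

lam_lo = -1.0 / g.max() + 1e-6                  # keep all 1 + lam*g_i > 0
lam_hi = -1.0 / g.min() - 1e-6
lam_el = brentq(el_foc, lam_lo, lam_hi)
pi_el = 1.0 / (n * (1.0 + lam_el * g))

# Exponential tilting: pi_i proportional to exp(lam g_i), lam minimises mean exp(lam g).
lam_et = minimize_scalar(lambda lam: np.mean(np.exp(lam * g))).x
pi_et = np.exp(lam_et * g)
pi_et /= pi_et.sum()

print("weighted moments (should be ~0):", pi_el @ g, pi_et @ g)
```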

Details

Essays in Honor of Joon Y. Park: Econometric Methodology in Empirical Applications
Type: Book
ISBN: 978-1-83753-212-4
