Search results

1 – 10 of 526
Book part
Publication date: 31 December 2010

Rania Hentati and Jean-Luc Prigent

Abstract

Purpose – In this chapter, copula theory is used to model the dependence structure between hedge fund return series.

Methodology/approach – Goodness-of-fit tests, based on Kendall's functions, are applied as selection criteria for the “best” copula. After estimating the parametric copula that best fits the data, we apply the previous results to construct the cumulative distribution functions of the equally weighted portfolios.

Findings – The empirical validation shows that copulas clearly allow better estimation of portfolio returns including hedge funds. All three studied portfolios reject the assumption of multivariate normality of returns. The chosen structure is often of the Student type when only indices are considered. For portfolios composed only of hedge funds, the dependence structure is of the Frank type.

Originality/value of the chapter – Introducing a bootstrap goodness-of-fit method to validate the choice of the best dependence structure is relevant for hedge fund portfolios. Copulas can then be introduced to provide better estimates of performance measures.
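By way of illustration, here is a minimal Python sketch of one step this abstract describes: calibrating a Frank copula to a pair of return series by matching Kendall's tau. The return arrays, the positive-dependence bracket for the parameter search, and the moment-matching shortcut are assumptions made for the sketch; the chapter's own selection relies on bootstrap goodness-of-fit tests of the Kendall function.

```python
# Sketch: calibrate a Frank copula from Kendall's tau (positive dependence only).
import numpy as np
from scipy.stats import kendalltau
from scipy.integrate import quad
from scipy.optimize import brentq

def frank_tau(theta):
    """Kendall's tau implied by a Frank copula with parameter theta > 0."""
    debye1 = quad(lambda t: t / np.expm1(t), 0.0, theta)[0] / theta
    return 1.0 - 4.0 / theta * (1.0 - debye1)

def fit_frank(returns_a, returns_b):
    """Choose theta so the model tau matches the empirical tau."""
    tau_hat, _ = kendalltau(returns_a, returns_b)
    return brentq(lambda th: frank_tau(th) - tau_hat, 1e-6, 50.0)

def frank_cdf(u, v, theta):
    """Frank copula CDF C(u, v; theta)."""
    num = np.expm1(-theta * u) * np.expm1(-theta * v)
    return -np.log1p(num / np.expm1(-theta)) / theta
```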

Details

Nonlinear Modeling of Economic and Financial Time-Series
Type: Book
ISBN: 978-0-85724-489-5

Article
Publication date: 11 November 2013

Giovanni Petrone, John Axerio-Cilies, Domenico Quagliarella and Gianluca Iaccarino

Abstract

Purpose

A probabilistic non-dominated sorting genetic algorithm (P-NSGA) for multi-objective optimization under uncertainty is presented. The purpose of this algorithm is to create a tight coupling between the optimization and uncertainty procedures, use all of the possible probabilistic information to drive the optimizer, and leverage high-performance parallel computing.

Design/methodology/approach

This algorithm is a generalization of a classical genetic algorithm for multi-objective optimization (NSGA-II) by Deb et al. The proposed algorithm relies on the use of all possible information in the probabilistic domain summarized by the cumulative distribution functions (CDFs) of the objective functions. Several analytic test functions are used to benchmark this algorithm, but only the results of the Fonseca-Fleming test function are shown. An industrial application is presented to show that P-NSGA can be used for multi-objective shape optimization of a Formula 1 tire brake duct, taking into account the geometrical uncertainties associated with the rotating rubber tire and uncertain inflow conditions.

Findings

This algorithm is shown to have deterministic consistency (i.e. it reduces to the original NSGA-II) when the objective functions are deterministic. When the quality of the CDF is increased (either using more points or higher-fidelity resolution), the convergence behavior improves. Since all the information regarding uncertainty quantification is preserved, all the different types of Pareto fronts that exist in the probabilistic framework (e.g. mean value Pareto, mean value penalty Pareto, etc.) can be generated a posteriori. An adaptive sampling approach and parallel computing (in both the uncertainty and optimization algorithms) are shown to provide severalfold speed-ups in selecting optimal solutions under uncertainty.

Originality/value

There are no existing algorithms that use the full probabilistic distribution to guide the optimizer. The method presented herein bases its sorting on real function evaluations, not merely on summary measures (e.g. the mean of the probability distribution) that potentially do not exist.
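As a hedged illustration of the CDF-based comparison underlying such probabilistic sorting (not the authors' P-NSGA implementation), the following Python sketch tests whether one design first-order stochastically dominates another, given Monte Carlo samples of a single objective to be minimized; the grid size is an arbitrary choice.

```python
# Sketch: first-order stochastic dominance check from empirical objective CDFs.
import numpy as np

def empirical_cdf(samples, grid):
    """Empirical CDF of Monte Carlo objective samples on a common grid."""
    return np.searchsorted(np.sort(samples), grid, side="right") / len(samples)

def cdf_dominates(samples_a, samples_b, n_grid=200):
    """True if A stochastically dominates B (minimization: F_A above F_B)."""
    a, b = np.asarray(samples_a), np.asarray(samples_b)
    grid = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), n_grid)
    fa, fb = empirical_cdf(a, grid), empirical_cdf(b, grid)
    return bool(np.all(fa >= fb) and np.any(fa > fb))
```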

Details

Engineering Computations, vol. 30 no. 8
Type: Research Article
ISSN: 0264-4401

Book part
Publication date: 18 October 2019

Mohammad Arshad Rahman and Shubham Karnawat

Abstract

This article is motivated by the lack of flexibility in Bayesian quantile regression for ordinal models where the error follows an asymmetric Laplace (AL) distribution. The inflexibility arises because the skewness of the distribution is completely specified once a quantile is chosen. To overcome this shortcoming, we derive the cumulative distribution function (and the moment-generating function) of the generalized asymmetric Laplace (GAL) distribution – a generalization of the AL distribution that separates the skewness from the quantile parameter – and construct a working likelihood for the ordinal quantile model. The resulting framework is termed flexible Bayesian quantile regression for ordinal (FBQROR) models. Its estimation, however, is not straightforward. We address the estimation issues and propose an efficient Markov chain Monte Carlo (MCMC) procedure based on Gibbs sampling and a joint Metropolis–Hastings algorithm. The advantages of the proposed model are demonstrated in multiple simulation studies and in an analysis of public opinion on homeownership as the best long-term investment in the United States following the Great Recession.
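For context, here is a minimal Python sketch of the standard asymmetric Laplace CDF used in Bayesian quantile regression; note how the quantile parameter p alone fixes the skewness, which is precisely the inflexibility the GAL distribution addresses. The GAL CDF itself is this chapter's derivation and is not reproduced here.

```python
# Sketch: CDF of the asymmetric Laplace distribution AL(mu, sigma, p).
import numpy as np

def al_cdf(y, mu=0.0, sigma=1.0, p=0.5):
    """CDF of AL(mu, sigma, p); p is the quantile level in (0, 1)."""
    z = (np.asarray(y, dtype=float) - mu) / sigma
    left = p * np.exp((1.0 - p) * np.minimum(z, 0.0))           # y <= mu branch
    right = 1.0 - (1.0 - p) * np.exp(-p * np.maximum(z, 0.0))   # y > mu branch
    return np.where(z <= 0.0, left, right)
```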

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part B
Type: Book
ISBN: 978-1-83867-419-9

Details

Contingent Valuation: A Critical Assessment
Type: Book
ISBN: 978-1-84950-860-5

Article
Publication date: 6 August 2018

Keerti Tiwari, Davinder S. Saini and Sunil V. Bhooshan

Abstract

Purpose

This paper aims to exploit orthogonal space-time block code (OSTBC) and maximal ratio combining (MRC) techniques to evaluate the error rate performance of a multiple-input multiple-output system for different modulation schemes operating over single- and double-Weibull fading channels.

Design/methodology/approach

The authors provide a novel analytical expression for the cumulative distribution function (CDF) of the double-Weibull distribution in the form of a Meijer-G function. They also evaluate the probability density function (PDF) and CDF for single- and double-Weibull random variables. CDF-based closed-form expressions for the symbol error rate (SER) are computed for the proposed system designs.

Findings

Based on simulation and analytical results, the authors show that double-Weibull fading, which reflects the cascaded nature of the channel, gives significantly poorer SER performance than single-Weibull fading. Moreover, MRC offers improved error rate performance compared with OSTBC. As the fading parameter increases for any modulation technique, the required signal-to-noise ratio (SNR) gap between single- and double-Weibull fading decreases. Finally, the analytical results are observed to be a good approximation to the simulation results.

Practical implications

As a practical implication, a number of antennas can be used at the base station, but to maximize performance alone, one can use receive diversity, i.e. MRC.

Originality/value

With higher-order modulation (i.e. 16-QAM), 4 dB and 1 dB less SNR is required under high and low fading, respectively, in single-Weibull fading compared with double-Weibull fading. Hence, with higher-order modulation, the double-Weibull channel model performs better than it does with lower-order modulation.
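As an illustrative companion (a Monte Carlo sketch under an assumed unit-mean-square fading normalization and simple BPSK signaling, not the authors' Meijer-G closed-form SER expressions), the following Python snippet simulates the SER with L-branch MRC over single- and double-Weibull fading, the latter modeled as the product of two independent Weibull amplitudes.

```python
# Sketch: Monte Carlo BPSK SER with MRC over (double-)Weibull fading.
import numpy as np
from scipy.special import gamma as G
from scipy.stats import norm

rng = np.random.default_rng(0)

def weibull_amp(beta, size):
    """Unit-mean-square Weibull fading amplitudes with shape parameter beta."""
    return rng.weibull(beta, size) / np.sqrt(G(1.0 + 2.0 / beta))

def ser_mrc_bpsk(snr_db, beta, branches=2, cascaded=False, trials=200_000):
    """Simulated BPSK SER with MRC; cascaded=True gives double-Weibull fading."""
    h = weibull_amp(beta, (trials, branches))
    if cascaded:                         # double-Weibull: product channel
        h = h * weibull_amp(beta, (trials, branches))
    snr = 10.0 ** (snr_db / 10.0) * (h ** 2).sum(axis=1)   # MRC combining
    return norm.sf(np.sqrt(2.0 * snr)).mean()              # Q(sqrt(2*snr))
```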

Book part
Publication date: 16 December 2009

Gaosheng Ju, Rui Li and Zhongwen Liang

Abstract

In this paper we construct a nonparametric kernel estimator of the joint multivariate cumulative distribution function (CDF) of mixed discrete and continuous variables. We use a data-driven cross-validation method to choose optimal smoothing parameters that asymptotically minimize the mean integrated squared error (MISE). The asymptotic theory of the proposed estimator is derived, and the validity of the cross-validation method is proved. We provide necessary and sufficient conditions for the uniqueness of the optimal smoothing parameters when the estimation of the CDF degenerates to the case with only continuous variables, and a sufficient condition for the general mixed-variables case.
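A minimal Python sketch of such an estimator, simplified for illustration: the continuous directions use a Gaussian integrated product kernel with user-supplied bandwidths, while the discrete component is left as a raw indicator. The paper instead smooths both parts and chooses all bandwidths by cross-validation, so this is an assumption-laden special case.

```python
# Sketch: smoothed kernel CDF estimate for mixed continuous/discrete data.
import numpy as np
from scipy.stats import norm

def mixed_cdf_hat(x, z, X, Z, h):
    """Estimate F(x, z) from continuous X (n, q) and discrete Z (n,).

    x: (q,) continuous evaluation point, z: discrete evaluation point,
    h: (q,) fixed bandwidths (the paper cross-validates these).
    """
    cont = norm.cdf((np.asarray(x) - X) / h).prod(axis=1)  # product kernel
    disc = (Z <= z).astype(float)                          # unsmoothed indicator
    return (cont * disc).mean()
```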

Details

Nonparametric Econometric Methods
Type: Book
ISBN: 978-1-84950-624-3

Article
Publication date: 15 August 2019

Aleksandre Maisashvili, Henry Bryant, George Knapek and James Marc Raulston

Abstract

Purpose

The purpose of this paper is to develop methods for inferring if crop insurance premiums imply yield distributions that are valid according to standard laws of probability and broadly consistent with observed empirical evidence. The authors also survey current premium-implied distributions both before and after conditioning on the producer’s choice of coverage level.

Design/methodology/approach

Under an assumption of actuarial fairness, the authors derive expressions for upper and lower bounds for premium-implied yield cumulative distribution functions (CDFs) at loss thresholds for each coverage level. When observed premiums imply a CDF that exceeds one or is not non-decreasing, the authors conclude that premiums cannot be actuarially fair. The authors additionally specify very weak conditions for premium-implied yield CDFs to be consistent with two reasonable parametric distributions.

Findings

The authors evaluate premiums for the year 2018 for 19,104 county-crop-type-practice combinations, both before and after conditioning on the producer’s choice of coverage level. The authors find problems in roughly one-third of cases. Problems are exhibited for all crops evaluated, and are strongly associated with areas with lower expected yields and higher yield variability. At least 40 million acres are currently insured under premium schedules that cannot possibly be consistent with valid probability distributions.

Originality/value

The authors make two primary contributions. First, the premium-implied yield CDF bounds the authors derive require fewer assumptions than previous similar work, while simultaneously placing more stringent conditions on premiums to be consistent with actuarial fairness. Second, the authors show that current US crop insurance premiums cannot possibly be actuarially fair for many cases, reflecting tens of millions of insured acres, which implies sub-optimal producer risk mitigation and inequitable expenditures for producers and taxpayers.
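To make the logic concrete, here is a hedged Python sketch of the kind of bound the abstract describes, not the authors' exact expressions: for a yield guarantee t, actuarial fairness gives pi(t) = E[max(t - y, 0)], and since max(t - y, 0) <= t * 1{y < t}, any valid yield CDF must satisfy F(t) >= pi(t) / t. Fair premiums must also be non-decreasing in t, with increments no larger than the increment in t (since the slope of pi equals F, which lies in [0, 1]). Coverage levels are assumed sorted in ascending order.

```python
# Sketch: premium-implied lower bounds on the yield CDF and a validity flag.
import numpy as np

def implied_cdf_lower_bounds(coverage_levels, premiums, expected_yield):
    """Lower bounds on F(t) at each guarantee t = c * expected_yield."""
    t = np.asarray(coverage_levels, dtype=float) * expected_yield
    pi = np.asarray(premiums, dtype=float)
    bounds = pi / t                                  # F(t) >= pi(t) / t
    dpi, dt = np.diff(pi), np.diff(t)
    # fair premiums are non-decreasing in t with slope F(t) in [0, 1]
    valid = bool(np.all(bounds <= 1.0)
                 and np.all(dpi >= 0.0) and np.all(dpi <= dt))
    return bounds, valid
```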

Details

Agricultural Finance Review, vol. 79 no. 4
Type: Research Article
ISSN: 0002-1466

Article
Publication date: 14 October 2022

Fernando Antonio Moala and Karlla Delalibera Chagas

Abstract

Purpose

The step-stress accelerated life test is the most appropriate statistical method to obtain information about the reliability of new products faster than would be possible if the product were left to fail in normal use. This paper presents the multiple step-stress accelerated life test using type-II censored data and assuming a cumulative exposure model. The authors propose Bayesian inference with the lifetimes of the test items following a gamma distribution. The choice of the loss function is an essential part of Bayesian estimation problems. Therefore, the Bayesian estimators for the parameters are obtained under different loss functions, and a comparison with the usual maximum likelihood estimation (MLE) approach is carried out. Finally, an example is presented to illustrate the proposed procedure.

Design/methodology/approach

Bayesian inference is performed and the parameter estimators are obtained under symmetric and asymmetric loss functions. A sensitivity analysis of the Bayes and MLE estimators is presented via Monte Carlo simulation to verify whether the Bayesian analysis performs better.

Findings

The authors demonstrate that the Bayesian estimators give better results than the MLE with respect to MSE and bias. Considering three types of loss function, they show that the estimator with the smallest MSE and bias is the Bayes estimator under the general entropy loss function, followed closely by the one under the Linex loss function. In this case, the use of a symmetric loss function such as the squared error loss function (SELF) is inappropriate for the SSALT, especially with small data sets.

Originality/value

Most papers in the literature estimate the SSALT through the MLE. In this paper, the authors develop a Bayesian analysis for the SSALT and discuss procedures to obtain the Bayes estimators under symmetric and asymmetric loss functions, the choice of which is an essential part of Bayesian estimation problems.
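For reference, the three Bayes estimators being compared have standard closed forms given posterior draws: the posterior mean under the squared error loss (SELF), -(1/a) ln E[exp(-a*theta)] under the Linex loss, and (E[theta^(-q)])^(-1/q) under the general entropy loss. A minimal Python sketch using textbook formulas, not the authors' code:

```python
# Sketch: Bayes estimators under three losses from posterior MCMC draws.
import numpy as np

def bayes_estimators(theta_draws, a=1.0, q=1.0):
    """SELF, Linex(a) and general-entropy(q) estimates; theta assumed > 0."""
    theta = np.asarray(theta_draws, dtype=float)
    self_est = theta.mean()                                 # squared error loss
    linex_est = -np.log(np.exp(-a * theta).mean()) / a      # Linex loss
    entropy_est = (theta ** -q).mean() ** (-1.0 / q)        # general entropy
    return self_est, linex_est, entropy_est
```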

Details

International Journal of Quality & Reliability Management, vol. 40 no. 4
Type: Research Article
ISSN: 0265-671X

Book part
Publication date: 2 December 2021

Gordon Anderson and Rui Fu

Abstract

Wellbeing evaluation using ordered categorical response data is hazardous given the scale-dependent nature of most measures of wellbeing and inequality. Here, scale-independent instruments for measuring levels of wellbeing and inequalities between groups in multidimensional ordered categorical environments are introduced and applied in a study of health and consumption wellbeing and the aging process in twenty-first century China. Urban/rural location, gender, age and the availability of welfare support were treated as circumstances in what is, in essence, a study of equality of opportunity in the acquisition of health and consumption wellbeing in China's aging population. Older populations are found to experience diminished and increasingly diverse wellbeing outcomes that are, to some extent, ameliorated by welfare support.

Details

Research on Economic Inequality: Poverty, Inequality and Shocks
Type: Book
ISBN: 978-1-80071-558-5

Article
Publication date: 22 March 2022

Zhanpeng Shen, Chaoping Zang, Xueqian Chen, Shaoquan Hu and Xin-en Liu

Abstract

Purpose

For fast calculation of complex structures in engineering, correlations among input variables are often ignored in uncertainty propagation, even though the effect of ignoring these correlations on the output uncertainty is unclear. This paper aims to quantify the input uncertainty and estimate the correlations among the inputs according to the collected observed data instead of questionable assumptions. Moreover, the small size of the experimental data set should also be considered, as it is a common engineering problem.

Design/methodology/approach

In this paper, a novel method combining the p-box with a copula function for both uncertainty quantification and correlation estimation is explored. The copula function is utilized to estimate correlations among uncertain inputs based upon the observed data. The p-box method is employed to quantify the input uncertainty as well as the epistemic uncertainty associated with the limited amount of observed data. A nested Monte Carlo sampling technique is adopted to ensure that the propagation is always feasible. In addition, a Kriging model is built to reduce the computational cost of uncertainty propagation.

Findings

To illustrate the application of this method, an engineering example of structural reliability assessment is performed. The results indicate that whether the correlations among input variables are quantified may significantly affect the output uncertainty. Furthermore, this approach offers an additional advantage for risk management due to the separation of aleatory and epistemic uncertainties.

Originality/value

The proposed method takes advantage of the p-box and the copula function to deal with correlations and a limited amount of observed data, two important issues of uncertainty quantification in engineering. Thus, it is practical and able to predict accurate response uncertainty or system state.
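As a schematic illustration of the nested sampling the abstract describes (with a hypothetical response model and epistemic intervals, and with the copula-induced correlation omitted for brevity), the following Python sketch propagates a p-box: the outer loop draws epistemic distribution parameters, the inner loop draws aleatory inputs, and the envelope of the resulting response CDFs gives the p-box bounds.

```python
# Sketch: nested Monte Carlo p-box propagation with a placeholder model.
import numpy as np

rng = np.random.default_rng(1)

def response(x):
    """Placeholder structural model (hypothetical, for illustration only)."""
    return x[..., 0] + 0.5 * x[..., 1] ** 2

def pbox(grid, n_outer=100, n_inner=2_000):
    """Lower/upper CDF bounds of the response on a fixed evaluation grid."""
    cdfs = np.empty((n_outer, grid.size))
    for i in range(n_outer):
        mu = rng.uniform([0.9, -0.1], [1.1, 0.1])   # epistemic: interval means
        x = rng.normal(mu, 0.2, size=(n_inner, 2))  # aleatory: inner sampling
        y = np.sort(response(x))
        cdfs[i] = np.searchsorted(y, grid, side="right") / n_inner
    return cdfs.min(axis=0), cdfs.max(axis=0)       # p-box envelope
```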

Details

Engineering Computations, vol. 39 no. 6
Type: Research Article
ISSN: 0264-4401
