Search results

1 – 10 of 899
Book part
Publication date: 23 October 2023

Morten I. Lau, Hong Il Yoo and Hongming Zhao

Abstract

We evaluate the hypothesis of temporal stability in risk preferences using two recent data sets from longitudinal lab experiments. Both experiments included a combination of decision tasks that allows one to identify a full set of structural parameters characterizing risk preferences under Cumulative Prospect Theory (CPT), including loss aversion. We consider temporal stability in those structural parameters at both population and individual levels. The population-level stability pertains to whether the distribution of risk preferences across individuals in the subject population remains stable over time. The individual-level stability pertains to within-individual correlation in risk preferences over time. We embed the CPT structure in a random coefficient model that allows us to evaluate temporal stability at both levels in a coherent manner, without having to switch between different sets of models to draw inferences at a specific level.
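
For readers who want the functional forms behind the estimation, the following is a minimal sketch of a standard CPT specification in Python; the Tversky-Kahneman power value function, weighting function and parameter values are common defaults, not the estimates reported in this chapter.

```python
import numpy as np

def cpt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Power value function with loss aversion lam (Tversky-Kahneman form)."""
    x = np.asarray(x, dtype=float)
    gains = np.clip(x, 0, None) ** alpha
    losses = -lam * np.clip(-x, 0, None) ** beta
    return gains + losses

def tk_weight(p, gamma=0.61):
    """Inverse-S probability weighting function for gains."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# CPT valuation of a binary gain prospect: win 100 with probability 0.5.
v = tk_weight(0.5) * cpt_value(100.0)
print(float(v))  # roughly 24.2 under these parameters
```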

Details

Models of Risk Preferences: Descriptive and Normative Challenges
Type: Book
ISBN: 978-1-83797-269-2

Article
Publication date: 21 November 2023

Patrice Gaillardetz and Saeb Hachem

Abstract

Purpose

By using higher moments, this paper extends the quadratic local risk-minimizing approach in a general discrete incomplete financial market. The local optimization subproblems are convex or nonconvex, depending on the moment variants used in the modeling. Inspired by Lai et al. (2006), the authors propose a new multiobjective approach for combining moments, which is transformed into a multigoal programming problem.

Design/methodology/approach

The authors evaluate financial derivatives with American features using local risk-minimizing strategies. The financial structure is in line with Schweizer (1988): the market is discrete, self-financing is not guaranteed, but deviations are controlled and reduced by minimizing the second moment. As in the quadratic approach, the algorithm proceeds backward in time.
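
For context on the quadratic baseline being extended, the sketch below implements one backward step of quadratic local risk minimization under simplifying assumptions (zero interest rate, a hypothetical one-step trinomial market); it is not the paper's higher-moment algorithm.

```python
import numpy as np

def local_quadratic_step(v_next, ds, p):
    """One backward step of quadratic local risk minimization (zero rate).

    Chooses the hedge ratio xi and value v_t that minimize the second
    moment of the local cost E[(v_next - v_t - xi * ds)^2]; the first-order
    conditions give the regression-type solution below.
    """
    e_ds = np.dot(p, ds)
    e_v = np.dot(p, v_next)
    cov = np.dot(p, (v_next - e_v) * (ds - e_ds))
    var = np.dot(p, (ds - e_ds) ** 2)
    xi = cov / var                 # hedge ratio in the risky asset
    v_t = e_v - xi * e_ds          # value making the expected deviation zero
    return v_t, xi

# Hypothetical one-step trinomial market: S_t = 100, claim pays (S_{t+1} - 100)^+.
s_next = np.array([110.0, 100.0, 90.0])
v_next = np.maximum(s_next - 100.0, 0.0)
v0, xi = local_quadratic_step(v_next, s_next - 100.0, np.array([1, 1, 1]) / 3)
print(v0, xi)  # 3.33..., 0.5
```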

Findings

In the context of evaluating American options, a transposition of this multigoal programming approach leads not only to nonconvex optimization subproblems but also to the undesirable fact that local zero deviations from self-financing are penalized. The analysis shows that issuers should consider some higher moments when evaluating contingent claims because they help reshape the distribution of global cumulative deviations from self-financing.

Practical implications

A detailed numerical analysis that compares all the moments or some combinations of them is performed.

Originality/value

The quadratic approach is extended by exploring other higher moments, positive combinations of moments and variants to enforce asymmetry. This study also investigates the impact of two types of exercise decisions and multiple assets.

Details

Studies in Economics and Finance, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1086-7376

Article
Publication date: 31 May 2024

Haylim Chha and Yongbo Peng

Abstract

Purpose

Contemporary stochastic optimal control that combines the probability density evolution method (PDEM) with a conventional optimal controller struggles to guarantee economical energy consumption alongside control efficacy when non-stationary stochastic excitations drive hysteretic structures. To address this, a novel multiscale stochastic optimal controller is developed based on the wavelet transform and the PDEM.

Design/methodology/approach

For a representative point, a conventional control law is decomposed into sub-control laws via multiresolution analysis. The sub-control laws are then grouped into two generic control laws associated with resonant and non-resonant frequency bands. Both bands are established from the actual natural frequency(ies) of the structure, so the computed control efforts depend on the actual structural properties and on the time-frequency content of the non-stationary stochastic excitations. Gain matrices in both bands are then obtained from a probabilistic criterion based on assessment of the system's second-order statistics. A multi-degree-of-freedom hysteretic structure driven by non-stationary, non-Gaussian stochastic ground accelerations is studied numerically, considering three distortion scenarios that describe uncertainties in structural properties.
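
The sketch below illustrates the multiresolution step under stated assumptions: PyWavelets stands in for the wavelet machinery, and the choice of which detail levels form the resonant band is purely illustrative, since in the paper it is tied to the structure's actual natural frequency(ies).

```python
import numpy as np
import pywt  # PyWavelets

def split_bands(u, wavelet="db4", level=4, resonant_levels=(1, 2)):
    """Split a control-force history into resonant / non-resonant components.

    Detail levels in resonant_levels (1 = finest) are summed into the
    resonant band; the remaining details and the approximation form the
    non-resonant band. Which levels bracket the structure's natural
    frequency(ies) depends on the sampling rate.
    """
    coeffs = pywt.wavedec(u, wavelet, level=level)
    resonant = np.zeros_like(u, dtype=float)
    non_resonant = np.zeros_like(u, dtype=float)
    for k in range(len(coeffs)):
        mask = [np.zeros_like(c) for c in coeffs]
        mask[k] = coeffs[k]
        comp = pywt.waverec(mask, wavelet)[: len(u)]
        detail_level = len(coeffs) - k if k > 0 else None  # k = 0: approximation
        if detail_level in resonant_levels:
            resonant += comp
        else:
            non_resonant += comp
    return resonant, non_resonant

t = np.linspace(0.0, 10.0, 1024)
u = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.sin(2 * np.pi * 12.0 * t)
u_res, u_non = split_bands(u)
print(np.allclose(u, u_res + u_non))  # the sub-laws sum back to the original law
```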

Findings

Time-frequency-dependent gain matrices address non-stationary stochastic excitations effectively, providing efficient ways to suppress vibrations independently in the resonant and non-resonant bands. The wavelet level, the natural frequency(ies) and the ratio of control forces between the two bands influence the scheme's outcomes. The presented approach outperforms the existing approach in maintaining the trade-off under uncertainty and randomness in the system and excitations.

Originality/value

The presented control law generates control efforts based on the resonant and non-resonant bands and exploits actual structural properties. The cost-function weights and the probabilistic criterion are developed to achieve cost-effectiveness of energy demand versus controlled structural performance.

Article
Publication date: 3 July 2023

James L. Sullivan, David Novak, Eric Hernandez and Nick Van Den Berg

Abstract

Purpose

This paper introduces a novel quality measure, the percent-within-distribution, or PWD, for acceptance and payment in a quality control/quality assurance (QC/QA) performance specification (PS).

Design/methodology/approach

The new quality measure takes any sample size or distribution and uses a Bayesian updating process to re-estimate parameters of a design distribution as sample observations are fed through the algorithm. This methodology can be employed in a wide range of applications, but the authors demonstrate the use of the measure for a QC/QA PS with upper and lower bounds on 28-day compressive strength of in-place concrete for bridge decks.
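
As an illustration of the Bayesian updating idea, the sketch below assumes a conjugate normal model with a known process standard deviation and computes the posterior-predictive mass between hypothetical spec limits; the paper's actual PWD construction may differ.

```python
import numpy as np
from scipy import stats

def pwd_normal(samples, mu0, tau0, sigma, lower, upper):
    """Posterior-predictive probability mass between two spec limits.

    Conjugate setup assumed for illustration: N(mu0, tau0^2) prior on the
    mean of a N(mu, sigma^2) strength process with sigma treated as known.
    Each observation updates the posterior before the mass is computed.
    """
    mu, tau2 = mu0, tau0 ** 2
    for x in samples:
        prec = 1.0 / tau2 + 1.0 / sigma ** 2
        mu = (mu / tau2 + x / sigma ** 2) / prec
        tau2 = 1.0 / prec
    pred = stats.norm(mu, np.sqrt(tau2 + sigma ** 2))  # posterior predictive
    return pred.cdf(upper) - pred.cdf(lower)

# Hypothetical lot of 28-day compressive strengths (MPa) and spec limits.
lot = np.array([34.5, 36.1, 33.8, 35.2, 37.0])
print(pwd_normal(lot, mu0=35.0, tau0=3.0, sigma=2.5, lower=30.0, upper=42.0))
```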

Findings

The authors demonstrate the use of this new quality measure to illustrate how it addresses the shortcomings of the percent-within-limits (PWL), which is the current industry standard quality measure. The authors then use the PWD to develop initial pay factors through simulation regimes. The PWD is shown to function better than the PWL with realistic sample lots simulated to represent a variety of industry responses to a new QC/QA PS.

Originality/value

The analytical contribution of this work is the introduction of the new quality measure. However, the practical and managerial contributions of this work are of equal significance.

Details

International Journal of Quality & Reliability Management, vol. 41 no. 2
Type: Research Article
ISSN: 0265-671X

Book part
Publication date: 23 October 2023

Glenn W. Harrison and J. Todd Swarthout

Abstract

We take Cumulative Prospect Theory (CPT) seriously by rigorously estimating structural models using the full set of CPT parameters. Much of the literature only estimates a subset of CPT parameters, or more simply assumes CPT parameter values from prior studies. Our data are from laboratory experiments with undergraduate students and MBA students facing substantial real incentives and losses. We also estimate structural models from Expected Utility Theory (EUT), Dual Theory (DT), Rank-Dependent Utility (RDU), and Disappointment Aversion (DA) for comparison. Our major finding is that a majority of individuals in our sample locally asset integrate. That is, they see a loss frame for what it is, a frame, and behave as if they evaluate the net payment rather than the gross loss when one is presented to them. This finding is devastating to the direct application of CPT to these data for those subjects. Support for CPT is greater when losses are covered out of an earned endowment rather than house money, but RDU is still the best single characterization of individual and pooled choices. Defenders of the CPT model claim, correctly, that the CPT model exists "because the data says it should." In other words, the CPT model was born of a wide range of stylized facts culled from parts of the cognitive psychology literature. If one is to take the CPT model seriously and rigorously, then it needs to do a much better job of explaining the data than we see here.
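
A small numeric illustration of the asset-integration point, using stock Tversky-Kahneman parameters rather than the chapter's estimates: a subject given an endowment of 60 who faces a 50:50 chance of losing 40 either values the gross loss frame (direct CPT) or the netted payments (local asset integration).

```python
# Stock Tversky-Kahneman (1992) parameters, assumed for illustration only.
alpha, lam, gamma, delta = 0.88, 2.25, 0.61, 0.69

def w(p, g):
    """Inverse-S probability weighting with curvature g."""
    return p ** g / (p ** g + (1 - p) ** g) ** (1 / g)

# Endowment of 60; lottery: lose 40 with probability 0.5, else lose nothing.
# CPT applied to the gross loss frame: only the -40 outcome carries value.
loss_frame = w(0.5, delta) * (-lam * 40 ** alpha)
# Local asset integration: outcomes net to 60 or 20, both evaluated as gains,
# with the better outcome receiving the decision weight w(0.5).
integrated = w(0.5, gamma) * 60 ** alpha + (1 - w(0.5, gamma)) * 20 ** alpha
print(loss_frame, integrated)  # sharply negative vs clearly positive
```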

Details

Models of Risk Preferences: Descriptive and Normative Challenges
Type: Book
ISBN: 978-1-83797-269-2

Article
Publication date: 10 June 2024

Zhangtao Peng, Qian Fang, Qing Ai, Xiaomo Jiang, Hui Wang, Xingchun Huang and Yong Yuan

Abstract

Purpose

A risk-based method is proposed to identify the dominant influencing factors of secondary lining cracking in an operating mountain tunnel with weak surrounding rock.

Design/methodology/approach

Based on the inspection data from a mountain tunnel in Southwest China, a lognormal proportional hazards model is established to describe the statistical distribution of secondary lining cracks. The model parameters are then obtained using Bayesian regression, and the influencing factors can be ranked by the absolute values of their parameters.
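
Fitting a lognormal proportional hazards model requires survival-analysis machinery beyond a short sketch, so the example below substitutes a Bayesian ridge regression on a hypothetical log crack-severity measure to show the ranking-by-absolute-parameter step; the factor names follow the paper, everything else is simulated.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(0)
factors = ["crack location on profile", "rock mass grade",
           "time to lining completion", "void behind lining"]

# Hypothetical standardized covariates and a log crack-severity response;
# real inputs would come from the tunnel inspection records.
X = rng.normal(size=(200, 4))
log_severity = X @ np.array([0.9, 0.6, 0.3, 0.1]) + rng.normal(scale=0.5, size=200)

model = BayesianRidge().fit(X, log_severity)
# With standardized covariates, ranking by |coefficient| mirrors the paper's
# importance ordering based on absolute parameter values.
for name, coef in sorted(zip(factors, model.coef_), key=lambda t: -abs(t[1])):
    print(f"{name}: {coef:+.3f}")
```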

Findings

The results show that the order of importance of the influencing factors of secondary lining cracks is as follows: location of the crack on the tunnel profile, rock mass grade of the surrounding rock, time to completion of the secondary lining, and void behind the secondary lining. Accordingly, the location of the crack on the tunnel profile and rock mass grade of the surrounding rock are the two most important influencing factors of secondary lining cracks in the investigated mountain tunnel, and appropriate maintenance measures should be focused on these two aspects.

Originality/value

This study provides a general and effective reference for identifying the dominant influencing factors of secondary lining cracks to guide the targeted maintenance in mountain tunnels.

Details

International Journal of Structural Integrity, vol. 15 no. 4
Type: Research Article
ISSN: 1757-9864

Content available
Book part
Publication date: 27 May 2024

Angelo Corelli

Details

Understanding Financial Risk Management, Third Edition
Type: Book
ISBN: 978-1-83753-253-7

Article
Publication date: 15 December 2023

Muhammad Arif Mahmood, Chioibasu Diana, Uzair Sajjad, Sabin Mihai, Ion Tiseanu and Andrei C. Popescu

Abstract

Purpose

Porosity is a commonly analyzed defect in laser-based additive manufacturing processes owing to the enormous thermal gradients caused by repeated melting and solidification. Currently, porosity estimation is limited to powder bed fusion; it has yet to be explored for the laser melting deposition (LMD) process, particularly via analytical models, which provide cost- and time-effective solutions compared to finite element analysis. For this purpose, this study aims to formulate two mathematical models, one for the deposited layer dimensions and one for the corresponding porosity, in the LMD process.

Design/methodology/approach

In this study, analytical models have been proposed. Initially, the deposited layer dimensions, including layer height, width and depth, were calculated based on the operating parameters. These outputs were fed into the second model to estimate the part porosity. The models were validated with experimental data for Ti6Al4V depositions on a Ti6Al4V substrate. A calibration curve (CC) was also developed for Ti6Al4V material and characterized using X-ray computed tomography. The models were further validated against experimental results adopted from the literature. The validated models were linked with a deep neural network (DNN) for training and testing, using a total of 6,703 computations with 1,500 iterations. Here, laser power, laser scanning speed and powder feeding rate were selected as inputs, whereas porosity was set as the output.
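
The sketch below mimics the DNN stage under stated assumptions: the analytical porosity model is stubbed out with an arbitrary placeholder, the process windows are hypothetical, and a small scikit-learn network maps the three inputs named in the abstract to porosity.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 6703  # mirrors the number of computations quoted in the abstract

# Hypothetical process windows; in the paper the training targets come from
# the analytical layer-geometry and porosity models, stubbed out below.
power = rng.uniform(200.0, 1000.0, n)   # laser power, W
speed = rng.uniform(2.0, 12.0, n)       # laser scanning speed, mm/s
feed = rng.uniform(1.0, 6.0, n)         # powder feeding rate, g/min

def porosity_stub(p, v, f):
    """Placeholder for the analytical porosity model (illustrative only)."""
    energy = p / (v * f)  # a crude specific-energy proxy
    return np.clip(0.3 + 40.0 / energy + 0.004 * energy, 0.0, None)

X = np.column_stack([power, speed, feed])
y = porosity_stub(power, speed, feed)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

dnn = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1500,
                                 random_state=0)).fit(X_tr, y_tr)
print("held-out R^2:", round(dnn.score(X_te, y_te), 3))
```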

Findings

The computations indicate that, owing to the simultaneous inclusion of powder particulates, the powder elements absorb a substantial percentage of the laser beam energy for their melting, attenuating the laser beam and reducing the thermal input at the substrate. The primary operating parameters are directly correlated with the number of layers and the total height in the CC. Through X-ray computed tomography analyses, the number of layers showed a straightforward correlation with mean sphericity, while a converse relation was identified with the number, mean volume and mean diameter of pores. The DNN and analytical models showed 2%–3% and 7%–9% mean absolute deviations, respectively, compared to the experimental results.

Originality/value

This research provides a unique solution for LMD porosity estimation by linking the developed analytical computational models with artificial neural networking. The presented framework predicts the porosity in the LMD-ed parts efficiently.

Article
Publication date: 6 June 2024

Bingzi Jin and Xiaojie Xu

Abstract

Purpose

The purpose of this study is to forecast property prices in the Chinese housing market, which has grown rapidly in the last 10 years and is an important concern for both government and investors.

Design/methodology/approach

This study examines Gaussian process regressions with different kernels and basis functions for estimating monthly pre-owned housing price indices for ten major Chinese cities from March 2012 to May 2020. The authors select model settings using Bayesian optimization and cross-validation.
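
A minimal sketch of the model family: Gaussian process regressors with different kernels compared by time-series cross-validation on a synthetic monthly index. The lag structure, kernels and data are assumptions; the paper additionally tunes kernels, basis functions and hyperparameters via Bayesian optimization.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

rng = np.random.default_rng(2)
# Synthetic stand-in for a monthly index (March 2012 to May 2020 is 99 months).
t = np.arange(99, dtype=float)
index = 100 + 0.4 * t + 2.0 * np.sin(t / 6.0) + rng.normal(scale=0.5, size=99)
X = index[:-1].reshape(-1, 1)  # one-month lag as the regressor
y = index[1:]

for kernel in [RBF(), Matern(nu=1.5), RationalQuadratic()]:
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True, alpha=1e-2)
    scores = cross_val_score(gpr, X, y, cv=TimeSeriesSplit(n_splits=5),
                             scoring="neg_root_mean_squared_error")
    print(type(kernel).__name__, round(-scores.mean(), 4))
```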

Findings

The ten price indices from June 2019 to May 2020 are accurately predicted out-of-sample by the established models, which have relative root mean square errors ranging from 0.0458% to 0.3035% and correlation coefficients ranging from 93.9160% to 99.9653%.

Originality/value

The results might be applied separately or in conjunction with other forecasts to develop hypotheses regarding the patterns in the pre-owned residential real estate price index and conduct further policy research.

Details

Journal of Modelling in Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1746-5664
