Search results

1 – 10 of over 2000
Article
Publication date: 31 July 2009

Donald E. Hutto, Thomas Mazzuchi and Shahram Sarkani

The purpose of this paper is to provide maintenance personnel with a methodology for using masked field reliability data to determine the probability of each subassembly failure.

Abstract

Purpose

The purpose of this paper is to provide maintenance personnel with a methodology for using masked field reliability data to determine the probability of each subassembly failure.

Design/methodology/approach

The paper compares an iterative maximum likelihood estimation method and a Bayesian methodology for handling masked data collected from 227 identical radar power supplies. The power supply consists of several subassemblies, hereafter referred to as shop replaceable assemblies (SRAs).
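
As an illustration of how masked observations enter a competing-risk likelihood, the following Python sketch simulates a three-SRA series system with exponential lifetimes and maximizes the masked-data likelihood directly with scipy. The rates, the masking fraction, and the fully unknown masking set are invented for the example; the paper's IMLEP iteration and WinBUGS model are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulate a 3-SRA series system with exponential lifetimes (rates invented).
true_rates = np.array([0.010, 0.004, 0.002])
n = 227                                    # matches the paper's sample size
comp_times = rng.exponential(1.0 / true_rates, size=(n, 3))
t = comp_times.min(axis=1)                 # system fails at the first SRA failure
cause = comp_times.argmin(axis=1)          # true failure cause
masked = rng.random(n) < 0.3               # assumed: 30% of records lose the label

def neg_log_lik(log_rates):
    lam = np.exp(log_rates)                # log scale keeps rates positive
    total = lam.sum()
    ll = -total * t                        # survival part: exp(-sum(lam) * t)
    # Known cause contributes lam_j * exp(-total*t); a fully masked record
    # contributes the sum over all candidate causes, i.e. total * exp(-total*t).
    ll += np.where(masked, np.log(total), np.log(lam[cause]))
    return -ll.sum()

res = minimize(neg_log_lik, np.log([0.005, 0.005, 0.005]), method="Nelder-Mead")
print("estimated rates:", np.exp(res.x))
print("true rates:     ", true_rates)
```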

Findings

The study examined two approaches for dealing with masking: an iterative maximum likelihood estimation procedure (IMLEP) and a Bayesian approach implemented in the application WinBUGS. It indicates that IMLEP and WinBUGS perform similarly in estimating the parameters of the SRA distributions when no masking is present, and they also provide similar results under masking. However, the study indicates that WinBUGS may perform better than IMLEP when the competing risk responsible for a failure accounts for a smaller percentage of the total failures. Future study is required to confirm this conclusion by expanding the number of SRAs into which the item under study is organized.

Research limitations/implications

If an item is considered to comprise various subassemblies and the failure of the first subassembly causes the item to fail, then the item is referred to in the literature as a series system. If the probability of each subassembly failure is statistically independent, then the item can be represented by a competing risk model and the probability distributions of the subassemblies can be ascertained from the item's failure data. When the item's cause of failure is not known, the data are referred to in the literature as masked. Since competing risk theory requires both a cause of failure and a time of failure, any masked data must be addressed in the competing risk model.

Practical implications

This study indicates that competing risk theory can be applied to equipment field failure data to determine an SRA's probability of failure and thereby provide an efficient sequence for replacing suspect failed SRAs.

Originality/value

The analysis of masked failure data is an important area that has had only limited study in the literature due to the limited availability of failure data. This paper contributes to the research by providing the complete historical equipment usage data for the item under study, gathered over a time frame of approximately seven years.

Details

International Journal of Quality & Reliability Management, vol. 26 no. 7
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 1 January 2001

S.L. Byers and K. Ben Nowman

Refers to previous research on the empirical testing of continuous time, two factor short rate interest models by Chan, Karolyi, Longstaff and Sanders (1992), Vasicek (1977) and…

Abstract

Refers to previous research on the empirical testing of continuous time, two factor short rate interest models by Chan, Karolyi, Longstaff and Sanders (1992), Vasicek (1977) and Cox, Ingersoll and Ross (1985); and the Nowman (1997, 2000) Gaussian estimation approach. Applies these ideas to monthly interbank and Euro‐currency data for a variety of periods and currencies to compare the explanatory/forecasting power of each model with the unrestricted model. Presents the results, which show that volatility levels and forecasting performance vary across the models and markets tested.
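
Gaussian estimation of this kind exploits the exact discretisation of the continuous-time model. The sketch below illustrates the idea for the simplest one-factor case, the Vasicek model, with invented parameters and simulated monthly data; it is not the two-factor specification tested in the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
dt = 1.0 / 12                           # monthly observations, as in the paper

# Simulate a Vasicek path via its exact AR(1) discretisation (parameters invented).
kappa, theta, sigma = 0.5, 0.05, 0.02
n = 360
r = np.empty(n)
r[0] = theta
for i in range(1, n):
    mean = theta + (r[i - 1] - theta) * np.exp(-kappa * dt)
    var = sigma**2 * (1 - np.exp(-2 * kappa * dt)) / (2 * kappa)
    r[i] = mean + np.sqrt(var) * rng.standard_normal()

def neg_log_lik(params):
    # Exact Gaussian likelihood of the discretised transition density.
    k, th, s = params
    if k <= 0 or s <= 0:
        return np.inf
    mean = th + (r[:-1] - th) * np.exp(-k * dt)
    var = s**2 * (1 - np.exp(-2 * k * dt)) / (2 * k)
    resid = r[1:] - mean
    return 0.5 * np.sum(np.log(2 * np.pi * var) + resid**2 / var)

res = minimize(neg_log_lik, [0.3, 0.04, 0.01], method="Nelder-Mead")
print("kappa, theta, sigma =", res.x)
```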

Details

Managerial Finance, vol. 27 no. 1/2
Type: Research Article
ISSN: 0307-4358

Article
Publication date: 22 February 2013

Jianfeng Zhang and Wenxiu Hu

The purpose of this paper is to examine whether realized volatility can provide additional information on the volatility process to the GARCH and EGARCH model, based on the data…

Abstract

Purpose

The purpose of this paper is to examine whether realized volatility can provide additional information on the volatility process to the GARCH and EGARCH models, based on data from the Chinese stock market.

Design/methodology/approach

Realized volatility is defined as the squared overnight return, plus the squared return over the break between the morning and afternoon sessions, plus the sum of the squared f-minute returns during trading hours on the relevant trading day. The methodology is a GARCH (EGARCH) model with added explanatory variables in the variance equation. Estimation is by exact maximum likelihood, using the BHHH algorithm for optimization.
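
A minimal sketch of this realized-variance construction, using made-up prices and a 30-minute sampling interval standing in for the paper's f-minute grid:

```python
import numpy as np

def realized_variance(prev_close, am_prices, pm_prices):
    """Daily realized variance following the paper's definition:
    squared overnight return, plus the squared session-break return
    (morning close to afternoon open), plus the sum of squared
    f-minute returns within each trading session.

    am_prices / pm_prices: prices sampled every f minutes in the
    morning and afternoon sessions (names are illustrative)."""
    overnight = np.log(am_prices[0] / prev_close) ** 2
    session_break = np.log(pm_prices[0] / am_prices[-1]) ** 2
    intraday = (np.sum(np.diff(np.log(am_prices)) ** 2)
                + np.sum(np.diff(np.log(pm_prices)) ** 2))
    return overnight + session_break + intraday

# Toy example with invented prices sampled every 30 minutes.
rng = np.random.default_rng(2)
am = 10.0 * np.exp(np.cumsum(0.002 * rng.standard_normal(5)))
pm = am[-1] * np.exp(np.cumsum(0.002 * rng.standard_normal(5)))
print(realized_variance(9.95, am, pm))
```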

Findings

There are some stocks for which realized volatility measures add information to the volatility process, but there are still quite a number of stocks for which they do not contain any additional information. The 30-minute realized volatility measures outperform measures constructed on other time intervals. Firm size, turnover rate, and amplitude also partially explain the differences in realized volatility's explanatory power across firms.

Research limitations/implications

When analyzing the factors determining the role of realized volatility, ownership structure and ultimate ownership are not taken into account because of the difficulty of obtaining the data; only the turnover ratio, amplitude, and size are considered.

Originality/value

This study is the first to extend this line of inquiry into realized volatility to the emerging Chinese stock market. Given the unique institutional setting in China, the results of this study can play an important role in pricing warrants for domestic investors in the Chinese market.

Details

International Journal of Managerial Finance, vol. 9 no. 1
Type: Research Article
ISSN: 1743-9132

Article
Publication date: 5 March 2018

Preeti Wanti Srivastava and Savita Savita

Most of the literature on the design of accelerated life test (ALT) plans focuses on a single system (subsystem), totally disregarding its internal configuration. Many times it is…

Abstract

Purpose

Most of the literature on the design of accelerated life test (ALT) plans focuses on a single system (subsystem), totally disregarding its internal configuration. Many times it is not possible to identify the components that caused the system failure, or the cause can be identified only with a subset of its components, resulting in a masked observation. The purpose of this paper is to deal with the planning of ramp-stress accelerated life testing for a high-reliability parallel system comprising two dependent components, using masked failure data. Such testing may prove useful for a twin-engine aircraft. A ramp stress results when the stress applied to the system increases linearly with time.

Design/methodology/approach

A parallel system with two dependent components is considered, with the dependency modeled by the Gumbel-Hougaard copula. The stress-life relationship is modeled using the inverse power law, and the cumulative exposure model is assumed to capture the effect of changing stress. The method of maximum likelihood is used for estimating the design parameters. The optimal plan consists in finding the optimal stress rate using the D-optimality criterion.

Findings

The optimal stress rate is found using the D-optimality criterion, which minimizes the reciprocal of the determinant of the Fisher information matrix. The proposed plan is explained using a numerical example and a sensitivity analysis.
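
The criterion can be illustrated on a deliberately simplified stand-in: a constant-stress exponential ALT rather than the paper's ramp-stress, dependent-component model. The stress levels and sample size below are invented; the point is only how the reciprocal determinant of the Fisher information matrix is minimized over a design variable.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Simplified stand-in: constant-stress ALT with exponential lifetimes and an
# inverse-power-law rate, lambda(x) = exp(a + b*x) with x = log(stress).
# For this model, the Fisher information of (a, b) from n_i units tested at
# log-stress x_i is  sum_i n_i * [[1, x_i], [x_i, x_i**2]].
x_lo, x_hi = np.log(1.5), np.log(3.0)   # invented log stress levels
n = 100                                  # total units on test

def reciprocal_det_fim(p):
    """D-optimality maximizes det(FIM), i.e. minimizes 1/det(FIM),
    here over the fraction p of units allocated to the low stress."""
    fim = (n * p * np.array([[1, x_lo], [x_lo, x_lo**2]])
           + n * (1 - p) * np.array([[1, x_hi], [x_hi, x_hi**2]]))
    det = np.linalg.det(fim)
    return np.inf if det <= 0 else 1.0 / det

res = minimize_scalar(reciprocal_det_fim, bounds=(0.01, 0.99), method="bounded")
print("D-optimal fraction at low stress:", round(res.x, 3))
```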

Originality/value

The model formulated can help reliability engineers quickly obtain reliability estimates of high-reliability products that are likely to last for several years.

Details

International Journal of Quality & Reliability Management, vol. 35 no. 3
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 1 April 2000

Fumiyo N. Kondo and Genshiro Kitagawa

Access to daily store-level scanner data has become increasingly easy in recent years in Japan, and time series analysis based on a sales response model is becoming realistic…

Abstract

Access to daily store-level scanner data has become increasingly easy in recent years in Japan, and time series analysis based on a sales response model is becoming realistic. Introduces a new method of combining time series analysis and regression analysis of the price promotion effect, which enables simultaneous decomposition of store-level scanner sales into a trend (including seasonality), a day-of-the-week effect, and an explanatory variable effect due to price promotion. The method was applied to daily store-level scanner sales of milk, showing evidence of a day-of-the-week effect. Further, a method of incorporating several kinds of price-cut variables into the regression analysis is presented, along with the analyzed results.
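
A plain-regression stand-in for this decomposition, with simulated data, is sketched below; the paper's actual method is a time-series treatment, so this OLS version only illustrates how a trend, day-of-the-week dummies, and a price-promotion variable are separated.

```python
import numpy as np

# Simulated eight weeks of daily log sales (all numbers invented).
rng = np.random.default_rng(3)
days = 8 * 7
t = np.arange(days)
dow = t % 7
price_cut = (rng.random(days) < 0.2).astype(float)   # promotion indicator

true_dow = np.array([0.0, -0.1, -0.1, 0.0, 0.1, 0.4, 0.3])
y = 0.002 * t + true_dow[dow] + 0.5 * price_cut + 0.05 * rng.standard_normal(days)

# Design matrix: linear trend, 6 day-of-week dummies (first day as base), promotion.
X = np.column_stack([np.ones(days), t]
                    + [(dow == d).astype(float) for d in range(1, 7)]
                    + [price_cut])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated promotion lift on log sales:", round(beta[-1], 3))
```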

Details

Marketing Intelligence & Planning, vol. 18 no. 2
Type: Research Article
ISSN: 0263-4503

Article
Publication date: 22 March 2022

Zhanpeng Shen, Chaoping Zang, Xueqian Chen, Shaoquan Hu and Xin-en Liu

For fast calculation of complex structures in engineering, correlations among input variables are often ignored in uncertainty propagation, even though the effect of ignoring these…

Abstract

Purpose

For fast calculation of complex structures in engineering, correlations among input variables are often ignored in uncertainty propagation, even though the effect of ignoring these correlations on the output uncertainty is unclear. This paper aims to quantify the input uncertainty and estimate the correlations among the inputs according to the collected observed data instead of questionable assumptions. Moreover, the small size of the experimental data should also be considered, as this is a common engineering problem.

Design/methodology/approach

In this paper, a novel method combining the p-box with a copula function for both uncertainty quantification and correlation estimation is explored. The copula function is utilized to estimate correlations among uncertain inputs based upon the observed data. The p-box method is employed to quantify the input uncertainty as well as the epistemic uncertainty associated with the limited amount of observed data. A nested Monte Carlo sampling technique is adopted to ensure that the propagation is always feasible. In addition, a Kriging model is built to reduce the computational cost of uncertainty propagation.
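
A minimal sketch of the nested propagation loop follows: the outer loop samples interval-valued distribution parameters (the p-box, epistemic), the inner loop draws correlated inputs through a Gaussian copula (aleatory), and a cheap analytic function stands in for the Kriging surrogate. All numbers and the response function are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def response(x1, x2):
    # Cheap analytic stand-in for the Kriging surrogate of the true model.
    return x1 + 0.5 * x2

n_outer, n_inner = 50, 2000
rho = 0.6                                   # copula correlation (assumed known)
chol = np.linalg.cholesky([[1, rho], [rho, 1]])

failure_probs = []
for _ in range(n_outer):                    # epistemic loop: p-box parameters
    mu1 = rng.uniform(9.5, 10.5)            # interval-valued means -> a p-box
    mu2 = rng.uniform(4.5, 5.5)
    z = chol @ rng.standard_normal((2, n_inner))
    u = stats.norm.cdf(z)                   # Gaussian copula: correlated uniforms
    x1 = stats.norm.ppf(u[0], loc=mu1, scale=1.0)   # marginal transforms
    x2 = stats.norm.ppf(u[1], loc=mu2, scale=0.5)
    failure_probs.append(np.mean(response(x1, x2) > 14.0))  # aleatory loop

print("failure probability bounds:",
      round(min(failure_probs), 4), "-", round(max(failure_probs), 4))
```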

Findings

To illustrate the application of this method, an engineering example of structural reliability assessment is performed. The results indicate that whether the correlations among input variables are quantified may significantly affect the output uncertainty. Furthermore, the separation of aleatory and epistemic uncertainties provides an additional advantage for risk management.

Originality/value

The proposed method takes advantage of the p-box and the copula function to deal with correlations and a limited amount of observed data, two important issues in uncertainty quantification in engineering. Thus, it is practical and able to predict the response uncertainty or system state accurately.

Details

Engineering Computations, vol. 39 no. 6
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 3 August 2010

Wei Xu, Guenther Filler, Martin Odening and Ostap Okhrin

The purpose of this paper is to assess the losses of weather‐related insurance at different regional levels. The possibility of spatial diversification of insurance is explored by…

Abstract

Purpose

The purpose of this paper is to assess the losses of weather‐related insurance at different regional levels. The possibility of spatial diversification of insurance is explored by estimating the joint occurrence of unfavorable weather conditions in different locations, looking particularly at the tail behavior of the loss distribution.

Design/methodology/approach

Joint weather‐related losses are estimated using copulas. Copulas avoid the direct estimation of multivariate distributions but allow for much greater flexibility in modeling the dependence structure of weather risks compared with simple correlation coefficients.
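
The sketch below shows the standard copula workflow on simulated losses at two locations: transform to pseudo-observations by ranking, estimate a Gaussian-copula correlation, and compare the joint tail probability with the independence benchmark. The paper fits copulas to real weather data; this is only the mechanical skeleton.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Simulated stand-in for weather-related losses at two locations.
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=5000)
loss_a, loss_b = np.exp(z[:, 0]), np.exp(z[:, 1])

# Pseudo-observations: ranks rescaled to (0, 1), the usual copula input.
u = stats.rankdata(loss_a) / (len(loss_a) + 1)
v = stats.rankdata(loss_b) / (len(loss_b) + 1)

# Gaussian-copula correlation estimated from the normal scores.
rho = np.corrcoef(stats.norm.ppf(u), stats.norm.ppf(v))[0, 1]

# Joint tail probability: both losses above their own 95% quantile.
joint_tail = np.mean((u > 0.95) & (v > 0.95))
print(f"copula correlation: {rho:.2f}")
print(f"P(both losses extreme): {joint_tail:.4f} vs {0.05**2:.4f} if independent")
```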

Findings

Results indicate that indemnity payments based on temperature as well as on cumulative rainfall show strong stochastic dependence, even at a large regional scale. Thus the possibility of reducing risk exposure by increasing the trading area of insurance is limited.

Research limitations/implications

The empirical findings are limited by a rather weak database; with such data, the estimation of high‐dimensional copulas leads to large estimation errors.

Practical implications

The paper includes implications for the quantification of systemic weather risk, which is important for the rate making of crop insurance and reinsurance.

Originality/value

This paper's results highlight how important the choice of the statistical approach is when modeling the dependence structure of weather risks.

Details

Agricultural Finance Review, vol. 70 no. 2
Type: Research Article
ISSN: 0002-1466

Article
Publication date: 17 May 2013

Michael Martin

Interest rate risk, i.e. the risk of changes in the interest rate term structure, is of high relevance in insurers' risk management. Due to large capital investments in interest…

Abstract

Purpose

Interest rate risk, i.e. the risk of changes in the interest rate term structure, is of high relevance in insurers' risk management. Due to large capital investments in interest rate sensitive assets such as bonds, interest rate risk plays a considerable role in deriving the solvency capital requirement (SCR) in the context of Solvency II. This paper seeks to address these issues.

Design/methodology/approach

In addition to the Solvency II standard model, the author applies the model of Gatzert and Martin to introduce a partial internal model for the market risk of bond exposures. After introducing calibration methods for short rate models, the author quantifies interest rate and credit risk for corporate and government bonds and demonstrates that the type of process can have a considerable impact despite comparable underlying input data.
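
To illustrate why the choice of short rate process matters, the sketch below simulates Vasicek and Cox-Ingersoll-Ross paths under an Euler scheme with invented but variance-matched parameters and compares the lower rate quantiles; it is not the paper's calibration or its SCR computation.

```python
import numpy as np

rng = np.random.default_rng(6)
dt, n_steps, n_paths = 1 / 252, 252, 10000
r0, kappa, theta = 0.03, 0.3, 0.04
sig_vas, sig_cir = 0.01, 0.05          # chosen so local variances are comparable

r_vas = np.full(n_paths, r0)
r_cir = np.full(n_paths, r0)
for _ in range(n_steps):                # Euler scheme for both short-rate models
    dw = np.sqrt(dt) * rng.standard_normal(n_paths)
    r_vas += kappa * (theta - r_vas) * dt + sig_vas * dw
    r_cir += kappa * (theta - r_cir) * dt + sig_cir * np.sqrt(np.maximum(r_cir, 0)) * dw

# One-year lower quantiles: Vasicek's rates can turn negative while CIR's
# cannot, one driver of the SCR differences discussed in the paper.
print("Vasicek 0.5% quantile:", round(np.quantile(r_vas, 0.005), 4))
print("CIR     0.5% quantile:", round(np.quantile(r_cir, 0.005), 4))
```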

Findings

The results show that, in general, the SCR for interest rate risk derived from the standard model of Solvency II tends toward the SCR obtained with the Vasicek short rate model, while applying the Cox, Ingersoll, and Ross model leads to a lower SCR. For low‐rated bonds, the internal models approximate each other and, moreover, show a considerable underestimation of credit risk in the Solvency II model.

Originality/value

The aim of this paper is to assess model risk with focus on bonds in the market risk module of Solvency II regarding the underlying interest rate process and input parameters.

Details

The Journal of Risk Finance, vol. 14 no. 3
Type: Research Article
ISSN: 1526-5943

Article
Publication date: 2 October 2020

Xiu Wei Yeap, Hooi Hooi Lean, Marius Galabe Sampid and Haslifah Mohamad Hasim

This paper investigates the dependence structure and market risk of the currency exchange rate portfolio from the Malaysian ringgit perspective.

Abstract

Purpose

This paper investigates the dependence structure and market risk of the currency exchange rate portfolio from the Malaysian ringgit perspective.

Design/methodology/approach

The marginal returns of the five major exchange rate series, i.e. the United States dollar (USD), Japanese yen (JPY), Singapore dollar (SGD), Thai baht (THB) and Chinese yuan renminbi (CNY), are modelled by the Bayesian generalized autoregressive conditional heteroskedasticity (GARCH) (1,1) model with Student's t innovations. In addition, five different copulas, namely Gumbel, Clayton, Frank, Gaussian and Student's t, are applied to model the joint distribution and examine the dependence structure of the five currencies. Moreover, the portfolio risk is measured by Value at Risk (VaR), which captures extreme events through extreme value theory (EVT).
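
One building block, the EVT step, can be sketched as follows: a generalized Pareto distribution is fitted to exceedances over a high threshold (peaks-over-threshold) and the 99% VaR is extrapolated from it. Simulated Student's t losses stand in for the GARCH-filtered, copula-linked portfolio returns, which are omitted here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Stand-in for portfolio loss residuals (the paper first filters each
# currency with a Bayesian GARCH(1,1)-t model; that step is skipped).
losses = -stats.t.rvs(df=5, size=5000, random_state=rng)

# Peaks-over-threshold: fit a generalized Pareto distribution to the
# exceedances over a high threshold, then extrapolate VaR into the tail.
u = np.quantile(losses, 0.95)
exceed = losses[losses > u] - u
xi, loc, beta = stats.genpareto.fit(exceed, floc=0.0)

n, n_u, alpha = len(losses), len(exceed), 0.99
var_evt = u + (beta / xi) * ((n / n_u * (1 - alpha)) ** (-xi) - 1)
print(f"99% VaR (EVT/POT): {var_evt:.3f}")
print(f"99% VaR (empirical): {np.quantile(losses, alpha):.3f}")
```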

Findings

The findings show that the Gumbel and Student's t copulas are the best-fitted Archimedean and elliptical copulas, respectively, for the five currencies. The dependence structure is asymmetric and heavy tailed.

Research limitations/implications

The findings of this paper have important implications for diversification decisions and hedging problems for investors who are involved in foreign currencies. The authors find that the portfolio is diversified when extreme events are taken into consideration. Therefore, investors holding an individual currency with a VaR higher than that of the portfolio may consider adding the other currencies used in this paper for hedging.

Originality/value

This is the first paper to estimate the VaR of a currency exchange rate portfolio using a combination of the Bayesian GARCH model, EVT and copula theory. Moreover, the VaR of the currency exchange rate portfolio can serve as a benchmark of currency exchange market risk.

Details

International Journal of Emerging Markets, vol. 16 no. 5
Type: Research Article
ISSN: 1746-8809

Article
Publication date: 1 January 2014

Mustapha Djeddou, Hichem Zeher and Younes Nekachtali

The paper aims to propose a new method for estimating the time of arrival (TOA) of ultra-wideband (UWB) signals under the IEEE 802.15.4a multipath channel model.

Abstract

Purpose

The paper aims to propose a new method for estimating the time of arrival (TOA) of ultra-wideband (UWB) signals under the IEEE 802.15.4a multipath channel model.

Design/methodology/approach

The proposed approach is based on a proportionality test and consists in finding out whether two autoregressive (AR) processes, modeling two frames, are proportional. A distance measure is used to assess the proportionality between the two AR processes.
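
A rough sketch of the idea: fit AR models to two frames via Yule-Walker, form their power spectral densities, and score how far the spectra are from being proportional. The log-ratio-variance distance used below is a generic choice for illustration, not the paper's distance, and the frames are simulated.

```python
import numpy as np

def ar_psd(x, order=4, n_freq=256):
    """Yule-Walker AR fit, then the AR power spectral density."""
    x = x - x.mean()
    acov = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)
    R = np.array([[acov[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, acov[1:order + 1])       # AR coefficients
    noise_var = acov[0] - a @ acov[1:order + 1]
    w = np.linspace(0, np.pi, n_freq)
    denom = np.abs(1 - np.sum(a[:, None] * np.exp(-1j * np.outer(
        np.arange(1, order + 1), w)), axis=0)) ** 2
    return noise_var / denom

def proportionality_distance(x, y):
    """Near zero iff the two AR spectra are proportional: the variance of
    the log-ratio of the PSDs (a generic choice, not the paper's)."""
    return np.var(np.log(ar_psd(x)) - np.log(ar_psd(y)))

rng = np.random.default_rng(8)
frame1 = np.convolve(rng.standard_normal(512), [1, 0.5], mode="same")
frame2 = 3.0 * np.convolve(rng.standard_normal(512), [1, 0.5], mode="same")
frame3 = np.convolve(rng.standard_normal(512), [1, -0.8], mode="same")
print("proportional frames:", round(proportionality_distance(frame1, frame2), 4))
print("different frames:   ", round(proportionality_distance(frame1, frame3), 4))
```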

Findings

The developed technique may be used in two ways, sample-by-sample or block-by-block, according to the required ranging accuracy. The method thus offers flexibility between the computational load and the required estimation accuracy. Moreover, the proposed method uses a threshold that is derived analytically from a preset false alarm probability.

Practical implications

Simulation experiments are conducted to assess the performance of the new TOA estimation algorithm. The comparison is made against the well-known CLEAN algorithm for sample-by-sample TOA estimation and against three energy detector based receiver algorithms. The obtained results highlight the effectiveness of the developed approach.

Originality/value

The developed TOA estimation algorithm is completely different from other techniques in the literature; it is based on a proportionality test between two sliding frames, which are modeled by two AR processes. A distance measure is then used to check whether or not the power spectral densities are proportional.

Details

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, vol. 33 no. 1/2
Type: Research Article
ISSN: 0332-1649
