Search results

1 – 10 of over 4000
Article
Publication date: 5 February 2018

Haoliang Wang, Xiwang Dong, Qingdong Li and Zhang Ren

Abstract

Purpose

Using small reference samples, the calculation method for the confidence value and the prediction method for the confidence interval of a multi-input system are investigated. The purpose of this paper is to offer effective methods for assessing the confidence value and confidence interval of the simulation models used in establishing guidance and control systems.

Design/methodology/approach

In this paper, an improved cluster estimation method is first proposed to guide the selection of the small reference samples. Then, based on the analytic hierarchy process method, a new calculation method for the weight of each reference sample is derived. Using the grey relational analysis method, new calculation methods for the correlation coefficient and the confidence value are presented. Moreover, the confidence interval of the sample awaiting assessment is defined, and a new prediction method is derived to obtain the confidence interval of a sample that has no reference sample. Finally, using the prediction method and the original small reference samples, the Bootstrap resampling method is applied to obtain more correlation coefficients for the sample, reducing the probability of wrongly rejecting a valid model.
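The grey relational machinery this abstract describes can be sketched compactly. The following is a minimal illustration only, assuming Deng's classic grey relational coefficient with distinguishing coefficient rho = 0.5 and AHP-style weights; the function names, weights and data are made up, not the paper's exact formulation.

```python
# Hypothetical sketch of a grey relational confidence value.

def grey_relational_coefficients(reference, sample, rho=0.5):
    """Deng's grey relational coefficient between a reference sequence and
    the sample awaiting assessment, computed point by point."""
    deltas = [abs(r - s) for r, s in zip(reference, sample)]
    d_min, d_max = min(deltas), max(deltas)
    if d_max == 0:                      # identical sequences
        return [1.0] * len(deltas)
    return [(d_min + rho * d_max) / (d + rho * d_max) for d in deltas]

def confidence_value(references, weights, sample, rho=0.5):
    """Weighted mean relational grade over all reference samples."""
    grades = [
        sum(cs) / len(cs)
        for cs in (grey_relational_coefficients(r, sample, rho) for r in references)
    ]
    return sum(w * g for w, g in zip(weights, grades))

refs = [[1.0, 2.0, 3.0], [1.1, 2.1, 2.9]]
weights = [0.6, 0.4]                    # AHP-derived weights (assumed)
cv = confidence_value(refs, weights, [1.0, 2.2, 2.8])
```

A confidence value near 1 indicates close agreement between the sample awaiting assessment and the weighted reference samples.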

Findings

Grey relational analysis is shown to be effective for assessing the confidence value and predicting the confidence interval. Numerical simulations demonstrate the effectiveness of the theoretical results.

Originality/value

Based on the selected small reference samples, new calculation methods for the correlation coefficient and the confidence value are presented to assess the confidence value of the model awaiting assessment. Calculation methods for the maximum confidence interval, the expected confidence interval and other required confidence intervals are also presented, which can be used to assess the validity of the controller and guidance system obtained from the model awaiting assessment.

Details

Grey Systems: Theory and Application, vol. 8 no. 1
Type: Research Article
ISSN: 2043-9377

Article
Publication date: 29 July 2014

Yinao Wang

Abstract

Purpose

The purpose of this paper is to discuss interval forecasting, the prediction interval and its reliability. The general rules that the construction of a prediction interval and its reliability must satisfy are studied, and the grey wrapping band forecasting method is improved accordingly.

Design/methodology/approach

The process by which a forecasting method produces a prediction interval is set out, and the meaning of the interval's reliability (the probability that the prediction interval contains the true value of the predicted variable) is elaborated. The general rules are abstracted and summarised from many forecasting cases and then examined by the axiomatic method.

Findings

Prediction intervals are categorised into three types. Three axioms that the construction of a prediction interval must satisfy are put forward, and the grey wrapping band forecasting method is improved on the basis of these axioms.

Practical implications

Taking the Shanghai composite index as an example, according to the K-line diagram from 4 January 2013 to 9 May 2013, the reliability that the predicted rebound height over the subsequent two or three trading days does not exceed the upper wrapping curve is 80 per cent. It is important to understand the forecasting range correctly, to build a reasonable range forecasting method and to apply the grey wrapping band forecasting method correctly.

Originality/value

Grey wrapping band forecasting method is improved based on the proposed axioms.

Details

Grey Systems: Theory and Application, vol. 4 no. 2
Type: Research Article
ISSN: 2043-9377

Book part
Publication date: 12 November 2014

Marco Lam and Brad S. Trinkle

Abstract

The purpose of this paper is to improve the information quality of bankruptcy prediction models proposed in the literature by building prediction intervals around the point estimates generated by these models and to determine if the use of the prediction intervals in conjunction with the point estimates yields an improvement in predictive accuracy over traditional models. The authors calculated the point estimates and prediction intervals for a sample of firms from 1991 to 2008. The point estimates and prediction intervals were used in concert to classify firms as bankrupt or non-bankrupt. The accuracy of the tested technique was compared to that of a traditional bankruptcy prediction model. The results indicate that the use of upper and lower bounds in concert with the point estimates yields an improvement in the predictive ability of bankruptcy prediction models. The improvements in overall prediction accuracy and non-bankrupt firm prediction accuracy are statistically significant at the 0.01 level. The authors present a technique that (1) provides a more complete picture of the firm’s status, (2) is derived from multiple forms of evidence, (3) uses a predictive interval technique that is easily repeated, (4) can be generated in a timely manner, (5) can be applied to other bankruptcy prediction models in the literature, and (6) is statistically significantly more accurate than traditional point estimate techniques. The current research is the first known study to use the combination of point estimates and prediction intervals in bankruptcy prediction.
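The idea of using a point estimate and its prediction interval in concert can be illustrated with a toy decision rule. This is not the authors' exact procedure: the rule below lets the interval decide whenever it entirely clears the cutoff and falls back to the point estimate otherwise, and the 0.5 cutoff is an assumption.

```python
# Illustrative sketch: combining a bankruptcy-score point estimate with
# its prediction-interval bounds to classify a firm.

def classify(point, lower, upper, cutoff=0.5):
    if lower > cutoff:
        return "bankrupt"          # entire interval above the cutoff
    if upper < cutoff:
        return "non-bankrupt"      # entire interval below the cutoff
    # interval straddles the cutoff: fall back to the point estimate
    return "bankrupt" if point > cutoff else "non-bankrupt"
```

A firm whose whole interval sits above the cutoff is classified with more evidence behind the call than one whose interval straddles it, which is the sense in which the bounds add information to the point estimate.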

Details

Advances in Business and Management Forecasting
Type: Book
ISBN: 978-1-78441-209-8

Article
Publication date: 1 October 2019

Mustagime Tülin Yildirim and Bülent Kurt

Abstract

Purpose

With the condition monitoring system on airplanes, failures can be predicted before they occur. Performance deterioration of aircraft engines is monitored by parameters such as fuel flow, exhaust gas temperature, engine fan speeds, vibration, oil pressure and oil temperature. The vibration parameter makes it possible to detect any existing or incipient faults easily. The purpose of this paper is to develop a new model to estimate the low pressure turbine (LPT) vibration parameter of an aircraft engine by using data from an aircraft’s actual flight recorded by the flight data recorder (FDR).

Design/methodology/approach

First, statistical regression analysis was used to determine the parameters related to the LPT. The selected parameters were then applied as inputs to the developed Levenberg–Marquardt feedforward neural network, and the output LPT vibration parameter was estimated with a small error. Analyses were performed in MATLAB and the SPSS Statistics 22 package. Finally, the confidence interval method was used to check the accuracy of the estimates produced by the artificial neural network (ANN).
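The confidence-interval accuracy check can be sketched in a few lines. This is a hedged illustration, not necessarily the paper's exact procedure: it builds a normal-theory interval for the mean residual between measured and ANN-estimated vibration values, and the numbers are placeholders.

```python
import statistics

# Sketch: a 95% confidence interval for the mean residual between
# measured and estimated values; an interval containing zero suggests
# the estimator is unbiased at that confidence level.

def residual_interval(measured, estimated, z=1.96):
    residuals = [m - e for m, e in zip(measured, estimated)]
    mean = statistics.mean(residuals)
    se = statistics.stdev(residuals) / len(residuals) ** 0.5
    return mean - z * se, mean + z * se     # approximate 95% CI

measured  = [1.00, 2.00, 3.00, 4.00]   # measured LPT vibration (arbitrary units)
estimated = [1.10, 1.90, 3.10, 3.90]   # hypothetical ANN estimates
lo, hi = residual_interval(measured, estimated)
```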

Findings

This study shows that the health condition of an aircraft engine can be evaluated by using confidence interval prediction of the ANN-estimated LPT vibration parameter, without dismantling the engine or requiring expert knowledge.

Practical implications

With this study, it has been shown that faults that may occur during flight can be easily detected using the data of a flight without expert evaluation.

Originality/value

The health condition of the turbofan engine was evaluated using the confidence interval prediction of ANN-estimated LPT vibration parameters.

Details

Aircraft Engineering and Aerospace Technology, vol. 92 no. 2
Type: Research Article
ISSN: 1748-8842

Article
Publication date: 19 June 2009

Clara M. Novoa and Francis Mendez

Abstract

Purpose

The purpose of this paper is to present bootstrapping as an alternative statistical methodology to analyze time studies and input data for discrete‐event simulations. Bootstrapping is a non‐parametric technique to estimate the sampling distribution of a statistic by doing repeated sampling (i.e. resampling) with replacement from an original sample. This paper proposes a relatively simple application of bootstrap techniques to time study analysis.
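The resampling idea described above can be sketched directly, here for a percentile confidence interval on the mean cycle time of a time study. The observed times, the 95% level and the 2,000 resamples are illustrative assumptions, not the paper's data.

```python
import random

# Minimal bootstrap sketch: resample the observed cycle times with
# replacement and take percentile bounds for the mean.

def bootstrap_ci(sample, stat=lambda xs: sum(xs) / len(xs),
                 n_boot=2000, alpha=0.05, seed=42):
    rng = random.Random(seed)            # seeded for reproducibility
    stats = sorted(
        stat([rng.choice(sample) for _ in sample]) for _ in range(n_boot)
    )
    lo = stats[int(n_boot * alpha / 2)]
    hi = stats[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

times = [11.2, 10.8, 12.1, 11.5, 10.9, 11.8, 12.4, 11.1]  # cycle times (min)
lo, hi = bootstrap_ci(times)
```

Because no distributional form is assumed, the same procedure applies when the normality assumption of the parametric approach is violated, which is the advantage the paper emphasises.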

Design/methodology/approach

Using an inductive approach, this work selects a typical situation to conduct a time study, applies two bootstrap procedures for the statistical analysis, compares bootstrap to traditional parametric approaches, and extrapolates general advantages of bootstrapping over parametric approaches.

Findings

Bootstrap produces accurate inferences when compared to those from parametric methods, and it is an alternative when the underlying parametric assumptions are not met.

Research limitations/implications

Research results contribute to work measurement and simulation fields since bootstrap promises an increase in accuracy in cases where the normality assumption is violated or only small samples are available. Furthermore, this paper shows that electronic spreadsheets are appropriate tools to implement the proposed bootstrap procedures.

Originality/value

In previous work, the standard procedure for analyzing time studies and input data for simulations has been a parametric approach. Bootstrapping permits obtaining both point estimates and estimates of time distributions. Engineers and managers involved in process improvement initiatives could use bootstrapping to better exploit the information in available samples.

Details

International Journal of Productivity and Performance Management, vol. 58 no. 5
Type: Research Article
ISSN: 1741-0401

Article
Publication date: 3 February 2021

Geoff A.M. Loveman and Joel J.E. Edney

Abstract

Purpose

The purpose of the present study was the development of a methodology for translating predicted rates of decompression sickness (DCS), following tower escape from a sunken submarine, into predicted probability of survival, a more useful statistic for making operational decisions.

Design/methodology/approach

Predictions were made, using existing models, for the probabilities of a range of DCS symptoms following submarine tower escape. Subject matter expert estimates of the effect of these symptoms on a submariner’s ability to survive in benign weather conditions on the sea surface until rescued were combined with the likelihoods of the different symptoms occurring using standard probability theory. Plots were generated showing the dependence of predicted probability of survival following escape on the escape depth and the pressure within the stricken submarine.
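The combination of symptom likelihoods with expert survival estimates via standard probability theory can be sketched with the law of total probability over mutually exclusive outcomes. The outcome probabilities and survival estimates below are made-up placeholders, not the study's subject matter expert values.

```python
# Sketch: overall survival probability as a probability-weighted sum of
# conditional survival estimates, one per mutually exclusive DCS outcome.

outcomes = {
    "no DCS":     (0.90, 0.99),   # (P(outcome), P(survive | outcome))
    "mild DCS":   (0.07, 0.90),
    "severe DCS": (0.03, 0.40),
}

# the outcome probabilities must sum to 1 for the decomposition to be valid
assert abs(sum(p for p, _ in outcomes.values()) - 1.0) < 1e-9

p_survive = sum(p * s for p, s in outcomes.values())
```

Repeating this calculation across escape depths and submarine internal pressures yields the survival-probability plots the study describes.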

Findings

Current advice on whether to attempt tower escape is based on avoiding rates of DCS above approximately 5%–10%. Consideration of predicted survival rates, based on subject matter expert opinion, suggests that the current advice might be considered conservative in the distressed submarine scenario, as DCS rates of 10% are not anticipated to markedly affect survival rates.

Originality/value

According to the authors’ knowledge, this study represents the first attempt to quantify the effect of different DCS symptoms on the probability of survival in submarine tower escape.

Details

Journal of Defense Analytics and Logistics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2399-6439

Article
Publication date: 16 April 2018

Qi Zhou, Xinyu Shao, Ping Jiang, Tingli Xie, Jiexiang Hu, Leshi Shu, Longchao Cao and Zhongmei Gao

Abstract

Purpose

Engineering system design and optimization problems are usually multi-objective and constrained and have uncertainties in the inputs. These uncertainties might significantly degrade the overall performance of engineering systems and change the feasibility of the obtained solutions. This paper aims to propose a multi-objective robust optimization approach based on Kriging metamodel (K-MORO) to obtain the robust Pareto set under the interval uncertainty.

Design/methodology/approach

In K-MORO, the nested optimization structure is reduced to a single-loop optimization structure to ease the computational burden. Considering that the interpolation uncertainty of the Kriging metamodel may affect the robustness of the Pareto optima, an objective switching and sequential updating strategy is introduced in K-MORO to determine (1) whether the robust analysis or the Kriging metamodel should be used to evaluate the robustness of design alternatives, and (2) which design alternatives should be selected to improve the prediction accuracy of the Kriging metamodel during the robust optimization process.
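This is not the paper's K-MORO algorithm itself, but the underlying notion of robustness under interval uncertainty can be illustrated: evaluate an objective's worst case over a box of inputs, here by enumerating its corners (valid for this toy objective because it is monotone in each coordinate over the box).

```python
import itertools

# Sketch: worst-case objective value over box-bounded interval uncertainty.
# A robust optimizer would minimize this worst case rather than the
# nominal objective.

def worst_case(f, x, radii):
    corners = itertools.product(*[(xi - r, xi + r) for xi, r in zip(x, radii)])
    return max(f(c) for c in corners)

objective = lambda v: v[0] ** 2 + v[1]          # toy objective
wc = worst_case(objective, [1.0, 2.0], [0.1, 0.2])
```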

Findings

Five numerical and engineering cases are used to demonstrate the applicability of the proposed approach. The results illustrate that K-MORO is able to obtain a robust Pareto frontier while significantly reducing computational cost.

Practical implications

The proposed approach exhibits great capability for practical engineering design optimization problems that are multi-objective and constrained and have uncertainties.

Originality/value

A K-MORO approach is proposed, which can obtain the robust Pareto set under the interval uncertainty and ease the computational burden of the robust optimization process.

Details

Engineering Computations, vol. 35 no. 2
Type: Research Article
ISSN: 0264-4401

Book part
Publication date: 1 September 2020

Ron Messer

Abstract

Details

Financial Modeling for Decision Making: Using MS-Excel in Accounting and Finance
Type: Book
ISBN: 978-1-78973-414-0

Book part
Publication date: 26 October 2017

Matthew Lindsey and Robert Pavur

Abstract

Control charts are designed to be effective in detecting a shift in the distribution of a process. Typically, these charts assume that the data for these processes follow an approximately normal distribution or some known distribution. However, if a data-generating process has a large proportion of zeros, that is, the data is intermittent, then traditional control charts may not adequately monitor these processes. The purpose of this study is to examine proposed control chart methods designed for monitoring a process with intermittent data to determine if they have a sufficiently small percentage of false out-of-control signals. Forecasting techniques for slow-moving/intermittent product demand have been extensively explored as intermittent data is common to operational management applications (Syntetos & Boylan, 2001, 2005, 2011; Willemain, Smart, & Schwarz, 2004). Extensions and modifications of traditional forecasting models have been proposed to model intermittent or slow-moving demand, including the associated trends, correlated demand, seasonality and other characteristics (Altay, Litteral, & Rudisill, 2012). Croston’s (1972) method and its adaptations have been among the principal procedures used in these applications. This paper proposes adapting Croston’s methodology to design control charts, similar to Exponentially Weighted Moving Average (EWMA) control charts, to be effective in monitoring processes with intermittent data. A simulation study is conducted to assess the performance of these proposed control charts by evaluating their Average Run Lengths (ARLs), or equivalently, their percent of false positive signals.
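Croston's (1972) method, the forecasting procedure the chapter adapts for control charting, can be sketched compactly: smooth the nonzero demand sizes and the inter-demand intervals separately, then forecast their ratio. The smoothing constant and demand series below are illustrative.

```python
# Sketch of Croston's method for intermittent demand.

def croston(demand, alpha=0.1):
    z = p = None          # smoothed demand size and inter-demand interval
    q = 1                 # periods since the last nonzero demand
    forecasts = []
    for d in demand:
        if d > 0:
            z = d if z is None else z + alpha * (d - z)
            p = q if p is None else p + alpha * (q - p)
            q = 1
        else:
            q += 1
        forecasts.append(z / p if z is not None else 0.0)
    return forecasts
```

An EWMA-style control chart built on this idea would monitor the smoothed size and interval rather than the raw, zero-inflated series, which is what makes it suitable for intermittent data.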

Details

Advances in Business and Management Forecasting
Type: Book
ISBN: 978-1-78743-069-3

Article
Publication date: 6 June 2016

Charalambos Pitros and Yusuf Arayici

Abstract

Purpose

The purpose of this paper is to provide a decision support model for the early diagnosis of housing bubbles in the UK during the maturity process of the phenomenon.

Design/methodology/approach

The development process of the model is divided into four stages. These stages are driven by the normal distribution theorem coupled with the case study approach. The application of normal distribution theory is enabled through the use of several parametric tools. The case studies tested in this research include the last two UK housing bubbles, 1986 to 1989 and 2001/2002 to 2007. The central hypothesis of the model is that during housing bubbles, all speculative activities of market participants follow an approximate synchronisation, and therefore, an irrational, synchronous and periodic increase on a wide range of relevant variables must occur to anticipate the bubble component. An empirical application of the model is conducted on UK housing market data over the period of 1983-2011.
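The "synchronous increase" hypothesis can be sketched as a simple screening rule: standardise each indicator and flag a period when every indicator sits above a common threshold at once. The variable names, data and 1.0 threshold are illustrative assumptions, not the paper's calibration.

```python
import statistics

# Sketch: flag periods where all standardised indicators exceed a
# threshold simultaneously, the signature of a synchronous increase.

def synchronous_flags(series_by_var, threshold=1.0):
    z = {}
    for name, xs in series_by_var.items():
        mu, sd = statistics.mean(xs), statistics.stdev(xs)
        z[name] = [(x - mu) / sd for x in xs]
    n = len(next(iter(series_by_var.values())))
    return [all(z[v][t] > threshold for v in z) for t in range(n)]

data = {
    "house_price_growth": [1.0, 2.0, 1.0, 8.0],
    "debt_burden_growth": [0.0, 1.0, 0.0, 7.0],
}
flags = synchronous_flags(data)
```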

Findings

The new approach successfully identifies the well-known UK historical bubble episodes over the period of 1983-2011. The study further determines that, for uncovering housing bubbles in the UK, house price changes carry the same weight as the debt-burden ratio when their velocity is positive. Finally, the application of this model leads to the conclusion that the model’s outputs fluctuate approximately in line with the phases of the UK real estate cycle.

Originality/value

This paper proposes a new measure for studying the presence of housing bubbles. This measure is not simply an ex post detection technique but a dating algorithm that uses data only up to the point of analysis for an ongoing bubble assessment, giving an early-warning diagnostic that can assist market participants and regulators in market monitoring.

Details

International Journal of Housing Markets and Analysis, vol. 9 no. 2
Type: Research Article
ISSN: 1753-8270
