Search results

1 – 10 of over 7000
Article
Publication date: 5 February 2018

Haoliang Wang, Xiwang Dong, Qingdong Li and Zhang Ren


Abstract

Purpose

Using small reference samples, a calculation method for the confidence value and a prediction method for the confidence interval of a multi-input system are investigated. The purpose of this paper is to offer effective methods for assessing the confidence value and confidence interval of the simulation models used in establishing guidance and control systems.

Design/methodology/approach

First, an improved cluster estimation method is proposed to guide the selection of the small reference samples. Then, based on the analytic hierarchy process method, a new calculation method for the weight of each reference sample is derived. Using the grey relational analysis method, new calculation methods for the correlation coefficient and confidence value are presented. Moreover, the confidence interval of the sample awaiting assessment is defined, and a new prediction method is derived to obtain the confidence interval of a sample awaiting assessment that has no reference sample. Subsequently, using the prediction method and the original small reference samples, the bootstrap resampling method is applied to obtain more correlation coefficients for the sample, reducing the probability of rejecting a true (valid) model.
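The abstract does not give the underlying formulas. As a rough illustration of the grey relational analysis step only, the sketch below computes Deng's grey relational coefficients and a weighted relational grade between a reference sample and a sample awaiting assessment; the data and the equal-weight default are hypothetical, and this is not the authors' implementation (their weights come from the AHP step).

```python
import numpy as np

def grey_relational_coefficients(reference, comparison, rho=0.5):
    """Deng's grey relational coefficients between a reference sequence and a
    comparison sequence (both assumed normalised to a common scale)."""
    delta = np.abs(np.asarray(reference) - np.asarray(comparison))
    d_min, d_max = delta.min(), delta.max()
    return (d_min + rho * d_max) / (delta + rho * d_max)

def grey_relational_grade(reference, comparison, weights=None, rho=0.5):
    """Weighted mean of the coefficients; the weights could come from an
    analytic hierarchy process (AHP) step, as the paper describes."""
    xi = grey_relational_coefficients(reference, comparison, rho)
    w = np.full(xi.size, 1.0 / xi.size) if weights is None else np.asarray(weights)
    return float(np.dot(w, xi))

# Hypothetical normalised outputs of one reference sample and the model output
reference = np.array([0.90, 0.80, 0.70, 0.85])
candidate = np.array([0.88, 0.75, 0.72, 0.80])
print(grey_relational_grade(reference, candidate))  # closer to 1 = better match
```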

Findings

Grey relational analysis is used in assessing the confidence value and in interval prediction. Numerical simulations are presented to demonstrate the effectiveness of the theoretical results.

Originality/value

Based on the selected small reference samples, new calculation methods for the correlation coefficient and confidence value are presented to assess the confidence value of the model awaiting assessment. Calculation methods for the maximum confidence interval, expected confidence interval and other required confidence intervals are presented, which can be used in assessing the validity of the controller and guidance system obtained from the model awaiting assessment.

Details

Grey Systems: Theory and Application, vol. 8 no. 1
Type: Research Article
ISSN: 2043-9377

Keywords

Article
Publication date: 29 July 2014

Yinao Wang


Abstract

Purpose

The purpose of this paper is to discuss interval forecasting, the prediction interval and its reliability. The general rules that must be satisfied when a prediction interval and its reliability are constructed are studied, and the grey wrapping band forecasting method is perfected on this basis.

Design/methodology/approach

The paper sets out the process by which a forecasting method constructs a prediction interval and elaborates on the meaning of its reliability (the probability that the prediction interval contains the true value of the predicted variable). The general rules are abstracted and summarized from many forecasting cases and are then examined by the axiomatic method.
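The abstract defines reliability as the probability that the prediction interval contains the true value. The sketch below is a minimal Monte Carlo check of that definition for a simple symmetric interval; it is illustrative only and is not the grey wrapping band method itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: true values are noisy observations around a trend;
# a simple interval forecast is [point - k*sigma, point + k*sigma].
n, sigma, k = 10_000, 1.0, 1.28          # k = 1.28 targets roughly 80% coverage
trend = np.linspace(0.0, 5.0, n)
true_values = trend + rng.normal(0.0, sigma, n)
lower, upper = trend - k * sigma, trend + k * sigma

# Reliability = empirical probability that the interval contains the true value
coverage = np.mean((true_values >= lower) & (true_values <= upper))
print(f"empirical reliability: {coverage:.3f}")   # close to 0.80
```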

Findings

Prediction intervals are categorized into three types. Three axioms that the construction of a prediction interval must satisfy are put forward, and the grey wrapping band forecasting method is improved based on these axioms.

Practical implications

Taking the Shanghai composite index as an example, based on the K-line (candlestick) chart from 4 January 2013 to 9 May 2013, the reliability that the predicted rebound height over the subsequent two or three trading days does not exceed the upper wrapping curve is 80 per cent. It is important to understand the forecasting range correctly, to build a reasonable range forecasting method and to apply the grey wrapping band forecasting method correctly.

Originality/value

The grey wrapping band forecasting method is improved based on the proposed axioms.

Details

Grey Systems: Theory and Application, vol. 4 no. 2
Type: Research Article
ISSN: 2043-9377

Keywords

Book part
Publication date: 12 November 2014

Marco Lam and Brad S. Trinkle


Abstract

The purpose of this paper is to improve the information quality of bankruptcy prediction models proposed in the literature by building prediction intervals around the point estimates generated by these models, and to determine whether using the prediction intervals in conjunction with the point estimates yields an improvement in predictive accuracy over traditional models. The authors calculated the point estimates and prediction intervals for a sample of firms from 1991 to 2008. The point estimates and prediction intervals were used in concert to classify firms as bankrupt or non-bankrupt, and the accuracy of the tested technique was compared to that of a traditional bankruptcy prediction model. The results indicate that the use of upper and lower bounds in concert with the point estimates yields an improvement in the predictive ability of bankruptcy prediction models. The improvements in overall prediction accuracy and non-bankrupt firm prediction accuracy are statistically significant at the 0.01 level. The authors present a technique that (1) provides a more complete picture of the firm’s status, (2) is derived from multiple forms of evidence, (3) uses a prediction interval technique that is easily repeated, (4) can be generated in a timely manner, (5) can be applied to other bankruptcy prediction models in the literature, and (6) is statistically significantly more accurate than traditional point estimate techniques. The current research is the first known study to use the combination of point estimates and prediction intervals in bankruptcy prediction.
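The chapter's own bankruptcy model is not reproduced here. The sketch below only illustrates the general idea of pairing point estimates with prediction intervals and classifying on the interval bounds, using an ordinary least squares model from statsmodels on made-up data; the variables, cutoff and figures are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Hypothetical training data: a financial-ratio score x and a distress score y
x = rng.normal(size=200)
y = 0.4 + 0.8 * x + rng.normal(scale=0.3, size=200)

model = sm.OLS(y, sm.add_constant(x)).fit()

# Point estimates plus 95% prediction intervals for three new firms
x_new = np.array([[-1.5], [0.2], [1.8]])
pred = model.get_prediction(sm.add_constant(x_new)).summary_frame(alpha=0.05)

# Classification rule in the chapter's spirit: flag a firm as at-risk only
# when the whole prediction interval lies above a chosen cutoff.
cutoff = 0.5
at_risk = pred["obs_ci_lower"] > cutoff
print(pred[["mean", "obs_ci_lower", "obs_ci_upper"]].assign(at_risk=at_risk))
```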

Details

Advances in Business and Management Forecasting
Type: Book
ISBN: 978-1-78441-209-8

Keywords

Article
Publication date: 1 October 2019

Mustagime Tülin Yildirim and Bülent Kurt


Abstract

Purpose

With the condition monitoring system on airplanes, failures can be predicted before they occur. Performance deterioration of aircraft engines is monitored by parameters such as fuel flow, exhaust gas temperature, engine fan speeds, vibration, oil pressure and oil temperature. The vibration parameter allows us to easily detect any existing or possible faults. The purpose of this paper is to develop a new model to estimate the low pressure turbine (LPT) vibration parameter of an aircraft engine by using the data of an aircraft’s actual flight from flight data recorder (FDR).

Design/methodology/approach

First, statistical regression analysis was used to determine the parameters related to the LPT. Then, the selected parameters were applied as inputs to the developed Levenberg–Marquardt feedforward neural network, and the output LPT vibration parameter was estimated with a small error. Analyses were performed in MATLAB and the SPSS Statistics 22 package. Finally, the confidence interval method was used to check the accuracy of the estimates produced by the artificial neural network (ANN).
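The paper's analyses were run in MATLAB and SPSS; the sketch below only illustrates the final confidence-interval check in Python on invented numbers: a t-based confidence interval for the mean error between recorded and ANN-estimated LPT vibration values. The figures and the 95% level are assumptions, not the paper's.

```python
import numpy as np
from scipy import stats

# Hypothetical FDR values and ANN estimates of the LPT vibration parameter
recorded  = np.array([1.02, 1.10, 0.98, 1.25, 1.15, 1.05, 1.20, 1.08])
estimated = np.array([1.00, 1.12, 0.97, 1.22, 1.18, 1.03, 1.21, 1.10])

errors = recorded - estimated
n = errors.size
mean_err, se = errors.mean(), errors.std(ddof=1) / np.sqrt(n)

# 95% t confidence interval for the mean estimation error
t_crit = stats.t.ppf(0.975, df=n - 1)
ci = (mean_err - t_crit * se, mean_err + t_crit * se)
print(f"mean error {mean_err:+.4f}, 95% CI [{ci[0]:+.4f}, {ci[1]:+.4f}]")
# If the CI contains zero and is narrow, the ANN estimates are judged accurate.
```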

Findings

This study shows that the health condition of an aircraft engine can be evaluated by using confidence interval prediction of the ANN-estimated LPT vibration parameter, without dismantling the engine or requiring expert knowledge.

Practical implications

With this study, it has been shown that faults that may occur during flight can be easily detected using the data of a flight without expert evaluation.

Originality/value

The health condition of the turbofan engine was evaluated using the confidence interval prediction of ANN-estimated LPT vibration parameters.

Details

Aircraft Engineering and Aerospace Technology, vol. 92 no. 2
Type: Research Article
ISSN: 1748-8842

Keywords

Article
Publication date: 19 June 2009

Clara M. Novoa and Francis Mendez


Abstract

Purpose

The purpose of this paper is to present bootstrapping as an alternative statistical methodology to analyze time studies and input data for discrete-event simulations. Bootstrapping is a non-parametric technique to estimate the sampling distribution of a statistic by doing repeated sampling (i.e. resampling) with replacement from an original sample. This paper proposes a relatively simple application of bootstrap techniques to time study analysis.
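As a minimal sketch of the non-parametric bootstrap described here (not the paper's spreadsheet procedures), the example below resamples an illustrative set of observed task times with replacement and forms a percentile confidence interval for the mean.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative time-study sample: observed task times in seconds
sample = np.array([31.2, 29.8, 35.4, 30.1, 33.7, 28.9, 32.5, 34.1, 30.8, 31.9])

# Non-parametric bootstrap: resample with replacement, recompute the statistic
B = 5_000
boot_means = np.array([rng.choice(sample, size=sample.size, replace=True).mean()
                       for _ in range(B)])

# Percentile bootstrap 95% confidence interval for the mean task time
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {sample.mean():.2f}s, 95% bootstrap CI = [{lo:.2f}, {hi:.2f}]")
```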

Design/methodology/approach

Using an inductive approach, this work selects a typical situation to conduct a time study, applies two bootstrap procedures for the statistical analysis, compares bootstrap to traditional parametric approaches, and extrapolates general advantages of bootstrapping over parametric approaches.

Findings

Bootstrap produces accurate inferences when compared to those from parametric methods, and it is an alternative when the underlying parametric assumptions are not met.

Research limitations/implications

Research results contribute to work measurement and simulation fields since bootstrap promises an increase in accuracy in cases where the normality assumption is violated or only small samples are available. Furthermore, this paper shows that electronic spreadsheets are appropriate tools to implement the proposed bootstrap procedures.

Originality/value

In previous work, the standard procedure for analyzing time studies and input data for simulations has been a parametric approach. Bootstrapping makes it possible to obtain both point estimates and estimates of time distributions. Engineers and managers involved in process improvement initiatives could use bootstrapping to better exploit the information from available samples.

Details

International Journal of Productivity and Performance Management, vol. 58 no. 5
Type: Research Article
ISSN: 1741-0401

Keywords

Open Access
Article
Publication date: 8 August 2023

Elisa Verna, Gianfranco Genta and Maurizio Galetto


Abstract

Purpose

The purpose of this paper is to investigate and quantify the impact of product complexity, including architectural complexity, on operator learning, productivity and quality performance in both assembly and disassembly operations. This topic has not been extensively investigated in previous research.

Design/methodology/approach

An extensive experimental campaign was conducted in which 84 operators repeatedly assembled and disassembled six different products of varying complexity, in order to construct productivity and quality learning curves. Data from the experiment were analysed using statistical methods.
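The abstract does not state which learning curve model was fitted. Purely as an illustration of constructing a productivity learning curve, the sketch below fits the classical Wright power-law model t_n = t_1 * n^b, whose exponent b plays the role of a learning factor, to hypothetical assembly times.

```python
import numpy as np

# Hypothetical assembly times (minutes) for repetitions 1..8 of one product
reps  = np.arange(1, 9)
times = np.array([12.0, 10.3, 9.4, 8.9, 8.5, 8.2, 8.0, 7.8])

# Wright's model t_n = t_1 * n**b becomes linear in log space:
# log t_n = log t_1 + b * log n, so b (the learning factor) is the slope.
b, log_t1 = np.polyfit(np.log(reps), np.log(times), 1)
print(f"learning factor b = {b:.3f}, learning rate = {2**b:.2%}")
```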

Findings

The human learning factor of productivity increases superlinearly with the increasing architectural complexity of products, i.e. from centralised to distributed architectures, both in assembly and disassembly, regardless of the level of overall product complexity. On the other hand, the human learning factor of quality performance decreases superlinearly as the architectural complexity of products increases. The intrinsic characteristics of product architecture are the reasons for this difference in learning factor.

Practical implications

The results of the study suggest that considering product complexity, particularly architectural complexity, in the design and planning of manufacturing processes can optimise operator learning, productivity and quality performance, and inform decisions about improving manufacturing operations.

Originality/value

While previous research has focussed on the effects of complexity on process time and defect generation, this study is amongst the first to investigate and quantify the effects of product complexity, including architectural complexity, on operator learning using an extensive experimental campaign.

Details

Journal of Manufacturing Technology Management, vol. 34 no. 9
Type: Research Article
ISSN: 1741-038X

Keywords

Open Access
Article
Publication date: 3 February 2021

Geoff A.M. Loveman and Joel J.E. Edney


Abstract

Purpose

The purpose of the present study was the development of a methodology for translating predicted rates of decompression sickness (DCS), following tower escape from a sunken submarine, into predicted probability of survival, a more useful statistic for making operational decisions.

Design/methodology/approach

Predictions were made, using existing models, for the probabilities of a range of DCS symptoms following submarine tower escape. Subject matter expert estimates of the effect of these symptoms on a submariner’s ability to survive in benign weather conditions on the sea surface until rescued were combined with the likelihoods of the different symptoms occurring using standard probability theory. Plots were generated showing the dependence of predicted probability of survival following escape on the escape depth and the pressure within the stricken submarine.
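The study's actual symptom categories, probabilities and expert estimates are not given in the abstract. The sketch below only illustrates the combination step with the law of total probability, treating the DCS outcomes as mutually exclusive; all numbers are invented.

```python
# Illustrative, not the study's figures: probability of each DCS outcome after
# escape, and expert-estimated probability of surviving given that outcome.
p_symptom = {"none": 0.90, "mild_joint_pain": 0.06, "neurological": 0.03, "severe": 0.01}
p_survive_given = {"none": 0.99, "mild_joint_pain": 0.95, "neurological": 0.70, "severe": 0.30}

# Law of total probability: P(survive) = sum_k P(survive | outcome k) * P(outcome k)
p_survive = sum(p_symptom[k] * p_survive_given[k] for k in p_symptom)
print(f"predicted probability of survival: {p_survive:.3f}")
```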

Findings

Current advice on whether to attempt tower escape is based on avoiding rates of DCS above approximately 5%–10%. Consideration of predicted survival rates, based on subject matter expert opinion, suggests that the current advice might be considered conservative in the distressed submarine scenario, as DCS rates of 10% are not anticipated to markedly affect survival rates.

Originality/value

To the authors’ knowledge, this study represents the first attempt to quantify the effect of different DCS symptoms on the probability of survival in submarine tower escape.

Details

Journal of Defense Analytics and Logistics, vol. 5 no. 1
Type: Research Article
ISSN: 2399-6439

Keywords

Article
Publication date: 16 April 2018

Qi Zhou, Xinyu Shao, Ping Jiang, Tingli Xie, Jiexiang Hu, Leshi Shu, Longchao Cao and Zhongmei Gao


Abstract

Purpose

Engineering system design and optimization problems are usually multi-objective and constrained and have uncertainties in the inputs. These uncertainties might significantly degrade the overall performance of engineering systems and change the feasibility of the obtained solutions. This paper aims to propose a multi-objective robust optimization approach based on Kriging metamodel (K-MORO) to obtain the robust Pareto set under the interval uncertainty.

Design/methodology/approach

In K-MORO, the nested optimization structure is reduced to a single-loop optimization structure to ease the computational burden. Considering that the interpolation uncertainty of the Kriging metamodel may affect the robustness of the Pareto optima, an objective switching and sequential updating strategy is introduced in K-MORO to determine (1) whether the robust analysis or the Kriging metamodel should be used to evaluate the robustness of design alternatives, and (2) which design alternatives should be selected to improve the prediction accuracy of the Kriging metamodel during the robust optimization process.
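K-MORO itself is multi-objective and uses an objective switching and sequential updating strategy that is not reproduced here. As a much-reduced, single-objective illustration of the underlying idea, the sketch below fits a Kriging (Gaussian process) surrogate with scikit-learn and scores each nominal design by its worst predicted objective over an input interval x ± delta; the objective function, bounds and sample sizes are all assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def f(x):                       # stand-in for an expensive objective
    return np.sin(3 * x) + 0.5 * x

# Fit a Kriging (GP) metamodel from a small sample of evaluations
X_train = rng.uniform(0, 3, 12).reshape(-1, 1)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
gp.fit(X_train, f(X_train).ravel())

# Robust evaluation under interval uncertainty x +/- delta: score each nominal
# design by the worst (largest) predicted objective anywhere in its interval,
# then pick the design with the best worst case.
delta, candidates = 0.2, np.linspace(0.2, 2.8, 200)
worst = np.array([gp.predict(np.linspace(x - delta, x + delta, 21).reshape(-1, 1)).max()
                  for x in candidates])
x_robust = candidates[np.argmin(worst)]
print(f"robust design x = {x_robust:.3f}, worst-case objective = {worst.min():.3f}")
```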

Findings

Five numerical and engineering cases are used to demonstrate the applicability of the proposed approach. The results illustrate that K-MORO is able to obtain a robust Pareto frontier while significantly reducing the computational cost.

Practical implications

The proposed approach exhibits great capability for practical engineering design optimization problems that are multi-objective and constrained and have uncertainties.

Originality/value

A K-MORO approach is proposed, which can obtain the robust Pareto set under the interval uncertainty and ease the computational burden of the robust optimization process.

Details

Engineering Computations, vol. 35 no. 2
Type: Research Article
ISSN: 0264-4401

Keywords

Open Access
Article
Publication date: 8 December 2022

James Christopher Westland



Abstract

Purpose

This paper tests whether Bayesian A/B testing yields better decisions than traditional Neyman-Pearson hypothesis testing. It proposes a model and tests it using a large, multiyear Google Analytics (GA) dataset.

Design/methodology/approach

This paper is an empirical study. Competing A/B testing models were used to analyze a large, multiyear GA dataset for a firm that relies entirely on its website and online transactions for customer engagement and sales.

Findings

Bayesian A/B tests of the data not only yielded a clear delineation of the timing and impact of the intellectual property fraud, but also calculated the loss of sales dollars, traffic and time on the firm’s website, with precise confidence limits. Frequentist A/B testing identified fraud in bounce rate at 5% significance and in bounces at 10% significance, but was unable to ascertain fraud at the standard significance cutoffs for scientific studies.
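The paper's model and its GA dataset are not reproduced here. The sketch below is a generic Beta-Binomial Bayesian A/B test (probability that variant B's conversion rate exceeds A's, plus expected lift) alongside a frequentist two-proportion z-test for comparison; the visitor and conversion counts are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Illustrative counts (not the paper's GA data): visitors and conversions
n_a, conv_a = 10_000, 410
n_b, conv_b = 10_000, 465

# Bayesian A/B test: Beta(1, 1) priors, Beta posteriors, Monte Carlo comparison
post_a = rng.beta(1 + conv_a, 1 + n_a - conv_a, 100_000)
post_b = rng.beta(1 + conv_b, 1 + n_b - conv_b, 100_000)
print(f"P(rate_B > rate_A) = {np.mean(post_b > post_a):.3f}")
print(f"expected lift      = {np.mean(post_b - post_a):.4f}")

# Frequentist counterpart: two-proportion z-test p-value
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (conv_b / n_b - conv_a / n_a) / se
print(f"z = {z:.2f}, two-sided p = {2 * (1 - stats.norm.cdf(abs(z))):.4f}")
```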

Research limitations/implications

None within the scope of the research plan.

Practical implications

Bayesian A/B tests of the data not only yielded a clear delineation of the timing and impact of the IP fraud, but also calculated the loss of sales dollars, traffic and time on the firm’s website, with precise confidence limits.

Social implications

Bayesian A/B testing can derive economically meaningful statistics, whereas frequentist A/B testing only provides p-values, whose meaning may be hard to grasp and whose misuse is widespread and has been a major topic in metascience. While misuse of p-values in scholarly articles may simply be grist for academic debate, the uncertainty surrounding the meaning of p-values in business analytics can actually cost firms money.

Originality/value

There is very little empirical research in e-commerce that uses Bayesian A/B testing. Almost all corporate testing is done via frequentist Neyman-Pearson methods.

Details

Journal of Electronic Business & Digital Economics, vol. 1 no. 1/2
Type: Research Article
ISSN: 2754-4214

Keywords


Details

Financial Modeling for Decision Making: Using MS-Excel in Accounting and Finance
Type: Book
ISBN: 978-1-78973-414-0

1 – 10 of over 7000