Search results

1 – 10 of over 7000
Article
Publication date: 24 July 2007

Dja‐Shin Wang, Tong‐Yuan Koo and Chao‐Yu Chou


Abstract

Purpose

The present paper aims to present the results of a simulation study on the behavior of four 95 percent bootstrap confidence intervals for estimating Cpk when the collected data come from a multiple streams process.

Design/methodology/approach

A computer simulation study is developed to examine the behavior of four 95 percent bootstrap confidence intervals, i.e. the standard bootstrap (SB), percentile bootstrap (PB), bias-corrected percentile bootstrap (BCPB), and bias-corrected and accelerated (BCa) bootstrap, for estimating the capability index Cpk of a multiple streams process. An analysis of variance using two factorial and three-stage nested designs is applied for experimental planning and data analysis.
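
Three of the four intervals named above (SB, PB and BCPB) can be sketched in a few lines of Python; the BCa interval is omitted here because it additionally requires a jackknife-based acceleration constant. This is an illustrative sketch only: the spec limits, sample and resample count are assumptions, not values from the paper.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(42)
ndist = NormalDist()

def cpk(x, usl=3.0, lsl=-3.0):
    """Plug-in estimator Cpk = min(USL - mu, mu - LSL) / (3 * sigma)."""
    mu, s = x.mean(), x.std(ddof=1)
    return min(usl - mu, mu - lsl) / (3 * s)

def bootstrap_cis(x, stat, b=2000, alpha=0.05):
    theta = stat(x)
    boot = np.array([stat(rng.choice(x, size=x.size, replace=True))
                     for _ in range(b)])
    z = ndist.inv_cdf(1 - alpha / 2)
    se = boot.std(ddof=1)
    sb = (theta - z * se, theta + z * se)                       # standard bootstrap
    pb = tuple(np.quantile(boot, [alpha / 2, 1 - alpha / 2]))  # percentile
    # bias-corrected percentile: shift the quantile levels by z0
    p0 = min(max((boot < theta).mean(), 1 / b), 1 - 1 / b)
    z0 = ndist.inv_cdf(p0)
    lo = ndist.cdf(2 * z0 + ndist.inv_cdf(alpha / 2))
    hi = ndist.cdf(2 * z0 + ndist.inv_cdf(1 - alpha / 2))
    bcpb = tuple(np.quantile(boot, [lo, hi]))
    return {"SB": sb, "PB": pb, "BCPB": bcpb}

sample = rng.normal(0.2, 0.9, size=50)   # one stream's measurements (assumed)
cis = bootstrap_cis(sample, cpk)
```

In a multiple-stream setting, the same resampling would be applied per stream before combining into the overall index.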

Findings

For multiple process streams, the relationship between the true value of Cpk and the sample size required for an effective experiment is presented. Based on the simulation study, the two-stream process always gives a higher coverage percentage of the bootstrap confidence intervals than the four-stream process. Meanwhile, the BCPB and BCa intervals lead to better coverage percentages than the SB and PB intervals.

Practical implications

Since a large number of process streams decreases the coverage percentage of the bootstrap confidence interval, it may be inappropriate to use the bootstrap method for constructing the confidence interval of a process capability index when the number of process streams is large.

Originality/value

The present paper is the first work to explore the behavior of bootstrap confidence intervals for estimating the capability index Cpk of a multiple streams process. It is concluded that the number of process streams clearly affects the performance of the bootstrap methods.

Details

Engineering Computations, vol. 24 no. 5
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 14 November 2008

Jau‐Chuan Ke, Yunn‐Kuang Chu and Jia‐Huei Lee


Abstract

Purpose

In order to develop a feasible and efficient method for obtaining the long-run availability of a parallel system with distribution-free up and down times, the purpose of this paper is to perform simulation comparisons of interval estimates of system availability using four bootstrapping methods.

Design/methodology/approach

Four bootstrap methods are used: the standard bootstrap (SB) confidence interval, percentile bootstrap (PB) confidence interval, bias-corrected percentile bootstrap (BCPB) confidence interval, and bias-corrected and accelerated (BCa) confidence interval. A numerical simulation study is carried out to demonstrate the performance of these proposed bootstrap confidence intervals. In particular, the accuracy of the four bootstrap confidence intervals is investigated by calculating the coverage percentage, the average length, and the relative coverage of the confidence intervals.
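
The three evaluation criteria named above can be sketched as follows. Note one assumption: "relative coverage" is taken here to mean coverage percentage divided by average length; the paper's exact definition may differ.

```python
import numpy as np

rng = np.random.default_rng(7)

def interval_metrics(true_value, intervals):
    """Coverage percentage, average length, and relative coverage
    (assumed here to be coverage divided by average length)."""
    arr = np.asarray(intervals, dtype=float)
    lo, hi = arr[:, 0], arr[:, 1]
    coverage = float(np.mean((lo <= true_value) & (true_value <= hi)))
    avg_length = float(np.mean(hi - lo))
    return coverage, avg_length, coverage / avg_length

# Toy check: normal-theory 95% intervals for a mean of 0 should cover ~95%.
n, reps = 30, 500
samples = rng.normal(0.0, 1.0, size=(reps, n))
se = samples.std(axis=1, ddof=1) / np.sqrt(n)
ivs = np.column_stack([samples.mean(axis=1) - 1.96 * se,
                       samples.mean(axis=1) + 1.96 * se])
cov, length, rel = interval_metrics(0.0, ivs)
```

The same function would be applied to the intervals produced by each of the four bootstrap methods to rank them.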

Findings

Among the four bootstrap confidence intervals, the PB method has the largest relative coverage in most situations. That is, the PB method is the best choice for practitioners who want an efficient interval estimate of availability.

Originality/value

This is the first time that relative coverage has been introduced to evaluate the performance of an estimation method; it is more efficient than the existing measures.

Details

Engineering Computations, vol. 25 no. 8
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 26 June 2020

Tadashi Dohi, Hiroyuki Okamura and Cun Hua Qian


Abstract

Purpose

In this paper, the authors propose two construction methods to estimate confidence intervals of the time-based optimal software rejuvenation policy and its associated maximum system availability via a parametric bootstrap method. Through simulation experiments the authors investigate their asymptotic behaviors and statistical properties.

Design/methodology/approach

The present paper is the first attempt to derive confidence intervals for the optimal software rejuvenation schedule that maximizes the long-run system availability. In other words, the authors address statistical software fault management by employing ideas from process control in quality engineering together with a parametric bootstrap.

Findings

In a marked departure from existing work, the authors carefully account for a special case in which the two-sided confidence interval of the optimal software rejuvenation time does not exist, due to the fact that the estimator distribution of the optimal software rejuvenation time is defective. Here the authors propose two useful construction methods for the two-sided confidence interval: a conditional confidence interval and a heuristic confidence interval.

Research limitations/implications

Although the authors applied a simulation-based bootstrap confidence method in this paper, other resampling-based approaches can also be applied to the same problem. In addition, the authors focused on a parametric bootstrap, but a non-parametric bootstrap method can also be applied to the confidence interval estimation of the optimal software rejuvenation time when complete knowledge of the distribution form is not available.

Practical implications

The statistical software fault management techniques proposed in this paper are useful to control the system availability of operational software systems, by means of the control chart.

Social implications

Through online monitoring of operational software systems, it would be possible to estimate the optimal software rejuvenation time and its associated system availability without applying any approximation. By implementing this function in an application programming interface (API), it is possible to realize low-cost fault tolerance for software systems subject to aging.

Originality/value

In the past literature, almost all authors employed parametric and non-parametric inference techniques to estimate the optimal software rejuvenation time but focused only on point estimation. This may often lead to misjudgment based on overestimation or underestimation under uncertainty. The authors overcome the problem by introducing a two-sided confidence interval approach.

Details

International Journal of Quality & Reliability Management, vol. 37 no. 6/7
Type: Research Article
ISSN: 0265-671X

Keywords

Article
Publication date: 19 June 2009

Clara M. Novoa and Francis Mendez


Abstract

Purpose

The purpose of this paper is to present bootstrapping as an alternative statistical methodology for analyzing time studies and input data for discrete-event simulations. Bootstrapping is a non-parametric technique for estimating the sampling distribution of a statistic by doing repeated sampling (i.e. resampling) with replacement from an original sample. This paper proposes a relatively simple implementation of bootstrap techniques for time study analysis.

Design/methodology/approach

Using an inductive approach, this work selects a typical situation to conduct a time study, applies two bootstrap procedures for the statistical analysis, compares bootstrap to traditional parametric approaches, and extrapolates general advantages of bootstrapping over parametric approaches.
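
The comparison described above can be sketched in a few lines: a percentile-bootstrap interval for the mean task time placed next to the usual normal-theory interval. The skewed task times below are made up for illustration, not taken from the paper's time study.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)
times = rng.lognormal(mean=2.5, sigma=0.4, size=25)   # skewed cycle times, seconds (assumed)

# Parametric (normal-theory) 95% interval for the mean
z = NormalDist().inv_cdf(0.975)
se = times.std(ddof=1) / np.sqrt(times.size)
parametric = (times.mean() - z * se, times.mean() + z * se)

# Percentile-bootstrap 95% interval: resample, recompute the mean, take quantiles
boot_means = np.array([rng.choice(times, times.size, replace=True).mean()
                       for _ in range(4000)])
bootstrap = tuple(np.quantile(boot_means, [0.025, 0.975]))
```

With skewed data the bootstrap interval need not be symmetric about the sample mean, which is one of the advantages the paper extrapolates; each step here also maps naturally onto spreadsheet formulas.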

Findings

Bootstrap produces accurate inferences when compared to those from parametric methods, and it is an alternative when the underlying parametric assumptions are not met.

Research limitations/implications

Research results contribute to work measurement and simulation fields since bootstrap promises an increase in accuracy in cases where the normality assumption is violated or only small samples are available. Furthermore, this paper shows that electronic spreadsheets are appropriate tools to implement the proposed bootstrap procedures.

Originality/value

In previous work, the standard procedure for analyzing time studies and input data for simulations has been a parametric approach. Bootstrap makes it possible to obtain both point estimates and estimates of time distributions. Engineers and managers involved in process improvement initiatives could use bootstrap to better exploit the information in available samples.

Details

International Journal of Productivity and Performance Management, vol. 58 no. 5
Type: Research Article
ISSN: 1741-0401

Keywords

Article
Publication date: 14 October 2020

Haiyan Ge, Xintian Liu, Yu Fang, Haijie Wang, Xu Wang and Minghui Zhang


Abstract

Purpose

The purpose of this paper is to introduce error ellipse into the bootstrap method to improve the reliability of small samples and the credibility of the S-N curve.

Design/methodology/approach

Based on the bootstrap method and the reliability of the original samples, two error ellipse models are proposed. The error ellipse model reasonably predicts that the dispersion of the expanded virtual samples obeys a two-dimensional normal distribution.
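
The expansion step can be illustrated generically: drawing virtual points from a two-dimensional normal distribution around each observed (stress, log-life) point. This is only a sketch of the idea; the data, the per-point covariance and the number of virtual samples are all assumptions, and the paper's specific error ellipse construction is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(11)
# Assumed small sample: (stress amplitude, log10 cycles-to-failure)
observed = np.array([[300.0, 5.1], [320.0, 4.8], [350.0, 4.4], [380.0, 4.0]])
cov = np.array([[25.0, 0.0], [0.0, 0.01]])   # assumed per-point scatter

def expand(points, cov, k=20):
    """Draw k virtual samples from N(point, cov) around each observed point."""
    return np.vstack([rng.multivariate_normal(p, cov, size=k) for p in points])

virtual = expand(observed, cov)   # 4 points expanded to 80 virtual samples
```

The expanded set would then feed an S-N curve fit, where the enlarged sample shortens the resulting confidence intervals.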

Findings

By comparing the parameters obtained by the bootstrap method, the improved bootstrap method (normal distribution) and the error ellipse methods, it is found that the error ellipse method expands the sampling range and shortens the confidence interval, which improves the accuracy of parameter estimation with small samples. A case analysis shows that the tangent error ellipse method is feasible and that the series of S-N curves obtained by it is reasonable.

Originality/value

The error ellipse methods lay a technical foundation for product life prediction and are of practical significance for the quality evaluation of products.

Details

Engineering Computations, vol. 38 no. 1
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 3 October 2016

Santiago Gamba-Santamaria, Oscar Fernando Jaulin-Mendez, Luis Fernando Melo-Velandia and Carlos Andrés Quicazán-Moreno


Abstract

Purpose

Value at risk (VaR) is a market risk measure widely used by risk managers and market regulatory authorities, and various methods are proposed in the literature for its estimation. However, few studies discuss its distribution or its confidence intervals. The purpose of this paper is to compare different techniques for computing such intervals in order to identify the scenarios under which they perform properly.

Design/methodology/approach

The methods that are included in the comparison are based on asymptotic normality, extreme value theory and subsample bootstrap. The evaluation is done by computing the coverage rates for each method through Monte Carlo simulations under certain scenarios. The scenarios consider different persistence degrees in mean and variance, sample sizes, VaR probability levels, confidence levels of the intervals and distributions of the standardized errors. Additionally, an empirical application for the stock market index returns of G7 countries is presented.
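
The coverage-rate evaluation can be sketched for one simple scenario (i.i.d. standard normal returns, i.e. no persistence in mean or variance). The sketch below uses a percentile-bootstrap interval for VaR(99 per cent) rather than the paper's three methods, and all sample sizes and replication counts are illustrative assumptions.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(5)
p = 0.01                                  # tail probability for VaR(99%)
true_var = NormalDist().inv_cdf(p)        # true 1% quantile of N(0, 1)

def var_ci(returns, b=200, alpha=0.05):
    """Percentile-bootstrap interval for the p-quantile (VaR) of the returns."""
    boot = np.array([np.quantile(rng.choice(returns, returns.size, replace=True), p)
                     for _ in range(b)])
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

hits, reps = 0, 100
for _ in range(reps):
    r = rng.normal(size=500)              # one simulated return series
    lo, hi = var_ci(r)
    hits += lo <= true_var <= hi
coverage = hits / reps                    # Monte Carlo coverage rate
```

Repeating this over the scenarios described (persistence degrees, sample sizes, VaR levels, error distributions) yields the coverage tables the paper analyzes.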

Findings

The simulation exercises show that the methods considered in the study are only valid for high quantiles. In particular, in terms of coverage rates, they perform well for VaR(99 per cent) but poorly for VaR(95 per cent) and VaR(90 per cent). The results are confirmed by an empirical application to the stock market index returns of G7 countries.

Practical implications

The findings of the study suggest that the methods considered for estimating the VaR confidence interval are appropriate for high quantiles such as VaR(99 per cent). However, using these methods for lower quantiles, such as VaR(95 per cent) and VaR(90 per cent), is not recommended.

Originality/value

This study is, as far as is known, the first to identify the scenarios under which the methods for estimating VaR confidence intervals perform properly. The findings are supported by simulation and empirical exercises.

Details

Studies in Economics and Finance, vol. 33 no. 4
Type: Research Article
ISSN: 1086-7376

Keywords

Article
Publication date: 18 February 2021

Wenguang Yang, Lianhai Lin and Hongkui Gao


Abstract

Purpose

To solve the problem of simulation evaluation with small samples, a fresh grey estimation approach is presented based on classical statistical theory and grey system theory. The purpose of this paper is to make full use of differences in the data distribution and to avoid marginal data being ignored.

Design/methodology/approach

Based upon the grey distribution characteristics of small-sample data, a new concept, the grey relational similarity measure, is defined. At the same time, the concept of sample weight is proposed according to the grey relational similarity measure. Based on the new definition of grey weight, grey point estimation and the grey confidence interval are studied. An improved bootstrap resampling scheme, designed around uniform distribution and randomness, then serves as an important supplement to the grey estimation. In addition, the accuracy of grey bilateral and unilateral confidence intervals is examined using the new grey relational similarity measure approach.

Findings

The new small-sample evaluation method can realize the effective expansion and enrichment of data and avoid excessive concentration of the data. The method is an organic fusion of grey estimation and the improved bootstrap method. Several examples demonstrate the feasibility and validity of the proposed methods in assessing the credibility of simulation data, with no need to know the probability distribution of the small samples.

Originality/value

This research combines grey estimation with an improved bootstrap, which makes more reasonable use of the value of different data than the unimproved method.

Details

Grey Systems: Theory and Application, vol. 12 no. 2
Type: Research Article
ISSN: 2043-9377

Keywords

Article
Publication date: 29 June 2010

Jeh‐Nan Pan, Tzu‐Chun Kuo and Abraham Bretholt


Abstract

Purpose

The purpose of this research is to develop a new key performance index (KPI) and its interval estimation for measuring service quality from customers' perceptions, since most service quality data follow a non-normal distribution.

Design/methodology/approach

Based on the non‐normal process capability indices used in manufacturing industries, a new KPI suitable for measuring service quality is developed using Parasuraman's 5th Gap between customers' expectation and perception. Moreover, the confidence interval of the proposed KPI is established using the bootstrapping method.
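
The bootstrapping step above can be sketched generically: resample the Gap-5 scores (perception minus expectation) and take percentile limits for the chosen index. Both the survey data and the stand-in index below are illustrative assumptions; the paper's actual KPI is a non-normal capability-style index whose exact form is not given in this abstract.

```python
import numpy as np

rng = np.random.default_rng(9)
# Assumed 7-point SERVQUAL responses for one dimension
perception = rng.integers(3, 8, size=120).astype(float)
expectation = rng.integers(4, 8, size=120).astype(float)
gap = perception - expectation           # Parasuraman's Gap 5 scores

def kpi(g):
    """Hypothetical stand-in index: average gap relative to its spread."""
    return g.mean() / (3 * g.std(ddof=1))

# Percentile-bootstrap 95% confidence interval for the index
boot = np.array([kpi(rng.choice(gap, gap.size, replace=True))
                 for _ in range(2000)])
ci = tuple(np.quantile(boot, [0.025, 0.975]))
```

Computing such an interval per SERVQUAL dimension is what lets managers rank the five dimensions and prioritize improvement projects.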

Findings

The quantitative method for measuring the service quality through the new KPI and its interval estimation is illustrated by a realistic example. The results show that the new KPI allows practising managers to evaluate the actual service quality level delivered within each of five SERVQUAL categories and prioritize the possible improvement projects from customers' perspectives. Moreover, compared with the traditional method of sample size determination, a substantial amount of cost savings can be expected by using the suggested sample sizes.

Practical implications

The paper presents a structured approach of opportunity assessment for improving service quality from a strategic alignment perspective, particularly in the five dimensions: tangibles, reliability, responsiveness, assurance, and empathy. The new approach provides practising managers with a decision‐making tool for measuring service quality, detecting problematic situations and selecting the most urgent improvement project. Once the existing service problems are identified and improvement projects are prioritized, it can lead to the direction of continuous improvement for any service industry.

Originality/value

Given a managerial target on any desired service level as well as customers' perceptions and expectations, the new KPI could be applied to any non‐normal service quality and other survey data. Thus, the corporate performance in terms of key factors of business success can also be measured by the new KPI, which may lead to managing complexities and enhancing sustainability in service industries.

Details

Industrial Management & Data Systems, vol. 110 no. 6
Type: Research Article
ISSN: 0263-5577

Keywords

Article
Publication date: 20 September 2021

Marwa Kh. Hassan


Abstract

Purpose

The purpose of this study is to obtain the modified maximum likelihood estimator of the stress–strength model using ranked set sampling, to obtain the asymptotic and bootstrap confidence intervals of P[Y < X], to compare the performance of the author's estimates with the estimates under simple random sampling, and to apply the estimates to head and neck cancer data.

Design/methodology/approach

The maximum likelihood estimator of R = P[Y < X], where X and Y are two independent inverse Weibull random variables with a common shape parameter, which affects the shape of the distribution, and different scale parameters, which affect its dispersion, is given under ranked set sampling. Together with the asymptotic and bootstrap confidence intervals, a Monte Carlo simulation shows that this estimator performs better than the estimator under simple random sampling. Likewise, the asymptotic and bootstrap confidence intervals under ranked set sampling are better than the corresponding interval estimators under simple random sampling. The application to head and neck cancer data shows that, according to the estimator of R = P[Y < X], treatment with radiotherapy is more efficient than treatment with combined radiotherapy and chemotherapy, with the estimators under ranked set sampling again outperforming those under simple random sampling.
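
The bootstrap step for R = P[Y < X] can be illustrated in simplified form. The paper fits inverse Weibull models under ranked set sampling; the sketch below instead uses a nonparametric plug-in estimate under plain random sampling, with made-up Weibull samples, purely to show the resampling mechanics.

```python
import numpy as np

rng = np.random.default_rng(13)
x = rng.weibull(2.0, size=60) * 1.5    # "strength" sample (assumed)
y = rng.weibull(2.0, size=60)          # "stress" sample (assumed)

def r_hat(x, y):
    """Plug-in estimate of P[Y < X]: fraction of (x_i, y_j) pairs with y_j < x_i."""
    return float((y[None, :] < x[:, None]).mean())

# Percentile-bootstrap 95% interval: resample both samples independently
boot = np.array([r_hat(rng.choice(x, x.size, replace=True),
                       rng.choice(y, y.size, replace=True))
                 for _ in range(2000)])
ci = tuple(np.quantile(boot, [0.025, 0.975]))
```

Under ranked set sampling the resampling would be done within ranking strata, which is where the efficiency gain reported above comes from.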

Findings

Ranked set sampling is more effective than simple random sampling for inference on the stress–strength model based on the inverse Weibull distribution.

Originality/value

This study sheds light on the author's estimates as applied to head and neck cancer data.

Details

International Journal of Quality & Reliability Management, vol. 39 no. 7
Type: Research Article
ISSN: 0265-671X

Keywords

Article
Publication date: 5 February 2018

Haoliang Wang, Xiwang Dong, Qingdong Li and Zhang Ren


Abstract

Purpose

By using small reference samples, a calculation method for the confidence value and a prediction method for the confidence interval of a multi-input system are investigated. The purpose of this paper is to offer effective methods for assessing the confidence value and confidence interval of the simulation models used in establishing guidance and control systems.

Design/methodology/approach

In this paper, first, an improved cluster estimation method is proposed to guide the selection of the small reference samples. Then, based on the analytic hierarchy process method, a new calculation method for the weight of each reference sample is derived. Using grey relational analysis, new calculation methods for the correlation coefficient and the confidence value are presented. Moreover, the confidence interval of the sample awaiting assessment is defined, and a new prediction method is derived to obtain the confidence interval of a sample awaiting assessment that has no reference sample. Subsequently, by combining the prediction method with the original small reference samples, a bootstrap resampling method is used to obtain more correlation coefficients for the sample, reducing the probability of mistakenly rejecting a valid model.
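
The grey relational analysis step can be sketched with Deng's classic relational coefficient (resolution coefficient rho = 0.5), from which a relational grade is taken as the mean of the pointwise coefficients. The sequences below are made-up numbers, and the paper's sample-weighting and AHP refinements are omitted; this is only the textbook form of the calculation.

```python
import numpy as np

def grey_relational_grade(reference, comparison, rho=0.5):
    """Deng's grey relational grade between a reference and a comparison sequence."""
    ref = np.asarray(reference, dtype=float)
    cmp_ = np.asarray(comparison, dtype=float)
    delta = np.abs(ref - cmp_)
    d_min, d_max = delta.min(), delta.max()
    if d_max == 0.0:                 # identical sequences relate perfectly
        return 1.0
    coeff = (d_min + rho * d_max) / (delta + rho * d_max)
    return float(coeff.mean())

ref = [0.9, 0.8, 1.0, 0.7]           # reference sample (assumed)
grade = grey_relational_grade(ref, [0.85, 0.82, 0.95, 0.72])
```

In practice the sequences are usually normalized first; the grade then serves as the correlation coefficient that the bootstrap resampling multiplies into many replicates.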

Findings

Grey relational analysis is used to assess the confidence value and to predict the confidence interval. Numerical simulations demonstrate the effectiveness of the theoretical results.

Originality/value

Based on the selected small reference samples, new calculation methods for the correlation coefficient and the confidence value are presented to assess the confidence value of the model awaiting assessment. Calculation methods for the maximum confidence interval, the expected confidence interval and other required confidence intervals are presented, which can be used in assessing the validity of the controller and guidance system obtained from the model awaiting assessment.

Details

Grey Systems: Theory and Application, vol. 8 no. 1
Type: Research Article
ISSN: 2043-9377

Keywords
