Search results

1 – 10 of over 17000
Article
Publication date: 2 March 2015

Zafar Iqbal, Nigel Peter Grigg, K. Govindaraju and Nicola Marie Campbell-Allen

Abstract

Purpose

Quality function deployment (QFD) is a planning methodology for improving products, services and their associated processes by ensuring that the voice of the customer is effectively deployed through specified and prioritised technical attributes (TAs). The purpose of this paper is to enhance the prioritisation of TAs in two ways: through a computer simulation significance test and through a computer simulation confidence interval. Both are based on permutation sampling, bootstrap sampling and parametric bootstrap sampling of given empirical data.

Design/methodology/approach

The authors present a theoretical case for the use of permutation sampling, bootstrap sampling and parametric bootstrap sampling. Using a published case study, the authors demonstrate how these can be applied to given empirical data to generate a theoretical population. From this, the authors describe a procedure for deciding which TAs have significantly different priority, and for estimating confidence intervals from the simulated theoretical populations.
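As a rough illustration of these three resampling schemes (a sketch only: the customer ratings below are synthetic, not the case-study data, and the assumption that a TA's final weight is its mean importance rating is made purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic customer importance ratings (rows: respondents, cols: TAs);
# illustrative assumption: FW of a TA = mean rating across respondents.
ratings = rng.integers(1, 6, size=(30, 4)).astype(float)
reps = 2000

def mean_fw(x):
    return x.mean(axis=0)

# Bootstrap sampling: resample respondents WITH replacement.
idx = rng.integers(0, len(ratings), size=(reps, len(ratings)))
boot = np.array([mean_fw(ratings[i]) for i in idx])

# Parametric bootstrap: fit a normal model per TA, then sample from the model.
mu, sd = ratings.mean(axis=0), ratings.std(axis=0, ddof=1)
para = np.array([mean_fw(rng.normal(mu, sd, size=ratings.shape)) for _ in range(reps)])

# Permutation sampling: shuffle each respondent's ratings across TAs; under the
# null "no TA differs", the observed FW gap between two TAs is compared with
# its permutation distribution, giving a significance test.
obs_gap = mean_fw(ratings)[0] - mean_fw(ratings)[1]
perm_gaps = np.empty(reps)
for r in range(reps):
    shuffled = np.apply_along_axis(rng.permutation, 1, ratings)
    fw = mean_fw(shuffled)
    perm_gaps[r] = fw[0] - fw[1]
p_value = np.mean(np.abs(perm_gaps) >= abs(obs_gap))

print("bootstrap 95% CI per TA:", np.percentile(boot, [2.5, 97.5], axis=0).round(2))
print("permutation p-value, TA1 vs TA2:", round(p_value, 3))
```

Each scheme yields a simulated theoretical population of FWs, from which percentile confidence intervals and significance decisions follow directly.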

Findings

First, the authors demonstrate that the parametric bootstrap is not the only way to simulate theoretical populations: permutation sampling and bootstrap sampling can also be employed. The authors then obtain results from all three approaches and describe why the results of permutation sampling, bootstrap sampling and parametric bootstrap sampling differ. Practitioners can employ any of the approaches; the choice depends on how much variation in FWs is required by the quality assurance division.

Originality/value

Using these methods provides QFD practitioners with a robust and reliable method for determining which TAs should be selected for attention in product and service design. The explicit selection of TAs will help to achieve maximum customer satisfaction, and save time and money, which are the ultimate objectives of QFD.

Details

International Journal of Productivity and Performance Management, vol. 64 no. 3
Type: Research Article
ISSN: 1741-0401

Article
Publication date: 14 October 2020

Haiyan Ge, Xintian Liu, Yu Fang, Haijie Wang, Xu Wang and Minghui Zhang

Abstract

Purpose

The purpose of this paper is to introduce error ellipse into the bootstrap method to improve the reliability of small samples and the credibility of the S-N curve.

Design/methodology/approach

Based on the bootstrap method and the reliability of the original samples, two error ellipse models are proposed. The error ellipse model reasonably predicts that the scatter of the expanded virtual samples obeys a two-dimensional normal distribution.
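The papers' actual error ellipse models are not reproduced in the abstract; as a hedged sketch of the underlying idea, virtual (log stress, log life) points can be drawn from a two-dimensional normal distribution centred on each original observation. All values, including the per-point covariance, are assumed for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Small original fatigue sample: (log stress amplitude, log life); synthetic values.
samples = np.array([[2.45, 5.1], [2.40, 5.4], [2.35, 5.8], [2.30, 6.2]])

# Assumed per-point error covariance (the "error ellipse"); in the papers this is
# derived from the reliability of the original samples, here it is illustrative.
cov = np.array([[0.0004, 0.0001],
                [0.0001, 0.0100]])

# Expand each observation into virtual samples obeying a 2D normal law around it.
virtual = np.vstack([rng.multivariate_normal(p, cov, size=50) for p in samples])

# Fit a Basquin-type line log N = a + b * log S on original vs expanded data.
for name, data in [("original", samples), ("expanded", virtual)]:
    b, a = np.polyfit(data[:, 0], data[:, 1], 1)
    print(f"{name}: log N = {a:.2f} + {b:.2f} * log S  (n = {len(data)})")
```

The expanded set supports parameter estimation with a much larger effective sample, which is the mechanism behind the shortened confidence intervals reported in the Findings.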

Findings

By comparing parameters obtained by the bootstrap method, improved bootstrap method (normal distribution) and error ellipse methods, it is found that the error ellipse method achieves the expansion of sampling range and shortens the confidence interval, which improves the accuracy of the estimation of parameters with small samples. Through case analysis, it is proved that the tangent error ellipse method is feasible, and the series of S-N curves is reasonable by the tangent error ellipse method.

Originality/value

The error ellipse methods can lay a technical foundation for life prediction of products and have a progressive significance for the quality evaluation of products.

Details

Engineering Computations, vol. 38 no. 1
Type: Research Article
ISSN: 0264-4401

Book part
Publication date: 10 April 2019

Heng Chen and Q. Rallye Shen

Abstract

Sampling units for the 2013 Methods-of-Payment survey were selected through an approximate stratified two-stage sampling design. To compensate for nonresponse and noncoverage and ensure consistency with external population counts, the observations are weighted through a raking procedure. We apply bootstrap resampling methods to estimate the variance, allowing for randomness from both the sampling design and raking procedure. We find that the variance is smaller when estimated through the bootstrap resampling method than through the naive linearization method, where the latter does not take into account the correlation between the variables used for weighting and the outcome variable of interest.
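A minimal sketch of the key point, namely that the weighting step must be redone inside every bootstrap replicate so that the variance reflects both the sampling design and the weighting procedure. Synthetic data and a one-dimensional poststratification stand in for the survey's full raking procedure:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic survey: a stratum label and an outcome per respondent.
strata = rng.choice(["young", "old"], size=200, p=[0.7, 0.3])
y = np.where(strata == "young", 1.0, 3.0) + rng.normal(0, 0.5, 200)

# Known external population shares (the weighting targets).
targets = {"young": 0.5, "old": 0.5}

def weighted_mean(strata, y):
    # Re-weight so each stratum matches its external population share.
    w = np.empty_like(y)
    for s, share in targets.items():
        mask = strata == s
        w[mask] = share / mask.mean()
    return np.average(y, weights=w)

est = weighted_mean(strata, y)

# Bootstrap: resample respondents AND redo the weighting on every replicate.
reps = 2000
boots = np.empty(reps)
for r in range(reps):
    idx = rng.integers(0, len(y), len(y))
    boots[r] = weighted_mean(strata[idx], y[idx])

print(f"estimate = {est:.3f}, bootstrap s.e. = {boots.std(ddof=1):.3f}")
```

A naive variance estimate that treats the weights as fixed would miss the correlation between the weighting variables and the outcome, which is the gap the bootstrap closes.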

Details

The Econometrics of Complex Survey Data
Type: Book
ISBN: 978-1-78756-726-9

Article
Publication date: 19 June 2009

Clara M. Novoa and Francis Mendez

Abstract

Purpose

The purpose of this paper is to present bootstrapping as an alternative statistical methodology for analyzing time studies and input data for discrete-event simulations. Bootstrapping is a non-parametric technique for estimating the sampling distribution of a statistic by repeated sampling (i.e. resampling) with replacement from an original sample. This paper proposes a relatively simple implementation of bootstrap techniques for time study analysis.
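A minimal sketch of this resampling idea, using made-up cycle times rather than the paper's data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical observed cycle times from a time study, in minutes.
times = np.array([4.1, 3.8, 4.6, 5.0, 3.9, 4.4, 4.2, 5.3, 4.0, 4.7])

# Resample WITH replacement from the original sample and recompute the statistic.
reps = 10_000
boot_means = rng.choice(times, size=(reps, len(times)), replace=True).mean(axis=1)

# The spread of the resampled statistic approximates its sampling distribution,
# without assuming normality of the underlying cycle times.
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {times.mean():.2f}, 95% percentile CI = ({lo:.2f}, {hi:.2f})")
```

The same loop works in a spreadsheet, which is the implementation route the paper emphasises.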

Design/methodology/approach

Using an inductive approach, this work selects a typical situation to conduct a time study, applies two bootstrap procedures for the statistical analysis, compares bootstrap to traditional parametric approaches, and extrapolates general advantages of bootstrapping over parametric approaches.

Findings

Bootstrap produces accurate inferences when compared to those from parametric methods, and it is an alternative when the underlying parametric assumptions are not met.

Research limitations/implications

Research results contribute to work measurement and simulation fields since bootstrap promises an increase in accuracy in cases where the normality assumption is violated or only small samples are available. Furthermore, this paper shows that electronic spreadsheets are appropriate tools to implement the proposed bootstrap procedures.

Originality/value

In previous work, the standard procedure for analyzing time studies and input data for simulations has been a parametric approach. Bootstrapping permits obtaining both point estimates and estimates of time distributions. Engineers and managers involved in process improvement initiatives could use bootstrapping to better exploit the information in available samples.

Details

International Journal of Productivity and Performance Management, vol. 58 no. 5
Type: Research Article
ISSN: 1741-0401

Article
Publication date: 7 August 2017

Bahram Sadeghpour Gildeh, Sedigheh Rahimpour and Fatemeh Ghanbarpour Gravi

Abstract

Purpose

The purpose of this paper is to construct a statistical hypotheses test for process capability indices and compare the pairs of them with a fixed sample size.

Design/methodology/approach

Since the sampling distribution of the estimators of pairs of process capability indices (PCIs) is very complex, an exact statistical hypothesis test for them cannot be constructed. Therefore, the authors propose a bootstrap method to construct the hypothesis test for them on the basis of the p-value.
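A generic bootstrap p-value test of this kind can be sketched as follows; a simple Cp-style index and synthetic process data stand in for the paper's actual indices and procedure:

```python
import numpy as np

rng = np.random.default_rng(4)

LSL, USL = 9.0, 11.0  # specification limits, assumed for illustration

def cp(x):
    # Simple Cp-style capability index.
    return (USL - LSL) / (6 * x.std(ddof=1))

# Two synthetic process samples of fixed size n.
n = 50
a = rng.normal(10.0, 0.30, n)
b = rng.normal(10.0, 0.35, n)

obs = cp(a) - cp(b)

# Bootstrap the difference: resample each process independently with replacement.
reps = 5000
diffs = np.empty(reps)
for r in range(reps):
    diffs[r] = cp(rng.choice(a, n)) - cp(rng.choice(b, n))

# Two-sided bootstrap p-value for H0: the two indices are equal, based on how
# often the resampled difference falls on either side of zero.
p_value = 2 * min(np.mean(diffs <= 0), np.mean(diffs >= 0))
print(f"Cp(a) - Cp(b) = {obs:.3f}, bootstrap p-value = {p_value:.3f}")
```

This avoids any closed-form expression for the sampling distribution of the estimated indices, which is exactly why the bootstrap is attractive here.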

Findings

The authors show that, as n increases, the bootstrap method produces better output than the other methods and can be easily implemented. The authors also demonstrate that an exact hypothesis test sometimes cannot be constructed without additional assumptions.

Originality/value

In the present paper, several methods for testing hypotheses about the difference between two process capability indices are compared.

Details

International Journal of Quality & Reliability Management, vol. 34 no. 7
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 9 February 2022

Xintian Liu, Jiazhi Liu, Haijie Wang and Xiaobing Yang

Abstract

Purpose

To improve the accuracy of parameter prediction for small-sample data, and to account for the error present in the samples, the error circle is introduced to analyze the original samples.

Design/methodology/approach

The influence of surface roughness on fatigue life is discussed. The error circle treats the original samples and extends each single sample, which reduces the influence of sample error.

Findings

The S-N curve obtained by the error circle method is more reliable; the S-N curve of the Bootstrap method is more reliable than that of the Maximum Likelihood Estimation (MLE) method.

Originality/value

The parameter distribution and characteristics are statistically obtained based on the surface roughness, surface roughness factor and intercept constant. The original sample is studied by an error circle and discussed using the Bootstrap and MLE methods to obtain corresponding S-N curves. It provides a more trustworthy basis for predicting the useful life of products.

Details

International Journal of Structural Integrity, vol. 13 no. 2
Type: Research Article
ISSN: 1757-9864

Article
Publication date: 1 July 2005

Augustin Prodan and Remus Campean

Abstract

Purpose

The aim of this work is to implement bootstrapping methods into software tools, based on Java.

Design/methodology/approach

This paper presents a category of software e‐tools aimed at simulating laboratory works and experiments.

Findings

Both students and teaching staff use traditional statistical methods to infer the truth from sample data gathered in laboratory experiments. However, repeated laboratory experiments consume a great deal of substances and reactants, and there are also ethically motivated reasons to reduce the number of animals used in experimentation. Using a bootstrapping tool and computer power, the experimenter can repeat the original experiment on a computer, obtaining pseudo-data as plausible as those obtained from the original experiment.

Originality/value

The paper provides data on implementing bootstrapping methods in software e-tools that simulate laboratory experiments in didactic and research activities.

Details

Campus-Wide Information Systems, vol. 22 no. 3
Type: Research Article
ISSN: 1065-0741

Article
Publication date: 31 December 2018

Antonio Gil Ropero, Ignacio Turias Dominguez and Maria del Mar Cerbán Jiménez

Abstract

Purpose

The purpose of this paper is to evaluate the functioning of the main Spanish and Portuguese container ports to observe whether they are operating below their production capabilities.

Design/methodology/approach

To achieve the above-mentioned objective, one possible method is to calculate the data envelopment analysis (DEA) efficiency and the scale efficiency (SE) of targets; to account for the variability across different samples, a bootstrap scheme has been applied.

Findings

The results showed that the DEA bootstrap-based approach can not only select a suitable unit that accords with a port's actual input capabilities, but also provide a more accurate result. The bootstrapped results indicate that none of the ports need future investments to expand port infrastructure.

Practical implications

The proposed DEA bootstrap-based approach provides useful implications in the robust measurement of port efficiency considering different samples. The study proves the usefulness of this approach as a decision-making tool in port efficiency.

Originality/value

This study is one of the first to apply the bootstrap to measuring port efficiency in the context of Spain and Portugal. In the first stage, two DEA models were used to obtain pure technical efficiency, technical efficiency and SE, under both input-oriented options: constant returns to scale and variable returns to scale. In the second stage, the bootstrap method was applied to determine efficiency rankings of Iberian Peninsula container ports, taking different samples into consideration. Confidence interval estimates of efficiency for each port are reported. This paper provides useful insights into the application of a DEA bootstrap-based approach as a modeling tool to aid decision making in measuring port efficiency.

Details

Industrial Management & Data Systems, vol. 119 no. 4
Type: Research Article
ISSN: 0263-5577

Book part
Publication date: 12 December 2003

R. Carter Hill, Lee C. Adkins and Keith A. Bender

Abstract

The Heckman two-step estimator (Heckit) for the selectivity model is widely applied in Economics and other social sciences. In this model a non-zero outcome variable is observed only if a latent variable is positive. The asymptotic covariance matrix for a two-step estimation procedure must account for the estimation error introduced in the first stage. We examine the finite sample size of tests based on alternative covariance matrix estimators. We do so by using Monte Carlo experiments to evaluate bootstrap generated critical values and critical values based on asymptotic theory.

Details

Maximum Likelihood Estimation of Misspecified Models: Twenty Years Later
Type: Book
ISBN: 978-1-84950-253-5

Book part
Publication date: 10 April 2019

Shu Yang and Jae Kwang Kim

Abstract

Nearest neighbor imputation has a long tradition for handling item nonresponse in survey sampling. In this article, we study the asymptotic properties of the nearest neighbor imputation estimator for general population parameters, including population means, proportions and quantiles. For variance estimation, we propose novel replication variance estimation, which is asymptotically valid and straightforward to implement. The main idea is to construct replicates of the estimator directly based on its asymptotically linear terms, instead of individual records of variables. The simulation results show that nearest neighbor imputation and the proposed variance estimation provide valid inferences for general population parameters.
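The basic nearest neighbor imputation mechanism can be sketched as follows; the data are synthetic, and the chapter's replication variance construction is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic survey data: covariate x fully observed, outcome y with item nonresponse.
n = 200
x = rng.uniform(0, 10, n)
y = 2.0 * x + rng.normal(0, 1, n)
observed = rng.random(n) > 0.3          # roughly 30% of y values are missing
y_imp = np.where(observed, y, np.nan)

# Nearest neighbor imputation: fill each missing y with the y of the
# respondent whose x is closest among the observed cases (the donor).
donors_x, donors_y = x[observed], y[observed]
for i in np.where(~observed)[0]:
    nn = np.argmin(np.abs(donors_x - x[i]))
    y_imp[i] = donors_y[nn]

# Imputed estimator of the population mean.
print(f"full-data mean = {y.mean():.2f}, NN-imputed mean = {y_imp.mean():.2f}")
```

Estimators of means, proportions and quantiles are then computed on the imputed data set; the chapter's contribution is a valid replicate-based variance for such estimators.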

Details

The Econometrics of Complex Survey Data
Type: Book
ISBN: 978-1-78756-726-9
