Search results

1 – 10 of over 85,000
Article
Publication date: 1 December 1998

Lee‐Ing Tong and Jann‐Pygn Chen

Abstract

When the process probability distribution is non‐normal or unknown, the process mean and standard deviation may not properly describe the distribution’s shape. Consequently, the traditional process capability indices (PCIs) Cp, Cpk, Cpm and Cpmk cannot express the actual process capability. This paper presents a procedure for constructing lower confidence limits for PCIs when the process distribution is unknown. First, order statistics are used to find estimators of Cp, Cpk, Cpm and Cpmk. The bootstrap simulation method is then used to construct the lower confidence limits of the PCIs, allowing the process’s capability to be evaluated. A numerical example demonstrates the effectiveness of the proposed procedure.
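
A minimal Python sketch of the percentile-bootstrap lower confidence limit described above, using the conventional Cp estimator rather than the paper’s order-statistic estimators; the data and specification limits are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def cp_hat(x, lsl, usl):
    # Conventional Cp estimator; the paper replaces the 6-sigma
    # spread with order-statistic (percentile) estimates.
    return (usl - lsl) / (6.0 * np.std(x, ddof=1))

def bootstrap_lcl(x, lsl, usl, alpha=0.05, b=2000):
    # Percentile-method lower confidence limit from b resamples.
    stats = [cp_hat(rng.choice(x, size=x.size, replace=True), lsl, usl)
             for _ in range(b)]
    return float(np.quantile(stats, alpha))

x = rng.normal(10.0, 1.0, size=80)   # illustrative process data
lcl = bootstrap_lcl(x, lsl=6.0, usl=14.0)
```

The process is judged capable at the chosen confidence level when this lower limit exceeds the required capability target.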

Details

International Journal of Quality & Reliability Management, vol. 15 no. 8/9
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 18 September 2007

Sri Beldona and Vernon E. Francis

Abstract

Purpose

To develop, test and implement a sampling strategy for equipment auditing for a Fortune 100 company.

Design/methodology/approach

Regression analysis is applied to auditing of equipment for a large US corporation. Empirical data and test data sets are used to evaluate the efficacy of using regression for auditing and to determine reasonable and efficient sample sizes to be employed across more than 5,000 locations.

Findings

Regression is a viable and useful method for equipment auditing when high correlation is anticipated between pre‐ and post‐audit equipment values. The recommended sample size depends on the size of the location, as measured by total pieces of equipment. Decision rules combining acceptable tolerance limits, desired confidence level and sample size are provided.
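
A hedged Python sketch of the regression-projection-and-tolerance idea these findings describe; the data, sample size and 5 per cent tolerance are invented for illustration and are not the paper’s recommendations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical location: recorded (pre-audit) values for 400 items;
# audited (post-audit) values observed only for a sample.
pre = rng.gamma(2.0, 500.0, size=400)
post = 0.97 * pre + rng.normal(0.0, 20.0, size=400)   # high correlation

sample = rng.choice(400, size=60, replace=False)       # audit sample
slope, intercept = np.polyfit(pre[sample], post[sample], 1)

# Project the audited total from the regression fit, then apply an
# illustrative +/- 5% tolerance decision rule.
projected_total = slope * pre.sum() + intercept * pre.size
within_tolerance = abs(projected_total - pre.sum()) / pre.sum() <= 0.05
```

The decision rule would pass the location when the projected total stays within the tolerance of the recorded total.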

Research limitations/implications

The method, recommended sample sizes and decision rules are particularly applicable where high correlation is expected between pre‐ and post‐audit equipment values. Not all standard regression assumptions are met in every instance, especially with small sample sizes.

Practical implications

The regression approach and model, sample size recommendations and decision rules for passing or failing an equipment audit described herein have been implemented at a Fortune 100 company, and are generally applicable to equipment and inventory auditing when high correlation between pre‐ and post‐audit equipment is expected.

Originality/value

This paper provides a practical and useful regression‐based approach to sampling for equipment auditing. Recommended sample sizes and decision rules for passing or failing the audit are explicitly defined.

Details

Managerial Auditing Journal, vol. 22 no. 8
Type: Research Article
ISSN: 0268-6902

Book part
Publication date: 19 December 2012

Jenny N. Lye and Joseph G. Hirschberg

Abstract

In this chapter we demonstrate the construction of inverse test confidence intervals for the turning-points in estimated nonlinear relationships by the use of the marginal or first derivative function. First, we outline the inverse test confidence interval approach. Then we examine the relationship between the traditional confidence intervals based on the Wald test for the turning-points for a cubic, a quartic, and fractional polynomials estimated via regression analysis and the inverse test intervals. We show that the confidence interval plots of the marginal function can be used to estimate confidence intervals for the turning-points that are equivalent to the inverse test. We also provide a method for the interpretation of the confidence intervals for the second derivative function to draw inferences for the characteristics of the turning-point.

This method is applied to the examination of the turning-points found when estimating a quartic and a fractional polynomial from data used for the estimation of an Environmental Kuznets Curve. The Stata do files used to generate these examples are listed in Appendix A along with the data.
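
The turning-point logic can be sketched in Python (the chapter’s own code is in Stata do files): fit the polynomial, then find where the marginal (first-derivative) function crosses zero. The cubic and its noise are invented for illustration; the inverse test itself, which reads confidence bounds off the derivative’s confidence band, is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative cubic relationship with two turning points.
x = np.linspace(0.0, 10.0, 200)
y = 0.2 * x**3 - 3.0 * x**2 + 12.0 * x + rng.normal(0.0, 1.0, size=x.size)

b3, b2, b1, _ = np.polyfit(x, y, 3)   # fitted cubic coefficients

# Turning points are the zeros of the marginal function
# 3*b3*x^2 + 2*b2*x + b1.
turning = np.sort(np.roots([3.0 * b3, 2.0 * b2, b1]).real)
```

For the cubic above the true turning points sit near x ≈ 2.76 and x ≈ 7.24, which the fitted marginal function recovers closely.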

Article
Publication date: 29 June 2010

Jeh‐Nan Pan, Tzu‐Chun Kuo and Abraham Bretholt

Abstract

Purpose

The purpose of this research is to develop a new key performance index (KPI) and its interval estimation for measuring service quality from customers’ perceptions, since most service quality data follow a non‐normal distribution.

Design/methodology/approach

Based on the non‐normal process capability indices used in manufacturing industries, a new KPI suitable for measuring service quality is developed using Parasuraman's 5th Gap between customers' expectation and perception. Moreover, the confidence interval of the proposed KPI is established using the bootstrapping method.
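
A minimal sketch of the bootstrap interval step on a gap score, assuming hypothetical 7-point SERVQUAL responses; the paper’s actual KPI is built from non-normal process capability indices, so the mean gap here is only a placeholder statistic:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical responses: the 5th Gap is perception minus
# expectation per respondent on a 7-point scale.
expectation = rng.integers(4, 8, size=150).astype(float)
perception = rng.integers(3, 8, size=150).astype(float)
gap = perception - expectation

def kpi(g):
    # Placeholder index: mean gap stands in for the paper's KPI.
    return g.mean()

boot = [kpi(rng.choice(gap, size=gap.size, replace=True))
        for _ in range(2000)]
lower, upper = np.quantile(boot, [0.025, 0.975])   # 95% bootstrap CI
```

An interval that excludes the managerial target would flag that SERVQUAL dimension for improvement.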

Findings

The quantitative method for measuring the service quality through the new KPI and its interval estimation is illustrated by a realistic example. The results show that the new KPI allows practising managers to evaluate the actual service quality level delivered within each of five SERVQUAL categories and prioritize the possible improvement projects from customers' perspectives. Moreover, compared with the traditional method of sample size determination, a substantial amount of cost savings can be expected by using the suggested sample sizes.

Practical implications

The paper presents a structured approach of opportunity assessment for improving service quality from a strategic alignment perspective, particularly in the five dimensions: tangibles, reliability, responsiveness, assurance, and empathy. The new approach provides practising managers with a decision‐making tool for measuring service quality, detecting problematic situations and selecting the most urgent improvement project. Once the existing service problems are identified and improvement projects are prioritized, it can lead to the direction of continuous improvement for any service industry.

Originality/value

Given a managerial target on any desired service level as well as customers' perceptions and expectations, the new KPI could be applied to any non‐normal service quality and other survey data. Thus, the corporate performance in terms of key factors of business success can also be measured by the new KPI, which may lead to managing complexities and enhancing sustainability in service industries.

Details

Industrial Management & Data Systems, vol. 110 no. 6
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 1 February 1986

T.N. Goh

Abstract

A variety of quantitative specifications, usually in terms of numerical limits, have been developed in industry for the description, prediction and control of product quality. As the theoretical foundations of these specifications are often beyond the working knowledge of many manufacturing engineers and managers, this article gives a non‐mathematical account of some of the limits commonly encountered in discussions related to product design, manufacture and inspection; emphasis is placed on the distinctions in their intended purposes and methods of application.

Details

International Journal of Quality & Reliability Management, vol. 3 no. 2
Type: Research Article
ISSN: 0265-671X

Open Access
Article
Publication date: 21 March 2024

Warisa Thangjai and Sa-Aat Niwitpong

Abstract

Purpose

Confidence intervals play a crucial role in economics and finance, providing a credible range of values for an unknown parameter along with a corresponding level of certainty. Their applications encompass economic forecasting, market research, financial forecasting, econometric analysis, policy analysis, financial reporting, investment decision-making, credit risk assessment and consumer confidence surveys. Signal-to-noise ratio (SNR) finds applications in economics and finance across various domains such as economic forecasting, financial modeling, market analysis and risk assessment. A high SNR indicates a robust and dependable signal, simplifying the process of making well-informed decisions. On the other hand, a low SNR indicates a weak signal that could be obscured by noise, so decision-making procedures need to take this into serious consideration. This research focuses on the development of confidence intervals for functions derived from the SNR and explores their application in the fields of economics and finance.

Design/methodology/approach

The construction of the confidence intervals involved the application of various methodologies. For the SNR, confidence intervals were formed using the generalized confidence interval (GCI), large sample and Bayesian approaches. The difference between SNRs was estimated through the GCI, large sample, method of variance estimates recovery (MOVER), parametric bootstrap and Bayesian approaches. Additionally, confidence intervals for the common SNR were constructed using the GCI, adjusted MOVER, computational and Bayesian approaches. The performance of these confidence intervals was assessed using coverage probability and average length, evaluated through Monte Carlo simulation.
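
Of the approaches listed, the large-sample interval for a single SNR is the simplest to sketch. Assuming normal data, the delta method gives Var(θ̂) ≈ (1 + θ²/2)/n for θ̂ = x̄/s; the data are illustrative, and the GCI, MOVER and Bayesian intervals are not reproduced:

```python
import math
import numpy as np

rng = np.random.default_rng(4)

x = rng.normal(2.0, 1.0, size=100)     # illustrative series
theta = x.mean() / x.std(ddof=1)       # sample signal-to-noise ratio

# Large-sample (delta-method) 95% interval for the SNR.
se = math.sqrt((1.0 + theta**2 / 2.0) / x.size)
z = 1.96                               # normal critical value
lower, upper = theta - z * se, theta + z * se
```

Coverage probability and average length of such intervals are what the paper’s Monte Carlo simulations compare across approaches.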

Findings

The GCI approach demonstrated superior performance over other approaches in terms of both coverage probability and average length for the SNR and the difference between SNRs. Hence, employing the GCI approach is advised for constructing confidence intervals for these parameters. As for the common SNR, the Bayesian approach exhibited the shortest average length. Consequently, the Bayesian approach is recommended for constructing confidence intervals for the common SNR.

Originality/value

This research presents confidence intervals for functions of the SNR to assess SNR estimation in the fields of economics and finance.

Details

Asian Journal of Economics and Banking, vol. 8 no. 2
Type: Research Article
ISSN: 2615-9821

Article
Publication date: 14 October 2020

Haiyan Ge, Xintian Liu, Yu Fang, Haijie Wang, Xu Wang and Minghui Zhang

Abstract

Purpose

The purpose of this paper is to introduce error ellipse into the bootstrap method to improve the reliability of small samples and the credibility of the S-N curve.

Design/methodology/approach

Based on the bootstrap method and the reliability of the original samples, two error ellipse models are proposed. The models assume that the dispersion of the expanded virtual samples follows a two-dimensional normal distribution.
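
The virtual-sample expansion step can be sketched as follows, assuming hypothetical (log-stress, log-life) fatigue points and an illustrative covariance; the paper’s error ellipse models calibrate this dispersion from the reliability of the original samples:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical small-sample fatigue data: (log stress, log life) pairs.
points = np.array([[2.48, 5.1], [2.45, 5.6], [2.42, 6.2], [2.40, 6.8]])

# Expand each original point into virtual samples drawn from a
# bivariate normal distribution, as the error ellipse models assume;
# the covariance here is purely illustrative.
cov = np.array([[1e-4, 0.0], [0.0, 1e-2]])
virtual = np.vstack([rng.multivariate_normal(p, cov, size=50)
                     for p in points])
```

The expanded sample would then feed the S-N curve fit in place of the original four points.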

Findings

By comparing parameters obtained by the bootstrap method, the improved bootstrap method (normal distribution) and the error ellipse methods, it is found that the error ellipse method expands the sampling range and shortens the confidence interval, improving the accuracy of parameter estimation with small samples. A case analysis shows that the tangent error ellipse method is feasible and yields a reasonable series of S-N curves.

Originality/value

The error ellipse methods can lay a technical foundation for product life prediction and support the quality evaluation of products.

Details

Engineering Computations, vol. 38 no. 1
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 1 December 2001

John Reyers and John Mansfield

Abstract

A literature review suggested that conservation refurbishment work was perceived by design professionals to be inherently more risky than new‐build projects. The objective assessment of risk items helps ameliorate their impact. The results of a large questionnaire‐based survey evaluating specialist design consultants’ risk identification and management approaches are presented. The risk management approaches of specialist design consultants are divergent, reflecting their professional philosophies, educational programmes and experience. Further differences emerge according to practice size and contract value. Particular attention is paid to the responses concerning contingency pricing, project budget forecasts and extensions of time. Results suggest that client education via briefing and consultants’ wider use of confidence limits can help improve the management of risk.

Details

Structural Survey, vol. 19 no. 5
Type: Research Article
ISSN: 0263-080X

Article
Publication date: 1 March 2006

David Oloke, David J. Edwards, Bruce Wright and Peter E.D. Love

Abstract

Effective management and utilisation of plant history data can considerably improve plant and equipment performance. This rationale underpins statistical and mathematical models for exploiting plant management data more efficiently, but industry has been slow to adopt these models. Reasons proffered for this include a perception that the models are too complex and time consuming, and an inability to account for the dynamism inherent within data sets. To help address this situation, this research developed and tested a web‐based data capture and information management system. Specifically, the system integrates a web‐enabled relational database management system (RDBMS) with a model base management system (MBMS). The RDBMS captures historical data from geographically dispersed plant sites, while the MBMS hosts a set of autoregressive integrated moving average (ARIMA) time series models to predict plant breakdown. Using a sample of plant history file data, the system and the ARIMA models’ predictive capacity were tested. As a measure of model error, the mean absolute deviation (MAD) ranged between 5.34 and 11.07 per cent for the plant items used in the test. The root mean square error (RMSE) values showed similar trends, with the prediction model yielding a highest value of 29.79 per cent. The paper concludes with directions for future work, which include refining the graphical user interface (GUI) and developing a knowledge‐based management system (KBMS) to interface with the RDBMS.
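
The two error measures quoted are straightforward to compute; a small Python sketch with invented actual and predicted breakdown figures (not the paper’s data), expressing each measure as a percentage of the mean actual level:

```python
import math

# Hypothetical breakdown-hours actuals vs ARIMA forecasts for one item.
actual = [120.0, 135.0, 128.0, 142.0, 150.0]
predicted = [114.0, 140.0, 131.0, 136.0, 158.0]

n = len(actual)
mad = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

# Expressed relative to the mean actual level, as percentages.
mean_actual = sum(actual) / n
mad_pct = 100.0 * mad / mean_actual
rmse_pct = 100.0 * rmse / mean_actual
```

RMSE weights large misses more heavily than MAD, which is why the two measures can rank forecasting models differently.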

Details

Journal of Engineering, Design and Technology, vol. 4 no. 1
Type: Research Article
ISSN: 1726-0531

Article
Publication date: 1 February 2009

M.H. Hojjati and A. Sadighi

Abstract

In a conventional finite element analysis, material properties, dimensions and applied loads are usually defined as deterministic quantities. This simplifying assumption, however, does not hold in practical applications. Using statistics in engineering problems enables the effects of input‐variable dispersion on output parameters to be considered, providing a powerful tool for better decision making and more reliable design. In this paper, a probability‐based design approach is presented which evaluates the sensitivity of a mechanical model to random input variables. To illustrate the effectiveness of this method, a simple bracket is analyzed for stress‐strain behavior using commercially available finite element software. Young’s modulus, applied pressure and dimensions are treated as random variables with Gaussian distributions, and their effects on maximum stress and displacement are evaluated. The finite element results are compared with reliability‐based theoretical results and show very good agreement, demonstrating the capability of commercially available software to handle a probabilistic design approach.
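
The probabilistic approach amounts to propagating input distributions through the model by Monte Carlo sampling. A sketch with a bar in axial stress, σ = F/(w·t), standing in for the paper’s bracket model; all distributions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

# Gaussian input variables, analogous to the paper's random
# Young's modulus, pressure and dimensions (values invented).
force = rng.normal(10_000.0, 500.0, size=n)     # N
width = rng.normal(0.05, 0.001, size=n)         # m
thickness = rng.normal(0.01, 0.0002, size=n)    # m

stress = force / (width * thickness)            # Pa, per sample
mean, std = stress.mean(), stress.std()
```

The resulting stress distribution, rather than a single deterministic value, is what supports reliability-based design decisions.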

Details

Multidiscipline Modeling in Materials and Structures, vol. 5 no. 2
Type: Research Article
ISSN: 1573-6105
