Search results

1 – 10 of over 45,000
Article
Publication date: 16 October 2018

Nandkumar Mishra and Santosh B. Rane

Abstract

Purpose

The purpose of this technical paper is to explore the application of analytics and Six Sigma in the manufacturing processes for iron foundries. This study aims to establish a causal relationship between chemical composition and the quality of the iron casting to achieve the global benchmark quality level.

Design/methodology/approach

The case study-based exploratory research design is used in this study. The problem discovery is done through the literature survey and Delphi method-based expert opinions. The prediction model is built and deployed in 11 cases to validate the research hypothesis. The analytics helps in achieving the statistically significant business goals. The design includes Six Sigma DMAIC (Define – Measure – Analyze – Improve and Control) approach, benchmarking, historical data analysis, literature survey and experiments for the data collection. The data analysis is done through stratification and process capability analysis. The logistic regression-based analytics helps in prediction model building and simulations.
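To illustrate the kind of logistic-regression prediction model the abstract describes, here is a minimal sketch (hypothetical composition features and data, not the authors' deployed model):

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit p(reject) = sigmoid(w.x + b) by stochastic gradient descent on log-loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi  # gradient of log-loss with respect to z
            for j in range(len(w)):
                w[j] -= lr * err * xi[j]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """Predicted probability that a casting is rejected."""
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical data: [carbon %, silicon %] per melt; 1 = casting rejected
X = [[3.2, 1.8], [3.4, 2.0], [3.9, 2.6], [4.0, 2.7], [3.3, 1.9], [3.8, 2.5]]
y = [0, 0, 1, 1, 0, 1]
w, b = train_logistic(X, y)
```

A fitted model of this shape supports the simulations the abstract mentions: varying one input while holding the others fixed shows how the predicted rejection probability responds.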

Findings

The application of the prediction model enabled quick root cause analysis and reduced rejection by over 99 per cent, saving over INR6.6m per year. It also enhanced the reliability of the production line and supply chain, raising on-time delivery from 80 per cent to 99.78 per cent. Analytics combined with the Six Sigma DMAIC approach can thus be applied quickly and easily in the manufacturing domain as well.

Research limitations/implications

The limitation of the present analytics model is that it provides only point estimates. The model can be further enhanced by incorporating range estimates through Monte Carlo simulation.

Practical implications

The increasing use of prediction models, together with sensors and the Internet of Things, is likely to enhance the predictability and efficiency of various manufacturing processes in the near future.

Originality/value

Earlier researchers have used design of experiments, artificial neural networks and technical simulations to optimise either chemical composition, mould properties or melt shop parameters. This work, however, is based on comprehensive historical data-based analytics. It considers multiple human and temporal factors, sand and mould properties and melt shop parameters along with their relative weights, which is unique. The prediction model is useful to practitioners for parameter simulation and quality enhancement. Researchers can use similar analytics models with the structured Six Sigma DMAIC approach in other manufacturing processes for simulation and optimisation.

Details

International Journal of Lean Six Sigma, vol. 10 no. 1
Type: Research Article
ISSN: 2040-4166

Keywords

Article
Publication date: 31 May 2022

Qiang Li, Sifeng Liu and Changhai Lin

Abstract

Purpose

The purpose of this paper is to solve the problem of quality prediction in the equipment production process, providing a method to deal with abnormal data and with data fluctuation.

Design/methodology/approach

The analytic hierarchy process-process failure mode and effect analysis (AHP-PFMEA) structure tree is established based on the analytic hierarchy process (AHP) and process failure mode and effect analysis (PFMEA). Through the failure mode analysis table of the production process, the weights of the failure processes and stations are determined and a ranking of risk failure stations is obtained, so as to identify the most serious failure processes and stations. The spectrum analysis method is used to identify fault data and to judge "abnormal" values within them. Based on an analysis of their impact, an "offset operator" is designed to eliminate it, and a new moving average denoise operator is constructed to eliminate the "noise" in the original random fluctuation data. Then, the DGM (1,1) model is constructed to predict production process quality.
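For readers unfamiliar with the grey model named above, a minimal DGM(1,1) sketch in Python follows (illustrative only: it omits the paper's offset and denoise operators and is not the authors' implementation):

```python
def dgm11_fit(x0):
    """Fit DGM(1,1), i.e. x1(k+1) = b1*x1(k) + b2 on the accumulated (1-AGO) series."""
    n = len(x0)
    # 1-AGO: accumulated generating operation
    x1 = []
    s = 0.0
    for v in x0:
        s += v
        x1.append(s)
    # Least squares over pairs (x1[k], 1) -> x1[k+1], via 2x2 normal equations
    A = [(x1[k], 1.0) for k in range(n - 1)]
    Y = [x1[k + 1] for k in range(n - 1)]
    s11 = sum(a[0] * a[0] for a in A)
    s12 = sum(a[0] for a in A)
    s22 = float(n - 1)
    t1 = sum(a[0] * y for a, y in zip(A, Y))
    t2 = sum(Y)
    det = s11 * s22 - s12 * s12
    b1 = (t1 * s22 - t2 * s12) / det
    b2 = (s11 * t2 - s12 * t1) / det
    return b1, b2, x1

def dgm11_forecast(x0, steps):
    """Fitted values for the sample plus `steps` forecasts, on the original scale."""
    b1, b2, x1 = dgm11_fit(x0)
    x1_hat = [x1[0]]
    for _ in range(len(x0) - 1 + steps):
        x1_hat.append(b1 * x1_hat[-1] + b2)
    # Inverse AGO (first differences) restores the original-scale series
    return [x1_hat[0]] + [x1_hat[k] - x1_hat[k - 1] for k in range(1, len(x1_hat))]
```

On data that grow exactly geometrically the model is exact; for example, `dgm11_forecast([1, 2, 4, 8], 1)` ends with `16.0`.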

Findings

It is discovered that the "offset operator" can effectively eliminate the impact of specific shocks, that the moving average denoise operator can eliminate the "noise" in the original random fluctuation data, and that the practical application of the model shown is very effective for quality prediction in the equipment production process.

Practical implications

The proposed approach can provide good guidance and a reference for enterprises seeking to strengthen onsite equipment management and product quality management. Application to a real-world case showed that the DGM (1,1) grey discrete model is very effective for quality prediction in the equipment production process.

Originality/value

The offset operators, including an offset operator for a multiplicative effect and an offset operator for an additive effect, are proposed to eliminate the impact of specific shocks, and a new moving average denoise operator is constructed to eliminate the “noise” in the original random fluctuation data. Both the concepts of offset operator and denoise operator with their calculation formulas were first proposed in this paper.

Details

Grey Systems: Theory and Application, vol. 13 no. 1
Type: Research Article
ISSN: 2043-9377

Keywords

Article
Publication date: 17 February 2021

Anusha R. Pai, Gopalkrishna Joshi and Suraj Rane

Abstract

Purpose

This paper focuses on studying the current state of research involving the four dimensions of defect management strategy, i.e. software defect analysis, software quality, software reliability and software development cost/effort.

Design/methodology/approach

The methodology developed by Kitchenham (2007) is followed in planning, conducting and reporting of the systematic review. Out of 625 research papers, nearly 100 primary studies related to our research domain are considered. The study attempted to find the various techniques, metrics, data sets and performance validation measures used by researchers.

Findings

The study revealed the need for integrating the four dimensions of defect management and studying its effect on software performance. This integrated approach can lead to optimal use of resources in software development process.

Research limitations/implications

There are many dimensions in defect management studies. The authors have considered only a vital few, based on the practical experiences of software engineers. Most of the research work cited in this review used public data repositories to validate its methodology, and there is a need to apply these research methods to real datasets from industry to realize the actual potential of these techniques.

Originality/value

The authors believe that this paper provides a comprehensive insight into the various aspects of state-of-the-art research in software defect management. The authors feel that this is the only research article that delves into the four facets namely software defect analysis, software quality, software reliability and software development cost/effort.

Details

International Journal of Quality & Reliability Management, vol. 38 no. 10
Type: Research Article
ISSN: 0265-671X

Keywords

Article
Publication date: 3 April 2009

K.N. Jha and C.T. Chockalingam

Abstract

Purpose

The purpose of this paper is to enable construction project team members to understand the factors that they must closely monitor to complete projects with a desired quality and also to predict quality performance during the course of a project. With quality being one of the prime concerns of clients in their construction projects, there is a definite need to monitor its performance.

Design/methodology/approach

The study discussed here is an extension of past research in which 55 project performance attributes were identified based on experts' opinions and a literature survey, which after analysis resulted in 20 factors (11 success and nine failure). The results of the second-stage questionnaire survey have been used to develop the quality performance prediction model based on artificial neural networks (ANN).

Findings

The analyses of the responses led to the conclusion that factors such as the project manager's competence, monitoring and feedback by project participants, commitment of all project participants, good coordination between project participants and availability of trained resources significantly affect the quality performance criterion. The best prediction model was found to be a 5-5-1 feed-forward neural network based on the back propagation algorithm, with a mean absolute percentage deviation (MAPD) of 8.044 percent.
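For reference, a minimal sketch of the forward pass of a 5-5-1 network and of the MAPD metric named above (illustrative structure only; the weights, activations and training procedure of the authors' model are not given in the abstract):

```python
import math

def forward_5_5_1(x, W1, b1, W2, b2):
    """One forward pass through a 5-5-1 feed-forward network:
    5 inputs -> 5 sigmoid hidden units -> 1 linear output."""
    hidden = []
    for j in range(5):
        z = sum(W1[j][i] * x[i] for i in range(5)) + b1[j]
        hidden.append(1.0 / (1.0 + math.exp(-z)))  # sigmoid activation
    return sum(W2[j] * hidden[j] for j in range(5)) + b2

def mapd(actual, predicted):
    """Mean absolute percentage deviation, the score used to compare models."""
    return 100.0 * sum(abs(a - p) / abs(a) for a, p in zip(actual, predicted)) / len(actual)
```

In a trained model the five inputs would correspond to the five critical factors listed in the findings, and the output to the predicted quality performance score.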

Practical implications

Project professionals can concentrate on certain factors, instead of handling all factors at the same time, to achieve the desired quality performance. Also, the study may help the project manager and his/her team to predict the quality performance of the project during its course.

Originality/value

The present study resulted in a model to predict the quality performance based on the factors identified as critical using ANN. With the control of the identified critical factors and usage of the prediction model, the desired quality performance can be achieved in construction projects.

Details

Journal of Advances in Management Research, vol. 6 no. 1
Type: Research Article
ISSN: 0972-7981

Keywords

Article
Publication date: 11 February 2021

Xiaoyue Zhu, Yaoguo Dang and Song Ding

Abstract

Purpose

Aiming to address the forecasting dilemma of seasonal air quality, the authors design the novel self-adaptive seasonal adjustment factor to extract the seasonal fluctuation information about the air quality index. Based on the novel self-adaptive seasonal adjustment factor, the novel seasonal grey forecasting models are established to predict the air quality in China.

Design/methodology/approach

This paper constructs a novel self-adaptive seasonal adjustment factor for quantifying the seasonal difference information of air quality. The novel self-adaptive seasonal adjustment factor reflects the periodic fluctuations of air quality. Therefore, it is employed to optimize the data generation of three conventional grey models, consisting of the GM(1,1) model, the discrete grey model and the fractional-order grey model. Then three novel self-adaptive seasonal grey forecasting models, including the self-adaptive seasonal GM(1,1) model (SAGM(1,1)), the self-adaptive seasonal discrete grey model (SADGM(1,1)) and the self-adaptive seasonal fractional-order grey model (SAFGM(1,1)), are put forward for prognosticating the air quality of all provinces in China.
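To show what a seasonal adjustment factor does, here is the classical fixed-factor scheme in Python (illustrative only: the paper's factor is self-adaptive rather than fixed, and its exact construction is not given in the abstract):

```python
def seasonal_factors(series, period):
    """Classical ratio-to-overall-mean seasonal factors:
    factor[s] = mean of observations in season s / overall mean."""
    overall = sum(series) / len(series)
    factors = []
    for s in range(period):
        vals = series[s::period]  # every observation falling in season s
        factors.append((sum(vals) / len(vals)) / overall)
    return factors

def deseasonalize(series, period):
    """Divide out the seasonal factor so a grey model sees the smoothed series."""
    f = seasonal_factors(series, period)
    return [v / f[i % period] for i, v in enumerate(series)], f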

Findings

The experiment results confirm that the novel self-adaptive seasonal adjustment factors promote the precision of the conventional grey models remarkably. Simultaneously, compared with three non-seasonal grey forecasting models and the SARIMA model, the performance of self-adaptive seasonal grey forecasting models is outstanding, which indicates that they capture the seasonal changes of air quality more efficiently.

Research limitations/implications

Since air quality is affected by various factors, subsequent research may consider including meteorological conditions, pollutant emissions and other factors to perfect the self-adaptive seasonal grey models.

Practical implications

Given the problematic air pollution situation in China, timely and accurate air quality forecasting technology is exceptionally crucial for mitigating its adverse effects on the environment and human health. The paper proposes three self-adaptive seasonal grey forecasting models to forecast the air quality index of all provinces in China, which improves the adaptability of conventional grey models and provides more efficient prediction tools for air quality.

Originality/value

The self-adaptive seasonal adjustment factors are constructed to characterize the seasonal fluctuations of air quality index. Three novel self-adaptive seasonal grey forecasting models are established for prognosticating the air quality of all provinces in China. The robustness of the proposed grey models is reinforced by integrating the seasonal irregularity. The proposed methods acquire better forecasting precisions compared with the non-seasonal grey models and the SARIMA model.

Details

Grey Systems: Theory and Application, vol. 11 no. 4
Type: Research Article
ISSN: 2043-9377

Keywords

Article
Publication date: 25 July 2019

Xia Li, Ruibin Bai, Peer-Olaf Siebers and Christian Wagner

Abstract

Purpose

Many transport and logistics companies nowadays use raw vehicle GPS data for travel time prediction. However, they face difficult challenges in terms of the costs of information storage, as well as the quality of the prediction. This paper aims to systematically investigate various meta-data (features) that require significantly less storage space but provide sufficient information for high-quality travel time predictions.

Design/methodology/approach

The paper systematically studied the combinatorial effects of features and different model fitting strategies with two popular decision tree ensemble methods for travel time prediction, namely, random forests and gradient boosting regression trees. First, the investigation was conducted using pseudo travel time data generated by a pseudo travel time sampling algorithm, which allows travel time data to be generated under different noise processes so that prediction performance under different travel conditions and noise characteristics can be studied systematically. The results and findings were then further compared and evaluated through a real-life case.

Findings

The paper provides empirical insights and guidelines on how raw GPS data can be reduced to a small-sized feature vector for the purposes of vehicle travel time prediction. It suggests that adding travel time observations from previous departure time intervals benefits the prediction, particularly when no other type of real-time information (e.g. traffic flow, speed) is available. It was also found that modular model fitting does not improve the quality of the prediction in all experimental settings used in this paper.
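As an illustration of reducing raw GPS-derived trip records to a compact per-interval feature vector of the kind described (hypothetical field names and aggregation choices, not the authors' pipeline):

```python
from collections import defaultdict

def interval_features(trips, interval_minutes=15, n_prev=2):
    """Reduce raw (departure_minute, travel_time) records to one small vector per
    departure-time interval: the interval's mean travel time plus the means of
    the n_prev preceding intervals (the 'previous interval' features)."""
    by_interval = defaultdict(list)
    for dep_minute, travel_time in trips:
        by_interval[dep_minute // interval_minutes].append(travel_time)
    means = {k: sum(v) / len(v) for k, v in by_interval.items()}
    features = {}
    for k, m in means.items():
        # Fall back to the interval's own mean when an earlier interval is missing
        prev = [means.get(k - i, m) for i in range(1, n_prev + 1)]
        features[k] = [m] + prev
    return features
```

Vectors like these, rather than the raw GPS traces, would then be fed to a tree-ensemble regressor, which is what makes the storage saving possible.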

Research limitations/implications

The findings are primarily based on empirical studies of limited real-life data instances, and the results may lack generalisability. Therefore, researchers are encouraged to test them further on more real-life data instances.

Practical implications

The paper includes implications and guidelines for the development of efficient GPS data storage and high-quality travel time prediction under different types of travel conditions.

Originality/value

This paper systematically studies the combinatorial feature effects for tree-ensemble-based travel time prediction approaches.

Details

VINE Journal of Information and Knowledge Management Systems, vol. 49 no. 3
Type: Research Article
ISSN: 2059-5891

Keywords

Article
Publication date: 1 January 2004

Alison M. Dean

Abstract

Reported studies on call centers emphasize efficiency and control, with possible implications for service priorities, customer orientation and service quality. However, there is little empirical research to test assumptions from the customer’s perspective. This study aimed to establish whether customers expected (predicted) low levels of service from a call center, how this level compared to the minimum level they considered adequate, and whether the perceived customer orientation of the call center was related to service quality expectations. Data were collected in Australia from two sources: end consumers (n = 289) of an insurance provider, and business customers (n = 325) of a bank. Key findings were similar for both samples. First, customers had very high levels of adequate (minimum) expectations, and adequate expectations behaved independently from predicted (forecast) expectations. Second, customer orientation was associated with predicted expectations but not adequate expectations. The paper concludes with suggestions for future research and managerial implications.

Details

Journal of Services Marketing, vol. 18 no. 1
Type: Research Article
ISSN: 0887-6045

Keywords

Article
Publication date: 6 November 2017

Wenjie Dong, Sifeng Liu, Zhigeng Fang, Xiaoyu Yang, Qian Hu and Liangyan Tao

Abstract

Purpose

The purpose of this paper is to clarify several commonly used quality cost models based on Juran's characteristic curve. Through mathematical deduction, the lowest point of quality cost and the corresponding quality level (often depicted by the qualification rate) can be obtained. This paper also aims to introduce a new prediction model, namely the discrete grey model (DGM), to forecast the changing trend of quality cost.

Design/methodology/approach

This paper reaches its conclusions by means of mathematical deduction. Specifically, the authors obtain the optimal quality level and the lowest quality cost by taking the derivative of the equation relating quality cost to quality level. By introducing the weakening buffer operator, the authors significantly improve the prediction accuracy of the DGM.
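The derivative argument can be illustrated with an assumed, simplified Juran-style cost form (not the paper's exact curve): failure cost falls and conformance cost rises as the quality level q approaches 1, so total cost has an interior minimum.

```python
import math

def optimal_quality_level(c_fail, c_conf):
    """Minimise an illustrative total quality cost
        C(q) = c_fail * (1 - q) + c_conf / (1 - q),   0 <= q < 1,
    where c_fail scales failure cost and c_conf scales conformance cost.
    Setting dC/dq = -c_fail + c_conf / (1 - q)**2 = 0 gives the optimum
    (an interior minimum exists when c_conf < c_fail)."""
    return 1.0 - math.sqrt(c_conf / c_fail)
```

For instance, with `c_fail = 100` and `c_conf = 1` the optimal quality level is 0.9: pushing q beyond that point costs more in conformance than it saves in failures.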

Findings

This paper demonstrates that the DGM can be used to forecast quality cost based on Juran's cost characteristic curve, especially when little information is available or the sample size is rather small. When the series is operated on by a practical weakening buffer operator, its randomness can be obviously weakened and the prediction accuracy significantly improved.

Practical implications

This paper uses a real case from the literature to verify the validity of the discrete grey forecasting model, concluding that there is a certain degree of feasibility and rationality in using the DGM to forecast the variation tendency of quality cost.

Originality/value

This paper perfects the theory of quality cost based on Juran’s characteristic curve and expands the scope of application of grey system theory.

Details

Grey Systems: Theory and Application, vol. 7 no. 3
Type: Research Article
ISSN: 2043-9377

Keywords

Article
Publication date: 5 September 2018

Melinda Oroszlányová, Carla Teixeira Lopes, Sérgio Nunes and Cristina Ribeiro

Abstract

Purpose

The quality of consumer-oriented health information on the web has been defined and evaluated in several studies. Usually it is based on evaluation criteria identified by the researchers and, so far, there is no agreed standard for the quality indicators to use. Based on such indicators, tools have been developed to evaluate the quality of web information. The HONcode is one such tool. The purpose of this paper is to investigate the influence of web document features on their quality, using HONcode as ground truth, with the aim of finding whether it is possible to predict the quality of a document using its characteristics.

Design/methodology/approach

The present work uses a set of health documents and analyzes how their characteristics (e.g. web domain, last update, type, mention of places of treatment and prevention strategies) are associated with their quality. Based on these features, statistical models are built which predict whether health-related web documents have certification-level quality. Multivariate analysis is performed, using classification to estimate the probability of a document having quality given its characteristics. This approach tells us which predictors are important. Three types of full and reduced logistic regression models are built and evaluated. The first one includes every feature without any exclusion; the second one disregards the Utilization Review Accreditation Commission variable, as it is itself a quality indicator; and the third one excludes the variables related to the HONcode principles, which might also be indicators of quality. The reduced models were built to see whether they reach similar results with a smaller number of features.

Findings

The prediction models have high accuracy, even without including the characteristics of Health on the Net code principles in the models. The most informative prediction model considers characteristics that can be assessed automatically (e.g. split content, type, process of revision and place of treatment). It has an accuracy of 89 percent.

Originality/value

This paper proposes models that automatically predict whether a document has quality or not. Some of the used features (e.g. prevention, prognosis or treatment) have not yet been explicitly considered in this context. The findings of the present study may be used by search engines to promote high-quality documents. This will improve health information retrieval and may contribute to reduce the problems caused by inaccurate information.

Details

Online Information Review, vol. 42 no. 7
Type: Research Article
ISSN: 1468-4527

Keywords

Article
Publication date: 5 May 2021

Nathalie Hernandez, Nicolas Caradot, Hauke Sonnenberg, Pascale Rouault and Andrés Torres

Abstract

Purpose

The purpose of this paper was to explore and compare different deterioration models based on statistical and machine learning approaches. These models were chosen for their successful results in other case studies. The deterioration models were developed considering two scenarios: (i) only age as covariate (Scenario 1); and (ii) age together with other available sewer characteristics as covariates (Scenario 2). Both were evaluated against two different management objectives related to predicting the critical condition of sewers: at the network level and at the sewer level.

Design/methodology/approach

Six statistical and machine learning methods [logistic regression (LR), random forest (RF), multinomial logistic regression, ordinal logistic regression, linear discriminant analysis and support vector machine] were explored considering two kinds of predictor variables (independent variables in the model). The main purpose of these models was to predict the structural condition at the network and pipe levels, evaluated using deviation analysis and performance curve techniques. Further, the deterioration models were explored for two case studies, the sewer systems of Bogota and Medellin, chosen because both have their own assessment standards and low inspection rates.

Findings

The results indicate that, for both case studies, LR models show higher prediction capacity under Scenario 1 (considering only age) for management objectives at the network level, such as annual budget plans, while RF shows the highest success percentage for sewers in critical condition (sewer level) under Scenario 2.

Practical implications

There is no deterioration method whose predictions are adaptable to every management objective; it is important to explore different approaches to find which one can support a given sewer asset management objective for a specific case study.

Originality/value

The originality of this paper lies in the fact that no previous paper has compared the predictions of several statistical and machine learning-based deterioration models across case studies with different local assessment standards, in order to determine which model suits each case study and each management objective.
