Search results

1 – 10 of over 3000
Article
Publication date: 4 January 2013

Mahmoud O. Elish, Mojeeb AL‐Rahman AL‐Khiaty and Mohammad Alshayeb

The purpose of this paper is to investigate the relationships between some aspect‐oriented metrics and aspect fault proneness, content and fixing effort.

Abstract

Purpose

The purpose of this paper is to investigate the relationships between some aspect‐oriented metrics and aspect fault proneness, content and fixing effort.

Design/methodology/approach

An exploratory case study was conducted using an open-source aspect-oriented software system consisting of 76 aspects, and 13 aspect-oriented metrics were investigated that measure different structural properties of an aspect: size, coupling, cohesion, and inheritance. In addition, different prediction models for aspect fault proneness, content and fixing effort were built using different combinations of metric categories.
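
The correlation step of such a design can be sketched in a few lines. Below is a minimal Python sketch, not the authors' code, assuming a table with one row per aspect, columns for the structural metrics and a fault count; the file and column names are hypothetical:

```python
# Minimal sketch (not the authors' code): rank-correlating aspect-level
# metrics with fault counts; file and column names are assumptions.
import pandas as pd
from scipy.stats import spearmanr

aspects = pd.read_csv("aspect_metrics.csv")   # hypothetical input file
metric_cols = ["LOC", "CBO", "LCOM", "DIT"]   # hypothetical metric names

for m in metric_cols:
    rho, p = spearmanr(aspects[m], aspects["faults"])
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"{m}: rho={rho:.2f}, p={p:.3f} ({verdict})")
```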

Findings

The results obtained from this study indicate statistically significant correlations between most of the size metrics and aspect fault proneness, content and fixing effort. The cohesion metric was also found to be significantly correlated with these outcomes. Moreover, it was observed that the best accuracy in predicting aspect fault proneness, content and fixing effort can be achieved as a function of some size metrics.

Originality/value

Fault prediction helps software developers to focus their quality assurance activities and to allocate the needed resources for these activities more effectively and efficiently, thus improving software reliability. In the literature, some aspect-oriented metrics have been evaluated for aspect fault proneness prediction, but not for other fault-related prediction problems such as aspect fault content and fixing effort.

Details

International Journal of Quality & Reliability Management, vol. 30 no. 1
Type: Research Article
ISSN: 0265-671X

Keywords

Article
Publication date: 13 September 2019

Guru Prasad Bhandari, Ratneshwer Gupta and Satyanshu Kumar Upadhyay

Software fault prediction is an important concept that can be applied at an early stage of the software life cycle. Effective prediction of faults may improve the reliability and…

Abstract

Purpose

Software fault prediction is an important concept that can be applied at an early stage of the software life cycle. Effective prediction of faults may improve the reliability and testability of software systems. As service-oriented architecture (SOA)-based systems become increasingly complex, interactions between participating services become more frequent, and the component services may generate enormous reports and fault information. Although considerable research has focused on developing fault-proneness prediction models for service-oriented systems (SOS) using machine learning (ML) techniques, there has been little work on assessing how effective source code metrics are for fault prediction. The paper aims to discuss this issue.

Design/methodology/approach

In this paper, the authors have proposed a fault prediction framework to investigate fault prediction in SOS using metrics of web services. The effectiveness of the model has been explored by applying six ML techniques, namely, Naïve Bayes, Artificial Neural Networks (ANN), Adaptive Boosting (AdaBoost), decision tree, Random Forests and Support Vector Machine (SVM), along with five feature selection techniques to extract the essential metrics. The authors have explored accuracy, precision, recall, f-measure and the area under the receiver operating characteristic curve (AUC) as performance measures.
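
As an illustration of this experimental setup, the following Python sketch trains the six named classifiers and reports the stated performance measures. The data file, the binary label column and all hyperparameters are assumptions for illustration, not the authors' artifacts:

```python
# Minimal sketch of the comparison, assuming a CSV of per-service
# metrics with a binary "faulty" label (both are assumptions).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, roc_auc_score)

df = pd.read_csv("service_metrics.csv")           # hypothetical file
X, y = df.drop(columns=["faulty"]), df["faulty"]  # binary fault label
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

models = {
    "Naive Bayes": GaussianNB(),
    "ANN": MLPClassifier(max_iter=1000, random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(random_state=0),
    "SVM": SVC(probability=True, random_state=0),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: acc={accuracy_score(y_te, pred):.2f} "
          f"prec={precision_score(y_te, pred):.2f} "
          f"rec={recall_score(y_te, pred):.2f} "
          f"f1={f1_score(y_te, pred):.2f} auc={auc:.2f}")
```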

Findings

The experimental results show that the proposed system can classify the fault-proneness of web services, whether the service is faulty or non-faulty, as a binary-valued output automatically and effectively.

Research limitations/implications

One possible threat to internal validity in the study is the unknown effect of undiscovered faults. Specifically, the authors have injected possible faults into the classes using the Java C3.0 tool, and only fixed faults are injected into the classes. However, considering the Java C3.0 community of development, testing and use, the authors can generalize that the undiscovered faults should be few and have little impact on the results presented in this study, and that the results may be limited to the investigated complexity metrics and the ML techniques used.

Originality/value

In the literature, only a few studies directly concentrate on metrics-based fault-proneness prediction of SOS using ML techniques; most contributions address fault prediction for general systems rather than SOS. A majority of them have considered reliability, changeability and maintainability using logging/history-based approaches and mathematical modeling rather than fault prediction in SOS using metrics. The authors have therefore extended these contributions by applying supervised ML techniques to web services metrics and measuring their capability by employing fault injection methods.

Details

Data Technologies and Applications, vol. 53 no. 4
Type: Research Article
ISSN: 2514-9288

Keywords

Article
Publication date: 3 November 2022

Vinod Nistane

Rolling element bearings (REBs) are commonly used in rotating machinery such as pumps, motors, fans and other machinery. REBs deteriorate over their life cycle. To know the…

Abstract

Purpose

Rolling element bearings (REBs) are commonly used in rotating machinery such as pumps, motors, fans and other machinery. REBs deteriorate over their life cycle. To know the extent of deterioration at any time, this paper aims to present a prognostics approach based on integrating an optimized health indicator (OHI) with a machine learning algorithm.

Design/methodology/approach

The proposed optimum prediction model is used to evaluate the remaining useful life (RUL) of REBs. Initially, the raw signal data are preprocessed with a mother wavelet transform; after that, the primary fault features are extracted. These features are then processed with the random forest algorithm to improve their clarity. Based on the variable importance of the features, the best representation of the fault features is selected. The selected features are optimized by adjusting a weight vector using optimization techniques such as the genetic algorithm (GA), sequential quadratic optimization (SQO) and multiobjective optimization (MOO). New OHIs are determined and applied to train the network. Finally, optimum predictive models are developed by integrating the OHI with an artificial neural network (ANN) and K-means clustering (KMC) (i.e. OHI–GA–ANN, OHI–SQO–ANN, OHI–MOO–ANN, OHI–GA–KMC, OHI–SQO–KMC and OHI–MOO–KMC).
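
Two steps of this pipeline, feature selection by random-forest variable importance and weight-vector tuning of the health indicator, can be sketched as below. The synthetic data and the crude random-search stand-in for the GA/SQO/MOO optimizers are assumptions for illustration only:

```python
# Illustrative sketch only: synthetic features, and a random-search
# stand-in where the paper tunes the weight vector with GA/SQO/MOO.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
deg = np.linspace(0.0, 1.0, 500)             # assumed degradation level
X = rng.normal(size=(500, 12))               # 12 hypothetical fault features
X[:, 0] = deg + 0.05 * rng.normal(size=500)        # informative feature
X[:, 1] = deg ** 2 + 0.05 * rng.normal(size=500)   # informative feature
y = deg + 0.1 * rng.normal(size=500)

# Step 1: select features by random-forest variable importance.
rf = RandomForestRegressor(random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[-4:]
Xs = X[:, top]

# Step 2: tune the weight vector so the weighted feature sum tracks
# degradation (correlation used here as a simple fitness function).
def fitness(w):
    return np.corrcoef(Xs @ w, deg)[0, 1]

best_w = max((rng.random(4) for _ in range(2000)), key=fitness)
ohi = Xs @ best_w                             # optimized health indicator
```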

Findings

The performance of the optimum prediction models is recorded and compared with the actual values. Finally, based on the error term values, the best optimum prediction model is proposed for evaluating the RUL of REBs.

Originality/value

The proposed OHI–GA–KMC model is compared, in terms of error values, with previously published work. The error of the RUL predicted by the OHI–GA–KMC model is smaller, giving this method an advantage.

Article
Publication date: 11 August 2020

Bin Bai, Ze Li, Qiliang Wu, Ce Zhou and Junyi Zhang

This study aims to obtain the failure probability distributions of subsystems for an industrial robot and to filter its fault data, considering the complicated influencing factors of…

Abstract

Purpose

This study aims to obtain the failure probability distributions of subsystems for an industrial robot and to filter its fault data, considering the complicated factors influencing the failure rate of industrial robots and numerous epistemic uncertainties.

Design/methodology/approach

A fault data screening method and a failure rate prediction framework are proposed to investigate the industrial robot. First, the failure rate model of the industrial robot with different subsystems is established, and the surrogate model is used to fit the bathtub curve of the original industrial robot to obtain the early fault time point. Furthermore, the distribution parameters of the original industrial robot are solved by the maximum-likelihood function. Second, the influencing factors of the new industrial robot are quantified, and the epistemic uncertainties are refined using the interval analytic hierarchy process method to obtain the correction coefficient of the failure rate.
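
The maximum-likelihood step can be illustrated with a short sketch that fits a Weibull distribution to hypothetical times between failures and derives the failure rate and MTBF. The data values are invented, and the bathtub-curve surrogate and interval-AHP correction of the full framework are not shown:

```python
# Minimal sketch, not the paper's model: MLE fit of a Weibull to
# assumed times between failures, then failure rate and MTBF.
import numpy as np
from scipy.stats import weibull_min
from scipy.special import gamma

tbf = np.array([310., 450., 120., 980., 660., 540., 210., 870.])  # hours
shape, loc, scale = weibull_min.fit(tbf, floc=0)  # MLE, location fixed at 0

mtbf = scale * gamma(1 + 1 / shape)               # Weibull mean

def hazard(t):
    """Failure rate h(t) = f(t) / R(t) for the fitted Weibull."""
    return (shape / scale) * (t / scale) ** (shape - 1)

print(f"shape={shape:.2f}, scale={scale:.0f} h, MTBF={mtbf:.0f} h")
print(f"h(MTBF)={hazard(mtbf):.2e} per hour")
```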

Findings

The failure rate and mean time between failures (MTBF) of the predicted new industrial robot are obtained, and its MTBF is improved compared with that of the original industrial robot.

Research limitations/implications

Failure data of industrial robots are the basis of this prediction method, so it cannot be used for new or similar products, which is the limitation of this method. In addition, because the method relies on the series characteristics of the industrial robot, it is not suitable for parallel or series-parallel systems.

Practical implications

This investigation has important guiding significance for the maintenance strategy and spare parts quantity of industrial robots. In addition, this study is of great help to engineers and of great significance for increasing the service life and reliability of industrial robots.

Social implications

This investigation can improve MTBF and extend the service life of industrial robots; furthermore, this method can be applied to predict other mechanical products.

Originality/value

This method can complete the process of fitting, screening and refitting the fault data of the industrial robot, which provides a theoretical basis for reliability growth of the predicted new industrial robot. This investigation has significance for the maintenance strategy and spare parts quantity of the industrial robot. Moreover, this method can also be applied to the prediction of other mechanical products.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 6
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 1 July 2014

Bratislav Tasic, Jos J. Dohmen, E. Jan W. ter Maten, Theo G.J. Beelen, Wil H.A. Schilders, Alex de Vries and Maikel van Beurden

Imperfections in manufacturing processes may cause unwanted connections (faults) that are added to the nominal, “golden”, design of an electronic circuit. By fault simulation one…

Abstract

Purpose

Imperfections in manufacturing processes may cause unwanted connections (faults) that are added to the nominal, “golden”, design of an electronic circuit. By fault simulation one simulates all such fault situations. Normally this leads to a large list of simulations in which, for each defect, a steady-state (direct current (DC)) solution is determined, followed by a transient simulation. The purpose of this paper is to improve the robustness and the efficiency of these simulations.

Design/methodology/approach

Determining the DC solution can be very hard. For this, the authors present an adaptive time-domain source stepping procedure that can deal with controlled sources. The method can easily be combined with existing pseudo-transient procedures and is robust and efficient. In the subsequent transient simulation, the solution of a fault is compared to a golden, fault-free, solution. A strategy is developed to efficiently simulate the faulty solutions until their moment of detection.
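
The source-stepping idea, ramping the sources up gradually so that each nonlinear DC solve starts close to a known solution, can be illustrated on a toy one-node diode circuit. This fixed-step Python sketch shows the principle only; it is not the authors' adaptive, controlled-source-aware procedure:

```python
# Toy illustration of source stepping (not the paper's adaptive scheme):
# ramp the source by a factor lam from ~0 to 1, warm-starting each
# Newton DC solve from the previous converged solution.
import numpy as np

E, R, Is, Vt = 5.0, 1e3, 1e-12, 0.0259   # source, resistor, diode params

def newton_dc(lam, v0):
    """Solve (lam*E - v)/R = Is*(exp(v/Vt) - 1) for the node voltage v."""
    v = v0
    for _ in range(100):
        f = (lam * E - v) / R - Is * (np.exp(v / Vt) - 1.0)
        df = -1.0 / R - (Is / Vt) * np.exp(v / Vt)
        step = f / df
        v -= step
        if abs(step) < 1e-12:
            return v
    raise RuntimeError("Newton did not converge")

v = 0.0
for lam in np.linspace(0.1, 1.0, 10):    # fixed steps; the paper adapts
    v = newton_dc(lam, v)                # warm start from previous step
print(f"DC operating point: v = {v:.3f} V")
```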

Findings

The paper fully exploits the hierarchical structure of the circuit in the simulation process to bypass parts of the circuit that appear to be unaffected by the fault. Accurate prediction and efficient solution procedures lead to fast fault simulation.

Originality/value

The fast fault simulation helps to store a database with detectable deviations for each fault. If such a detectable output “matches” a result of a product that has been returned because of malfunctioning, it helps to identify the subcircuit that may contain the real fault. One aims to detect as many candidate faults as possible. Because of the many options, the simulations must be very efficient.

Details

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, vol. 33 no. 4
Type: Research Article
ISSN: 0332-1649

Keywords

Article
Publication date: 21 December 2023

Majid Rahi, Ali Ebrahimnejad and Homayun Motameni

Taking into consideration the current human need for agricultural produce such as rice that requires water for growth, the optimal consumption of this valuable liquid is…

Abstract

Purpose

Taking into consideration the current human need for agricultural produce such as rice, which requires water for growth, the optimal consumption of this valuable liquid is important. Unfortunately, the traditional use of water by humans for agricultural purposes contradicts the concept of optimal consumption. Therefore, designing and implementing a mechanized irrigation system is of the highest importance. Such a system includes hardware equipment, such as liquid altimeter sensors, valves and pumps, for which failure is an integral part, causing faults in the system. Naturally, these faults occur at probable time intervals, and a probability function with exponential distribution is used to simulate these intervals. Thus, before the implementation of such high-cost systems, their evaluation during the design phase is essential.

Design/methodology/approach

The proposed approach included two main steps: offline and online. The offline phase included the simulation of the studied system (i.e. the irrigation system of paddy fields) and the acquisition of a data set for training machine learning algorithms such as decision trees to detect, locate (classification) and evaluate faults. In the online phase, C5.0 decision trees trained in the offline phase were used on a stream of data generated by the system.
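
A minimal sketch of this offline/online split follows. Scikit-learn's CART decision tree stands in for the C5.0 trees used in the paper (sklearn does not provide C5.0), and the training file and column names are assumptions:

```python
# Sketch of the offline/online split, assuming labeled simulation data;
# sklearn's CART tree is a stand-in for the paper's C5.0 trees.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Offline: train on data simulated from the irrigation-system model.
train = pd.read_csv("simulated_runs.csv")        # hypothetical file
X, y = train.drop(columns=["fault_class"]), train["fault_class"]
tree = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X, y)

# Online: classify each incoming reading from the running system.
def on_new_reading(reading: pd.DataFrame) -> str:
    """Return the detected fault class ('none', or a fault location)."""
    return tree.predict(reading)[0]
```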

Findings

The proposed approach is a comprehensive online component-oriented method, which is a combination of supervised machine learning methods to investigate system faults. Each of these methods is considered a component determined by the dimensions and complexity of the case study (to discover, classify and evaluate fault tolerance). These components are placed together in the form of a process framework so that the appropriate method for each component is obtained based on comparison with other machine learning methods. As a result, depending on the conditions under study, the most efficient method is selected in the components. Before the system implementation phase, its reliability is checked by evaluating the predicted faults (in the system design phase). Therefore, this approach avoids the construction of a high-risk system. Compared to existing methods, the proposed approach is more comprehensive and has greater flexibility.

Research limitations/implications

As the dimensions of the problem expand, the model verification space grows exponentially when automata are used.

Originality/value

Unlike the existing methods that only examine one or two aspects of fault analysis such as fault detection, classification and fault-tolerance evaluation, this paper proposes a comprehensive process-oriented approach that investigates all three aspects of fault analysis concurrently.

Details

International Journal of Intelligent Computing and Cybernetics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 27 July 2021

Avinash Kumar Shrivastava and Ruchi Sharma

The purpose of this paper is to develop a new software reliability growth model considering different fault distribution functions before and after the change point.

Abstract

Purpose

The purpose of this paper is to develop a new software reliability growth model considering different fault distribution functions before and after the change point.

Design/methodology/approach

In this paper, the authors have developed a framework that incorporates a change point in a hybrid software reliability growth model by considering different distribution functions before and after the change point.
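
One illustrative way to write such a hybrid mean value function, consistent with the abstract but not necessarily the paper's exact formulation, is to let the expected number of faults detected by time $t$ follow a distribution function $F_1$ up to the change point $\tau$ and a different $F_2$ afterwards:

```latex
m(t) =
\begin{cases}
  a\,F_1(t), & 0 \le t \le \tau,\\[4pt]
  a\,F_1(\tau) + a\bigl(1 - F_1(\tau)\bigr)\,
    \dfrac{F_2(t) - F_2(\tau)}{1 - F_2(\tau)}, & t > \tau,
\end{cases}
```

where $a$ is the expected total fault content and $\tau$ is the change point; $F_1$ and $F_2$ might be, for instance, exponential and Weibull distribution functions, respectively.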

Findings

Numerical illustration suggests that the proposed model gives better results in comparison to the existing models.

Originality/value

The existing literature on change point-based software reliability growth models assumes that the fault correction trend before and after the change is governed by the same distribution. This seems impractical because, after a change in the testing environment, fault detection or correction may not follow the same trend; hence, the assumption of the same distribution function may fail to predict the potential number of faults. The proposed modelling framework assumes different distributions before and after the change point in developing a software reliability growth model.

Details

International Journal of Quality & Reliability Management, vol. 39 no. 5
Type: Research Article
ISSN: 0265-671X

Keywords

Article
Publication date: 31 May 2022

Qiang Li, Sifeng Liu and Changhai Lin

The purpose of this paper is to solve the problem of quality prediction in the equipment production process and provide a method to deal with abnormal data and solve the problem…

Abstract

Purpose

The purpose of this paper is to solve the problem of quality prediction in the equipment production process and provide a method to deal with abnormal data and solve the problem of data fluctuation.

Design/methodology/approach

The analytic hierarchy process-process failure mode and effect analysis (AHP-PFMEA) structure tree is established based on the analytic hierarchy process (AHP) and process failure mode and effect analysis (PFMEA). Through the failure mode analysis table of the production process, the weights of the failure processes and stations are determined, and a ranking of risk failure stations is obtained so as to find the serious failure processes and stations. The spectrum analysis method is used to identify the fault data and judge the “abnormal” values in the fault data. Based on an analysis of their impact, an “offset operator” is designed to eliminate it. A new moving average denoise operator is constructed to eliminate the “noise” in the original random fluctuation data. Then, a DGM(1,1) model is constructed to predict the production process quality.
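
The denoise-and-predict tail of this pipeline can be sketched compactly. The implementation below is an assumed illustration (a simple moving average followed by a least-squares DGM(1,1) forecast), not the paper's operators or data:

```python
# Assumed illustration, not the paper's code: moving-average "denoise
# operator" followed by a DGM(1,1) grey forecast of the quality series.
import numpy as np

def denoise(x, w=3):
    """Simple moving-average smoothing of a fluctuating series."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

def dgm11_forecast(x0, steps=3):
    """Discrete grey model x1(k+1) = b1*x1(k) + b2 on the AGO series."""
    x1 = np.cumsum(x0)                               # accumulated series
    A = np.column_stack([x1[:-1], np.ones(len(x1) - 1)])
    b1, b2 = np.linalg.lstsq(A, x1[1:], rcond=None)[0]
    x1_hat = [x1[0]]
    for _ in range(len(x0) - 1 + steps):
        x1_hat.append(b1 * x1_hat[-1] + b2)
    return np.diff(x1_hat)[-steps:]                  # back to raw scale

series = np.array([10.2, 10.8, 9.9, 11.5, 11.1, 12.0, 12.4, 13.1])
print(dgm11_forecast(denoise(series)))               # next 3 values
```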

Findings

It is discovered that the “offset operator” can effectively eliminate the impact of specific shocks, that the moving average denoise operator can eliminate the “noise” in the original random fluctuation data, and that the practical application of the presented model is very effective for quality prediction in the equipment production process.

Practical implications

The proposed approach can provide good guidance and a reference for enterprises to strengthen onsite equipment management and product quality management. The application to a real-world case showed that the grey discrete DGM(1,1) model is very effective for quality prediction in the equipment production process.

Originality/value

The offset operators, including an offset operator for a multiplicative effect and an offset operator for an additive effect, are proposed to eliminate the impact of specific shocks, and a new moving average denoise operator is constructed to eliminate the “noise” in the original random fluctuation data. Both the concepts of the offset operator and the denoise operator, with their calculation formulas, are first proposed in this paper.

Details

Grey Systems: Theory and Application, vol. 13 no. 1
Type: Research Article
ISSN: 2043-9377

Keywords

Article
Publication date: 19 April 2022

D. Divya, Bhasi Marath and M.B. Santosh Kumar

This study aims to bring awareness to the development of fault detection systems using the data collected from sensor devices/physical devices of various systems for predictive…

Abstract

Purpose

This study aims to bring awareness to the development of fault detection systems using the data collected from sensor devices/physical devices of various systems for predictive maintenance. Opportunities and challenges in developing anomaly detection algorithms for predictive maintenance, as well as unexplored areas in this context, are also discussed.

Design/methodology/approach

For conducting a systematic review of the state-of-the-art algorithms in fault detection for predictive maintenance, review papers from the years 2017–2021 available in the Scopus database were selected. A total of 93 papers were chosen. They are classified under electrical and electronics, civil and constructions, automobile, production and mechanical. In addition, the paper provides a detailed discussion of various fault-detection algorithms that can be categorised under supervised, semi-supervised and unsupervised learning and traditional statistical methods, along with an analysis of the various forms of anomalies prevalent across different sectors of industry.

Findings

Based on the literature reviewed, seven propositions with a focus on the following areas are presented: need for a uniform framework while scaling the number of sensors; the need for identification of erroneous parameters; why there is a need for new algorithms based on unsupervised and semi-supervised learning; the importance of ensemble learning and data fusion algorithms; the necessity of automatic fault diagnostic systems; concerns about multiple fault detection; and cost-effective fault detection. These propositions shed light on the unsolved issues of predictive maintenance using fault detection algorithms. A novel architecture based on the methodologies and propositions gives more clarity for the reader to further explore in this area.

Originality/value

Papers for this study were selected from the Scopus database for predictive maintenance in the field of fault detection. Review papers published in this area deal only with methods used to detect anomalies, whereas this paper attempts to establish a link between different industrial domains and the methods used in each industry that uses fault detection for predictive maintenance.

Details

Journal of Quality in Maintenance Engineering, vol. 29 no. 2
Type: Research Article
ISSN: 1355-2511

Keywords

Article
Publication date: 17 February 2021

Anusha R. Pai, Gopalkrishna Joshi and Suraj Rane

This paper is focused on studying the current state of research involving the four dimensions of defect management strategy, i.e. software defect analysis, software quality…

Abstract

Purpose

This paper is focused on studying the current state of research involving the four dimensions of defect management strategy, i.e. software defect analysis, software quality, software reliability and software development cost/effort.

Design/methodology/approach

The methodology developed by Kitchenham (2007) is followed in planning, conducting and reporting the systematic review. Out of 625 research papers, nearly 100 primary studies related to the research domain are considered. The study attempted to find the various techniques, metrics, data sets and performance validation measures used by researchers.

Findings

The study revealed the need for integrating the four dimensions of defect management and studying its effect on software performance. This integrated approach can lead to optimal use of resources in software development process.

Research limitations/implications

There are many dimensions in defect management studies; the authors have considered only the vital few, based on the practical experience of software engineers. Most of the research work cited in this review used public data repositories to validate its methodology, and there is a need to apply these research methods to real datasets from industry to realize the actual potential of these techniques.

Originality/value

The authors believe that this paper provides a comprehensive insight into the various aspects of state-of-the-art research in software defect management. They feel that this is the only research article that delves into the four facets, namely, software defect analysis, software quality, software reliability and software development cost/effort.

Details

International Journal of Quality & Reliability Management, vol. 38 no. 10
Type: Research Article
ISSN: 0265-671X

Keywords
