Search results

1 – 10 of 66
Article
Publication date: 1 October 2000

Yi‐Ping Chang

Abstract

Non‐homogeneous Poisson process (NHPP) models, stochastic models frequently employed in reliability engineering, have been used successfully in reliability studies of software systems. Software reliability based on NHPP models was proposed by Goel and Okumoto. In general, software reliability increases as software errors are corrected, which suggests the hypothesis that extending the software reliability testing time should yield a higher reliability. Nevertheless, this scheme turns out to be inconsistent for some of the NHPP models used in the analysis: as often happens in practice, when the test procedure is intentionally terminated before the end of the required testing time span, the reliability outcome can differ from its appropriate estimate. An “above average software reliability” is proposed in this paper to resolve this inconsistency: under the “average software reliability” measure, a higher software reliability is achieved as long as the testing time increases. An optimal software release policy is also proposed to address the trade‐off between the cost of software development and the improvement of software reliability.
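
For background, the Goel‐Okumoto model referred to above has a standard closed form; the following is a minimal sketch (with purely illustrative parameter values, not taken from the paper) of how the conditional reliability over a mission time x after testing to time t follows from the mean value function m(t) = a(1 − e^(−bt)):

    import numpy as np

    def mean_value(t, a, b):
        """Goel-Okumoto mean value function: expected cumulative faults detected by time t."""
        return a * (1.0 - np.exp(-b * t))

    def reliability(x, t, a, b):
        """Probability of no failure in (t, t + x] for an NHPP: exp(-(m(t + x) - m(t)))."""
        return np.exp(-(mean_value(t + x, a, b) - mean_value(t, a, b)))

    # Hypothetical parameters: a = expected total number of faults, b = fault detection rate.
    a, b = 100.0, 0.02
    for t in (50.0, 100.0, 200.0):
        print(f"testing time t = {t:5.1f}  ->  R(x = 10 | t) = {reliability(10.0, t, a, b):.4f}")

Here a longer testing time increases R(x | t), which is the intuition the paper examines and which, for some NHPP models, is shown to break down when testing is stopped early.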

Details

International Journal of Quality & Reliability Management, vol. 17 no. 7
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 24 May 2011

Satadal Ghosh and Sujit K. Majumdar

Abstract

Purpose

The purpose of this paper is to provide maintenance personnel with a methodology for modeling and estimating the reliability of critical machine systems using historical data on their inter‐failure times.

Design/methodology/approach

The failure patterns of five different machine systems were modeled with the NHPP log‐linear process (NHPP‐LLP) and the homogeneous Poisson process (HPP), both belonging to the class of stochastic point processes, in order to predict their reliability in future time frames. Besides the classical approach, a Bayesian approach was also used, involving Jeffreys's invariant non‐informative independent priors, to derive the posterior densities of the model parameters of the NHPP‐LLP and HPP with a view to estimating the reliability of the machine systems in future time intervals.
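
As a rough illustration of the classical (non‐Bayesian) side of this methodology, the sketch below fits the NHPP log‐linear process intensity lambda(t) = exp(b0 + b1·t) by maximum likelihood to hypothetical failure times; it is a generic sketch under that assumed intensity, not the authors' code:

    import numpy as np
    from scipy.optimize import minimize

    def neg_log_lik(params, times, T):
        """Negative log-likelihood of an NHPP with log-linear intensity
        lambda(t) = exp(b0 + b1 * t), observed over (0, T]."""
        b0, b1 = params
        if abs(b1) < 1e-12:                      # limit as b1 -> 0 (constant intensity)
            integral = np.exp(b0) * T
        else:
            integral = np.exp(b0) * np.expm1(b1 * T) / b1
        return -(np.sum(b0 + b1 * times) - integral)

    # Hypothetical failure times (hours) of one machine system, observed up to T = 850 h.
    times = np.array([120.0, 310.0, 420.0, 500.0, 640.0, 700.0, 760.0, 800.0])
    T = 850.0
    fit = minimize(neg_log_lik, x0=np.array([-5.0, 0.0]), args=(times, T), method="Nelder-Mead")
    b0, b1 = fit.x
    print(f"b0 = {b0:.3f}, b1 = {b1:.5f}  (b1 > 0 indicates a deteriorating trend)")

The Bayesian variant with Jeffreys's non‐informative priors replaces this point estimate with posterior densities for (b0, b1), from which reliability and ROCOF exceedance probabilities in future time frames are obtained.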

Findings

For at least three machine systems, the Bayesian approach gave lower reliability estimates and a larger expected number of failures than the classical approach. Moreover, the Bayesian estimates of the probability that “ROCOF (rate of occurrence of failures) would exceed its upper threshold limit” in future time frames were uniformly higher for these machine systems than those obtained with the classical approach.

Practical implications

This study indicated that the Bayesian approach gives more realistic estimates of the reliability (in future time frames) of machine systems with dependent inter‐failure times. Such information would help the maintenance team decide on an appropriate maintenance strategy.

Originality/value

With the Bayesian approach, the posterior densities of the model parameters were found analytically by considering Jeffreys's invariant non‐informative independent priors. The case study should motivate maintenance teams to model the failure patterns of repairable systems using historical data on inter‐failure times and to estimate their reliability in future time frames.

Details

International Journal of Quality & Reliability Management, vol. 28 no. 5
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 25 October 2019

Nicolas La Roche-Carrier, Guyh Dituba Ngoma, Yasar Kocaefe and Fouad Erchiqui

Abstract

Purpose

Reliability plays an important role in maintenance improvement, and an understanding of its concepts is essential for predicting the type of maintenance required according to the equipment state. To this end, a computational tool was developed and programmed with VBA in Excel® for reliability and failure analysis in a mining context. The paper aims to discuss these issues.

Design/methodology/approach

The developed approach uses the modeling of stochastic processes, such as the renewal process and the non-homogeneous Poisson process, as well as a less conventional method, the Bayesian approach, considering a Jeffreys non-informative prior. The resolution gives the best associated model, the parameter estimates, the mean time between failures and the reliability estimate. This approach is validated with a reliability analysis of inter-failure times from underground rock bolter subsystems over a two-year period.
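
The renewal-process side of such an analysis reduces to fitting a lifetime distribution to the inter-failure times; a minimal sketch (hypothetical data, not from the study) of a two-parameter Weibull fit with the resulting MTBF and the reliability after 50 operating hours:

    import math
    import numpy as np
    from scipy import stats

    # Hypothetical inter-failure times (hours) for one rock bolter subsystem.
    tbf = np.array([12.0, 35.0, 48.0, 20.0, 61.0, 15.0, 90.0, 40.0, 27.0, 55.0])

    # Two-parameter Weibull fit (location fixed at zero).
    shape, loc, scale = stats.weibull_min.fit(tbf, floc=0)

    mtbf = scale * math.gamma(1.0 + 1.0 / shape)                      # mean time between failures
    rel_50h = stats.weibull_min.sf(50.0, shape, loc=0, scale=scale)   # reliability at 50 h
    print(f"shape = {shape:.2f}, scale = {scale:.1f} h, MTBF = {mtbf:.1f} h, R(50 h) = {rel_50h:.3f}")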

Findings

Results show that the Weibull and lognormal probability distributions fit the inter-failure times of most subsystems. The study revealed that the bolting head, the rock drill, the screen handler, the electric/electronic system, the hydraulic system, the drilling feeder and the structural components account for the highest repair frequency. The hydraulic and electric/electronic subsystems show the lowest reliability after 50 operating hours.

Originality/value

For the first time, this case study provides practical failure and reliability information for rock bolter subsystems based on real operation data. The paper is useful for the comparative evaluation of rock bolters, detecting the weakest elements and understanding the effect of failure patterns in the individual subsystems on overall machine performance.

Details

International Journal of Quality & Reliability Management, vol. 37 no. 2
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 5 June 2007

Olivier Basile, Pierre Dehombreux and Fouad Riane

Abstract

Purpose

Reliability models are generally estimated from small samples. This paper seeks to calculate the uncertainty affecting reliability parameters as a function of the sample size.

Design/methodology/approach

The confidence intervals are calculated on the basis of Monte Carlo simulations and using the variance‐covariance matrix; the two methods are compared.
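
A minimal sketch of the Monte Carlo route (here a parametric bootstrap under an assumed two-parameter Weibull model and a hypothetical sample of 15 failure times; the paper's own models and sample sizes may differ):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Hypothetical small sample of failure times (hours).
    sample = stats.weibull_min.rvs(1.8, scale=500.0, size=15, random_state=rng)
    shape_hat, _, scale_hat = stats.weibull_min.fit(sample, floc=0)

    # Re-simulate samples of the same size from the fitted model and refit,
    # to quantify the parameter uncertainty attributable to the sample size.
    boot_shapes = []
    for _ in range(1000):
        resample = stats.weibull_min.rvs(shape_hat, scale=scale_hat,
                                         size=len(sample), random_state=rng)
        boot_shapes.append(stats.weibull_min.fit(resample, floc=0)[0])

    lo, hi = np.percentile(boot_shapes, [2.5, 97.5])
    print(f"shape estimate = {shape_hat:.2f}, 95% Monte Carlo interval = [{lo:.2f}, {hi:.2f}]")

The variance-covariance matrix route approximates the same uncertainty analytically; the paper compares the two.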

Findings

Numerical results for the estimation of uncertainty have been obtained for standard reliability models, non‐homogeneous Poisson process and generalized renewal process.

Originality/value

For the generalized renewal process, the article points out the influence of the age correction factor on the number of repairs authorized and on the uncertainty. The surface plot of the likelihood function with respect to the parameters is a convenient tool for interpreting the uncertainty.
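
For orientation, one common formulation of the age correction factor in a generalized renewal process is the Kijima type I virtual age, sketched below with hypothetical values; the article's exact formulation may differ:

    # Kijima type I virtual age: V_i = V_{i-1} + q * X_i, where X_i is the i-th
    # inter-failure time and q in [0, 1] is the age correction (restoration) factor.
    # q = 0 gives "as good as new" (renewal process); q = 1 gives "as bad as old" (NHPP-like).
    def virtual_ages(inter_failure_times, q):
        v, ages = 0.0, []
        for x in inter_failure_times:
            v += q * x
            ages.append(v)
        return ages

    print(virtual_ages([100.0, 80.0, 60.0, 50.0], q=0.3))   # hypothetical data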

Details

Journal of Quality in Maintenance Engineering, vol. 13 no. 2
Type: Research Article
ISSN: 1355-2511

Article
Publication date: 1 April 1997

Nalina Suresh and A.J.G. Babu

Abstract

Considers an extension of the non‐homogeneous Poisson process to describe the software failure process more appropriately. This extension introduces a new structure for the intensity function of the non‐homogeneous Poisson process. Using this intensity function, develops a mathematical model that determines the optimal allocation of resources to be spent during development so as to maximize reliability within a budget.
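
Purely as an illustration of the kind of budget-constrained reliability maximization described (the paper's intensity function and allocation model are its own), the sketch below assumes a hypothetical exponential fault-detection structure per module and solves the allocation numerically:

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical modules: a_i = expected residual faults, b_i = fault-removal
    # efficiency per unit of testing resource x_i.
    a = np.array([30.0, 20.0, 50.0])
    b = np.array([0.05, 0.08, 0.03])
    budget = 100.0

    def expected_remaining_faults(x):
        # Maximizing reliability exp(-sum_i a_i * exp(-b_i * x_i)) is equivalent
        # to minimizing the expected number of faults remaining after testing.
        return np.sum(a * np.exp(-b * x))

    cons = ({"type": "eq", "fun": lambda x: np.sum(x) - budget},)
    res = minimize(expected_remaining_faults, x0=np.full(3, budget / 3),
                   bounds=[(0.0, budget)] * 3, constraints=cons, method="SLSQP")
    print("allocation:", np.round(res.x, 1), " expected remaining faults:", round(res.fun, 2))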

Details

International Journal of Quality & Reliability Management, vol. 14 no. 3
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 4 September 2017

Miguel Angel Navas, Carlos Sancho and Jose Carpio

Abstract

Purpose

The purpose of this paper is to present the results of the application of various models to estimate the reliability in railway repairable systems.

Design/methodology/approach

The methodology proposed by the International Electrotechnical Commission (IEC), using homogeneous Poisson process (HPP) and non-homogeneous Poisson process (NHPP) models, is adopted. Additionally, renewal process (RP) models, not covered by the IEC, are used, with a complementary analysis to characterize the failure intensity thereby obtained.
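
One standard trend test used to decide between the trend models (HPP/NHPP) and a trend-free RP description is the Laplace test; a minimal sketch with hypothetical, time-truncated failure data (not the paper's data, and not necessarily the exact test prescribed by the IEC):

    import numpy as np

    def laplace_trend_statistic(failure_times, T):
        """Laplace trend test for an observation window (0, T].
        U >> 0 suggests deterioration (increasing ROCOF), U << 0 improvement,
        and |U| small is consistent with a trend-free (HPP or renewal) model."""
        t = np.asarray(failure_times, dtype=float)
        n = len(t)
        return (t.mean() - T / 2.0) / (T * np.sqrt(1.0 / (12.0 * n)))

    # Hypothetical failure times (days) of one traction system over ten years (~3,650 days).
    times = [200, 800, 1500, 2100, 2600, 3000, 3300, 3500]
    print(f"U = {laplace_trend_statistic(times, T=3650.0):.2f}")   # compare with +/-1.96 at the 5% level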

Findings

The findings show the impact of recurrent failures on the times between failures (TBF), leading to rejection of the HPP and NHPP models. For systems not exhibiting a trend, RP models are presented, with TBF described by three-parameter lognormal or generalized logistic distributions, together with a methodology for generating clusters.

Research limitations/implications

For those systems that do not exhibit a trend, TBF is assumed to be independent and identically distributed (i.i.d.), and therefore, RP models of “perfect repair” have to be used.

Practical implications

Maintenance managers must refocus their efforts to study the reliability of individual repairable systems and their recurrent failures, instead of collections, in order to customize maintenance to the needs of each system.

Originality/value

The stochastic process models were applied for the first time to electric traction systems in 23 trains and to 40 escalators with ten years of operating data in a railway company. A practical application of the IEC models is presented for the first time.

Details

International Journal of Quality & Reliability Management, vol. 34 no. 8
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 12 January 2010

N. Ahmad, M.G.M. Khan and L.S. Rafi

Abstract

Purpose

The purpose of this paper is to investigate how to incorporate the exponentiated Weibull (EW) testing‐effort function (TEF) into inflection S‐shaped software reliability growth models (SRGMs) based on non‐homogeneous Poisson process (NHPP). The aim is also to present a more flexible SRGM with imperfect debugging.

Design/methodology/approach

This paper reviews the EW TEFs and discusses an inflection S‐shaped SRGM with EW testing‐effort to obtain a better description of the software fault detection phenomenon. The SRGM parameters are estimated by weighted least squares estimation (WLSE) and maximum‐likelihood estimation (MLE) methods. Furthermore, the proposed models are also discussed under an imperfect debugging environment.
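
The exact EW TEF parameterization is defined in the paper; purely as a sketch, assuming a commonly used exponentiated-Weibull-type form W(t) = alpha·[1 − exp(−beta·t^gamma)]^theta and hypothetical weekly testing-effort data, a least-squares fit might look like:

    import numpy as np
    from scipy.optimize import curve_fit

    def ew_effort(t, alpha, beta, gamma, theta):
        """Assumed exponentiated-Weibull-type cumulative testing-effort curve."""
        return alpha * (1.0 - np.exp(-beta * t**gamma))**theta

    # Hypothetical cumulative testing effort (CPU hours) observed weekly.
    weeks = np.arange(1, 13, dtype=float)
    effort = np.array([5, 14, 26, 40, 52, 63, 71, 78, 83, 86, 88, 90], dtype=float)

    p0 = [100.0, 0.05, 1.2, 1.0]                               # rough starting values
    params, _ = curve_fit(ew_effort, weeks, effort, p0=p0, bounds=(0.0, np.inf), max_nfev=20000)
    print("alpha, beta, gamma, theta =", np.round(params, 3))

In the paper, the SRGM parameters themselves are estimated by WLSE and MLE rather than by an unweighted fit like this one.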

Findings

Experimental results from three actual data applications are analyzed and compared with the other existing models. The findings reveal that the proposed SRGM has better performance and prediction capability. Results also confirm that the EW TEF is suitable for incorporating into inflection S‐shaped NHPP growth models.

Research limitations/implications

This paper presents the WLSE results with equal weight. Future research may be carried out for unequal weights.

Practical implications

Software reliability modeling and estimation are a major concern in the software development process, particularly during the software testing phase, as unreliable software can cause a failure of the computer system that may be hazardous. The results obtained in this paper may assist software engineers, scientists, and managers in improving the software testing process.

Originality/value

The proposed SRGM has a flexible structure and may capture features of both exponential and S‐shaped NHPP growth models for failure phenomenon.

Details

International Journal of Quality & Reliability Management, vol. 27 no. 1
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 2 November 2021

Rama Rao Narvaneni and K. Suresh Babu

Abstract

Purpose

Software reliability growth models (SRGMs) are used to assess and predict reliability of a software system. Many of these models are effective in predicting future failures unless the software evolves.

Design/methodology/approach

The objective of this paper is to identify the best path for rectifying the BFT (bug fixing time) and BFR (bug fixing rate). Moreover, a flexible software project has been examined while materializing the BFR. To enhance the BFR, the effort of tracing a bug is lessened by virtue of the version tag in every software deliverable component. The release time of a software build is optimized using mathematical optimization mechanisms such as ‘software reliability growth’ and ‘non-homogeneous Poisson process’ methods.
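
Purely as a generic illustration of NHPP-based release-time optimization (not the authors' model), the sketch below minimizes a classic expected-cost expression under the Goel-Okumoto mean value function, with all parameters hypothetical:

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Hypothetical Goel-Okumoto parameters and cost coefficients.
    a, b = 120.0, 0.03            # expected total faults, fault detection rate
    c1, c2, c3 = 1.0, 8.0, 0.05   # cost per fault fixed in testing, per field fault, per unit test time
    T_lc = 2000.0                 # software life-cycle length

    def m(t):
        return a * (1.0 - np.exp(-b * t))

    def expected_cost(T):
        return c1 * m(T) + c2 * (m(T_lc) - m(T)) + c3 * T

    res = minimize_scalar(expected_cost, bounds=(0.0, T_lc), method="bounded")
    print(f"cost-optimal release time ~ {res.x:.0f} time units, expected cost {res.fun:.1f}")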

Findings

In the current market scenario, this is most essential. The automation and variation of the build are also addressed in this contribution. The software developed here is free from bugs or defects, and the quality of the software is enhanced by increasing the BFR.

Originality/value

In the current market scenario, this is most essential. The automation and variation of the build are also addressed in this contribution. The software developed here is free from bugs or defects, and the quality of the software is enhanced by increasing the BFR.

Details

International Journal of Intelligent Unmanned Systems, vol. 10 no. 1
Type: Research Article
ISSN: 2049-6427

Article
Publication date: 12 March 2018

Momotaz Begum and Tadashi Dohi

Abstract

Purpose

The purpose of this paper is to present a novel method to estimate the optimal software testing time which minimizes the relevant expected software cost via a refined neural network approach with the grouped data, where the multi-stage look ahead prediction is carried out with a simple three-layer perceptron neural network with multiple outputs.

Design/methodology/approach

To analyze software fault count data that follow a Poisson process with an unknown mean value function, the authors transform the underlying Poisson count data to Gaussian data by means of one of three data transformation methods, and predict the cost-optimal software testing time via a neural network.
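
As a rough sketch of that pipeline (one common variance-stabilizing transform and a small multi-output perceptron; the paper's three transforms and network configuration may differ, and the data here are hypothetical):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Hypothetical weekly cumulative fault counts (grouped data).
    cum_faults = np.array([3, 8, 15, 24, 31, 38, 43, 47, 50, 52, 54, 55, 56], dtype=float)

    # Anscombe-type transform, one common way to make Poisson counts approximately Gaussian.
    z = 2.0 * np.sqrt(cum_faults + 3.0 / 8.0)

    # Sliding windows of past values -> next two values (two-step look-ahead prediction)
    # for a simple three-layer perceptron with multiple outputs.
    window, horizon = 4, 2
    X = np.array([z[i:i + window] for i in range(len(z) - window - horizon + 1)])
    Y = np.array([z[i + window:i + window + horizon] for i in range(len(z) - window - horizon + 1)])

    mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, Y)
    pred_z = mlp.predict(z[-window:].reshape(1, -1))[0]
    pred_counts = (pred_z / 2.0) ** 2 - 3.0 / 8.0        # invert the transform
    print("predicted cumulative faults, next two periods:", np.round(pred_counts, 1))

The predicted fault counts then feed the expected-cost criterion from which the optimal testing time is chosen.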

Findings

In numerical examples with two actual software fault count data sets, the authors compare the neural network approach with common non-homogeneous Poisson process-based software reliability growth models. It is shown that the proposed method can provide more accurate and more flexible decision making than the common stochastic modeling approach.

Originality/value

It is shown that the neural network approach can be used to predict the optimal software testing time more accurately.

Details

Journal of Quality in Maintenance Engineering, vol. 24 no. 1
Type: Research Article
ISSN: 1355-2511

Article
Publication date: 1 June 1997

Weishing Chen and Tai‐Hsi Wu

Abstract

Studies a non‐homogeneous Poisson process software reliability model with a failure rate based on Zipf’s law. Discusses the rate function, the mean value function and the estimation of parameters. The proposed model can be used to analyse reliability growth. The results of applying the proposed model and the Duane model to several actual failure data sets show that the model with the failure rate observed from Zipf’s law fits not only operational software but also software under testing. The results also indicate that the proposed model has better long‐term predictive capability than the Duane model for failure data sets with power-law failure rates.
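
The Zipf-law-based rate function is specific to the paper, but the Duane comparator has a standard power-law NHPP form with closed-form maximum-likelihood estimates; a minimal sketch with hypothetical, time-truncated failure data:

    import numpy as np

    def duane_mle(failure_times, T):
        """Maximum-likelihood estimates for the Duane (power-law) NHPP,
        m(t) = lam * t**beta, observed over (0, T]."""
        t = np.asarray(failure_times, dtype=float)
        n = len(t)
        beta = n / np.sum(np.log(T / t))
        lam = n / T**beta
        return lam, beta

    # Hypothetical software failure times (CPU hours), observation truncated at T = 500 h.
    times = [5.0, 12.0, 30.0, 49.0, 80.0, 130.0, 200.0, 290.0, 410.0]
    lam, beta = duane_mle(times, T=500.0)
    print(f"lam = {lam:.3f}, beta = {beta:.2f}  (beta < 1 indicates reliability growth)")
    print(f"expected cumulative failures by 1,000 h: {lam * 1000.0**beta:.1f}")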

Details

International Journal of Quality & Reliability Management, vol. 14 no. 4
Type: Research Article
ISSN: 0265-671X
