Search results

1 – 10 of over 30000
Article
Publication date: 1 October 2006

Y. Tamura, S. Yamada and M. Kimura

Abstract

Purpose

The aim of this paper is to propose a software reliability growth model based on stochastic differential equations for the integration testing phase of a distributed development environment.

Design/methodology/approach

Client/server systems (CSSs) have come into existence as a new development method with the progress of networking technology on UNIX systems. On the other hand, only a few effective testing methods for a distributed development environment have been presented. A method of software reliability assessment that considers the interaction among software components in a distributed environment is discussed.

Findings

Conventional software reliability growth models for the system testing phase in a distributed development environment include many unknown parameters. In particular, no effective method has been presented for estimating these unknown parameters, which represent the proportion of the total testing load devoted to each software component. The proposed software reliability growth model can be easily applied in distributed software development because it has a simple structure.

Practical implications

This model is very useful to software developers for practical reliability assessment in an actual distributed development environment.

Originality/value

The method of software reliability assessment considering the interaction among software components in a distributed development environment is proposed. Additionally, several numerical examples based on actual data are presented.
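
The stochastic differential equation formulation can be illustrated with a minimal simulation. The sketch below assumes a simple multiplicative-noise growth equation, dN(t) = b(a − N(t))dt + σ(a − N(t))dW(t), in the spirit of such models rather than the paper's exact formulation; all parameter values are hypothetical.

```python
import math, random

# Euler-Maruyama simulation of a simple SDE-based reliability growth curve:
# a = expected total faults, b = detection rate, sigma = noise level.
random.seed(0)
a, b, sigma = 100.0, 0.1, 0.05      # hypothetical parameters
dt, steps = 0.01, 5000              # 50 time units of testing
n = 0.0                             # cumulative detected faults N(t)
for _ in range(steps):
    dW = random.gauss(0.0, math.sqrt(dt))          # Brownian increment
    n += b * (a - n) * dt + sigma * (a - n) * dW   # drift + diffusion
print(round(n, 1))                  # approaches a as testing proceeds
```

Averaging many such sample paths recovers the deterministic exponential growth curve, while the spread across paths gives the uncertainty band that such models are designed to quantify.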

Details

Journal of Quality in Maintenance Engineering, vol. 12 no. 4
Type: Research Article
ISSN: 1355-2511

Article
Publication date: 17 February 2021

Anusha R. Pai, Gopalkrishna Joshi and Suraj Rane

Abstract

Purpose

This paper focuses on studying the current state of research involving the four dimensions of defect management strategy, i.e. software defect analysis, software quality, software reliability and software development cost/effort.

Design/methodology/approach

The methodology developed by Kitchenham (2007) is followed in planning, conducting and reporting the systematic review. Out of 625 research papers, nearly 100 primary studies related to the research domain are considered. The study attempts to identify the various techniques, metrics, data sets and performance validation measures used by researchers.

Findings

The study revealed the need for integrating the four dimensions of defect management and studying their effect on software performance. This integrated approach can lead to optimal use of resources in the software development process.

Research limitations/implications

There are many dimensions in defect management studies. The authors have considered only a vital few, based on the practical experience of software engineers. Most of the research work cited in this review used public data repositories to validate its methodology, and there is a need to apply these research methods to real datasets from industry to realize the actual potential of these techniques.

Originality/value

The authors believe that this paper provides a comprehensive insight into the various aspects of state-of-the-art research in software defect management. The authors feel that this is the only research article that delves into all four facets, namely software defect analysis, software quality, software reliability and software development cost/effort.

Details

International Journal of Quality & Reliability Management, vol. 38 no. 10
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 1 March 1985

A. Veevers and K.A. Davies

Abstract

This article reports the first phase of an investigation into the quantification of software reliability. Following an extensive literature search the current approaches are critically reviewed. The difficulties in interpretation attached to quantifications in probabilistic terms are highlighted. The notion of software credibility is proposed as a means of condensing quality assurance techniques into a measure of reliability suitable for use in general reliability assessments.

Details

International Journal of Quality & Reliability Management, vol. 2 no. 3
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 5 October 2012

Assefa Semegn and Eamonn Murphy

Abstract

Purpose

The purpose of this paper is to introduce a novel approach of designing, specifying, and describing the behavior of software systems in a way that helps to predict their reliability from the reliability of the components and their interactions.

Design/methodology/approach

Design imperatives and relevant mathematical documentation techniques for improved reliability predictability of software systems are identified.

Findings

The design approach, which is named design for reliability predictability (DRP), integrates design for change, precise behavioral documentation and structure-based reliability prediction to achieve improved reliability predictability of software systems. The specification and documentation approach builds upon precise behavioral specification of interfaces using the trace function method (TFM) and introduces a number of structure functions or connection documents. These functions capture both the static and dynamic behavior of component-based software systems and are used as the basis for a novel document-driven, structure-based reliability prediction model.

Originality/value

Decades of research effort have been spent in software design, mathematical/formal specification and description and reliability prediction of software systems. However, there has been little convergence among these three areas. This paper brings a new direction where the three research areas are unified to create a new design paradigm.

Details

International Journal of Quality & Reliability Management, vol. 29 no. 9
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 25 November 2021

Saurabh Panwar, Vivek Kumar, P.K. Kapur and Ompal Singh

Abstract

Purpose

Software testing is needed to produce extremely reliable software products. A crucial decision problem that the software developer encounters is to ascertain when to terminate the testing process and when to release the software system in the market. With the growing need to deliver quality software, the critical assessment of reliability, cost of testing and release time strategy is requisite for project managers. This study seeks to examine the reliability of the software system by proposing a generalized testing coverage-based software reliability growth model (SRGM) that incorporates the effect of testing efforts and change point. Moreover, a strategic software time-to-market policy based on cost-reliability criteria is suggested.

Design/methodology/approach

The fault detection process is modeled as a composite function of testing coverage, testing efforts and the continuation time of the testing process. Also, to assimilate factual scenarios, the current research incorporates the influence of software users, referred to as reporters, in the fault detection process. Thus, this study models the reliability growth phenomenon by integrating the number of reporters and the number of instructions executed in the field environment. In addition, it is presumed that managers release the software early to capture maximum market share and continue the testing process for an added period in the user environment. The multiattribute utility theory (MAUT) is applied to solve the optimization model with release time and testing termination time as two decision variables.
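
The multiattribute utility step can be sketched in miniature: assign single-attribute utilities to reliability, cost and market capture, combine them additively and search a grid of (release time, testing-termination time) pairs. The utility shapes, weights and grid below are hypothetical illustrations, not the paper's actual model.

```python
import math

# Toy additive multiattribute utility over (release time tau, testing stop T).
def score(tau, T):
    u_rel = 1 - math.exp(-0.1 * T)      # longer testing -> higher reliability
    u_cost = math.exp(-0.05 * T)        # shorter testing -> lower cost
    u_market = math.exp(-0.05 * tau)    # earlier release -> larger market share
    return 0.5 * u_rel + 0.3 * u_cost + 0.2 * u_market

candidates = [(tau, T) for tau in range(5, 41, 5)
              for T in range(5, 41, 5) if T >= tau]  # testing may outlast release
best = max(candidates, key=lambda p: score(*p))
print(best)
```

With these particular weights, the search releases at the earliest grid point but stops testing much later, mirroring the early-release, continued-field-testing policy described above.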

Findings

The practical applicability and performance of the proposed methodology are demonstrated through real-life software failure data. The findings of the empirical analysis have shown the superiority of the present study as compared to conventional approaches.

Originality/value

This study is the first attempt to assimilate the testing coverage phenomenon in the joint optimization of software time-to-market and testing duration.

Details

International Journal of Quality & Reliability Management, vol. 39 no. 3
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 18 November 2021

Adarsh Anand, Subhrata Das, Mohini Agarwal and Shinji Inoue

Abstract

Purpose

In the current market scenario, software upgrades and updates have proved to be very handy in improving the reliability of the software in its operational phase. Software upgrades help in reinventing working software through major changes, like functionality addition, feature enhancement, structural changes, etc. In software updates, minor changes are undertaken which help in improving software performance by fixing bugs and security issues in the current version of the software. Through the current proposal, the authors wish to highlight the economic benefits of the combined use of upgrade and update service. A cost analysis model has been proposed for the same.

Design/methodology/approach

The article discusses a cost analysis model highlighting the distinction between launch time and the time to end the testing process. The number of bugs to be addressed in each release, which also includes the count of latent bugs from the previous version, has been determined. Convolution theory has been utilized to incorporate the joint role of tester and user in bug detection into the model. The cost incurred in the debugging process was determined. An optimization model was designed that minimizes the total debugging cost subject to reliability and budget constraints. This optimization was used to determine the release time and the testing stop time.

Findings

The proposal is backed by a real-life software bug dataset consisting of four releases. The model was able to successfully determine the ideal software release time and the testing stop time. An increased profit is generated by releasing the software earlier and continuing testing long after its release.

Originality/value

The work contributes positively to the field by providing an effective optimization model, which was able to determine the economic benefit of the combined use of upgrade and update services. The model can be used by management to determine their timelines and the costs that will be incurred, depending on their product and available resources.

Details

International Journal of Quality & Reliability Management, vol. 39 no. 3
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 16 February 2023

Vibha Verma, Sameer Anand and Anu Gupta Aggarwal

Abstract

Purpose

The software development team reviews the testing phase to assess if the reliability growth of software is as per plan and requirement and gives suggestions for improvement. The objective of this study is to determine the optimal review time such that there is enough time to make judgments about changes required before the scheduled release.

Design/methodology/approach

Testing utilizes the majority of time and resources, assures reliability and plays a critical role in release and warranty decision-making, which makes reviews necessary. A very early review during testing may not give useful information for analyzing or improving project performance, and a very late review may delay product delivery and lead to opportunity loss for developers. Therefore, it is assumed that the optimal time for review is in the later stage of testing, when the fault removal rate starts to decline. The expression for this time point is determined using the S-curve 2-D software reliability growth model (SRGM).
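
The "declining fault removal rate" idea can be made concrete with a standard one-dimensional S-shaped SRGM (the delayed S-shaped model, used here purely for illustration and not the paper's 2-D model): the removal rate rises, peaks at t = 1/b, then declines, so a review is only informative after that peak. Parameter values are hypothetical.

```python
import math

# Delayed S-shaped SRGM m(t) = a*(1 - (1 + b*t)*exp(-b*t)); its fault removal
# rate m'(t) = a*b^2*t*exp(-b*t) rises, peaks at t = 1/b, then declines.
a, b = 500.0, 0.15        # hypothetical total-fault and detection-rate parameters

def removal_rate(t):
    return a * b**2 * t * math.exp(-b * t)

t_peak = 1 / b            # a review is meaningful only after this point
print(round(t_peak, 2))   # ~6.67 time units into testing
```

Any review scheduled after t_peak falls in the declining-rate stage the authors target; the paper's 2-D model adds a resource dimension on top of this time-only picture.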

Findings

The methodology has been illustrated using the real-life fault datasets of Tandem computers and radar systems resulting in optimal review time of 14 weeks and 26 months, respectively, which is neither very early in testing nor very near to the scheduled release. The developer can make changes (more resources or postpone release) to expedite the process.

Originality/value

Most studies in the literature focus on determining the optimal testing or release time to achieve considerable reliability within the budget, but in this study, the authors determine the optimal review time during testing using an SRGM to ensure considerable reliability at release.

Details

International Journal of Quality & Reliability Management, vol. 40 no. 9
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 1 March 2000

Koichi Tokuno and Shigeru Yamada

Abstract

It is important to take into account the trade‐off between hardware and software systems when total computer‐system reliability/performance is evaluated and assessed. Develops an availability model for a hardware‐software system. The system treated here consists of one hardware and one software subsystem. For the software subsystem, in particular, it is supposed that: the restoration actions are not always performed perfectly; the restoration times for later software failures become longer; and reliability growth occurs in the perfect restoration action. The hardware‐ and software‐failure occurrence phenomena are described by a constant and a geometrically decreasing hazard rate, respectively. The time‐dependent behavior of the system is described by a Markov process. Useful expressions for several quantitative measures of system performance are derived from this model. Finally, numerical examples are presented for illustration of system availability measurement and assessment.
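
A much-simplified steady-state sketch conveys the series-system idea: if each subsystem alternates between up and down with constant failure rate λ and repair rate μ, its limiting availability is μ/(λ + μ), and, assuming independence, the system availability is the product. The paper's Markov model relaxes exactly these simplifications (imperfect restoration, repair times that lengthen with successive software failures). All rates below are hypothetical.

```python
# Steady-state availability of an alternating up/down process with constant
# failure rate lam and repair rate mu; series combination assumes independence.
def availability(lam, mu):
    return mu / (lam + mu)

a_hw = availability(lam=0.001, mu=0.1)   # hypothetical hardware rates (per hour)
a_sw = availability(lam=0.01, mu=0.5)    # hypothetical software rates (per hour)
a_sys = a_hw * a_sw                      # series system: both must be up
print(round(a_sys, 4))                   # -> 0.9707
```

The geometrically decreasing software hazard rate and imperfect restoration in the paper's model make the software term time-dependent, which is why a full Markov process rather than this closed-form product is needed there.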

Details

International Journal of Quality & Reliability Management, vol. 17 no. 2
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 7 February 2019

Rajkumar Bhimgonda Patil

Abstract

Purpose

Reliability, maintainability and availability of modern complex engineered systems are significantly affected by four basic systems or elements: hardware, software, organizational and human. The Computerized Numerical Control Turning Center (CNCTC) is one of the complex machine tools used in manufacturing industries. Several research studies have shown that reliability and maintainability are greatly influenced by human and organizational factors (HOFs). The purpose of this paper is to identify critical HOFs and their effects on the reliability and maintainability of the CNCTC.

Design/methodology/approach

In this paper, 12 human performance influencing factors (PIFs) and 10 organizational factors (OFs) which affect the reliability and maintainability of the CNCTC are identified and prioritized according to their criticality. The opinions of experts in the fields are used for prioritizing, whereas the field failure and repair data are used for reliability and maintainability modeling.

Findings

Experience, training and behavior are the three most critical human PIFs, and safety culture, problem-solving resources, corrective action program and training program are the four most critical OFs which significantly affect the reliability and maintainability of the CNCTC. The reliability and maintainability analysis reveals that the Weibull is the best-fit distribution for time-between-failure data, whereas the log-normal is the best-fit distribution for time-to-repair data. The failure rate of the CNCTC is nearly constant. Nearly 66 percent of the total failures and repairs are typically due to the hardware system. The percentage of failures and repairs influenced by HOFs is only about 16 percent; however, the failure and repair impact of HOFs is significant. The HOFs can increase the mean-time-to-repair and mean-time-between-failure of the CNCTC by nearly 65 and 33 percent, respectively.
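
The best-fit-distribution step can be sketched with the classic median-rank linearization of the Weibull CDF: ln(−ln(1 − F)) is linear in ln t, with slope equal to the shape parameter. The failure times below are hypothetical, not the CNCTC field data.

```python
import math

# Median-rank regression: fit ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta),
# using plotting positions F_i = (i - 0.3) / (n + 0.4) on sorted data.
tbf = sorted([120, 340, 510, 690, 820, 1050, 1300, 1600, 2100, 2800])  # hours
n = len(tbf)
xs = [math.log(t) for t in tbf]
ys = [math.log(-math.log(1 - (i - 0.3) / (n + 0.4))) for i in range(1, n + 1)]
mx, my = sum(xs) / n, sum(ys) / n
beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
       sum((x - mx) ** 2 for x in xs)            # slope = Weibull shape
eta = math.exp(mx - my / beta)                   # scale (characteristic life)
print(round(beta, 2), round(eta, 1))
```

A shape parameter close to 1 would correspond to the near-constant failure rate reported in the findings; maximum-likelihood fitting (e.g. via a statistics package) is the usual refinement of this plotting-position estimate.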

Originality/value

The paper uses the field failure data and expert opinions for the analysis. The critical sub-systems of the CNCTC are identified using the judgment of the experts, and the trend of the results is verified with published results.

Details

Journal of Quality in Maintenance Engineering, vol. 26 no. 1
Type: Research Article
ISSN: 1355-2511

Article
Publication date: 22 October 2019

Navneet Bhatt, Adarsh Anand and Deepti Aggrawal

Abstract

Purpose

The purpose of this paper is to provide a mathematical framework to optimally allocate resources required for the discovery of vulnerabilities pertaining to different severity risk levels.

Design/methodology/approach

Different sets of optimization problems have been formulated and, using a dynamic programming approach, a sequence of recursive functions has been constructed for the optimal allocation of resources used for discovering vulnerabilities of different severity scores. The Mozilla Thunderbird data set has been considered for the empirical evaluation, working with vulnerabilities of different severities.
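
The recursive-allocation idea can be sketched with a small dynamic program over a discrete budget: f_k(w) = max_x [g_k(x) + f_{k−1}(w − x)], where g_k(x) is the expected number of discoveries in severity class k for funding x. The discovery curves, severity weights and budget below are hypothetical illustrations, not the paper's calibrated model.

```python
import math

def g(weight, x):
    # hypothetical concave discovery curve: diminishing returns in funding
    return weight * (1 - math.exp(-0.5 * x))

weights = {"critical": 40, "high": 25, "medium": 15, "low": 5}
budget = 10                                  # discrete funding units

f = [0.0] * (budget + 1)                     # best value over classes so far
choices = []                                 # per-class optimal allocation tables
for weight in weights.values():
    new_f, choice = [0.0] * (budget + 1), [0] * (budget + 1)
    for w in range(budget + 1):
        best_v, best_x = -1.0, 0
        for x in range(w + 1):               # f_k(w) = max_x g_k(x) + f_{k-1}(w-x)
            v = g(weight, x) + f[w - x]
            if v > best_v:
                best_v, best_x = v, x
        new_f[w], choice[w] = best_v, best_x
    f, choices = new_f, choices + [choice]

remaining, plan = budget, {}                 # backtrack the optimal split
for name, choice in zip(reversed(weights), reversed(choices)):
    plan[name] = choice[remaining]
    remaining -= choice[remaining]
print(plan)
```

With these curves, the higher-severity classes absorb most of the budget while the low class gets none until funds are ample, which matches the prioritization the findings describe.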

Findings

As per the impact associated with a vulnerability, critical and high severity vulnerabilities are required to be patched promptly, and hence a larger amount of funds has to be allocated for their discovery. Nevertheless, a low or medium risk vulnerability might also get exploited, and thereby its discovery is also crucial. The current framework provides a diversified allocation of funds as per the requirements of a software manager and also aims at improving the discovery of vulnerabilities significantly.

Practical implications

The findings of this research may enable software managers to adequately assign resources for managing the discovery of vulnerabilities. They may also help in estimating the funds required for various bug bounty programs to reward security reporters, based on the potential number of vulnerabilities present in the software.

Originality/value

Much of the attention has been focused on vulnerability discovery modeling and the risk associated with security flaws. But, to the best of the authors' knowledge, no study incorporates the optimal allocation of resources with respect to vulnerabilities of different severity scores. Hence, the building blocks of this paper contribute to future research.

Details

International Journal of Quality & Reliability Management, vol. 37 no. 6/7
Type: Research Article
ISSN: 0265-671X
