Search results

1 – 10 of over 1000
Article
Publication date: 2 June 2020

Nasrin Shomali and Bahman Arasteh

Abstract

Purpose

Delivering high-quality software applications requires proper testing, and a test suite is effective only to the extent that it detects software faults. The traditional method of assessing the quality and effectiveness of a test suite is mutation testing. One of its main drawbacks is its computational cost, and this high cost is the research problem of this study. Reducing the time and cost of mutation testing is the main goal of this study.

Design/methodology/approach

By the 80–20 rule, about 80% of a program's faults are found in 20% of its fault-prone code. The proposed method statically analyzes the source code of the program to identify its fault-prone locations. Identifying the fault-prone (complex) paths of a program is an NP-hard problem, so the proposed method uses a firefly optimization algorithm to identify the most fault-prone paths; the mutation operators are then injected only into the identified fault-prone instructions.
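To make this search step concrete, here is a minimal, self-contained sketch of a firefly search over branch-decision vectors. The encoding, the complexity() fitness stand-in and every parameter value are illustrative assumptions, not the paper's implementation.

```python
import math
import random

# Minimal firefly-algorithm sketch for locating fault-prone execution paths.
# Assumption (not from the paper): a path through a program with n branch
# instructions is encoded as n decisions in [0, 1], thresholded at 0.5.

N_BRANCHES = 12        # hypothetical number of if-instructions
N_FIREFLIES = 20
BETA0, GAMMA, ALPHA = 1.0, 0.5, 0.2

def complexity(position):
    # Placeholder fitness: sum of per-branch "complexity weights" for the
    # branches the decoded path takes. A real implementation would score
    # the path against static code metrics.
    weights = [((i * 7919) % 13) / 13 for i in range(N_BRANCHES)]  # fake metrics
    return sum(w for p, w in zip(position, weights) if p > 0.5)

def move(xi, xj):
    # Move firefly i toward brighter firefly j; attraction decays with distance.
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    beta = BETA0 * math.exp(-GAMMA * r2)
    return [min(1.0, max(0.0, a + beta * (b - a) + ALPHA * (random.random() - 0.5)))
            for a, b in zip(xi, xj)]

swarm = [[random.random() for _ in range(N_BRANCHES)] for _ in range(N_FIREFLIES)]
for _ in range(100):
    for i in range(N_FIREFLIES):
        for j in range(N_FIREFLIES):
            if complexity(swarm[j]) > complexity(swarm[i]):
                swarm[i] = move(swarm[i], swarm[j])

best = max(swarm, key=complexity)
print("most fault-prone path (branch decisions):", [p > 0.5 for p in best])
```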

Findings

The source code of five traditional benchmark programs was used to evaluate how effectively the proposed method reduces the number of mutants. The proposed method was implemented in MATLAB, the mutation-injection operations were carried out with MuJava, and the output was analyzed. The results confirm that the proposed method considerably reduces the number of mutants and, consequently, the cost of software mutation testing.

Originality/value

The proposed method avoids mutating the non-fault-prone (simple) code of the program, so the number of mutants is considerably reduced. In a program with n branch instructions (if instructions), there are 2^n execution paths (test paths), and the data and code along each of these paths can be considered mutation targets. Identifying the error-prone (complex) paths of a program is an NP-hard problem. In the proposed method, a firefly optimization algorithm is used as a heuristic for identifying the most error-prone paths of a program; the mutation operators (faults) are then injected only into the identified fault-prone instructions.

Details

Data Technologies and Applications, vol. 54 no. 4
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 12 November 2020

Seyed Mohammad Javad Hosseini, Bahman Arasteh, Ayaz Isazadeh, Mehran Mohsenzadeh and Mitra Mirzarezaee

Abstract

Purpose

The purpose of this study is to reduce the number of mutations and, consequently, the cost of mutation testing. The results of related studies indicate that about 40% of the faults (mutants) injected into source code are effect-less (equivalent). Equivalent mutants are one of the major costs of mutation testing, and identifying equivalent, effect-less mutants is known to be an undecidable problem.

Design/methodology/approach

In a program with n branch instructions (if instructions), there are 2^n execution paths (test paths), and the data and code along each of these paths can be considered mutation targets. Given the role and impact of data in a program, some data and code are more likely than others to propagate injected mutants to the program's output. In this study, the error-propagation rate of the program's data is first quantified using static analysis of the program's control-flow graph. The most error-propagating test paths are then identified by the proposed heuristic algorithm, a genetic algorithm (GA). Only data and code with higher error-propagation rates are considered strategic locations for mutation testing.
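As a rough illustration of the path-selection step, here is a minimal genetic-algorithm sketch over branch-decision bit strings. The propagation_rate() fitness is a stand-in for the paper's control-flow-graph equations, and all names and parameter values are assumptions made for the example.

```python
import random

# Minimal GA sketch for finding high error-propagation paths.
# Assumption: a path is a bit string of branch decisions.

N_BRANCHES, POP, GENS = 16, 40, 200

def propagation_rate(path):
    # Placeholder: each taken branch contributes a fixed propagation weight;
    # the real method derives these from static control-flow-graph analysis.
    weights = [((i * 31) % 11) / 11 for i in range(N_BRANCHES)]
    return sum(w for bit, w in zip(path, weights) if bit)

def tournament(pop):
    # Binary tournament selection: keep the fitter of two random individuals.
    a, b = random.sample(pop, 2)
    return a if propagation_rate(a) >= propagation_rate(b) else b

def crossover(p1, p2):
    # Single-point crossover of two parent bit strings.
    cut = random.randrange(1, N_BRANCHES)
    return p1[:cut] + p2[cut:]

def mutate(path, rate=0.05):
    # Independent bit-flip mutation.
    return [1 - bit if random.random() < rate else bit for bit in path]

pop = [[random.randint(0, 1) for _ in range(N_BRANCHES)] for _ in range(POP)]
for _ in range(GENS):
    pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(POP)]

best = max(pop, key=propagation_rate)
print("most error-propagating path:", best, propagation_rate(best))
```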

Findings

To evaluate the proposed method, an extensive series of mutation-testing experiments was conducted on a set of traditional benchmark programs using the MuJava tool set. The results show that the proposed method reduces the number of mutants by about 24% while increasing the mutation score by about 5.6%. The success rate of the GA in finding the most error-propagating paths of the input programs is 99%. On average, only 7.46% of the mutants generated by the proposed method are equivalent; the remaining 92.54% are non-equivalent.

Originality/value

The main contributions of this study are as follows: (1) proposing a set of equations to measure the error-propagation rate of each datum, basic block and execution path of a program; (2) proposing a genetic algorithm to identify the most error-propagating paths of a program as the locations for mutation; (3) developing an efficient mutation-testing framework that mutates only the strategic locations identified by the proposed genetic algorithm; and (4) reducing the time and cost of mutation testing by reducing the number of equivalent mutants.

Details

Data Technologies and Applications, vol. 55 no. 1
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 27 November 2020

Bahman Arasteh, Razieh Sadegi and Keyvan Arasteh

Abstract

Purpose

Software module clustering is a reverse-engineering technique considered effective for presenting software architecture and structural information. The objective of clustering software modules is to achieve minimum coupling among different clusters and maximum cohesion among the modules of each cluster. Finding the best clustering is a multi-objective NP-hard optimization problem, and different meta-heuristic algorithms have previously been proposed to solve it. Achieving higher module clustering quality (MQ), obtaining a higher success rate in reaching the best clustering quality and improving convergence speed are the main objectives of this study.

Design/methodology/approach

In this study, a method (Bölen) is proposed for clustering software modules that combines the shuffled frog-leaping algorithm (SFLA) with a genetic algorithm (GA).
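To illustrate the kind of hybrid described, here is a minimal sketch in which GA crossover and mutation operators play the role of SFLA's continuous "leap" on a toy module-dependency graph. The simplified MQ fitness, the data and all parameters are illustrative assumptions, not the Bölen implementation.

```python
import random

# Sketch of a discrete SFLA-GA hybrid for software-module clustering.
# A "frog" is a module-to-cluster assignment over a toy dependency graph.

EDGES = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]  # toy system
N_MODULES, N_CLUSTERS = 6, 2
N_FROGS, N_MEMEPLEXES, ITERATIONS = 20, 4, 100

def mq(frog):
    # Simplified cluster factor: intra / (intra + inter) summed over clusters.
    total = 0.0
    for c in range(N_CLUSTERS):
        intra = sum(1 for a, b in EDGES if frog[a] == c and frog[b] == c)
        inter = sum(1 for a, b in EDGES if (frog[a] == c) != (frog[b] == c))
        if intra:
            total += 2 * intra / (2 * intra + inter)
    return total

def crossover_mutate(worst, best):
    # GA operators replace SFLA's continuous leap: single-point crossover
    # toward the memeplex best, then random reassignment mutation.
    cut = random.randrange(1, N_MODULES)
    child = worst[:cut] + best[cut:]
    if random.random() < 0.3:
        child[random.randrange(N_MODULES)] = random.randrange(N_CLUSTERS)
    return child

frogs = [[random.randrange(N_CLUSTERS) for _ in range(N_MODULES)]
         for _ in range(N_FROGS)]
for _ in range(ITERATIONS):
    frogs.sort(key=mq, reverse=True)
    # Deal sorted frogs round-robin into memeplexes; improve each worst frog.
    for plex in (frogs[i::N_MEMEPLEXES] for i in range(N_MEMEPLEXES)):
        child = crossover_mutate(plex[-1], plex[0])
        if mq(child) > mq(plex[-1]):
            plex[-1][:] = child

print("best clustering:", max(frogs, key=mq), "MQ =", max(map(mq, frogs)))
```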

Findings

Experiments conducted on traditional data sets confirm that the proposed method outperforms previous methods in terms of convergence speed, module clustering quality and stability of the results.

Originality/value

The study proposes the SFLA_GA algorithm for optimizing software module clustering, implementing the SFLA in discrete form by means of two genetic-algorithm operators and thereby achieving the objectives stated above. The aim is for the proposed algorithm to achieve higher performance than comparable algorithms.

Details

Data Technologies and Applications, vol. 55 no. 2
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 20 February 2020

Vijay Kumar and Ramita Sahni

Abstract

Purpose

The use of software pervades modern society. Technological advancement drives user demand, which in turn increases the pressure on software firms to develop high-quality, reliable software. To meet this demand, software firms need to upgrade existing versions, but the upgrade process may introduce additional faults into successive versions of the software, and faults that remain undetected in the previous version are passed on to the new release. As this process is complicated and time-consuming, it is important for firms to allocate resources optimally during the testing phase of the software development life cycle (SDLC). The resource-allocation task becomes even more challenging when testing is carried out in a dynamic environment.

Design/methodology/approach

The model presented in this paper explains the methodology for estimating testing effort in a dynamic environment, under the assumption that the debugging cost corresponding to each release follows a learning-curve phenomenon. We use an optimal control-theoretic approach to find the optimal policies and a genetic algorithm to estimate the testing effort. Further, a numerical illustration using a real-life software failure data set validates the applicability of the proposed model.
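As a hedged illustration of the effort-estimation step only (the control-theoretic derivation is beyond a sketch), the code below uses a simple real-coded GA to split a fixed testing budget across two releases under an exponential SRGM with a learning-curve discount on second-release debugging cost. The model form and every parameter value are made up for the example and are not the paper's calibrated model.

```python
import math
import random

# Illustrative GA for allocating testing effort across two releases under an
# exponential SRGM m(w) = a * (1 - exp(-b * w)). All values are hypothetical.

A = [100, 80]                 # initial fault content per release
B = [0.02, 0.025]             # fault-detection rates
DEBUG_COST = [50, 50 * 0.8]   # learning curve: cheaper debugging in release 2
EFFORT_COST, PENALTY, BUDGET = 2.0, 200.0, 400.0

def total_cost(efforts):
    # Testing-effort cost + debugging cost + penalty for undetected faults.
    cost = 0.0
    for a, b, dc, w in zip(A, B, DEBUG_COST, efforts):
        detected = a * (1 - math.exp(-b * w))
        cost += EFFORT_COST * w + dc * detected + PENALTY * (a - detected)
    return cost

def random_alloc():
    w1 = random.uniform(0, BUDGET)
    return [w1, BUDGET - w1]

pop = [random_alloc() for _ in range(30)]
for _ in range(300):
    pop.sort(key=total_cost)
    survivors = pop[:10]
    children = []
    while len(children) < 20:
        p1, p2 = random.sample(survivors, 2)
        mix = random.random()                                 # blend crossover
        w1 = mix * p1[0] + (1 - mix) * p2[0]
        w1 = min(BUDGET, max(0.0, w1 + random.gauss(0, 5)))   # mutation
        children.append([w1, BUDGET - w1])
    pop = survivors + children

best = min(pop, key=total_cost)
print("effort per release:", best, "total cost:", round(total_cost(best), 1))
```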

Findings

The paper yields several substantive insights for software managers. The study shows that the estimated testing efforts, as well as the faults detected for both releases, are close to the real data set.

Originality/value

We have proposed a dynamic resource-allocation model for multiple software releases, with the objective of minimizing the total testing cost, using a flexible software reliability growth model (SRGM).

Details

International Journal of Quality & Reliability Management, vol. 37 no. 6/7
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 27 January 2021

Mohamed ElMenshawy and Mohamed Marzouk

Abstract

Purpose

Nowadays, building information modeling (BIM) represents an evolution in the architecture, engineering and construction (AEC) industries through its various applications. BIM can store huge amounts of building-related information, which can be leveraged in several areas such as quantity takeoff, scheduling, sustainability and facility management. The main objective of this research is to establish a model for automated schedule generation using BIM and to solve the time–cost trade-off problem (TCTP) arising from the various scenarios offered to the user.

Design/methodology/approach

A model is developed that uses the quantities exported from a BIM platform to generate construction activities, calculate the duration of each activity and finally apply the logic/sequence that links the activities together. Multiobjective optimization is then performed using the nondominated sorting genetic algorithm (NSGA-II) to provide the most feasible solutions with respect to project duration and cost. The researchers opted for NSGA-II because it is a well-known and credible algorithm that has been used in many applications, and its performance has been tested in several comparative studies.
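To show the core of this optimization step, here is a compact sketch of NSGA-II-style non-dominated sorting over per-activity (duration, cost) scenario choices. For brevity it assumes activities run in series (so project duration is a sum) and omits crowding-distance selection; the scenario data are invented for the example.

```python
import random

# OPTIONS holds hypothetical (duration, cost) alternatives per activity,
# e.g. different crew sizes or construction methods.
OPTIONS = [[(4, 100), (3, 140), (2, 200)],
           [(6, 180), (4, 260)],
           [(5, 120), (3, 190), (2, 280)]]
POP_SIZE, GENERATIONS = 12, 60

def evaluate(sol):
    # Serial-schedule simplification: total duration and total cost.
    picks = [OPTIONS[i][g] for i, g in enumerate(sol)]
    return (sum(d for d, _ in picks), sum(c for _, c in picks))

def dominates(a, b):
    # Pareto dominance, minimizing both duration and cost.
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def fronts_of(pop):
    # Repeatedly peel off the set of non-dominated solutions.
    objs = [evaluate(s) for s in pop]
    remaining, fronts = list(range(len(pop))), []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

def make_child(pop):
    p1, p2 = random.sample(pop, 2)
    child = [random.choice(pair) for pair in zip(p1, p2)]  # uniform crossover
    i = random.randrange(len(OPTIONS))                     # point mutation
    child[i] = random.randrange(len(OPTIONS[i]))
    return child

pop = [[random.randrange(len(o)) for o in OPTIONS] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    merged = pop + [make_child(pop) for _ in range(POP_SIZE)]
    ranked = [i for front in fronts_of(merged) for i in front]
    pop = [merged[i] for i in ranked[:POP_SIZE]]  # crowding distance omitted

front0 = fronts_of(pop)[0]
print("non-dominated (duration, cost):", sorted({evaluate(pop[i]) for i in front0}))
```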

Findings

The proposed model is capable of selecting the near-optimum scenario for the project and exporting it to Primavera software. A case study demonstrates the use of the proposed model and illustrates its main features.

Originality/value

The proposed model offers a simple and user-friendly approach to automated schedule generation for construction projects. In addition, it enables an interface between the automated schedule-generation model and Primavera, one of the most popular and widely used scheduling software solutions in the construction industry, and it allows importing data from MS Excel, which is used to store the activity data for the different scenarios. Furthermore, the model produces numerous solutions, each corresponding to a certain duration and cost according to the performance factor, which often reflects the number of crews assigned to an activity and/or the construction method.

Details

Engineering, Construction and Architectural Management, vol. 28 no. 10
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 13 April 2022

Xiaofan Liu, Yupeng Zhou, Minghao Yin and Shuai Lv

Abstract

Purpose

The paper aims to provide an efficient meta-heuristic algorithm to solve the partial set covering problem (PSCP). With rich application scenarios, the PSCP is a fascinating and well-known non-deterministic polynomial (NP)-hard problem whose goal is to cover at least k elements with as few subsets as possible.

Design/methodology/approach

In this work, the authors present a novel variant of the ant colony optimization (ACO) algorithm, called the Argentine ant system (AAS), to deal with the PSCP. The developed AAS is an integrated system of different populations that use the same pheromone to communicate. Moreover, an effective local-search framework with relaxed configuration checking (RCC) and a volatilization-fixed weight mechanism is proposed to improve the algorithm's exploitation.
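As a simplified illustration of the construction and pheromone-update loop underlying such an approach, the sketch below implements a generic ACO for the PSCP. It omits the AAS-specific multi-population scheme, RCC and the volatilization-fixed weight mechanism, and all instance data and parameters are assumptions.

```python
import random

# Generic ACO sketch for the partial set covering problem (PSCP): cover at
# least K elements of the universe using as few subsets as possible.

UNIVERSE = set(range(12))
SUBSETS = [{0, 1, 2}, {2, 3, 4}, {4, 5, 6}, {6, 7, 8},
           {8, 9, 10}, {10, 11, 0}, {1, 5, 9}, {3, 7, 11}]
K, ANTS, ROUNDS, ALPHA, BETA, RHO = 10, 8, 100, 1.0, 2.0, 0.1

pheromone = [1.0] * len(SUBSETS)

def construct():
    # One ant builds a partial cover, choosing subsets with probability
    # proportional to pheromone^ALPHA * (newly covered elements)^BETA.
    chosen, covered = [], set()
    while len(covered) < K:
        candidates = [i for i in range(len(SUBSETS))
                      if i not in chosen and SUBSETS[i] - covered]
        weights = [pheromone[i] ** ALPHA * len(SUBSETS[i] - covered) ** BETA
                   for i in candidates]
        pick = random.choices(candidates, weights=weights)[0]
        chosen.append(pick)
        covered |= SUBSETS[pick]
    return chosen

best = None
for _ in range(ROUNDS):
    solutions = [construct() for _ in range(ANTS)]
    iteration_best = min(solutions, key=len)
    if best is None or len(iteration_best) < len(best):
        best = iteration_best
    # Evaporate, then reinforce subsets used in the iteration-best solution.
    pheromone = [(1 - RHO) * p for p in pheromone]
    for i in iteration_best:
        pheromone[i] += 1.0 / len(iteration_best)

print("subsets chosen:", best, "covering",
      len(set().union(*(SUBSETS[i] for i in best))), "elements")
```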

Findings

A detailed experimental evaluation on 75 instances reveals that the proposed algorithm outperforms its competitors in the quality of the solutions found. The performance of AAS also improves as instance size grows, which shows its potential for handling complex practical scenarios. Finally, the designed components of AAS are experimentally shown to benefit the framework as a whole.

Originality/value

Before this work, no heuristic method existed for this problem. The authors present the first heuristic algorithm for solving the PSCP and provide competitive solutions.

Details

Data Technologies and Applications, vol. 56 no. 5
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 9 July 2020

Sepehr Abrishami, Jack Goulding and Farzad Rahimian

Abstract

Purpose

The integration and automation of the whole design and implementation process have become a pivotal factor in construction projects. Problems of process integration, particularly at the conceptual design stage, often manifest in a number of significant areas, from design representation, cognition and translation to process fragmentation and loss of design integrity. Whilst building information modelling (BIM) applications can be used to support design automation, particularly through the modelling, amendment and management stages, they do not explicitly provide whole-design integration. This is a significant challenge. However, advances in generative design now offer significant potential for enhancing the design experience to mitigate this challenge.

Design/methodology/approach

The approach outlined in this paper specifically addresses BIM deficiencies at the conceptual design stage, where the core drivers and indicators of BIM and generative design are identified and mapped into a generative BIM (G-BIM) framework and subsequently embedded into a G-BIM prototype. This actively engages generative design methods into a single dynamic BIM environment to support the early conceptual design process. The developed prototype followed the CIFE “horseshoe” methodology of aligning theoretical research with scientific methods to procure architecture, construction and engineering (AEC)-based solutions. This G-BIM prototype was also tested and validated through a focus group workshop engaging five AEC domain experts.

Findings

The G-BIM prototype presents a valuable set of rubrics to support the conceptual design stage using generative design. It benefits from the advanced features of BIM tools in relation to illustration and collaboration (coupled with BIM's parametric change management features).

Research limitations/implications

This prototype has been evaluated through multiple projects and scenarios. However, additional test data is needed to further improve system veracity using conventional and non-standard real-life design settings (and contexts). This will be reported in later works.

Originality/value

Originality and value rest with addressing the shortcomings of previous research on automation during the design process. It also addresses novel computational issues relating to the implementation of generative design systems, where, for example, instead of engaging static and formal description of the domain concepts, G-BIM actively enhances the applicability of BIM during the early design stages to generate optimised (and more purposeful) design solutions.

Details

Engineering, Construction and Architectural Management, vol. 28 no. 2
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 30 October 2009

Christian Bach, Jing Zhang and Salvatore Belardo

Abstract

Purpose

The paper aims to demonstrate the usefulness of the intellectual bandwidth model (IB model) and expand its basic foundation to the bioscience industry.

Design/methodology/approach

A case study of a real-world example from the bioscience industry is presented.

Findings

The study discusses an end‐user information system and reveals that the information assimilation dimension can be meaningfully extended, adding automated utilization and implementation.

Research limitations/implications

The vertical research approach does not provide certainty that the case is truly representative.

Practical implications

Practical implications of the study include having a useful management tool to plan solutions for complex business problems and investment decisions.

Originality/value

The extension of the IB model is useful to practitioners and organizations seeking to manage scientific networks in knowledge‐intensive and complex collaborative environments.

Details

Management Research News, vol. 32 no. 12
Type: Research Article
ISSN: 0140-9174

Book part
Publication date: 1 November 2007

Irina Farquhar, Michael Kane, Alan Sorkin and Kent H. Summers

Abstract

This chapter proposes an optimized innovative information technology as a means for achieving operational functionalities of real-time portable electronic health records, system interoperability, a longitudinal health-risks research cohort and adverse-events surveillance infrastructure, and a clinical, genome regions–disease and interventional prevention infrastructure. Applied to the DoD-VA (Department of Defense and Veterans Administration) health information systems, the proposed modernization can be carried out as an "add-on" expansion (estimated at $288 million in constant dollars) or as a "stand-alone" innovative information technology system (estimated at $489.7 million); either solution will prototype an infrastructure for nationwide health information systems interoperability, portable real-time electronic health records (EHRs), adverse-events surveillance, and interventional prevention based on targeted single nucleotide polymorphism (SNP) discovery.

Details

The Value of Innovation: Impact on Health, Life Quality, Safety, and Regulatory Research
Type: Book
ISBN: 978-1-84950-551-2

Article
Publication date: 20 April 2010

Yongzhong Wu and Ping Ji

Abstract

Purpose

The purpose of this paper is to propose an effective and efficient solution method for the component allocation problem (CAP) in printed circuit board (PCB) assembly, in order to achieve high‐throughput rates of the PCB assembly lines.

Design/methodology/approach

The investigated CAP is intertwined with the machine optimization problems of each machine in the line, because the latter determine each machine's process time. To solve the CAP, a solution method is proposed that integrates a meta-heuristic (a genetic algorithm) with a regression model.
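To give a feel for this integration, here is a minimal sketch in which a regression-style estimate of machine process time serves as the fitness driving a GA over component-to-machine allocations; the line's throughput is taken to be limited by its slowest machine. The regression coefficients, placement counts and bottleneck objective are illustrative assumptions rather than the paper's fitted models.

```python
import random

# GA + regression sketch for the component allocation problem (CAP):
# assign component types to machines so the bottleneck machine is fastest.

PLACEMENTS = [30, 12, 45, 8, 20, 16, 25, 10]   # placements per component type
N_MACHINES, POP, GENS = 3, 30, 200
B0, B1, B2 = 5.0, 1.5, 0.8   # hypothetical fitted regression coefficients

def machine_time(types):
    # Regression-model stand-in: time grows with the number of component
    # types assigned (feeder setup) and total placements on the machine.
    return B0 + B1 * len(types) + B2 * sum(PLACEMENTS[t] for t in types)

def cycle_time(alloc):
    # Line throughput is governed by the slowest machine.
    groups = [[t for t, m in enumerate(alloc) if m == mach]
              for mach in range(N_MACHINES)]
    return max(machine_time(g) for g in groups)

def child_of(p1, p2):
    # Single-point crossover plus occasional reassignment mutation.
    cut = random.randrange(1, len(PLACEMENTS))
    c = p1[:cut] + p2[cut:]
    if random.random() < 0.2:
        c[random.randrange(len(c))] = random.randrange(N_MACHINES)
    return c

pop = [[random.randrange(N_MACHINES) for _ in PLACEMENTS] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=cycle_time)
    pop = pop[:10] + [child_of(*random.sample(pop[:10], 2)) for _ in range(20)]

best = min(pop, key=cycle_time)
print("allocation:", best, "bottleneck time:", round(cycle_time(best), 1))
```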

Findings

It is found that the established regression model can estimate the process time of each machine accurately and efficiently. Experimental tests show that the proposed solution method can solve the CAP both effectively and efficiently.

Research limitations/implications

Although different regression models are required for different types of assembly machines, the proposed solution method can be adopted for solving the CAPs for assembly lines of any configuration, including a mixed‐vendor assembly line.

Practical implications

The solution method can ensure a high-throughput rate of a PCB assembly line and thus improve production capacity without further investment in expensive PCB assembly equipment.

Originality/value

The paper proposes an innovative solution method for the CAP in PCB assembly. The method integrates a meta-heuristic with a regression model, a combination that has not previously been studied in the literature.

Details

Assembly Automation, vol. 30 no. 2
Type: Research Article
ISSN: 0144-5154
