Search results

1–10 of over 30,000
Article
Publication date: 19 September 2019

Gayatri Nayak and Mitrabinda Ray

Abstract

Purpose

Test suite prioritization is the process of modifying the order in which tests run to meet certain objectives; early fault detection and maximum coverage of the source code are the main objectives of testing. Several test suite prioritization approaches have been proposed for the maintenance phase of the software development life cycle, but few works address prioritizing test suites that satisfy the modified condition/decision coverage (MC/DC) criterion, which is required for safety-critical systems: under the RTCA/DO-178C standard, MC/DC testing is mandatory for Level A software. The paper aims to discuss this issue.

Design/methodology/approach

This paper provides a novel method to prioritize the test suites for a system that includes MC/DC criteria along with other important criteria that ensure adequate testing.

Findings

In this approach, the authors generate test suites from the input Java program using concolic testing. A coverage calculator algorithm then measures the MC/DC% achieved by each test suite. Finally, the MC/DC% and execution time of the test suites are fed into a basic particle swarm optimization technique with a modified objective function to prioritize the generated test suites.
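
The abstract does not spell out the modified objective function, so the following is only a minimal sketch of how such a PSO-based prioritization might look: particles carry random-key priority vectors, and an assumed fitness rewards MC/DC coverage and penalizes execution time, discounted by position so early suites dominate. The suite data, weights and PSO constants are all illustrative assumptions, not the authors' formulation.

```python
# Hypothetical sketch only: the fitness function below is an assumption.
import random

# Assumed per-suite measurements: name -> (MC/DC %, execution time in seconds)
suites = {"TS1": (62.0, 4.1), "TS2": (75.0, 9.8), "TS3": (48.0, 2.3), "TS4": (81.0, 7.5)}
names = list(suites)

def fitness(order, w_cov=0.7, w_time=0.3):
    """Higher is better: position-discounted coverage reward minus time cost,
    so suites placed early contribute most (APFD-like intent)."""
    max_t = max(t for _, t in suites.values())
    return sum((w_cov * suites[n][0] / 100.0 - w_time * suites[n][1] / max_t) / pos
               for pos, n in enumerate(order, start=1))

def order_of(keys):
    # Random-key encoding: sorting continuous keys yields a permutation.
    return [n for _, n in sorted(zip(keys, names), reverse=True)]

def pso_prioritize(iters=200, n_particles=20, w=0.72, c1=1.49, c2=1.49):
    particles = [[random.random() for _ in names] for _ in range(n_particles)]
    velocities = [[0.0] * len(names) for _ in range(n_particles)]
    pbest = [p[:] for p in particles]
    gbest = max(pbest, key=lambda k: fitness(order_of(k)))[:]
    for _ in range(iters):
        for i, p in enumerate(particles):
            for d in range(len(names)):
                r1, r2 = random.random(), random.random()
                velocities[i][d] = (w * velocities[i][d]
                                    + c1 * r1 * (pbest[i][d] - p[d])
                                    + c2 * r2 * (gbest[d] - p[d]))
                p[d] += velocities[i][d]
            if fitness(order_of(p)) > fitness(order_of(pbest[i])):
                pbest[i] = p[:]
                if fitness(order_of(p)) > fitness(order_of(gbest)):
                    gbest = p[:]
    return order_of(gbest)

print(pso_prioritize())  # e.g. ['TS4', 'TS3', 'TS1', 'TS2']
```

The random-key encoding is one common way to let a continuous optimizer such as PSO search over orderings without a permutation-specific update rule.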

Originality/value

The proposed approach maximizes MC/DC% and minimizes the execution time of the test suites. The effectiveness of the approach is validated through experiments on 20 moderate-sized Java programs using the average percentage of faults detected (APFD) metric.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 12 no. 4
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 25 November 2021

Saurabh Panwar, Vivek Kumar, P.K. Kapur and Ompal Singh

Abstract

Purpose

Software testing is needed to produce extremely reliable software products. A crucial decision problem that the software developer encounters is to ascertain when to terminate the testing process and release the software system to the market. With the growing need to deliver quality software, a critical assessment of reliability, testing cost and release time strategy is requisite for project managers. This study examines the reliability of the software system by proposing a generalized testing-coverage-based software reliability growth model (SRGM) that incorporates the effect of testing efforts and a change point. Moreover, a strategic software time-to-market policy based on cost-reliability criteria is suggested.

Design/methodology/approach

The fault detection process is modeled as a composite function of testing coverage, testing efforts and the continuation time of the testing process. To assimilate factual scenarios, the current research also captures the influence of software users, referred to as reporters, on the fault detection process. Thus, this study models the reliability growth phenomenon by integrating the number of reporters and the number of instructions executed in the field environment. Besides, it is presumed that managers release the software early to capture maximum market share and continue the testing process for an added period in the user environment. Multiattribute utility theory (MAUT) is applied to solve the optimization model with release time and testing termination time as the two decision variables.
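
As a rough illustration of the MAUT step, the sketch below combines normalized reliability and cost utilities for candidate (release time, testing-stop time) pairs and picks the pair with the highest additive utility. The exponential mean value function, the attribute weights and all cost constants are assumptions for demonstration, not the authors' calibrated model.

```python
# Toy MAUT sketch under assumed forms and constants; not the authors' model.
import math

a, b = 100.0, 0.05                      # assumed total faults and detection rate
c_test, c_fault = 50.0, 200.0           # assumed testing and residual-fault unit costs

def m(t):
    # Assumed exponential (Goel-Okumoto-type) mean value function m(t) = a(1 - e^{-bt})
    return a * (1 - math.exp(-b * t))

def utility(release, stop, w_rel=0.6, w_cost=0.4):
    reliability = m(stop) / a                            # fraction of faults removed
    cost = c_test * stop + c_fault * (a - m(stop))       # testing + residual-fault cost
    u_rel = reliability                                  # already normalized to [0, 1]
    u_cost = 1 - cost / (c_test * 200 + c_fault * a)     # normalize over the search grid
    penalty = 0.001 * max(0, stop - release)             # cost of testing after release
    return w_rel * u_rel + w_cost * u_cost - penalty     # additive two-attribute utility

best = max(((r, s) for r in range(10, 201, 5) for s in range(r, 201, 5)),
           key=lambda rs: utility(*rs))
print("release at t=%d, stop testing at t=%d" % best)
```

Because testing is presumed to continue past release, the stop time is allowed to exceed the release time, with a small assumed penalty for the added field-testing period.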

Findings

The practical applicability and performance of the proposed methodology are demonstrated through real-life software failure data. The findings of the empirical analysis have shown the superiority of the present study as compared to conventional approaches.

Originality/value

This study is the first attempt to assimilate testing coverage phenomenon in joint optimization of software time to market and testing duration.

Details

International Journal of Quality & Reliability Management, vol. 39 no. 3
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 1 October 1997

Zhonglin He, Geoff Staples, Margaret Ross, Ian Court and Keith Hazzard

Abstract

Suggests that, in order to detect and correct software defects as early as possible, identifying and generating more defect-sensitive test cases for software unit and subsystem testing is one solution. Proposes an orthogonal software testing approach based on Taguchi methods, a family of quality optimization techniques. This orthogonal approach treats the input parameters of a software unit or subsystem as design factors in an orthogonal array, and stratifies the input parameter domains into equivalence classes to form the levels of the factors. Describes how test cases are generated statistically for each trial of the factorial orthogonal experiments. The adequacy of the generated test cases can be validated by examining testing coverage metrics. The results of test case executions can be analysed to find the sensitivity of test cases for detecting defects, to generate more effective test cases in further testing, and to help locate and correct defects in the early stages of testing.
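
To make the construction concrete, here is a minimal sketch of the idea using a standard L4(2^3) Taguchi array: each column is a factor, each level selects an equivalence class of the corresponding input domain, and a representative value is sampled from that class for each trial. The factors, domains and class boundaries are invented for illustration.

```python
# Illustrative only: factors, domains and equivalence classes are invented.
import random

# Standard L4(2^3) orthogonal array (levels 0/1): across any two columns,
# every combination of levels appears exactly once.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

# Each input parameter's domain is stratified into two equivalence classes;
# the array level selects a class and a representative value is sampled from it.
factors = {
    "size":  [range(0, 100), range(100, 10_000)],
    "mode":  [["read"], ["write", "append"]],
    "delay": [range(0, 10), range(10, 1_000)],
}

def generate_test_cases():
    names = list(factors)
    return [{name: random.choice(list(factors[name][level]))
             for name, level in zip(names, trial)}
            for trial in L4]

for case in generate_test_cases():
    print(case)   # four trials give pairwise coverage of all factor levels
```

Four trials thus exercise every pairwise combination of factor levels, which is the source of the approach's efficiency relative to exhaustive combination testing.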

Details

Logistics Information Management, vol. 10 no. 5
Type: Research Article
ISSN: 0957-6053

Open Access
Article
Publication date: 5 April 2023

Tomás Lopes and Sérgio Guerreiro

Abstract

Purpose

Testing business processes is crucial to assess the compliance of business process models with requirements. Automating this task optimizes testing efforts and reduces human error while also providing improvement insights for the business process modeling activity. The primary purposes of this paper are to conduct a literature review of Business Process Model and Notation (BPMN) testing and formal verification and to propose the Business Process Evaluation and Research Framework for Enhancement and Continuous Testing (bPERFECT) framework, which aims to guide business process testing (BPT) research and implementation. Secondary objectives include (1) eliciting the existing types of testing, (2) evaluating their impact on efficiency and (3) assessing the formal verification techniques that complement testing.

Design/methodology/approach

The methodology used is based on Kitchenham's (2004) original procedures for conducting systematic literature reviews.

Findings

Results of this study indicate that three distinct business process model testing types can be found in the literature: black/gray-box, regression and integration. Testing and verification approaches differ in aspects such as awareness of test data, coverage criteria and auxiliary representations used. However, most solutions pose notable hindrances, such as BPMN element limitations, that lead to limited practicality.

Research limitations/implications

The databases selected in the review protocol may have excluded relevant studies on this topic. More databases and gray literature could also be considered for inclusion in this review.

Originality/value

Three main originality aspects are identified in this study as follows: (1) the classification of process model testing types, (2) the future trends foreseen for BPMN model testing and verification and (3) the bPERFECT framework for testing business processes.

Details

Business Process Management Journal, vol. 29 no. 8
Type: Research Article
ISSN: 1463-7154

Article
Publication date: 16 January 2017

Sharif Mozumder, Michael Dempsey and M. Humayun Kabir

Abstract

Purpose

The purpose of the paper is to back-test value-at-risk (VaR) models for conditional distributions belonging to the Generalized Hyperbolic (GH) family of Lévy processes – variance gamma, normal inverse Gaussian, hyperbolic and GH – and compare their risk-management features with a traditional unconditional extreme value (EV) approach, using futures contract return data for the S&P 500, FTSE 100, DAX, Hang Seng and Nikkei 225 indices.

Design/methodology/approach

The authors apply tail-based and Lévy-based calibration to estimate the parameters of the models as part of the initial data analysis. While the authors utilize the peaks-over-threshold approach for the generalized Pareto distribution, the conditional maximum likelihood method is followed for the Lévy models. As the Lévy models do not have closed-form expressions for VaR, the authors follow a bootstrap method to determine the VaR and its confidence intervals. Finally, for back-testing, they use both static calibration (on the entire data) and dynamic calibration (on a four-year rolling window) to test the unconditional, independence and conditional coverage hypotheses, implemented with 95 and 99 per cent VaRs.
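
For readers unfamiliar with the coverage hypotheses, the sketch below shows the unconditional coverage part only: Kupiec's proportion-of-failures likelihood-ratio test on a series of VaR violations. The violation series is synthetic, and the paper's bootstrap VaR estimation for the Lévy models is not reproduced.

```python
# Synthetic data; only the unconditional coverage test is sketched here.
import math
import random

def kupiec_pof(violations, alpha=0.01):
    """Kupiec proportion-of-failures LR statistic: tests whether the observed
    VaR violation rate equals the nominal rate alpha. Chi-square(1) under H0."""
    n, x = len(violations), sum(violations)
    pi = x / n
    if pi in (0.0, 1.0):
        return float("inf")
    return -2 * (x * math.log(alpha) + (n - x) * math.log(1 - alpha)
                 - x * math.log(pi) - (n - x) * math.log(1 - pi))

random.seed(0)
hits = [1 if random.random() < 0.012 else 0 for _ in range(1000)]  # fake 99% VaR breaches
lr_uc = kupiec_pof(hits, alpha=0.01)
print("LR_uc = %.2f ->" % lr_uc, "reject H0" if lr_uc > 3.84 else "fail to reject H0")
```

The independence and conditional coverage tests extend this statistic by additionally modelling the transition probabilities between violation and non-violation days.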

Findings

Both the EV and the Lévy models provide the authors with a conservative proportion of violations for the VaR forecasts. Whether a model targets the tail or fits the entire distribution has little effect on either the VaR calculation or the model's back-testing performance.

Originality/value

To the best of the authors’ knowledge, this is the first study to explore the back-testing performance of Lévy-based VaR models. The authors conduct various calibration and bootstrap techniques to test the unconditional, independence and conditional coverage hypotheses for the VaRs.

Details

The Journal of Risk Finance, vol. 18 no. 1
Type: Research Article
ISSN: 1526-5943

Article
Publication date: 11 July 2023

Patricia J. Goldsmith

Abstract

Purpose

HR leaders and corporate benefits managers must balance organizational costs with decisions about which new tools and treatments will be covered by their employee health insurance plans. Getting it right can mean the difference between life and death for cancer patients. Most HR leaders, however, are not experts in cancer treatment and do not know how to make sure their plan benefits do not create roadblocks to treatment.

Design/methodology/approach

A total of 295 people who were diagnosed with cancer from 2019 to 2022 participated in the 2023 CancerCare Biomarker Survey, which was conducted in January 2023.

Findings

CancerCare’s 2023 survey of cancer patients found that biomarker testing helped doctors tailor therapy for nearly all the patients (93%) whose cancers were tested over the past three years. Two in 10 cancer patients (20%) avoided unnecessary chemotherapy and/or radiation and one in 10 (10%) became eligible for a clinical trial because of biomarker testing.

Research limitations/implications

Biomarker testing is a necessary tool in the advancing world of precision cancer treatment. Despite the significant and demonstrable benefits to surveyed patients, three out of 10 respondents (29%) who received biomarker testing did not have the test covered by their insurance. Some survey respondents reported that biomarker test coverage was originally denied and they had to fight to get it covered. Others had to find ways to pay out-of-pocket or seek financial assistance to cover the cost of the testing.

Practical implications

Unfortunately, health insurance plans often limit cancer patients’ access to recommended biomarker testing, impose burdensome prior authorization (PA) protocols or require unaffordable cost-sharing, which can prevent or delay cancer patients’ access to optimal treatments. PA, a significant source of roadblocks to timely testing and treatment, was required by a quarter (25%) of the cancer patients surveyed.

Originality/value

Biomarker testing is increasingly a health care equity issue and there are significant gaps in the rate of biomarker testing between black and white lung and colorectal cancer patients, which can lead to disparities in clinical trial participation and hinder access to the most effective treatments. A key way to address these barriers is to broaden insurance coverage of biomarker testing, as recommended by medical experts.

Details

Strategic HR Review, vol. 22 no. 4
Type: Research Article
ISSN: 1475-4398

Article
Publication date: 30 October 2019

Vibha Verma, Sameer Anand and Anu Gupta Aggarwal

Abstract

Purpose

The purpose of this paper is to identify and quantify the key components of the overall cost of software development when warranty coverage is given by the developer. The authors also study the impact of imperfect debugging on the optimal release time, warranty policy and development cost, which shows that it is important for developers to control the parameters that cause a sharp increase in cost.

Design/methodology/approach

An optimization problem is formulated to minimize the software development cost by considering an imperfect fault removal process, fault generation at a constant rate and an environmental factor that differentiates the operational phase from the testing phase. Another optimization problem under perfect debugging conditions, i.e. without error generation, is constructed for comparison. These optimization models are solved in MATLAB, and their solutions provide insights into the degree of impact of imperfect debugging on the optimal policies with respect to software release time and warranty time.
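
The cost function itself is not given in the abstract; the sketch below is a toy version of the trade-off (in Python rather than MATLAB), with a testing cost, a warranty-period debugging cost and a penalty for post-warranty field failures, minimized over release time T and warranty length W by grid search. The mean value function and all rates are assumptions.

```python
# Assumed functional forms and rates throughout; Python stands in for MATLAB.
import math

a, b, alpha = 120.0, 0.04, 0.03      # faults, detection rate, error-generation rate

def m(t):
    """Assumed mean faults removed by time t under imperfect debugging."""
    return (a / (1 - alpha)) * (1 - math.exp(-(1 - alpha) * b * t))

def total_cost(T, W, c_test=40.0, c_warr=300.0, c_field=1200.0, horizon=400.0):
    testing = c_test * T                          # cost of testing until release at T
    warranty = c_warr * (m(T + W) - m(T))         # failures fixed free during warranty W
    field = c_field * (m(horizon) - m(T + W))     # post-warranty failures cost the most
    return testing + warranty + field

best = min(((T, W) for T in range(10, 201, 5) for W in range(5, 101, 5)),
           key=lambda tw: total_cost(*tw))
print("optimal release time T=%d, warranty length W=%d" % best)
```

Even in this toy form, the tension the paper studies is visible: a longer warranty shifts failures from the expensive field bucket to the cheaper warranty bucket, but at a growing debugging cost.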

Findings

A real-life fault data set from a radar system is used to study, via sensitivity analysis, the impact of various cost factors on the release and warranty policy. If firms provide a warranty for a longer period, they may bear losses due to the increased debugging cost of the larger number of failures occurring during the warranty period; but if the warranty is not provided for a sufficient time, it may not act as a sufficient hedge against field failures.

Originality/value

Every firm is fighting to remain competitive and expand market share by offering the latest technology-based products and using innovative marketing strategies. A warranty is one such strategic tool to promote the product among the masses and develop a sense of quality in the user's mind. In this paper, the failures encountered during development and after software release are considered to model the failure process.

Details

International Journal of Quality & Reliability Management, vol. 37 no. 9/10
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 21 August 2017

Nadi Serhan Aydın

Abstract

Purpose

This paper aims to introduce a model-based stress-testing methodology for Islamic finance products. The importance of stress testing was indeed clearly underlined by the adverse developments in the global finance industry. One of the key takeaways was the need to strengthen the coverage of the capital framework. Cognisant of this fact, Basel III encapsulates provisions to enhance the financial sector’s ability to withstand shocks arising from possible stress events, thereby reducing adverse spillovers into the real economy. Similarly, the Islamic Financial Services Board requires Islamic financial institutions to run stress tests as part of capital planning.

Design/methodology/approach

The authors perform thorough backtests on Islamic and conventional portfolios under widely used risk models, each characterised by an underlying conditional volatility framework and distribution, to identify the most suitable risk model specification. Using an appropriate initial shock and estimation window size, the paper also conducts a model-based stress test to examine whether the stress losses estimated by the selected models compare favourably with the historical shocks.
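
As a stripped-down illustration of the conditional volatility machinery involved, the sketch below filters a synthetic return series through a GARCH(1,1) variance recursion with assumed, fixed parameters (the paper estimates its models rather than fixing them) and counts days whose loss exceeds the one-day 99% VaR under a normal quantile.

```python
# Fixed, assumed GARCH(1,1) parameters on synthetic returns, for illustration.
from statistics import NormalDist
import random

random.seed(1)
returns = [random.gauss(0, 0.01) for _ in range(500)]   # synthetic daily returns

omega, alpha, beta = 1e-6, 0.08, 0.90                   # assumed GARCH(1,1) parameters
z01 = NormalDist().inv_cdf(0.01)                        # 1% left-tail normal quantile

var_t = omega / (1 - alpha - beta)                      # start at unconditional variance
breaches = 0
for r in returns:
    var99 = -z01 * var_t ** 0.5                         # one-day 99% VaR (long position)
    if r < -var99:                                      # loss beyond the VaR threshold
        breaches += 1
    var_t = omega + alpha * r * r + beta * var_t        # GARCH(1,1) variance recursion

print("99%% VaR breaches: %d of %d days" % (breaches, len(returns)))
```

A model-based stress test along the paper's lines would re-run such a filter from an appropriately shocked initial variance and compare the estimated losses with historical stress episodes.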

Findings

The results suggest that the model-based framework, when combined with an appropriate risk model and distribution, can successfully reproduce past stress periods. The conditional empirical risk model is the most effective one in both long and short portfolio cases – particularly when combined with a long-enough estimation window. The relative performance of normal vs heavy-tailed distributions and symmetric vs asymmetric risk models, on the other hand, is highly dependent on whether the portfolio is long or short. Finally, the authors find that the Islamic portfolio is generally associated with lower historical stress losses as compared to the conventional portfolio.

Originality/value

The model-based framework eliminates some of the key problems associated with traditional scenario-based approaches and is easily adaptable to Islamic finance.

Details

International Journal of Islamic and Middle Eastern Finance and Management, vol. 10 no. 3
Type: Research Article
ISSN: 1753-8394

Article
Publication date: 5 September 2021

Li Gao, Jinnan Song, Jianxiao Guo and Jiajuan Liang

Abstract

Purpose

Share pledge is a popular way to raise funds in China, but it aggravates information asymmetry. As an indispensable information intermediary in the financial market, media coverage affects asset price and pricing efficiency and impacts information asymmetry. This study aims to explore the governance role of media coverage as an information intermediary in the share pledge context in China.

Design/methodology/approach

Moderating effect and mediating effect analyses are the primary methods used to test the governance role of media coverage. An ordinary least squares model is used to test the relationship between share pledges and market performance, and then the moderating effect of media coverage on the corporate market value of pledging firms. Accounting earnings value relevance models are used in a mediating effect analysis to test the path through which media coverage affects firm market value. Finally, subgroup tests are used to verify the heterogeneity of the moderating effect of media coverage.
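
A moderating effect of this kind is typically estimated with an interaction term. The sketch below is a generic illustration on simulated data, not the authors' specification or data: the coefficient on the pledge-by-media interaction plays the role of the moderating effect, and its sign is seeded to echo the direction reported in the findings.

```python
# Simulated data and invented variable names; not the authors' specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "pledge": rng.uniform(0, 1, n),      # share pledge ratio
    "media": rng.poisson(5, n),          # count of media reports about the firm
})
# Outcome built with a negative pledge-by-media interaction for illustration.
df["tobinq"] = (1 + 0.8 * df["pledge"] + 0.02 * df["media"]
                - 0.1 * df["pledge"] * df["media"] + rng.normal(0, 0.5, n))

model = smf.ols("tobinq ~ pledge * media", data=df).fit()
print(model.params)   # the 'pledge:media' coefficient is the moderating effect
```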

Findings

In the context of share pledges in China, the higher the share pledge ratio, the higher the market value of listed firms, which verifies controlling shareholders' motivation to avoid the transfer of control rights and their motivation for tunneling. Media coverage has a significant negative moderating effect on the relationship between the share pledge ratio and corporate value and has a significant impact on the accounting earnings value relevance of share pledge firms. From the perspective of long-term earnings, media coverage reduces the market performance of share pledge firms by reducing the value relevance of accounting earnings information; from the short-term price perspective, it does so by improving the value relevance of accounting earnings information. Furthermore, the moderating effect of media coverage is more significant in state-owned share pledge firms and in firms with low information transparency and low information disclosure quality.

Research limitations/implications

This paper does not distinguish between modes of news dissemination, nor does it consider the impact of non-pledge-related media coverage. It also does not consider factors other than accounting information value relevance when exploring how media coverage affects corporate market value. Share pledge firms should use the media for publicity, play a role in media governance, actively improve their information disclosure quality, strengthen communication with investors and fundamentally reduce information asymmetry.

Practical implications

This paper diversifies the governance choices available to share pledge firms and has important implications for firms, investors, information intermediaries and regulators. Media reports play an increasingly important role today, and reports and predictions of major events may profoundly affect investors' decisions. Although media reports can partly compensate for the weak accounting information disclosure of equity pledge companies, this is not a long-term strategy: equity pledge companies should not only make use of the media for publicity and media governance but also actively improve their information disclosure quality.

Originality/value

This paper focuses on share pledge firms to carry out in-depth research. In exploring the influence mechanism of share pledges, the authors establish the importance of media governance. The paper expands the literature on the economic consequences of share pledges and provides empirical evidence for the media governance of share pledge firms. It innovatively demonstrates the governance role of media coverage from the perspective of accounting information value relevance; the main innovation is the analysis, from both long- and short-term perspectives, of the influence of media coverage on the value relevance of accounting earnings. The heterogeneity analysis of media coverage further reflects the depth and practical significance of this study.

Book part
Publication date: 19 November 2012

Sabrina Khanniche

Abstract

Purpose – This chapter investigates hedge fund market risk. It goes beyond the traditional measures of risk, which underestimate it, by introducing a method more appropriate to hedge funds. The chapter demonstrates that daily hedge fund return distributions are asymmetric and leptokurtic. Furthermore, the volatility clustering phenomenon and the existence of ARCH effects show that hedge fund volatility varies through time. These features suggest modelling hedge fund volatility using symmetric (GARCH) and asymmetric (EGARCH and TGARCH) models, which are used to evaluate a 1-day-ahead value at risk (VaR).

Methodology/Approach – The conditional variances are estimated under the assumption that the residuals εt follow either the normal or the Student's t distribution. The fitted conditional variance is used to forecast the 1-day-ahead VaR, and the estimates are compared with the Gaussian, Student's t and modified VaR. In total, 12 VaRs are computed: those based on the standard deviation, computed with the normal, Student's t and Cornish–Fisher quantiles, and those based on conditional volatility models (GARCH, TGARCH and EGARCH), computed with the same quantiles.
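
The "modified VaR" above rests on the Cornish–Fisher expansion, which adjusts a normal quantile for the sample's skewness and excess kurtosis. The sketch below shows that adjustment on a synthetic return series; the GARCH conditional volatility layer is omitted and all data are invented.

```python
# Synthetic returns; the GARCH volatility layer from the chapter is omitted.
from statistics import NormalDist
import random

random.seed(2)
rets = [random.gauss(0, 0.01) - 0.05 * (random.random() < 0.02) for _ in range(1000)]

n = len(rets)
mu = sum(rets) / n
sd = (sum((r - mu) ** 2 for r in rets) / n) ** 0.5
skew = sum((r - mu) ** 3 for r in rets) / (n * sd ** 3)
exkurt = sum((r - mu) ** 4 for r in rets) / (n * sd ** 4) - 3

z = NormalDist().inv_cdf(0.01)                          # 99% left-tail normal quantile
z_cf = (z + (z ** 2 - 1) * skew / 6                     # Cornish-Fisher expansion of z
        + (z ** 3 - 3 * z) * exkurt / 24
        - (2 * z ** 3 - 5 * z) * skew ** 2 / 36)

print("normal VaR: %.4f, modified VaR: %.4f" % (-(mu + z * sd), -(mu + z_cf * sd)))
```

With negative skewness and fat tails, the Cornish–Fisher quantile pushes the VaR deeper into the loss tail than the normal quantile, which is exactly why the modified VaR suits asymmetric, leptokurtic hedge fund returns.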

Findings – The results demonstrate that VaR models based on the normal quantile underestimate risk, while those based on the Student's t and Cornish–Fisher quantiles appear to be more relevant measures. GARCH-type VaRs are very sensitive to changes in the return process. Back-testing results show that the choice of volatility forecasting model matters: the VaR based on the standard deviation fails the appropriate tests and is therefore not a relevant measure of hedge fund risk, whereas GARCH-, TGARCH- and EGARCH-type VaRs pass the back-tests successfully most of the time. The quantile used has an even more significant impact on the relevance of the VaR models: GARCH-type VaRs computed with the Student's t and especially the Cornish–Fisher quantiles lead to better results, consistent with Monteiro (2004) and Pochon and Teïletche (2006).

Originality/Value of chapter – A large set of GARCH-type models is considered for estimating hedge fund volatility, leading to numerous VaR evaluations. These estimates matter because public savings, placed under institutional investor management and then delegated to hedge funds, are at stake, so adequate risk management is required. Another contribution of this chapter is the use of daily data to measure the risks of all hedge fund strategies.

Details

Recent Developments in Alternative Finance: Empirical Assessments and Economic Implications
Type: Book
ISBN: 978-1-78190-399-5
