Search results

1 – 10 of 143
Article
Publication date: 7 October 2014

Maria Filipa Mourão, Ana Cristina Braga and Pedro Nuno Oliveira

The purpose of this paper is to use the kernel method to produce a smoothed receiver operating characteristic (ROC) curve and to show how a baby's gender can influence the Clinical Risk…

Abstract

Purpose

The purpose of this paper is to use the kernel method to produce a smoothed receiver operating characteristic (ROC) curve and to show how a baby's gender can influence the Clinical Risk Index for Babies (CRIB) scale with respect to survival risk.

Design/methodology/approach

To obtain a ROC curve conditioned on covariates, two methods may be followed: first, indirect adjustment, in which the covariate is modeled within each group and a modified distribution is then generated; second, direct smoothing, in which covariate effects are modeled within the ROC curve itself. To verify whether newborn gender and weight affect classification according to the CRIB scale, the authors use the direct method, with a sample of 160 Portuguese babies.
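
To make the direct approach concrete, here is a minimal sketch of a kernel-smoothed ROC curve built with a Gaussian kernel and the bandwidth h = 0.1 mentioned in the findings; the two score samples are synthetic stand-ins, not the CRIB data used in the paper.

```python
# Kernel-smoothed ROC curve: smooth each group's score distribution with a
# Gaussian kernel, then trace (FPR, TPR) over a grid of thresholds.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
survived = rng.normal(2.0, 1.0, 120)  # hypothetical CRIB-like scores
died = rng.normal(4.0, 1.2, 40)

def smooth_sf(sample, grid, h=0.1):
    """Kernel-smoothed survivor function P(X > t) at each grid point."""
    return 1.0 - norm.cdf((grid[:, None] - sample[None, :]) / h).mean(axis=1)

grid = np.linspace(-2, 9, 500)   # candidate thresholds
fpr = smooth_sf(survived, grid)  # false-positive rate at each threshold
tpr = smooth_sf(died, grid)      # true-positive rate at each threshold
auc = -np.trapz(tpr, fpr)        # area under the smoothed ROC curve
print(f"smoothed AUC ~ {auc:.3f}")
```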

Findings

The smoothing applied to the ROC curves indicates that the curve's original shape does not change when a bandwidth of h = 0.1 is used. Furthermore, gender seems to be a significant covariate in predicting baby deaths: a higher area under the curve (AUC) was obtained when conditioning on female babies.

Practical implications

The challenge is to determine whether gender discriminates between dead and surviving babies.

Originality/value

The authors constructed empirical ROC curves for CRIB data and empirical ROC curves conditioned on gender, calculated the corresponding AUCs and tested the difference between them. They also constructed smoothed ROC curves under both approaches.

Details

International Journal of Health Care Quality Assurance, vol. 27 no. 8
Type: Research Article
ISSN: 0952-6862

Article
Publication date: 11 March 2019

Vivien Brunel

In machine learning applications, and in credit risk modeling in particular, model performance is usually measured by using cumulative accuracy profile (CAP) and receiver…

Abstract

Purpose

In machine learning applications, and in credit risk modeling in particular, model performance is usually measured using cumulative accuracy profile (CAP) and receiver operating characteristic (ROC) curves. The purpose of this paper is to use the statistics of the CAP curve to provide a new method for calibrating credit probability-of-default (PD) curves that does not rest on the arbitrary choices commonly made in the industry.

Design/methodology/approach

The author maps CAP curves to a ball–box problem and uses statistical physics techniques to compute the statistics of the CAP curve from which the author derives the shape of PD curves.

Findings

This approach leads to a new shape for PD curves that has not yet been considered in the literature: the Fermi–Dirac function, a two-parameter function depending on the target default rate of the portfolio and the target accuracy ratio of the scoring model. The author shows that this PD curve shape is likely to outperform the logistic PD curve that practitioners often use.
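
As a minimal sketch of the shape in question, assuming scores s in [0, 1] (higher meaning safer) and the two-parameter Fermi–Dirac form PD(s) = 1/(1 + exp((s − mu)/T)); the parameter values below are illustrative placeholders, not the paper's calibration to the target default rate and accuracy ratio.

```python
# Fermi-Dirac-shaped PD curve: a two-parameter, decreasing function of score.
# mu and T are illustrative placeholders, not the paper's calibrated values.
import numpy as np

def fermi_dirac_pd(s, mu, T):
    """Fermi-Dirac function: PD decreasing in score s for T > 0."""
    return 1.0 / (1.0 + np.exp((s - mu) / T))

s = np.linspace(0.0, 1.0, 11)
pd_curve = fermi_dirac_pd(s, mu=0.35, T=0.08)
print(np.round(pd_curve, 4))
print(f"portfolio mean PD ~ {pd_curve.mean():.3f}")  # matched to a target in practice
```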

Practical implications

This paper has practical implications for practitioners in banks. The author shows that the logistic function, which is widely used, in particular in retail banking, should be replaced by the Fermi–Dirac function. This has an impact on pricing, granting policy and risk management.

Social implications

Measuring credit risk accurately benefits the bank and its customers alike: granting is based on a fair evaluation of risk, and pricing is set accordingly. It also gives supervisors better tools to assess the risk of the bank, and of the financial system as a whole, through stress testing exercises.

Originality/value

The author suggests that practitioners should stop using logistic PD curves and should adopt the Fermi–Dirac function to improve the accuracy of their credit risk measurement.

Details

The Journal of Risk Finance, vol. 20 no. 2
Type: Research Article
ISSN: 1526-5943

Article
Publication date: 1 February 1989

Majid Jaraiedi and Wafik H. Iskander

Signal Detection Theory (SDT) has recently been used to evaluate the performance of imperfect inspectors. The SDT model is based on a priori probabilities and perceived payoffs and…

Abstract

Signal Detection Theory (SDT) has recently been used to evaluate the performance of imperfect inspectors. The SDT model is based on a priori probabilities and perceived payoffs and penalties to study inspectors' behaviour. In this article, Bayes' theorem is used to compute posterior probabilities of the two types of inspection error. These posterior probabilities give rise to the definition of Receiver Analysis Curves (RAC), which depict the "after the fact" consequences of inspection error. A cost model is also developed that reflects the true benefits and costs of inspection accuracy to the organisation.
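
A worked example of that Bayes step, using made-up numbers for the prior defect rate and the inspector's error rates (the article's own probabilities and payoffs are not reproduced here):

```python
# Posterior probabilities of the two inspection error types via Bayes' theorem.
# All rates below are hypothetical, chosen only to illustrate the computation.
p_defect = 0.05                # a priori probability an item is defective
p_reject_given_defect = 0.90   # inspector's hit rate (assumed)
p_reject_given_good = 0.10     # inspector's false-alarm rate (assumed)

p_reject = (p_reject_given_defect * p_defect
            + p_reject_given_good * (1 - p_defect))

# Posterior probability a rejected item was actually good (after-the-fact type I)
p_good_given_reject = p_reject_given_good * (1 - p_defect) / p_reject
# Posterior probability an accepted item was actually defective (after-the-fact type II)
p_defect_given_accept = ((1 - p_reject_given_defect) * p_defect
                         / (1 - p_reject))
print(f"P(good | rejected)   = {p_good_given_reject:.3f}")
print(f"P(defect | accepted) = {p_defect_given_accept:.3f}")
```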

Details

International Journal of Quality & Reliability Management, vol. 6 no. 2
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 13 November 2007

Stylianos Z. Xanthopoulos and Christos T. Nakas

The purpose of this article is to introduce Receiver Operating Characteristic (ROC) surfaces and hyper‐surfaces within a banking context as natural generalizations of the ROC curve.

Abstract

Purpose

The purpose of this article is to introduce Receiver Operating Characteristic (ROC) surfaces and hyper‐surfaces within a banking context as natural generalizations of the ROC curve.

Design/methodology/approach

Nonparametric ROC analysis based on U‐statistics theory was used.

Findings

Application of the proposed methodology on data from a small size Greek bank illustrates the usefulness of ROC analysis for scoring systems assessment. The area under the ROC curve and the volume under the ROC surface and hyper‐surface are useful diagnostic indices for the assessment of credit rating systems and scorecards. The notion of statistical significance is not adequate for the evaluation of the loan granting strategy of a financial institution.
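
As an illustration of the nonparametric approach, here is a minimal sketch of the U-statistic estimate of the volume under a three-class ROC surface (VUS); the three score samples are synthetic, not the Greek bank data used in the article.

```python
# U-statistic estimate of the volume under a ROC surface (VUS) for three
# ordered classes: the proportion of cross-class triples correctly ordered.
import numpy as np

rng = np.random.default_rng(1)
low, mid, high = (rng.normal(m, 1.0, 60) for m in (0.0, 1.0, 2.0))

# VUS = P(X_low < X_mid < X_high), estimated over all cross-class triples
vus = np.mean((low[:, None, None] < mid[None, :, None])
              & (mid[None, :, None] < high[None, None, :]))
print(f"VUS ~ {vus:.3f}  (1/6 is the chance level for three classes)")
```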

Originality/value

This article will be of value to financial institutions during the process of evaluation/validation of rating models.

Details

The Journal of Risk Finance, vol. 8 no. 5
Type: Research Article
ISSN: 1526-5943

Open Access
Article
Publication date: 12 June 2017

Aida Krichene

Loan default risk or credit risk evaluation is important to financial institutions which provide loans to businesses and individuals. Loans carry the risk of being defaulted. To…

Abstract

Purpose

Loan default risk or credit risk evaluation is important to financial institutions that provide loans to businesses and individuals. Loans carry the risk of default. To understand the risk levels of credit users (corporations and individuals), credit providers (bankers) normally collect vast amounts of information on borrowers. Statistical predictive analytic techniques can be used to analyse or determine the risk levels involved in loans. This paper aims to address the question of default prediction for short-term loans at a Tunisian commercial bank.

Design/methodology/approach

The authors used a database of 924 files of credits granted to industrial Tunisian companies by a commercial bank in 2003, 2004, 2005 and 2006. The naive Bayesian classifier algorithm was applied, and the results show that the correct classification rate is of the order of 63.85 per cent. The default probability is explained by variables measuring working capital, leverage, solvency, profitability and cash flow indicators.

Findings

The results of the validation test show that the correct classification rate is of the order of 58.66 per cent; nevertheless, type I and type II errors remain relatively high, at 42.42 and 40.47 per cent, respectively. A receiver operating characteristic (ROC) curve is plotted to evaluate the performance of the model. The result shows that the area under the curve is of the order of 69 per cent.
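
A minimal sketch of this evaluation pipeline (naive Bayes, correct classification rate, confusion matrix, ROC AUC), assuming scikit-learn and synthetic data in place of the bank's 924 credit files:

```python
# Naive Bayes credit-default classifier evaluated by accuracy, confusion
# matrix (type I/II errors) and ROC AUC, on synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score

X, y = make_classification(n_samples=924, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = GaussianNB().fit(X_tr, y_tr)
pred = model.predict(X_te)
proba = model.predict_proba(X_te)[:, 1]

print("correct classification rate:", accuracy_score(y_te, pred))
print("confusion matrix:\n", confusion_matrix(y_te, pred))
print("AUC:", roc_auc_score(y_te, proba))
```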

Originality/value

The paper highlights the fact that the Tunisian central bank obliged all commercial banks to conduct a survey to collect qualitative data for better credit rating of borrowers.

Details

Journal of Economics, Finance and Administrative Science, vol. 22 no. 42
Type: Research Article
ISSN: 2077-1886

Keywords

ROC curve, Risk assessment, Default risk, Banking sector, Naive Bayesian classifier algorithm

Article
Publication date: 17 October 2008

Xubiao He, Pu Gong and Chunxun Xie

The purpose of this paper is to simulate internal credit ratings based on stock market data and gain the credit information about listed companies.

Abstract

Purpose

The purpose of this paper is to simulate internal credit ratings based on stock market data and gain the credit information about listed companies.

Design/methodology/approach

According to the concept of default distance, the default probability of listed companies is obtained from the stock price process based on a generalized autoregressive conditional heteroscedasticity-in-mean (GARCH-M) model with the generalized error distribution; credit ratings based on the default probability are then built. Moreover, the model's validity is demonstrated using statistical tests and the nonparametric receiver operating characteristic (ROC) curve method.
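
A minimal sketch of the default-distance step, assuming the standard Merton-style formula; in the paper the inputs come from the GARCH-M model with GED errors, whereas here they are simply assumed:

```python
# Default distance and implied default probability, Merton-style.
# All inputs are hypothetical placeholders for estimated quantities.
import math
from scipy.stats import norm

V = 120.0      # market value of assets (hypothetical)
D = 80.0       # default point, e.g. book liabilities (hypothetical)
mu = 0.06      # expected asset return (assumed)
sigma = 0.25   # asset volatility (a GARCH-M estimate in the paper)
T = 1.0        # horizon in years

dd = (math.log(V / D) + (mu - 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
pd = norm.cdf(-dd)   # default probability implied by the default distance
print(f"default distance = {dd:.2f}, PD = {pd:.4f}")
```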

Findings

Application of the proposed methodology to data from the Chinese stock market illustrates that the default probability model can identify the credit risk of listed companies effectively, according to both the statistical tests and the nonparametric ROC curve method. The credit ratings simulated from default probabilities are positively correlated with the corresponding results from Xinhua Far East China Ratings.

Originality/value

The default probability underlying the internal credit ratings can reflect changes in the credit quality of listed companies according to market information. For listed companies, especially those possibly subject to accounting manipulation, the ratings will help investors and supervisors obtain credit information in time.

Details

Kybernetes, vol. 37 no. 9/10
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 17 March 2023

Le Wang, Liping Zou and Ji Wu

This paper aims to use artificial neural network (ANN) methods to predict stock price crashes in the Chinese equity market.

Abstract

Purpose

This paper aims to use artificial neural network (ANN) methods to predict stock price crashes in the Chinese equity market.

Design/methodology/approach

Three ANN models are developed and compared with the logistic regression model.

Findings

Results from this study conclude that the ANN approaches outperform the traditional logistic regression model, with ANNs having fewer hidden layers showing superior performance compared to ANNs with multiple hidden layers. Results from the ANN approach also reveal that foreign institutional ownership, financial leverage, weekly average return and market-to-book ratio are important variables when predicting stock price crashes, consistent with results from the traditional logistic model.
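
A minimal sketch of such a comparison, assuming scikit-learn and synthetic data in place of the Chinese crash-risk sample; the single hidden layer echoes the finding that shallower ANNs performed best.

```python
# Shallow ANN versus logistic regression, compared by ROC AUC.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)   # one hidden layer
logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

for name, m in [("ANN", ann), ("logistic", logit)]:
    auc = roc_auc_score(y_te, m.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```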

Originality/value

First, the ANN framework is used in this study to forecast stock price crashes and is compared with the traditional logistic model in China, the world's largest emerging market. Second, receiver operating characteristic curves and the area under the ROC curve are used to evaluate the forecasting performance of the ANNs against the traditional approaches, in addition to some traditional performance evaluation methods.

Details

Pacific Accounting Review, vol. 35 no. 4
Type: Research Article
ISSN: 0114-0582

Article
Publication date: 1 April 2001

Graham Partington, Philip Russel, Max Stevenson and Violet Torbey

Reviews previous research on predicting financial distress and the effects of US Chapter 11 bankruptcy (C11B); and explains how survival analysis and Cox’s (1972) proportional…

Abstract

Reviews previous research on predicting financial distress and the effects of US Chapter 11 bankruptcy (C11B), and explains how survival analysis and Cox's (1972) proportional hazards model can be used to estimate the financial outcome for shareholders in C11B. Reduces a previous data set (Russel et al., 1999) of 154 companies entering C11B between 1984 and 1993 to 59 (54 of which gave no value to shareholders) and estimates two models to predict this outcome: one based on firm-specific covariates only, the other adding market-wide covariates. Explains the methodology, presents the results and uses receiver operating characteristic curves to compare the predictive accuracy of the two models. Finds little difference between the two and suggests using the simpler model. Briefly summarizes the variables most useful in predicting the value outcomes of C11B for shareholders and recognizes the limitations of the study.
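
A minimal sketch of fitting a Cox proportional hazards model, assuming the lifelines package and a single made-up leverage covariate in place of the study's firm-specific and market-wide covariates:

```python
# Cox proportional hazards fit on a tiny made-up survival data set.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "duration": [12, 30, 45, 7, 60, 24, 18, 50],   # months under observation
    "event":    [1, 1, 0, 1, 0, 1, 1, 0],          # 1 = outcome observed
    "leverage": [0.9, 0.7, 0.4, 1.1, 0.3, 0.8, 0.95, 0.5],  # hypothetical covariate
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
cph.print_summary()   # hazard ratio for the hypothetical leverage covariate
```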

Details

Managerial Finance, vol. 27 no. 4
Type: Research Article
ISSN: 0307-4358

Book part
Publication date: 26 April 2011

Kajal Lahiri, Hany A. Shawky and Yongchen Zhao

The main purpose of this chapter is to estimate a model for hedge fund returns that will endogenously generate failure probabilities using panel data where sample attrition due to…

Abstract

The main purpose of this chapter is to estimate a model for hedge fund returns that will endogenously generate failure probabilities using panel data where sample attrition due to fund failures is a dominant feature. We use the Lipper (TASS) hedge fund database, which includes all live and defunct hedge funds over the period January 1994 through March 2009, to estimate failure probabilities for hedge funds. Our results show that hedge fund failure prediction can be substantially improved by accounting for selectivity bias caused by censoring in the sample. After controlling for failure risk, we find that capital flow, lockup period, redemption notice period, and fund age are significant factors in explaining hedge fund returns. We also show that for an average hedge fund, failure risk increases substantially with age. Surprisingly, a 5-year-old fund on average has only a 65% survival rate.
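
For the survival-rate idea only (this is not the chapter's selectivity-corrected panel model), a minimal Kaplan–Meier sketch with made-up fund ages and failure flags, assuming the lifelines package:

```python
# Kaplan-Meier estimate of a fund survival curve by age; data are made up.
from lifelines import KaplanMeierFitter

ages_years = [1.0, 2.5, 3.0, 4.5, 5.0, 6.0, 7.5, 8.0, 2.0, 9.0]
failed     = [1,   0,   1,   1,   0,   1,   0,   1,   1,   0]  # 1 = fund failed

kmf = KaplanMeierFitter()
kmf.fit(ages_years, event_observed=failed)
print(kmf.survival_function_)             # estimated S(t) by fund age
print("S(5) ~", float(kmf.predict(5.0)))  # survival probability at age 5
```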

Details

Research in Finance
Type: Book
ISBN: 978-0-85724-541-0

Article
Publication date: 13 September 2019

Guru Prasad Bhandari, Ratneshwer Gupta and Satyanshu Kumar Upadhyay

Software fault prediction is an important concept that can be applied at an early stage of the software life cycle. Effective prediction of faults may improve the reliability and…

Abstract

Purpose

Software fault prediction is an important concept that can be applied at an early stage of the software life cycle. Effective prediction of faults may improve the reliability and testability of software systems. As service-oriented architecture (SOA)-based systems become more and more complex, the interactions between participating services increase in frequency, and the component services may generate enormous amounts of reports and fault information. Although considerable research has focused on developing fault-proneness prediction models in service-oriented systems (SOS) using machine learning (ML) techniques, there has been little work on assessing how effective source code metrics are for fault prediction. The paper aims to discuss this issue.

Design/methodology/approach

In this paper, the authors propose a fault prediction framework to investigate fault prediction in SOS using metrics of web services. The effectiveness of the model is explored by applying six ML techniques, namely, Naïve Bayes, Artificial Neural Networks (ANN), Adaptive Boosting (AdaBoost), decision tree, Random Forests and Support Vector Machine (SVM), along with five feature selection techniques to extract the essential metrics. The authors use accuracy, precision, recall, f-measure and receiver operating characteristic (ROC) curves with area under the curve (AUC) values as performance measures.
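
A minimal sketch of one cell of this experimental grid, assuming scikit-learn: a feature-selection step followed by one of the named classifiers, scored by cross-validated AUC; the data are synthetic stand-ins for the web-service metrics.

```python
# Feature selection + Random Forest pipeline, evaluated by cross-validated AUC.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, n_informative=6,
                           random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=6)),        # keep the 6 best metrics
    ("clf", RandomForestClassifier(random_state=0)),
])
aucs = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", aucs.mean().round(3))
```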

Findings

The experimental results show that the proposed system can classify the fault-proneness of web services, whether the service is faulty or non-faulty, as a binary-valued output automatically and effectively.

Research limitations/implications

One possible threat to internal validity in the study is the unknown effect of undiscovered faults. Specifically, the authors injected possible faults into the classes using the Java C3.0 tool, and only fixed faults were injected into the classes. However, considering the Java C3.0 community of development, testing and use, the authors can generalize that the undiscovered faults should be few and should have little impact on the results presented in this study, and that the results may be limited to the investigated complexity metrics and the ML techniques used.

Originality/value

In the literature, only a few studies directly concentrate on metrics-based fault-proneness prediction of SOS using ML techniques. Most contributions concern fault prediction in general systems rather than in SOS, and a majority of them consider reliability, changeability and maintainability using logging/history-based approaches and mathematical modeling, rather than fault prediction in SOS using metrics. Thus, the authors extend the above contributions by applying supervised ML techniques to web service metrics and measuring their capability by employing fault injection methods.

Details

Data Technologies and Applications, vol. 53 no. 4
Type: Research Article
ISSN: 2514-9288
