Search results

1 – 10 of 21
Article
Publication date: 3 October 2023

Jie Lu, Desheng Wu, Junran Dong and Alexandre Dolgui

Abstract

Purpose

Credit risk evaluation is a crucial task for banks and non-bank financial institutions to support decision-making on granting loans. Most current credit risk methods rely solely on expert knowledge or large amounts of data, which causes problems such as variable interactions that are hard to identify and models that lack interpretability. To address these issues, the authors propose a new approach.

Design/methodology/approach

First, the authors improve the interpretive structural model (ISM) to better capture and utilize expert knowledge and then combine expert knowledge with big data: the proposed fuzzy interpretive structural model (FISM) and the K2 algorithm are used for expert knowledge acquisition and big data learning, respectively. The resulting Bayesian network (BN) is used for both forward and backward inference. Data from Lending Club demonstrate the effectiveness of the proposed model.
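
A minimal sketch of this learning-and-inference pipeline, assuming the pgmpy library and hypothetical Lending Club-style column names; pgmpy does not expose the classic ordered K2 search directly, so hill climbing scored by the K2 metric stands in, and the FISM-derived expert constraints are omitted.

```python
import pandas as pd
from pgmpy.estimators import HillClimbSearch, K2Score
from pgmpy.models import BayesianNetwork
from pgmpy.inference import VariableElimination

# Hypothetical discretized loan data; column names are illustrative only.
data = pd.read_csv("lending_club_discretized.csv")  # e.g. grade, dti, income, default

# Structure learning: hill climbing scored by the K2 metric
# (a stand-in for the ordered K2 search used in the paper).
dag = HillClimbSearch(data).estimate(scoring_method=K2Score(data))

# Parameter learning, then forward inference (default risk given evidence)
# and backward inference (likely risk-factor states given a default).
bn = BayesianNetwork(dag.edges())
bn.fit(data)
infer = VariableElimination(bn)
print(infer.query(["default"], evidence={"grade": "C"}))  # forward
print(infer.query(["dti"], evidence={"default": "yes"}))  # backward
```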

Findings

Compared with mainstream risk evaluation methods, the authors’ approach not only has higher accuracy and better presents the interactions between risk variables but also provides decision-makers with the best possible interventions in advance to avoid defaults. The credit risk assessment framework based on the proposed method can serve as an effective tool for relevant policymakers.

Originality/value

The authors propose a novel credit risk evaluation approach, namely FISM-K2. It is a decision support method that can improve the ability of decision makers to predict risks and intervene in advance. As an attempt to combine expert knowledge and big data, the authors’ work enriches the research on financial risk.

Details

Industrial Management & Data Systems, vol. 123 no. 12
Type: Research Article
ISSN: 0263-5577

Book part
Publication date: 23 October 2023

Glenn W. Harrison and J. Todd Swarthout

Abstract

We take Cumulative Prospect Theory (CPT) seriously by rigorously estimating structural models using the full set of CPT parameters. Much of the literature only estimates a subset of CPT parameters, or more simply assumes CPT parameter values from prior studies. Our data are from laboratory experiments with undergraduate students and MBA students facing substantial real incentives and losses. We also estimate structural models from Expected Utility Theory (EUT), Dual Theory (DT), Rank-Dependent Utility (RDU), and Disappointment Aversion (DA) for comparison. Our major finding is that a majority of individuals in our sample locally asset integrate. That is, they see a loss frame for what it is, a frame, and behave as if they evaluate the net payment rather than the gross loss when one is presented to them. This finding is devastating to the direct application of CPT to these data for those subjects. Support for CPT is greater when losses are covered out of an earned endowment rather than house money, but RDU is still the best single characterization of individual and pooled choices. Defenders of the CPT model claim, correctly, that the CPT model exists “because the data says it should.” In other words, the CPT model was born of a wide range of stylized facts culled from parts of the cognitive psychology literature. If one is to take the CPT model seriously and rigorously, then it needs to do a much better job of explaining the data than we see here.
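
For readers unfamiliar with the mechanics, a minimal sketch of how a full CPT parameterization evaluates a binary mixed gamble, using the Tversky and Kahneman (1992) functional forms with a single weighting curve for brevity; the parameter values are textbook placeholders, not estimates from this chapter.

```python
# CPT value of a two-outcome gamble (one gain, one loss):
# V = w(p) * v(x_gain) + w(1 - p) * v(x_loss).
# Parameter values below are placeholders, not estimates from this chapter.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Power value function with loss aversion coefficient lam."""
    return x**alpha if x >= 0 else -lam * (-x)**beta

def weight(p, gamma=0.61):
    """Inverse-S probability weighting function."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def cpt_binary(x_gain, p_gain, x_loss):
    """CPT value of a gamble paying x_gain with prob p_gain, else x_loss <= 0."""
    return weight(p_gain) * value(x_gain) + weight(1 - p_gain) * value(x_loss)

# A gross-loss frame vs. its net-payment equivalent after a $30 endowment:
# a subject who "locally asset integrates" evaluates the second version.
print(cpt_binary(50, 0.5, -30))  # loss frame
print(cpt_binary(80, 0.5, 0))    # net-payment frame, same final wealth
```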

Details

Models of Risk Preferences: Descriptive and Normative Challenges
Type: Book
ISBN: 978-1-83797-269-2

Article
Publication date: 12 December 2023

Chun Tung Thomas Kiu and Jin Hooi Chan

Abstract

Purpose

This study aims to investigate the factors influencing the adoption of data analytics in performance management. By examining the role of organizational and environmental contexts, this study contributes to the existing literature by proposing a novel and detailed technology-organization-environment (TOE) model for the complex interplay between firm characteristics and the adoption of data analytics. The results offer valuable insights and practical implications for organizations seeking to leverage data analytics for effective performance management.

Design/methodology/approach

The research draws upon a data set encompassing over 21,869 companies operating across all European Union member states. A multilevel logistic regression model was developed to evaluate the influence of organizational and environmental factors on the likelihood of adopting performance analytics in organizations.
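
A minimal sketch of a two-level (firms nested in countries) logistic specification, assuming statsmodels and hypothetical variable names; the authors’ exact covariates and estimator may differ.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical firm-level data with firms nested in EU member states;
# column names are illustrative only.
df = pd.read_csv("eu_firms.csv")

# Mixed-effects logistic regression: fixed organizational effects,
# random intercept per country (the environmental level).
model = BinomialBayesMixedGLM.from_formula(
    "adopt_analytics ~ variable_pay + training + hierarchy_depth + reward_freq",
    {"country": "0 + C(country)"},
    df,
)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())
```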

Findings

The findings indicate that the lack of awareness of the benefits of data analytics and its practical application to address specific business challenges is a significant barrier to its adoption. Organizational contexts, such as variable-pay systems, employee training, hierarchical structures and frequency of monetary rewards, also influence the adoption of data analytics.

Research limitations/implications

The study informs managers about the strategic role of data analytics capabilities in performance management for improved business intelligence and driving data culture.

Practical implications

The study helps managers understand the strategic role of data analytics capabilities in performance management, leading to improved business intelligence and fostering a data-driven culture in five key areas: structural alignment, strategic decision-making, resource allocation, performance improvement and change management.

Originality/value

The study advances the TOE theory, making it a more detailed and complete framework, particularly applicable to the adoption of performance analytics. It identifies the main factors that play a crucial role in the adoption process.

Details

Industrial Management & Data Systems, vol. 124 no. 2
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 10 November 2023

Chenchen Yang, Lu Chen and Qiong Xia

Abstract

Purpose

The development of digital technology has provided technical support to various industries. Specifically, Internet-based freight platforms can ensure the high-quality development of the logistics industry. Online freight platforms can use cargo transportation insurance to improve their service capabilities, pursue differentiated development, create products with distinctive platform characteristics and increase their core competitiveness.

Design/methodology/approach

This study uses a generalised linear model to fit the claim probability and claim intensity data and analyses freight insurance pricing based on the freight insurance claim data of a freight platform in China.

Findings

Considering traditional pricing risk factors, this study adds two risk factors to fit the claim probability data: the purchase behaviour of freight insurance customers and road density. Both variables significantly influence the claim probability, and the fitting outcomes obtained with the logit link function are excellent. In addition, this study examines the model results under various distribution types for fitting the claim intensity data; the outcomes under a gamma distribution are superior to those under the other distribution types, as measured by the Akaike information criterion.
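
A minimal sketch of this frequency-severity setup, assuming statsmodels and hypothetical column names: a logit-link binomial GLM for claim probability and a log-link gamma GLM for claim intensity, with AIC available for comparing severity distributions.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical policy-level data; column names are illustrative only.
df = pd.read_csv("freight_policies.csv")

# Claim probability: binomial GLM with a logit link, including the two
# added risk factors (past purchase behaviour and road density).
freq = smf.glm(
    "claimed ~ distance + cargo_weight + road_density + prior_purchases",
    data=df, family=sm.families.Binomial(),
).fit()

# Claim intensity: gamma GLM with a log link, fitted on claims only.
sev = smf.glm(
    "claim_amount ~ distance + cargo_weight + road_density",
    data=df[df["claimed"] == 1],
    family=sm.families.Gamma(link=sm.families.links.Log()),
).fit()

print(freq.summary())
print(sev.aic)  # compare against alternative severity distributions
```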

Originality/value

With actual data from an online freight platform in China, this study empirically proves that a generalised linear model is superior to traditional pricing methods for freight insurance. This study constructs a generalised linear pricing model considering the unique features of the freight industry and determines that the transportation distance, cargo weight and road density have a significant influence on the claim probability and claim intensity.

Details

Industrial Management & Data Systems, vol. 123 no. 11
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 11 August 2023

Jianhui Liu, Ziyang Zhang, Longxiang Zhu, Jie Wang and Yingbao He

Abstract

Purpose

Due to the limitation of experimental conditions and budget, fatigue data for mechanical components are often scarce in practical engineering, which lowers the reliability of the fatigue data and reduces the accuracy of fatigue life prediction. Therefore, this study aims to expand the available fatigue data and verify its reliability, enabling life prediction analysis at different stress levels.

Design/methodology/approach

First, the principle of consistent fatigue-life probability percentiles and a perturbation optimization technique are used to convert small-sample fatigue life test data at different stress levels into equivalent data. The failure model is checked with a goodness-of-fit test, and the equivalent data are extended with a Monte Carlo method based on the data distribution characteristics together with a directional-sampling numerical simulation strategy. Furthermore, the relationship between effective stress and characteristic life is analyzed using a combination of the Weibull distribution and the Stromeyer equation, and an iterative sequence is established to obtain the predicted life.
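
A loose sketch of the distribution-fitting and extension steps, assuming scipy and placeholder data; the published procedure (percentile-consistency conversion, perturbation optimization, directional sampling) is more involved than what is shown.

```python
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

# Placeholder small-sample fatigue lives (cycles) at one stress level.
lives = np.array([1.2e5, 1.8e5, 2.3e5, 3.1e5, 4.0e5])

# Fit a two-parameter Weibull distribution (location fixed at zero).
shape, loc, scale = stats.weibull_min.fit(lives, floc=0)
print(f"Weibull shape={shape:.2f}, characteristic life={scale:.3g}")

# Monte Carlo extension: draw extra pseudo-samples from the fitted model.
rng = np.random.default_rng(0)
extended = stats.weibull_min.rvs(shape, loc=0, scale=scale,
                                 size=200, random_state=rng)

# Stromeyer-type stress-life relation S = S_inf + C * N**(-m), fitted
# across stress levels (characteristic lives per level are placeholders).
stress = np.array([500.0, 450.0, 400.0, 360.0])   # MPa, placeholder
char_life = np.array([8e4, 2.1e5, 6.5e5, 1.9e6])  # cycles, placeholder

def stromeyer(N, S_inf, C, m):
    return S_inf + C * N**(-m)

params, _ = curve_fit(stromeyer, char_life, stress, p0=(300.0, 5e3, 0.3))
print("Stromeyer fit:", params)
```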

Findings

The TC4–DT titanium alloy is selected to assess the accuracy and reliability of the proposed method, and the results show that the predicted life lies within the double dispersion band, indicating high accuracy.

Originality/value

The purpose of this study is to provide a reference for the expansion of small sample fatigue test data, verification of data reliability and prediction of fatigue life data. In addition, the proposed method provides a theoretical basis for engineering applications.

Details

International Journal of Structural Integrity, vol. 14 no. 5
Type: Research Article
ISSN: 1757-9864

Article
Publication date: 17 April 2024

Jahanzaib Alvi and Imtiaz Arif

Abstract

Purpose

The crux of this paper is to unveil efficient features and practical tools that can predict credit default.

Design/methodology/approach

Annual data of non-financial listed companies were taken from 2000 to 2020, along with 71 financial ratios. The dataset was bifurcated into three panels with three default assumptions. Logistic regression (LR) and k-nearest neighbor (KNN) binary classification algorithms were used to estimate credit default in this research.
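
A minimal sketch of this classification setup, assuming scikit-learn and hypothetical column names (the paper’s 71 financial ratios are not reproduced); feature scaling is included because KNN is distance-based.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Hypothetical panel: financial ratios plus a binary default flag.
df = pd.read_csv("nonfinancial_firms.csv")
X, y = df.drop(columns="default"), df["default"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=42)

for name, clf in [
    ("LR", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
    ("KNN", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))),
]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```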

Findings

The study’s findings revealed that the features used in Model 3 (Case 3) were comparatively the most efficient. Results also showed that KNN achieved higher accuracy than LR, demonstrating the superiority of KNN over LR on these data.

Research limitations/implications

Using only two classifiers limits the comprehensiveness of the comparison, and this research was based on financial data alone, leaving sizeable room for including non-financial parameters in default estimation. Both limitations suggest directions for future research in this domain.

Originality/value

This study introduces efficient features and tools for credit default prediction using financial data, demonstrating KNN’s superior accuracy over LR and suggesting future research directions.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 17 March 2023

Stewart Jones

Abstract

Purpose

This study updates the literature review of Jones (1987) published in this journal. The study pays particular attention to two important themes that have shaped the field over the past 35 years: (1) the development of a range of innovative new statistical learning methods, particularly advanced machine learning methods such as stochastic gradient boosting, adaptive boosting, random forests and deep learning, and (2) the emergence of a wide variety of bankruptcy predictor variables extending beyond traditional financial ratios, including market-based variables, earnings management proxies, auditor going concern opinions (GCOs) and corporate governance attributes. Several directions for future research are discussed.

Design/methodology/approach

This study provides a systematic review of the corporate failure literature over the past 35 years with a particular focus on the emergence of new statistical learning methodologies and predictor variables. This synthesis of the literature evaluates the strengths and limitations of different modelling approaches under different circumstances and provides an overall evaluation of the relative contribution of alternative predictor variables. The study aims to provide a transparent, reproducible and interpretable review of the literature. The literature review also takes a theme-centric rather than author-centric approach and focuses on structured themes that have dominated the literature since 1987.

Findings

There are several major findings of this study. First, advanced machine learning methods appear to have the most promise for future firm failure research. Not only do these methods predict significantly better than conventional models, but they also possess many appealing statistical properties. Second, there is now a much wider range of variables being used to model and predict firm failure. However, the literature needs to be interpreted with some caution given the many mixed findings. Finally, a number of unresolved methodological issues arising from the Jones (1987) study still require research attention.
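
By way of illustration only, a minimal sketch, assuming scikit-learn and a placeholder firm-failure dataset, of how the conventional logit benchmark can be compared against the boosted-tree and ensemble methods the review highlights; it is not a reproduction of any study surveyed here.

```python
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)

# Placeholder firm-failure data: predictor columns plus a binary failure flag.
df = pd.read_csv("firm_failure.csv")
X, y = df.drop(columns="failed"), df["failed"]

models = {
    "logit": LogisticRegression(max_iter=1000),
    "gradient boosting": GradientBoostingClassifier(),
    "random forest": RandomForestClassifier(),
    "adaboost": AdaBoostClassifier(),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```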

Originality/value

The study explains the connections and derivations between a wide range of firm failure models, from simpler linear models to advanced machine learning methods such as gradient boosting, random forests, adaptive boosting and deep learning. The paper highlights the most promising models for future research, particularly in terms of their predictive power, underlying statistical properties and issues of practical implementation. The study also draws together an extensive literature on alternative predictor variables and provides insights into the role and behaviour of alternative predictor variables in firm failure research.

Details

Journal of Accounting Literature, vol. 45 no. 2
Type: Research Article
ISSN: 0737-4607

Article
Publication date: 7 March 2023

Kinsun Tam, Qiao Xu, Guy Fernando and Richard A. Schneible

Abstract

Purpose

This paper aims to investigate whether the managers’ emphasis on audit in the management’s discussion and analysis (MD&A) section of the 10-K filing, as part of the firm’s “tone at the top,” is linked to audit quality.

Design/methodology/approach

Adopting a computational linguistics approach, the authors measure the manager’s audit emphasis as the frequency of audit-related words in the MD&A. The authors then assess the relationship between audit emphasis and audit quality with ordinary least squares and probit regression models.
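
A minimal sketch of the word-frequency measure, assuming a hypothetical audit-related word list; the paper’s actual dictionary and normalization choices are not reproduced here.

```python
import re

# Hypothetical audit-related word list; illustrative only.
AUDIT_TERMS = {"audit", "audits", "audited", "auditing", "auditor", "auditors"}

def audit_emphasis(mdna_text: str) -> float:
    """Frequency of audit-related words per 1,000 words of MD&A text."""
    words = re.findall(r"[a-z]+", mdna_text.lower())
    hits = sum(1 for w in words if w in AUDIT_TERMS)
    return 1000 * hits / max(len(words), 1)

sample = "Our auditors completed the annual audit of internal controls."
print(audit_emphasis(sample))
```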

Findings

This study finds that the manager’s audit emphasis, proxied by the count of audit-related words, is positively associated with audit fees, audit delay, the appointment and retention of Big 4 and industry-specialist auditors, and the probability of switching to Big 4 auditors, while negatively linked to abnormal accruals and the possibility of financial misstatements.

Research limitations/implications

The audit emphasis measure suffers from limitations. The computer program determining audit emphasis may misinterpret words in the MD&A. Researchers need to consider procedures to minimize misinterpretations.

Practical implications

Frequency of audit words in the MD&A reflects the firm’s aspiration for audit quality. Auditors, regulators and investors could ascertain such aspiration from past and current MD&As.

Originality/value

This study associates the manager’s emphasis on audit, measured with computational linguistics from the MD&A, with realized audit quality.

Details

Managerial Auditing Journal, vol. 38 no. 5
Type: Research Article
ISSN: 0268-6902

Open Access
Article
Publication date: 20 November 2023

Asad Mehmood and Francesco De Luca

Abstract

Purpose

This study aims to develop a model based on financial variables for more accurate financial distress prediction on a sample of private French, Spanish and Italian firms. Thus, firms in financial difficulty could make a timely request for troubled debt restructuring (TDR) to continue in business.

Design/methodology/approach

This study used a sample of 312 distressed and 312 non-distressed firms. It includes 60 French, 21 Spanish and 231 Italian firms in both distressed and non-distressed groups. The data are extracted from the ORBIS database. First, the authors develop a new model by replacing a ratio in the original Z”-Score model specifically for financial distress prediction and estimate its coefficients based on linear discriminant analysis (LDA). Second, using the modified Z”-Score model, the authors develop a firm TDR probability index for distressed and non-distressed firms based on the logistic regression model.
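
A rough sketch of the two-step procedure, assuming scikit-learn and placeholder ratio names; the authors’ actual ratio substitution in the Z”-Score model is not reproduced here.

```python
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

# Hypothetical ORBIS-style sample: Z''-Score-type ratios plus a distress flag.
df = pd.read_csv("orbis_sample.csv")
ratios = ["wc_ta", "re_ta", "ebit_ta", "bve_tl"]  # placeholder ratio names
X, y = df[ratios], df["distressed"]

# Step 1: re-estimate discriminant coefficients for the modified score.
lda = LinearDiscriminantAnalysis().fit(X, y)
print("discriminant coefficients:", lda.coef_)

# Step 2: TDR probability index via logistic regression on the new score.
df["z_mod"] = lda.decision_function(X)
logit = LogisticRegression().fit(df[["z_mod"]], y)
df["tdr_prob"] = logit.predict_proba(df[["z_mod"]])[:, 1]
print(df["tdr_prob"].describe())
```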

Findings

The new model (modified Z”-Score), developed specifically for financial distress prediction, achieves higher prediction accuracy. Moreover, the firm TDR probability index accurately depicts the probability trend for both the distressed and non-distressed groups.

Research limitations/implications

The findings of this study are conclusive. However, the sample size is small. Therefore, further studies could extend the application of the prediction model developed in this study to all the EU countries.

Practical implications

This study has important practical implications. This study responds to the EU directive call by developing the financial distress prediction model to allow debtors to do timely debt restructuring and thus continue their businesses. Therefore, this study could be useful for practitioners and firm stakeholders, such as banks and other creditors, and investors.

Originality/value

This study significantly contributes to the literature in several ways. First, this study develops a model for predicting financial distress based on the argument that corporate bankruptcy and financial distress are distinct events. However, the original Z”-Score model is intended for failure prediction. Moreover, the recent literature suggests modifying and extending the prediction models. Second, the new model is tested using a sample of firms from three countries that share similarities in their TDR laws.

Details

Journal of Applied Accounting Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0967-5426

Open Access
Article
Publication date: 28 September 2022

Tereza Jandásková, Tomas Hrdlicka, Martin Cupal, Petr Kleparnik, Milada Komosná and Marek Kervitcer

Abstract

Purpose

This study aims to provide a framework for assessing the technical condition of a house to determine its market value, including the identification of other price-setting factors and their statistical significance. Time on market (TOM) in relation to the technical condition of a house is also addressed.

Design/methodology/approach

The primary database contains 631 houses, and the initial asking price and selling price are examined. All the houses are located in the Brno–venkov district in the Czech Republic. Regression analysis was used to test the influence of price-setting factors. The standard ordinary least squares estimator and the maximum likelihood estimator were used in the frame of generalized linear models.
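
A minimal sketch of such a hedonic price regression, assuming statsmodels and hypothetical variable names for the envelope components; the paper’s exact specification is not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical Brno-venkov listings; variable names are illustrative only.
df = pd.read_csv("brno_houses.csv")

# Hedonic regression with envelope components entered separately
# rather than as a single technical-condition factor.
model = smf.ols(
    "np.log(selling_price) ~ facade_condition + windows_condition"
    " + roof_condition + interior_condition + year_built + floor_area",
    data=df,
).fit()
print(model.summary())
```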

Findings

Using the envelope components of houses separately, such as façade condition, windows, roof, interior condition and year of construction, brings better results than using a single factor for overall technical condition. TOM was found to be 67 days lower for houses intended for demolition, as compared to new houses, and 18 days lower for houses requiring refurbishment.

Originality/value

To the best of the authors’ knowledge, this paper is original in the substitution of specific price-setting factors for factors relating to the technical condition of houses as well as in proposing the framework for professionals in the Czech Republic.

Details

International Journal of Housing Markets and Analysis, vol. 16 no. 7
Type: Research Article
ISSN: 1753-8270
