Search results

1 – 7 of 7
Article
Publication date: 13 November 2023

Jamil Jaber, Rami S. Alkhawaldeh and Ibrahim N. Khatatbeh

Abstract

Purpose

This study aims to develop a novel approach for predicting default risk in bancassurance, which plays a crucial role in the relationship between interest rates in banks and premium rates in insurance companies. The proposed method aims to improve default risk predictions and assist with client segmentation in the banking system.

Design/methodology/approach

This research introduces the group method of data handling (GMDH) technique and a diversified classifier ensemble based on GMDH (dce-GMDH) for predicting default risk. The data set comprises information from 30,000 credit card clients of a large bank in Taiwan, with the output variable being a dummy variable distinguishing between default risk (0) and non-default risk (1), whereas the input variables comprise 23 distinct features characterizing each customer.
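
The GMDH family builds networks of simple polynomial "neurons", keeping only those that generalize best to held-out data. A simplified sketch of one such selection layer is below (illustrative only: this is the classical GMDH building block, not the paper's dce-GMDH ensemble, and all names are ours):

```python
# Minimal GMDH-style selection layer: fit a quadratic "neuron" for every
# pair of input features, then keep the neurons with the lowest
# validation error. Illustrative sketch, not the paper's implementation.
import numpy as np
from itertools import combinations

def fit_quadratic_pair(xi, xj, y):
    """Least-squares fit of y ~ a0 + a1*xi + a2*xj + a3*xi*xj + a4*xi^2 + a5*xj^2."""
    A = np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_quadratic_pair(coef, xi, xj):
    A = np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])
    return A @ coef

def gmdh_layer(X_train, y_train, X_val, y_val, keep=4):
    """Fit every pairwise quadratic neuron; keep the best by validation MSE."""
    scored = []
    for i, j in combinations(range(X_train.shape[1]), 2):
        coef = fit_quadratic_pair(X_train[:, i], X_train[:, j], y_train)
        pred = predict_quadratic_pair(coef, X_val[:, i], X_val[:, j])
        mse = float(np.mean((pred - y_val) ** 2))
        scored.append((mse, i, j, coef))
    scored.sort(key=lambda t: t[0])
    return scored[:keep]
```

In a full GMDH network the surviving neurons' outputs become inputs to the next layer, and layers are added until validation error stops improving.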

Findings

The results of this study show promising outcomes, highlighting the usefulness of the proposed technique for bancassurance and client segmentation. Remarkably, the dce-GMDH model consistently outperforms the conventional GMDH model, demonstrating its superiority in predicting default risk based on various error criteria.

Originality/value

This study presents a unique approach to predicting default risk in bancassurance by using the GMDH and dce-GMDH neural network models. The proposed method offers a valuable contribution to the field by showcasing improved accuracy and enhanced applicability within the banking sector, offering valuable insights and potential avenues for further exploration.

Details

Competitiveness Review: An International Business Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1059-5422

Article
Publication date: 17 April 2024

Jahanzaib Alvi and Imtiaz Arif

Abstract

Purpose

The crux of this paper is to unveil efficient features and practical tools that can predict credit default.

Design/methodology/approach

Annual data of non-financial listed companies were taken from 2000 to 2020, along with 71 financial ratios. The dataset was bifurcated into three panels with three default assumptions. Logistic regression (LR) and k-nearest neighbor (KNN) binary classification algorithms were used to estimate credit default in this research.
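
The KNN step can be sketched in a few lines of standard-library Python (a toy illustration with made-up two-feature points, not the paper's 71 financial ratios; labels follow the usual default/non-default coding):

```python
# Minimal k-nearest-neighbor binary classifier: label a new point by
# majority vote among its k closest training points (Euclidean distance).
from collections import Counter
import math

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = sorted(
        (math.dist(xt, x), label) for xt, label in zip(X_train, y_train)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

In practice the features would be standardized first, since distance-based methods are sensitive to the scale of financial ratios.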

Findings

The study’s findings revealed that the features used in Model 3 (Case 3) were comparatively the most efficient. The results also showed that KNN achieved higher accuracy than LR, demonstrating the superiority of KNN over LR.

Research limitations/implications

Using only two classifiers limits the comprehensiveness of the comparison, and the research was based only on financial data, which leaves sizeable room for including non-financial parameters in default estimation. Both limitations suggest directions for future research in this domain.

Originality/value

This study introduces efficient features and tools for credit default prediction using financial data, demonstrating KNN’s superior accuracy over LR and suggesting future research directions.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 5 March 2024

Sana Ramzan and Mark Lokanan

Abstract

Purpose

This study aims to objectively synthesize the volume of accounting literature on financial statement fraud (FSF) using a systematic literature review research method (SLRRM). This paper analyzes the vast FSF literature based on inclusion and exclusion criteria. These criteria filter articles that are present in the accounting fraud domain and are published in peer-reviewed quality journals based on the Australian Business Deans Council (ABDC) journal ranking. Lastly, a reverse search, analyzing the articles' abstracts, further narrows the search to 88 peer-reviewed articles. After examining these 88 articles, the results imply that the current literature is shifting from traditional statistical approaches towards computational methods, specifically machine learning (ML), for predicting and detecting FSF. This evolution of the literature is influenced by the impact of micro and macro variables on FSF and the inadequacy of audit procedures to detect red flags of fraud. The findings also indicate that A* peer-reviewed journals accepted articles that showed a complete picture of the performance measures of computational techniques in their results. Therefore, this paper contributes to the literature by providing insights to researchers about why ML articles on fraud do not make it into top accounting journals and which computational techniques are the best algorithms for predicting and detecting FSF.

Design/methodology/approach

This paper chronicles the cluster of narratives surrounding the inadequacy of current accounting and auditing practices in preventing and detecting financial statement fraud. The primary objective of this study is to objectively synthesize the volume of accounting literature on financial statement fraud. More specifically, this study conducts a systematic literature review (SLR) to examine the evolution of financial statement fraud research and the emergence of new computational techniques to detect fraud in the accounting and finance literature.

Findings

The storyline of this study illustrates how the literature has evolved from conventional fraud detection mechanisms to computational techniques such as artificial intelligence (AI) and machine learning (ML). The findings also indicate that A* peer-reviewed journals accepted articles that showed a complete picture of the performance measures of computational techniques in their results. Therefore, this paper contributes to the literature by providing insights to researchers about why ML articles on fraud do not make it into top accounting journals and which computational techniques are the best algorithms for predicting and detecting FSF.

Originality/value

This paper contributes to the literature by providing researchers with insights into the evolution of accounting fraud literature from traditional statistical methods to machine learning algorithms in fraud detection and prediction.

Details

Journal of Accounting Literature, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-4607

Article
Publication date: 24 November 2023

Yuling Ran, Wei Bai, Lingwei Kong, Henghui Fan, Xiujuan Yang and Xuemei Li

Abstract

Purpose

The purpose of this paper is to develop an appropriate machine learning model for predicting soil compaction degree while also examining the contribution rates of three influential factors: moisture content, electrical conductivity and temperature, towards the prediction of soil compaction degree.

Design/methodology/approach

Taking fine-grained soils A and B as the research objects, this paper utilized laboratory test data, including a compaction parameter (moisture content), an electrical parameter (electrical conductivity) and temperature, to predict the degree of compaction of soil based on five types of commonly used machine learning models (19 models in total). According to the prediction results, these models were preliminarily compared and further evaluated.
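
The core of Gaussian process regression can be sketched in a few lines of numpy (an illustrative stand-in: the RBF kernel choice, length scale and noise level here are assumptions, and the 1D test input merely stands in for the moisture/conductivity/temperature features):

```python
# Gaussian process regression, posterior mean only:
# mean = K_*^T (K + sigma^2 I)^{-1} y, with an RBF (squared-exponential) kernel.
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential kernel between row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gpr_predict(X_train, y_train, X_test, length_scale=1.0, noise=1e-6):
    """Posterior mean of a zero-mean GP at the test inputs."""
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    K_star = rbf_kernel(X_train, X_test, length_scale)
    alpha = np.linalg.solve(K, y_train)
    return K_star.T @ alpha
```

A production implementation would also return the posterior variance and tune the kernel hyperparameters by maximizing the marginal likelihood.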

Findings

The Gaussian process regression model predicts the degree of compaction of the two kinds of soil well: the error rates of the predictions for fine-grained soils A and B are within 6% and 8%, respectively. In order of importance, the contribution rates rank as: moisture content > electrical conductivity >> temperature.

Originality/value

By using moisture content, electrical conductivity and temperature to predict the compaction degree directly, the predicted value of the compaction degree can be obtained with higher accuracy, and the detection efficiency of the compaction degree can be improved.

Details

Engineering Computations, vol. 41 no. 1
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 15 April 2024

Seyed Abbas Rajaei, Afshin Mottaghi, Hussein Elhaei Sahar and Behnaz Bahadori

Abstract

Purpose

This study aims to investigate the spatial distribution of housing prices and identify the affecting factors (independent variable) on the cost of residential units (dependent variable).

Design/methodology/approach

The method of the present study is descriptive-analytical and has an applied purpose. The statistical population of this study comprises the prices of residential units in Tehran in 2021. For this purpose, the average price per square meter of residential units in the city's neighborhoods was entered into the geographic information system. Two techniques, ordinary least squares regression and geographically weighted regression, were used to analyze and model housing prices. Then, the results of the ordinary least squares regression and geographically weighted regression models were compared using the housing price interpolation map predicted by each model and the actual housing price interpolation map.
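
Geographically weighted regression amounts to fitting a separate weighted least-squares model at each location, with weights that decay with distance. A minimal sketch, assuming a Gaussian distance kernel and hypothetical coordinates and bandwidth:

```python
# Geographically weighted regression, one location at a time:
# local coefficients = (X^T W X)^{-1} X^T W y, where W holds Gaussian
# distance weights centered on the query point.
import numpy as np

def gwr_coefficients(coords, X, y, point, bandwidth):
    """Local WLS coefficients (intercept first) at `point`."""
    d2 = ((coords - point) ** 2).sum(axis=1)
    w = np.exp(-0.5 * d2 / bandwidth**2)          # Gaussian spatial weights
    Xc = np.column_stack([np.ones(len(X)), X])    # add intercept column
    XtW = Xc.T * w
    return np.linalg.solve(XtW @ Xc, XtW @ y)
```

Unlike ordinary least squares, which fits one global coefficient vector, the coefficients here vary over space, which is what lets GWR capture neighborhood-level effects on housing prices.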

Findings

Based on the results, the ordinary least squares regression model modeled housing prices in the study area poorly. The results of the geographically weighted regression model show that the variables (access to sports fields, distance from the gas station and distance from the water station) have a direct and significant effect, whereas the variable (distance from the fault) has a non-significant impact on increasing housing prices at the city level. In addition, in identifying the variables affecting housing prices, the results confirm the superior accuracy of the geographically weighted regression technique compared to the ordinary least squares regression technique in explaining housing prices. The results of this study indicate that housing prices in Tehran are affected by the level of access to urban services and facilities.

Originality/value

Identifying the factors affecting housing prices helps create sustainable housing in Tehran. Building sustainable housing means spending less energy during both the construction process and the utilization phase, which ultimately provides housing at an acceptable price for all income deciles. In housing construction, the more the principles of sustainable housing are considered, the more sustainable the resulting housing, which is a step toward sustainable development. Therefore, sustainable housing is an important planning factor for local authorities and developers. As a result, it is necessary to institutionalize an integrated vision based on the concepts of sustainable development in the field of housing in the Tehran metropolis.

Details

International Journal of Housing Markets and Analysis, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1753-8270

Open Access
Article
Publication date: 12 October 2023

V. Chowdary Boppana and Fahraz Ali

Abstract

Purpose

This paper presents an experimental investigation in establishing the relationship between FDM process parameters and tensile strength of polycarbonate (PC) samples using the I-Optimal design.

Design/methodology/approach

I-optimal design methodology is used to plan the experiments by means of Minitab-17.1 software. Samples are manufactured using a Stratasys FDM 400mc and tested as per ISO standards. Additionally, an artificial neural network model was developed and compared to the regression model in order to select an appropriate model for optimisation. Finally, the genetic algorithm (GA) solver is executed to improve the tensile strength of FDM-built PC components.
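
The GA step can be sketched as follows. The surrogate function below is a hypothetical stand-in for the trained ANN that maps process parameters to tensile strength, and the operators (truncation selection, mean crossover, Gaussian mutation) are common defaults rather than the paper's exact settings:

```python
# Bare-bones real-valued genetic algorithm maximizing a surrogate
# response over bounded process parameters.
import random

def surrogate_strength(raster_angle, air_gap, orientation, contours):
    # Hypothetical smooth response; the real objective would be the fitted ANN.
    return -((raster_angle - 45) ** 2) / 100 - air_gap * 10 + orientation / 30 + contours

def ga_optimise(fitness, bounds, pop_size=30, generations=60, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(*ind), reverse=True)  # rank by fitness
        survivors = pop[: pop_size // 2]                       # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]        # mean crossover
            i = rng.randrange(len(bounds))                     # mutate one gene
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0, (hi - lo) * 0.1)))
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda ind: fitness(*ind))
```

The same loop applies to any differentiable or black-box surrogate, which is why GA solvers pair naturally with ANN response models.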

Findings

This study demonstrates that the selected process parameters (raster angle, raster to raster air gap, build orientation about Y axis and the number of contours) had significant effect on tensile strength with raster angle being the most influential factor. Increasing the build orientation about Y axis produced specimens with compact structures that resulted in improved fracture resistance.

Research limitations/implications

The fitted regression model has a p-value less than 0.05 which suggests that the model terms significantly represent the tensile strength of PC samples. Further, from the normal probability plot it was found that the residuals follow a straight line, thus the developed model provides adequate predictions. Furthermore, from the validation runs, a close agreement between the predicted and actual values was seen along the reference line which further supports satisfactory model predictions.

Practical implications

This study successfully investigated the effects of the selected process parameters (raster angle, raster to raster air gap, build orientation about the Y axis and the number of contours) on the tensile strength of PC samples utilising the I-optimal design and ANOVA. In addition, regression and ANN models were developed for prediction of the part strength. The selected ANN model was optimised using the GA solver to determine optimal parameter settings.

Originality/value

The proposed ANN-GA approach is more appropriate to establish the non-linear relationship between the selected process parameters and tensile strength. Further, the proposed ANN-GA methodology can assist in manufacture of various industrial products with Nylon, polyethylene terephthalate glycol (PETG) and PET as new 3DP materials.

Details

International Journal of Industrial Engineering and Operations Management, vol. 6 no. 2
Type: Research Article
ISSN: 2690-6090

Article
Publication date: 5 July 2023

Fredrick Otieno Okuta, Titus Kivaa, Raphael Kieti and James Ouma Okaka

Abstract

Purpose

The housing market in Kenya continues to experience an excessive imbalance between supply and demand. This imbalance renders the housing market volatile, and stakeholders lose repeatedly. The purpose of the study was to forecast housing prices (HPs) in Kenya using simple and complex regression models to assess the best model for projecting the HPs in Kenya.

Design/methodology/approach

The study used time series data from 1975 to 2020 on selected macroeconomic factors sourced from the Kenya National Bureau of Statistics, the Central Bank of Kenya and Hass Consult Limited. Linear regression, multiple regression, autoregressive integrated moving average (ARIMA) and autoregressive distributed lag (ARDL) regression techniques were used to model HPs.
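
The simplest member of this model hierarchy, an AR(p) regression fitted by ordinary least squares, can be sketched as follows (an illustrative stand-in only; the study estimated full ARIMA, ARDL and VAR specifications with proper time-series tooling):

```python
# Fit an AR(p) model by regressing y_t on [1, y_{t-1}, ..., y_{t-p}],
# then produce a one-step-ahead forecast from the last p observations.
import numpy as np

def fit_ar(series, p):
    """OLS estimates of the intercept and p lag coefficients."""
    y = series[p:]
    X = np.column_stack([np.ones(len(y))] +
                        [series[p - k:-k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def forecast_ar(series, coef, p):
    """One-step-ahead forecast from the last p observations."""
    lags = series[::-1][:p]              # most recent observation first
    return coef[0] + coef[1:] @ lags
```

ARDL and VAR models extend this same least-squares template with lags of exogenous regressors and with multiple jointly modeled series, respectively.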

Findings

The study concludes that the performance of the housing market is very sensitive to changes in the economic indicators, and therefore, the key players in the housing market should consider the performance of the economy during the project feasibility studies and appraisals. From the results, it can be deduced that complex models outperform simple models in forecasting HPs in Kenya. The vector autoregressive (VAR) model performs the best in forecasting HPs considering its lowest root mean squared error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and bias proportion coefficient. ARIMA models perform dismally in forecasting HPs, and therefore, we conclude that HP is not a self-projecting variable.

Practical implications

A model for projecting HPs could be a game changer if applied during the project appraisal stage by the developers and project managers. The study thoroughly compared the various regression models to ascertain the best model for forecasting the prices and revealed that complex models perform better than simple models in forecasting HPs. The study recommends a VAR model in forecasting HPs considering its lowest RMSE, MAE, MAPE and bias proportion coefficient compared to other models. The model, if used in collaboration with the already existing hedonic models, will ensure that the investments in the housing markets are well-informed, and hence, a reduction in economic losses arising from poor market forecasting techniques. However, these study findings are only applicable to the commercial housing market i.e. houses for sale and rent.

Originality/value

While more research has been done on HP projections, this study was based on a comparison of simple and complex regression models for projecting HPs. A total of five models were compared in the study: the simple regression model, multiple regression model, ARIMA model, ARDL model and VAR model. The findings reveal that complex models outperform simple models in projecting HPs. The study also used nine macroeconomic indicators in the model-building process. Granger causality tests reveal that only household income (HHI), gross domestic product, interest rate, exchange rates (EXCR) and private capital inflows have a significant effect on changes in HPs. Additionally, the study adds two little-known indicators to the projection of HPs: the EXCR and HHI.

Details

International Journal of Housing Markets and Analysis, vol. 17 no. 1
Type: Research Article
ISSN: 1753-8270
