Search results

1 – 10 of 249
Article
Publication date: 3 April 2024

Samar Shilbayeh and Rihab Grassa

Bank creditworthiness refers to the evaluation of a bank’s ability to meet its financial obligations. It is an assessment of the bank’s financial health, stability and capacity to…

Abstract

Purpose

Bank creditworthiness refers to the evaluation of a bank’s ability to meet its financial obligations. It is an assessment of the bank’s financial health, stability and capacity to manage risks. This paper aims to investigate the credit rating patterns that are crucial for assessing the creditworthiness of Islamic banks, thereby evaluating the stability of their industry.

Design/methodology/approach

Three distinct machine learning algorithms are exploited and evaluated for the desired objective. This research initially uses the decision tree machine learning algorithm as a base learner, conducting an in-depth comparison with the ensemble decision tree and random forest. Subsequently, the Apriori algorithm is deployed to uncover the most significant attributes impacting a bank’s credit rating. To appraise the previously elucidated models, a ten-fold cross-validation method is applied: the data set is segmented into ten folds, with nine used for training and one for testing, alternating over ten iterations. This approach mitigates potential biases that could arise during the learning and training phases. Following this process, accuracy is assessed and depicted in a confusion matrix as outlined in the methodology section.
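
A minimal sketch of this ten-fold cross-validation protocol, using scikit-learn (not the authors' code): a decision tree and a random forest are scored with stratified ten-fold cross-validation and the out-of-fold predictions are summarised in a confusion matrix. The synthetic data stand in for the banks' rating attributes, which are not given in the abstract, and the Apriori step is omitted.

```python
# A minimal sketch of the ten-fold cross-validation protocol described above,
# using scikit-learn (not the authors' code). The synthetic data stand in for the
# banks' rating attributes; the Apriori step is omitted.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import StratifiedKFold, cross_val_predict, cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Placeholder data: 300 "banks", 12 numeric attributes, 3 rating classes.
X, y = make_classification(n_samples=300, n_features=12, n_informative=6,
                           n_classes=3, random_state=0)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
models = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    # Each fold serves once as the test set while the other nine folds train the model.
    acc = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean accuracy = {acc.mean():.3f}")

# Out-of-fold predictions yield a single confusion matrix over the whole data set.
pred = cross_val_predict(models["random_forest"], X, y, cv=cv)
print(confusion_matrix(y, pred))
```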

Findings

The findings of this investigation reveal that the random forest machine learning algorithm outperforms the others, achieving an impressive 90.5% accuracy in predicting credit ratings. Notably, our research sheds light on the significance of the loan-to-deposit ratio as a primary attribute affecting credit rating predictions. Moreover, this study uncovers additional pivotal banking features that strongly influence the measurements under study. The findings provide evidence that the loan-to-deposit ratio is the single most influential bank attribute for credit rating prediction. In addition, the deposit-to-assets ratio and the profit-sharing investment account ratio are found to be effective in credit rating prediction, and the ownership structure criterion emerged as one of the essential bank attributes in credit rating prediction.

Originality/value

These findings contribute significant evidence to the understanding of attributes that strongly influence credit rating predictions within the banking sector. This study uniquely contributes by uncovering patterns that have not been previously documented in the literature, broadening our understanding in this field.

Details

International Journal of Islamic and Middle Eastern Finance and Management, vol. 17 no. 2
Type: Research Article
ISSN: 1753-8394

Keywords

Article
Publication date: 8 February 2024

Juho Park, Junghwan Cho, Alex C. Gang, Hyun-Woo Lee and Paul M. Pedersen

This study aims to identify an automated machine learning algorithm with high accuracy that sport practitioners can use to identify the specific factors for predicting Major…

Abstract

Purpose

This study aims to identify an automated machine learning algorithm with high accuracy that sport practitioners can use to identify the specific factors for predicting Major League Baseball (MLB) attendance. Furthermore, by predicting spectators for each league (American League and National League) and division in MLB, the authors will identify the specific factors that increase accuracy, discuss them and provide implications for marketing strategies for academics and practitioners in sport.

Design/methodology/approach

This study used six years of daily MLB game data (2014–2019). Predictors such as game performance, weather and the unemployment rate were collected, and the attendance rate served as the outcome variable. Random forest, lasso regression and XGBoost models were used to build the prediction model, and the analysis was conducted using Python 3.7.
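
A rough sketch of such a model comparison is given below (not the authors' code). The DataFrame attendance_df and its attendance_rate column are hypothetical stand-ins for the daily game data, and the xgboost package is assumed to be installed.

```python
# A rough sketch of the model comparison described above (not the authors' code).
# attendance_df is a hypothetical DataFrame of daily games with numeric predictors
# (e.g. rank, weather, unemployment rate) and an 'attendance_rate' target column.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor  # assumes the xgboost package is installed

def compare_models(attendance_df: pd.DataFrame) -> pd.DataFrame:
    X = attendance_df.drop(columns=["attendance_rate"])
    y = attendance_df["attendance_rate"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    models = {
        "random_forest": RandomForestRegressor(n_estimators=300, random_state=0),
        "lasso": Lasso(alpha=0.01),
        "xgboost": XGBRegressor(n_estimators=500, learning_rate=0.05, max_depth=4),
    }
    rows = []
    for name, model in models.items():
        model.fit(X_train, y_train)
        pred = model.predict(X_test)
        rows.append({"model": name,
                     "rmse": np.sqrt(mean_squared_error(y_test, pred)),
                     "r2": r2_score(y_test, pred)})
    return pd.DataFrame(rows)
```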

Findings

After fine-tuning its hyperparameters, the XGBoost model showed the best performance in forecasting the attendance rate, with an RMSE of 0.14 and an R² of 0.62. The most influential variable in the model was “Rank” (importance of 0.247), followed by “Day of the week”, “Home team” and “Day/Night game”. The “Unemployment rate”, as a macroeconomic factor, had an importance of 0.06, and the weather factors a combined importance of 0.147.

Originality/value

This research highlights unemployment rate as a determinant affecting MLB game attendance rates. Beyond contextual elements such as climate, the findings of this study underscore the significance of economic factors, particularly unemployment rates, necessitating further investigation into these factors to gain a more comprehensive understanding of game attendance.

Details

International Journal of Sports Marketing and Sponsorship, vol. 25 no. 2
Type: Research Article
ISSN: 1464-6668

Keywords

Article
Publication date: 26 May 2022

Ismail Abiodun Sulaimon, Hafiz Alaka, Razak Olu-Ajayi, Mubashir Ahmad, Saheed Ajayi and Abdul Hye

Road traffic emissions are generally believed to contribute immensely to air pollution, but the effect of road traffic data sets on air quality (AQ) predictions has not been fully…


Abstract

Purpose

Road traffic emissions are generally believed to contribute immensely to air pollution, but the effect of road traffic data sets on air quality (AQ) predictions has not been fully investigated. This paper aims to investigate the effect traffic data sets have on the performance of machine learning (ML) predictive models in AQ prediction.

Design/methodology/approach

To achieve this, the authors have set up an experiment in which the control data set contains only the AQ data set and the meteorological (Met) data set, while the experimental data set comprises the AQ, Met and traffic data sets. Several ML models (such as the extra trees regressor, eXtreme gradient boosting regressor, random forest regressor, K-neighbors regressor and two others) were trained, tested and compared on these individual combinations of data sets to predict the volume of PM2.5, PM10, NO2 and O3 in the atmosphere at various times of the day.
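
One way to picture this control-versus-experimental design is the simplified scikit-learn sketch below (not the authors' code). The DataFrames aq_met and traffic are hypothetical and aligned on the same timestamps, and 'pm25' stands in for one of the predicted pollutants.

```python
# A simplified sketch of the control/experimental comparison (not the authors' code).
# aq_met and traffic are hypothetical DataFrames aligned on the same timestamps;
# 'pm25' stands in for one of the predicted pollutants (PM2.5, PM10, NO2, O3).
import pandas as pd
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

def evaluate(features: pd.DataFrame, target: pd.Series) -> dict:
    X_train, X_test, y_train, y_test = train_test_split(
        features, target, test_size=0.2, random_state=0)
    models = {
        "extra_trees": ExtraTreesRegressor(n_estimators=300, random_state=0),
        "random_forest": RandomForestRegressor(n_estimators=300, random_state=0),
        "k_neighbors": KNeighborsRegressor(n_neighbors=5),
    }
    return {name: mean_absolute_error(y_test, m.fit(X_train, y_train).predict(X_test))
            for name, m in models.items()}

def compare_data_sets(aq_met: pd.DataFrame, traffic: pd.DataFrame) -> None:
    target = aq_met["pm25"]
    control = aq_met.drop(columns=["pm25"])     # AQ + Met features only
    experimental = control.join(traffic)        # AQ + Met + traffic features
    for label, feats in [("control", control), ("experimental", experimental)]:
        print(label, evaluate(feats, target))
```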

Findings

The results obtained showed that the various ML algorithms react differently to the traffic data set, although its inclusion improved the performance of all the ML algorithms considered in this study by at least 20% and reduced error by at least 18.97%.

Research limitations/implications

This research is limited in terms of the study area, and the results cannot be generalized outside of the UK, as some of the inherent conditions may not be similar elsewhere. Additionally, only the ML algorithms commonly used in the literature are considered in this research, leaving out a few other ML algorithms.

Practical implications

This study reinforces the belief that the traffic data set has a significant effect on improving the performance of air pollution ML prediction models. There is also an indication that ML algorithms behave differently when trained with a traffic data set in the development of an AQ prediction model. This implies that developers and researchers in AQ prediction need to identify the ML algorithms that best serve their purpose before implementation.

Originality/value

The results of this study will enable researchers to focus on the algorithms that benefit most when traffic data sets are used in AQ prediction.

Details

Journal of Engineering, Design and Technology, vol. 22 no. 3
Type: Research Article
ISSN: 1726-0531

Keywords

Article
Publication date: 14 September 2023

Cheng Liu, Yi Shi, Wenjing Xie and Xinzhong Bao

This paper aims to provide a complete analysis framework and prediction method for the construction of the patent securitization (PS) basic asset pool.

Abstract

Purpose

This paper aims to provide a complete analysis framework and prediction method for the construction of the patent securitization (PS) basic asset pool.

Design/methodology/approach

This paper proposes an integrated classification method based on the genetic algorithm and the random forest algorithm. First, the patent value evaluation model and the SME credit evaluation model are considered together to determine 17 indicators measuring patent value and SME credit. Second, classification labels for high-quality basic assets are established. Then, the genetic algorithm and random forest model are used to predict and screen high-quality basic assets. Finally, the performance of the model is evaluated.
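
As a toy illustration of combining a genetic algorithm with a random forest (not the authors' integrated method), the sketch below lets a simple GA search for the subset of the 17 indicators that maximises cross-validated accuracy in classifying high-quality assets; the data, names and parameters are assumptions.

```python
# A toy sketch of one way to combine a genetic algorithm with a random forest
# (not the authors' integrated method): the GA searches for the subset of the
# 17 indicators that maximises cross-validated accuracy in classifying
# high-quality basic assets. Data, names and parameters are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask: np.ndarray, X: np.ndarray, y: np.ndarray) -> float:
    # Cross-validated accuracy of a random forest trained on the selected indicators.
    if mask.sum() == 0:
        return 0.0
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=5).mean()

def genetic_feature_selection(X, y, pop_size=20, generations=15, p_mut=0.1):
    n_features = X.shape[1]                                  # 17 indicators here
    pop = rng.integers(0, 2, size=(pop_size, n_features))    # random bit masks
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_features)                # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_features) < p_mut            # bit-flip mutation
            child[flip] = 1 - child[flip]
            children.append(child)
        pop = np.vstack([parents, np.array(children)])
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[scores.argmax()].astype(bool)                 # best indicator subset
```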

Findings

The machine learning model proposed in this study is mainly used to solve the screening problem of high-quality patents that constitute the underlying asset pool of PS. The empirical research shows that the integrated classification method based on the genetic algorithm and random forest has good performance and prediction accuracy and is superior to either of its constituent methods used alone.

Originality/value

The main contributions of the article are twofold: first, the proposed machine learning model determines the standards for high-quality basic assets; second, the article addresses the screening of basic assets in PS.

Details

Kybernetes, vol. 53 no. 2
Type: Research Article
ISSN: 0368-492X

Keywords

Article
Publication date: 13 February 2024

Marcelo Cajias and Anna Freudenreich

This is the first article to apply a machine learning approach to the analysis of time on market in real estate markets.

Abstract

Purpose

This is the first article to apply a machine learning approach to the analysis of time on market in real estate markets.

Design/methodology/approach

The random survival forest approach is introduced to the real estate market. The most important predictors of time on market are revealed, and the response of the survival probability of residential rental apartments to these major characteristics is analyzed.
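
A minimal sketch of fitting a random survival forest to listing data is given below, assuming the scikit-survival package is available; the column names are hypothetical listing attributes, not the authors' variables.

```python
# A minimal sketch of a random survival forest for time on market, assuming the
# scikit-survival package is available; the column names below are hypothetical
# listing attributes, not the authors' variables.
import pandas as pd
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

def fit_time_on_market(listings: pd.DataFrame) -> RandomSurvivalForest:
    # Predictors: rent, size, construction year and distances to amenities.
    X = listings[["rent", "living_area", "construction_year",
                  "dist_city_center", "dist_bakery", "dist_hairdresser"]]
    # Survival target: event = listing was taken off the market (rented),
    # time = number of days the apartment was listed.
    y = Surv.from_arrays(event=listings["rented"].astype(bool),
                         time=listings["days_on_market"])
    model = RandomSurvivalForest(n_estimators=500, min_samples_leaf=10,
                                 random_state=0)
    return model.fit(X, y)

# model.predict_survival_function(X_new) then estimates, for each new listing,
# the probability of still being on the market after t days.
```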

Findings

Results show that price, living area, construction year, year of listing and the distances to the nearest hairdresser, bakery and city center have the greatest impact on the marketing time of residential apartments. Time on market for an apartment in Munich is lowest at a rent of €750 per month, a living area of 60 m², a construction year of 1985 and a distance of 200–400 meters from the important amenities.

Practical implications

The findings might be interesting for private and institutional investors to derive real estate investment decisions and implications for portfolio management strategies and ultimately to minimize cash-flow failure.

Originality/value

Although machine learning algorithms have been applied frequently in the real estate market for the analysis of prices, their application to examining time on market is completely novel. This is the first paper to apply a machine learning approach to survival analysis in the real estate market.

Details

Journal of Property Investment & Finance, vol. 42 no. 2
Type: Research Article
ISSN: 1463-578X

Keywords

Article
Publication date: 16 April 2024

Liezl Smith and Christiaan Lamprecht

In a virtual interconnected digital space, the metaverse encompasses various virtual environments where people can interact, including engaging in business activities. Machine…

Abstract

Purpose

In a virtual interconnected digital space, the metaverse encompasses various virtual environments where people can interact, including engaging in business activities. Machine learning (ML) is a strategic technology that enables digital transformation to the metaverse, and it is becoming a more prevalent driver of business performance and reporting on performance. However, ML has limitations, and using the technology in business processes, such as accounting, poses a technology governance failure risk. To address this risk, decision makers and those tasked to govern these technologies must understand where the technology fits into the business process and consider its limitations to enable a governed transition to the metaverse. Using selected accounting processes, this study aims to describe the limitations that ML techniques pose to ensure the quality of financial information.

Design/methodology/approach

A grounded theory literature review method, consisting of five iterative stages, was used to identify the accounting tasks that ML could perform in the respective accounting processes, describe the ML techniques that could be applied to each accounting task and identify the limitations associated with the individual techniques.

Findings

This study finds that limitations such as data availability and training time may impact the quality of the financial information and that ML techniques and their limitations must be clearly understood when developing and implementing technology governance measures.

Originality/value

The study contributes to the growing literature on enterprise information and technology management and governance. In this study, the authors integrated current ML knowledge into an accounting context. As accounting is a pervasive aspect of business, the insights from this study will benefit decision makers and those tasked to govern these technologies to understand how some processes are more likely to be affected by certain limitations and how this may impact the accounting objectives. It will also benefit those users hoping to exploit the advantages of ML in their accounting processes while understanding the specific technology limitations on an accounting task level.

Details

Journal of Financial Reporting and Accounting, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1985-2517

Keywords

Article
Publication date: 5 February 2024

Elena Fedorova, Alexandr Nevredinov and Pavel Drogovoz

The purpose of our study is to examine the impact of chief executive officer (CEO) optimism and narcissism on the company's capital structure.

Abstract

Purpose

The purpose of our study is to examine the impact of chief executive officer (CEO) optimism and narcissism on the company's capital structure.

Design/methodology/approach

(1) The authors opt for regression, machine learning and text analysis to explore the impact of narcissism and optimism on the capital structure. (2) CEO interviews are analyzed and three methods are employed to evaluate narcissism: the dictionary proposed by Anglin, which enables an assessment of the components of authority, superiority, vanity and exhibitionism; the count of first-person singular and plural pronouns; and the count of CEO photos displayed. Following this approach, a more thorough assessment of corporate narcissism was possible. (3) The latent Dirichlet allocation (LDA) technique helped to find differences in the corporate rhetoric of narcissistic and non-narcissistic CEOs and between the topics of the interviews and letters they provided.
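
Two of these measurements, the pronoun counts and the LDA topic model, can be sketched in Python as follows (an illustration only, not the authors' code); the Anglin dictionary scores and the photo counts are not reproduced here.

```python
# An illustrative sketch of two of the measurements above: the share of
# first-person singular pronouns in CEO interview texts and an LDA topic model
# (not the authors' code; the Anglin dictionary and photo counts are omitted).
import re
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}
FIRST_PERSON_PLURAL = {"we", "us", "our", "ours", "ourselves"}

def singular_pronoun_share(text: str) -> float:
    """Share of first-person singular among all first-person pronouns."""
    tokens = re.findall(r"[a-z']+", text.lower())
    singular = sum(t in FIRST_PERSON_SINGULAR for t in tokens)
    plural = sum(t in FIRST_PERSON_PLURAL for t in tokens)
    total = singular + plural
    return singular / total if total else 0.0

def fit_lda(interviews: list[str], n_topics: int = 10):
    # Document-term matrix over the interview corpus, then an LDA topic model.
    vectorizer = CountVectorizer(stop_words="english", min_df=2)
    dtm = vectorizer.fit_transform(interviews)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    doc_topics = lda.fit_transform(dtm)     # per-interview topic distribution
    return lda, vectorizer, doc_topics
```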

Findings

Our research demonstrates that narcissism has a slight and nonlinear impact on capital structure. Our findings also suggest an impact of pessimism and uncertainty under pandemic conditions, when managers predicted doom and completely changed their strategies. We applied various approaches to estimate the gender distribution of CEOs and found that the median values of optimism and narcissism do not depend on sex. Using LDA, we examined the content and key topics of CEO interviews, classified as positive and negative. There are some differences in the topics: narcissistic CEOs are more likely to speak about long-term goals, projects and problems, and they often talk about their brand and business processes.

Originality/value

First, we examine the COVID-19 pandemic period and evaluate how CEO optimism and pessimism affect their financial decisions under specific external conditions. The pandemic forced companies to change the way they worked: to switch to a remote work model or to interrupt operations, and to lose or, on the contrary, attract clients. In addition, during this period, corporate management can have a different outlook on their company’s financial performance and goals. The LDA technique helped to find the differences in the corporate rhetoric of narcissistic and non-narcissistic CEOs. Second, we use three methods to evaluate narcissism. Third, the research is based on a set of advanced methods, including machine learning techniques (random forest is used to reveal the nonlinear impact of CEO optimism and narcissism on capital structure).

Details

Review of Behavioral Finance, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1940-5979

Keywords

Article
Publication date: 28 February 2024

Elena Fedorova, Daria Aleshina and Igor Demin

The goal of this work is to evaluate how digital transformation disclosure in corporate news and press releases affects stock prices. We examine American and Chinese companies…

Abstract

Purpose

The goal of this work is to evaluate how digital transformation disclosure in corporate news and press releases affects stock prices. We examine American and Chinese companies from the energy and industry sectors for two periods: pre-COVID-19 and during the COVID-19 pandemic.

Design/methodology/approach

To estimate the effects of disclosure of information related to digital transformation, we applied the bag-of-words (BOW) method. As the benchmark dictionary, we used Kindermann et al. (2021), with the addition of original dictionaries created via Latent Dirichlet allocation (LDA) analysis. We also employed panel regression analysis and random forest.
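
A hedged sketch of the dictionary-based bag-of-words step is shown below (not the authors' code); the term list is a hypothetical stand-in for the Kindermann et al. (2021) benchmark dictionary and the LDA-derived additions, and the aggregation into a single disclosure score is an assumption.

```python
# A hedged sketch of the dictionary-based bag-of-words step (not the authors'
# code). DIGITAL_TERMS is a hypothetical stand-in for the Kindermann et al. (2021)
# benchmark dictionary and the LDA-derived additions.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

DIGITAL_TERMS = ["digital transformation", "digitalization", "big data",
                 "artificial intelligence", "cloud", "automation"]

def dictionary_scores(texts: list[str]) -> pd.DataFrame:
    # ngram_range=(1, 2) lets two-word dictionary terms be matched as well.
    vectorizer = CountVectorizer(vocabulary=DIGITAL_TERMS, ngram_range=(1, 2),
                                 lowercase=True)
    counts = vectorizer.fit_transform(texts)
    scores = pd.DataFrame(counts.toarray(),
                          columns=vectorizer.get_feature_names_out())
    scores["dt_disclosure"] = scores.sum(axis=1)   # aggregate disclosure measure
    return scores
```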

Findings

For the US energy sector, all aspects of digital transformation were insignificant in the pre-COVID-19 period, while sustainability topics became significant during the pandemic. For the Chinese energy sector, digital strategy implementation was significant in the pre-pandemic period, while digital technologies adoption and business model innovation became relevant in the COVID-19 period. The results show the greater significance of digital transformation aspects for the industrials sector compared to the energy sector. The random forest analysis confirms the usefulness of the authors’ dictionary, which could be applied in practice, and supports the relevance of the developed methodology.

Originality/value

The research contributes to the existing literature in theoretical, empirical and methodological ways. It applies signaling and information asymmetry theories to financial markets, with digital transformation used as an instrument. The methodological contribution of this article is twofold. First, our data collection process differs from that in previous papers, as the data are gathered “from the investor’s point of view”, i.e. we use all public information published by the company. Second, in addition to using existing dictionaries based on Kindermann et al. (2021), with our own modifications, we apply an original methodology based on LDA analysis. The empirical contribution of this research is as follows: unlike past works, we do not focus on particular technologies (Hong et al., 2023) connected with digital transformation but try to cover all multidimensional aspects of the transformation process and aim to discover the most significant ones.

Details

European Journal of Innovation Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1460-1060

Keywords

Article
Publication date: 28 February 2024

Yoonjae Hwang, Sungwon Jung and Eun Joo Park

Initiator crimes, also known as near-repeat crimes, occur in places with known risk factors and vulnerabilities based on prior crime-related experiences or information…


Abstract

Purpose

Initiator crimes, also known as near-repeat crimes, occur in places with known risk factors and vulnerabilities based on prior crime-related experiences or information. Consequently, the environment in which initiator crimes occur might be different from more general crime environments. This study aimed to analyse the differences between the environments of initiator crimes and general crimes, confirming the need for predicting initiator crimes.

Design/methodology/approach

Predictive models were built using two dependent variables: data corresponding to initiator crimes and data on all residential burglaries without considering repetitive crime patterns. Using random forest and gradient boosting, two representative ensemble models, the predictive models were compared utilising various environmental factor data. Subsequently, we evaluated the performance of each predictive model and derived feature importance and partial dependence from the best-performing model.
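
A compact scikit-learn sketch of this workflow is shown below (not the authors' code): both ensemble models are scored by cross-validation, and feature importance and partial dependence are then inspected for the better-performing model. The environmental-factor features and the binary crime label are hypothetical.

```python
# A compact sketch of the workflow described above, using scikit-learn (not the
# authors' code): both ensemble models are scored by cross-validation, then feature
# importance and partial dependence are inspected for the better-performing model.
# env_factors (one row per spatial unit) and the binary crime label are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.inspection import partial_dependence
from sklearn.model_selection import cross_val_score

def analyse(env_factors: pd.DataFrame, crime_label: pd.Series):
    models = {
        "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
        "gradient_boosting": GradientBoostingClassifier(random_state=0),
    }
    scores = {name: cross_val_score(m, env_factors, crime_label, cv=5).mean()
              for name, m in models.items()}
    best_name = max(scores, key=scores.get)
    best = models[best_name].fit(env_factors, crime_label)

    # Importance of each environmental factor, plus partial dependence of the
    # predicted crime risk on the single most important factor.
    importance = pd.Series(best.feature_importances_, index=env_factors.columns)
    top_idx = int(env_factors.columns.get_loc(importance.idxmax()))
    pdp = partial_dependence(best, env_factors, features=[top_idx])
    return scores, importance.sort_values(ascending=False), pdp
```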

Findings

By analysing environmental factors affecting overall residential burglary and initiator crimes, we observed notable differences in high-importance variables. Further analysis of the partial dependence of total residential burglary and initiator crimes based on these variables revealed distinct impacts on each crime. Moreover, initiator crimes took place in environments consistent with well-known theories in the field of environmental criminology.

Originality/value

Our findings indicate that an initiator crime prediction model can identify results that do not emerge from existing theft crime prediction methods. Emphasising the importance of investigating the environments in which initiator crimes occur, this study underscores the potential of artificial intelligence (AI)-based approaches in creating a safe urban environment. By effectively preventing potential crimes, AI-driven prediction of initiator crimes can significantly contribute to enhancing urban safety.

Details

Archnet-IJAR: International Journal of Architectural Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2631-6862

Keywords

Article
Publication date: 20 February 2024

Abebe Hambe Talema and Wubshet Berhanu Nigusie

The purpose of this study is to analyze the horizontal expansion of Burayu Town between 1990 and 2020. The study acts as a baseline for integrated spatial planning in…

Abstract

Purpose

The purpose of this study is to analyze the horizontal expansion of Burayu Town between 1990 and 2020. The study acts as a baseline for integrated spatial planning in small- and medium-sized towns, which will help to plan the sustainable utilization of land.

Design/methodology/approach

Landsat 5 TM, Landsat 7 ETM+, Landsat 5 TM and Landsat 8 OLI imagery were used in the study, along with other auxiliary data. The land use/land cover (LULC) map classifications were generated using the randomForest package from the Comprehensive R Archive Network. Post-classification comparison, spatial metrics and the per capita land consumption rate were used to understand the manner and rate of expansion of Burayu Town. Focus group discussions and key informant interviews were also used to validate the land use classes through triangulation.
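
The paper's classification was done with the randomForest package in R; for illustration only, the sketch below shows the same per-pixel random forest classification idea in Python with scikit-learn. The array shapes and the label convention are assumptions, not the authors' setup.

```python
# The paper used the randomForest package in R; for illustration only, this sketch
# shows the same per-pixel random forest classification idea in Python with
# scikit-learn. Array shapes and the label convention (0 = unlabelled) are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_lulc(bands: np.ndarray, training_labels: np.ndarray) -> np.ndarray:
    """bands: stacked Landsat bands (n_bands, height, width);
    training_labels: (height, width) LULC class codes for reference pixels."""
    n_bands, height, width = bands.shape
    pixels = bands.reshape(n_bands, -1).T        # one row of band values per pixel
    labels = training_labels.ravel()

    train_mask = labels > 0                      # pixels with a reference class
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    clf.fit(pixels[train_mask], labels[train_mask])

    predicted = clf.predict(pixels)              # classify every pixel in the scene
    return predicted.reshape(height, width)      # back to the image grid
```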

Findings

The study found that the built-up area was the most dynamic LULC category (85.1%), as it increased by over 4,000 ha between 1990 and 2020. Furthermore, the population increase did not result in a density increase, as per capita land consumption rose from 0.024 to 0.040 over the same period.

Research limitations/implications

Owing to financial limitations, no high-resolution satellite images were available, making it challenging to establish the ground truth. Involving senior citizens of the study region allowed this study to overcome these restrictions and detect every type of land use and land cover.

Practical implications

Data on urban growth are useful for planning land uses, estimating growth rates and advising the government on how best to use land. This can be achieved by monitoring and reviewing development plans using satellite imaging data and GIS tools.

Originality/value

The use of random forest for image classification, combined with the employment of local knowledge to validate the accuracy of the land cover classification, is a novel approach to properly customizing remote sensing applications.

Details

Management of Environmental Quality: An International Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1477-7835

Keywords
