Search results

1 – 10 of 37
Open Access
Article
Publication date: 3 August 2023

Ahmad Hakimi Tajuddin, Shabiha Akter, Rasidah Mohd-Rashid and Waqas Mehmood

The purpose of this study is to examine the associations between board size, board independence and triple bottom line (TBL) reporting. The TBL report consists of three…


Abstract

Purpose

The purpose of this study is to examine the associations between board size, board independence and triple bottom line (TBL) reporting. The TBL report consists of three components, namely, environmental, social and economic indices.

Design/methodology/approach

This study’s sample consists of the top 50 companies listed on the Tadawul Stock Exchange from 2017 to 2019. Ordinary least squares, quantile least squares and robust least squares are used to investigate the associations between board characteristics and TBL reporting, including its separate components.
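
A minimal sketch of how the three estimators named above could be run in Python with statsmodels; the synthetic data, variable names (tbl_score, board_size, board_independence, firm_size) and the control set are hypothetical placeholders, not the authors' specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the firm-year panel (top 50 Tadawul-listed companies,
# 2017-2019); all column names are hypothetical placeholders.
rng = np.random.default_rng(0)
n = 150  # roughly 50 firms x 3 years
df = pd.DataFrame({
    "board_size": rng.integers(6, 13, n),
    "board_independence": rng.uniform(0.2, 0.8, n),
    "firm_size": rng.normal(15, 1.5, n),
})
df["tbl_score"] = 0.4 * df["board_size"] - 2.0 * df["board_independence"] + rng.normal(0, 1, n)

formula = "tbl_score ~ board_size + board_independence + firm_size"
ols_fit = smf.ols(formula, data=df).fit()                 # ordinary least squares
quantile_fit = smf.quantreg(formula, data=df).fit(q=0.5)  # quantile regression at the median
robust_fit = smf.rlm(formula, data=df).fit()              # robust least squares (M-estimation)

for name, fit in [("OLS", ols_fit), ("Quantile", quantile_fit), ("Robust", robust_fit)]:
    print(name, round(fit.params["board_independence"], 3))
```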

Findings

The authors find a significant negative association between TBL reporting and board independence. The social bottom line is significantly and negatively related to board size and board independence. The results indicate that board independence negatively influences companies' TBL disclosure. Therefore, companies are encouraged to embrace TBL reporting. This suggests that businesses should improve the quality of their reporting while ensuring that voluntary disclosures reflect an accurate and fair view, in order to preserve a positive relationship with stakeholders.

Originality/value

The present study provides evidence on the determinants of TBL reporting in Saudi Arabia.

Details

Arab Gulf Journal of Scientific Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1985-9899


Open Access
Article
Publication date: 20 November 2023

Asad Mehmood and Francesco De Luca

This study aims to develop a model based on the financial variables for better accuracy of financial distress prediction on the sample of private French, Spanish and Italian…


Abstract

Purpose

This study aims to develop a model based on financial variables for more accurate financial distress prediction, using a sample of private French, Spanish and Italian firms. Thus, firms in financial difficulty could request troubled debt restructuring (TDR) in a timely manner to continue their business.

Design/methodology/approach

This study used a sample of 312 distressed and 312 non-distressed firms. It includes 60 French, 21 Spanish and 231 Italian firms in both distressed and non-distressed groups. The data are extracted from the ORBIS database. First, the authors develop a new model by replacing a ratio in the original Z”-Score model specifically for financial distress prediction and estimate its coefficients based on linear discriminant analysis (LDA). Second, using the modified Z”-Score model, the authors develop a firm TDR probability index for distressed and non-distressed firms based on the logistic regression model.
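
A rough sketch of the two-step design (re-estimating discriminant weights for a modified Z”-Score, then turning the score into a TDR probability index) using scikit-learn; the synthetic ratios, column names and coefficients below are placeholders, not the authors' variables or estimates.

```python
import numpy as np
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the 312 distressed + 312 non-distressed firms; the
# ratio columns only mimic Z''-Score-style inputs and are not the authors' data.
rng = np.random.default_rng(1)
y = np.array([1] * 312 + [0] * 312)                     # 1 = distressed
X = pd.DataFrame({
    "wc_ta": rng.normal(0.10 - 0.10 * y, 0.05),
    "re_ta": rng.normal(0.20 - 0.15 * y, 0.10),
    "ebit_ta": rng.normal(0.08 - 0.10 * y, 0.05),
    "bve_tl": rng.normal(1.00 - 0.50 * y, 0.30),
})

# Step 1: linear discriminant analysis re-estimates the ratio weights,
# playing the role of the modified Z''-Score coefficients.
lda = LinearDiscriminantAnalysis().fit(X, y)
z_modified = lda.decision_function(X)

# Step 2: a logistic regression on the modified score yields, for each firm,
# a probability usable as a TDR probability index.
logit = LogisticRegression().fit(z_modified.reshape(-1, 1), y)
tdr_index = logit.predict_proba(z_modified.reshape(-1, 1))[:, 1]
print(round(tdr_index[:312].mean(), 2), round(tdr_index[312:].mean(), 2))
```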

Findings

The new model (the modified Z”-Score), designed specifically for financial distress prediction, achieves higher prediction accuracy. Moreover, the firm TDR probability index accurately depicts the probability trend for both the distressed and non-distressed groups of firms.

Research limitations/implications

The findings of this study are conclusive. However, the sample size is small. Therefore, further studies could extend the application of the prediction model developed in this study to all the EU countries.

Practical implications

This study has important practical implications. It responds to the EU directive's call by developing a financial distress prediction model that allows debtors to undertake timely debt restructuring and thus continue their businesses. Therefore, this study could be useful for practitioners and firm stakeholders, such as banks, other creditors and investors.

Originality/value

This study significantly contributes to the literature in several ways. First, this study develops a model for predicting financial distress based on the argument that corporate bankruptcy and financial distress are distinct events. However, the original Z”-Score model is intended for failure prediction. Moreover, the recent literature suggests modifying and extending the prediction models. Second, the new model is tested using a sample of firms from three countries that share similarities in their TDR laws.

Details

Journal of Applied Accounting Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0967-5426


Open Access
Article
Publication date: 12 July 2023

Patrick Kwashie Akorsu

Credit Default Swap (CDS) trading alters equilibrium interactive monitoring of external corporate monitors due to a possible change in private lenders' incentive to monitor client…

Abstract

Purpose

Credit Default Swap (CDS) trading alters equilibrium interactive monitoring of external corporate monitors due to a possible change in private lenders' incentive to monitor client firms. This study explores how audit fees change in response to CDS trade initiation on client firms and how this effect is moderated by investor protection.

Design/methodology/approach

Using 6,052 cross-country firm observations, the author conducts estimations in a system dynamic generalized method of moments (GMM) framework.

Findings

The author documents that audit fees rise on average after CDS trade initiation, with and/or without investor protection. Meanwhile, changes in auditors' risk perception result in increased audit costs when CDS trade initiation and investor protection interact. The effect of CDS trading on audit fees remains after controlling for firm, audit and auditor features, and is robust to different proxies of audit cost.

Practical implications

Firms in high investor protection jurisdictions that initiate CDS trading need to implement policies that maximize their gains from investor protection activities, in order to lessen the overall impact of any increased audit costs that may arise. Furthermore, CDS regulation may be strategically targeted to lessen the effect of increased audit costs on firms after initiation. This would ensure that the resulting increase in audit costs does not materially impact the cash or profitability position of such firms.

Originality/value

This study is distinct from previous ones in focusing on the variation in private lenders' incentive to monitor after CDS trade initiation, while controlling for possible monitoring by short-term creditors. Given that monitoring is not costless for private lenders and that CDS trading on their borrowers changes this cost structure, the author documents how auditors react to such changes in the incentive to monitor.


Details

European Journal of Management and Business Economics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2444-8451


Open Access
Article
Publication date: 15 June 2021

Leila Ismail and Huned Materwala

Machine Learning is an intelligent methodology used for prediction and has shown promising results in predictive classifications. One of the critical areas in which machine…


Abstract

Purpose

Machine Learning is an intelligent methodology used for prediction and has shown promising results in predictive classifications. One of the critical areas in which machine learning can save lives is diabetes prediction. Diabetes is a chronic disease and one of the top 10 causes of death worldwide. The total number of people with diabetes is expected to reach 700 million by 2045, a 51.18% increase compared to 2019. These are alarming figures, and it is therefore urgent to provide accurate diabetes prediction.

Design/methodology/approach

Health professionals and stakeholders are striving for classification models to support the prognosis of diabetes and formulate strategies for prevention. The authors conduct a literature review of machine learning models and propose an intelligent framework for diabetes prediction.

Findings

The authors provide a critical analysis of machine learning models and propose and evaluate an intelligent machine learning-based architecture for diabetes prediction. Using their framework, they implement and evaluate the decision tree (DT)-based random forest (RF) and support vector machine (SVM) learning models for diabetes prediction, as the most widely used approaches in the literature.
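
An illustrative sketch of the two learners evaluated in the framework, trained here on synthetic data standing in for a real diabetes dataset; the features, split and hyperparameters are placeholders, not the authors' pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for diabetes features (e.g. glucose, BMI, age, ...).
X, y = make_classification(n_samples=800, n_features=8, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "Random forest (DT-based)": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf", C=1.0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, round(accuracy_score(y_test, model.predict(X_test)), 3))
```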

Originality/value

This paper provides a novel intelligent diabetes mellitus prediction framework (IDMPF) using machine learning. The framework is the result of a critical examination of prediction models in the literature and their application to diabetes. The authors identify the training methodologies, model evaluation strategies and challenges in diabetes prediction, and propose solutions within the framework. The research results can be used by health professionals, stakeholders, students and researchers working in the area of diabetes prediction.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964


Open Access
Article
Publication date: 19 January 2024

Ummi Ibrahim Atah, Mustafa Omar Mohammed, Abideen Adewale Adeyemi and Engku Rabiah Adawiah

The purpose of this paper is to propose a model that will demonstrate how the integration of Salam (exclusive agricultural commodity trade) with Takaful (micro-Takaful – a…

Abstract

Purpose

The purpose of this paper is to propose a model that will demonstrate how the integration of Salam (exclusive agricultural commodity trade) with Takaful (micro-Takaful – a subdivision of Islamic insurance) and value chain can address major challenges facing the agricultural sector in Kano State, Nigeria.

Design/methodology/approach

The study conducted a thorough and critical analysis of relevant literature and existing models of financing agriculture in Nigeria to come up with the proposed model.

Findings

The findings indicate that the measures undertaken to address the major challenges have failed. In view of this, this study proposes a Bay-Salam with Takaful and value chain model to address challenges such as poor access to financing, poor marketing and pricing, delays, collateral requirements and risk issues, in order to give farmers easy access to finance and provide effective security to financial institutions.

Research limitations/implications

The paper is limited to using secondary data. Therefore, empirical investigation can be carried out to strengthen the validation of the model.

Practical implications

The study outcome seeks to improve the productivity of the farmers through enhancing their access to finance. This will increase their level of production and provide more employment opportunities. In addition, it will boost financial inclusion, income generation, poverty alleviation, standard of living, food security and overall economic growth and development.

Originality/value

The novelty of this study lies in integrating classical Bay-Salam with Takaful and the value chain to create a unique model structure which, to the researchers' knowledge, has not been presented in any prior research in Nigeria.

Details

Islamic Economic Studies, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1319-1616


Open Access
Article
Publication date: 7 September 2021

Ema Utami, Irwan Oyong, Suwanto Raharjo, Anggit Dwi Hartanto and Sumarni Adi

Gathering knowledge regarding personality traits has long been the interest of academics and researchers in the fields of psychology and in computer science. Analyzing profile…


Abstract

Purpose

Gathering knowledge regarding personality traits has long been of interest to academics and researchers in the fields of psychology and computer science. Analyzing profile data from personal social media accounts reduces data collection time, as this method does not require users to fill in any questionnaires. A pure natural language processing (NLP) approach can give decent results, and its reliability can be improved by combining it with machine learning (as shown by previous studies).

Design/methodology/approach

In this study, cleaning the dataset and extracting relevant potential features (as assessed by psychological experts) are essential, as Indonesians tend to mix formal words, non-formal words, slang and abbreviations when writing social media posts. For this article, raw data were derived from a predefined dominance, influence, stability and conscientiousness (DISC) quiz website, returning 316,967 tweets from 1,244 Twitter accounts (filtered to include only personal and Indonesian-language accounts). Using a combination of NLP techniques and machine learning, the authors aim to develop a better approach and a more robust model, especially for the Indonesian language.

Findings

The authors find that employing a SMOTETomek re-sampling technique and hyperparameter tuning boosts the model’s performance on formalized datasets by 57% (as measured through the F1-score).
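
A minimal sketch of combining SMOTETomek re-sampling with hyperparameter tuning in a single pipeline, in the spirit of the finding above; the synthetic features, the random forest classifier and the parameter grid are placeholders for the authors' actual formalized-text features and model.

```python
import numpy as np
from imblearn.combine import SMOTETomek
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# X stands in for numeric features extracted from formalized tweets per account;
# y stands in for DISC personality labels. Both are synthetic placeholders.
rng = np.random.default_rng(42)
X = rng.normal(size=(400, 20))
y = rng.choice(4, size=400, p=[0.5, 0.25, 0.15, 0.1])   # imbalanced DISC classes

pipeline = Pipeline([
    ("resample", SMOTETomek(random_state=42)),            # combined over-/under-sampling
    ("clf", RandomForestClassifier(random_state=42)),
])
param_grid = {"clf__n_estimators": [100, 300], "clf__max_depth": [None, 10]}
search = GridSearchCV(pipeline, param_grid, scoring="f1_macro", cv=3)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```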

Originality/value

The process of cleaning the dataset and extracting relevant potential features assessed by psychological experts is essential because Indonesians tend to mix formal words, non-formal words, slang words and abbreviations when writing tweets. The organic data derived from a predefined DISC quiz website yielded 1,244 Twitter accounts and 316,967 tweets.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964


Open Access
Article
Publication date: 21 February 2024

Aysu Coşkun and Sándor Bilicz

This study focuses on the classification of targets with varying shapes using radar cross section (RCS), which is influenced by the target’s shape. This study aims to develop a…

Abstract

Purpose

This study focuses on the classification of targets with varying shapes using radar cross section (RCS), which is influenced by the target’s shape. This study aims to develop a robust classification method by considering an incident angle with minor random fluctuations and using a physical optics simulation to generate data sets.

Design/methodology/approach

The approach involves several supervised machine learning and classification methods, including traditional algorithms and a deep neural network classifier. It uses histogram-based definitions of the RCS for feature extraction, with an emphasis on resilience against noise in the RCS data. Data enrichment techniques are incorporated, including the use of noise-impacted histogram data sets.

Findings

The classification algorithms are extensively evaluated, highlighting their efficacy in feature extraction from RCS histograms. Among the studied algorithms, the K-nearest neighbour is found to be the most accurate of the traditional methods, but it is surpassed in accuracy by a deep learning network classifier. The results demonstrate the robustness of the feature extraction from the RCS histograms, motivated by mm-wave radar applications.
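
A rough sketch of the comparison described above, using histogram features computed from RCS samples and contrasting a K-nearest neighbour classifier with a small neural network; the synthetic RCS values, bin settings and class definitions are placeholders, not the physical optics simulation data.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def rcs_histogram(samples, bins=32, span=(-40.0, 10.0)):
    """Histogram of RCS samples (in dB) used as the feature vector."""
    hist, _ = np.histogram(samples, bins=bins, range=span, density=True)
    return hist

# Synthetic stand-in for the simulated targets: each class differs in mean RCS level.
X, y = [], []
for label, mean_db in enumerate([-20.0, -10.0, 0.0]):
    for _ in range(200):
        samples = rng.normal(mean_db, 6.0, size=500)   # RCS over fluctuating incident angles
        X.append(rcs_histogram(samples))
        y.append(label)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("Neural net", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=1))]:
    clf.fit(X_tr, y_tr)
    print(name, round(accuracy_score(y_te, clf.predict(X_te)), 3))
```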

Originality/value

This study presents a novel approach to target classification that extends beyond traditional methods by integrating deep neural networks and focusing on histogram-based methodologies. It also incorporates data enrichment techniques to enhance the analysis, providing a comprehensive perspective for target detection using RCS.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0332-1649


Open Access
Article
Publication date: 27 July 2023

Aicha Gasmi, Marc Heran, Noureddine Elboughdiri, Lioua Kolsi, Djamel Ghernaout, Ahmed Hannachi and Alain Grasmick

The main purpose of this study resides essentially in the development of a new tool to quantify the biomass in the bioreactor operating under steady state conditions.

Abstract

Purpose

The main purpose of this study is to develop a new tool for quantifying the biomass in a bioreactor operating under steady-state conditions.

Design/methodology/approach

Modeling is the most relevant tool for understanding the functioning of some complex processes such as biological wastewater treatment. A steady state model equation of activated sludge model 1 (ASM1) was developed, especially for autotrophic biomass (XBA) and for oxygen uptake rate (OUR). Furthermore, a respirometric measurement, under steady state and endogenous conditions, was used as a new tool for quantifying the viable biomass concentration in the bioreactor.
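
A minimal numeric sketch of the respirometric idea, assuming the standard simplified endogenous-respiration relation OUR_endogenous = (1 - f_p) * b_A * XBA; this relation, the parameter values and the example readings are illustrative assumptions, not the study's ASM1 calibration.

```python
# Back-calculating viable autotrophic biomass from an endogenous respirometric
# measurement, under the assumed relation OUR_endogenous = (1 - f_p) * b_A * XBA.
# Parameter values below are illustrative defaults, not the study's calibration.

def biomass_from_endogenous_our(our_end, b_a=0.15, f_p=0.2):
    """our_end in mg O2/(L.d); b_a (decay rate) in 1/d; returns XBA in mg COD/L."""
    return our_end / ((1.0 - f_p) * b_a)

if __name__ == "__main__":
    for our_end in (25.0, 55.0):        # two hypothetical endogenous OUR readings
        print(our_end, "->", round(biomass_from_endogenous_our(our_end), 1), "mg COD/L")
```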

Findings

The developed steady-state equations simplified the sensitivity analysis and allowed quantification of the autotrophic biomass (XBA). Indeed, the XBA concentration was approximately 212 mg COD/L and 454 mg COD/L for sludge retention times (SRT) of 20 and 40 days, respectively. Under steady-state conditions, monitoring the endogenous OUR permitted biomass quantification in the bioreactor. Comparing the XBA obtained from the steady-state equation with that from the respirometric tool indicated a deviation of about 3 to 13%. Modeling the bioreactor using GPS-X showed excellent agreement between the simulations and the experimental measurements of the XBA evolution.

Originality/value

These results confirmed the importance of respirometric measurements as a simple and available tool for quantifying biomass.

Details

Arab Gulf Journal of Scientific Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1985-9899


Open Access
Article
Publication date: 27 February 2023

Vasileios Stamatis, Michail Salampasis and Konstantinos Diamantaras

In federated search, a query is sent simultaneously to multiple resources and each one of them returns a list of results. These lists are merged into a single list using the…

Abstract

Purpose

In federated search, a query is sent simultaneously to multiple resources and each one of them returns a list of results. These lists are merged into a single list using the results merging process. In this work, the authors apply machine learning methods to results merging in federated patent search. Even though several methods for results merging have been developed, none of them has been tested on patent data or considered multiple machine learning models. Thus, the authors experiment with state-of-the-art methods using patent data and propose two new results merging methods that use machine learning models.

Design/methodology/approach

The methods are based on a centralized index containing samples of documents from all the remote resources, and they implement machine learning models to estimate comparable scores for the documents retrieved by different resources. The authors examine the new methods in cooperative and uncooperative settings where document scores from the remote search engines are available and not, respectively. In uncooperative environments, they propose two methods for assigning document scores.
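
A rough sketch of the core idea, assuming a regression model per resource trained on documents shared with the centralized sample index so that local scores can be mapped onto a comparable scale before merging; the data, model choice and score relationship below are synthetic placeholders, not the authors' exact methods.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)

def train_mapper(local_scores, central_scores):
    """Learn a mapping from a resource's local scores to centralized-index scores."""
    model = RandomForestRegressor(n_estimators=100, random_state=7)
    model.fit(local_scores.reshape(-1, 1), central_scores)
    return model

# For each remote resource, overlapping documents give (local score -> central score) pairs.
resources = {}
for name in ("resource_a", "resource_b"):
    local = rng.uniform(0, 1, size=200)
    central = 0.6 * local + rng.normal(0, 0.05, size=200)   # synthetic stand-in relationship
    resources[name] = train_mapper(local, central)

# Merging: rescore each returned result with its resource's mapper, then sort globally.
results = [("resource_a", "doc1", 0.91), ("resource_b", "doc7", 0.88), ("resource_a", "doc3", 0.40)]
merged = sorted(
    ((doc, float(resources[src].predict([[score]])[0])) for src, doc, score in results),
    key=lambda t: t[1], reverse=True)
print(merged)
```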

Findings

The new results merging methods were evaluated against state-of-the-art models and found to be superior in many cases, with significant improvements. The random forest model achieves the best results in comparison to all other models and presents new insights for the results merging problem.

Originality/value

In this article the authors show that machine learning models can substitute for the standard methods and models that have been used for results merging for many years. The proposed methods outperformed state-of-the-art estimation methods for results merging and proved more effective for federated patent search.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288


Open Access
Article
Publication date: 25 September 2023

Wassim Ben Ayed and Rim Ben Hassen

This research aims to evaluate the accuracy of several Value-at-Risk (VaR) approaches for determining the Minimum Capital Requirement (MCR) for Islamic stock markets during the…

Abstract

Purpose

This research aims to evaluate the accuracy of several Value-at-Risk (VaR) approaches for determining the Minimum Capital Requirement (MCR) for Islamic stock markets during the pandemic health crisis.

Design/methodology/approach

This research evaluates the performance of numerous VaR models for computing the MCR for market risk, in compliance with the Basel II and Basel II.5 guidelines, for ten Islamic indices. Five approaches were applied, namely RiskMetrics, the generalized autoregressive conditional heteroskedasticity (GARCH) model, the fractionally integrated GARCH (FIGARCH) model and the Spline-GARCH model, each under three innovation distributions (normal (N), Student's t (St) and skewed Student's t (Sk-t)), together with the extreme value theory (EVT).
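
A minimal sketch of one of the listed specifications, a GARCH(1,1) with Student's t innovations fitted with the arch package, and the resulting one-day 99% VaR that would feed a Basel-style MCR; the return series is a synthetic placeholder for an Islamic index.

```python
import numpy as np
from scipy.stats import t
from arch import arch_model

# Synthetic daily returns (in %) standing in for an Islamic index series.
rng = np.random.default_rng(3)
returns = rng.standard_t(df=6, size=1500) * 0.8

res = arch_model(returns, vol="Garch", p=1, q=1, dist="t").fit(disp="off")
fc = res.forecast(horizon=1)
mu = fc.mean.values[-1, 0]
sigma = np.sqrt(fc.variance.values[-1, 0])

nu = res.params["nu"]
q01 = t.ppf(0.01, nu) * np.sqrt((nu - 2) / nu)   # 1% quantile of the standardized Student's t
var_99 = -(mu + q01 * sigma)                     # one-day 99% VaR (loss, in %)
print(round(var_99, 2))
# A Basel-style MCR would then scale this, e.g. max(VaR_t-1, k * 60-day average VaR) with k >= 3.
```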

Findings

The main findings of this empirical study reveal that (1) extreme value theory performs better for most indices during the market crisis and (2) VaR models under a normal distribution perform considerably worse than models with fat-tailed innovations in terms of risk estimation.

Research limitations/implications

Since the world is now undergoing the third wave of the COVID-19 pandemic, this study is not able to assess the performance of VaR models during a fourth wave of COVID-19.

Practical implications

The results suggest that the Islamic Financial Services Board (IFSB) should enhance market discipline mechanisms, while central banks and national authorities should harmonize their regulatory frameworks in line with Basel/IFSB reform agenda.

Originality/value

Previous studies focused on evaluating market risk models using non-Islamic indexes, whereas this research uses Islamic indexes to analyze the VaR forecasting models. In addition, previous studies tested the accuracy of VaR models based on traditional GARCH models, whereas the authors introduce the Spline-GARCH developed by Engle and Rangel (2008). Finally, most studies have focused on the 2007–2008 financial crisis, while the authors investigate market risk quantification for several Islamic equity markets during the COVID-19 health crisis.

Details

PSU Research Review, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2399-1747
