Search results

1 – 10 of 34
Open Access
Article
Publication date: 4 January 2024

Ankita Kalia

This study aims to explore the relationship between chief executive officer (CEO) power and stock price crash risk in India. Furthermore, it seeks to analyse how insider trades…

Abstract

Purpose

This study aims to explore the relationship between chief executive officer (CEO) power and stock price crash risk in India. Furthermore, it seeks to analyse how insider trades may moderate the impact of CEO power on stock price crash risk.

Design/methodology/approach

A study of 236 companies from the S&P BSE 500 Index (2014–2023) has been analysed through pooled ordinary least squares (OLS) regression in the baseline analysis. To enhance the reliability of the results, robustness checks include alternative methodologies, such as fixed-effects panel data regression, binary logistic regression and Bayesian regression. Additional control variables and an alternative crash risk measure have also been utilised. To address potential endogeneity, instrumental variable techniques such as two-stage least squares (IV-2SLS) and difference-in-differences (DiD) methodologies are employed.
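Pooled OLS treats the firm-year panel as one stacked cross-section and fits a single coefficient vector. A minimal NumPy sketch on simulated data (the variable names ceo_duality, ceo_tenure and firm_size, and all coefficients, are illustrative assumptions, not the paper's specification):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical firm-year panel: 236 firms x 9 years, stacked for pooled OLS.
n_obs = 236 * 9
ceo_duality = rng.integers(0, 2, n_obs).astype(float)   # CEO also chairs the board
ceo_tenure = rng.uniform(1, 20, n_obs)
firm_size = rng.normal(10, 2, n_obs)

# Simulated outcome with a negative CEO-duality effect, mirroring the
# direction the study reports (the coefficients here are made up).
crash_risk = (0.5 - 0.3 * ceo_duality + 0.02 * ceo_tenure
              + 0.05 * firm_size + rng.normal(0, 1, n_obs))

# Pooled OLS: one coefficient vector over all firm-years.
X = np.column_stack([np.ones(n_obs), ceo_duality, ceo_tenure, firm_size])
beta, *_ = np.linalg.lstsq(X, crash_risk, rcond=None)
print(dict(zip(["const", "duality", "tenure", "size"], beta.round(3))))
```

The fixed-effects and IV-2SLS robustness checks differ only in how X is constructed (demeaned within firm, or instrumented), not in the least-squares step itself.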

Findings

The results support stakeholder theory, revealing that CEO power proxies such as CEO duality, status and directorship reduce one-year-ahead stock price crash risk, and vice versa. Insider trades are found to moderate the link between select dimensions of CEO power and stock price crash risk. These findings persist after addressing potential endogeneity concerns, and the results remain consistent across alternative methodologies and variable inclusions.

Originality/value

This study significantly advances research on stock price crash risk, especially in emerging economies like India. The implications of these findings are crucial for investors aiming to mitigate crash risk, for corporations seeking enhanced governance measures and for policymakers considering the economic and welfare consequences associated with this phenomenon.

Details

Asian Journal of Economics and Banking, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2615-9821

Keywords

Book part
Publication date: 25 October 2023

Md Aminul Islam and Md Abu Sufian

This research navigates the confluence of data analytics, machine learning, and artificial intelligence to revolutionize the management of urban services in smart cities. The…

Abstract

This research navigates the confluence of data analytics, machine learning, and artificial intelligence to revolutionize the management of urban services in smart cities. The study uses advanced tools to thoroughly scrutinize key performance indicators integral to the functioning of smart cities, thereby enhancing leadership and decision-making strategies. Our work involves applying various machine learning models, such as Logistic Regression, Support Vector Machine, Decision Tree, Naive Bayes, and Artificial Neural Networks (ANN), to the data. Notably, the Support Vector Machine and Bernoulli Naive Bayes models exhibit robust performance, each with a precision score of 70%. In particular, the study underscores the employment of an ANN model on our existing dataset, optimized using the Adam optimizer. Although the model yields an overall accuracy of 61% and a precision score of 58%, implying correct predictions for the positive class 58% of the time, a comprehensive performance assessment using the Area Under the Receiver Operating Characteristic Curve (AUC-ROC) metric was necessary. This evaluation results in a score of 0.475 at a threshold of 0.5, indicating that there is room for model enhancement. These models and their performance metrics serve as a key cog in our data analytics pipeline, providing decision-makers and city leaders with actionable insights that can steer urban service management decisions. Through real-time data availability and intuitive visualization dashboards, these leaders can promptly comprehend the current state of their services, pinpoint areas requiring improvement, and make informed decisions to bolster these services. This research illuminates the potential for data analytics, machine learning, and AI to significantly upgrade urban service management in smart cities, fostering sustainable and livable communities.
Moreover, our findings contribute valuable knowledge to other cities aiming to adopt similar strategies, thus aiding the continued development of smart cities globally.
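An AUC-ROC of 0.475 means the model ranks a randomly chosen positive above a randomly chosen negative slightly less often than chance (0.5). A small sketch of the metric's Mann-Whitney formulation, with toy labels and scores chosen to reproduce that value (the data are illustrative, not the study's):

```python
import numpy as np

def roc_auc(y_true, scores):
    """AUC = probability a random positive is scored above a random negative."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    # Pairwise comparisons; ties count half (the Mann-Whitney formulation).
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy scores for a weak model: 4 positives, 5 negatives.
y = [1, 1, 1, 1, 0, 0, 0, 0, 0]
s = [3, 4, 2, 1, 0, 3, 3, 6, 2]
print(roc_auc(y, s))  # → 0.475
```

Unlike accuracy or precision, this score does not depend on any single decision threshold, which is why it exposed a weakness the 58% precision figure did not.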

Details

Technology and Talent Strategies for Sustainable Smart Cities
Type: Book
ISBN: 978-1-83753-023-6

Keywords

Article
Publication date: 10 February 2023

Huiyong Wang, Ding Yang, Liang Guo and Xiaoming Zhang

Intent detection and slot filling are two important tasks in question comprehension of a question answering system. This study aims to build a joint task model with some…

Abstract

Purpose

Intent detection and slot filling are two important tasks in question comprehension for a question answering system. This study aims to build a joint task model with some generalization ability and to benchmark its performance against the other neural network models mentioned in this paper.

Design/methodology/approach

This study used a deep-learning-based approach for the joint modeling of question intent detection and slot filling. Meanwhile, the internal cell structure of the long short-term memory (LSTM) network was improved. Furthermore, the dataset Computer Science Literature Question (CSLQ) was constructed based on the Science and Technology Knowledge Graph. The datasets Airline Travel Information Systems, Snips (a natural language processing dataset of the consumer intent engine collected by Snips) and CSLQ were used for the empirical analysis. The accuracy of intent detection and F1 score of slot filling, as well as the semantic accuracy of sentences, were compared for several models.
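The improvement targets the internal cell structure of the LSTM. For reference, a standard LSTM cell step can be sketched as follows in NumPy (the weight shapes, stacked-gate layout and 5-token toy sequence are illustrative, not the authors' architecture):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x, h_prev, c_prev, W, b):
    """One step of a standard LSTM cell (the structure the authors modify).

    W: (4*hidden, hidden+input) stacked gate weights; b: (4*hidden,) biases.
    """
    hidden = h_prev.shape[0]
    z = W @ np.concatenate([h_prev, x]) + b
    i = sigmoid(z[:hidden])                # input gate
    f = sigmoid(z[hidden:2 * hidden])      # forget gate
    o = sigmoid(z[2 * hidden:3 * hidden])  # output gate
    g = np.tanh(z[3 * hidden:])            # candidate cell state
    c = f * c_prev + i * g                 # new cell state
    h = o * np.tanh(c)                     # new hidden state
    return h, c

rng = np.random.default_rng(1)
hidden, n_in = 4, 3
W = rng.normal(0, 0.1, (4 * hidden, hidden + n_in))
b = np.zeros(4 * hidden)
h, c = np.zeros(hidden), np.zeros(hidden)
for x in rng.normal(size=(5, n_in)):       # run a 5-token toy sequence
    h, c = lstm_cell(x, h, c, W, b)
print(h.round(3))
```

In a joint model, the hidden state h at each token feeds the slot-filling tagger while a pooled or final h feeds the intent classifier, so both tasks share these cell computations.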

Findings

The results showed that the proposed model outperformed all other benchmark methods, especially for the CSLQ dataset. This proves that the design of this study improved the comprehensive performance and generalization ability of the model to some extent.

Originality/value

This study contributes to the understanding of question sentences in a specific domain. LSTM was improved, and a computer literature domain dataset was constructed herein. This will lay the data and model foundation for the future construction of a computer literature question answering system.

Details

Data Technologies and Applications, vol. 57 no. 5
Type: Research Article
ISSN: 2514-9288

Keywords

Article
Publication date: 3 July 2023

James L. Sullivan, David Novak, Eric Hernandez and Nick Van Den Berg

This paper introduces a novel quality measure, the percent-within-distribution, or PWD, for acceptance and payment in a quality control/quality assurance (QC/QA) performance…

Abstract

Purpose

This paper introduces a novel quality measure, the percent-within-distribution, or PWD, for acceptance and payment in a quality control/quality assurance (QC/QA) performance specification (PS).

Design/methodology/approach

The new quality measure takes any sample size or distribution and uses a Bayesian updating process to re-estimate parameters of a design distribution as sample observations are fed through the algorithm. This methodology can be employed in a wide range of applications, but the authors demonstrate the use of the measure for a QC/QA PS with upper and lower bounds on 28-day compressive strength of in-place concrete for bridge decks.
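As a sketch of the updating step, assume a Normal design distribution for 28-day compressive strength with known observation variance (the numbers are hypothetical, not the paper's calibration); each sample observation fed through the algorithm tightens the posterior on the mean:

```python
import numpy as np

def update_normal_mean(mu0, var0, obs, obs_var):
    """Conjugate update of a Normal prior on the mean, known observation variance."""
    obs = np.asarray(obs, dtype=float)
    n = len(obs)
    post_prec = 1.0 / var0 + n / obs_var       # precisions add
    post_var = 1.0 / post_prec
    post_mu = post_var * (mu0 / var0 + obs.sum() / obs_var)
    return post_mu, post_var

# Hypothetical design distribution for 28-day compressive strength (MPa).
mu, var = 35.0, 4.0
# Feed sample observations through one at a time, as the PWD algorithm
# re-estimates the design-distribution parameters with each new result.
for strength in [33.1, 36.4, 34.8, 32.9, 35.5]:
    mu, var = update_normal_mean(mu, var, [strength], obs_var=9.0)
print(round(mu, 2), round(var, 2))
```

The percent-within-distribution would then be read off the updated distribution against the specification's upper and lower bounds, rather than from the sample alone as PWL does.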

Findings

The authors demonstrate the use of this new quality measure to illustrate how it addresses the shortcomings of the percent-within-limits (PWL), which is the current industry standard quality measure. The authors then use the PWD to develop initial pay factors through simulation regimes. The PWD is shown to function better than the PWL with realistic sample lots simulated to represent a variety of industry responses to a new QC/QA PS.

Originality/value

The analytical contribution of this work is the introduction of the new quality measure. However, the practical and managerial contributions of this work are of equal significance.

Details

International Journal of Quality & Reliability Management, vol. 41 no. 2
Type: Research Article
ISSN: 0265-671X

Keywords

Open Access
Article
Publication date: 28 November 2022

Ruchi Kejriwal, Monika Garg and Gaurav Sarin

Stock market has always been lucrative for various investors. But, because of its speculative nature, it is difficult to predict the price movement. Investors have been using both…


Abstract

Purpose

The stock market has always been lucrative for investors but, because of its speculative nature, its price movements are difficult to predict. Investors have been using both fundamental and technical analysis to predict prices. Fundamental analysis studies the structured data of a company; technical analysis studies price trends. The increasing, easy availability of unstructured data has made it important to study market sentiment as well, since market sentiment has a major impact on prices in the short run. Hence, the purpose is to understand market sentiment in a timely and effective manner.

Design/methodology/approach

The research includes text mining and then creating various models for classification. The accuracy of these models is checked using confusion matrix.
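The confusion matrix cross-tabulates actual against predicted classes, and accuracy is the fraction of counts on its diagonal. A minimal stand-alone sketch for the three sentiment classes (the toy labels are illustrative):

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, labels=("positive", "negative", "neutral")):
    """Counts of (actual, predicted) pairs, the basis of the accuracy check."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(a, p)] for p in labels] for a in labels]

y_true = ["positive", "negative", "neutral", "positive", "negative", "neutral"]
y_pred = ["positive", "neutral", "neutral", "positive", "negative", "positive"]

cm = confusion_matrix(y_true, y_pred)
accuracy = sum(cm[i][i] for i in range(3)) / len(y_true)
for row in cm:
    print(row)
print(f"accuracy = {accuracy:.2f}")  # → accuracy = 0.67
```

The off-diagonal cells show which sentiment classes the model confuses, which a single accuracy figure (such as the 68% reported below) cannot reveal.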

Findings

Out of the six machine learning techniques used to create the classification model, the kernel support vector machine gave the highest accuracy, 68%. This model can now be used to analyse tweets, news and various other unstructured data to predict price movements.

Originality/value

This study will help investors classify a news or a tweet into “positive”, “negative” or “neutral” quickly and determine the stock price trends.

Details

Vilakshan - XIMB Journal of Management, vol. 21 no. 1
Type: Research Article
ISSN: 0973-1954

Keywords

Article
Publication date: 6 October 2023

Vahide Bulut

Feature extraction from 3D datasets is a current problem. Machine learning is an important tool for classification of complex 3D datasets. Machine learning classification…

Abstract

Purpose

Feature extraction from 3D datasets is a current problem. Machine learning is an important tool for the classification of complex 3D datasets, and machine learning classification techniques are widely used in various fields, such as text classification, pattern recognition and medical disease analysis. The aim of this study is to apply the most popular classification and regression methods to determine the best classification and regression method based on the geodesics.

Design/methodology/approach

The feature vector is determined by the unit normal vector and the unit principal vector at each point of the 3D surface along with the point coordinates themselves. Moreover, different examples are compared according to the classification methods in terms of accuracy and the regression algorithms in terms of R-squared value.
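As a sketch of one component of such a feature vector, the unit normal at a surface point can be approximated from three nearby points via a cross product (the unit principal vector, which would come from the surface's curvature directions, is omitted here; the points are illustrative):

```python
import numpy as np

def unit_normal(p0, p1, p2):
    """Unit normal of the plane through three surface points, a common
    discrete stand-in for the smooth surface normal at p0."""
    n = np.cross(p1 - p0, p2 - p0)
    return n / np.linalg.norm(n)

# Three points on the plane z = 0: the normal should be the z axis.
p0 = np.array([0.0, 0.0, 0.0])
p1 = np.array([1.0, 0.0, 0.0])
p2 = np.array([0.0, 1.0, 0.0])
n = unit_normal(p0, p1, p2)

# Feature vector as described: point coordinates plus the unit normal
# (plus, in the paper, the unit principal vector).
feature = np.concatenate([p0, n])
print(n)  # → [0. 0. 1.]
```

Stacking one such vector per surface point yields the dataset the 31 classification and 23 regression methods are then run on.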

Findings

Several surface examples are analyzed for the feature vector using classification (31 methods) and regression (23 methods) machine learning algorithms. In addition, two ensemble methods, XGBoost and LightGBM, are used for classification and regression. The scores for each surface example are also compared.

Originality/value

To the best of the author’s knowledge, this is the first study to analyze datasets based on geodesics using machine learning algorithms for classification and regression.

Details

Engineering Computations, vol. 40 no. 9/10
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 28 June 2022

Samirasadat Samadi and Mohammad Saeed Taslimi

This study aims to review the features and challenges of the flood relief chain, identifies administrative measures during and after the flood occurrence and prioritizes them…

Abstract

Purpose

This study aims to review the features and challenges of the flood relief chain, identify administrative measures during and after a flood occurrence and prioritize them using machine learning (ML) and analytic hierarchy process (AHP) methods. This paper aims to provide a prioritization program, based on flood conditions, that optimizes flood management and improves society’s resilience against flood occurrence.

Design/methodology/approach

The collected database in this paper has been trained by using ML algorithms, including support vector machine (SVM), Naive Bayes (NB) and k-nearest neighbors (kNN), to create a prioritization program. Furthermore, the administrative measures in two phases of during and after the flood are prioritized by using the AHP method and questionnaires completed by experts and relief workers in flood management.
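The AHP step turns the experts' pairwise comparisons into priority weights via the principal eigenvector of the comparison matrix, with a consistency check. A minimal NumPy sketch using a hypothetical 3×3 matrix on Saaty's 1–9 scale (not the study's questionnaire data):

```python
import numpy as np

# Hypothetical pairwise comparisons of three relief measures:
# A[i, j] = how much more important measure i is than measure j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
])

# AHP priority weights: principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = eigvecs[:, k].real
weights = weights / weights.sum()

# Consistency ratio CR = CI / RI, with CI = (lambda_max - n) / (n - 1) and
# RI = 0.58 for n = 3 in Saaty's random-index table; CR < 0.1 is acceptable.
n = A.shape[0]
cr = ((eigvals.real[k] - n) / (n - 1)) / 0.58
print(weights.round(3), round(cr, 4))
```

The ML side (SVM/NB/kNN) then learns to map a flood's occurrence conditions to the appropriate set of such priority weights.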

Findings

Among the ML algorithms, the SVM method was selected, with 91.37% accuracy. The prioritization program provided by the model, which distinguishes it from other existing models, considers five conditions of flood occurrence to prioritize actions (season, population affected, area affected, damage to houses and human lives lost). Therefore, the model presents a specific plan for each flood with different occurrence conditions.

Research limitations/implications

The main limitation is the lack of a comprehensive data set to determine the effect of all flood conditions on the prioritization program and the relief activities that have been done in previous flood disasters.

Originality/value

The originality of this paper is the use of ML methods to prioritize administrative measures during and after the flood and presents a prioritization program based on each flood’s conditions. Therefore, through this program, the authority and society can control the adverse impacts of flood more effectively and help to reduce human and financial losses as much as possible.

Details

International Journal of Disaster Resilience in the Built Environment, vol. 15 no. 1
Type: Research Article
ISSN: 1759-5908

Keywords

Open Access
Article
Publication date: 5 October 2023

Babitha Philip and Hamad AlJassmi

To proactively draw efficient maintenance plans, road agencies should be able to forecast main road distress parameters, such as cracking, rutting, deflection and International…

Abstract

Purpose

To proactively draw efficient maintenance plans, road agencies should be able to forecast main road distress parameters, such as cracking, rutting, deflection and International Roughness Index (IRI). Nonetheless, the behavior of those parameters throughout pavement life cycles is associated with high uncertainty, resulting from various interrelated factors that fluctuate over time. This study aims to propose the use of dynamic Bayesian belief networks for the development of time-series prediction models to probabilistically forecast road distress parameters.

Design/methodology/approach

While the Bayesian belief network (BBN) has the merit of capturing uncertainty associated with variables in a domain, dynamic BBNs, in particular, are deemed ideal for forecasting road distress over time due to their Markovian and invariant transition probability properties. Four dynamic BBN models are developed to represent rutting, deflection, cracking and IRI, using pavement data collected from 32 major road sections in the United Arab Emirates between 2013 and 2019. Those models are based on several factors affecting pavement deterioration, which are classified into three categories: traffic factors, environmental factors and road-specific factors.
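The Markovian, invariant-transition-probability assumption means next year's condition distribution depends only on this year's, through one fixed transition matrix. A toy NumPy sketch with a hypothetical three-state road-condition chain (the states and probabilities are illustrative, not the paper's fitted model):

```python
import numpy as np

# Hypothetical yearly transition matrix over road states good/fair/poor:
# rows are the current state, columns the next-year state.
T = np.array([
    [0.80, 0.15, 0.05],   # good  -> good / fair / poor
    [0.00, 0.70, 0.30],   # fair  -> fair / poor (no repair modeled)
    [0.00, 0.00, 1.00],   # poor stays poor
])

state = np.array([1.0, 0.0, 0.0])      # road starts in "good" condition
for year in range(1, 7):               # propagate the belief over six years
    state = state @ T
    print(year, state.round(3))
```

A dynamic BBN generalizes this by conditioning the transition probabilities on parent nodes (traffic, environmental and road-specific factors) instead of using one global matrix.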

Findings

The four developed performance prediction models achieved an overall precision and reliability rate of over 80%.

Originality/value

The proposed approach provides flexibility to illustrate road conditions under various scenarios, which is beneficial for pavement maintainers in obtaining a realistic representation of expected future road conditions, where maintenance efforts could be prioritized and optimized.

Details

Construction Innovation, vol. 24 no. 1
Type: Research Article
ISSN: 1471-4175

Keywords

Article
Publication date: 13 November 2023

Jamil Jaber, Rami S. Alkhawaldeh and Ibrahim N. Khatatbeh

This study aims to develop a novel approach for predicting default risk in bancassurance, which plays a crucial role in the relationship between interest rates in banks and…

Abstract

Purpose

This study aims to develop a novel approach for predicting default risk in bancassurance, which plays a crucial role in the relationship between interest rates in banks and premium rates in insurance companies. The proposed method aims to improve default risk predictions and assist with client segmentation in the banking system.

Design/methodology/approach

This research introduces the group method of data handling (GMDH) technique and a diversified classifier ensemble based on GMDH (dce-GMDH) for predicting default risk. The data set comprises information from 30,000 credit card clients of a large bank in Taiwan, with the output variable being a dummy variable distinguishing between default risk (0) and non-default risk (1), whereas the input variables comprise 23 distinct features characterizing each customer.
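GMDH builds candidate models from quadratic "partial descriptions" of input pairs and selects among them by an external (validation) criterion. A much-simplified single-layer sketch on synthetic data (the full GMDH, and the dce-GMDH ensemble, iterate this over multiple layers and combine classifiers):

```python
import numpy as np

rng = np.random.default_rng(2)

def quad_features(xi, xj):
    """GMDH partial description: y ≈ a + b·xi + c·xj + d·xi·xj + e·xi² + f·xj²."""
    return np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])

# Toy data: the target truly depends on features 0 and 2 only.
X = rng.normal(size=(200, 4))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.5 * X[:, 0] * X[:, 2]
train, val = slice(0, 150), slice(150, 200)

# One GMDH layer: fit every feature pair on the training split, then select
# by validation error -- the external criterion that makes GMDH self-organizing.
best = None
for i in range(X.shape[1]):
    for j in range(i + 1, X.shape[1]):
        F = quad_features(X[:, i], X[:, j])
        coef, *_ = np.linalg.lstsq(F[train], y[train], rcond=None)
        err = np.mean((F[val] @ coef - y[val]) ** 2)
        if best is None or err < best[0]:
            best = (err, i, j)
print("best pair:", best[1:], "val MSE:", round(best[0], 8))
```

The layer correctly singles out the (0, 2) pair because only that partial description can reproduce the target; deeper layers would feed such surviving outputs back in as new inputs.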

Findings

The results of this study show promising outcomes, highlighting the usefulness of the proposed technique for bancassurance and client segmentation. Remarkably, the dce-GMDH model consistently outperforms the conventional GMDH model, demonstrating its superiority in predicting default risk based on various error criteria.

Originality/value

This study presents a unique approach to predicting default risk in bancassurance by using the GMDH and dce-GMDH neural network models. The proposed method offers a valuable contribution to the field by showcasing improved accuracy and enhanced applicability within the banking sector, offering valuable insights and potential avenues for further exploration.

Details

Competitiveness Review: An International Business Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1059-5422

Keywords

Article
Publication date: 4 April 2022

Shrawan Kumar Trivedi, Amrinder Singh and Somesh Kumar Malhotra

There is a need to predict whether the consumers liked the stay in the hotel rooms or not, and to remove the aspects the customers did not like. Many customers leave a review…

Abstract

Purpose

There is a need to predict whether consumers liked their stay in the hotel rooms or not, and to remove the aspects the customers did not like. Many customers leave a review after staying in a hotel, mostly on the website used to book it. These reviews can be considered valuable data, which can be analyzed to provide better services in hotels. The purpose of this study is to use machine learning techniques to analyze the given data and determine the different sentiment polarities of the consumers.

Design/methodology/approach

Reviews given by hotel customers on the Tripadvisor website, made publicly available on Kaggle, were used. Out of 10,000 reviews in the data, a sample of 3,000 negative-polarity reviews (customers with bad experiences in the hotel) and 3,000 positive-polarity reviews (customers with good experiences in the hotel) was taken to prepare the data set. A two-stage feature selection was applied, which first involved a greedy selection method and then a wrapper method, to generate the 37 most relevant features. An improved stacked decision tree (ISD) classifier is built, which is further compared with state-of-the-art machine learning algorithms. All the tests are done using RStudio.
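In a stacked classifier, the base learners' predictions become the input features of a meta learner. A toy sketch of the idea, with one-feature threshold "stumps" standing in for base classifiers and a linear least-squares meta learner (illustrative only, not the ISD design, which stacks C5.0, random forest and SVM):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy binary review data: 2 numeric features, label 1 = positive experience.
X = rng.normal(size=(300, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
train, test = slice(0, 200), slice(200, 300)

def stump(Xtr, ytr, feat):
    """One-feature threshold classifier standing in for a base learner."""
    thr = np.median(Xtr[:, feat])
    sign = 1.0 if ytr[Xtr[:, feat] > thr].mean() >= 0.5 else -1.0
    return lambda Z: (sign * (Z[:, feat] - thr) > 0).astype(float)

# Stage 1: fit the base learners on the training split.
bases = [stump(X[train], y[train], f) for f in (0, 1)]

# Stage 2: base predictions (plus an intercept) become meta-features for a
# linear least-squares meta learner -- the "stack".
meta_X = np.column_stack([b(X) for b in bases] + [np.ones(len(X))])
w, *_ = np.linalg.lstsq(meta_X[train], y[train], rcond=None)
acc = ((meta_X[test] @ w > 0.5).astype(float) == y[test]).mean()
print("stacked test accuracy:", acc)
```

The stack outperforms either stump alone because the meta learner weighs how reliable each base learner's vote is, which is the same rationale behind combining C5.0, random forest and SVM.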

Findings

The results showed that the new model was satisfactory overall when predicting the nature of the customers’ experience in the hotel, i.e. whether it was positive or negative: after an in-depth study, accuracy was 80.77% for a 50–50 train–test split, 80.74% for a 66–34 split and 80.25% for an 80–20 split.

Research limitations/implications

The implication of this research is to showcase how the polarity of potentially popular reviews can be predicted. From the authors’ perspective, this helps the hotel industry take corrective measures for the betterment of business and promote useful positive reviews. This study also has some limitations: only English reviews are considered, and the study was restricted to data from the Tripadvisor website; however, new data may be generated to test the credibility of the model. Only aspect-based sentiment classification is considered in this study.

Originality/value

A stacking of machine learning techniques has been proposed. At first, state-of-the-art classifiers are tested on the given data; then, the three best-performing classifiers (decision tree C5.0, random forest and support vector machine) are taken to build the stack and create the ISD classifier.
