Search results

1 – 10 of 25
Article
Publication date: 13 November 2023

Jamil Jaber, Rami S. Alkhawaldeh and Ibrahim N. Khatatbeh


Abstract

Purpose

This study aims to develop a novel approach for predicting default risk in bancassurance, which plays a crucial role in the relationship between interest rates in banks and premium rates in insurance companies. The proposed method aims to improve default risk predictions and assist with client segmentation in the banking system.

Design/methodology/approach

This research introduces the group method of data handling (GMDH) technique and a diversified classifier ensemble based on GMDH (dce-GMDH) for predicting default risk. The data set comprises information from 30,000 credit card clients of a large bank in Taiwan, with the output variable being a dummy variable distinguishing between default risk (0) and non-default risk (1), whereas the input variables comprise 23 distinct features characterizing each customer.
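
GMDH networks are built from layers of simple polynomial neurons, each fitted to a pair of input features, with the best neurons kept according to error on a held-out set. A minimal sketch of a single GMDH layer on synthetic stand-in data (not the Taiwanese credit card data set, and not the authors' dce-GMDH implementation) might look like:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Toy stand-in for the credit data: 4 features, binary default label.
X = rng.normal(size=(300, 4))
y = (X[:, 0] * X[:, 1] + 0.5 * X[:, 2] > 0).astype(float)

X_tr, X_va = X[:200], X[200:]
y_tr, y_va = y[:200], y[200:]

def fit_neuron(xi, xj, t):
    # Quadratic "partial description": a + b*xi + c*xj + d*xi*xj + e*xi^2 + f*xj^2
    A = np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])
    coef, *_ = np.linalg.lstsq(A, t, rcond=None)
    return coef

def predict_neuron(coef, xi, xj):
    A = np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])
    return A @ coef

# One GMDH layer: fit a neuron per feature pair, rank by external
# (validation-set) mean squared error and keep the best pairs.
scores = []
for i, j in combinations(range(X.shape[1]), 2):
    coef = fit_neuron(X_tr[:, i], X_tr[:, j], y_tr)
    pred = predict_neuron(coef, X_va[:, i], X_va[:, j])
    scores.append(((i, j), np.mean((pred - y_va) ** 2)))

scores.sort(key=lambda s: s[1])
best_pair, best_mse = scores[0]
```

A full GMDH stacks such layers, feeding the surviving neurons' outputs forward; the dce-GMDH variant additionally ensembles diversified classifiers.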

Findings

The results of this study show promising outcomes, highlighting the usefulness of the proposed technique for bancassurance and client segmentation. Remarkably, the dce-GMDH model consistently outperforms the conventional GMDH model, demonstrating its superiority in predicting default risk based on various error criteria.

Originality/value

This study presents a unique approach to predicting default risk in bancassurance by using the GMDH and dce-GMDH neural network models. The proposed method offers a valuable contribution to the field by showcasing improved accuracy and enhanced applicability within the banking sector, offering valuable insights and potential avenues for further exploration.

Details

Competitiveness Review: An International Business Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1059-5422


Article
Publication date: 24 April 2024

S. Thavasi and T. Revathi


Abstract

Purpose

With so many placement opportunities available to students in their final or pre-final year, they begin to feel the strain of the placement season. Students need to know where they stand and how to increase their chances of being hired. Hence, a system to guide their careers is one of the needs of the day.

Design/methodology/approach

The job role prediction system uses machine learning techniques such as naïve Bayes, k-nearest neighbors, support vector machines (SVM) and artificial neural networks (ANN) to suggest a student’s job role based on their academic performance and course outcomes (CO); of these, ANN performs best. The system uses the Mepco Schlenk Engineering College curriculum, placement and student assessment data sets, in which the CO and syllabus are used to determine the skills a student has gained from their courses. The skills necessary for a job position are then extracted from job advertisements, and the system compares the student’s skills with those required for the job role based on the placement prediction result.
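
The placement-prediction step could be sketched with scikit-learn's MLPClassifier; the data here are a synthetic stand-in for the Mepco Schlenk assessment data, and the single-hidden-layer architecture is an assumption, not the authors' network:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-in: 10 course-outcome attainment scores per student,
# label 1 = placed, 0 = not placed (hypothetical rule for illustration).
X = rng.uniform(0, 1, size=(400, 10))
y = (X.mean(axis=1) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Feed-forward ANN: one hidden layer of 32 units.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

The same pattern, with the classifier swapped out, covers the naïve Bayes, k-NN and SVM baselines the paper compares against.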

Findings

The system predicts placement possibilities with an accuracy of 93.33% and a precision of 98%. The skill analysis also informs students of the strengths and weaknesses of their skill sets.

Research limitations/implications

For the skill-set analysis, only direct assessment of the students is considered; indirect assessment could be incorporated in future work.

Practical implications

The model is adaptable and customizable to any type of academic institution or university.

Social implications

The research will be useful to the student community in bridging the gap between academic and industrial needs.

Originality/value

Several works address career guidance for students. However, these methodologies are designed using only the curriculum and students’ basic personal information. The proposed system also considers students’ academic performance through direct assessment, along with their curriculum and basic personal information.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X


Article
Publication date: 13 February 2024

Aleena Swetapadma, Tishya Manna and Maryam Samami


Abstract

Purpose

A novel method has been proposed to reduce the false alarm rate for arrhythmia patients regarding life-threatening conditions in the intensive care unit. For this purpose, the arterial blood pressure, photoplethysmogram (PLETH), electrocardiogram (ECG) and respiratory (RESP) signals are considered as input signals.

Design/methodology/approach

Three machine learning approaches, a feed-forward artificial neural network (ANN), an ensemble learning method and a k-nearest neighbors search, are used to detect false alarms. The proposed method has been implemented using Arduino and MATLAB/Simulink on real-time ICU arrhythmia patient monitoring data.
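
The k-nearest neighbors component of such a false-alarm detector can be sketched in a few lines; the two summary features and cluster locations below are hypothetical stand-ins for quantities derived from the ECG/ABP/PLETH/RESP signals:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: two summary features per alarm event,
# label 0 = false alarm, 1 = true (life-threatening) alarm.
X_train = np.vstack([rng.normal(0, 1, (50, 2)),   # false-alarm cluster
                     rng.normal(3, 1, (50, 2))])  # true-alarm cluster
y_train = np.array([0] * 50 + [1] * 50)

def knn_predict(x, k=5):
    # Plain k-nearest-neighbors majority vote on Euclidean distance.
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    return int(nearest.sum() * 2 > k)

pred_false = knn_predict(np.array([0.1, -0.2]))  # near the false-alarm cluster
pred_true = knn_predict(np.array([3.2, 2.9]))    # near the true-alarm cluster
```

In the paper this vote would be one of three classifiers (alongside the ANN and the ensemble) whose outputs decide whether an ICU alarm is suppressed.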

Findings

The proposed method detects false alarms with an accuracy of 99.4 per cent during asystole, 100 per cent during ventricular flutter, 98.5 per cent during ventricular tachycardia, 99.6 per cent during bradycardia and 100 per cent during tachycardia. The proposed framework is adaptive to many scenarios, easy to implement, computationally efficient, highly accurate and robust, with no overfitting issues.

Originality/value

As ECG signals consist of the PQRST wave, any deviation from the normal pattern may signify an alarming condition. These deviations can be used as input to classifiers for the detection of false alarms; hence, there is no need for other feature extraction techniques. The feed-forward ANN with the Levenberg–Marquardt algorithm has shown a higher rate of convergence than other neural network algorithms, which helps provide better accuracy with no overfitting.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288


Article
Publication date: 9 January 2024

Ning Chen, Zhenyu Zhang and An Chen


Abstract

Purpose

Consequence prediction is an emerging topic in safety management concerning the severity outcome of accidents. In practical applications, it is usually implemented through supervised learning methods; however, the evaluation of classification results remains a challenge. The previous studies mostly adopted simplex evaluation based on empirical and quantitative assessment strategies. This paper aims to shed new light on the comprehensive evaluation and comparison of diverse classification methods through visualization, clustering and ranking techniques.

Design/methodology/approach

An empirical study is conducted using 9 state-of-the-art classification methods on a real-world data set of 653 construction accidents in China for predicting the consequence with respect to 39 carefully featured factors and accident type. The proposed comprehensive evaluation enriches the interpretation of classification results from different perspectives. Furthermore, the critical factors leading to severe construction accidents are identified by analyzing the coefficients of a logistic regression model.
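
Extracting critical factors from logistic regression coefficients, as described above, can be illustrated on synthetic data; the five binary factors and their effect sizes here are hypothetical, not the paper's 39 featured factors:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Synthetic stand-in for the accident data: 5 binary factor indicators,
# with severity driven mainly by factors 0 and 3 (assumed for illustration).
X = rng.integers(0, 2, size=(600, 5)).astype(float)
logit = 2.0 * X[:, 0] + 1.5 * X[:, 3] - 1.5
y = (rng.uniform(size=600) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)

# Rank factors by coefficient magnitude, mirroring the paper's use of
# logistic regression coefficients to identify critical factors.
ranking = np.argsort(-np.abs(model.coef_[0]))
```

The top-ranked indices recover the factors that actually drive severity in the simulated data, which is the logic behind reading factor criticality off the fitted coefficients.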

Findings

This paper identifies the critical factors that significantly influence the consequence of construction accidents, which include accident type (particularly collapse), improper accident reporting and handling (E21), inadequate supervision engineers (O41), no special safety department (O11), delayed or low-quality drawings (T11), unqualified contractor (C21), schedule pressure (C11), multi-level subcontracting (C22), lacking safety examination (S22), improper operation of mechanical equipment (R11) and improper construction procedure arrangement (T21). The prediction models and findings on critical factors help design targeted safety intervention measures and enhance the expertise of safety professionals in the construction industry.

Research limitations/implications

The empirical study using some well-known classification methods for forecasting the consequences of construction accidents provides some evidence for the comprehensive evaluation of multiple classifiers. These techniques can be used jointly with other evaluation approaches for a comprehensive understanding of the classification algorithms. Despite the limitation of specific methods used in the study, the presented methodology can be configured with other classification methods and performance metrics and even applied to other decision-making problems such as clustering.

Originality/value

This study sheds new light on the comprehensive comparison and evaluation of classification results through visualization, clustering and ranking techniques using an empirical study of consequence prediction of construction accidents. The relevance of construction accident type is discussed with the severity of accidents. The critical factors influencing the accident consequence are identified for the sake of taking prevention measures for risk reduction. The proposed method can be applied to other decision-making tasks where the evaluation is involved as an important component.

Details

Construction Innovation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1471-4175


Article
Publication date: 9 April 2024

Narsymbat Salimgereyev, Bulat Mukhamediyev and Aijaz A. Shaikh


Abstract

Purpose

This study developed new measures of the routine and non-routine task contents of managerial, professional, technical, and clerical occupations from a workload perspective. Here, we present a comparative analysis of the workload structures of state and industrial sector employees.

Design/methodology/approach

Our method involves detailed descriptions of work processes and an element-wise time study. We collected and analysed data to obtain a workload structure that falls within three conceptual task categories: (i) non-routine analytic tasks, (ii) non-routine interactive tasks and (iii) routine cognitive tasks. A total of 2,312 state and industrial sector employees in Kazakhstan participated in the study. The data were collected using a proprietary web application that resembles a timesheet.
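
At its core, the element-wise time study reduces to aggregating timed task elements into the three conceptual categories and expressing each as a share of total working time. A minimal sketch with hypothetical timesheet entries (the category names are the paper's; the entries and minutes are invented):

```python
from collections import defaultdict

# Hypothetical timesheet entries: (task_category, minutes), mimicking
# the web-application timesheet used in the study.
entries = [
    ("non-routine analytic", 90),
    ("non-routine interactive", 45),
    ("routine cognitive", 180),
    ("routine cognitive", 60),
    ("non-routine analytic", 30),
]

# Sum minutes per conceptual task category.
totals = defaultdict(int)
for category, minutes in entries:
    totals[category] += minutes

# Workload structure: each category's share of total recorded time.
total = sum(totals.values())
shares = {c: round(m / total, 3) for c, m in totals.items()}
```

The routine-cognitive share computed this way is the quantity the study compares across job levels, sectors and unit types.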

Findings

The study results are consistent with the general trend reported by previous studies: the higher the job level, the lower the occupation’s routine task content. In addition, the routine cognitive task contents of managerial, professional, technical and clerical occupations in the industrial sector are higher than those in local governments. The work of women is also more routine than that of men. Finally, the routine cognitive task contents of occupations in administrative units are higher than those of occupations in substantive units.

Originality/value

Our study sought to address the challenges of using the task-based approach associated with measuring tasks by introducing a new measurement framework. The main advantage of our task measures is a direct approach to assessing workloads consisting of routine tasks, which allows for an accurate estimation of potential staff reductions due to the automation of work processes.

Details

International Journal of Productivity and Performance Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1741-0401


Open Access
Article
Publication date: 27 February 2024

Oscar F. Bustinza, Luis M. Molina Fernandez and Marlene Mendoza Macías


Abstract

Purpose

Machine learning (ML) analytical tools are increasingly being considered as an alternative quantitative methodology in management research. This paper proposes a new approach for uncovering the antecedents behind product and product–service innovation (PSI).

Design/methodology/approach

The ML approach is novel in the field of innovation antecedents at the country level. A sample from the Ecuadorian National Survey on Technology and Innovation, consisting of more than 6,000 firms, is used to rank the antecedents of innovation.
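
One common way ML models rank antecedents is via feature importances, which naturally capture interactions that regression tables miss. A sketch using a random forest on synthetic stand-in survey data (the specific model and the antecedents that matter here are assumptions, not the authors' choices):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

# Synthetic stand-in for survey responses: 6 candidate antecedents,
# with the innovation outcome driven by antecedents 1 and 4 (hypothetical).
X = rng.uniform(size=(1000, 6))
y = ((X[:, 1] + X[:, 4]) > 1.0).astype(int)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Rank antecedents by importance, as an ML alternative to coefficient tables.
order = np.argsort(-rf.feature_importances_)
```

Because tree ensembles score each antecedent in the context of all the others, the ranking reflects the full set of antecedent interactions, which is the methodological point the paper makes.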

Findings

The analysis reveals that the antecedents of product and PSI are distinct, yet rooted in the principles of open innovation and competitive priorities.

Research limitations/implications

The analysis is based on a sample of Ecuadorian firms, with the objective of showing how ML techniques are suitable for testing the antecedents of innovation in any other context.

Originality/value

The novel ML approach, in contrast to traditional quantitative analysis of the topic, can consider the full set of antecedent interactions to each of the innovations analyzed.

Details

Journal of Enterprise Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1741-0398


Article
Publication date: 4 January 2024

Zicheng Zhang


Abstract

Purpose

Advanced big data analysis and machine learning methods are used together to unleash the value of the data generated by government hotlines and to help devise intelligent applications, including automated process management, standards construction and more accurate order dispatching, so as to build high-quality government service platforms as data-driven methods become more widespread.

Design/methodology/approach

In this study, given the influence of the record specifications of the work-order texts generated by the government hotline, machine learning tools are implemented and compared to optimize the classification and dispatching of work orders, through exploratory studies of the hotline work-order text that include linguistic analysis for text feature processing, new-word discovery, text clustering and text classification.
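
The classification-for-dispatching step can be sketched with a simple TF-IDF pipeline; the English work-order snippets and department labels below are hypothetical, and a linear classifier stands in for the paper's LSTM model:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical hotline work-order snippets and their dispatch departments.
orders = [
    "street light broken on main road", "pothole repair needed on highway",
    "noise complaint from construction site", "late-night noise from bar",
    "water supply interrupted in district", "burst water pipe flooding street",
]
departments = ["roads", "roads", "noise", "noise", "water", "water"]

# TF-IDF features + linear classifier: text goes in, department comes out.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(orders, departments)

pred = clf.predict(["water supply problem in the district"])[0]
```

In production, the standardized writing specifications the paper advocates would make these text features more consistent and the dispatch prediction more reliable.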

Findings

The complexity of the work-order content is reduced by applying more standardized writing specifications that combine textual-grammar numerical features. As a result, the order-dispatch success prediction accuracy reaches 89.6 per cent with the LSTM model.

Originality/value

The proposed method can help improve the current dispatching processes run by the government hotline, better guide staff to standardize the writing format of work orders, improve the accuracy of order dispatching and provide innovative support to the current mechanism.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288


Article
Publication date: 21 February 2024

Nehal Elshaboury, Tarek Zayed and Eslam Mohammed Abdelkader


Abstract

Purpose

Water pipes degrade over time owing to a variety of pipe-related, soil-related, operational and environmental factors. Hence, municipalities need to implement effective maintenance and rehabilitation strategies for water pipes based on reliable deterioration models and cost-effective inspection programs. In light of the foregoing, the main objective of this research is to develop condition assessment and deterioration prediction models for saltwater pipes in Hong Kong.

Design/methodology/approach

As a prerequisite to the development of condition assessment models, the spherical fuzzy analytic hierarchy process (SFAHP) is harnessed to analyze the relative importance weights of deterioration factors. Afterward, the relative importance weights of deterioration factors, coupled with their effective values, are leveraged using the measurement of alternatives and ranking according to the compromise solution (MARCOS) algorithm to analyze the performance condition of water pipes. A condition rating system is then designed based on the generalized entropy-based probabilistic fuzzy C-means (GEPFCM) algorithm. A set of fourth-order multiple regression functions is constructed to capture the degradation trends in the condition of pipelines over time, covering their disparate characteristics.
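
The MARCOS step ranks alternatives by their utility relative to appended ideal and anti-ideal solutions. A simplified numeric sketch with hypothetical pipe scores and weights (not the paper's SFAHP-derived weights, and omitting MARCOS's cost-criteria handling and final utility function):

```python
import numpy as np

# Hypothetical decision matrix: 4 pipes scored on 3 deterioration factors
# (higher = worse condition), with assumed AHP-style importance weights.
X = np.array([[0.7, 0.4, 0.9],
              [0.2, 0.8, 0.3],
              [0.5, 0.5, 0.5],
              [0.9, 0.9, 0.8]])
w = np.array([0.5, 0.3, 0.2])

# Simplified MARCOS: extend the matrix with anti-ideal (min) and ideal (max)
# alternatives, normalize, and compute utility relative to the ideal.
ideal, anti = X.max(axis=0), X.min(axis=0)
ext = np.vstack([anti, X, ideal])
norm = ext / ext.max(axis=0)          # benefit-type normalization
s = (norm * w).sum(axis=1)            # weighted sums
k_plus = s[1:-1] / s[-1]              # utility degree vs ideal solution

worst_pipe = int(np.argmax(k_plus))   # highest deterioration score
```

The resulting utility degrees are what the paper's condition rating system (via GEPFCM clustering) would then bin into condition grades.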

Findings

Analytical results demonstrated that the top five influential deterioration factors comprise age, material, traffic, soil corrosivity and material. In addition, the developed deterioration models achieved a correlation coefficient, mean absolute error and root mean squared error of 0.8, 1.33 and 1.39, respectively.

Originality/value

It can be argued that generated deterioration models can assist municipalities in formulating accurate and cost-effective maintenance, repair and rehabilitation programs.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988


Article
Publication date: 26 February 2024

Chong Wu, Xiaofang Chen and Yongjie Jiang


Abstract

Purpose

While the Chinese securities market is booming, the phenomenon of listed companies falling into financial distress is also emerging, which affects the operation and development of enterprises and also jeopardizes the interests of investors. Therefore, it is important to understand how to accurately and reasonably predict the financial distress of enterprises.

Design/methodology/approach

In the present study, ensemble feature selection (EFS) and improved stacking were used for financial distress prediction (FDP). Mutual information, analysis of variance (ANOVA), random forest (RF), genetic algorithms and recursive feature elimination (RFE) were chosen for EFS to select features. Since information may be lost when the base learners’ results are fed directly into the meta-learner, the features with high importance were fed into the meta-learner as well. A screening layer was added to select the meta-learner with better performance. Finally, Optuna was used for hyperparameter tuning of the learners.
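
The idea of passing informative raw features to the meta-learner alongside the base learners' predictions resembles scikit-learn's `passthrough` option for stacking. A sketch on synthetic stand-in data (the base learners, meta-learner and data here are assumptions, not the paper's exact configuration):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the financial-ratio features of listed companies.
X, y = make_classification(n_samples=500, n_features=10, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stacking: base learners' out-of-fold predictions feed a meta-learner.
# passthrough=True also feeds the raw features to the meta-learner, echoing
# the paper's idea of forwarding high-importance features upward.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("knn", KNeighborsClassifier())],
    final_estimator=LogisticRegression(max_iter=1000),
    passthrough=True,
)
acc = stack.fit(X_tr, y_tr).score(X_te, y_te)
```

The paper's improvement goes further by screening candidate meta-learners and forwarding only the high-importance features rather than all of them.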

Findings

An empirical study was conducted with a sample of A-share listed companies in China. The F1-score of the model constructed using the features screened by EFS reached 84.55%, representing an improvement of 4.37% compared to the original features. To verify the effectiveness of improved stacking, benchmark model comparison experiments were conducted. Compared to the original stacking model, the accuracy of the improved stacking model was improved by 0.44%, and the F1-score was improved by 0.51%. In addition, the improved stacking model had the highest area under the curve (AUC) value (0.905) among all the compared models.

Originality/value

Compared to previous models, the proposed FDP model has better performance, thus bridging the research gap of feature selection. The present study provides new ideas for stacking improvement research and a reference for subsequent research in this field.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X


Article
Publication date: 21 November 2023

Armin Mahmoodi, Leila Hashemi and Milad Jasemi


Abstract

Purpose

In this study, the central objective is to foresee stock market signals with the use of a proper structure to achieve the highest accuracy possible. For this purpose, three hybrid models have been developed for the stock markets, combining support vector machines (SVM) with the meta-heuristic algorithms of particle swarm optimization (PSO), imperialist competition algorithm (ICA) and genetic algorithm (GA). All the analyses are technical and are based on the Japanese candlestick model.

Design/methodology/approach

Based on the results achieved, the most suitable algorithm is chosen to anticipate sell and buy signals. Moreover, the authors have compared the results of the model validations in this study with the basic models of three articles from past years. In the first model, SVM is combined with PSO, which is used as a classification agent to search the problem-solving space precisely and quickly. In the second model, SVM and ICA are applied to stock market timing, with ICA used as an optimization agent for the SVM parameters. In the third model, SVM and GA are combined, with GA acting as an optimizer and feature selection agent.
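
Using PSO to tune SVM hyperparameters, as in the first hybrid model, can be sketched minimally; the candlestick-derived features are replaced by synthetic data, and the swarm settings (particle count, inertia, acceleration coefficients) are illustrative, not the authors' values:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for candlestick-derived features and buy/sell labels.
X, y = make_classification(n_samples=200, n_features=8, random_state=1)

def fitness(pos):
    # A particle's position encodes log10(C) and log10(gamma);
    # fitness = cross-validated accuracy of the resulting SVM.
    C, gamma = 10 ** pos[0], 10 ** pos[1]
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

# Minimal particle swarm over the SVM hyperparameter space.
n, iters = 6, 5
pos = rng.uniform(-2, 2, size=(n, 2))
vel = np.zeros((n, 2))
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmax(pbest_f)]

for _ in range(iters):
    r1, r2 = rng.uniform(size=(n, 2)), rng.uniform(size=(n, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    f = np.array([fitness(p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmax(pbest_f)]

best_acc = pbest_f.max()
```

Swapping the velocity update for ICA's assimilation step or GA's crossover/mutation yields the other two hybrids the study compares.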

Findings

As per the results, all the new models can predict accurately for only six days; however, comparing the confusion matrix results shows that the SVM-GA and SVM-ICA models correctly predicted more sell signals, while the SVM-PSO model correctly predicted more buy signals. Overall, SVM-ICA showed better performance than the other models in terms of model execution.

Research limitations/implications

In this study, stock market data for the years 2013–2021 were analyzed; the long timeframe makes the input data analysis challenging, as the data must be moderated with respect to the changing conditions under which they were generated.

Originality/value

In this study, two approaches, raw-based and signal-based, have been developed within a candlestick model, in which the hit rate is determined by the percentage of correct evaluations of the stock market over a 16-day period.

Details

EuroMed Journal of Business, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1450-2194

