Search results

1 – 10 of 29
Article
Publication date: 24 April 2024

S. Thavasi and T. Revathi

Abstract

Purpose

With so many placement opportunities available to students in their final or pre-final year, they begin to feel the strain of the placement season. Students need to be aware of where they stand and how to improve their chances of being hired. Hence, a system to guide their careers is one of the needs of the day.

Design/methodology/approach

The job role prediction system uses machine learning techniques such as Naïve Bayes, K-nearest neighbor, support vector machines (SVM) and artificial neural networks (ANN) to suggest a student’s job role based on their academic performance and course outcomes (CO); among these, ANN performs best. The system uses the Mepco Schlenk Engineering College curriculum, placement and student assessment data sets, in which the CO and syllabus are used to determine the skills the student has gained from their courses. The skills necessary for a job position are then extracted from job advertisements. The system compares the student’s skills with the skills required for the job role based on the placement prediction result.
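
A minimal sketch of this kind of classifier comparison, using scikit-learn on synthetic stand-in data: the feature semantics (CO attainment, marks), the placement label and the ANN layout are assumptions, not the paper’s actual Mepco Schlenk datasets or architecture.

```python
# Compare the four classifier families named in the abstract on synthetic
# "student" data; accuracy and precision mirror the reported metrics.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, precision_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Stand-in for student records: course-outcome attainment and marks as
# features, placement outcome (hired / not hired) as the binary target.
X, y = make_classification(n_samples=500, n_features=20, n_informative=8,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=42)

models = {
    "Naive Bayes": GaussianNB(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "ANN": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000,
                         random_state=42),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: accuracy={accuracy_score(y_te, pred):.4f}, "
          f"precision={precision_score(y_te, pred):.4f}")
```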

Findings

The system predicts placement possibilities with an accuracy of 93.33% and a precision of 98%. The skill analysis also informs students of the strengths and weaknesses of their skill set.

Research limitations/implications

For skill-set analysis, only the direct assessment of students is considered. Indirect assessment should also be considered in future work.

Practical implications

The model is adaptable, flexible and customizable to any type of academic institute or university.

Social implications

The research will be very useful to the student community in bridging the gap between academic and industrial needs.

Originality/value

Several works address career guidance for students. However, these career guidance methodologies are designed using only the curriculum and students’ basic personal information. The proposed system also considers students’ academic performance through direct assessment, along with their curriculum and basic personal information.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 17 April 2024

Jahanzaib Alvi and Imtiaz Arif

Abstract

Purpose

The crux of this paper is to unveil efficient features and practical tools that can predict credit default.

Design/methodology/approach

Annual data of non-financial listed companies from 2000 to 2020 were taken, along with 71 financial ratios. The dataset was bifurcated into three panels under three default assumptions. Logistic regression (LR) and K-nearest neighbor (KNN) binary classification algorithms were used to estimate credit default.
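
As a rough illustration of the LR-versus-KNN setup, the sketch below trains both classifiers on synthetic financial ratios; the ratio names, the default label and the test split are assumptions, not the paper’s 71-ratio panels.

```python
# Hedged sketch: LR vs. KNN binary classification of credit default on
# invented financial-ratio data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
ratios = pd.DataFrame(rng.normal(size=(1000, 5)),
                      columns=["current_ratio", "debt_to_equity", "roa",
                               "interest_coverage", "asset_turnover"])
# Hypothetical default label loosely driven by leverage and profitability.
default = (ratios["debt_to_equity"] - ratios["roa"]
           + rng.normal(scale=0.5, size=1000) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(ratios, default,
                                          test_size=0.25, random_state=0)

# Scaling matters for KNN, whose distance metric mixes the ratio scales.
scaler = StandardScaler().fit(X_tr)
X_tr_s, X_te_s = scaler.transform(X_tr), scaler.transform(X_te)

for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("KNN", KNeighborsClassifier(n_neighbors=7))]:
    clf.fit(X_tr_s, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te_s)))
```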

Findings

The study’s findings revealed that the features used in Model 3 (Case 3) were comparatively the most efficient. The results also showed that KNN achieved higher accuracy than LR, demonstrating the superiority of KNN over LR.

Research limitations/implications

Using only two classifiers limits the comprehensiveness of the comparison, and the research was based only on financial data, which leaves sizeable room for including non-financial parameters in default estimation. Both limitations suggest directions for future research in this domain.

Originality/value

This study introduces efficient features and tools for credit default prediction using financial data, demonstrating KNN’s superior accuracy over LR and suggesting future research directions.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 20 February 2024

Saba Sareminia, Zahra Ghayoumian and Fatemeh Haghighat

Abstract

Purpose

The textile industry holds immense significance in the economy of any nation, particularly in the production of synthetic yarn and fabrics. Consequently, acquiring high-quality products at reduced cost has become a significant concern for countries. The primary objective of this research is to leverage data mining and data intelligence techniques to enhance and refine the production process of texturized yarn by developing an intelligent operating guide that enables adjustment of the production process parameters based on the specifications of the raw materials.

Design/methodology/approach

This research undertook a systematic literature review to explore the factors that influence yarn quality. Data mining techniques, including deep learning, K-nearest neighbor (KNN), decision tree, Naïve Bayes, support vector machine and VOTE, were employed to identify the most crucial factors. Subsequently, an executive, dynamic guide was developed using data intelligence tools such as Power BI (Business Intelligence). The proposed model was then applied to the production process of a textile company in Iran from 2020 to 2021.
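
A hedged sketch of the factor-screening idea: tree-based importances rank candidate factors, and a soft-voting ensemble (a stand-in for the VOTE technique) combines several of the classifiers named above. The feature names echo the parameters listed in the findings; the data and the quality label are synthetic assumptions.

```python
# Rank candidate factors by decision-tree importance, then combine
# classifiers in a soft vote. All data here is invented for illustration.
import numpy as np
import pandas as pd
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = pd.DataFrame(rng.normal(size=(600, 4)),
                 columns=["draw_ratio", "dy_ratio", "primary_temp",
                          "raw_material_fineness"])
# Hypothetical quality label driven mostly by process parameters,
# mirroring the paper's finding that process dominates raw material.
y = ((0.8 * X["draw_ratio"] + 0.6 * X["primary_temp"]
      + 0.1 * X["raw_material_fineness"]) > 0).astype(int)

tree = DecisionTreeClassifier(max_depth=4, random_state=1).fit(X, y)
print("Importance ranking:",
      dict(zip(X.columns, tree.feature_importances_.round(3))))

vote = VotingClassifier([("nb", GaussianNB()),
                         ("knn", KNeighborsClassifier()),
                         ("svm", SVC(probability=True))], voting="soft")
print("VOTE accuracy:", vote.fit(X, y).score(X, y))
```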

Findings

The results of this research highlight that the production process parameters exert a more significant influence on texturized yarn quality than the characteristics of the raw materials. The executive production guide was designed by selecting the optimal combination of production process parameters, namely the draw ratio, D/Y ratio and primary temperature, and incorporating limiting indexes derived from the raw material characteristics to predict tenacity and elongation.

Originality/value

This paper contributes by introducing a novel method for creating a dynamic guide. An intelligent and dynamic guide for tenacity and elongation in texturized yarn production was proposed, boasting an approximate accuracy rate of 80%. This developed guide is dynamic and seamlessly integrated with the production database. It undergoes regular updates every three months, incorporating the selected features of the process and raw materials, their respective thresholds, and the predicted levels of elongation and tenacity.

Details

International Journal of Clothing Science and Technology, vol. 36 no. 2
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 26 September 2023

Mohammed Ayoub Ledhem and Warda Moussaoui

Abstract

Purpose

This paper aims to apply several data mining techniques for predicting the daily precision improvement of Jakarta Islamic Index (JKII) prices based on big data of symmetric volatility in Indonesia’s Islamic stock market.

Design/methodology/approach

This research uses big data mining techniques to predict the daily precision improvement of JKII prices by applying AdaBoost, K-nearest neighbor, random forest and artificial neural network techniques. Big data with symmetric volatility serves as the input to the prediction model, whereas the closing prices of the JKII are the target outputs of the daily precision improvement. To choose the optimal prediction performance according to the criterion of lowest prediction error, this research uses four metrics: mean absolute error, mean squared error, root mean squared error and R-squared.
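
A sketch of the model comparison under the four stated error metrics, using scikit-learn regressors on synthetic volatility-style inputs; the actual JKII data and volatility features are not reproduced here.

```python
# Compare the four regressors named in the abstract on invented data,
# reporting MAE, MSE, RMSE and R-squared as in the paper's evaluation.
import numpy as np
from sklearn.ensemble import AdaBoostRegressor, RandomForestRegressor
from sklearn.metrics import (mean_absolute_error, mean_squared_error,
                             r2_score)
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
X = rng.normal(size=(800, 3))            # stand-in volatility features
y = X @ np.array([0.5, -0.3, 0.2]) + rng.normal(scale=0.1, size=800)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=7)

models = {
    "AdaBoost": AdaBoostRegressor(random_state=7),
    "KNN": KNeighborsRegressor(),
    "Random forest": RandomForestRegressor(random_state=7),
    "ANN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                        random_state=7),
}
for name, m in models.items():
    pred = m.fit(X_tr, y_tr).predict(X_te)
    mse = mean_squared_error(y_te, pred)
    print(f"{name}: MAE={mean_absolute_error(y_te, pred):.4f} "
          f"MSE={mse:.4f} RMSE={np.sqrt(mse):.4f} "
          f"R2={r2_score(y_te, pred):.4f}")
```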

Findings

The experimental results show that the optimal technique for predicting the daily precision improvement of JKII prices in Indonesia’s Islamic stock market is AdaBoost, which generates the best prediction performance with the lowest prediction errors and provides the optimum knowledge from the big data of symmetric volatility. In addition, the random forest technique is another robust technique for predicting the daily precision improvement of JKII prices, as it delivers values close to the optimal performance of AdaBoost.

Practical implications

This research fills a gap in the literature, namely the absence of big data mining techniques in the prediction of Islamic stock markets, by delivering new operational techniques for predicting daily stock precision improvement. It also helps investors manage optimal portfolios and decrease the risk of trading in global Islamic stock markets based on big data mining of symmetric volatility.

Originality/value

This research is a pioneer in using big data mining of symmetric volatility in the prediction of an Islamic stock market index.

Details

Journal of Modelling in Management, vol. 19 no. 3
Type: Research Article
ISSN: 1746-5664

Article
Publication date: 24 March 2022

Elavaar Kuzhali S. and Pushpa M.K.

Abstract

Purpose

COVID-19 has occurred in more than 150 countries and has had a huge impact on the health of many people. COVID-19 needs to be detected at an early stage, and infected patients require special attention. The fastest way to detect COVID-19-infected patients is through radiology and radiography images. A few early studies describe the particular abnormalities of infected patients in chest radiograms. Even though some challenges arise in identifying traces of viral infection in X-ray images, a convolutional neural network (CNN) can determine the patterns in the data that distinguish normal from infected X-rays, which increases the detection rate. Therefore, the researchers focus on developing a deep learning-based detection model.

Design/methodology/approach

The main intention of this proposal is to develop enhanced lung segmentation and classification for diagnosing COVID-19. The main processes of the proposed model are image pre-processing, lung segmentation and deep classification. Initially, image enhancement is performed using contrast enhancement and filtering approaches. Once the image is pre-processed, optimal lung segmentation is done by the adaptive fuzzy-based region growing (AFRG) technique, in which the constant function for fusion is optimized by the modified deer hunting optimization algorithm (M-DHOA). Further, a well-performing deep learning algorithm termed adaptive CNN (A-CNN) is adopted to perform the classification, in which the hidden neurons are tuned by the proposed DHOA to enhance detection accuracy. The simulation results illustrate that the proposed model has strong potential to improve COVID-19 testing methods on publicly available data sets.
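
The AFRG segmentation and DHOA tuning are bespoke to this paper; as a plain reference point, here is classic seeded region growing on a toy grayscale image, the baseline technique that AFRG adapts.

```python
# Seeded region growing: starting from a seed pixel, absorb 4-neighbours
# whose intensity stays within a tolerance of the running region mean.
from collections import deque

import numpy as np

def region_grow(image, seed, tol=0.1):
    """Return a boolean mask of the region grown from `seed`."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    total, count = float(image[seed]), 1
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(image[nr, nc] - total / count) <= tol:
                    mask[nr, nc] = True
                    total += float(image[nr, nc])
                    count += 1
                    queue.append((nr, nc))
    return mask

# Toy "lung field": a bright rectangular blob on a dark background.
img = np.zeros((64, 64))
img[20:40, 20:44] = 0.8
print(region_grow(img, seed=(30, 30), tol=0.2).sum(), "pixels segmented")
```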

Findings

From the experimental analysis, the accuracy of the proposed M-DHOA–CNN was 5.84%, 5.23%, 6.25% and 8.33% higher than that of recurrent neural networks, neural networks, support vector machines and K-nearest neighbor, respectively. Thus, the segmentation and classification performance of the developed COVID-19 diagnosis by AFRG and A-CNN outperforms existing techniques.

Originality/value

This paper adopts the latest optimization algorithm, M-DHOA, to improve the performance of lung segmentation and classification in COVID-19 diagnosis using adaptive K-means with region growing fusion and A-CNN. To the best of the authors’ knowledge, this is the first work to use M-DHOA to improve the segmentation and classification steps and increase the convergence rate of diagnosis.

Details

Journal of Engineering, Design and Technology, vol. 22 no. 3
Type: Research Article
ISSN: 1726-0531

Article
Publication date: 5 April 2024

Melike Artar, Yavuz Selim Balcioglu and Oya Erdil

Abstract

Purpose

Our proposed machine learning model contributes to improving the quality of hire by providing a more nuanced and comprehensive analysis of candidate attributes. Instead of focusing solely on obvious factors, such as qualifications and experience, our model also considers various dimensions of fit, including person-job fit and person-organization fit. By integrating these dimensions of fit into the model, we can better predict a candidate’s potential contribution to the organization, hence enhancing the quality of hire.

Design/methodology/approach

Within the scope of the investigation, the competencies of personnel working in the IT department of one of the largest state banks in the country were used. The data collection includes information on 1,850 individual employees across 13 different characteristics. For analysis, Python’s “keras” and “seaborn” modules were used. The Gower coefficient was used to determine the distance between records.
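
A small sketch of the Gower coefficient on mixed employee records: range-normalized absolute differences for numeric fields and simple mismatch for categorical ones. The field names are hypothetical stand-ins for the bank’s 13 characteristics.

```python
# Pairwise Gower distance for mixed numeric/categorical data.
import numpy as np
import pandas as pd

def gower_distance(df, num_cols, cat_cols):
    """Average per-field dissimilarity: |difference|/range for numeric
    columns, 0/1 mismatch for categorical columns."""
    n = len(df)
    d = np.zeros((n, n))
    for col in num_cols:
        vals = df[col].to_numpy(dtype=float)
        span = vals.max() - vals.min() or 1.0   # guard against zero range
        d += np.abs(vals[:, None] - vals[None, :]) / span
    for col in cat_cols:
        vals = df[col].to_numpy()
        d += (vals[:, None] != vals[None, :]).astype(float)
    return d / (len(num_cols) + len(cat_cols))

# Hypothetical employee records; not the bank's actual attributes.
employees = pd.DataFrame({
    "experience_years": [2, 7, 5, 1],
    "certification": ["aws", "aws", "none", "scrum"],
    "role_family": ["dev", "ops", "dev", "dev"],
})
print(gower_distance(employees, ["experience_years"],
                     ["certification", "role_family"]).round(2))
```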

Findings

The K-NN method resulted in the formation of five clusters, represented as a scatter plot. The plot illustrates the cohesion among similar employees within a cluster and the separation between employees with distinct profiles. This shows that the clustering process is effective in improving both the degree of similarity within each cluster and the degree of dissimilarity between clusters.

Research limitations/implications

Only employee competencies were evaluated within the scope of the investigation; other criteria requested from employees were not included in the application.

Originality/value

This study will be beneficial for academics, professionals and researchers in their attempts to overcome the ongoing obstacles and challenges related to securing the proper talent for an organization. By creating a mechanism to use big data, in the form of structured and unstructured data from multiple sources, and deriving insights using ML algorithms, it also contributes to the debate on the quality of hire across an entire organization.

Details

Management Decision, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0025-1747

Article
Publication date: 28 March 2024

Elisa Gonzalez Santacruz, David Romero, Julieta Noguez and Thorsten Wuest

Abstract

Purpose

This research paper aims to analyze the scientific and grey literature on Quality 4.0 and zero-defect manufacturing (ZDM) frameworks to develop an integrated Quality 4.0 framework (IQ4.0F) for quality improvement (QI) based on Six Sigma and machine learning (ML) techniques towards ZDM. The IQ4.0F aims to contribute to the advancement of defect prediction approaches in diverse manufacturing processes. Furthermore, the work enables a comprehensive analysis of process variables influencing product quality, with emphasis on the use of supervised and unsupervised ML techniques in the “Analyze” stage of Six Sigma’s DMAIC (Define, Measure, Analyze, Improve and Control) cycle.

Design/methodology/approach

The research methodology employed a systematic literature review (SLR) based on PRISMA guidelines to develop the integrated framework, followed by a real industrial case study set in the automotive industry to fulfill the objectives of verifying and validating the proposed IQ4.0F with primary data.

Findings

This research work demonstrates the value of a “stepwise framework” to facilitate a shift from conventional quality management systems (QMSs) to QMSs 4.0. It uses the IDEF0 modeling methodology and Six Sigma’s DMAIC cycle to structure the steps to be followed to adopt the Quality 4.0 paradigm for QI. It also proves the worth of integrating Six Sigma and ML techniques into the “Analyze” stage of the DMAIC cycle for improving defect prediction in manufacturing processes and supporting problem-solving activities for quality managers.

Originality/value

This research paper introduces a first-of-its-kind Quality 4.0 framework – the IQ4.0F. Each step of the IQ4.0F was verified and validated in an original industrial case study set in the automotive industry. It is the first Quality 4.0 framework, according to the SLR conducted, to utilize the principal component analysis technique as a substitute for “Screening Design” in the Design of Experiments phase and K-means clustering technique for multivariable analysis, identifying process parameters that significantly impact product quality. The proposed IQ4.0F not only empowers decision-makers with the knowledge to launch a Quality 4.0 initiative but also provides quality managers with a systematic problem-solving methodology for quality improvement.
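
A hedged sketch of the PCA-plus-K-means pairing described above, on synthetic process data; the variable count, retained-variance threshold and cluster count are illustrative assumptions, not the case study’s values.

```python
# PCA as a screening step (keep components explaining 90% of variance),
# then K-means to group operating conditions in the reduced space.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
process_data = rng.normal(size=(300, 8))   # stand-in process variables

scaled = StandardScaler().fit_transform(process_data)
pca = PCA(n_components=0.9)                # fraction = variance to retain
scores = pca.fit_transform(scaled)
print("Components retained:", pca.n_components_)

labels = KMeans(n_clusters=3, n_init=10, random_state=3).fit_predict(scores)
print("Cluster sizes:", np.bincount(labels))
```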

Details

The TQM Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1754-2731

Open Access
Article
Publication date: 28 November 2022

Ruchi Kejriwal, Monika Garg and Gaurav Sarin

Abstract

Purpose

The stock market has always been lucrative for investors, but because of its speculative nature, price movements are difficult to predict. Investors have been using both fundamental and technical analysis to predict prices. Fundamental analysis studies a company’s structured data, while technical analysis studies price trends; the increasing and easy availability of unstructured data has also made it important to study market sentiment. Market sentiment has a major impact on prices in the short run. Hence, the purpose is to understand market sentiment in a timely and effective manner.

Design/methodology/approach

The research includes text mining followed by the creation of various classification models. The accuracy of these models is checked using a confusion matrix.
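
A minimal sketch of such a pipeline, assuming TF-IDF text features and scikit-learn’s kernel SVM (the classifier the findings single out); the example texts and labels are invented.

```python
# Text mining -> kernel SVM -> confusion matrix, on toy financial snippets
# labeled with the positive/negative/neutral scheme from the abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import confusion_matrix
from sklearn.svm import SVC

texts = ["profits surge on strong earnings", "stock plunges after lawsuit",
         "board meeting scheduled for friday", "record revenue this quarter",
         "shares tumble amid recall fears", "annual report released today"]
labels = ["positive", "negative", "neutral",
          "positive", "negative", "neutral"]

X = TfidfVectorizer().fit_transform(texts)
clf = SVC(kernel="rbf").fit(X, labels)   # kernel SVM, per the findings
pred = clf.predict(X)
print(confusion_matrix(labels, pred,
                       labels=["positive", "negative", "neutral"]))
```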

Findings

Of the six machine learning techniques used to create the classification model, the kernel support vector machine gave the highest accuracy, at 68%. This model can now be used to analyse tweets, news and other unstructured data to predict price movement.

Originality/value

This study will help investors quickly classify a news item or a tweet as “positive”, “negative” or “neutral” and determine stock price trends.

Details

Vilakshan - XIMB Journal of Management, vol. 21 no. 1
Type: Research Article
ISSN: 0973-1954

Open Access
Article
Publication date: 18 October 2023

Ivan Soukal, Jan Mačí, Gabriela Trnková, Libuse Svobodova, Martina Hedvičáková, Eva Hamplova, Petra Maresova and Frank Lefley

Abstract

Purpose

The primary purpose of this paper is to identify the so-called core authors and their publications according to pre-defined criteria and thereby direct the users to the fastest and easiest way to get a picture of the otherwise pervasive field of bankruptcy prediction models. The authors aim to present state-of-the-art bankruptcy prediction models assembled by the field's core authors and critically examine the approaches and methods adopted.

Design/methodology/approach

The authors conducted a literature search in November 2022 through scientific databases Scopus, ScienceDirect and the Web of Science, focussing on a publication period from 2010 to 2022. The database search query was formulated as “Bankruptcy Prediction” and “Model or Tool”. However, the authors intentionally did not specify any model or tool to make the search non-discriminatory. The authors reviewed over 7,300 articles.

Findings

This paper has addressed the research questions: (1) What are the most important publications of the core authors in terms of the target country, size of the sample, sector of the economy and specialization in SME? (2) What are the most used methods for deriving or adjusting models appearing in the articles of the core authors? (3) To what extent do the core authors include accounting-based variables, non-financial or macroeconomic indicators, in their prediction models? Despite the advantages of new-age methods, based on the information in the articles analyzed, it can be deduced that conventional methods will continue to be beneficial, mainly due to the higher degree of ease of use and the transferability of the derived model.

Research limitations/implications

The authors identify several gaps in the literature which this research does not address but could be the focus of future research.

Practical implications

The authors provide practitioners and academics with an extract from a wide range of studies, available in scientific databases, on bankruptcy prediction models or tools, resulting in a large number of records being reviewed. This research will interest shareholders, corporations, and financial institutions interested in models of financial distress prediction or bankruptcy prediction to help identify troubled firms in the early stages of distress.

Social implications

Bankruptcy is a major concern for society in general, especially in today's economic environment. Therefore, being able to predict possible business failure at an early stage will give an organization time to address the issue and maybe avoid bankruptcy.

Originality/value

To the authors' knowledge, this is the first paper to identify the core authors in the field of bankruptcy prediction models and methods. The primary value of the study is its current overview and analysis of the theoretical and practical development of knowledge in this field, in the form of the construction of new models using classical or new-age methods. The paper also adds value by critically examining existing models and their modifications, including a discussion of the benefits of using non-accounting variables.

Details

Central European Management Journal, vol. 32 no. 1
Type: Research Article
ISSN: 2658-0845

Article
Publication date: 9 April 2024

Lu Wang, Jiahao Zheng, Jianrong Yao and Yuangao Chen

Abstract

Purpose

With the rapid growth of the domestic lending industry, assessing whether each borrower is at risk of default is a pressing issue for financial institutions. Although some models handle such problems well, they still have shortcomings in certain aspects. The purpose of this paper is to improve the accuracy of credit assessment models.

Design/methodology/approach

In this paper, three stages are used to improve the classification performance of LSTM so that financial institutions can more accurately identify borrowers at risk of default. The first stage uses the K-Means-SMOTE algorithm to eliminate class imbalance. In the second stage, ResNet is used for feature extraction, and a two-layer LSTM is then used for learning, strengthening the network’s ability to mine and use deep information. Finally, model performance is improved by using the IDWPSO algorithm for optimization when tuning the neural network.
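
A sketch of the first stage only, assuming imbalanced-learn’s KMeansSMOTE implementation; the ResNet-LSTM network and the IDWPSO optimizer are the paper’s own contributions and are not reproduced here.

```python
# Rebalance an imbalanced credit-style dataset with K-Means-SMOTE.
from collections import Counter

from imblearn.over_sampling import KMeansSMOTE
from sklearn.datasets import make_classification

# Synthetic stand-in for an imbalanced credit dataset (~20:1 ratio).
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.95],
                           random_state=0)
print("Before:", Counter(y))

# cluster_balance_threshold may need tuning so enough clusters contain
# minority samples to oversample from; 0.1 is an illustrative choice.
sampler = KMeansSMOTE(random_state=0, cluster_balance_threshold=0.1)
X_res, y_res = sampler.fit_resample(X, y)
print("After: ", Counter(y_res))
```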

Findings

On two imbalanced datasets (class ratios of 700:1 and 3:1, respectively), the multi-stage improved model was compared with ten other models using accuracy, precision, specificity, recall, G-measure, F-measure and the nonparametric Wilcoxon test. The comparison demonstrated that the multi-stage improved model has a more significant advantage in evaluating imbalanced credit datasets.

Originality/value

In this paper, the parameters of the ResNet-LSTM hybrid neural network, which can fully mine and utilize the deep information, are tuned by an innovative intelligent optimization algorithm to strengthen the classification performance of the model.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X
