Search results
1 – 10 of 16
Sudhaman Parthasarathy and S.T. Padmapriya
Abstract
Purpose
Algorithm bias refers to repetitive computer program errors that give some users more weight than others. The aim of this article is to provide deeper insight into algorithmic bias in AI-enabled ERP software customization. Although algorithmic bias in machine learning models has uneven, unfair and unjust impacts, research on it is mostly anecdotal and scattered.
Design/methodology/approach
As guided by previous research (Akter et al., 2022), this study presents the possible design biases (model, data and method) one may experience with an enterprise resource planning (ERP) software customization algorithm. This study then presents an artificial intelligence (AI) version of the ERP customization algorithm using the k-nearest neighbours (k-NN) algorithm.
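The abstract does not include the authors' PRCE code, but the k-NN idea it builds on can be sketched in a few lines. The sketch below is purely illustrative: the feature names, training points and labels are invented stand-ins for customization requirements, not the authors' data or algorithm.

```python
from collections import Counter
import math

def knn_classify(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # Sort training indices by Euclidean distance from the query.
    order = sorted(range(len(train)), key=lambda i: math.dist(train[i], query))
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

# Hypothetical requirement vectors: (priority score, estimated effort),
# labelled by whether the ERP customization request was accepted.
train = [(0.9, 0.2), (0.8, 0.3), (0.2, 0.9), (0.1, 0.8)]
labels = ["customize", "customize", "reject", "reject"]

print(knn_classify(train, labels, (0.85, 0.25)))  # → customize
```

Because k-NN decisions are driven entirely by the training set, any skew in which past requirements were accepted propagates directly into new predictions, which is one way the data bias the authors discuss can arise.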
Findings
This study illustrates the possible bias when the prioritized requirements customization estimation (PRCE) algorithm available in the ERP literature is executed without any AI. Then, the authors present their newly developed AI version of the PRCE algorithm that uses ML techniques. The authors then discuss its adjoining algorithmic bias with an illustration. Further, the authors also draw a roadmap for managing algorithmic bias during ERP customization in practice.
Originality/value
To the best of the authors’ knowledge, no prior research has attempted to understand the algorithmic bias that occurs during the execution of the ERP customization algorithm (with or without AI).
Abstract
Purpose
This study focuses on the classification of targets with varying shapes using radar cross section (RCS), which is influenced by the target’s shape. This study aims to develop a robust classification method by considering an incident angle with minor random fluctuations and using a physical optics simulation to generate data sets.
Design/methodology/approach
The approach involves several supervised machine learning and classification methods, including traditional algorithms and a deep neural network classifier. It uses histogram-based definitions of the RCS for feature extraction, with an emphasis on resilience against noise in the RCS data. Data enrichment techniques are incorporated, including the use of noise-impacted histogram data sets.
Findings
The classification algorithms are extensively evaluated, highlighting their efficacy in feature extraction from RCS histograms. Among the studied algorithms, the k-nearest neighbour classifier is found to be the most accurate of the traditional methods, but it is surpassed in accuracy by a deep learning network classifier. The results demonstrate the robustness of feature extraction from RCS histograms, motivated by mm-wave radar applications.
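The histogram-based pipeline the abstract describes can be illustrated with a minimal sketch: turn a set of RCS samples into a fixed-length histogram feature vector, then classify with k-NN. All numbers below (RCS means, spreads, bin range) are invented for illustration and do not come from the paper's physical-optics simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

def rcs_histogram(samples, bins=16, span=(-30.0, 10.0)):
    """Histogram of RCS values (dBsm) as a fixed-length, normalised feature vector."""
    hist, _ = np.histogram(samples, bins=bins, range=span)
    return hist / hist.sum()

# Toy stand-ins for two target shapes: their RCS fluctuates around
# different levels as the incident angle jitters (values are invented).
shape_a = [rcs_histogram(rng.normal(-5, 2, 500)) for _ in range(20)]
shape_b = [rcs_histogram(rng.normal(-15, 4, 500)) for _ in range(20)]

X = np.vstack(shape_a + shape_b)
y = np.array([0] * 20 + [1] * 20)

def knn_predict(X, y, q, k=5):
    """k-NN by Euclidean distance between histogram feature vectors."""
    idx = np.argsort(np.linalg.norm(X - q, axis=1))[:k]
    return np.bincount(y[idx]).argmax()

query = rcs_histogram(rng.normal(-5, 2, 500))  # another shape-A observation
print(knn_predict(X, y, query))  # → 0
```

Normalising each histogram makes the feature robust to the number of RCS samples collected, which is one reason histogram features tolerate the angle-fluctuation noise the study emphasises.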
Originality/value
This study presents a novel approach to target classification that extends beyond traditional methods by integrating deep neural networks and focusing on histogram-based methodologies. It also incorporates data enrichment techniques to enhance the analysis, providing a comprehensive perspective for target detection using RCS.
Ruchi Kejriwal, Monika Garg and Gaurav Sarin
Abstract
Purpose
The stock market has always been lucrative for investors, but its speculative nature makes price movements difficult to predict. Investors have been using both fundamental and technical analysis to predict prices. Fundamental analysis helps to study the structured data of a company; technical analysis helps to study price trends. The increasing and easy availability of unstructured data has made it important to study market sentiment, which has a major impact on prices in the short run. Hence, the purpose is to understand market sentiment in a timely and effective manner.
Design/methodology/approach
The research includes text mining and then creating various models for classification. The accuracy of these models is checked using a confusion matrix.
Findings
Out of the six machine learning techniques used to create the classification model, the kernel support vector machine gave the highest accuracy of 68%. This model can now be used to analyse tweets, news and other unstructured data to predict price movements.
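The confusion-matrix evaluation the abstract mentions is simple to sketch. The snippet below shows how a confusion matrix over the three sentiment classes yields an accuracy figure; the labels and predictions are invented toy data, not the study's tweets or models.

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, labels):
    """Counts of (true label, predicted label) pairs, as a nested dict."""
    counts = Counter(zip(y_true, y_pred))
    return {t: {p: counts[(t, p)] for p in labels} for t in labels}

def accuracy(cm):
    """Fraction of samples on the matrix diagonal (correctly classified)."""
    correct = sum(cm[label][label] for label in cm)
    total = sum(v for row in cm.values() for v in row.values())
    return correct / total

labels = ["positive", "negative", "neutral"]
y_true = ["positive", "negative", "neutral", "positive", "negative", "neutral"]
y_pred = ["positive", "negative", "positive", "positive", "neutral", "neutral"]

cm = confusion_matrix(y_true, y_pred, labels)
print(round(accuracy(cm), 4))  # 4 of 6 correct → 0.6667
```

Beyond overall accuracy, the off-diagonal cells show which sentiment classes get confused with each other, which matters when "neutral" misreads are cheaper than "positive"/"negative" swaps.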
Originality/value
This study will help investors classify a news or a tweet into “positive”, “negative” or “neutral” quickly and determine the stock price trends.
Ivan Soukal, Jan Mačí, Gabriela Trnková, Libuse Svobodova, Martina Hedvičáková, Eva Hamplova, Petra Maresova and Frank Lefley
Abstract
Purpose
The primary purpose of this paper is to identify the so-called core authors and their publications according to pre-defined criteria and thereby direct the users to the fastest and easiest way to get a picture of the otherwise pervasive field of bankruptcy prediction models. The authors aim to present state-of-the-art bankruptcy prediction models assembled by the field's core authors and critically examine the approaches and methods adopted.
Design/methodology/approach
The authors conducted a literature search in November 2022 through scientific databases Scopus, ScienceDirect and the Web of Science, focussing on a publication period from 2010 to 2022. The database search query was formulated as “Bankruptcy Prediction” and “Model or Tool”. However, the authors intentionally did not specify any model or tool to make the search non-discriminatory. The authors reviewed over 7,300 articles.
Findings
This paper has addressed the research questions: (1) What are the most important publications of the core authors in terms of the target country, size of the sample, sector of the economy and specialization in SME? (2) What are the most used methods for deriving or adjusting models appearing in the articles of the core authors? (3) To what extent do the core authors include accounting-based variables, non-financial or macroeconomic indicators, in their prediction models? Despite the advantages of new-age methods, based on the information in the articles analyzed, it can be deduced that conventional methods will continue to be beneficial, mainly due to the higher degree of ease of use and the transferability of the derived model.
Research limitations/implications
The authors identify several gaps in the literature which this research does not address but could be the focus of future research.
Practical implications
The authors provide practitioners and academics with an extract from a wide range of studies, available in scientific databases, on bankruptcy prediction models or tools, resulting in a large number of records being reviewed. This research will interest shareholders, corporations, and financial institutions interested in models of financial distress prediction or bankruptcy prediction to help identify troubled firms in the early stages of distress.
Social implications
Bankruptcy is a major concern for society in general, especially in today's economic environment. Therefore, being able to predict possible business failure at an early stage will give an organization time to address the issue and maybe avoid bankruptcy.
Originality/value
To the authors' knowledge, this is the first paper to identify the core authors in the bankruptcy prediction model and methods field. The primary value of the study is the current overview and analysis of the theoretical and practical development of knowledge in this field in the form of the construction of new models using classical or new-age methods. Also, the paper adds value by critically examining existing models and their modifications, including a discussion of the benefits of non-accounting variables usage.
Bahareh Farhoudinia, Selcen Ozturkcan and Nihat Kasap
Abstract
Purpose
This paper aims to conduct an interdisciplinary systematic literature review (SLR) of fake news research and to advance the socio-technical understanding of digital information practices and platforms in business and management studies.
Design/methodology/approach
The paper applies a focused SLR method to analyze articles on fake news in business and management journals from 2010 to 2020.
Findings
The paper analyzes the definition, theoretical frameworks, methods and research gaps of fake news in the business and management domains. It also identifies some promising research opportunities for future scholars.
Practical implications
The paper offers practical implications for various stakeholders who are affected by or involved in fake news dissemination, such as brands, consumers and policymakers. It provides recommendations to cope with the challenges and risks of fake news.
Social implications
The paper discusses the social consequences and future threats of fake news, especially in relation to social networking and social media. It calls for more awareness and responsibility from online communities to prevent and combat fake news.
Originality/value
The paper contributes to the literature on information management by showing the importance and consequences of fake news sharing for societies. It is among the frontier systematic reviews in the field that covers studies from different disciplines and focuses on business and management studies.
Muneza Kagzi, Sayantan Khanra and Sanjoy Kumar Paul
Abstract
Purpose
From a technological determinist perspective, machine learning (ML) may significantly contribute towards sustainable development. The purpose of this study is to synthesize prior literature on the role of ML in promoting sustainability and to encourage future inquiries.
Design/methodology/approach
This study conducts a systematic review of 110 papers that demonstrate the utilization of ML in the context of sustainable development.
Findings
ML techniques may play a vital role in enabling sustainable development by leveraging data to uncover patterns and facilitate the prediction of various variables, thereby aiding in decision-making processes. Through the synthesis of findings from prior research, it is evident that ML may help in achieving many of the United Nations’ sustainable development goals.
Originality/value
This study represents one of the initial investigations that conducted a comprehensive examination of the literature concerning ML’s contribution to sustainability. The analysis revealed that the research domain is still in its early stages, indicating a need for further exploration.
Francois Du Rand, André Francois van der Merwe and Malan van Tonder
Abstract
Purpose
This paper aims to discuss the development of a defect classification system that can be used to detect and classify powder bed surface defects from captured layer images without the need for specialised computational hardware. The idea is to develop this system by making use of more traditional machine learning (ML) models instead of using computationally intensive deep learning (DL) models.
Design/methodology/approach
The approach that is used by this study is to use traditional image processing and classification techniques that can be applied to captured layer images to detect and classify defects without the need for DL algorithms.
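The traditional route the authors favour, hand-crafted image features plus a lightweight classifier instead of a deep network, can be sketched as follows. Everything here is an invented stand-in: the synthetic "layer images", the streak defect, and the nearest-centroid classifier are illustrative, not the paper's actual processing pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def layer_features(img):
    """Cheap hand-crafted features for a powder-bed layer image:
    mean intensity, intensity spread, and gradient energy (texture)."""
    gy, gx = np.gradient(img.astype(float))
    return np.array([img.mean(), img.std(), np.hypot(gx, gy).mean()])

# Invented stand-ins: smooth layers vs layers with streak-like defects.
smooth  = [rng.normal(0.5, 0.01, (32, 32)) for _ in range(10)]
streaky = [rng.normal(0.5, 0.01, (32, 32)) for _ in range(10)]
for img in streaky:
    img[::4, :] += 0.3  # superimpose horizontal streaks

# Nearest-centroid classifier: one mean feature vector per class.
centroids = {
    "ok":     np.mean([layer_features(i) for i in smooth], axis=0),
    "defect": np.mean([layer_features(i) for i in streaky], axis=0),
}

def classify(img):
    f = layer_features(img)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

test_img = rng.normal(0.5, 0.01, (32, 32))
test_img[::4, :] += 0.3
print(classify(test_img))  # streak texture → defect
```

A classifier like this runs in microseconds on a CPU, which illustrates why avoiding DL models matters for the closed-loop feedback latency the paper targets.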
Findings
The study showed that a defect classification algorithm could be developed using traditional ML models with a high degree of accuracy, and that images could be processed at higher speeds than typically reported in the literature for DL models.
Originality/value
This paper addresses a need that has been identified for a high-speed defect classification algorithm that can detect and classify defects without the need for specialised hardware that is typically used when making use of DL technologies. This is because when developing closed-loop feedback systems for these additive manufacturing machines, it is important to detect and classify defects without inducing additional delays to the control system.
Miaoxian Guo, Shouheng Wei, Chentong Han, Wanliang Xia, Chao Luo and Zhijian Lin
Abstract
Purpose
Surface roughness has a serious impact on the fatigue strength, wear resistance and service life of mechanical products, and realizing the evolution of surface quality through theoretical modeling takes considerable effort. To predict the surface roughness of the milling process, this paper aims to construct a neural network based on deep learning and data augmentation.
Design/methodology/approach
This study proposes a method consisting of three steps. Firstly, the machine tool multisource data acquisition platform is established, which combines sensor monitoring with machine tool communication to collect processing signals. Secondly, the feature parameters are extracted to reduce the interference and improve the model generalization ability. Thirdly, for different expectations, the parameters of the deep belief network (DBN) model are optimized by the Tent-SSA algorithm to achieve more accurate roughness classification and regression prediction.
Findings
The adaptive synthetic sampling (ADASYN) algorithm can improve the classification prediction accuracy of DBN from 80.67% to 94.23%. After the DBN parameters were optimized by Tent-SSA, the roughness prediction accuracy was significantly improved. For the classification model, the prediction accuracy is improved by 5.77% based on ADASYN optimization. For regression models, different objective functions can be set according to production requirements, such as root-mean-square error (RMSE) or MaxAE, and the error is reduced by more than 40% compared to the original model.
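The class-imbalance remedy behind ADASYN is interpolation: synthetic minority samples are generated between existing minority points and their nearest minority neighbours (ADASYN additionally weights generation toward harder, majority-surrounded points, which this minimal sketch omits). The two-feature "roughness class" data below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def synthesise_minority(X_min, n_new, k=3):
    """SMOTE/ADASYN-style oversampling: each synthetic point is a random
    interpolation between a minority sample and one of its k nearest
    minority neighbours. (ADASYN's density weighting is omitted.)"""
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]          # skip the point itself
        j = rng.choice(nbrs)
        lam = rng.random()                      # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

# Toy imbalanced roughness data: only 5 samples in the minority class.
X_rough = rng.normal([1.0, 2.0], 0.1, (5, 2))
X_new = synthesise_minority(X_rough, n_new=15)
print(X_new.shape)  # (15, 2)
```

Balancing the classes this way gives the downstream classifier (the DBN in the study) enough minority examples to learn its decision boundary, which is the mechanism behind the accuracy gain the abstract reports.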
Originality/value
A roughness prediction model based on multiple monitoring signals is proposed, which reduces the dependence on the acquisition of environmental variables and enhances the model's applicability. Furthermore, with the ADASYN algorithm, the Tent-SSA intelligent optimization algorithm is introduced to optimize the hyperparameters of the DBN model and improve the optimization performance.