Search results

1–10 of over 30,000
Article
Publication date: 8 August 2023

Smita Abhijit Ganjare, Sunil M. Satao and Vaibhav Narwane

In today's fast-developing era, the volume of data is increasing day by day. Traditional methods lag in efficiently managing this huge amount of data. The adoption of…

Abstract

Purpose

In today's fast-developing era, the volume of data is increasing day by day. Traditional methods lag in efficiently managing this huge amount of data. The adoption of machine learning techniques helps manage data efficiently and draw relevant patterns from it. The main aim of this research paper is to provide brief information about the adoption of machine learning techniques in different sectors of the manufacturing supply chain.

Design/methodology/approach

This research paper conducts a rigorous systematic literature review of the adoption of machine learning techniques in the manufacturing supply chain from 2015 to 2023. Of 511 papers, 74 were shortlisted for detailed analysis.

Findings

The papers are subcategorised into eight sections, which helps in scrutinizing the work done in the manufacturing supply chain. This paper identifies the contribution of machine learning techniques in the manufacturing field, mostly in the automotive sector.

Practical implications

The research is limited to papers published from 2015 to 2023. A limitation of the current research is that book chapters, unpublished work, white papers and conference papers were not considered; only English-language articles and review papers were studied in detail. This study supports the adoption of machine learning techniques in the manufacturing supply chain.

Originality/value

This study is one of the few that investigates machine learning techniques in the manufacturing sector and supply chain through a systematic literature survey.

Highlights

  1. A comprehensive understanding of machine learning techniques is presented.

  2. The state of the art in the adoption of machine learning techniques is investigated.

  3. A systematic literature review (SLR) methodology is proposed.

  4. An innovative study of machine learning techniques in the manufacturing supply chain is presented.

Details

The TQM Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1754-2731


Article
Publication date: 4 November 2014

Ahmad Mozaffari, Nasser Lashgarian Azad and Alireza Fathi

The purpose of this paper is to demonstrate the applicability of swarm and evolutionary techniques for regularized machine learning. Generally, by defining a proper penalty…

Abstract

Purpose

The purpose of this paper is to demonstrate the applicability of swarm and evolutionary techniques for regularized machine learning. Generally, by defining a proper penalty function, regularization laws are embedded into the structure of common least-squares solutions to increase the numerical stability, sparsity, accuracy and robustness of regression weights. Several regularization techniques have been proposed, each with its own advantages and disadvantages, and several efforts have been made to find fast and accurate deterministic solvers for them. However, the proposed numerical and deterministic approaches require certain knowledge of mathematical programming and do not guarantee the global optimality of the obtained solution. In this research, the authors propose the use of constrained swarm and evolutionary techniques to cope with the demanding requirements of the regularized extreme learning machine (ELM).

Design/methodology/approach

To implement the required tools for comparative numerical study, three steps are taken. The considered algorithms contain both classical and swarm and evolutionary approaches. For the classical regularization techniques, Lasso regularization, Tikhonov regularization, cascade Lasso-Tikhonov regularization, and elastic net are considered. For swarm and evolutionary-based regularization, an efficient constraint handling technique known as self-adaptive penalty function constraint handling is considered, and its algorithmic structure is modified so that it can efficiently perform the regularized learning. Several well-known metaheuristics are considered to check the generalization capability of the proposed scheme. To test the efficacy of the proposed constraint evolutionary-based regularization technique, a wide range of regression problems are used. Besides, the proposed framework is applied to a real-life identification problem, i.e. identifying the dominant factors affecting the hydrocarbon emissions of an automotive engine, for further assurance on the performance of the proposed scheme.
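As a rough illustration of the idea (not the authors' algorithm), the sketch below minimizes an L1-penalized least-squares objective with a simple (1+1) evolution strategy; the toy dataset, penalty weight and optimizer settings are all assumptions made for the example:

```python
import random

# Toy data: the target depends only on the first feature (y = 2*x1), so an
# L1 (Lasso-style) penalty should drive the second weight toward zero.
X = [(1.0, 0.5), (2.0, 1.0), (3.0, 1.5), (4.0, 2.0), (0.5, 3.0), (1.5, 2.5)]
y = [2.0 * a for a, b in X]

def objective(w, lam=0.1):
    # Penalized least squares: squared error plus an L1 penalty on the weights.
    sse = sum((w[0] * a + w[1] * b - t) ** 2 for (a, b), t in zip(X, y))
    return sse + lam * (abs(w[0]) + abs(w[1]))

def evolve(iters=5000, sigma=0.2, seed=42):
    # (1+1) evolution strategy: mutate the current weights and keep the
    # candidate only if it lowers the penalized objective.
    rng = random.Random(seed)
    w, best = [0.0, 0.0], objective([0.0, 0.0])
    for _ in range(iters):
        cand = [wi + rng.gauss(0, sigma) for wi in w]
        f = objective(cand)
        if f < best:
            w, best = cand, f
    return w, best

w, best = evolve()
print([round(wi, 2) for wi in w], round(best, 3))
```

The derivative-free search needs only objective evaluations, which is the property the paper exploits: any penalty function can be plugged in without a specialized deterministic solver.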

Findings

Through extensive numerical study, it is observed that the proposed scheme can be easily used for regularized machine learning. It is indicated that by defining a proper objective function and considering an appropriate penalty function, near globally optimal values of the regressors can be easily obtained. The results attest to the high potential of swarm and evolutionary techniques for fast, accurate and robust regularized machine learning.

Originality/value

The originality of the research paper lies in the use of a novel constrained metaheuristic computing scheme which can be used for effective regularized optimally pruned extreme learning machine (OP-ELM). The self-adaptation of the proposed method relieves the user of needing knowledge of the underlying system and increases the degree of automation of OP-ELM. Besides, by using different types of metaheuristics, it is demonstrated that the proposed methodology is a general, flexible scheme that can be combined with different types of swarm and evolutionary-based optimization techniques to form a regularized machine learning approach.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 7 no. 4
Type: Research Article
ISSN: 1756-378X


Article
Publication date: 16 August 2021

Rajshree Varma, Yugandhara Verma, Priya Vijayvargiya and Prathamesh P. Churi

The rapid advancement of technology in online communication and fingertip access to the Internet has resulted in the expedited dissemination of fake news to engage a global…


Abstract

Purpose

The rapid advancement of technology in online communication and fingertip access to the Internet has resulted in the expedited dissemination of fake news to engage a global audience at a low cost by news channels, freelance reporters and websites. Amid the coronavirus disease 2019 (COVID-19) pandemic, individuals are exposed to these false and potentially harmful claims and stories, which may harm the vaccination process. Psychological studies reveal that the human ability to detect deception is only slightly better than chance; therefore, there is a growing need for serious consideration for developing automated strategies to combat fake news that traverses these platforms at an alarming rate. This paper systematically reviews the existing fake news detection technologies by exploring various machine learning and deep learning techniques pre- and post-pandemic, which has never been done before to the best of the authors' knowledge.

Design/methodology/approach

The detailed literature review on fake news detection is divided into three major parts. The authors searched for papers on deep learning and machine learning approaches to fake news detection published no earlier than 2017. The papers were initially searched through the Google Scholar platform and scrutinized for quality; the authors kept "Scopus" and "Web of Science" as quality indexing parameters. All research gaps and available databases, data pre-processing, feature extraction techniques and evaluation methods for current fake news detection technologies have been explored and illustrated using tables, charts and trees.

Findings

The paper is divided into two approaches, namely machine learning and deep learning, to present a better understanding and a clear objective. Next, the authors present a viewpoint on which approach is better, along with future research trends, issues and challenges for researchers, given the relevance and urgency of a detailed and thorough analysis of existing models. This paper also delves into fake news detection during COVID-19, and it can be inferred that research and modeling are shifting toward the use of ensemble approaches.

Originality/value

The study also identifies several novel automated web-based approaches used by researchers to assess the validity of pandemic news that have proven to be successful, although currently reported accuracy has not yet reached consistent levels in the real world.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 14 no. 4
Type: Research Article
ISSN: 1756-378X


Article
Publication date: 19 February 2020

Shashidhar Kaparthi and Daniel Bumblauskas

The after-sale service industry is estimated to contribute over 8 percent to the US GDP. For use in this considerably large service management industry, this article provides…


Abstract

Purpose

The after-sale service industry is estimated to contribute over 8 percent to the US GDP. For use in this considerably large service management industry, this article provides verification in the application of decision tree-based machine learning algorithms for optimal maintenance decision-making. The motivation for this research arose from discussions held with a large agricultural equipment manufacturing company interested in increasing the uptime of their expensive machinery and in helping their dealer network.

Design/methodology/approach

We propose a general strategy for the design of predictive maintenance systems using machine learning techniques. Then, we present a case study where multiple machine learning algorithms are applied to a particular example situation for an illustration of the proposed strategy and evaluation of its performance.
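As a minimal sketch of the decision-tree idea behind such predictive maintenance systems (not the paper's actual models or data), the stump learner below picks the single sensor feature and threshold that best separate healthy machines from those that later failed; the readings are hypothetical:

```python
# Hypothetical sensor readings: (vibration, temperature, label),
# where label 1 means the machine failed within the maintenance window.
DATA = [
    (0.2, 60, 0), (0.3, 65, 0), (0.4, 62, 0), (0.5, 70, 0),
    (0.9, 85, 1), (1.1, 80, 1), (1.3, 90, 1), (0.8, 88, 1),
]

def best_stump(rows):
    # Depth-1 decision tree: scan every (feature, threshold) pair and keep
    # the split with the fewest misclassifications on the training rows.
    best = None
    for feat in (0, 1):
        for thr in sorted({r[feat] for r in rows}):
            errs = sum((r[feat] > thr) != bool(r[2]) for r in rows)
            if best is None or errs < best[2]:
                best = (feat, thr, errs)
    return best

feat, thr, errs = best_stump(DATA)
print(feat, thr, errs)  # feature 0 (vibration) at threshold 0.5 splits cleanly
```

A full decision tree applies this split search recursively to each resulting partition; the same learned-threshold logic is what makes the approach independent of the underlying physics.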

Findings

We found progressive improvements using such machine learning techniques in terms of accuracy in predictions of failure, demonstrating that the proposed strategy is successful.

Research limitations/implications

This approach is scalable to a wide variety of applications to aid in failure prediction. These approaches are generalizable to many systems irrespective of the underlying physics. Even though we focus on decision tree-based machine learning techniques in this study, the general design strategy proposed can be used with all other supervised learning techniques like neural networks, boosting algorithms, support vector machines, and statistical methods.

Practical implications

This approach is applicable to many different types of systems that require maintenance and repair decision-making. A case is provided for a cloud data storage provider. The methods described in the case can be used in any number of systems and industrial applications, making this a very scalable case for industry practitioners. This scalability is possible as the machine learning techniques learn the correspondence between machine conditions and outcome state irrespective of the underlying physics governing the systems.

Social implications

Sustainable systems and operations require allocating and utilizing resources efficiently and effectively. This approach can help asset managers decide how to sustainably allocate resources by increasing uptime and utilization for expensive equipment.

Originality/value

This is a novel application and case study for decision tree-based machine learning that will aid researchers in developing tools and techniques in this area as well as those working in the artificial intelligence and service management space.

Details

International Journal of Quality & Reliability Management, vol. 37 no. 4
Type: Research Article
ISSN: 0265-671X


Article
Publication date: 29 December 2022

Xiaoguang Tian, Robert Pavur, Henry Han and Lili Zhang

Studies on mining text and generating intelligence on human resource documents are rare. This research aims to use artificial intelligence and machine learning techniques to…


Abstract

Purpose

Studies on mining text and generating intelligence on human resource documents are rare. This research aims to use artificial intelligence and machine learning techniques to facilitate the employee selection process through latent semantic analysis (LSA), bidirectional encoder representations from transformers (BERT) and support vector machines (SVM). The research also compares the performance of different machine learning, text vectorization and sampling approaches on the human resource (HR) resume data.

Design/methodology/approach

LSA and BERT are used to discover and understand the hidden patterns from a textual resume dataset, and SVM is applied to build the screening model and improve performance.
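As a much simpler stand-in for the LSA/BERT representations used in the paper, the sketch below ranks hypothetical resumes against a job description by bag-of-words cosine similarity; the texts and candidate names are invented for illustration:

```python
import math
from collections import Counter

# Hypothetical job description and resume snippets (illustrative only).
JOB = "python machine learning model deployment"
RESUMES = {
    "cand_a": "experienced python developer machine learning model training",
    "cand_b": "sales manager retail customer relationship",
}

def cosine(a, b):
    # Cosine similarity between raw word-count vectors of two texts.
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb)

ranked = sorted(RESUMES, key=lambda k: cosine(JOB, RESUMES[k]), reverse=True)
print(ranked[0])  # the resume sharing the most job-description terms ranks first
```

LSA improves on this baseline by applying a truncated SVD to the term-document matrix so that semantically related terms (not just exact matches) contribute to the similarity, and BERT goes further with contextual embeddings.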

Findings

Based on the results of this study, LSA and BERT are proved useful in retrieving critical topics, and SVM can optimize the prediction model performance with the help of cross-validation and variable selection strategies.

Research limitations/implications

The technique and its empirical conclusions provide a practical, theoretical basis and reference for HR research.

Practical implications

The novel methods proposed in the study can assist HR practitioners in designing and improving their existing recruitment process. The topic detection techniques used in the study provide HR practitioners insights to identify the skill set of a particular recruiting position.

Originality/value

To the best of the authors’ knowledge, this research is the first study that uses LSA, BERT, SVM and other machine learning models in human resource management and resume classification. Compared with the existing machine learning-based resume screening system, the proposed system can provide more interpretable insights for HR professionals to understand the recommendation results through the topics extracted from the resumes. The findings of this study can also help organizations to find a better and effective approach for resume screening and evaluation.

Details

Business Process Management Journal, vol. 29 no. 1
Type: Research Article
ISSN: 1463-7154


Open Access
Article
Publication date: 28 April 2023

Prudence Kadebu, Robert T.R. Shoniwa, Kudakwashe Zvarevashe, Addlight Mukwazvure, Innocent Mapanga, Nyasha Fadzai Thusabantu and Tatenda Trust Gotora

Given how smart today’s malware authors have become through employing highly sophisticated techniques, it is only logical that methods be developed to combat the most potent…


Abstract

Purpose

Given how smart today’s malware authors have become through employing highly sophisticated techniques, it is only logical that methods be developed to combat the most potent threats, particularly where the malware is stealthy and makes indicators of compromise (IOC) difficult to detect. After the analysis is completed, the output can be employed to detect and then counteract the attack. The goal of this work is to propose a machine learning approach to improve malware detection by combining the strengths of both supervised and unsupervised machine learning techniques. This study is essential as malware has certainly become ubiquitous as cyber-criminals use it to attack systems in cyberspace. Malware analysis is required to reveal hidden IOC, to comprehend the attacker’s goal and the severity of the damage and to find vulnerabilities within the system.

Design/methodology/approach

This research proposes a hybrid approach for dynamic and static malware analysis that combines unsupervised and supervised machine learning algorithms and goes on to show how malware exploiting steganography can be exposed.

Findings

The tactics used by malware developers to circumvent detection are becoming more advanced with steganography becoming a popular technique applied in obfuscation to evade mechanisms for detection. Malware analysis continues to call for continuous improvement of existing techniques. State-of-the-art approaches applying machine learning have become increasingly popular with highly promising results.

Originality/value

Cyber security researchers globally are grappling with devising innovative strategies to identify and defend against the threat of extremely sophisticated malware attacks on key infrastructure containing sensitive data. The process of detecting the presence of malware requires expertise in malware analysis. Applying intelligent methods to this process can aid practitioners in identifying malware’s behaviour and features. This is especially expedient where the malware is stealthy, hiding IOC.

Details

International Journal of Industrial Engineering and Operations Management, vol. 5 no. 2
Type: Research Article
ISSN: 2690-6090


Article
Publication date: 30 December 2022

Aishwarya Narang, Ravi Kumar and Amit Dhiman

This study seeks to understand the connection of methodology by finding relevant papers and their full review using the “Preferred Reporting Items for Systematic Reviews and…

Abstract

Purpose

This study seeks to understand the connection of methodology by finding relevant papers and their full review using the “Preferred Reporting Items for Systematic Reviews and Meta-Analyses” (PRISMA).

Design/methodology/approach

Concrete-filled steel tubular (CFST) columns have gained popularity in construction in recent decades as they offer the benefit of constituent materials and cost-effectiveness. Artificial Neural Networks (ANNs), Support Vector Machines (SVMs), Gene Expression Programming (GEP) and Decision Trees (DTs) are some of the approaches that have been widely used in recent decades in structural engineering to construct predictive models, resulting in effective and accurate decision making. Despite the fact that there are numerous research studies on the various parameters that influence the axial compression capacity (ACC) of CFST columns, there is no systematic review of these Machine Learning methods.

Findings

The implications of a variety of structural characteristics on machine learning performance parameters are addressed and reviewed. The comparison analysis of current design codes and machine learning tools to predict the performance of CFST columns is summarized. The discussion results indicate that machine learning tools better understand complex datasets and intricate testing designs.

Originality/value

This study examines machine learning techniques for forecasting the axial bearing capacity of concrete-filled steel tubular (CFST) columns. This paper also highlights the drawbacks of utilizing existing techniques to build CFST columns, and the benefits of Machine Learning approaches over them. This article attempts to introduce beginners and experienced professionals to various research trajectories.

Details

Multidiscipline Modeling in Materials and Structures, vol. 19 no. 2
Type: Research Article
ISSN: 1573-6105


Open Access
Article
Publication date: 3 August 2020

Djordje Cica, Branislav Sredanovic, Sasa Tesic and Davorin Kramar

Sustainable manufacturing is one of the most important and most challenging issues in the present industrial scenario. With the intention of diminishing negative effects associated with…


Abstract

Sustainable manufacturing is one of the most important and most challenging issues in the present industrial scenario. With the intention of diminishing the negative effects associated with cutting fluids, the machining industries are continuously developing technologies and systems for cooling/lubricating the cutting zone while maintaining machining efficiency. In the present study, three regression-based machine learning techniques, namely polynomial regression (PR), support vector regression (SVR) and Gaussian process regression (GPR), were developed to predict machining force, cutting power and cutting pressure in the turning of AISI 1045. In the development of the predictive models, the machining parameters of cutting speed, depth of cut and feed rate were considered as control factors. Since cooling/lubricating techniques significantly affect machining performance, prediction models of the quality characteristics were developed under minimum quantity lubrication (MQL) and high-pressure coolant (HPC) cutting conditions. The prediction accuracy of the developed models was evaluated by statistical error analysis methods. Results of the regression-based machine learning techniques were also compared with artificial neural networks (ANN), probably one of the most frequently used machine learning methods. Finally, a metaheuristic approach based on a neural network algorithm was utilized to perform an efficient multi-objective optimization of process parameters for both cutting environments.
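As a minimal single-variable sketch of the polynomial regression (PR) idea (the study itself fits multi-factor models on measured machining data), the code below fits a degree-2 polynomial to hypothetical force-versus-feed-rate points by solving the normal equations directly:

```python
# Hypothetical feed rates (mm/rev) and cutting forces (N); illustrative only.
feed = [0.1, 0.15, 0.2, 0.25, 0.3]
force = [120.0, 150.0, 185.0, 225.0, 270.0]

def polyfit2(xs, ys):
    # Least-squares fit of y = c0 + c1*x + c2*x^2: build the 3x3 normal
    # equations and solve them by Gaussian elimination with partial pivoting.
    A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    return coef

c0, c1, c2 = polyfit2(feed, force)
pred = c0 + c1 * 0.2 + c2 * 0.2 ** 2  # predicted force at feed = 0.2
print(round(pred))
```

SVR and GPR replace the fixed polynomial basis with kernel functions, which lets them capture interactions the polynomial form cannot express without extra terms.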

Details

Applied Computing and Informatics, vol. 20 no. 1/2
Type: Research Article
ISSN: 2634-1964


Article
Publication date: 21 December 2021

Laouni Djafri

This work can be used as a building block in other settings such as GPU, Map-Reduce, Spark or any other. Also, DDPML can be deployed on other distributed systems such as P2P…


Abstract

Purpose

This work can be used as a building block in other settings such as GPU, Map-Reduce, Spark or any other. Also, DDPML can be deployed on other distributed systems such as P2P networks, clusters, cloud computing or other technologies.

Design/methodology/approach

In the age of Big Data, all companies want to benefit from large amounts of data. These data can help them understand their internal and external environment and anticipate associated phenomena, as the data turn into knowledge that can be used for prediction later. Thus, this knowledge becomes a great asset in companies' hands. This is precisely the objective of data mining. But with the production of a large amount of data and knowledge at a faster pace, the authors are now talking about Big Data mining. For this reason, the authors' proposed work mainly aims at solving the problems of volume, veracity, validity and velocity when classifying Big Data using distributed and parallel processing techniques. So, the problem raised in this work is how machine learning algorithms can be made to work in a distributed and parallel way at the same time without losing the accuracy of classification results. To solve this problem, the authors propose a system called Dynamic Distributed and Parallel Machine Learning (DDPML) algorithms. To build it, the authors divided their work into two parts. In the first, the authors propose a distributed architecture that is controlled by the Map-Reduce algorithm, which in turn depends on a random sampling technique. The distributed architecture the authors designed is specially directed at handling big data processing that operates in a coherent and efficient manner with the sampling strategy proposed in this work. This architecture also helps the authors to verify the classification results obtained using the representative learning base (RLB). In the second part, the authors extracted the representative learning base by sampling at two levels using the stratified random sampling method. This sampling method is also applied to extract the shared learning base (SLB) and the partial learning bases for the first level (PLBL1) and the second level (PLBL2).
The experimental results show the efficiency of the authors' solution without significant loss in classification results. Thus, in practical terms, the DDPML system is generally dedicated to big data mining processing and works effectively in distributed systems with a simple structure, such as client-server networks.
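The stratified sampling step can be sketched as follows; the class labels, sampling fraction and record counts are illustrative assumptions, not the paper's DDPML implementation:

```python
import random
from collections import defaultdict

# Hypothetical labeled records: 80 "spam" and 20 "ham" examples.
records = [("spam", i) for i in range(80)] + [("ham", i) for i in range(20)]

def stratified_sample(rows, frac, seed=0):
    # Stratified random sampling: draw the same fraction from every class,
    # so the extracted learning base preserves the original class shares.
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for label, payload in rows:
        by_class[label].append((label, payload))
    sample = []
    for label, group in by_class.items():
        k = round(len(group) * frac)
        sample.extend(rng.sample(group, k))
    return sample

s = stratified_sample(records, 0.25)
print(len(s), sum(1 for label, _ in s if label == "spam"))  # 25 rows, 20 spam
```

Because each class is sampled independently, the 80/20 spam-to-ham ratio survives in the sample, which is the property that keeps classification accuracy stable when training on the reduced base.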

Findings

The authors obtained very satisfactory classification results.

Originality/value

DDPML system is specially designed to smoothly handle big data mining classification.

Details

Data Technologies and Applications, vol. 56 no. 4
Type: Research Article
ISSN: 2514-9288


Article
Publication date: 5 July 2023

Maan Habib, Bashar Bashir, Abdullah Alsalman and Hussein Bachir

Slope stability analysis is essential for ensuring the safe design of road embankments. While various conventional methods, such as the finite element approach, are used to…

Abstract

Purpose

Slope stability analysis is essential for ensuring the safe design of road embankments. While various conventional methods, such as the finite element approach, are used to determine the safety factor of road embankments, there is ongoing interest in exploring the potential of machine learning techniques for this purpose.

Design/methodology/approach

Within the study context, the outcomes of the ensemble machine learning models will be compared and benchmarked against the conventional techniques used to predict this parameter.

Findings

Generally, the study results have shown that the proposed machine learning models provide rapid and accurate estimates of the safety factor of road embankments and are, therefore, promising alternatives to traditional methods.

Originality/value

Although machine learning algorithms hold promise for rapidly and accurately estimating the safety factor of road embankments, few studies have systematically compared their performance with traditional methods. To address this gap, this study introduces a novel approach using advanced ensemble machine learning techniques for efficient and precise estimation of the road embankment safety factor. Besides, the study comprehensively assesses the performance of these ensemble techniques, in contrast with established methods such as the finite element approach and empirical models, demonstrating their potential as robust and reliable alternatives in the realm of slope stability assessment.

Details

Multidiscipline Modeling in Materials and Structures, vol. 19 no. 5
Type: Research Article
ISSN: 1573-6105

