Search results

1 – 10 of 21
Article
Publication date: 10 June 2022

Yasser Alharbi

Abstract

Purpose

This strategy significantly reduces the computational overhead and storage overhead required when using the kernel density estimation method to calculate the abnormal evaluation value of the test sample.

Design/methodology/approach

To effectively deal with the security threats that botnets pose to home and personal Internet of Things (IoT) devices, and in particular the problem of insufficient resources for anomaly detection in the home environment, this paper proposes a federated learning-based lightweight IoT anomaly traffic detection method built on kernel density estimation (KDE-LIATD). First, the KDE-LIATD method uses Gaussian kernel density estimation to estimate, for every normal sample in the training set, the probability density function of each dimensional feature and its corresponding probability density. Then, a feature selection algorithm based on kernel density estimation extracts the features that contribute most to anomaly detection, reducing the feature dimension while improving detection accuracy. Finally, the anomaly evaluation value of each test sample is computed by cubic spline interpolation and anomaly detection is performed.
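The precompute-then-interpolate idea described above — fit the KDE once on the normal training data, then score test samples from a cubic spline of the stored density — can be sketched as follows for a single feature. This is an illustrative reconstruction, not the authors' code; the bandwidth, grid and anomaly threshold are arbitrary assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=500)   # one feature dimension of the normal training set

def gaussian_kde_pdf(x, samples, bandwidth=0.3):
    """Gaussian kernel density estimate of `samples`, evaluated at points x."""
    z = (x[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(samples) * bandwidth * np.sqrt(2 * np.pi))

# Evaluate the KDE once on a coarse grid, then fit a cubic spline to it:
# each test sample then costs only a spline lookup, not a pass over all
# training samples -- the overhead reduction the abstract refers to.
grid = np.linspace(-4.0, 4.0, 81)
density_spline = CubicSpline(grid, gaussian_kde_pdf(grid, train))

test = np.array([0.1, 3.9])              # typical sample vs. far-tail sample
scores = density_spline(test)            # interpolated density = "normality" score
is_anomaly = scores < 0.01               # hypothetical threshold
```

With a standard-normal feature, the in-distribution point keeps a high interpolated density while the far-tail point falls below the threshold and is flagged.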

Findings

The simulation experiment results show that the proposed KDE-LIATD method has relatively strong capability for detecting abnormal traffic from heterogeneous IoT devices.

Originality/value

With its robustness and compatibility, it can effectively detect abnormal traffic of household and personal IoT botnets.

Details

International Journal of Pervasive Computing and Communications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1742-7371

Keywords

Article
Publication date: 29 December 2022

K.V. Sheelavathy and V. Udaya Rani

Abstract

Purpose

The Internet of Things (IoT) is a network that connects various physical objects, such as smart machines and smart home appliances. Each physical object is allocated a unique internet address, namely an Internet Protocol address, which is used to broadcast data to external objects over the internet. The sudden increase in the number of attacks generated by intruders causes security-related problems for IoT devices during communication. The main purpose of this paper is to develop an effective attack detection method to enhance robustness against attackers in IoT.

Design/methodology/approach

In this research, the lasso regression algorithm is proposed along with an ensemble classifier for identifying IoT attacks. The lasso algorithm is used for feature selection, yielding sparse models with fewer parameters; this type of regression is well suited when model selection requires the elimination of parameters. Lasso regression obtains the subset of predictors that lowers the prediction error with respect to the quantitative response variable. Rather than imposing a hard constraint on the model parameters, the lasso shrinks the coefficients of some variables to zero. The selected features are then classified using an ensemble classifier, which combines multiple models to handle both the linear and nonlinear types of data in the dataset.
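A minimal sketch of the two-stage pipeline described above — lasso for feature selection, then an ensemble of a linear and a nonlinear classifier — might look as follows. The synthetic data, regularisation strength and member models are assumptions for illustration, not the authors' configuration.

```python
import numpy as np
from sklearn.linear_model import Lasso, LogisticRegression
from sklearn.ensemble import RandomForestClassifier, VotingClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 10))
# The label depends only on features 0 and 1; the other eight are noise.
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(int)

# Stage 1: lasso shrinks the coefficients of irrelevant features to zero,
# so the surviving (nonzero) coefficients act as a feature selector.
lasso = Lasso(alpha=0.05).fit(X, y)
selected = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)

# Stage 2: an ensemble of a linear and a nonlinear model on the selected
# features, combined by soft (probability-averaged) voting.
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=50, random_state=0))],
    voting="soft",
).fit(X[:, selected], y)
train_acc = ensemble.score(X[:, selected], y)
```

On this toy data the lasso retains the two informative features, and the ensemble fits the selected subset with high accuracy.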

Findings

The lasso regression with ensemble classifier-based attack classification, covering distributed denial-of-service and Mirai botnet attacks, achieved an improved accuracy of 99.981% over conventional deep neural network (DNN) methods.

Originality/value

Here, an efficient lasso regression algorithm is developed for extracting features to perform network anomaly detection with an ensemble classifier.

Details

International Journal of Pervasive Computing and Communications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1742-7371

Keywords

Article
Publication date: 28 March 2024

Elisa Gonzalez Santacruz, David Romero, Julieta Noguez and Thorsten Wuest

Abstract

Purpose

This research paper aims to analyze the scientific and grey literature on Quality 4.0 and zero-defect manufacturing (ZDM) frameworks to develop an integrated Quality 4.0 framework (IQ4.0F) for quality improvement (QI) based on Six Sigma and machine learning (ML) techniques towards ZDM. The IQ4.0F aims to contribute to the advancement of defect prediction approaches in diverse manufacturing processes. Furthermore, the work enables a comprehensive analysis of process variables influencing product quality with emphasis on the use of supervised and unsupervised ML techniques in Six Sigma’s DMAIC (Define, Measure, Analyze, Improve and Control) cycle stage of “Analyze.”

Design/methodology/approach

The research methodology employed a systematic literature review (SLR) based on PRISMA guidelines to develop the integrated framework, followed by a real industrial case study set in the automotive industry to fulfill the objectives of verifying and validating the proposed IQ4.0F with primary data.

Findings

This research work demonstrates the value of a “stepwise framework” to facilitate a shift from conventional quality management systems (QMSs) to QMSs 4.0. It uses the IDEF0 modeling methodology and Six Sigma’s DMAIC cycle to structure the steps to be followed to adopt the Quality 4.0 paradigm for QI. It also proves the worth of integrating Six Sigma and ML techniques into the “Analyze” stage of the DMAIC cycle for improving defect prediction in manufacturing processes and supporting problem-solving activities for quality managers.

Originality/value

This research paper introduces a first-of-its-kind Quality 4.0 framework – the IQ4.0F. Each step of the IQ4.0F was verified and validated in an original industrial case study set in the automotive industry. It is the first Quality 4.0 framework, according to the SLR conducted, to utilize the principal component analysis technique as a substitute for “Screening Design” in the Design of Experiments phase and K-means clustering technique for multivariable analysis, identifying process parameters that significantly impact product quality. The proposed IQ4.0F not only empowers decision-makers with the knowledge to launch a Quality 4.0 initiative but also provides quality managers with a systematic problem-solving methodology for quality improvement.
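The combination described above — PCA in place of a screening design, followed by K-means clustering for multivariable analysis — can be illustrated roughly as below. The synthetic process data and the numbers of components and clusters are assumptions for illustration, not values from the case study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Hypothetical process data: 8 measured parameters driven by 2 latent factors.
latent = rng.normal(size=(300, 2))
X = latent @ rng.normal(size=(2, 8)) + 0.05 * rng.normal(size=(300, 8))

# PCA in place of a screening design: keep the components that explain
# (almost) all process variation, discarding redundant parameters.
pca = PCA(n_components=2).fit(X)
explained = float(pca.explained_variance_ratio_.sum())

# K-means on the reduced space groups similar operating conditions for
# multivariable analysis of which regimes relate to quality differences.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pca.transform(X))
```

Because the data are driven by two latent factors, two principal components capture nearly all the variance, and the clusters partition the operating conditions in the reduced space.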

Details

The TQM Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1754-2731

Keywords

Article
Publication date: 4 April 2024

Rita Sleiman, Quoc-Thông Nguyen, Sandra Lacaze, Kim-Phuc Tran and Sébastien Thomassey

Abstract

Purpose

We propose a machine learning-based methodology to deal with data collected from a mobile application that asks users their opinion of fashion products. Based on different machine learning techniques, the proposed approach relies on the data value chain principle to enrich raw data into knowledge, insights and learning experience.

Design/methodology/approach

Online interaction and the usage of social media have dramatically altered both consumers’ behaviors and business practices. Companies invest in social media platforms and digital marketing in order to increase their brand awareness and boost their sales. For fashion retailers especially, understanding consumers’ behavior before launching a new collection is crucial to reduce overstock situations. In this study, we aim to help retailers better understand consumers’ differing assessments of newly introduced products.

Findings

By creating new product-related and user-related attributes, the proposed prediction model attains an average accuracy of 70.15% when evaluating the potential success of future products during the design process of the collection. Results showed that by harnessing artificial intelligence techniques, along with social media data and mobile apps, new ways of interacting with clients and understanding their preferences are established.

Practical implications

From a practical point of view, the proposed approach helps businesses better target their marketing campaigns, localize their potential clients and adjust manufactured quantities.

Originality/value

The originality of the proposed approach lies in (1) the implementation of the data value chain principle to enhance the information of raw data collected from mobile apps and improve the prediction model’s performance, and (2) the combination of consumer and product attributes to provide an accurate prediction for new fashion products.

Details

International Journal of Clothing Science and Technology, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0955-6222

Keywords

Article
Publication date: 30 November 2023

Geo Finna Aprilia and Meiryani  

Abstract

Purpose

Given the magnitude of the impact caused by money laundering, the size of the organizations and the many parties involved, this paper aims to explore the methods used to detect money laundering, especially those using technology.

Design/methodology/approach

This research is a literature review of various research sources originating from ProQuest, Emerald, Science Direct and Google Scholar.

Findings

The researchers found that the most widely used methods for detecting money laundering were artificial intelligence, machine learning, data mining and social network analysis.

Research limitations/implications

This research is expected to help the government or institutions such as the police, forensic accountants and investigative auditors in the fight against money laundering. This research is limited to only a few sources, and it is hoped that further research can explore more deeply related to other methods for detecting money laundering.

Originality/value

This paper discusses the methods that are widely used in detecting money laundering.

Details

Journal of Money Laundering Control, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1368-5201

Keywords

Article
Publication date: 28 September 2023

Moh. Riskiyadi

Abstract

Purpose

This study aims to compare machine learning models, datasets and training-testing splits using data mining methods to detect financial statement fraud.

Design/methodology/approach

This study uses a quantitative approach from secondary data on the financial reports of companies listed on the Indonesia Stock Exchange in the last ten years, from 2010 to 2019. Research variables use financial and non-financial variables. Indicators of financial statement fraud are determined based on notes or sanctions from regulators and financial statement restatements with special supervision.

Findings

The findings show that the Extremely Randomized Trees (ERT) model performs better than other machine learning models, the original-sampling dataset performs best among the dataset treatments, and the 80:10 training-testing split performs best among the splitting treatments. The ERT model with an original-sampling dataset and an 80:10 training-testing split is therefore the most appropriate for detecting future financial statement fraud.
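A rough sketch of the reported setup — an Extremely Randomized Trees classifier evaluated with an 80:10 training-testing split — might look like this. The synthetic data, stand-in fraud label and the interpretation of 80:10 (80% train, 10% test, remainder held out) are assumptions, not the authors' dataset or protocol.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 6))                 # stand-in financial indicators
y = (X[:, 0] - X[:, 2] > 0).astype(int)        # stand-in fraud label

# 80:10 split as reported: 80% for training, 10% for testing,
# with the remaining 10% held out entirely.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, train_size=0.8, random_state=0)
X_test, _, y_test, _ = train_test_split(
    X_rest, y_rest, train_size=0.5, random_state=0)

ert = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
test_acc = ert.score(X_test, y_test)
```

ExtraTrees (scikit-learn's implementation of Extremely Randomized Trees) differs from a random forest by drawing split thresholds at random, which often reduces variance at little cost in bias.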

Practical implications

This study can be used by regulators, investors, stakeholders and financial crime experts to add insight into better methods of detecting financial statement fraud.

Originality/value

This study proposes a machine learning model that has not been discussed in previous studies and performs comparisons to obtain the best financial statement fraud detection results. Practitioners and academics can use findings for further research development.

Details

Asian Review of Accounting, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1321-7348

Keywords

Article
Publication date: 15 January 2024

Faris Elghaish, Sandra Matarneh, Essam Abdellatef, Farzad Rahimian, M. Reza Hosseini and Ahmed Farouk Kineber

Abstract

Purpose

Cracks are prevalent signs of pavement distress found on highways globally. The use of artificial intelligence (AI) and deep learning (DL) for crack detection is increasingly considered as an optimal solution. Consequently, this paper introduces a novel, fully connected, optimised convolutional neural network (CNN) model using feature selection algorithms for the purpose of detecting cracks in highway pavements.

Design/methodology/approach

To enhance the accuracy of the CNN model for crack detection, the authors employed a CNN model with fully connected deep learning layers along with several optimisation techniques. Specifically, three optimisation algorithms, namely adaptive moment estimation (ADAM), stochastic gradient descent with momentum (SGDM) and RMSProp, were utilised to fine-tune the CNN model and enhance its overall performance. Subsequently, the authors implemented eight feature selection algorithms to further improve the accuracy of the optimised CNN model. These feature selection techniques were thoughtfully selected and systematically applied to identify the most relevant features contributing to crack detection in the given dataset. Finally, the authors tested the proposed model against seven pre-trained models.
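For readers unfamiliar with the three optimisers named here, their update rules can be sketched on a toy one-dimensional quadratic as below. This is a didactic illustration of SGDM, RMSProp and ADAM themselves, not the authors' CNN training code; the learning rate and hyperparameters are arbitrary.

```python
import numpy as np

def optimise(rule, steps=200, lr=0.1):
    """Minimise the toy loss f(w) = w**2 from w = 5 with a given update rule."""
    w, state = 5.0, {}
    for t in range(1, steps + 1):
        g = 2 * w                              # gradient of w**2
        w = rule(w, g, t, state, lr)
    return w

def sgdm(w, g, t, s, lr, beta=0.9):
    # SGD with momentum: a velocity term accumulates past gradients.
    s["v"] = beta * s.get("v", 0.0) + g
    return w - lr * s["v"]

def rmsprop(w, g, t, s, lr, rho=0.9, eps=1e-8):
    # RMSProp: scale the step by a running RMS of recent gradients.
    s["sq"] = rho * s.get("sq", 0.0) + (1 - rho) * g * g
    return w - lr * g / (np.sqrt(s["sq"]) + eps)

def adam(w, g, t, s, lr, b1=0.9, b2=0.999, eps=1e-8):
    # ADAM: momentum plus RMS scaling, with bias correction for early steps.
    s["m"] = b1 * s.get("m", 0.0) + (1 - b1) * g
    s["v"] = b2 * s.get("v", 0.0) + (1 - b2) * g * g
    m_hat = s["m"] / (1 - b1 ** t)
    v_hat = s["v"] / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps)

final = {name: optimise(rule)
         for name, rule in [("SGDM", sgdm), ("RMSProp", rmsprop), ("ADAM", adam)]}
```

All three rules drive the parameter close to the minimum at zero; on real CNN losses their relative performance differs, which is what the paper's comparison measures.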

Findings

The study's results show that the accuracy of the three optimisers (ADAM, SGDM, and RMSProp) with the five deep learning layers model is 97.4%, 98.2%, and 96.09%, respectively. Following this, eight feature selection algorithms were applied to the five deep learning layers to enhance accuracy, with particle swarm optimisation (PSO) achieving the highest F-score at 98.72. The model was then compared with other pre-trained models and exhibited the highest performance.

Practical implications

With an achieved precision of 98.19% and F-score of 98.72% using PSO, the developed model is highly accurate and effective in detecting and evaluating the condition of cracks in pavements. As a result, the model has the potential to significantly reduce the effort required for crack detection and evaluation.

Originality/value

The proposed method for enhancing CNN model accuracy in crack detection stands out for its unique combination of optimisation algorithms (ADAM, SGDM, and RMSProp) with systematic application of multiple feature selection techniques to identify relevant crack detection features and comparing results with existing pre-trained models.

Details

Smart and Sustainable Built Environment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2046-6099

Keywords

Article
Publication date: 5 March 2024

Sana Ramzan and Mark Lokanan

Abstract

Purpose

This study aims to objectively synthesize the volume of accounting literature on financial statement fraud (FSF) using a systematic literature review research method (SLRRM). This paper analyzes the vast FSF literature based on inclusion and exclusion criteria. These criteria filter articles that are present in the accounting fraud domain and are published in peer-reviewed quality journals based on Australian Business Deans Council (ABDC) journal ranking. Lastly, a reverse search, analyzing the articles' abstracts, further narrows the search to 88 peer-reviewed articles. After examining these 88 articles, the results imply that the current literature is shifting from traditional statistical approaches towards computational methods, specifically machine learning (ML), for predicting and detecting FSF. This evolution of the literature is influenced by the impact of micro and macro variables on FSF and the inadequacy of audit procedures to detect red flags of fraud. The findings also concluded that A* peer-reviewed journals accepted articles that showed a complete picture of performance measures of computational techniques in their results. Therefore, this paper contributes to the literature by providing insights to researchers about why ML articles on fraud do not make it to top accounting journals and which computational techniques are the best algorithms for predicting and detecting FSF.

Design/methodology/approach

This paper chronicles the cluster of narratives surrounding the inadequacy of current accounting and auditing practices in preventing and detecting Financial Statement Fraud. The primary objective of this study is to objectively synthesize the volume of accounting literature on financial statement fraud. More specifically, this study will conduct a systematic literature review (SLR) to examine the evolution of financial statement fraud research and the emergence of new computational techniques to detect fraud in the accounting and finance literature.

Findings

The storyline of this study illustrates how the literature has evolved from conventional fraud detection mechanisms to computational techniques such as artificial intelligence (AI) and machine learning (ML). The findings also concluded that A* peer-reviewed journals accepted articles that showed a complete picture of performance measures of computational techniques in their results. Therefore, this paper contributes to the literature by providing insights to researchers about why ML articles on fraud do not make it to top accounting journals and which computational techniques are the best algorithms for predicting and detecting FSF.

Originality/value

This paper contributes to the literature by providing researchers with insights into why the accounting fraud literature has evolved from traditional statistical methods to machine learning algorithms for fraud detection and prediction.

Details

Journal of Accounting Literature, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-4607

Keywords

Article
Publication date: 22 April 2022

Sreedhar Jyothi and Geetanjali Nelloru

Abstract

Purpose

Patients with ventricular arrhythmias and atrial fibrillation, which are early markers of stroke and sudden cardiac death, as well as benign subjects, are all studied using the electrocardiogram (ECG). To identify cardiac anomalies, ECG analysis examines the heart's electrical activity, which is displayed as waveforms. Patients with these disorders must be identified as soon as possible, yet manual inspection of ECG signals can be difficult, time-consuming and subject to inter-observer variability.

Design/methodology/approach

There are various forms of arrhythmias that are difficult to distinguish in complicated non-linear ECG data, so it may be beneficial to use computer-aided decision support systems (CAD). Using CAD systems, which employ machine learning algorithms to identify the tiny changes in cardiac rhythms, arrhythmias can be classified in a rapid, accurate, repeatable and objective manner, and cardiac infarctions can be classified and detected with this method. The authors' primary objective is to categorize arrhythmias with more accurate findings in even less computational time. Using signal and axis characteristics and their association n-grams as features, this paper makes a significant addition to the field. An experimental investigation was conducted with a benchmark dataset as input to multi-label multi-fold cross-validation.
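The multi-label multi-fold cross-validation protocol mentioned here can be sketched generically as follows. The synthetic features, the two example labels and the base classifier are placeholders, not the authors' dataset or model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 12))      # stand-in for signal/axis n-gram features
# Two binary labels per record, e.g. "ventricular arrhythmia" and
# "atrial fibrillation", which can each be present or absent.
Y = np.column_stack([X[:, 0] > 0, X[:, 1] + X[:, 2] > 0]).astype(int)

clf = MultiOutputClassifier(RandomForestClassifier(n_estimators=50, random_state=0))
fold_scores = []
for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    clf.fit(X[tr], Y[tr])
    # label-wise accuracy, averaged over both labels in this fold
    fold_scores.append(float((clf.predict(X[te]) == Y[te]).mean()))
mean_acc = float(np.mean(fold_scores))
```

Averaging label-wise accuracy over the folds gives a single cross-validated metric that can be weighed against other models, as the Findings section describes.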

Findings

This dataset was used as input for cross-validation on contemporary models, and the resulting cross-validation metrics were weighed against the performance metrics of those models. The suggested model's high sensitivity and specificity produce few false alarms.

Originality/value

The results of cross validation are significant. In terms of specificity, sensitivity, and decision accuracy, the proposed model outperforms other contemporary models.

Details

International Journal of Intelligent Unmanned Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2049-6427

Keywords

Article
Publication date: 16 April 2024

Liezl Smith and Christiaan Lamprecht

Abstract

Purpose

In a virtual interconnected digital space, the metaverse encompasses various virtual environments where people can interact, including engaging in business activities. Machine learning (ML) is a strategic technology that enables digital transformation to the metaverse, and it is becoming a more prevalent driver of business performance and reporting on performance. However, ML has limitations, and using the technology in business processes, such as accounting, poses a technology governance failure risk. To address this risk, decision makers and those tasked to govern these technologies must understand where the technology fits into the business process and consider its limitations to enable a governed transition to the metaverse. Using selected accounting processes, this study aims to describe the limitations that ML techniques pose to ensure the quality of financial information.

Design/methodology/approach

A grounded theory literature review method, consisting of five iterative stages, was used to identify the accounting tasks that ML could perform in the respective accounting processes, describe the ML techniques that could be applied to each accounting task and identify the limitations associated with the individual techniques.

Findings

This study finds that limitations such as data availability and training time may impact the quality of the financial information and that ML techniques and their limitations must be clearly understood when developing and implementing technology governance measures.

Originality/value

The study contributes to the growing literature on enterprise information and technology management and governance. In this study, the authors integrated current ML knowledge into an accounting context. As accounting is a pervasive aspect of business, the insights from this study will benefit decision makers and those tasked to govern these technologies to understand how some processes are more likely to be affected by certain limitations and how this may impact the accounting objectives. It will also benefit those users hoping to exploit the advantages of ML in their accounting processes while understanding the specific technology limitations on an accounting task level.

Details

Journal of Financial Reporting and Accounting, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1985-2517

Keywords
