Search results

1 – 4 of 4
Article
Publication date: 6 May 2024

Ahmed Taibi, Said Touati, Lyes Aomar and Nabil Ikhlef

Abstract

Purpose

Bearings play a critical role in the reliable operation of induction machines, and their failure can lead to significant operational challenges and downtime. Detecting and diagnosing these defects is imperative to ensure the longevity of induction machines and to prevent costly downtime. The purpose of this paper is to develop a novel approach for the diagnosis of bearing faults in induction machines.

Design/methodology/approach

To identify the different fault states of the bearing accurately and efficiently, this paper first decomposes the original bearing vibration signal into several intrinsic mode functions (IMFs) using variational mode decomposition (VMD). The IMFs that contain more noise are selected using the Pearson correlation coefficient, and the discrete wavelet transform (DWT) is then used to filter these noisy IMFs. Second, the composite multiscale weighted permutation entropy (CMWPE) of each component is calculated to form the feature vector. Finally, the feature vector is reduced using the locality-sensitive discriminant analysis algorithm and fed into a support vector machine model for training and classification.
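Two steps of this pipeline can be sketched in plain numpy: Pearson-correlation-based IMF selection and a single-scale weighted permutation entropy (the full CMWPE additionally coarse-grains the signal over several time scales before computing the entropy). This is a minimal illustration, not the authors' implementation; the 0.3 correlation threshold and the variance-based weighting are assumptions for the sketch.

```python
import numpy as np
from math import factorial

def select_imfs(signal, imfs, threshold=0.3):
    """Keep IMFs whose Pearson correlation with the raw signal
    exceeds a threshold (the 0.3 value is an illustrative assumption)."""
    kept = []
    for imf in imfs:
        r = np.corrcoef(signal, imf)[0, 1]
        if abs(r) >= threshold:
            kept.append(imf)
    return kept

def weighted_permutation_entropy(x, m=3, tau=1):
    """Single-scale weighted permutation entropy of a 1-D signal,
    normalized to [0, 1] by log(m!)."""
    n = len(x) - (m - 1) * tau
    pattern_weights = {}
    total_weight = 0.0
    for i in range(n):
        window = x[i:i + m * tau:tau]
        pattern = tuple(np.argsort(window))   # ordinal pattern of the window
        w = np.var(window)                    # variance-based weight
        pattern_weights[pattern] = pattern_weights.get(pattern, 0.0) + w
        total_weight += w
    probs = np.array([w / total_weight for w in pattern_weights.values()])
    probs = probs[probs > 0]
    return -np.sum(probs * np.log(probs)) / np.log(factorial(m))
```

For white noise the ordinal patterns are near-uniform, so the normalized entropy approaches 1; a strongly periodic fault signature concentrates the pattern distribution and lowers it, which is what makes the entropy usable as a fault feature.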

Findings

The results showed the ability of the VMD-DWT algorithm to reduce the noise of raw vibration signals and demonstrated that the proposed method can effectively extract different fault features from vibration signals.

Originality/value

This study proposes a new VMD-DWT method to reduce the noise of the bearing vibration signal. The proposed approach to bearing fault diagnosis in induction machines, based on VMD-DWT and CMWPE, is highly effective; its effectiveness has been verified using experimental data.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0332-1649


Open Access
Article
Publication date: 26 April 2024

Adela Sobotkova, Ross Deans Kristensen-McLachlan, Orla Mallon and Shawn Adrian Ross

Abstract

Purpose

This paper provides practical advice for archaeologists and heritage specialists wishing to use ML approaches to identify archaeological features in high-resolution satellite imagery (or other remotely sensed data sources). We seek to balance the disproportionately optimistic literature related to the application of ML to archaeological prospection through a discussion of limitations, challenges and other difficulties. We further seek to raise awareness among researchers of the time, effort, expertise and resources necessary to implement ML successfully, so that they can make an informed choice between ML and manual inspection approaches.

Design/methodology/approach

Automated object detection has been the holy grail of archaeological remote sensing for the last two decades. Machine learning (ML) models have proven able to detect uniform features across a consistent background, but more variegated imagery remains a challenge. We set out to detect burial mounds in satellite imagery from a diverse landscape in Central Bulgaria using a pre-trained Convolutional Neural Network (CNN) plus additional but low-touch training to improve performance. Training was accomplished using MOUND/NOT MOUND cutouts, and the model assessed arbitrary tiles of the same size from the image. Results were assessed using field data.

Findings

Validation of results against field data showed that self-reported success rates were misleadingly high, and that the model was misidentifying most features. Setting an identification threshold at 60% probability, and noting that we used an approach where the CNN assessed tiles of a fixed size, tile-based false negative rates were 95–96%, false positive rates were 87–95% of tagged tiles, while true positives were only 5–13%. Counterintuitively, the model provided with training data selected for highly visible mounds (rather than all mounds) performed worse. Development of the model, meanwhile, required approximately 135 person-hours of work.
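The tile-based error rates above come down to a confusion-matrix count over fixed-size tiles, thresholded at 60% predicted probability. A minimal sketch, assuming hypothetical inputs: per-tile mound probabilities from the CNN and per-tile ground truth from the field survey:

```python
import numpy as np

def tile_confusion(probs, truth, threshold=0.60):
    """Tile-level confusion counts for mound detection.

    probs: predicted mound probability per tile (CNN output)
    truth: 1 if the tile actually contains a mound (field data), else 0
    """
    pred = probs >= threshold          # apply the identification threshold
    tp = int(np.sum(pred & (truth == 1)))   # true positives
    fp = int(np.sum(pred & (truth == 0)))   # false positives
    fn = int(np.sum(~pred & (truth == 1)))  # false negatives (missed mounds)
    tn = int(np.sum(~pred & (truth == 0)))  # true negatives
    return tp, fp, fn, tn
```

From these counts, the false negative rate reported per mound tile is fn / (fn + tp); computing it against independent field data, rather than against the model's own validation split, is what exposed the gap between self-reported and actual performance.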

Research limitations/implications

Our attempt to deploy a pre-trained CNN demonstrates the limitations of this approach when it is used to detect varied features of different sizes within a heterogeneous landscape that contains confounding natural and modern features, such as roads, forests and field boundaries. The model has detected incidental features rather than the mounds themselves, making external validation with field data an essential part of CNN workflows. Correcting the model would require refining the training data as well as adopting different approaches to model choice and execution, raising the computational requirements beyond the level of most cultural heritage practitioners.

Practical implications

Improving the pre-trained model’s performance would require considerable time and resources, on top of the time already invested. The degree of manual intervention required – particularly around the subsetting and annotation of training data – is so significant that it raises the question of whether it would be more efficient to identify all of the mounds manually, either through brute-force inspection by experts or by crowdsourcing the analysis to trained – or even untrained – volunteers. Researchers and heritage specialists seeking efficient methods for extracting features from remotely sensed data should weigh the costs and benefits of ML versus manual approaches carefully.

Social implications

Our literature review indicates that use of artificial intelligence (AI) and ML approaches to archaeological prospection have grown exponentially in the past decade, approaching adoption levels associated with “crossing the chasm” from innovators and early adopters to the majority of researchers. The literature itself, however, is overwhelmingly positive, reflecting some combination of publication bias and a rhetoric of unconditional success. This paper presents the failure of a good-faith attempt to utilise these approaches as a counterbalance and cautionary tale to potential adopters of the technology. Early-majority adopters may find ML difficult to implement effectively in real-life scenarios.

Originality/value

Unlike many high-profile reports from well-funded projects, our paper represents a serious but modestly resourced attempt to apply an ML approach to archaeological remote sensing, using techniques like transfer learning that are promoted as solutions to time and cost problems associated with, e.g. annotating and manipulating training data. While the majority of articles uncritically promote ML, or only discuss how challenges were overcome, our paper investigates how – despite reasonable self-reported scores – the model failed to locate the target features when compared to field data. We also present time, expertise and resourcing requirements, a rarity in ML-for-archaeology publications.

Details

Journal of Documentation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0022-0418


Article
Publication date: 5 April 2024

Liyi Zhang, Mingyue Fu, Teng Fei, Ming K. Lim and Ming-Lang Tseng

Abstract

Purpose

This study reduces carbon emissions in logistics distribution to realize low-carbon site optimization for the cold chain logistics distribution center problem.

Design/methodology/approach

This study considers cooling costs, commodity damage and carbon emissions, and establishes a site-selection model for a low-carbon cold chain logistics distribution center that minimizes total cost. The grey wolf optimization algorithm is used to improve the artificial fish swarm algorithm to solve the resulting site-selection problem.
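The grey wolf optimizer's core mechanism can be sketched independently of the site-selection model: the three fittest wolves (alpha, beta, delta) steer the rest of the pack, with an exploration coefficient that decays over iterations. The sketch below minimizes a generic cost function; the population size, iteration count and bounds are illustrative assumptions, and the paper's actual hybrid (GWO improving the artificial fish swarm algorithm) is not reproduced here.

```python
import numpy as np

def grey_wolf_optimize(cost, dim, n_wolves=20, iters=200,
                       lo=-10.0, hi=10.0, seed=0):
    """Minimal grey wolf optimizer: minimizes cost over [lo, hi]^dim."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n_wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(cost, 1, X)
        order = np.argsort(fitness)
        # alpha, beta, delta: the three best wolves (copied so the
        # in-place updates below do not shift the leaders mid-iteration)
        leaders = [X[order[k]].copy() for k in range(3)]
        a = 2.0 - 2.0 * t / iters          # linearly decreasing coefficient
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in leaders:
                A = a * (2 * rng.random(dim) - 1)   # exploration/exploitation
                C = 2 * rng.random(dim)
                D = np.abs(C * leader - X[i])       # distance to the leader
                new_pos += leader - A * D
            X[i] = np.clip(new_pos / 3.0, lo, hi)   # average of the three pulls
    fitness = np.apply_along_axis(cost, 1, X)
    return X[np.argmin(fitness)], float(fitness.min())
```

In the paper's setting, `cost` would be the total-cost objective combining transport, cooling, commodity damage and monetized carbon emissions, with candidate distribution-center coordinates as the decision variables.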

Findings

Compared with other intelligent algorithms, the improved algorithm shows significantly better optimization results and stability. The result is confirmed using a site-selection case in the Beijing-Tianjin-Hebei region. The approach reduces the composite cost of cold chain logistics and the damage to the environment, providing a new idea for developing cold chain logistics.

Originality/value

This study contributes an optimization model for low-carbon cold chain logistics site selection that considers the various factors affecting cold chain products and converts carbon emissions into costs. Prior studies have largely failed to take carbon emissions in the logistics process into account, even though low-carbon development is the main trend of the current economy and logistics distribution is energy-intensive and a source of high carbon emissions.

Details

Industrial Management & Data Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0263-5577


Article
Publication date: 12 April 2024

Ahmad Honarjoo and Ehsan Darvishan

Abstract

Purpose

This study aims to develop methods for identifying and locating structural damage, a long-standing topic in structural engineering. The cost of repairing and rehabilitating massive bridges and buildings is very high, highlighting the need to monitor structures continuously. One way to track a structure's health is to check for cracks in the concrete; however, current methods of concrete crack detection involve complex and heavy calculations.

Design/methodology/approach

This paper presents a new lightweight architecture based on deep learning for crack classification in concrete structures. The proposed architecture detects and classifies cracks in less time and with higher accuracy than other established crack-detection architectures. A standard dataset was used for both two-class and multi-class crack detection.
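The abstract does not specify the architecture, but the building blocks of any such lightweight classifier are the same: convolution, a nonlinearity, pooling to collapse spatial dimensions and a softmax over the crack classes. A from-scratch numpy illustration of that forward pass (the kernel sizes, feature count and two-class head are assumptions, not the paper's design):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation on a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def tiny_crack_classifier(img, kernels, weights):
    """Conv -> ReLU -> global average pool -> linear -> softmax.

    kernels: list of 2-D filters; weights: (n_classes, n_kernels) head.
    Returns class probabilities (e.g. crack vs. no-crack).
    """
    feats = np.array([np.maximum(conv2d(img, k), 0).mean() for k in kernels])
    logits = weights @ feats
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    return exp / exp.sum()
```

Keeping the filter count and depth small is what trades a little accuracy for the low execution time the Findings section reports; a multi-class variant only changes the number of rows in `weights`.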

Findings

Results show that the proposed method achieved 99.53% accuracy on two-class images and 91% accuracy on multi-class images. The proposed architecture also had a lower execution time than other established deep learning architectures on the same hardware platform, and the Adam optimizer performed better than other optimizers in this research.

Originality/value

This paper presents a framework based on a lightweight convolutional neural network for nondestructive structural health monitoring that optimizes computational cost and reduces execution time.

Details

International Journal of Structural Integrity, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1757-9864

