Search results

1 – 10 of 928
Article
Publication date: 15 January 2024

Faris Elghaish, Sandra Matarneh, Essam Abdellatef, Farzad Rahimian, M. Reza Hosseini and Ahmed Farouk Kineber

Abstract

Purpose

Cracks are prevalent signs of pavement distress found on highways globally. The use of artificial intelligence (AI) and deep learning (DL) for crack detection is increasingly considered an optimal solution. Consequently, this paper introduces a novel, fully connected, optimised convolutional neural network (CNN) model that uses feature selection algorithms to detect cracks in highway pavements.

Design/methodology/approach

To enhance the accuracy of the CNN model for crack detection, the authors employed a CNN model with fully connected deep learning layers along with several optimisation techniques. Specifically, three optimisation algorithms, namely adaptive moment estimation (ADAM), stochastic gradient descent with momentum (SGDM) and RMSProp, were used to fine-tune the CNN model and enhance its overall performance. Subsequently, the authors implemented eight feature selection algorithms to further improve the accuracy of the optimised CNN model. These feature selection techniques were systematically applied to identify the features most relevant to crack detection in the given dataset. Finally, the authors tested the proposed model against seven pre-trained models.
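The three optimiser update rules named above can be sketched in isolation. The following toy comparison minimises a one-dimensional quadratic with each rule; all hyperparameter values are illustrative assumptions, not settings taken from the paper:

```python
import numpy as np

def grad(w):
    return 2.0 * w  # gradient of the toy loss f(w) = w^2

def sgdm(w, steps=150, lr=0.1, mu=0.9):
    # stochastic gradient descent with momentum (SGDM)
    v = 0.0
    for _ in range(steps):
        v = mu * v - lr * grad(w)
        w += v
    return w

def rmsprop(w, steps=150, lr=0.05, rho=0.9, eps=1e-8):
    # RMSProp: scale each step by a running RMS of past gradients
    s = 0.0
    for _ in range(steps):
        g = grad(w)
        s = rho * s + (1 - rho) * g * g
        w -= lr * g / (np.sqrt(s) + eps)
    return w

def adam(w, steps=150, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    # ADAM: bias-corrected first and second moment estimates
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(w)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        w -= lr * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)
    return w

for name, opt in [("SGDM", sgdm), ("RMSProp", rmsprop), ("ADAM", adam)]:
    print(name, round(float(opt(5.0)), 4))
```

All three drive the parameter from 5.0 toward the optimum at 0; fine-tuning a CNN swaps the toy gradient for backpropagated gradients but leaves these update rules unchanged.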

Findings

The study's results show that the accuracy of the five deep learning layers model with the three optimisers (ADAM, SGDM and RMSProp) is 97.4%, 98.2% and 96.09%, respectively. Following this, eight feature selection algorithms were applied to the five-layer model to enhance accuracy, with particle swarm optimisation (PSO) achieving the highest F-score at 98.72%. The model was then compared with other pre-trained models and exhibited the highest performance.
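As an illustration of PSO-driven feature selection of the kind reported above, here is a minimal binary PSO sketch on synthetic data. The fitness function (nearest-centroid accuracy with a sparsity penalty), the swarm constants and the data are all our assumptions, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples, 10 features, only the first 3 carry class signal.
n, d, d_true = 200, 10, 3
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, :d_true] += 2.0 * y[:, None]          # make the first 3 features informative

def fitness(mask):
    # nearest-centroid accuracy on the selected features,
    # minus a small penalty per selected feature to favour compact subsets
    sel = mask.astype(bool)
    if not sel.any():
        return 0.0
    Xs = X[:, sel]
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
    return (pred == y).mean() - 0.01 * sel.sum()

# Binary PSO: real-valued velocities, squashed by a sigmoid into
# per-feature selection probabilities.
n_particles, iters = 20, 30
vel = rng.normal(scale=0.1, size=(n_particles, d))
pos = (rng.random((n_particles, d)) < 0.5).astype(int)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, d))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = (rng.random((n_particles, d)) < 1.0 / (1.0 + np.exp(-vel))).astype(int)
    fit = np.array([fitness(p) for p in pos])
    better = fit > pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected features:", np.flatnonzero(gbest))
```

In a real pipeline the fitness would be the validation score of the CNN on the candidate feature subset rather than this toy surrogate.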

Practical implications

With an achieved precision of 98.19% and F-score of 98.72% using PSO, the developed model is highly accurate and effective in detecting and evaluating the condition of cracks in pavements. As a result, the model has the potential to significantly reduce the effort required for crack detection and evaluation.

Originality/value

The proposed method for enhancing CNN model accuracy in crack detection stands out for its unique combination of optimisation algorithms (ADAM, SGDM and RMSProp) with the systematic application of multiple feature selection techniques to identify relevant crack detection features, and for comparing the results with existing pre-trained models.

Details

Smart and Sustainable Built Environment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2046-6099

Article
Publication date: 28 December 2023

Ankang Ji, Xiaolong Xue, Limao Zhang, Xiaowei Luo and Qingpeng Man

Abstract

Purpose

Crack detection is a critical task in periodic pavement surveys. Efficient, effective and consistent tracking of road conditions by identifying and locating cracks helps promptly informed managers establish an appropriate road maintenance and repair strategy, but it remains a significant challenge. This research seeks to propose practical solutions for automatic crack detection from images with efficient productivity and cost-effectiveness, thereby improving pavement performance.

Design/methodology/approach

This research applies a novel deep learning method named TransUnet for crack detection, which combines a Transformer with convolutional neural networks in the encoder, leveraging a global self-attention mechanism to better extract features and enhance automatic identification. Afterward, the detected cracks are quantified through five morphological indicators: length, mean width, maximum width, area and ratio. These analyses can provide valuable information for engineers to assess pavement condition with efficient productivity.
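The five morphological indicators named above can be roughly computed from a binary crack mask with plain numpy. This sketch uses a bounding-box diagonal as a crude length proxy and a row/column run count for maximum width; both are our simplifying assumptions, and the paper's exact measurement procedure may differ:

```python
import numpy as np

def crack_morphology(mask):
    # mask: 2-D array, 1 = crack pixel, 0 = background
    ys, xs = np.nonzero(mask)
    area = float(mask.sum())                       # crack area in pixels
    h = ys.max() - ys.min() + 1                    # vertical extent
    w = xs.max() - xs.min() + 1                    # horizontal extent
    length = float(np.hypot(h, w))                 # bounding-box diagonal proxy
    mean_width = area / length                     # area spread over length
    # widest cross-section, approximated as the smaller of the longest
    # per-column and per-row runs of crack pixels
    max_width = float(min(mask.sum(axis=0).max(), mask.sum(axis=1).max()))
    ratio = area / mask.size                       # crack area / image area
    return {"length": length, "mean_width": mean_width,
            "max_width": max_width, "area": area, "ratio": ratio}

mask = np.zeros((100, 100), dtype=np.uint8)
mask[48:52, 10:90] = 1                             # synthetic horizontal crack
print(crack_morphology(mask))
```

A production system would measure length along the crack skeleton rather than a bounding box, but the indicator set is the same.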

Findings

In the training process, TransUnet is fed a crack dataset generated by data augmentation, with a resolution of 224 × 224 pixels. Subsequently, a test set containing 80 new images is used for the crack detection task with the best selected TransUnet (learning rate 0.01, batch size 1), achieving an accuracy of 0.8927, a precision of 0.8813, a recall of 0.8904, an F1-measure and Dice score of 0.8813 and a mean intersection over union of 0.8082. Comparisons with several state-of-the-art methods indicate that the developed approach outperforms them with greater efficiency and higher reliability.

Originality/value

The developed approach combines TransUnet with an integrated quantification algorithm for crack detection and quantification, performs excellently across comparisons and evaluation metrics, and can potentially serve as the basis for an automated, cost-effective pavement condition assessment scheme.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 10 June 2022

Yasser Alharbi

Abstract

Purpose

This strategy significantly reduces the computational and storage overhead required when using the kernel density estimation method to calculate the anomaly evaluation value of a test sample.

Design/methodology/approach

To effectively deal with the security threats that botnets pose to home and personal Internet of Things (IoT) devices, and in particular the problem of insufficient resources for anomaly detection in the home environment, this paper proposes a novel federated learning-based lightweight IoT anomaly traffic detection method built on kernel density estimation (KDE-LIATD). First, the KDE-LIATD method uses Gaussian kernel density estimation to estimate, for every normal sample in the training set, the probability density function of each dimensional feature value and the corresponding probability density. Then, a feature selection algorithm based on kernel density estimation selects the features that make outstanding contributions to anomaly detection, reducing the feature dimension while improving detection accuracy. Finally, the anomaly evaluation value of a test sample is calculated by cubic spline interpolation and anomaly detection is performed.
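The scoring pipeline described above (per-feature Gaussian KDE on normal traffic, then interpolated density lookups for test samples) can be sketched in a few lines of numpy. Linear interpolation via `np.interp` stands in here for the paper's cubic spline, and the bandwidth, threshold convention and synthetic data are all assumptions:

```python
import numpy as np

def gaussian_kde_1d(train_vals, grid, bandwidth=0.3):
    # density(g) = average of Gaussian kernels centred on the training values
    diffs = (grid[:, None] - train_vals[None, :]) / bandwidth
    return np.exp(-0.5 * diffs ** 2).mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, size=(500, 4))      # normal traffic, 4 features

# Precompute each per-feature density on a grid; test-time scoring then only
# needs cheap interpolation, which is the overhead reduction the paper targets.
grid = np.linspace(-6, 6, 241)
densities = [gaussian_kde_1d(normal[:, j], grid) for j in range(normal.shape[1])]

def anomaly_score(x):
    # low average per-feature density => more anomalous
    dens = [np.interp(x[j], grid, densities[j]) for j in range(len(x))]
    return -np.mean(np.log(np.array(dens) + 1e-12))

print("typical sample:", round(float(anomaly_score(np.zeros(4))), 3))
print("outlier sample:", round(float(anomaly_score(np.full(4, 5.0))), 3))
```

A sample whose features fall where the training density is high gets a low score; an outlier far from the training mass gets a much higher one, and a threshold on this score yields the detection decision.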

Findings

The simulation experiment results show that the proposed KDE-LIATD method is relatively strong at detecting abnormal traffic from heterogeneous IoT devices.

Originality/value

With its robustness and compatibility, it can effectively detect abnormal traffic of household and personal IoT botnets.

Details

International Journal of Pervasive Computing and Communications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 8 January 2024

Na Ye, Dingguo Yu, Xiaoyu Ma, Yijie Zhou and Yanqin Yan

Abstract

Purpose

Fake news in cyberspace has greatly interfered with national governance, economic development and cultural communication, which has greatly increased the demand for fake news detection and intervention. At present, recognition methods based on news content all lose part of the information to varying degrees. This paper proposes a lightweight content-based detection method to achieve early identification of false information at low computational cost.

Design/methodology/approach

The authors' research proposes a lightweight fake news detection framework for English text, including a new textual feature extraction method: English text and symbols are mapped to 0–255 using American Standard Code for Information Interchange (ASCII) codes, the resulting number sequence is treated as the pixel values of an image, and a computer vision model performs detection on those images. The authors also compare the framework with traditional word2vec, GloVe, bidirectional encoder representations from transformers (BERT) and other methods.
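The text-to-pixels mapping described above is simple enough to sketch directly. The 224 × 224 single-channel canvas matches common CNN input sizes, and zero-padding short texts (and truncating long ones) is our assumption about how the sequence is fitted to the canvas:

```python
import numpy as np

def text_to_image(text, size=224):
    # map each character to its 0-255 ASCII code ('?' replaces non-ASCII)
    codes = np.frombuffer(text.encode("ascii", errors="replace"), dtype=np.uint8)
    canvas = np.zeros(size * size, dtype=np.uint8)   # zero-pad short texts
    n = min(len(codes), size * size)
    canvas[:n] = codes[:n]                           # truncate overly long texts
    return canvas.reshape(size, size)                # lay out as image pixels

img = text_to_image("Breaking: scientists confirm the moon is made of cheese.")
print(img.shape, img.dtype)
```

The resulting array can be fed to any image classifier (e.g. GhostNet or ShuffleNet, as in the paper's experiments) exactly like a grayscale picture.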

Findings

The authors conduct experiments on the lightweight neural networks GhostNet and ShuffleNet, and the experimental results show that the proposed framework outperforms the baseline in accuracy on both lightweight networks.

Originality/value

The authors' method does not rely on additional information beyond the text data and can efficiently perform the fake news detection task with less computational resource consumption. In addition, the framework's feature extraction method is relatively new and enlightening for text content-based classification detection, enabling fake news to be detected in time at the early stage of its propagation.

Details

Online Information Review, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 28 February 2023

Sandra Matarneh, Faris Elghaish, Amani Al-Ghraibah, Essam Abdellatef and David John Edwards

Abstract

Purpose

Incipient detection of pavement deterioration (such as crack identification) is critical to optimizing road maintenance because it enables preventative steps to be implemented to mitigate damage and possible failure. Traditional visual inspection has been largely superseded by semi-automatic/automatic procedures given significant advancements in image processing. Therefore, there is a need to develop automated tools to detect and classify cracks.

Design/methodology/approach

A literature review is employed to evaluate existing attempts to use the Hough transform algorithm and to highlight issues that should be improved. A simple, low-cost method based on the Hough transform algorithm is then developed for pavement crack detection and classification.
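The Hough transform approach to classifying crack orientation can be sketched with a minimal numpy accumulator: edge pixels vote for line parameters (rho, theta), and the dominant theta decides whether the crack is vertical, horizontal or diagonal. The angle bins used for the three classes are our assumption, not the paper's thresholds:

```python
import numpy as np

def hough_crack_orientation(mask):
    # vote in (rho, theta) space, where rho = x*cos(theta) + y*sin(theta)
    ys, xs = np.nonzero(mask)
    thetas = np.deg2rad(np.arange(180))
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    diag = int(np.ceil(np.hypot(*mask.shape)))
    acc = np.zeros((2 * diag + 1, 180), dtype=int)
    for x, y in zip(xs, ys):
        rhos = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[rhos, np.arange(180)] += 1
    # the most-voted theta is the dominant line angle in the image
    theta_deg = int(np.unravel_index(acc.argmax(), acc.shape)[1])
    if theta_deg < 20 or theta_deg > 160:
        return "vertical"        # near-vertical image line (theta ~ 0 or 180)
    if 70 <= theta_deg <= 110:
        return "horizontal"      # near-horizontal image line (theta ~ 90)
    return "diagonal"

crack = np.zeros((50, 50), dtype=bool)
crack[5:45, 25] = True           # synthetic vertical crack at x = 25
print(hough_crack_orientation(crack))
```

In practice the binary mask would come from thresholding and edge detection on a pavement photograph; the voting and classification steps are unchanged.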

Findings

Analysis results reveal that model accuracy reaches 92.14% for vertical cracks, 93.03% for diagonal cracks and 95.61% for horizontal cracks. The time lapse for detecting the crack type in one image is circa 0.98 s for vertical cracks, 0.79 s for horizontal cracks and 0.83 s for diagonal cracks. The ensuing discourse serves to illustrate the inherent potential of a simple, low-cost image processing method in automated pavement crack detection. Moreover, this method provides direct guidance for long-term optimal pavement maintenance decisions.

Research limitations/implications

The outcome of this research can help highway agencies detect and classify cracks accurately along very long highways without the need for manual inspection, which can significantly reduce cost.

Originality/value

The Hough transform algorithm was tested on detecting and classifying a large dataset of highway images, reaching an accuracy of 92.14%, which can be considered very accurate for automated crack and distress classification.

Details

Smart and Sustainable Built Environment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2046-6099

Article
Publication date: 12 April 2024

Ahmad Honarjoo and Ehsan Darvishan

Abstract

Purpose

This study aims to obtain methods to identify and locate damage, a topic that has always been discussed in structural engineering. The cost of repairing and rehabilitating massive bridges and buildings is very high, highlighting the need to monitor structures continuously. One way to track a structure's health is to check the cracks in the concrete. Meanwhile, current methods of concrete crack detection involve complex and heavy calculations.

Design/methodology/approach

This paper presents a new lightweight deep learning architecture for crack classification in concrete structures. The proposed architecture identifies and classifies cracks in less time and with higher accuracy than other established architectures for crack detection. A standard dataset was used to detect both two-class and multi-class cracks.

Findings

Results show that two-class images were recognized with 99.53% accuracy by the proposed method, and multi-class images were classified with 91% accuracy. The proposed architecture also has a lower execution time than other established deep learning architectures on the same hardware platform. The Adam optimizer performed better than other optimizers in this research.

Originality/value

This paper presents a framework based on a lightweight convolutional neural network for nondestructive monitoring of structural health to optimize the calculation costs and reduce execution time in processing.

Details

International Journal of Structural Integrity, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1757-9864

Article
Publication date: 25 March 2024

Hongxiao Yu, Haemoon Oh and Kuo-Ching Wang

Abstract

Purpose

This study aims to examine the underlying emotional process that explains how context-specific stimuli involved in virtual reality (VR) destinations translate into presence perceptions and behavioral intentions.

Design/methodology/approach

In total, 403 potential tourists participated in a self-administered online survey after they watched a randomly assigned VR tour. The Lavaan package in R software was used to conduct structural equation analysis and examine the proposed theoretical framework.

Findings

The results reveal that media content consisting of informativeness, aesthetics and novelty was positively related to users’ sense of presence in a VR tour. The effect of media content on presence was partially mediated by emotional arousal.

Practical implications

Managers and VR designers can create an emotive virtual tour that contributes to the user’s sense of presence to promote attraction to the target destination. The VR content needs to be informative, aesthetic and novel, which can excite users during the VR tour, portray virtual destinations clearly and eventually influence potential tourists’ visit intentions.

Originality/value

Research on the emotional mechanism to generate presence is still in its infancy. This study integrates presence theory into a conceptual framework to explore how media content influences presence and decision-making through the emotional mechanism.

Details

International Journal of Contemporary Hospitality Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0959-6119

Article
Publication date: 8 April 2024

Hu Luo, Haobin Ruan and Dawei Tu

Abstract

Purpose

The purpose of this paper is to propose a whole set of methods for underwater target detection, because most underwater objects have small samples and underwater images suffer from quality problems such as detail loss, low contrast and color distortion, and to verify the feasibility of the proposed methods through experiments.

Design/methodology/approach

An improved RGHS algorithm is proposed to enhance the original underwater target images. The YOLOv4 deep learning network is then improved for underwater small-sample target detection by combining a traditional data expansion method with the Mosaic algorithm, and the feature extraction capability is expanded with a Spatial Pyramid Pooling (SPP) module after each feature extraction layer to extract richer feature information.
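The SPP module mentioned above can be sketched as the YOLOv4-style variant that concatenates stride-1 max pools of several kernel sizes with the input along the channel axis. The kernel sizes (5, 9, 13) follow common YOLOv4 practice and are an assumption here, as the paper does not list them:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def maxpool_same(x, k):
    # stride-1 max pooling with 'same' padding; x: (C, H, W) float array
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)), constant_values=-np.inf)
    windows = sliding_window_view(xp, (k, k), axis=(1, 2))  # (C, H, W, k, k)
    return windows.max(axis=(-1, -2))

def spp_block(x, kernels=(5, 9, 13)):
    # concatenate the input with its multi-scale max-pooled copies along the
    # channel axis; spatial size is preserved, channels multiply by 4
    return np.concatenate([x] + [maxpool_same(x, k) for k in kernels], axis=0)

feat = np.random.default_rng(3).normal(size=(4, 16, 16))
out = spp_block(feat)
print(feat.shape, "->", out.shape)
```

Because each pooled copy summarises a different receptive-field size, the concatenated output mixes local and near-global context, which is the "richer feature information" the module is meant to provide.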

Findings

The experimental results, using the official dataset, reveal a 3.5% increase in average detection accuracy for three types of underwater biological targets compared to the traditional YOLOv4 algorithm. In underwater robot application testing, the proposed method achieves an impressive 94.73% average detection accuracy for the three types of underwater biological targets.

Originality/value

Underwater target detection is an important task for underwater robot applications. However, most underwater targets have small samples, and detecting small-sample targets is a comprehensive problem because it is affected by underwater image quality. This paper provides a whole set of methods to solve these problems, which is of great significance for underwater robot applications.

Details

Robotic Intelligence and Automation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 11 July 2023

Abhinandan Chatterjee, Pradip Bala, Shruti Gedam, Sanchita Paul and Nishant Goyal

Abstract

Purpose

Depression is a mental health problem characterized by a persistent sense of sadness and loss of interest. EEG signals are regarded as the most appropriate instruments for diagnosing depression because they reflect the operating status of the human brain. The purpose of this study is the early detection of depression among people using EEG signals.

Design/methodology/approach

(i) Artifacts are removed by filtering, and linear and non-linear features are extracted; (ii) feature scaling is done using a standard scaler, while principal component analysis (PCA) is used for feature reduction; (iii) the linear features, the non-linear features and the combination of both (only for those whose accuracy is highest) are taken for further analysis, where several ML and DL classifiers are applied for the classification of depression; and (iv) in total, 15 distinct ML and DL methods, including KNN, SVM, bagging SVM, RF, GB, Extreme Gradient Boosting, MNB, Adaboost, Bagging RF, BootAgg, Gaussian NB, RNN, 1DCNN, RBFNN and LSTM, are utilized as classifiers.
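Step (ii) of the pipeline, standard scaling followed by PCA, can be sketched directly in numpy; the synthetic data and the number of retained components are illustrative, not the study's settings:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 8)) * np.arange(1, 9)   # features on very different scales

# standard scaler: zero mean, unit variance per feature
mu, sigma = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sigma

# PCA via eigendecomposition of the feature covariance matrix
cov = np.cov(Xs, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]                 # sort descending by variance
k = 3
components = eigvecs[:, order[:k]]
X_reduced = Xs @ components                       # project onto top-k components

print(X_reduced.shape)
```

The reduced matrix is then what feeds the downstream ML and DL classifiers; scaling first matters because PCA directions are otherwise dominated by the largest-scale features.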

Findings

1. Among all features, alpha, alpha asymmetry, gamma and gamma asymmetry give the best results among linear features, while RWE, DFA, CD and AE give the best results among non-linear features. 2. Among the linear features, gamma and alpha asymmetry gave 99.98% accuracy for Bagging RF, while gamma asymmetry gave 99.98% accuracy for BootAgg. 3. For non-linear features, 99.84% accuracy was achieved for RWE and DFA in RF, 99.97% for DFA in XGBoost and 99.94% for RWE in BootAgg. 4. Using DL, among linear features, gamma asymmetry gave more than 96% accuracy in RNN and 91% in LSTM; for non-linear features, 89% accuracy was achieved for CD and AE in LSTM. 5. By combining linear and non-linear features, the highest accuracy was achieved in Bagging RF (98.50%) with gamma asymmetry + RWE. In DL, alpha + RWE, gamma asymmetry + CD and gamma asymmetry + RWE achieved 98% accuracy in LSTM.

Originality/value

A novel dataset was collected from the Central Institute of Psychiatry (CIP), Ranchi, recorded using 128 channels, whereas major previous studies used fewer channels; the details of the study participants are summarized and a model is developed for statistical analysis using N-way ANOVA; artifacts are removed by high- and low-pass filtering of epoch data followed by re-referencing and independent component analysis for noise removal; linear features, namely band power and interhemispheric asymmetry, and non-linear features, namely relative wavelet energy, wavelet entropy, approximate entropy, sample entropy, detrended fluctuation analysis and correlation dimension, are extracted; the model utilizes 213,072 epochs of 5 s EEG data, which allows it to train for longer, thereby increasing the efficiency of the classifiers. Feature scaling is done using a standard scaler rather than normalization because it helps increase the accuracy of the models (especially for deep learning algorithms), while PCA is used for feature reduction; the linear features, non-linear features and the combination of both are taken for extensive analysis in conjunction with ML and DL classifiers for the classification of depression. The combination of linear and non-linear features (only those whose accuracy is highest) is used for the best detection results.

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 3 January 2023

Saleem Raja A., Sundaravadivazhagan Balasubaramanian, Pradeepa Ganesan, Justin Rajasekaran and Karthikeyan R.

Abstract

Purpose

The internet has completely merged into contemporary life. People are addicted to using internet services for everyday activities. Consequently, an abundance of information about people and organizations is available online, which encourages the proliferation of cybercrimes. Cybercriminals often use malicious links for large-scale cyberattacks, which are disseminated via email, SMS and social media. Recognizing malicious links online can be exceedingly challenging. The purpose of this paper is to present a strong security system that can detect malicious links in cyberspace using natural language processing techniques.

Design/methodology/approach

Researchers have recommended a variety of approaches, including blacklisting and rule-based machine/deep learning, for automatically recognizing malicious links. However, these approaches generally necessitate generating a set of features to generalize the detection process. Most features are generated by processing URLs and web page content, along with external features such as the page's ranking and domain name system information. This process of feature extraction and selection typically takes time and demands a high level of domain expertise. Sometimes the generated features may not leverage the full potential of the dataset. In addition, the majority of currently deployed systems use a single classifier for the classification of malicious links, yet prediction accuracy may vary widely depending on the dataset and the classifier used.

Findings

To address the issue of generating feature sets, the proposed method uses natural language processing techniques (term frequency and inverse document frequency) to vectorize URLs. To build a robust system for the classification of malicious links, the proposed system implements a weighted soft voting classifier, an ensemble classifier that combines the predictions of base classifiers. The skill of each classifier serves as the basis for the weight assigned to it.
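The weighted soft voting step described above reduces to a weighted average of the base classifiers' probability outputs followed by an argmax. The three probability tables and the weights below are placeholders standing in for fitted base models and their validation skill, not values from the paper:

```python
import numpy as np

def weighted_soft_vote(probas, weights):
    # probas: list of (n_samples, n_classes) arrays, one per base classifier
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                               # normalise the weights
    stacked = np.stack(probas)                    # (n_clf, n_samples, n_classes)
    combined = np.tensordot(w, stacked, axes=1)   # weighted average of probabilities
    return combined.argmax(axis=1), combined

p1 = np.array([[0.9, 0.1], [0.4, 0.6]])   # e.g. a logistic regression's outputs
p2 = np.array([[0.6, 0.4], [0.2, 0.8]])   # e.g. a random forest's outputs
p3 = np.array([[0.8, 0.2], [0.7, 0.3]])   # e.g. a probabilistic SVM's outputs
labels, combined = weighted_soft_vote([p1, p2, p3], weights=[0.5, 0.3, 0.2])
print(labels)
```

In the paper's setting the inputs would be each base classifier's predicted probabilities on TF-IDF-vectorized URLs, with weights derived from each classifier's measured skill.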

Originality/value

The proposed method performs better when optimal weights are assigned. Its performance was assessed using two different datasets (D1 and D2) and compared against base machine learning classifiers and previous research results. The resulting accuracy shows that the proposed method is superior to existing methods, offering 91.4% and 98.8% accuracy for datasets D1 and D2, respectively.

Details

International Journal of Pervasive Computing and Communications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1742-7371
