Search results

1–10 of over 27,000
Article
Publication date: 11 October 2022

Chuanzhi Sun, Yin Chu Wang, Qing Lu, Yongmeng Liu and Jiubin Tan

Abstract

Purpose

Because the transmission mechanism of assembly error in multi-stage rotors with saddle-type surfaces is not yet clear, the purpose of this paper is to propose a deep belief network to predict the coaxiality and perpendicularity of the multi-stage rotor.

Design/methodology/approach

First, the surface types of the aero-engine rotor are classified. The rotor surface profile sampling data are converted into image-structured data, and a rotor surface type classifier based on a convolutional neural network is established. Then, for the saddle-surface rotor, a prediction model of coaxiality and perpendicularity based on a deep belief network is established. To verify the effectiveness of the proposed prediction method, a multi-stage rotor coaxiality and perpendicularity assembly measurement experiment is carried out.
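
A rough illustration of the first step (not the authors' model): a small convolutional classifier for rotor surface-profile data rendered as single-channel images. The input size (64 × 64), layer widths and number of surface-type classes are assumptions.

```python
import torch
import torch.nn as nn

class SurfaceTypeCNN(nn.Module):
    def __init__(self, n_classes=4):  # assumed number of surface types
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),  # 1-channel profile "image"
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16, n_classes)  # for 64x64 inputs

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One training step on a dummy batch of 64x64 profile images.
model = SurfaceTypeCNN()
x = torch.randn(8, 1, 64, 64)
y = torch.randint(0, 4, (8,))
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
```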

Findings

The results show that the surface type classification accuracy of the convolutional neural network is 99%, which meets the requirements of the subsequent assembly process. For the 80 sets of test samples, the average errors of the coaxiality and perpendicularity predicted by the deep belief network are 0.1 and 1.6 µm, respectively.

Originality/value

Therefore, the method proposed in this paper can be used not only for rotor surface classification but also to guide the assembly of aero-engine multi-stage rotors.

Details

Assembly Automation, vol. 42 no. 6
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 26 January 2024

Merly Thomas and Meshram B.B.

Abstract

Purpose

Denial-of-service (DoS) attacks gain unauthorized entry to network services and user information by generating traffic that issues many requests simultaneously, making the system unavailable to legitimate users. Protecting internet services therefore requires effective DoS attack detection that monitors the traffic passing across protected networks, freeing the protected internet servers from surveillance threats and ensuring they can focus on offering high-quality services with the shortest possible response times.

Design/methodology/approach

This paper aims to develop a hybrid optimization-based deep learning model to precisely detect DoS attacks.

Findings

The designed Aquila deer hunting optimization-enabled deep belief network achieved improved performance, with an accuracy of 92.8%, a true positive rate of 92.8% and a true negative rate of 93.6%.
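
For reference, the reported figures correspond to the standard confusion-matrix definitions. A minimal sketch, assuming the label encoding 1 = attack, 0 = benign:

```python
import numpy as np

def dos_detection_metrics(y_true, y_pred):
    """y_true / y_pred: 1 = DoS attack, 0 = benign traffic (assumed encoding)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    accuracy = (tp + tn) / len(y_true)
    tpr = tp / (tp + fn)  # true positive rate: attacks correctly flagged
    tnr = tn / (tn + fp)  # true negative rate: benign traffic correctly passed
    return accuracy, tpr, tnr
```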

Originality/value

The introduced approach effectively detects DoS attacks on the internet.

Details

International Journal of Web Information Systems, vol. 20 no. 1
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 2 July 2018

Jinghan Du, Haiyan Chen and Weining Zhang

Abstract

Purpose

In large-scale monitoring systems, sensors deployed at different locations collect massive amounts of useful time-series data, which support real-time data analytics and related applications. However, owing to faults in the hardware devices themselves, sensor nodes often fail, so the collected data are commonly incomplete. The purpose of this study is to predict and recover the missing data in sensor networks.

Design/methodology/approach

Considering the spatio-temporal correlation of large-scale sensor data, this paper proposes a data recovery model for sensor networks based on a deep learning method, the deep belief network (DBN). Specifically, when a sensor fails, its own historical time-series data are collected together with real-time data from surrounding sensor nodes that the proposed similarity filter identifies as highly similar to the failed node. Then, a high-level feature representation of these spatio-temporally correlated data is extracted by the DBN. Moreover, a reconstruction error-based algorithm is proposed to determine the structure of the DBN model. Finally, the missing data are predicted from these features by a single-layer neural network.
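
A minimal sketch of the recovery idea follows. Pearson correlation stands in for the paper's similarity filter, and scikit-learn's MLPRegressor stands in for the DBN features plus single-layer predictor; the window length, neighbor count and data shapes are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def recover_missing(readings, failed, k=5, window=10):
    """readings: (n_sensors, T) history; sensor `failed` stopped reporting."""
    target = readings[failed]
    # Similarity filter (simplified): keep the k sensors most correlated
    # with the failed sensor's historical series.
    corr = np.array([np.corrcoef(target, r)[0, 1] for r in readings])
    corr[failed] = -np.inf
    neighbors = np.argsort(corr)[-k:]
    # Training pairs: neighbor readings plus the target's recent window
    # predict the target's next value.
    X, y = [], []
    for t in range(window, readings.shape[1]):
        X.append(np.concatenate([readings[neighbors, t], target[t - window:t]]))
        y.append(target[t])
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500)
    model.fit(np.array(X), np.array(y))
    return model, neighbors
```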

Findings

This paper collects a noise data set from an airport monitoring system for the experiments. Various comparative experiments show that the proposed algorithms are effective. The proposed data recovery model is compared with several classical models, and the results show that the deep learning-based model achieves not only better prediction accuracy but also better training time and model robustness.

Originality/value

A deep learning method is investigated for the data recovery task and proves effective compared with previous methods. This may provide practical experience in applying deep learning methods.

Details

Sensor Review, vol. 39 no. 2
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 21 July 2020

Arshey M. and Angel Viji K. S.

Abstract

Purpose

Phishing is a serious cybersecurity problem that is widely propagated through media such as e-mail and the Short Messaging Service (SMS) to collect individuals' personal information. The rapid growth of such unsolicited and unwanted messages needs to be addressed, making effective anti-phishing methods a technological necessity.

Design/methodology/approach

The primary intention of this research is to design and develop an approach for preventing phishing by proposing an optimization algorithm. The proposed approach involves four steps for dealing with phishing e-mails: preprocessing, feature extraction, feature selection and classification. Initially, the input data set is preprocessed, removing stop words and applying stemming, and the preprocessed output is passed to feature extraction. By extracting keyword frequencies from the preprocessed output, the important words are selected as features. Feature selection is then carried out using the Bhattacharyya distance, so that only the significant features that can aid classification are retained. Using the selected features, classification is performed with a deep belief network (DBN) trained using the proposed fractional earthworm optimization algorithm (EWA). The fractional EWA is designed by integrating the EWA with fractional calculus to determine the weights in the DBN optimally.
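
The Bhattacharyya-distance step can be illustrated as follows, assuming Gaussian class-conditional feature distributions (the authors' exact formulation may differ); the keyword-frequency matrix and the number of retained features are assumptions.

```python
import numpy as np

def bhattacharyya_distance(x_pos, x_neg, eps=1e-9):
    # Closed form for two univariate Gaussians.
    m1, m2 = x_pos.mean(), x_neg.mean()
    v1, v2 = x_pos.var() + eps, x_neg.var() + eps
    return (0.25 * np.log(0.25 * (v1 / v2 + v2 / v1 + 2))
            + 0.25 * (m1 - m2) ** 2 / (v1 + v2))

def select_features(X, y, n_keep=100):
    """X: (n_mails, n_keywords) keyword-frequency matrix; y: 1 = phishing."""
    scores = np.array([bhattacharyya_distance(X[y == 1, j], X[y == 0, j])
                       for j in range(X.shape[1])])
    return np.argsort(scores)[-n_keep:]  # most class-separating keywords
```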

Findings

The comparative results are as follows:

Method               Accuracy   Sensitivity   Specificity
Naive Bayes (NB)     0.5333     0.4558        0.5052
DBN                  0.5455     0.5631        0.5631
Neural network (NN)  0.5556     0.7035        0.7028
EWA-DBN              0.5714     0.7045        0.7040
Fractional EWA-DBN   0.8571     0.8182        0.8800

The proposed fractional EWA-DBN thus attains the highest accuracy, sensitivity and specificity of the compared methods.

Originality/value

E-mail phishing detection is performed in this paper using optimization-based deep learning networks. E-mail traffic includes many unwanted messages that must be detected in order to avoid storage issues. The importance of the method is that including historical data in the detection process enhances detection accuracy.

Details

Data Technologies and Applications, vol. 54 no. 4
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 3 July 2020

Azra Nazir, Roohie Naaz Mir and Shaima Qureshi

Abstract

Purpose

The trend of “Deep Learning for the Internet of Things (IoT)” has gained fresh momentum, with enormous upcoming applications employing these models as their processing engine and the Cloud as their resource giant. But this picture leads to underutilization of the ever-growing IoT device pool, which had already passed the 15 billion mark by 2015. Thus, it is high time to explore a different approach to tackle this issue, keeping in view the characteristics and needs of the two fields. Processing at the Edge can boost applications with real-time deadlines while complementing security.

Design/methodology/approach

This review contributes to three cardinal directions of research in the field of DL for IoT. The first covers the categories of IoT devices and how the Fog can help overcome the underutilization of millions of devices, forming the realm of the things for IoT. The second addresses the immense computational requirements of DL models by surveying specific compression techniques: an appropriate combination of these techniques, including regularization, quantization and pruning, can form an effective compression pipeline for deploying DL models in IoT use cases (a toy pipeline is sketched below). The third incorporates both views and introduces a novel parallelization approach for setting up a distributed-systems view of DL for IoT.
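
A toy version of such a pipeline, assuming simple magnitude pruning followed by uniform quantization on a single weight matrix; real pipelines operate per layer and retrain between steps.

```python
import numpy as np

def prune_and_quantize(W, sparsity=0.9, bits=8):
    # Pruning: zero out the smallest-magnitude weights.
    threshold = np.quantile(np.abs(W), sparsity)
    W = np.where(np.abs(W) < threshold, 0.0, W)
    # Quantization: map surviving weights onto 2**bits uniform levels.
    scale = np.abs(W).max() / (2 ** (bits - 1) - 1)
    return np.round(W / scale) * scale, scale

W = np.random.randn(256, 256)
W_small, scale = prune_and_quantize(W)
print(f"nonzero weights remaining: {np.count_nonzero(W_small) / W.size:.1%}")
```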

Findings

DL models are growing deeper with every passing year. Well-coordinated distributed execution of such models using the Fog displays a promising future for the IoT application realm. It is found that a vertically partitioned compressed deep model can handle the trade-off among size, accuracy, communication overhead, bandwidth utilization and latency, but at the expense of a considerably larger memory footprint. To reduce the memory budget, we propose exploiting HashedNets as potentially favorable candidates for distributed frameworks. However, the critical point between accuracy and size for such models needs further investigation.
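
A sketch of the HashedNets idea: every entry of a virtual weight matrix is fetched from a small shared bucket array via a fixed random index map. The published method uses a hash function over (i, j) plus a sign hash; the stored index map here is a simplification.

```python
import numpy as np

class HashedLinear:
    def __init__(self, n_in, n_out, n_buckets, seed=0):
        rng = np.random.default_rng(seed)
        self.buckets = rng.standard_normal(n_buckets) * 0.01  # shared weights
        # Fixed random assignment of each (i, j) position to a bucket.
        self.idx = rng.integers(0, n_buckets, size=(n_in, n_out))

    def forward(self, x):
        W = self.buckets[self.idx]  # virtual (n_in, n_out) weight matrix
        return x @ W

# 512*256 = 131,072 virtual weights stored in 4,096 buckets (~32x smaller).
layer = HashedLinear(n_in=512, n_out=256, n_buckets=4096)
y = layer.forward(np.random.randn(8, 512))
```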

Originality/value

To the best of our knowledge, no study has explored the inherent parallelism in deep neural network architectures for their efficient distribution over the Edge-Fog continuum. Besides covering techniques and frameworks that have tried to bring inference to the Edge, the review uncovers significant issues and possible future directions for endorsing deep models as processing engines for real-time IoT. The study is directed at both researchers and industrialists taking applications to the Edge for a better user experience.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 31 December 2021

Jyothi N. and Rekha Patil

Abstract

Purpose

This study aims to develop a trust mechanism in a vehicular ad hoc network (VANET) based on optimized deep learning for selfish node detection.

Design/methodology/approach

The authors built a deep learning-based optimized trust mechanism that removes malicious content generated by selfish VANET nodes. The framework combines a deep belief network with the red fox optimization algorithm. A novel deep learning-based optimized model is developed to identify the type of vehicle under non-line-of-sight (nLoS) conditions. This authentication scheme satisfies both the security and privacy goals of the VANET environment. Message authenticity and integrity are verified using the vehicle's location to determine the trust level: the location is verified via distance and time, identifying whether the sender is at its claimed location given how far it could have traveled in the elapsed time.
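
A minimal sketch of such a distance-and-time plausibility check; the planar coordinate model and the maximum-speed threshold are assumptions, not the paper's parameters.

```python
import math

def location_plausible(prev_pos, prev_t, claimed_pos, claimed_t,
                       max_speed_mps=60.0):
    """Accept a claimed position only if it is reachable from the last
    verified position within the elapsed time at a plausible speed."""
    dx = claimed_pos[0] - prev_pos[0]
    dy = claimed_pos[1] - prev_pos[1]
    distance = math.hypot(dx, dy)
    elapsed = claimed_t - prev_t
    if elapsed <= 0:
        return False  # stale or replayed timestamp
    return distance / elapsed <= max_speed_mps

# A vehicle claiming to have moved 2 km in 10 s (~720 km/h) is rejected.
print(location_plausible((0, 0), 0.0, (2000, 0), 10.0))  # False
```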

Findings

A deep learning-based optimized trust model is used to detect obstacles present in both line-of-sight and nLoS conditions so as to reduce the accident rate. Compared with previous methods, the experimental results show better prediction performance in terms of accuracy, precision, recall, computational cost and communication overhead.

Practical implications

The experiments are conducted using the Network Simulator Version 2 and evaluated with different performance metrics, including computational cost, accuracy, precision, recall and communication overhead, under a simple attack and an opinion-tampering attack. The proposed method provides better results on these metrics than existing methods such as the k-nearest neighbor and artificial neural network classifiers. Hence, the proposed method is highly robust against simple and opinion-tampering attacks.

Originality/value

This paper proposes a deep learning-based optimized trust framework for trust prediction in VANETs. The model is used to evaluate both event-message senders and event-message integrity and accuracy.

Details

International Journal of Pervasive Computing and Communications, vol. 18 no. 3
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 24 September 2019

Qinghua Liu, Lu Sun, Alain Kornhauser, Jiahui Sun and Nick Sangwa

Abstract

Purpose

To classify different pavements, a road roughness acquisition system and an improved restricted Boltzmann machine (RBM) deep neural network algorithm based on the AdaBoost backpropagation algorithm for road roughness detection are presented in this paper. The developed measurement system, comprising the hardware design and the software algorithms, constitutes an independent system that is low-cost, compact and convenient to install.

Design/methodology/approach

The inputs of the RBM deep neural network are the vehicle vertical acceleration power spectrum and the pitch acceleration power spectrum, calculated from simulations in the ADAMS software. The AdaBoost backpropagation algorithm is used for fine-tuning in each RBM deep neural network classification model, given its global search performance. The algorithm is first applied to road spectrum detection, and experiments indicate that it is suitable for detecting pavement roughness.
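
A sketch of how such spectral inputs can be computed, using Welch's method on a synthetic vertical-acceleration signal standing in for the ADAMS output; the sampling rate and signal are assumed.

```python
import numpy as np
from scipy.signal import welch

fs = 200.0                                   # assumed sampling rate, Hz
t = np.arange(0, 30, 1 / fs)
# Synthetic vertical acceleration: a dominant body mode plus noise.
accel_z = np.sin(2 * np.pi * 2.5 * t) + 0.3 * np.random.randn(t.size)

# Power spectral density via Welch's method; the pitch channel would be
# processed identically and concatenated as a second input vector.
freqs, psd = welch(accel_z, fs=fs, nperseg=512)
features = np.log10(psd + 1e-12)             # log-PSD as classifier input
```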

Findings

The detection rate of the RBM deep neural network algorithm based on AdaBoost backpropagation reaches 96 per cent, with a false positive rate below 3.34 per cent. Both indices are better than those of the other supervised algorithms. The method also performs better at extracting the intrinsic characteristics of the data, thereby improving classification accuracy and classification quality. The experimental results show that the algorithm improves the performance of RBM deep neural networks, and the system can be used for detecting pavement roughness.

Originality/value

This paper presents an improved RBM deep neural network algorithm based on AdaBoost backpropagation for identifying road roughness. The restricted Boltzmann machine performs pre-training and initializes the sample weights, and the entire neural network is then fine-tuned with the AdaBoost backpropagation algorithm; the validity of the algorithm is verified on the MNIST data set. Using a quarter-vehicle model as the foundation, the vertical acceleration spectrum of the vehicle's center of mass and the pitch acceleration spectrum are obtained by simulation in ADAMS and used as input samples. The experimental results show that the improved algorithm has better optimization ability, improves the detection rate and detects road roughness more effectively.

Article
Publication date: 5 May 2022

Defeng Lv, Huawei Wang and Changchang Che

Abstract

Purpose

The purpose of this study is to analyze an intelligent semisupervised fault diagnosis method for aeroengines.

Design/methodology/approach

A semisupervised fault diagnosis method based on a denoising autoencoder (DAE) and a deep belief network (DBN) is proposed for aeroengines. Multiple aeroengine state parameters with long time series are processed to form high-dimensional fault samples, with the corresponding fault types as sample labels. The DAE performs unsupervised learning on the fault samples to obtain denoised, dimension-reduced features. The extracted features and sample labels are then fed into the DBN for supervised learning. Thus, semisupervised fault diagnosis of the aeroengine is achieved by combining unsupervised and supervised learning.
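
A minimal denoising-autoencoder sketch in PyTorch illustrating the unsupervised step; the dimensions, noise level and optimizer settings are assumptions, and the authors' architecture may differ.

```python
import torch
import torch.nn as nn

class DAE(nn.Module):
    def __init__(self, n_in=200, n_hidden=32):  # assumed dimensions
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        return self.decoder(self.encoder(x))

dae = DAE()
opt = torch.optim.Adam(dae.parameters(), lr=1e-3)
x_clean = torch.randn(64, 200)                    # dummy fault samples
x_noisy = x_clean + 0.1 * torch.randn_like(x_clean)

# One unsupervised step: reconstruct clean samples from corrupted inputs.
loss = nn.functional.mse_loss(dae(x_noisy), x_clean)
loss.backward()
opt.step()

# The bottleneck activations become the features fed to the DBN.
with torch.no_grad():
    features = dae.encoder(x_clean)
```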

Findings

The JT9D aeroengine data set and a simulated aeroengine data set are used to test the effectiveness of the proposed method. The results show that the semisupervised method based on the DAE and DBN is highly robust and maintains high fault diagnosis accuracy under noise interference. Compared with traditional models and standalone deep learning models, the proposed method also achieves lower error and higher fault diagnosis accuracy.

Originality/value

Multiple state parameters with long time series are processed to form high-dimensional fault samples. As a typical unsupervised learner, the DAE denoises the fault samples and extracts dimension-reduced features for subsequent deep learning. For the supervised stage, the DBN processes the extracted features, and fault diagnosis of aeroengines with multiple state parameters is achieved through the pretraining and reverse fine-tuning of restricted Boltzmann machines.

Details

Aircraft Engineering and Aerospace Technology, vol. 94 no. 10
Type: Research Article
ISSN: 1748-8842

Article
Publication date: 30 April 2021

J Aruna Santhi and T Vijaya Saradhi

Abstract

Purpose

This paper aims to implement attack detection in medical Internet of things (IoT) devices using an improved deep learning architecture to accomplish the bring-your-own-device (BYOD) concept. A simulation-based hospital environment is modeled in which many IoT devices or items of medical equipment communicate with each other. The node or device creating an attack is recognized with the support of attribute collection: data pertaining to attack detection in medical IoT are gathered from each node and treated as features. These features are fed to a deep belief network (DBN). In contrast to the standard DBN, the number of hidden neurons is tuned with a hybrid meta-heuristic algorithm that merges the grasshopper optimization algorithm (GOA) and spider monkey optimization (SMO) to enhance detection accuracy; the hybrid algorithm is termed the local leader phase-based GOA (LLP-GOA). The DBN is used to train the nodes by creating a data library with attack details, thus maintaining accurate detection during testing.
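
As a simplified stand-in for the LLP-GOA search (the real algorithm replaces random proposals with GOA/SMO update rules), the sketch below tunes hidden-layer widths by validation accuracy, with scikit-learn's MLPClassifier standing in for a DBN.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def tune_hidden_neurons(X, y, n_trials=20, seed=0):
    rng = np.random.default_rng(seed)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=seed)
    best_size, best_acc = None, -1.0
    for _ in range(n_trials):
        # Candidate hidden-layer widths (random proposal instead of GOA/SMO).
        h1, h2 = (int(v) for v in rng.integers(16, 257, size=2))
        clf = MLPClassifier(hidden_layer_sizes=(h1, h2), max_iter=300)
        clf.fit(X_tr, y_tr)
        acc = clf.score(X_val, y_val)  # fitness of this candidate
        if acc > best_acc:
            best_size, best_acc = (h1, h2), acc
    return best_size, best_acc
```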

Design/methodology/approach

This paper presents novel attack detection in medical IoT devices using an improved deep learning architecture for BYOD, aiming to show high convergence and better performance in detecting attacks in the hospital network.

Findings

The overall accuracy of the proposed LLP-GOA-based DBN was 0.25% better than that of particle swarm optimization (PSO)-DBN, 0.15% better than grey wolf optimizer (GWO)-DBN, 0.26% better than SMO-DBN and 0.43% better than GOA-DBN. Similarly, the accuracy of the proposed LLP-GOA-DBN model was 13% better than a support vector machine (SVM), 5.4% better than k-nearest neighbor (KNN), 8.7% better than a neural network (NN) and 3.5% better than a plain DBN.

Originality/value

This paper adopts the hybrid LLP-GOA algorithm for accurate attack detection in medical IoT, improving security in the healthcare sector through optimized deep learning. This is the first work that utilizes the LLP-GOA algorithm to improve the performance of a DBN for enhancing security in the healthcare sector.

Article
Publication date: 13 July 2018

M. Arif Wani and Saduf Afzal

Abstract

Purpose

Many strategies have been put forward for training deep network models; however, stacking several layers of non-linearities typically results in poor propagation of gradients and activations. The purpose of this paper is to explore a two-step strategy in which an initial deep learning model is first obtained by unsupervised learning and then optimized by fine tuning. A number of fine-tuning algorithms are explored for optimizing deep learning models, including a new algorithm in which backpropagation with adaptive gain is integrated with the Dropout technique; the authors evaluate its performance in fine-tuning the pretrained deep network.

Design/methodology/approach

The parameters of the deep neural networks are first learnt using greedy layer-wise unsupervised pretraining. The proposed technique is then used to perform supervised fine tuning of the deep neural network model. An extensive experimental study evaluates the performance of the proposed fine-tuning technique on three benchmark data sets: USPS, Gisette and MNIST. The authors test the approach on data sets of varying size, using randomly chosen training samples comprising 20, 50, 70 and 100 percent of the original data set.
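
A NumPy sketch of one such fine-tuning step, combining inverted Dropout with backpropagation through a gained sigmoid o = sigmoid(gain * net), where per-neuron gains are updated alongside the weights; the layer sizes, learning rate and dropout probability are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

n_in, n_hid, n_out, lr, p_drop = 64, 32, 10, 0.1, 0.5
W1 = rng.standard_normal((n_in, n_hid)) * 0.1   # pretrained weights would go here
W2 = rng.standard_normal((n_hid, n_out)) * 0.1
g1 = np.ones(n_hid)                             # adaptive gains, one per neuron

x = rng.standard_normal((16, n_in))             # dummy mini-batch
y = np.eye(n_out)[rng.integers(0, n_out, 16)]   # one-hot targets

# Forward pass with inverted dropout on the hidden layer.
net1 = x @ W1
h = sigmoid(g1 * net1)
mask = (rng.random(h.shape) > p_drop) / (1 - p_drop)
h_drop = h * mask
out = sigmoid(h_drop @ W2)

# Backward pass (squared error), updating weights and gains.
d_out = (out - y) * out * (1 - out)
d_h = (d_out @ W2.T) * mask * h * (1 - h)       # gradient w.r.t. g1 * net1
W2 -= lr * h_drop.T @ d_out
W1 -= lr * x.T @ (d_h * g1)                     # dL/dnet1 = d_h * g1
g1 -= lr * (d_h * net1).sum(axis=0)             # dL/dgain per hidden neuron
```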

Findings

The extensive experimental study shows that the two-step strategy and the proposed fine-tuning technique yield promising results in the optimization of deep network models.

Originality/value

This paper proposes employing several algorithms for fine tuning of deep network models. A new approach that integrates the adaptive gain backpropagation (BP) algorithm with the Dropout technique is proposed for fine tuning deep networks. An evaluation and comparison of the proposed fine-tuning algorithms on three benchmark data sets is presented.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 11 no. 3
Type: Research Article
ISSN: 1756-378X
