Search results

1 – 10 of 173
Article
Publication date: 24 September 2019

Qinghua Liu, Lu Sun, Alain Kornhauser, Jiahui Sun and Nick Sangwa

To realize classification of different pavements, a road roughness acquisition system design and an improved restricted Boltzmann machine deep neural network algorithm based on…

Abstract

Purpose

To classify different pavements, this paper presents a road roughness acquisition system and an improved restricted Boltzmann machine deep neural network algorithm, based on the Adaboost Backward Propagation algorithm, for road roughness detection. The developed measurement system, comprising the hardware design and the software algorithm, is an independent system that is low-cost, compact and easy to install.

Design/methodology/approach

The inputs of the restricted Boltzmann machine deep neural network are the vehicle vertical acceleration power spectrum and the pitch acceleration power spectrum, both calculated using ADAMS simulation software. The Adaboost Backward Propagation algorithm, chosen for its global search capability, is used to fine-tune each restricted Boltzmann machine deep neural network classification model. The algorithm is first applied to road spectrum detection, and experiments indicate that it is suitable for detecting pavement roughness.
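
The abstract gives no implementation details, but the overall pattern, unsupervised RBM feature learning on power-spectrum inputs followed by a boosted supervised stage, can be sketched as follows. This is a minimal illustration with scikit-learn on synthetic data; the layer sizes, the synthetic spectra and the use of AdaBoostClassifier as a stand-in for the authors' Adaboost Backward Propagation fine-tuning are all assumptions.

```python
# Minimal sketch: RBM feature learning on acceleration power spectra, followed by
# an AdaBoost-style supervised stage (stand-in for the authors' Adaboost
# Backward Propagation fine-tuning). Data and sizes are illustrative assumptions.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.ensemble import AdaBoostClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: 600 samples, 128 power-spectrum bins each,
# labelled with one of 4 pavement roughness classes.
X = rng.random((600, 128))
y = rng.integers(0, 4, size=600)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = Pipeline([
    ("scale", MinMaxScaler()),                          # RBM expects inputs in [0, 1]
    ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05,
                         n_iter=20, random_state=0)),   # unsupervised pre-training
    ("boost", AdaBoostClassifier(n_estimators=100, random_state=0)),
])
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```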

Findings

The detection rate of the restricted Boltzmann machine deep neural network algorithm based on Adaboost Backward Propagation reaches 96 per cent, and the false positive rate is below 3.34 per cent. Both indices are better than those of the other supervised algorithms. The proposed algorithm also performs better at extracting the intrinsic characteristics of the data, which improves classification accuracy and quality. The experimental results show that the algorithm improves the performance of restricted Boltzmann machine deep neural networks, and the system can be used for detecting pavement roughness.

Originality/value

This paper presents an improved restricted Boltzmann machine deep neural network algorithm based on Adaboost Backward Propagation for identifying road roughness. The restricted Boltzmann machine performs pre-training and initializes the sample weights; the entire neural network is then fine-tuned with the Adaboost Backward Propagation algorithm, and the validity of the algorithm is verified on the MNIST data set. A quarter-vehicle model is used as the foundation, and the vertical acceleration spectrum of the vehicle center of mass and the pitch acceleration spectrum, obtained by simulation in ADAMS, are used as the input samples. The experimental results show that the improved algorithm has better optimization ability, improves the detection rate and detects road roughness more effectively.

Article
Publication date: 22 July 2021

Linxia Zhong, Wei Wei and Shixuan Li

Because of the extensive user coverage of news sites and apps, greater social and commercial value can be realized if users can access their favourite news as easily as possible…

Abstract

Purpose

Because of the extensive user coverage of news sites and apps, greater social and commercial value can be realized if users can access their favourite news as easily as possible. However, news is time-sensitive, news recommendation suffers from serious cold-start and data-sparsity problems, and news users are more susceptible to recent topical news. Therefore, this study aims to propose a personalized news recommendation approach based on a topic model and the restricted Boltzmann machine (RBM).

Design/methodology/approach

First, the model extracts news topic information using the LDA2vec topic model. Then, the implicit behaviour data are analysed and converted into explicit rating data according to defined rules, with the highest weight assigned to recent hot news stories. Finally, the topic information and the rating data are used as the conditional layer and the visible layer of the conditional RBM (CRBM) model, respectively, to implement news recommendations.
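
As a rough illustration of the conditional-layer idea, the sketch below shows a single Gibbs step of a CRBM in which the topic vector linearly shifts the visible-layer and hidden-layer biases. The dimensions, initialization and linear conditioning are assumptions, not the authors' exact formulation.

```python
# Minimal sketch of the conditional RBM idea: the news-topic vector acts as a
# conditional layer that shifts the biases of the rating (visible) layer and the
# hidden layer. Sizes, initialization and the single Gibbs step are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, n_topics = 50, 20, 10   # rated news items, hidden units, topics

W = 0.01 * rng.standard_normal((n_visible, n_hidden))   # visible-hidden weights
D = 0.01 * rng.standard_normal((n_topics, n_visible))   # topic -> visible-bias weights
U = 0.01 * rng.standard_normal((n_topics, n_hidden))    # topic -> hidden-bias weights
b_v = np.zeros(n_visible)
b_h = np.zeros(n_hidden)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gibbs_step(v, topics):
    """One conditional Gibbs step: the topic vector shifts both bias vectors."""
    b_v_eff = b_v + topics @ D                      # conditioned visible bias
    b_h_eff = b_h + topics @ U                      # conditioned hidden bias
    p_h = sigmoid(v @ W + b_h_eff)                  # hidden activation probabilities
    h = (rng.random(n_hidden) < p_h).astype(float)  # sample hidden units
    p_v = sigmoid(h @ W.T + b_v_eff)                # reconstructed rating probabilities
    return p_v, p_h

v0 = (rng.random(n_visible) < 0.2).astype(float)    # toy binarized explicit ratings
topic_vec = rng.random(n_topics)                    # toy LDA2vec topic mixture
recon, hidden = gibbs_step(v0, topic_vec)
print("reconstruction shape:", recon.shape)
```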

Findings

The experimental results show that using the LDA2vec-based news topics as the conditional layer of the CRBM model yields more accurate rating predictions and improves the effectiveness of news recommendations.

Originality/value

This study proposes a personalized news recommendation approach based on an improved CRBM. A topic model is applied to news topic extraction and used as the conditional layer of the CRBM. This not only alleviates the sparseness of the rating data, improving the efficiency of the CRBM, but also accounts for readers being more susceptible to popular or trending news.

Details

The Electronic Library, vol. 39 no. 4
Type: Research Article
ISSN: 0264-0473


Article
Publication date: 12 November 2018

Jingshuai Zhang, Yuanxin Ouyang, Weizhu Xie, Wenge Rong and Zhang Xiong

The purpose of this paper is to propose an approach to incorporate contextual information into collaborative filtering (CF) based on the restricted Boltzmann machine (RBM) and…

Abstract

Purpose

The purpose of this paper is to propose an approach to incorporate contextual information into collaborative filtering (CF) based on the restricted Boltzmann machine (RBM) and deep belief networks (DBNs). Traditionally, neither the RBM nor its derivative model has been applied to modeling contextual information. In this work, the authors analyze the RBM and explore how to utilize a user’s occupation information to enhance recommendation accuracy.

Design/methodology/approach

The proposed approach is based on the RBM. The authors employ user occupation information as a context to design a context-aware RBM and stack the context-aware RBM to construct DBNs for recommendations.
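
One simple way to make an RBM occupation-aware is to append a one-hot occupation code to each user's binarized rating vector before training, so that the learned hidden features depend on the context. The sketch below takes this route with scikit-learn; the concatenation scheme, the toy data and all sizes are assumptions rather than the authors' exact conditional design.

```python
# Minimal sketch: append a one-hot occupation vector to each user's binarized
# rating vector before RBM training, so hidden features depend on the context.
# The concatenation scheme and all sizes/data are illustrative assumptions.
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
n_users, n_items, n_occupations = 200, 100, 21            # MovieLens-style dimensions

ratings = (rng.random((n_users, n_items)) < 0.1).astype(float)   # liked / not liked
occupation = rng.integers(0, n_occupations, size=n_users)
occ_onehot = np.eye(n_occupations)[occupation]

X = np.hstack([ratings, occ_onehot])                      # context-augmented visible layer

rbm = BernoulliRBM(n_components=50, learning_rate=0.05, n_iter=30, random_state=0)
hidden_features = rbm.fit_transform(X)                    # features for a stacked DBN layer
print(hidden_features.shape)                              # (200, 50)
```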

Findings

The experiments on the MovieLens data sets show that the user occupation-aware RBM outperforms other CF models, and combinations of different context-aware models by mutual information can obtain better accuracy. Moreover, the context-aware DBNs model is superior to baseline methods, indicating that deep networks have more qualifications for extracting preference features.

Originality/value

To improve recommendation accuracy by modeling contextual information, the authors propose context-aware CF approaches based on the RBM. Additionally, the authors introduce hybrid weights based on information entropy to combine the context-aware models. Furthermore, the authors stack the RBM to construct a context-aware multilayer network model. The experimental results not only show that the context-aware RBM can exploit contextual information effectively but also demonstrate that the combination method, the hybrid recommendation and the multilayer neural network extension bring significant benefits to recommendation quality.

Details

Online Information Review, vol. 44 no. 2
Type: Research Article
ISSN: 1468-4527


Article
Publication date: 5 May 2022

Defeng Lv, Huawei Wang and Changchang Che

The purpose of this study is to analyze the intelligent semisupervised fault diagnosis method of aeroengine.

Abstract

Purpose

The purpose of this study is to analyze the intelligent semisupervised fault diagnosis method of aeroengine.

Design/methodology/approach

A semisupervised fault diagnosis method based on the denoising autoencoder (DAE) and the deep belief network (DBN) is proposed for aeroengines. Multiple aeroengine state parameters with long time series are processed to form high-dimensional fault samples, and the corresponding fault types are taken as sample labels. The DAE is applied for unsupervised learning of the fault samples so as to obtain denoised, dimension-reduced features. Subsequently, the extracted features and sample labels are fed into the DBN for supervised learning. Thus, semisupervised fault diagnosis of the aeroengine is achieved by combining unsupervised and supervised learning.
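
The denoising-autoencoder stage can be illustrated in a few lines of NumPy: corrupt the high-dimensional fault samples, train a single hidden layer to reconstruct the clean input, and keep the hidden code as the denoised, dimension-reduced feature. The sizes, masking-noise level and plain SGD loop below are assumptions, and the supervised DBN stage is omitted.

```python
# Minimal denoising-autoencoder sketch: corrupt the input, reconstruct the clean
# signal, keep the hidden code as a denoised, dimension-reduced feature vector.
# Sizes, noise level and the plain SGD loop are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((500, 200))               # 500 high-dimensional fault samples (toy data)
n_in, n_hid, lr, noise = 200, 32, 0.5, 0.2
N = len(X)

W = 0.01 * rng.standard_normal((n_in, n_hid))
b_h = np.zeros(n_hid)
b_o = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(100):
    X_noisy = X * (rng.random(X.shape) > noise)        # masking corruption
    H = sigmoid(X_noisy @ W + b_h)                     # encode
    X_rec = sigmoid(H @ W.T + b_o)                     # decode with tied weights
    d_out = (X_rec - X) * X_rec * (1 - X_rec) / N      # output delta (MSE on clean input)
    d_hid = (d_out @ W) * H * (1 - H)                  # hidden delta
    W   -= lr * (d_out.T @ H + X_noisy.T @ d_hid)      # tied-weight gradient
    b_o -= lr * d_out.sum(axis=0)
    b_h -= lr * d_hid.sum(axis=0)

features = sigmoid(X @ W + b_h)     # denoised, dimension-reduced features for the DBN
print(features.shape)               # (500, 32)
```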

Findings

The JT9D aeroengine data set and a simulated aeroengine data set are used to test the effectiveness of the proposed method. The results show that the semisupervised fault diagnosis method based on the DAE and DBN is highly robust and maintains high fault diagnosis accuracy under noise interference. Compared with traditional models and standalone deep learning models, the proposed method also has lower error and higher fault diagnosis accuracy.

Originality/value

Multiple state parameters with long time series are processed to form high-dimensional fault samples. As a typical unsupervised learning method, the DAE is used to denoise the fault samples and extract dimension-reduced features for subsequent deep learning. Based on supervised learning, the DBN is applied to process the extracted features, and fault diagnosis of the aeroengine with multiple state parameters is achieved through the pretraining and reverse fine-tuning of restricted Boltzmann machines.

Details

Aircraft Engineering and Aerospace Technology, vol. 94 no. 10
Type: Research Article
ISSN: 1748-8842


Article
Publication date: 24 August 2021

K. Sujatha and V. Udayarani

The purpose of this paper is to improve the privacy in healthcare datasets that hold sensitive information. Putting a stop to privacy divulgence and bestowing relevant information…

Abstract

Purpose

The purpose of this paper is to improve privacy in healthcare data sets that hold sensitive information. Preventing privacy disclosure and providing relevant information to legitimate users are, at the same time, conflicting goals. The swift evolution of big data has also brought considerable convenience to everyday life. In the big data era, propagation and information sharing are the two main facets. Despite several research works on these aspects, as data grow incrementally, the likelihood of privacy leakage also expands substantially alongside the various benefits drawn from big data. Hence, safeguarding data privacy in a complicated environment has become a major challenge.

Design/methodology/approach

In this study, a method called deep restricted additive homomorphic ElGamal privacy preservation (DR-AHEPP) is proposed to preserve the privacy of data even when the data are incremental. An entropy-based differential privacy quasi-identification algorithm and the DR-AHEPP algorithm are designed, respectively, to obtain a privacy-preserved minimum falsified quasi-identifier set and computationally efficient privacy-preserved data.
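
The additive-homomorphic ElGamal building block can be illustrated with the exponential variant, in which messages are encoded in the exponent so that multiplying two ciphertexts adds the underlying plaintexts; recovering the sum then requires a small discrete-log search. The toy parameters below are for readability only, and nothing here reproduces the full DR-AHEPP construction.

```python
# Toy sketch of additive (exponential) ElGamal: multiplying ciphertexts adds the
# underlying plaintexts. Tiny demo parameters only; not the DR-AHEPP scheme.
import random

p = 30803          # small prime, demonstration only
g = 2              # group element of large order mod p (demo assumption)

x = random.randrange(2, p - 1)        # private key
h = pow(g, x, p)                      # public key

def encrypt(m):
    """Encrypt a small non-negative integer m as (g^r, g^m * h^r) mod p."""
    r = random.randrange(2, p - 1)
    return pow(g, r, p), (pow(g, m, p) * pow(h, r, p)) % p

def add_ciphertexts(c1, c2):
    """Component-wise multiplication adds the plaintexts in the exponent."""
    return (c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p

def decrypt_sum(c, max_sum=1000):
    """Recover g^m, then brute-force the small discrete log to get m."""
    a, b = c
    gm = (b * pow(a, p - 1 - x, p)) % p      # b / a^x via Fermat's little theorem
    acc = 1
    for m in range(max_sum + 1):
        if acc == gm:
            return m
        acc = (acc * g) % p
    raise ValueError("sum outside search range")

c = add_ciphertexts(encrypt(123), encrypt(45))
print(decrypt_sum(c))     # 168
```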

Findings

Analysis results on the Diabetes 130-US Hospitals data set illustrate that the proposed DR-AHEPP method preserves privacy on incremental data significantly better than existing methods. A comparative analysis against state-of-the-art works is performed with the objectives of minimizing information loss, false positive rate and execution time while achieving higher accuracy.

Originality/value

The paper demonstrates better performance on the Diabetes 130-US Hospitals data set, achieving high accuracy with low information loss and a low false positive rate. The results illustrate that the proposed method increases accuracy by 4% and reduces the false positive rate and information loss by 25% and 35%, respectively, compared with state-of-the-art works.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 15 no. 1
Type: Research Article
ISSN: 1756-378X


Article
Publication date: 27 April 2020

Harkamal Deep Singh and Jashandeep Singh

As a result of the deregulations in the power system networks, diverse beneficial operations have been competing to optimize their operational costs and improve the consistency of…


Abstract

Purpose

As a result of deregulation in power system networks, various operators have been competing to optimize their operational costs and improve the reliability of their electrical infrastructure. A certain and comprehensive assessment of the state of electrical equipment helps in selecting a suitable maintenance plan. Hence, insulation condition monitoring and diagnostic techniques for reliable and economical transformers are necessary to accomplish a comprehensive and proficient transformer condition assessment.

Design/methodology/approach

The main intent of this paper is to develop a new prediction model for the aging assessment of power transformer insulation oil. Data pertaining to power transformer insulation oil were collected from 20 working power transformers of 16-20 MVA operated at various substations in Punjab, India. They include various parameters associated with the transformer, such as breakdown voltage, moisture, resistivity, tan δ, interfacial tension and flashpoint. These data are given as input for predicting the age of the insulation oil. The proposed aging assessment model deploys a hybrid classifier that merges the neural network (NN) and the deep belief network (DBN). As the main contribution of this paper, the training algorithm of both the NN and the DBN is replaced by a modified lion algorithm (LA), named the randomly modified lion algorithm (RM-LA), to reduce the difference between the predicted and actual outcomes. Finally, a comparative analysis of different prediction models with respect to error measures proves the efficiency of the proposed model.
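
The abstract does not spell out the RM-LA update rules, but the general idea of replacing gradient-based training with a population-based search over the network weights can be sketched generically. Below, a simple random-perturbation population search (not the actual lion algorithm) tunes the weights of a tiny regression network on synthetic oil-parameter data; the update rule, sizes and data are all assumptions.

```python
# Generic sketch of metaheuristic weight optimization: a population of candidate
# weight vectors for a small regression network is evolved to minimize RMSE.
# This stands in for the RM-LA; the update rule, sizes and data are assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((120, 6))                  # 6 oil parameters (BDV, moisture, resistivity, ...)
y = X @ rng.random(6) + 0.1 * rng.standard_normal(120)   # synthetic "age" target

n_in, n_hid = 6, 8
n_weights = n_in * n_hid + n_hid          # hidden weights + linear output weights

def predict(w, X):
    W1 = w[: n_in * n_hid].reshape(n_in, n_hid)
    w2 = w[n_in * n_hid:]
    return np.tanh(X @ W1) @ w2

def rmse(w):
    return np.sqrt(np.mean((predict(w, X) - y) ** 2))

pop = rng.standard_normal((30, n_weights))            # initial population
for generation in range(200):
    fitness = np.array([rmse(w) for w in pop])
    best = pop[fitness.argmin()]
    # Move every candidate towards the best one and add exploration noise.
    pop = best + 0.5 * (pop - best) + 0.1 * rng.standard_normal(pop.shape)
    pop[0] = best                                     # keep the elite unchanged

print("best RMSE:", rmse(pop[0]))
```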

Findings

For Transformer 2, the root mean square error (RMSE) of the developed RM-LA-NN + DBN was 83.2, 92.5, 40.4, 57.4, 93.9 and 72 per cent better than that of NN + DBN, PSO, FF, CSA, PS-CSA and LA-NN + DBN, respectively. Moreover, for Transformer 13, the RMSE of the suggested RM-LA-NN + DBN was 97.4 per cent better than DBN + NN, 96.9 per cent better than PSO, 81.4 per cent better than FF, 93.2 per cent better than CSA, 49.6 per cent better than PS-CSA and 36.6 per cent better than the LA-based NN + DBN.

Originality/value

This paper presents a new model for the aging assessment of transformer insulation oil using the RM-LA-based DBN + NN. This is the first work that uses RM-LA-based optimization for aging assessment of power transformer insulation oil.

Article
Publication date: 18 March 2021

Pandiaraj A., Sundar C. and Pavalarajan S.

Up to date development in sentiment analysis has resulted in a symbolic growth in the volume of study, especially on more subjective text types, namely, product or movie reviews…

Abstract

Purpose

Recent developments in sentiment analysis have resulted in a significant growth in the volume of studies, especially on more subjective text types, namely, product or movie reviews. The key difference between these texts and news articles is that their target is defined and unique across the text. Hence, reviews of newspaper articles involve three subtasks: correctly spotting the target, separating the positive and negative content of the reviews on the concerned target and evaluating the different opinions provided in a detailed manner. Having defined these tasks, this paper aims to implement a new sentiment analysis model for reviews of newspaper articles.

Design/methodology/approach

Here, tweets about various newspaper articles are taken, and the sentiment analysis process is carried out through pre-processing, semantic word extraction, feature extraction and classification. Initially, the pre-processing phase is performed, in which steps such as stop-word removal, stemming and blank-space removal are carried out, producing keywords that indicate positive, negative or neutral sentiment. Further, semantically similar words are extracted from the available dictionary by matching the keywords. Next, feature extraction is performed on the extracted keywords and semantic words using holoentropy to obtain information statistics, which yields the maximum amount of related information. Two categories of holoentropy features are extracted: joint holoentropy and cross holoentropy. The extracted features of all keywords are finally fed to a hybrid classifier that merges the beneficial concepts of the neural network (NN) and the deep belief network (DBN). To improve sentiment classification performance, a modified rider optimization algorithm (ROA), called the new steering updated ROA (NSU-ROA), is introduced into the NN and the DBN for weight updating. The average of the two improved classifiers then provides the sentiment of the newspaper article reviews, classified as positive, negative or neutral.
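
The pre-processing phase described above (stop-word removal, stemming and blank-space removal) can be illustrated with NLTK, as in the hedged sketch below; it assumes the NLTK stop-word corpus has been downloaded and does not reproduce the holoentropy features or the NSU-ROA-tuned classifiers.

```python
# Sketch of the pre-processing step only: blank-space clean-up, stop-word removal
# and stemming for a tweet about a newspaper article. Assumes the NLTK stop-word
# corpus has been fetched via nltk.download("stopwords").
import re
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

stop_words = set(stopwords.words("english"))
stemmer = PorterStemmer()

def preprocess(text):
    text = re.sub(r"\s+", " ", text.lower()).strip()      # blank-space removal
    tokens = re.findall(r"[a-z]+", text)                  # simple word tokenizer
    tokens = [t for t in tokens if t not in stop_words]   # stop-word removal
    return [stemmer.stem(t) for t in tokens]              # stemmed keywords

print(preprocess("The new budget policy was praised, but critics remain   unconvinced."))
# e.g. ['new', 'budget', 'polici', 'prais', 'critic', 'remain', 'unconvinc']
```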

Findings

Three data sets were considered for experimentation. The results show that the developed NSU-ROA + DBN + NN attained high accuracy, which was 2.6% higher than particle swarm optimization, 3% higher than FireFly, 3.8% higher than grey wolf optimization, 5.5% higher than the whale optimization algorithm and 3.2% higher than the ROA-based DBN + NN on data set 1. The classification analysis shows that the accuracy of the proposed NSU-ROA + DBN + NN was 3.4% higher than DBN + NN, 25% higher than DBN, 28.5% higher than NN and 32.3% higher than the support vector machine on data set 2. Thus, the effective performance of the proposed NSU-ROA + DBN + NN for sentiment analysis of newspaper articles has been demonstrated.

Originality/value

This paper adopts a new optimization algorithm, the NSU-ROA, to effectively recognize the sentiments of newspaper articles with the NN and DBN. This is the first work that uses NSU-ROA-based optimization for accurate identification of sentiments from newspaper articles.

Details

Kybernetes, vol. 51 no. 1
Type: Research Article
ISSN: 0368-492X


Article
Publication date: 16 August 2021

Rajshree Varma, Yugandhara Verma, Priya Vijayvargiya and Prathamesh P. Churi

The rapid advancement of technology in online communication and fingertip access to the Internet has resulted in the expedited dissemination of fake news to engage a global…


Abstract

Purpose

The rapid advancement of technology in online communication and fingertip access to the Internet has resulted in the expedited dissemination of fake news to engage a global audience at a low cost by news channels, freelance reporters and websites. Amid the coronavirus disease 2019 (COVID-19) pandemic, individuals are exposed to these false and potentially harmful claims and stories, which may harm the vaccination process. Psychological studies reveal that the human ability to detect deception is only slightly better than chance; therefore, there is a growing need to develop automated strategies to combat fake news that traverses these platforms at an alarming rate. This paper systematically reviews existing fake news detection technologies by exploring various machine learning and deep learning techniques pre- and post-pandemic, which, to the best of the authors' knowledge, has never been done before.

Design/methodology/approach

The detailed literature review on fake news detection is divided into three major parts. The authors searched for papers published from 2017 onwards on fake news detection approaches based on deep learning and machine learning. The papers were initially retrieved through the Google Scholar platform and then scrutinized for quality, with "Scopus" and "Web of Science" used as quality indexing parameters. All research gaps and the available databases, data pre-processing, feature extraction techniques and evaluation methods for current fake news detection technologies are explored and illustrated using tables, charts and trees.

Findings

The paper is divided into two approaches, namely machine learning and deep learning, to present a better understanding and a clear objective. Next, the authors present a viewpoint on which approach is better, along with future research trends, issues and challenges for researchers, given the relevance and urgency of a detailed and thorough analysis of existing models. The paper also delves into fake news detection during COVID-19, and it can be inferred that research and modeling are shifting toward the use of ensemble approaches.

Originality/value

The study also identifies several novel automated web-based approaches used by researchers to assess the validity of pandemic news that have proven to be successful, although currently reported accuracy has not yet reached consistent levels in the real world.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 14 no. 4
Type: Research Article
ISSN: 1756-378X


Article
Publication date: 13 July 2018

M. Arif Wani and Saduf Afzal

Many strategies have been put forward for training deep network models, however, stacking of several layers of non-linearities typically results in poor propagation of gradients…

Abstract

Purpose

Many strategies have been put forward for training deep network models; however, stacking several layers of non-linearities typically results in poor propagation of gradients and activations. The purpose of this paper is to explore a two-step strategy in which an initial deep learning model is first obtained by unsupervised learning and then optimized by fine-tuning. A number of fine-tuning algorithms are explored in this work for optimizing deep learning models, including a newly proposed algorithm in which Backpropagation with adaptive gain is integrated with the Dropout technique; the authors evaluate its performance in fine-tuning the pretrained deep network.

Design/methodology/approach

The parameters of the deep neural networks are first learnt using greedy layer-wise unsupervised pretraining. The proposed technique is then used to perform supervised fine-tuning of the deep neural network model. An extensive experimental study is performed to evaluate the performance of the proposed fine-tuning technique on three benchmark data sets: USPS, Gisette and MNIST. The authors have tested the approach on data sets of varying size, comprising randomly chosen training samples of 20, 50, 70 and 100 per cent of the original data set.
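
The core of the proposed fine-tuning step, a backpropagation update in which each hidden unit carries a trainable gain inside its sigmoid and a dropout mask is applied on the forward pass, can be sketched for a single layer as follows. The plain gradient rule for the gain, the toy data and the layer sizes are assumptions, not the authors' exact formulation.

```python
# Sketch of one fine-tuning step with a per-unit adaptive gain inside the sigmoid
# and a dropout mask on the hidden layer. Plain gradient updates; sizes, data and
# the gain-update rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((64, 30))                        # mini-batch of pretrained-feature inputs
y = rng.random((64, 1))                         # toy regression targets
n_in, n_hid, lr, keep = 30, 16, 0.1, 0.8

W1 = 0.1 * rng.standard_normal((n_in, n_hid))
b1 = np.zeros(n_hid)
W2 = 0.1 * rng.standard_normal((n_hid, 1))
b2 = np.zeros(1)
gain = np.ones(n_hid)                           # adaptive gain, one per hidden unit

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass with gain and (inverted) dropout.
Z = X @ W1 + b1
H = sigmoid(gain * Z)
mask = (rng.random(H.shape) < keep) / keep
Hd = H * mask
y_hat = Hd @ W2 + b2

# Backward pass (mean-squared error).
d_y = (y_hat - y) / len(X)
d_Hd = d_y @ W2.T
d_pre = d_Hd * mask * H * (1 - H)               # gradient w.r.t. (gain * Z)

W2 -= lr * Hd.T @ d_y
b2 -= lr * d_y.sum(axis=0)
gain -= lr * (d_pre * Z).sum(axis=0)            # adaptive-gain update
W1 -= lr * X.T @ (d_pre * gain)
b1 -= lr * (d_pre * gain).sum(axis=0)

print("loss:", float(np.mean((y_hat - y) ** 2)))
```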

Findings

Through an extensive experimental study, it is concluded that the two-step strategy and the proposed fine-tuning technique yield significantly promising results in the optimization of deep network models.

Originality/value

This paper proposes several algorithms for fine-tuning deep network models. A new approach that integrates the adaptive gain Backpropagation (BP) algorithm with the Dropout technique is proposed for fine-tuning deep networks. An evaluation and comparison of the various proposed fine-tuning algorithms on three benchmark data sets is presented in the paper.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 11 no. 3
Type: Research Article
ISSN: 1756-378X


Article
Publication date: 3 July 2020

Azra Nazir, Roohie Naaz Mir and Shaima Qureshi

The trend of “Deep Learning for Internet of Things (IoT)” has gained fresh momentum with enormous upcoming applications employing these models as their processing engine and Cloud…


Abstract

Purpose

The trend of “Deep Learning for Internet of Things (IoT)” has gained fresh momentum, with enormous upcoming applications employing these models as their processing engine and the Cloud as their resource giant. But this picture leads to underutilization of the ever-increasing IoT device pool, which had already passed the 15 billion mark in 2015. Thus, it is high time to explore a different approach to tackle this issue, keeping in view the characteristics and needs of the two fields. Processing at the Edge can boost applications with real-time deadlines while complementing security.

Design/methodology/approach

This review paper contributes towards three cardinal directions of research in the field of DL for IoT. The first covers the categories of IoT devices and how Fog can aid in overcoming the underutilization of millions of devices, forming the “things” realm of IoT. The second handles the issue of the immense computational requirements of DL models by uncovering specific compression techniques; an appropriate combination of these techniques, including regularization, quantization and pruning, can aid in building an effective compression pipeline for deploying DL models for IoT use-cases. The third incorporates both of these views and introduces a novel parallelization approach for setting up a distributed-systems view of DL for IoT.
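
Two of the compression stages mentioned above, magnitude pruning and uniform 8-bit quantization of a weight matrix, are easy to illustrate in isolation. The sketch below uses assumed sparsity and bit-width settings and is not a complete pipeline for any particular model.

```python
# Minimal sketch of two compression stages for an edge-bound layer: magnitude
# pruning followed by uniform 8-bit quantization. Sparsity and bit width are
# assumed settings; a real pipeline would also retrain between stages.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 128))                    # dense layer weights

# 1. Magnitude pruning: zero out the 70% smallest-magnitude weights.
threshold = np.quantile(np.abs(W), 0.70)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# 2. Uniform quantization of the surviving weights to int8.
scale = np.abs(W_pruned).max() / 127.0
W_q = np.round(W_pruned / scale).astype(np.int8)       # stored on-device
W_deq = W_q.astype(np.float32) * scale                 # used at inference time

sparsity = float((W_pruned == 0).mean())
err = float(np.abs(W_deq - W_pruned).max())
print(f"sparsity={sparsity:.2f}, max dequantization error={err:.4f}")
```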

Findings

DL models are growing deeper with every passing year. Well-coordinated distributed execution of such models using Fog displays a promising future for the IoT application realm. It is realized that a vertically partitioned compressed deep model can handle the trade-off between size, accuracy, communication overhead, bandwidth utilization and latency, but at the expense of a considerable additional memory footprint. To reduce the memory budget, we propose to exploit Hashed Nets as potentially favorable candidates for distributed frameworks. However, the critical point between accuracy and size for such models needs further investigation.

Originality/value

To the best of our knowledge, no study has explored the inherent parallelism in deep neural network architectures for their efficient distribution over the Edge-Fog continuum. Besides covering techniques and frameworks that have tried to bring inference to the Edge, the review uncovers significant issues and possible future directions for endorsing deep models as processing engines for real-time IoT. The study is directed to both researchers and industrialists to take on various applications to the Edge for better user experience.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 3
Type: Research Article
ISSN: 1756-378X

