Search results
1 – 10 of 57
Abstract
Purpose
The purpose of this paper is to propose an approach for data visualization and industrial process monitoring.
Design/methodology/approach
A deep enhanced t-distributed stochastic neighbor embedding (DESNE) neural network is proposed for data visualization and process monitoring. The DESNE is composed of two deep neural networks: a stacked variant auto-encoder (SVAE) and a deep label-guided t-distributed stochastic neighbor embedding (DLSNE) network. In the DESNE network, the SVAE extracts informative features from the raw data set, and the DLSNE then projects the extracted features onto a two-dimensional graph.
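The two-stage structure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the encoder weights are random rather than trained, the data are synthetic, and plain PCA stands in for the label-guided t-SNE (DLSNE) projection step.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w, b):
    # One auto-encoder layer: sigmoid activation of a dense map.
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# Toy data: 100 samples, 10 process variables (a hypothetical stand-in
# for raw process readings such as the Tennessee Eastman variables).
x = rng.normal(size=(100, 10))

# Stage 1: stacked encoder (two layers, randomly initialised here;
# the paper trains these as a stacked variant auto-encoder).
w1, b1 = rng.normal(scale=0.1, size=(10, 6)), np.zeros(6)
w2, b2 = rng.normal(scale=0.1, size=(6, 4)), np.zeros(4)
features = encode(encode(x, w1, b1), w2, b2)

# Stage 2: project the 4-D features to a 2-D graph.  PCA is used here
# purely as a stand-in for the label-guided t-SNE step (DLSNE).
centered = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
embedding = centered @ vt[:2].T

print(embedding.shape)  # (100, 2)
```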
Findings
The proposed DESNE is verified on the Tennessee Eastman process and a real data set of blade icing of wind turbines. The results indicate that DESNE outperforms some visualization methods in process monitoring.
Originality/value
A stacked variant auto-encoder is proposed for feature extraction, which can improve the separation among classes. A deep label-guided t-SNE is proposed for visualization, and a novel visualization-based process monitoring method is built on them.
Xuan Ji, Jiachen Wang and Zhijun Yan
Abstract
Purpose
Stock price prediction is a hot topic, and traditional prediction methods are usually based on statistical and econometric models. However, these models are difficult to apply to nonstationary time series data. With the rapid development of the internet and the increasing popularity of social media, online news and comments often reflect investors' emotions and attitudes toward stocks, which contain a lot of important information for predicting stock price. This paper aims to develop a stock price prediction method that takes full advantage of social media data.
Design/methodology/approach
This study proposes a new prediction method based on deep learning technology, which integrates traditional stock financial index variables and social media text features as inputs of the prediction model. The study uses Doc2Vec to build long text feature vectors from social media and then reduces their dimensionality with a stacked auto-encoder to balance the dimensions between the text feature variables and the stock financial index variables. Meanwhile, based on the wavelet transform, the stock price time series is decomposed to eliminate the random noise caused by stock market fluctuation. Finally, the study uses a long short-term memory (LSTM) model to predict the stock price.
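The wavelet-denoising step of the pipeline can be illustrated with a minimal single-level Haar transform. This is a hedged sketch, not the authors' code: the threshold value and the toy price series are arbitrary illustrations.

```python
import numpy as np

def haar_denoise(prices, threshold=0.5):
    # Single-level Haar wavelet transform: split the series into
    # approximation (trend) and detail (noise) coefficients, shrink
    # the details by soft thresholding, then reconstruct.  The series
    # length must be even for this minimal sketch.
    x = np.asarray(prices, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    out = np.empty_like(x)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

noisy = np.array([10.0, 10.4, 10.1, 10.5, 10.9, 10.6, 11.0, 11.3])
print(haar_denoise(noisy))
```

With the threshold set to zero the transform reconstructs the input exactly, which is a convenient sanity check on the implementation.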
Findings
The experimental results show that the method performs better than all three benchmark models on all evaluation indicators and can effectively predict stock price.
Originality/value
This study proposes a new stock price prediction model, based on deep learning technology, that incorporates traditional financial features and text features derived from social media.
Masoud Azarbik and Mostafa Sarlak
Abstract
Purpose
This paper aims to report how one can assess the transient stability of a power system by using stacked auto-encoders.
Design/methodology/approach
The proposed algorithm works in a power system equipped with the wide area measurement system. To be more exact, it needs pre- and post-disturbance values of frequency sent from phasor measurement units.
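A minimal sketch of the input side of such an algorithm, assuming (hypothetically, since the abstract gives no detail) that the pre- and post-disturbance PMU frequency samples are simply stacked into one feature vector for the stacked auto-encoder classifier:

```python
import numpy as np

def stability_features(pre, post):
    # Stack pre- and post-disturbance frequency samples from the PMUs
    # into a single feature vector.  The flattening layout here is an
    # assumption for illustration, not the paper's actual encoding.
    return np.concatenate([np.asarray(pre).ravel(), np.asarray(post).ravel()])

# Toy readings: 3 PMUs x 4 samples each, around a nominal 50 Hz.
pre = np.full((3, 4), 50.0)
post = 50.0 + np.random.default_rng(2).normal(scale=0.2, size=(3, 4))

fv = stability_features(pre, post)
print(fv.shape)  # (24,)
```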
Findings
The authors have investigated the performance of the proposed method. In detail, they simulated many contingencies and then predicted the transient stability of each using the proposed algorithm.
Originality/value
The results demonstrate that the algorithm is fast, and it has acceptable performance under different circumstances including the change of system topology and failures of telecommunication channels.
Kotaru Kiran and Rajeswara Rao D.
Abstract
Purpose
Vertical handover has grown rapidly due to improvements in mobility models. These improvements are limited to certain circumstances and do not support generic mobility, yet providing vertical handover management in HetNets is crucial and challenging. Therefore, this paper presents a vertical handoff management method using an effective network identification method.
Design/methodology/approach
This paper presents a vertical handoff management method using an effective network identification method. The handover triggering schemes are first modeled to find a suitable position for starting the handover, using the computed coverage area of the WLAN access point or cellular base station. Inappropriate networks are then removed to determine the optimal network for performing the handover process. Accordingly, the network identification approach is introduced based on an adaptive particle-based Sailfish optimizer (APBSO). The APBSO is newly designed by incorporating self-adaptive particle swarm optimization (APSO) into the Sailfish optimizer (SFO), modifying the update rule of the APBSO algorithm based on the locations of the solutions in past iterations. The proposed APBSO is also utilized to train a deep-stacked autoencoder to choose the optimal weights. Several parameters, such as end-to-end (E2E) delay, jitter, signal-to-interference-plus-noise ratio (SINR), packet loss and handover probability (HOP), are considered to find the best network.
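The final network-selection step can be sketched as a simple scoring of candidate networks over the listed parameters. The fitness weighting below is hypothetical, not the paper's actual objective, and no APBSO optimization is performed; the sketch only shows how a best network might be picked once each candidate has measured parameter values.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(net):
    # Hypothetical score over the parameters the abstract lists:
    # higher SINR is good; delay, jitter, loss and HOP are penalties.
    delay, jitter, sinr, loss, hop = net
    return sinr - (delay + jitter + loss + hop)

# Candidate networks described by (delay, jitter, SINR, loss, HOP),
# with synthetic values for illustration.
candidates = rng.uniform(0.0, 1.0, size=(5, 5))
candidates[:, 2] *= 20.0  # give SINR a realistic larger scale

best = max(range(len(candidates)), key=lambda i: fitness(candidates[i]))
print(best)
```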
Findings
The developed APBSO-based deep-stacked autoencoder outperformed other methods, with a minimal delay of 11.37 ms, a minimal HOP of 0.312, a maximal stay time of 7.793 s and a maximal throughput of 12.726 Mbps.
Originality/value
The network identification approach is introduced based on an APBSO, newly designed by incorporating self-adaptive PSO into the SFO and modifying the update rule based on the locations of the solutions in past iterations. The proposed APBSO is also used to train a deep-stacked autoencoder to choose the optimal weights. Several parameters, such as E2E delay, jitter, SINR, packet loss and HOP, are considered to find the best network. The developed APBSO-based deep-stacked autoencoder outperformed other methods, with minimal delay and HOP and maximal stay time and throughput.
Wilson Charles Chanhemo, Mustafa H. Mohsini, Mohamedi M. Mjahidi and Florence U. Rashidi
Abstract
Purpose
This study explores challenges facing the applicability of deep learning (DL) in software-defined networking (SDN)-based campus networks. The study explains in depth the automation problem that exists in traditional campus networks and how SDN and DL can provide mitigating solutions. It further highlights challenges that need to be addressed in order to successfully implement SDN and DL in campus networks and make them better than traditional networks.
Design/methodology/approach
The study uses a systematic literature review. Studies on DL relevant to campus networks are presented for different use cases, and their limitations are identified for further research.
Findings
The analysis of the selected studies showed that the availability of training datasets specific to campus networks, and the interfacing and integration of SDN and DL in production networks, are key issues that must be addressed to successfully deploy DL in SDN-enabled campus networks.
Originality/value
This study reports on challenges associated with implementation of SDN and DL models in campus networks. It contributes towards further thinking and architecting of proposed SDN-based DL solutions for campus networks. It highlights that single problem-based solutions are harder to implement and unlikely to be adopted in production networks.
Yang Lu, Shujuan Yi, Yurong Liu and Yuling Ji
Abstract
Purpose
This paper aims to design a multi-layer convolutional neural network (CNN) to solve the biomimetic robot path-planning problem.
Design/methodology/approach
First, convolution kernels at different scales are obtained using the sparse auto-encoder training algorithm; the hidden-layer parameters form a series of convolution kernels, which the authors use to extract first-layer features. Then, the authors obtain second-layer features through max-pooling operators, which improve the invariance of the features. Finally, the authors use fully connected neural network layers to accomplish the path-planning task.
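The convolution and max-pooling operations described above can be sketched in plain NumPy. The kernel here is a fixed averaging filter standing in for a kernel learned by the sparse auto-encoder, and the 6 × 6 grid is a toy stand-in for the robot's environment map.

```python
import numpy as np

def conv2d(image, kernel):
    # Valid-mode 2-D convolution (cross-correlation, as in CNNs).
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    # Non-overlapping max pooling; improves translation invariance.
    h, w = fmap.shape
    h, w = h - h % size, w - w % size
    view = fmap[:h, :w].reshape(h // size, size, w // size, size)
    return view.max(axis=(1, 3))

grid = np.arange(36, dtype=float).reshape(6, 6)   # toy environment map
kernel = np.ones((3, 3)) / 9.0                    # stand-in learned kernel
features = max_pool(conv2d(grid, kernel))
print(features.shape)  # (2, 2)
```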
Findings
The NAO biomimetic robot responds quickly and correctly to the dynamic environment. The simulation experiments show that the deep neural network outperforms the conventional method in both dynamic and static environments.
Originality/value
A new deep learning-based method for biomimetic robot path planning is proposed. The authors designed a multi-layer CNN that includes max-pooling layers and convolution kernels, from which the first- and second-layer features are extracted. Finally, the authors use the sparse auto-encoder training algorithm to train the CNN so as to accomplish the path-planning task of the NAO robot.
Shubham Bharti, Arun Kumar Yadav, Mohit Kumar and Divakar Yadav
Abstract
Purpose
With the rise of social media platforms, an increasing number of cyberbullying cases have emerged. Every day, a large number of people, especially teenagers, become victims of cyber abuse. Cyberbullying can have a long-lasting impact on the victim's mind: the victim may develop social anxiety, engage in self-harm, go into depression or, in extreme cases, be driven to suicide. This paper aims to evaluate various techniques to automatically detect cyberbullying from tweets by using machine learning and deep learning approaches.
Design/methodology/approach
The authors applied machine learning algorithms and, after analyzing the experimental results, postulated that deep learning algorithms perform better for the task. Word-embedding techniques were used for word representation in model training. The pre-trained GloVe embedding was used to generate word embeddings; different versions of GloVe were used and their performance compared. Bi-directional long short-term memory (BLSTM) was used for classification.
Findings
The dataset contains 35,787 labeled tweets. The GloVe840 word embedding technique along with BLSTM provided the best results on the dataset with an accuracy, precision and F1 measure of 92.60%, 96.60% and 94.20%, respectively.
Research limitations/implications
If a word is not present in the pre-trained embedding (GloVe), it may be given a random vector representation that does not correspond to the actual meaning of the word. This means that an out-of-vocabulary (OOV) word may not be represented suitably, which can affect the detection of cyberbullying tweets. The problem may be rectified through the use of character-level embedding of words.
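The OOV limitation can be illustrated with a toy lookup. The miniature vocabulary below is hypothetical and stands in for the full GloVe table; the point is only that an unknown word receives an arbitrary vector carrying no semantic information.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical miniature embedding table standing in for GloVe.
vocab = {"bully": rng.normal(size=4), "stop": rng.normal(size=4)}

def embed(word, dim=4):
    # In-vocabulary words get their trained vector; OOV words get a
    # random vector, which carries no real meaning -- the limitation
    # the abstract describes.  A character-level model would instead
    # compose the vector from the word's characters.
    if word in vocab:
        return vocab[word]
    return rng.normal(size=dim)

print(np.array_equal(embed("bully"), vocab["bully"]))  # True
```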
Practical implications
The findings of the work may inspire entrepreneurs to leverage the proposed approach to build deployable systems to detect cyberbullying in different contexts such as the workplace and school, and may also draw the attention of lawmakers and policymakers to create systemic tools to tackle the ills of cyberbullying.
Social implications
Cyberbullying, if effectively detected, may save the victims from various psychological problems which, in turn, may lead society to a healthier and more productive life.
Originality/value
The proposed method produced results that outperform the state-of-the-art approaches in detecting cyberbullying from tweets. It uses a large dataset, created by intelligently merging two publicly available datasets. Further, a comprehensive evaluation of the proposed methodology has been presented.
Sagar Pande, Aditya Khamparia and Deepak Gupta
Abstract
Purpose
One of the key components of a health care system is a reliable intrusion detection system. Traditional techniques are not adequate to handle complex data, and diversified intrusion techniques cannot meet current network requirements. Not only is the volume of data increasing, but attacks are also increasing very rapidly. Deep learning and machine learning techniques are trending research areas in network security. A lot of work has been done in this area, but evolutionary algorithms combined with machine learning are still rarely explored. The purpose of this study is to provide a novel deep learning framework for the detection of attacks.
Design/methodology/approach
In this paper, a novel deep learning framework is proposed for the detection of attacks. A comparison of machine learning and deep learning algorithms is also provided.
Findings
The obtained accuracy is more than 99% for both data sets.
Research limitations/implications
The diversified intrusion techniques cannot meet current network requirements.
Practical implications
Not only is the volume of data increasing, but attacks are also increasing very rapidly.
Social implications
Deep learning and machine learning techniques are trending research areas in network security.
Originality/value
A novel deep learning framework is proposed for the detection of attacks.
Zhifeng Wang, Chi Zuo and Chunyan Zeng
Abstract
Purpose
Recently, double joint photographic experts group (JPEG) compression detection tasks have received much more attention in the field of Web image forensics. Although several useful methods have been proposed for double JPEG compression detection when the quantization matrices differ between the primary and secondary compression processes, it is still a difficult problem when the quantization matrices are the same. Moreover, the methods for different and for identical quantization matrices are implemented in independent ways. The paper aims to build a new unified framework for detecting double JPEG compression.
Design/methodology/approach
First, the Y channel of JPEG images is cut into 8 × 8 nonoverlapping blocks, and two groups of features that characterize the artifacts caused by doubly JPEG compression with the same and the different quantization matrices are extracted on those blocks. Then, the Riemannian manifold learning is applied for dimensionality reduction while preserving the local intrinsic structure of the features. Finally, a deep stack autoencoder network with seven layers is designed to detect the doubly JPEG compression.
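The first step, cutting the Y channel into 8 × 8 nonoverlapping blocks, can be sketched as follows. This is a minimal illustration, not the paper's feature-extraction code; the zero image is a placeholder for a decoded luminance channel.

```python
import numpy as np

def blocks_8x8(y_channel):
    # Cut the luminance channel into non-overlapping 8x8 blocks -- the
    # unit on which JPEG quantization, and hence double-compression
    # artifacts, operates.  Trailing rows/columns that do not fill a
    # block are dropped in this sketch.
    h, w = y_channel.shape
    h, w = h - h % 8, w - w % 8
    view = y_channel[:h, :w].reshape(h // 8, 8, w // 8, 8)
    return view.transpose(0, 2, 1, 3).reshape(-1, 8, 8)

y = np.zeros((16, 24))        # placeholder Y channel
print(blocks_8x8(y).shape)    # (6, 8, 8)
```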
Findings
Experimental results with different quality factors have shown that the proposed approach performs much better than the state-of-the-art approaches.
Practical implications
To verify the integrity and authenticity of Web images, research on double JPEG compression detection is receiving increasing attention.
Originality/value
This paper proposes a unified framework to detect double JPEG compression whether the quantization matrices are the same or different, which means this approach can be applied in more practical Web forensics tasks.
Sandeep Kumar Hegde and Monica R. Mundada
Abstract
Purpose
Chronic diseases are considered one of the most serious concerns and threats to public health across the globe. Diseases such as chronic diabetes mellitus (CDM), cardiovascular disease (CVD) and chronic kidney disease (CKD) are major chronic diseases responsible for millions of deaths. Each of these diseases is considered a risk factor for the other two. Therefore, noteworthy attention is being paid to reducing the risk of these diseases. A gigantic amount of medical data is generated in digital form from smart healthcare appliances in the current era. Although numerous machine learning (ML) algorithms have been proposed for the early prediction of chronic diseases, these algorithmic models are neither generalized nor adaptive when imposed on new disease datasets. Hence, these algorithms have to process a huge amount of disease data iteratively until the model converges. This limitation may make ML models difficult to fit and lead to imprecise results. A single algorithm may not yield accurate results. Nonetheless, an ensemble of classifiers built from multiple models, working on a voting principle, has been successfully applied to solve many classification tasks. The purpose of this paper is to make early predictions of chronic diseases using a hybrid generative regression-based deep intelligence network (HGRDIN) model.
Design/methodology/approach
In this paper, a generative regression (GR) model is used in combination with a deep neural network (DNN) for the early prediction of chronic disease. The GR model obtains prior knowledge about the labelled data by analyzing the correlation between features and class labels. Hence, the weight assignment process of the DNN is influenced by the relationships between attributes rather than by random assignment. The knowledge obtained through these processes is passed as input to the DNN for further prediction. Since the inference about the input data instances is drawn at the DNN through the GR model, the model is named the hybrid generative regression-based deep intelligence network (HGRDIN).
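A rough sketch of the idea of correlation-informed weights, under the assumption (not taken from the paper) that the GR step reduces to point-biserial feature-label correlations used as first-layer weights instead of a random initialization. The data are synthetic and the single linear layer stands in for the full DNN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labelled data: 50 patients, 3 features, binary disease label
# driven mostly by the first two features.
x = rng.normal(size=(50, 3))
y = (x[:, 0] + 0.5 * x[:, 1] + rng.normal(scale=0.1, size=50) > 0).astype(float)

# GR stand-in: the correlation between each feature and the class
# label guides the initial weights rather than a random assignment.
corr = np.array([np.corrcoef(x[:, j], y)[0, 1] for j in range(x.shape[1])])
w0 = corr / np.abs(corr).sum()   # normalised, correlation-informed weights

# A single linear "layer" using the informed weights as-is.
pred = (x @ w0 > 0).astype(float)
accuracy = (pred == y).mean()
print(accuracy)
```

Because the weights already point along the informative features, even this untrained layer classifies well above chance, which is the intuition behind letting GR-derived knowledge influence the DNN's weight assignment.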
Findings
The credibility of the implemented approach is rigorously validated using various parameters such as accuracy, precision, recall, F score and area under the curve (AUC) score. During the training phase, the proposed algorithm is constantly regularized using the elastic net regularization technique and is hyper-tuned using parameters such as momentum and learning rate to minimize the misprediction rate. The experimental results illustrate that the proposed approach predicted the chronic diseases with minimal error, avoiding the possible overfitting and local minima problems. The results obtained with the proposed approach are also compared with various traditional approaches.
Research limitations/implications
Usually, diagnostic data are multi-dimensional, and the performance of an ML algorithm degrades due to overfitting and the curse of dimensionality. The results obtained through the experiments achieved an average accuracy of 95%; hence, further analysis can be made to improve predictive accuracy by overcoming the curse of dimensionality.
Practical implications
The proposed ML model can mimic the behavior of the doctor's brain, and such algorithms have the capability to replace routine clinical tasks. The accurate results obtained through these innovative algorithms can free physicians from mundane care and practices so that they can focus more on complex issues.
Social implications
Utilizing the proposed predictive model at the decision-making level for the early prediction of disease is a promising change for the healthcare sector. The global burden of chronic disease can be reduced to an exceptional degree through these approaches.
Originality/value
In the proposed HGRDIN model, a transfer learning approach is used: the knowledge acquired through the GR process is applied to the DNN, which identifies the possible relationships between the dependent and independent feature variables by mapping the chronic data instances to their corresponding target classes before they are passed as input to the DNN. The experiments illustrated that the proposed approach obtained superior performance in terms of various validation parameters compared with existing conventional techniques.