Search results
1 – 10 of over 6000
Abstract
Purpose
The purpose of this paper is to simultaneously optimize the flight performance of a hexarotor unmanned aerial vehicle (UAV) by using simultaneous perturbation stochastic approximation (SPSA), a deep neural network and proportional–integral–derivative (PID) control according to varying arm length (i.e. morphing).
Design/methodology/approach
In this paper, proper PID gain coefficients and the morphing ratio were obtained with the stochastic optimization method known as SPSA to maximize flight efficiency. Because it is difficult to establish an analytical connection between the morphing ratio and the hexarotor moments of inertia, a deep neural network was used to obtain the moments of inertia from the morphing ratio. Using SPSA and the deep neural network, the best performance indexes were obtained, and both longitudinal and lateral flight simulations were performed with the resulting data.
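The two-measurement structure of SPSA can be sketched as follows. This is a generic illustration with a toy quadratic loss standing in for the flight-cost function, not the authors' actual simulation; the gains `a`, `c` and the decay exponents are conventional SPSA choices, assumed here.

```python
import numpy as np

def spsa_minimize(loss, theta0, a=0.1, c=0.1, n_iter=200, seed=0):
    """Minimal SPSA: two loss evaluations per iteration, regardless of dimension."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, n_iter + 1):
        ak = a / k ** 0.602                 # standard SPSA gain-decay exponents
        ck = c / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher perturbation
        # Gradient estimate from just two measurements of the loss:
        g = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck) * (1 / delta)
        theta = theta - ak * g
    return theta

# Hypothetical tuning run: a quadratic surrogate stands in for the flight-cost
# metric; its minimizer plays the role of "best PID gains and morphing ratio".
best = spsa_minimize(lambda th: np.sum((th - np.array([1.5, 0.4, 0.02])) ** 2),
                     theta0=[0.0, 0.0, 0.0])
```

Because the perturbation entries are all ±1, `1 / delta` equals `delta`, so each iteration costs exactly two function evaluations however many parameters are tuned.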
Findings
With SPSA, the best PID coefficients and morphing ratio are obtained for both longitudinal and lateral flight. Because the hexarotor solid-body model changes with the morphing ratio, the moment of inertia values used in the simulations also change. These values were obtained from the morphing ratio with the deep neural network, trained over a purpose-built data set.
Research limitations/implications
It takes a long time to obtain a morphing ratio suitable for the hexarotor model and PID gain coefficients suitable for that morphing ratio; this is overcome with the proposed SPSA. Obtaining the appropriate moments of inertia for each morphing ratio is similarly time-consuming; this is overcome with the deep neural network.
Practical implications
Determining the morphing ratio and PID gain coefficients with the optimization method, and the moments of inertia with the deep neural network, is very useful because it increases hexarotor flight efficiency and enables efficient flight with different arm lengths. With the proposed method, the hexarotor design performance criteria (rise time, settling time and overshoot) were significantly improved compared to similar studies.
Social implications
Determining the hexarotor flight parameters using SPSA and deep neural network provides advantages in terms of time, cost and applicability.
Originality/value
The hexarotor flight efficiency is improved with the proposed SPSA and deep neural network approaches. In addition, the desired flight parameters can be obtained more quickly and reliably, the design performance criteria are improved, the hexarotor UAV follows the given trajectory closely, and convenience is provided for end users. SPSA was preferred because it converges faster than other methods: while other methods perform 2n function evaluations per iteration, SPSA performs only two. Existing methods for obtaining the moment of inertia require many physical parameter values of the UAV; in the proposed method, a data set was created and the moment of inertia was estimated from the arm length alone with the deep neural network, without the need to obtain those physical parameters.
Details
Keywords
Azra Nazir, Roohie Naaz Mir and Shaima Qureshi
Abstract
Purpose
The trend of “Deep Learning for Internet of Things (IoT)” has gained fresh momentum, with enormous upcoming applications employing these models as their processing engine and the Cloud as their resource giant. But this picture leads to underutilization of the ever-increasing IoT device pool, which had already passed the 15 billion mark in 2015. Thus, it is high time to explore a different approach to tackle this issue, keeping in view the characteristics and needs of the two fields. Processing at the Edge can boost applications with real-time deadlines while complementing security.
Design/methodology/approach
This review paper contributes towards three cardinal directions of research in the field of DL for IoT. The first covers the categories of IoT devices and how Fog can aid in overcoming the underutilization of millions of devices that form the realm of things for IoT. The second handles the immense computational requirements of DL models by uncovering specific compression techniques; an appropriate combination of these techniques, including regularization, quantization and pruning, can build an effective compression pipeline for establishing DL models for IoT use cases. The third incorporates both these views and introduces a novel approach of parallelization for setting up a distributed-systems view of DL for IoT.
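A compression pipeline of the kind described, pruning followed by quantization, can be sketched in a few lines; the weights, sparsity level and bit width below are illustrative, not taken from the review.

```python
import numpy as np

def prune_by_magnitude(w, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    thresh = np.sort(np.abs(w), axis=None)[k - 1]
    return np.where(np.abs(w) <= thresh, 0.0, w)

def quantize_uniform(w, n_bits=8):
    """Symmetric uniform quantization to n_bits, returned in dequantized form."""
    scale = np.max(np.abs(w)) / (2 ** (n_bits - 1) - 1)
    if scale == 0:
        return w.copy()
    return np.round(w / scale) * scale

# Toy layer weights standing in for a trained model's parameters.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
w_small = quantize_uniform(prune_by_magnitude(w, sparsity=0.75), n_bits=8)
```

Pruning shrinks the number of nonzero parameters while quantization shrinks the bits per parameter; composing them, as the pipeline above does, is what makes the combined reduction multiplicative.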
Findings
DL models are growing deeper with every passing year. Well-coordinated distributed execution of such models using Fog displays a promising future for the IoT application realm. It is realized that a vertically partitioned compressed deep model can handle the trade-off between size, accuracy, communication overhead, bandwidth utilization and latency, but at the expense of a considerable additional memory footprint. To reduce the memory budget, we propose to exploit Hashed Nets as potentially favorable candidates for distributed frameworks. However, the critical point between accuracy and size for such models needs further investigation.
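The memory-saving idea behind Hashed Nets, backing a large virtual weight matrix with a small real weight vector through a hash function, can be illustrated as follows. This is a toy sketch with an ad hoc integer hash; real HashedNets use stronger hash functions plus a sign hash.

```python
import numpy as np

def hashed_linear(x, real_weights, out_dim, seed=0):
    """HashedNet-style layer: each virtual weight position (i, j) is mapped by a
    hash to an index into a small real weight vector, so memory is fixed by the
    real vector's size, not by the layer's nominal dimensions."""
    in_dim = x.shape[-1]
    # Ad hoc multiplicative hash over virtual positions (illustrative only).
    idx = (np.arange(in_dim)[:, None] * 2654435761
           + np.arange(out_dim) * 40503 + seed) % real_weights.size
    virtual_w = real_weights[idx]            # (in_dim, out_dim) built by hashing
    return x @ virtual_w

rng = np.random.default_rng(0)
real_w = rng.normal(size=32)                 # only 32 real parameters stored
x = rng.normal(size=(1, 64))
y = hashed_linear(x, real_w, out_dim=16)     # acts like a 64x16 = 1024-weight layer
```

Because the hash is deterministic, every node in a distributed deployment can rebuild the same virtual matrix from the 32 shared parameters, which is what makes such layers attractive for the Edge-Fog setting discussed above.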
Originality/value
To the best of our knowledge, no study has explored the inherent parallelism in deep neural network architectures for their efficient distribution over the Edge-Fog continuum. Besides covering techniques and frameworks that have tried to bring inference to the Edge, the review uncovers significant issues and possible future directions for endorsing deep models as processing engines for real-time IoT. The study is directed to both researchers and industrialists seeking to take various applications to the Edge for a better user experience.
Details
Keywords
Abstract
To analyze the text complexity of Chinese and foreign academic English writing, an artificial neural network (ANN) under deep learning (DL) is applied to the study of text complexity. First, the research status and existing problems of text complexity are introduced on the basis of DL. Second, the text complexity of Chinese and foreign academic English writing is analyzed with the Back Propagation Neural Network (BPNN) algorithm, and a BPNN syntactic complexity evaluation system is established. Third, MATLAB R2013b is used for simulation analysis of the model; the proposed BPNN algorithm is compared with other classical algorithms, and the weight value of each index and the model training effect are further analyzed by statistical methods. Finally, the L2 Syntactic Complexity Analyzer (L2SCA) is used to calculate the syntactic complexity of the two corpora, and the Mann–Whitney U test is used to compare the syntactic complexity of Chinese English learners and native English speakers. The experimental results show that, compared with a shallow neural network, the deep neural network algorithm has more hidden layers, richer features and better feature-extraction performance. The BPNN algorithm performs excellently during training, with the actual output very close to the expected value; analysis of the sample-test error shows that the evaluation error of the BPNN algorithm is less than 1.8%, indicating high accuracy. However, there are significant differences in grammatical complexity among students with different English writing proficiency: some measurement methods cannot effectively reflect the types and characteristics of written language, or may relate negatively to writing quality. The research also finds that measures of syntactic complexity are more sensitive to the language ability of the writing. Therefore, the BPNN algorithm can effectively analyze the text complexity of academic English writing, and the results provide a reference for improving the evaluation system for the text complexity of academic paper writing.
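The Mann–Whitney U test used to compare the two groups reduces to pairwise comparisons between samples; a minimal sketch of the statistic, with invented syntactic-complexity scores for illustration (the significance lookup is omitted):

```python
import numpy as np

def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample a vs sample b (no tie correction)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    # U = number of pairs (x in a, y in b) with x > y, counting ties as 1/2.
    greater = (a[:, None] > b[None, :]).sum()
    ties = (a[:, None] == b[None, :]).sum()
    return greater + 0.5 * ties

# Hypothetical per-text complexity scores for two writer groups.
learners = [10.2, 11.0, 9.8, 10.5, 10.9]
natives = [12.1, 11.8, 12.4, 11.5, 12.0]
u = mann_whitney_u(learners, natives)   # 0.0 here: every learner score is lower
```

An extreme U (near 0 or near the number of pairs) signals that one group's scores systematically dominate the other's, which is exactly the kind of group difference the comparison above tests for.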
Details
Keywords
Qinghua Liu, Lu Sun, Alain Kornhauser, Jiahui Sun and Nick Sangwa
Abstract
Purpose
To realize classification of different pavements, this paper presents a road roughness acquisition system and an improved restricted Boltzmann machine (RBM) deep neural network algorithm, based on the Adaboost Backward Propagation algorithm, for road roughness detection. The developed measurement system, comprising hardware designs and software algorithms, constitutes an independent system that is low-cost, small and convenient to install.
Design/methodology/approach
The inputs of the RBM deep neural network are the vehicle vertical acceleration power spectrum and the pitch acceleration power spectrum, calculated using ADAMS simulation software. The Adaboost Backward Propagation algorithm is used in each RBM deep neural network classification model for fine-tuning, given its global-search performance. The algorithm is first applied to road spectrum detection, and experiments indicate that it is suitable for detecting pavement roughness.
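The RBM pre-training referred to here is typically one step of contrastive divergence (CD-1); the following is a generic sketch of that update on toy binary features, not the paper's exact configuration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(v0, W, b_v, b_h, rng, lr=0.1):
    """One contrastive-divergence (CD-1) update for a binary RBM."""
    p_h0 = sigmoid(v0 @ W + b_h)                 # hidden probabilities given data
    h0 = (rng.random(p_h0.shape) < p_h0) * 1.0   # sample hidden units
    p_v1 = sigmoid(h0 @ W.T + b_v)               # one-step reconstruction
    p_h1 = sigmoid(p_v1 @ W + b_h)
    # Move statistics of the model toward statistics of the data.
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(v0)
    b_v += lr * (v0 - p_v1).mean(axis=0)
    b_h += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_v, b_h

rng = np.random.default_rng(0)
v = (rng.random((8, 6)) < 0.5) * 1.0            # toy batch of binarized features
W = rng.normal(scale=0.1, size=(6, 4))
b_v, b_h = np.zeros(6), np.zeros(4)
W, b_v, b_h = cd1_step(v, W, b_v, b_h, rng)
```

Repeating this step layer by layer gives the pre-trained weights that the Adaboost Backward Propagation stage then fine-tunes.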
Findings
The detection rate of the RBM deep neural network algorithm based on Adaboost Backward Propagation is up to 96 per cent, and the false positive rate is below 3.34 per cent. Both indices are better than those of the other supervised algorithms. The algorithm also performs better in extracting the intrinsic characteristics of the data, and therefore improves classification accuracy and classification quality. The experimental results show that the algorithm improves the performance of RBM deep neural networks, and the system can be used for detecting pavement roughness.
Originality/value
This paper presents an improved RBM deep neural network algorithm based on Adaboost Backward Propagation for identifying road roughness. The restricted Boltzmann machine completes pre-training and initializes the sample weights, and the entire neural network is then fine-tuned with the Adaboost Backward Propagation algorithm, whose validity is verified on the MNIST data set. A quarter-vehicle model is used as the foundation, and the vertical acceleration spectrum of the vehicle's center of mass and the pitch acceleration spectrum, obtained by simulation in ADAMS, serve as the input samples. The experimental results show that the improved algorithm has better optimization ability, improves the detection rate and detects road roughness more effectively.
Details
Keywords
Hongming Wang, Ryszard Czerminski and Andrew C. Jamieson
Abstract
Neural networks, which provide the basis for deep learning, are a class of machine learning methods that are being applied to a diverse array of fields in business, health, technology, and research. In this chapter, we survey some of the key features of deep neural networks and aspects of their design and architecture. We give an overview of some of the different kinds of networks and their applications and highlight how these architectures are used for business applications such as recommender systems. We also provide a summary of some of the considerations needed for using neural network models and future directions in the field.
Details
Keywords
Emre Kiyak and Gulay Unal
Abstract
Purpose
The paper aims to address tracking algorithms based on deep learning: four deep learning tracking models are developed and compared with each other to prevent collisions and to achieve target tracking in autonomous aircraft.
Design/methodology/approach
First, detection methods were used to locate the visual target, and then tracking methods were examined. Four models were developed: deep convolutional neural networks (DCNN), deep convolutional neural networks with fine-tuning (DCNNFN), transfer learning with deep convolutional neural networks (TLDCNN) and fine-tuning deep convolutional neural networks with transfer learning (FNDCNNTL).
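The core difference between the plain and transfer-learning models can be illustrated on a toy problem: a feature extractor is kept frozen while only a new task head is trained. The "pretrained" weights below are random stand-ins, and full fine-tuning (as in DCNNFN/FNDCNNTL) would additionally unfreeze them.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" feature extractor: these weights stand in for layers learned on
# a source task (hypothetical values); during transfer they stay frozen.
W_feat = rng.normal(scale=0.5, size=(8, 4))
features = lambda x: np.tanh(x @ W_feat)

# New task head, trained from scratch on the target task.
W_head = np.zeros((4, 1))
x = rng.normal(size=(32, 8))
y = (x[:, :1] > 0).astype(float)                # toy target labels

for _ in range(300):                            # only the head is updated;
    f = features(x)                             # fine-tuning would also update W_feat
    p = 1 / (1 + np.exp(-f @ W_head))
    W_head -= 0.5 * f.T @ (p - y) / len(x)      # logistic-loss gradient step

acc = np.mean((p > 0.5) == (y == 1.0))
```

Training only the small head is far cheaper than retraining the whole network, which mirrors the shorter training times reported for the fine-tuned variants below.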
Findings
The training of DCNN took 9 min 33 s, with an accuracy of 84%. For DCNNFN, training took 4 min 26 s and accuracy was 91%. The training of TLDCNN took 34 min 49 s, with an accuracy of 95%. With FNDCNNTL, training took 34 min 33 s and accuracy was nearly 100%.
Originality/value
Compared to results in the literature ranging from 89.4% to 95.6%, FNDCNNTL achieved better results in this paper.
Details
Keywords
Abstract
Purpose
The purpose of this research is to achieve multi-task autonomous driving by adjusting the network architecture of the model. Meanwhile, after achieving multi-task autonomous driving, the authors found that the trained neural network model performs poorly in untrained scenarios. Therefore, the authors proposed to improve the transfer efficiency of the model for new scenarios through transfer learning.
Design/methodology/approach
First, the authors achieved multi-task autonomous driving by training a model combining a convolutional neural network with differently structured long short-term memory (LSTM) layers. Second, they achieved fast transfer of neural network models to new scenarios through cross-model transfer learning. Finally, they combined data collection and data labeling to improve the efficiency of deep learning, and verified the model's robustness through light and shadow tests.
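The LSTM layers combined with the CNN follow the standard gated recurrence; a minimal single-cell sketch (generic, not the authors' UniBiCLSTM layout) is:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One step of a standard LSTM cell: input, forget and output gates plus a
    tanh candidate, all computed from one joint affine map."""
    z = x @ W + h @ U + b                       # all four gate pre-activations
    n = h.shape[-1]
    sig = lambda v: 1 / (1 + np.exp(-v))
    i, f, o = sig(z[:, :n]), sig(z[:, n:2*n]), sig(z[:, 2*n:3*n])
    g = np.tanh(z[:, 3*n:])                     # candidate cell state
    c_new = f * c + i * g                       # gated memory update
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid = 3, 5
W = rng.normal(scale=0.1, size=(n_in, 4 * n_hid))
U = rng.normal(scale=0.1, size=(n_hid, 4 * n_hid))
b = np.zeros(4 * n_hid)
h = c = np.zeros((1, n_hid))
for t in range(4):                              # unroll over a short input sequence
    h, c = lstm_step(rng.normal(size=(1, n_in)), h, c, W, U, b)
```

Because the cell state carries information across steps, stacking such layers after a CNN lets the model condition steering and speed decisions on recent frames, not just the current one.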
Findings
This research achieved road tracking, real-time acceleration–deceleration, obstacle avoidance and left/right sign recognition. The model proposed by the authors (UniBiCLSTM) outperforms the existing models tested with model cars in terms of autonomous driving performance. Furthermore, the CMTL-UniBiCL-RL model trained by the authors through cross-model transfer learning improves the efficiency of model adaptation to new scenarios. Meanwhile, this research proposed an automatic data annotation method, which can save 1/4 of the time for deep learning.
Originality/value
This research provides novel solutions for achieving multi-task autonomous driving and for transferring neural network models to new scenarios. The experiment was carried out with a single camera, an embedded chip and a scale model car, which is expected to simplify the hardware for autonomous driving.
Details
Keywords
Krishna Teja Perannagari and Shaphali Gupta
Abstract
Artificial neural networks (ANNs), which represent computational models simulating the biological neural systems, have become a dominant paradigm for solving complex analytical problems. ANN applications have been employed in various disciplines such as psychology, computer science, mathematics, engineering, medicine, manufacturing, and business studies. Academic research on ANNs is witnessing considerable publication activity, and there exists a need to track the intellectual structure of the existing research for a better comprehension of the domain. The current study uses a bibliometric approach to ANN business literature extracted from the Web of Science database. The study also performs a chronological review using science mapping and examines the evolution trajectory to determine research areas relevant to future research. The authors suggest that researchers focus on ANN deep learning models as the bibliometric results predict an expeditious growth of the research topic in the upcoming years. The findings reveal that business research on ANNs is flourishing and suggest further work on domains, such as back-propagation neural networks, support vector machines, and predictive modeling. By providing a systematic and dynamic understanding of ANN business research, the current study enhances the readers' understanding of existing reviews and complements the domain knowledge.
Details
Keywords
Lu Wang, Jiahao Zheng, Jianrong Yao and Yuangao Chen
Abstract
Purpose
With the rapid growth of the domestic lending industry, assessing whether the borrower of each loan is at risk of default is a pressing issue for financial institutions. Although some existing models handle such problems well, they still fall short in certain aspects. The purpose of this paper is to improve the accuracy of credit assessment models.
Design/methodology/approach
In this paper, three stages are used to improve the classification performance of LSTM so that financial institutions can more accurately identify borrowers at risk of default. The first stage uses the K-Means-SMOTE algorithm to reduce the class imbalance. In the second stage, ResNet is used for feature extraction, followed by a two-layer LSTM for learning, to strengthen the network's ability to mine and exploit deep information. Finally, model performance is improved by tuning the neural network with the IDWPSO algorithm.
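The oversampling stage can be sketched with plain SMOTE-style interpolation; K-Means-SMOTE additionally clusters the minority class before synthesizing, which is omitted here, and the data are toy values.

```python
import numpy as np

def smote_like(X_minority, n_new, k=3, seed=0):
    """SMOTE-style synthesis: each new sample is interpolated between a minority
    point and one of its k nearest minority neighbors."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X_minority, float)
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self from neighbors
    nn = np.argsort(d, axis=1)[:, :k]           # k nearest neighbors per point
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        j = nn[i, rng.integers(k)]
        lam = rng.random()
        out.append(X[i] + lam * (X[j] - X[i]))  # point on the segment i -> j
    return np.array(out)

rng = np.random.default_rng(1)
minority = rng.normal(size=(10, 4))             # toy minority-class feature rows
synthetic = smote_like(minority, n_new=30)
```

Because the synthetic points lie on segments between real minority samples, the classifier sees a denser minority region rather than exact duplicates, which is what mitigates the 700:1 imbalance evaluated in the findings.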
Findings
On two unbalanced datasets (category ratios of 700:1 and 3:1 respectively), the multi-stage improved model was compared with ten other models using accuracy, precision, specificity, recall, G-measure, F-measure and the nonparametric Wilcoxon test. It was demonstrated that the multi-stage improved model showed a more significant advantage in evaluating the imbalanced credit dataset.
Originality/value
In this paper, the parameters of the ResNet-LSTM hybrid neural network, which can fully mine and utilize the deep information, are tuned by an innovative intelligent optimization algorithm to strengthen the classification performance of the model.
Details
Keywords
Abstract
Purpose
Many strategies have been put forward for training deep network models; however, stacking several layers of non-linearities typically results in poor propagation of gradients and activations. The purpose of this paper is to explore a two-step strategy in which an initial deep learning model is first obtained by unsupervised learning and then optimized by fine-tuning. A number of fine-tuning algorithms are explored in this work, including a new algorithm in which Backpropagation with adaptive gain is integrated with the Dropout technique; the authors evaluate its performance in fine-tuning the pretrained deep network.
Design/methodology/approach
The parameters of the deep neural networks are first learnt using greedy layer-wise unsupervised pretraining. The proposed technique is then used to perform supervised fine-tuning of the deep neural network model. An extensive experimental study evaluates the performance of the proposed fine-tuning technique on three benchmark data sets: USPS, Gisette and MNIST. The authors tested the approach on data sets of varying size, using randomly chosen training samples of 20, 50, 70 and 100 percent of each original data set.
Findings
Through an extensive experimental study, it is concluded that the two-step strategy and the proposed fine-tuning technique yield promising results in optimizing deep network models.
Originality/value
This paper proposes several algorithms for fine-tuning deep network models. A new approach that integrates the adaptive gain Backpropagation (BP) algorithm with the Dropout technique is proposed for fine-tuning deep networks. An evaluation and comparison of the proposed fine-tuning algorithms on three benchmark data sets is presented.
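The Dropout side of the integration can be sketched as standard inverted dropout; this is a generic illustration, not the authors' adaptive-gain BP implementation.

```python
import numpy as np

def dropout(a, p_drop, rng, train=True):
    """Inverted dropout: randomly zero activations during training and rescale
    the survivors so the expected activation matches test-time behavior."""
    if not train or p_drop == 0.0:
        return a
    mask = (rng.random(a.shape) >= p_drop)      # keep each unit with prob 1 - p_drop
    return a * mask / (1.0 - p_drop)

rng = np.random.default_rng(0)
a = np.ones((1000, 100))                        # toy layer activations
out = dropout(a, p_drop=0.5, rng=rng)           # half zeroed, survivors scaled to 2.0
```

At evaluation time the layer is simply passed through unchanged (`train=False`), so no rescaling is needed at inference; during fine-tuning the random masks act as the regularizer the paper combines with adaptive-gain BP.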
Details