Search results

1 – 10 of over 4000
Article
Publication date: 3 July 2020

Azra Nazir, Roohie Naaz Mir and Shaima Qureshi

The trend of “Deep Learning for Internet of Things (IoT)” has gained fresh momentum with enormous upcoming applications employing these models as their processing engine…


Abstract

Purpose

The trend of “Deep Learning for Internet of Things (IoT)” has gained fresh momentum, with enormous upcoming applications employing these models as their processing engine and the Cloud as their resource giant. But this picture leads to underutilization of the ever-increasing IoT device pool, which had already passed the 15 billion mark in 2015. Thus, it is high time to explore a different approach to tackle this issue, keeping in view the characteristics and needs of the two fields. Processing at the Edge can boost applications with real-time deadlines while complementing security.

Design/methodology/approach

This review paper contributes towards three cardinal directions of research in the field of DL for IoT. The first section covers the categories of IoT devices and how the Fog can aid in overcoming the underutilization of the millions of devices that form the realm of things for IoT. The second direction handles the issue of the immense computational requirements of DL models by covering specific compression techniques; an appropriate combination of these techniques, including regularization, quantization and pruning, can form an effective compression pipeline for deploying DL models in IoT use-cases. The third direction incorporates both these views and introduces a novel approach of parallelization for setting up a distributed-systems view of DL for IoT.
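The compression techniques named above are standard building blocks; the sketch below is a minimal, illustrative Python pipeline that chains magnitude pruning and 8-bit quantization on a single weight matrix. The sparsity level, bit width and random weights are assumptions for illustration, not values from the paper.

```python
import numpy as np

def prune_by_magnitude(weights, sparsity=0.7):
    """Zero out the smallest-magnitude weights until the target sparsity is reached."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return weights * (np.abs(weights) >= threshold)

def quantize_uint8(weights):
    """Linearly quantize a tensor to 8-bit integers with a per-tensor scale and offset."""
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0 or 1.0
    q = np.round((weights - w_min) / scale).astype(np.uint8)
    return q, scale, w_min

def dequantize(q, scale, w_min):
    return q.astype(np.float32) * scale + w_min

# Example: compress the weights of one (hypothetical) dense layer.
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 128)).astype(np.float32)
w_pruned = prune_by_magnitude(w, sparsity=0.7)
q, scale, offset = quantize_uint8(w_pruned)
w_restored = dequantize(q, scale, offset)
print("sparsity:", float(np.mean(w_pruned == 0)))
print("max reconstruction error:", float(np.max(np.abs(w_restored - w_pruned))))
```

In practice such a pipeline is applied layer by layer and usually interleaved with retraining so that the accuracy lost to pruning and quantization can be recovered.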

Findings

DL models are growing deeper with every passing year. Well-coordinated distributed execution of such models using the Fog displays a promising future for the IoT application realm. It is realized that a vertically partitioned compressed deep model can handle the trade-off between size, accuracy, communication overhead, bandwidth utilization and latency, but at the expense of a considerably larger memory footprint. To reduce the memory budget, we propose to exploit HashedNets as potentially favorable candidates for distributed frameworks. However, the critical point between accuracy and size for such models needs further investigation.
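As a rough illustration of why HashedNets are attractive here, the following sketch shows a dense layer whose virtual weight matrix is backed by a small pool of shared parameters selected by a hash of each connection index. The bucket count and hash function are illustrative assumptions; a memory-critical implementation would evaluate the hash on the fly rather than caching the index array.

```python
import numpy as np

class HashedDense:
    """Dense layer backed by a small pool of shared weights (the HashedNets idea)."""

    def __init__(self, in_dim, out_dim, n_buckets, seed=0):
        rng = np.random.default_rng(seed)
        self.shared_weights = rng.normal(scale=0.1, size=n_buckets)  # the only stored weights
        self.bias = np.zeros(out_dim)
        # Deterministic hash of each (input, output) index pair into a bucket.
        i = np.arange(in_dim)[:, None]
        j = np.arange(out_dim)[None, :]
        self.bucket_ids = (i * 2654435761 + j * 97) % n_buckets  # cheap multiplicative hash

    def forward(self, x):
        virtual_w = self.shared_weights[self.bucket_ids]  # (in_dim, out_dim) built on the fly
        return x @ virtual_w + self.bias

layer = HashedDense(in_dim=512, out_dim=256, n_buckets=4096)
x = np.random.default_rng(1).normal(size=(8, 512))
print(layer.forward(x).shape)  # (8, 256), yet only 4096 shared weights are stored
```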

Originality/value

To the best of our knowledge, no study has explored the inherent parallelism in deep neural network architectures for their efficient distribution over the Edge-Fog continuum. Besides covering techniques and frameworks that have tried to bring inference to the Edge, the review uncovers significant issues and possible future directions for endorsing deep models as processing engines for real-time IoT. The study is directed at both researchers and industrialists seeking to bring various applications to the Edge for a better user experience.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 3
Type: Research Article
ISSN: 1756-378X


Article
Publication date: 17 May 2022

Qiucheng Liu

In order to analyze the text complexity of Chinese and foreign academic English writings, the artificial neural network (ANN) under deep learning (DL) is applied to the…

Abstract

Purpose

To analyze the text complexity of Chinese and foreign academic English writing, an artificial neural network (ANN) under deep learning (DL) is applied to the study of text complexity. The research first reviews the status and existing problems of text-complexity research from a DL perspective and then establishes a Back Propagation Neural Network (BPNN) syntactic complexity evaluation system.

Design/methodology/approach

Based on the BPNN algorithm, the text complexity of Chinese and foreign academic English writing is analyzed. MATLAB 2013b is used for simulation analysis of the model. The proposed BPNN model is compared with other classical algorithms, and the weight of each index and the training performance of the model are further analyzed with statistical methods. Finally, the L2 Syntactic Complexity Analyzer (L2SCA) is used to calculate the syntactic complexity of the two corpora, and the Mann-Whitney U test is used to compare the syntactic complexity of Chinese English learners and native English speakers.
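As a hedged sketch of the two analysis steps described above, the snippet below trains a small back-propagation network (scikit-learn's MLPRegressor) on syntactic-complexity indices and runs a Mann-Whitney U test between two groups. The feature values and the target score are synthetic stand-ins, not L2SCA output from the paper's corpora.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Illustrative stand-ins for syntactic-complexity indices (e.g. mean length of sentence,
# clauses per T-unit, dependent clauses per clause) for two corpora.
learner_indices = rng.normal(loc=12.0, scale=2.0, size=(200, 3))  # learner corpus
native_indices = rng.normal(loc=14.0, scale=2.0, size=(200, 3))   # native-speaker corpus

# Mann-Whitney U test on one index across the two corpora.
u_stat, p_value = mannwhitneyu(learner_indices[:, 0], native_indices[:, 0])
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")

# A small back-propagation network mapping the indices to a holistic complexity score.
X = np.vstack([learner_indices, native_indices])
y = X @ np.array([0.5, 0.3, 0.2]) + rng.normal(scale=0.1, size=len(X))  # synthetic target
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0).fit(X, y)
print("training R^2:", round(model.score(X, y), 3))
```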

Findings

The experimental results show that, compared with a shallow neural network, the deep neural network has more hidden layers, richer features and better feature-extraction performance. The BPNN algorithm performs well during training, with actual outputs very close to the expected values, and the evaluation error on the test samples is below 1.8%, indicating high accuracy. However, there are significant differences in grammatical complexity among students with different levels of English writing proficiency. Some measures cannot effectively reflect the types and characteristics of written language, or may even relate negatively to writing quality. The research also finds that measures of syntactic complexity are sensitive to the language ability reflected in writing.

Originality/value

The study shows that the BPNN algorithm can effectively analyze the text complexity of academic English writing. The results provide a reference for improving the evaluation system of text complexity in academic paper writing.

Details

Library Hi Tech, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-8831


Article
Publication date: 24 September 2019

Qinghua Liu, Lu Sun, Alain Kornhauser, Jiahui Sun and Nick Sangwa

To realize classification of different pavements, a road roughness acquisition system design and an improved restricted Boltzmann machine deep neural network algorithm…

Abstract

Purpose

To realize classification of different pavements, this paper presents a road-roughness acquisition system and an improved restricted Boltzmann machine (RBM) deep neural network algorithm based on the Adaboost Backward Propagation algorithm for road-roughness detection. The developed measurement system, including the hardware design and the software algorithm, constitutes an independent, low-cost, compact system that is convenient to install.

Design/methodology/approach

The inputs of the restricted Boltzmann machine deep neural network are the vehicle vertical acceleration power spectrum and the pitch acceleration power spectrum, which are calculated using the ADAMS simulation software. The Adaboost Backward Propagation algorithm is used in each restricted Boltzmann machine deep neural network classification model for fine-tuning, given its global-search capability. The algorithm is first applied to road spectrum detection, and experiments indicate that it is suitable for detecting pavement roughness.
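The abstract does not give the feature-extraction code; the following minimal sketch shows how such power-spectrum inputs could be computed with SciPy's Welch estimator from acceleration traces. The sampling rate and signals are assumptions, since the paper obtains them from an ADAMS vehicle model rather than synthetic data.

```python
import numpy as np
from scipy.signal import welch

fs = 500.0                          # assumed sampling rate in Hz
t = np.arange(0, 10, 1.0 / fs)
rng = np.random.default_rng(0)

# Stand-ins for the vertical and pitch acceleration signals of the vehicle model.
vertical_acc = 0.3 * np.sin(2 * np.pi * 12 * t) + 0.05 * rng.normal(size=t.size)
pitch_acc = 0.1 * np.sin(2 * np.pi * 4 * t) + 0.02 * rng.normal(size=t.size)

# Welch power spectral density estimates form the classifier's input feature vector.
_, psd_vertical = welch(vertical_acc, fs=fs, nperseg=1024)
_, psd_pitch = welch(pitch_acc, fs=fs, nperseg=1024)

features = np.concatenate([psd_vertical, psd_pitch])
print("input feature vector length:", features.size)
```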

Findings

The detection rate of the RBM deep neural network algorithm based on Adaboost Backward Propagation reaches 96 per cent, and the false positive rate is below 3.34 per cent. Both indices are better than those of the other supervised algorithms. The method also performs better at extracting the intrinsic characteristics of the data, thereby improving classification accuracy and quality. The experimental results show that the algorithm improves the performance of restricted Boltzmann machine deep neural networks, and the system can be used to detect pavement roughness.

Originality/value

This paper presents an improved restricted Boltzmann machine deep neural network algorithm based on Adaboost Backward Propagation for identifying road roughness. The restricted Boltzmann machine completes pre-training and initializes the sample weights, and the entire neural network is then fine-tuned with the Adaboost Backward Propagation algorithm, whose validity is verified on the MNIST data set. A quarter-vehicle model is used as the foundation, and the vertical acceleration spectrum of the vehicle's center of mass and the pitch acceleration spectrum obtained by simulation in ADAMS serve as the input samples. The experimental results show that the improved algorithm has better optimization ability, improves the detection rate and detects road roughness more effectively.
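As an illustration of the pretrain-then-fine-tune pattern described above (not the authors' exact method), the sketch below uses scikit-learn's BernoulliRBM as an unsupervised feature learner feeding a back-propagation classifier on a small digits data set; the AdaBoost-weighted fine-tuning of the paper is not reproduced here.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM, MLPClassifier
from sklearn.pipeline import Pipeline

# Small digits data set as a stand-in for the MNIST validation mentioned in the abstract.
X, y = load_digits(return_X_y=True)
X = X / 16.0                        # scale pixel values to [0, 1] for the Bernoulli RBM
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = Pipeline([
    ("rbm", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20, random_state=0)),
    ("bp", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)),
])
model.fit(X_train, y_train)
print("test accuracy:", round(model.score(X_test, y_test), 3))
```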

Book part
Publication date: 15 March 2021

Hongming Wang, Ryszard Czerminski and Andrew C. Jamieson

Neural networks, which provide the basis for deep learning, are a class of machine learning methods that are being applied to a diverse array of fields in business…

Abstract

Neural networks, which provide the basis for deep learning, are a class of machine learning methods that are being applied to a diverse array of fields in business, health, technology, and research. In this chapter, we survey some of the key features of deep neural networks and aspects of their design and architecture. We give an overview of some of the different kinds of networks and their applications and highlight how these architectures are used for business applications such as recommender systems. We also provide a summary of some of the considerations needed for using neural network models and future directions in the field.

Article
Publication date: 2 June 2021

Emre Kiyak and Gulay Unal

The paper aims to address a tracking algorithm based on deep learning; four deep learning tracking models are developed and compared with each other to prevent…

Abstract

Purpose

The paper aims to address a tracking algorithm based on deep learning; four deep learning tracking models are developed and compared with each other to prevent collision and to achieve target tracking in autonomous aircraft.

Design/methodology/approach

First, detection methods were used to locate the visual target, and then tracking methods were examined. Four models were developed: deep convolutional neural networks (DCNN), deep convolutional neural networks with fine-tuning (DCNNFN), transfer learning with deep convolutional neural networks (TLDCNN) and fine-tuning deep convolutional neural networks with transfer learning (FNDCNNTL).
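A hedged sketch of the transfer-learning and fine-tuning variants (TLDCNN/FNDCNNTL-style) is given below using a Keras MobileNetV2 backbone: first only a new classification head is trained on frozen pretrained features, then the backbone is unfrozen and trained further with a much smaller learning rate. The backbone, input size and class count are assumptions, not the paper's architecture.

```python
import tensorflow as tf

NUM_CLASSES = 2   # assumed: target aircraft vs. background

# Transfer learning (TLDCNN-style): reuse a pretrained backbone with frozen weights.
backbone = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                             input_shape=(224, 224, 3), pooling="avg")
backbone.trainable = False

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # train the new head only

# Fine-tuning on top of transfer learning (FNDCNNTL-style): unfreeze the backbone
# and continue training with a much smaller learning rate.
backbone.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```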

Findings

The training of DCNN took 9 min 33 s, with an accuracy of 84%. For DCNNFN, the training time was 4 min 26 s and the accuracy was 91%. The training of TLDCNN took 34 min 49 s, with an accuracy of 95%. With FNDCNNTL, the training time was 34 min 33 s and the accuracy was nearly 100%.

Originality/value

Compared with results in the literature ranging from 89.4% to 95.6%, FNDCNNTL achieves better results.

Details

Aircraft Engineering and Aerospace Technology, vol. 93 no. 4
Type: Research Article
ISSN: 1748-8842


Book part
Publication date: 14 November 2022

Krishna Teja Perannagari and Shaphali Gupta

Artificial neural networks (ANNs), which represent computational models simulating the biological neural systems, have become a dominant paradigm for solving complex…

Abstract

Artificial neural networks (ANNs), which represent computational models simulating the biological neural systems, have become a dominant paradigm for solving complex analytical problems. ANN applications have been employed in various disciplines such as psychology, computer science, mathematics, engineering, medicine, manufacturing, and business studies. Academic research on ANNs is witnessing considerable publication activity, and there exists a need to track the intellectual structure of the existing research for a better comprehension of the domain. The current study uses a bibliometric approach to ANN business literature extracted from the Web of Science database. The study also performs a chronological review using science mapping and examines the evolution trajectory to determine research areas relevant to future research. The authors suggest that researchers focus on ANN deep learning models as the bibliometric results predict an expeditious growth of the research topic in the upcoming years. The findings reveal that business research on ANNs is flourishing and suggest further work on domains, such as back-propagation neural networks, support vector machines, and predictive modeling. By providing a systematic and dynamic understanding of ANN business research, the current study enhances the readers' understanding of existing reviews and complements the domain knowledge.

Details

Exploring the Latest Trends in Management Literature
Type: Book
ISBN: 978-1-80262-357-4


Article
Publication date: 13 July 2018

M. Arif Wani and Saduf Afzal

Many strategies have been put forward for training deep network models, however, stacking of several layers of non-linearities typically results in poor propagation of…

Abstract

Purpose

Many strategies have been put forward for training deep network models; however, stacking several layers of non-linearities typically results in poor propagation of gradients and activations. The purpose of this paper is to explore a two-step strategy in which an initial deep learning model is obtained by unsupervised learning and then optimized by fine-tuning. A number of fine-tuning algorithms are explored in this work, including a new algorithm in which Backpropagation with adaptive gain is integrated with the Dropout technique; the authors evaluate its performance in fine-tuning the pretrained deep network.

Design/methodology/approach

The parameters of the deep neural network are first learnt using greedy layer-wise unsupervised pretraining. The proposed technique is then used to perform supervised fine-tuning of the deep neural network model. An extensive experimental study is performed to evaluate the proposed fine-tuning technique on three benchmark data sets: USPS, Gisette and MNIST. The authors test the approach on data sets of varying size, using randomly chosen training samples of 20, 50, 70 and 100 percent of the original data set.
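The snippet below is a minimal PyTorch sketch of the general two-step scheme: greedy layer-wise unsupervised pretraining with small autoencoders, followed by supervised fine-tuning of the stacked network with Dropout. It uses plain back-propagation rather than the authors' adaptive-gain variant, and the data, layer sizes and epochs are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.rand(512, 64)              # stand-in unlabeled data (e.g. flattened image patches)
y = torch.randint(0, 10, (512,))     # stand-in labels for the supervised fine-tuning stage
layer_sizes = [64, 32, 16]

# Step 1: greedy layer-wise unsupervised pretraining with one-hidden-layer autoencoders.
pretrained = []
inputs = X
for in_dim, hid_dim in zip(layer_sizes[:-1], layer_sizes[1:]):
    encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
    decoder = nn.Linear(hid_dim, in_dim)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(decoder(encoder(inputs)), inputs)
        loss.backward()
        opt.step()
    pretrained.append(encoder)
    inputs = encoder(inputs).detach()   # hidden codes become the next layer's training data

# Step 2: stack the pretrained encoders, add Dropout and a classifier head, fine-tune end to end.
model = nn.Sequential(pretrained[0], nn.Dropout(0.5), pretrained[1], nn.Dropout(0.5),
                      nn.Linear(layer_sizes[-1], 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(X), y)
    loss.backward()
    opt.step()
model.eval()
print("training accuracy after fine-tuning:",
      (model(X).argmax(dim=1) == y).float().mean().item())
```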

Findings

Through an extensive experimental study, it is concluded that the two-step strategy and the proposed fine-tuning technique yield promising results in the optimization of deep network models.

Originality/value

This paper proposes several algorithms for fine-tuning deep network models. A new approach that integrates the adaptive gain Backpropagation (BP) algorithm with the Dropout technique is proposed for fine-tuning deep networks. Evaluation and comparison of the proposed fine-tuning algorithms on three benchmark data sets are presented in the paper.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 11 no. 3
Type: Research Article
ISSN: 1756-378X


Article
Publication date: 3 April 2017

Yasser F. Hassan

This paper aims to utilize machine learning and soft computing to propose a new method of rough sets using deep learning architecture for many real-world applications.

Abstract

Purpose

This paper aims to utilize machine learning and soft computing to propose a new method of rough sets using deep learning architecture for many real-world applications.

Design/methodology/approach

The objective of this work is to propose a model for deep rough set theory that uses more than one decision table and approximates these tables into a classification system; that is, the paper proposes a novel framework of deep learning based on multiple decision tables.
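The multi-decision-table framework itself is the paper's contribution; the sketch below only illustrates the classical building block it extends, computing the lower and upper approximations of a decision class from one decision table, with a naive majority vote across tables standing in for the paper's coordination step. The toy tables are invented for illustration.

```python
from collections import Counter, defaultdict

def approximations(table, condition_attrs, decision_attr, target_value):
    """Lower and upper approximations of one decision class in a single decision table."""
    blocks = defaultdict(list)                       # indiscernibility classes
    for row in table:
        blocks[tuple(row[a] for a in condition_attrs)].append(row)
    lower, upper = [], []
    for rows in blocks.values():
        decisions = {r[decision_attr] for r in rows}
        if decisions == {target_value}:
            lower.extend(rows)                       # certainly in the class
        if target_value in decisions:
            upper.extend(rows)                       # possibly in the class
    return lower, upper

# Two toy decision tables describing the same kind of decision with different attributes.
table_a = [{"degree": "high", "exp": "yes", "hire": "yes"},
           {"degree": "high", "exp": "no", "hire": "no"},
           {"degree": "low", "exp": "yes", "hire": "yes"}]
table_b = [{"test": "pass", "hire": "yes"},
           {"test": "fail", "hire": "no"},
           {"test": "pass", "hire": "yes"}]

lower, upper = approximations(table_a, ["degree", "exp"], "hire", "yes")
print("lower:", len(lower), "upper:", len(upper))

# Naive stand-in for the paper's coordination step: majority vote over per-table decisions.
per_table = [Counter(r["hire"] for r in t).most_common(1)[0][0] for t in (table_a, table_b)]
print("global decision:", Counter(per_table).most_common(1)[0][0])
```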

Findings

The paper coordinates the local properties of individual decision tables to provide an appropriate global decision from the system.

Research limitations/implications

Rough set learning usually assumes the existence of a single decision table, whereas real-world decision problems involve several decisions with several different decision tables. The proposed model can handle multiple decision tables.

Practical implications

The proposed classification model is applied to social networks with preferred features that are freely distributed as social entities, achieving an accuracy of around 91 per cent.

Social implications

Deep learning using rough set theory simulates the way the brain thinks and can solve the problem of different information about the same problem existing in different decision systems.

Originality/value

This paper utilizes machine learning and soft computing to propose a new method of rough sets using deep learning architecture for many real-world applications.

Details

Kybernetes, vol. 46 no. 4
Type: Research Article
ISSN: 0368-492X


Article
Publication date: 3 April 2020

Abdelhalim Saadi and Hacene Belhadef

The purpose of this paper is to present a system based on deep neural networks to extract particular entities from natural language text, knowing that a massive amount of…

Abstract

Purpose

The purpose of this paper is to present a system based on deep neural networks to extract particular entities from natural language text, given that a massive amount of textual information is now electronically available. Notably, the sheer volume of electronic text data makes it difficult to find or extract the relevant information.

Design/methodology/approach

This study presents an original system to extract Arabic named entities by combining a deep neural network-based part-of-speech tagger and a neural network-based named-entity extractor. First, the system extracts the grammatical classes of words with high precision, depending on the context of each word; this module plays the role of a disambiguation process. A second module then extracts the named entities.
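A minimal sketch of this two-module idea, with toy stand-ins for both networks, is shown below: a part-of-speech tagger's output is fed, together with surface features of the word, into a named-entity classifier. The tagger, features and English examples are illustrative assumptions; the paper's system uses deep networks on Arabic text.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def pos_tag(word):
    """Stage 1 stand-in: the paper uses a deep neural POS tagger that disambiguates by context."""
    return "PROPN" if word.istitle() else "NOUN"

# Tiny labeled sample of (word, entity label); real data would be Arabic tokens.
train = [("Cairo", "LOC"), ("Ahmed", "PER"), ("university", "O"), ("river", "O"),
         ("Amman", "LOC"), ("Fatima", "PER"), ("book", "O"), ("city", "O")]

def features(word):
    # Stage 2 input: surface features plus the POS tag predicted by stage 1.
    return {"word.lower": word.lower(), "pos": pos_tag(word), "is_title": word.istitle()}

X = [features(w) for w, _ in train]
y = [label for _, label in train]
ner = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000)).fit(X, y)

print(ner.predict([features("Aqaba"), features("table")]))
```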

Findings

Using deep neural networks in natural language processing requires tuning many hyperparameters, which is a time-consuming process, so statistical approaches such as the Taguchi method are applied. In this study, the system is successfully applied to Arabic named-entity recognition, where an accuracy of 96.81 per cent is reported, better than state-of-the-art results.
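As a hedged illustration of using the Taguchi method for hyperparameter tuning, the sketch below evaluates three two-level hyperparameters over an L4 orthogonal array (4 runs instead of 8) and picks the best level of each factor from the main effects. The hyperparameter values and the scoring function are made-up stand-ins for actual model training.

```python
# Two-level settings for three hyperparameters (values are illustrative assumptions).
levels = {
    "learning_rate": [1e-3, 1e-2],
    "hidden_units": [64, 128],
    "dropout": [0.2, 0.5],
}

# L4 orthogonal array: 4 runs cover the main effects of three 2-level factors,
# instead of the 8 runs a full factorial design would need.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def evaluate(config):
    """Stand-in for training the network and returning a validation score."""
    return 0.80 + 0.05 * (config["hidden_units"] == 128) - 0.10 * (config["dropout"] == 0.5)

scores = []
for row in L4:
    config = {name: levels[name][lvl] for name, lvl in zip(levels, row)}
    scores.append(evaluate(config))

# Main-effect analysis: average score at each level of each factor, keep the better level.
best = {}
for i, name in enumerate(levels):
    means = [sum(s for s, r in zip(scores, L4) if r[i] == lvl) / 2 for lvl in (0, 1)]
    best[name] = levels[name][means.index(max(means))]
print("recommended configuration:", best)
```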

Research limitations/implications

The system is designed and trained for the Arabic language, but the architecture can be used for other languages.

Practical implications

Information extraction systems are developed for different applications, such as analysing newspaper articles and databases for commercial, political and social objectives. Information extraction systems also can be built over an information retrieval (IR) system. The IR system eliminates irrelevant documents and paragraphs.

Originality/value

The proposed system can be regarded as the first attempt to use two deep neural networks in tandem to increase accuracy. It can also be built over an IR system, which eliminates irrelevant documents and paragraphs and thus reduces the number of documents from which the relevant information must be extracted.

Details

Smart and Sustainable Built Environment, vol. 9 no. 4
Type: Research Article
ISSN: 2046-6099


Article
Publication date: 29 October 2021

Ran Feng and Xiaoe Qu

To identify and analyze the occurrence of Internet financial market risk, data mining technology is combined with deep learning to process and analyze. The market risk…

Abstract

Purpose

To identify and analyze the occurrence of Internet financial market risk, data mining technology is combined with deep learning for processing and analysis. The aim of Internet financial market risk management is to raise the level of risk management, improve Internet financial supervision policy and promote the healthy development of Internet finance.

Design/methodology/approach

In this exploration, data mining technology is combined with deep learning to mine Internet financial data, warn of potential risks in the market and provide targeted risk management measures. Therefore, to improve the ability of data mining to deal with Internet financial risk management, a radial basis function (RBF) neural network algorithm optimized by ant colony optimization (ACO) is proposed.
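The sketch below fits a small RBF network to a nonlinear function, with output weights solved by least squares and a simple random search over centers and width standing in for the ACO step described in the abstract (the real ACO procedure is not reproduced). All data and ranges are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 200)[:, None]
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=200)   # nonlinear target function

def rbf_design(X, centers, width):
    """Gaussian RBF design matrix: one column per hidden center."""
    return np.exp(-((X - centers[None, :]) ** 2) / (2 * width ** 2))

def fit_rbf(centers, width):
    """Solve the output weights by least squares and return the training error."""
    Phi = rbf_design(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return np.mean((Phi @ w - y) ** 2), w

# Stand-in for the ACO search over centers and width: sample candidates, keep the best.
best_err, best_params = np.inf, None
for _ in range(200):
    centers = rng.uniform(-3, 3, size=10)
    width = rng.uniform(0.2, 1.5)
    err, w = fit_rbf(centers, width)
    if err < best_err:
        best_err, best_params = err, (centers, width, w)

print("best mean squared error:", round(best_err, 4))
```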

Findings

The results show that the actual error of the ACO-optimized RBF neural network is 0.249, which differs from the target error by 0.149, indicating that the optimized algorithm makes the calculation more accurate. The fitting results of the plain RBF neural network and the ACO-optimized RBF neural network for a nonlinear function are compared. Compared with the other algorithms, the ACO-optimized RBF neural network has an error of 0.249, a running time of 2.212 s and 36 iterations, far fewer than the other two algorithms.

Originality/value

The optimized algorithm has better spatial mapping and generalization ability and achieves higher accuracy with short training. Therefore, the ACO-optimized RBF neural network algorithm designed in this exploration predicts Internet financial market risk with high accuracy.

Details

Journal of Enterprise Information Management, vol. 35 no. 4/5
Type: Research Article
ISSN: 1741-0398

