Search results

1 – 10 of 433
Article
Publication date: 2 June 2021

Emre Kiyak and Gulay Unal

Abstract

Purpose

The paper aims to address visual target tracking based on deep learning. Four deep learning tracking models are developed and compared with one another to prevent collisions and to achieve target tracking in autonomous aircraft.

Design/methodology/approach

First, to follow the visual target, detection methods were applied and then tracking methods were examined. Four models were developed: deep convolutional neural networks (DCNN), deep convolutional neural networks with fine-tuning (DCNNFN), transfer learning with deep convolutional neural network (TLDCNN) and fine-tuning deep convolutional neural network with transfer learning (FNDCNNTL).
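
The abstract does not give the underlying architecture, so the following PyTorch sketch only illustrates the four training regimes the model names imply: training from scratch (DCNN), fine-tuning a scratch-trained network from a saved checkpoint (DCNNFN), transfer learning with a frozen pretrained backbone (TLDCNN) and transfer learning followed by full fine-tuning (FNDCNNTL). The ResNet-18 backbone, the two-class head and the checkpoint path are assumptions for illustration.

    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_CLASSES = 2  # assumption: target vs. non-target

    def build_model(variant: str) -> nn.Module:
        if variant == "DCNN":
            # Trained from scratch: random weights, all layers trainable.
            model = models.resnet18(weights=None)
            model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
        elif variant == "DCNNFN":
            # Fine-tuning the scratch-trained DCNN from a saved checkpoint
            # ("dcnn.pt" is a hypothetical path).
            model = models.resnet18(weights=None)
            model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
            model.load_state_dict(torch.load("dcnn.pt"))
        elif variant == "TLDCNN":
            # Transfer learning: freeze the pretrained backbone, train only the head.
            model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
            for p in model.parameters():
                p.requires_grad = False
            model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
        elif variant == "FNDCNNTL":
            # Transfer learning plus fine-tuning: pretrained start, all layers trainable.
            model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
            model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
        else:
            raise ValueError(variant)
        return model

    model = build_model("FNDCNNTL")
    optimizer = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=1e-4)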

Findings

The training time of DCNN was 9 min 33 s, with an accuracy of 84%. For DCNNFN, the training time was 4 min 26 s and the accuracy was 91%. The training of TLDCNN took 34 min 49 s and the accuracy was 95%. With FNDCNNTL, the training time was 34 min 33 s and the accuracy was nearly 100%.

Originality/value

Compared to accuracies reported in the literature, which range from 89.4% to 95.6%, FNDCNNTL achieved better results in this paper.

Details

Aircraft Engineering and Aerospace Technology, vol. 93 no. 4
Type: Research Article
ISSN: 1748-8842

Article
Publication date: 12 April 2019

Darlington A. Akogo and Xavier-Lewis Palmer

Abstract

Purpose

Computer vision for automated analysis of cells and tissues usually includes extracting features from images before analyzing such features via various machine learning and machine vision algorithms. The purpose of this work is to explore and demonstrate the ability of a convolutional neural network (CNN) to classify cells pictured via brightfield microscopy without the need for any feature extraction, using a minimum of images, improving workflows that involve cancer cell identification.

Design/methodology/approach

The methodology involved a quantitative measure of the performance of a convolutional neural network in distinguishing between two cancer cell lines. The authors trained, validated and tested their 6-layer CNN on 1,241 images of the MDA-MB-468 and MCF7 breast cancer cell lines in an end-to-end fashion, allowing the system to distinguish between the two different cancer cell types.
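
As a rough illustration of what such an end-to-end classifier can look like, the sketch below builds a 6-layer CNN in PyTorch that maps a brightfield image directly to one of the two cell lines; the 128 x 128 grayscale input size and the channel widths are assumptions, not the authors' published architecture.

    import torch.nn as nn

    # Input assumed to be (batch, 1, 128, 128) grayscale brightfield crops.
    cell_cnn = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # layer 1
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # layer 2
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # layer 3
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # layer 4
        nn.Flatten(),
        nn.Linear(64 * 8 * 8, 128), nn.ReLU(),                        # layer 5
        nn.Linear(128, 2),                        # layer 6: MDA-MB-468 vs. MCF7
    )

No hand-crafted feature extraction precedes the network; raw pixels go in and a class comes out, which is the end-to-end property the abstract emphasizes.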

Findings

They obtained a 99% accuracy, providing a foundation for more comprehensive systems.

Originality/value

Systems based on this design can be used to assist cell identification in a variety of contexts, and a practical implication is that they can be deployed to assist biomedical workflows quickly and at low cost. In conclusion, this system demonstrates the potential of end-to-end learning systems for faster and more accurate automated cell analysis.

Details

Journal of Industry-University Collaboration, vol. 1 no. 1
Type: Research Article
ISSN: 2631-357X

Article
Publication date: 23 August 2019

Haiqing He, Ting Chen, Minqiang Chen, Dajun Li and Penggen Cheng

Abstract

Purpose

This paper aims to present a novel approach of image super-resolution based on deep–shallow cascaded convolutional neural networks for reconstructing a clear and high-resolution (HR) remote sensing image from a low-resolution (LR) input.

Design/methodology/approach

The proposed approach directly learns the residuals and mapping between simulated LR and corresponding HR remote sensing images using deep and shallow end-to-end convolutional networks, instead of assuming any specific restoration model. Extra max-pooling and up-sampling are used to achieve a multiscale space by concatenating low- and high-level feature maps, and an HR image is generated by combining the LR input and the residual image. This model ensures a strong response to spatially local input patterns by using a large filter and cascaded small filters. The authors adopt an epoch-based strategy to update the learning rate and boost convergence speed.
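
Two of the ingredients named here, residual learning on top of the LR input and an epoch-based learning-rate schedule, can be sketched as follows in PyTorch. The depth, filter sizes and schedule constants are illustrative assumptions, not the paper's configuration.

    import torch
    import torch.nn as nn

    class ResidualSR(nn.Module):
        def __init__(self, channels=3, feats=64):
            super().__init__()
            # One large filter for a strong response to local patterns...
            self.head = nn.Conv2d(channels, feats, 9, padding=4)
            # ...followed by cascaded small filters.
            self.body = nn.Sequential(*[m for _ in range(4) for m in
                                        (nn.Conv2d(feats, feats, 3, padding=1), nn.ReLU())])
            self.tail = nn.Conv2d(feats, channels, 3, padding=1)

        def forward(self, lr_up):
            # lr_up: the LR image already up-sampled to the HR grid (e.g. bicubic).
            residual = self.tail(self.body(torch.relu(self.head(lr_up))))
            return lr_up + residual  # HR estimate = LR input + learned residual

    model = ResidualSR()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Epoch-based learning-rate update: halve the rate every 20 epochs.
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=20, gamma=0.5)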

Findings

The proposed deep network is trained to reconstruct high-quality images from low-quality inputs using a simulated dataset generated from Set5, Set14, the Berkeley Segmentation Dataset and remote sensing images. Experimental results demonstrate that this model considerably enhances remote sensing images in terms of spatial detail and spectral fidelity and outperforms state-of-the-art super-resolution methods in terms of peak signal-to-noise ratio, structural similarity and visual assessment.

Originality/value

The proposed method can reconstruct an HR remote sensing image from an LR input and significantly improve the quality of remote sensing images in terms of spatial detail and fidelity.

Details

Sensor Review, vol. 39 no. 5
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 4 April 2016

Yang Lu, Shujuan Yi, Yurong Liu and Yuling Ji

Abstract

Purpose

This paper aims to design a multi-layer convolutional neural network (CNN) to solve biomimetic robot path planning problem.

Design/methodology/approach

First, convolution kernels at different scales are obtained using the sparse autoencoder training algorithm; the parameters of the hidden layer form a series of convolutional kernels, and the authors use these kernels to extract the first-layer features. Then, the authors obtain the second-layer features through max-pooling operators, which improve the invariance of the features. Finally, the authors use fully connected layers of neural networks to accomplish the path planning task.
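
A compact sketch of that three-stage pipeline, with an assumed patch size, kernel count and action set, might look like this in PyTorch; an L1 penalty on the hidden code stands in for the sparsity constraint of the sparse autoencoder.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    PATCH = 8        # assumed patch size for autoencoder training
    N_KERNELS = 32   # assumed number of learned kernels
    N_ACTIONS = 4    # assumed outputs, e.g. forward / left / right / stop

    # Sparse autoencoder over flattened image patches.
    encoder = nn.Linear(PATCH * PATCH, N_KERNELS)
    decoder = nn.Linear(N_KERNELS, PATCH * PATCH)

    def sparse_ae_loss(patches):
        code = torch.sigmoid(encoder(patches))
        recon = decoder(code)
        return F.mse_loss(recon, patches) + 1e-3 * code.abs().mean()

    # After training, the hidden-layer weights become first-layer conv kernels.
    kernels = encoder.weight.detach().view(N_KERNELS, 1, PATCH, PATCH)

    def features(image):                          # image: (batch, 1, H, W)
        fmap = F.relu(F.conv2d(image, kernels))   # first-layer features
        return F.max_pool2d(fmap, 4)              # second layer: pooled, more invariant

    classifier = nn.Sequential(nn.Flatten(), nn.LazyLinear(128), nn.ReLU(),
                               nn.Linear(128, N_ACTIONS))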

Findings

The NAO biomimetic robot responds quickly and correctly to the dynamic environment. The simulation experiments show that the deep neural network outperforms the conventional method in both dynamic and static environments.

Originality/value

A new method of deep-learning-based biomimetic robot path planning is proposed. The authors designed a multi-layer CNN that includes max-pooling layers and convolutional kernels; the first- and second-layer features are extracted by these kernels. Finally, the authors use the sparse autoencoder training algorithm to train the CNN so as to accomplish the path planning task of the NAO robot.

Details

Assembly Automation, vol. 36 no. 2
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 8 February 2021

Adireddy Rajasekhar Reddy and Appini Narayana Rao

Abstract

Purpose

In modern technology, wireless sensor networks (WSNs) are among the most promising solutions for improved reliability, object tracking, remote monitoring and more, all of which depend directly on the sensor nodes. Received signal strength indication (RSSI) is a main challenge in sensor networks, as it fully depends on distance measurement. Traditional models based on learning algorithms address error correction and distance measurement and improve accuracy, but most existing models cannot protect the user's data from unknown or malicious data during signal transmission.

Design/methodology/approach

This paper presents a deep convolutional neural network (DCNN), adapted from machine learning, to identify ranging problems in deep sensor networks and to overcome the problem of localizing unknown sensor nodes in WSNs, using instance parameters of the elephant herding optimization (EHO) technique to optimize the localization problem.
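
The abstract leaves the EHO details implicit; the sketch below shows the two standard EHO operators (clan updating toward the matriarch and separation of the worst member) applied to a placeholder localization objective built from assumed anchor positions and RSSI-derived ranges. None of the constants are from the paper.

    import numpy as np

    def eho(fitness, n_clans=5, clan_size=10, dim=2, iters=100,
            alpha=0.5, beta=0.1, lo=0.0, hi=100.0):
        clans = np.random.uniform(lo, hi, (n_clans, clan_size, dim))
        for _ in range(iters):
            for c in range(n_clans):
                f = np.array([fitness(x) for x in clans[c]])
                best = clans[c][f.argmin()].copy()
                worst = f.argmax()
                center = clans[c].mean(axis=0)
                # Clan updating: members move toward the clan best (matriarch).
                clans[c] += alpha * np.random.rand(clan_size, dim) * (best - clans[c])
                clans[c][f.argmin()] = beta * center   # matriarch tracks the clan center
                # Separating: the worst elephant is re-initialized randomly.
                clans[c][worst] = np.random.uniform(lo, hi, dim)
        flat = clans.reshape(-1, dim)
        return flat[np.argmin([fitness(x) for x in flat])]

    # Placeholder objective: squared error between RSSI-derived ranges and the
    # distances from a candidate position to known anchor nodes (all assumed).
    anchors = np.array([[0.0, 0.0], [100.0, 0.0], [50.0, 100.0]])
    ranges = np.array([60.0, 55.0, 45.0])
    estimate = eho(lambda p: ((np.linalg.norm(anchors - p, axis=1) - ranges) ** 2).sum())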

Findings

In the proposed method, the signal propagation properties can be extracted automatically from the image data and the RSSI values. The rest of the manuscript shows that EHO achieves better distance estimation accuracy, more localized nodes and a larger transmission range than traditional algorithms. EHO has also been proposed as one of the main tools to promote a transformation from unsustainable to sustainable development, as it reduces the material intensity of goods and services.

Originality/value

The proposed technique is compared with existing systems to demonstrate its efficiency. The simulation results indicate that the proposed methodology achieves more consistent and accurate position estimates for the unknown nodes and the target node in the WSN domain than existing methods.

Details

International Journal of Pervasive Computing and Communications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 29 April 2021

Omobolanle Ruth Ogunseiju, Johnson Olayiwola, Abiola Abosede Akanmu and Chukwuma Nnaji

Abstract

Purpose

Construction action recognition is essential to efficiently manage productivity, health and safety risks, which can be achieved by tracking and monitoring construction work. This study aims to examine the performance of a variant of deep convolutional neural networks (CNNs) for recognizing actions of construction workers from signal images of time-series data.

Design/methodology/approach

This paper adopts Inception v1 to classify actions involved in carpentry and painting activities from images of motion data. Augmented time-series data from wearable sensors attached to workers' lower arms are converted to signal images to train an Inception v1 network. The performance of Inception v1 is compared with that of the highest performing supervised learning classifier, k-nearest neighbor (KNN).
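
A minimal sketch of those two stages follows, with an assumed window shape, image size and action count; torchvision's googlenet is the Inception v1 architecture, and the KNN baseline comes from scikit-learn. The signal-image rendering here is one plausible reading, not the authors' exact conversion.

    import numpy as np
    import torch
    import torch.nn as nn
    from torchvision import models
    from sklearn.neighbors import KNeighborsClassifier

    def to_signal_image(window: np.ndarray, size: int = 224) -> torch.Tensor:
        # window: (n_samples, 3) lower-arm acceleration; scale each axis to
        # [0, 1] and stretch it across the image width.
        w = (window - window.min(0)) / (window.max(0) - window.min(0) + 1e-8)
        img = torch.from_numpy(w.T).float().unsqueeze(1)        # (3, 1, n)
        img = torch.nn.functional.interpolate(img, size=size)   # (3, 1, 224)
        return img.expand(3, size, size).contiguous()           # (3, 224, 224)

    # Inception v1 (GoogLeNet) with a new head; 4 is an assumed action count.
    net = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT)
    net.fc = nn.Linear(net.fc.in_features, 4)

    # Shallow baseline, trained on flattened windows rather than images.
    knn = KNeighborsClassifier(n_neighbors=5)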

Findings

Results show that the performance of the Inception v1 network improved when trained with signal images of the augmented data, but at a high computational cost. The Inception v1 network and KNN achieved accuracies of 95.2% and 99.8%, respectively, when trained with the 50-fold augmented carpentry dataset. The accuracies of Inception v1 and KNN with the 10-fold augmented painting dataset are 95.3% and 97.1%, respectively.

Research limitations/implications

Only acceleration data from the lower arms of the two trades were used for action recognition. Each signal image comprises 20 datasets.

Originality/value

Little has been reported on recognizing construction workers' actions from signal images. This study adds value to the existing literature, in particular by providing insights into the extent to which a deep CNN can classify subtasks from patterns in signal images compared to a traditional best performing shallow network.

Details

Smart and Sustainable Built Environment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2046-6099

Article
Publication date: 25 June 2020

Minghua Wei and Feng Lin

Abstract

Purpose

Aiming at the shortcomings of classifying EEG signals generated by tasks that activate the brain's sensorimotor region, such as poor performance, low efficiency and weak robustness, this paper proposes an EEG signal classification method based on multi-dimensional fusion features.

Design/methodology/approach

First, the improved Morlet wavelet is used to extract power spectral density (PSD) feature maps from the EEG signals. Then, spatial-frequency features are extracted from the PSD maps using a three-dimensional convolutional neural network (3DCNN) model. Finally, the spatial-frequency features are fed into bidirectional gated recurrent unit (Bi-GRU) models to extract spatial-frequency-sequential multi-dimensional fusion features for recognizing the brain's sensorimotor region activated task.
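
A skeleton of that 3DCNN-plus-Bi-GRU fusion, with assumed map sizes, channel widths and a three-class (MI/AO/AE) output, might look as follows in PyTorch; it is a sketch of the pipeline shape, not the paper's network.

    import torch
    import torch.nn as nn

    class SpatialFreqSeqNet(nn.Module):
        def __init__(self, n_classes=3):          # assumed: MI / AO / AE
            super().__init__()
            self.cnn3d = nn.Sequential(           # spatial-frequency features
                nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d((1, 4, 4)))
            self.bigru = nn.GRU(16 * 4 * 4, 64, batch_first=True, bidirectional=True)
            self.fc = nn.Linear(2 * 64, n_classes)

        def forward(self, x):
            # x: (batch, time_steps, freq_bins, height, width) stacked PSD maps
            b, t = x.shape[:2]
            feats = self.cnn3d(x.reshape(b * t, 1, *x.shape[2:])).reshape(b, t, -1)
            out, _ = self.bigru(feats)            # adds the sequential dimension
            return self.fc(out[:, -1])            # classify from the final step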

Findings

In the comparative experiments, datasets of motor imagery (MI), action observation (AO) and action execution (AE) tasks are selected to test the classification performance and robustness of the proposed algorithm. In addition, the impact of the extracted features on the sensorimotor region and on the classification processing is analyzed by visualization during the experiments.

Originality/value

The experimental results show that the proposed algorithm extracts the corresponding brain activation features for different action-related tasks, achieving more stable classification performance on AO/MI/AE tasks and the best robustness on EEG signals from different subjects.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 2
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 24 November 2020

Changro Lee and Key-Ho Park

Abstract

Purpose

Most prior attempts at real estate valuation have focused on the use of metadata such as size and property age, neglecting the fact that the building workmanship in the construction of a house is also a key factor in the estimation of house prices. Building workmanship, such as exterior walls and floor tiling, corresponds to the visual attributes of a house, and it is difficult to capture and evaluate such attributes efficiently through classical models like regression analysis. A deep learning approach is therefore taken in the valuation process to utilize this visual information.

Design/methodology/approach

The authors propose a two-input neural network comprising a multilayer perceptron and a convolutional neural network that can utilize both metadata and the visual information from images of the front view of the house.
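
The two-input design can be sketched directly: a CNN branch embeds the front-view photograph, an MLP branch embeds the metadata, and the two embeddings are concatenated before a regression head. The ResNet-18 image branch and the metadata width of 10 fields are assumptions, not the authors' configuration.

    import torch
    import torch.nn as nn
    from torchvision import models

    class TwoInputValuer(nn.Module):
        def __init__(self, n_meta=10):            # assumed number of metadata fields
            super().__init__()
            cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
            cnn.fc = nn.Identity()                # 512-dim image embedding
            self.cnn = cnn
            self.mlp = nn.Sequential(nn.Linear(n_meta, 64), nn.ReLU(),
                                     nn.Linear(64, 64), nn.ReLU())
            self.head = nn.Sequential(nn.Linear(512 + 64, 64), nn.ReLU(),
                                      nn.Linear(64, 1))   # predicted price

        def forward(self, image, meta):
            return self.head(torch.cat([self.cnn(image), self.mlp(meta)], dim=1))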

Findings

The authors applied the two-input neural network to Guri City in Gyeonggi Province, South Korea, as a case study and found that the accuracy of house price estimations can be improved by employing image information along with metadata.

Originality/value

Few studies have considered the impact of building workmanship in the valuation process. The authors reveal that using both photographs and metadata enhances the accuracy of house price estimation.

Details

Data Technologies and Applications, vol. 55 no. 2
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 30 September 2020

Li Xiaoling

Abstract

Purpose

To improve the weak recognition accuracy and robustness of classification algorithms for brain-computer interfaces (BCIs), this paper proposes a novel classification algorithm for motor imagery based on temporal and spatial characteristics extracted by a convolutional neural network (TS-CNN) model.

Design/methodology/approach

According to the proposed algorithm, a five-layer neural network model was constructed to classify the electroencephalogram (EEG) signals. First, the author designed a motor imagery-based BCI experiment in which four subjects were recruited for the recording of EEG signals. Then, after the EEG signals were preprocessed, their temporal and spatial characteristics were extracted by longitudinal and transverse convolutional kernels, respectively. Finally, the classification of motor imagery was completed using two fully connected layers.
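
The longitudinal/transverse kernel idea can be sketched as a small PyTorch network: one kernel slides along the time axis of each electrode (temporal characteristics), the other spans all electrodes at a time step (spatial characteristics), and two fully connected layers finish the classification. The electrode count, window length and class count are assumptions.

    import torch.nn as nn

    N_ELECTRODES, N_SAMPLES, N_CLASSES = 22, 256, 2   # assumed recording setup

    # Input assumed to be (batch, 1, N_ELECTRODES, N_SAMPLES).
    ts_cnn = nn.Sequential(
        # Longitudinal (temporal) kernel: slides along the time axis.
        nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)), nn.ReLU(),
        # Transverse (spatial) kernel: spans all electrodes per time step.
        nn.Conv2d(16, 32, kernel_size=(N_ELECTRODES, 1)), nn.ReLU(),
        nn.AvgPool2d((1, 8)),
        nn.Flatten(),
        # Two fully connected layers complete the motor-imagery classification.
        nn.Linear(32 * (N_SAMPLES // 8), 128), nn.ReLU(),
        nn.Linear(128, N_CLASSES),
    )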

Findings

To validate the classification performance and efficiency of the proposed algorithm, comparative experiments with state-of-the-art algorithms were conducted. Experimental results show that the proposed TS-CNN model achieves the best performance and efficiency in the classification of motor imagery, as reflected in the accuracy, precision, recall, ROC curve and F-score indexes.

Originality/value

The proposed TS-CNN model accurately recognizes the EEG signals for different motor imagery tasks and provides a theoretical basis and technical support for the application of BCI control systems in the field of rehabilitation exoskeletons.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 4
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 5 May 2021

Haina Song, Shengpei Zhou, Zhenting Chang, Yuejiang Su, Xiaosong Liu and Jingfeng Yang

Abstract

Purpose

Autonomous driving depends on the collection, processing and analysis of environmental information and vehicle information. Environmental perception and processing are important prerequisites for the safe self-driving of vehicles; they involve road boundary detection, vehicle detection and pedestrian detection using sensors such as laser rangefinders, video cameras and vehicle-borne radar.

Design/methodology/approach

Subject to various environmental factors, the data clock information is often out of sync because of different data acquisition frequencies, which makes data fusion difficult. In this study, according to practical requirements, a multi-sensor environmental perception collaborative method was first proposed; then, based on the principles of target priority, large-scale priority, moving-target priority and difference priority, a multi-sensor data fusion optimization algorithm based on a convolutional neural network was proposed.
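
Before any CNN-based fusion, streams sampled at different frequencies have to be brought onto a common clock, and the four priority rules have to be made operational. The sketch below shows one plausible reading of those two steps; the interpolation strategy and the numeric weights are assumptions, not the paper's values.

    import numpy as np

    def align_to_common_clock(timestamps, values, common_t):
        # Resample one sensor's stream onto the shared time base so streams
        # with different acquisition frequencies can be fused sample-by-sample.
        return np.interp(common_t, timestamps, values)

    PRIORITY_WEIGHTS = {          # assumed relative weights for the four rules
        "target": 1.0, "moving": 0.9, "large_scale": 0.8, "difference": 0.7}

    def detection_priority(detection_flags):
        # detection_flags: dict of booleans produced by upstream detectors,
        # e.g. {"target": True, "moving": True}.
        return sum(w for k, w in PRIORITY_WEIGHTS.items() if detection_flags.get(k))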

Findings

The average unload scheduling delay of the algorithm was measured for test data before and after optimization under different network transmission rates. With increasing network transmission rate and processing capacity, the unload scheduling delay decreased after optimization, and the test results came closest to the optimal solution, indicating the excellent performance of the optimization algorithm and its adaptivity to different environments.

Originality/value

The results showed that the proposed method significantly improved the redundancy and fault tolerance of the system, thus ensuring fast and correct decision-making during driving.

Details

Assembly Automation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0144-5154
