Search results

1 – 10 of over 1000
Article
Publication date: 13 January 2022

Jiang Daqi, Wang Hong, Zhou Bin and Wei Chunfeng

Abstract

Purpose

This paper aims to save the time spent on manufacturing the data set and to make the intelligent grasping system easy to deploy in a practical industrial environment. Owing to the accuracy and robustness of the convolutional neural network, the grasping operation achieves a high success rate.

Design/methodology/approach

The proposed system comprises two different convolutional neural network (CNN) algorithms used in different stages and a binocular eye-in-hand system on the end effector, which detects the position and orientation of the workpiece. Both algorithms are trained on data sets containing images and annotations, which are generated automatically by the proposed method.

Findings

The approach can be successfully applied to the standard position-controlled robots common in industry. The algorithm performs excellently in terms of elapsed time: processing a 256 × 256 image takes less than 0.1 s without relying on high-performance GPUs. The approach is validated in a series of grasping experiments. This method frees workers from monotonous work and improves factory productivity.

Originality/value

The authors propose a novel neural network whose performance is shown to be excellent. Moreover, experimental results demonstrate that the proposed second level is extraordinarily robust to environmental variations. The data sets are generated automatically, which saves the time spent on manufacturing them and makes the intelligent grasping system easy to deploy in a practical industrial environment. Owing to the accuracy and robustness of the convolutional neural network, the grasping operation achieves a high success rate.

Details

Assembly Automation, vol. 42 no. 2
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 29 August 2022

Jianbin Xiong, Jinji Nie and Jiehao Li

Abstract

Purpose

This paper primarily aims to review convolutional neural network (CNN)-based eye control systems. The performance of CNNs on big data has driven the development of eye control systems; therefore, a review of eye control systems based on CNNs is helpful for future research.

Design/methodology/approach

This paper first covers the fundamentals of the eye control system as well as the fundamentals of CNNs. Second, the standard CNN model and the target detection model are summarized. The CNN gaze estimation approaches and models used in eye control systems are then described and summarized. Finally, progress in gaze estimation for eye control systems is discussed and anticipated.

Findings

The eye control system achieves its control effect using gaze estimation technology, which focuses on the features and information of the eyeball, eye movement and gaze, among other things. Traditional eye control systems adopt pupil monitoring, pupil positioning, the Hough algorithm and other methods. This study focuses on CNN-based eye control systems. First, the authors present the CNN model, which is effective in image identification, target detection and tracking. Furthermore, CNN-based eye control systems are separated into three categories: semantic information, monocular/binocular and full-face. Finally, three challenges linked to the development of a CNN-based eye control system are discussed, along with possible solutions.

Originality/value

This research can provide a theoretical and engineering basis for the eye control system platform. In addition, it summarizes the ideas of predecessors to support future research.

Details

Assembly Automation, vol. 42 no. 5
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 2 June 2021

Emre Kiyak and Gulay Unal

Abstract

Purpose

The paper aims to address a tracking algorithm based on deep learning; four deep learning tracking models were developed and compared with each other to prevent collisions and to achieve target tracking in autonomous aircraft.

Design/methodology/approach

First, detection methods were used to follow the visual target, and then the tracking methods were examined. Four models were developed: deep convolutional neural network (DCNN), deep convolutional neural network with fine-tuning (DCNNFN), transfer learning with deep convolutional neural network (TLDCNN) and fine-tuning deep convolutional neural network with transfer learning (FNDCNNTL).

Findings

Training DCNN took 9 min 33 s, with an accuracy of 84%. For DCNNFN, training took 4 min 26 s, with an accuracy of 91%. Training TLDCNN took 34 min 49 s, with an accuracy of 95%. With FNDCNNTL, training took 34 min 33 s and the accuracy was nearly 100%.

Originality/value

Compared to the results in the literature, which range from 89.4% to 95.6%, FNDCNNTL yielded better results in this paper.

Details

Aircraft Engineering and Aerospace Technology, vol. 93 no. 4
Type: Research Article
ISSN: 1748-8842

Article
Publication date: 4 July 2023

Karim Atashgar and Mahnaz Boush

Abstract

Purpose

When a process experiences an out-of-control condition, identification of the change point can lead practitioners to an effective root cause analysis. The change point is the time at which a special cause manifests itself in the process. In statistical process monitoring, when the chart signals an out-of-control condition, change point analysis is an important step in the root cause analysis of the process. This paper proposes an artificial neural network model to identify the change point of a multistage process with the cascade property, in the case that the process is properly modeled by a simple linear profile.

Design/methodology/approach

In practice, many processes can be modeled by a functional relationship rather than by a single random variable or a random vector. This modeling approach is referred to as a profile in the statistical process control literature. In this paper, two models, based on the multilayer perceptron (MLP) and the convolutional neural network (CNN), are proposed for identifying the change point of the profile of a multistage process.
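
For intuition, the change-point notion can be illustrated with the classical maximum-likelihood estimator for a step change in the mean, a common baseline; this sketch is not the authors' neural network approach, which handles profiles of multistage processes:

```python
import numpy as np

def estimate_change_point(x):
    """Classical maximum-likelihood estimate of a step change in the mean:
    choose the split point that maximises the between-segment contrast."""
    n = len(x)
    best_t, best_stat = 1, -np.inf
    for t in range(1, n):
        m1, m2 = x[:t].mean(), x[t:].mean()
        stat = t * (n - t) / n * (m1 - m2) ** 2  # scaled squared mean shift
        if stat > best_stat:
            best_t, best_stat = t, stat
    return best_t

# A process in control for 20 observations, then shifted upward by 3 units
signal = np.r_[np.zeros(20), np.full(20, 3.0)]
estimate_change_point(signal)  # → 20
```

A neural model plays the same role as `estimate_change_point` but learns the mapping from monitored observations to the change time instead of relying on a closed-form statistic.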

Findings

The capabilities of the proposed models are evaluated and compared using several numerical scenarios. The numerical analysis indicates that both proposed models identify the change point effectively in different scenarios. The comparative sensitivity analysis shows that the proposed convolutional network is superior to the MLP network.

Originality/value

To the best of the authors' knowledge, this is the first time that: (1) A model is proposed to identify the change point of the profile of a multistage process. (2) A convolutional neural network is modeled for identifying the change point of an out-of-control condition.

Details

International Journal of Quality & Reliability Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 1 March 2022

Yanwen Yang, Yuping Jiang, Qingqi Zhang, Fengyuan Zou and Lei Du

Abstract

Purpose

Sorting suits by button arrangement is an important means of style classification. However, because different ways of wearing a suit cause the buttons to be easily occluded, traditional identification methods struggle to recognize the details of suits and the recognition accuracy is not ideal. The purpose of this paper is to solve the problem of fine-grained classification of suits by button arrangement. Taking men's suits as an example, a method combining a coordinate position discrimination algorithm with the faster region-based convolutional neural network (R-CNN) algorithm is proposed to achieve accurate batch classification of suit styles under different dressing modes.

Design/methodology/approach

The suit button detection algorithm proposed in this paper combines the faster R-CNN algorithm with a coordinate position discrimination algorithm. First, a small sample base was established, which includes six suit styles in different dressing states. Second, buttons and buttonholes in the images were marked, and the image features were extracted by a residual network to identify the objects; the anchor regression coordinates in the samples were obtained through convolution, pooling and other operations. Finally, the positional relationship between the coordinates of buttons and buttonholes was used to accurately distinguish suit styles under different dressing modes, eliminating the erroneous results of direct classification by the network and achieving accurate classification.
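
As an illustration of coordinate-position discrimination, detected button coordinates alone can already separate some styles. The rule below is a hypothetical simplification for a single geometric cue, not the authors' algorithm, and `x_tol` is an assumed pixel tolerance:

```python
def classify_by_button_columns(button_xs, x_tol=10.0):
    """Hypothetical coordinate-position rule: buttons clustered in one
    vertical column suggest a single-breasted style; a wide horizontal
    spread suggests two columns, i.e. a double-breasted style.

    button_xs: x-coordinates of button centres returned by a detector."""
    if not button_xs:
        return "unknown"
    spread = max(button_xs) - min(button_xs)
    return "single-breasted" if spread <= x_tol else "double-breasted"

classify_by_button_columns([102.0, 100.0, 104.0])       # one column
classify_by_button_columns([80.0, 160.0, 82.0, 158.0])  # two columns
```

The paper's method applies this kind of geometric reasoning to button and buttonhole coordinate pairs to correct misclassifications caused by occlusion.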

Findings

The experimental results show that this method can accurately classify suits based on small samples, with a recognition accuracy of 95.42%. It effectively solves the problem of the machine misjudging suit style because of occluded buttons, providing an effective method for the fine-grained classification of suit styles.

Originality/value

A method combining a coordinate position discrimination algorithm with a convolutional neural network is proposed for the first time to realize fine-grained classification of suit styles. It solves the problem of machine misreading, which is easily caused by buttons being occluded in different suits.

Details

International Journal of Clothing Science and Technology, vol. 34 no. 4
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 23 December 2022

Jinchao Huang

Abstract

Purpose

Recently, the convolutional neural network (ConvNet) has found wide application in the classification of motor imagery electroencephalogram (EEG) signals. However, EEG signals have a low signal-to-noise ratio because they are collected under the interference of noise, and the conventional ConvNet model cannot directly solve this problem. This study aims to address this issue.

Design/methodology/approach

To solve this problem, this paper adopts a novel residual shrinkage block (RSB) to construct the ConvNet model (RSBConvNet). During feature extraction from EEG signals, the proposed RSBConvNet suppresses the noise component in the signals and improves the classification accuracy of motor imagery. In constructing RSBConvNet, the author applies a soft thresholding strategy to suppress features unrelated to motor imagery in the EEG signals. Soft thresholding is inserted into the residual block (RB), and a threshold suited to the current EEG signal distribution is learned by minimizing the loss function. Therefore, during feature extraction for motor imagery, the proposed RSBConvNet de-noises the EEG signals and improves the discriminability of the classification features.
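
The soft thresholding operation at the heart of the residual shrinkage block can be sketched in a few lines. Here the threshold `tau` is fixed for illustration; in the paper it is learned inside the block by minimizing the loss function:

```python
import numpy as np

def soft_threshold(x, tau):
    """Soft thresholding: shrink every value toward zero by tau,
    zeroing out anything whose magnitude falls below tau."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

# A feature vector with weak (noise-like) and strong components
features = np.array([0.05, -0.03, 0.9, -1.2])
denoised = soft_threshold(features, tau=0.1)
# weak components are zeroed; strong ones are shrunk by tau
```

Applied channel-wise inside a residual block, this suppresses small, noise-dominated activations while preserving the strong features that carry the motor imagery signal.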

Findings

Comparative experiments and ablation studies were conducted on two public benchmark datasets. Compared with conventional ConvNet models, the proposed RSBConvNet model shows clear improvements in motor imagery classification accuracy and Kappa coefficient. Ablation studies also demonstrate the de-noising ability of the RSBConvNet model. Moreover, different parameters and computational methods of the RSBConvNet model were tested on the classification of motor imagery.

Originality/value

Based on the experimental results, the RSBConvNet constructed in this paper achieves excellent recognition accuracy for motor imagery brain–computer interfaces (MI-BCI) and can be used in further applications for online MI-BCI.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 16 no. 3
Type: Research Article
ISSN: 1756-378X

Open Access
Article
Publication date: 12 April 2019

Darlington A. Akogo and Xavier-Lewis Palmer

Abstract

Purpose

Computer vision for automated analysis of cells and tissues usually involves extracting features from images before analyzing those features via various machine learning and machine vision algorithms. The purpose of this work is to explore and demonstrate the ability of a convolutional neural network (CNN) to classify cells pictured via brightfield microscopy without the need for any feature extraction, using a minimum of images, improving workflows that involve cancer cell identification.

Design/methodology/approach

The methodology involved a quantitative measure of the performance of a convolutional neural network in distinguishing between two cancer cell lines. The authors trained, validated and tested their six-layer CNN on 1,241 images of the MDA-MB-468 and MCF7 breast cancer cell lines in an end-to-end fashion, allowing the system to distinguish between the two cancer cell types.

Findings

The authors obtained 99% accuracy, providing a foundation for more comprehensive systems.

Originality/value

The value of this work is that systems based on this design can be used to assist cell identification in a variety of contexts; a practical implication is that such systems can be deployed to assist biomedical workflows quickly and at low cost. In conclusion, this system demonstrates the potential of end-to-end learning systems for faster and more accurate automated cell analysis.

Details

Journal of Industry-University Collaboration, vol. 1 no. 1
Type: Research Article
ISSN: 2631-357X

Article
Publication date: 7 February 2023

Riju Bhattacharya, Naresh Kumar Nagwani and Sarsij Tripathi

Abstract

Purpose

A community demonstrates the unique qualities and relationships between its members that distinguish it from other communities within a network, and network analysis relies heavily on community detection. Beyond traditional spectral clustering and statistical inference methods, deep learning techniques for community detection have grown in popularity because they easily process high-dimensional network data. Graph convolutional neural networks (GCNNs) have received much attention recently and have developed into a promising and widely used method for detecting communities directly on graphs. Inspired by the promising results of graph convolutional networks (GCNs) in analyzing graph-structured data, a novel community graph convolutional network (CommunityGCN) is proposed as a semi-supervised node classification model and compared with recent baseline methods: the graph attention network (GAT), a GCN-based technique for unsupervised community detection and Markov random fields combined with a graph convolutional network (MRFasGCN).

Design/methodology/approach

This work presents the method for identifying communities that combines the notion of node classification via message passing with the architecture of a semi-supervised graph neural network. Six benchmark datasets, namely, Cora, CiteSeer, ACM, Karate, IMDB and Facebook, have been used in the experimentation.

Findings

In the first set of experiments, the scaled normalized average matrix of each node's neighbors' features, including the node itself, was obtained, followed by the weighted average matrix of low-dimensional nodes. In the second set of experiments, the weighted average matrix was forwarded to a two-layer GCN and the activation function for predicting the node class was applied. The results demonstrate that node classification with a GCN can improve the performance of identifying communities in graph datasets.
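
The propagation step described in the findings, a scaled normalized average of each node's neighbor features including the node itself, is the core of a GCN layer. A minimal NumPy sketch (the toy graph and feature matrix are illustrative, not from the paper):

```python
import numpy as np

def gcn_propagate(A, X):
    """One GCN propagation step: symmetrically normalized average of
    each node's neighbor features, including the node itself."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)                    # degrees with self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X

# Path graph 0-1-2 with 2-dimensional node features
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.array([[1., 0.],
              [0., 1.],
              [1., 0.]])
H = gcn_propagate(A, X)  # smoothed node features
```

A full two-layer GCN, as used in the second set of experiments, would wrap this propagation in learned weight matrices and a nonlinearity, then apply a softmax over community labels.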

Originality/value

The experiments reveal that the CommunityGCN approach gives better results, with accuracy, normalized mutual information, F1 and modularity scores of 91.26, 79.9, 92.58 and 70.5 per cent, respectively, for detecting communities in the graph network, much greater than the range of 55.7–87.07 per cent reported in previous literature. Thus, it is concluded that the GCN with node classification models improves accuracy.

Details

Data Technologies and Applications, vol. 57 no. 4
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 23 August 2019

Haiqing He, Ting Chen, Minqiang Chen, Dajun Li and Penggen Cheng

Abstract

Purpose

This paper aims to present a novel approach of image super-resolution based on deep–shallow cascaded convolutional neural networks for reconstructing a clear and high-resolution (HR) remote sensing image from a low-resolution (LR) input.

Design/methodology/approach

The proposed approach directly learns the residuals and the mapping between simulated LR images and their corresponding HR remote sensing images, based on deep and shallow end-to-end convolutional networks, instead of assuming any specific restoration model. Extra max-pooling and up-sampling are used to achieve a multiscale space by concatenating low- and high-level feature maps, and an HR image is generated by combining the LR input with the residual image. This model ensures a strong response to spatially local input patterns by using a large filter and cascaded small filters. The authors adopt an epoch-based strategy to update the learning rate, boosting convergence speed.
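
The global residual idea, in which the network predicts only the missing high-frequency detail that is added back to the up-sampled input, can be sketched as follows. The nearest-neighbor up-sampling and the zero residual are illustrative stand-ins for the learned components:

```python
import numpy as np

def upsample_nearest(img, scale):
    """Nearest-neighbor up-sampling, a stand-in for the learned
    up-sampling path in the cascaded network."""
    return np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)

def reconstruct_hr(lr, residual, scale=2):
    """Global residual learning: HR = up-sampled LR + predicted residual.
    In the paper the residual is produced by the deep-shallow CNN."""
    return upsample_nearest(lr, scale) + residual

lr = np.array([[1.0, 2.0],
               [3.0, 4.0]])
residual = np.zeros((4, 4))  # the network would predict this detail map
hr = reconstruct_hr(lr, residual)
```

Because the network only has to learn the residual detail map rather than the full HR image, training converges faster and low-frequency content is preserved exactly.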

Findings

The proposed deep network is trained to reconstruct high-quality images from low-quality inputs using a simulated dataset generated from Set5, Set14, the Berkeley Segmentation Dataset and remote sensing images. Experimental results demonstrate that this model considerably enhances remote sensing images in terms of spatial detail and spectral fidelity and outperforms state-of-the-art super-resolution (SR) methods in terms of peak signal-to-noise ratio, structural similarity and visual assessment.

Originality/value

The proposed method can reconstruct an HR remote sensing image from an LR input and significantly improve the quality of remote sensing images in terms of spatial detail and fidelity.

Details

Sensor Review, vol. 39 no. 5
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 4 April 2016

Yang Lu, Shujuan Yi, Yurong Liu and Yuling Ji

Abstract

Purpose

This paper aims to design a multi-layer convolutional neural network (CNN) to solve the biomimetic robot path planning problem.

Design/methodology/approach

First, convolution kernels at different scales are obtained using the sparse autoencoder training algorithm; the parameters of the hidden layer are a series of convolution kernels, and the authors use these kernels to extract first-layer features. Then, the authors obtain second-layer features through max-pooling operators, which improve the invariance of the features. Finally, the authors use fully connected layers of neural networks to accomplish the path planning task.
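
The two feature-extraction stages described, convolution with learned kernels followed by max-pooling, can be sketched in NumPy. The kernel here is arbitrary rather than one trained by the sparse autoencoder:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Valid 2-D convolution (cross-correlation) with a single kernel,
    producing one first-layer feature map."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max-pooling: keeps the strongest response in each
    window, giving second-layer features that are translation-invariant."""
    h, w = fmap.shape
    h, w = h - h % size, w - w % size
    return fmap[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
fmap = conv2d_valid(img, np.ones((2, 2)))  # first-layer feature map
pooled = max_pool(img)                     # pooled (invariant) features
```

In the paper, the pooled features from many such kernels are flattened and fed to the fully connected layers that output the path planning decision.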

Findings

The NAO biomimetic robot responds quickly and correctly to the dynamic environment. The simulation experiments show that the deep neural network outperforms the conventional method in both dynamic and static environments.

Originality/value

A new method of deep learning-based biomimetic robot path planning is proposed. The authors designed a multi-layer CNN that includes max-pooling layers and convolution kernels, from which the first- and second-layer features are extracted. Finally, the authors use the sparse autoencoder training algorithm to train the CNN so as to accomplish the path planning task of the NAO robot.

Details

Assembly Automation, vol. 36 no. 2
Type: Research Article
ISSN: 0144-5154
