Search results

1 – 10 of 744
Article
Publication date: 13 January 2022

Jiang Daqi, Wang Hong, Zhou Bin and Wei Chunfeng

Abstract

Purpose

This paper aims to reduce the time spent on constructing the data set and to make the intelligent grasping system easy to deploy in a practical industrial environment. Owing to the accuracy and robustness of the convolutional neural network, the gripping operation achieves a high success rate.

Design/methodology/approach

The proposed system comprises two different convolutional neural network (CNN) algorithms used in different stages and a binocular eye-in-hand system on the end effector, which detects the position and orientation of the workpiece. Both algorithms are trained on data sets containing images and annotations, which are generated automatically by the proposed method.
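The binocular (stereo) measurement such an eye-in-hand system relies on can be illustrated with the standard triangulation relation Z = f·B/d. The following is a minimal sketch under assumed camera constants, not the authors' implementation:

```python
# Illustrative sketch (not the authors' code): depth of a matched workpiece
# feature from a binocular camera pair via stereo triangulation,
# Z = f * B / d, with f the focal length in pixels, B the baseline in
# metres and d the horizontal disparity in pixels. All constants below
# are assumed values for illustration.

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return the depth in metres of one matched feature."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A feature seen 40 px apart by cameras 0.06 m apart with f = 800 px
# lies 1.2 m from the rig.
depth = stereo_depth(800.0, 0.06, 40.0)
```

Larger disparities correspond to nearer features, which is why the gripper pose can be recovered once the two CNN stages have located the workpiece in both views.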

Findings

The approach can be successfully applied to standard position-controlled robots common in industry. The algorithm performs excellently in terms of elapsed time: processing a 256 × 256 image takes less than 0.1 s without relying on high-performance GPUs. The approach is validated in a series of grasping experiments. This method frees workers from monotonous work and improves factory productivity.

Originality/value

The authors propose a novel neural network that is shown to perform excellently. Moreover, experimental results demonstrate that the proposed second level is extraordinarily robust to environmental variations. The data sets are generated automatically, which saves time spent on constructing them and makes the intelligent grasping system easy to deploy in a practical industrial environment. Owing to the accuracy and robustness of the convolutional neural network, the gripping operation achieves a high success rate.

Details

Assembly Automation, vol. 42 no. 2
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 2 June 2021

Emre Kiyak and Gulay Unal

Abstract

Purpose

The paper aims to address tracking based on deep learning: four deep learning tracking models were developed and compared with each other to prevent collisions and to achieve target tracking in autonomous aircraft.

Design/methodology/approach

First, to follow the visual target, the detection methods were used and then the tracking methods were examined. Here, four models (deep convolutional neural networks (DCNN), deep convolutional neural networks with fine-tuning (DCNNFN), transfer learning with deep convolutional neural network (TLDCNN) and fine-tuning deep convolutional neural network with transfer learning (FNDCNNTL)) were developed.
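The distinction among these four variants rests largely on which pretrained layers are retrained. A minimal sketch of that freezing logic, with hypothetical layer names (the paper's exact architectures are not reproduced here):

```python
# Hedged sketch of the transfer-learning vs fine-tuning distinction behind
# the four model names. Transfer learning keeps the pretrained convolutional
# base frozen and trains only the new head; fine-tuning additionally
# unfreezes the top convolutional block. Layer names are made up.

def trainable_mask(layers, unfreeze_last):
    """Mark the last `unfreeze_last` layers trainable, freeze the rest."""
    n = len(layers)
    return {name: i >= n - unfreeze_last for i, name in enumerate(layers)}

base = ["conv1", "conv2", "conv3", "fc"]
pure_transfer = trainable_mask(base, 1)   # only the new head learns
fine_tuned = trainable_mask(base, 2)      # head plus top conv block learn
```

Freezing more layers trains faster but adapts less; unfreezing more adapts better at the cost of training time, which matches the trade-off visible in the reported training times and accuracies.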

Findings

The training of DCNN took 9 min 33 s, and its accuracy was 84%. For DCNNFN, the training time was 4 min 26 s and the accuracy was 91%. The training of TLDCNN took 34 min 49 s and the accuracy was 95%. With FNDCNNTL, the training time was 34 min 33 s and the accuracy was nearly 100%.

Originality/value

Compared with results in the literature ranging from 89.4% to 95.6%, FNDCNNTL achieved better results in this paper.

Details

Aircraft Engineering and Aerospace Technology, vol. 93 no. 4
Type: Research Article
ISSN: 1748-8842

Article
Publication date: 1 March 2022

Yanwen Yang, Yuping Jiang, Qingqi Zhang, Fengyuan Zou and Lei Du

Abstract

Purpose

Sorting suits by button arrangement is an important means of style classification. However, because different ways of wearing a suit cause the buttons to be easily occluded, traditional identification methods struggle to recognize the details of suits, and the recognition accuracy is not ideal. The purpose of this paper is to solve the problem of fine-grained classification of suits by button arrangement. Taking men's suits as an example, a method combining a coordinate position discrimination algorithm with the faster region-based convolutional neural network (R-CNN) algorithm is proposed to achieve accurate batch classification of suit styles under different dressing modes.

Design/methodology/approach

The detection algorithm for suit buttons proposed in this paper combines the faster R-CNN algorithm with a coordinate position discrimination algorithm. First, a small sample base was established, comprising six suit styles in different dressing states. Second, buttons and buttonholes in each image were marked, and image features were extracted by the residual network to identify the objects. The anchor regression coordinates in the sample were obtained through convolution, pooling and other operations. Finally, the positional relation between buttons and buttonholes was used to accurately judge and distinguish suit styles under different dressing ways, eliminating the wrong results of direct classification by the network and achieving accurate classification.
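The final coordinate-judgement step can be sketched as follows; the distance threshold and the x-column clustering rule are assumptions for illustration, not the paper's exact criterion:

```python
# Hypothetical sketch of a coordinate position discrimination step applied
# to detected button/buttonhole centres. The rules here (button-to-buttonhole
# proximity for "fastened", counting distinct x-columns for breast style)
# are illustrative assumptions, not the paper's published criterion.
import math

def near(p, q, tol=10.0):
    """True if two detected centres lie within `tol` pixels of each other."""
    return math.dist(p, q) <= tol

def describe(buttons, buttonholes, tol=10.0):
    """Classify breast style and fastening state from centre coordinates."""
    cols = {round(x / 50) for x, _ in buttons}      # cluster button x-coords
    style = "double-breasted" if len(cols) > 1 else "single-breasted"
    fastened = any(near(b, h, tol) for b in buttons for h in buttonholes)
    return style, fastened

style, fastened = describe(
    buttons=[(100, 200), (100, 260)],
    buttonholes=[(102, 201), (180, 260)],
)
```

A rule of this kind is what lets the system override a network misclassification when a button is occluded: the surviving button-buttonhole geometry still determines the style.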

Findings

The experimental results show that this method can accurately classify suits based on small samples, with a recognition accuracy of 95.42%. It can effectively solve the problem of machine misjudgment of suit style caused by occluded buttons, providing an effective method for fine-grained classification of suit styles.

Originality/value

A method combining a coordinate position discrimination algorithm with a convolutional neural network was proposed for the first time to realize fine-grained classification of suit styles. It solves the problem of machine misreading easily caused by buttons being occluded in differently worn suits.

Details

International Journal of Clothing Science and Technology, vol. 34 no. 4
Type: Research Article
ISSN: 0955-6222

Open Access
Article
Publication date: 12 April 2019

Darlington A. Akogo and Xavier-Lewis Palmer

Abstract

Purpose

Computer vision for automated analysis of cells and tissues usually includes extracting features from images before analyzing those features via various machine learning and machine vision algorithms. The purpose of this work is to explore and demonstrate the ability of a convolutional neural network (CNN) to classify cells pictured via brightfield microscopy without the need for any feature extraction, using a minimum of images, improving workflows that involve cancer cell identification.

Design/methodology/approach

The methodology involved a quantitative measure of the performance of a convolutional neural network in distinguishing between two cancer cell lines. The authors trained, validated and tested their 6-layer CNN on 1,241 images of the MDA-MB-468 and MCF7 breast cancer cell lines in an end-to-end fashion, allowing the system to distinguish between the two different cancer cell types.

Findings

They obtained a 99% accuracy, providing a foundation for more comprehensive systems.

Originality/value

The value lies in the fact that systems based on this design can be used to assist cell identification in a variety of contexts; a practical implication is that such systems can be deployed to assist biomedical workflows quickly and at low cost. In conclusion, this system demonstrates the potential of end-to-end learning systems for faster and more accurate automated cell analysis.

Details

Journal of Industry-University Collaboration, vol. 1 no. 1
Type: Research Article
ISSN: 2631-357X

Article
Publication date: 23 August 2019

Haiqing He, Ting Chen, Minqiang Chen, Dajun Li and Penggen Cheng

Abstract

Purpose

This paper aims to present a novel approach of image super-resolution based on deep–shallow cascaded convolutional neural networks for reconstructing a clear and high-resolution (HR) remote sensing image from a low-resolution (LR) input.

Design/methodology/approach

The proposed approach directly learns the residuals and mapping between simulated LR and their corresponding HR remote sensing images based on deep and shallow end-to-end convolutional networks instead of assuming any specific restored models. Extra max-pooling and up-sampling are used to achieve a multiscale space by concatenating low- and high-level feature maps, and an HR image is generated by combining LR input and the residual image. This model ensures a strong response to spatially local input patterns by using a large filter and cascaded small filters. The authors adopt a strategy based on epochs to update the learning rate for boosting convergence speed.
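The residual formulation can be sketched in plain Python: the high-resolution output is the upsampled low-resolution input plus a predicted residual. Nearest-neighbour upsampling stands in for the paper's up-sampling layers, and the network that would predict the residual is omitted:

```python
# Minimal sketch of residual super-resolution reconstruction: the model
# predicts only a residual image, and HR = upsample(LR) + residual.
# Nearest-neighbour upsampling here is an illustrative stand-in for the
# cascaded network's learned up-sampling; the CNN itself is not shown.

def upsample_nn(lr, scale):
    """Nearest-neighbour upsampling of a 2-D list of pixel values."""
    return [[lr[i // scale][j // scale] for j in range(len(lr[0]) * scale)]
            for i in range(len(lr) * scale)]

def reconstruct(lr, residual, scale=2):
    """HR output = upsampled LR input + predicted residual image."""
    up = upsample_nn(lr, scale)
    return [[up[i][j] + residual[i][j] for j in range(len(up[0]))]
            for i in range(len(up))]

lr = [[1.0, 2.0], [3.0, 4.0]]
residual = [[0.0] * 4 for _ in range(4)]   # a zero residual for illustration
hr = reconstruct(lr, residual)             # 4x4 upsampled image
```

Learning only the residual is what makes training stable here: the network needs to model just the missing high-frequency detail, not the whole image.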

Findings

The proposed deep network is trained to reconstruct high-quality images from low-quality inputs through a simulated dataset, which is generated with Set5, Set14, the Berkeley Segmentation Dataset and remote sensing images. Experimental results demonstrate that this model considerably enhances remote sensing images in terms of spatial detail and spectral fidelity and outperforms state-of-the-art SR methods in terms of peak signal-to-noise ratio, structural similarity and visual assessment.

Originality/value

The proposed method can reconstruct an HR remote sensing image from an LR input and significantly improve the quality of remote sensing images in terms of spatial detail and fidelity.

Details

Sensor Review, vol. 39 no. 5
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 21 December 2021

Shanling Han, Shoudong Zhang, Yong Li and Long Chen

Abstract

Purpose

Intelligent diagnosis of equipment faults can effectively avoid shutdowns caused by those faults and improve equipment safety. At present, the diagnosis of various kinds of bearing fault information, such as the occurrence, location and degree of a fault, can be carried out by machine learning and deep learning and realized through the multiclassification method. However, the multiclassification method is not ideal for distinguishing similar fault categories or for visually representing fault information. To remedy these shortcomings, an end-to-end multilabel classification model is proposed for bearing fault diagnosis.

Design/methodology/approach

In this model, the labels of each bearing are binarized using the binary relevance method. Then, an integrated convolutional neural network and gated recurrent unit (CNN-GRU) network is employed to classify faults. Unlike general CNN networks, the CNN-GRU network adds multiple GRU layers after the convolutional and pooling layers.

Findings

The Paderborn University bearing dataset is utilized to demonstrate the practicability of the model. The experimental results show that the average accuracy on the test set is 99.7%; the proposed network outperforms the multilayer perceptron and a plain CNN in bearing fault diagnosis, and the multilabel classification method is superior to the multiclassification method. Consequently, the model can intuitively classify faults with higher accuracy.

Originality/value

The fault labels of each bearing are assigned according to whether a failure occurred, the fault location, the damage mode and the damage degree, and are then binarized. The multilabel problem is transformed into a binary classification problem for each fault label by the binary relevance method, and the predicted probability of each fault label is output directly in the output layer, which visually distinguishes different fault conditions.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 15 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 4 April 2016

Yang Lu, Shujuan Yi, Yurong Liu and Yuling Ji

Abstract

Purpose

This paper aims to design a multi-layer convolutional neural network (CNN) to solve the biomimetic robot path planning problem.

Design/methodology/approach

First, convolution kernels at different scales are obtained using the sparse autoencoder training algorithm; the hidden-layer parameters form a series of convolutional kernels, and the authors use these kernels to extract first-layer features. Then, the authors obtain second-layer features through max-pooling operators, which improve the invariance of the features. Finally, the authors use fully connected neural network layers to accomplish the path planning task.
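The max-pooling stage and the invariance it provides can be illustrated with a small sketch (pure Python, standing in for the network layer):

```python
# Sketch of non-overlapping 2x2 max pooling over a feature map: a feature
# shifted within its pooling window leaves the pooled output unchanged,
# which is the invariance property the second-layer features rely on.
# This is an illustrative stand-in, not the authors' implementation.

def max_pool(fmap, size=2):
    """Non-overlapping max pooling of a 2-D list of activations."""
    return [[max(fmap[i + di][j + dj] for di in range(size) for dj in range(size))
             for j in range(0, len(fmap[0]), size)]
            for i in range(0, len(fmap), size)]

a = [[0, 9, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 5],
     [0, 0, 0, 0]]
b = [[9, 0, 0, 0],        # same activations, shifted within their windows
     [0, 0, 0, 0],
     [0, 0, 5, 0],
     [0, 0, 0, 0]]
same = max_pool(a) == max_pool(b)   # both pool to [[9, 0], [0, 5]]
```

This tolerance to small displacements is why pooled features remain useful when obstacles in the robot's view move slightly between frames.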

Findings

The NAO biomimetic robot responds quickly and correctly to the dynamic environment. The simulation experiments show that the deep neural network outperforms the conventional method in both dynamic and static environments.

Originality/value

A new method of deep learning-based biomimetic robot path planning is proposed. The authors designed a multi-layer CNN that includes max-pooling layers and convolutional kernels. The first- and second-layer features can be extracted by these kernels. Finally, the authors use the sparse autoencoder training algorithm to train the CNN so as to accomplish the path planning task of the NAO robot.

Details

Assembly Automation, vol. 36 no. 2
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 13 May 2022

Qiang Zhang, Zijian Ye, Siyu Shao, Tianlin Niu and Yuwei Zhao

Abstract

Purpose

The current studies on remaining useful life (RUL) prediction mainly rely on convolutional neural networks (CNNs) and long short-term memories (LSTMs) and do not take full advantage of the attention mechanism, resulting in limited prediction accuracy. To further improve the performance of the above models, this study aims to propose a novel end-to-end RUL prediction framework, called convolutional recurrent attention network (CRAN), to achieve high accuracy.

Design/methodology/approach

The proposed CRAN is a CNN-LSTM-based model that effectively combines the powerful feature extraction ability of CNNs and the sequential processing capability of LSTMs. The channel attention mechanism, spatial attention mechanism and LSTM attention mechanism are incorporated in CRAN, assigning different attention coefficients to the CNN and LSTM. First, features of the bearing vibration data are extracted from both the time and frequency domains. Next, the training and testing sets are constructed. Then, CRAN is trained offline using the training set. Finally, online RUL estimation is performed by applying data from the testing set to the trained CRAN.
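The channel attention idea, assigning a coefficient to each channel, can be sketched as follows; the softmax-of-means weighting is an illustrative stand-in for CRAN's learned attention sub-network:

```python
# Toy sketch of channel attention: each channel of a feature map is rescaled
# by an attention coefficient. Here the coefficients come from a softmax over
# per-channel means, standing in for CRAN's learned attention sub-network;
# this is an assumption for illustration, not the paper's architecture.
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def channel_attention(channels):
    """Weight each channel (a list of activations) by softmax of its mean."""
    means = [sum(c) / len(c) for c in channels]
    weights = softmax(means)
    return [[w * v for v in c] for w, c in zip(weights, channels)]

out = channel_attention([[1.0, 1.0], [3.0, 3.0]])  # second channel dominates
```

Spatial and LSTM attention follow the same pattern over different axes (positions and time steps), which is why the serial arrangement of the three mechanisms can be compared against a parallel one.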

Findings

CNN-LSTM-based models have higher RUL prediction accuracy than CNN-based and LSTM-based models. Using a combination of max pooling and average pooling can reduce the loss of feature information, and in addition, the structure of the serial attention mechanism is superior to the parallel attention structure. Comparing the proposed CRAN with six different state-of-the-art methods, for the predicted results of two testing bearings, the proposed CRAN has an average reduction in the root mean square error of 57.07/80.25%, an average reduction in the mean absolute error of 62.27/85.87% and an average improvement in score of 12.65/6.57%.

Originality/value

This article provides a novel end-to-end rolling bearing RUL prediction framework, which can provide a reference for the formulation of bearing maintenance programs in the industry.

Details

Assembly Automation, vol. 42 no. 3
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 8 February 2021

Adireddy Rajasekhar Reddy and Appini Narayana Rao

Abstract

Purpose

In modern technology, wireless sensor networks (WSNs) are generally among the most promising solutions for better reliability, object tracking, remote monitoring and more, all of which relate directly to the sensor nodes. Received signal strength indication (RSSI) is a main challenge in sensor networks, as it fully depends on distance measurement. Traditional models based on learning algorithms handle error correction and distance measurement and improve accuracy. However, most existing models are not able to protect the user's data from unknown or malicious data during signal transmission.

Design/methodology/approach

This paper presents a deep convolutional neural network (DCNN), adapted from machine learning, to identify problems in deep-ranging sensor networks and to overcome the problem of localizing unknown sensor nodes in WSNs by using instance parameters of the elephant herding optimization (EHO) technique, which is used to optimize the localization problem.
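For background, RSSI-based ranging typically inverts the log-distance path-loss model, which is why localization accuracy hinges on distance measurement. The sketch below uses illustrative constants, not values from the paper:

```python
# Background sketch: under the standard log-distance path-loss model,
# distance is recovered from an RSSI reading (dBm) as
#     d = d0 * 10 ** ((rssi_d0 - rssi) / (10 * n))
# where rssi_d0 is the reading at reference distance d0 and n is the
# path-loss exponent. The constants here are illustrative assumptions.

def rssi_to_distance(rssi, rssi_d0=-40.0, d0=1.0, n=2.0):
    """Estimate distance in metres from an RSSI reading in dBm."""
    return d0 * 10 ** ((rssi_d0 - rssi) / (10.0 * n))

d = rssi_to_distance(-60.0)   # 20 dB below the 1 m reference reading
```

Because small RSSI errors translate into large distance errors at long range, the paper's learned model and EHO refinement target exactly this estimation step.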

Findings

In the proposed method, the signal propagation properties can be extracted automatically from the image data and the RSSI values. The rest of the manuscript shows that EHO achieves better distance estimation accuracy, more localized nodes and a larger transmission range than traditional algorithms.

Originality/value

The proposed technique is compared with existing systems to demonstrate its efficiency. The simulation results indicate that the proposed methodology can achieve more constant and accurate position states of the unknown nodes and the target node in the WSN domain than the existing methods.

Details

International Journal of Pervasive Computing and Communications, vol. 18 no. 2
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 15 November 2021

Priyanka Yadlapalli, D. Bhavana and Suryanarayana Gunnam

Abstract

Purpose

Computed tomography (CT) scans can provide valuable information in the diagnosis of lung diseases. To detect the location of cancerous lung nodules, this work uses novel deep learning methods. The majority of early investigations used CT, magnetic resonance and mammography imaging. Using appropriate procedures, specialist doctors analyse these images to discover and diagnose the various stages of lung cancer. All of these methods of discovering and detecting cancer are time-consuming, expensive and stressful for patients. To address these issues, appropriate deep learning approaches were used to analyse medical images, including CT scan images.

Design/methodology/approach

Radiologists currently employ chest CT scans to detect lung cancer at an early stage. In certain situations, radiologists' perception plays a critical role in identifying lung melanoma, which may otherwise be incorrectly detected. Deep learning is a new, capable and influential approach for predicting from medical images. In this paper, the authors employed deep transfer learning algorithms for intelligent classification of lung nodules. Convolutional neural networks (VGG16, VGG19, MobileNet and DenseNet169) are used, with the input and output layers adapted to a chest CT scan image dataset.

Findings

The collection includes normal chest CT scan images as well as images from two kinds of lung cancer: squamous cell carcinoma and adenocarcinoma-affected chest CT scans. According to the confusion matrix results, the VGG16 transfer learning technique has the highest accuracy in lung cancer classification at 91.28%, followed by VGG19 with 89.39%, MobileNet with 85.60% and DenseNet169 with 83.71%; the analysis was performed in Google Colaboratory.
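The reported accuracies are read off a confusion matrix as its trace divided by the total count. A sketch with made-up counts (not the paper's results):

```python
# Sketch of the accuracy computation behind confusion-matrix results for a
# three-class problem (normal, squamous, adenocarcinoma). The matrix values
# below are invented for illustration, not the paper's figures.

def accuracy(confusion):
    """Fraction of correctly classified samples in a square confusion matrix."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

cm = [[45, 3, 2],       # rows: true class, columns: predicted class
      [4, 40, 6],
      [1, 5, 44]]
acc = accuracy(cm)      # (45 + 40 + 44) / 150
```

Comparing the four networks then reduces to computing this quantity for each model's confusion matrix on the same test split.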

Originality/value

The proposed approach using VGG16 maximizes the classification accuracy when compared to VGG19, MobileNet and DenseNet169. The results are validated by computing the confusion matrix for each network type.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 15 no. 3
Type: Research Article
ISSN: 1756-378X
