Unmanned machine vision system for automated recognition of mechanical parts

Tushar Jain (Department of Mechanical Engineering, National Institute of Technology Kurukshetra, Kurukshetra, India) (Department of Mechanical Engineering, Meerut Institute of Engineering and Technology, Meerut, India)
Meenu Gupta (Department of Mechanical Engineering, National Institute of Technology Kurukshetra, Kurukshetra, India)
H.K. Sardana (Central Scientific Instruments Organisation CSIR, Chandigarh, India)

International Journal of Intelligent Unmanned Systems

ISSN: 2049-6427

Publication date: 8 October 2018

Abstract

Purpose

The field of machine vision, or computer vision, has been growing at a fast pace. Unlike most established fields, its growth has been in both the breadth and the depth of concepts and techniques. Machine vision techniques are being applied in areas ranging from medical imaging to remote sensing, industrial inspection to document processing, and nanotechnology to multimedia databases. The goal of a machine vision system is to create a model of the real world from images. Computer vision recognition has attracted the attention of researchers in many application areas and has been used to solve a wide range of problems. The purpose of this paper is to consider the recognition of objects manufactured in the mechanical industry. Mechanically manufactured parts are difficult to recognize because of variability introduced by the manufacturing process, including machine malfunction, tool wear and variations in raw material. This paper considers the problem of recognizing and classifying such parts. RGB images of five objects are used as input. The Fourier descriptor technique is used for recognition of the objects, and an artificial neural network (ANN) is used to classify the five objects. The objects are imaged in different orientations so that recognition is invariant to rotation, translation and scaling. A feed-forward neural network with a back-propagation learning algorithm is used to train the network. The paper shows the effect of different network architectures and numbers of hidden nodes on the classification accuracy.
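The Fourier descriptor technique named above can be made invariant to translation, scale, rotation and starting point in a standard way: treat the boundary as a complex signal, drop the DC term, normalize by the first harmonic, and keep coefficient magnitudes. A minimal sketch (the contour and coefficient count here are illustrative, not the paper's data):

```python
import numpy as np

def fourier_descriptors(contour_xy, n_coeffs=10):
    """Invariant Fourier descriptors of a closed 2D contour.

    contour_xy: (N, 2) array of ordered boundary points.
    Returns n_coeffs magnitudes invariant to translation, scale,
    rotation and starting point.
    """
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]   # boundary as complex signal
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                                # drop DC term -> translation invariance
    coeffs = coeffs / np.abs(coeffs[1])            # normalize by first harmonic -> scale invariance
    return np.abs(coeffs[1:n_coeffs + 1])          # magnitudes -> rotation/start-point invariance

# An elliptical contour and a rotated, scaled, shifted copy of it
# yield (numerically) the same descriptor vector.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
contour = np.c_[2 * np.cos(t), np.sin(t)]
rot = np.deg2rad(40)
R = np.array([[np.cos(rot), -np.sin(rot)], [np.sin(rot), np.cos(rot)]])
moved = 3.0 * contour @ R.T + np.array([5.0, -2.0])
d1 = fourier_descriptors(contour)
d2 = fourier_descriptors(moved)
print(np.allclose(d1, d2, atol=1e-6))  # True
```

A planar rotation multiplies every Fourier coefficient by a unit complex number, scaling multiplies by a real factor, and translation affects only the DC term, which is why the three normalization steps remove exactly those effects.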

Design/methodology/approach

The overall goal of this research is to develop algorithms for feature-based recognition of 2D parts from intensity images. Most present industrial vision systems are custom-designed and can handle only a specific application. This is not surprising, since different applications involve parts with different geometries and different reflectance properties.
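The classifier described in the abstract, a one-hidden-layer feed-forward network trained by back-propagation with a momentum term, can be sketched as below using the reported settings (20 hidden nodes, learning rate 0.1, momentum 0.2, 500 iterations). The descriptor inputs and class labels are synthetic stand-ins, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_mlp(X, y, hidden=20, lr=0.1, momentum=0.2, epochs=500):
    """Feed-forward network with one hidden layer, trained by
    back-propagation with momentum on squared error."""
    n, n_in = X.shape
    n_out = y.shape[1]
    W1 = rng.normal(0.0, 0.5, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, n_out)); b2 = np.zeros(n_out)
    vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
    vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)               # hidden activations
        o = sig(h @ W2 + b2)               # network outputs
        do = (o - y) * o * (1.0 - o)       # output delta (squared error)
        dh = (do @ W2.T) * h * (1.0 - h)   # hidden delta
        # momentum update: v <- momentum*v - lr*mean gradient
        vW2 = momentum * vW2 - lr * (h.T @ do) / n;  W2 += vW2
        vb2 = momentum * vb2 - lr * do.mean(axis=0); b2 += vb2
        vW1 = momentum * vW1 - lr * (X.T @ dh) / n;  W1 += vW1
        vb1 = momentum * vb1 - lr * dh.mean(axis=0); b1 += vb1
    return lambda Xq: sig(sig(Xq @ W1 + b1) @ W2 + b2)

# Synthetic stand-in data: 5 object classes, 10 descriptor features each.
labels = np.repeat(np.arange(5), 20)
centroids = rng.normal(0.0, 2.0, (5, 10))
X = centroids[labels] + rng.normal(size=(100, 10))
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize features
y = np.eye(5)[labels]                      # one-hot class targets
predict = train_mlp(X, y)
acc = (predict(X).argmax(axis=1) == labels).mean()
print(f"training accuracy: {acc:.2f}")
```

The one-hot output with argmax gives the predicted class; in the paper the inputs would be the Fourier descriptor vectors of the five part types.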

Findings

Classification accuracy is affected by changes to the network architecture. The ANN is computationally demanding and slow to train. A network with 20 hidden nodes produced the best results at 500 iterations (90 percent overall accuracy and 87.50 percent based on the κ coefficient), so 20 hidden nodes were selected for further analysis. A learning rate of 0.1 and a momentum term of 0.2 gave the best results across architectures. The confusion matrix also shows the accuracy of the classifier. These results suggest the proposed system can be extended efficiently to more objects.
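Both figures of merit quoted above come directly from a confusion matrix: overall accuracy is the trace over the total, and the κ coefficient corrects observed agreement for chance agreement. A short sketch; the matrix below is hypothetical (the actual counts are in Table II of the paper), chosen only so that it reproduces the reported 90 percent and 87.50 percent values:

```python
import numpy as np

def overall_accuracy(cm):
    """Overall accuracy: correctly classified samples / all samples."""
    return np.trace(cm) / cm.sum()

def kappa_coefficient(cm):
    """Cohen's kappa from a confusion matrix cm
    (rows = true class, columns = predicted class)."""
    n = cm.sum()
    po = np.trace(cm) / n                      # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical 5-class confusion matrix for 50 test images (10 per class).
cm = np.array([
    [9, 1, 0, 0, 0],
    [0, 9, 1, 0, 0],
    [0, 0, 9, 1, 0],
    [0, 0, 0, 9, 1],
    [0, 0, 1, 0, 9],
])
print(round(overall_accuracy(cm), 4))   # 0.9
print(round(kappa_coefficient(cm), 4))  # 0.875
```

κ is lower than overall accuracy whenever some agreement would be expected by chance alone, which is why the paper reports both.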

Originality/value

After calculating the variation of overall accuracy with different network architectures, results for the different configurations were obtained on a sample of 50 test images. Table II shows the confusion matrix obtained on these test samples.

Citation

Tushar Jain, Meenu Gupta and H.K. Sardana (2018) "Unmanned machine vision system for automated recognition of mechanical parts", International Journal of Intelligent Unmanned Systems, Vol. 6 No. 4, pp. 184-196

DOI: https://doi.org/10.1108/IJIUS-03-2018-0008

Publisher: Emerald Publishing Limited

Copyright © 2018, Emerald Publishing Limited
