Recognition of a tactile image independent of position, size and orientation has been the goal of much recent research. Many tasks (e.g. parts identification) demand a more general methodology than the derivation of a single forward measurement, such as computing a part's area and perimeter from its run‐length‐coding representation. In such cases an interpretation procedure generally adopts the techniques of a pattern recognition approach. To achieve maximum utility and flexibility, the measurements used should be insensitive to any change in image size, translation and rotation, and should be highly repeatable. The algorithm used in this article generally meets these conditions, and the results show that recognition schemes based on these invariants are position, size and orientation independent, yet flexible enough to learn most sets of parts.

Assuming that parts vary only in location, orientation and size, certain moments are very convenient for normalization. For instance, the first moments of area give the centroid of a part, which is a natural origin of co‐ordinates for translation‐invariant measurements. Similarly, the eigenvectors of the matrix of second central moments define the directions of the principal axes, which leads to rotation‐invariant measurements.
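The normalization described above can be illustrated with a minimal sketch. This is not the article's implementation; it assumes the tactile image is available as a binary NumPy array and derives the centroid from the first moments and the principal-axis orientation from the second central moments:

```python
import numpy as np

def moment_normalization(img):
    """Centroid and principal-axis angle from a binary image.

    img: 2D array of 0/1 pixels (the segmented part).
    Returns ((xbar, ybar), theta) where theta is the angle of the
    major principal axis in radians, measured from the x-axis.
    """
    ys, xs = np.nonzero(img)
    # First moments of area give the centroid -- the natural origin
    # for translation-invariant measurements.
    xbar, ybar = xs.mean(), ys.mean()
    # Second central moments (computed about the centroid, hence
    # already translation-invariant).
    mu20 = ((xs - xbar) ** 2).mean()
    mu02 = ((ys - ybar) ** 2).mean()
    mu11 = ((xs - xbar) * (ys - ybar)).mean()
    # The eigenvectors of [[mu20, mu11], [mu11, mu02]] are the
    # principal axes; the major-axis angle has the closed form below.
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    return (xbar, ybar), theta

# Example: a rectangle elongated along x should yield theta ~ 0.
img = np.zeros((10, 20))
img[4:6, 2:18] = 1
(centroid, theta) = moment_normalization(img)
```

Translating the part shifts the centroid but leaves the central moments unchanged; rotating it rotates the principal axes by the same angle, so measurements taken in the centroid-and-axes frame are position and orientation independent.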
Copyright © 1993, MCB UP Limited