Search results
1 – 8 of 8

T. Mahalingam and M. Subramoniam
Abstract
Surveillance is an emerging concept in current technology, playing a vital role in monitoring activities in every corner of the world. Within surveillance, detecting and tracking moving objects by means of computer vision techniques is a major task, and moving object detection is the initial step in many video analysis applications. The main drawback of existing object tracking methods is that they become time-consuming when the video contains a high volume of information, and choosing the optimal tracking technique for such large volumes of data raises further issues. The situation becomes worse when the tracked object changes orientation over time, and predicting multiple objects at the same time is also difficult. To overcome these issues, we propose a robust and effective technique for video object detection and movement tracking. The proposed technique is divided into three phases: a detection phase, a tracking phase and an evaluation phase. The detection phase comprises foreground segmentation and noise reduction: a Mixture of Adaptive Gaussians (MoAG) model is proposed to achieve efficient foreground segmentation, and a fuzzy morphological filter is implemented to remove the noise present in the foreground-segmented frames. Moving object tracking is achieved by blob detection, which belongs to the tracking phase. Finally, the evaluation phase covers feature extraction and classification: texture-based and quality-based features are extracted from the processed frames and passed to a classifier. For classification we use J48, i.e. a decision-tree-based classifier. The performance of the proposed technique is compared with the existing k-NN and MLP techniques in terms of precision, recall, f-measure and ROC.
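The per-pixel adaptive background modelling behind MoAG-style foreground segmentation can be illustrated with a simplified single-Gaussian-per-pixel sketch (the paper's full mixture model maintains several Gaussians per pixel; the threshold `k`, learning rate `alpha` and toy frame values below are assumptions for illustration):

```python
import numpy as np

def update_background(frame, mean, var, alpha=0.05, k=2.5):
    """Single adaptive Gaussian per pixel: a simplified stand-in for a
    Mixture of Adaptive Gaussians (MoAG) foreground segmenter. Pixels
    more than k standard deviations from the running mean are flagged
    as foreground; background statistics are updated online."""
    frame = frame.astype(np.float64)
    dist = np.abs(frame - mean)
    foreground = dist > k * np.sqrt(var)
    # Update mean/variance only where the pixel matched the background
    bg = ~foreground
    mean[bg] = (1 - alpha) * mean[bg] + alpha * frame[bg]
    var[bg] = (1 - alpha) * var[bg] + alpha * (frame[bg] - mean[bg]) ** 2
    return foreground, mean, var

# Toy usage: a static background with one bright moving blob
mean = np.full((8, 8), 10.0)
var = np.full((8, 8), 4.0)
frame = np.full((8, 8), 10.0)
frame[2:4, 2:4] = 200.0           # the moving object
fg, mean, var = update_background(frame, mean, var)
print(int(fg.sum()))              # → 4 foreground pixels
```

A real mixture model would additionally rank several Gaussians per pixel by weight/variance and replace the weakest when no component matches.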
J.F. Aviles-Viñas, I. Lopez-Juarez and R. Rios-Cabrera
Abstract
Purpose
The purpose of this paper was to propose a method based on an Artificial Neural Network and a real-time vision algorithm, to learn welding skills in industrial robotics.
Design/methodology/approach
By using an optic camera to measure the bead geometry (width and height), the authors propose a real-time computer vision algorithm to extract training patterns and enable an industrial robot to acquire and learn the welding skill autonomously. To test the approach, an industrial KUKA robot and a gas metal arc welding machine were used in a manufacturing cell.
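The learning step described above maps welding parameters to measured bead geometry using a neural network. A minimal sketch of that idea, assuming a tiny one-hidden-layer network and synthetic (parameter → width, height) training pairs in place of the paper's camera-labelled data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: welding parameters (e.g. voltage, travel
# speed, normalised) → bead geometry (width, height). In the paper
# these labels come automatically from the vision system; here they
# are synthetic, generated by an assumed linear relation.
X = rng.uniform(0.0, 1.0, size=(200, 2))
Y = np.column_stack([0.8 * X[:, 0] + 0.2 * X[:, 1],
                     0.3 * X[:, 0] + 0.7 * X[:, 1]])

# One-hidden-layer network trained with plain batch gradient descent
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 2)); b2 = np.zeros(2)
lr = 0.3
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)          # hidden activations
    P = H @ W2 + b2                   # predicted (width, height)
    G = (P - Y) / len(X)              # MSE gradient at the output
    W2 -= lr * H.T @ G; b2 -= lr * G.sum(0)
    GH = (G @ W2.T) * (1 - H ** 2)    # backprop through tanh
    W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(0)

P = np.tanh(X @ W1 + b1) @ W2 + b2
mse = float(((P - Y) ** 2).mean())
print(mse)                            # small after training
```

The inverse problem (geometry → parameters) is trained the same way with inputs and targets swapped, which is how a robot can look up parameters for a desired bead.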
Findings
Several data analyses are described, showing empirically that industrial robots can acquire the skill even if the specific welding parameters are unknown.
Research limitations/implications
The approach considers only stringer beads. Weave bead and bead penetration are not considered.
Practical implications
With the proposed approach, it is possible to learn specific welding parameters regardless of the material, type of robot or welding machine. This is because the feedback system produces automatic measurements that are labelled prior to the learning process.
Originality/value
The main contribution is that the complex learning process is reduced to an input-process-output system, in which the process part is learnt automatically, without human supervision, by registering the patterns with an automatically calibrated vision system.
Tomasz Chady, Ryszard Sikora, Mariusz Szwagiel, Bogdan Grzywacz, Leszek Misztal, Pawel Waszczuk, Michal Szydlowski and Barbara Szymanik
Abstract
Purpose
The purpose of this paper is to describe a multisource system for nondestructive inspection of welded elements used in the aircraft industry, developed at the West Pomeranian University of Technology, Szczecin, within the framework of the CASELOT project. The system's task is to support the operator in identifying flaws in welded aircraft elements, using data obtained from X-ray inspection and 3D triangulation laser scanners.
Design/methodology/approach
For proper defect detection, a set of special processing algorithms was developed. For easier system exploitation and integration of all components, a user-friendly interface was designed in the LabVIEW environment.
Findings
It is possible to create a fully independent, intelligent system for the detection of weld flaws. This kind of technology might be crucial in the further development of the aircraft industry.
Originality/value
In this paper, a number of innovative solutions (new algorithms and algorithm combinations) for defect detection in welds are presented. All of these solutions form the basis of the complete system presented. One of the main original solutions is the combination of systems based on a 3D triangulation laser scanner and X-ray testing.
Xiangyang Ju, J. Paul Siebert, Nigel J.B. McFarlane, Jiahua Wu, Robin D. Tillett and Charles Patrick Schofield
Abstract
We have succeeded in capturing porcine 3D surface anatomy in vivo by developing a high‐resolution stereo imaging system. The system achieved accurate 3D shape recovery by matching stereo pair images containing only natural surface textures at high (image) resolution. The 3D imaging system presented for pig shape capture is based on photogrammetry and comprises stereo pair image acquisition, stereo camera calibration, stereo matching, and surface and texture integration. Practical issues have been addressed, in particular the integration of multiple range images into a single 3D surface. Robust image segmentation successfully isolated the pigs within the stereo images and was employed in conjunction with depth discontinuity detection to facilitate the integration process. The capture and processing chain is detailed here, and the resulting 3D pig anatomy obtained using the system is presented.
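The core of stereo shape recovery is the triangulation step that turns matched pixel disparities into depth. A minimal sketch, with focal length, baseline and disparity values chosen purely for illustration (they are not the paper's calibration values):

```python
import numpy as np

# Depth from stereo disparity, the geometric heart of photogrammetric
# 3D reconstruction: for a rectified stereo pair with focal length f
# (in pixels) and baseline B (in metres), a matched pixel pair with
# disparity d (pixels) lies at depth Z = f * B / d.
f = 1200.0    # assumed focal length in pixels
B = 0.10      # assumed stereo baseline in metres
disparity = np.array([[60.0, 40.0],
                      [30.0, 24.0]])   # toy disparity map
depth = f * B / disparity
print(depth)  # depths in metres: 2, 3, 4, 5
```

Nearer surfaces produce larger disparities, which is why disparity appears in the denominator; a full pipeline would first rectify the images and run a dense stereo matcher to obtain the disparity map.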
Asanka G. Perera, Yee Wei Law, Ali Al-Naji and Javaan Chahl
Abstract
Purpose
The purpose of this paper is to present a preliminary solution to address the problem of estimating human pose and trajectory by an aerial robot with a monocular camera in near real time.
Design/methodology/approach
The distinguishing feature of the solution is a dynamic classifier selection architecture. Each video frame is corrected for perspective using projective transformation. Then, a silhouette is extracted as a Histogram of Oriented Gradients (HOG). The HOG is then classified using a dynamic classifier. A class is defined as a pose-viewpoint pair, and a total of 64 classes are defined to represent a forward walking and turning gait sequence. The dynamic classifier consists of a Support Vector Machine (SVM) classifier C64 that recognizes all 64 classes, and 64 SVM classifiers that recognize four classes each – these four classes are chosen based on the temporal relationship between them, dictated by the gait sequence.
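The selection logic of the dynamic classifier can be sketched in a few lines. The 8-pose × 8-viewpoint layout and the neighbour rule below are assumptions about how the 64 pose-viewpoint classes might be arranged; the SVMs are stubbed out as lookups for illustration:

```python
# Sketch of dynamic classifier selection: a full 64-class classifier
# (standing in for C64) initialises the track, after which only a
# cheap 4-class classifier over the temporal neighbours of the
# previous class is consulted for each new frame.

N_POSES, N_VIEWS = 8, 8   # assumed 8 poses x 8 viewpoints = 64 classes

def neighbours(cls):
    """Four candidate successor classes under a forward-walking gait:
    same or next pose, same or adjacent viewpoint (an assumed rule)."""
    pose, view = divmod(cls, N_VIEWS)
    return [((pose + dp) % N_POSES) * N_VIEWS + (view + dv) % N_VIEWS
            for dp in (0, 1) for dv in (0, 1)]

def classify(hog, prev_cls, c64, c4_bank):
    if prev_cls is None:
        return c64(hog)                     # full 64-class SVM
    cands = neighbours(prev_cls)
    return c4_bank[prev_cls](hog, cands)    # small 4-class SVM

# Stub classifiers standing in for the trained SVMs
c64 = lambda hog: 10
c4_bank = {10: lambda hog, cands: cands[1]}

first = classify("hog_frame0", None, c64, c4_bank)
second = classify("hog_frame1", first, c64, c4_bank)
print(first, second)   # → 10 11
```

Because candidates are restricted to temporal neighbours, a misclassification can only land on a class adjacent to the truth, which is exactly the error-confinement property claimed in the findings.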
Findings
The solution provides three main advantages: first, classification is efficient due to dynamic selection (4-class vs 64-class classification). Second, classification errors are confined to neighbors of the true viewpoints. This means a wrongly estimated viewpoint is at most an adjacent viewpoint of the true viewpoint, enabling fast recovery from incorrect estimations. Third, the robust temporal relationship between poses is used to resolve the left-right ambiguities of human silhouettes.
Originality/value
Experiments conducted on both fronto-parallel videos and aerial videos confirm that the solution can achieve accurate pose and trajectory estimation for these different kinds of videos. For example, the “walking on an 8-shaped path” data set (1,652 frames) can achieve the following estimation accuracies: 85 percent for viewpoints and 98.14 percent for poses.
Abstract
Purpose
The purpose of this paper is to propose a defect detection method of bare printed circuit boards (PCB) with high accuracy.
Design/methodology/approach
First, bilateral filtering of the PCB image was performed in a uniform color space, and the copper-clad areas were segmented according to the color differences among areas. Then, exploiting the chaotic spatial distribution and gradient directions of the edge pixels on the boundaries of defective areas, a feature vector was constructed that quantitatively evaluates the significance of the defect characteristics using gradient direction information entropy and uniform local binary patterns. Finally, a support vector machine classifier was used for the identification and localization of the PCB defects.
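The gradient-direction information entropy feature can be sketched directly: defective regions have chaotic edge directions (high entropy), while regular traces have one dominant direction (low entropy). The bin count and toy patches below are assumptions for illustration:

```python
import numpy as np

def gradient_direction_entropy(patch, n_bins=8):
    """Shannon entropy of the gradient-direction histogram over a
    patch. High entropy = chaotic edge directions, a cue for defective
    regions; regular traces give a single dominant direction and low
    entropy."""
    gy, gx = np.gradient(patch.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                 # directions in (-pi, pi]
    hist, _ = np.histogram(ang[mag > 0], bins=n_bins,
                           range=(-np.pi, np.pi))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# A clean straight edge (one direction) vs. chaotic noise
edge = np.tile(np.arange(16.0), (16, 1))     # uniform horizontal gradient
noise = np.random.default_rng(1).uniform(size=(16, 16))
print(gradient_direction_entropy(edge) <
      gradient_direction_entropy(noise))    # → True
```

In the full method this entropy is fused with uniform local binary patterns into one feature vector before the SVM stage.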
Findings
Experimental results show that the proposed algorithm can accurately detect typical defects of the bare PCB, such as short circuits, open circuits, scratches and voids.
Originality/value
Considering the limitations of describing all kinds of defects on a bare PCB using a single kind of feature, gradient direction information entropy and local binary patterns were fused to build a feature vector that quantitatively evaluates the significance of the defect features.
Srinivas Talasila, Kirti Rawal and Gaurav Sethi
Abstract
Purpose
Extraction of the leaf region from plant leaf images is a prerequisite for species recognition, disease detection and classification, and other tasks required for crop management. Several approaches have been developed to segment the leaf region from the background. However, most of these methods were applied to images taken in laboratory setups or against plain backgrounds, whereas leaf segmentation methods must also work on real-world cultivation field images that contain complex backgrounds. So far, no efficient method has been developed that automatically segments the leaf region from a complex background specifically for black gram plant leaf images.
Design/methodology/approach
Extracting leaf regions from a complex background is cumbersome, and the proposed PLRSNet (Plant Leaf Region Segmentation Net) is one solution to this problem. In this paper, a customized deep network is designed and applied to extract leaf regions from images taken in cultivation fields.
Findings
The proposed PLRSNet was compared with state-of-the-art methods, and the experimental results show that it yields a Similarity Index/Dice of 96.9%, a Jaccard/IoU of 94.2%, a Correct Detection Ratio of 98.55%, a Total Segmentation Error of 0.059 and an Average Surface Distance of 3.037, representing a significant improvement over existing methods, particularly on cultivation field images.
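The two headline overlap metrics reported here, Dice (Similarity Index) and Jaccard (IoU), are computed from binary masks; a minimal sketch with toy masks (the 4×4 example is purely illustrative):

```python
import numpy as np

def dice_and_iou(pred, gt):
    """Dice (Similarity Index) and Jaccard (IoU) between two binary
    segmentation masks: Dice = 2|A∩B| / (|A|+|B|),
    IoU = |A∩B| / |A∪B|."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum())
    iou = inter / np.logical_or(pred, gt).sum()
    return float(dice), float(iou)

gt = np.zeros((4, 4), int); gt[:2, :] = 1        # 8 true leaf pixels
pred = np.zeros((4, 4), int); pred[:2, :2] = 1   # 4 predicted, all correct
print(dice_and_iou(pred, gt))   # → Dice 2/3, IoU 0.5
```

Note that Dice is always at least as large as IoU for the same masks, which is why the paper's Dice (96.9%) exceeds its IoU (94.2%).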
Originality/value
In this work, a customized deep learning network, named PLRSNet, is designed for segmenting the plant leaf region against complex backgrounds.
Mahdi Jampour, Amin KarimiSardar and Hossein Rezaei Estakhroyeh
Abstract
Purpose
The purpose of this study is to design, program and implement an intelligent robot for shelf-reading. An essential task in library maintenance is shelf-reading, which refers to the process of checking the disciplines of books based on their call numbers to ensure that they are correctly shelved. Shelf-reading is a routine yet challenging task for librarians, as it involves promptly checking call numbers on the scale of thousands of books.
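At its core, the shelf-reading check reduces to verifying that consecutive call numbers are in order. A minimal sketch; the call numbers and plain string comparison below are simplified placeholders (real library call numbers, e.g. Library of Congress, need a dedicated comparison parser):

```python
# Shelf-reading reduced to its core check: which books break the
# expected ordering of call numbers along a shelf?

def misplaced_books(call_numbers):
    """Return indices of books whose call number is smaller than the
    one before it, i.e. candidates for misshelving."""
    return [i for i in range(1, len(call_numbers))
            if call_numbers[i] < call_numbers[i - 1]]

# Hypothetical shelf scan: the third book is out of order
shelf = ["QA76.1", "QA76.5", "QA75.9", "QA76.9", "QB1.2"]
print(misplaced_books(shelf))   # → [2]
```

A robot like the one described here would feed this check with call numbers decoded from its camera images, flagging both misplaced and missing books as it moves along the stacks.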
Design/methodology/approach
Leveraging the strength of autonomous robots in handling repetitive tasks, this paper introduces a novel vision-based shelf-reader robot, called Pars, and demonstrates its effectiveness in accomplishing shelf-reading tasks. This paper also proposes a novel supervised approach to power the vision system of Pars, allowing it to handle motion blur in images captured while it moves. An approach based on Faster R-CNN is also incorporated into the vision system, allowing the robot to efficiently detect the region of interest for retrieving a book's information.
Findings
This paper evaluated the robot's performance in a library with 120,000 books and discovered problems such as missing and misplaced books. Besides, this paper introduces a new, challenging data set of blurred barcodes, made freely available to the public for similar research studies.
Originality/value
The robot is equipped with six parallel cameras, which enable it to check books and decide on moving paths. Through its vision-based system, it is capable of routing and tracking paths between bookcases in a library, and it can also turn around bends. Moreover, Pars addresses the blurred barcodes that may appear because of its motion.