Search results

1 – 10 of 104
Article
Publication date: 16 March 2015

Zeng Jinle, Zou Yirong, Du Dong, Chang Baohua and Pan Jiluan

This paper aims to develop a feasible visual weld detection method to solve the problems in multi-layer welding detection (e.g. cover pass welding detection) for seam tracking and…

Abstract

Purpose

This paper aims to develop a feasible visual weld detection method to solve the problems in multi-layer welding detection (e.g. cover pass welding detection) for seam tracking and non-destructive testing. It seeks an adaptive and accurate way to automatically determine the edge between the seam and the base metal in the grayscale weld image. It is intended to contribute to next-generation real-time robotic welding systems for multi-layer welding.

Design/methodology/approach

This paper opted for invariant moments to characterize the seam and the base metal for classification purposes. The properties of invariant moments, such as a high degree of self-similarity and separation, affine invariance and repetition invariance, were discussed to verify the suitability of invariant moments for weld detection. A weld detection method based on invariant moments was then proposed to extract the edge between the seam and the base metal, comprising image division, invariant moment feature extraction, K-Means adaptive thresholding, maximum connected domain detection and edge position extraction.
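
As a rough illustration of how such a pipeline could fit together (a minimal sketch, not the authors' implementation; the block size, the two-cluster assumption, the use of OpenCV/scikit-learn and the omission of the final edge-position step are all assumptions), the block-wise invariant-moment features and K-Means step might look like this:

```python
# Hedged sketch: block-wise Hu invariant moments, K-Means clustering to
# separate seam blocks from base-metal blocks, then largest-connected-
# component selection. Names and parameters are illustrative assumptions.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def detect_seam_region(gray, block=16):
    h, w = gray.shape
    rows, cols = h // block, w // block
    feats = []
    for r in range(rows):
        for c in range(cols):
            patch = gray[r*block:(r+1)*block, c*block:(c+1)*block]
            hu = cv2.HuMoments(cv2.moments(patch)).flatten()
            # log-scale the Hu moments so K-Means is not dominated by magnitude
            feats.append(-np.sign(hu) * np.log10(np.abs(hu) + 1e-12))
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(np.array(feats))
    label_map = labels.reshape(rows, cols).astype(np.uint8)
    # assume the seam is the minority cluster; flip labels if necessary
    if label_map.sum() > label_map.size / 2:
        label_map = 1 - label_map
    # keep only the largest connected region of seam-labelled blocks
    n, comp, stats, _ = cv2.connectedComponentsWithStats(label_map, connectivity=8)
    if n > 1:
        largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
        label_map = (comp == largest).astype(np.uint8)
    # upsample the block labels back to a pixel mask of the seam region
    return cv2.resize(label_map * 255, (w, h), interpolation=cv2.INTER_NEAREST)
```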

Findings

This paper highlights the significance of the high degree of self-similarity and separation, affine invariance and repetition invariance of invariant moments for weld detection. An adaptive, effective and accurate method is proposed to detect the edge between the seam and the base metal based on invariant moments.

Research limitations/implications

The applicability of the proposed method under variable welding conditions needs further verification. Future work will focus on establishing a real-time seam tracking system for the whole multi-layer/multi-pass welding process based on such adaptive visual features.

Practical implications

This paper discusses the implications for developing an adaptive, real-time weld detection method, which is expected to be applied to online seam tracking in multi-layer welding.

Originality/value

This paper presents an accurate weld detection method for multi-layer welding that overcomes the effectiveness, adaptability and efficiency limitations of existing weld detection methods.

Details

Industrial Robot: An International Journal, vol. 42 no. 2
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 14 October 2021

Ankit Kumar Srivastava, A.N. Tiwari and S.N. Singh

This paper aims to accurately estimate harmonics/interharmonics in modern power systems. There are several high spectral resolution techniques that have been in use for several…

Abstract

Purpose

This paper aims to accurately estimate harmonics/interharmonics in modern power systems. Several high spectral resolution techniques, such as the Estimation of Signal Parameters via Rotational Invariance Technique (ESPRIT) and Prony methods, have been in use for many years, but they require prior knowledge of the number of modes present in the signal. Model order (MO) estimation techniques have to trade off accuracy against speed, i.e. computational burden. Therefore, a technique that is both fast and accurate is always required.

Design/methodology/approach

The proposed standard deviation (SD) method eliminates the need for an energy validation test and analyses the distribution pattern, i.e. the standard deviation of eigenvalues, to identify the number of modes present in the signal. The signal is reconstructed using the estimated modes, and the reconstruction error is obtained to show the accuracy of the proposed estimation.
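
A minimal sketch of the general idea (not the published SD algorithm; the Hankel matrix sizing and the stopping rule are illustrative assumptions) could inspect the spread of the trailing singular values of a data matrix and stop once they resemble a flat noise floor:

```python
# Hedged sketch of an eigenvalue-spread criterion for model order selection.
import numpy as np
from scipy.linalg import hankel, svd

def estimate_model_order(x, max_order=None, tol=0.05):
    n = len(x)
    rows = n // 2
    H = hankel(x[:rows], x[rows - 1:])        # data Hankel matrix
    s = svd(H, compute_uv=False)              # singular values, descending
    max_order = max_order or len(s) - 1
    for k in range(1, max_order + 1):
        tail = s[k:]
        # once the remaining singular values form a flat noise floor
        # (small spread relative to the dominant component), stop
        if np.std(tail) < tol * s[0]:
            return k
    return max_order
```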

Findings

Six synthetic test signals and one practical signal were used to validate the proposed method. The paper shows that the proposed methodology has better accuracy than the modified exact model order (MEMO) method in a high-noise environment and takes much less computation time than the exact model order (EMO) method.

Practical implications

The proposed method has been practically implemented for harmonic/interharmonic analysis at a sewage treatment plant in GIFT City, Gujarat, India. Apart from this, the proposed method is implemented in a Python-based tool and runs on low-cost Raspberry Pi-like hardware to create an on-site as well as remote monitoring device.

Originality/value

The SD-based approach to model order estimation is novel in this area. Further, the proposed method is compared with EMO and MEMO under varying noise conditions to check accuracy and estimation time.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 40 no. 6
Type: Research Article
ISSN: 0332-1649

Keywords

Article
Publication date: 1 February 1997

Siang Kok Sim and Ming Yeong Teo

Describes work based on the hypothesis that the use of artificial neural networks can imbue vision‐based robots with the ability to learn about their environment and hence enhance…


Abstract

Describes work based on the hypothesis that the use of artificial neural networks can imbue vision-based robots with the ability to learn about their environment and hence enhance their competence and flexibility. The Neocognitron neural network provides the vision-based robot with the capability of learning about its environment through training to recognize certain objects. The Neocognitron network is selected because of its tolerance to translational, rotational and scaling variations in the input pattern of objects. Presents results which support the use of the Neocognitron in enhancing the flexibility of vision-based robots.

Details

Integrated Manufacturing Systems, vol. 8 no. 1
Type: Research Article
ISSN: 0957-6061

Keywords

Article
Publication date: 27 June 2008

E. Menegatti, G. Gatto, E. Pagello, Takashi Minato and Hiroshi Ishiguro

Image‐based localisation has been widely investigated in mobile robotics. However, traditional image‐based localisation approaches do not work when the environment appearance…

Abstract

Purpose

Image-based localisation has been widely investigated in mobile robotics. However, traditional image-based localisation approaches do not work when the environment appearance changes. The purpose of this paper is to propose a new system for image-based localisation that works even in highly dynamic environments.

Design/methodology/approach

The proposed technique is based on a distributed vision system (DVS) composed of a set of cameras installed in the environment and a camera mounted on a mobile robot. The localisation of the robot is achieved by comparing the current image grabbed by the robot with the images grabbed, at the same time, by the DVS. Finding the DVS image most similar to the robot's image gives a topological localisation of the robot.
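
As a minimal sketch of this comparison step (not the authors' similarity measure; the colour-histogram feature and the correlation metric are assumptions), one could rank the DVS cameras by how closely their current frames match the robot's frame:

```python
# Hedged sketch: pick the DVS camera whose simultaneous frame is most
# similar to the robot's frame, giving a topological position.
import cv2

def localise(robot_frame, dvs_frames):
    """dvs_frames: dict mapping camera id -> colour frame grabbed at the same time."""
    def hist(img):
        h = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8],
                         [0, 256, 0, 256, 0, 256])
        return cv2.normalize(h, h).flatten()
    ref = hist(robot_frame)
    scores = {cam: cv2.compareHist(ref, hist(frame), cv2.HISTCMP_CORREL)
              for cam, frame in dvs_frames.items()}
    return max(scores, key=scores.get)   # camera whose view best matches
```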

Findings

Experiments reported in the paper proved the system to be effective, even when exploiting a pre-existing DVS not designed for this application.

Originality/value

Although DVSs such as the one used in this work are not widespread nowadays, this work is significant because it proposes a novel idea for dealing with dynamic environments in image-based localisation and validates the idea with experiments. Camera sensor networks are currently an emerging technology and may be introduced into many everyday environments in the future.

Details

Sensor Review, vol. 28 no. 3
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 22 July 2021

Zirui Guo, Huimin Lu, Qinghua Yu, Ruibin Guo, Junhao Xiao and Hongshan Yu

This paper aims to design a novel feature descriptor to improve the performance of feature matching in challenging scenes, such as low texture and wide-baseline scenes. Common…

Abstract

Purpose

This paper aims to design a novel feature descriptor to improve the performance of feature matching in challenging scenes, such as low-texture and wide-baseline scenes. Common descriptors are not suitable for low-texture and other challenging scenes, mainly because they encode only one kind of feature. The proposed feature descriptor considers multiple features and their locations, which makes it more expressive.

Design/methodology/approach

A graph neural network-based descriptor enhancement algorithm for feature matching is proposed. Point and line features are the primary concerns in this paper. In the graph, commonly used descriptors for points and lines constitute the nodes, and the edges are determined by the geometric relationships between points and lines. After a graph convolution designed for the incomplete join graph, enhanced descriptors are obtained.
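
A minimal sketch of the underlying idea (not the paper's network; the mean-aggregation rule and the fixed mixing weights are illustrative assumptions) is a single round of neighbourhood aggregation over the point/line descriptor graph:

```python
# Hedged sketch: nodes carry point or line descriptors, edges encode the
# geometric relations between them, and one round of neighbourhood
# aggregation yields "enhanced" descriptors.
import numpy as np

def enhance_descriptors(node_desc, adjacency, w_self=0.5, w_neigh=0.5):
    """node_desc: (N, D) descriptors; adjacency: (N, N) 0/1 matrix."""
    A = adjacency.astype(float)
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    neigh_mean = A @ node_desc / deg            # mean of neighbour descriptors
    enhanced = w_self * node_desc + w_neigh * neigh_mean
    # re-normalise so the result can be matched with cosine / L2 distance
    return enhanced / np.linalg.norm(enhanced, axis=1, keepdims=True)
```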

Findings

Experiments are carried out in indoor, outdoor and low texture scenes. The experiments investigate the real-time performance, rotation invariance, scale invariance, viewpoint invariance and noise sensitivity of the descriptors in three types of scenes. The results show that the enhanced descriptors are robust to scene changes and can be used in wide-baseline matching.

Originality/value

A graph structure is designed to represent multiple features in an image. In building the graph structure, the geometric relations between multiple features are used to establish the edges. Furthermore, a novel hybrid descriptor for points and lines is obtained using a graph convolutional neural network. This enhanced descriptor combines the advantages of point features and line features in feature matching.

Details

Industrial Robot: the international journal of robotics research and application, vol. 48 no. 5
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 14 June 2013

Christian Ivancsits and Min‐Fan Ricky Lee

This paper aims to address three major issues in the development of a vision‐based navigation system for small unmanned aerial vehicles (UAVs) which can be characterized as…


Abstract

Purpose

This paper aims to address three major issues in the development of a vision‐based navigation system for small unmanned aerial vehicles (UAVs) which can be characterized as follows: technical constraints, robust image feature matching and an efficient and precise method for visual navigation.

Design/methodology/approach

The authors present and evaluate methods for their solution such as wireless networked control, highly distinctive feature descriptors (HDF) and a visual odometry system.

Findings

The proposed feature descriptors achieve significant improvements in computation time by detaching the explicit scale invariance of the widely used scale invariant feature transform. The feasibility of wireless networked real-time control for vision-based navigation is evaluated in terms of latency and data throughput. The visual odometry system uses a single camera to reconstruct the camera path and the structure of the environment, and achieved an error of 1.65 per cent with respect to total path length on a circular trajectory of 9.43 m.
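
A minimal sketch of the general idea of dropping explicit scale invariance from the descriptor stage (not the authors' HDF code; the FAST detector, the fixed patch size and the use of OpenCV are assumptions) might describe every keypoint at one fixed scale instead of running a scale-space pyramid:

```python
# Hedged sketch: detect corners with FAST and describe them with SIFT at a
# single fixed scale, avoiding the cost of scale-space keypoint detection.
import cv2

def fixed_scale_descriptors(gray, patch_size=31):
    fast = cv2.FastFeatureDetector_create(threshold=25)
    # rebuild the keypoints so every one is described at the same fixed scale
    kps = [cv2.KeyPoint(kp.pt[0], kp.pt[1], float(patch_size))
           for kp in fast.detect(gray, None)]
    sift = cv2.SIFT_create()
    kps, desc = sift.compute(gray, kps)
    return kps, desc
```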

Originality/value

The originality/value lies in the contribution of the presented work to the solution of visual odometry for small unmanned aerial vehicles.

Article
Publication date: 11 April 2020

Mohammad Rezaiee-Pajand, Nima Gharaei-Moghaddam and Mohammadreza Ramezani

This paper aims to propose a new robust membrane finite element for the analysis of plane problems. The suggested element has triangular geometry. Four nodes and 11 degrees of…

Abstract

Purpose

This paper aims to propose a new robust membrane finite element for the analysis of plane problems. The suggested element has triangular geometry, with four nodes and 11 degrees of freedom (DOF). Each of the three vertex nodes has three DOF: two displacements and one drilling rotation. The fourth node, located inside the element, has only two translational DOF.
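
A minimal sketch of the degree-of-freedom bookkeeping this implies (the labels are illustrative, not the authors' notation):

```python
# Hedged sketch: three vertex nodes with (u, v, drilling rotation) and one
# internal node with (u, v), giving 11 DOF in total.
def element_dof_labels():
    dofs = []
    for n in (1, 2, 3):                      # vertex nodes
        dofs += [f"u{n}", f"v{n}", f"theta{n}"]
    dofs += ["u4", "v4"]                     # internal node, translations only
    assert len(dofs) == 11
    return dofs
```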

Design/methodology/approach

The suggested formulation is based on the assumed strain method and satisfies both compatibility and equilibrium conditions within each element. This results in lower sensitivity to mesh distortion. Enforcing the equilibrium condition on the assumed strain field leads to considerably high accuracy of the developed formulation.

Findings

To show the merits of the suggested plane element, its various properties are studied, including insensitivity to mesh distortion (particularly under transverse shear forces), immunity to the various locking phenomena and convergence. The obtained results demonstrate the superiority of the suggested element compared with many of the available robust membrane elements.

Originality/value

According to the obtained results, the proposed element performs better than well-known displacement-based elements such as the linear strain triangular element, Q4 and Q8, and is even comparable with robust modified membrane elements.

Details

Engineering Computations, vol. 37 no. 9
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 23 August 2019

Shenlong Wang, Kaixin Han and Jiafeng Jin

In the past few decades, the content-based image retrieval (CBIR), which focuses on the exploration of image feature extraction methods, has been widely investigated. The term of…

Abstract

Purpose

In the past few decades, content-based image retrieval (CBIR), which focuses on the exploration of image feature extraction methods, has been widely investigated. The term feature extraction is used in two senses: application-based feature expression and mathematical approaches for dimensionality reduction. Feature expression is a technique for describing image color, texture and shape information with feature descriptors; thus, obtaining an effective image feature expression is the key to extracting high-level semantic information. However, most previous studies of image feature extraction and expression methods in CBIR have not been systematic. This paper aims to introduce the basic image low-level feature expression techniques for color, texture and shape features that have been developed in recent years.

Design/methodology/approach

First, this review outlines the development process and expounds the principle of various image feature extraction methods, such as color, texture and shape feature expression. Second, some of the most commonly used image low-level expression algorithms are implemented, and their benefits and drawbacks are summarized. Third, the effectiveness of global and local features in image retrieval, including some classical models and illustrations drawn from part of our experiments, is analyzed. Fourth, the sparse representation and similarity measurement methods are introduced, and the retrieval performance of statistical methods is evaluated and compared.
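
As a minimal sketch of the retrieval step shared by these pipelines (not a specific method from the review; the cosine-similarity measure is an assumed choice), once every image is expressed as a feature vector, retrieval reduces to ranking the database by a similarity measure:

```python
# Hedged sketch: rank database images by similarity of their feature vectors
# (colour, texture or shape based) to the query image's feature vector.
import numpy as np

def retrieve(query_feat, db_feats, top_k=10):
    """query_feat: (D,) vector; db_feats: (N, D) matrix of database features."""
    q = query_feat / np.linalg.norm(query_feat)
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    scores = db @ q                       # cosine similarity against the query
    return np.argsort(-scores)[:top_k]    # indices of the best-matching images
```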

Findings

The core of this survey is to review the state of image low-level expression methods and to study the pros and cons of each method, the situations to which it applies and certain implementation measures. This review notes that single-feature descriptions capture only particular image peculiarities and may lead to unsatisfactory image retrieval capability, and that they face considerable limitations and challenges in CBIR.

Originality/value

A comprehensive review of the latest developments in image retrieval using low-level feature expression techniques is provided in this paper. This review not only introduces the major approaches for image low-level feature expression but also supplies a pertinent reference for those engaging in research regarding image feature extraction.

Article
Publication date: 23 March 2012

Ovidiu Ghita, Dana Ilea, Antonio Fernandez and Paul Whelan

The purpose of this paper is to review and provide a detailed performance evaluation of a number of texture descriptors that analyse texture at micro‐level such as local binary…

Abstract

Purpose

The purpose of this paper is to review and provide a detailed performance evaluation of a number of texture descriptors that analyse texture at the micro level, such as local binary patterns (LBP), and a number of standard filtering techniques that sample the texture information using either a bank of isotropic filters or Gabor filters.
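
A minimal sketch of a micro-level descriptor of the LBP family discussed here (the neighbourhood size and the 'uniform' mapping are assumed settings, not necessarily those evaluated in the paper) computes a histogram of local binary codes as the texture feature:

```python
# Hedged sketch: uniform LBP codes pooled into a normalised histogram.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1):
    codes = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2                                    # uniform codes span 0..P+1
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist                                       # texture feature vector
```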

Design/methodology/approach

The experimental tests were conducted on standard databases where the classification results are obtained for single and multiple texture orientations. The authors also analysed the performance of standard filtering texture analysis techniques (such as those based on the LM and MR8 filter banks) when applied to the classification of texture images contained in the standard Outex and Brodatz databases.

Findings

The most important finding of this study is that, although the LBP/C and multi-channel Gabor filtering techniques approach texture analysis from different theoretical perspectives, the authors have experimentally demonstrated that they share some common properties in the way they sample the macro and micro properties of the texture.

Practical implications

Texture is a fundamental property of digital images and the development of robust image descriptors plays a crucial role in the process of image segmentation and scene understanding.

Originality/value

This paper contrasts, from practical and theoretical standpoints, the LBP and representative multi-channel texture analysis approaches, and a substantial number of experimental results are provided to evaluate their performance when applied to standard texture databases.

Article
Publication date: 20 June 2016

Wenhao Zhang, Melvyn Lionel Smith, Lyndon Neal Smith and Abdul Rehman Farooq

This paper aims to introduce an unsupervised modular approach for eye centre localisation in images and videos following a coarse-to-fine, global-to-regional scheme. The design of…

Abstract

Purpose

This paper aims to introduce an unsupervised modular approach for eye centre localisation in images and videos following a coarse-to-fine, global-to-regional scheme. The design of the algorithm aims at excellent accuracy, robustness and real-time performance for use in real-world applications.

Design/methodology/approach

A modular approach has been designed that makes use of isophote and gradient features to estimate eye centre locations. This approach embraces two main modalities that progressively reduce global facial features to local levels for more precise inspections. A novel selective oriented gradient (SOG) filter has been specifically designed to remove strong gradients from eyebrows, eye corners and self-shadows, which sabotage most eye centre localisation methods. The proposed algorithm, tested on the BioID database, has shown superior accuracy.
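
As a minimal sketch of a gradient-based coarse estimate in the spirit of the approach above (not the authors' SOG pipeline; the brute-force search and the alignment objective are illustrative assumptions), the eye centre can be taken as the point whose displacement vectors best align with the image gradients, since gradients on the iris boundary point radially outward:

```python
# Hedged sketch: score every candidate centre by how well the unit
# displacement vectors to all pixels align with the unit image gradients.
import cv2
import numpy as np

def eye_centre(gray_eye_region):
    gx = cv2.Sobel(gray_eye_region, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray_eye_region, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy) + 1e-9
    gx, gy = gx / mag, gy / mag                     # unit gradient vectors
    h, w = gray_eye_region.shape
    ys, xs = np.mgrid[0:h, 0:w]
    best, centre = -1.0, (0, 0)
    for cy in range(h):                             # brute-force candidate search
        for cx in range(w):
            dx, dy = xs - cx, ys - cy
            norm = np.hypot(dx, dy) + 1e-9
            score = np.mean(np.maximum(dx / norm * gx + dy / norm * gy, 0) ** 2)
            if score > best:
                best, centre = score, (cx, cy)
    return centre                                   # (x, y) of the estimated centre
```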

Findings

The eye centre localisation algorithm has been compared with 11 other methods on the BioID database and six other methods on the GI4E database. The proposed algorithm outperformed all compared algorithms in terms of localisation accuracy while exhibiting excellent real-time performance. The method is also inherently robust against head poses, partial eye occlusions and shadows.

Originality/value

The eye centre localisation method uses two mutually complementary modalities as a novel, fast, accurate and robust approach. In addition to assisting eye centre localisation, the SOG filter is able to resolve general tasks regarding the detection of curved shapes. From an applied point of view, the proposed method has great potential to benefit a wide range of real-world human-computer interaction (HCI) applications.

Details

Sensor Review, vol. 36 no. 3
Type: Research Article
ISSN: 0260-2288

Keywords

1 – 10 of 104