Search results

1 – 10 of 131
Article
Publication date: 8 July 2022

Lin Zhang and Yingjie Zhang

This paper aims to quickly obtain an accurate and complete dense three-dimensional map of an indoor environment at lower cost, which can be directly used for navigation.

Abstract

Purpose

This paper aims to quickly obtain an accurate and complete dense three-dimensional map of an indoor environment at lower cost, which can be directly used for navigation.

Design/methodology/approach

This paper proposes an improved ORB-SLAM2 dense map optimization algorithm. The algorithm consists of three parts: ORB feature extraction based on improved FAST-12, feature point extraction with progressive sample consensus (PROSAC) and the dense ORB-SLAM2 algorithm for mapping. Here, the dense ORB-SLAM2 algorithm adds a loop-closing optimization thread and a thread for constructing the dense point cloud map and octree map. A dense map is computationally expensive and occupies a large amount of memory. Therefore, to improve efficiency, the proposed method applies voxel filtering, which reduces memory while preserving the density of the map, and then stores the map in octree format to reduce memory further.
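
As a rough illustration of the memory-reduction step described above (a sketch only, not the authors' PCL/C++ implementation inside ORB-SLAM2), the following Python snippet shows voxel-grid downsampling: points are binned into cubic voxels and each occupied voxel is replaced by the centroid of its points, after which the sparser cloud could be stored in an octree. The 5 cm voxel size is an arbitrary example value.

```python
# Illustrative sketch (not the authors' code): voxel-grid downsampling of a
# dense point cloud. Points are grouped into cubic voxels of side `voxel_size`
# and each voxel is replaced by the centroid of the points it contains.
import numpy as np

def voxel_filter(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """points: (N, 3) array of XYZ coordinates; returns one centroid per occupied voxel."""
    # Integer voxel index for every point.
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points that share a voxel index and average them.
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True, return_counts=True)
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)
    return centroids / counts[:, None]

if __name__ == "__main__":
    cloud = np.random.rand(100_000, 3) * 5.0       # synthetic 5 m x 5 m x 5 m room
    sparse = voxel_filter(cloud, voxel_size=0.05)  # 5 cm voxels
    print(f"{cloud.shape[0]} points reduced to {sparse.shape[0]}")
```

Taking one centroid per voxel preserves local density while bounding the number of stored points by the number of occupied voxels, which is what keeps the subsequent octree map small.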

Findings

The improved ORB-SLAM2 algorithm is compared with the original ORB-SLAM2 algorithm, and the experimental results show that the map built by the improved ORB-SLAM2 can be used directly in the navigation process, with higher accuracy, shorter tracking time and a smaller memory footprint.

Originality/value

The improved ORB-SLAM2 algorithm can obtain a dense environment map, which ensures the integrity of the data. The comparisons of FAST-12 with improved FAST-12 and of RANSAC with PROSAC show that the improved FAST-12 and PROSAC both make the feature point extraction process faster and more accurate. The voxel filter keeps storage memory and computational cost low, and the octree map constructed from the dense map can be used directly for navigation.

Details

Assembly Automation, vol. 42 no. 4
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 12 March 2020

Akif Hacinecipoglu, Erhan Ilhan Konukseven and Ahmet Bugra Koku

This study aims to develop a real-time algorithm, which can detect people even in arbitrary poses. To cover poor and changing light conditions, it does not rely on color…

Abstract

Purpose

This study aims to develop a real-time algorithm that can detect people even in arbitrary poses. To cope with poor and changing light conditions, it does not rely on color information. The developed method is expected to run on computers with low computational resources so that it can be deployed on autonomous mobile robots.

Design/methodology/approach

The method is designed as a people detection pipeline with a series of operations. Efficient point cloud processing steps with a novel head extraction operation provide candidate head clusters in the scene. Classification of these clusters using support vector machines yields a fast and robust people detector.
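
To make the final stage concrete, here is a minimal sketch, assuming scikit-learn, of classifying candidate head clusters with a support vector machine. The geometric features used (bounding-box extent, mean height, point count) are illustrative placeholders rather than the descriptors used in the paper, and the training data below is synthetic.

```python
# Minimal sketch of the final classification stage: an SVM that labels
# candidate head clusters as person / not-person. Features and data are
# placeholders, not the paper's.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def cluster_features(cluster: np.ndarray) -> np.ndarray:
    """cluster: (N, 3) points of one candidate head; returns a small feature vector."""
    extent = cluster.max(axis=0) - cluster.min(axis=0)   # bounding-box size
    return np.array([*extent, cluster[:, 2].mean(), len(cluster)])

# Assume `clusters` is a list of candidate point clusters and `labels` the
# ground-truth annotation (1 = head, 0 = other) from a training set.
clusters = [np.random.rand(50, 3) for _ in range(200)]
labels = np.random.randint(0, 2, size=200)

X = np.stack([cluster_features(c) for c in clusters])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, labels)
print("predicted:", clf.predict(X[:5]))
```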

Findings

The method is implemented on an autonomous mobile robot, and the results show that it can detect people at a frame rate of 28 Hz with an equal error rate of 92 per cent. Even in various non-standard poses, the detector is still able to classify people effectively.

Research limitations/implications

The main limitations are point cloud clusters that resemble head shapes, which cause false positives, and disruptive accessories (such as large hats), which cause false negatives. Still, these can be overcome with sufficient training samples.

Practical implications

The method can be used in industrial and social mobile applications because of its robustness, low resource needs and low power consumption.

Originality/value

The paper introduces a novel and efficient technique to detect people in arbitrary poses, under poor light conditions and with low computational resources. Solving all these problems in a single, lightweight method makes the study fulfill an important need for collaborative and autonomous mobile robots.

Details

Sensor Review, vol. 40 no. 2
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 3 December 2020

Giuseppe Gillini, Paolo Di Lillo, Filippo Arrichiello, Daniele Di Vito, Alessandro Marino, Gianluca Antonelli and Stefano Chiaverini

In the past decade, more than 700 million people are affected by some kind of disability or handicap. In this context, the research interest in assistive robotics is growing up…

Abstract

Purpose

Over the past decade, more than 700 million people have been affected by some kind of disability or handicap. In this context, research interest in assistive robotics is growing. For people with mobility impairments, daily life operations such as dressing or feeding require the assistance of dedicated people; thus, the use of devices providing independent mobility can have a large impact on improving their quality of life. The purpose of this paper is to present the development of a robotic system aimed at assisting people with this kind of severe motion disability by providing a certain level of autonomy.

Design/methodology/approach

The system is based on a hierarchical architecture in which, at the top level, the user generates simple, high-level commands via a graphical user interface operated through a P300-based brain-computer interface. These commands are ultimately converted into joint-space and Cartesian-space tasks for the robotic system, which are then handled by the robot motion control algorithm using a set-based task-priority inverse kinematics strategy. The overall architecture is realized by integrating control and perception software modules developed in the Robot Operating System (ROS) environment with the BCI2000 framework, used to operate the brain-computer interface (BCI) device.
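
The set-based strategy itself is not detailed in the abstract; as background, the sketch below shows the classical (non set-based) task-priority inverse kinematics step it builds on, in which a secondary task is projected into the null space of the primary task so that it cannot disturb it. All matrices and gains here are toy values.

```python
# Simplified two-task priority inverse kinematics step (not the authors'
# set-based controller): the secondary task velocity is projected into the
# null space of the primary task Jacobian.
import numpy as np

def task_priority_step(J1, err1, J2, err2, gain=1.0):
    """J1, J2: task Jacobians; err1, err2: task errors; returns joint velocities."""
    J1_pinv = np.linalg.pinv(J1)
    qdot_primary = gain * J1_pinv @ err1
    # Null-space projector of the primary task.
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1
    # Secondary task acts only in the remaining redundancy.
    qdot_secondary = np.linalg.pinv(J2 @ N1) @ (gain * err2 - J2 @ qdot_primary)
    return qdot_primary + N1 @ qdot_secondary

# Toy example: 7-DOF arm, 3-D position task (primary) + 1-D task (secondary).
J1 = np.random.rand(3, 7); err1 = np.array([0.01, 0.0, -0.02])
J2 = np.random.rand(1, 7); err2 = np.array([0.005])
print(task_priority_step(J1, err1, J2, err2))
```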

Findings

The effectiveness of the proposed architecture is validated through experiments in which a user generates commands, via an Emotiv Epoc+ BCI, to perform assistive tasks that are executed by a Kinova MOVO robot, i.e. an omnidirectional mobile robotic platform equipped with two lightweight seven-degree-of-freedom manipulators.

Originality/value

The P300 paradigm has been successfully integrated with a control architecture that allows a complex robotic system to be commanded to perform daily life operations. The user defines high-level commands via the BCI, leaving all the low-level tasks, for example safety-related tasks, to be handled by the system in a completely autonomous manner.

Details

Industrial Robot: the international journal of robotics research and application, vol. 49 no. 1
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 23 September 2020

Siyuan Huang, Limin Liu, Jian Dong, Xiongjun Fu and Leilei Jia

Most of the existing ground filtering algorithms are based on the Cartesian coordinate system, which is not compatible with the working principle of mobile light detection and…

Abstract

Purpose

Most of the existing ground filtering algorithms are based on the Cartesian coordinate system, which is not compatible with the working principle of mobile light detection and ranging and makes it difficult to obtain good filtering accuracy. The purpose of this paper is to improve the accuracy of ground filtering by making full use of the ordering information between points in spherical coordinates.

Design/methodology/approach

First, the cloth simulation (CS) algorithm is modified into a sorting algorithm for scattered point clouds to obtain the adjacency relationships of the points and to generate a matrix containing this adjacency information. Then, according to the adjacency information of the points, a projection distance comparison and a local slope analysis are performed simultaneously. These results are integrated to further process the point cloud details, and the algorithm is finally used to filter a point cloud of a scene from the KITTI data set.
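
As an illustration of the local slope analysis mentioned above (one ingredient of the pipeline only, not the authors' full algorithm), the sketch below applies a slope threshold between consecutive points of a single sorted scan column; the 10 degree threshold is an arbitrary example value.

```python
# Illustrative sketch of a local-slope test between consecutive points of one
# sorted scan column: a point is kept as ground when the slope to its
# predecessor stays below a threshold.
import numpy as np

def slope_ground_mask(column: np.ndarray, max_slope_deg: float = 10.0) -> np.ndarray:
    """column: (N, 3) points ordered by increasing range along one azimuth."""
    mask = np.zeros(len(column), dtype=bool)
    mask[0] = True                                  # nearest return assumed ground
    max_tan = np.tan(np.radians(max_slope_deg))
    for i in range(1, len(column)):
        d_xy = np.linalg.norm(column[i, :2] - column[i - 1, :2])
        d_z = abs(column[i, 2] - column[i - 1, 2])
        mask[i] = d_xy > 1e-6 and (d_z / d_xy) <= max_tan
    return mask

# Toy column: flat ground with a 30 cm step (e.g. a curb) at the 7th point.
col = np.array([[float(i), 0.0, 0.0] for i in range(10)])
col[6:, 2] += 0.3
print(slope_ground_mask(col))
```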

Findings

The results show that the accuracy of KITTI point cloud sorting is 96.3% and the kappa coefficient of the ground filtering result is 0.7978. Compared with other algorithms applied to the same scene, the proposed algorithm has higher processing accuracy.

Research limitations/implications

The steps of the algorithm can be computed in parallel, which saves time owing to the small amount of computation. In addition, the generality of the algorithm is improved, and it can be used for different data sets from urban streets. However, owing to the lack of point clouds from field environments with labeled ground points, the filtering performance of this algorithm in field environments requires further study.

Originality/value

In this study, the point cloud neighboring information was obtained by a modified CS algorithm. The ground filtering algorithm distinguishes ground points from off-ground points according to the flatness, continuity and minimality of ground points in the point cloud data. In addition, changing the thresholds has little effect on the results of the algorithm.

Details

Engineering Computations, vol. 38 no. 4
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 28 January 2020

James Robert Wingham, Robert Turner, Joanna Shepherd and Candice Majewski

X-Ray-computed micro-tomography (micro-CT) is relatively well established in additive manufacturing as a method to determine the porosity and geometry of printed parts and, in…

Abstract

Purpose

X-ray computed micro-tomography (micro-CT) is relatively well established in additive manufacturing as a method to determine the porosity and geometry of printed parts and, in some cases, the presence of inclusions or contamination. This paper aims to demonstrate that micro-CT can also be used to quantitatively analyse the homogeneity of micro-composite parts, in this case created using laser sintering (LS).

Design/methodology/approach

LS specimens were manufactured in polyamide 12 with and without incorporation of a silver phosphate glass additive in different sizes. The specimens were scanned using micro-CT to characterise both their porosity and the homogeneity of dispersion of the additive throughout the volume.

Findings

This work showed that it was possible to use micro-CT to determine information related to both porosity and additive dispersion from the same scan. Analysis of the pores revealed the overall porosity of the printed parts, with linear elastic fracture mechanics used to identify any pores likely to lead to premature failure of the parts. Analysis of the additive was found to be possible above a certain size of particle, with the size distribution used to identify any agglomeration of the silver phosphate glass. The particle positions were also used to determine the complete spatial randomness of the additive as a quantitative measure of the dispersion.
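
The abstract does not state which complete-spatial-randomness statistic was used; one standard choice, shown here purely as an illustration, is the Clark-Evans nearest-neighbour ratio, which compares the observed mean nearest-neighbour distance of the particle centroids with the value expected for a fully random (Poisson) point process of the same density.

```python
# Hedged illustration of one standard complete-spatial-randomness statistic,
# the Clark-Evans nearest-neighbour ratio R (not necessarily the paper's
# measure). R ~ 1 for random dispersion, R < 1 for clustering, R > 1 for
# regular spacing.
import numpy as np
from scipy.spatial import cKDTree

def clark_evans_ratio(points: np.ndarray, volume: float) -> float:
    """points: (N, 3) particle centroids inside a region of the given volume."""
    tree = cKDTree(points)
    # Distance to the nearest *other* point (k=2: self plus nearest neighbour).
    d, _ = tree.query(points, k=2)
    observed = d[:, 1].mean()
    density = len(points) / volume
    # Expected mean nearest-neighbour distance for a 3-D Poisson process.
    expected = 0.554 / density ** (1.0 / 3.0)
    return observed / expected

pts = np.random.rand(1000, 3) * 10.0   # synthetic particle centroids in a 10^3 volume
print(f"Clark-Evans ratio: {clark_evans_ratio(pts, volume=1000.0):.3f}")
```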

Practical implications

This shows that micro-CT is an effective method of identifying both porosity and additive agglomeration within printed parts, meaning it can be used for quality control of micro-composites and to validate the homogeneity of the polymer/additive mixture prior to printing.

Originality/value

This is believed to be the first instance of micro-CT being used to identify and analyse the distribution of an additive within a laser sintered part.

Details

Rapid Prototyping Journal, vol. 26 no. 4
Type: Research Article
ISSN: 1355-2546

Article
Publication date: 5 April 2021

Shifeng Lin and Ning Wang

In multi-robot cooperation, the cloud can share sensor data, which can help robots better perceive the environment. For cloud robotics, robot grasping is an important ability that…

Abstract

Purpose

In multi-robot cooperation, the cloud can share sensor data, which can help robots better perceive the environment. For cloud robotics, grasping is an important ability that must be mastered. Usually, the information used for grasping comes mainly from visual sensors. However, owing to the uncertainty of the working environment, the vision sensor's view may be blocked by unknown objects. This paper aims to propose a solution to the robot grasping problem when the vision sensor is occluded, by sharing the information of multiple vision sensors in the cloud.

Design/methodology/approach

First, the random sample consensus (RANSAC) and principal component analysis (PCA) algorithms are used to detect the desktop region. Then, the minimum bounding rectangle of the occlusion area is obtained by the PCA algorithm, and the candidate camera view range is obtained by plane segmentation. The candidate camera view range is then combined with the manipulator workspace to obtain the camera pose and drive the arm to take pictures of the occluded desktop area. Finally, a Gaussian mixture model (GMM) is used to approximate the shape of the object projection; for each single Gaussian component, a grasping rectangle is generated and evaluated to obtain the most suitable one.
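
To illustrate the final step only (a hedged sketch assuming scikit-learn; the paper's candidate-evaluation criterion is not reproduced), the snippet below fits a GMM to a 2-D projection of object points and converts each Gaussian component's mean and covariance into an oriented candidate grasping rectangle.

```python
# Sketch of GMM shape approximation: each fitted Gaussian component yields one
# oriented candidate rectangle (center, orientation, size). Data and the
# 2-sigma extent are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

def grasp_rectangles(points_2d: np.ndarray, n_components: int = 3, n_sigma: float = 2.0):
    gmm = GaussianMixture(n_components=n_components, covariance_type="full").fit(points_2d)
    rects = []
    for mean, cov in zip(gmm.means_, gmm.covariances_):
        eigvals, eigvecs = np.linalg.eigh(cov)
        angle = np.arctan2(eigvecs[1, -1], eigvecs[0, -1])       # major-axis orientation
        width, height = 2.0 * n_sigma * np.sqrt(eigvals[::-1])   # major, minor extents
        rects.append({"center": mean, "angle": angle, "size": (width, height)})
    return rects

# Synthetic projection of an elongated object made of three blobs.
pts = np.vstack([np.random.randn(200, 2) * [0.03, 0.01] + c
                 for c in ([0.0, 0.0], [0.1, 0.05], [0.2, 0.0])])
for r in grasp_rectangles(pts):
    print(r)
```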

Findings

In this paper, a variety of scenarios in which the cloud robot's vision sensor is blocked are tested. Experimental results show that the proposed algorithm can capture images of the occluded desktop area and successfully grasp objects in the occluded area.

Originality/value

In existing work, there are few studies on using active multi-sensor approaches to solve the occlusion problem. This paper presents a new solution to the occlusion problem. The proposed method can be applied to multi-robot cloud robotics working environments through cloud sharing, which helps the robots perceive the environment better. In addition, this paper proposes a method to obtain the object-grasping rectangle based on GMM shape approximation of the point cloud projection. Experiments show that the proposed methods work well.

Details

Assembly Automation, vol. 41 no. 3
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 20 February 2018

Nick Lee, Laura Chamberlain and Leif Brandes

To grow, any field of research must both encourage newcomers to work within its boundaries, and help them learn to conduct excellent research within the field’s parameters. This…

Abstract

Purpose

To grow, any field of research must both encourage newcomers to work within its boundaries and help them learn to conduct excellent research within the field's parameters. This paper aims to examine whether the existing body of neuromarketing literature can support such growth. Specifically, the authors attempt to replicate how a newcomer to the field of neuromarketing would go about orienting themselves to the field and learning how to conduct excellent neuromarketing research.

Design/methodology/approach

A total of 131 papers published in the areas of “neuromarketing” and “consumer neuroscience” were downloaded and then identified as conceptual or empirical in nature. A separate database was created for each type of research paper and the relevant information was recorded. For both conceptual and empirical papers, the citation details, notably year of publication, journal, journal ranking and impact factor, were recorded. Papers were then descriptively analysed with regard to number of publications over the years, content and journal quality.

Findings

It is found that interest in the field is growing, with a greater variety of topics and methods appearing year on year. However, the authors also identify some issues of concern for the field if it wishes to sustain this growth. First, the highly fragmented literature and the lack of signposting make it very difficult for newcomers to find the relevant work and journal outlets. Second, there is a lack of high-quality, user-oriented methodological primers that a newcomer would come across. Finally, neuromarketing as it appears to a newcomer suffers from a lack of clear guidance on what defines good versus bad neuromarketing research. As a large majority of the reviewed papers have appeared in lower-ranked journals, newcomers might get a biased view of the acceptable research standards in the field.

Originality/value

The insights from the analysis inform a tentative agenda for future work which gives neuromarketing itself greater scientific purpose, and the potential to grow into a better-established field of study within marketing as a whole.

Details

European Journal of Marketing, vol. 52 no. 1/2
Type: Research Article
ISSN: 0309-0566

Article
Publication date: 17 May 2022

Lin Li, Xi Chen and Tie Zhang

Many metal workpieces have the characteristics of less texture, symmetry and reflectivity, which presents a challenge to existing pose estimation methods. The purpose of this…

Abstract

Purpose

Many metal workpieces are characterized by little texture, symmetry and reflectivity, which presents a challenge to existing pose estimation methods. The purpose of this paper is to propose a pose estimation method for grasping metal workpieces with industrial robots.

Design/methodology/approach

A dual-hypothesis robust point matching registration network (RPM-Net) is proposed to estimate the pose from a point cloud. The proposed method uses the Point Cloud Library (PCL) to segment the workpiece point cloud from the scene and a well-trained robust point matching registration network to estimate the pose through dual-hypothesis point cloud registration.
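
The abstract does not state how the two hypotheses are compared, so the following is only a generic sketch of hypothesis selection for rigid registration: each candidate transform is scored by its nearest-neighbour residual against the scene cloud, and the lower-residual hypothesis is kept.

```python
# Generic sketch of choosing between two registration hypotheses (not the
# authors' selection rule): score each hypothesised rigid transform by its
# RMS nearest-neighbour distance against the scene and keep the best.
import numpy as np
from scipy.spatial import cKDTree

def residual(model: np.ndarray, scene: np.ndarray, R: np.ndarray, t: np.ndarray) -> float:
    """RMS nearest-neighbour distance of the transformed model against the scene."""
    transformed = model @ R.T + t
    d, _ = cKDTree(scene).query(transformed)
    return float(np.sqrt((d ** 2).mean()))

def pick_hypothesis(model, scene, hypotheses):
    """hypotheses: list of (R, t) candidates; returns the one with the smallest residual."""
    return min(hypotheses, key=lambda h: residual(model, scene, *h))

# Toy example: the scene is the model rotated by 30 degrees about z.
theta = np.radians(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
model = np.random.rand(500, 3)
scene = model @ R_true.T
best_R, best_t = pick_hypothesis(model, scene,
                                 [(np.eye(3), np.zeros(3)), (R_true, np.zeros(3))])
print("selected rotation:\n", np.round(best_R, 3))
```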

Findings

In the experiment section, an experimental platform is built, which contains a six-axis industrial robot and a binocular structured-light sensor. A data set containing three subsets is collected on this platform. After training with the simulated data set, the dual-hypothesis RPM-Net is tested on the experimental data set, and the success rates on the three real data sets are 94.0%, 92.0% and 96.0%, respectively.

Originality/value

The contributions are as follows: first, a dual-hypothesis RPM-Net is proposed, which can realize the pose estimation of discrete and less-textured metal workpieces from point clouds; and second, a method of generating training data sets using only CAD models and the visualization algorithm of the PCL is proposed.

Details

Industrial Robot: the international journal of robotics research and application, vol. 49 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 27 April 2020

Seungjun Woo, Francisco Yumbla, Chanyong Park, Hyouk Ryeol Choi and Hyungpil Moon

The purpose of this study is to propose a novel plane-based mapping method for legged-robot navigation in a stairway environment.

Abstract

Purpose

The purpose of this study is to propose a novel plane-based mapping method for legged-robot navigation in a stairway environment.

Design/methodology/approach

The approach implemented in this study estimates a plane for each step of a stairway using a weighted average of sensor measurements and predictions. It segments planes from point cloud data via random sample consensus (RANSAC). The prediction uses the regular structure of a stairway. When estimating a plane, the algorithm considers the errors introduced by the distance sensor and RANSAC, in addition to stairstep irregularities, by using covariance matrices. The plane coefficients are managed separately with the data structure suggested in this study. In addition, this data structure allows the algorithm to store the information of each stairstep as a single entity.
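
As a minimal sketch of the covariance-weighted averaging idea (not the authors' derivation; the covariance values below are placeholders), the snippet fuses a predicted and a measured set of plane coefficients by information weighting, so the more certain estimate dominates.

```python
# Minimal sketch of covariance-weighted fusion of a predicted and a measured
# plane (coefficients [a, b, c, d] of ax + by + cz + d = 0). The covariances
# are illustrative; the paper derives them from the sensor, RANSAC and
# stairstep irregularities.
import numpy as np

def fuse_planes(pred, cov_pred, meas, cov_meas):
    """Information-weighted average of two plane-coefficient estimates."""
    W_pred = np.linalg.inv(cov_pred)
    W_meas = np.linalg.inv(cov_meas)
    cov_fused = np.linalg.inv(W_pred + W_meas)
    fused = cov_fused @ (W_pred @ pred + W_meas @ meas)
    # Re-normalise so the normal vector (a, b, c) has unit length.
    return fused / np.linalg.norm(fused[:3]), cov_fused

pred = np.array([0.0, 0.0, 1.0, -0.18])        # predicted step plane, 18 cm rise
meas = np.array([0.02, -0.01, 0.99, -0.20])    # noisy RANSAC measurement
fused, _ = fuse_planes(pred, np.eye(4) * 1e-3, meas, np.eye(4) * 4e-3)
print(np.round(fused, 3))
```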

Findings

In the stairway environment, the accuracy delivered by the proposed algorithm was higher than that delivered by traditional mapping methods. A hardware experiment verified the accuracy and applicability of the algorithm.

Originality/value

The proposed algorithm provides accurate stairway-environment mapping and detailed specifications of each stairstep. Using this information, a legged robot can navigate and plan its motion in a stairway environment more efficiently.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 11 January 2021

Mingyang Li, Zhijiang Du, Xiaoxing Ma, Wei Dong, Yongzhi Wang, Yongzhuo Gao and Wei Chen

This paper aims to propose a robotic automation system for processing special-shaped thin-walled workpieces, which includes a measurement part and a processing part.

Abstract

Purpose

This paper aims to propose a robotic automation system for processing special-shaped thin-walled workpieces, which includes a measurement part and a processing part.

Design/methodology/approach

In the measurement part, to realize three-dimensional camera hand-eye calibration efficiently and accurately from a large amount of measurement data, this paper improves the traditional probabilistic method. To address the time-consuming extraction of point cloud features, this paper proposes a point cloud feature extraction method based on seed points. In the processing part, the authors design a new type of chamfering tool. During processing, the robot adopts admittance control to perform compensation according to the feedback of four sensors mounted on the tool.
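
The admittance-control compensation can be illustrated with a one-axis sketch (gains and time step are placeholders, not the paper's): the measured contact force is converted into a position correction by integrating a mass-damper-spring admittance law at each control cycle.

```python
# Illustrative one-axis admittance law: integrating m*a + b*v + k*x = f turns a
# measured contact force into a position offset, letting the tool comply with
# the chamfered edge. All parameter values are placeholders.
import numpy as np

class Admittance1D:
    def __init__(self, mass=1.0, damping=50.0, stiffness=200.0, dt=0.002):
        self.m, self.b, self.k, self.dt = mass, damping, stiffness, dt
        self.x, self.v = 0.0, 0.0          # position offset and velocity

    def update(self, force: float) -> float:
        """Return the position compensation for the current force sample."""
        a = (force - self.b * self.v - self.k * self.x) / self.m
        self.v += a * self.dt
        self.x += self.v * self.dt
        return self.x

ctrl = Admittance1D()
for f in np.concatenate([np.full(100, 5.0), np.zeros(100)]):   # 5 N contact, then release
    offset = ctrl.update(f)
print(f"final offset: {offset:.4f} m")
```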

Findings

Experiments show that the proposed system enables the tool to fit the chamfered edge smoothly during processing, and the machined chamfer meets the processing requirement of 0.5 × 0.5 mm² to 0.9 × 0.9 mm².

Practical implications

The proposed design and approach can be applied to many types of special-shaped thin-walled parts. This provides a new solution to the automation integration problem in aerospace manufacturing.

Originality/value

A novel robotic automation system for processing special-shaped thin-walled workpieces is proposed, and a new type of chamfering tool is designed. Furthermore, a more accurate probabilistic hand-eye calibration method and a more efficient point cloud extraction method are proposed, which are better suited to this system than the traditional methods.

Details

Assembly Automation, vol. 41 no. 1
Type: Research Article
ISSN: 0144-5154
