Search results

1 – 10 of 15
Article
Publication date: 5 June 2017

Ge Wu, Duan Li, Yueqi Zhong and PengPeng Hu

The calibration is a key but cumbersome process for 3D body scanning using multiple depth cameras. The purpose of this paper is to simplify the calibration process by introducing…

Abstract

Purpose

The calibration is a key but cumbersome process for 3D body scanning using multiple depth cameras. The purpose of this paper is to simplify the calibration process by introducing a new method to calibrate the extrinsic parameters of multiple depth cameras simultaneously.

Design/methodology/approach

An improved method is introduced to enhance the accuracy based on the virtual checkerboards. Laplace coordinates are employed for a point-to-point adjustment to increase the accuracy of scanned data. A system with eight depth cameras is developed for full-body scanning, and the performance of this system is verified by actual results.
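
As a minimal illustration of what the estimated extrinsic parameters are used for, the sketch below (not the authors' implementation) transforms each camera's point cloud into a shared world frame with an assumed rotation R and translation t per camera and merges the results; estimating the extrinsics from the virtual checkerboards and the Laplacian point-to-point refinement are not reproduced here.

```python
import numpy as np

def to_world(points_cam: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Transform an (N, 3) point cloud from camera coordinates to world coordinates."""
    return points_cam @ R.T + t

def merge_scans(clouds, extrinsics):
    """clouds: list of (N_i, 3) arrays; extrinsics: list of (R_i, t_i) pairs per camera."""
    return np.vstack([to_world(c, R, t) for c, (R, t) in zip(clouds, extrinsics)])

# Hypothetical usage with two depth cameras.
rng = np.random.default_rng(0)
cloud_a = rng.normal(size=(1000, 3))
cloud_b = rng.normal(size=(1000, 3))
R_b = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])  # 90 degrees about z
t_b = np.array([0.0, 0.5, 0.0])
merged = merge_scans([cloud_a, cloud_b], [(np.eye(3), np.zeros(3)), (R_b, t_b)])
print(merged.shape)  # (2000, 3)
```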

Findings

The agreement of measurements between scanned human bodies and the real subjects demonstrates the accuracy of the proposed method. The entire calibration process is automatic.

Originality/value

A complete algorithm for a full human body scanning system is introduced in this paper. This is the first publicly reported study on the refinement and point-by-point adjustment based on virtual checkerboards for enhancing scanning accuracy.

Details

International Journal of Clothing Science and Technology, vol. 29 no. 3
Type: Research Article
ISSN: 0955-6222

Keywords

Article
Publication date: 21 February 2024

Amruta Rout, Golak Bihari Mahanta, Bibhuti Bhusan Biswal, Renin Francy T., Sri Vardhan Raj and Deepak B.B.V.L.

The purpose of this study is to plan and develop a cost-effective health-care robot for assisting and observing the patients in an accurate and effective way during pandemic…


Abstract

Purpose

The purpose of this study is to plan and develop a cost-effective health-care robot for assisting and observing patients in an accurate and effective way during pandemic situations like COVID-19. The proposed research work can help in better management of pandemic situations in rural areas as well as in developing countries where medical facilities are not easily available.

Design/methodology/approach

It becomes very difficult for the medical staff to continuously check a patient’s condition in terms of symptoms and critical parameters during pandemic situations. To deal with these situations, a service mobile robot with multiple sensors for measuring patients’ bodily indicators has been proposed, and a prototype has been developed that can monitor and aid the patient using the robotic arm. A fuzzy controller has also been incorporated into the mobile robot, through which decisions on patient monitoring can be taken automatically. The Mamdani implication method has been utilized to formulate the mathematical expression of M “if-then” rules with defined inputs xj (j = 1, 2, …, s) and output yi. The input and output variables are described by the membership functions µAij(xj) and µCi(yi) to execute the fuzzy inference system controller. Here, Aij and Ci are the developed fuzzy sets.
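
The sketch below is a minimal, generic Mamdani-style fuzzy inference written with NumPy, intended only to illustrate the kind of if-then rule evaluation and centroid defuzzification described above; the input variables (body temperature, heart rate), membership functions and rules are illustrative assumptions, not the paper's actual rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with vertices a <= b <= c."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

# Universe of discourse for the (hypothetical) output, e.g. a normalized care level.
y = np.linspace(0.0, 1.0, 201)
low_out = tri(y, -0.5, 0.0, 0.5)   # output fuzzy set C_1
high_out = tri(y, 0.5, 1.0, 1.5)   # output fuzzy set C_2

def infer(body_temp, heart_rate):
    # Fuzzify crisp inputs x_j with antecedent sets A_ij (hypothetical ranges).
    temp_high = tri(body_temp, 37.5, 39.0, 42.0)
    hr_high = tri(heart_rate, 90.0, 120.0, 160.0)
    temp_norm = tri(body_temp, 35.0, 36.8, 37.8)
    hr_norm = tri(heart_rate, 55.0, 75.0, 95.0)

    # Mamdani rules: firing strength = min of antecedents, consequent clipped at that strength.
    r1 = min(temp_high, hr_high)   # IF temp high AND heart rate high THEN output high
    r2 = min(temp_norm, hr_norm)   # IF temp normal AND heart rate normal THEN output low
    aggregated = np.maximum(np.minimum(r1, high_out), np.minimum(r2, low_out))

    # Centroid defuzzification gives the crisp output.
    return float(np.sum(y * aggregated) / (np.sum(aggregated) + 1e-9))

print(infer(39.2, 130.0))  # feverish, fast heart rate -> high output
print(infer(36.6, 72.0))   # normal readings -> low output
```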

Findings

The fuzzy-based prediction model has been tested on the medicine output for the initial 27 runs and validated by correlating the predicted and actual values. The correlation coefficient has been found to be 0.989, with a mean square error of 0.000174, signifying a strong relationship between the predicted and actual values. The proposed research work can handle multiple tasks such as online consulting, continuous patient condition monitoring in general wards and ICUs, telemedicine services, hospital waste disposal and providing service to patients at regular time intervals.

Originality/value

The novelty of the proposed research work lies in the integration of artificial intelligence techniques such as fuzzy logic with the multi-sensor-based service robot for easy decision-making and continuous patient monitoring in rural hospitals, reducing the work stress on medical staff during pandemic situations.

Details

Robotic Intelligence and Automation, vol. 44 no. 1
Type: Research Article
ISSN: 2754-6969

Keywords

Article
Publication date: 30 April 2018

Yueqi Zhong, Duan Li, Ge Wu and PengPeng Hu

Automatic body measurement is key to tailoring, mass customization and fit/ease evaluation. The major challenges include finding the landmarks and extracting the sizes…

Abstract

Purpose

Automatic body measurement is key to tailoring, mass customization and fit/ease evaluation. The major challenges include finding the landmarks and extracting the sizes accurately. The purpose of this paper is to propose a new body measurement method based on the loop structure.

Design/methodology/approach

The scanned human model is sliced equally into layers consisting of loops of various shapes. Semantic feature analysis is treated as a problem of finding the points of interest (POI) and the loops of interest (LOI) according to the types of loop connections. Methods for determining the basic landmarks are detailed.
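
As a minimal illustration of one measurement primitive implied by the loop structure, the sketch below computes the perimeter of a single cross-sectional loop as a girth estimate; extracting the loops, POI and LOI is the paper's contribution and is not reproduced here.

```python
import numpy as np

def loop_perimeter(loop_xyz: np.ndarray) -> float:
    """Perimeter of a closed polyline given as an (N, 3) array of ordered vertices."""
    diffs = np.diff(np.vstack([loop_xyz, loop_xyz[:1]]), axis=0)  # close the loop
    return float(np.linalg.norm(diffs, axis=1).sum())

# Hypothetical usage: a circular "waist" loop of radius 0.4 m at height 1.0 m.
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
waist = np.column_stack([0.4 * np.cos(theta), 0.4 * np.sin(theta), np.full_like(theta, 1.0)])
print(round(loop_perimeter(waist), 3))  # close to 2 * pi * 0.4 ≈ 2.513 m
```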

Findings

The experimental results validate that the proposed methods can be used to locate the landmarks and to extract sizes from markerless human scans robustly and efficiently.

Originality/value

With this method, body measurement can be performed quickly, with average errors of around 0.5 cm. The results of segmentation, landmarking and body measurement also validate the robustness and efficiency of the proposed methods.

Details

International Journal of Clothing Science and Technology, vol. 30 no. 3
Type: Research Article
ISSN: 0955-6222

Keywords

Article
Publication date: 4 August 2022

Zelin Wang, Feng Gao, Yue Zhao, Yunpeng Yin and Liangyu Wang

Path planning is a fundamental and significant issue in robotics research, especially for the legged robots, since it is the core technology for robots to complete complex tasks…

Abstract

Purpose

Path planning is a fundamental and significant issue in robotics research, especially for the legged robots, since it is the core technology for robots to complete complex tasks such as autonomous navigation and exploration. The purpose of this paper is to propose a path planning and tracking framework for the autonomous navigation of hexapod robots.

Design/methodology/approach

First, a hexapod robot called Hexapod-Mini is briefly introduced. Then a path planning algorithm based on improved A* is proposed, which introduces an artificial potential field (APF) factor into the evaluation function to generate a safe, collision-free initial path. A turning-point optimization based on a greedy algorithm is then applied to reduce the number of turns in the path, and a fast-turning trajectory for the hexapod robot is proposed for path smoothing. In addition, a model predictive control-based motion tracking controller is used for path tracking.
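
The sketch below shows, on a toy occupancy grid, the general idea of folding an artificial potential field term into the A* evaluation function, i.e. f(n) = g(n) + h(n) + w·U(n) with U(n) growing near obstacles so the planner prefers safer cells; the grid, weight and repulsive potential are illustrative assumptions rather than the paper's exact formulation, and the turning-point optimization and MPC tracking are omitted.

```python
import heapq
import numpy as np

def repulsive_potential(grid: np.ndarray, radius: int = 2) -> np.ndarray:
    """U(cell): larger for free cells close to obstacles (cells with value 1)."""
    U = np.zeros(grid.shape, dtype=float)
    obstacles = np.argwhere(grid == 1)
    for r in range(grid.shape[0]):
        for c in range(grid.shape[1]):
            if grid[r, c] == 1:
                continue
            d = np.min(np.abs(obstacles - [r, c]).sum(axis=1)) if len(obstacles) else radius
            U[r, c] = max(0.0, radius - d)
    return U

def astar_apf(grid, start, goal, weight=2.0):
    U = repulsive_potential(grid)
    h = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start) + weight * U[start], 0.0, start, None)]
    came_from, g_cost = {}, {start: 0.0}
    while open_heap:
        _, g, node, parent = heapq.heappop(open_heap)
        if node in came_from:
            continue                      # already finalized
        came_from[node] = parent
        if node == goal:                  # reconstruct path back to start
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if not (0 <= nxt[0] < grid.shape[0] and 0 <= nxt[1] < grid.shape[1]):
                continue
            if grid[nxt] == 1 or nxt in came_from:
                continue
            ng = g + 1.0
            if ng < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = ng
                # Evaluation function f = g + h + w * U biases the search away from obstacles.
                heapq.heappush(open_heap, (ng + h(nxt) + weight * U[nxt], ng, nxt, node))
    return None

grid = np.zeros((10, 10), dtype=int)
grid[4, 2:8] = 1  # a wall-like obstacle
print(astar_apf(grid, (0, 0), (9, 9)))
```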

Findings

The simulation and experimental results show that the framework can generate a safe, fast, collision-free and smooth path, and the authors’ hexapod robot can track the path effectively, demonstrating the performance of the framework.

Originality/value

This work presents a framework for autonomous path planning and tracking of hexapod robots. The new approach overcomes the disadvantages of traditional path planning, such as lack of safety, insufficient smoothness and an excessive number of turns, and the proposed method has been successfully applied to an actual hexapod robot.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 1
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 23 September 2020

Siyuan Huang, Limin Liu, Jian Dong, Xiongjun Fu and Leilei Jia

Most of the existing ground filtering algorithms are based on the Cartesian coordinate system, which is not compatible with the working principle of mobile light detection and…

Abstract

Purpose

Most existing ground filtering algorithms are based on the Cartesian coordinate system, which is not compatible with the working principle of mobile light detection and ranging and makes it difficult to obtain good filtering accuracy. The purpose of this paper is to improve the accuracy of ground filtering by making full use of the ordering information between neighbouring points in spherical coordinates.

Design/methodology/approach

First, the cloth simulation (CS) algorithm is modified into a sorting algorithm for scattered point clouds to obtain the adjacency relationships of the points and to generate a matrix containing this adjacency information. Then, based on the adjacency information, a projection distance comparison and a local slope analysis are performed simultaneously. These results are integrated to further refine the point cloud, and the algorithm is finally used to filter a point cloud from a scene in the KITTI data set.
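
The sketch below illustrates a generic local slope test of the kind such ground filters rely on: along an ordered scan line, a point is kept as ground while the slope to its predecessor stays below a threshold. The cloth-simulation-based sorting and the projection distance comparison from the paper are not reproduced, and the threshold and data are illustrative assumptions.

```python
import numpy as np

def slope_filter(ordered_points: np.ndarray, max_slope_deg: float = 10.0) -> np.ndarray:
    """ordered_points: (N, 3) xyz points already ordered along one scan line.
    Returns a boolean mask that is True for points labelled as ground."""
    max_tan = np.tan(np.radians(max_slope_deg))
    ground = np.zeros(len(ordered_points), dtype=bool)
    ground[0] = True  # assume the first (closest) point is ground
    for i in range(1, len(ordered_points)):
        dz = ordered_points[i, 2] - ordered_points[i - 1, 2]
        dxy = np.linalg.norm(ordered_points[i, :2] - ordered_points[i - 1, :2]) + 1e-9
        ground[i] = ground[i - 1] and abs(dz) / dxy <= max_tan
    return ground

# Hypothetical usage: a flat road followed by a wall-like rise.
xs = np.linspace(1.0, 20.0, 40)
pts = np.column_stack([xs, np.zeros_like(xs), np.zeros_like(xs)])
pts[30:, 2] = np.linspace(0.5, 2.0, 10)
print(slope_filter(pts).sum(), "of", len(pts), "points labelled ground")
```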

Findings

The results show that the accuracy of KITTI point cloud sorting is 96.3% and the kappa coefficient of the ground filtering result is 0.7978. Compared with other algorithms applied to the same scene, the proposed algorithm has higher processing accuracy.

Research limitations/implications

The steps of the algorithm can be computed in parallel, which saves time owing to the small amount of computation. In addition, the generality of the algorithm is improved, and it can be used for different urban street data sets. However, owing to the lack of field-environment point clouds with labeled ground points, the filtering performance of the algorithm in field environments needs further study.

Originality/value

In this study, the point cloud neighborhood information is obtained by a modified CS algorithm. The ground filtering algorithm distinguishes ground points from off-ground points according to the flatness, continuity and minimality of ground points in the point cloud data. In addition, changing the thresholds has little effect on the results.

Details

Engineering Computations, vol. 38 no. 4
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 7 August 2017

Shenglan Liu, Muxin Sun, Xiaodong Huang, Wei Wang and Feilong Wang

Robot vision is a fundamental capability for human–robot interaction and complex robot tasks. In this paper, the authors aim to use Kinect and propose a feature graph fusion (FGF) for…

Abstract

Purpose

Robot vision is a fundamental capability for human–robot interaction and complex robot tasks. In this paper, the authors use Kinect and propose a feature graph fusion (FGF) method for robot recognition.

Design/methodology/approach

The feature fusion utilizes red-green-blue (RGB) and depth information from Kinect to construct a fused feature. FGF uses multi-Jaccard similarity to compute a robust graph and a word embedding method to enhance the recognition results.
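
As a minimal illustration of the graph construction step, the sketch below builds a pairwise similarity matrix with the extended Jaccard (Tanimoto) coefficient over feature vectors; the actual feature extraction from RGB and depth images and the word-embedding-based fusion are not reproduced, and random features stand in for real descriptors.

```python
import numpy as np

def extended_jaccard(x: np.ndarray, y: np.ndarray) -> float:
    """Extended Jaccard (Tanimoto) similarity for real-valued feature vectors."""
    dot = float(np.dot(x, y))
    return dot / (float(np.dot(x, x)) + float(np.dot(y, y)) - dot + 1e-12)

def similarity_graph(features: np.ndarray) -> np.ndarray:
    """features: (N, D) matrix; returns the (N, N) pairwise similarity matrix."""
    n = len(features)
    W = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            W[i, j] = W[j, i] = extended_jaccard(features[i], features[j])
    return W

# Hypothetical usage: 5 images with 128-d descriptors (e.g. from RGB or depth).
rng = np.random.default_rng(1)
feats = rng.random((5, 128))
print(np.round(similarity_graph(feats), 3))
```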

Findings

The authors also collect a DUT RGB-Depth (RGB-D) face data set and a benchmark data set to evaluate the effectiveness and efficiency of this method. The experimental results illustrate that FGF is robust and effective on face and object data sets in robot applications.

Originality/value

The authors first utilize Jaccard similarity to construct a graph of RGB and depth images, which indicates the pairwise similarity of the images. Then, the fused feature of the RGB and depth images is computed from the Extended Jaccard Graph using a word embedding method. FGF achieves better performance and efficiency with the RGB-D sensor for robots.

Details

Assembly Automation, vol. 37 no. 3
Type: Research Article
ISSN: 0144-5154

Keywords

Article
Publication date: 12 November 2019

John Oyekan, Axel Fischer, Windo Hutabarat, Christopher Turner and Ashutosh Tiwari

The purpose of this paper is to explore the role that computer vision can play within new industrial paradigms such as Industry 4.0 and in particular to support production line…

Abstract

Purpose

The purpose of this paper is to explore the role that computer vision can play within new industrial paradigms such as Industry 4.0 and in particular to support production line improvements to achieve flexible manufacturing. As Industry 4.0 requires “big data”, it is accepted that computer vision could be one of the tools for its capture and efficient analysis. RGB-D data gathered from real-time machine vision systems such as Kinect® can be processed using computer vision techniques.

Design/methodology/approach

This research exploits RGB-D cameras such as Kinect® to investigate the feasibility of using computer vision techniques to track the progress of a manual assembly task on a production line. Several such tracking techniques are presented. The use of CAD model files to track the manufacturing tasks is also outlined.

Findings

This research has found that RGB-D cameras can be suitable for object recognition within an industrial environment if a number of constraints are considered or different devices/techniques are combined. Furthermore, through the use of an HMM-inspired state-based workflow, the algorithm presented in this paper is computationally tractable.
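
As a rough illustration of an HMM-style state-based workflow, the sketch below decodes a left-to-right sequence of assembly states from noisy per-frame observations with the Viterbi algorithm; the states, transition and emission probabilities are illustrative assumptions, not values from this research.

```python
import numpy as np

def viterbi(obs, trans, emit, start):
    """obs: observation indices per frame; trans: (S, S) and emit: (S, O) probability
    matrices; start: (S,) initial distribution. Returns a most likely state path."""
    S, T = len(start), len(obs)
    logp = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    logp[0] = np.log(start + 1e-12) + np.log(emit[:, obs[0]] + 1e-12)
    for t in range(1, T):
        for s in range(S):
            scores = logp[t - 1] + np.log(trans[:, s] + 1e-12)
            back[t, s] = int(np.argmax(scores))
            logp[t, s] = scores[back[t, s]] + np.log(emit[s, obs[t]] + 1e-12)
    path = [int(np.argmax(logp[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Hypothetical 3-step assembly: states advance 0 -> 1 -> 2; observations are detected step ids.
trans = np.array([[0.8, 0.2, 0.0], [0.0, 0.8, 0.2], [0.0, 0.0, 1.0]])
emit = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])
obs = [0, 0, 1, 0, 1, 1, 2, 2]  # detected step ids per frame; frame 3 is a noisy detection
print(viterbi(obs, trans, emit, np.array([1.0, 0.0, 0.0])))
```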

Originality/value

Processing of data from robust and cheap real-time machine vision systems could bring increased understanding of production line features. In addition, new techniques that enable the progress tracking of manual assembly sequences may be defined through the further analysis of such visual data. The approaches explored within this paper make a contribution to the utilisation of visual information “big data” sets for more efficient and automated production.

Details

Assembly Automation, vol. 40 no. 6
Type: Research Article
ISSN: 0144-5154

Keywords

Article
Publication date: 11 January 2023

Yongyao Li, Guanyu Ding, Chao Li, Sen Wang, Qinglei Zhao and Qi Song

This paper presents a comprehensive pallet-picking approach for forklift robots, comprising a pallet identification and localization algorithm (PILA) to detect and locate the…

Abstract

Purpose

This paper presents a comprehensive pallet-picking approach for forklift robots, comprising a pallet identification and localization algorithm (PILA) to detect and locate the pallet and a vehicle alignment algorithm (VAA) to align the vehicle fork arms with the targeted pallet.

Design/methodology/approach

In contrast to purely vision-based methods or point cloud data strategies, a low-cost RGB-D camera is utilized, so PILA exploits both RGB and depth data to quickly and precisely recognize and localize the pallet. The developed method achieves a high identification rate from the RGB images and more precise 3D localization than a depth camera alone. A deep neural network (DNN) is applied to detect and locate the pallet in the RGB images. Specifically, the point cloud data are correlated with the labeled region of interest (RoI) in the RGB images, and the pallet's front-face plane is extracted from the point cloud. Furthermore, PILA introduces a universal geometrical rule that identifies the pallet's center as a “T-shape” without depending on specific pallet types. Finally, VAA is proposed to implement the vehicle approach and pallet-picking operations as a proof of concept to test PILA’s performance.
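
As a minimal illustration of the front-face plane extraction step, the sketch below fits a plane to a point cloud with a basic RANSAC loop; the DNN detection, RoI correlation and T-shape rule are not reproduced, and the thresholds and synthetic data are illustrative assumptions.

```python
import numpy as np

def ransac_plane(points: np.ndarray, n_iters: int = 200, thresh: float = 0.01):
    """Fit a plane (unit normal n, offset d with n·p + d = 0) to an (N, 3) cloud by RANSAC."""
    rng = np.random.default_rng(0)
    best_inliers, best_model = np.array([], dtype=int), None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate sample
        normal /= norm
        d = -np.dot(normal, p0)
        dist = np.abs(points @ normal + d)
        inliers = np.flatnonzero(dist < thresh)
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

# Hypothetical usage: a vertical plane at x = 2 m plus random outliers.
rng = np.random.default_rng(2)
plane = np.column_stack([np.full(500, 2.0), rng.uniform(-0.5, 0.5, 500), rng.uniform(0, 1, 500)])
noise = rng.uniform(0, 3, (100, 3))
model, inliers = ransac_plane(np.vstack([plane, noise]))
print(model[0], len(inliers))  # normal close to ±[1, 0, 0], roughly 500 inliers
```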

Findings

Experimentally, the orientation angle and center location of two kinds of pallets are investigated without any artificial markers. The results show that the pallet can be located with a three-dimensional localization accuracy of 1 cm and an angular resolution of 0.4 degrees at a distance of 3 m with the vehicle control algorithm.

Research limitations/implications

PILA’s performance is limited by the current depth camera’s range (≤3 m), and this is expected to be improved by using a better depth measurement device in the future.

Originality/value

The results demonstrate that the pallets can be located with an accuracy of 1 cm along the x, y and z directions and an angular resolution of 0.4 degrees at a distance of 3 m within 700 ms.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 2
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 3 February 2020

Hui Zhang, Jinwen Tan, Chenyang Zhao, Zhicong Liang, Li Liu, Hang Zhong and Shaosheng Fan

This paper aims to address the trade-off between detection efficiency and performance when grasping commodities rapidly. A fast detection and grasping method based on improved faster R-CNN…

Abstract

Purpose

This paper aims to address the trade-off between detection efficiency and performance when grasping commodities rapidly. A fast detection and grasping method based on improved faster R-CNN is proposed and applied to a mobile manipulator to grasp commodities on a shelf.

Design/methodology/approach

To reduce the time cost of the algorithm, a new neural network structure based on faster R-CNN is designed. To select anchor boxes reasonably according to the data set, a data-set-adaptive algorithm for choosing anchor boxes is presented; multiple models of ten types of daily objects are trained to validate the improved faster R-CNN. The proposed algorithm is deployed on the self-developed mobile manipulator, and three experiments are designed to evaluate the proposed method.
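
The paper's data-set-adaptive anchor selection algorithm is not detailed in the abstract; as a hedged illustration of one common approach to the same problem, the sketch below clusters ground-truth box widths and heights with a simple Euclidean k-means (a simplified variant of the anchor clustering popularized by YOLOv2), using synthetic box data.

```python
import numpy as np

def kmeans_anchors(wh: np.ndarray, k: int = 5, n_iters: int = 50) -> np.ndarray:
    """wh: (N, 2) ground-truth box widths/heights; returns k anchor (w, h) pairs."""
    rng = np.random.default_rng(0)
    centers = wh[rng.choice(len(wh), k, replace=False)].astype(float)
    for _ in range(n_iters):
        dists = np.linalg.norm(wh[:, None, :] - centers[None, :, :], axis=2)  # (N, k)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = wh[labels == j].mean(axis=0)  # move center to cluster mean
    return centers

# Hypothetical usage: boxes (in pixels) drawn from two commodity size groups.
rng = np.random.default_rng(3)
boxes = np.vstack([rng.normal([60, 90], 10, (200, 2)), rng.normal([150, 150], 20, (200, 2))])
print(np.round(kmeans_anchors(boxes, k=2), 1))  # learned anchor (w, h) pairs
```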

Findings

The results indicate that the proposed method performs successfully on the mobile manipulator; it not only accomplishes detection effectively but also grasps the objects on the shelf successfully.

Originality/value

The proposed method improves the efficiency of faster R-CNN while maintaining excellent performance and meeting the requirement of real-time detection, and the self-developed mobile manipulator can accomplish the task of grasping objects.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 2
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 16 October 2018

Lin Feng, Yang Liu, Zan Li, Meng Zhang, Feilong Wang and Shenglan Liu

The purpose of this paper is to promote the efficiency of RGB-depth (RGB-D)-based object recognition in robot vision and find discriminative binary representations for RGB-D based…

Abstract

Purpose

The purpose of this paper is to promote the efficiency of RGB-depth (RGB-D)-based object recognition in robot vision and find discriminative binary representations for RGB-D based objects.

Design/methodology/approach

To promote the efficiency of RGB-D-based object recognition in robot vision, this paper applies hashing methods to the task, utilizing the approximate nearest neighbors (ANN) to vote for the final result. To improve object recognition accuracy in robot vision, an “Encoding+Selection” binary representation generation pattern is proposed, which can generate more discriminative binary representations for RGB-D-based objects. Moreover, label information is utilized to enhance the discrimination of each bit, which ensures that the most discriminative bits can be selected.
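
As a minimal illustration of the ANN voting step, the sketch below queries a database of binary codes by Hamming distance and lets the nearest neighbours vote on the label; how the binary codes are learned (the “Encoding+Selection” pattern) is the paper's contribution and is not reproduced, so random prototype-based codes stand in for learned ones.

```python
import numpy as np
from collections import Counter

def hamming_knn_vote(query, db_codes, db_labels, k=5):
    """query: (B,) binary code; db_codes: (N, B); returns the majority label of the k nearest codes."""
    dists = np.count_nonzero(db_codes != query, axis=1)  # Hamming distance to every database code
    nearest = np.argsort(dists)[:k]
    return Counter(db_labels[nearest].tolist()).most_common(1)[0][0]

# Hypothetical usage: 64-bit codes for two object classes, built from noisy class prototypes.
rng = np.random.default_rng(4)
proto = rng.integers(0, 2, (2, 64))

def perturb(code, n_flips=8):
    out = code.copy()
    out[rng.choice(out.size, n_flips, replace=False)] ^= 1  # flip a few random bits
    return out

db_codes = np.stack([perturb(proto[i // 50]) for i in range(100)])
db_labels = np.array([0] * 50 + [1] * 50)
print(hamming_knn_vote(perturb(proto[0]), db_codes, db_labels))  # expected: 0
```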

Findings

The experimental results validate that the ANN-based voting recognition method is more efficient and effective than traditional recognition methods for RGB-D-based object recognition in robot vision. Moreover, the effectiveness of the proposed bit selection method is also validated.

Originality/value

Hashing learning is applied to RGB-D-based object recognition, which significantly improves the recognition efficiency for robot vision while maintaining high recognition accuracy. In addition, the “Encoding+Selection” pattern is utilized in the process of binary encoding, which effectively enhances the discrimination of binary representations for objects.

Details

Assembly Automation, vol. 39 no. 1
Type: Research Article
ISSN: 0144-5154

Keywords

1 – 10 of 15