Search results

1 – 10 of over 5000
Article
Publication date: 7 September 2015

Qing Wang, Yadong Dou, Jiangxiong Li and Yinglin Ke

Abstract

Purpose

The purpose of this paper is to design a reasonable joining path and achieve assembly automation for multiple arc-shaped panels. A fuselage panel is primarily composed of skins, stringers, frames and clips. Both inserted and nested structures are adopted in the panels to improve the strength and hermeticity of the fuselage. Due to the complex structures and relationships, it is a challenge to coordinate the arc-shaped panels in the assembly process.

Design/methodology/approach

A motion sequence model which achieves arc approximation based on the relative motion of multiple panels is established. The initial position of the panels is obtained by decomposing the computer-aided design model of the panels. Two translation rules, i.e. progressively decreasing translation and limited deformation translation, are applied to determine the moving path of the panels. If a panel is not at its path node, a search algorithm is used to find the nearest path node. Finally, the key algorithms are implemented in an integration system to promote joining automation of multiple panels.
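
A minimal sketch (not the authors' implementation) of two steps the abstract mentions: snapping an off-path panel to its nearest stored path node, and moving it in with progressively decreasing translation increments. The node coordinates, shrink ratio and tolerance below are hypothetical.

```python
# Illustrative sketch only: nearest-path-node search plus progressively
# decreasing translation toward a joining position. Values are hypothetical.
import numpy as np

def nearest_path_node(position, path_nodes):
    """Return the index of the stored path node closest to the panel's current position."""
    dists = np.linalg.norm(path_nodes - position, axis=1)
    return int(np.argmin(dists))

def decreasing_translation_steps(start, target, ratio=0.5, tol=1e-3):
    """Generate translation increments that shrink geometrically as the panel approaches the target."""
    position = np.asarray(start, dtype=float)
    target = np.asarray(target, dtype=float)
    steps = []
    while np.linalg.norm(target - position) > tol:
        step = ratio * (target - position)   # each move covers a fixed fraction of the remaining gap
        position = position + step
        steps.append(position.copy())
    steps.append(target.copy())
    return steps

# Example: a panel off its path is first snapped to the nearest node, then moved in.
path = np.array([[0.0, 0.0, 0.0], [0.5, 0.2, 0.0], [1.0, 0.0, 0.0]])
panel_pos = np.array([0.6, 0.25, 0.05])
idx = nearest_path_node(panel_pos, path)
plan = decreasing_translation_steps(panel_pos, path[idx])
```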

Findings

The zigzag path is effective for joining multiple panels with complex mating relationships. Automating the join–separate–rejoin operations saves time and improves safety. The proposed method is demonstrated in practical engineering and achieves good efficiency.

Practical implications

This method has been used in a middle-fuselage assembly project. The practical results show that the zigzag path is convenient to store and reuse, and that the synchronous movements of multiple curved panels are precisely realized. Additionally, the posture accuracy of the panels is significantly improved and the operating time is reduced considerably.

Originality/value

This paper gives a solution including path planning and process integration to solve the joining problem of multiple panels. The research will promote the automation of fuselage assembly.

Details

Assembly Automation, vol. 35 no. 4
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 25 January 2023

Runqing Miao, Qingxuan Jia and Fuchun Sun

Abstract

Purpose

Autonomous robots must be able to understand long-term manipulation tasks described by humans and perform task analysis and planning based on the current environment in a variety of scenes, such as daily manipulation and industrial assembly. However, both classical task and motion planning algorithms and single data-driven learning planning methods have limitations in practicability, generalization and interpretability. The purpose of this work is to overcome the limitations of the above methods and achieve generalized and explicable long-term robot manipulation task planning.

Design/methodology/approach

The authors propose a planning method for long-term manipulation tasks that combines the advantages of existing methods and the prior cognition brought by the knowledge graph. This method integrates visual semantic understanding based on scene graph generation, regression planning based on deep learning and multi-level representation and updating based on a knowledge base.
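
As a rough illustration of the regression-planning ingredient, the sketch below chains backward from a goal predicate through a tiny hand-written action knowledge base; the paper's actual planner is learned (deep regression planning) and coupled to scene-graph perception, neither of which is reproduced. The predicates and actions are hypothetical.

```python
# Minimal regression (backward-chaining) planning sketch over a hand-written
# action knowledge base; predicates and actions below are hypothetical.

ACTIONS = {
    "pick(cup)":  {"pre": {"clear(cup)"},   "add": {"holding(cup)"},  "del": {"clear(cup)"}},
    "place(cup)": {"pre": {"holding(cup)"}, "add": {"on(cup,table)"}, "del": {"holding(cup)"}},
}

def regress(goal, state, plan=None, depth=5):
    """Search backward from the goal, choosing actions whose effects achieve open subgoals."""
    plan = plan or []
    if goal <= state:                      # all goal literals already hold
        return list(reversed(plan))
    if depth == 0:
        return None
    for name, act in ACTIONS.items():
        if act["add"] & goal:              # action achieves part of the goal
            new_goal = (goal - act["add"]) | act["pre"]
            result = regress(new_goal, state, plan + [name], depth - 1)
            if result is not None:
                return result
    return None

print(regress({"on(cup,table)"}, {"clear(cup)"}))   # -> ['pick(cup)', 'place(cup)']
```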

Findings

The authors evaluated the capability of this method in a kitchen cooking task and tabletop arrangement task in simulation and real-world environments. Experimental results show that the proposed method has a significantly improved success rate compared with the baselines and has excellent generalization performance for new tasks.

Originality/value

The authors demonstrate that their method is scalable to long-term manipulation tasks with varying complexity and visibility. This advantage allows their method to perform better in new manipulation tasks. The planning method proposed in this work is meaningful for present robot manipulation tasks and offers an intuitive reference for similar high-level robot planning.

Details

Robotic Intelligence and Automation, vol. 43 no. 1
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 1 September 2001

Satyandra K. Gupta, Christiaan J.J. Paredis, Rajarishi Sinha and Peter F. Brown

Abstract

Because of the intense competition in the current global economy, a company must conceive, design, and manufacture new products quickly and inexpensively. The design cycle can be shortened through simulation. Rapid technical advances in many different areas of scientific computing provide the enabling technologies for creating a comprehensive simulation and visualization environment for assembly design and planning. An intelligent environment has been built in which simple simulation tools can be composed into complex simulations for detecting potential assembly problems. The goal in this research is to develop high fidelity assembly simulation and visualization tools that can detect assembly related problems without going through physical mock‐ups. In addition, these tools can be used to create easy‐to‐visualize instructions for performing assembly and service operations.

Details

Assembly Automation, vol. 21 no. 3
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 7 July 2023

Wuyan Liang and Xiaolong Xu

Abstract

Purpose

In the COVID-19 era, sign language (SL) translation has gained attention in online learning, where it evaluates the physical gestures of each student and bridges the communication gap between people with speech impairments and hearing people. The purpose of this paper is to align the SL sequence with the natural language sequence while achieving high translation performance.

Design/methodology/approach

SL can be characterized as joint/bone location information in two-dimensional space over time, forming skeleton sequences. To encode the joints, bones and their motion information, we propose a multistream hierarchy network (MHN), along with a vocab prediction network (VPN) and a joint network (JN), based on the recurrent neural network transducer. The JN is used to concatenate the sequences encoded by the MHN and VPN and to learn their sequence alignments.
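
The sketch below only illustrates how the three input streams named above (joints, bones and their motion) can be derived from a 2D skeleton sequence; the MHN, VPN and JN themselves are not reproduced, and the toy skeleton connectivity is hypothetical.

```python
# Sketch of building joint, bone and motion streams from a 2D skeleton
# sequence; the networks described in the abstract are not reproduced.
import numpy as np

# joints: (T frames, J joints, 2 coordinates)
T, J = 16, 5
joints = np.random.rand(T, J, 2).astype(np.float32)

# Each bone links a child joint to its parent joint (hypothetical hierarchy).
bone_pairs = [(1, 0), (2, 1), (3, 1), (4, 1)]
bones = np.stack([joints[:, c] - joints[:, p] for c, p in bone_pairs], axis=1)  # (T, 4, 2)

# Motion streams: per-frame differences, padded so the time length is preserved.
joint_motion = np.diff(joints, axis=0, prepend=joints[:1])
bone_motion = np.diff(bones, axis=0, prepend=bones[:1])

print(joints.shape, bones.shape, joint_motion.shape, bone_motion.shape)
```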

Findings

We verify the effectiveness of the proposed approach with experimental results on three large-scale datasets, which show translation accuracies of 94.96, 54.52 and 92.88 per cent, with inference 18 and 1.7 times faster than the listen-attend-spell network (LAS) and the visual hierarchy to lexical sequence network (H2SNet), respectively.

Originality/value

In this paper, we propose a novel framework that can fuse multimodal input (i.e. joint, bone and their motion streams) and align the input streams with natural language. Moreover, the framework benefits from the distinct properties of the MHN, VPN and JN. Experimental results on the three datasets demonstrate that our approaches outperform the state-of-the-art methods in terms of translation accuracy and speed.

Details

Data Technologies and Applications, vol. 58 no. 2
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 21 August 2009

Mathew Price and Garry Morrison

Abstract

Purpose

The purpose of this paper is to present an image-based method for estimating the 3D motion of rigid particles from high-speed video footage (HSV). The computed motion can be used either to generate quantitative feedback for a process or to validate the accuracy of discrete element method (DEM) simulation models.

Design/methodology/approach

Experiments consist of a diamond impacting an angled plate and video is captured at 4,000 frames per second. Simple image analysis is used to track the particle in each frame and to extract its 2D silhouette boundary. Using an approximate 3D model of the particle generated from a multi‐camera setup, a pose estimation scheme based on silhouette consistency is used in conjunction with a rigid body model to compute the 3D motion.
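
A simplified sketch of the silhouette-consistency idea: candidate rotations of a coarse particle model are projected orthographically, rasterized and compared with the observed silhouette by intersection-over-union. The one-axis search, grid size and orthographic projection are simplifications of the paper's full pose estimation scheme.

```python
# Illustrative silhouette-consistency scoring; not the authors' implementation.
import numpy as np

def rasterize(points_2d, size=64):
    """Turn projected 2D points (in [-1, 1]) into a binary occupancy mask."""
    mask = np.zeros((size, size), dtype=bool)
    idx = np.clip(((points_2d + 1.0) * 0.5 * (size - 1)).astype(int), 0, size - 1)
    mask[idx[:, 1], idx[:, 0]] = True
    return mask

def iou(a, b):
    """Intersection-over-union of two binary masks."""
    return (a & b).sum() / max((a | b).sum(), 1)

def best_yaw(model_points, observed_mask, n_candidates=180):
    """Pick the rotation about z whose projected silhouette best matches the observation."""
    best = (-1.0, 0.0)
    for theta in np.linspace(0, 2 * np.pi, n_candidates, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
        projected = (model_points @ R.T)[:, :2]       # orthographic projection
        score = iou(rasterize(projected), observed_mask)
        if score > best[0]:
            best = (score, theta)
    return best
```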

Findings

Under reasonable conditions, the method can reliably estimate the linear and angular motion of the particle to within 1 per cent of their true values.

Practical implications

As an example application, we demonstrate how the method can be used to validate DEM simulations of simple impact experiments captured with HSV, providing valuable insight towards further development. In particular, we investigate the effects of shape representation through sphere‐clumping and the applicability of different contact models.

Originality/value

The novelty of our method is its ability to accurately compute the motion associated with a real world interaction, such as an impact, which provides numerical ground truth at an individual particle level. While similar schemes have been attempted with ideal particles (e.g. spheres), the resulting models do not naturally extend to realistic particle shapes. Since our method can track real particles, real‐world processes can be better quantified.

Details

Engineering Computations, vol. 26 no. 6
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 29 September 2023

Yue Qiao, Wang Wei, Yunxiang Li, Shengzui Xu, Lang Wei, Xu Hao and Re Xia

Abstract

Purpose

The purpose of this paper is to introduce a motion control method for WFF-AmphiRobot, which can effectively realize the flexible motion of the robot on land, underwater and in the transition zone between land and water.

Design/methodology/approach

Based on the dynamics model, the authors selected appropriate state variables to construct the state-space model of the robot and estimated the feedback state of the robot through maximum a posteriori probability estimation. The nonlinear model predictive controller of the robot is constructed by local linearization of the model to perform closed-loop control of the robot's overall motion. For the terminal-trajectory control problem, neural rhythmic movement theory from bionics is used to construct a robot central pattern generator (CPG) that generates the terminal trajectory in real time.
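
The Hopf oscillator is named below as the basis of the CPG; a minimal Euler-integrated Hopf oscillator is sketched here with hypothetical parameters. The authors' modified flipper fluctuation equation and the model predictive controller are not reproduced.

```python
# Minimal Hopf-oscillator sketch of the kind used as a CPG basis; parameters
# (mu, omega, dt) are hypothetical.
import numpy as np

def hopf_cpg(mu=1.0, omega=2.0 * np.pi, dt=1e-3, steps=5000):
    """Integrate dx/dt = (mu - r^2) x - omega y, dy/dt = (mu - r^2) y + omega x."""
    x, y = 0.1, 0.0
    trajectory = []
    for _ in range(steps):
        r2 = x * x + y * y
        dx = (mu - r2) * x - omega * y
        dy = (mu - r2) * y + omega * x
        x, y = x + dt * dx, y + dt * dy
        trajectory.append(x)                 # x would drive the flipper oscillation angle
    return np.array(trajectory)

signal = hopf_cpg()   # converges to a stable limit cycle of amplitude sqrt(mu)
```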

Findings

In this paper, the motion state of WFF-AmphiRobot is estimated, and a model-based overall motion controller for the robot and an end-effector controller based on neural rhythm control are constructed. The effectiveness of the controller and motion control algorithm is verified by simulation and by physical prototype motion experiments on land and underwater, and the robot completes the desired behavior as intended.

Originality/value

The paper designs a controller for WFF-AmphiRobot. First, when constructing the robot state estimator, the robot dynamics model is introduced as the a priori estimation model, and error compensation of the a priori model is performed through maximum a posteriori probability estimation, which improves the accuracy of the state estimator. Second, to match the underwater oscillation characteristics of the flipper, the Hopf oscillator is used as the basis, and the flipper fluctuation equation is modified and improved so that the CPG signal is adapted to the flipper's oscillation demand. The controller effectively keeps the position error and heading-angle error within the desired range during the movement of the WFF-AmphiRobot.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 23 August 2011

Cailing Wang, Chunxia Zhao and Jingyu Yang

Abstract

Purpose

Positioning is a key task in most field robotics applications but can be very challenging in GPS-denied or high-slip environments. The purpose of this paper is to describe a visual odometry strategy using only one camera on country roads.

Design/methodology/approach

This monocular odometry system uses as input only the images provided by a single camera mounted on the roof of the vehicle, and the framework is composed of three main parts: image motion estimation, ego-motion computation and visual odometry. The image motion is estimated from a hyper-complex wavelet phase-derived optical flow field. The ego-motion of the vehicle is computed by a blocked RANdom SAmple Consensus (RANSAC) algorithm and a maximum likelihood estimator based on a 4-degrees-of-freedom motion model. These instantaneous ego-motion measurements are then used to update the vehicle trajectory according to a dead-reckoning model and an unscented Kalman filter.
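
A minimal planar dead-reckoning sketch of the trajectory-update step, fed by per-frame ego-motion increments; the blocked RANSAC estimation, the 4-DOF motion model and the unscented Kalman filter are omitted, and the measurement values are hypothetical.

```python
# Planar dead reckoning from per-frame ego-motion estimates
# (forward translation and yaw change); values are hypothetical.
import numpy as np

def dead_reckon(measurements, x0=0.0, y0=0.0, heading0=0.0):
    """measurements: iterable of (delta_distance, delta_heading) per frame."""
    x, y, heading = x0, y0, heading0
    trajectory = [(x, y, heading)]
    for d, dpsi in measurements:
        heading += dpsi
        x += d * np.cos(heading)
        y += d * np.sin(heading)
        trajectory.append((x, y, heading))
    return np.array(trajectory)

# Example: a gentle left curve.
traj = dead_reckon([(0.5, 0.01)] * 100)
```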

Findings

The authors' proposed framework and algorithms are validated on videos from a real automotive platform. Furthermore, the recovered trajectory is superimposed onto a digital map, and the localization results from this method are compared to the ground truth measured with a GPS/INS joint system. These experimental results indicate that the framework and the algorithms are effective.

Originality/value

An effective framework and algorithms for visual odometry using only one camera on country roads are introduced in this paper.

Details

Industrial Robot: An International Journal, vol. 38 no. 5
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 18 June 2020

Shiqiu Gong, Jing Zhao, Ziqiang Zhang and Biyun Xie

Abstract

Purpose

This paper aims to introduce the human arm movement primitive (HAMP) to express and plan the motions of anthropomorphic arms. A task planning method is established for minimum task cost, and a novel human-like motion planning method based on HAMPs is proposed to help humans better understand and plan the motions of anthropomorphic arms.

Design/methodology/approach

The HAMPs are extracted based on the structure and motion expression of the human arm. A method to slice the complex tasks into simple subtasks and sort subtasks is proposed. Then, a novel human-like motion planning method is built through the selection, sequencing and quantification of HAMPs. Finally, the HAMPs are mapped to the traditional joint angles of a robot by an analytical inverse kinematics method to control the anthropomorphic arms.
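
As a hedged stand-in for the final step, mapping primitives to joint angles through analytical inverse kinematics, the sketch below solves the textbook two-link planar case; the paper's anthropomorphic arm has more degrees of freedom and its HAMP formulation is not reproduced. Link lengths are hypothetical.

```python
# Standard two-link planar analytical inverse kinematics, as an illustration
# of mapping a desired end position to joint angles; not the paper's arm.
import numpy as np

def two_link_ik(x, y, l1=0.3, l2=0.25, elbow_up=True):
    """Return shoulder and elbow angles reaching (x, y), or None if unreachable."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        return None                                 # target outside the workspace
    s2 = np.sqrt(1.0 - c2 * c2) * (1.0 if elbow_up else -1.0)
    theta2 = np.arctan2(s2, c2)
    theta1 = np.arctan2(y, x) - np.arctan2(l2 * s2, l1 + l2 * c2)
    return theta1, theta2

print(two_link_ik(0.4, 0.2))
```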

Findings

To explore the motion laws of the human arm, motion capture experiments on 12 subjects are performed. The results show that the motion laws of the human arm are reflected in the selection, sequencing and quantification of HAMPs. These motion laws can facilitate the human-like motion planning of anthropomorphic arms.

Originality/value

This study presents the HAMPs and a method for selecting, sequencing and quantifying them in human-like style, which leads to a new motion planning method for the anthropomorphic arms. A similar methodology is suitable for robots with anthropomorphic arms such as service robots, upper extremity exoskeleton robots and humanoid robots.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 5
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 20 December 2017

Weiwei Wan, Kensuke Harada and Kazuyuki Nagata

Abstract

Purpose

The purpose of this paper is to develop a planner for finding an optimal assembly sequence for robots to assemble objects. Each manipulated object in the optimal sequence is stable during assembly, easy to grasp and robust to motion uncertainty.

Design/methodology/approach

The input to the planner is the mesh models of the objects, the relative poses between the objects in the assembly and the final pose of the assembly. The output is an optimal assembly sequence, namely the order in which the objects should be assembled, the directions from which they should be dropped and candidate grasps of each object. The proposed planner finds the optimal solution by automatically permuting, evaluating and searching the possible assembly sequences, considering stability, graspability and assemblability qualities.
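
A minimal sketch of the "permute, evaluate and search" idea: every assembly order of a small part set is scored by placeholder stability, graspability and assemblability terms. The part names and scoring functions are hypothetical stand-ins, not the paper's quality measures.

```python
# Brute-force sketch of permuting and scoring assembly sequences with
# placeholder quality terms; the authors' measures and pruning are omitted.
from itertools import permutations

def stability(sequence):       return 1.0   # placeholder: stability of partial assemblies
def graspability(sequence):    return 1.0   # placeholder: how easily each part is grasped
def assemblability(sequence):  return 1.0   # placeholder: collision-free insertion directions

def best_sequence(parts):
    """Exhaustively evaluate every assembly order and return the highest-scoring one."""
    best_score, best_seq = float("-inf"), None
    for seq in permutations(parts):
        score = stability(seq) + graspability(seq) + assemblability(seq)
        if score > best_score:
            best_score, best_seq = score, seq
    return best_seq, best_score

print(best_sequence(["base", "bracket", "cover"]))
```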

Findings

The proposed planner can plan an optimal sequence to guide robots to perform assembly using translational motion. The sequence provides initial and goal configurations to motion planning algorithms and is ready to be used by robots. The usefulness of the proposed method is verified by both simulation and real-world executions.

Originality/value

The paper proposes an assembly planner that can find an optimal assembly sequence automatically, without skilled human technicians teaching the assembly orders and directions. The planner is expected to advance teaching-less robotic manufacturing.

Details

Assembly Automation, vol. 38 no. 2
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 6 May 2021

Zhe Wang, Xisheng Li, Xiaojuan Zhang, Yanru Bai and Chengcai Zheng

Abstract

Purpose

This paper addresses how to model the blind image deblurring that arises when a camera undergoes ego-motion while observing a static, close scene. In particular, it details how the blurry image can be restored using a sequence of linear point spread function (PSF) models derived from the camera's accurate 6-degree-of-freedom (DOF) path during the long exposure time.

Design/methodology/approach

There are two existing techniques, namely, estimation of the PSF and blind image deconvolution. Based on online, short-period inertial measurement unit (IMU) self-calibration, the motion path is discretized into a sequence of uniform-speed 3-DOF rectilinear motions, which combine with a 3-DOF rotational motion to form a discrete 6-DOF camera path. The PSFs are evaluated along this discrete path and then combined with the blurry image for restoration through deconvolution.
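
A minimal sketch of one ingredient: accumulating a discretized (hypothetical) image-plane motion path into a normalized blur kernel (PSF). The paper's depth- and rotation-dependent PSFs derived from the full 6-DOF IMU path, and the subsequent deconvolution, are not reproduced.

```python
# Build a blur kernel (PSF) from a discretized image-plane motion path;
# the path values below are hypothetical.
import numpy as np

def psf_from_path(path_pixels, size=15):
    """path_pixels: (N, 2) image-plane displacements in pixels over the exposure."""
    kernel = np.zeros((size, size))
    center = size // 2
    for dx, dy in path_pixels:
        ix = int(round(center + dx))
        iy = int(round(center + dy))
        if 0 <= ix < size and 0 <= iy < size:
            kernel[iy, ix] += 1.0            # each path sample contributes equal exposure
    return kernel / max(kernel.sum(), 1e-12)

# Example: a short diagonal camera shake discretized at uniform speed.
t = np.linspace(0, 1, 50)
path = np.stack([4 * t, 2 * t], axis=1)
psf = psf_from_path(path)
```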

Findings

This paper describes the construction of a hardware attachment composed of a consumer camera, an inexpensive IMU and a 3-DOF motion mechanism, together with experimental results demonstrating its overall effectiveness.

Originality/value

First, the paper proposes that a high-precision 6-DOF motion platform periodically adjust the speed of a three-axis rotational motion and a three-axis rectilinear motion over a short time to compensate for the bias of the gyroscope and the accelerometer. Second, this paper establishes a model of 6-DOF motion with emphasis on rotational motion, translational motion and scene-depth motion. Third, this paper presents a novel discrete-path model in which the motion during the long exposure time is discretized at uniform speed in order to estimate a sequence of PSFs.

Details

Sensor Review, vol. 41 no. 2
Type: Research Article
ISSN: 0260-2288
