Search results

1 – 10 of over 7000
Article
Publication date: 18 April 2017

Ter-Feng Wu, Pu-Sheng Tsai, Nien-Tsu Hu and Jen-Yang Chen

Visually impaired people have long lived in the dark. Unable to perceive the colorful world visually, they rely on hearing, touch and smell to sense the space…

Abstract

Purpose

Visually impaired people have long lived in the dark. Unable to perceive the colorful world visually, they rely on hearing, touch and smell to sense the space they live in. Lacking visual information, they face challenges in external environments and barrier spaces, and the danger they face is hundreds of times greater than that faced by sighted people. During outdoor activities in particular, they can only explore their surroundings with the aid of hearing and a crutch and then guess where they are from a vague impression. To let the blind take each step with confidence, this paper proposes attaching the electronic tags of a radio-frequency identification (RFID) system to the back of guide bricks.

Design/methodology/approach

Thus, the RFID reader, ultrasonic sensor and voice chip are mounted on a wheeled mobile robot that links the front end to the crutch. Once the blind person nears a guide brick, the RFID reader reads the message on its tag and the voice broadcast system announces the direction to walk and information about the surrounding environment. In addition, a CMOS image sensor mounted on the wheeled mobile robot detects the black markings on the guide bricks and guides the blind person to walk forward or to turn between two markings. Finally, a lithium-battery charging control unit is installed on the wheeled mobile robot, in which an ATtiny25 microcontroller controls battery charging and discharging and monitors the remaining battery capacity.
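
As a purely illustrative sketch of the announce-on-tag behaviour described above (not code from the article), the Python loop below polls an RFID reader and an ultrasonic sensor and sends announcements to a voice chip. The rfid_reader, sonar and speaker objects, their methods and the tag messages are all hypothetical stand-ins.

```python
import time

# Hypothetical tag-ID -> announcement mapping written onto the guide-brick tags.
TAG_MESSAGES = {
    "TAG-0001": "Crossing ahead. Turn left for the bus stop.",
    "TAG-0002": "Stairs in five metres. Keep to the right.",
}

def guidance_loop(rfid_reader, sonar, speaker,
                  obstacle_threshold_cm=80.0, poll_period_s=0.2):
    """Poll the RFID reader and ultrasonic sensor; announce guidance by voice.

    rfid_reader.read() -> tag id or None, sonar.distance_cm() -> float and
    speaker.say(text) are assumed driver interfaces, not real APIs.
    """
    last_tag = None
    while True:
        tag = rfid_reader.read()
        if tag and tag != last_tag:                    # announce each brick once
            speaker.say(TAG_MESSAGES.get(tag, "Unknown guide brick."))
            last_tag = tag
        if sonar.distance_cm() < obstacle_threshold_cm:
            speaker.say("Obstacle ahead, please stop.")
        time.sleep(poll_period_s)
```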

Findings

The developed system lets visually impaired people acquire environmental information, road guidance and nearby traffic information.

Originality/value

With rich messages about their spatial environment, the blind can have the confidence and courage to go outside.

Book part
Publication date: 14 October 2019

Sam R. Thangiah, Michael Karavias, Ryan Caldwell, Matthew Wherry, Jessica Seibert, Abdullah Wahbeh, Zachariah Miller and Alexander Gessinger

Purpose: This chapter describes the design and implementation, at the computer hardware and software level, of the Greggg robot. Greggg is a scalable high performance, low cost…

Abstract

Purpose: This chapter describes the design and implementation, at the computer hardware and software level, of the Greggg robot. Greggg is a scalable, high-performance, low-cost hospitality robot constructed from off-the-shelf parts. It has a robust architecture and acts as an on-campus tour guide, both indoors and outdoors. This research allows one to build a customized robot at low cost, under U.S. $2,000, for accomplishing the desired hospitality tasks, and to scale and expand the capability of the robot as required.

Practical Implications: The practical implication of the research is the capability to build and program a robot for hospitality tasks. Greggg is a customizable robot capable of giving on-campus tours both indoors and outdoors. In its current architecture, Greggg can be trained to be a museum docent, give directions to visitors on campus or at an airport, and be scaled up for other hospitality tasks using off-the-shelf components. Enhancing the robot by scaling and expanding it, and testing it with a range of increasingly difficult tasks using machine learning algorithms, is highly beneficial to advancing research on the use of robots in the hospitality sector. Greggg can also be used for Robot-as-a-Service (RaaS) applications.

Societal Implications: The economic implication of Greggg is the ease and low cost with which someone with minimal technology know-how can construct an autonomous hospitality-industry robot. This chapter details the hardware and software needed to build a low-cost, scalable and customizable autonomous robot for the hospitality industry without having to pay an exorbitant price.

Research/Limitations/Implications: This research allows one to build a customized hospitality robot for under U.S. $2,000. Given the cost of building the robot, there are limits on the hospitality tasks it can perform: it can navigate only on flat surfaces, has limited vision and speech processing capabilities, and has a battery life not exceeding an hour. Furthermore, it has no robotic manipulators or tactile processing capabilities.

Details

Robots, Artificial Intelligence, and Service Automation in Travel, Tourism and Hospitality
Type: Book
ISBN: 978-1-78756-688-0

Article
Publication date: 21 July 2020

Guanghui Liu, Qiang Li, Lijin Fang, Bing Han and Hualiang Zhang

The purpose of this paper is to propose a new joint friction model, which can accurately model the real friction, especially in cases with sudden changes in the motion direction…

Abstract

Purpose

The purpose of this paper is to propose a new joint friction model, which can accurately model real friction, especially in cases with sudden changes in the motion direction. Identification and sensor-less control algorithms are investigated to verify the validity of this model.

Design/methodology/approach

The proposed friction model is nonlinear and takes the angular displacement and angular velocity of the joint into account as a secondary compensation for identification. The authors design a pipeline that includes a manually designed excitation trajectory, a weighted least squares algorithm for identifying the dynamic parameters and a hand-guiding controller for the arm's direct teaching.
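
The abstract only names the weighted least squares step. Under the standard assumption that the inverse dynamics are linear in the parameters, tau = Phi(q, dq, ddq) theta, a minimal NumPy sketch of that step might look as follows; the regressor Phi, torques tau and weights below are synthetic placeholders, not data or code from the article.

```python
import numpy as np

def identify_dynamic_parameters(Phi, tau, weights):
    """Weighted least squares estimate of theta in tau = Phi @ theta."""
    W = np.diag(weights)                      # per-sample weights, e.g. 1/variance
    A = Phi.T @ W @ Phi
    b = Phi.T @ W @ tau
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return theta

# Synthetic check: 500 samples along an excitation trajectory, 12 base parameters.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(500, 12))
theta_true = rng.normal(size=12)
tau = Phi @ theta_true + 0.01 * rng.normal(size=500)
theta_hat = identify_dynamic_parameters(Phi, tau, np.ones(500))
print(np.allclose(theta_hat, theta_true, atol=0.01))   # True for this synthetic data
```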

Findings

Compared with the conventional joint friction model, the proposed method can effectively predict friction during dynamic motion of the arm. The friction parameters are then obtained quantitatively, and the proposed and conventional friction models are compared indirectly. The average root mean square prediction error over the six joints decreases by more than 54% with the proposed method. The arm's force control with full torque compensation using the estimated dynamic parameters is studied qualitatively, and it is concluded that a light-weight industrial robot can be dragged smoothly by hand guiding.

Practical implications

In the present study, a systematic pipeline is proposed for identifying and controlling an industrial arm. The whole procedure has been verified on a commercial six-DOF industrial arm. The conducted experiments show that the proposed approach is more accurate than conventional methods. A hand-guiding demonstration also illustrates that the proposed approach can provide the industrial arm with full torque compensation, an essential functionality widely required in industrial applications such as kinaesthetic teaching.

Originality/value

First, a new friction model is proposed. Based on this model, dynamic parameter identification is carried out to obtain a set of model parameters for an industrial arm. Finally, smooth hand-guiding control is demonstrated based on the proposed dynamic model.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 1 March 2003

Kong Suh Chin, Mani Maran Ratnam and Rajeswari Mandava

This paper describes how a force‐guided robot can be implemented in the automated assembly of mobile phones. A case study was carried out to investigate the assembly operations and…

Abstract

This paper describes how a force‐guided robot can be implemented in the automated assembly of mobile phones. A case study was carried out to investigate the assembly operations and strategies involved. A force‐guided robot was developed and implemented in a real environment. Proportional external force control within a hybrid framework was developed and implemented to perform compliant motion. Three basic force‐guided robotic skills are identified for performing assembly operations: stopping, alignment and sliding, in which the motions are guided by force feedback. These skills are combined and reprogrammed with fine motion planning to perform notch‐locked assembly. The system is optimized for high assembly speed while considering the constraints and limitations involved.
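
For illustration only, the sketch below shows how two of the named skills (stopping and sliding) could be written around a simple proportional external force law. The robot object, its read_force/move_velocity/wait methods, the sign convention for the contact normal and all gains are assumptions, not the authors' implementation.

```python
import numpy as np

def stopping_skill(robot, direction, speed=0.01, f_stop=5.0, dt=0.004):
    """Advance along a unit direction until the contact force reaches f_stop [N]."""
    direction = np.asarray(direction, dtype=float)
    while np.linalg.norm(robot.read_force()) < f_stop:
        robot.move_velocity(speed * direction)     # pure motion until contact
        robot.wait(dt)
    robot.move_velocity(np.zeros(3))               # contact detected: stop

def sliding_skill(robot, tangent, normal, f_desired=3.0, speed=0.01,
                  kp=0.002, dt=0.004, steps=1000):
    """Slide along `tangent` while a proportional law keeps the force along
    `normal` near f_desired (sign convention: positive = pressing into the surface)."""
    tangent = np.asarray(tangent, dtype=float)
    normal = np.asarray(normal, dtype=float)
    for _ in range(steps):
        f_n = float(np.dot(robot.read_force(), normal))          # measured normal force
        v = speed * tangent + kp * (f_desired - f_n) * normal    # hybrid motion/force command
        robot.move_velocity(v)
        robot.wait(dt)
    robot.move_velocity(np.zeros(3))
```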

Details

Assembly Automation, vol. 23 no. 1
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 21 March 2016

Alberto Brunete, Carlos Mateo, Ernesto Gambao, Miguel Hernando, Jukka Koskinen, Jari M Ahola, Tuomas Seppälä and Tapio Heikkila

This paper aims to propose a new technique for programming robotized machining tasks based on intuitive human–machine interaction. This will enable operators to create robot…

Abstract

Purpose

This paper aims to propose a new technique for programming robotized machining tasks based on intuitive human–machine interaction. This will enable operators to create robot programs for small-batch production in a fast and easy way, reducing the required time to accomplish the programming tasks.

Design/methodology/approach

This technique combines online walk-through path guidance, using an external force/torque sensor, with simple and intuitive visual programming based on programming by demonstration and symbolic task-level programming.
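
As a rough sketch of the walk-through guidance idea (not the authors' code), the function below lets an admittance law convert the operator's hand force, read from the external force/torque sensor, into tool motion while recording waypoints. The robot and ft_sensor interfaces, gains and thresholds are assumptions.

```python
import numpy as np

def walk_through_teach(robot, ft_sensor, admittance_gain=0.002, dt=0.01,
                       min_step=0.005, duration_s=30.0):
    """Record a path while the operator pushes the tool around by hand."""
    path = [np.asarray(robot.get_tcp_position(), dtype=float)]
    for _ in range(int(duration_s / dt)):
        force = np.asarray(ft_sensor.read_force(), dtype=float)  # operator's hand force [N]
        robot.move_velocity(admittance_gain * force)             # comply with the push
        pos = np.asarray(robot.get_tcp_position(), dtype=float)
        if np.linalg.norm(pos - path[-1]) >= min_step:           # keep a waypoint every 5 mm
            path.append(pos)
        robot.wait(dt)
    robot.move_velocity(np.zeros(3))
    return path        # waypoints later assembled into a machining program
```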

Findings

Thanks to this technique, the operator can easily program robots without learning every robot-specific language and can design new tasks for industrial robots based on manual guidance.

Originality/value

The main contribution of the paper is a new procedure for programming machining tasks based on manual guidance (the walk-through teaching method) and user-friendly visual programming. Until now, path acquisition and task programming were done in separate steps and on separate machines. The authors propose a procedure that uses a tablet as the only user interface both to acquire paths and to build the program that uses those paths for machining tasks.

Details

Industrial Robot: An International Journal, vol. 43 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 1 December 2020

Rui Lin, Haibo Huang and Maohai Li

This study aims to present an automated guided logistics robot designed mainly for pallet transportation. The logistics robot is compactly designed. It can pick up the pallet…

Abstract

Purpose

This study aims to present an automated guided logistics robot designed mainly for pallet transportation. The logistics robot is compactly designed: it can pick up a pallet precisely and automatically transport loads of up to 1,000 kg around the warehouse, it can move freely in all directions without turning the chassis, and it works without any additional infrastructure thanks to the laser navigation system proposed in this work.

Design/methodology/approach

The logistics robot must be able to move underneath the pallet and lift it accurately. It consists mainly of two sub-robots, like the two forks of a forklift, each with front and rear driving units. A new compact driving unit is designed as a key component to ensure access through the narrow entry openings of the pallet. Besides synchronous motion in all directions, the two sub-robots must also lift up and lay down the pallet synchronously. The robot uses a front laser to detect obstacles and to localize itself with the on-board navigation system, while a rear laser recognizes the pallet and guides the sub-robots to pick it up precisely within ±5 mm/1° in the x/yaw directions. A path planning algorithm under different constraints is proposed so that the logistics robot obeys the traffic rules of pallet logistics.
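
Purely for illustration, the ±5 mm/1° pick-up tolerance can be expressed as a simple gating check on the pose error reported by the rear laser before the sub-robots drive underneath the pallet; the function and its inputs below are hypothetical, not the paper's code.

```python
import math

X_TOL_M = 0.005                      # +/- 5 mm lateral tolerance
YAW_TOL_RAD = math.radians(1.0)      # +/- 1 degree yaw tolerance

def within_docking_tolerance(x_err_m, yaw_err_rad):
    """True if the pallet pose error from the rear laser is small enough
    for the sub-robots to drive underneath the pallet."""
    return abs(x_err_m) <= X_TOL_M and abs(yaw_err_rad) <= YAW_TOL_RAD

print(within_docking_tolerance(0.003, math.radians(0.4)))   # True
print(within_docking_tolerance(0.008, math.radians(0.4)))   # False: 8 mm off laterally
```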

Findings

Compared with traditional forklift vehicles, the logistics robot has a more compact structure and higher expandability. It can move omnidirectionally without turning the chassis and perform zero-radius turns by controlling the compact driving units synchronously. It can move collision-free under any pallet that has not been precisely placed, and it can plan paths back to the charging station and charge automatically, so it can work uninterruptedly (7 × 24 h). The proposed path planning algorithm avoids traffic congestion and improves the passability of narrow roads, improving logistics efficiency. The logistics robot is well suited to standardized logistics factories with small working spaces.

Originality/value

This is a new innovation for pallet transportation vehicle to improve logistics automation.

Details

Assembly Automation, vol. 41 no. 1
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 12 January 2010

X.Z. Chen and S.B. Chen

The recognition and positioning of the start welding position (SWP) is the first step and one of the key technologies to realize autonomous robot welding. The purpose of this paper is…

Abstract

Purpose

The recognition and positioning of the start welding position (SWP) is the first step and one of the key technologies to realize autonomous robot welding. The purpose of this paper is to describe a method developed to accomplish autonomous detection of and guidance to the SWP.

Design/methodology/approach

Images of the workpieces are captured by charge-coupled device (CCD) cameras over a relatively large range without additional lighting. Methods for recognizing the SWP are analyzed according to the given definition, and a two‐step method named “coarse‐to‐fine” is proposed to recognize the SWP accurately. The first step fits curve functions to the seam and workpiece boundaries; their intersection point is taken as the initial estimate of the SWP. The second step establishes a small window centred on this initial estimate, and the SWP is then located exactly by corner detection within the window. Both the abundant information of the original image and the structured information of the recognized image are used according to given rules, which takes full advantage of the image information and improves the recognition precision.
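
As an illustration of the coarse-to-fine idea (not the authors' code), the sketch below fits polynomial curves to sample points on the seam and on the workpiece boundary, intersects them for a coarse SWP estimate, and refines it with a corner detector inside a small window. It uses NumPy and OpenCV; the point sets, image, window size and polynomial degree are assumptions.

```python
import cv2
import numpy as np

def coarse_swp(seam_pts, boundary_pts, deg=2):
    """Coarse step: intersect polynomial fits of the seam and boundary points.

    seam_pts, boundary_pts: (N, 2) arrays of (x, y) image points; assumes at
    least one real intersection exists.
    """
    c_seam = np.polyfit(seam_pts[:, 0], seam_pts[:, 1], deg)
    c_bound = np.polyfit(boundary_pts[:, 0], boundary_pts[:, 1], deg)
    roots = np.roots(np.polysub(c_seam, c_bound))
    x0 = float(np.real(roots[np.isreal(roots)][0]))
    return x0, float(np.polyval(c_seam, x0))

def fine_swp(gray, x0, y0, win=15):
    """Fine step: strongest corner inside a small window around the coarse SWP.

    gray: single-channel uint8 image; the window is assumed to lie inside it.
    """
    x0, y0 = int(round(x0)), int(round(y0))
    patch = gray[y0 - win:y0 + win, x0 - win:x0 + win]
    corners = cv2.goodFeaturesToTrack(patch, maxCorners=1,
                                      qualityLevel=0.01, minDistance=5)
    if corners is None:
        return float(x0), float(y0)            # fall back to the coarse estimate
    cx, cy = corners[0, 0]
    return x0 - win + float(cx), y0 - win + float(cy)
```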

Findings

The results show that the actual SWP and the position calculated in the first step are identical for regular seams but differ for irregular curved seams. Exact results can be obtained with the two‐step method for both regular and irregular seams. Typical planar “S‐shaped” and spatial arc-curved seams are selected to carry out autonomous guidance to the SWP.

Originality/value

Experimental results are presented based on the introduced 3D reconstruction and guiding method. The guiding error is less than 1.1 mm, which meets the requirements of practical production. The proposed two‐step method recognizes the SWP rapidly and exactly, from coarse to fine.

Details

Industrial Robot: An International Journal, vol. 37 no. 1
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 1 January 1991

D.F.H. Wolfe, S.W. Wijesoma and R.J. Richards

Tasks in automated manufacturing and assembly increasingly involve robot operations guided by vision systems. The traditional “look‐and‐move” approach to linking machine vision…

Abstract

Tasks in automated manufacturing and assembly increasingly involve robot operations guided by vision systems. The traditional “look‐and‐move” approach to linking machine vision systems and robot manipulators which is generally used in these operations relies heavily on accurate camera to real‐world calibration processes and on highly accurate robot arms with well‐known kinematics. As a consequence, the cost of robot automation has not been justifiable in many applications. This article describes a novel real‐time vision control strategy giving “eye‐to‐hand co‐ordination” which offers good performance even in the presence of significant vision system miscalibrations and kinematic model parametric errors. This strategy offers the potential for low cost vision‐guided robots.
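
Strategies of this kind typically close the control loop on the image-space feature error rather than on a calibrated world model, which is what makes them tolerant of calibration and kinematic errors. The sketch below shows the classical image-based control law v = -gain * pinv(L) @ (s - s*) with an approximate point-feature interaction matrix; it is a generic textbook construction to illustrate the principle, not the specific strategy of the article, and the focal length, depths and gain are placeholders.

```python
import numpy as np

def ibvs_velocity(s, s_star, depths, focal=800.0, gain=0.5):
    """Camera velocity twist from image error: v = -gain * pinv(L) @ (s - s*).

    s, s_star: (N, 2) current and desired image points (pixels, relative to the
    principal point); depths: (N,) rough depth estimates [m]; focal in pixels.
    """
    s = np.asarray(s, dtype=float)
    s_star = np.asarray(s_star, dtype=float)
    rows = []
    for (x, y), z in zip(s / focal, depths):          # normalized image coordinates
        rows.append([-1.0 / z, 0.0, x / z, x * y, -(1.0 + x * x), y])
        rows.append([0.0, -1.0 / z, y / z, 1.0 + y * y, -x * y, -x])
    L = np.asarray(rows)                              # (2N, 6) interaction matrix
    e = (s - s_star).reshape(-1) / focal              # stacked normalized error
    return -gain * np.linalg.pinv(L) @ e              # 6-DOF velocity (vx..wz)
```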

Details

Assembly Automation, vol. 11 no. 1
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 10 June 2014

Du-Ming Tsai, Hao Hsu and Wei-Yao Chiu

This study aims to propose a door detection method based on the door properties in both depth and gray-level images. It can further help blind people (or mobile robots) find the…

Abstract

Purpose

This study aims to propose a door detection method based on the door properties in both depth and gray-level images. It can further help blind people (or mobile robots) find the doorway to their destination.

Design/methodology/approach

The proposed method uses a hierarchical point–line–region principle with majority voting to encode the surface features pixel by pixel, then the dominant scene entities line by line, and finally the prioritized scene entities in the center, left and right of the observed scene.
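
The pixel-to-line-to-region hierarchy can be caricatured with simple majority voting, as in the toy NumPy sketch below; the three-way left/center/right column split and the label handling are illustrative assumptions, not the paper's actual encoding rules.

```python
import numpy as np

def majority(labels):
    """Most frequent label in an array (ties broken arbitrarily)."""
    values, counts = np.unique(labels, return_counts=True)
    return values[np.argmax(counts)]

def hierarchical_labels(pixel_labels):
    """pixel_labels: (H, W) integer label image from a per-pixel classifier."""
    line_labels = np.array([majority(row) for row in pixel_labels])      # one label per scan line
    col_groups = np.array_split(np.arange(pixel_labels.shape[1]), 3)     # left / center / right columns
    region_labels = {name: majority(pixel_labels[:, cols])
                     for name, cols in zip(("left", "center", "right"), col_groups)}
    return line_labels, region_labels
```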

Findings

This approach is very robust to noise and random misclassification at the pixel, line and region levels, and it provides sufficient information about the pathway in front and to the left and right of the scene. The proposed robot vision-assist system can be worn by visually impaired people or mounted on mobile robots. It provides more complete information about the surrounding environment to guide the user safely and effectively to the destination.

Originality/value

In this study, the proposed robot vision scheme provides detailed configurations of the environments encountered in daily life, including stairs (up and down), curbs/steps (up and down), obstacles, overheads, potholes/gutters, hazards and accessible ground. All the scene entities detected in the environment give blind people (or mobile robots) more complete information for better decision-making. In particular, a door detection method is proposed based on the door's features in both depth and gray-level images, which can further help blind people find the doorway to their destination in an unfamiliar environment.

Details

Industrial Robot: An International Journal, vol. 41 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 18 January 2021

Hua Zhou, Dong Wei, Yinglong Chen and Fa Wu

To promote the intuitiveness of collaborative tasks, the ability of humans to negotiate with each other has inspired a large number of studies aimed at reproducing this capacity in…

Abstract

Purpose

To promote the intuitiveness of collaborative tasks, the ability of humans to negotiate with each other has inspired a large number of studies aimed at reproducing this capacity in physical human-robot interaction (pHRI). This paper aims to promote mutual adaptation in negotiation when both parties possess incomplete information.

Design/methodology/approach

This paper introduces virtual fixtures into the traditional negotiation mechanism, locally regulating the tracking trajectory and impedance parameters during the negotiation phase until the final plan integrates both parties' intentions well. In this strategy, the robot conveys its task information to the human and offers groups of guide plans to choose from, on the premise of maximizing the robot's own profit.
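
To give a feel for how a virtual fixture can enter such a negotiation, the sketch below blends the human's measured force with a spring-like pull toward the robot's preferred trajectory inside an admittance law, with the fixture stiffness playing the role of a locally regulated impedance parameter. This is a schematic reading of the general idea, not the authors' algorithm; all symbols and gains are invented.

```python
import numpy as np

def negotiated_velocity(x, robot_plan_point, f_human,
                        k_fixture=50.0, admittance=0.01):
    """Blend the human's force with a virtual-fixture pull toward the robot's plan."""
    x = np.asarray(x, dtype=float)
    f_fixture = k_fixture * (np.asarray(robot_plan_point, dtype=float) - x)  # spring toward the robot's preferred trajectory
    return admittance * (np.asarray(f_human, dtype=float) + f_fixture)       # commanded Cartesian velocity
```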

Findings

Compared with traditional negotiation strategies, humans adapt to the robot more easily and show lower cognitive load with the proposed method, while the agreed plan performs better for the human-robot system as a whole.

Originality/value

This paper proposes a novel negotiation strategy to facilitate the mutual adaptation of humans and robots in complicated shared tasks, especially when both parties possess incomplete information about the task.

Details

Industrial Robot: the international journal of robotics research and application, vol. 48 no. 2
Type: Research Article
ISSN: 0143-991X
