Search results

1 – 10 of 44
Content available
Article
Publication date: 1 June 1999

Details

Industrial Robot: An International Journal, vol. 26 no. 4
Type: Research Article
ISSN: 0143-991X

Content available
Article
Publication date: 13 November 2023

Sheuli Paul

Abstract

Purpose

This paper presents a survey of research into interactive robotic systems for the purpose of identifying the state-of-the-art capabilities as well as the extant gaps in this emerging field. Communication is multimodal: multimodality is a representation of many modes, chosen from rhetorical aspects for their communication potential. The author seeks to define the available automation capabilities in communication using multimodalities that will support a proposed Interactive Robot System (IRS), an AI-mounted robotic platform intended to advance the speed and quality of military operational and tactical decision making.

Design/methodology/approach

This review begins by presenting key developments in the robotic interaction field, with the objective of identifying the essential technologies that set the conditions for robotic platforms to function autonomously. After surveying the key aspects of Human Robot Interaction (HRI), Unmanned Autonomous System (UAS), visualization, Virtual Environment (VE) and prediction, the paper then describes the gaps in the application areas that will require extension and integration to enable prototyping of the IRS. A brief examination of other work in HRI-related fields concludes with a recapitulation of the IRS challenge that will set conditions for future success.

Findings

Using insights from a balanced cross-section of government, academic and commercial sources that contribute to HRI, a multimodal IRS for military communication is introduced. A multimodal IRS (MIRS) in military communication has yet to be deployed.

Research limitations/implications

A multimodal robotic interface for the MIRS is an interdisciplinary endeavour. It is not realistic for one person to command all of the expert and related knowledge and skills needed to design and develop such a multimodal interactive robotic interface. In this brief preliminary survey, the author discusses extant AI, robotics, NLP, CV, VDM and VE applications that are directly related to multimodal interaction. Each mode of this multimodal communication is an active research area. Multimodal human/military robot communication is the ultimate goal of this research.

Practical implications

A multimodal autonomous robot for military communication using speech, images, gestures, VST and VE has yet to be deployed. Autonomous multimodal communication is expected to open wider possibilities for all armed forces. Given the density of the land domain, the army is in a position to exploit the opportunities for human–machine teaming (HMT) exposure. Naval and air forces will adopt platform-specific suites for specially selected operators to integrate with and leverage this emerging technology. A flexible means of communication that readily adapts to virtual training will greatly enhance planning and mission rehearsal.

Social implications

A multimodal communication system based on interaction, perception, cognition and visualization is still missing. Options to communicate, express and convey information in an HMT setting, with multiple options, suggestions and recommendations, will certainly enhance military communication, strength, engagement, security, cognition and perception, as well as the ability to act confidently for a successful mission.

Originality/value

The objective is to develop a multimodal autonomous interactive robot for military communications. This survey reports the state of the art: what exists and what is missing, what can be done and what possibilities for extension would support the military in maintaining effective communication using multimodalities. There is separate ongoing progress in areas such as machine-enabled speech, image recognition, tracking, visualization for situational awareness and virtual environments, but at this time there is no integrated approach to multimodal human-robot interaction that provides flexible and agile communication. The report briefly introduces a research proposal for a multimodal interactive robot in military communication.

Open Access
Article
Publication date: 18 April 2023

Wenzhen Yang, Johan K. Crone, Claus R. Lønkjær, Macarena Mendez Ribo, Shuo Shan, Flavia Dalia Frumosu, Dimitrios Papageorgiou, Yu Liu, Lazaros Nalpantidis and Yang Zhang

Abstract

Purpose

This study aims to present a vision-guided robotic system design for application in vat photopolymerization additive manufacturing (AM), enabling a hybrid of vat photopolymerization AM with the injection molding process.

Design/methodology/approach

In the system, a robot equipped with a camera and a custom-made gripper, and driven by a visual servoing (VS) controller, is expected to perceive objects, handle variation, connect the process steps in the soft tooling process and realize automation of vat photopolymerization AM. Meanwhile, the vat photopolymerization AM printer is customized in both hardware and software to interact with the robotic system.
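
The abstract does not give the form of the VS controller; purely as an illustration, a classical image-based visual servoing law maps the error between measured and desired image features to a camera velocity command. The Python sketch below is a generic example, not the authors' implementation, and the gain, feature vectors and interaction matrix are placeholders.

```python
import numpy as np

def ibvs_velocity(features, desired_features, interaction_matrix, gain=0.5):
    """Classical image-based visual servoing law: v = -lambda * L^+ * (s - s*).

    features, desired_features: stacked image-feature vectors (e.g. marker corners).
    interaction_matrix: feature Jacobian L relating feature motion to camera velocity.
    Returns a 6-DoF camera velocity twist (vx, vy, vz, wx, wy, wz).
    """
    error = np.asarray(features) - np.asarray(desired_features)
    return -gain * np.linalg.pinv(interaction_matrix) @ error
```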

Findings

With the ArUco marker-based vision-guided robotic system, the printing platform can be manipulated from an arbitrary initial position quickly and robustly, which constitutes the first step in exploring the automation of vat photopolymerization AM hybridized with the soft tooling process.
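
As background for the ArUco-based guidance described above, the following Python sketch shows one common way to recover a marker (and hence platform) pose from a camera image with OpenCV's aruco module. The camera intrinsics, marker dictionary and marker size are illustrative assumptions; this is not the authors' code.

```python
import cv2
import numpy as np

# Illustrative values; real intrinsics come from camera calibration and the
# marker size from the printed marker (all assumptions, not the paper's setup).
CAMERA_MATRIX = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0, 0.0, 1.0]])
DIST_COEFFS = np.zeros(5)
MARKER_LENGTH = 0.04  # marker side length in metres

DICTIONARY = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def platform_pose_from_marker(image):
    """Detect an ArUco marker and return its pose (rvec, tvec) in the camera frame."""
    corners, ids, _ = cv2.aruco.detectMarkers(image, DICTIONARY)
    if ids is None:
        return None
    half = MARKER_LENGTH / 2.0
    # Marker corner coordinates in the marker frame, in the same order
    # (top-left, top-right, bottom-right, bottom-left) as detectMarkers returns them.
    object_points = np.array([[-half,  half, 0.0],
                              [ half,  half, 0.0],
                              [ half, -half, 0.0],
                              [-half, -half, 0.0]], dtype=np.float32)
    image_points = corners[0].reshape(4, 2).astype(np.float32)
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                  CAMERA_MATRIX, DIST_COEFFS)
    return (rvec, tvec) if ok else None
```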

Originality/value

The vision-guided robotic system monitors and controls the vat photopolymerization AM process, which has potential for hybridizing vat photopolymerization AM with other mass production methods, for instance injection molding.

Details

Journal of Intelligent Manufacturing and Special Equipment, vol. 4 no. 2
Type: Research Article
ISSN: 2633-6596

Content available
Article
Publication date: 19 October 2010

Details

Industrial Robot: An International Journal, vol. 37 no. 6
Type: Research Article
ISSN: 0143-991X

Open Access
Article
Publication date: 25 March 2021

Bartłomiej Kulecki, Kamil Młodzikowski, Rafał Staszak and Dominik Belter

Abstract

Purpose

The purpose of this paper is to propose and evaluate the method for grasping a defined set of objects in an unstructured environment. To this end, the authors propose the method of integrating convolutional neural network (CNN)-based object detection and the category-free grasping method. The considered scenario is related to mobile manipulating platforms that move freely between workstations and manipulate defined objects. In this application, the robot is not positioned with respect to the table and manipulated objects. The robot detects objects in the environment and uses grasping methods to determine the reference pose of the gripper.

Design/methodology/approach

The authors implemented the whole pipeline, which includes object detection, grasp planning and motion execution, on a real robot. The selected grasping method uses raw depth images to find the configuration of the gripper. The authors compared the proposed approach with a representative grasping method that uses a 3D point cloud as input to determine the grasp for a robotic arm equipped with a two-fingered gripper. To measure and compare the efficiency of these methods, the authors measured the success rate in various scenarios. Additionally, they evaluated the accuracy of the object detection and pose estimation modules.
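
As a rough illustration of such a pipeline (not the authors' implementation), the Python sketch below wires together a hypothetical CNN detector, a category-free grasp planner operating on a raw depth crop, and a motion-execution interface. The Detection and Grasp types and the callable interfaces are placeholders chosen for this example.

```python
from dataclasses import dataclass
from typing import Callable, List
import numpy as np

@dataclass
class Detection:
    label: str
    box: tuple          # (x0, y0, x1, y1) pixel coordinates from the CNN detector

@dataclass
class Grasp:
    pose: np.ndarray    # 6-DoF gripper pose in the robot base frame
    quality: float      # planner's confidence score

def grasp_named_object(rgb: np.ndarray,
                       depth: np.ndarray,
                       target: str,
                       detect: Callable[[np.ndarray], List[Detection]],
                       plan_grasps: Callable[[np.ndarray], List[Grasp]],
                       execute: Callable[[Grasp], None]) -> bool:
    """Detect the target object, plan grasps on its depth crop and execute the best one."""
    hits = [d for d in detect(rgb) if d.label == target]
    if not hits:
        return False
    x0, y0, x1, y1 = hits[0].box
    grasps = plan_grasps(depth[y0:y1, x0:x1])   # category-free grasping on raw depth
    if not grasps:
        return False
    execute(max(grasps, key=lambda g: g.quality))
    return True
```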

Findings

The experiments revealed that CNN-based object detection and category-free grasping methods can be integrated into a system that grasps defined objects in an unstructured environment. The authors also identified the specific limitations of the neural-based and point cloud-based methods and show how these properties influence the performance of the whole system.

Research limitations/implications

The authors identified the limitations of the proposed methods, and improvements are envisioned as part of future research.

Practical implications

The evaluation of the grasping and object detection methods on the mobile manipulating robot may be useful for all researchers working on the autonomy of similar platforms in various applications.

Social implications

The proposed method increases the autonomy of robots in small-industry applications involving repetitive tasks in noisy and potentially risky environments. This reduces the human workload in such environments.

Originality/value

The main contribution of this research is the integration of state-of-the-art object grasping methods with object detection methods and the evaluation of the whole system on an industrial robot. Moreover, the properties of each subsystem are identified and measured.

Details

Industrial Robot: the international journal of robotics research and application, vol. 48 no. 5
Type: Research Article
ISSN: 0143-991X

Content available
Article
Publication date: 12 December 2018

Hesheng Wang

Details

Assembly Automation, vol. 38 no. 5
Type: Research Article
ISSN: 0144-5154

Open Access
Article
Publication date: 4 April 2024

Yanmin Zhou, Zheng Yan, Ye Yang, Zhipeng Wang, Ping Lu, Philip F. Yuan and Bin He

Abstract

Purpose

Vision, audition, olfaction, touch and taste are five important senses that humans use to interact with the real world. As robots face increasingly complex environments, a sensing system with various types of sensors is essential for intelligent robots. To mimic human-like abilities, sensors with perception capabilities similar to those of humans are indispensable. However, most research has concentrated only on analyzing the literature on single-modal sensors and their robotic applications.

Design/methodology/approach

This study presents a systematic review of the five bioinspired senses, including a brief introduction to multimodal sensing applications and a discussion of current trends and future directions of the field, which may offer continuing insight.

Findings

This review shows that bioinspired sensors can enable robots to better understand the environment, and multiple sensor combinations can support the robot’s ability to behave intelligently.

Originality/value

The review starts with a brief survey of the biological sensing mechanisms of the five senses, followed by their bioinspired electronic counterparts. Their applications in robots are then reviewed as another emphasis, covering the main application areas of localization and navigation, object identification, dexterous manipulation, compliant interaction and so on. Finally, the trends, difficulties and challenges of this research are discussed to help guide future work on intelligent robot sensors.

Details

Robotic Intelligence and Automation, vol. 44 no. 2
Type: Research Article
ISSN: 2754-6969

Content available
Article
Publication date: 1 April 2001

Details

Industrial Robot: An International Journal, vol. 28 no. 2
Type: Research Article
ISSN: 0143-991X

Content available
Book part
Publication date: 5 October 2018

Details

Fuzzy Hybrid Computing in Construction Engineering and Management
Type: Book
ISBN: 978-1-78743-868-2

Content available
Book part
Publication date: 25 October 2022

Hannah R. Marston, Linda Shore, Laura Stoops and Robbie S. Turner

Details

Transgenerational Technology and Interactions for the 21st Century: Perspectives and Narratives
Type: Book
ISBN: 978-1-83982-639-9
