Search results

1 – 10 of over 7000
Open Access
Article
Publication date: 22 August 2023

Mahesh Babu Purushothaman and Kasun Moolika Gedara

This pragmatic research paper aims to unravel the smart vision-based method (SVBM), an AI program that correlates computer vision (recorded and live videos from mobile and…


Abstract

Purpose

This pragmatic research paper aims to unravel the smart vision-based method (SVBM), an AI program that correlates computer vision (recorded and live videos from mobile and embedded cameras) to aid manual-lifting human pose detection, analysis and training in the construction sector.

Design/methodology/approach

Using a pragmatic approach combined with a literature review, this study discusses the SVBM. The research method comprises a literature review, followed by the pragmatic approach and lab validation of the acquired data. Adopting this practical approach, the authors developed the SVBM, an AI program that correlates computer vision (recorded and live videos from mobile and embedded cameras).

Findings

Results show that SVBM observes the relevant events without additional attachments to the human body and compares them with a standard axis to identify abnormal postures using mobile and other cameras. Angles at critical nodal points are obtained through human pose detection, and body-part movement angles are calculated using a novel software program and mobile application. The SVBM captures and analyses data both in real time and offline using previously recorded videos, and it was validated for program coding and repeatability of results.
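
As an illustration only (the SVBM software described in the paper is not publicly available), the sketch below shows the generic joint-angle computation that the abstract refers to, applied to hypothetical 2D keypoints of the kind an off-the-shelf pose estimator returns; all landmark names and coordinate values are assumed for the example.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at point b, formed by the segments b->a and b->c."""
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    v1, v2 = a - b, c - b
    cos_ang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))

# Hypothetical keypoints (pixel coordinates) from one video frame.
shoulder, hip, knee = (412, 220), (405, 390), (398, 540)
print(f"Hip (torso-thigh) angle: {joint_angle(shoulder, hip, knee):.1f} deg")
```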

Research limitations/implications

The limitations of the literature review methodology include the risk of not keeping pace with the most up-to-date field knowledge; this is offset by restricting the review to the last two decades. The review may not have captured all published articles because of restricted database access and because the search was limited to English. The authors may also have omitted relevant articles published in less prominent journals. These limitations are acknowledged. The critical limitation is that trust, privacy and psychological issues are not addressed in SVBM, which is recognised; however, the benefits of SVBM naturally offset this limitation for practical adoption.

Practical implications

The theoretical and practical implications include customised, individual-level prediction and the prevention of most posture-related hazardous behaviours before a critical injury occurs. The theoretical implications include mimicking the human pose and performing lab-based analysis without attaching sensors that would naturally alter working poses. SVBM would help researchers develop more accurate data and theoretical models closer to actual conditions.

Social implications

By using SVBM, the possibility of early detection and prevention of musculoskeletal disorders is high; the social implications include the benefits of a healthier society and a more health-conscious construction sector.

Originality/value

Human pose detection, especially joint angle calculation in a work environment, is crucial for the early detection of musculoskeletal disorders. Conventional digital-technology-based methods for detecting pose flaws rely on location information from wearables and laboratory-controlled motion sensors. For the first time, this paper presents novel computer vision (recorded and live videos using mobile and embedded cameras) and digital-image-based deep learning methods, without attachments to the human body, for manual-handling pose detection and analysis of angles, the neckline and the torso line in an actual construction work environment.

Details

Smart and Sustainable Built Environment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2046-6099

Keywords

Content available
Article
Publication date: 13 November 2023

Sheuli Paul

This paper presents a survey of research into interactive robotic systems for the purpose of identifying the state-of-the-art capabilities as well as the extant gaps in this…


Abstract

Purpose

This paper presents a survey of research into interactive robotic systems for the purpose of identifying the state-of-the-art capabilities as well as the extant gaps in this emerging field. Communication is multimodal, and multimodality here denotes a combination of many modes chosen from rhetorical aspects for their communication potential. The author seeks to define the available automation capabilities for communication using multimodalities that will support a proposed Interactive Robot System (IRS), an AI-mounted robotic platform, in advancing the speed and quality of military operational and tactical decision-making.

Design/methodology/approach

This review will begin by presenting key developments in the robotic interaction field with the objective of identifying essential technological developments that set conditions for robotic platforms to function autonomously. After surveying the key aspects in Human Robot Interaction (HRI), Unmanned Autonomous System (UAS), visualization, Virtual Environment (VE) and prediction, the paper then proceeds to describe the gaps in the application areas that will require extension and integration to enable the prototyping of the IRS. A brief examination of other work in HRI-related fields concludes with a recapitulation of the IRS challenge that will set conditions for future success.

Findings

Using insights from a balanced cross-section of government, academic and commercial sources that contribute to HRI, a multimodal IRS for military communication is introduced. A multimodal IRS (MIRS) for military communication has yet to be deployed.

Research limitations/implications

A multimodal robotic interface for the MIRS is an interdisciplinary endeavour; it is not realistic for one person to master all of the expert and related knowledge and skills needed to design and develop such an interface. In this brief preliminary survey, the author discusses extant AI, robotics, NLP, CV, VDM and VE applications that are directly related to multimodal interaction. Each mode of this multimodal communication is an active research area. Multimodal human/military robot communication is the ultimate goal of this research.

Practical implications

A multimodal autonomous robot for military communication using speech, images, gestures, VST and VE has yet to be deployed. Autonomous multimodal communication is expected to open wider possibilities for all armed forces. Given the density of the land domain, the army is well placed to exploit the opportunities for human–machine teaming (HMT) exposure. Naval and air forces will adopt platform-specific suites for specially selected operators to integrate with and leverage this emerging technology. A flexible means of communication that readily adapts to virtual training will greatly enhance planning and mission rehearsals.

Social implications

A multimodal communication system based on interaction, perception, cognition and visualization is still missing. Options to communicate, express and convey information in an HMT setting, with multiple options, suggestions and recommendations, will certainly enhance military communication, strength, engagement, security, cognition and perception, as well as the ability to act confidently for a successful mission.

Originality/value

The objective is to develop a multimodal autonomous interactive robot for military communications. This survey reports the state of the art: what exists and what is missing, what can be done and the possibilities for extensions that support the military in maintaining effective communication using multimodalities. There is separate ongoing progress in areas such as machine-enabled speech, image recognition, tracking, visualizations for situational awareness and virtual environments, but at this time there is no integrated approach to multimodal human-robot interaction that provides flexible and agile communication. The report briefly introduces a research proposal for a multimodal interactive robot for military communication.

Open Access
Article
Publication date: 13 July 2022

Jiqian Dong, Sikai Chen, Mohammad Miralinaghi, Tiantian Chen and Samuel Labi

Perception failures have been identified as the main cause underlying most accidents involving autonomous vehicles. As the key technology in perception, deep learning (DL)-based computer vision…

Abstract

Purpose

Perception failures have been identified as the main cause underlying most accidents involving autonomous vehicles. As the key technology in perception, deep learning (DL)-based computer vision models are generally considered black boxes because of their poor interpretability, which has exacerbated user distrust and further forestalled their widespread practical deployment. This paper aims to develop explainable DL models for autonomous driving by jointly predicting potential driving actions with corresponding explanations. The explainable DL models can not only boost user trust in autonomy but also serve as a diagnostic approach to identify any model deficiencies or limitations during the system development phase.

Design/methodology/approach

This paper proposes an explainable end-to-end autonomous driving system based on “Transformer,” a state-of-the-art self-attention (SA)-based model. The model maps visual features from images collected by onboard cameras to potential driving actions with corresponding explanations, and it aims to achieve soft attention over the image’s global features.
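
As a minimal sketch only, and not the authors' architecture, the snippet below illustrates the general idea of self-attention-based fusion of image grid features feeding separate action and explanation heads; the feature dimension and the class counts (4 actions, 21 explanations) are assumptions chosen for illustration rather than values taken from the paper.

```python
import torch
import torch.nn as nn

class SAFusionDriver(nn.Module):
    """Self-attention fusion over image grid features with two output heads."""
    def __init__(self, feat_dim=256, n_heads=4, num_actions=4, num_explanations=21):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)
        self.action_head = nn.Linear(feat_dim, num_actions)            # driving actions
        self.explanation_head = nn.Linear(feat_dim, num_explanations)  # explanation labels

    def forward(self, tokens):
        # tokens: (batch, num_regions, feat_dim) grid features from a CNN backbone
        fused, _ = self.attn(tokens, tokens, tokens)  # soft attention over all regions
        fused = self.norm(tokens + fused)             # residual connection + layer norm
        pooled = fused.mean(dim=1)                    # global image representation
        return self.action_head(pooled), self.explanation_head(pooled)

# Example: a batch of 2 images, each represented as a 7x7 grid of 256-d features.
model = SAFusionDriver()
actions, explanations = model(torch.randn(2, 49, 256))
print(actions.shape, explanations.shape)  # torch.Size([2, 4]) torch.Size([2, 21])
```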

Findings

The results demonstrate the efficacy of the proposed model as it exhibits superior performance (in terms of correct prediction of actions and explanations) compared to the benchmark model by a significant margin with much lower computational cost on a public data set (BDD-OIA). From the ablation studies, the proposed SA module also outperforms other attention mechanisms in feature fusion and can generate meaningful representations for downstream prediction.

Originality/value

In the contexts of situational awareness and driver assistance, the proposed model can perform as a driving alarm system for both human-driven vehicles and autonomous vehicles because it is capable of quickly understanding/characterizing the environment and identifying any infeasible driving actions. In addition, the extra explanation head of the proposed model provides an extra channel for sanity checks to guarantee that the model learns the ideal causal relationships. This provision is critical in the development of autonomous systems.

Details

Journal of Intelligent and Connected Vehicles, vol. 5 no. 3
Type: Research Article
ISSN: 2399-9802

Keywords

Open Access
Article
Publication date: 17 March 2022

Federico P. Zasa, Roberto Verganti and Paola Bellis

Having a shared vision is crucial for innovation. The purpose of this paper is to investigate the effect of individual propensity to collaborate and innovate on the development of…


Abstract

Purpose

Having a shared vision is crucial for innovation. The purpose of this paper is to investigate the effect of individual propensity to collaborate and innovate on the development of a shared vision.

Design/methodology/approach

The authors build a network in which each node represents the vision of one individual and link the network structure to individual propensity to collaborate and innovate. During organizational workshops in four multinational organizations, the authors collected individual visions, in the form of images as well as text describing the approach to innovation, from 85 employees.

Findings

The study maps individual visions for innovation as a cognitive network. The authors find that individual propensity to innovate or collaborate is associated with different forms of network centrality. Innovators, individuals who see innovation as an opportunity to change and grow, are located at the center of the cognitive network. Collaborators, who see innovation as an opportunity to collaborate, have higher closeness centrality within a cluster.
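
As a schematic illustration only (the study's actual network is built from coded images and text that are not reproduced here), the toy graph below shows how centrality measures of the kind reported in the findings can be computed with networkx; all nodes and edges are hypothetical.

```python
import networkx as nx

# Nodes are individuals; an edge links two people whose visions were judged similar.
G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("A", "C"), ("A", "D"), ("A", "E"),  # A sits at the centre of the network
    ("E", "F"), ("F", "G"), ("G", "E"),              # E, F, G form a tight local cluster
])

print(nx.degree_centrality(G))                               # centrality in the whole network
print(nx.closeness_centrality(G))                            # closeness across the network
print(nx.closeness_centrality(G.subgraph(["E", "F", "G"])))  # closeness within one cluster
```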

Research limitations/implications

This paper analyses visions as a network, linking recent research in psychology with the managerial call for a more thorough investigation of group cognition. The study contributes to the literature on shared-vision creation by suggesting the roles that innovators and collaborators can occupy in the process.

Originality/value

This paper proposes how an approach based on a cognitive network can inform innovation management. The findings suggest that the visions of innovators summarize the visions of a group, helping the development of an overall shared vision. Collaborators, on the other hand, are representative of specific clusters and can help develop radical visions.

Details

European Journal of Innovation Management, vol. 25 no. 6
Type: Research Article
ISSN: 1460-1060

Keywords

Content available
Article
Publication date: 1 October 2003


Abstract

Details

Industrial Robot: An International Journal, vol. 30 no. 5
Type: Research Article
ISSN: 0143-991X

Keywords

Content available
Article
Publication date: 1 April 2004

Jon Rigelsford


Abstract

Details

Industrial Robot: An International Journal, vol. 31 no. 2
Type: Research Article
ISSN: 0143-991X

Keywords

Open Access
Article
Publication date: 6 July 2020

John Fiset and Melanie A. Robinson

Scholars and practitioners generally acknowledge the crucial importance of visions in motivating and inspiring organizational change. In this article, we describe a two-part…


Abstract

Purpose

Scholars and practitioners generally acknowledge the crucial importance of visions in motivating and inspiring organizational change. In this article, we describe a two-part activity based on visionary leadership scholarship and theory designed to teach students to cultivate foresight and consider future possibilities through the organizational vision statement development process.

Design/methodology/approach

Using an experiential design, the exercise draws on several empirically validated techniques to encourage foresight and future thinking. Students place themselves in the shoes of the chief executive officer of a hypothetical organization and use dramaturgical character-development strategies to craft the vision statements that they will champion.

Findings

The exercise has been used in three different business courses (N = 87) and has been well received.

Originality/value

The content of the exercise is adaptable to a variety of courses in which leadership and vision are focal topics – such as organizational behavior, strategy and leadership – and could also be modified for an online classroom setting.

Details

Organization Management Journal, vol. 17 no. 2
Type: Research Article
ISSN: 1541-6518

Keywords

Open Access
Article
Publication date: 3 May 2022

Junbo Liu, Yaping Huang, Shengchun Wang, Xinxin Zhao, Qi Zou and Xingyuan Zhang

This research aims to improve the performance of rail fastener defect inspection for multiple railways, to effectively ensure the safety of railway operations.

Abstract

Purpose

This research aims to improve the performance of rail fastener defect inspection for multiple railways, to effectively ensure the safety of railway operations.

Design/methodology/approach

Firstly, a fastener-region location method based on an online learning strategy was proposed, which locates fastener regions using prior knowledge of the track image and a template matching method. The online learning strategy updates the template library dynamically, so the method not only locates fastener regions in track images from multiple railways but also automatically collects and annotates fastener samples. Secondly, a fastener defect recognition method based on a deep convolutional neural network was proposed. The structure of the recognition network was designed for the small size and relatively uniform content of the fastener region, and a data augmentation method based on a random sample-sorting strategy was adopted to reduce the impact of sample-size imbalance on recognition performance.
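
As an illustrative sketch only, and not the authors' system, the snippet below shows normalized cross-correlation template matching with OpenCV for locating a fastener region and appending the located region to a template library, a simplified stand-in for the online learning strategy described above; the file names and the confidence threshold are assumptions.

```python
import cv2

# Hypothetical file names; both images are assumed to exist on disk.
track = cv2.imread("track_image.png", cv2.IMREAD_GRAYSCALE)
template_library = [cv2.imread("fastener_template.png", cv2.IMREAD_GRAYSCALE)]

best_score, best_box = -1.0, None
for tmpl in template_library:
    h, w = tmpl.shape
    result = cv2.matchTemplate(track, tmpl, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)      # best match score and location
    if score > best_score:
        best_score, best_box = score, (*top_left, w, h)

x, y, w, h = best_box
if best_score > 0.7:                                    # assumed confidence threshold
    template_library.append(track[y:y + h, x:x + w])    # dynamic template-library update
print("fastener region:", best_box, "score:", round(best_score, 2))
```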

Findings

Test verification of the proposed method was conducted on rail fastener datasets from multiple railways. Specifically, the fastener location module achieved an average detection rate of 99.36%, and the fastener defect recognition module achieved an average precision of 96.82%.

Originality/value

The proposed method can accurately locate fastener regions and identify fastener defects in track images from different railways, with high reliability and strong adaptability to multiple railways.

Details

Railway Sciences, vol. 1 no. 2
Type: Research Article
ISSN: 2755-0907

Keywords

Content available
Book part
Publication date: 16 May 2017

Eric J. Bolland

Abstract

Details

Comprehensive Strategic Management
Type: Book
ISBN: 978-1-78714-225-1

Open Access
Article
Publication date: 4 April 2024

Yanmin Zhou, Zheng Yan, Ye Yang, Zhipeng Wang, Ping Lu, Philip F. Yuan and Bin He

Vision, audition, olfaction, touch and taste are five important senses that humans use to interact with the real world. As robots face more and more complex environments, a sensing…

Abstract

Purpose

Vision, audition, olfaction, touch and taste are five important senses that humans use to interact with the real world. As robots face more and more complex environments, a sensing system with various types of sensors is essential for intelligent robots. To mimic human-like abilities, sensors with perception capabilities similar to those of humans are indispensable. However, most research has concentrated only on analyzing the literature on single-modal sensors and their robotic applications.

Design/methodology/approach

This study presents a systematic review of the five bioinspired senses, together with a brief introduction to multimodal sensing applications and a discussion of current trends and future directions in this field, which may provide continuing insights.

Findings

This review shows that bioinspired sensors can enable robots to better understand the environment, and multiple sensor combinations can support the robot’s ability to behave intelligently.

Originality/value

The review starts with a brief survey of the biological sensing mechanisms of the five senses, followed by their bioinspired electronic counterparts. Their applications in robots are then reviewed as another emphasis, covering the main application scopes of localization and navigation, object identification, dexterous manipulation, compliant interaction and so on. Finally, the trends, difficulties and challenges of this research are discussed to help guide future research on intelligent robot sensors.

Details

Robotic Intelligence and Automation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2754-6969

Keywords
