Search results

1 – 10 of 37
Article
Publication date: 2 January 2024

Xiangdi Yue, Yihuan Zhang, Jiawei Chen, Junxin Chen, Xuanyi Zhou and Miaolei He

Abstract

Purpose

In recent decades, the field of robotic mapping has witnessed widespread research and development in light detection and ranging (LiDAR)-based simultaneous localization and mapping (SLAM) techniques. This paper aims to provide a significant reference for researchers and engineers in robotic mapping.

Design/methodology/approach

This paper focuses on the research state of LiDAR-based SLAM for robotic mapping and provides a literature survey from the perspective of various LiDAR types and configurations.

Findings

This paper conducted a comprehensive literature review of LiDAR-based SLAM systems, organized around three distinct LiDAR forms and configurations. The authors concluded that multi-robot collaborative mapping and multi-source fusion SLAM systems based on 3D LiDAR with deep learning will be new trends in the future.

Originality/value

To the best of the authors’ knowledge, this is the first thorough survey of robotic mapping from the perspective of various LiDAR types and configurations. It can serve as a theoretical and practical guide for the advancement of academic and industrial robot mapping.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 2
Type: Research Article
ISSN: 0143-991X

Open Access
Article
Publication date: 26 March 2024

Daniel Nygaard Ege, Pasi Aalto and Martin Steinert

Abstract

Purpose

This study was conducted to address the methodological shortcomings and high cost of understanding how new, poorly understood architectural spaces, such as makerspaces, are used. The proposed quantified method of enhancing current post-occupancy evaluation (POE) practices aims to provide architects, engineers and building professionals with accessible and intuitive data that can be used to conduct comparative studies of spatial changes, understand changes over time (such as those resulting from COVID-19) and verify design intentions after construction.

Design/methodology/approach

In this study, we demonstrate the use of ultra-wideband (UWB) technology to gather, analyze and visualize quantified data showing interactions between people, spaces and objects. The experiment was conducted in a makerspace over a four-day hackathon event with a team of four actively tracked participants.
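
As a rough illustration of how UWB position logs can be turned into interaction data, the Python sketch below resamples per-tag tracks and flags proximity events between pairs of tracked participants; the column names, sampling rate and proximity threshold are assumptions for illustration, not the authors' actual pipeline.

```python
# Hypothetical sketch: deriving interaction events from UWB position logs.
# Assumes each row is (timestamp, tag_id, x, y, z) in metres, with timestamp
# already parsed as a pandas datetime; what counts as an "interaction" is
# illustrative, not the authors' actual analysis.
import numpy as np
import pandas as pd

PROXIMITY_M = 1.5  # assumed distance below which two tags are "interacting"

def pairwise_interactions(df: pd.DataFrame) -> pd.DataFrame:
    """Return per-second pair distances with a proximity flag."""
    # Resample each tag's track onto a common 1 s grid.
    tracks = {
        tag: g.set_index("timestamp")[["x", "y", "z"]].resample("1s").mean().interpolate()
        for tag, g in df.groupby("tag_id")
    }
    rows = []
    tags = sorted(tracks)
    for i, a in enumerate(tags):
        for b in tags[i + 1:]:
            joined = tracks[a].join(tracks[b], lsuffix="_a", rsuffix="_b", how="inner")
            dist = np.linalg.norm(
                joined[["x_a", "y_a", "z_a"]].values - joined[["x_b", "y_b", "z_b"]].values,
                axis=1,
            )
            rows.append(pd.DataFrame({"time": joined.index, "pair": f"{a}-{b}",
                                      "distance_m": dist,
                                      "in_proximity": dist < PROXIMITY_M}))
    return pd.concat(rows, ignore_index=True)
```

The resulting table can be aggregated per pair or per zone and plotted over the tracked volume, which is the kind of 3D visualization the study describes.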

Findings

The study shows that by moving beyond simply counting people in a space, a more nuanced pattern of interactions can be discovered, documented and analyzed. The ability to automatically visualize findings intuitively in 3D helps architects and visual thinkers grasp the essence of interactions with minimal effort.

Originality/value

By providing a method for better understanding the spatial and temporal interactions between people, objects and spaces, our approach provides valuable feedback in POE. Specifically, our approach aids practitioners in comparing spaces, verifying design intent and speeding up knowledge building when developing new architectural spaces, such as makerspaces.

Details

Engineering, Construction and Architectural Management, vol. 31 no. 13
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 14 March 2024

Gülçin Baysal

Abstract

Purpose

The aim of this review is to bring together recent studies on textile-based moisture sensors developed using innovative technologies.

Design/methodology/approach

The level of integration between the sensors studied and the textile materials varies. Some research teams have used a combination of printing and textile technologies to produce sensors, while others have relied on traditional technologies such as weaving and embroidery. Still others have taken advantage of newer techniques such as electrospinning and polymerization. In each case, the aim has been to combine efficient sensor operation with the flexibility of the textile. All of these approaches are presented in this article.

Findings

Presenting the latest technologies used to develop textile sensors together will give researchers ideas for new studies on highly sensitive and efficient textile-based moisture sensor systems.

Originality/value

In this paper, humidity sensors are classified by measuring principle as capacitive or resistive. Studies conducted over the last 20 years on textile-based humidity sensors are then presented in detail. This comprehensive review brings together the latest developments in this area for researchers.
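
As a rough illustration of the two measuring principles, the Python sketch below converts a raw capacitance or resistance reading into relative humidity under assumed calibrations (roughly linear for capacitive sensors, roughly log-linear for resistive ones); the constants are placeholders, not values from any of the reviewed sensors.

```python
# Illustrative conversion of raw readings to relative humidity (RH).
# The calibration constants are placeholders; a real textile sensor would be
# characterised against a reference hygrometer.
import math

def rh_from_capacitance(c_pf: float, c_dry_pf: float = 180.0,
                        sens_pf_per_rh: float = 0.6) -> float:
    """Capacitive principle: capacitance rises roughly linearly with RH."""
    return (c_pf - c_dry_pf) / sens_pf_per_rh

def rh_from_resistance(r_ohm: float, r_ref_ohm: float = 1.0e6,
                       decades_per_rh: float = 0.04) -> float:
    """Resistive principle: resistance falls roughly exponentially with RH."""
    return -math.log10(r_ohm / r_ref_ohm) / decades_per_rh

print(round(rh_from_capacitance(210.0), 1))  # ~50 %RH with the placeholder calibration
print(round(rh_from_resistance(1.0e4), 1))   # ~50 %RH with the placeholder calibration
```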

Details

International Journal of Clothing Science and Technology, vol. 36 no. 2
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 27 September 2023

Veera Harsha Vardhan Jilludimudi, Daniel Zhou, Eric Rubstov, Alexander Gonzalez, Will Daknis, Erin Gunn and David Prawel

Abstract

Purpose

This study aims to collect real-time, in situ data from polymer melt extrusion (ME) 3D printing and use only the collected data to non-destructively identify printed parts that contain defects.

Design/methodology/approach

A set of sensors was created to collect real-time, in situ data from polymer ME 3D printing. A variance analysis was completed to identify an “acceptable” range for filament diameter on a popular desktop 3D printer. These data were used as the basis of a quality evaluation process to non-destructively identify spatial regions of printed parts in multi-part builds that contain defects.
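
As a rough illustration of how such an in-situ quality check could work, the Python sketch below derives an "acceptable" filament-diameter band from baseline measurements and flags spatial regions whose samples fall outside it too often; the data layout, thresholds and simulated values are assumptions for illustration, not the authors' exact procedure.

```python
# Hypothetical sketch of the quality-evaluation idea: derive an acceptable
# filament-diameter band from known-good prints, then flag regions whose
# in-situ measurements leave that band too often.
import numpy as np

def acceptable_band(baseline_mm: np.ndarray, k: float = 3.0) -> tuple[float, float]:
    """Mean +/- k standard deviations of diameters from defect-free prints."""
    mu, sigma = baseline_mm.mean(), baseline_mm.std(ddof=1)
    return mu - k * sigma, mu + k * sigma

def flag_regions(samples_by_region: dict[str, np.ndarray], band: tuple[float, float],
                 max_outlier_frac: float = 0.05) -> dict[str, bool]:
    """Mark a region defective if too many of its samples leave the band."""
    lo, hi = band
    return {region: float(np.mean((d < lo) | (d > hi))) > max_outlier_frac
            for region, d in samples_by_region.items()}

baseline = np.random.normal(1.75, 0.01, size=5000)  # nominal 1.75 mm filament
band = acceptable_band(baseline)
regions = {"part_A_layer_10": np.random.normal(1.75, 0.01, 200),
           "part_B_layer_10": np.random.normal(1.68, 0.02, 200)}  # simulated under-extrusion
print(flag_regions(regions, band))  # expect part_B flagged
```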

Findings

Anomalous parts were correctly identified non-destructively using only in situ collected data.

Research limitations/implications

This methodology was developed by varying the filament diameter, one of the most common reasons for print failure in ME. Numerous other printing parameters are known to create faults in melt extruded parts, and this methodology can be extended to analyze other parameters.

Originality/value

To the best of the authors’ knowledge, this is the first report of a non-destructive evaluation of 3D-printed part quality using only in situ data in ME. The value is in improving part quality and reliability in ME, thereby reducing 3D printing part errors, plastic waste and the associated cost of time and material.

Article
Publication date: 6 March 2024

Xiaohui Li, Dongfang Fan, Yi Deng, Yu Lei and Owen Omalley

Abstract

Purpose

This study aims to offer a comprehensive exploration of the potential and challenges associated with sensor fusion-based virtual reality (VR) applications in the context of enhanced physical training. The main objective is to identify key advancements in sensor fusion technology, evaluate its application in VR systems and understand its impact on physical training.

Design/methodology/approach

The research begins by providing context on the physical training environment in today's technology-driven world, followed by an in-depth overview of VR. This overview includes a concise discussion of the advancements in sensor fusion technology and its application in VR systems for physical training. A systematic review of the literature then follows, examining VR's application in various facets of physical training: from exercise, skill development and technique enhancement to injury prevention, rehabilitation and psychological preparation.

Findings

Sensor fusion-based VR presents tangible advantages in the sphere of physical training, offering immersive experiences that could redefine traditional training methodologies. While the advantages are evident in domains such as exercise optimization, skill acquisition and mental preparation, challenges persist. The current research suggests there is a need for further studies to address these limitations to fully harness VR’s potential in physical training.

Originality/value

The integration of sensor fusion technology with VR in the domain of physical training remains a rapidly evolving field. Highlighting the advancements and challenges, this review makes a significant contribution by addressing gaps in knowledge and offering directions for future research.

Details

Robotic Intelligence and Automation, vol. 44 no. 1
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 8 March 2024

Wenqian Feng, Xinrong Li, Jiankun Wang, Jiaqi Wen and Hansen Li

Abstract

Purpose

This paper reviews the pros and cons of different parametric modeling methods, which can provide a theoretical reference for parametric reconstruction of 3D human body models for virtual fitting.

Design/methodology/approach

In this study, we briefly analyze the mainstream datasets of models of the human body used in the area to provide a foundation for parametric methods of such reconstruction. We then analyze and compare parametric methods of reconstruction based on their use of the following forms of input data: point cloud data, image contours, sizes of features and points representing the joints. Finally, we summarize the advantages and problems of each method as well as the current challenges to the use of parametric modeling in virtual fitting and the opportunities provided by it.
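
As a rough illustration of measurement-driven parametric reconstruction, the Python sketch below fits the shape coefficients of an assumed linear statistical body model to target feature sizes by regularized least squares; the measurement matrix, dimensions and values are synthetic placeholders, not any of the surveyed methods.

```python
# Minimal sketch: fit shape coefficients beta of an assumed linear body model
# m ~= m_mean + M @ beta to a set of target measurements (feature sizes).
# Real pipelines learn m_mean and M from scanned body datasets.
import numpy as np

def fit_shape(m_target: np.ndarray, m_mean: np.ndarray, M: np.ndarray,
              reg: float = 1e-2) -> np.ndarray:
    """Least-squares shape coefficients with Tikhonov regularisation."""
    A = M.T @ M + reg * np.eye(M.shape[1])
    b = M.T @ (m_target - m_mean)
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
M = rng.normal(size=(6, 10))          # 6 measurements (e.g. chest, waist, hip), 10 shape dims
m_mean = np.array([96.0, 80.0, 98.0, 170.0, 60.0, 38.0])  # illustrative mean sizes in cm
beta_true = 0.5 * rng.normal(size=10)
m_target = m_mean + M @ beta_true
beta_est = fit_shape(m_target, m_mean, M)
# The measurement residual should be near zero under these assumptions.
print(np.round(np.abs(m_mean + M @ beta_est - m_target).max(), 3))
```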

Findings

Considering the integrity and accuracy of the representation of body shape and posture, as well as the efficiency of computing the requisite parameters, the reconstruction method that integrates orthogonal image contour morphological features, multi-feature size constraints and joint point positioning better represents body shape, posture and personalized feature sizes and therefore has higher research value.

Originality/value

This article sets out a research approach for reconstructing a 3D human body model for virtual fitting based on three kinds of data, which is helpful for establishing personalized, high-precision human body models.

Details

International Journal of Clothing Science and Technology, vol. 36 no. 2
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 21 February 2024

Amruta Rout, Golak Bihari Mahanta, Bibhuti Bhusan Biswal, Renin Francy T., Sri Vardhan Raj and Deepak B.B.V.L.

Abstract

Purpose

The purpose of this study is to plan and develop a cost-effective health-care robot for assisting and observing patients in an accurate and effective way during pandemic situations such as COVID-19. The proposed research work can help in better management of pandemic situations in rural areas as well as in developing countries where medical facilities are not easily available.

Design/methodology/approach

It becomes very difficult for medical staff to keep a continuous check on patients' conditions, in terms of symptoms and critical parameters, during pandemic situations. To deal with these situations, a service mobile robot with multiple sensors for measuring patients' bodily indicators has been proposed, and a prototype has been developed that can monitor and aid patients using its robotic arm. A fuzzy controller has also been incorporated into the mobile robot so that decisions on patient monitoring can be taken automatically. The Mamdani implication method has been utilized to formulate M "if-then" rules with defined inputs x_j (j = 1, 2, …, s) and output y_i. The input and output variables are described by the membership functions μ_Aij(x_j) and μ_Ci(y_i) used to execute the fuzzy inference system (FIS) controller, where A_ij and C_i are the developed fuzzy sets.
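
As a rough illustration of the Mamdani-style inference described above, the sketch below uses a single temperature input and a single "alert level" output with triangular membership functions, min implication, max aggregation and centroid defuzzification; the universes, fuzzy sets and rules are placeholders, not the rule base developed in the paper.

```python
# Minimal Mamdani-style fuzzy inference sketch for a patient-monitoring rule.
# Membership functions and rules are illustrative placeholders.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on universe x."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0, 1.0)

temp_universe = np.linspace(35.0, 41.0, 601)   # body temperature (degC)
alert_universe = np.linspace(0.0, 1.0, 101)    # "alert level" output

# Input fuzzy sets A_ij and output fuzzy sets C_i (illustrative)
temp_normal = tri(temp_universe, 35.5, 36.8, 37.5)
temp_high   = tri(temp_universe, 37.0, 38.5, 41.0)
alert_low   = tri(alert_universe, 0.0, 0.1, 0.4)
alert_high  = tri(alert_universe, 0.5, 0.9, 1.0)

def mamdani_alert(temp_c: float) -> float:
    """IF temp is normal THEN alert low; IF temp is high THEN alert high."""
    w_normal = np.interp(temp_c, temp_universe, temp_normal)  # rule firing strengths
    w_high   = np.interp(temp_c, temp_universe, temp_high)
    clipped = np.maximum(np.minimum(w_normal, alert_low),     # min implication,
                         np.minimum(w_high, alert_high))      # max aggregation
    return float(np.sum(clipped * alert_universe) / (np.sum(clipped) + 1e-9))  # centroid

print(round(mamdani_alert(36.9), 2))  # low alert level
print(round(mamdani_alert(39.2), 2))  # high alert level
```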

Findings

The fuzzy-based prediction model was tested on its medicine output for the initial 27 runs and validated by correlating predicted and actual values. The correlation coefficient was found to be 0.989, with a mean square error of 0.000174, signifying a strong relationship between the predicted and actual values. The proposed research work can handle multiple tasks such as online consulting, continuous patient condition monitoring in general wards and ICUs, telemedicine services, hospital waste disposal and providing service to patients at regular time intervals.

Originality/value

The novelty of the proposed research work lies in the integration of artificial intelligence techniques such as fuzzy logic with a multi-sensor-based service robot for easy decision-making and continuous patient monitoring in hospitals in rural areas, and in reducing the work stress on medical staff during pandemic situations.

Details

Robotic Intelligence and Automation, vol. 44 no. 1
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 15 September 2023

Kaushal Jani

Abstract

Purpose

This article takes into account object identification, enhanced visual feature optimization, cost-effectiveness and speed selection in response to terrain conditions. Neither supervised machine learning nor manual engineering is used in this work. Instead, the OTV educates itself without instruction from humans or labeling. Beyond its link to stopping distance and lateral mobility, choosing the right speed is crucial. One of the biggest problems with autonomous operations is accurate perception. Perception technology typically focuses on obstacle avoidance. At high speeds, however, the shock experienced by the vehicle is governed by the roughness of the terrain, and the precision needed to recognize difficult terrain is far higher than the accuracy needed to avoid obstacles.
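
As a rough illustration of roughness-aware speed selection, the Python sketch below estimates terrain roughness from vertical acceleration and caps the commanded speed to keep the expected shock within a budget; the roughness estimate, the shock model and all thresholds are assumptions for illustration, not the article's self-supervised method.

```python
# Illustrative sketch: cap commanded speed so the expected vertical shock
# stays within a comfort/safety budget. All constants are placeholders.
import numpy as np

def roughness_rms(accel_z: np.ndarray) -> float:
    """RMS of vertical acceleration (m/s^2) over a short window."""
    return float(np.sqrt(np.mean(np.square(accel_z - accel_z.mean()))))

def speed_limit(roughness: float, v_max: float = 5.0, v_min: float = 0.5,
                shock_budget: float = 2.0) -> float:
    """Assume shock grows roughly with speed * roughness; invert for a cap."""
    if roughness <= 1e-6:
        return v_max
    return float(np.clip(shock_budget / roughness, v_min, v_max))

smooth = np.random.normal(0.0, 0.2, 500)  # gentle vibration
rough = np.random.normal(0.0, 1.5, 500)   # harsh vibration
print(round(speed_limit(roughness_rms(smooth)), 2))  # near v_max
print(round(speed_limit(roughness_rms(rough)), 2))   # well below v_max
```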

Design/methodology/approach

Robots that can drive unattended in an unfamiliar environment should be used as the Orbital Transfer Vehicle (OTV) for the clearance of space debris. In recent years, OTV research has attracted more attention and revealed several insights for robot systems in various applications. Improvements to advanced assistance systems such as lane departure warning and intelligent speed adaptation are eagerly sought by industry, particularly space enterprises. From a computer science perspective, the OTV serves as a research basis for advancements in machine learning, computer vision, sensor data fusion, path planning, decision-making and intelligent autonomous behavior. Within the framework of an autonomous OTV, this study offers a few perception technologies for autonomous driving.

Findings

One of the most important steps in the functioning of autonomous OTVs and aid systems is the recognition of barriers, such as other satellites. Using sensors to perceive its surroundings, an autonomous car decides how to operate on its own. Driver-assistance systems like adaptive cruise control and stop-and-go must be able to distinguish between stationary and moving objects surrounding the OTV.

Originality/value

One of the most important steps in the functioning of autonomous OTVs and aid systems is the recognition of barriers, such as other satellites. Using sensors to perceive its surroundings, an autonomous car decides how to operate on its own. Driver-assistance systems like adaptive cruise control and stop-and-go must be able to distinguish between stationary and moving objects surrounding the OTV.

Details

International Journal of Intelligent Unmanned Systems, vol. 12 no. 2
Type: Research Article
ISSN: 2049-6427

Content available
Article
Publication date: 13 November 2023

Sheuli Paul

Abstract

Purpose

This paper presents a survey of research into interactive robotic systems for the purpose of identifying state-of-the-art capabilities as well as the extant gaps in this emerging field. Communication is multimodal: multimodality is the use of many modes, chosen from rhetorical aspects for their communicative potential. The author seeks to define the available automation capabilities in multimodal communication that will support a proposed Interactive Robot System (IRS), an AI-mounted robotic platform, to advance the speed and quality of military operational and tactical decision making.

Design/methodology/approach

This review begins by presenting key developments in the robotic interaction field with the objective of identifying essential technological developments that set conditions for robotic platforms to function autonomously. After surveying the key aspects of Human Robot Interaction (HRI), Unmanned Autonomous Systems (UAS), visualization, Virtual Environments (VE) and prediction, the paper describes the gaps in the application areas that will require extension and integration to enable prototyping of the IRS. A brief examination of other work in HRI-related fields concludes with a recapitulation of the IRS challenge that will set conditions for future success.

Findings

Using insights from a balanced cross-section of government, academic and commercial sources that contribute to HRI, a multimodal IRS for military communication is introduced. A multimodal IRS (MIRS) for military communication has yet to be deployed.

Research limitations/implications

A multimodal robotic interface for the MIRS is an interdisciplinary endeavour. It is not realistic for one person to command all the expert and related knowledge and skills needed to design and develop such a multimodal interactive robotic interface. In this brief preliminary survey, the author has discussed extant AI, robotics, NLP, CV, VDM and VE applications that are directly related to multimodal interaction. Each mode of this multimodal communication is an active research area. Multimodal human/military robot communication is the ultimate goal of this research.

Practical implications

A multimodal autonomous robot in military communication using speech, images, gestures, VST and VE has yet to be deployed. Autonomous multimodal communication is expected to open wider possibilities for all armed forces. Given the density of the land domain, the army is in a position to exploit the opportunities for human-machine teaming (HMT) exposure. Naval and air forces will adopt platform-specific suites for specially selected operators to integrate with and leverage this emerging technology. A flexible communications capability that readily adapts to virtual training will greatly enhance planning and mission rehearsals.

Social implications

A multimodal communication system based on interaction, perception, cognition and visualization is still missing. The ability to communicate, express and convey information in an HMT setting, with multiple options, suggestions and recommendations, will certainly enhance military communication, strength, engagement, security, cognition and perception, as well as the ability to act confidently for a successful mission.

Originality/value

The objective is to develop a multimodal autonomous interactive robot for military communications. This survey reports the state of the art: what exists and what is missing, what can be done, and the possibilities for extension that support the military in maintaining effective communication using multimodalities. There is separate ongoing progress in areas such as machine-enabled speech, image recognition, tracking, visualization for situational awareness and virtual environments. At this time, there is no integrated approach to multimodal human-robot interaction that offers flexible and agile communication. The report briefly introduces the research proposal for a multimodal interactive robot in military communication.

Article
Publication date: 22 January 2024

Jun Liu, Junyuan Dong, Mingming Hu and Xu Lu

Abstract

Purpose

Existing Simultaneous Localization and Mapping (SLAM) algorithms are relatively well developed. However, in complex dynamic environments, the motion of dynamic points on moving objects in the image can affect the system's observations, introducing biases and errors into pose estimation and the creation of map points. The aim of this paper is to achieve higher accuracy than traditional SLAM algorithms through semantic approaches.

Design/methodology/approach

In this paper, semantic segmentation of dynamic objects is performed with a U-Net segmentation network. Motion consistency detection then determines whether the segmented objects are actually moving in the current scene, and a motion compensation method eliminates dynamic points and compensates the current local image, making the system robust.
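
As a rough illustration of this pipeline, the Python sketch below (using OpenCV) keeps feature points that fall inside a segmentation mask of potentially dynamic classes only if they remain consistent with the dominant epipolar geometry between frames; the U-Net inference step is assumed to happen elsewhere, and the thresholds are illustrative rather than the paper's exact motion consistency test.

```python
# Simplified sketch of the dynamic-point filtering idea: masked feature points
# are kept only if they agree with the dominant epipolar geometry, otherwise
# they are treated as dynamic and removed before pose estimation.
import cv2
import numpy as np

def filter_dynamic_points(pts_prev: np.ndarray, pts_curr: np.ndarray,
                          dynamic_mask: np.ndarray, epi_thresh_px: float = 1.0) -> np.ndarray:
    """Return a boolean keep-mask over matched keypoints (both Nx2 float arrays)."""
    # Which current keypoints land on pixels the segmentation network marked dynamic.
    in_mask = dynamic_mask[pts_curr[:, 1].astype(int), pts_curr[:, 0].astype(int)] > 0
    # Dominant geometry from all matches; the static background usually dominates.
    F, inliers = cv2.findFundamentalMat(pts_prev, pts_curr, cv2.FM_RANSAC,
                                        epi_thresh_px, 0.99)
    if F is None or inliers is None:
        # Geometry could not be estimated: conservatively drop all masked points.
        return ~in_mask
    inliers = inliers.ravel().astype(bool)
    # Keep unmasked points, and masked points only if they still satisfy the
    # epipolar constraint (i.e. the segmented object is not actually moving).
    return ~in_mask | inliers
```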

Findings

Experiments comparing dynamic point detection and outlier removal are conducted on the Technische Universität München (TUM) dynamic dataset, and the results show that the absolute trajectory accuracy of the proposed method is significantly improved compared with ORB-SLAM3 and DS-SLAM.

Originality/value

In this paper, the segmentation mask produced by the semantic segmentation network is combined with dynamic point detection, elimination and compensation, which reduces the influence of dynamic objects and thus effectively improves localization accuracy in dynamic environments.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 2
Type: Research Article
ISSN: 0143-991X
