Search results

1 – 10 of 110
Article
Publication date: 2 April 2024

Yi Liu, Rui Ning, Mingxin Du, Shuanghe Yu and Yan Yan

The purpose of this paper is to propose a new online path planning method for porcine belly cutting. With the proliferation in demand for the automatic systems of pork…

Abstract

Purpose

The purpose of this paper is to propose a new online path planning method for porcine belly cutting. With the growing demand for automated pork production systems, the development of efficient and robust meat cutting algorithms is an active research area. The uncertain and dynamic nature of online porcine belly cutting makes it challenging for a robot to identify and cut efficiently and accurately. To address these challenges, an online porcine belly cutting method using a 3D laser point cloud is proposed.

Design/methodology/approach

The robotic cutting system is composed of an industrial robotic manipulator, customized tools, a laser sensor and a PC.

Findings

Analysis of the experimental results shows that, compared with machine vision, laser sensor-based robotic cutting has clear advantages and can handle different carcass sizes.

Originality/value

An image pyramid method is used for dimensionality reduction of the 3D laser point cloud. A detailed analysis of the outward and inward cutting errors shows that the outward cutting error is the limiting condition for reducing the number of segments produced by the segmentation algorithm.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 21 February 2024

Amruta Rout, Golak Bihari Mahanta, Bibhuti Bhusan Biswal, Renin Francy T., Sri Vardhan Raj and Deepak B.B.V.L.

The purpose of this study is to plan and develop a cost-effective health-care robot for assisting and observing patients in an accurate and effective way during pandemic…


Abstract

Purpose

The purpose of this study is to plan and develop a cost-effective health-care robot for assisting and observing patients in an accurate and effective way during pandemic situations such as COVID-19. The proposed research work can help in the better management of pandemic situations in rural areas, as well as in developing countries where medical facilities are not easily available.

Design/methodology/approach

It becomes very difficult for medical staff to keep a continuous check on a patient's condition, in terms of symptoms and critical parameters, during pandemic situations. To deal with these situations, a service mobile robot with multiple sensors for measuring patients' bodily indicators has been proposed, and a prototype has been developed that can monitor and aid the patient using its robotic arm. A fuzzy controller has also been incorporated into the mobile robot, through which decisions on patient monitoring can be taken automatically. The Mamdani implication method has been utilized to formulate M "if–then" rules with defined inputs xj (j = 1, 2, …, s) and output yi. The input and output variables are represented by the membership functions µAij(xj) and µCi(yi) used to execute the fuzzy inference system controller. Here, Aij and Ci are the developed fuzzy sets.
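
As an illustration of the Mamdani scheme described above (this is not the authors' actual rule base; the two example rules, the input variables and all membership-function parameters are hypothetical), a minimal inference step with min implication, max aggregation and centroid defuzzification might look like:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def mamdani_infer(temp_c, spo2):
    """Two-input, one-output Mamdani FIS: rule firing strength is the min of
    the antecedent memberships, consequent sets are clipped and aggregated
    with max, and the output is defuzzified by centroid."""
    y = np.linspace(0.0, 10.0, 501)                 # output universe (dose level)
    # Antecedent memberships for the crisp inputs (illustrative parameters)
    temp_high = tri(temp_c, 37.5, 39.5, 41.5)
    temp_norm = tri(temp_c, 35.5, 36.8, 38.0)
    spo2_low = tri(spo2, 80.0, 88.0, 94.0)
    spo2_ok = tri(spo2, 92.0, 97.0, 100.0)
    # Rule 1: IF temp is high AND spo2 is low THEN dose is high
    dose_high = np.minimum(tri(y, 5.0, 8.0, 10.0), min(temp_high, spo2_low))
    # Rule 2: IF temp is normal AND spo2 is ok THEN dose is low
    dose_low = np.minimum(tri(y, 0.0, 2.0, 5.0), min(temp_norm, spo2_ok))
    agg = np.maximum(dose_high, dose_low)           # max aggregation
    if agg.sum() == 0.0:
        return 0.0
    return float((y * agg).sum() / agg.sum())       # centroid defuzzification
```

A feverish, low-oxygen input fires the "dose high" rule and defuzzifies toward the upper end of the scale, while normal vitals defuzzify toward the lower end.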

Findings

The fuzzy-based prediction model has been tested with the output of medicines for the initial 27 runs and was validated by the correlation of predicted and actual values. The correlation coefficient has been found to be 0.989 with a mean square error value of 0.000174, signifying a strong relationship between the predicted values and the actual values. The proposed research work can handle multiple tasks like online consulting, continuous patient condition monitoring in general wards and ICUs, telemedicine services, hospital waste disposal and providing service to patients at regular time intervals.
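
The validation arithmetic reported above (a Pearson correlation of predicted versus actual values plus a mean squared error) can be sketched as follows; the function name and data are illustrative, not the paper's:

```python
import numpy as np

def validate_predictions(actual, predicted):
    """Return (Pearson correlation coefficient, mean squared error) for a
    predicted-vs-actual validation run."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    r = np.corrcoef(actual, predicted)[0, 1]       # Pearson's r
    mse = np.mean((actual - predicted) ** 2)       # mean squared error
    return float(r), float(mse)
```

An r near 1 with an MSE near 0, as reported (0.989 and 0.000174), indicates predictions closely tracking the actual values.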

Originality/value

The novelty of the proposed research work lies in the integration of artificial intelligence techniques such as fuzzy logic with the multi-sensor-based service robot for easy decision-making and continuous patient monitoring in hospitals in rural areas, and in reducing the work stress on medical staff during pandemic situations.

Details

Robotic Intelligence and Automation, vol. 44 no. 1
Type: Research Article
ISSN: 2754-6969

Keywords

Article
Publication date: 8 April 2024

Matthew Peebles, Shen Hin Lim, Mike Duke, Benjamin Mcguinness and Chi Kit Au

Time of flight (ToF) imaging is a promising emerging technology for the purposes of crop identification. This paper aims to present a localization system for identifying and…

Abstract

Purpose

Time of flight (ToF) imaging is a promising emerging technology for crop identification. This paper aims to present a localization system for identifying and localizing asparagus in the field based on point clouds from ToF imaging. Since semantics are not included in the point cloud, it contains the geometric information of objects other than asparagus spears, such as stones and weeds. An approach is required for extracting the spear information so that a robotic system can be used for harvesting.

Design/methodology/approach

A real-time convolutional neural network (CNN)-based method is used for filtering the point cloud generated by a ToF camera, allowing subsequent processing methods to operate over smaller and more information-dense data sets, resulting in reduced processing time. The segmented point cloud can then be split into clusters of points representing each individual spear. Geometric filters are developed to eliminate the non-asparagus points in each cluster so that each spear can be modelled and localized. The spear information can then be used for harvesting decisions.
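
A minimal sketch of the geometric-filtering and modelling step described above, assuming one cluster per spear, a PCA line fit for the spear axis and a simple cylinder-around-axis filter (the radius threshold and the filter itself are illustrative, not the authors' exact geometric filters):

```python
import numpy as np

def spear_pose(cluster, radius=0.02):
    """Fit a principal axis to a cluster of 3-D points assumed to contain one
    spear, discard points farther than `radius` metres from that axis (e.g.
    stone or weed fragments), and return the spear's base point and unit
    axis direction."""
    pts = np.asarray(cluster, dtype=float)
    centroid = pts.mean(axis=0)
    # Principal axis = eigenvector of the covariance with largest eigenvalue
    cov = np.cov((pts - centroid).T)
    w, v = np.linalg.eigh(cov)
    axis = v[:, np.argmax(w)]
    if axis[2] < 0:                                  # orient the axis upward
        axis = -axis
    # Geometric filter: keep only points close to the fitted axis
    d = pts - centroid
    radial = d - np.outer(d @ axis, axis)            # perpendicular component
    pts = pts[np.linalg.norm(radial, axis=1) <= radius]
    base = pts[np.argmin(pts @ axis)]                # lowest point along axis
    return base, axis
```

The returned base position and axis direction are the kind of per-spear model a harvesting decision (e.g. height above ground, tilt) could be made from.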

Findings

The localization system has been integrated into a robotic harvesting prototype system, and several field trials have been conducted with satisfactory performance. Identifying a spear from the point cloud is the key to successful localization. Segmentation and the clustering of points into individual spears are the two main failure modes and the targets for future improvement.

Originality/value

Most crop localizations in agricultural robotic applications using ToF imaging technology are implemented in a very controlled environment, such as a greenhouse. The target crop and the robotic system are stationary during the localization process. The novel proposed method for asparagus localization has been tested in outdoor farms and integrated with a robotic harvesting platform. Asparagus detection and localization are achieved in real time on a continuously moving robotic platform in a cluttered and unstructured environment.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 8 December 2023

Han Sun, Song Tang, Xiaozhi Qi, Zhiyuan Ma and Jianxin Gao

This study aims to introduce a novel noise filter module designed for LiDAR simultaneous localization and mapping (SLAM) systems. The primary objective is to enhance pose…

Abstract

Purpose

This study aims to introduce a novel noise filter module designed for LiDAR simultaneous localization and mapping (SLAM) systems. The primary objective is to enhance pose estimation accuracy and improve the overall system performance in outdoor environments.

Design/methodology/approach

Distinct from traditional approaches, MCFilter emphasizes enhancing point cloud data quality at the pixel level. This framework hinges on two primary elements. First, the D-Tracker, a tracking algorithm, is grounded on multiresolution three-dimensional (3D) descriptors and adeptly maintains a balance between precision and efficiency. Second, the R-Filter introduces a pixel-level attribute named motion-correlation, which effectively identifies and removes dynamic points. Furthermore, designed as a modular component, MCFilter ensures seamless integration into existing LiDAR SLAM systems.
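
The intuition behind a motion-based point filter can be illustrated with a toy nearest-neighbour motion check; this is only the underlying idea, not the MCFilter, D-Tracker or R-Filter implementation, and the threshold is a made-up value:

```python
import numpy as np

def remove_dynamic_points(prev_pts, curr_pts, max_motion=0.1):
    """Match each point in the current scan to its nearest neighbour in the
    previous scan (assumed already co-registered) and drop points whose
    apparent motion exceeds `max_motion` metres, treating them as dynamic."""
    prev = np.asarray(prev_pts, dtype=float)
    curr = np.asarray(curr_pts, dtype=float)
    # Brute-force nearest-neighbour distances (fine for small demo clouds)
    d = np.linalg.norm(curr[:, None, :] - prev[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return curr[nearest <= max_motion]
```

Static structure survives the filter while fast-moving points are discarded, which is the property a SLAM front end wants before pose estimation.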

Findings

Based on rigorous testing with public data sets and under real-world conditions, MCFilter increased average accuracy by 12.39% and reduced processing time by 24.18%. These outcomes emphasize the method's effectiveness in refining the performance of current LiDAR SLAM systems.

Originality/value

In this study, the authors present a novel 3D descriptor tracker designed for consistent feature point matching across successive frames. The authors also propose an innovative attribute to detect and eliminate noise points. Experimental results demonstrate that integrating this method into existing LiDAR SLAM systems yields state-of-the-art performance.

Details

Robotic Intelligence and Automation, vol. 44 no. 1
Type: Research Article
ISSN: 2754-6969

Keywords

Article
Publication date: 6 March 2024

Xiaohui Li, Dongfang Fan, Yi Deng, Yu Lei and Owen Omalley

This study aims to offer a comprehensive exploration of the potential and challenges associated with sensor fusion-based virtual reality (VR) applications in the context of…

Abstract

Purpose

This study aims to offer a comprehensive exploration of the potential and challenges associated with sensor fusion-based virtual reality (VR) applications in the context of enhanced physical training. The main objective is to identify key advancements in sensor fusion technology, evaluate its application in VR systems and understand its impact on physical training.

Design/methodology/approach

The research initiates by providing context to the physical training environment in today’s technology-driven world, followed by an in-depth overview of VR. This overview includes a concise discussion on the advancements in sensor fusion technology and its application in VR systems for physical training. A systematic review of literature then follows, examining VR’s application in various facets of physical training: from exercise, skill development and technique enhancement to injury prevention, rehabilitation and psychological preparation.

Findings

Sensor fusion-based VR presents tangible advantages in the sphere of physical training, offering immersive experiences that could redefine traditional training methodologies. While the advantages are evident in domains such as exercise optimization, skill acquisition and mental preparation, challenges persist. The current research suggests there is a need for further studies to address these limitations to fully harness VR’s potential in physical training.

Originality/value

The integration of sensor fusion technology with VR in the domain of physical training remains a rapidly evolving field. Highlighting the advancements and challenges, this review makes a significant contribution by addressing gaps in knowledge and offering directions for future research.

Details

Robotic Intelligence and Automation, vol. 44 no. 1
Type: Research Article
ISSN: 2754-6969

Keywords

Article
Publication date: 30 April 2024

Jacqueline Humphries, Pepijn Van de Ven, Nehal Amer, Nitin Nandeshwar and Alan Ryan

Maintaining the safety of the human is a major concern in factories where humans co-exist with robots and other physical tools. Typically, the area around the robots is monitored…

Abstract

Purpose

Maintaining the safety of the human is a major concern in factories where humans co-exist with robots and other physical tools. Typically, the area around the robots is monitored using lasers. However, lasers cannot distinguish between human and non-human objects in the robot's path, and stopping or slowing down the robot when non-human objects approach is unproductive. This research contribution addresses that inefficiency by showing how computer-vision techniques can be used instead of lasers, improving the up-time of the robot.

Design/methodology/approach

A computer-vision safety system is presented that uses image segmentation, 3D point clouds, face recognition, hand gesture recognition, speed and trajectory tracking and a digital twin. Using speed and separation monitoring, the robot's speed is controlled based on the nearest location of humans, accurate to their body shape. The computer-vision safety system is compared to a traditional laser measure, and the system is evaluated both in a controlled test and in the field.

Findings

Computer vision and lasers are shown to be equivalent by a measure of relationship and a measure of agreement. R² is given as 0.999983. The two methods systematically produce similar results, as the bias is close to zero, at 0.060 mm. Using Bland–Altman analysis, 95% of the differences lie within the limits of maximum acceptable differences.
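
The Bland–Altman computation behind these agreement figures can be sketched as follows, using synthetic data rather than the paper's measurements; the 1.96-standard-deviation limits are the conventional 95% limits of agreement:

```python
import numpy as np

def bland_altman(a, b):
    """Bland–Altman agreement statistics for two measurement methods:
    returns (bias, lower limit, upper limit, fraction of differences
    falling inside the 95% limits of agreement)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    diff = a - b
    bias = diff.mean()                      # systematic offset between methods
    sd = diff.std(ddof=1)
    lo, hi = bias - 1.96 * sd, bias + 1.96 * sd
    inside = np.mean((diff >= lo) & (diff <= hi))
    return float(bias), float(lo), float(hi), float(inside)
```

A bias near zero with roughly 95% of differences inside the limits, as reported here, is the standard evidence that the two methods agree.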

Originality/value

In this paper, an original model for future computer-vision safety systems is described that is equivalent to existing laser systems, identifies and adapts to particular humans, and reduces the need to slow and stop systems, thereby improving efficiency. The implication is that computer vision can substitute for lasers and permit adaptive robotic control in human–robot collaboration systems.

Details

Technological Sustainability, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2754-1312

Keywords

Article
Publication date: 3 May 2024

Dong Huan Shen, Shuai Guo, Hao Duan, Kehao Ji and Haili Jiang

The paper focuses on the issue of manual rebar-binding tasks in the construction industry, which are marked by high labor intensity, high costs and inefficient operations. The…

Abstract

Purpose

The paper focuses on the issue of manual rebar-binding tasks in the construction industry, which are marked by high labor intensity, high costs and inefficient operations. The rebar-binding robots currently available are not fully mature: most can only bind one or two nodes at a given position, which leads to significant time wasted in movement. Based on a new type of rebar-binding robot, this paper aims to propose a new movement and binding control strategy that reduces manpower and enhances efficiency.

Design/methodology/approach

The robot is equipped with photoelectric sensors, travel switches and other sensors. It is designed to move accurately and run within a limited area on the rebar mesh through logical judgment, speed control and position control. The robot uses machine vision to locate the rebar nodes and then adjusts the binding-gun position to ensure that multiple rebar nodes are bound sequentially.

Findings

By moving on the rebar mesh with accuracy, the robot meets the positioning accuracy requirements of the binding module, with experimentally tested accuracy within 5 mm. Furthermore, its ability to bind four rebar nodes in one place results in high efficiency and a binding effect that meets building standards.

Originality/value

The innovative design of the robot can adapt itself to the rebar mesh, move accurately to the target position and bind four nodes at that position, which reduces the number of movements on the mesh. Repetitive and heavy rebar-binding tasks can be efficiently completed by the robot, which saves human resources, reduces worker labor intensity and reduces construction overhead. It provides a more feasible and practical solution for using robots to bind rebar nodes.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 1 November 2023

Juan Yang, Zhenkun Li and Xu Du

Although numerous signal modalities are available for emotion recognition, audio and visual modalities are the most common and predominant forms for human beings to express their…

Abstract

Purpose

Although numerous signal modalities are available for emotion recognition, audio and visual modalities are the most common and predominant forms for human beings to express their emotional states in daily communication. Therefore, achieving automatic and accurate audiovisual emotion recognition is significantly important for developing engaging and empathetic human–computer interaction environments. However, two major challenges exist in the field of audiovisual emotion recognition: (1) how to effectively capture representations of each single modality and eliminate redundant features and (2) how to efficiently integrate information from these two modalities to generate discriminative representations.

Design/methodology/approach

A novel key-frame extraction-based attention fusion network (KE-AFN) is proposed for audiovisual emotion recognition. KE-AFN attempts to integrate key-frame extraction with multimodal interaction and fusion to enhance audiovisual representations and reduce redundant computation, filling the research gaps of existing approaches. Specifically, the local maximum–based content analysis is designed to extract key-frames from videos for the purpose of eliminating data redundancy. Two modules, including “Multi-head Attention-based Intra-modality Interaction Module” and “Multi-head Attention-based Cross-modality Interaction Module”, are proposed to mine and capture intra- and cross-modality interactions for further reducing data redundancy and producing more powerful multimodal representations.
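
A minimal sketch of local maximum-based key-frame selection, assuming a simple mean frame-difference signal as the content measure (the authors' content-analysis criterion may differ):

```python
import numpy as np

def key_frames(frames):
    """Pick key-frames as local maxima of the mean absolute difference
    between consecutive frames: a frame is kept when the change leading
    into it is strictly larger than the changes around it."""
    f = np.asarray(frames, dtype=float)
    # Per-transition content change, averaged over all pixel dimensions
    diff = np.abs(f[1:] - f[:-1]).mean(axis=tuple(range(1, f.ndim)))
    return [i + 1 for i in range(1, len(diff) - 1)
            if diff[i] > diff[i - 1] and diff[i] > diff[i + 1]]
```

Frames inside static stretches produce no local maxima and are skipped, which is how such a strategy eliminates data redundancy before the attention modules run.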

Findings

Extensive experiments on two benchmark datasets (i.e. RAVDESS and CMU-MOSEI) demonstrate the effectiveness and rationality of KE-AFN. Specifically, (1) KE-AFN is superior to state-of-the-art baselines for audiovisual emotion recognition. (2) Exploring the supplementary and complementary information of different modalities can provide more emotional clues for better emotion recognition. (3) The proposed key-frame extraction strategy can enhance performance by more than 2.79 per cent in accuracy. (4) Both exploring intra- and cross-modality interactions and employing attention-based audiovisual fusion can lead to better prediction performance.

Originality/value

The proposed KE-AFN can support the development of engaging and empathetic human–computer interaction environments.

Article
Publication date: 15 April 2024

Yusuf Gökçe, Sinan Çavuşoğlu, Murat Göral, Yusuf Bayatkara, Aziz Bükey and Faruk Gökçe

This study aims to focus on publications that jointly address robots in the tourism field and the technology acceptance model (TAM).

Abstract

Purpose

This study aims to focus on publications that jointly address robots in the tourism field and the technology acceptance model (TAM).

Design/methodology/approach

This study adopts bibliometric analysis. Publications listed in the Web of Science database constitute the scope of this research; a total of 51 publications were analyzed.

Findings

Between 2017 and 2023, an upward trend in both the number of publications and their citations was identified. Articles were found to be more prevalent than other types of publications. Considering the indexes of the publications, a significant majority were found in the Social Sciences Citation Index (SSCI) and Science Citation Index (SCI)-EXPANDED. The occurrence of the keywords identified within the scope of the research in the abstracts of the publications is presented; the keyword "robot" was found to occur most frequently. The abstracts were also analyzed, and the publications were accordingly clustered into five distinct themes.

Originality/value

This study offers a comprehensive evaluation of publications concerning the use of robots in the tourism sector, framed within the context of the TAM. Within the scope of the study, the findings were interpreted using bibliometric analysis. The publications have been categorized into themes. The results presented provide insights into the necessity for further publications in this field.

Details

Worldwide Hospitality and Tourism Themes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1755-4217

Keywords

Article
Publication date: 16 February 2022

Pragati Agarwal, Sanjeev Swami and Sunita Kumari Malhotra

The purpose of this paper is to give an overview of artificial intelligence (AI) and other AI-enabled technologies and to describe how COVID-19 affects various industries such as…


Abstract

Purpose

The purpose of this paper is to give an overview of artificial intelligence (AI) and other AI-enabled technologies and to describe how COVID-19 affects various industries, such as health care, manufacturing, retail, food services, education, media and entertainment, banking and insurance, and travel and tourism. Furthermore, the authors discuss the ways in which information technology is used to implement business strategies to transform businesses and to incentivise the implementation of these technologies in current or future emergency situations.

Design/methodology/approach

The paper reviews the rapidly growing literature on the use of smart technology during the current COVID-19 pandemic.

Findings

The 127 empirical articles the authors have identified suggest that 39 forms of smart technology have been used, ranging from artificial intelligence to computer vision. Eight industries using these technologies have been identified, primarily food services and manufacturing. Further, the authors list 40 generalised types of activities involved, including providing health services, data analysis and communication. To prevent the spread of illness, robots with artificial intelligence are being used to examine patients and administer drugs to them. Online teaching practices and simulators have replaced the classroom mode of teaching due to the epidemic. The AI-based Blue-dot algorithm aids in the detection of early warning indications. An AI model detects a patient in respiratory distress based on face detection, face recognition, facial action unit detection, expression recognition, posture, extremity movement analysis, visitation frequency detection, sound pressure detection and light level detection. These and various other applications are listed throughout the paper.

Research limitations/implications

The research is largely delimited to the area of COVID-19-related studies, and a bias of selective assessment may be present. In the Indian context, advanced technology is yet to be harnessed to its full extent, and the educational system is yet to be upgraded to deliver these technologies' potential benefits on a wider basis.

Practical implications

First, the leveraging of insights across various industry sectors to battle the global threat with smart technology is one of the key takeaways in this field. Second, an integrated framework is recommended for policy making in this area. Lastly, the authors recommend that an internet-based repository be developed, holding all the ideas, databases, best practices, dashboards and real-time statistical data.

Originality/value

As COVID-19 is a relatively recent phenomenon, such a comprehensive review does not exist in the extant literature, to the best of the authors' knowledge. The review covers the rapidly emerging literature on smart technology use during the current COVID-19 pandemic.

Details

Journal of Science and Technology Policy Management, vol. 15 no. 3
Type: Research Article
ISSN: 2053-4620

Keywords
