Search results

1 – 10 of 89
Article
Publication date: 15 July 2021

Nehemia Sugianto, Dian Tjondronegoro, Rosemary Stockdale and Elizabeth Irenne Yuwono

The paper proposes a privacy-preserving artificial intelligence-enabled video surveillance technology to monitor social distancing in public spaces.

Abstract

Purpose

The paper proposes a privacy-preserving artificial intelligence-enabled video surveillance technology to monitor social distancing in public spaces.

Design/methodology/approach

The paper proposes a new Responsible Artificial Intelligence Implementation Framework to guide the proposed solution's design and development. It defines responsible artificial intelligence criteria that the solution needs to meet and provides checklists to enforce the criteria throughout the process. To preserve data privacy, the proposed system incorporates a federated learning approach that allows computation to be performed on edge devices, limiting the movement of sensitive and identifiable data and eliminating the dependency on cloud computing at a central server.
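
For illustration, a minimal Python sketch of the kind of federated aggregation such an approach relies on, assuming a simple FedAvg-style weighted average of edge-model parameters; the function names, model representation and sample counts below are hypothetical and not the authors' implementation.

    # Hypothetical FedAvg-style sketch: only model weights leave each edge device,
    # never the raw video frames, which is what keeps identifiable data local.
    import numpy as np

    def local_update(weights, lr=0.01):
        """Stand-in for on-device training; returns locally refined weights."""
        gradient = np.random.randn(*weights.shape) * 0.001   # placeholder gradient
        return weights - lr * gradient

    def federated_average(updates, sample_counts):
        """Weighted average of edge models, proportional to local data volume."""
        total = sum(sample_counts)
        return sum(w * (n / total) for w, n in zip(updates, sample_counts))

    global_weights = np.zeros(10)                            # toy global model
    edge_updates = [local_update(global_weights) for _ in range(4)]
    global_weights = federated_average(edge_updates, sample_counts=[120, 80, 200, 60])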

Findings

The proposed system is evaluated through a case study of monitoring social distancing at an airport. The results show how the system fully addresses the case study's requirements in terms of its reliability, its usefulness when deployed to the airport's cameras and its compliance with responsible artificial intelligence.

Originality/value

The paper makes three contributions. First, it proposes a real-time social distancing breach detection system on edge that builds on a combination of cutting-edge people detection and tracking algorithms to achieve robust performance. Second, it proposes a design approach to develop responsible artificial intelligence in video surveillance contexts. Third, it presents results and discussion from a comprehensive evaluation in the context of a case study at an airport to demonstrate the proposed system's robust performance and practical usefulness.

Details

Information Technology & People, vol. 37 no. 2
Type: Research Article
ISSN: 0959-3845

Article
Publication date: 23 January 2024

Wang Zhang, Lizhe Fan, Yanbin Guo, Weihua Liu and Chao Ding

The purpose of this study is to establish a method for accurately extracting torch and seam features. This will improve the quality of narrow gap welding. An adaptive deflection…

Abstract

Purpose

The purpose of this study is to establish a method for accurately extracting torch and seam features, thereby improving the quality of narrow gap welding. An adaptive deflection correction system based on passive light vision sensors was designed using the Halcon software from MVTec (Germany) as a platform.

Design/methodology/approach

This paper proposes an adaptive correction system for welding guns and seams that is divided into image calibration and feature extraction. In the image calibration step, the field-of-view distortion caused by the camera's position is corrected using image calibration techniques. In the feature extraction step, clear features of the weld gun and weld seam are accurately extracted after processing with algorithms such as shock filtering, subpixel contours (XLD), Laplacian of Gaussian and sense region operations. The gun and weld seam centers are accurately fitted using least squares. After the deviation values are calculated, the error values are monitored and error correction is achieved through programmable logic controller (PLC) control. Finally, experimental verification and analysis of the tracking errors are carried out.
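
As an illustration of the centre-fitting and deviation step, the short sketch below fits the seam centreline by least squares and converts the torch offset into a correction signal; the pixel data, calibration factor and tolerance are hypothetical, and the actual system uses Halcon operators rather than NumPy.

    # Hypothetical sketch: least-squares seam line, lateral torch deviation, PLC trigger.
    import numpy as np

    seam_points = np.array([[0.0, 10.1], [5.0, 10.3], [10.0, 10.2], [15.0, 10.4]])  # (x, y) px
    torch_centre = np.array([7.5, 10.9])                                            # (x, y) px

    # Fit the seam centreline y = a*x + b through the extracted seam points.
    A = np.column_stack([seam_points[:, 0], np.ones(len(seam_points))])
    (a, b), *_ = np.linalg.lstsq(A, seam_points[:, 1], rcond=None)

    # Perpendicular distance of the torch centre from the fitted line.
    deviation_px = abs(a * torch_centre[0] - torch_centre[1] + b) / np.hypot(a, 1.0)

    MM_PER_PIXEL = 0.05                       # assumed camera calibration factor
    deviation_mm = deviation_px * MM_PER_PIXEL
    if deviation_mm > 0.3:                    # 0.3 mm error bound reported in the paper
        print(f"send {deviation_mm:.2f} mm correction offset to the PLC")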

Findings

The results show that the system deals effectively with camera aberrations. Weld gun features can be effectively and accurately identified, and scratches are reliably distinguished from welds. The system accurately detects the center features of the torch and weld and controls the correction error to within 0.3 mm.

Originality/value

An adaptive correction system based on a passive light vision sensor is designed which corrects the field-of-view distortion caused by the camera’s position deviation. Differences in features between scratches and welds are distinguished, and image features are effectively extracted. The final system weld error is controlled to 0.3 mm.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 3
Type: Research Article
ISSN: 0143-991X

Open Access
Article
Publication date: 16 January 2024

Pengyue Guo, Tianyun Shi, Zhen Ma and Jing Wang

The paper aims to solve the problem of personnel intrusion identification within the limits of high-speed railways. It adopts the fusion method of millimeter wave radar and camera…

Abstract

Purpose

The paper aims to solve the problem of personnel intrusion identification within the limits of high-speed railways. It adopts the fusion method of millimeter wave radar and camera to improve the accuracy of object recognition in dark and harsh weather conditions.

Design/methodology/approach

This paper adopts a radar–camera linkage fusion strategy to achieve focus amplification of long-distance targets and addresses low illumination by laser fill lighting at the focus point. To improve recognition, the YOLOv8 algorithm is adopted for multi-scale target recognition. In addition, to handle the image distortion caused by bad weather, this paper proposes a linkage and tracking fusion strategy to output correct alarm results.
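
A minimal sketch of the camera-side detection step is given below, using the publicly available ultralytics YOLOv8 interface; the radar linkage, zoom control and alarm fusion described above are reduced to placeholders, and the video file name is hypothetical.

    # Minimal YOLOv8 person-detection loop; radar linkage and alarm fusion are placeholders.
    import cv2
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")                       # pretrained weights; COCO class 0 = person

    cap = cv2.VideoCapture("intrusion_test.mp4")     # hypothetical simulated-intrusion video
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = model(frame, classes=[0], verbose=False)    # detect persons only
        for box in results[0].boxes:
            x1, y1, x2, y2 = box.xyxy[0].tolist()
            # A real deployment would fuse this with radar range/track data before alarming.
            print(f"person candidate at ({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f})")
    cap.release()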

Findings

Simulated intrusion tests show that the proposed method can effectively detect human intrusion within 0–200 m during both day and night in clear weather and achieves more than 80% recognition accuracy under extremely severe weather conditions.

Originality/value

(1) The authors propose a personnel intrusion monitoring scheme based on the fusion of millimeter wave radar and camera, achieving all-weather intrusion monitoring; (2) the authors propose a new multi-level fusion algorithm based on linkage and tracking to achieve intrusion target monitoring under adverse weather conditions; and (3) the authors conducted extensive simulation experiments to verify the effectiveness of the proposed method.

Details

Railway Sciences, vol. 3 no. 1
Type: Research Article
ISSN: 2755-0907

Article
Publication date: 9 June 2023

Wahib Saif and Adel Alshibani

This paper aims to present a highly accessible and affordable tracking model for earthmoving operations in an attempt to overcome some of the limitations of current tracking…

Abstract

Purpose

This paper aims to present a highly accessible and affordable tracking model for earthmoving operations in an attempt to overcome some of the limitations of current tracking models.

Design/methodology/approach

The proposed methodology involves four main processes: acquiring onsite terrestrial images; processing the images into scaled 3D point cloud data; extracting volumetric measurements and crew productivity estimates from multiple point clouds using Delaunay triangulation; and conducting earned value/schedule analysis and forecasting the remaining scope of work based on the estimated performance. For validation, the tracking model was compared with an observation-based tracking approach for a backfilling site. It was also used to track a coarse base aggregate inventory for a road construction project.
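
A brief sketch of the Delaunay-based volumetric step is shown below: the scaled point cloud is triangulated in plan view and the volume is accumulated as triangular prisms above a base level; the point cloud and reference level are hypothetical, not the authors' code.

    # Hedged sketch: stockpile volume from a scaled (x, y, z) point cloud via Delaunay.
    import numpy as np
    from scipy.spatial import Delaunay

    points = np.random.rand(500, 3) * [10.0, 10.0, 2.0]   # hypothetical cloud, metres
    base_z = 0.0                                          # assumed ground reference level

    tri = Delaunay(points[:, :2])                         # triangulate plan-view coordinates
    volume = 0.0
    for simplex in tri.simplices:
        p = points[simplex]
        # Plan-view triangle area from the 2D cross product of two edge vectors.
        area = 0.5 * abs((p[1, 0] - p[0, 0]) * (p[2, 1] - p[0, 1])
                         - (p[2, 0] - p[0, 0]) * (p[1, 1] - p[0, 1]))
        volume += area * (p[:, 2].mean() - base_z)        # prism above this triangle

    print(f"estimated volume: {volume:.2f} m^3")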

Findings

The presented model has proved to be a practical and accurate tracking approach that algorithmically estimates and forecasts all performance parameters from the captured data.

Originality/value

The proposed model is unique in extracting accurate volumetric measurements directly from multiple point clouds in purpose-built code using Delaunay triangulation, rather than from textured models in modelling software, which is neither automated nor time-efficient. Furthermore, the presented model uses a self-calibration approach that eliminates the pre-calibration procedure otherwise required before image capturing for each camera intended to be used. Thus, any worker onsite can capture the required images with an easily accessible camera (e.g. a handheld camera or a smartphone), and the images can be sent to any processing device via e-mail, cloud-based storage or any communication application (e.g. WhatsApp).

Article
Publication date: 25 January 2023

Hui Xu, Junjie Zhang, Hui Sun, Miao Qi and Jun Kong

Attention is one of the most important factors affecting students' academic performance. Effectively analyzing students' attention in class can promote teachers' precise…

Abstract

Purpose

Attention is one of the most important factors affecting students' academic performance. Effectively analyzing students' attention in class can promote teachers' precise teaching and students' personalized learning. To intelligently analyze students' attention in the classroom from a first-person perspective, this paper proposes a fusion model based on gaze tracking and object detection. In particular, the proposed attention analysis model does not depend on any smart equipment.

Design/methodology/approach

Given a first-person view video of students' learning, the authors first estimate the gazing point using a deep space–time neural network. Second, a single-shot multi-box detector and a fast segmentation convolutional neural network are comparatively adopted to accurately detect the objects in the video. Third, the gazing objects are predicted by combining the results of gazing point estimation and object detection. Finally, the personalized attention of students is analyzed based on the predicted gazing objects and measurable eye movement criteria.
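
The fusion step described above can be summarized by the short sketch below: the estimated gazing point is tested against the detected object boxes and the most confident containing box is reported; the labels and coordinates are hypothetical.

    # Hypothetical fusion of gazing-point estimation and object detection results.
    def gazed_object(gaze_point, detections):
        """detections: list of (label, (x1, y1, x2, y2), score); gaze_point: (x, y)."""
        gx, gy = gaze_point
        hits = [(label, score) for label, (x1, y1, x2, y2), score in detections
                if x1 <= gx <= x2 and y1 <= gy <= y2]
        # If the gaze point falls inside several boxes, keep the most confident detection.
        return max(hits, key=lambda h: h[1])[0] if hits else None

    frame_detections = [("blackboard", (0, 0, 640, 200), 0.91),
                        ("notebook", (250, 300, 420, 460), 0.88)]
    print(gazed_object((300, 380), frame_detections))      # -> notebook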

Findings

A large number of experiments are carried out on a public database and on a new dataset built in a real classroom. The experimental results show that the proposed model not only accurately tracks the students' gazing trajectory and effectively analyzes the attention fluctuations of individual students and the whole class but also provides a valuable reference for evaluating students' learning process.

Originality/value

The contributions of this paper can be summarized as follows. The analysis of students' attention plays an important role in improving teaching quality and student achievement, yet there is little research on how to analyze students' attention automatically and intelligently. To alleviate this problem, this paper focuses on analyzing students' attention through gaze tracking and object detection in classroom teaching, which is significant for practical application in the field of education. The authors propose an effective intelligent fusion model based on deep neural networks, consisting mainly of a gazing point module and an object detection module, to analyze students' attention in classroom teaching without relying on any smart wearable device. They introduce an attention mechanism into the gazing point module to improve the performance of gazing point detection and perform comparison experiments on the public dataset to show that the module achieves better performance. They associate eye movement criteria with visual gaze to obtain quantifiable, objective data for attention analysis, which provides a valuable basis for evaluating students' learning process, offers useful learning information to both parents and teachers and supports the development of individualized teaching. Finally, they built a new database containing first-person view videos of 11 subjects in a real classroom and used it to evaluate the effectiveness and feasibility of the proposed model.

Details

Data Technologies and Applications, vol. 57 no. 5
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 8 September 2022

Johnny Kwok Wai Wong, Mojtaba Maghrebi, Alireza Ahmadian Fard Fini, Mohammad Amin Alizadeh Golestani, Mahdi Ahmadnia and Michael Er

Images taken from construction site interiors often suffer from low illumination and poor natural colors, which restrict their application for high-level site management purposes…

Abstract

Purpose

Images taken from construction site interiors often suffer from low illumination and poor natural colors, which restrict their application for high-level site management purposes. State-of-the-art low-light image enhancement methods provide promising results; however, they generally require a long execution time to complete the enhancement. This study aims to develop a refined image enhancement approach to improve execution efficiency and performance accuracy.

Design/methodology/approach

To develop the refined illumination enhancement algorithm, named enhanced illumination quality (EIQ), a quadratic expression was first added to the initial illumination map. Subsequently, an adjusted weight matrix was added to improve the smoothness of the illumination map. A coordinate descent optimization algorithm was then applied to minimize the processing time. Gamma correction was also applied to further enhance the illumination map. Finally, a frame comparison and averaging method was used to identify interior site progress.
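
For orientation only, a rough sketch of the illumination-map idea behind such enhancement is given below (coarse map, smoothing, gamma correction, Retinex-style relighting); it is not the authors' EIQ code, and the file name, kernel size and gamma value are assumptions.

    # Rough illumination-map sketch (not EIQ): estimate, smooth, gamma-correct, relight.
    import numpy as np
    import cv2

    img = cv2.imread("dark_site_photo.jpg").astype(np.float32) / 255.0   # hypothetical image
    illumination = img.max(axis=2)                                       # coarse per-pixel map
    illumination = np.clip(cv2.GaussianBlur(illumination, (15, 15), 0), 1e-3, 1.0)

    gamma = 0.6                                                          # assumed exponent
    corrected = np.power(illumination, gamma)

    # Retinex-style relighting: reflectance = image / illumination, relit by corrected map.
    enhanced = img / illumination[..., None] * corrected[..., None]
    cv2.imwrite("enhanced.jpg", np.clip(enhanced * 255.0, 0, 255).astype(np.uint8))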

Findings

The proposed refined approach took around 4.36–4.52 s to achieve the expected results while outperforming the current low-light image enhancement method. EIQ demonstrated a lower lightness-order error and provided higher object resolution in enhanced images. EIQ also achieved a higher structural similarity index and peak signal-to-noise ratio, indicating better image reconstruction performance.

Originality/value

The proposed approach provides an alternative that shortens execution time, improves equalization of the illumination map and yields better image reconstruction. The approach could be applied to low-light video enhancement tasks and to other dark or poor-quality jobsite images for object detection processes.

Details

Construction Innovation, vol. 24 no. 2
Type: Research Article
ISSN: 1471-4175

Article
Publication date: 9 February 2024

Xiaoqing Zhang, Genliang Xiong, Peng Yin, Yanfeng Gao and Yan Feng

To ensure the motion attitude and stable contact force of a massage robot working on an unknown human tissue environment, this study aims to propose a robotic system for autonomous…

Abstract

Purpose

To ensure the motion attitude and stable contact force of a massage robot working on an unknown human tissue environment, this study aims to propose a robotic system for autonomous massage path planning and stable interaction control.

Design/methodology/approach

First, back region extraction and acupoint recognition based on deep learning are proposed, providing a basis for determining the robot's working area and path points. Second, to reproduce the standard approach and movement trajectory of an expert massage, 3D reconstruction and path planning of the massage area are performed, and normal vectors are calculated to control the normal orientation of the robot end. Finally, to cope with changes in human tissue softness and with body movement, an adaptive force tracking control strategy is presented to compensate online for the uncertainty of environmental position and tissue hardness.
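
A one-dimensional sketch of the adaptive force-tracking idea is given below: the commanded pressing depth is adjusted online from the force error, so the contact force converges even though tissue position and stiffness are unknown; the gains and tissue parameters are hypothetical, not the authors' controller.

    # Hypothetical 1-D admittance-style force tracking with online reference adaptation.
    def simulate(desired_force=5.0, steps=200):
        tissue_pos, tissue_stiffness = 0.10, 800.0     # unknown to the controller (m, N/m)
        x_cmd, k_adapt = 0.0, 2e-4                     # commanded depth (m), adaptation gain
        measured_force = 0.0
        for _ in range(steps):
            contact = max(0.0, x_cmd - tissue_pos)     # penetration into the tissue
            measured_force = tissue_stiffness * contact
            x_cmd += k_adapt * (desired_force - measured_force)   # online adjustment
        return measured_force

    print(f"steady-state contact force: {simulate():.2f} N")       # approaches 5 N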

Findings

The improved network model accomplishes the acupoint recognition task with high accuracy and integrates the point cloud to generate massage trajectories adapted to the shape of the human body. Experimental results show that the adaptive force tracking control obtains a relatively smooth force, with the error remaining essentially within ±0.2 N during the online experiment.

Originality/value

By incorporating deep learning, 3D reconstruction and impedance control, the robot can understand the shape features of the massage area and adapt its planned massage path to carry out stable and safe force tracking control during dynamic robot–human contact.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 3
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 11 September 2023

Zhongmei Zhang, Qingyang Hu, Guanxin Hou and Shuai Zhang

Vehicle companion is one of the most common companion patterns in daily life, which has great value to accident investigation, group tracking, carpooling recommendation and road…

Abstract

Purpose

Vehicle companion is one of the most common companion patterns in daily life, which has great value to accident investigation, group tracking, carpooling recommendation and road planning. Due to the complexity and large scale of vehicle sensor streaming data, existing approaches struggle to ensure the efficiency and effectiveness of real-time vehicle companion discovery (VCD). This paper aims to provide a high-quality and low-cost method to discover vehicle companions in real time.

Design/methodology/approach

This paper provides a real-time VCD method based on proactive data service collaboration. The study makes use of dynamic service collaboration to selectively process data produced by relevant sensors and relaxes the temporal and spatial constraints of the vehicle companion pattern to discover more potential companion vehicles.
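
As an illustration of the relaxed spatio-temporal companion test, the sketch below counts how often two vehicle tracks are observed close together within a time tolerance; the distance, time and count thresholds are hypothetical, and the service-collaboration machinery itself is not shown.

    # Hedged sketch of a relaxed spatio-temporal vehicle-companion test.
    from math import hypot

    def are_companions(track_a, track_b, max_dist=50.0, max_dt=30.0, min_meetings=5):
        """Each track is a list of (timestamp_s, x_m, y_m) sensor records."""
        meetings = 0
        for t1, x1, y1 in track_a:
            for t2, x2, y2 in track_b:
                if abs(t1 - t2) <= max_dt and hypot(x1 - x2, y1 - y2) <= max_dist:
                    meetings += 1
                    break                      # count each record of track_a at most once
        return meetings >= min_meetings

    a = [(t, 10.0 * t, 0.0) for t in range(10)]
    b = [(t + 5, 10.0 * t + 20.0, 5.0) for t in range(10)]
    print(are_companions(a, b))                # -> True under the relaxed constraints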

Findings

Experiments based on real and simulated data show that the method can discover 67% more companion vehicles with 62% less response time compared with a centralized method.

Originality/value

To reduce the amount of streaming data to be processed, this study provides a service collaboration-based vehicle companion discovery method built on a proactive data service model. The study also provides a new definition of vehicle companion by relaxing the temporal and spatial constraints so as to discover as many companion vehicles as possible.

Details

International Journal of Web Information Systems, vol. 19 no. 5/6
Type: Research Article
ISSN: 1744-0084

Open Access
Article
Publication date: 22 September 2023

Nengsheng Bao, Yuchen Fan, Chaoping Li and Alessandro Simeone

Lubricating oil leakage is a common issue in thermal power plant operation sites, requiring prompt equipment maintenance. The real-time detection of leakage occurrences could…

Abstract

Purpose

Lubricating oil leakage is a common issue in thermal power plant operation sites, requiring prompt equipment maintenance. The real-time detection of leakage occurrences could avoid disruptive consequences caused by the lack of timely maintenance. Currently, inspection operations are mostly carried out manually, resulting in time-consuming processes prone to health and safety hazards. To overcome such issues, this paper proposes a machine vision-based inspection system aimed at automating oil leakage detection to improve maintenance procedures.

Design/methodology/approach

The approach aims at developing a novel modular-structured automatic inspection system. The image acquisition module collects digital images along a predefined inspection path using a dual-light (i.e. ultraviolet and blue light) illumination system, exploiting the fluorescence of the lubricating oil while suppressing unwanted background noise. The image processing module is designed to detect the oil leakage within the digital images while minimizing detection errors. A case study is reported to validate the industrial suitability of the proposed inspection system.
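
A simple sketch of the kind of segmentation such an image processing module performs is shown below: under UV/blue illumination the oil fluoresces, so candidate leakage regions can be isolated by colour thresholding; the hue/saturation band, area threshold and file name are assumptions, not the authors' pipeline.

    # Hypothetical fluorescence segmentation under dual-light illumination (OpenCV).
    import cv2
    import numpy as np

    frame = cv2.imread("uv_inspection_frame.jpg")                # hypothetical acquisition
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Assumed hue/saturation band for the greenish fluorescence of the lubricating oil.
    mask = cv2.inRange(hsv, (35, 60, 60), (85, 255, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    leaks = [c for c in contours if cv2.contourArea(c) > 200]    # drop small noise blobs
    print(f"{len(leaks)} candidate leakage region(s) detected")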

Findings

On-site experimental results demonstrate the system's capability to complete the automatic inspection procedures on the tested industrial equipment, achieving an oil leakage detection accuracy of up to 99.13%.

Practical implications

The proposed inspection system can be adopted in industrial contexts to detect lubricant leakage, ensuring equipment and operator safety.

Originality/value

The proposed inspection system adopts a computer vision approach that combines two separate light sources to boost detection capabilities, enabling application in a variety of particularly hard-to-inspect industrial contexts.

Details

Journal of Quality in Maintenance Engineering, vol. 29 no. 5
Type: Research Article
ISSN: 1355-2511

Article
Publication date: 14 February 2024

Parsa Aghaei and Sara Bayramzadeh

This study aims to investigate how trauma team members perceive technological equipment and tools in the trauma room (TR) environment and to identify how the technological…

Abstract

Purpose

This study aims to investigate how trauma team members perceive technological equipment and tools in the trauma room (TR) environment and to identify how the technological equipment could be optimized in relation to the TR’s space.

Design/methodology/approach

A total of 21 focus group sessions were conducted with 69 trauma team members, all of whom worked in Level I TRs from six teaching hospitals in the USA.

Findings

The collected data were analyzed and categorized into three parent themes: imaging equipment, assistive devices and room features. The results suggest that trauma team members place high importance on the availability and versatility of technological equipment in the TR environment. Although CT scans are routinely required in TRs, few facilities were optimized for easy access to CT scanners from the TR. The implementation of cameras and screens was suggested as an improvement to support situational awareness. Rapid sharing of data, such as imaging results, was highly sought after. Unorthodox approaches, such as the use of automatic doors, were associated with slowing down the course of action.

Practical implications

This study provides health-care designers with the knowledge they need to make informed decisions when designing TRs. It covers key considerations such as room layout, equipment selection, lighting and controls. Implementing these strategies will help minimize negative patient outcomes.

Originality/value

Level I TRs are a critical element of emergency departments and designing them correctly can significantly impact patient outcomes. However, designing a TR can be a complex process that requires careful consideration of various factors, including patient safety, workflow efficiency, equipment placement and infection control. This study suggests multiple considerations when designing TRs.

Details

Facilities, vol. 42 no. 5/6
Type: Research Article
ISSN: 0263-2772
