Search results

Article
Publication date: 17 August 2015

Mario Andrei Garzon Oviedo, Antonio Barrientos, Jaime Del Cerro, Andrés Alacid, Efstathios Fotiadis, Gonzalo R. Rodríguez-Canosa and Bang-Chen Wang

Abstract

Purpose

This paper aims to present a system fully capable of detecting, tracking and following pedestrians, a very challenging task, especially when intended for use in large outdoor infrastructures. Three modules (detection, tracking and following) are integrated and tested over long distances in semi-structured scenarios, where static or dynamic obstacles, including other pedestrians, can be found.

Design/methodology/approach

The detection is based on the probabilistic fusion of a laser scanner and a camera. The tracking module pairs observations with previously detected targets using Kalman filters and the Mahalanobis distance. The following module allows the robot to safely pursue the target using a well-defined navigation scheme.
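As a rough illustration of the pairing step, the Python sketch below gates new observations against Kalman-predicted track states using the Mahalanobis distance; the gate constant, the track attributes and the greedy assignment are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: assign each observation to the nearest Kalman track,
# measured by Mahalanobis distance, subject to a chi-square gate.
import numpy as np

GATE = 9.21  # chi-square 99% gate for 2-D measurements (assumption)

def mahalanobis(track, z):
    """Distance between measurement z (2,) and the track's predicted measurement."""
    y = z - track.H @ track.x_pred                     # innovation
    S = track.H @ track.P_pred @ track.H.T + track.R   # innovation covariance
    return float(y.T @ np.linalg.inv(S) @ y)

def pair_observations(tracks, observations):
    """Greedy nearest-track assignment with a Mahalanobis gate."""
    pairs = []
    for z in observations:
        dists = [(mahalanobis(t, z), t) for t in tracks]
        d, best = min(dists, key=lambda p: p[0]) if dists else (np.inf, None)
        if best is not None and d < GATE:
            pairs.append((best, z))  # update this track's Kalman filter with z
        # else: unmatched observation -> candidate new track
    return pairs
```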

Findings

The system can track pedestrians from a standstill up to 3.46 m/s (running). It handles occlusions, crossings and missed detections, keeping track of the position even if the pedestrian is detected in only 55 per cent of the observations. Moreover, it autonomously selects and follows a target at a maximum speed of 1.46 m/s.

Originality/value

The main novelty of this study is the integration of the three algorithms in a fully operational system, tested in real outdoor scenarios. Furthermore, the addition of labelling to the detection algorithm allows using the full range of a single sensor while preserving the high performance of a combined detection. The false-positive rate is reduced by handling the uncertainty level when pairing observations. The inclusion of pedestrian speed in the model speeds up and simplifies the tracking process. Finally, the most suitable target is automatically selected by a scoring system.

Details

Industrial Robot: An International Journal, vol. 42 no. 5
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 20 April 2023

Vishva Payghode, Ayush Goyal, Anupama Bhan, Sailesh Suryanarayan Iyer and Ashwani Kumar Dubey

Abstract

Purpose

This paper aims to implement and extend the You Only Look Once (YOLO) algorithm for detection of objects and activities. The advantage of YOLO is that it runs a neural network only once to detect the objects in an image, which is why it is powerful and fast. Cameras are found at many different crossroads and locations, and processing the video feed through an object detection algorithm makes it possible to determine and track what is captured. Video surveillance has many applications, such as car tracking and tracking of people for crime prevention. This paper provides an exhaustive comparison between existing methods and the proposed method, which is found to have the highest object detection accuracy.

Design/methodology/approach

The goal of this research is to develop a deep learning framework to automate the task of analyzing video footage through object detection in images. This framework processes video feed or image frames from CCTV, a webcam or a DroidCam, which allows the camera in a mobile phone to be used as a webcam for a laptop. The object detection algorithm, with its model trained on a large data set of images, loads each input image, processes it and determines the categories of the matching objects that it finds. As a proof of concept, this research demonstrates the algorithm on images of several different objects. For video surveillance of traffic cameras, this has many applications, such as car tracking and person tracking for crime prevention. The implemented algorithm with the proposed methodology is compared against several prior methods from the literature and was found to have the highest accuracy for both object detection and activity recognition.
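For readers unfamiliar with this kind of pipeline, the following is a minimal sketch of frame-by-frame YOLO detection using OpenCV's DNN module with standard YOLOv3 Darknet files; the file names and thresholds are assumptions, and the sketch does not reproduce the paper's extended model.

```python
# Minimal sketch: run a pretrained YOLOv3 network once per frame and
# report high-confidence detections. Requires yolov3.cfg/yolov3.weights.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
layer_names = net.getUnconnectedOutLayersNames()

cap = cv2.VideoCapture(0)  # webcam, DroidCam or a CCTV stream URL
while True:
    ok, frame = cap.read()
    if not ok:
        break
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    for output in net.forward(layer_names):
        for det in output:               # det = [cx, cy, w, h, obj, class scores...]
            scores = det[5:]
            class_id = int(np.argmax(scores))
            if scores[class_id] > 0.5:   # confidence threshold (assumption)
                h, w = frame.shape[:2]
                cx, cy = int(det[0] * w), int(det[1] * h)
                print(f"class {class_id} near ({cx}, {cy})")
cap.release()
```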

Findings

The results indicate that the proposed deep learning-based model can be implemented in real time for object detection and activity recognition. The added features of car crash detection, fall detection and social distancing detection can be used to implement a real-time video surveillance system that can help save lives and protect people. Such a system could be installed at street and traffic cameras and in CCTV systems. When it detects a car crash or an injurious human or pedestrian fall, it can be programmed to send automatic messages to the nearest police, emergency and fire stations. When it detects a social-distancing violation, it can be programmed to inform the local authorities or sound an alarm with a warning message alerting the public to keep their distance and avoid spreading aerosol particles that may transmit viruses, including the COVID-19 virus.

Originality/value

This paper proposes an improved and augmented version of the YOLOv3 model, extended to perform activity recognition such as car crash detection, human fall detection and social distancing detection. The proposed model is a deep learning convolutional neural network for detecting objects in images, trained on the widely used and publicly available Common Objects in Context data set. Being an extension of YOLO, it can be implemented for real-time object and activity recognition. The proposed model achieved higher accuracies for both large-scale and all-scale object detection, exceeded all the other compared methods in extending object detection to activity recognition and yielded the highest accuracy for car crash detection, fall detection and social distancing detection.

Details

International Journal of Web Information Systems, vol. 19 no. 3/4
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 5 June 2017

Eugene Yujun Fu, Hong Va Leong, Grace Ngai and Stephen C.F. Chan

Abstract

Purpose

Social signal processing under affective computing aims at recognizing and extracting useful human social interaction patterns. Fighting is a common social interaction in real life, and a fight detection system has wide applications. This paper aims to detect fights in a natural and low-cost manner.

Design/methodology/approach

Research works on fight detection are often based on visual features, demanding substantial computation and good video quality. In this paper, the authors propose an approach to detect fight events through motion analysis. Most existing works evaluated their algorithms on public data sets of simulated fights acted out by actors. To evaluate real fights, the authors collected videos involving real fights to form a data set. On both types of data sets, the authors evaluated the performance of their motion signal analysis algorithm and compared it with the state-of-the-art approach based on MoSIFT descriptors with a Bag-of-Words mechanism, and with basic motion signal analysis with Bag-of-Words.
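As a hedged sketch of what a low-computation motion-signal cue might look like, the snippet below averages dense optical-flow magnitude per frame and flags sustained high motion; the threshold and window are illustrative assumptions, not the authors' algorithm.

```python
# Sketch: a crude per-frame motion signal from dense optical flow,
# thresholded over a short window as a "violent motion" indicator.
import cv2
import numpy as np

def motion_signal(video_path):
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        signal.append(float(np.mean(mag)))   # mean motion magnitude per frame
        prev_gray = gray
    cap.release()
    return np.array(signal)

def is_fight(signal, thresh=4.0, window=15):
    """Fight candidate: sustained high motion over a short window (assumption)."""
    return any(np.mean(signal[i:i + window]) > thresh
               for i in range(max(1, len(signal) - window)))
```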

Findings

The experimental results indicate that the proposed approach accurately detects fights in real scenarios and performs better than the MoSIFT approach.

Originality/value

By collecting and annotating real surveillance videos containing real fight events and augmenting them with well-known data sets, the authors proposed, implemented and evaluated a low-computation approach, comparing it with the state-of-the-art approach. The authors uncovered some fundamental differences between real and simulated fights and initiated a new study in discriminating real from simulated fight events, with very good performance.

Details

International Journal of Pervasive Computing and Communications, vol. 13 no. 2
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 4 July 2022

Shih Chang Hsia, Szu-Hong Wang and Hung-Lieh Chen

Abstract

Purpose

This study aims to present a novel technique for localizing a person's position in a room, in order to manage people in a specified space.

Design/methodology/approach

In this study, real-time human-sensing detection and smart lighting control were designed within a single silicon core. The chip was successfully realized within a 1.5 mm² silicon area using a TSMC 0.25 µm process.

Findings

This chip can read the weak signal of a pyroelectric infrared (PIR) sensor to find the position of a human body in a dark room and then help control the smart lighting of an intelligent surveillance system.

Originality/value

This chip implements a retriggering delay control that extends the LED lighting time indefinitely, so the light does not switch off suddenly while users remain in a space. This function is very useful in a practical intelligent surveillance system based on human detection, as it reduces both power dissipation and memory requirements.
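A behavioral sketch of such retriggering delay control is given below: each PIR trigger resets a hold-off timer, so the light stays on while the sensor keeps retriggering. The timing values and the read_pir()/set_led() helpers are hypothetical stand-ins for the chip's hardware logic.

```python
# Sketch: software model of retriggering delay control. Every PIR pulse
# restarts the hold-off timer; the LED turns off only after HOLD_SECONDS
# with no retrigger.
import time

HOLD_SECONDS = 60.0  # delay after the last trigger (assumption)

def lighting_loop(read_pir, set_led):
    last_trigger = None
    while True:
        if read_pir():                    # weak PIR pulse detected
            last_trigger = time.monotonic()
            set_led(True)                 # start or extend lighting
        elif last_trigger is not None and \
                time.monotonic() - last_trigger > HOLD_SECONDS:
            set_led(False)                # no retrigger within the delay
            last_trigger = None
        time.sleep(0.05)                  # 20 Hz polling (assumption)
```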

Details

Sensor Review, vol. 42 no. 5
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 16 October 2017

Robert Bogue

Abstract

Purpose

This paper aims to provide a technical insight into a selection of robotic people detection technologies and applications.

Design/methodology/approach

Following an introduction, this paper first discusses people-sensing technologies which seek to extend the capabilities of human-robot collaboration by allowing humans to operate alongside conventional industrial robots. It then provides examples of developments in people detection and tracking in unstructured, dynamic environments. Developments in people sensing and monitoring by assistive robots are then considered and, finally, brief concluding comments are drawn.

Findings

Robotic people detection technologies are the topic of an extensive research effort and are becoming increasingly important as growing numbers of robots interact directly with humans. These robots are being deployed in industry, in public places and in the home. The sensing requirements vary according to the application and range from simple person detection and avoidance to human motion tracking, behaviour and safety monitoring, individual recognition and gesture sensing. Sensing technologies include cameras, lasers and ultrasonics, and low-cost RGB-D cameras are having a major impact.

Originality/value

This article provides details of a range of developments involving people sensing in the important and rapidly developing field of human-robot interactions.

Details

Industrial Robot: An International Journal, vol. 44 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 17 October 2022

Jiayue Zhao, Yunzhong Cao and Yuanzhi Xiang

Abstract

Purpose

The safety management of construction machines is of primary importance. Traditional construction machine safety monitoring and evaluation methods cannot adapt to the complex construction environment, and monitoring methods based on sensor equipment cost too much. This paper therefore introduces computer vision and deep learning technologies to propose the YOLOv5-FastPose (YFP) model, which realizes the pose estimation of construction machines by improving the AlphaPose human pose model.

Design/methodology/approach

This model introduces the object detection module YOLOv5m to improve the recognition accuracy when detecting construction machines. Meanwhile, to better capture pose characteristics, feature extraction optimized by the FastPose network was introduced into the Single-Machine Pose Estimation (SMPE) module of AlphaPose. This study used the Alberta Construction Image Dataset (ACID) and the Construction Equipment Poses Dataset (CEPD) to establish the data set for object detection and pose estimation of construction machines, through data augmentation and the Labelme image annotation software, for training and testing the YFP model.
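The two-stage structure described above can be sketched as follows: a YOLOv5m detector localizes each machine, and a pose network runs on the crop. The pose_net stand-in for the FastPose-based SMPE and the confidence gate are assumptions, not the paper's code.

```python
# Sketch: detect machines with a pretrained YOLOv5m, then run a pose
# estimator on each detected crop (top-down pose estimation).
import torch

detector = torch.hub.load("ultralytics/yolov5", "yolov5m")  # pretrained COCO weights

def estimate_poses(image, pose_net):
    """pose_net is a hypothetical stand-in for the FastPose-based SMPE."""
    detections = detector(image).xyxy[0]        # rows: x1, y1, x2, y2, conf, cls
    poses = []
    for x1, y1, x2, y2, conf, cls in detections.tolist():
        if conf < 0.5:                          # confidence gate (assumption)
            continue
        crop = image[int(y1):int(y2), int(x1):int(x2)]
        poses.append(pose_net(crop))            # keypoints for this machine
    return poses
```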

Findings

The experimental results show that the improved YFP model achieves an average normalized error (NE) of 12.94 × 10⁻³, an average Percentage of Correct Keypoints (PCK) of 98.48% and an average Area Under the PCK Curve (AUC) of 37.50 × 10⁻³. Compared with existing methods, this model has higher accuracy in the pose estimation of construction machines.
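For reference, the PCK metric cited above counts a predicted keypoint as correct when it falls within a fraction of a per-instance normalization length of the ground truth. The sketch below assumes a common convention (threshold 0.2 of the scale), not the paper's exact protocol.

```python
# Sketch: Percentage of Correct Keypoints over a batch of instances.
import numpy as np

def pck(pred, gt, norm_len, alpha=0.2):
    """pred, gt: (N, K, 2) keypoint arrays; norm_len: (N,) per-instance scale."""
    dists = np.linalg.norm(pred - gt, axis=-1)       # (N, K) pixel errors
    correct = dists <= alpha * norm_len[:, None]     # within the threshold
    return correct.mean() * 100.0                    # per cent of keypoints
```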

Originality/value

This study extends and optimizes the human pose estimation model AlphaPose to make it suitable for construction machines, improving the performance of pose estimation for construction machines.

Details

Engineering, Construction and Architectural Management, vol. 31 no. 3
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 16 April 2024

Jinwei Zhao, Shuolei Feng, Xiaodong Cao and Haopei Zheng

Abstract

Purpose

This paper aims to concentrate on recent innovations in flexible wearable sensor technology tailored for monitoring vital signals, within the context of wearable sensors and systems developed specifically for monitoring health and fitness metrics.

Design/methodology/approach

In recent decades, wearable sensors for monitoring vital signals in sports and health have advanced greatly. Vital signals include the electrocardiogram, electroencephalogram, electromyogram, inertial data, body motions, heart rate and bodily fluids such as blood and sweat, making them well suited to wearable sensing devices.

Findings

This report reviewed reputable journal articles on wearable sensors for vital signal monitoring, focusing on multimode and integrated multi-dimensional capabilities, such as the structure, accuracy and nature of the devices, which may offer a more versatile and comprehensive solution.

Originality/value

The paper provides essential information on the present obstacles and challenges in this domain and offers a glimpse into the future directions of wearable sensors for the detection of these crucial signals. Importantly, it is evident that the integration of modern fabrication techniques, stretchable electronic devices, the Internet of Things and artificial intelligence algorithms has significantly improved the capacity to efficiently monitor and leverage these signals for human health monitoring, including disease prediction.

Details

Sensor Review, vol. 44 no. 3
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 2 September 2024

Li Shaochen, Zhenyu Liu, Yu Huang, Daxin Liu, Guifang Duan and Jianrong Tan

Abstract

Purpose

Assembly action recognition plays an important role in assembly process monitoring and human-robot collaborative assembly. Previous works overlook the interaction relationship between hands and operated objects and lack modeling of subtle hand motions, which leads to a decline in accuracy for fine-grained action recognition. This paper aims to model hand-object interactions and hand movements to realize high-accuracy assembly action recognition.

Design/methodology/approach

In this paper, a novel multi-stream hand-object interaction network (MHOINet) is proposed for assembly action recognition. To learn the hand-object interaction relationship in an assembly sequence, an interaction modeling network (IMN) comprising both geometric and visual modeling is exploited in the interaction stream. The former captures the spatial location relation of the hand and the interacted parts/tools according to their detected bounding boxes, and the latter focuses on mining the visual context of hand and object at the pixel level through a position attention model. To model the hand movements, a temporal enhancement module (TEM) with multiple convolution kernels is developed in the hand stream, which captures the temporal dependencies of hand sequences over short and long ranges. Finally, assembly action prediction is accomplished by merging the outputs of the different streams through a weighted score-level fusion. A robotic arm component assembly data set was created to evaluate the effectiveness of the proposed method.
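Two of these ideas lend themselves to a short sketch: a temporal module with parallel convolutions of different kernel sizes, and a weighted score-level fusion of per-stream logits. The PyTorch snippet below is a hedged illustration; layer sizes and fusion weights are assumptions, not the published MHOINet.

```python
# Sketch: multi-kernel temporal convolution (short- and long-range
# dependencies) and weighted score-level fusion across streams.
import torch
import torch.nn as nn

class TemporalEnhancement(nn.Module):
    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(channels, channels, k, padding=k // 2)
            for k in kernel_sizes)

    def forward(self, x):                  # x: (batch, channels, time)
        return torch.stack([b(x) for b in self.branches]).sum(0)

def fuse_scores(stream_logits, weights):
    """Weighted score-level fusion of per-stream class logits."""
    scores = [w * torch.softmax(s, dim=-1)
              for s, w in zip(stream_logits, weights)]
    return torch.stack(scores).sum(0)      # final class probabilities
```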

Findings

The method achieves recognition accuracies of 97.31% and 95.32% for coarse and fine assembly actions, respectively, outperforming other comparative methods. Experiments on human-robot collaboration show that the method can be applied to industrial production.

Originality/value

The authors propose a novel framework for assembly action recognition that simultaneously leverages the features of hands, objects and hand-object interactions. The TEM enhances the representation of hand dynamics and facilitates the recognition of assembly actions with various time spans. The IMN learns semantic information from hand-object interactions, which is significant for distinguishing fine assembly actions.

Details

Robotic Intelligence and Automation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 12 June 2018

Mingming Guo, Hua Zhang, Chuncheng Feng, Manlu Liu and Jianwen Huo

Abstract

Purpose

This paper aims to present a method that improves the sensitivity and reduces the false-alarm probability of collision detection for a manipulator in a human-robot interaction environment, thereby improving the performance of the system in the presence of non-linear uncertainty in the model of the robot controller.

Design/methodology/approach

A novel collision detection method based on adaptive residual estimation is proposed, improving the detection accuracy for collisions of the manipulator during operation. First, a generalized momentum residual estimator is designed to incorporate the non-linear factors of the manipulator (e.g. joint friction, speed and acceleration) into the residual-related uncertainty of the model. Second, model parameters are estimated through gradient correction. A residual filter is used to determine a dynamic threshold, resulting in higher detection accuracy. Finally, the performance of the residual estimation scheme is evaluated by comparing the dynamic threshold with the residual in real-time experiments in which the end-effector of a single Universal Robots UR5 collides with an obstacle.
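The generalized momentum residual at the heart of such schemes can be sketched in discrete time as below, where the residual tracks the external (collision) torque and is compared against a dynamic threshold. The dynamics callbacks M, C and g, the gain K_o and the threshold test are illustrative assumptions, not the authors' estimator.

```python
# Sketch: discrete-time generalized-momentum residual observer.
# r ~ K_o * (p(t) - integral(tau + C^T*dq - g + r) dt - p(0)),
# where p = M(q) * dq is the generalized momentum.
import numpy as np

def momentum_residual(q, dq, tau, M, C, g, K_o, dt):
    """Yields the residual r[k] along a trajectory (T samples, n joints)."""
    n = q.shape[1]
    r = np.zeros(n)
    integral = np.zeros(n)
    p0 = M(q[0]) @ dq[0]                      # initial generalized momentum
    for k in range(len(q)):
        beta = g(q[k]) - C(q[k], dq[k]).T @ dq[k]   # non-torque momentum dynamics
        integral += (tau[k] - beta + r) * dt
        p = M(q[k]) @ dq[k]
        r = K_o @ (p - integral - p0)         # residual ~ external torque
        yield r

def collided(r, thresh):
    """Collision flag: residual exceeds an adaptive threshold (assumption)."""
    return np.any(np.abs(r) > thresh)
```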

Findings

Experimental results demonstrate that the collision detection system improves sensitivity and yields a low false-alarm probability in the presence of non-linear uncertainty in the model.

Practical implications

The method proposed in this article can be applied in industrial and human-robot interaction settings.

Originality/value

An adaptive collision detection method is proposed in this paper to address non-linear uncertainties of the model in industrial applications.

Details

Industrial Robot: An International Journal, vol. 45 no. 3
Type: Research Article
ISSN: 0143-991X

Book part
Publication date: 13 June 2013

Li Xiao, Hye-jin Kim and Min Ding

Abstract

Purpose

The advancement of multimedia technology has spurred the use of multimedia in business practice. The adoption of audio and visual data will accelerate as marketing scholars become more aware of the value of such data and of the technologies required to reveal insights into marketing problems. This chapter aims to introduce marketing scholars to this field of research.

Design/methodology/approach

This chapter reviews the current technology in audio and visual data analysis and discusses rewarding research opportunities in marketing using these data.

Findings

Compared with traditional data such as survey and scanner data, audio and visual data provide richer information and are easier to collect. Given these advantages, data availability, feasibility of storage and increasing computational power, we believe that these data will contribute to better marketing practices with the help of marketing scholars in the near future.

Practical implications

The adoption of audio and visual data in marketing practice will help practitioners gain better insights into marketing problems and thus make better decisions.

Originality/value

This chapter makes the first attempt in the marketing literature to review the current technology in audio and visual data analysis and proposes promising applications of such technology. We hope it will inspire scholars to utilize audio and visual data in marketing research.

Details

Review of Marketing Research
Type: Book
ISBN: 978-1-78190-761-0
