Search results

1 – 10 of over 23000
Article
Publication date: 9 September 2014

Michael Winkler, Kai Michael Höver and Max Mühlhäuser

Abstract

Purpose

The purpose of this study is to present a depth information-based solution for automatic camera control that follows the presenter's moving position. Talks, presentations and lectures are often captured on video so that a broad audience can access or revisit the content. As presenters often move around during a talk, the recording cameras must be steered to follow them.

Design/methodology/approach

We use depth information from the Kinect to implement a prototype application that automatically steers multiple cameras for recording a talk.

Findings

We present our experiences with the system during actual lectures at a university. We found that the Kinect can track a presenter robustly during a talk. Nevertheless, our prototype reveals potential for improvement, which we discuss in the future work section.

Originality/value

Tracking a presenter is based on a skeleton model extracted from depth information instead of using two-dimensional (2D) motion- or brightness-based image processing techniques. The solution uses a scalable networking architecture based on publish/subscribe messaging for controlling multiple video cameras.
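The abstract names a publish/subscribe architecture for steering multiple cameras but gives no implementation details. As a rough, hypothetical sketch (the in-process bus, topic name and room geometry are all invented here, not the paper's actual middleware), tracked presenter positions can fan out to per-camera controllers that each compute a pan angle:

```python
import math

class Bus:
    """Minimal in-process publish/subscribe bus (a stand-in for whatever
    networked messaging layer the real system uses)."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for cb in self.subscribers.get(topic, []):
            cb(message)

class CameraController:
    """Turns tracked presenter positions into a pan angle for one camera."""
    def __init__(self, name, cam_x, cam_z):
        self.name, self.cam_x, self.cam_z = name, cam_x, cam_z
        self.pan_deg = 0.0

    def on_position(self, pos):
        # Presenter position (x, z) in metres, e.g. from a Kinect skeleton.
        dx = pos["x"] - self.cam_x
        dz = pos["z"] - self.cam_z
        self.pan_deg = math.degrees(math.atan2(dx, dz))

bus = Bus()
cams = [CameraController("left", -2.0, 0.0), CameraController("right", 2.0, 0.0)]
for cam in cams:
    bus.subscribe("presenter/position", cam.on_position)

# One tracking update fans out to every subscribed camera.
bus.publish("presenter/position", {"x": 1.0, "z": 4.0})
```

The decoupling is the point of the pattern: the tracker never needs to know how many cameras are listening, which is what makes the architecture scalable.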

Details

Interactive Technology and Smart Education, vol. 11 no. 3
Type: Research Article
ISSN: 1741-5659

Article
Publication date: 4 March 2019

Yu Qiu, Baoquan Li, Wuxi Shi and Yimei Chen

Abstract

Purpose

The purpose of this paper is to present a visual servo tracking strategy for a wheeled mobile robot in which the unknown feature depth can be identified simultaneously during the visual servoing process.

Design/methodology/approach

By using reference, desired and current images, system errors are constructed from measurable signals obtained by decomposing Euclidean homographies. Subsequently, by taking advantage of the concurrent learning framework, both historical and current system data are used to construct an adaptive update mechanism that recovers the unknown feature depth. A kinematic controller is then designed for the mobile robot to accomplish the visual servo trajectory tracking task. Lyapunov techniques and LaSalle's invariance principle are used to prove that the system errors and the depth estimation error converge to zero synchronously.
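The core concurrent-learning idea, using recorded historical data alongside the current measurement so the estimate keeps converging even after excitation fades, can be illustrated with a hypothetical scalar parameter standing in for the unknown feature depth. The gains, regressor and measurement model below are invented for illustration and are not the paper's actual vision-based adaptive law:

```python
# True unknown parameter (e.g. an inverse feature depth) to be identified.
theta_true = 0.5
history = []   # recorded (regressor, measurement) pairs

theta_hat = 0.0
gamma, gamma_c = 0.1, 0.05
for k in range(200):
    phi = 1.0 / (1 + k)            # excitation fades over time
    y = phi * theta_true           # noiseless measurement y = phi * theta
    if k < 10:
        history.append((phi, y))   # store early, informative data
    # A purely "current-data" gradient term stalls as phi -> 0; the
    # recorded-history term keeps driving theta_hat toward theta_true.
    current = gamma * phi * (y - phi * theta_hat)
    recorded = gamma_c * sum(p * (m - p * theta_hat) for p, m in history)
    theta_hat += current + recorded
```

After the loop, `theta_hat` has converged to the true value despite the vanishing regressor, which is the property the paper proves with Lyapunov techniques for its full system.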

Findings

The concurrent learning-based visual servo tracking and identification technique proves reliable, accurate and efficient in both simulation and comparative experiments. Both the trajectory tracking and depth estimation errors converge to zero successfully.

Originality/value

On the basis of the concurrent learning framework, an adaptive control strategy is developed for the mobile robot to successfully identify the unknown scene depth while accomplishing the visual servo trajectory tracking task.

Details

Assembly Automation, vol. 39 no. 3
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 12 January 2018

Yue Wang, Shusheng Zhang, Sen Yang, Weiping He and Xiaoliang Bai

Abstract

Purpose

This paper aims to propose a real-time augmented reality (AR)-based assembly assistance system using a coarse-to-fine marker-less tracking strategy. The system automatically adapts to the tracking requirements when the topological structure of the assembly changes after each assembly step.

Design/methodology/approach

The prototype system’s process can be divided into two stages: an offline preparation stage and an online execution stage. In the offline preparation stage, planning results (assembly sequence, part positions, rotations, etc.) and image features [gradient and oriented FAST and rotated BRIEF (ORB) features] are extracted automatically from the assembly planning process. In the online execution stage, image features are again extracted and matched with those generated offline to compute the camera pose, and the planning results stored in XML files are parsed to generate assembly instructions for manipulators. In the prototype system, the working range of the template matching algorithm LINE-MOD is first extended by using depth information; then, a fast and robust marker-less tracker that combines the modified LINE-MOD algorithm and an ORB tracker is designed to update the camera pose continuously. Furthermore, to track the camera pose stably, a tracking strategy based on the characteristics of the assembly is presented.
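The coarse-to-fine control flow described above, re-initialize with template matching when frame-to-frame feature tracking fails, refine with features otherwise, can be sketched as follows. The real system uses LINE-MOD and ORB; the two functions below are toy stubs with invented success conditions, shown only to make the switching logic concrete:

```python
def coarse_template_match(frame):
    """Stand-in for LINE-MOD template matching: returns a rough pose."""
    return {"pose": "coarse", "score": 0.9}

def fine_feature_track(frame, prev_pose):
    """Stand-in for ORB feature tracking refined from the previous pose.
    In this toy example, tracking fails on frames flagged as 'blurred'."""
    ok = frame != "blurred"
    return {"pose": "fine", "score": 0.95} if ok else None

def track(frames):
    pose, out = None, []
    for frame in frames:
        refined = fine_feature_track(frame, pose) if pose else None
        if refined is None:
            pose = coarse_template_match(frame)   # re-initialise coarsely
        else:
            pose = refined                        # refine frame-to-frame
        out.append(pose["pose"])
    return out

print(track(["f1", "f2", "blurred", "f4"]))
# prints ['coarse', 'fine', 'coarse', 'fine']
```

The cheap fine tracker runs most of the time, and the expensive coarse matcher is only invoked for recovery, which is how the pipeline stays within its real-time budget.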

Findings

The tracking accuracy and time of the proposed marker-less tracking approach were evaluated, and the results showed that the tracking method could run at 30 fps and that its position and pose tracking accuracy was slightly superior to that of ARToolKit.

Originality/value

The main contributions of this work are as follows. First, the authors present a coarse-to-fine marker-less tracking method that uses the modified state-of-the-art template matching algorithm, LINE-MOD, to find a coarse camera pose; a feature point tracker, ORB, is then activated to calculate the accurate camera pose. The whole tracking pipeline needs, on average, 24.35 ms per frame, which satisfies the real-time requirement for AR assembly. On the basis of this algorithm, the authors present a generic tracking strategy according to the characteristics of the assembly and develop a generic AR-based assembly assistance platform. Second, the authors present a feature point mismatch-eliminating rule based on the orientation vector. By obtaining stable matching feature points, the proposed system achieves accurate tracking results. The evaluation of camera position and pose tracking accuracy shows that the method is slightly superior to ARToolKit markers.

Details

Assembly Automation, vol. 38 no. 1
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 1 April 2014

Gerben G. Meyer, Paul Buijs, Nick B. Szirbik and J.C. (Hans) Wortmann

Abstract

Purpose

Many transportation companies struggle to effectively utilize the information provided by tracking technology when performing operational control. The research presented in this paper aims to identify the problems underlying this inability to utilize tracking technology. Moreover, the paper aims to contribute to solving these problems by proposing a set of design principles based on the concept of intelligent products.

Design/methodology/approach

The study adopts a design science research methodology consisting of three phases. First, a case study at a transportation company was performed to identify the problems faced when utilizing tracking technology. Second, to overcome these problems, a set of design principles was formulated. Finally, a prototype system based on the design principles was developed and subjected to experimental and observational evaluation.

Findings

This paper identifies the problems associated with the utilization of tracking technology for the control of transport operations. Moreover, the proposed design principles support the development of information systems which overcome the identified problems and thereby enhance the utilization of tracking technology in a transportation context.

Originality/value

The commonly held perception that tracking technology will improve the ability to perform operational control does not unequivocally stand up to empirical scrutiny. While it is widely demonstrated that tracking technology can accurately capture detailed operational information, transforming this abundance of information into accurate and timely control decisions remains a fundamental challenge. This research contributes to tackling that challenge by identifying problems and providing solutions related to the utilization of readily available tracking technology.

Details

International Journal of Operations & Production Management, vol. 34 no. 4
Type: Research Article
ISSN: 0144-3577

Open Access
Article
Publication date: 3 December 2021

Mykola Makhortykh, Aleksandra Urman, Teresa Gil-Lopez and Roberto Ulloa

Abstract

Purpose

This study investigates perceptions of the use of online tracking, a passive data collection method relying on the automated recording of participant actions on desktop and mobile devices, for studying information behavior. It scrutinizes folk theories of tracking, the concerns tracking raises among potential participants and the design mechanisms that can be used to alleviate these concerns.

Design/methodology/approach

This study uses focus groups composed of university students (n = 13) to conduct an in-depth investigation of tracking perceptions in the context of information behavior research. Each focus group addresses three thematic blocks: (1) views on online tracking as a research technique, (2) concerns that influence participants' willingness to be tracked and (3) design mechanisms via which tracking-related concerns can be alleviated. To facilitate the discussion, each focus group combines open questions with card-sorting tasks. The results are analyzed using a combination of deductive content analysis and constant comparison analysis, with the main coding categories corresponding to the thematic blocks listed above.

Findings

The study finds that perceptions of tracking are influenced by recent data-related scandals (e.g. Cambridge Analytica), which have amplified negative attitudes toward tracking; tracking is often viewed as a surveillance tool used by corporations and governments. This study also confirms the contextual nature of tracking-related concerns, which vary depending on the activities and content that are tracked. In terms of mechanisms used to address these concerns, this study highlights the importance of transparency-based mechanisms, particularly explanations dealing with the aims and methods of data collection, followed by privacy- and control-based mechanisms.

Originality/value

The study conducts a detailed examination of tracking perceptions and discusses how this research method can be used to increase engagement and empower participants involved in information behavior research.

Details

Internet Research, vol. 32 no. 7
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 23 November 2020

Chengjun Chen, Zhongke Tian, Dongnian Li, Lieyong Pang, Tiannuo Wang and Jun Hong

Abstract

Purpose

This study aims to monitor and guide the assembly process. During manual assembly in mass-customized production, operators need to adapt the assembly process to each product's specifications. Traditional information inquiry and display methods, such as manual lookup of assembly drawings or electronic manuals, are inefficient and error-prone.

Design/methodology/approach

This paper proposes a projection-based augmented reality system (PBARS) for assembly guidance and monitoring. The system includes a projection method based on viewpoint tracking, in which the position of the operator’s head is tracked and the projection images change correspondingly. The assembly monitoring phase applies a method for parts recognition. First, the pixel local binary pattern (PX-LBP) operator is obtained by merging the classical LBP operator with a pixel classification process. Afterward, the PX-LBP features of the depth images are extracted, and a randomized decision forest classifier is used to obtain the pixel classification prediction image (PCPI). Parts recognition and assembly monitoring are performed by PCPI analysis.
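The classical LBP component that PX-LBP builds on is simple to state: each of a pixel's eight neighbours contributes one bit, set when the neighbour is at least as large as the centre value. A minimal sketch on a toy depth patch (the paper's PX-LBP additionally merges this with pixel classification, which is not reproduced here):

```python
def lbp_code(img, r, c):
    """Classical 8-neighbour LBP code for pixel (r, c): each neighbour
    sets one bit when its value is >= the centre value."""
    centre = img[r][c]
    # Neighbours in a fixed clockwise order starting top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

depth = [
    [10, 20, 10],
    [30,  5, 30],
    [10, 20, 10],
]
print(lbp_code(depth, 1, 1))  # prints 255: every neighbour exceeds 5
```

Because the code depends only on orderings, not absolute values, it is robust to the global offsets that plague raw depth readings, which is one reason LBP-style features suit depth images.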

Findings

The projection image changes with the operator's viewpoint, so operators always perceive the three-dimensional guiding scene from their own viewpoint, improving human-computer interaction. Parts recognition and assembly monitoring were achieved by comparing PCPIs, through which missing and erroneous assembly operations can be detected online.

Originality/value

This paper designed the PBARS to monitor and guide the assembly process simultaneously, with potential applications in mass-customized production. Parts recognition and assembly monitoring based on pixel classification provide a novel approach to assembly monitoring.

Content available
Book part
Publication date: 30 July 2018

Details

Marketing Management in Turkey
Type: Book
ISBN: 978-1-78714-558-0

Article
Publication date: 3 December 2018

Babing Ji and Qixin Cao

Abstract

Purpose

This paper aims to propose a new solution for real-time 3D perception with a monocular camera. Most industrial robot solutions use active sensors to acquire 3D structure information, which limits their application to indoor scenarios. Using only a monocular camera, some state-of-the-art methods provide up-to-scale 3D structure information, but the scale of the corresponding objects remains uncertain.

Design/methodology/approach

First, high-accuracy, scale-informed camera poses and a sparse 3D map are provided by leveraging ORB-SLAM and a marker. Second, for each frame captured by the camera, a specially designed depth estimation pipeline computes the corresponding 3D structure, called a depth map, in real time. Finally, the depth map is integrated into a volumetric scene model. A feedback module has been designed for users to visualize the intermediate scene surface in real time.
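Integrating per-frame depth maps into a volumetric scene model is commonly done with a truncated signed distance function (TSDF) fused by running weighted averages. The abstract does not say which representation the authors use, so the 1-D strip of voxels below, with invented voxel spacing and truncation distance, is only a generic illustration of the fusion step:

```python
def tsdf_update(voxels, weights, surface_depth, trunc=0.3):
    """Integrate one depth observation into a 1-D strip of voxels along a
    camera ray, using a truncated signed distance and running averages."""
    for i in range(len(voxels)):
        voxel_depth = i * 0.1               # voxel position along the ray (m)
        sdf = surface_depth - voxel_depth   # positive in front of the surface
        if sdf < -trunc:
            continue                        # far behind the surface: skip
        tsdf = max(-1.0, min(1.0, sdf / trunc))
        # Weighted running average fuses this observation with earlier ones.
        voxels[i] = (voxels[i] * weights[i] + tsdf) / (weights[i] + 1)
        weights[i] += 1
    return voxels

voxels = [0.0] * 20
weights = [0] * 20
# Fuse two noisy depth measurements of a surface ~1.0 m along the ray.
for depth in (0.98, 1.02):
    tsdf_update(voxels, weights, depth)
# The zero-crossing of the fused field marks the surface near index 10.
```

Averaging in TSDF space is what lets noisy per-frame depth maps accumulate into a clean surface, the same principle behind systems such as ElasticFusion that the paper compares against.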

Findings

The system provides more robust tracking performance and compelling results. The implementation runs at nearly 25 Hz on a mainstream laptop using parallel computation techniques.

Originality/value

A new solution for 3D perception using only a monocular camera is developed by leveraging ORB-SLAM. Results are visually comparable to those of active-sensor systems such as ElasticFusion in small scenes. The system is both efficient and easy to implement, and the algorithms and specific configurations involved are introduced in detail.

Details

Industrial Robot: An International Journal, vol. 45 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 11 July 2016

Meiyin Liu, SangUk Han and SangHyun Lee

Abstract

Purpose

As a means of data acquisition for situation awareness, computer vision-based motion capture technologies have increased the potential to observe and assess manual activities for the prevention of accidents and injuries in construction. This study thus aims to present a computationally efficient and robust method of human motion data capture for on-site motion sensing and analysis.

Design/methodology/approach

This study investigated a tracking approach to three-dimensional (3D) human skeleton extraction from stereo video streams. Instead of detecting body joints in each image, the proposed method tracks the locations of the body joints over successive frames by learning from the initialized body posture. The body joints corresponding to the tracked ones are then identified and matched in the image sequence from the other lens and reconstructed in 3D space through triangulation to build 3D skeleton models. For validation, a lab test was conducted to evaluate the accuracy and working ranges of the proposed method.
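The triangulation step, recovering a 3D joint from its matched pixel coordinates in the two lenses, follows the standard rectified-stereo relation Z = f·B/d, where d is the horizontal disparity between the matched points. A minimal sketch with hypothetical camera parameters (the paper's calibration values are not given in the abstract):

```python
def triangulate_joint(xl, xr, y, focal_px, baseline_m):
    """Recover a 3-D joint position from matched image coordinates in a
    rectified stereo pair (pinhole model, principal point at the origin)."""
    disparity = xl - xr                    # pixels; assumed > 0
    z = focal_px * baseline_m / disparity  # depth along the optical axis
    x = xl * z / focal_px                  # back-project to metric X
    y3d = y * z / focal_px                 # back-project to metric Y
    return (x, y3d, z)

# Hypothetical numbers: 700 px focal length, 12 cm baseline between phones.
x, y, z = triangulate_joint(xl=120.0, xr=100.0, y=-40.0,
                            focal_px=700.0, baseline_m=0.12)
print(round(z, 2))  # prints 4.2 (metres)
```

The relation also explains the "working ranges" the lab test evaluates: depth error grows quadratically with distance because a fixed pixel error in disparity corresponds to a larger slice of depth far from the cameras.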

Findings

Results of the test reveal that the tracking approach produces accurate outcomes at a distance, with nearly real-time computational processing, and can potentially be used for site data collection. Thus, the proposed approach has potential for various field analyses of construction workers’ safety and ergonomics.

Originality/value

Recently, motion capture technologies have rapidly been developed and studied in construction. However, existing sensing technologies are not yet readily applicable to construction environments. This study explores two smartphones used as a stereo camera as a potentially suitable means of data collection in construction, owing to fewer operational constraints (e.g. no on-body sensors required, less sensitivity to sunlight and flexible operating ranges).

Details

Construction Innovation, vol. 16 no. 3
Type: Research Article
ISSN: 1471-4175

Article
Publication date: 29 August 2022

Jianbin Xiong, Jinji Nie and Jiehao Li

Abstract

Purpose

This paper primarily aims to review convolutional neural network (CNN)-based eye control systems. The performance of CNNs on big data has driven the development of eye control systems; therefore, a review of CNN-based eye control systems is helpful for future research.

Design/methodology/approach

This paper first covers the fundamentals of eye control systems and of CNNs. Second, the standard CNN model and target detection models are summarized. The CNN-based gaze estimation approaches and models used in eye control systems are then described and summarized. Finally, progress in gaze estimation for eye control systems is discussed and anticipated.

Findings

The eye control system accomplishes its control effect using gaze estimation technology, which focuses on features and information from the eyeball, eye movement and gaze, among other things. Traditional eye control systems adopt pupil monitoring, pupil positioning, the Hough algorithm and other methods. This study focuses on CNN-based eye control systems. First, the authors present the CNN model, which is effective in image identification, target detection and tracking. Furthermore, CNN-based eye control systems are separated into three categories: semantic information, monocular/binocular and full-face. Finally, three challenges linked to the development of CNN-based eye control systems are discussed, along with possible solutions.
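The CNN gaze-estimation pipelines the review surveys boil down to convolutional feature extraction followed by a regression head that outputs a gaze direction. The toy, untrained sketch below shows only that structure; the input, kernel and head weights are arbitrary illustrations, not taken from any surveyed model:

```python
def conv2d(img, kernel):
    """Valid-mode 2-D convolution (cross-correlation, as in most DL libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    out = []
    for r in range(h - kh + 1):
        row = []
        for c in range(w - kw + 1):
            row.append(sum(kernel[i][j] * img[r + i][c + j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

def relu(fm):
    return [[max(0.0, v) for v in row] for row in fm]

def global_avg_pool(fm):
    vals = [v for row in fm for v in row]
    return sum(vals) / len(vals)

def gaze_head(feature, w_yaw, w_pitch):
    """Linear regression head mapping a pooled feature to (yaw, pitch)."""
    return (w_yaw * feature, w_pitch * feature)

# Toy 4x4 "eye image" and an arbitrary, untrained edge-like 3x3 kernel.
eye = [[0, 0, 1, 1]] * 4
kernel = [[-1, 0, 1]] * 3
feature = global_avg_pool(relu(conv2d(eye, kernel)))
yaw, pitch = gaze_head(feature, w_yaw=0.5, w_pitch=-0.2)
```

Real surveyed models stack many learned filters and layers, and the three categories in the review (semantic information, monocular/binocular, full-face) differ mainly in what image regions feed this pipeline, but the conv-pool-regress skeleton is common to all of them.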

Originality/value

This research can provide a theoretical and engineering basis for eye control system platforms. It also summarizes the ideas of previous work to support future research.

Details

Assembly Automation, vol. 42 no. 5
Type: Research Article
ISSN: 0144-5154
