Search results

1 – 8 of 8
Article

Lounis Chermak, Nabil Aouf and Mark Richardson

Abstract

Purpose

In vision-based applications, lighting conditions have a considerable impact on the quality of the acquired images. Extremely dimly or brightly illuminated environments are a real issue for the majority of cameras because of their limited dynamic range: over- or under-exposure can cause a loss of essential information through pixel saturation or noise, which can be critical in computer vision applications. High dynamic range (HDR) imaging technology is known to improve image rendering in such conditions. The purpose of this paper is to investigate the level of performance that can be achieved for feature detection and tracking in images acquired with an HDR image sensor.

Design/methodology/approach

In this study, four different feature detection techniques are selected, and the tracking algorithm is based on the pyramidal implementation of the Kanade-Lucas-Tomasi (KLT) feature tracker. The tracking algorithm is run over image sequences acquired with an HDR image sensor and with a high-resolution 5-megapixel image sensor to assess the two comparatively.
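The core of the pyramidal KLT tracker is the single-level Lucas-Kanade update, which the pyramid repeats from coarse to fine resolution. A minimal NumPy sketch of that single-level step on a synthetic one-pixel shift (an illustration of the underlying technique, not the authors' implementation):

```python
import numpy as np

def lk_step(I, J, x, y, w=3):
    """One Lucas-Kanade update: estimate the displacement of the
    (2w+1)x(2w+1) window centred at (x, y) from frame I to frame J
    by solving the 2x2 normal equations G d = b."""
    Ix = (np.roll(I, -1, axis=1) - np.roll(I, 1, axis=1)) / 2.0  # central differences
    Iy = (np.roll(I, -1, axis=0) - np.roll(I, 1, axis=0)) / 2.0
    It = J - I
    sl = np.s_[y - w:y + w + 1, x - w:x + w + 1]
    gx, gy, gt = Ix[sl].ravel(), Iy[sl].ravel(), It[sl].ravel()
    G = np.array([[gx @ gx, gx @ gy],
                  [gx @ gy, gy @ gy]])
    b = -np.array([gx @ gt, gy @ gt])
    return np.linalg.solve(G, b)

# Synthetic test: a smooth blob shifted one pixel to the right.
xs, ys = np.meshgrid(np.arange(64), np.arange(64))
I = np.exp(-((xs - 32) ** 2 + (ys - 32) ** 2) / 50.0)
J = np.exp(-((xs - 33) ** 2 + (ys - 32) ** 2) / 50.0)
dx, dy = lk_step(I, J, 30, 32)   # dx close to 1, dy close to 0
```

A pyramidal implementation runs this step at each resolution level, warping the window by the displacement accumulated so far, which is what lets KLT handle motions larger than the window.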

Findings

The authors demonstrate that tracking performance is greatly improved on image sequences acquired with the HDR sensor: the number and percentage of features still tracked at the end of a sequence are several times higher than what can be achieved with the 5-megapixel image sensor.

Originality/value

The specific interest of this work is the evaluation of the tracking persistence of a set of initially detected features over image sequences taken in different scenes. These include extreme-illumination indoor and outdoor environments subject to direct sunlight exposure and backlighting, as well as dim-light and dark scenarios.

Details

Kybernetes, vol. 43 no. 8
Type: Research Article
ISSN: 0368-492X

Article

Tao Guan and Li Duan

Abstract

Purpose

Augmented environments superimpose computer-generated enhancements on the real world. The pose and occlusion consistencies between virtual and real objects must be managed correctly so that users perceive a natural scene. The purpose of this paper is to describe a novel technique that resolves pose and occlusion consistencies in real time within a unified framework based on affine properties.

Design/methodology/approach

First, the method is simple and resolves pose and occlusion consistencies in a unified framework based on affine properties. It can substantially improve the three-dimensional impression of the augmented reality system while reducing computational complexity. Second, the method is robust to arbitrary camera motion and does not require multiple cameras, camera calibration, fiducials, or a structural model of the scene. Third, a novel feature tracking method is proposed that combines narrow- and wide-baseline strategies to match natural features between the reference images and the current frame directly.
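The affine property such frameworks typically rely on is the classic invariance of affine coordinates: a point expressed relative to three non-collinear basis points keeps those coordinates under any affine map, so tracking only the basis points suffices to transfer the point into the current frame. A sketch with illustrative numbers (not taken from the paper):

```python
import numpy as np

def affine_coords(p, b0, b1, b2):
    """Solve p = b0 + a1*(b1-b0) + a2*(b2-b0) for (a1, a2)."""
    M = np.column_stack([b1 - b0, b2 - b0])
    return np.linalg.solve(M, p - b0)

def transfer(a, c0, c1, c2):
    """Rebuild the point from the same affine coords in the new frame."""
    a1, a2 = a
    return c0 + a1 * (c1 - c0) + a2 * (c2 - c0)

b0, b1, b2 = map(np.array, ([0.0, 0.0], [1.0, 0.0], [0.0, 1.0]))
p = np.array([0.4, 0.3])
a = affine_coords(p, b0, b1, b2)

# Apply an arbitrary affine map A x + t to the basis and to the point.
A = np.array([[2.0, 0.5], [-0.3, 1.5]])
t = np.array([10.0, -4.0])
c0, c1, c2, q = (A @ v + t for v in (b0, b1, b2, p))

q_transferred = transfer(a, c0, c1, c2)  # equals q: the coords are invariant
```

Because the invariance is exact for affine maps, no camera calibration is needed to transfer points, which is consistent with the requirements the paper drops.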

Findings

It is found that the method remains effective even under large changes of viewing angle, while removing the requirement that the initial camera position be close to the reference images.

Originality/value

This paper describes experiments carried out to demonstrate the validity of the proposed approach.

Details

Sensor Review, vol. 30 no. 2
Type: Research Article
ISSN: 0260-2288

Article

Yan Pang, Andrew Y.C. Nee, Soh Khim Ong, Miaolong Yuan and Kamal Youcef‐Toumi

Abstract

Purpose

This paper aims to apply augmented reality (AR) technology to assembly design in the early design stage. A proof-of-concept system with an AR interface is developed.

Design/methodology/approach

Through the AR interface, designers can design the assembly on the real assembly platform. The system helps users design assembly features that provide proper part-part constraints in the early design stage. The virtual assembly features are rendered on the real assembly platform using AR registration techniques. Newly evaluated assembly parts can be generated in the AR interface and assembled to the assembly platform through assembly features. A model-based collision detection technique is implemented for assembly constraint evaluation.
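The paper does not detail its model-based collision detection technique. As a minimal illustration of the kind of broad-phase check such systems typically run first, here is an axis-aligned bounding-box (AABB) overlap test between part models (an assumed simplification, not the authors' method):

```python
def aabb_overlap(a_min, a_max, b_min, b_max):
    """True if two AABBs, given by min/max corners as (x, y, z) tuples,
    intersect: they overlap iff their extents overlap on every axis."""
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

# A part sitting inside another part's bounding volume collides...
assert aabb_overlap((0, 0, 0), (2, 2, 2), (1, 1, 1), (3, 3, 3))
# ...while clearly separated parts do not.
assert not aabb_overlap((0, 0, 0), (1, 1, 1), (2, 2, 2), (3, 3, 3))
```

In practice a broad-phase filter like this only prunes candidate pairs; pairs that pass are handed to an exact mesh-level test before an assembly constraint is flagged as violated.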

Findings

With the AR interface, it is possible to combine some of the benefits of both physical and virtual prototyping (VP). The AR environment can save considerable computational resources compared with a totally virtual environment. Working on the real assembly platform, designers get a more realistic feel and can design an assembly in a more intuitive way.

Research limitations/implications

More interaction tools need to be developed to support complex assembly design efficiently.

Practical implications

The presented system encourages designers to consider assembly issues in the early design stage. Primitive 3D models of assembly parts with proper part-part constraints are generated using the system before detailed geometry design begins.

Originality/value

A new markerless registration approach for AR systems is presented. This generic approach can also be used in other AR applications.

Details

Assembly Automation, vol. 26 no. 1
Type: Research Article
ISSN: 0144-5154

Article

Li Lijun, Guan Tao, Ren Bo, Yao Xiaowen and Wang Cheng

Abstract

Purpose

The purpose of this paper is to propose a novel registration method using Euclidean reconstruction and natural features tracking for AR‐based assembly guidance systems.

Design/methodology/approach

The method operates in two steps: offline Euclidean reconstruction and online tracking. The offline stage obtains the structure of the scene using a Euclidean reconstruction technique, and classification trees are constructed using affine transforms for online initialization. In tracking, a classification-based wide-baseline matching strategy and the Td,d test are used to obtain a fast and accurate initialization for the first frame, after which a modified optical-flow tracker carries out feature tracking in the real-time video sequence. The four specified points are transferred to the current image to compute the registration matrix for augmentation.
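For a planar target, transferring four specified points and computing a registration matrix from them amounts to solving a homography by the direct linear transform (DLT): four correspondences give exactly the eight constraints needed. A NumPy sketch with illustrative coordinates (the paper does not spell out its registration matrix computation):

```python
import numpy as np

def homography_from_4pts(src, dst):
    """Direct linear transform: the 3x3 homography H mapping four source
    points to four destination points (exactly determined by 4 pairs)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The null space of the 8x9 system gives H up to scale.
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

src = [(0, 0), (1, 0), (1, 1), (0, 1)]          # reference (unit square)
dst = [(10, 10), (30, 12), (28, 35), (8, 32)]   # tracked positions in frame
H = homography_from_4pts(src, dst)

# H maps each source point onto its tracked destination.
q = H @ np.array([0.0, 0.0, 1.0])
u, v = q[:2] / q[2]   # → approximately (10, 10)
```

With more than four tracked features, the same least-squares system is overdetermined, which is why the method degrades gracefully rather than failing outright as features are lost, down to the four-point minimum the paper cites.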

Findings

First, Euclidean reconstruction is used instead of projective reconstruction to obtain the projections of the predefined features. Compared with the six points needed in the projective reconstruction-based method, this method can run normally even when only four features are successfully tracked. Second, an adaptive strategy is proposed to adjust the classification trees using the features tracked in the online stage, by which the system can be initialized or reinitialized even when the first and reference images differ greatly.

Originality/value

Indoor and outdoor experiments are provided to validate the performance of the proposed method.

Details

Assembly Automation, vol. 28 no. 4
Type: Research Article
ISSN: 0144-5154

Article

Yue Wang, Shusheng Zhang, Sen Yang, Weiping He and Xiaoliang Bai

Abstract

Purpose

This paper aims to propose a real-time augmented reality (AR)-based assembly assistance system using a coarse-to-fine marker-less tracking strategy. The system automatically adapts to the tracking requirement when the topological structure of the assembly changes after each assembly step.

Design/methodology/approach

The prototype system's process can be divided into two stages: the offline preparation stage and the online execution stage. In the offline preparation stage, planning results (assembly sequence, part positions, rotations, etc.) and image features [gradient and oriented FAST and rotated BRIEF (ORB) features] are extracted automatically from the assembly planning process. In the online execution stage, image features are again extracted and matched with those generated offline to compute the camera pose, and the planning results stored in XML files are parsed to generate assembly instructions for the manipulators. In the prototype system, the working range of the template matching algorithm LINE-MOD is first extended using depth information; then, a fast and robust marker-less tracker that combines the modified LINE-MOD algorithm with an ORB tracker is designed to update the camera pose continuously. Furthermore, to track the camera pose stably, a tracking strategy tailored to the characteristics of assembly is presented.

Findings

The tracking accuracy and run time of the proposed marker-less tracking approach were evaluated; the results showed that the tracking method could run at 30 fps and that its position and pose tracking accuracy was slightly superior to ARToolKit's.

Originality/value

The main contributions of this work are as follows. First, the authors present a coarse-to-fine marker-less tracking method that uses a modified state-of-the-art template matching algorithm, LINE-MOD, to find a coarse camera pose; a feature point tracker, ORB, is then activated to calculate the accurate camera pose. The whole tracking pipeline needs, on average, 24.35 ms per frame, which satisfies the real-time requirement for AR assembly. On the basis of this algorithm, the authors present a generic tracking strategy according to the characteristics of the assembly and develop a generic AR-based assembly assistance platform. Second, the authors present a feature point mismatch-eliminating rule based on the orientation vector. By obtaining stable matching feature points, the proposed system achieves accurate tracking results. The evaluation of camera position and pose tracking accuracy shows that the method is slightly superior to ARToolKit markers.
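The orientation-vector mismatch-eliminating rule is not given in detail in the abstract. One common variant of such rules, sketched here purely as an assumption, bins the direction of each match's displacement vector and keeps only the matches in the dominant bin, on the premise that correct matches share a consistent motion direction:

```python
import math
from collections import Counter

def filter_by_orientation(matches, n_bins=18):
    """matches: list of ((x1, y1), (x2, y2)) matched point pairs.
    Keep only matches whose displacement direction falls in the
    most populated angular bin (20-degree bins by default)."""
    def bin_of(m):
        (x1, y1), (x2, y2) = m
        ang = math.atan2(y2 - y1, x2 - x1) % (2 * math.pi)
        return int(ang / (2 * math.pi / n_bins)) % n_bins
    counts = Counter(bin_of(m) for m in matches)
    dominant = counts.most_common(1)[0][0]
    return [m for m in matches if bin_of(m) == dominant]

# Nine consistent rightward matches plus two upward-moving outliers.
good = [((i, 0), (i + 5, 0)) for i in range(9)]
bad = [((0, 0), (0, 7)), ((3, 3), (3, 9))]
kept = filter_by_orientation(good + bad)   # the two outliers are dropped
```

Rejecting mismatches before pose estimation is what keeps the subsequent camera pose solve stable, which matches the role the abstract assigns to this rule.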

Details

Assembly Automation, vol. 38 no. 1
Type: Research Article
ISSN: 0144-5154

Article

X. Wang, S.K. Ong and A.Y.C. Nee

Abstract

Purpose

This paper aims to propose and implement an integrated augmented-reality (AR)-aided assembly environment to incorporate the interaction between real and virtual components, so that users can obtain a more immersive experience of the assembly simulation in real time and achieve better assembly design.

Design/methodology/approach

A component contact handling strategy is proposed to model all the possible movements of virtual components when they interact with real components. A novel assembly information management approach is proposed to access and modify the information instances dynamically corresponding to user manipulation. To support the interaction between real and virtual components, a hybrid marker-less tracking method is implemented.

Findings

A prototype system has been developed, and a case study of an automobile alternator assembly is presented. A set of tests is implemented to validate the feasibility, efficiency, accuracy and intuitiveness of the system.

Research limitations/implications

The prototype system allows users to manipulate the designed virtual components and assemble them onto the real components, so that they can check for possible design errors and modify the original design in the context of its final use and at real-world scale.

Originality/value

This paper proposes an integrated AR simulation and planning platform based on hybrid tracking and ontology-based assembly information management. A component contact handling strategy based on collision detection and assembly feature surface mating reasoning is proposed to resolve the components' degrees of freedom.

Article

Dominik Szajerman, Piotr Napieralski and Jean-Philippe Lecointe

Abstract

Purpose

Technological innovation has made it possible to review how a film cues particular reactions on the part of the viewers. The purpose of this paper is to capture and interpret visual perception and attention by the simultaneous use of eye tracking and electroencephalography (EEG) technologies.

Design/methodology/approach

The authors have developed a method for the joint analysis of EEG and eye tracking. To achieve this goal, an algorithm was implemented to capture and interpret visual perception and attention through the simultaneous use of eye tracking and EEG technologies. All parameters were measured as a function of the relationship between the tested signals, which in turn allowed for a more accurate validation of hypotheses through appropriately selected calculations.
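The abstract does not specify which relationship measure is computed between the tested signals. The simplest candidate is a Pearson correlation between two synchronously sampled traces; the sketch below uses illustrative synthetic signals (not the paper's actual EEG or gaze data) to show the computation:

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Two illustrative traces: e.g. an EEG band-power signal and a gaze
# metric that follows it linearly (scaled and offset).
eeg = [math.sin(0.1 * t) for t in range(200)]
gaze = [0.8 * v + 0.1 for v in eeg]
r = pearson(eeg, gaze)   # → 1.0 (perfectly linearly related)
```

In a real joint analysis the two streams would first be resampled to a common clock and the correlation computed in sliding windows, so that changes in viewer engagement over the course of the film become visible.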

Findings

The results of this study revealed a coherence between EEG and eye tracking that is of particular relevance for human perception.

Practical implications

This paper endeavors both to capture and to interpret visual perception and attention through the simultaneous use of eye tracking and EEG technologies. Eye tracking provides a powerful real-time measure of the viewer's region of interest, while EEG provides data on the viewer's emotional state while watching the movie.

Originality/value

The approach in this paper is distinct from similar studies because it integrates the eye tracking and EEG technologies. This paper provides a method for building a fully functional video introspection system.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 37 no. 5
Type: Research Article
ISSN: 0332-1649

Article

K. Satya Sujith and G. Sasikala

Abstract

Purpose

Object detection models have gained considerable popularity, as they aid many applications such as monitoring and video surveillance. Object detection through video tracking faces many challenges, as most real-time video streams are affected by environmental factors.

Design/methodology/approach

This research develops a system for crowd tracking and crowd behaviour recognition using a hybrid tracking model. The input to the proposed crowd tracking system is high-density crowd video containing hundreds of people. The first step is to detect humans through a visual recognition algorithm, with a priori knowledge of location points given as input; the algorithm identifies humans through the constraints defined within a Minimum Bounding Rectangle (MBR). Then, the spatial tracking model tracks the path of each human object's movement across the video frames, with tracking carried out by extracting colour histogram and texture features. A temporal tracking model based on a NARX neural network is also applied, which is effectively utilized to detect the locations of moving objects. Once a person's path is tracked, the behaviour of every human object is identified using the Optimal Support Vector Machine (OSVM), newly developed by combining an SVM with an optimization algorithm, namely MBSO. The proposed MBSO algorithm is developed through the integration of the existing techniques BSA and MBO.
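The colour-histogram appearance cue used in the spatial tracking model can be sketched as follows; the bin count and the Bhattacharyya similarity are standard choices assumed here, since the abstract does not specify the exact feature computation:

```python
def color_histogram(pixels, bins=8):
    """pixels: iterable of (r, g, b) values in 0..255. Returns a
    normalised joint RGB histogram flattened to bins**3 entries."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        hist[(r // step) * bins * bins + (g // step) * bins + (b // step)] += 1
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def bhattacharyya(h1, h2):
    """Similarity in [0, 1] between two normalised histograms;
    1 means identical distributions, 0 means disjoint support."""
    return sum((a * b) ** 0.5 for a, b in zip(h1, h2))

red_patch = [(250, 10, 10)] * 100
same_patch = [(250, 12, 8)] * 100     # same colour bin, slight noise
blue_patch = [(10, 10, 250)] * 100

h1, h2, h3 = map(color_histogram, (red_patch, same_patch, blue_patch))
sim_same = bhattacharyya(h1, h2)   # → 1.0 (patches fall in the same bin)
sim_diff = bhattacharyya(h1, h3)   # → 0.0 (disjoint bins)
```

In tracking, the candidate window whose histogram is most similar to the target's reference histogram is taken as the object's new location; texture features would be appended to the same descriptor before comparison.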

Findings

The dataset for object tracking is taken from the Tracking in High Crowd Density dataset. The proposed OSVM classifier attained improved performance, with an accuracy of 0.95.

Originality/value

This paper presents a hybrid high-density video tracking model and a behaviour recognition model. The proposed hybrid tracking model tracks the path of the object in the video through temporal and spatial tracking. The extracted features train the proposed OSVM classifier using weights selected by the proposed MBSO algorithm. The proposed MBSO algorithm can be regarded as a modified version of the BSO algorithm.
