Search results

1 – 10 of over 3000
Article
Publication date: 12 January 2018

Yue Wang, Shusheng Zhang, Sen Yang, Weiping He and Xiaoliang Bai


Abstract

Purpose

This paper aims to propose a real-time augmented reality (AR)-based assembly assistance system using a coarse-to-fine marker-less tracking strategy. The system automatically adapts to the tracking requirements when the topological structure of the assembly changes after each assembly step.

Design/methodology/approach

The prototype system’s process can be divided into two stages: the offline preparation stage and the online execution stage. In the offline preparation stage, planning results (assembly sequence, part positions, rotations, etc.) and image features [gradient and oriented FAST and rotated BRIEF (ORB) features] are extracted automatically from the assembly planning process. In the online execution stage, image features are likewise extracted and matched with those generated offline to compute the camera pose, and the planning results stored in XML files are parsed to generate assembly instructions for manipulators. In the prototype system, the working range of the template matching algorithm LINE-MOD is first extended by using depth information; then, a fast and robust marker-less tracker that combines the modified LINE-MOD algorithm and an ORB tracker is designed to update the camera pose continuously. Furthermore, to track the camera pose stably, a tracking strategy based on the characteristics of the assembly is presented.
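
As an illustration of the coarse matching stage, the sketch below performs an exhaustive normalized cross-correlation search over the image. This is a greatly simplified stand-in for LINE-MOD, which in reality matches quantized gradient orientations (extended with depth cues in the authors' modified version) rather than raw intensities; the function name and array shapes here are illustrative only.

```python
import numpy as np

def coarse_match(image, template):
    """Slide the template over the image and return the best-scoring
    position under normalized cross-correlation (NCC)."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image[y:y + th, x:x + tw]
            w = w - w.mean()
            denom = np.linalg.norm(w) * t_norm
            # NCC score is 1.0 for a perfect match; 0.0 for flat windows
            score = float((w * t).sum() / denom) if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```

In a coarse-to-fine pipeline of the kind the paper describes, the pose hypothesis from this coarse stage would then seed a feature-point tracker (here, ORB) for accurate refinement.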

Findings

The tracking accuracy and runtime of the proposed marker-less tracking approach were evaluated; the results showed that the method runs at 30 fps and that its position and pose tracking accuracy is slightly superior to that of ARToolKit.

Originality/value

The main contributions of this work are as follows: First, the authors present a coarse-to-fine marker-less tracking method that uses a modified version of the state-of-the-art template matching algorithm LINE-MOD to find a coarse camera pose; a feature point tracker, ORB, is then activated to calculate the accurate camera pose. The whole tracking pipeline needs, on average, 24.35 ms per frame, which satisfies the real-time requirement for AR assembly. On the basis of this algorithm, the authors present a generic tracking strategy according to the characteristics of the assembly and develop a generic AR-based assembly assistance platform. Second, the authors present a feature point mismatch-eliminating rule based on the orientation vector. By obtaining stable matching feature points, the proposed system achieves accurate tracking results. The evaluation of camera position and pose tracking accuracy shows that the method is slightly superior to ARToolKit markers.
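
The abstract describes the mismatch-eliminating rule only at a high level. A common orientation-consistency filter works roughly as follows; this sketch (hypothetical function and input format, not the authors' code) keeps only those matches whose keypoint-orientation difference agrees with the dominant rotation between the reference and current views:

```python
def filter_matches_by_orientation(angle_pairs, max_dev_deg=15.0):
    """Keep only matches whose orientation difference agrees with the
    dominant (median) rotation between reference and current frame.
    `angle_pairs` holds one (ref_angle_deg, cur_angle_deg) tuple per match."""
    diffs = [(cur - ref) % 360.0 for ref, cur in angle_pairs]
    dominant = sorted(diffs)[len(diffs) // 2]  # robust central estimate
    kept = []
    for pair, d in zip(angle_pairs, diffs):
        # wrap the deviation into [-180, 180] before thresholding
        dev = abs((d - dominant + 180.0) % 360.0 - 180.0)
        if dev <= max_dev_deg:
            kept.append(pair)
    return kept
```

Rejecting matches that disagree with the dominant rotation removes gross outliers cheaply, before any RANSAC-style geometric verification.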

Details

Assembly Automation, vol. 38 no. 1
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 1 April 2014

Annette Mossel, Michael Leichtfried, Christoph Kaltenriner and Hannes Kaufmann

Abstract

Purpose

The authors present a low-cost unmanned aerial vehicle (UAV) for autonomous flight and navigation in GPS-denied environments that uses an off-the-shelf smartphone as its core on-board processing unit. The approach is thereby independent of additional ground hardware, and the UAV core unit can easily be replaced with more powerful hardware, which simplifies system updates as well as maintenance. The paper aims to discuss these issues.

Design/methodology/approach

The UAV is able to map, locate and navigate in an unknown indoor environment by fusing vision-based tracking with inertial and attitude measurements. The authors choose an algorithmic approach for mapping and localization that does not require GPS coverage of the target area; autonomous indoor navigation is therefore made possible.

Findings

The authors demonstrate the UAV's capabilities of mapping, localization and navigation in an unknown 2D marker environment. These promising results enable future research on 3D self-localization and dense mapping using mobile hardware as the only on-board processing unit.

Research limitations/implications

The proposed autonomous flight processing pipeline robustly tracks and maps planar markers that need to be distributed throughout the tracking volume.

Practical implications

Due to the cost-effective platform and the flexibility of the software architecture, the approach can play an important role in areas with poor infrastructure (e.g. developing countries) to autonomously perform tasks for search and rescue, inspection and measurements.

Originality/value

The authors provide a low-cost off-the-shelf flight platform that only requires a commercially available mobile device as core processing unit for autonomous flight in GPS-denied areas.

Details

International Journal of Pervasive Computing and Communications, vol. 10 no. 1
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 20 June 2016

Roopesh Kevin Sungkur, Akshay Panchoo and Nitisha Kirtee Bhoyroo


Abstract

Purpose

This study aims to show the relevance of augmented reality (AR) in mobile learning for the 21st century. With AR, any real-world environment can be augmented by providing users with accurate digital overlays. AR is a promising technology that has the potential to encourage learners to explore learning materials from a totally new perspective. Besides, the advancements made in information technology further broaden the scope for educational AR applications. Furthermore, the proliferation of wireless mobile devices such as smartphones and tablets is also introducing AR into the mobile domain.

Design/methodology/approach

This discussion paper gives an insight into the different potential fields of application of AR and eventually proposes an AR application that offers a completely different learning experience to learners. This AR mobile application will not only provide learners with supplementary information but will also assist lecturers in their teaching process. Certain concepts in computer science at the tertiary level are at times difficult for learners to understand using the traditional classroom approach. Through the AR application developed, learners are able to see what is happening and experience a different form of learning, where the focus is more on “learning by doing” and on the ability to visualize the complete set of steps involved in a particular operation. Finally, what is proposed is a generic framework/process for the development of AR applications for learning purposes.

Findings

The AR application developed and tested has proved helpful in understanding complex concepts of computer science that average students have much difficulty grasping. Through AR, learning has been brought to a new dimension, where students can easily visualize what is happening and understand complex concepts. The proposed low-cost system can track and detect both markerless and marker-based images. A number of experiments have also been carried out to determine a set of best practices for the development and use of such AR applications.

Originality/value

Learners have been able to have a more interactive and enriching learning experience through two-dimensional and three-dimensional digital augmentations. The AR mobile application enhances the cognitive skills of learners by enabling them to scan images from printed materials with their smartphones; informative digital augmentation is then overlaid in real time on the mobile screen, with the image preview still in the background.

Details

Interactive Technology and Smart Education, vol. 13 no. 2
Type: Research Article
ISSN: 1741-5659

Article
Publication date: 17 January 2018

Mohamed Zaher, David Greenwood and Mohamed Marzouk


Abstract

Purpose

The purpose of this paper is to facilitate the process of monitoring construction projects. Classic practice for construction progress tracking relies on paper reports, which entails a serious amount of manual data collection as well as the effort of imagining the actual progress from the paperwork.

Design/methodology/approach

This paper presents a new methodology for monitoring construction progress using smartphones. This is done by proposing a new system consisting of a newly-developed application named “BIM-U” and a mobile augmented reality (AR) channel named “BIM-Phase”. “BIM-U” is an Android application that allows the end-user to update the progress of activities onsite. These data are used to update the project’s 4D model enhanced with different cost parameters such as earned value, actual cost and planned value. The “BIM-Phase” application is a mobile AR channel that is used during construction phase through implementing a 4D “as-planned” phased model integrated with an augmented video showing real or planned progress.

Findings

The results from the project are then analysed and assessed to anticipate the potential of these and similar techniques for tracking time and cost on construction projects.

Originality/value

The proposed system through “BIM-U” and “BIM Phase” exploits the potential of mobile applications and AR in construction through the use of handheld mobile devices to offer new possibilities for measuring and monitoring work progress using building information modelling.

Details

Construction Innovation, vol. 18 no. 2
Type: Research Article
ISSN: 1471-4175

Article
Publication date: 10 November 2020

Clement Onime, James Uhomoibhi, Hui Wang and Mattia Santachiara

Abstract

Purpose

This paper presents a reclassification of markers for mixed reality environments that is also applicable to the use of markers in robot navigation systems and 3D modelling. In the case of Augmented Reality (AR) mixed reality environments, markers are used to integrate computer generated (virtual) objects into a predominantly real world, while in Augmented Virtuality (AV) mixed reality environments, the goal is to integrate real objects into a predominantly virtual (computer generated) world. Apart from AR/AV classifications, mixed reality environments have also been classified by reality; output technology/display devices; immersiveness as well as by visibility of markers.

Design/methodology/approach

The approach adopted consists of presenting six existing classifications of mixed reality environments and then extending them to define the new categories of abstract, blended, virtual, augmented, active and smart markers. This is supported with results/examples taken from the joint Mixed Augmented and Virtual Reality Laboratory (MAVRLAB) of Ulster University, Belfast, Northern Ireland; the Abdus Salam International Centre for Theoretical Physics (ICTP), Trieste, Italy; and Santasco SrL, Reggio Emilia/Milan, Italy.

Findings

Existing classifications of markers and mixed reality environments are mainly binary in nature and do not adequately capture the contextual relationship between markers and their use and application. The reclassification of markers into abstract, blended and virtual categories captures the context for simple uses and applications, while the categories of augmented, active and smart markers capture the relationships for enhanced or more complex uses of markers. The new classifications can improve the definitions of existing simple marker and markerless mixed reality environments, as well as support more complex features within mixed reality environments such as co-location of objects, advanced interactivity and personalised user experiences.

Research limitations/implications

It is thought that applications and devices in mixed reality environments, when properly developed and deployed, enhance the real environment by making invisible information visible to the user. The current work only marginally covers the use of Internet of Things (IoT) devices in mixed reality environments, as well as potential implications for robot navigation systems and 3D modelling.

Practical implications

The use of these reclassifications enables researchers, developers and users of mixed reality environments to make informed decisions and select the best tools and environment for their respective applications, while conveying information with additional clarity and accuracy. The development and application of more complex markers would contribute substantially to extending current knowledge and to developing applications that positively impact entertainment, business and health while minimizing costs and maximizing benefits.

Originality/value

The originality of this paper lies in the approach adopted in reclassifying markers. This is supported with results and work carried out at the Mixed Augmented and Virtual Reality Laboratory (MAVRLAB) of Ulster University, Belfast, UK; the Abdus Salam International Centre for Theoretical Physics (ICTP), Trieste, Italy; and Santasco SrL, Reggio Emilia/Milan, Italy. The value of the present research lies in the definitions of the new categories as well as in the discussion of how they improve mixed reality environments and applications, especially in the health and education sectors.

Details

The International Journal of Information and Learning Technology, vol. 38 no. 1
Type: Research Article
ISSN: 2056-4880

Article
Publication date: 20 October 2014

Ping Zhang, Guanglong Du and Di Li

Abstract

Purpose

The aim of this paper is to present a novel methodology that incorporates Camshift, Kalman filters (KFs) and adaptive multi-space transformation (AMT) for a human-robot interface that brings human intelligence into teleoperation.

Design/methodology/approach

In the proposed method, an inertial measurement unit is used to measure the orientation of the human hand, and a Camshift algorithm is used to track the human hand using a three-dimensional camera. Although the location and orientation of the human hand can be obtained from the two sensors, the measurement error increases over time due to device noise and tracking errors. KFs are therefore used to estimate the location and orientation of the human hand. Moreover, owing to perceptive and motor limitations, it is hard for a human operator to carry out high-precision operations. An AMT method is proposed to assist the operator in improving accuracy and reliability when determining the pose of the robot.
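
As a minimal illustration of the filtering step, here is a scalar Kalman filter with a constant-position model — a deliberately simplified stand-in for the paper's filters, which fuse IMU orientation and Camshift position in full 3D; the noise variances `q` and `r` are assumed values, not the paper's:

```python
class ScalarKalman:
    """1-D Kalman filter: smooth a noisy, slowly varying measurement."""
    def __init__(self, q=1e-3, r=1e-1, x0=0.0, p0=1.0):
        self.q, self.r = q, r    # process and measurement noise variances
        self.x, self.p = x0, p0  # state estimate and its variance

    def update(self, z):
        # predict: state assumed unchanged, uncertainty grows by process noise
        self.p += self.q
        # correct: blend the prediction with the noisy measurement z
        k = self.p / (self.p + self.r)  # Kalman gain in [0, 1]
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x
```

Fed a stream of noisy hand positions, the estimate converges toward the true value while damping sensor jitter; the same recursion, in vector form, underlies the fusion of camera and IMU measurements.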

Findings

The experimental results show that this method would not hinder most natural human-limb motion and allows the operator to concentrate on his/her own task. Compared with the non-contacting marker-less method (Kofman et al., 2007), this method proves more accurate and stable.

Originality/value

The human-robot interface system was experimentally verified in a laboratory environment, and the results indicate that such a system can complete high-precision manipulation efficiently.

Details

Industrial Robot: An International Journal, vol. 41 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 11 June 2019

Muhammad Yahya, Jawad Ali Shah, Kushsairy Abdul Kadir, Zulkhairi M. Yusof, Sheroz Khan and Arif Warsi


Abstract

Purpose

Motion capture (MoCap) systems have been used to measure human body segments in several applications, including film special effects, health care, outer-space and under-water navigation systems, sea-water exploration pursuits, human-machine interaction and learning software to help teachers of sign language. The purpose of this paper is to help researchers select a specific MoCap system for various applications and to support the development of new algorithms related to upper limb motion.

Design/methodology/approach

This paper provides an overview of different sensors used in MoCap and techniques used for estimating human upper limb motion.

Findings

Existing MoCap systems suffer from several issues depending on the type of MoCap used. These include drift and placement of inertial sensors, occlusion and jitter in Kinect, noise in electromyography signals, and, in multiple-camera systems, the requirement of a well-structured, calibrated environment and the time-consuming task of placing markers.

Originality/value

This paper outlines the issues and challenges in MoCap systems for measuring human upper limb motion and provides an overview of techniques to overcome them.

Details

Sensor Review, vol. 39 no. 4
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 30 April 2021

Vishak Dudhee and Vladimir Vukovic

Abstract

Purpose

The possibility of integrating building information in an augmented reality (AR) environment provides an effective solution to all phases of a building's lifecycle. This paper explores the integration of building information modelling (BIM) and AR to effectively visualise building information models in an AR environment and evaluates the currently available AR tools.

Design/methodology/approach

A BIM model of a selected office room was created and superimposed onto the actual physical space using two different AR devices and four different AR applications. The superimposing techniques, accuracy and level of information that can be visualised were then investigated by performing a walk-through analysis.

Findings

From the investigation, it can be concluded that model positioning can be inaccurate depending on the superimposing method used and the AR device. Moreover, using the currently available techniques, only static building information can be superimposed and visualised in AR, showing a need to integrate data from Internet of Things (IoT) sensors into the current BIM-AR processes to allow visualisation of accurate and high-quality operational building information.

Originality/value

A practical process and method for visualising and superimposing BIM models in an AR environment have been described. Recommendations to improve superimposing accuracy are provided. The assessment of type, quality and level of detail that can be visualised indicates the areas that need improvement to increase the effectiveness of building information's visualisation in AR.

Details

Smart and Sustainable Built Environment, vol. 12 no. 4
Type: Research Article
ISSN: 2046-6099

Article
Publication date: 1 January 2006

Yan Pang, Andrew Y.C. Nee, Soh Khim Ong, Miaolong Yuan and Kamal Youcef‐Toumi


Abstract

Purpose

This paper aims to apply augmented reality (AR) technology to assembly design in the early design stage. A proof-of-concept system with an AR interface is developed.

Design/methodology/approach

Through the AR interface, designers can design the assembly on the real assembly platform. The system helps users design assembly features that provide proper part-part constraints in the early design stage. The virtual assembly features are rendered on the real assembly platform using AR registration techniques. Newly evaluated assembly parts can be generated in the AR interface and assembled onto the assembly platform through assembly features. A model-based collision detection technique is implemented for assembly constraint evaluation.
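
The abstract does not detail the collision detection scheme; a broad-phase check of the kind that typically underlies model-based collision detection is the axis-aligned bounding-box overlap test (illustrative code, not the paper's implementation):

```python
def aabb_intersect(box_a, box_b):
    """Return True if two axis-aligned bounding boxes overlap.
    Each box is ((xmin, ymin, zmin), (xmax, ymax, zmax))."""
    (a_min, a_max), (b_min, b_max) = box_a, box_b
    # boxes overlap iff their extents overlap on every one of the 3 axes
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i]
               for i in range(3))
```

In practice a positive broad-phase hit would be followed by an exact mesh-level test before flagging an assembly constraint violation.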

Findings

With an AR interface, it is possible to combine some of the benefits of both physical and virtual prototyping (VP). The AR environment can save considerable computational resources compared to a totally virtual environment. Working on a real assembly platform, designers have a more realistic feel and the ability to design an assembly in a more intuitive way.

Research limitations/implications

More interaction tools need to be developed to support the complex assembly design efficiently.

Practical implications

The presented system encourages designers to consider the assembly issues in the early design stage. The primitive 3D models of assembly parts with proper part‐part constraints are generated using the system before doing detailed geometry design.

Originality/value

A new markerless registration approach for AR systems is presented. This generic approach can also be used for other AR applications.

Details

Assembly Automation, vol. 26 no. 1
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 23 September 2013

L.X. Ng, Z.B. Wang, S.K. Ong and A.Y.C. Nee


Abstract

Purpose

The purpose of this paper is to present a methodology that integrates design and assembly planning in an augmented reality (AR) environment. Intuitive bare-hand interactions (BHIs) and a combination of virtual and real objects are used to perform design and assembly tasks. Ergonomics and other assembly factors are analysed during assembly evaluation.

Design/methodology/approach

An AR design and assembly (ARDnA) system has been developed to implement the proposed methodology. For design generation, 3D models are created and combined together like building blocks, taking into account the product assembly in the early design stage. Detailed design can be performed on the components and manual assembly process is simulated to evaluate the assembly design.

Findings

A case study of the design and assembly of a toy car is conducted to demonstrate the application of the methodology and system.

Research limitations/implications

The system allows users to consider the assembly of a product when generating the design of its components. BHI allows users to create and interact with the virtual models with their hands. Assembly evaluation is more realistic and takes ergonomics issues during assembly into consideration.

Originality/value

The system synthesizes AR, BHI and CAD software to provide an integrated approach for design and assembly planning, intuitive and realistic interaction with virtual models, and holistic assembly evaluation.
