Search results

1 – 10 of 709
Article
Publication date: 7 August 2017

Wilson E. Sakpere, Nhlanhla Boyfriend Wilton Mlitwa and Michael Adeyeye Oshin

Abstract

Purpose

This research aims to provide interventions that alleviate usability challenges and strengthen overall accuracy and navigation effectiveness in indoor and stringent environments, through experimental manipulation of the technical attributes of the positioning and navigation system.

Design/methodology/approach

The study followed a quantitative, experimental method of empirical enquiry, combined with software engineering and synthesis research methods. It further entailed three implementation processes, namely map generation, a positioning framework and a navigation service, realized in a prototype mobile navigation application built on near field communication (NFC) technology.
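
To make the positioning framework concrete, the sketch below shows one way a tag-based NFC positioning step could work: each passive tag's ID is registered against known map coordinates, so a tap yields an immediate position fix from which the navigation service can pick the next waypoint. The tag IDs, coordinates and function names are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of tag-based NFC indoor positioning (illustrative only).
# Tag IDs and coordinates are hypothetical, not the authors' data.

from typing import Dict, List, Optional, Tuple

# Survey step: each passive NFC tag is registered at known map coordinates (metres).
TAG_POSITIONS: Dict[str, Tuple[float, float]] = {
    "04:A2:3B:1C": (2.50, 7.25),   # corridor junction
    "04:A2:3B:1D": (5.00, 7.25),   # office doorway
    "04:A2:3B:1E": (5.00, 12.00),  # stairwell entrance
}

def position_from_tag(tag_id: str) -> Optional[Tuple[float, float]]:
    """Resolve a scanned tag ID to map coordinates; None if the tag is unknown."""
    return TAG_POSITIONS.get(tag_id)

def next_waypoint(current: Tuple[float, float],
                  route: List[Tuple[float, float]]) -> Optional[Tuple[float, float]]:
    """Return the first waypoint on the planned route not yet reached."""
    for waypoint in route:
        if abs(waypoint[0] - current[0]) > 0.1 or abs(waypoint[1] - current[1]) > 0.1:
            return waypoint
    return None

if __name__ == "__main__":
    fix = position_from_tag("04:A2:3B:1D")
    if fix is not None:
        print("Position fix:", fix)
        print("Head towards:", next_waypoint(fix, [(5.0, 7.25), (5.0, 12.0)]))
```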

Findings

The findings revealed that NFC, by leveraging its low-cost infrastructure of passive tags, its availability in mobile devices and the ubiquity of the mobile device itself, provided a cost-effective solution with impressive accuracy and usability. The positioning accuracy achieved was within 9 cm, and usability improved from 44 to 96 per cent based on feedback from respondents who tested the application in an indoor environment. These results showed that NFC is a viable alternative for resolving the challenges identified in previous solutions and technologies.

Research limitations/implications

The major limitation of the navigation application was the absence of real-time updates of the user's position. This can be investigated further by using NFC in a hybrid configuration with WLAN, radio-frequency identification (RFID) or Bluetooth as a cost-effective solution for real-time indoor positioning, given their coverage and existing infrastructure. The hybrid positioning model, which merges two or more techniques or technologies, is becoming more popular and will improve accuracy, robustness and usability. In addition, it will balance complexity, compensate for the limitations of the individual technologies and achieve real-time mobile indoor navigation. Although the presence of WLAN, RFID and Bluetooth technologies is likely to result in system complexity and high cost, NFC will reduce the system's complexity and balance the trade-off.

Practical implications

Whilst limitations of existing indoor navigation technologies meant putting up with poor signal and communication capabilities, the outcomes of the NFC framework offer valuable insight, presenting new possibilities for overcoming signal quality limitations with improved turn-around time in constrained indoor spaces.

Social implications

The innovations have a direct positive social impact in that they offer new solutions for mobile communications in previously impossible terrains, such as underground platforms and densely covered spaces. With the ability to operate mobile applications without signal inhibitions, the quality of communication – and, ultimately, life opportunities – is enhanced.

Originality/value

While navigating, users face several challenges, such as infrastructure complexity, high-cost solutions, inaccuracy and poor usability. Hence, as a contribution, this paper presents a symbolic map and path architecture of one floor of the test-bed building, which was uploaded to OpenStreetMap. Furthermore, the implementation of the RFID and NFC architectures produced new insight into how to redress the limitations in challenged spaces. In addition, a prototype mobile indoor navigation application was developed and implemented, offering a novel solution to the practical problems inhibiting navigation in challenged indoor spaces – a practical contribution to the community of practice.

Details

Journal of Engineering, Design and Technology, vol. 15 no. 4
Type: Research Article
ISSN: 1726-0531

Article
Publication date: 25 September 2019

Watthanasak Jeamwatthanachai, Mike Wald and Gary Wills

Abstract

Purpose

The purpose of this paper is to validate a framework for spatial representation, also known as the spatial representation framework (SRF), which defines the spaces and building information required by people with visual impairment as a foundation of indoor maps for indoor navigation systems.

Design/methodology/approach

The SRF was first created with seven main components through a review of the relevant literature and state-of-the-art technologies, as shown in the preliminary study. This research comprised two tasks: investigating the problems and behaviors of visually impaired people (VIP) when accessing spaces and buildings, and validating the SRF, for which 45 participants were recruited (30 VIP and 15 experts).

Findings

The findings revealed a list of problems and challenges, which were used to validate and redefine the spatial representation; the framework was validated by both VIP and experts. The framework subsequently consisted of 11 components categorized into five layers, each of which is responsible for a different function.
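
As a purely illustrative aid, the sketch below shows how a layered spatial representation of this kind might be held as a data structure, with components grouped under named layers. The layer and component names used here are hypothetical placeholders; the paper's actual 11 components and five layers are not reproduced.

```python
# Illustrative sketch of a layered spatial-representation data structure.
# Layer and component names are hypothetical placeholders, not the paper's.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Component:
    name: str
    description: str

@dataclass
class Layer:
    name: str
    components: List[Component] = field(default_factory=list)

@dataclass
class SpatialRepresentation:
    layers: Dict[str, Layer] = field(default_factory=dict)

    def add(self, layer_name: str, component: Component) -> None:
        """Attach a component to the named layer, creating the layer if needed."""
        self.layers.setdefault(layer_name, Layer(layer_name)).components.append(component)

if __name__ == "__main__":
    srf = SpatialRepresentation()
    srf.add("structure", Component("corridor", "walkable path between rooms"))
    srf.add("hazards", Component("stairs", "level change requiring caution"))
    for layer in srf.layers.values():
        print(layer.name, [c.name for c in layer.components])
```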

Research limitations/implications

This framework provides the essential components required for building standard indoor maps as a foundation for indoor navigation systems for people with visual impairment.

Practical implications

This framework lays the foundation for a range of indoor-based applications by using this SRF to represent indoor spaces. Example applications include: indoor navigation by people with disabilities, robots and autonomous systems, security and surveillance, and context and spatial awareness.

Originality/value

This paper presents the validated spatial representation for indoor navigation by people with visual impairment, together with its details and description, the methodology, and the results and findings of the SRF validation.

Details

Journal of Enabling Technologies, vol. 13 no. 4
Type: Research Article
ISSN: 2398-6263

Article
Publication date: 28 May 2021

Guangbing Zhou, Jing Luo, Shugong Xu, Shunqing Zhang, Shige Meng and Kui Xiang

Abstract

Purpose

Indoor localization is a key tool for robot navigation in indoor environments. Traditionally, robot navigation depends on one sensor to perform autonomous localization. To enhance the navigation performance of mobile robots, this paper proposes a multiple data fusion (MDF) method for indoor environments.

Design/methodology/approach

Here, data from multiple sensors, i.e. an inertial measurement unit, an odometer and a laser radar, are used. An extended Kalman filter (EKF) then fuses these data, so that the mobile robot can perform autonomous localization in complex indoor environments according to the proposed EKF-based MDF method.
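
The sketch below illustrates the general shape of an EKF fusion step of this kind: a prediction from odometry/IMU data followed by a correction from a laser-derived position fix. The state model, noise matrices and measurement function are assumptions for illustration, not the authors' formulation.

```python
# Minimal EKF-style fusion sketch (illustrative; not the paper's exact model).
# State: [x, y, heading]; prediction from odometry, correction from a
# position fix (e.g. derived from laser-scan matching). Noise values are assumed.

import numpy as np

class SimpleEKF:
    def __init__(self):
        self.x = np.zeros(3)                  # [x, y, theta]
        self.P = np.eye(3) * 0.1              # state covariance
        self.Q = np.diag([0.02, 0.02, 0.01])  # process noise (assumed)
        self.R = np.diag([0.05, 0.05])        # measurement noise (assumed)

    def predict(self, v: float, omega: float, dt: float) -> None:
        """Propagate the state with a unicycle odometry model."""
        theta = self.x[2]
        self.x += np.array([v * np.cos(theta) * dt,
                            v * np.sin(theta) * dt,
                            omega * dt])
        F = np.array([[1, 0, -v * np.sin(theta) * dt],
                      [0, 1,  v * np.cos(theta) * dt],
                      [0, 0, 1]])
        self.P = F @ self.P @ F.T + self.Q

    def update_position(self, z: np.ndarray) -> None:
        """Correct with an (x, y) position measurement."""
        H = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])
        y = z - H @ self.x
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x += K @ y
        self.P = (np.eye(3) - K @ H) @ self.P

if __name__ == "__main__":
    ekf = SimpleEKF()
    ekf.predict(v=0.5, omega=0.1, dt=0.1)        # odometer/IMU step
    ekf.update_position(np.array([0.06, 0.01]))  # laser-derived fix
    print("Fused pose estimate:", ekf.x)
```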

Findings

The proposed method has been verified experimentally in different indoor environments, i.e. an office, a passageway and an exhibition hall. Experimental results show that the EKF-based MDF method achieves the best localization performance and robustness during navigation.

Originality/value

Indoor localization precision depends largely on the data collected from multiple sensors. The proposed method can fuse these data reasonably and guide the mobile robot to perform autonomous navigation (AN) in indoor environments. Therefore, the output of this paper can be used for AN in complex and unknown indoor environments.

Details

Assembly Automation, vol. 41 no. 3
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 20 June 2022

Preetha K.G., Subin K. Antony, Remesh Babu K.R., Saritha S. and Sangeetha U.

Abstract

Purpose

This paper aims to bring augmented reality (AR) into navigation systems to rectify the issues mentioned. It proposes an AR-enhanced navigation system for locating automated teller machine (ATM) counters (AR-ATM) and bank branches based on the user's choice. Upon selecting an ATM, the navigational path to the destination is drawn from the current location, so that the user can reach the ATM by the optimal path.
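
As a rough illustration of the "select an ATM, then route from the current location" step, the sketch below ranks candidate ATMs by straight-line distance from the user and picks the nearest one as the destination. The coordinates, ATM names and the final routing comment are hypothetical and not taken from the paper.

```python
# Illustrative sketch: pick the nearest ATM and hand it to a routing step.
# ATM names and coordinates are hypothetical examples.

import math
from typing import List, Tuple

def haversine_m(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371000 * math.asin(math.sqrt(h))

def nearest_atm(user: Tuple[float, float],
                atms: List[Tuple[str, Tuple[float, float]]]) -> Tuple[str, float]:
    """Return the name and distance of the closest ATM."""
    name, pos = min(atms, key=lambda item: haversine_m(user, item[1]))
    return name, haversine_m(user, pos)

if __name__ == "__main__":
    user_pos = (9.9312, 76.2673)
    candidates = [("Bank A ATM", (9.9320, 76.2680)),
                  ("Bank B ATM", (9.9290, 76.2650))]
    name, dist = nearest_atm(user_pos, candidates)
    print(f"Navigate to {name}, about {dist:.0f} m away")
    # A real app would then draw the AR path returned by its routing service.
```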

Design/methodology/approach

Traditional navigation systems require users to map the navigation view to the real-world environment as and when required and may also lead to an incorrect path because of minor differences in distance. Traditional navigation systems also do not take into consideration the ergonomics and safety of the user.

Findings

In this system, the camera is directed down the street at eye level, and the application displays the locations of ATMs and bank branches and also provides information about them, such as distance and time, through the superimposed AR object.

Originality/value

The application also provides indoor navigation, especially in multi-storeyed buildings. Experiments are performed on smartphones that support AR, and the results are promising, with no lag between the real object and the virtual object. To determine the factors that regulate the suggested AR tracking mechanism, a quantitative evaluation of the experimental data is also performed. Testing of the implemented AR-ATM from the standpoint of end users is undertaken to evaluate real-time usage comfort, and the results are extremely satisfactory.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 8 June 2020

Zhe Wang, Xisheng Li, Xiaojuan Zhang, Yanru Bai and Chengcai Zheng

Abstract

Purpose

The purpose of this study is to use visual and inertial sensors to achieve real-time localization. How to provide an accurate location has become a popular research topic in the field of indoor navigation. Although the complementarity of vision and inertia has been widely applied in indoor navigation, many problems remain, such as inertial sensor deviation calibration, unsynchronized visual and inertial data acquisition and the large amount of stored data.

Design/methodology/approach

First, this study demonstrates that the vanishing point (VP) evaluation function improves the precision of extraction, and the nearest ground corner point (NGCP) of the adjacent frame is estimated by pre-integrating the inertial sensor. The sequential similarity detection algorithm (SSDA) and random sample consensus (RANSAC) algorithms are adopted to accurately match the adjacent NGCPs in the estimated region of interest. Second, the visual pose model is established using the camera's own parameters, the VP and the NGCP, and the inertial pose model is established by pre-integration. Third, the location is calculated by fusing the visual and inertial models.
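
The sketch below gives a simplified flavour of the inertial pre-integration and fusion idea: IMU samples are integrated between camera frames to predict motion, and the resulting estimate is blended with the vision-derived one. The planar constant-bias-free model and the blending weight are assumptions, not the paper's pose models.

```python
# Illustrative sketch: pre-integrate inertial samples between camera frames,
# then blend visual and inertial position estimates. Model and weight assumed.

import numpy as np

def preintegrate(accels, gyros, dt, vel0, theta0):
    """Integrate planar accelerometer/gyroscope samples over one camera interval."""
    pos = np.zeros(2)
    vel = np.array(vel0, dtype=float)
    theta = float(theta0)
    for a_body, w in zip(accels, gyros):
        theta += w * dt
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        a_world = rot @ np.asarray(a_body, dtype=float)
        vel += a_world * dt
        pos += vel * dt
    return pos, vel, theta

def fuse(visual_pos, inertial_pos, visual_weight=0.7):
    """Simple weighted blend of visual and inertial position estimates."""
    return visual_weight * np.asarray(visual_pos) + (1 - visual_weight) * np.asarray(inertial_pos)

if __name__ == "__main__":
    accels = [(0.1, 0.0)] * 10             # 10 IMU samples between two frames
    gyros = [0.02] * 10
    inertial_pos, _, _ = preintegrate(accels, gyros, dt=0.01, vel0=(0.5, 0.0), theta0=0.0)
    visual_pos = np.array([0.052, 0.001])  # e.g. from the matched ground corner point
    print("Fused position delta:", fuse(visual_pos, inertial_pos))
```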

Findings

In this paper, a novel method is proposed to fuse visual and inertial sensors for indoor localization. The authors describe the building of an embedded hardware platform and compare the results with a mature method and POSAV310.

Originality/value

This paper proposes a VP evaluation function that is used to extract the most advantageous intersection from a plurality of parallel lines. To improve the extraction speed for adjacent frames, the authors first propose fusing the NGCP of the current frame with the calibrated pre-integration to estimate the NGCP of the next frame. The visual pose model was established using the VP and the NGCP, together with the calibration of the inertial sensor. This theory offers a linear processing equation for the gyroscope and accelerometer through the models of visual and inertial pose.

Article
Publication date: 28 November 2018

Qigao Fan, Jie Jia, Peng Pan, Hai Zhang and Yan Sun

Abstract

Purpose

This paper relates to the real-time navigation and tracking of pedestrians in a closed environment. To restrain the accumulated error of a low-cost microelectromechanical system inertial navigation system and to adapt to the real-time navigation of pedestrians at different speeds, the authors propose an improved inertial navigation system (INS)/pedestrian dead reckoning (PDR)/ultra-wideband (UWB) integrated positioning method for indoor pedestrians with foot-mounted sensors.

Design/methodology/approach

This paper proposes a self-adaptive integrated positioning algorithm that can recognize multiple gaits and realize highly accurate multi-gait pedestrian indoor positioning. First, a corresponding gait-detection method is used to detect the different gaits of pedestrians at different velocities; second, the INS/PDR/UWB integrated system is used to obtain the positioning information: the INS/UWB integrated system is used when the pedestrian moves at normal speed, and the PDR/UWB integrated system is used when the pedestrian moves at rapid speed. Finally, an adaptive Kalman filter correction method is adopted to correct system errors and improve the positioning performance of the integrated system.
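
The sketch below illustrates the gait-dependent switching idea in a heavily simplified form: a stride-frequency threshold selects between INS-based and PDR-based dead reckoning, and the chosen fix is blended with the UWB fix. The threshold, weights and simple averaging stand in for the adaptive Kalman filter and are assumptions, not the authors' parameters.

```python
# Illustrative sketch of gait-dependent switching between INS/UWB and PDR/UWB
# fusion. The stride-frequency threshold and equal weighting are assumptions.

from statistics import mean

RAPID_STRIDE_HZ = 2.0  # assumed threshold separating normal from rapid gait

def detect_gait(stride_frequency_hz: float) -> str:
    """Classify the pedestrian gait from the detected stride frequency."""
    return "rapid" if stride_frequency_hz >= RAPID_STRIDE_HZ else "normal"

def fuse_position(ins_fix, pdr_fix, uwb_fix, gait: str):
    """Blend dead-reckoning and UWB fixes; the inertial source depends on gait."""
    reckoned = pdr_fix if gait == "rapid" else ins_fix
    # Equal weighting stands in for the adaptive Kalman filter correction.
    return tuple(mean(pair) for pair in zip(reckoned, uwb_fix))

if __name__ == "__main__":
    gait = detect_gait(stride_frequency_hz=2.4)
    fix = fuse_position(ins_fix=(3.10, 4.05), pdr_fix=(3.15, 4.00),
                        uwb_fix=(3.20, 3.95), gait=gait)
    print(f"Gait: {gait}, fused fix: {fix}")
```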

Findings

The algorithm presented in this paper improves the performance of the indoor pedestrian integrated positioning system in three respects. First, in view of the different pedestrian gaits at different speeds, zero-velocity detection and stride-frequency detection are adopted in the integrated positioning system, so that the accuracy of the inertial positioning systems can be improved. Second, an attitude fusion filter is used to obtain the optimal quaternion and improve the accuracy of the INS and PDR positioning systems. Third, to handle the errors of the adaptive integrated positioning system, an adaptive filter is proposed to correct errors and improve integrated positioning accuracy and stability. The adaptive filtering algorithm can effectively restrain the divergence caused by outliers; compared with the Kalman filter (KF) algorithm, the adaptive Kalman filter (AKF) algorithm better improves the fault tolerance and precision of the integrated positioning system.

Originality/value

The INS/PDR/UWB integrated system is built to track pedestrian position and attitude. Finally, an adaptive Kalman filter is used to improve the accuracy and stability of the integrated positioning system.

Article
Publication date: 15 June 2015

Catherine Todd, Swati Mallya, Sara Majeed, Jude Rojas and Katy Naylor

Abstract

Purpose

VirtuNav is a haptic- and audio-enabled virtual reality simulator that enables persons with visual impairment to explore a 3D computer model of a real-life indoor location, such as a room or building. The purpose of this paper is to aid pre-planning and spatial awareness, so that a user can become more familiar with the environment before experiencing it in reality.

Design/methodology/approach

The system offers two unique interfaces: a free-roam interface in which the user can navigate, and an edit mode in which the administrator can manage test users and maps and retrieve test data.

Findings

System testing reveals that spatial awareness and memory mapping improve with user iterations within VirtuNav.

Research limitations/implications

VirtuNav is a research tool for investigating the familiarity users develop after repeated exposure to the simulator, to determine the extent to which haptic and/or sound cues improve a visually impaired user's ability to navigate a room or building with or without occlusion.

Social implications

The application may prove useful for greater real-world engagement: building confidence in real-world experiences and enabling persons with sight impairment to more comfortably and readily explore and interact with environments formerly unfamiliar or unattainable to them.

Originality/value

VirtuNav is developed as a practical application offering several unique features, including map design, semi-automatic 3D map reconstruction and object classification from 2D map data. Visual and haptic rendering of real-time 3D map navigation is provided, as well as automated administrative functions for shortest-path determination, actual-path comparison and performance-indicator assessment: exploration time taken and collision data.
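
As an illustration of the shortest-path determination and actual-path comparison functions mentioned above, the sketch below runs a breadth-first search over a small occupancy grid and reports a simple path-efficiency indicator. The grid, metric and function names are assumptions, not VirtuNav's implementation.

```python
# Illustrative sketch: BFS shortest path on a 2D occupancy grid, plus a ratio
# of optimal length to travelled length as a performance indicator.

from collections import deque
from typing import List, Optional, Tuple

Cell = Tuple[int, int]

def shortest_path(grid: List[List[int]], start: Cell, goal: Cell) -> Optional[List[Cell]]:
    """Breadth-first search on a grid where 0 = free and 1 = obstacle."""
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in parents:
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

def path_efficiency(actual: List[Cell], optimal: List[Cell]) -> float:
    """Performance indicator: optimal path length divided by actual path length."""
    return (len(optimal) - 1) / max(len(actual) - 1, 1)

if __name__ == "__main__":
    room = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    best = shortest_path(room, (0, 0), (2, 0))
    walked = [(0, 0), (0, 1), (0, 0), (0, 1), (0, 2),
              (1, 2), (2, 2), (2, 1), (2, 0)]  # user path with some backtracking
    print("Optimal:", best, "efficiency:", round(path_efficiency(walked, best), 2))
```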

Details

Journal of Assistive Technologies, vol. 9 no. 2
Type: Research Article
ISSN: 1754-9450

Article
Publication date: 2 January 2018

K.M. Ibrahim Khalilullah, Shunsuke Ota, Toshiyuki Yasuda and Mitsuru Jindai

Abstract

Purpose

The purpose of this study is to develop a cost-effective autonomous wheelchair robot navigation method that assists the aging population.

Design/methodology/approach

Navigation in outdoor environments is still a challenging task for an autonomous mobile robot because of their highly unstructured and varied characteristics. This study examines a complete vision-guided, real-time approach for robot navigation on urban roads based on drivable road-area detection using deep learning. During navigation, the camera takes a snapshot of the road, and the captured image is converted into an illuminant-invariant image. A deep belief neural network then takes this image as input and extracts additional discriminative abstract features, using a general-purpose learning procedure, for detection. During obstacle avoidance, the robot measures the distance to the obstacle using the estimated parameters of the calibrated camera and navigates while avoiding obstacles.
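
The sketch below shows one common form of illuminant-invariant conversion (a log-chromaticity projection with a camera-dependent alpha) that could precede the deep-learning classifier described above. The paper does not state which transform it uses, so the formula and parameter value here are assumptions.

```python
# Illustrative sketch of an illuminant-invariant grey-scale conversion used as a
# preprocessing step before drivable-road classification. The log-chromaticity
# form and the camera-dependent alpha are assumptions, not the paper's method.

import numpy as np

def illuminant_invariant(rgb: np.ndarray, alpha: float = 0.48) -> np.ndarray:
    """Map an RGB image (floats in (0, 1]) to a single illuminant-invariant channel."""
    rgb = np.clip(rgb.astype(np.float64), 1e-6, 1.0)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    ii = 0.5 + np.log(g) - alpha * np.log(b) - (1.0 - alpha) * np.log(r)
    return np.clip(ii, 0.0, 1.0)

if __name__ == "__main__":
    frame = np.random.rand(4, 4, 3)        # stand-in for a camera snapshot
    invariant = illuminant_invariant(frame)
    print(invariant.shape)                 # (4, 4): fed to the road classifier
```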

Findings

The developed method is implemented on a wheelchair robot, and it is verified by navigating the wheelchair robot on different types of urban curved roads. Navigation in real environments indicates that the wheelchair robot can move safely from one place to another. The navigation performance of the developed method and a comparison with laser range finder (LRF)-based methods were demonstrated through experiments.

Originality/value

This study develops a cost-effective navigation method by using a single camera. Additionally, it utilizes the advantages of deep learning techniques for robust classification of the drivable road area. It performs better in terms of navigation when compared to LRF-based methods in LRF-denied environments.

Details

Industrial Robot: An International Journal, vol. 45 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 29 October 2019

Ravinder Singh and Kuldeep Singh Nagla

Abstract

Purpose

The purpose of this research is to provide the necessary and resourceful information regarding range sensors for selecting the best-fit sensor for robust autonomous navigation. Autonomous navigation is an emerging segment in the field of mobile robotics in which the robot navigates its environment with a high level of autonomy and little human interaction. Sensor-based perception is a prevailing aspect of the autonomous navigation of a mobile robot, along with localization and path planning. Various range sensors are used to obtain efficient perception of the environment, but selecting the best-fit sensor to solve the navigation problem is still a vital task.

Design/methodology/approach

Autonomous navigation relies on the sensory information from various sensors, and each sensor relies on various operational parameters/characteristics for reliable functioning. This study presents a simple strategy to select the best-fit sensor based on parameters such as environment, 2D/3D navigation, accuracy, speed, environmental conditions, etc., for the reliable autonomous navigation of a mobile robot.
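
The sketch below illustrates one simple way such a best-fit selection strategy could be expressed: each sensor is scored per criterion and weighted by the application's needs. The sensor entries, criteria and weights are hypothetical examples, not the paper's comparative data.

```python
# Illustrative sketch of a weighted-score strategy for choosing a range sensor.
# Sensor entries, criteria and weights are hypothetical, not the paper's data.

from typing import Dict

SENSORS: Dict[str, Dict[str, float]] = {
    # scores normalized to 0..1 per criterion (assumed values)
    "2D LiDAR":   {"accuracy": 0.90, "cost": 0.4, "3d": 0.0, "outdoor": 0.7},
    "3D LiDAR":   {"accuracy": 0.95, "cost": 0.1, "3d": 1.0, "outdoor": 0.8},
    "Ultrasonic": {"accuracy": 0.40, "cost": 0.9, "3d": 0.0, "outdoor": 0.5},
    "RGB-D":      {"accuracy": 0.70, "cost": 0.7, "3d": 1.0, "outdoor": 0.3},
}

def best_fit(weights: Dict[str, float]) -> str:
    """Return the sensor with the highest weighted score for the given needs."""
    def score(name: str) -> float:
        return sum(weights[k] * SENSORS[name][k] for k in weights)
    return max(SENSORS, key=score)

if __name__ == "__main__":
    # Example need: indoor 3D navigation on a tight budget.
    needs = {"accuracy": 0.4, "cost": 0.3, "3d": 0.3, "outdoor": 0.0}
    print("Best-fit sensor:", best_fit(needs))
```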

Findings

This paper provides a comparative analysis of the diverse range sensors used in mobile robotics with respect to aspects such as accuracy, computational load, 2D/3D navigation, environmental conditions, etc., to select the best-fit sensors for achieving robust navigation of an autonomous mobile robot.

Originality/value

This paper provides a straightforward platform for researchers to select the best range sensor for diverse robotics applications.

Article
Publication date: 29 April 2021

Ricardo Eiris, Gilles Albeaino, Masoud Gheisari, William Benda and Randi Faris

Abstract

Purpose

The purpose of this research is to explore how to visually represent human decision-making processes during the performance of indoor building inspection flight operations using drones.

Design/methodology/approach

Data from expert pilots were collected using a virtual reality drone flight simulator. The expert pilot data were studied to inform the development of an interactive 2D representation of drone flight spatial and temporal data – InDrone. Within the InDrone platform, expert pilot data were visually encoded to characterize key pilot behaviors in terms of the pilots' approaches to viewing the inspection markers and the difficulties encountered while detecting them. The InDrone platform was evaluated using a user-centered experimental methodology focusing on two metrics: (1) how novice pilots understood the flight approaches and difficulties represented within InDrone and (2) the perceived usability of the InDrone platform.
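
As an illustration of encoding spatial and temporal flight data in 2D, the sketch below plots a top-down trajectory coloured by elapsed time with markers for inspection targets. The field names, layout and plotting choices are assumptions; this is not the InDrone implementation.

```python
# Illustrative sketch: encode flight logs as a 2D top-down plot, with time
# mapped to colour and markers for inspection targets (assumed layout).

import matplotlib.pyplot as plt
import numpy as np

def plot_flight(xs, ys, times, markers):
    """Top-down trajectory coloured by elapsed time, plus inspection markers."""
    fig, ax = plt.subplots()
    scatter = ax.scatter(xs, ys, c=times, cmap="viridis", s=10)
    fig.colorbar(scatter, ax=ax, label="elapsed time (s)")
    for mx, my in markers:
        ax.plot(mx, my, marker="x", color="red", markersize=10)
    ax.set_xlabel("x (m)")
    ax.set_ylabel("y (m)")
    ax.set_title("Indoor inspection flight (top-down view)")
    return fig

if __name__ == "__main__":
    t = np.linspace(0, 60, 200)
    xs, ys = np.cos(t / 10) * 5, np.sin(t / 10) * 3  # synthetic flight path
    fig = plot_flight(xs, ys, t, markers=[(5, 0), (-5, 0)])
    fig.savefig("flight_overview.png")
```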

Findings

The results of the study indicated that novice pilots recognized inspection markers and difficult-to-inspect building areas 63% (STD = 48%) and 75% (STD = 35%) of the time on average, respectively. Overall, the usability of InDrone scored highly, as demonstrated by the novice pilots during the flight-pattern recognition tasks, with a mean score of 77% (STD = 15%).

Originality/value

This research contributes to the definition of visual affordances that support the communication of human decision-making during drone indoor building inspection flight operations. The developed InDrone platform highlights the necessity of defining visual affordances to explore drone flight spatial and temporal data for indoor building inspections.

Details

Smart and Sustainable Built Environment, vol. 10 no. 3
Type: Research Article
ISSN: 2046-6099

1 – 10 of 709