Search results

1 – 10 of 398
Article
Publication date: 8 June 2020

Zhe Wang, Xisheng Li, Xiaojuan Zhang, Yanru Bai and Chengcai Zheng


Abstract

Purpose

The purpose of this study is to use visual and inertial sensors to achieve real-time localization. Providing an accurate location has become a popular research topic in the field of indoor navigation. Although the complementarity of vision and inertia has been widely applied in indoor navigation, many problems remain, such as inertial sensor bias calibration, unsynchronized visual and inertial data acquisition and the large amount of stored data.

Design/methodology/approach

First, this study demonstrates that the vanishing point (VP) evaluation function improves the precision of extraction, and the nearest ground corner point (NGCP) of the adjacent frame is estimated by pre-integrating the inertial sensor. The Sequential Similarity Detection Algorithm (SSDA) and Random Sample Consensus (RANSAC) algorithms are adopted to accurately match the adjacent NGCPs within the estimated region of interest. Second, the visual pose model is established using the camera's intrinsic parameters, the VP and the NGCP, and the inertial pose model is established by pre-integration. Third, the location is calculated by fusing the visual and inertial models.
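
As a rough illustration of the pre-integration step described above, the following Python sketch (not the authors' code; the planar motion model, sample timing and pixel scale are all assumptions) integrates gyroscope and accelerometer samples over one frame interval and uses the predicted displacement to place a search window around an NGCP:

```python
import numpy as np

def preintegrate_planar(gyro_z, accel_xy, dt):
    """Pre-integrate planar gyro/accel samples into a predicted heading
    change and displacement over one camera frame interval.
    gyro_z: yaw rates (rad/s); accel_xy: Nx2 body-frame accelerations."""
    theta = 0.0          # accumulated heading change
    vel = np.zeros(2)    # accumulated velocity in the start frame
    pos = np.zeros(2)    # accumulated displacement
    for w, a in zip(gyro_z, accel_xy):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])   # body -> start frame
        pos = pos + vel * dt + 0.5 * (R @ a) * dt ** 2
        vel = vel + (R @ a) * dt
        theta += w * dt
    return theta, pos

def predicted_roi(corner_uv, pos, pixels_per_m=500.0, margin=20):
    """Predict a search window (region of interest) for the corner in the
    next frame from the pre-integrated displacement. The image-plane
    scale factor is a stand-in assumption."""
    center = np.asarray(corner_uv) + pos * pixels_per_m
    return (center - margin, center + margin)
```

Matching (SSDA/RANSAC) would then only search inside this window rather than the whole frame, which is what makes the adjacent-frame matching fast.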

Findings

In this paper, a novel method is proposed to fuse visual and inertial sensors for localization in an indoor environment. The authors describe the construction of an embedded hardware platform and compare the result with a mature method and with POSAV310.

Originality/value

This paper proposes a VP evaluation function that is used to extract the optimal intersection of multiple sets of parallel lines. To improve extraction speed for adjacent frames, the authors propose fusing the NGCP of the current frame with the calibrated pre-integration to estimate the NGCP of the next frame. The visual pose model is established using the VP and NGCP together with the calibration of the inertial sensor. This theory yields linear processing equations for the gyroscope and accelerometer from the visual and inertial pose models.

Article
Publication date: 16 April 2018

Hanieh Deilamsalehy and Timothy C. Havens


Abstract

Purpose

Estimating the pose – position and orientation – of a moving object such as a robot is a necessary task for many applications, e.g., robot navigation control, environment mapping, and medical applications such as robotic surgery. The purpose of this paper is to introduce a novel method to fuse the information from several available sensors in order to improve the estimated pose from any individual sensor and calculate a more accurate pose for the moving platform.

Design/methodology/approach

Pose estimation is usually done by collecting the data obtained from several sensors mounted on the object/platform and fusing the acquired information. Assuming that the robot is moving in a three-dimensional (3D) world, its location is completely defined by six degrees of freedom (6DOF): three angles and three position coordinates. Some 3D sensors, such as IMUs and cameras, have been widely used for 3D localization. Yet, there are other sensors, like 2D Light Detection And Ranging (LiDAR), which can give a very precise estimation in a 2D plane but they are not employed for 3D estimation since the sensor is unable to obtain the full 6DOF. However, in some applications there is a considerable amount of time in which the robot is almost moving on a plane during the time interval between two sensor readings; e.g., a ground vehicle moving on a flat surface or a drone flying at an almost constant altitude to collect visual data. In this paper a novel method using a “fuzzy inference system” is proposed that employs a 2D LiDAR in a 3D localization algorithm in order to improve the pose estimation accuracy.

Findings

The method determines the trajectory of the robot and the sensor reliability between two readings and based on this information defines the weight of the 2D sensor in the final fused pose by adjusting “extended Kalman filter” parameters. Simulation and real world experiments show that the pose estimation error can be significantly decreased using the proposed method.
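
A minimal sketch of this idea (hypothetical code, not the authors' implementation; the reliability mapping below is a crude stand-in for their fuzzy inference system) inflates the 2D sensor's measurement noise in the Kalman update as its reliability weight drops:

```python
import numpy as np

def lidar_weight(roll_rate, pitch_rate, rate_cap=0.5):
    """Stand-in for a fuzzy inference system: map out-of-plane motion
    (roll/pitch rates, rad/s) to a reliability weight in [0, 1].
    Near-planar motion -> weight ~1; strong 3D motion -> weight ~0."""
    tilt = min(np.hypot(roll_rate, pitch_rate), rate_cap)
    return 1.0 - tilt / rate_cap

def kalman_update(x, P, z, H, R_base, weight, weight_floor=1e-4):
    """Standard Kalman measurement update, with the 2D sensor's noise
    covariance inflated as its reliability weight drops."""
    R = R_base / max(weight, weight_floor)  # low weight -> large noise
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

With weight near zero the gain collapses and the 2D measurement is effectively ignored; with weight near one it is trusted at its nominal noise level.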

Originality/value

To the best of the authors’ knowledge this is the first time that a 2D LiDAR has been employed to improve the 3D pose estimation in an unknown environment without any previous knowledge. Simulation and real world experiments show that the pose estimation error can be significantly decreased using the proposed method.

Details

International Journal of Intelligent Unmanned Systems, vol. 6 no. 2
Type: Research Article
ISSN: 2049-6427


Article
Publication date: 1 March 1985

Tomas Riha


Abstract

Nobody concerned with political economy can neglect the history of economic doctrines. Structural changes in the economy and society influence economic thinking and, conversely, innovative thought structures and attitudes have almost always forced economic institutions and modes of behaviour to adjust. We learn from the history of economic doctrines how a particular theory emerged and whether, and in which environment, it could take root. We can see how a school evolves out of a common methodological perception and similar techniques of analysis, and how it has to establish itself. The interaction between unresolved problems on the one hand, and the search for better solutions or explanations on the other, leads to a change in paradigm and to the formation of new lines of reasoning. As long as the real world is subject to progress and change, the scientific search for explanation must of necessity continue.

Details

International Journal of Social Economics, vol. 12 no. 3/4/5
Type: Research Article
ISSN: 0306-8293

Article
Publication date: 8 February 2022

Yanwu Zhai, Haibo Feng, Haitao Zhou, Songyuan Zhang and Yili Fu


Abstract

Purpose

This paper aims to propose a method to solve the problem of localization and mapping of a two-wheeled inverted pendulum (TWIP) robot on the ground using the Stereo–inertial measurement unit (IMU) system. This method reparametrizes the pose according to the motion characteristics of TWIP and considers the impact of uneven ground on vision and IMU, which is more adaptable to the real environment.

Design/methodology/approach

When TWIP moves, it is constrained by the ground and swings back and forth to maintain balance. Therefore, the authors parameterize the robot pose as the SE(2) pose plus pitch, according to the motion characteristics of TWIP. However, the authors do not omit disturbances in other directions but perform error modeling, which is integrated into the visual constraints and IMU pre-integration constraints as an error term. Finally, the authors analyze the influence of the error term on the vision and IMU constraints during the optimization process. Compared with traditional algorithms, the algorithm is simpler and adapts better to the real environment.
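
The reduced parameterization can be illustrated with a short Python sketch (an illustration under stated assumptions, not the paper's code): a full homogeneous pose is assembled from planar position, yaw and the balancing pitch, with roll and height set to zero here (the paper models them as error terms instead):

```python
import numpy as np

def se2_plus_pitch_to_matrix(x, y, yaw, pitch):
    """Assemble a 4x4 homogeneous pose from the reduced SE(2)+pitch
    parameterization: planar position (x, y), heading (yaw) and the
    balancing oscillation (pitch). Roll and z are zeroed here."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])  # heading
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])  # pitch sway
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry
    T[:3, 3] = [x, y, 0.0]
    return T
```

Only four numbers are optimized per pose instead of six, which is the source of the simplification the authors describe.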

Findings

The results of indoor and outdoor experiments show that, for the TWIP robot, the method has better positioning accuracy and robustness compared with the state-of-the-art.

Originality/value

The algorithm in this paper is proposed for the localization and mapping of a TWIP robot. Unlike the traditional positioning method on SE(3), this paper parameterizes the robot pose as the SE(2) pose plus pitch according to the motion of TWIP, and motion disturbances in other directions are integrated into the visual constraints and IMU pre-integration constraints as error terms. This simplifies the optimization parameters, adapts better to the real environment and improves positioning accuracy.

Details

Industrial Robot: the international journal of robotics research and application, vol. 49 no. 6
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 1 May 1983


Abstract

In the last four years, since Volume I of this Bibliography first appeared, there has been an explosion of literature in all the main functional areas of business. This wealth of material poses problems for the researcher in management studies — and, of course, for the librarian: uncovering what has been written in any one area is not an easy task. This volume aims to help the librarian and the researcher overcome some of the immediate problems of identification of material. It is an annotated bibliography of management, drawing on the wide variety of literature produced by MCB University Press. Over the last four years, MCB University Press has produced an extensive range of books and serial publications covering most of the established and many of the developing areas of management. This volume, in conjunction with Volume I, provides a guide to all the material published so far.

Details

Management Decision, vol. 21 no. 5
Type: Research Article
ISSN: 0025-1747


Content available
Book part
Publication date: 30 July 2018


Details

Marketing Management in Turkey
Type: Book
ISBN: 978-1-78714-558-0

Article
Publication date: 14 June 2013

Bart Larivière, Herm Joosten, Edward C. Malthouse, Marcel van Birgelen, Pelin Aksoy, Werner H. Kunz and Ming‐Hui Huang


Abstract

Purpose

The purpose of this paper is to introduce the concept of Value Fusion to describe how value can emerge from the use of mobile, networked technology by consumers, firms, and entities such as non‐consumers, a firm's competitors, and others simultaneously.

Design/methodology/approach

The paper discusses the combination of characteristics of mobile devices that enable Value Fusion and discusses specific value and benefits to consumers and firms of being mobile and networked. Value Fusion is introduced and defined and set apart from related, other conceptualizations of value. Examples are provided of Value Fusion and the necessary conditions for Value Fusion to occur are discussed. Also discussed are the conditions under which the use of mobile, networked technology by consumers and firms may lead to Value Confusion instead of Value Fusion. Several research questions are proposed to further enhance the understanding and management of Value Fusion.

Findings

The combination of portable, personal, networked, textual/visual and converged characteristics of mobile devices enables firms and consumers to interact and communicate, produce and consume benefits, and create value in new ways that have not been captured by popular conceptualizations of value. These traditional conceptualizations include customer value, experiential value, customer lifetime value, and customer engagement value. Value Fusion is defined as value that can be achieved for the entire network of consumers and firms simultaneously, just by being on the mobile network. Value Fusion results from producers and consumers: individually or collectively; actively and passively; concurrently; interactively or in aggregation contributing to a mobile network; in real time; and just‐in‐time.

Originality/value

This paper synthesizes insights from the extant value literature that by and large has focused on either the customer's or the firm's perspective, but rarely blended the two.

Details

Journal of Service Management, vol. 24 no. 3
Type: Research Article
ISSN: 1757-5818


Article
Publication date: 19 June 2017

Michał R. Nowicki, Dominik Belter, Aleksander Kostusiak, Petr Cížek, Jan Faigl and Piotr Skrzypczyński


Abstract

Purpose

This paper aims to evaluate four different simultaneous localization and mapping (SLAM) systems in the context of localization of multi-legged walking robots equipped with compact RGB-D sensors. This paper identifies problems related to in-motion data acquisition in a legged robot and evaluates the particular building blocks and concepts applied in contemporary SLAM systems against these problems. The SLAM systems are evaluated on two independent experimental set-ups, applying a well-established methodology and performance metrics.

Design/methodology/approach

Four feature-based SLAM architectures are evaluated with respect to their suitability for localization of multi-legged walking robots. The evaluation methodology is based on the computation of the absolute trajectory error (ATE) and relative pose error (RPE), which are performance metrics well-established in the robotics community. Four sequences of RGB-D frames acquired in two independent experiments using two different six-legged walking robots are used in the evaluation process.
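
For readers unfamiliar with the metric, ATE can be computed roughly as follows (a generic sketch, not the evaluation code used in the paper): the estimated trajectory is rigidly aligned to ground truth and the RMSE of the remaining position differences is reported:

```python
import numpy as np

def absolute_trajectory_error(est, gt):
    """RMSE of translational differences after rigidly aligning the
    estimated trajectory to ground truth (Horn/Umeyama alignment on
    positions only). est, gt: Nx3 arrays of corresponding positions."""
    est, gt = np.asarray(est, float), np.asarray(gt, float)
    mu_e, mu_g = est.mean(0), gt.mean(0)
    H = (est - mu_e).T @ (gt - mu_g)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # rotation aligning est -> gt
    t = mu_g - R @ mu_e
    aligned = (R @ est.T).T + t
    return np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1)))
```

RPE is computed analogously but on the relative motion between consecutive (or fixed-interval) pose pairs, so it captures local drift rather than global error.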

Findings

The experiments revealed that the predominant problems of legged robots as platforms for SLAM are abrupt and unpredictable sensor motions, as well as oscillations and vibrations, which corrupt the images captured in motion. The tested adaptive gait allowed the evaluated SLAM systems to reconstruct proper trajectories. The bundle adjustment-based SLAM systems produced the best results, thanks to the use of a map, which makes it possible to establish a large number of constraints for the estimated trajectory.

Research limitations/implications

The evaluation was performed using indoor mockups of terrain. Experiments in more natural and challenging environments are envisioned as part of future research.

Practical implications

The lack of accurate self-localization methods is considered as one of the most important limitations of walking robots. Thus, the evaluation of the state-of-the-art SLAM methods on legged platforms may be useful for all researchers working on walking robots’ autonomy and their use in various applications, such as search, security, agriculture and mining.

Originality/value

The main contribution lies in the integration of state-of-the-art SLAM methods on walking robots and their thorough experimental evaluation using a well-established methodology. Moreover, a SLAM system designed especially for RGB-D sensors and real-world applications is presented in detail.

Details

Industrial Robot: An International Journal, vol. 44 no. 4
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 17 October 2016

Xianglong Kong, Wenqi Wu, Lilian Zhang, Xiaofeng He and Yujie Wang


Abstract

Purpose

This paper aims to present a method for improving the performance of the visual-inertial navigation system (VINS) by using a bio-inspired polarized light compass.

Design/methodology/approach

The measurement model of each sensor module is derived, and a robust stochastic cloning extended Kalman filter (RSC-EKF) is implemented for data fusion. This fusion framework can not only handle multiple relative and absolute measurements but also deal with outliers and sensor outages in each measurement module.
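
Two building blocks of such a filter can be sketched in Python (illustrative only; the RSC-EKF's actual state layout and thresholds are not given in the abstract): stochastic cloning augments the state so that a relative measurement between two times can be processed, and a chi-square gate rejects outlier measurements:

```python
import numpy as np

def clone_state(x, P):
    """Stochastic cloning: duplicate the current state so that a later
    relative measurement (e.g. visual odometry between two times) can be
    written as a function of both the cloned and the current state.
    The augmented covariance keeps the clone fully correlated."""
    x_aug = np.concatenate([x, x])
    P_aug = np.block([[P, P], [P, P]])
    return x_aug, P_aug

def chi2_gate(innovation, S, threshold=7.815):
    """Mahalanobis gating for robustness: accept a measurement only if
    its squared normalized innovation is below the chi-square threshold
    (7.815 is the 95% quantile for 3 degrees of freedom)."""
    d2 = float(innovation @ np.linalg.inv(S) @ innovation)
    return d2 <= threshold
```

Measurements that fail the gate (outliers, or readings during a sensor outage) are simply skipped, leaving the filter to propagate on the remaining modules.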

Findings

The paper tests the approach on data sets acquired by a land vehicle moving in different environments and compares its performance against other methods. The results demonstrate the effectiveness of the proposed method for reducing the error growth of the VINS in the long run.

Originality/value

The main contribution of this paper lies in the design and implementation of the RSC-EKF for incorporating the homemade polarized light compass into the visual-inertial navigation pipeline. The real-world tests in different environments demonstrate the effectiveness and feasibility of the proposed approach.

Details

Industrial Robot: An International Journal, vol. 43 no. 6
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 28 May 2021

Zhengtuo Wang, Yuetong Xu, Guanhua Xu, Jianzhong Fu, Jiongyan Yu and Tianyi Gu


Abstract

Purpose

In this work, the authors aim to provide a set of convenient methods for generating training data and then develop a deep learning method based on point clouds to estimate the pose of the target for robot grasping.

Design/methodology/approach

This work presents PointSimGrasp, a deep learning method on point clouds for robot grasping. In PointSimGrasp, a point cloud emulator is introduced to generate training data, and a deep learning-based pose estimation algorithm is designed. After being trained with the emulated data set, the pose estimation algorithm can estimate the pose of the target.

Findings

For the experiments, a platform is built that contains a six-axis industrial robot, a binocular structured-light sensor and a base platform with adjustable inclination. A data set containing three subsets is collected on this platform. After training with the emulated data set, PointSimGrasp is tested on the experimental data set, achieving an average translation error of about 2–3 mm and an average rotation error of about 2–5 degrees.
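
The reported errors correspond to standard pose-error metrics, which can be computed as in this generic sketch (not the paper's evaluation code):

```python
import numpy as np

def rotation_error_deg(R_est, R_gt):
    """Geodesic angle (degrees) between estimated and ground-truth
    rotation matrices, the usual metric behind a reported rotation
    error of a few degrees."""
    R_rel = R_est.T @ R_gt
    cos_a = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_a))

def translation_error_mm(t_est, t_gt):
    """Euclidean translation error in millimetres (inputs in metres)."""
    return 1000.0 * np.linalg.norm(np.asarray(t_est) - np.asarray(t_gt))
```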

Originality/value

The contributions are as follows: first, a deep learning method on point clouds is proposed to estimate the 6D pose of the target; second, a convenient training method for the pose estimation algorithm is presented and a point cloud emulator is introduced to generate training data; finally, an experimental platform is built and PointSimGrasp is tested on it.

Details

Assembly Automation, vol. 41 no. 2
Type: Research Article
ISSN: 0144-5154

