Search results

1 – 10 of over 1000
Article
Publication date: 26 October 2018

Tugrul Oktay, Harun Celik and Ilke Turkmen

The purpose of this paper is to examine the success of constrained control on reducing motion blur which occurs as a result of helicopter vibration.

Abstract

Purpose

The purpose of this paper is to examine the success of constrained control on reducing motion blur which occurs as a result of helicopter vibration.

Design/methodology/approach

Constrained controllers are designed to reduce the motion blur on images taken by helicopter. Helicopter vibrations under tight and soft constrained controllers are modeled and added to images to show the performance of the controllers in reducing blur.
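
As an illustration of the vibration-to-blur model this abstract describes, the following minimal Python sketch treats camera vibration as a motion-blur kernel and applies it to an image. The sinusoidal vibration, its amplitude/frequency values and the kernel size are assumptions for illustration, not parameters from the paper.

```python
# Minimal sketch (assumed sinusoidal vibration, illustrative parameters):
# trace the camera displacement during exposure, accumulate it into a blur
# kernel, and convolve the kernel with a sharp image.
import numpy as np
from scipy.signal import convolve2d

def vibration_blur_kernel(amplitude_px=4.0, freq_hz=20.0, exposure_s=0.05,
                          size=15, samples=200):
    """Accumulate the camera path during exposure into a normalized kernel."""
    t = np.linspace(0.0, exposure_s, samples)
    dy = amplitude_px * np.sin(2.0 * np.pi * freq_hz * t)  # vertical shake
    dx = np.zeros_like(dy)
    kernel = np.zeros((size, size))
    c = size // 2
    for x, y in zip(dx, dy):
        ix, iy = int(round(c + x)), int(round(c + y))
        if 0 <= ix < size and 0 <= iy < size:
            kernel[iy, ix] += 1.0
    return kernel / kernel.sum()

def blur_image(image, kernel):
    """Convolve a grayscale image with the vibration kernel."""
    return convolve2d(image, kernel, mode="same", boundary="symm")

# A tighter constrained controller corresponds to a smaller vibration
# amplitude, hence a more compact kernel and less blur.
sharp = np.random.rand(128, 128)
soft = blur_image(sharp, vibration_blur_kernel(amplitude_px=6.0))
tight = blur_image(sharp, vibration_blur_kernel(amplitude_px=1.0))
```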

Findings

The blur caused by vibration can be reduced via constrained control of the helicopter.

Research limitations/implications

The motion of the camera is modeled and assumed to be the same as the motion of the helicopter. In the image-exposure model, image noise is neglected, and blur is considered the only distorting effect on the image.

Practical implications

Tighter constrained controllers can be implemented so that helicopters capture higher-quality images.

Social implications

Recently, aerial vehicles have been widely used for aerial photography. Images taken by helicopters mostly suffer from motion blur. Reducing motion blur enables users to take higher-quality images with helicopters.

Originality/value

Helicopter control is performed to reduce motion blur on images for the first time. A control-oriented, physics-based helicopter model is used. The helicopter vibration that causes motion blur is modeled as a blur kernel to show its effect on the captured images. Tight and soft constrained controllers are designed and compared to demonstrate their performance in reducing motion blur. It is shown that images taken by a helicopter can be protected from motion blur by controlling the helicopter tightly.

Details

Aircraft Engineering and Aerospace Technology, vol. 90 no. 9
Type: Research Article
ISSN: 1748-8842

Article
Publication date: 11 February 2021

Mahdi Jampour, Amin KarimiSardar and Hossein Rezaei Estakhroyeh

The purpose of this study is to design, program and implement an intelligent robot for shelf-reading. An essential task in library maintenance is shelf-reading, which refers to…

Abstract

Purpose

The purpose of this study is to design, program and implement an intelligent robot for shelf-reading. An essential task in library maintenance is shelf-reading, which refers to the process of checking the disciplines of books based on their call numbers to ensure that they are correctly shelved. Shelf-reading is a routine yet challenging task for librarians, as it involves checking the call numbers of thousands of books promptly.

Design/methodology/approach

Leveraging the strength of autonomous robots in handling repetitive tasks, this paper introduces a novel vision-based shelf-reader robot, called Pars, and demonstrates its effectiveness in accomplishing shelf-reading tasks. This paper also proposes a novel supervised approach to power the vision system of Pars, allowing it to handle motion blur in images captured while it moves. An approach based on Faster R-CNN is also incorporated into the vision system, allowing the robot to efficiently detect the region of interest for retrieving a book’s information.
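
To make the Faster R-CNN step concrete, the Python sketch below uses torchvision's off-the-shelf detector to propose regions of interest on a camera frame. This is only a stand-in: the COCO-pretrained weights, the score threshold and the dummy frame are assumptions, and the authors' model would be fine-tuned on barcode/call-number regions.

```python
# Minimal sketch (assumptions, not the authors' trained model): region-of-
# interest detection with a generic torchvision Faster R-CNN.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_rois(frame_chw, score_thresh=0.5):
    """Return boxes (x1, y1, x2, y2) whose confidence exceeds the threshold."""
    with torch.no_grad():
        out = model([frame_chw])[0]
    keep = out["scores"] > score_thresh
    return out["boxes"][keep]

# Usage with a dummy frame; on the robot, the frame would come from one of
# the shelf-facing cameras and the crops would go on to barcode decoding.
frame = torch.rand(3, 480, 640)
boxes = detect_rois(frame)
```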

Findings

This paper evaluated the robot’s performance in a library with 120,000 books and discovered problems such as missing and misplaced books. In addition, this paper introduces a new, challenging data set of blurred barcodes that is made freely and publicly available for similar research studies.

Originality/value

The robot is equipped with six parallel cameras, which enable it to check books and decide its moving paths. Through its vision-based system, it is capable of routing and tracking paths between bookcases in a library, and it can also turn around bends. Moreover, Pars addresses the blurred barcodes that may appear because of its motion.

Details

Industrial Robot: the international journal of robotics research and application, vol. 48 no. 5
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 30 August 2021

Jinchao Huang

Multi-domain convolutional neural network (MDCNN) model has been widely used in object recognition and tracking in the field of computer vision. However, if the objects to be…

Abstract

Purpose

The multi-domain convolutional neural network (MDCNN) model has been widely used in object recognition and tracking in the field of computer vision. However, if the objects to be tracked move rapidly or the appearances of moving objects vary dramatically, the conventional MDCNN model suffers from model drift. To solve this problem of tracking rapidly moving objects in limiting environments with the MDCNN model, this paper proposes an auto-attentional mechanism-based MDCNN (AA-MDCNN) model for tracking rapidly moving and changing objects under limiting environments.

Design/methodology/approach

First, to distinguish the foreground object from the background and other similar objects, the auto-attentional mechanism is used to selectively aggregate the weighted summation of all feature maps so that similar features become related to each other. Then, the bidirectional gated recurrent unit (Bi-GRU) architecture is used to integrate all the feature maps and selectively emphasize the importance of the correlated feature maps. Finally, the final feature map is obtained by fusing the above two feature maps for object tracking. In addition, a composite loss function is constructed to handle the tracking of similar sequences with different attributes, which is difficult for the conventional MDCNN model.
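
One possible reading of this fusion step is sketched below in PyTorch: multi-head self-attention over spatial positions, a Bi-GRU over the same tokens, and fusion of the two outputs by addition. The layer sizes, the token layout and the fusion-by-addition choice are assumptions, not the authors' architecture.

```python
# Minimal sketch (an interpretation of the abstract, not the released model).
import torch
import torch.nn as nn

class AutoAttentionFusion(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=4,
                                          batch_first=True)
        self.bigru = nn.GRU(input_size=channels, hidden_size=channels // 2,
                            bidirectional=True, batch_first=True)

    def forward(self, feat):                      # feat: (B, C, H, W)
        b, c, h, w = feat.shape
        seq = feat.flatten(2).transpose(1, 2)     # (B, H*W, C) spatial tokens
        attn_out, _ = self.attn(seq, seq, seq)    # weighted sum over positions
        gru_out, _ = self.bigru(seq)              # correlation across feature maps
        fused = attn_out + gru_out                # fuse the two feature maps
        return fused.transpose(1, 2).reshape(b, c, h, w)

# Usage on a dummy backbone feature map; output keeps the input shape.
x = torch.rand(2, 64, 16, 16)
y = AutoAttentionFusion(64)(x)
```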

Findings

To validate the effectiveness and feasibility of the proposed AA-MDCNN model, this paper used the ImageNet-Vid dataset to train the object tracking model and the OTB-50 dataset to validate it. Experimental results show that adding the auto-attentional mechanism improves the accuracy rate by 2.75% and the success rate by 2.41%, respectively. In addition, the authors selected six complex tracking scenarios from the OTB-50 dataset; across eleven attributes, the proposed AA-MDCNN model outperformed the comparative models on nine. Moreover, except for the scenario of multiple objects moving together, the proposed AA-MDCNN model handled the majority of rapidly moving object tracking scenarios and outperformed the comparative models on such complex scenarios.

Originality/value

This paper introduced the auto-attentional mechanism into the MDCNN model and adopted the Bi-GRU architecture to extract key features. With the proposed AA-MDCNN model, rapid object tracking under complex backgrounds, motion blur and occlusion is more effective, and the model is expected to be further applied to rapid object tracking in the real world.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 15 no. 1
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 6 June 2019

Shuang-Shuang Liu

The conventional pedestrian detection algorithms lack scale sensitivity. The purpose of this paper is to propose a novel algorithm of self-adaptive scale pedestrian detection…

Abstract

Purpose

The conventional pedestrian detection algorithms lack scale sensitivity. The purpose of this paper is to propose a novel self-adaptive scale pedestrian detection algorithm, based on a deep residual network (DRN), to address this shortcoming.

Design/methodology/approach

First, the “Edge boxes” algorithm is introduced to extract regions of interest from pedestrian images. Then, the extracted bounding boxes are fed into two different DRNs: a large-scale DRN and a small-scale DRN. The height of the bounding boxes is used to classify the pedestrian results and to regress the bounding boxes to the extent of the pedestrian. Finally, a weighted self-adaptive scale function, which combines the large-scale and small-scale results, is designed for the final pedestrian detection.
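
The abstract does not specify the form of the weighted self-adaptive scale function, so the Python sketch below only illustrates the idea: blending the two DRNs' scores with a weight that depends on bounding-box height. The cutoff values and the linear ramp are assumptions for illustration.

```python
# Minimal sketch (illustrative weighting, not the paper's function): fuse
# large-scale and small-scale detector scores based on bounding-box height.
def fuse_scores(score_large, score_small, box_height_px,
                small_cutoff=50.0, large_cutoff=120.0):
    """Blend the two detectors; tall boxes trust the large-scale DRN more."""
    w = (box_height_px - small_cutoff) / (large_cutoff - small_cutoff)
    w = max(0.0, min(1.0, w))                 # clamp to [0, 1]
    return w * score_large + (1.0 - w) * score_small

# Usage: a 40 px pedestrian relies on the small-scale DRN,
# a 150 px pedestrian on the large-scale DRN.
print(fuse_scores(0.9, 0.6, 40.0))    # -> 0.6
print(fuse_scores(0.9, 0.6, 150.0))   # -> 0.9
```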

Findings

To validate the effectiveness and feasibility of the proposed algorithm, comparison experiments were conducted on the common pedestrian detection data sets Caltech, INRIA, ETH and KITTI. Experimental results show that the proposed algorithm adapts to the various scales of pedestrians. For hard-to-detect small-scale pedestrians, the proposed algorithm improves the accuracy and robustness of detection.

Originality/value

By applying different models to different scales of pedestrians and combining their outputs with the weighted calculation function, the proposed algorithm improves accuracy and robustness across pedestrian scales.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 12 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 6 May 2021

Zhe Wang, Xisheng Li, Xiaojuan Zhang, Yanru Bai and Chengcai Zheng

This paper models the blind image deblurring that arises when a camera undergoes ego-motion while observing a static and close scene. In particular, this paper aims to detail how the…

Abstract

Purpose

This paper models the blind image deblurring that arises when a camera undergoes ego-motion while observing a static and close scene. In particular, it details how the blurry image can be restored using a sequence of linear point spread function (PSF) models derived from the camera’s accurate 6-degree-of-freedom (DOF) path during the long exposure time.

Design/methodology/approach

The approach builds on two existing techniques, namely, estimation of the PSF and blind image deconvolution. Based on online, short-period inertial measurement unit (IMU) self-calibration, the motion path is discretized into a sequence of uniform-speed 3-DOF rectilinear motions, which are united with a 3-DOF rotational motion to form a discrete 6-DOF camera path. The PSFs are evaluated along this discrete path and then combined with the blurry image for restoration through deconvolution.
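
As a rough illustration of the path-to-PSF-to-restoration chain, the Python sketch below accumulates a discretized camera path into a blur kernel and restores a blurry image with Wiener deconvolution from scikit-image. The toy in-plane path, the kernel size and the regularization balance are assumptions; the paper's full 6-DOF geometry and IMU calibration are not reproduced.

```python
# Minimal sketch (assumed toy geometry, not the paper's 6-DOF model).
import numpy as np
from skimage.restoration import wiener

def psf_from_path(displacements_px, size=21):
    """Accumulate projected in-plane displacements into a normalized kernel."""
    psf = np.zeros((size, size))
    c = size // 2
    for dx, dy in displacements_px:
        ix, iy = int(round(c + dx)), int(round(c + dy))
        if 0 <= ix < size and 0 <= iy < size:
            psf[iy, ix] += 1.0
    return psf / psf.sum()

# Assumed uniform-speed rectilinear segment, e.g. integrated from IMU data.
path = [(0.2 * k, 0.1 * k) for k in range(20)]
psf = psf_from_path(path)

blurry = np.random.rand(128, 128)          # stand-in for the captured image
restored = wiener(blurry, psf, balance=0.1)
```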

Findings

This paper describes how to build a hardware attachment composed of a consumer camera, an inexpensive IMU and a 3-DOF motion mechanism, which, to the best of the authors’ knowledge, has not been done before, together with experimental results demonstrating its overall effectiveness.

Originality/value

First, the paper proposes that a high-precision 6-DOF motion platform periodically adjust the speed of a three-axis rotational motion and a three-axis rectilinear motion over a short time to compensate for the biases of the gyroscope and the accelerometer. Second, this paper establishes a model of 6-DOF motion that emphasizes rotational motion, translational motion and scene-depth motion. Third, this paper proposes a novel discrete-path model in which the motion during the long exposure time is discretized at a uniform speed, from which a sequence of PSFs is estimated.

Details

Sensor Review, vol. 41 no. 2
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 24 September 2019

Erliang Yao, Hexin Zhang, Haitao Song and Guoliang Zhang

To realize stable and precise localization in the dynamic environments, the authors propose a fast and robust visual odometry (VO) approach with a low-cost Inertial Measurement…

Abstract

Purpose

To realize stable and precise localization in the dynamic environments, the authors propose a fast and robust visual odometry (VO) approach with a low-cost Inertial Measurement Unit (IMU) in this study.

Design/methodology/approach

The proposed VO incorporates the direct method with the indirect method to track the features and to optimize the camera pose. It initializes the positions of tracked pixels with the IMU information. Besides, the tracked pixels are refined by minimizing the photometric errors. Due to the small convergence radius of the indirect method, the dynamic pixels are rejected. Subsequently, the camera pose is optimized by minimizing the reprojection errors. The frames with little dynamic information are selected to create keyframes. Finally, the local bundle adjustment is performed to refine the poses of the keyframes and the positions of 3-D points.
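
As a rough illustration of the final pose-optimization step, the Python sketch below minimizes reprojection errors for a single camera pose over synthetic data. The pinhole intrinsics, the toy points and the use of scipy's least_squares are assumptions; the authors' full pipeline (IMU initialization, photometric refinement, dynamic-pixel rejection, keyframing and local bundle adjustment) is not reproduced.

```python
# Minimal sketch (assumed pinhole model and toy data, not the authors' system).
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])  # assumed intrinsics

def project(points_w, rvec, tvec):
    """Project 3-D world points with pose (rvec: rotation vector, tvec)."""
    pts_c = points_w @ R.from_rotvec(rvec).as_matrix().T + tvec
    uv = pts_c @ K.T
    return uv[:, :2] / uv[:, 2:3]

def reprojection_residuals(pose, points_w, observations_px):
    return (project(points_w, pose[:3], pose[3:]) - observations_px).ravel()

# Toy optimization: recover a small pose perturbation from synthetic tracks.
pts = np.random.uniform([-1, -1, 4], [1, 1, 8], size=(30, 3))
true_pose = np.array([0.01, -0.02, 0.0, 0.05, 0.0, 0.1])
obs = project(pts, true_pose[:3], true_pose[3:])
est = least_squares(reprojection_residuals, np.zeros(6), args=(pts, obs))
```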

Findings

The proposed VO approach is evaluated experimentally in dynamic environments with various motion types, and the results suggest that it achieves more accurate and stable localization than the conventional approach. Moreover, the proposed VO approach works well in environments with motion blur.

Originality/value

The proposed approach fuses the indirect method and the direct method with the IMU information, which improves the localization in dynamic environments significantly.

Details

Industrial Robot: the international journal of robotics research and application, vol. 46 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 19 June 2017

Michał R. Nowicki, Dominik Belter, Aleksander Kostusiak, Petr Cížek, Jan Faigl and Piotr Skrzypczyński

This paper aims to evaluate four different simultaneous localization and mapping (SLAM) systems in the context of localization of multi-legged walking robots equipped with compact…

Abstract

Purpose

This paper aims to evaluate four different simultaneous localization and mapping (SLAM) systems in the context of localization of multi-legged walking robots equipped with compact RGB-D sensors. This paper identifies problems related to in-motion data acquisition in a legged robot and evaluates the particular building blocks and concepts applied in contemporary SLAM systems against these problems. The SLAM systems are evaluated on two independent experimental set-ups, applying a well-established methodology and performance metrics.

Design/methodology/approach

Four feature-based SLAM architectures are evaluated with respect to their suitability for localization of multi-legged walking robots. The evaluation methodology is based on the computation of the absolute trajectory error (ATE) and relative pose error (RPE), which are performance metrics well-established in the robotics community. Four sequences of RGB-D frames acquired in two independent experiments using two different six-legged walking robots are used in the evaluation process.
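
For readers unfamiliar with the metrics, the Python sketch below computes the absolute trajectory error (ATE) as the position RMSE after a rigid, SVD-based alignment. This is a generic implementation of the metric, not the evaluation code used in the paper, and it assumes the two trajectories are already time-aligned and the same length.

```python
# Minimal sketch of the ATE metric (assumes time-aligned trajectories).
import numpy as np

def ate_rmse(gt_xyz, est_xyz):
    """Align the estimate to ground truth (Kabsch/SVD), return position RMSE."""
    gt_c = gt_xyz - gt_xyz.mean(axis=0)
    est_c = est_xyz - est_xyz.mean(axis=0)
    U, _, Vt = np.linalg.svd(est_c.T @ gt_c)
    Rot = Vt.T @ U.T
    if np.linalg.det(Rot) < 0:                 # avoid reflections
        Vt[-1] *= -1
        Rot = Vt.T @ U.T
    aligned = est_c @ Rot.T + gt_xyz.mean(axis=0)
    return float(np.sqrt(np.mean(np.sum((aligned - gt_xyz) ** 2, axis=1))))

# Usage with synthetic trajectories.
gt = np.cumsum(np.random.randn(100, 3) * 0.01, axis=0)
est = gt + np.random.randn(100, 3) * 0.005
print(ate_rmse(gt, est))
```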

Findings

The experiments revealed that the predominant problem characteristics of legged robots as platforms for SLAM are abrupt and unpredictable sensor motions, as well as oscillations and vibrations, which corrupt the images captured in motion. The tested adaptive gait allowed the evaluated SLAM systems to reconstruct proper trajectories. The bundle adjustment-based SLAM systems produced the best results, thanks to the use of a map, which makes it possible to establish a large number of constraints for the estimated trajectory.

Research limitations/implications

The evaluation was performed using indoor mockups of terrain. Experiments in more natural and challenging environments are envisioned as part of future research.

Practical implications

The lack of accurate self-localization methods is considered as one of the most important limitations of walking robots. Thus, the evaluation of the state-of-the-art SLAM methods on legged platforms may be useful for all researchers working on walking robots’ autonomy and their use in various applications, such as search, security, agriculture and mining.

Originality/value

The main contribution lies in the integration of state-of-the-art SLAM methods on walking robots and their thorough experimental evaluation using a well-established methodology. Moreover, a SLAM system designed especially for RGB-D sensors and real-world applications is presented in detail.

Details

Industrial Robot: An International Journal, vol. 44 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 22 January 2024

Jun Liu, Junyuan Dong, Mingming Hu and Xu Lu

Existing Simultaneous Localization and Mapping (SLAM) algorithms have been relatively well developed. However, when in complex dynamic environments, the movement of the dynamic…

Abstract

Purpose

Existing Simultaneous Localization and Mapping (SLAM) algorithms are relatively well developed. However, in complex dynamic environments, the movement of dynamic points on dynamic objects in the image affects the system’s observations, introducing biases and errors into the pose estimation and the creation of map points. The aim of this paper is to achieve higher accuracy than traditional SLAM algorithms through a semantic approach.

Design/methodology/approach

In this paper, semantic segmentation of dynamic objects is realized with a U-Net semantic segmentation network. This is followed by motion-consistency detection, using a motion detection method, to determine whether the segmented objects are actually moving in the current scene, and is combined with a motion compensation method that eliminates dynamic points and compensates the current local image, making the system robust.
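
One way to realize the motion-consistency check is sketched below in Python: feature points falling inside the segmentation mask of a potentially dynamic object are kept only if their optical-flow motion matches the dominant background motion. The array interfaces, the mean-flow background model and the pixel threshold are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (assumed interfaces, not the authors' pipeline).
import numpy as np

def filter_dynamic_points(points_px, flow, dyn_mask, thresh_px=2.0):
    """points_px: (N, 2) int pixel coords; flow: (H, W, 2); dyn_mask: (H, W) bool."""
    xs, ys = points_px[:, 0], points_px[:, 1]
    static = ~dyn_mask[ys, xs]
    if not static.any():
        return points_px[static]              # nothing reliably static this frame
    # Dominant (camera-induced) motion estimated from points outside the masks.
    background_flow = flow[ys[static], xs[static]].mean(axis=0)
    residual = np.linalg.norm(flow[ys, xs] - background_flow, axis=1)
    # Keep static points, plus masked points that move like the background.
    keep = static | (residual < thresh_px)
    return points_px[keep]

# Usage with synthetic data standing in for the U-Net mask and optical flow.
h, w = 480, 640
mask = np.zeros((h, w), dtype=bool); mask[100:200, 100:200] = True
flow = np.zeros((h, w, 2)); flow[..., 0] = 1.0          # camera pans right
pts = np.array([[50, 50], [150, 150], [300, 300]])
print(filter_dynamic_points(pts, flow, mask))
```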

Findings

Experiments comparing the effect of detecting dynamic points and removing outliers are conducted on a dynamic data set of Technische Universität München, and the results show that the absolute trajectory accuracy of this paper's method is significantly improved compared with ORB-SLAM3 and DS-SLAM.

Originality/value

In this paper, in the semantic segmentation network part, the segmentation mask is combined with dynamic point detection, elimination and compensation, which reduces the influence of dynamic objects and thus effectively improves the accuracy of localization in dynamic environments.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 22 September 2021

Laura Duarte, Mohammad Safeea and Pedro Neto

This paper proposes a novel method for human hands tracking using data from an event camera. The event camera detects changes in brightness, measuring motion, with low latency, no…

Abstract

Purpose

This paper proposes a novel method for tracking human hands using data from an event camera. The event camera detects changes in brightness, measuring motion with low latency, no motion blur, low power consumption and high dynamic range. Captured frames are analysed using lightweight algorithms reporting three-dimensional (3D) hand position data. The chosen pick-and-place scenario serves as an example input for collaborative human–robot interactions and for obstacle avoidance in human–robot safety applications.

Design/methodology/approach

Events data are pre-processed into intensity frames. The regions of interest (ROI) are defined through object-edge event activity, reducing noise. ROI features are then extracted for use in depth perception.
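
As a rough illustration of this pre-processing, the Python sketch below accumulates events into an intensity-like frame and keeps the region where event activity is high, discarding sparse noise. The event tuple format, the DAVIS-like 260x346 resolution, the block size and the activity threshold are all assumptions, not values from the paper.

```python
# Minimal sketch (assumed event format (t, x, y, polarity), illustrative thresholds).
import numpy as np

def events_to_frame(events, shape=(260, 346)):
    """Accumulate event counts per pixel over one time window."""
    frame = np.zeros(shape)
    for _, x, y, _ in events:
        frame[int(y), int(x)] += 1.0
    return frame

def roi_from_activity(frame, block=16, min_events=20):
    """Return (y0, y1, x0, x1) covering blocks with enough event activity."""
    h, w = frame.shape
    active = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            if frame[by:by + block, bx:bx + block].sum() >= min_events:
                active.append((by, bx))
    if not active:
        return None
    ys = [a[0] for a in active]
    xs = [a[1] for a in active]
    return min(ys), max(ys) + block, min(xs), max(xs) + block

# Usage with synthetic events clustered around a "hand" region.
rng = np.random.default_rng(0)
events = [(t, 160 + rng.integers(-20, 20), 120 + rng.integers(-20, 20), 1)
          for t in range(500)]
print(roi_from_activity(events_to_frame(events)))
```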

Findings

Event-based tracking of the human hand is demonstrated to be feasible, in real time and at a low computational cost. The proposed ROI-finding method reduces noise from the intensity images, achieving up to 89% data reduction relative to the original while preserving the features. The depth estimation error in relation to ground truth (measured with wearables), measured using dynamic time warping and a single event camera, is from 15 to 30 millimetres, depending on the plane in which it is measured.

Originality/value

Tracking of human hands in 3D space using data from a single event camera and lightweight algorithms to define ROI features (hand tracking in space).

Details

Sensor Review, vol. 41 no. 4
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 16 October 2018

Shaoyan Xu, Tao Wang, Congyan Lang, Songhe Feng and Yi Jin

Typical feature-matching algorithms use only unary constraints on appearances to build correspondences where little structure information is used. Ignoring structure information…

Abstract

Purpose

Typical feature-matching algorithms use only unary constraints on appearances to build correspondences where little structure information is used. Ignoring structure information makes them sensitive to various environmental perturbations. The purpose of this paper is to propose a novel graph-based method that aims to improve matching accuracy by fully exploiting the structure information.

Design/methodology/approach

Instead of viewing a frame as a simple collection of keypoints, the proposed approach organizes a frame as a graph by treating each keypoint as a vertex, where structure information is integrated in edges between vertices. Subsequently, the matching process of finding keypoint correspondence is formulated in a graph matching manner.
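
The Python sketch below illustrates this graph construction: keypoints become vertices carrying appearance descriptors, and edges between nearby keypoints carry pairwise distances as structure information. The use of networkx, k-nearest-neighbour edges and distance-only edge attributes are assumptions for illustration, not the ORB-G implementation.

```python
# Minimal sketch (an illustration of the idea, not the ORB-G implementation).
import numpy as np
import networkx as nx

def build_keypoint_graph(keypoints_xy, descriptors, k_neighbours=5):
    """Vertices carry appearance descriptors; edges carry pairwise distances."""
    g = nx.Graph()
    for i, (xy, d) in enumerate(zip(keypoints_xy, descriptors)):
        g.add_node(i, xy=xy, desc=d)
    for i, xy in enumerate(keypoints_xy):
        dists = np.linalg.norm(keypoints_xy - xy, axis=1)
        for j in np.argsort(dists)[1:k_neighbours + 1]:   # skip self
            g.add_edge(i, int(j), length=float(dists[j]))
    return g

# Usage with random keypoints and ORB-like 32-byte descriptors; matching two
# frames then becomes a graph-matching problem over these structures.
pts = np.random.rand(50, 2) * 640
desc = np.random.randint(0, 256, size=(50, 32), dtype=np.uint8)
graph = build_keypoint_graph(pts, desc)
```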

Findings

The authors compare the proposed graph-based algorithm, ORB-G, with several state-of-the-art visual simultaneous localization and mapping algorithms on three datasets. Experimental results reveal that the ORB-G algorithm provides more accurate and robust trajectories in general.

Originality/value

Instead of viewing a frame as a simple collection of keypoints, the proposed approach organizes a frame as a graph by treating each keypoint as a vertex, where structure information is integrated in edges between vertices. Subsequently, the matching process of finding keypoint correspondence is formulated in a graph matching manner.

Details

Industrial Robot: An International Journal, vol. 45 no. 5
Type: Research Article
ISSN: 0143-991X
