Search results

1 – 10 of 159
Open Access
Article
Publication date: 1 February 2018

Xuhui Ye, Gongping Wu, Fei Fan, XiangYang Peng and Ke Wang

Accurate detection of the overhead ground wire under open surroundings with varying illumination is the premise of reliable line grasping with the off-line arm when the inspection…


Abstract

Purpose

Accurate detection of the overhead ground wire under open surroundings with varying illumination is the premise of reliable line grasping with the off-line arm when the inspection robot crosses obstacles automatically. This paper aims to propose an improved approach, called adaptive homomorphic filter and supervised learning (AHSL), for overhead ground wire detection.

Design/methodology/approach

First, to decrease the influence of the varying illumination caused by the open work environment of the inspection robot, an adaptive homomorphic filter is introduced to compensate for the changing illumination. Second, to represent the ground wire more effectively and to extract more powerful and discriminative information for building a binary classifier, a global and local feature fusion method followed by the supervised learning method support vector machine (SVM) is proposed.
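
As a rough illustration of the illumination-compensation stage, the sketch below applies a (non-adaptive) homomorphic filter to a single-channel image in [0, 1]; the function name, fixed gains and cutoff are placeholders for illustration, not the paper's adaptive parameter scheme.

    import numpy as np

    def homomorphic_filter(img, gamma_low=0.5, gamma_high=1.5, cutoff=30.0):
        # Work in the log domain so illumination (low frequency) and
        # reflectance (high frequency) become additive and separable.
        rows, cols = img.shape
        log_img = np.log1p(img.astype(np.float64))
        spectrum = np.fft.fftshift(np.fft.fft2(log_img))
        u = np.arange(rows) - rows / 2
        v = np.arange(cols) - cols / 2
        d2 = u[:, None] ** 2 + v[None, :] ** 2
        # Gaussian high-frequency-emphasis filter: damp illumination, boost reflectance.
        h = (gamma_high - gamma_low) * (1.0 - np.exp(-d2 / (2.0 * cutoff ** 2))) + gamma_low
        filtered = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * h)))
        return np.expm1(filtered)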

Findings

Experimental results on two self-built testing data sets A and B, which contain relatively older and relatively newer ground wires respectively, and on field ground wires show that the adaptive homomorphic filter and the global and local feature fusion method can improve the detection accuracy of the ground wire effectively. The result of the proposed method lays a solid foundation for the inspection robot grasping the ground wire by visual servoing.

Originality/value

The AHSL method achieves 80.8 per cent detection accuracy on data set A, which contains relatively older ground wires, and 85.3 per cent detection accuracy on data set B, which contains relatively newer ground wires, and the field experiment shows that the robot can detect the ground wire accurately. The performance achieved by the proposed method is state of the art under open environments with varying illumination.

Content available
Book part
Publication date: 30 July 2018

Abstract

Details

Marketing Management in Turkey
Type: Book
ISBN: 978-1-78714-558-0

Open Access
Article
Publication date: 17 July 2020

Sheryl Brahnam, Loris Nanni, Shannon McMurtrey, Alessandra Lumini, Rick Brattin, Melinda Slack and Tonya Barrier

Diagnosing pain in neonates is difficult but critical. Although approximately thirty manual pain instruments have been developed for neonatal pain diagnosis, most are complex…


Abstract

Diagnosing pain in neonates is difficult but critical. Although approximately thirty manual pain instruments have been developed for neonatal pain diagnosis, most are complex, multifactorial, and geared toward research. The goals of this work are twofold: 1) to develop a new video dataset for automatic neonatal pain detection called iCOPEvid (infant Classification Of Pain Expressions videos), and 2) to present a classification system that sets a challenging comparison performance on this dataset. The iCOPEvid dataset contains 234 videos of 49 neonates experiencing a set of noxious stimuli, a period of rest, and an acute pain stimulus. From these videos, 20-second segments are extracted and grouped into two classes: pain (49) and nopain (185), with the nopain video segments handpicked to produce a highly challenging dataset. An ensemble of twelve global and local descriptors with a Bag-of-Features approach is utilized to improve the performance of some new descriptors based on Gaussian of Local Descriptors (GOLD). The basic classifier used in the ensembles is the Support Vector Machine, and decisions are combined by sum rule. These results are compared with standard methods, some deep learning approaches, and 185 human assessments. Our best machine learning methods are shown to outperform the human judges.
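
As an informal illustration of the decision-level fusion described above (per-descriptor SVMs combined by sum rule), the sketch below trains one probabilistic SVM per descriptor matrix and sums their class scores; the data layout and scikit-learn usage are assumptions for illustration, not the authors' implementation.

    import numpy as np
    from sklearn.svm import SVC

    def train_ensemble(descriptor_matrices, labels):
        # One probabilistic SVM per descriptor representation.
        return [SVC(kernel="rbf", probability=True).fit(X, labels)
                for X in descriptor_matrices]

    def sum_rule_predict(models, descriptor_matrices):
        # Sum rule: add per-class probabilities across descriptors, pick the largest.
        scores = sum(m.predict_proba(X) for m, X in zip(models, descriptor_matrices))
        return np.argmax(scores, axis=1)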

Details

Applied Computing and Informatics, vol. 19 no. 1/2
Type: Research Article
ISSN: 2634-1964

Keywords

Open Access
Article
Publication date: 1 June 2022

Hua Zhai and Zheng Ma

An effective rail surface defect detection method is the basic guarantee for manufacturing high-quality rail. However, existing visual inspection methods have disadvantages such as…

Abstract

Purpose

An effective rail surface defect detection method is the basic guarantee for manufacturing high-quality rail. However, existing visual inspection methods have disadvantages such as a poor ability to locate the rail surface region and high sensitivity to uneven reflection. This study aims to propose a bionic rail surface defect detection method to obtain high detection accuracy of rail surface defects under uneven reflection environments.

Design/methodology/approach

In the bionic rail surface defect detection algorithm, the rail surface region is located and corrected using maximum run-length smearing (MRLS) and background difference. A saliency image is then generated to simulate the human visual system from features including local grayscale, local contrast and the edge-corner effect. Finally, the mean-shift algorithm and an adaptive threshold are used to cluster and segment the saliency image.
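
A minimal sketch of the last two stages (fusing feature cues into a saliency image and segmenting it with an adaptive threshold) is given below; the cue maps are assumed to be precomputed, Otsu's method stands in for the paper's adaptive threshold, and the mean-shift clustering step is omitted.

    import numpy as np
    import cv2

    def fuse_saliency(feature_maps):
        # Normalise each cue (e.g. local grayscale, contrast, edge-corner) to [0, 1]
        # and average them into a single saliency map.
        norm = [(m - m.min()) / (np.ptp(m) + 1e-9) for m in feature_maps]
        return np.mean(norm, axis=0)

    def segment_defects(saliency):
        img8 = (saliency * 255).astype(np.uint8)
        # Otsu chooses the threshold from the saliency histogram.
        _, mask = cv2.threshold(img8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return mask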

Findings

On the constructed rail defect data set, the bionic rail surface defect detection algorithm shows good recognition ability for rail surface defects. Pixel- and defect-level indices in the experimental results demonstrate that the detection algorithm outperforms three advanced rail defect detection algorithms and five saliency models.

Originality/value

A bionic rail surface defect detection algorithm for the production process is proposed. In particular, a method based on MRLS is introduced to extract the rail surface region, and a multifeature saliency fusion model is presented to identify rail surface defects.

Details

Sensor Review, vol. 42 no. 4
Type: Research Article
ISSN: 0260-2288

Keywords

Open Access
Article
Publication date: 13 July 2022

Jiqian Dong, Sikai Chen, Mohammad Miralinaghi, Tiantian Chen and Samuel Labi

Perception has been identified as the main cause underlying most autonomous vehicle related accidents. As the key technology in perception, deep learning (DL) based computer…

Abstract

Purpose

Perception has been identified as the main cause underlying most autonomous vehicle-related accidents. As the key technology in perception, deep learning (DL)-based computer vision models are generally considered to be black boxes owing to poor interpretability. This has exacerbated user distrust and further hindered their widespread deployment in practical use. This paper aims to develop explainable DL models for autonomous driving by jointly predicting potential driving actions with corresponding explanations. The explainable DL models can not only boost user trust in autonomy but also serve as a diagnostic approach to identify any model deficiencies or limitations during the system development phase.

Design/methodology/approach

This paper proposes an explainable end-to-end autonomous driving system based on “Transformer,” a state-of-the-art self-attention (SA)-based model. The model maps visual features from images collected by onboard cameras to potential driving actions with corresponding explanations, and aims to achieve soft attention over the image’s global features.
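
To make the self-attention idea concrete, here is a minimal single-head SA sketch in PyTorch operating on a batch of image feature tokens; the tensor shapes, projection matrices and downstream action/explanation heads are illustrative assumptions rather than the authors' architecture.

    import torch
    import torch.nn.functional as F

    def self_attention(features, w_q, w_k, w_v):
        # features: (batch, tokens, dim) global image features from a CNN backbone.
        q, k, v = features @ w_q, features @ w_k, features @ w_v
        # Soft attention: every token attends to every other token.
        weights = F.softmax(q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5), dim=-1)
        return weights @ v  # attended features feeding the action/explanation heads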

Findings

The results demonstrate the efficacy of the proposed model as it exhibits superior performance (in terms of correct prediction of actions and explanations) compared to the benchmark model by a significant margin with much lower computational cost on a public data set (BDD-OIA). From the ablation studies, the proposed SA module also outperforms other attention mechanisms in feature fusion and can generate meaningful representations for downstream prediction.

Originality/value

In the contexts of situational awareness and driver assistance, the proposed model can serve as a driving alarm system for both human-driven and autonomous vehicles because it is capable of quickly understanding/characterizing the environment and identifying any infeasible driving actions. In addition, the explanation head of the proposed model provides an additional channel for sanity checks to guarantee that the model learns the ideal causal relationships. This provision is critical in the development of autonomous systems.

Details

Journal of Intelligent and Connected Vehicles, vol. 5 no. 3
Type: Research Article
ISSN: 2399-9802

Keywords

Open Access
Article
Publication date: 16 July 2020

Loris Nanni, Stefano Ghidoni and Sheryl Brahnam

This work presents a system based on an ensemble of Convolutional Neural Networks (CNNs) and descriptors for bioimage classification that has been validated on different datasets…


Abstract

This work presents a system based on an ensemble of Convolutional Neural Networks (CNNs) and descriptors for bioimage classification that has been validated on different datasets of color images. The proposed system represents a very simple yet effective way of boosting the performance of trained CNNs by composing multiple CNNs into an ensemble and combining scores by sum rule. Several types of ensembles are considered, with different CNN topologies along with different learning parameter sets. The proposed system not only exhibits strong discriminative power but also generalizes well over multiple datasets thanks to the combination of multiple descriptors based on different feature types, both learned and handcrafted. Separate classifiers are trained for each descriptor, and the entire set of classifiers is combined by sum rule. Results show that the proposed system obtains state-of-the-art performance across four different bioimage and medical datasets. The MATLAB code of the descriptors will be available at https://github.com/LorisNanni.
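
As an informal illustration of composing trained CNNs into a sum-rule ensemble, the sketch below averages softmax scores from two torchvision classifiers; the specific topologies and pretrained weights are placeholders and do not reproduce the paper's descriptor ensemble.

    import torch
    from torchvision import models

    def ensemble_predict(image_batch):
        # Two different topologies stand in for the paper's larger ensemble.
        nets = [models.resnet50(weights="DEFAULT"), models.densenet121(weights="DEFAULT")]
        total = 0.0
        for net in nets:
            net.eval()
            with torch.no_grad():
                total = total + torch.softmax(net(image_batch), dim=1)  # sum rule
        return total.argmax(dim=1)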

Details

Applied Computing and Informatics, vol. 17 no. 1
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 5 June 2020

Zijun Jiang, Zhigang Xu, Yunchao Li, Haigen Min and Jingmei Zhou

Precise vehicle localization is a basic and critical technique for various intelligent transportation system (ITS) applications. It also needs to adapt to the complex road…


Abstract

Purpose

Precise vehicle localization is a basic and critical technique for various intelligent transportation system (ITS) applications, and it also needs to adapt to complex road environments in real time. The global positioning system and the strap-down inertial navigation system are two common techniques in the field of vehicle localization. However, the localization accuracy, reliability and real-time performance of these two techniques cannot satisfy the requirements of some critical ITS applications such as collision avoidance, vision enhancement and automatic parking. To address the problems above, this paper aims to propose a precise vehicle ego-localization method based on image matching.

Design/methodology/approach

This study included three steps. Step 1: extraction of feature points. After acquiring an image, local features in the pavement images were extracted using an improved speeded-up robust features (SURF) algorithm. Step 2: elimination of mismatched points. A random sample consensus (RANSAC) algorithm was used to eliminate mismatched points between road images and make the matched point pairs more robust. Step 3: matching of feature points and trajectory generation.
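
A compact sketch of steps 1–3 follows, using OpenCV; ORB stands in for the improved SURF (SURF requires the non-free contrib build), so treat the detector choice and parameter values as assumptions.

    import numpy as np
    import cv2

    def relative_motion(img_prev, img_curr):
        # Step 1: detect and describe local features in each pavement image.
        orb = cv2.ORB_create(nfeatures=2000)
        kp1, des1 = orb.detectAndCompute(img_prev, None)
        kp2, des2 = orb.detectAndCompute(img_curr, None)
        # Step 2: match descriptors, then let RANSAC reject mismatched pairs.
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
        M, _ = cv2.estimateAffinePartial2D(pts1, pts2, method=cv2.RANSAC)
        # Step 3: read the relative translation and rotation used to build the trajectory.
        dx, dy = M[0, 2], M[1, 2]
        theta = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
        return dx, dy, theta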

Findings

Through the matching and validation of the extracted local feature points, the relative translation and rotation offsets between two consecutive pavement images were calculated; eventually, the trajectory of the vehicle was generated.

Originality/value

The experimental results show that the studied algorithm achieves decimeter-level accuracy and fully meets the demands of lane-level positioning in critical ITS applications.

Details

Journal of Intelligent and Connected Vehicles, vol. 3 no. 2
Type: Research Article
ISSN: 2399-9802

Keywords

Open Access
Article
Publication date: 20 September 2022

Joo Hun Yoo, Hyejun Jeong, Jaehyeok Lee and Tai-Myoung Chung

This study aims to summarize the critical issues in medical federated learning and applicable solutions. Also, detailed explanations of how federated learning techniques can be…


Abstract

Purpose

This study aims to summarize the critical issues in medical federated learning and applicable solutions. Detailed explanations of how federated learning techniques can be applied to the medical field are also presented. About 80 reference studies in the field were reviewed, and the federated learning framework currently being developed by the research team is described. This paper will help researchers build an actual medical federated learning environment.

Design/methodology/approach

Since machine learning techniques emerged, more efficient analysis has become possible with large amounts of data. However, data regulations have been tightened worldwide, and the use of centralized machine learning methods has become almost infeasible. Federated learning techniques have been introduced as a solution. Even with their powerful structural advantages, unsolved challenges remain when federated learning is applied in a real medical data environment. This paper aims to summarize those challenges by category and present possible solutions.
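
For readers unfamiliar with the basic mechanics, the sketch below shows the canonical federated averaging (FedAvg) aggregation step, in which only model weights leave each site; the data layout and weighting are simplifications, not the framework the research team is developing.

    import numpy as np

    def fed_avg(client_updates, client_sizes):
        # client_updates: list of dicts mapping layer name -> numpy weight array,
        # one dict per hospital/site; raw patient data never leaves the site.
        total = float(sum(client_sizes))
        aggregated = {}
        for name in client_updates[0]:
            aggregated[name] = sum(
                (n / total) * update[name]
                for update, n in zip(client_updates, client_sizes)
            )
        return aggregated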

Findings

This paper identifies four critical categories of issues to be aware of when applying federated learning to an actual medical data environment and then provides general guidelines for building a federated learning environment as a solution.

Originality/value

Existing studies have dealt with issues such as heterogeneity problems in the federated learning environment itself, but they lack discussion of how these issues cause problems in actual working tasks. Therefore, this paper helps researchers understand federated learning issues through examples of actual medical machine learning environments.

Details

International Journal of Web Information Systems, vol. 18 no. 2/3
Type: Research Article
ISSN: 1744-0084

Keywords

Open Access
Article
Publication date: 12 August 2022

Bolin Gao, Kaiyuan Zheng, Fan Zhang, Ruiqi Su, Junying Zhang and Yimin Wu

Intelligent and connected vehicle technology is in the ascendant. High-level autonomous driving places more stringent requirements on the accuracy and reliability of environmental…

Abstract

Purpose

Intelligent and connected vehicle technology is in the ascendant. High-level autonomous driving places more stringent requirements on the accuracy and reliability of environmental perception. Existing research on multitarget tracking based on multisensor fusion mostly focuses on the vehicle perspective but, limited by the inherent defects of the vehicle sensor platform, it is difficult to comprehensively and accurately describe the surrounding environment.

Design/methodology/approach

In this paper, a multitarget tracking method based on roadside multisensor fusion is proposed, including a multisensor fusion method based on measurement-noise-adaptive Kalman filtering, a global nearest neighbor data association method based on an adaptive tracking gate, and a track life-cycle management method based on M/N logic rules.
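
To illustrate the tracking core, here is a minimal linear Kalman update with a chi-square validation gate of the kind used for data association; the measurement-noise adaptation and M/N track-management logic from the paper are omitted, and the matrices and gate value are placeholders.

    import numpy as np

    def gated_kalman_update(x, P, z, H, R, gate=9.21):
        # Innovation and its covariance.
        innovation = z - H @ x
        S = H @ P @ H.T + R
        d2 = float(innovation.T @ np.linalg.inv(S) @ innovation)
        if d2 > gate:            # measurement falls outside the tracking gate
            return x, P, False
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        x_new = x + K @ innovation
        P_new = (np.eye(len(x)) - K @ H) @ P
        return x_new, P_new, True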

Findings

Compared with fixed-size tracking gates, the adaptive tracking gates proposed in this paper can comprehensively improve the data association performance in the multitarget tracking process. Compared with single sensor measurement, the proposed method improves the position estimation accuracy by 13.5% and the velocity estimation accuracy by 22.2%. Compared with the control method, the proposed method improves the position estimation accuracy by 23.8% and the velocity estimation accuracy by 8.9%.

Originality/value

A multisensor fusion method with measurement-noise-adaptive Kalman filtering is proposed to realize the adaptive adjustment of measurement noise, and a global nearest neighbor data association method based on an adaptive tracking gate is proposed to realize the adaptive adjustment of the tracking gate.

Details

Smart and Resilient Transportation, vol. 4 no. 2
Type: Research Article
ISSN: 2632-0487

Keywords

Content available
Article
Publication date: 13 November 2023

Sheuli Paul

This paper presents a survey of research into interactive robotic systems for the purpose of identifying the state of the art capabilities as well as the extant gaps in this…


Abstract

Purpose

This paper presents a survey of research into interactive robotic systems for the purpose of identifying state-of-the-art capabilities as well as the extant gaps in this emerging field. Communication is multimodal: multimodality is a representation of many modes chosen from rhetorical aspects for their communication potential. The author seeks to define the available automation capabilities in communication using multimodalities that will support a proposed Interactive Robot System (IRS), an AI-mounted robotic platform, to advance the speed and quality of military operational and tactical decision-making.

Design/methodology/approach

This review will begin by presenting key developments in the robotic interaction field with the objective of identifying essential technological developments that set conditions for robotic platforms to function autonomously. After surveying the key aspects in Human Robot Interaction (HRI), Unmanned Autonomous System (UAS), visualization, Virtual Environment (VE) and prediction, the paper then proceeds to describe the gaps in the application areas that will require extension and integration to enable the prototyping of the IRS. A brief examination of other work in HRI-related fields concludes with a recapitulation of the IRS challenge that will set conditions for future success.

Findings

Using insights from a balanced cross-section of sources from government, academic and commercial entities that contribute to HRI, a multimodal IRS for military communication is introduced. A multimodal IRS (MIRS) in military communication has yet to be deployed.

Research limitations/implications

The multimodal robotic interface for the MIRS is an interdisciplinary endeavour. It is not realistic for one person to comprehend all the expert and related knowledge and skills needed to design and develop such a multimodal interactive robotic interface. In this brief preliminary survey, the author has discussed extant AI, robotics, NLP, CV, VDM and VE applications that are directly related to multimodal interaction. Each mode of this multimodal communication is an active research area. Multimodal human/military robot communication is the ultimate goal of this research.

Practical implications

A multimodal autonomous robot for military communication using speech, images, gestures, VST and VE has yet to be deployed. Autonomous multimodal communication is expected to open wider possibilities for all armed forces. Given the density of the land domain, the army is in a position to exploit the opportunities for human–machine teaming (HMT) exposure. Naval and air forces will adopt platform-specific suites for specially selected operators to integrate with and leverage this emerging technology. A flexible communications means that readily adapts to virtual training will enhance planning and mission rehearsals tremendously.

Social implications

A multimodal communication system based on interaction, perception, cognition and visualization is still missing. Options to communicate, express and convey information in an HMT setting, with multiple options, suggestions and recommendations, will certainly enhance military communication, strength, engagement, security, cognition and perception, as well as the ability to act confidently for a successful mission.

Originality/value

The objective is to develop a multimodal autonomous interactive robot for military communications. This survey reports the state of the art: what exists and what is missing, what can be done, and the possibilities of extension that support the military in maintaining effective communication using multimodalities. There is separate ongoing progress in areas such as machine-enabled speech, image recognition, tracking, visualizations for situational awareness and virtual environments. At this time, there is no integrated approach for multimodal human-robot interaction that offers flexible and agile communication. The report briefly introduces the research proposal for a multimodal interactive robot in military communication.
