Search results
1 – 10 of 230
Han Sun, Song Tang, Xiaozhi Qi, Zhiyuan Ma and Jianxin Gao
This study aims to introduce a novel noise filter module designed for LiDAR simultaneous localization and mapping (SLAM) systems. The primary objective is to enhance pose…
Abstract
Purpose
This study aims to introduce a novel noise filter module designed for LiDAR simultaneous localization and mapping (SLAM) systems. The primary objective is to enhance pose estimation accuracy and improve the overall system performance in outdoor environments.
Design/methodology/approach
Distinct from traditional approaches, MCFilter emphasizes enhancing point cloud data quality at the pixel level. This framework hinges on two primary elements. First, the D-Tracker, a tracking algorithm, is grounded on multiresolution three-dimensional (3D) descriptors and adeptly maintains a balance between precision and efficiency. Second, the R-Filter introduces a pixel-level attribute named motion-correlation, which effectively identifies and removes dynamic points. Furthermore, designed as a modular component, MCFilter ensures seamless integration into existing LiDAR SLAM systems.
Findings
Based on rigorous testing with public data sets and under real-world conditions, MCFilter increased average accuracy by 12.39% and reduced processing time by 24.18%. These outcomes demonstrate the method’s effectiveness in refining the performance of current LiDAR SLAM systems.
Originality/value
In this study, the authors present a novel 3D descriptor tracker designed for consistent feature point matching across successive frames. The authors also propose an innovative attribute to detect and eliminate noise points. Experimental results demonstrate that integrating this method into existing LiDAR SLAM systems yields state-of-the-art performance.
Details
Keywords
Ruochen Zeng, Jonathan J.S. Shi, Chao Wang and Tao Lu
As laser scanning technology becomes readily available and affordable, there is an increasing demand for using point cloud data collected from a laser scanner to create as-built…
Abstract
Purpose
As laser scanning technology becomes readily available and affordable, there is an increasing demand for using point cloud data collected from a laser scanner to create as-built building information modeling (BIM) models for quality assessment, schedule control and energy performance within construction projects. To enhance as-built modeling efficiency, this study explores an integrated system, called Auto-Scan-To-BIM (ASTB), which aims to automatically generate a complete Industry Foundation Classes (IFC) model consisting of the 3D building elements of a given building from its point cloud, without requiring additional modeling tools.
Design/methodology/approach
ASTB has been developed with three function modules. Taking the scanned point data as input, Module 1 is built on the basis of the widely used region segmentation methodology and expanded with enhanced plane boundary line detection methods and corner recalibration algorithms. Then, Module 2 is developed with a domain knowledge-based heuristic method to analyze the features of the recognized planes, to associate them with corresponding building elements and to create BIM models. Based on the spatial relationships between these building elements, Module 3 generates a complete IFC model for the entire project compatible with any BIM software.
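The region-segmentation stage on which Module 1 builds rests on fitting planes to the scanned points. As a rough, self-contained illustration (not the authors' implementation), a minimal RANSAC plane fit over a point cloud might look like:

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.02, rng=None):
    """Fit a dominant plane (n, d), with n . p + d = 0, to a point cloud by RANSAC."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        # Sample three distinct points and form a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                    # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ p0
        inliers = np.abs(points @ n + d) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers
```

The fitted inlier set would then be handed to boundary-line detection and corner recalibration; the iteration count and tolerance here are arbitrary illustrative values.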
Findings
A case study validated ASTB through an application involving five common types of building elements (wall, floor, ceiling, window and door).
Originality/value
First, an integrated system, ASTB, is developed to generate a BIM model from scanned point cloud data without using additional modeling tools. Second, an enhanced plane boundary line detection method and a corner recalibration algorithm are developed in ASTB, obtaining the true surface planes with high accuracy. Finally, the research contributes a module that automatically converts the identified building elements into IFC format based on the geometry and spatial relationships of each plane.
Details
Keywords
Dan Zhang, Junji Yuan, Haibin Meng, Wei Wang, Rui He and Sen Li
In the context of fire incidents within buildings, efficient scene perception by firefighting robots is particularly crucial. Although individual sensors can provide specific…
Abstract
Purpose
In the context of fire incidents within buildings, efficient scene perception by firefighting robots is particularly crucial. Although individual sensors can provide specific types of data, achieving deep data correlation among multiple sensors poses challenges. To address this issue, this study aims to explore a fusion approach integrating thermal imaging cameras and LiDAR sensors to enhance the perception capabilities of firefighting robots in fire environments.
Design/methodology/approach
Prior to sensor fusion, accurate calibration of the sensors is essential. This paper proposes an extrinsic calibration method based on rigid body transformation. The collected data are optimized using the Ceres solver to obtain precise calibration parameters. Building upon this calibration, a sensor fusion method based on coordinate projection transformation is proposed, enabling real-time mapping between images and point clouds. In addition, the effectiveness of data collection with the proposed fusion device is validated in experimental smoke-filled fire environments.
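The coordinate projection transformation at the heart of the fusion step can be sketched generically: a rigid-body transform into the camera frame followed by a pinhole projection. The extrinsics `R`, `t` and thermal-camera intrinsics `K` below are assumed placeholders, not the paper's calibrated values:

```python
import numpy as np

def project_lidar_to_image(points, R, t, K, img_w, img_h):
    """Map LiDAR points to pixels: x_cam = R @ x_lidar + t, then u ~ K @ x_cam."""
    cam = points @ R.T + t              # rigid-body transform into the camera frame
    cam = cam[cam[:, 2] > 0]            # keep points in front of the image plane
    uvw = cam @ K.T                     # apply the (assumed) thermal-camera intrinsics
    uv = uvw[:, :2] / uvw[:, 2:3]       # perspective division
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < img_w) & (uv[:, 1] >= 0) & (uv[:, 1] < img_h)
    return uv[ok]
```

Each surviving pixel coordinate can then be associated with a thermal intensity, giving the real-time image-to-point-cloud mapping the abstract describes.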
Findings
The average reprojection error obtained by the extrinsic calibration method based on rigid body transformation is 1.02 pixels, indicating good accuracy. The fused data combines the advantages of thermal imaging cameras and LiDAR, overcoming the limitations of individual sensors.
Originality/value
This paper introduces an extrinsic calibration method based on rigid body transformation, along with a sensor fusion approach based on coordinate projection transformation. The effectiveness of this fusion strategy is validated in simulated fire environments.
Details
Keywords
Boppana V. Chowdary and Deepak Jaglal
This paper aims to present a reverse engineering (RE) approach for three-dimensional (3D) model reconstruction and fast prototyping (FP) of broken chess pieces.
Abstract
Purpose
This paper aims to present a reverse engineering (RE) approach for three-dimensional (3D) model reconstruction and fast prototyping (FP) of broken chess pieces.
Design/methodology/approach
A case study involving a broken chess piece was selected to demonstrate the effectiveness of the proposed unconventional RE approach. Initially, a laser 3D scanner was used to acquire a non-uniform rational B-spline (NURBS) surface model of the object, which was then processed to develop a parametric computer-aided design (CAD) model, combined with the geometric dimensioning and tolerancing (GD&T) technique for evaluation, and then used for FP of the part on a computer numerical controlled (CNC) machine.
Findings
The effectiveness of the proposed approach for the reconstruction and FP of rotational parts was ascertained through a sample part. The study demonstrates that non-contact data acquisition technologies such as 3D laser scanners, together with RE systems, can capture the entire geometry of a broken or worn part, which can then be quickly rebuilt through the application of computer-aided manufacturing principles and a CNC machine. The results indicate that design communication, customer involvement and FP can be efficiently accomplished by means of an integrated RE workflow combined with rapid product development tools and techniques.
Originality/value
This research established an RE approach for the acquisition of broken/worn part data and the development of parametric CAD models. The developed 3D CAD model was then inspected for accuracy by means of the GD&T approach and rapidly produced using a CNC machine. Further, the proposed RE-led FP approach can provide solutions to similar industrial situations wherein agility in the product design and development process is necessary to produce physical samples and functional replacement parts for aging systems in a short turnaround time.
Details
Keywords
Single-shot multi-category clothing recognition and retrieval play a crucial role in online searching and offline settlement scenarios. Existing clothing recognition methods based…
Abstract
Purpose
Single-shot multi-category clothing recognition and retrieval play a crucial role in online searching and offline settlement scenarios. Existing clothing recognition methods based on RGBD clothing images often suffer from high-dimensional feature representations, leading to compromised performance and efficiency.
Design/methodology/approach
To address this issue, this paper proposes a novel method called Manifold Embedded Discriminative Feature Selection (MEDFS) to select global and local features, thereby reducing the dimensionality of the feature representation and improving performance. Specifically, by combining three global features and three local features, a low-dimensional embedding is constructed to capture the correlations between features and categories. The MEDFS method designs an optimization framework utilizing manifold mapping and sparse regularization to achieve feature selection. The optimization objective is solved using an alternating iterative strategy, ensuring convergence.
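The alternating sparse-regularized optimization is not specified in the abstract; as an illustrative stand-in, a proximal-gradient solver for a row-sparse (L2,1-regularized) regression model captures the flavor of scoring features for selection. The objective, step size and `lam` value below are assumptions, not the authors' MEDFS formulation:

```python
import numpy as np

def l21_feature_select(X, Y, lam=0.1, n_iters=500):
    """Row-sparse regression: min ||X W - Y||_F^2 + lam * sum_i ||W_i||_2,
    solved by proximal gradient; returns per-feature scores ||W_i||_2."""
    n, d = X.shape
    W = np.zeros((d, Y.shape[1]))
    step = 1.0 / (2 * np.linalg.norm(X, 2) ** 2)   # 1/L for the smooth data term
    for _ in range(n_iters):
        G = 2 * X.T @ (X @ W - Y)                  # gradient of ||X W - Y||_F^2
        V = W - step * G
        norms = np.linalg.norm(V, axis=1, keepdims=True)
        shrink = np.maximum(1 - step * lam / np.maximum(norms, 1e-12), 0)
        W = V * shrink                             # row-wise soft threshold
    return np.linalg.norm(W, axis=1)
```

Features with the largest row norms would be retained, mirroring the idea of selecting a low-dimensional subset of global and local features.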
Findings
Empirical studies conducted on a publicly available RGBD clothing image dataset demonstrate that the proposed MEDFS method achieves highly competitive clothing classification performance while maintaining efficiency in clothing recognition and retrieval.
Originality/value
This paper introduces a novel approach for multi-category clothing recognition and retrieval, incorporating the selection of global and local features. The proposed method holds potential for practical applications in real-world clothing scenarios.
Details
Keywords
Run Yang, Jingru Li, Taiyun Zhu, Di Hu and Erbao Dong
Gas-insulated switchgear (GIS) stands as a pivotal component in power systems, susceptible to partial discharge occurrences. Nevertheless, manual inspection proves…
Abstract
Purpose
Gas-insulated switchgear (GIS) stands as a pivotal component in power systems, susceptible to partial discharge occurrences. Nevertheless, manual inspection proves labor-intensive and exhibits a low defect detection rate. Conventional inspection robots face limitations: they cannot perform live-line measurements or adapt effectively to diverse environmental conditions. This paper aims to introduce a novel solution, the GIS ultrasonic partial discharge detection robot (GBOT), designed to assume the role of substation personnel in inspection tasks.
Design/methodology/approach
GBOT is a mobile manipulator system divided into three subsystems: autonomous location and navigation, vision-guided and force-controlled manipulation and data detection and analysis. These subsystems collaborate, incorporating simultaneous localization and mapping, path planning, target recognition, signal processing and admittance control. This paper also introduces a path planning method designed to adapt to the substation environment. In addition, a flexible end effector is designed to ensure full contact between the probe and the device.
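Admittance control, on which the manipulator relies for stable probe contact, can be illustrated with a one-dimensional sketch: the controller simulates a virtual mass-damper-spring driven by the measured contact force. The law and the gains `M`, `D`, `K` below are generic textbook choices, not GBOT's actual controller:

```python
def admittance_step(x, v, f_ext, M=1.0, D=20.0, K=100.0, dt=0.001):
    """One integration step of the admittance law M*a + D*v + K*x = f_ext,
    returning the updated virtual position and velocity."""
    a = (f_ext - D * v - K * x) / M   # acceleration of the virtual dynamics
    v += a * dt                       # semi-implicit Euler: velocity first,
    x += v * dt                       # then position with the new velocity
    return x, v
```

Under a constant contact force f, the virtual spring settles at x = f/K, which is what lets the probe press compliantly against the device instead of applying rigid position commands.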
Findings
The robot fulfills the requirements for substation GIS inspections. It can conduct efficient and low-cost path planning through narrow passages in the constructed substation map, achieves sufficiently stable detection contact and delivers a high defect detection rate.
Practical implications
The robot mitigates the labor intensity of grid maintenance personnel, enhances inspection efficiency and safety and advances the intelligence and digitization of power equipment maintenance and monitoring. This research also provides valuable insights for the broader application of mobile manipulators in diverse fields.
Originality/value
The robot is a mobile manipulator system used in GIS detection, offering a viable alternative to grid personnel for equipment inspections. Compared with previous robotic systems, this system can perform live electrical detection, demonstrating robust environmental adaptability and superior efficiency.
Details
Keywords
Baoxu Tu, Yuanfei Zhang, Kang Min, Fenglei Ni and Minghe Jin
This paper aims to estimate contact location from sparse and high-dimensional soft tactile array sensor data using the tactile image. The authors used three feature extraction…
Abstract
Purpose
This paper aims to estimate contact location from sparse and high-dimensional soft tactile array sensor data using the tactile image.
Design/methodology/approach
The authors used three feature extraction methods: handcrafted features, convolutional features and autoencoder features. Subsequently, these features were mapped to contact locations through a contact location regression network. Finally, the network performance was evaluated using spherical fittings of three different radii to further determine the optimal feature extraction method.
Findings
This research indicates that data collected by probes can be used for contact localization. Introducing a batch normalization layer after the feature extraction stage significantly enhances the model’s generalization performance. Through qualitative and quantitative analyses, the authors conclude that convolutional methods can more accurately estimate contact locations.
Originality/value
The paper provides both qualitative and quantitative analyses of the performance of three contact localization methods across different datasets. To address the challenge of obtaining accurate contact locations in quantitative analysis, an indirect measurement metric is proposed.
Details
Keywords
Solder joint inspection plays a critical role in various industries, with a focus on integrated chip (IC) solder joints and metal surface welds. However, the detection of tubular…
Abstract
Purpose
Solder joint inspection plays a critical role in various industries, with a focus on integrated chip (IC) solder joints and metal surface welds. However, the detection of tubular solder joints has received relatively less attention. This paper aims to address the challenges of detecting small targets and complex environments by proposing a robust visual detection method for pipeline solder joints. The method is characterized by its simplicity, cost-effectiveness and ease of implementation.
Design/methodology/approach
A robust visual detection method based on the characteristics of pipeline solder joints is proposed. Using an improved hue, saturation and value (HSV) color space, the method applies a multi-level template matching approach to first segment the pipeline from the background and then match the endpoint of the pipeline to accurately locate the solder joint.
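To illustrate the segmentation idea only (a toy stand-in, not the paper's improved HSV space or its multi-level template matching), the sketch below thresholds hue, saturation and value per pixel and then takes the right-most segmented pixel as a crude pipeline endpoint:

```python
import numpy as np
import colorsys

def hsv_mask(img, h_lo, h_hi, s_min=0.3, v_min=0.2):
    """Boolean mask of pixels whose hue lies in [h_lo, h_hi] (hue scaled to [0, 1))
    with sufficient saturation and value; img is an H x W x 3 RGB array."""
    h, w, _ = img.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            hh, ss, vv = colorsys.rgb_to_hsv(*(img[i, j] / 255.0))
            mask[i, j] = h_lo <= hh <= h_hi and ss >= s_min and vv >= v_min
    return mask

def pipeline_endpoint(mask):
    """Crude endpoint proxy: the right-most pixel of the segmented region."""
    ys, xs = np.nonzero(mask)
    j = np.argmax(xs)
    return ys[j], xs[j]
```

A real system would refine this with template matching around the endpoint, as the abstract describes; the thresholds here are arbitrary illustrative values.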
Findings
The experimental results demonstrate the effectiveness of the proposed solder joint detection method in practical detection tasks. The average precision of pipeline weld joint localization exceeds 95%, while the average recall is greater than 90%. These findings highlight the applicability of the method to pipeline solder joint detection tasks, specifically in the context of production lines for refrigeration equipment.
Research limitations/implications
The precision of the method is influenced by the placement angle and lighting conditions of the test specimen, which may pose challenges and impact the algorithm's performance. Potential avenues for improvement include exploring deep learning methods, incorporating additional features and contextual information for localization, and utilizing advanced image enhancement techniques to improve image quality.
Originality/value
The proposed pipeline solder joint detection method offers a novel and practical approach. The simplicity, cost-effectiveness and ease of implementation make it an attractive choice for detecting pipeline solder joints in different industrial applications.
Details
Keywords
Aliaksei Petsiuk, Brandon Bloch, Mitch Debora and Joshua M. Pearce
Presently in multicolor fused filament-based three-dimensional (3-D) printing, significant amounts of waste material are produced through nozzle priming and purging each time a…
Abstract
Purpose
Presently, in multicolor fused filament-based three-dimensional (3-D) printing, significant amounts of waste material are produced through nozzle priming and purging each time a change from one color to another occurs. G-code-generating slicing software typically changes the material on each layer, resulting in wipe towers with greater mass than the target object. The purpose of this study is to provide, for the first time, an alternative fabrication approach based on interlayer tool clustering (ITC), which reduces the number of tool changes and is compatible with any commercial 3-D printer without the need for hardware modifications.
Design/methodology/approach
The authors have developed an open-source PrusaSlicer upgrade, compatible with Slic3r-based software, which uses the described algorithm to generate the G-code toolpath and print experimental objects. The theoretical time, material and energy savings are calculated and validated to evaluate the proposed fabrication method qualitatively and quantitatively.
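The effect of clustering tool use across layers can be imitated with a toy scheduler: compare the per-layer tool order against one that finishes each tool across a whole cluster of layers before switching. This sketch ignores the geometric and adhesion constraints the real slicer upgrade must respect, and the cluster size is an arbitrary assumption:

```python
def tool_changes(schedule):
    """Count transitions between different tools in a flat print schedule."""
    return sum(1 for prev, cur in zip(schedule, schedule[1:]) if cur != prev)

def per_layer_schedule(layers):
    """Baseline: finish every tool on a layer before moving up one layer."""
    return [tool for layer in layers for tool in layer]

def clustered_schedule(layers, cluster=4):
    """ITC-style: within each cluster of layers, print all work for one tool
    across the whole cluster before switching to the next tool."""
    out = []
    for i in range(0, len(layers), cluster):
        group = layers[i:i + cluster]
        for tool in sorted({t for layer in group for t in layer}):
            out.extend(t for layer in group for t in layer if t == tool)
    return out
```

For eight two-color layers, the baseline incurs a tool change between every extrusion, while clustering four layers at a time cuts the change count severalfold, which is the mechanism behind the reported savings.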
Findings
The experimental results show the novel ITC method can significantly increase the efficiency of multimaterial printing, with an average 1.7-fold reduction in material use, and an average 1.4-fold reduction in both time and 3-D printing energy use. In addition, this approach reduces the likelihood of technical failures in the manufacturing of the entire part by reducing the number of tool changes, or material transitions, on average by 2.4 times.
Originality/value
The obtained results support distributed recycling and additive manufacturing, which has both environmental and economic benefits; moreover, increasing the number of colors in a 3-D print increases the manufacturing savings.
Details
Keywords
Based on Kansei Engineering, this study identified consumers' emotional preferences, aiming to enhance the emotional connection between consumers and clothing to extend the service…
Abstract
Purpose
Based on Kansei Engineering, this study identified consumers' emotional preferences, aiming to enhance the emotional connection between consumers and clothing, extend the service life of clothing and realize sustainable clothing design.
Design/methodology/approach
Six Kansei word pairs that are the most important to consumers were identified through literature reviews, magazines, websites, card sorting by consumers and cluster analysis. Consumers then scored the 32 product specimens on a five-level semantic differential scale questionnaire built from the six Kansei word pairs. The researchers verified consumers' emotional preferences through principal component analysis and established the relationship between Kansei words and the design elements of color through partial least squares.
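The principal component analysis step on the semantic differential ratings can be sketched generically (a specimens-by-word-pairs matrix, centered and decomposed by SVD); this is standard PCA, not a reconstruction of the authors' exact analysis:

```python
import numpy as np

def pca(scores, k=2):
    """PCA of a specimens-by-Kansei-pairs rating matrix via SVD.
    Returns the top-k component loadings and the fraction of
    variance each component explains."""
    X = scores - scores.mean(axis=0)      # center each Kansei scale
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var = s**2 / np.sum(s**2)             # explained-variance ratios
    return Vt[:k], var[:k]
```

Inspecting which Kansei word pairs load heavily on the leading components is what lets the preference dimensions (e.g. elegant vs. casual) be interpreted.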
Findings
The study identified consumers' emotional preferences: elegant, minimalist, formal, casual, mature, practical and distinctive styles. Besides white, black, gray and blue, consumers will also favor red and yellow-red in the future. The crucial outcome of this study is a set of recommended guidelines matching consumers' emotional preferences to the corresponding design elements.
Originality/value
The study's findings can be used to style the design of men's plain-color shirts and to guide online marketers and designers in designing apparel that meets consumers' emotional needs, fostering consumers' sustained attachment to clothing. This study also explains the overall process and methodology for integrating consumer preferences and product design elements.
Details