Search results

1 – 10 of 423
Article
Publication date: 22 August 2024

Reinier Stribos, Roel Bouman, Lisandro Jimenez, Maaike Slot and Marielle Stoelinga

Abstract

Purpose

Powder bed additive manufacturing has recently seen substantial growth, yet consistently producing high-quality parts remains challenging. Recoating streaking is a common anomaly that impairs print quality. Several data-driven models for automatically detecting this anomaly have been proposed, each with varying effectiveness. However, comprehensive comparisons among them are lacking. Additionally, these models are often tailored to specific data sets. This research addresses this gap by implementing and comparing these anomaly detection models for recoating streaking in a reproducible way. This study aims to offer a clearer, more objective evaluation of their performance, strengths and weaknesses. Furthermore, this study proposes an improvement to the Line Profiles detection model to broaden its applicability and introduces a novel preprocessing step to enhance the models’ performance.

Design/methodology/approach

All identified anomaly detection models were implemented, along with several preprocessing steps. Additionally, a new universal benchmarking data set was constructed. Finally, all implemented models were evaluated on this benchmarking data set, and the effect of the different preprocessing steps was studied.
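
The Line Profiles idea admits a minimal illustrative sketch (not the authors' implementation; the function name, threshold and data layout are assumptions): treat each image column along the recoating direction as a profile and flag columns whose mean intensity deviates strongly from the rest, since a recoater streak tends to darken or brighten a whole stripe of the powder bed.

```python
from statistics import mean, pstdev

def line_profile_streaks(image, k=3.0):
    """Flag columns whose mean intensity deviates more than k standard
    deviations from the mean of all column profiles.

    image: 2D list of grayscale values (rows x cols); a streak shows up
    as an entire column that is darker or brighter than its neighbours.
    """
    profiles = [mean(col) for col in zip(*image)]  # one mean per column
    mu, sigma = mean(profiles), pstdev(profiles)
    if sigma == 0:  # perfectly uniform layer: nothing to flag
        return []
    return [i for i, p in enumerate(profiles) if abs(p - mu) > k * sigma]
```

On a synthetic layer where column 4 is uniformly brighter, `line_profile_streaks(img, k=2.5)` returns `[4]`; the choice of `k` plays the role of the sensitivity tuning the abstract alludes to.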

Findings

This comparison establishes the improved Line Profiles model as the most effective detection approach on this study’s benchmark data set. Furthermore, while most state-of-the-art neural networks perform very well off the shelf, specialised detection models outperform all others when paired with the correct preprocessing.

Originality/value

This comparison gives new insights into different recoater streaking (RCS) detection models, showcasing each one with its strengths and weaknesses. Furthermore, the improved Line Profiles model delivers compelling performance in detecting RCS.

Details

Rapid Prototyping Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1355-2546

Article
Publication date: 26 June 2024

Jinyao Nan, Pingfa Feng, Jie Xu and Feng Feng

Abstract

Purpose

The purpose of this study is to advance the computational modeling of liquid splashing dynamics, while balancing simulation accuracy and computational efficiency, a duality often compromised in high-fidelity fluid dynamics simulations.

Design/methodology/approach

This study introduces the fluid efficient graph neural network simulator (FEGNS), an innovative framework that integrates an adaptive filtering layer and aggregator fusion strategy within a graph neural network architecture. FEGNS is designed to directly learn from extensive liquid splash data sets, capturing the intricate dynamics and intrinsically complex interactions.
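
The aggregator fusion strategy can be sketched in miniature (scalar node features and illustrative weights; this is not the FEGNS code, which is available on GitHub): instead of relying on a single aggregation function over a node's neighbour messages, combine several of them.

```python
from statistics import mean

def fuse_aggregations(neighbor_feats, weights=(1.0, 1.0, 1.0)):
    """Weighted fusion of mean, max and sum aggregators over a node's
    incoming neighbour features (scalars here for simplicity).

    neighbor_feats: list of scalar features from the node's neighbours.
    weights: (w_mean, w_max, w_sum) mixing coefficients, which a real
    model would learn rather than fix.
    """
    w_mean, w_max, w_sum = weights
    return (w_mean * mean(neighbor_feats)
            + w_max * max(neighbor_feats)
            + w_sum * sum(neighbor_feats))
```

Setting one weight to 1 and the rest to 0 recovers a single aggregator, which is why fusing them can only enrich the network's expressive power.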

Findings

FEGNS achieves a remarkable 30.3% improvement in simulation accuracy over traditional methods, coupled with a 51.6% enhancement in computational speed. It exhibits robust generalization capabilities across diverse materials, enabling realistic simulations of droplet effects. Comparative analyses and empirical validations demonstrate FEGNS’s superior performance against existing benchmark models.

Originality/value

The originality of FEGNS lies in its adaptive filtering layer, which independently adjusts filtering weights per node, and a novel aggregator fusion strategy that enriches the network’s expressive power by combining multiple aggregation functions. To facilitate further research and practical deployment, the FEGNS model has been made accessible on GitHub (https://github.com/nanjinyao/FEGNS/tree/main).

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 34 no. 6
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 13 August 2024

Yan Kan, Hao Li, Zhengtao Chen, Changjiang Sun, Hao Wang and Joachim Seidelmann

Abstract

Purpose

This paper aims to propose a stable and precise recognition and pose estimation method to deal with the difficulties that industrial parts often present, such as incomplete point cloud data due to surface reflections, lack of color texture features and limited availability of effective three-dimensional geometric information. These challenges lead to less-than-ideal performance of existing object recognition and pose estimation methods based on two-dimensional images or three-dimensional point cloud features.

Design/methodology/approach

In this paper, an image-guided depth map completion method is proposed to improve the algorithm's adaptability to noise and incomplete point cloud scenes. Furthermore, this paper also proposes a pose estimation method based on contour feature matching.
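
A toy version of image-guided hole filling conveys the idea (an assumption-laden sketch, not the paper's algorithm): estimate a hole pixel's depth as a weighted average of valid neighbouring depths, with weights taken from image-intensity similarity so that depth does not bleed across object edges.

```python
import math

def fill_depth_hole(depth, image, r, c, window=1, sigma=10.0):
    """Estimate depth at hole (r, c) from valid neighbours, weighting each
    neighbour by how similar its image intensity is to the hole pixel's.

    depth: 2D list where None marks holes; image: 2D grayscale list of the
    same shape. sigma controls how sharply weights fall off with intensity
    difference (a guided-filter-like assumption, not the paper's exact model).
    """
    num = den = 0.0
    for dr in range(-window, window + 1):
        for dc in range(-window, window + 1):
            rr, cc = r + dr, c + dc
            if (dr, dc) == (0, 0):
                continue
            if not (0 <= rr < len(depth) and 0 <= cc < len(depth[0])):
                continue
            if depth[rr][cc] is None:
                continue
            w = math.exp(-((image[rr][cc] - image[r][c]) ** 2)
                         / (2 * sigma ** 2))
            num += w * depth[rr][cc]
            den += w
    return num / den if den else None
```

A hole pixel whose image intensity matches its left neighbour but not its right one inherits (almost entirely) the left neighbour's depth, which is the edge-preserving behaviour the guidance image buys.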

Findings

Through experimental testing on real-world and virtual scene data sets, it has been verified that the image-guided depth map completion method exhibits higher accuracy in estimating depth values for hole pixels in the depth map. The proposed pose estimation method was applied in pose estimation experiments on various parts. The average recognition accuracy in real-world scenes was 88.17%, whereas in virtual scenes it reached 95%.

Originality/value

The proposed recognition and pose estimation method can stably and precisely deal with the difficulties that industrial parts present and improve the algorithm's adaptability to noise and incomplete point cloud scenes.

Details

Robotic Intelligence and Automation, vol. 44 no. 5
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 8 December 2023

Han Sun, Song Tang, Xiaozhi Qi, Zhiyuan Ma and Jianxin Gao

Abstract

Purpose

This study aims to introduce a novel noise filter module designed for LiDAR simultaneous localization and mapping (SLAM) systems. The primary objective is to enhance pose estimation accuracy and improve the overall system performance in outdoor environments.

Design/methodology/approach

Distinct from traditional approaches, MCFilter emphasizes enhancing point cloud data quality at the pixel level. This framework hinges on two primary elements. First, the D-Tracker, a tracking algorithm, is grounded on multiresolution three-dimensional (3D) descriptors and adeptly maintains a balance between precision and efficiency. Second, the R-Filter introduces a pixel-level attribute named motion-correlation, which effectively identifies and removes dynamic points. Furthermore, designed as a modular component, MCFilter ensures seamless integration into existing LiDAR SLAM systems.
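
The motion-correlation idea can be caricatured as follows (an illustrative sketch, not the R-Filter itself; names and the threshold are assumptions): after ego-motion compensation, a point whose frame-to-frame displacement remains large is likely dynamic and is removed from the map.

```python
import math

def remove_dynamic_points(prev_pts, curr_pts, thresh=0.2):
    """Keep only points whose residual displacement between two frames
    stays below thresh (metres, say).

    prev_pts, curr_pts: matched 3D points from successive LiDAR frames,
    assumed already transformed into a common frame (ego-motion removed),
    so any remaining motion belongs to the point itself.
    """
    static = []
    for p, q in zip(prev_pts, curr_pts):
        if math.dist(p, q) < thresh:
            static.append(q)
    return static
```

The real module replaces this crude displacement test with a pixel-level motion-correlation attribute maintained by the D-Tracker, but the filtering contract is the same: dynamic points never reach the SLAM back end.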

Findings

Based on rigorous testing with public data sets and under real-world conditions, MCFilter increased average accuracy by 12.39% and reduced processing time by 24.18%. These outcomes emphasize the method’s effectiveness in refining the performance of current LiDAR SLAM systems.

Originality/value

In this study, the authors present a novel 3D descriptor tracker designed for consistent feature point matching across successive frames. The authors also propose an innovative attribute to detect and eliminate noise points. Experimental results demonstrate that integrating this method into existing LiDAR SLAM systems yields state-of-the-art performance.

Details

Robotic Intelligence and Automation, vol. 44 no. 1
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 16 April 2024

Shilong Zhang, Changyong Liu, Kailun Feng, Chunlai Xia, Yuyin Wang and Qinghe Wang

Abstract

Purpose

The swivel construction method is a specially designed process used to build bridges that cross rivers, valleys, railroads and other obstacles. To carry out this construction method safely, real-time monitoring of the bridge rotation process is required to ensure a smooth swivel operation without collisions. However, the traditional means of monitoring using Electronic Total Station tools cannot realize real-time monitoring, and monitoring using motion sensors or GPS is cumbersome to use.

Design/methodology/approach

This study proposes a monitoring method based on a series of computer vision (CV) technologies, which can monitor the rotation angle, velocity and inclination angle of the swivel construction in real time. First, the three proposed CV algorithms were developed in a laboratory environment. Experimental tests were carried out on a bridge scale model to select the best-performing algorithms for rotation, velocity and inclination monitoring, respectively, as the final monitoring method. Then, the selected method was implemented to monitor an actual bridge during its swivel construction to verify its applicability.
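
The rotation-monitoring part admits a simple geometric sketch (hypothetical names; the paper's actual CV pipeline tracks image features rather than taking marker coordinates as given): with a marker's reference and current positions and the swivel pivot, the rotation angle follows from two atan2 calls.

```python
import math

def rotation_angle(p_ref, p_cur, pivot):
    """Rotation angle (degrees) of a tracked marker about the swivel pivot,
    computed from its reference and current 2D positions.

    p_ref, p_cur, pivot: (x, y) coordinates in a common image or plan frame.
    Dividing successive angle differences by the frame interval would give
    the rotation velocity the method also monitors.
    """
    a0 = math.atan2(p_ref[1] - pivot[1], p_ref[0] - pivot[0])
    a1 = math.atan2(p_cur[1] - pivot[1], p_cur[0] - pivot[0])
    return math.degrees(a1 - a0)
```

For a marker that moves from (1, 0) to (0, 1) about the origin, the function reports a 90-degree counter-clockwise swivel.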

Findings

In the laboratory study, the monitoring data measured with the selected algorithms were compared with those measured by an Electronic Total Station; the errors in rotation angle, velocity and inclination angle were 0.040%, 0.040% and −0.454%, respectively, thus validating the accuracy of the proposed method. In the pilot application, the method was shown to be feasible in a real construction setting.

Originality/value

In a well-controlled laboratory, the optimal algorithms for bridge swivel construction are identified, and in an actual project the proposed method is verified. The proposed CV method is complementary to the use of Electronic Total Station tools, motion sensors and GPS for safety monitoring of swivel construction of bridges. It also offers a possible approach that requires no data-driven model training. Its principal advantages are that it provides real-time monitoring and is easy to deploy in real construction applications.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 4 June 2024

Dan Zhang, Junji Yuan, Haibin Meng, Wei Wang, Rui He and Sen Li

Abstract

Purpose

In the context of fire incidents within buildings, efficient scene perception by firefighting robots is particularly crucial. Although individual sensors can provide specific types of data, achieving deep data correlation among multiple sensors poses challenges. To address this issue, this study aims to explore a fusion approach integrating thermal imaging cameras and LiDAR sensors to enhance the perception capabilities of firefighting robots in fire environments.

Design/methodology/approach

Prior to sensor fusion, accurate calibration of the sensors is essential. This paper proposes an extrinsic calibration method based on rigid body transformation. The collected data are optimized using the Ceres optimization algorithm to obtain precise calibration parameters. Building upon this calibration, a sensor fusion method based on coordinate projection transformation is proposed, enabling real-time mapping between images and point clouds. In addition, data collection with the proposed fusion device is validated in experimental smoke-filled fire environments.
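
The coordinate projection transformation step can be sketched with a pinhole model (illustrative only; the paper's parameters come from the Ceres-optimized calibration, whereas R, t and the intrinsics here are placeholders): transform a LiDAR point by the extrinsics (R, t), then project it with the thermal camera intrinsics.

```python
def project_point(pt, R, t, fx, fy, cx, cy):
    """Project a 3D LiDAR point into thermal-image pixel coordinates.

    pt: (X, Y, Z) in the LiDAR frame; R (3x3 nested list) and t (length-3)
    are the extrinsics mapping LiDAR to camera coordinates; fx, fy, cx, cy
    are pinhole intrinsics. Returns (u, v) or None if behind the camera.
    """
    x = sum(R[0][i] * pt[i] for i in range(3)) + t[0]
    y = sum(R[1][i] * pt[i] for i in range(3)) + t[1]
    z = sum(R[2][i] * pt[i] for i in range(3)) + t[2]
    if z <= 0:
        return None  # point behind the image plane
    return (fx * x / z + cx, fy * y / z + cy)
```

Applying this per point, per frame, is what yields the real-time mapping between thermal images and point clouds that the fused perception relies on.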

Findings

The average reprojection error obtained by the extrinsic calibration method based on rigid body transformation is 1.02 pixels, indicating good accuracy. The fused data combines the advantages of thermal imaging cameras and LiDAR, overcoming the limitations of individual sensors.

Originality/value

This paper introduces an extrinsic calibration method based on rigid body transformation, along with a sensor fusion approach based on coordinate projection transformation. The effectiveness of this fusion strategy is validated in simulated fire environments.

Details

Sensor Review, vol. 44 no. 4
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 9 July 2024

Zengrui Zheng, Kainan Su, Shifeng Lin, Zhiquan Fu and Chenguang Yang

Abstract

Purpose

Visual simultaneous localization and mapping (SLAM) has limitations such as sensitivity to lighting changes and lower measurement accuracy. The effective fusion of information from multiple modalities to address these limitations has emerged as a key research focus. This study aims to provide a comprehensive review of the development of vision-based SLAM (including visual SLAM) for navigation and pose estimation, with a specific focus on techniques for integrating multiple modalities.

Design/methodology/approach

This paper initially introduces the mathematical models and framework development of visual SLAM. Subsequently, this paper presents various methods for improving accuracy in visual SLAM by fusing different spatial and semantic features. This paper also examines the research advancements in vision-based SLAM with respect to multi-sensor fusion in both loosely coupled and tightly coupled approaches. Finally, this paper analyzes the limitations of current vision-based SLAM and provides predictions for future advancements.
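
The loosely coupled end of the spectrum can be illustrated in a few lines (a toy sketch, not any specific system surveyed): each sensor pipeline produces its own pose estimate, and the estimates are merged afterwards, here by inverse-variance weighting. A tightly coupled system would instead fuse the raw measurements inside a single estimator.

```python
def fuse_loosely(pose_a, var_a, pose_b, var_b):
    """Loosely coupled fusion of two independent pose estimates.

    pose_a, pose_b: tuples of pose components (e.g. x, y, yaw) from two
    pipelines such as visual SLAM and an IMU/odometry filter; var_a, var_b
    are scalar variances standing in for full covariances.
    """
    wa, wb = 1.0 / var_a, 1.0 / var_b
    return tuple((wa * a + wb * b) / (wa + wb)
                 for a, b in zip(pose_a, pose_b))
```

With equal variances the result is the midpoint of the two estimates; as one sensor's variance grows, the fused pose slides toward the other, which is the simplicity-versus-accuracy trade-off the review weighs against tight coupling.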

Findings

The combination of vision-based SLAM and deep learning has significant potential for development. There are advantages and disadvantages to both loosely coupled and tightly coupled approaches in multi-sensor fusion, and the most suitable algorithm should be chosen based on the specific application scenario. In the future, vision-based SLAM is evolving toward better addressing challenges such as resource-limited platforms and long-term mapping.

Originality/value

This review introduces the development of vision-based SLAM and focuses on the advancements in multimodal fusion. It allows readers to quickly understand the progress and current status of research in this field.

Details

Robotic Intelligence and Automation, vol. 44 no. 4
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 5 August 2024

Christopher Igwe Idumah, Raphael Stone Odera and Emmanuel Obumneme Ezeani

Abstract

Purpose

Nanotechnology (NT) advancements in personal protective textiles (PPT) or personal protective equipment (PPE) have alleviated the spread and transmission of COVID-19, a highly contagious viral disease, and enabled enhancement of PPE, thereby fortifying antiviral behavior.

Design/methodology/approach

Review of a series of state-of-the-art research papers on the subject matter.

Findings

This paper expounds on novel nanotechnological advancements in polymeric textile composites, emerging applications and the fight against the COVID-19 pandemic.

Research limitations/implications

As a panacea to “public droplet prevention,” textiles have proven to be potentially effective as environmental droplet barriers (EDBs).

Practical implications

PPT in the form of healthcare materials, including surgical face masks (SFMs), gloves, goggles, respirators, gowns, uniforms, scrub-suits and other apparel, play a critical role in hindering the spread of COVID-19 and other “oral-respiratory droplet contamination” both within and outside hospitals.

Social implications

When used as double layers, textiles display effectiveness as SFMs or surgical fabrics, reducing droplet transmission to <10 cm, within a circumference of ∼0.3%.

Originality/value

NT advancements in textiles through nanoparticles and sensor integration within textile materials have enhanced versatile sensory capabilities, robotics, flame retardancy, self-cleaning, electrical conductivity, flexibility and comfort, making such textiles available for health, medical, sporting, advanced engineering, pharmaceutical, aerospace, military, automobile, food and agricultural applications, and more. Therefore, this paper expounds on recently emerging trends in nanotechnological influence on textiles for engineering and the fight against the COVID-19 pandemic.

Details

International Journal of Clothing Science and Technology, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 16 July 2024

Tarun Pal Singh, Arun Kumar Verma, Vincentraju Rajkumar, Ravindra Kumar, Manoj Kumar Singh and Manish Kumar Chatli

Abstract

Purpose

Goat milk yoghurt differs from cow milk yoghurt in its casein composition and content, which presents several technical challenges, including achieving a consistent texture with an appropriate flavor.

Design/methodology/approach

In this study, the antioxidant potential and phytochemical profiles of fruit (pineapple and papaya) and vegetable (carrot) extracts were evaluated, and the effect of their purees on the quality and stability of stirred goat milk yoghurt (GMY) was investigated. The qualities of stirred GMY with carrot (CrY), pineapple (PaY) and papaya (PpY) purees were assessed against the product without puree (CY).

Findings

The carrot puree had the highest moisture content, ash content and pH value. The carrot extract had the highest DPPH radical scavenging activity, while the pineapple extract had the highest total phenolic value (1.59 µg GAE/g) and flavonoid content (0.203 µg CE/g). GC-MS scanning of all the puree extracts indicated that 5-hydroxymethylfurfural was a major component. Phytochemical quantification of the extracts through multiple reaction monitoring (MRM) against 16 compounds showed the presence of sinapic acid, cinnamic acid, phthalic acid, ferulic acid, 4-OH-benzoic acid, 3-OH-benzoic acid, p-coumaric acid, caffeic acid and vanillic acid in different quantities. The addition of purees and the storage period had a significant (p < 0.05) effect on the moisture, pH, titratable acidity, syneresis, viscosity, color values and sensory properties of the products. In all the samples, after 15 days of storage, Streptococcus thermophilus and Lactobacillus bulgaricus counts remained above the recommended level of 10⁶ CFU/g. The stirred GMY sample produced with pineapple puree showed higher syneresis and viscosity, but the CrY sample demonstrated the highest antioxidant activity. The developed formulations remained stable, with minimal changes in quality and sensory attributes, during refrigerated storage for 10 days.

Originality/value

This study suggests that the addition of fruit and vegetable purees improves the viscosity and sensory perception of the product with minimal use of synthetic flavors and preservatives.

Details

British Food Journal, vol. 126 no. 9
Type: Research Article
ISSN: 0007-070X

Article
Publication date: 17 June 2021

Ambica Ghai, Pradeep Kumar and Samrat Gupta

Abstract

Purpose

Web users rely heavily on online content to make decisions without assessing its veracity. Online content comprising text, image, video or audio may be tampered with to influence public opinion. Since the consumers of online information (misinformation) tend to trust the content when images supplement the text, image manipulation software is increasingly being used to forge images. To address the crucial problem of image manipulation, this study focusses on developing a deep-learning-based image forgery detection framework.

Design/methodology/approach

The proposed deep-learning-based framework aims to detect images forged using copy-move and splicing techniques. The image transformation technique aids the identification of relevant features for the network to train effectively. The pre-trained, customized convolutional neural network is then trained on public benchmark datasets, and its performance is evaluated on the test dataset using various metrics.
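
One common image transformation of the kind the abstract alludes to is a high-pass residual that suppresses image content and emphasizes local inconsistencies left by splicing or copy-move edits (an illustrative sketch under that assumption; the paper does not specify this exact filter):

```python
def highpass_residual(image):
    """High-pass residual: each interior pixel minus the mean of its four
    neighbours. Smooth regions map to ~0, while pasted regions tend to
    leave boundary and noise residues a CNN can learn from.

    image: 2D list of grayscale values; borders are left at 0.0.
    """
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            nbr = (image[r - 1][c] + image[r + 1][c]
                   + image[r][c - 1] + image[r][c + 1]) / 4.0
            out[r][c] = image[r][c] - nbr
    return out
```

Feeding such residuals, rather than raw pixels, to the network is one way a transformation step can surface forgery-relevant features before training.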

Findings

The comparative analysis of image transformation techniques and the experiments conducted on benchmark datasets from a variety of socio-cultural domains establish the effectiveness and viability of the proposed framework. These findings affirm the potential applicability of the proposed framework in real-time image forgery detection.

Research limitations/implications

This study bears implications for several important aspects of research on image forgery detection. First, this research adds to the recent discussion on feature extraction and learning for image forgery detection. While prior research on image forgery detection hand-crafted the features, the proposed solution contributes to the stream of literature that automatically learns the features and classifies the images. Second, this research contributes to the ongoing effort in curtailing the spread of misinformation using images. The extant literature on the spread of misinformation has prominently focussed on textual data shared over social media platforms. This study addresses the call for greater emphasis on the development of robust image transformation techniques.

Practical implications

This study carries important practical implications for various domains such as forensic sciences, media and journalism, where image data is increasingly being used to make inferences. The integration of image forgery detection tools can be helpful in determining the credibility of an article or post before it is shared over the Internet. Content shared over the Internet by users has become an important component of news reporting. The framework proposed in this paper can be further extended and trained on more annotated real-world data so as to function as a tool for fact-checkers.

Social implications

In the current scenario, wherein most image forgery detection studies attempt to assess whether an image is real or forged in an offline mode, it is crucial to identify any trending or potentially forged image as early as possible. By learning from historical data, the proposed framework can aid in the early detection of newly emerging forged images. In summary, the proposed framework has the potential to mitigate the physical spread and psychological impact of forged images on social media.

Originality/value

This study focusses on copy-move and splicing techniques while integrating transfer learning concepts to classify forged images with high accuracy. The synergistic use of hitherto little-explored image transformation techniques and a customized convolutional neural network helps design a robust image forgery detection framework. Experiments and findings establish that the proposed framework accurately classifies forged images, thus mitigating the negative socio-cultural spread of misinformation.

Details

Information Technology & People, vol. 37 no. 2
Type: Research Article
ISSN: 0959-3845
