Search results

1 – 10 of over 1000
Article
Publication date: 29 July 2014

Xiang Gao, Hua Wang and Guanlong Chen



Abstract

Purpose

Fitting evenness is a key characteristic of the optimal fit of three-dimensional objects. A weighted Gaussian imaging method is developed to optimize the fitting evenness of auto-body taillights.

Design/methodology/approach

Fitting boundary contours are extracted from scanning data points. Optimal fitting target is represented with gap and flushness between taillight and auto body. By optimizing the fitting position of the projected boundary contours on the Gaussian sphere, the weighted Gaussian imaging method accomplishes optimal requirements of gap and flushness. A scanning system is established, and the fitting contour of the taillight assembly model is extracted to analyse the quality of the fitting process.
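The gap-and-flushness target described above can be sketched numerically. The point pairs, normals and the `gap_and_flushness` helper below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def gap_and_flushness(body_pts, lamp_pts, body_normals):
    """body_pts, lamp_pts: (N, 3) matched boundary points;
    body_normals: (N, 3) unit surface normals at the body points."""
    d = lamp_pts - body_pts                       # displacement per matched pair
    flush = np.sum(d * body_normals, axis=1)      # component along the normal
    in_plane = d - flush[:, None] * body_normals  # component in the surface plane
    gap = np.linalg.norm(in_plane, axis=1)        # in-plane separation
    return gap, flush

body = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
lamp = np.array([[0.0, 2.0, 1.0], [1.0, 2.0, 1.0]])
normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
gap, flush = gap_and_flushness(body, lamp, normals)
print(gap)    # in-plane gap per pair: [2. 2.]
print(flush)  # normal offset (flushness) per pair: [1. 1.]
```

Minimizing the spread of these per-pair gap and flushness values over a candidate fitting position is one way to phrase the evenness objective.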

Findings

The proposed method accomplishes taillight fitting optimization with higher efficiency than traditional approaches.

Originality/value

The weighted Gaussian imaging method is used to optimize the taillight fitting. The proposed method optimizes the fit in the objects' 3-D space, while traditional fitting methods are based on 2-D algorithms. Its time complexity is O(n^3), while that of the traditional methods is O(n^5). The results of this research will enhance the understanding of 3-D optimal fitting and help to systematically improve productivity and fitting quality in the automotive industry.

Details

Assembly Automation, vol. 34 no. 3
Type: Research Article
ISSN: 0144-5154


Article
Publication date: 19 October 2018

Shuanggao Li, Zhengping Deng, Qi Zeng and Xiang Huang


Abstract

Purpose

The out-field assembly of large components is an important part of aircraft usage and maintenance. At present it is mostly accomplished manually, as the commonly used large-volume measurement systems are usually inapplicable. This paper aims to propose a novel coaxial alignment method for large aircraft component assembly using distributed monocular vision.

Design/methodology/approach

For each of the mating holes on the components, a monocular vision module is applied to measure the hole poses; together, the modules form a distributed monocular vision system. A new unconstrained hole-pose optimization model is developed that accounts for complicated wear on hole edges, and it is solved by an iterative reweighted particle swarm optimization (IR-PSO) method. Based on the obtained hole poses, a Plücker line coordinates-based method is proposed for evaluating the relative posture between the components, and the analytical solution of the posture parameters is derived. The movements required for coaxial alignment are finally calculated using the kinematics model of the parallel mechanism.
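The Plücker representation of a hole axis can be sketched as follows; the `plucker` and `axis_misalignment` helpers are generic illustrations of the line-coordinate machinery, not the paper's derivation:

```python
import numpy as np

def plucker(p, d):
    """Plücker coordinates (d, m) of the line through point p with
    unit direction d; the moment is m = p x d."""
    d = d / np.linalg.norm(d)
    return d, np.cross(p, d)

def axis_misalignment(l1, l2):
    """Angle between two lines and their shortest distance."""
    d1, m1 = l1
    d2, m2 = l2
    angle = np.arccos(np.clip(abs(d1 @ d2), 0.0, 1.0))
    cross = np.cross(d1, d2)
    n = np.linalg.norm(cross)
    if n < 1e-12:                      # parallel axes: use moment difference
        dist = np.linalg.norm(np.cross(d1, m1 - m2))
    else:                              # reciprocal product / |d1 x d2|
        dist = abs(d1 @ m2 + d2 @ m1) / n
    return angle, dist

l1 = plucker(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
l2 = plucker(np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
angle, dist = axis_misalignment(l1, l2)
print(angle, dist)  # parallel axes: 0 rad angle, 1.0 lateral offset
```

Driving both the angle and the offset toward zero is exactly the coaxial-alignment condition on two mating holes.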

Findings

The IR-PSO method yields more accurate hole-pose parameters than the state-of-the-art method under complicated hole-wear conditions, and it is much more efficient owing to the elimination of constraints. The accuracy of the Plücker line coordinates-based relative posture evaluation (PRPE) method is competitive with the singular value decomposition (SVD) method, but it does not rely on point-set correspondence; thus, it is more appropriate for coaxial alignment.

Practical implications

An automatic coaxial alignment system (ACAS) has been developed for the assembly of a large pilotless aircraft, and a coaxial error of 0.04 mm is realized.

Originality/value

The IR-PSO method can be applied to pose optimization of other cylindrical objects, and the analytical solution of Plücker line coordinates-based axis registration is derived for the first time.

Details

Assembly Automation, vol. 38 no. 4
Type: Research Article
ISSN: 0144-5154


Article
Publication date: 27 July 2021

Papangkorn Pidchayathanakorn and Siriporn Supratid


Abstract

Purpose

A major success factor in proficient Bayes threshold denoising is noise variance estimation. This paper focuses on assessing different noise variance estimations in three Bayes threshold models on two characteristically different brain lesion/tumor magnetic resonance images (MRIs).

Design/methodology/approach

Here, three Bayes threshold denoising models based on different noise variance estimations in the stationary wavelet transform (SWT) domain are assessed against state-of-the-art non-local means (NLMs). The three models, namely the D1, GB and DR models, depend, respectively, on the finest detail wavelet subband at the first resolution level, on all of the detail subbands globally and on the detail subband in each direction/resolution. Explicit and implicit denoising performance is assessed, in turn, by threshold-denoising and segmentation-identification results.
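A minimal sketch of Bayes thresholding with a D1-style noise estimate (median absolute deviation of a finest-level detail subband) might look like the following; the one-level Haar step stands in for the paper's SWT, and all names and data are illustrative:

```python
import numpy as np

def haar_detail(img):
    """Horizontal detail of a one-level Haar transform (even width assumed)."""
    return (img[:, ::2] - img[:, 1::2]) / np.sqrt(2)

def bayes_threshold(subband, sigma_noise):
    """BayesShrink-style T = sigma_n^2 / sigma_x,
    with sigma_x^2 = max(var - sigma_n^2, 0)."""
    var_x = max(subband.var() - sigma_noise**2, 1e-12)
    return sigma_noise**2 / np.sqrt(var_x)

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 1, 64), (64, 1))
noisy = clean + rng.normal(0, 0.1, clean.shape)

d1 = haar_detail(noisy)
sigma = np.median(np.abs(d1)) / 0.6745        # robust MAD noise estimate
T = bayes_threshold(d1, sigma)
denoised_d1 = np.sign(d1) * np.maximum(np.abs(d1) - T, 0)  # soft threshold
print(sigma)  # close to the true 0.1 noise level
```

The GB and DR variants described above would differ only in which subbands feed the variance estimate, not in the threshold formula itself.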

Findings

Implicit performance assessment shows the first- and second-best accuracy, 0.9181 and 0.9048 Dice similarity coefficient (Dice), yielded by GB and DR, respectively; reliability is indicated by a 45.66% Dice drop for DR, compared with 53.38%, 61.03% and 35.48% for D1, GB and NLMs, when the noise level on the brain-lesion MRI increases from 0.2 to 0.9. For the brain-tumor MRI at the 0.2 noise level, DR yields the best accuracy of 0.9592 Dice; however, DR's Dice drop of 8.09% compares with 6.72%, 8.85% and 39.36% for D1, GB and NLMs. NLMs clearly shows the lowest explicit and implicit denoising performance.
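The Dice similarity coefficient used in these comparisons can be computed for binary segmentation masks as follows (a generic definition, Dice = 2|A ∩ B| / (|A| + |B|), not tied to the paper's pipeline):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(pred, truth))  # 2*2 / (3+3) = 0.666...
```

The "Dice drop" figures above are simply the relative decrease of this value as the injected noise level rises.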

Research limitations/implications

A future improvement in denoising performance may lie in a semi-supervised denoising conjunction model. Such a model would use the denoised MRIs produced by the DR and D1 thresholding models as the uncorrupted image versions, together with the noisy MRIs as the corrupted versions, during an autoencoder training phase that reconstructs the original clean image.

Practical implications

This paper should be of interest to readers in the areas of computing and information science, including data science and applications and computational health informatics, especially as a decision-support tool for medical image processing.

Originality/value

In most cases, DR and D1 provide the first- and second-best implicit performance in terms of accuracy and reliability on both the simulated, low-detail, small region-of-interest (ROI) brain-lesion MRIs and the realistic, high-detail, large-ROI brain-tumor MRIs.

Article
Publication date: 5 April 2021

Zhixin Wang, Peng Xu, Bohan Liu, Yankun Cao, Zhi Liu and Zhaojun Liu


Abstract

Purpose

This paper aims to demonstrate the principle and practical applications of hyperspectral object detection, to set out the problems now faced and possible solutions, and to discuss some challenges in this field.

Design/methodology/approach

First, the paper summarizes the current research status of hyperspectral techniques. Then, it reviews the development of underwater hyperspectral imaging (UHI) from three major aspects: UHI preprocessing, unmixing and applications. Finally, it concludes with the applications of hyperspectral imaging and future research directions.
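The unmixing stage can be sketched as linear spectral unmixing under a nonnegativity constraint: each pixel spectrum is modeled as a nonnegative combination of endmember spectra. The projected-gradient solver and toy endmembers below are assumptions for illustration, not any specific method from the survey:

```python
import numpy as np

def unmix(pixel, endmembers, iters=500, lr=0.01):
    """Projected-gradient least squares: find a >= 0 minimizing |E a - pixel|."""
    E = endmembers                            # (bands, n_endmembers)
    a = np.full(E.shape[1], 1.0 / E.shape[1])
    for _ in range(iters):
        grad = E.T @ (E @ a - pixel)
        a = np.maximum(a - lr * grad, 0.0)    # gradient step + nonnegativity
    return a

E = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # two toy endmembers
true_a = np.array([0.7, 0.3])
pixel = E @ true_a
a = unmix(pixel, E)
print(a)  # recovered abundances, approximately [0.7, 0.3]
```

Real underwater pipelines would first apply the preprocessing (attenuation and illumination correction) discussed above before abundances like these are meaningful.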

Findings

Various methods and scenarios for underwater object detection with hyperspectral imaging are compared, which include preprocessing, unmixing and classification. A summary is made to demonstrate the application scope and results of different methods, which may play an important role in the application of underwater hyperspectral object detection in the future.

Originality/value

This paper introduces several hyperspectral image-processing methods, draws conclusions about the advantages and disadvantages of each, and then discusses the challenges faced and possible ways to address them.

Details

Sensor Review, vol. 41 no. 2
Type: Research Article
ISSN: 0260-2288


Article
Publication date: 31 December 2021

Praveen Kumar Lendale and N.M. Nandhitha


Abstract

Purpose

Speckle-noise removal in ultrasound images is one of the important tasks in biomedical-imaging applications. Many filtering-based despeckling methods are discussed in existing works. Two-dimensional (2-D) transforms have also been used extensively to reduce speckle noise in ultrasound medical images. In recent years, many soft-computing-based intelligent techniques have been applied to noise removal and segmentation. However, hybrid approaches are needed to improve despeckling accuracy.

Design/methodology/approach

The work focuses on a double filter-bank structure with a framelet transform combined with a Gaussian filter (GF), together with a fuzzy clustering approach, for despeckling ultrasound medical images. The presented transform efficiently rejects speckle noise based on gray-scale relative thresholding, while the directional filter bank (DFB) preserves edge information.

Findings

The proposed approach is evaluated with several performance indicators: the mean square error (MSE), peak signal-to-noise ratio (PSNR), speckle suppression index (SSI), mean structural similarity and edge preservation index (EPI). The proposed methodology is found to be superior in terms of all of these indicators.
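Two of the metrics listed can be sketched under their common definitions; `psnr` and `ssi` below are generic implementations (PSNR in dB against a reference, and SSI as the ratio of coefficient-of-variation after filtering to before), not the paper's code:

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak**2 / mse)

def ssi(noisy, filtered):
    """Speckle suppression index; values below 1 indicate suppression."""
    return (filtered.std() / filtered.mean()) / (noisy.std() / noisy.mean())

ref = np.full((8, 8), 100.0)
noisy = ref + np.array([[10.0, -10.0] * 4] * 8)     # strong speckle pattern
filtered = ref + np.array([[2.0, -2.0] * 4] * 8)    # attenuated speckle
print(psnr(ref, filtered))  # higher than psnr(ref, noisy)
print(ssi(noisy, filtered)) # 0.2, i.e. strong suppression
```

In a real despeckling study the filtered image would come from the framelet/GF pipeline rather than being constructed by hand.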

Originality/value

Fuzzy clustering methods have proved better than conventional threshold methods for noise removal. The algorithm achieves a noticeable improvement over other modern speckle-reduction procedures, as it preserves geometric features even after noise removal.

Details

International Journal of Intelligent Unmanned Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2049-6427


Article
Publication date: 19 March 2024

Cemalettin Akdoğan, Tolga Özer and Yüksel Oğuz


Abstract

Purpose

Nowadays, food problems are likely to arise because of the increasing global population and decreasing arable land. It is therefore necessary to increase the yield of agricultural products, and pesticides can be used to improve the productivity of agricultural land. This study aims to make the spraying of cherry trees more effective and efficient with the designed artificial intelligence (AI)-based agricultural unmanned aerial vehicle (UAV).

Design/methodology/approach

Two approaches have been adopted for the AI-based detection of cherry trees. In Approach 1, YOLOv5, YOLOv7 and YOLOv8 models are trained for 70, 100 and 150 epochs. In Approach 2, a new method is proposed to improve the performance metrics obtained in Approach 1: Gaussian, wavelet transform (WT) and histogram equalization (HE) preprocessing techniques are applied to the generated data set. The best-performing models from Approaches 1 and 2 were used in a real-time test application with the developed agricultural UAV.
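The histogram equalization (HE) step of Approach 2 can be sketched in plain NumPy; the exact pipeline and parameters used by the authors are not specified here, so this is only a generic illustration of the technique:

```python
import numpy as np

def hist_equalize(img):
    """Equalize a uint8 grayscale image via its cumulative histogram.
    Assumes the image contains at least two distinct gray levels."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(hist)[0][0]]          # cdf at lowest gray level
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    return lut.clip(0, 255).astype(np.uint8)[img]  # apply lookup table

img = np.array([[50, 50, 60], [60, 70, 200]], dtype=np.uint8)
out = hist_equalize(img)
print(out.min(), out.max())  # equalized image spans the full 0..255 range
```

Stretching the contrast this way is what lets the detector see tree boundaries more clearly in uneven field lighting.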

Findings

In Approach 1, the best F1 score was 98% in 100 epochs with the YOLOv5s model. In Approach 2, the best F1 score and mAP values were obtained as 98.6% and 98.9% in 150 epochs, with the YOLOv5m model with an improvement of 0.6% in the F1 score. In real-time tests, the AI-based spraying drone system detected and sprayed cherry trees with an accuracy of 66% in Approach 1 and 77% in Approach 2. It was revealed that the use of pesticides could be reduced by 53% and the energy consumption of the spraying system by 47%.

Originality/value

An original data set was created by designing an agricultural drone to detect and spray cherry trees using AI. YOLOv5, YOLOv7 and YOLOv8 models were used to detect and classify cherry trees. The results of the performance metrics of the models are compared. In Approach 2, a method including HE, Gaussian and WT is proposed, and the performance metrics are improved. The effect of the proposed method in a real-time experimental application is thoroughly analyzed.

Details

Robotic Intelligence and Automation, vol. 44 no. 1
Type: Research Article
ISSN: 2754-6969


Article
Publication date: 27 March 2009

Ntogas Nikolaos and Ventzas Dimitrios


Abstract

Purpose

The purpose of this paper is to introduce an innovative procedure for binarizing digital historical document images based on image pre-processing and image-condition classification. The estimated results for each image class and each method show improved image quality across the six categories of document images, each described by its separate characteristics.

Design/methodology/approach

The applied technique consists of five stages: text-image acquisition, image preparation and denoising, image-type classification into six categories according to image condition, image thresholding and final refinement. This is a very effective approach to binarizing document images. The authors' method requires minimal pre-processing steps for best image quality and increased text readability, and it performs better than current state-of-the-art adaptive thresholding techniques.
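The thresholding stage can be illustrated with a generic global Otsu threshold; the authors' own thresholding is condition-dependent, so this is only a stand-in for that step, with a made-up miniature "document" image:

```python
import numpy as np

def otsu_threshold(img):
    """Pick the gray level maximizing between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    mean_all = np.dot(np.arange(256), hist) / total
    best_t, best_var = 0, -1.0
    w0 = cum = 0.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        cum += t * hist[t]
        m0 = cum / w0                            # mean of the dark class
        m1 = (mean_all * total - cum) / (total - w0)  # mean of the bright class
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

img = np.array([[10, 12, 240], [11, 250, 245]], dtype=np.uint8)  # ink vs paper
t = otsu_threshold(img)
binary = (img > t).astype(np.uint8)
print(t)  # threshold separating dark text pixels from bright paper
```

Historical documents usually defeat a single global threshold, which is exactly why the paper classifies image condition first and refines afterwards.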

Findings

An innovative procedure is presented for digital historical document image binarization based on image pre-processing, image-type classification into categories according to image condition and further enhancement. The methodology is robust and simple, requires minimal pre-processing steps for best image quality and increased text readability, and performs better than available thresholding techniques.

Research limitations/implications

The technique consists of a limited but optimized sequence of pre-processing steps. Attention should be given to document image preparation and denoising, and to image-condition classification for thresholding and refinement, since bad results in a single stage corrupt the final document image quality and text readability.

Originality/value

The paper contributes to the digital binarization of text images, suggesting a procedure based on image preparation, image-type classification, thresholding and image refinement, with applicability to Byzantine historical documents.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 2 no. 1
Type: Research Article
ISSN: 1756-378X


Article
Publication date: 19 June 2017

Qingchen Qiu, Xuelian Wu, Zhi Liu, Bo Tang, Yuefeng Zhao, Xinyi Wu, Hongliang Zhu and Yang Xin


Abstract

Purpose

This paper aims to provide a framework for supervised hyperspectral classification by reviewing the traditional flowchart of hyperspectral image (HSI) analysis and processing. HSI technology was proposed many years ago, and its applications have been promoted by technical advancements.

Design/methodology/approach

First, the properties and current situation of hyperspectral technology are summarized. Then, this paper introduces a series of common classification approaches. In addition, a comparison of different classification approaches on real hyperspectral data is conducted. Finally, this survey presents a discussion on the classification results and points out the classification development tendency.
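A minimal supervised baseline of the kind such surveys compare is a minimum-distance (nearest-centroid) classifier over pixel spectra; the spectra below are synthetic stand-ins for labeled hyperspectral pixels, not data from the survey:

```python
import numpy as np

def fit_centroids(X, y):
    """Mean spectrum per class. X: (n_pixels, n_bands); y: labels."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def predict(X, classes, centroids):
    """Assign each pixel to its nearest class-mean spectrum."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

X_train = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]])
y_train = np.array([0, 0, 1, 1])
classes, cent = fit_centroids(X_train, y_train)
pred = predict(np.array([[0.8, 0.3], [0.0, 1.1]]), classes, cent)
print(pred)  # [0 1]
```

The classifiers the survey actually compares (e.g. SVMs, kernel methods) replace this distance rule with learned decision boundaries over the same per-pixel spectra.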

Findings

The core of this survey is to review the state of the art in classification for hyperspectral images, to study the performance and efficiency of certain implementation measures and to point out the challenges that still exist.

Originality/value

The study categorized the supervised classification for hyperspectral images, demonstrated the comparisons among these methods and pointed out the challenges that still exist.

Details

Sensor Review, vol. 37 no. 3
Type: Research Article
ISSN: 0260-2288


Article
Publication date: 4 September 2019

Li Na, Xiong Zhiyong, Deng Tianqi and Ren Kai


Abstract

Purpose

The precise segmentation of brain tumors is the most important and crucial step in their diagnosis and treatment. Due to the presence of noise, uneven gray levels, blurred boundaries and edema around the brain tumor region, the brain tumor image has indistinct features in the tumor region, which pose a problem for diagnostics. The paper aims to discuss these issues.

Design/methodology/approach

In this paper, the authors propose an original segmentation solution using Tamura texture features and an ensemble Support Vector Machine (SVM) structure. In the proposed technique, 124 features are extracted for each voxel, including Tamura texture features and grayscale features. These features are then ranked using the SVM-Recursive Feature Elimination method, which is also adopted to optimize the parameters of the Radial Basis Function kernel of the SVMs. Finally, the bagging random sampling method is utilized to construct an ensemble SVM classifier based on a weighted voting mechanism to classify voxel types.
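The weighted-voting step of such an ensemble can be sketched as follows; the vote matrix and accuracy weights are placeholders, not the outputs of the paper's RBF SVMs:

```python
import numpy as np

def weighted_vote(votes, weights, n_classes):
    """votes: (n_models, n_samples) predicted labels;
    weights: (n_models,) per-model voting weights."""
    scores = np.zeros((n_classes, votes.shape[1]))
    for m in range(votes.shape[0]):
        for c in range(n_classes):
            scores[c] += weights[m] * (votes[m] == c)  # weighted tally
    return scores.argmax(axis=0)

votes = np.array([[0, 1, 1],     # model 1's label per voxel
                  [0, 0, 1],     # model 2
                  [1, 0, 1]])    # model 3
weights = np.array([0.9, 0.8, 0.5])  # e.g. per-model training accuracy
print(weighted_vote(votes, weights, 2))  # [0 0 1]
```

Note how the third sample's unanimous vote wins outright, while the first two are decided by the higher-weighted models outvoting the weakest one.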

Findings

The experiments are conducted on the BraTS2015 data set. They demonstrate that Tamura texture is very useful in the segmentation of brain tumors, especially the line-likeness feature. The superior performance of the proposed ensemble SVM classifier is demonstrated by comparison with single SVM classifiers as well as other methods.

Originality/value

The authors propose an original solution for segmentation using Tamura Texture and ensemble SVM structure.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 12 no. 4
Type: Research Article
ISSN: 1756-378X


Article
Publication date: 10 January 2024

Sara El-Ateif, Ali Idri and José Luis Fernández-Alemán


Abstract

Purpose

COVID-19 continues to spread and cause increasing deaths. Physicians diagnose COVID-19 using not only real-time polymerase chain reaction but also the computed tomography (CT) and chest X-ray (CXR) modalities, depending on the stage of infection. However, with so many patients and so few doctors, it has become difficult to keep abreast of the disease. Deep learning models have been developed to assist in this respect, and vision transformers are currently state-of-the-art methods, but most techniques currently focus on only one modality (CXR).

Design/methodology/approach

This work aims to leverage the benefits of both CT and CXR to improve COVID-19 diagnosis. This paper studies the differences between using convolutional MobileNetV2, ViT DeiT and Swin Transformer models when training from scratch and pretraining on the MedNIST medical dataset rather than the ImageNet dataset of natural images. The comparison is made by reporting six performance metrics, the Scott–Knott Effect Size Difference, Wilcoxon statistical test and the Borda Count method. We also use the Grad-CAM algorithm to study the model's interpretability. Finally, the model's robustness is tested by evaluating it on Gaussian noised images.
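The Gaussian-noise robustness check described above can be sketched generically; `add_gaussian_noise` is an assumed helper name, and a real evaluation would run the trained network on the noisy images rather than just inspecting them:

```python
import numpy as np

def add_gaussian_noise(images, sigma, rng):
    """images in [0, 1]; returns noisy copies clipped back to [0, 1]."""
    return np.clip(images + rng.normal(0.0, sigma, images.shape), 0.0, 1.0)

rng = np.random.default_rng(0)
images = rng.random((4, 8, 8))                      # stand-in test batch
noisy = add_gaussian_noise(images, sigma=0.1, rng=rng)
print(noisy.shape, float(noisy.min()), float(noisy.max()))
```

Sweeping `sigma` and plotting accuracy against it is the usual way to turn this perturbation into the robustness comparison the paper reports.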

Findings

Although the pretrained MobileNetV2 was the best model in terms of raw performance, the best model in terms of performance, interpretability and robustness to noise was the Swin Transformer trained from scratch, using the CXR (accuracy = 93.21 per cent) and CT (accuracy = 94.14 per cent) modalities.

Originality/value

Models compared are pretrained on MedNIST and leverage both the CT and CXR modalities.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288

