Search results

1 – 10 of 46
Article
Publication date: 1 September 2022

Xuwen Chi, Cao Tan, Bo Li, Jiayu Lu, Chaofan Gu and Changzhong Fu

The purpose of this paper is to address the common problem that traditional optimization methods cannot fully improve the performance of electromagnetic linear actuators (EMLAs).

Abstract

Purpose

The purpose of this paper is to address the common problem that traditional optimization methods cannot fully improve the performance of electromagnetic linear actuators (EMLAs).

Design/methodology/approach

In this paper, a multidisciplinary optimization (MDO) method based on the non-dominated sorting genetic algorithm-II (NSGA-II) was proposed. An electromagnetic-mechanical coupled analysis model of the EMLA was established, and the coupling relationship between the static and dynamic performance of the actuator was analyzed. Suitable optimization variables were designed based on fuzzy grey theory to address the incompleteness of the actuator data and the uncertainty of the coupling relationship. A multiobjective genetic algorithm was used to obtain the Pareto-optimal solution set, with the maximum electromagnetic force, electromagnetic force fluctuation rate, time constant and efficiency as the optimization objectives; the final optimization results were then obtained through a multicriteria decision-making method.
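
The paper's implementation is not reproduced in the abstract; as an illustrative sketch only, the non-dominated sorting at the heart of NSGA-II can be expressed as below. All objectives are treated as minimized, so a maximized quantity such as electromagnetic force would be negated first.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_front(points):
    """First Pareto front: the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical (cost1, cost2) pairs; (3, 3) is dominated by (2, 2).
designs = [(1, 5), (2, 2), (5, 1), (3, 3)]
front = non_dominated_front(designs)
```

NSGA-II repeats this sorting over successive fronts and adds crowding-distance selection; the multicriteria decision-making step mentioned above then picks one solution from the final front.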

Findings

The experimental results show that the maximum electromagnetic force, electromagnetic force fluctuation rate, time constant and efficiency are improved by 18.1%, 38.5%, 8.5% and 12%, respectively. Comparison with single-discipline optimization verified the effectiveness of the MDO method.

Originality/value

This paper proposes an MDO method for EMLAs that takes both static and dynamic performance into account; the proposed method is also applicable to the design and analysis of various electromagnetic actuators.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 42 no. 2
Type: Research Article
ISSN: 0332-1649

Keywords

Article
Publication date: 2 January 2024

Xiumei Cai, Xi Yang and Chengmao Wu

Multi-view fuzzy clustering algorithms are not widely used in image segmentation, and many of these algorithms are lacking in robustness. The purpose of this paper is to…

Abstract

Purpose

Multi-view fuzzy clustering algorithms are not widely used in image segmentation, and many of these algorithms are lacking in robustness. The purpose of this paper is to investigate a new algorithm that can segment the image better and retain as much detailed information about the image as possible when segmenting noisy images.

Design/methodology/approach

The authors present a novel multi-view fuzzy c-means (FCM) clustering algorithm that includes an automatic view-weight learning mechanism. Firstly, this algorithm introduces a view-weight factor that can automatically adjust the weight of different views, thereby allowing each view to obtain the best possible weight. Secondly, the algorithm incorporates a weighted fuzzy factor, which serves to obtain local spatial information and local grayscale information to preserve image details as much as possible. Finally, in order to weaken the effects of noise and outliers in image segmentation, this algorithm employs the kernel distance measure instead of the Euclidean distance.
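
The paper's exact formulation is not given in the abstract; the sketch below shows, under assumed update rules, how a Gaussian-kernel distance can replace the Euclidean distance in a plain FCM loop. The view-weight factor and the local spatial factor are omitted, so this is the kernel-distance idea only.

```python
import numpy as np

def kernel_fcm(X, c=2, m=2.0, sigma=1.0, n_iter=50):
    """Minimal kernel FCM: memberships computed from a kernel-induced distance."""
    # Deterministic init: spread initial centres across the data.
    V = X[np.linspace(0, len(X) - 1, c).astype(int)].astype(float)
    for _ in range(n_iter):
        K = np.exp(-((X[:, None] - V[None, :]) ** 2).sum(-1) / sigma**2)  # (n, c)
        d2 = np.maximum(2.0 * (1.0 - K), 1e-12)  # kernel-induced squared distance
        U = d2 ** (-1.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)        # fuzzy memberships, rows sum to 1
        W = (U ** m) * K                         # kernel-weighted contributions
        V = (W.T @ X) / W.sum(axis=0)[:, None]   # centre update
    return U, V
```

On two well-separated 1-D clusters this recovers one centre per cluster; in the paper's setting X would hold per-view pixel features and the memberships would be fused across views.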

Findings

The authors added different kinds of noise to images and conducted a large number of experimental tests. The results show that the proposed algorithm performs better and is more accurate than previous multi-view fuzzy clustering algorithms in solving the problem of noisy image segmentation.

Originality/value

Most of the existing multi-view clustering algorithms are for multi-view datasets, and the multi-view fuzzy clustering algorithms are unable to eliminate noise points and outliers when dealing with noisy images. The algorithm proposed in this paper has stronger noise immunity and can better preserve the details of the original image.

Details

Engineering Computations, vol. 41 no. 1
Type: Research Article
ISSN: 0264-4401

Keywords

Open Access
Article
Publication date: 21 June 2022

Abhishek Das and Mihir Narayan Mohanty

Timely and accurate detection of cancer can save the life of the person affected. According to the World Health Organization (WHO), breast cancer has the most frequent…

Abstract

Purpose

Timely and accurate detection of cancer can save the life of the person affected. According to the World Health Organization (WHO), breast cancer has the highest incidence among all cancers, while ranking fifth in mortality. Out of many image processing techniques, certain works have focused on convolutional neural networks (CNNs) for processing these images. However, deep learning models remain to be explored more thoroughly.

Design/methodology/approach

In this work, multivariate statistics-based kernel principal component analysis (KPCA) is used to extract essential features; KPCA is simultaneously helpful for denoising the data. These features are processed through a heterogeneous ensemble model that consists of three base models: a recurrent neural network (RNN), long short-term memory (LSTM) and a gated recurrent unit (GRU). The outcomes of these base learners are fed to a fuzzy adaptive resonance theory mapping (ARTMAP) model for decision making; nodes are added to its F_2^a layer only when the winning criteria are fulfilled, which makes the ARTMAP model more robust.
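
The abstract names KPCA as the feature extractor; a minimal NumPy sketch of RBF-kernel PCA (kernel matrix centring plus eigendecomposition, with parameters chosen arbitrarily here rather than taken from the paper) looks like:

```python
import numpy as np

def kernel_pca(X, n_components, gamma=0.1):
    """RBF-kernel PCA: project X onto its top kernel principal components."""
    sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                      # RBF kernel matrix
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                               # centre the kernel in feature space
    vals, vecs = np.linalg.eigh(Kc)              # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]  # keep the top components
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                           # reduced features for the ensemble
```

The returned low-dimensional features are what would be passed to the RNN/LSTM/GRU base learners; discarding the small-eigenvalue directions is what gives KPCA its denoising effect.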

Findings

The proposed model is verified using the breast histopathology image dataset publicly available on Kaggle. The model provides 99.36% training accuracy and 98.72% validation accuracy. The proposed model exploits data processing at every stage: image denoising reduces data redundancy, ensemble training provides higher accuracy than single models, and final classification by a fuzzy ARTMAP model that controls the number of nodes depending on performance makes the classification robust and accurate.

Research limitations/implications

Research in the field of medical applications is an ongoing process. More advanced algorithms are being developed for better classification, and there remains scope to design models with better performance, practicability and cost efficiency. The ensemble may also be built from different combinations of base models, and signals, rather than images, may likewise be verified with the proposed model. Experimental analysis shows the improved performance of the proposed model, but the method still needs to be verified on practical systems; a practical implementation will be carried out to assess real-time performance and cost efficiency.

Originality/value

KPCA is utilized for denoising and for reducing data redundancy, and feature selection is performed on its output. Training and classification are carried out by a heterogeneous ensemble model designed with RNN, LSTM and GRU base classifiers, which provides higher accuracy than single models, and the adaptive fuzzy ARTMAP model makes the final classification accurate. The effectiveness of combining these methods into a single model is analyzed in this work.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Keywords

Article
Publication date: 25 January 2018

Hima Bindu and Manjunathachari K.

This paper aims to develop a hybrid feature descriptor and probabilistic neuro-fuzzy system to attain high accuracy in face recognition. In recent days, facial…

Abstract

Purpose

This paper aims to develop a hybrid feature descriptor and probabilistic neuro-fuzzy system to attain high accuracy in face recognition. Facial recognition (FR) systems currently play a vital part in several applications such as surveillance, access control and image understanding. Accordingly, various face recognition methods have been developed in the literature, but their applicability is restricted by unsatisfactory accuracy, so improving face recognition accuracy remains significant.

Design/methodology/approach

This paper proposes a face recognition system built on feature extraction and classification. The proposed model extracts both local and global features of the image. Local features are extracted using the kernel-based scale invariant feature transform (K-SIFT) model, and global features are extracted using the proposed m-Co-HOG model, which inherits the properties of the co-occurrence histograms of oriented gradients (Co-HOG) algorithm. The feature vector database contains the combined local and global feature vectors derived from the K-SIFT model and the proposed m-Co-HOG algorithm. A probabilistic neuro-fuzzy classifier then identifies the person from the extracted feature vector database.
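
Neither m-Co-HOG nor K-SIFT is specified in the abstract; as a rough, hypothetical illustration of the HOG family these descriptors build on, a global gradient orientation histogram can be computed as:

```python
import numpy as np

def orientation_histogram(img, n_bins=8):
    """Magnitude-weighted histogram of unsigned gradient orientations -
    the basic ingredient of HOG/Co-HOG descriptors (no cells, blocks or
    co-occurrence pairs here)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)            # orientation in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (hist.sum() + 1e-12)                 # L1-normalised descriptor
```

Co-HOG replaces the single-orientation bins with co-occurring orientation pairs at a set of pixel offsets, and the paper's m-Co-HOG additionally folds in a color gradient decomposition.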

Findings

The face images required for the simulation of the proposed work are taken from the CVL database. The simulation considers a total of 114 persons from the CVL database. From the results, it is evident that the proposed model has outperformed the existing models with an improved accuracy of 0.98. The false acceptance rate (FAR) and false rejection rate (FRR) of the proposed model are both as low as 0.01.

Originality/value

This paper proposes a face recognition system with proposed m-Co-HOG vector and the hybrid neuro-fuzzy classifier. Feature extraction was based on the proposed m-Co-HOG vector for extracting the global features and the existing K-SIFT model for extracting the local features from the face images. The proposed m-Co-HOG vector utilizes the existing Co-HOG model for feature extraction, along with a new color gradient decomposition method. The major advantage of the proposed m-Co-HOG vector is that it utilizes the color features of the image along with other features during the histogram operation.

Details

Sensor Review, vol. 38 no. 3
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 4 September 2019

Li Na, Xiong Zhiyong, Deng Tianqi and Ren Kai

The precise segmentation of brain tumors is the most important and crucial step in their diagnosis and treatment. Due to the presence of noise, uneven gray levels, blurred…

Abstract

Purpose

The precise segmentation of brain tumors is the most important and crucial step in their diagnosis and treatment. Due to the presence of noise, uneven gray levels, blurred boundaries and edema around the brain tumor region, the brain tumor image has indistinct features in the tumor region, which pose a problem for diagnostics. The paper aims to discuss these issues.

Design/methodology/approach

In this paper, the authors propose an original solution for segmentation using Tamura texture and an ensemble support vector machine (SVM) structure. In the proposed technique, 124 features of each voxel are extracted, including Tamura texture features and grayscale features. These features are then ranked using the SVM-recursive feature elimination method, which is also adopted to optimize the parameters of the radial basis function kernel of the SVMs. Finally, the bagging random sampling method is utilized to construct the ensemble SVM classifier based on a weighted voting mechanism to classify the voxel types.
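
The weighted voting at the ensemble's output stage can be sketched as follows (illustrative only; the label arrays and classifier weights here are hypothetical, not the paper's):

```python
import numpy as np

def weighted_vote(predictions, weights):
    """Combine hard label predictions from several classifiers.
    predictions: (n_classifiers, n_samples) array of labels;
    weights: one voting weight per classifier."""
    predictions = np.asarray(predictions)
    weights = np.asarray(weights, dtype=float)
    labels = np.unique(predictions)
    # score[l, s] = total weight of classifiers voting label l on sample s
    score = np.stack([(predictions == lab).T @ weights for lab in labels])
    return labels[np.argmax(score, axis=0)]

# Three bagged classifiers vote on three voxels; weights could come from
# each classifier's out-of-bag accuracy.
fused = weighted_vote([[0, 1, 1], [0, 0, 1], [1, 1, 1]], [0.5, 0.3, 0.2])
```

Each voxel receives the label whose supporting classifiers carry the most total weight, which is exactly why a few well-weighted members can outvote a larger number of weak ones.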

Findings

The experiments are conducted on the BraTS2015 data set. They demonstrate that Tamura texture is very useful in the segmentation of brain tumors, especially the line-likeness feature. The superior performance of the proposed ensemble SVM classifier is demonstrated by comparison with single SVM classifiers as well as other methods.

Originality/value

The authors propose an original solution for segmentation using Tamura texture and an ensemble SVM structure.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 12 no. 4
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 1 July 2005

G.Y. Hong, B. Fong and A.C.M. Fong

We describe an intelligent video categorization engine (IVCE) that uses the learning capability of artificial neural networks (ANNs) to classify suitably preprocessed video…

Abstract

Purpose

We describe an intelligent video categorization engine (IVCE) that uses the learning capability of artificial neural networks (ANNs) to classify suitably preprocessed video segments into a predefined number of semantically meaningful events (categories).

Design/methodology/approach

We provide a survey of existing techniques that have been proposed, either directly or indirectly, towards achieving intelligent video categorization. We also compare the performance of two popular ANNs: Kohonen's self‐organizing map (SOM) and fuzzy adaptive resonance theory (Fuzzy ART). In particular, the ANNs are trained offline to form the necessary knowledge base prior to online categorization.
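
As a hypothetical illustration of the SOM side of the comparison, a tiny 1-D Kohonen map trains by moving the best-matching unit and its grid neighbours toward each sample; all sizes and schedules below are arbitrary, not from the paper:

```python
import numpy as np

def train_som(X, n_units=4, n_iter=300, lr0=0.5, sigma0=1.0, seed=0):
    """Minimal 1-D Kohonen SOM with decaying learning rate and neighbourhood."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_units, X.shape[1]))           # unit weight vectors
    for t in range(n_iter):
        x = X[rng.integers(len(X))]                      # random training sample
        bmu = int(np.argmin(((W - x) ** 2).sum(axis=1))) # best-matching unit
        frac = t / n_iter
        lr = lr0 * (1.0 - frac)                          # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 1e-3             # shrinking neighbourhood
        grid_dist = np.abs(np.arange(n_units) - bmu)
        h = np.exp(-(grid_dist ** 2) / (2.0 * sigma ** 2))
        W += lr * h[:, None] * (x - W)                   # pull units toward x
    return W
```

After offline training like this, each preprocessed video segment would be assigned to the category of its best-matching unit; Fuzzy ART differs in that it grows new category nodes when no existing one passes its vigilance test.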

Findings

Experimental results show that accurate categorization can be achieved near instantaneously.

Research limitations

The main limitation of this research is the need for a finite set of predefined categories. Further research should focus on generalization of such techniques.

Originality/value

Machine understanding of video footage has tremendous potential for three reasons. First, it enables interactive broadcast of video. Second, it allows unequal error protection for different video shots/segments during transmission to make better use of limited channel resources. Third, it provides intuitive indexing and retrieval for video‐on‐demand applications.

Details

Kybernetes, vol. 34 no. 6
Type: Research Article
ISSN: 0368-492X

Keywords

Article
Publication date: 21 August 2023

Tomasz Rogalski, Paweł Rzucidło, Stanisław Noga and Dariusz Nowak

This study presents an image processing algorithm capable of calculating selected flight parameters requested by flight control systems to guide aircraft along the horizontal…

Abstract

Purpose

This study presents an image processing algorithm capable of calculating selected flight parameters requested by flight control systems to guide aircraft along the horizontal projection of the landing trajectory. Parameters identified from the image of the Calvert light system captured by the on-board video system are used by flight control algorithms that imitate the pilot's control scheme. Controls were generated using a fuzzy logic expert system. This study aims to analyse an alternative to classical solutions that can be applied in some specific cases.
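
The rule base itself is not published in the abstract; a deliberately simplified, hypothetical fuzzy mapping from lateral deviation to a roll command illustrates the pilot-imitating idea (membership shapes, ranges and outputs are invented for this sketch):

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function over [a, c] peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def roll_command(lateral_dev):
    """Mamdani-style toy rule base: lateral deviation (m) -> roll command (deg),
    defuzzified as the membership-weighted average of the rule outputs."""
    rules = [  # (firing strength, crisp roll output of the rule)
        (tri(lateral_dev, -60, -30, 0),  15.0),   # left of track  -> bank right
        (tri(lateral_dev, -30,   0, 30),  0.0),   # on track       -> wings level
        (tri(lateral_dev,   0,  30, 60), -15.0),  # right of track -> bank left
    ]
    w = np.array([m for m, _ in rules])
    out = np.array([o for _, o in rules])
    return float((w * out).sum() / (w.sum() + 1e-12))
```

The overlap between neighbouring memberships is what gives the smooth, pilot-like interpolation between "hold heading" and "correct back to the centreline".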

Design/methodology/approach

The paper uses theoretical analysis to establish the structures of both the image processing algorithms and the control algorithms. The analytical first stage was followed by laboratory rig tests using a real autopilot unit, and the results of this research were verified in a series of software-in-the-loop computer simulations.

Findings

The study delivers an image processing method that extracts the most crucial parameters defining the position of the aircraft relative to the runway, as well as the control algorithm that uses them.

Practical implications

The approach can be applied in flight control systems that do not use any dedicated ground or satellite infrastructure to land the aircraft.

Originality/value

This paper presents the original approach of the author to aircraft control in cases where visual signals are used to determine the flight trajectory of the aircraft.

Details

Aircraft Engineering and Aerospace Technology, vol. 95 no. 9
Type: Research Article
ISSN: 1748-8842

Keywords

Article
Publication date: 22 November 2011

Bailing Zhang

Content‐based image retrieval (CBIR) is an important research area for automatically retrieving images of user interest from a large database. Due to many potential applications…

Abstract

Purpose

Content‐based image retrieval (CBIR) is an important research area for automatically retrieving images of user interest from a large database. Due to many potential applications, facial image retrieval has received much attention in recent years. Similar to face recognition, finding appropriate image representation is a vital step for a successful facial image retrieval system. Recently, many efficient image feature descriptors have been proposed and some of them have been applied to face recognition. It is valuable to have comparative studies of different feature descriptors in facial image retrieval. And more importantly, how to fuse multiple features is a significant task which can have a substantial impact on the overall performance of the CBIR system. The purpose of this paper is to propose an efficient face image retrieval strategy.

Design/methodology/approach

In this paper, three feature description methods are investigated for facial image retrieval: the local binary pattern, the curvelet transform and the pyramid histogram of oriented gradients. The large dimensionality of the extracted features is addressed by employing a manifold learning method called spectral regression. A decision-level fusion scheme, fuzzy aggregation, is then applied to combine the distance metrics from the respective dimension-reduced feature spaces.
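
As an illustrative stand-in for the decision-level fusion stage (the paper's exact fuzzy aggregation operator is not given in the abstract), a weighted fusion of per-descriptor distances after min-max normalisation could look like:

```python
import numpy as np

def fused_ranking(dist_lists, weights):
    """Fuse query-to-gallery distance vectors from several descriptors
    (e.g. LBP, curvelet, PHOG spaces) and rank gallery images, best first."""
    fused = np.zeros(len(dist_lists[0]))
    for d, w in zip(dist_lists, weights):
        d = np.asarray(d, dtype=float)
        d = (d - d.min()) / (d.max() - d.min() + 1e-12)  # min-max normalise
        fused += w * d                                   # weighted aggregation
    return np.argsort(fused)                             # ascending fused distance

# Two hypothetical descriptors scoring three gallery faces for one query.
order = fused_ranking([[0.1, 0.9, 0.5], [0.2, 0.8, 0.9]], [0.5, 0.5])
```

Normalising each metric before combining is what keeps one feature space (with a larger raw distance scale) from dominating the fused ranking.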

Findings

Empirical evaluations on several face databases illustrate that dimension reduced features are more efficient for facial retrieval and the fuzzy aggregation fusion scheme can offer much enhanced performance. A 98 per cent rank 1 retrieval accuracy was obtained for the AR faces and 91 per cent for the FERET faces, showing that the method is robust against different variations like pose and occlusion.

Originality/value

The proposed method for facial image retrieval has a promising potential of designing a real‐world system for many applications, particularly in forensics and biometrics.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 4 no. 4
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 8 June 2012

Qiuping Wang, Tiepeng Wang and Ke Zhang

Image edge detection is an essential issue in image processing and computer vision. The purpose of this paper is to provide a novel and effective algorithm for image edge…

Abstract

Purpose

Image edge detection is an essential issue in image processing and computer vision. The purpose of this paper is to provide a novel and effective algorithm for image edge detection.

Design/methodology/approach

Because the GM(1,1) model is a typical model for tendency analysis, it can be used for edge detection. In the non-edge zones of an image the data are smooth, so the GM(1,1) prediction is close to the original image data; at edge points the data change drastically, so the predicted value will over- or underestimate the true value. First, edge information is obtained by subtracting the preprocessed image from the prediction image produced via GM(1,1). Second, a median filter is used to eliminate isolated noise points in the edge information image, and the discrete wavelet transform is used to extract the image edge. Finally, the proposed algorithm is verified by experiment.
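
The GM(1,1) prediction step that the subtraction relies on can be sketched with the standard grey-model formulas on a 1-D sequence (applying it along image rows or columns, as the paper presumably does, is left out):

```python
import numpy as np

def gm11_predict(x0):
    """Fit GM(1,1) to sequence x0 and return the fitted/predicted sequence
    for times 2..n+1 (the last entry is the one-step-ahead forecast)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                     # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # grey parameters a, b
    k = np.arange(len(x0) + 1)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time response sequence
    return np.diff(x1_hat)                             # inverse accumulation
```

On smooth (near-exponential) data the fitted values track the input closely, so the image-minus-prediction residual is near zero in flat regions and spikes at edges, which is precisely the detection principle described above.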

Findings

Experimental results show that the proposed algorithm offers precise edge localization, rich weak-edge detail and better anti-noise performance.

Practical implications

The algorithm proposed in the paper can precisely detect edge information and produce clear image detail.

Originality/value

Grey system theory, which has developed vigorously, lays a foundation for image processing, and wavelet analysis has its own strengths in this field. This paper successfully combines the grey prediction model with the discrete wavelet transform (DWT) and obtains a novel and effective algorithm for image edge detection.

Details

Kybernetes, vol. 41 no. 5/6
Type: Research Article
ISSN: 0368-492X

Keywords

Article
Publication date: 17 October 2008

Kong Yushou, Ji Lingling, Wang Changyu, Li Liguo and Zeng Liming

To forecast the path of tropical cyclones by using a non‐linear statistical forecasting technique – the method of successive analogy.


Abstract

Purpose

To forecast the path of tropical cyclones by using a non‐linear statistical forecasting technique – the method of successive analogy.

Design/methodology/approach

Non‐linear statistical forecasting models can describe the non‐linear relationships between the factors and the forecasting objects, and hence the real atmospheric movement, more accurately, so they usually have stronger forecasting capability. In practice, however, the relationships between predictors and predictands are sometimes so complex that it is very difficult or even impossible to establish such a non‐linear mathematical model. Therefore, solving the non‐linear prediction problem of atmospheric systems using a non‐function model approach is an important topic for atmospheric science.

Findings

An objective, quantitative prediction of the tropical cyclone's moving path can be obtained using the method of successive analogy, a non‐linear forecasting technique, by calculating the similarity parameters between the grayscale field and the height field.

Research limitations/implications

Further experiments are needed to verify this technique.

Practical implications

A very useful technique for solving non‐linear problems.

Originality/value

Illustrates a new technique for solving non‐linear statistical problems and its application.

Details

Kybernetes, vol. 37 no. 9/10
Type: Research Article
ISSN: 0368-492X

Keywords
