Search results

1 – 10 of 18
Article
Publication date: 8 January 2021

Ashok Naganath Shinde, Sanjay L. Nalbalwar and Anil B. Nandgaonkar

In today’s digital world, real-time health monitoring is becoming one of the most important challenges in the field of medical research. Body signals such as the electrocardiogram (ECG)…

Abstract

Purpose

In today’s digital world, real-time health monitoring is becoming one of the most important challenges in the field of medical research. Body signals such as the electrocardiogram (ECG), electromyogram and electroencephalogram (EEG) are produced in the human body. Continuous monitoring of these signals generates a huge amount of data, and thus an efficient method is required to reduce the size of the acquired data. Compressed sensing (CS) is one of the techniques used to compress the data. It is most useful in applications where the data volume is huge or where acquiring samples at the Nyquist rate is too expensive. This paper aims to propose the Lion Mutated Crow Search Algorithm (LM-CSA) to improve the performance of the CS model.

Design/methodology/approach

A new CS algorithm is presented in this paper, in which the compression process undergoes three stages: design of a stable measurement matrix, signal compression and signal reconstruction. The compression process itself follows three working steps: signal transformation, computation of Θ and normalization. As the main contribution, the Θ value is evaluated by a new “Enhanced bi-orthogonal wavelet filter.” The enhancement lies in the scaling coefficients, which are optimally tuned for the compression process. Tuning these coefficients, however, is the central challenge, and hence this work adopts a meta-heuristic strategy. A new hybrid algorithm is introduced to solve this optimization problem, named the “Lion Mutated Crow Search Algorithm (LM-CSA)”: a hybridization of the crow search algorithm (CSA) and the lion algorithm (LA) that enhances the compression performance.
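
The three-stage pipeline described above (measurement matrix design, compression, reconstruction) can be sketched for a toy 1-sparse signal. This is a minimal illustration of the generic CS principle, not the paper's enhanced bi-orthogonal wavelet method; the signal length, measurement count and Gaussian matrix are illustrative assumptions.

```python
import math
import random

random.seed(0)
N, M = 16, 6                      # signal length N, measurement count M << N

# Stage 1: stable measurement matrix (random Gaussian entries)
Phi = [[random.gauss(0.0, 1.0) for _ in range(N)] for _ in range(M)]

# A 1-sparse test signal: one nonzero coefficient at index 5
x = [0.0] * N
x[5] = 3.0

# Stage 2: compression, y = Phi @ x (M measurements instead of N samples)
y = [sum(Phi[i][j] * x[j] for j in range(N)) for i in range(M)]

# Stage 3: reconstruction by one matching-pursuit step -- pick the
# column of Phi with the largest normalized correlation with y
def column(j):
    return [Phi[i][j] for i in range(M)]

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

j_hat = max(range(N),
            key=lambda j: abs(dot(column(j), y)) / math.sqrt(dot(column(j), column(j))))
c = column(j_hat)
coef = dot(c, y) / dot(c, c)      # least-squares coefficient on the chosen column
```

For denser signals the reconstruction step iterates (orthogonal matching pursuit or L1 minimization); the paper's contribution sits in how the transform stage is tuned, which this sketch does not model.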

Findings

Finally, the proposed LM-CSA model is compared with traditional models in terms of error measures such as mean error percentage (MEP), symmetric mean absolute percentage error (SMAPE), mean absolute scaled error, mean absolute error (MAE), root mean square error, L1-norm, L2-norm and infinity-norm. For ECG analysis under bior 3.1, LM-CSA is 56.6, 62.5 and 81.5% better than the bi-orthogonal wavelet in terms of MEP, SMAPE and MAE, respectively. Under bior 3.7 for ECG analysis, LM-CSA is 0.15% better than the genetic algorithm (GA), 0.10% superior to particle swarm optimization (PSO), 0.22% superior to firefly (FF), 0.22% superior to CSA and 0.14% superior to LA in terms of L1-norm. Further, for EEG analysis, LM-CSA is 86.9 and 91.2% better than the traditional bi-orthogonal wavelet under bior 3.1. Under bior 3.3, LM-CSA is 91.7 and 73.12% better than the bi-orthogonal wavelet in terms of MAE and MEP, respectively. Under bior 3.5 for EEG, the L1-norm of LM-CSA is 0.64% superior to GA, 0.43% superior to PSO, 0.62% superior to FF, 0.84% superior to CSA and 0.60% better than LA.

Originality/value

This paper presents a novel CS framework using the LM-CSA algorithm for EEG and ECG signal compression. To the best of the authors’ knowledge, this is the first work to use LM-CSA with an enhanced bi-orthogonal wavelet filter for enhancing the CS capability as well as reducing the errors.

Details

International Journal of Pervasive Computing and Communications, vol. 18 no. 5
Type: Research Article
ISSN: 1742-7371

Keywords

Article
Publication date: 20 November 2020

Sudeepa Das, Tirath Prasad Sahu and Rekh Ram Janghel

The purpose of this paper is to modify the crow search algorithm (CSA) to enhance both exploration and exploitation capability by including two novel approaches. The positions of…

Abstract

Purpose

The purpose of this paper is to modify the crow search algorithm (CSA) to enhance both its exploration and exploitation capability by including two novel approaches. The positions of the crows are updated in two ways based on the awareness probability (AP). With AP, the position of a crow is updated by considering its velocity, calculated in a similar fashion to particle swarm optimization (PSO), to enhance the exploitation capability. Without AP, the crows are subdivided into groups by considering their weights, and the crows are updated by following the leaders of the groups distributed over the search space to enhance the exploration capability. The performance of the proposed PSO-based group-oriented CSA (PGCSA) is assessed by solving benchmark equations. Further, the proposed PGCSA algorithm is validated against recently published algorithms by solving engineering problems.
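
The baseline CSA update that PGCSA builds on can be sketched as follows. This shows the standard two-phase rule (follow a random crow's memory when it is unaware; jump randomly when it is aware), not the paper's velocity- and group-based modifications; the sphere objective, flight length fl and AP value are illustrative assumptions.

```python
import random

random.seed(1)

def sphere(x):
    # illustrative benchmark objective; minimum 0 at the origin
    return sum(v * v for v in x)

n_crows, dim, iters = 20, 5, 200
fl, ap = 2.0, 0.1                  # flight length and awareness probability
lo, hi = -5.0, 5.0
pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_crows)]
mem = [p[:] for p in pos]          # each crow's best position found so far
mem_fit = [sphere(p) for p in mem]
initial_best = min(mem_fit)

for _ in range(iters):
    for i in range(n_crows):
        m = random.randrange(n_crows)      # crow i tails a random crow m
        if random.random() >= ap:          # m is unaware: move toward m's memory
            new = [pos[i][d] + random.random() * fl * (mem[m][d] - pos[i][d])
                   for d in range(dim)]
        else:                              # m is aware: i is fooled to a random spot
            new = [random.uniform(lo, hi) for _ in range(dim)]
        if all(lo <= v <= hi for v in new):    # keep only feasible moves
            pos[i] = new
            f = sphere(new)
            if f < mem_fit[i]:
                mem[i], mem_fit[i] = new[:], f

best = min(mem_fit)
```

PGCSA replaces the first branch with a PSO-style velocity term and the second with moves toward weighted group leaders, which is where the claimed exploitation/exploration gains come from.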

Design/methodology/approach

In this paper, two novel approaches are implemented in the two phases of CSA (with and without AP); together they constitute the PGCSA algorithm, which is used to solve engineering benchmark problems.

Findings

The proposed algorithm is applied to two types of problems: eight unconstrained benchmark equations and six engineering problems.

Originality/value

The PGCSA algorithm is proposed with superior competence to solve engineering problems. The proposed algorithm is statistically substantiated by using a paired t-test.

Details

Engineering Computations, vol. 38 no. 2
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 26 December 2023

Yan Li, Ming K. Lim, Weiqing Xiong, Xingjun Huang, Yuhe Shi and Songyi Wang

Recently, electric vehicles have been widely used in the cold chain logistics sector to reduce the effects of excessive energy consumption and to support environmental…

Abstract

Purpose

Recently, electric vehicles have been widely used in the cold chain logistics sector to reduce the effects of excessive energy consumption and to support environmental friendliness. Considering the limited battery capacity of electric vehicles, it is vital to optimize battery charging during the distribution process.

Design/methodology/approach

This study establishes an electric vehicle routing model for cold chain logistics with charging stations, which integrates multiple distribution centers to achieve sustainable logistics. The proposed optimization model aims to minimize the overall cost of cold chain logistics, which incorporates fixed, damage, refrigeration, penalty, queuing, energy and carbon emission costs. In addition, the model takes into account factors such as time-varying speed, time-varying electricity price, energy consumption and queuing at the charging station. For optimization, a hybrid crow search algorithm (CSA) is developed that combines opposition-based learning (OBL) and tabu search (TS). To evaluate the model, algorithm and model experiments are conducted based on a real case in Chongqing, China.
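
The OBL component hybridized into the CSA here can be sketched generically: for each candidate solution in a bounded search space, the opposite point is also evaluated and the better of the two is kept. The objective, bounds and population size below are illustrative assumptions, not the routing model's.

```python
import random

random.seed(7)

def objective(x):
    # illustrative cost: squared distance from an optimum at 2.0 per dimension
    return sum((v - 2.0) ** 2 for v in x)

lo, hi = -10.0, 10.0
dim, pop_size = 4, 10

# random initial population
pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]

def obl_step(pop):
    """Opposition-based learning: for each candidate x, evaluate its
    opposite point lo + hi - x and keep whichever scores better."""
    improved = []
    for x in pop:
        opp = [lo + hi - v for v in x]
        improved.append(min(x, opp, key=objective))
    return improved

pop2 = obl_step(pop)
```

By construction no candidate gets worse, which is why OBL is a cheap way to sharpen the initial population and periodic generations of a metaheuristic like CSA.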

Findings

The results of the algorithm experiments illustrate that the hybrid CSA is effective in terms of both solution quality and speed compared to the genetic algorithm (GA) and particle swarm optimization (PSO). In addition, the model experiments highlight the benefits of joint distribution over individual distribution in reducing costs and carbon emissions.

Research limitations/implications

The optimization model of cold chain logistics routes based on electric vehicles provides a reference for managers to develop distribution plans, which contributes to the development of sustainable logistics.

Originality/value

In prior studies, many scholars have conducted related research on the subject of cold chain logistics vehicle routing problems and electric vehicle routing problems separately, but few have merged the above two subjects. In response, this study innovatively designs an electric vehicle routing model for cold chain logistics with consideration of time-varying speeds, time-varying electricity prices, energy consumption and queues at charging stations to make it consistent with the real world.

Details

Industrial Management & Data Systems, vol. 124 no. 3
Type: Research Article
ISSN: 0263-5577

Keywords

Article
Publication date: 30 July 2020

Nama Ajay Nagendra and Lakshman Pappula

The problem of radiating sources in the presence of smooth convex objects is of huge significance in the modeling of antennas on structures. Conformal antenna…

Abstract

Purpose

The problem of radiating sources in the presence of smooth convex objects is of huge significance in the modeling of antennas on structures. Conformal antenna arrays are necessary when an antenna has to conform to a given platform. A fundamental problem in the design is that the possible surfaces for a conformal antenna are infinite in number. Furthermore, if there is no symmetry, each element sees a different environment, which complicates the mathematics. As a consequence, the element factor cannot be factored out from the array factor.

Design/methodology/approach

This paper intends to enhance the design of the conformal antenna. The main objective is to maximize the antenna gain and directivity relative to the first side-lobe and the other side-lobes in the two-way radiation pattern, so the adopted model is designed as a multiobjective problem. To attain this multiobjective function, both the element spacing and the radius of each antenna element are optimized based on the probability mechanism of the Crow Search Algorithm (CSA); the proposed method is thus named Probability Improved CSA (PI-CSA). Here, the First Null Beam Width (FNBW) and Side-Lobe Level (SLL) are minimized. Moreover, the adopted scheme is compared with conventional algorithms.
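
The quantities being optimized can be made concrete with the array factor of a simple uniform linear array, a far simpler geometry than a conformal array, used here purely as an illustrative sketch; the element count and half-wavelength spacing are assumptions. PI-CSA would adjust element spacing and radii to reshape exactly this kind of pattern.

```python
import cmath
import math

n_elem = 8          # number of isotropic elements
d = 0.5             # element spacing in wavelengths
k = 2.0 * math.pi   # wavenumber (with spacing measured in wavelengths)

def array_factor(theta_deg):
    """Magnitude of the array factor of a uniform linear array at angle theta."""
    psi = k * d * math.cos(math.radians(theta_deg))
    return abs(sum(cmath.exp(1j * n * psi) for n in range(n_elem)))

# sample the pattern over 0..180 degrees and locate the main beam
angles = [t * 0.5 for t in range(361)]
pattern = [array_factor(t) for t in angles]
peak = max(pattern)                         # main-beam level (= n_elem at broadside)
peak_angle = angles[pattern.index(peak)]
```

For uniform excitation the first side-lobe sits roughly 13 dB below the main beam; lowering that SLL and narrowing the FNBW is precisely what the spacing/radius optimization targets.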

Findings

From the analysis, the gain of the presented PI-CSA scheme in terms of best performance was 52.68% superior to ABC, 25.11% superior to PSO, 13.38% superior to FF and 3.21% superior to CS algorithms. Moreover, the mean performance of the adopted model was 62.94% better than ABC, 13.06% better than PSO, 24.34% better than FF and 10.05% better than CS algorithms. By maximizing the gain and directivity, FNBW and SLL were decreased. Thus, the optimal design of the conformal antenna has been attained by the proposed PI-CSA algorithm in an effective way.

Originality/value

This paper presents a technique for enhancing the design of the conformal antenna using the PI-CSA algorithm. This is the first work that utilizes PI-CSA-based optimization for improving the design of the conformal antenna.

Details

Data Technologies and Applications, vol. 55 no. 3
Type: Research Article
ISSN: 2514-9288

Keywords

Article
Publication date: 6 May 2020

Rajeshwari S. Patil and Nagashettappa Biradar

Breast cancer is one of the most common malignant tumors in women; it badly affects women's physical and psychological health and can even endanger life. Nowadays…

Abstract

Purpose

Breast cancer is one of the most common malignant tumors in women; it badly affects women's physical and psychological health and can even endanger life. Nowadays, mammography is considered a fundamental criterion for medical practitioners to recognize breast cancer. However, due to the intricate formation of mammogram images, it is reasonably hard for practitioners to spot breast cancer features.

Design/methodology/approach

Breast cancer is one of the most common malignant tumors in women; it badly affects women's physical and psychological health and can even endanger life. Nowadays, mammography is considered a fundamental criterion for medical practitioners to recognize breast cancer. However, due to the intricate formation of mammogram images, it is reasonably hard for practitioners to spot breast cancer features.

Findings

The performance analysis was done for both segmentation and classification. From the analysis, the accuracy of the proposed IAP-CSA-based fuzzy classifier was 41.9% better than the fuzzy classifier, 2.80% better than the PSO-, WOA- and CSA-based fuzzy classifiers, and 2.32% better than the GWO-based fuzzy classifier. Additionally, the accuracy of the developed IAP-CSA-fuzzy was 9.54% better than NN, 35.8% better than SVM and 41.9% better than the existing fuzzy classifier. Hence, it is concluded that the implemented breast cancer detection model was efficient in distinguishing normal, benign and malignant images.

Originality/value

This paper adopts the latest Improved Awareness Probability-based Crow Search Algorithm (IAP-CSA)-based Region growing and fuzzy classifier for enhancing the breast cancer detection of mammogram images, and this is the first work that utilizes this method.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 2
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 8 December 2020

Jyoti Ranjan Nayak, Binod Shaw and Neeraj Kumar Dewangan

In this work, generation control of an isolated small hydro plant (SHP) is demonstrated by applying optimal controllers in speed governor and hydraulic turbine system. A…

Abstract

Purpose

In this work, generation control of an isolated small hydro plant (SHP) is demonstrated by applying optimal controllers in the speed governor and hydraulic turbine system. A comparative analysis of the application of fuzzy PI (FPI) and PID controllers is presented for generation control (both power and terminal voltage) of an SHP. The controllers are designed optimally by using the crow search algorithm (CSA) and a novel hybrid differential evolution crow search algorithm (DECSA). The purpose of this paper is to settle the voltage and real power to improve the quality of the power.

Design/methodology/approach

In this work, the controllers (PID and FPI) are implemented in the speed governor and excitation system of the SHP to regulate power and terminal voltage. Differential evolution and CSA are hybridized to enhance the performance of the controllers and thereby improve the power and terminal voltage of the SHP.

Findings

The proposed DECSA algorithm is applied to solve ten benchmark functions, and the effectiveness of the DECSA algorithm over CSA and DE is demonstrated in terms of best value, mean and standard deviation. Controllers (PID and FPI) optimized by the CSA and DECSA algorithms are used to design an SHP capable of contributing power and voltage of better quality. A comparative analysis substantiating the competence of the DECSA algorithm and the FPI controller is presented in terms of statistical measures of the power and voltage of the SHP. Robustness analysis is performed by varying all system parameters to prove the effectiveness of the proposed controller.

Originality/value

The proposed algorithm and the FPI controller are applied individually to improve the quality of the power of the SHP. The DE, CSA and DECSA algorithms are implemented to solve benchmark equations. The solutions of all benchmark equations contributed by the DECSA algorithm converge rapidly and have minimum statistical measures compared to the DE and CSA algorithms. The DECSA algorithm and the FPI controller are proposed with superior competence to enhance the generator performance in terms of undershoot, overshoot and settling time of power and terminal voltage. The DECSA-based FPI controller contributes a noticeable improvement in performance over the other approaches.

Article
Publication date: 30 July 2020

V. Srilakshmi, K. Anuradha and C. Shoba Bindu

This paper aims to model a technique that categorizes the texts from huge documents. The progression in internet technologies has increased the number of accessible documents, and…

Abstract

Purpose

This paper aims to model a technique that categorizes the texts from huge documents. The progression in internet technologies has increased the number of accessible documents, and thus the documents available online have become countless. These text documents comprise research articles, journal papers, newspapers, technical reports and blogs. Such large documents are useful and valuable for real-time applications and are used in several retrieval methods. Text classification plays a vital role in information retrieval technologies and is considered an active field for processing massive applications. The aim of text classification is to categorize large-sized documents into different categories on the basis of their contents. Numerous text-related tasks exist, such as profiling users, sentiment analysis and identification of spam; these are considered supervised learning problems and are addressed with a text classifier.

Design/methodology/approach

At first, the input documents are pre-processed using stop word removal and stemming so that the input is made suitable for feature extraction. In the feature extraction process, features are extracted using the vector space model (VSM), and feature selection is then performed to select the most relevant features for text categorization. Once the features are selected, text categorization is performed using a deep belief network (DBN). The DBN is trained using the proposed grasshopper crow optimization algorithm (GCOA), which is the integration of the grasshopper optimization algorithm (GOA) and the crow search algorithm (CSA). Moreover, a hybrid weight bounding model is devised using the proposed GCOA and range degree. Thus, the proposed GCOA + DBN is used for classifying the text documents.
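
The front end of this pipeline (stop-word removal, stemming, VSM features) can be sketched in a few lines. The tiny stop-word list and the crude suffix-stripping stemmer below are stand-ins for a full list and a Porter-style stemmer, and the toy corpus is an illustrative assumption.

```python
# toy corpus and a crude pipeline: stop-word removal, suffix-stripping
# "stemming", then vector space model (term-frequency) features
STOP_WORDS = {"the", "is", "a", "of", "and", "in", "for"}

def preprocess(doc):
    tokens = [t.lower() for t in doc.split()]
    tokens = [t for t in tokens if t not in STOP_WORDS]
    stems = []
    for t in tokens:
        # naive stemmer: strip one common suffix (stand-in for Porter stemming)
        for suf in ("ing", "ies", "s"):
            if t.endswith(suf) and len(t) > len(suf) + 2:
                t = t[: -len(suf)]
                break
        stems.append(t)
    return stems

def vsm_vector(doc, vocab):
    # term-frequency vector over a fixed vocabulary
    stems = preprocess(doc)
    return [stems.count(term) for term in vocab]

docs = ["the reports of spam messages", "classifying the spam reports"]
vocab = sorted({t for d in docs for t in preprocess(d)})
vectors = [vsm_vector(d, vocab) for d in docs]
```

In the paper these vectors would then pass through feature selection and into the DBN, whose weights are tuned by GCOA rather than by plain gradient training.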

Findings

The performance of the proposed technique is evaluated using accuracy, precision and recall, and is compared with existing techniques such as naive Bayes, k-nearest neighbors, support vector machine, deep convolutional neural network (DCNN) and Stochastic Gradient-CAViaR + DCNN. The proposed GCOA + DBN shows improved performance, with values of 0.959, 0.959 and 0.96 for precision, recall and accuracy, respectively.

Originality/value

This paper proposes a technique that categorizes the texts from massive-sized documents. The findings show that the proposed GCOA-based DBN effectively classifies the text documents.

Details

International Journal of Web Information Systems, vol. 16 no. 3
Type: Research Article
ISSN: 1744-0084

Keywords

Open Access
Article
Publication date: 29 December 2017

Prasenjit Dey, Aniruddha Bhattacharya and Priyanath Das

This paper reports a new technique for achieving optimized design for power system stabilizers. In any large scale interconnected systems, disturbances of small magnitudes are…

Abstract

This paper reports a new technique for achieving an optimized design for power system stabilizers. In any large-scale interconnected system, disturbances of small magnitudes are very common, and low-frequency oscillations pose a major problem. Hence, small signal stability analysis is very important for analyzing system stability and performance. Power system stabilizers (PSS) are used in these large interconnected systems for damping out low-frequency oscillations by providing auxiliary control signals to the generator excitation input. In this paper, the collective decision optimization (CDO) algorithm, a meta-heuristic approach based on the decision-making process of human beings, has been applied to the optimal design of PSS. PSS parameters are tuned for an objective function involving the eigenvalues and damping ratios of the lightly damped electromechanical modes over a wide range of operating conditions. Optimal locations for PSS placement have also been derived. A comparative study of the results obtained using CDO against those of the grey wolf optimizer (GWO), differential evolution (DE), whale optimization algorithm (WOA) and crow search algorithm (CSA) established the robustness of the algorithm in designing PSS under different operating conditions.
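
The damping-ratio component of such an objective function follows directly from the mode eigenvalues; a minimal sketch, with illustrative mode values rather than any system studied in the paper:

```python
import math

def damping_ratio(eig):
    """Damping ratio of an oscillatory mode sigma +/- j*omega."""
    sigma, omega = eig.real, eig.imag
    return -sigma / math.sqrt(sigma * sigma + omega * omega)

# a lightly damped electromechanical mode (illustrative numbers)
mode = complex(-0.2, 6.0)          # sigma = -0.2 Np/s, omega = 6 rad/s
zeta = damping_ratio(mode)
freq_hz = mode.imag / (2.0 * math.pi)   # modal frequency, ~0.95 Hz here
```

PSS tuning of this kind typically tries to push the damping ratio of every electromechanical mode above a chosen threshold across all operating conditions, which is what the eigenvalue-based objective encodes.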

Details

Applied Computing and Informatics, vol. 16 no. 1/2
Type: Research Article
ISSN: 2634-1964

Keywords

Article
Publication date: 23 August 2022

Kamlesh Kumar Pandey and Diwakar Shukla

The K-means (KM) clustering algorithm is extremely sensitive to the selection of initial centroids, since the initial centroids determine computational effectiveness…

Abstract

Purpose

The K-means (KM) clustering algorithm is extremely sensitive to the selection of initial centroids, since the initial centroids determine computational effectiveness, efficiency and local optima issues. Numerous initialization strategies exist to overcome these problems through random or deterministic selection of initial centroids. The random initialization strategy suffers from local optimization issues and the worst clustering performance, while the deterministic initialization strategy incurs a high computational cost. Big data clustering aims to reduce computation costs and improve cluster efficiency. The objective of this study is to achieve better initial centroids for big data clustering on business management data without using random or deterministic initialization, thereby avoiding local optima and improving clustering efficiency and effectiveness in terms of cluster quality, computation cost, data comparisons and iterations on a single machine.

Design/methodology/approach

This study presents the Normal Distribution Probability Density (NDPD) based KM (NDPDKM) algorithm for big data clustering on a single machine to solve business management-related clustering issues. The NDPDKM algorithm resolves the KM initialization problem by using the probability density of each data point. It first identifies the most probable density data points by using the mean and standard deviation of the dataset through the normal probability density. Thereafter, it determines the K initial centroids by using sorting and linear systematic sampling heuristics.
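
A minimal one-dimensional sketch of the described initialization, with illustrative data and K; the function name ndpd_init and the exact ranking and sampling details are assumptions based on the abstract, not the paper's implementation:

```python
import math
import statistics

def ndpd_init(data, k):
    """Pick K initial centroids: rank points by normal probability density
    (computed from the sample mean and standard deviation), then take every
    (n // k)-th ranked point via linear systematic sampling."""
    mu = statistics.mean(data)
    sd = statistics.stdev(data)
    dens = [math.exp(-((x - mu) ** 2) / (2.0 * sd * sd)) / (sd * math.sqrt(2.0 * math.pi))
            for x in data]
    ranked = [x for _, x in sorted(zip(dens, data), reverse=True)]
    step = len(ranked) // k
    return [ranked[i * step] for i in range(k)]

data = [1.0, 3.0, 4.0, 5.0, 6.0, 7.0, 9.0, 15.0]
centroids = ndpd_init(data, 3)
```

Because the pass over the data is a single density computation plus a sort, this avoids both the repeated random restarts of random initialization and the pairwise-distance cost of deterministic schemes.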

Findings

The performance of the proposed algorithm is compared with the KM, KM++, Var-Part, Murat-KM, Mean-KM and Sort-KM algorithms through the Davies-Bouldin score, silhouette coefficient, SD validity, S_Dbw validity, number of iterations and CPU time validation indices on eight real business datasets. The experimental evaluation demonstrates that the NDPDKM algorithm reduces iterations, local optima and computing costs, and improves cluster performance, effectiveness and efficiency with stable convergence compared to the other algorithms. The NDPDKM algorithm reduces the average computing time by up to 34.83%, 90.28%, 71.83%, 92.67%, 69.53% and 76.03%, and reduces the average iterations by up to 40.32%, 44.06%, 32.02%, 62.78%, 19.07% and 36.74%, with reference to the KM, KM++, Var-Part, Murat-KM, Mean-KM and Sort-KM algorithms, respectively.

Originality/value

The KM algorithm is the most widely used partitional clustering approach in data mining techniques that extract hidden knowledge, patterns and trends for decision-making strategies in business data. Business analytics is one of the applications of big data clustering where KM clustering is useful for the various subcategories of business analytics such as customer segmentation analysis, employee salary and performance analysis, document searching, delivery optimization, discount and offer analysis, chaplain management, manufacturing analysis, productivity analysis, specialized employee and investor searching and other decision-making strategies in business.

Article
Publication date: 7 December 2021

Kalyan Sagar Kadali, Moorthy Veeraswamy, Marimuthu Ponnusamy and Viswanatha Rao Jawalkar

The purpose of this paper is to focus on the cost-effective and environmentally sustainable operation of thermal power systems to allocate optimum active power generation…

Abstract

Purpose

The purpose of this paper is to focus on the cost-effective and environmentally sustainable operation of thermal power systems by allocating optimum active power generation for a feasible solution under diverse load patterns using the grey wolf optimization (GWO) algorithm.

Design/methodology/approach

The economic dispatch problem is formulated as a bi-objective optimization subject to several operational and practical constraints. A normalized price penalty factor approach is used to convert the two objectives into a single one. The GWO algorithm is adopted as the optimization tool, in which the exploration and exploitation of the search space are carried out through encircling, hunting and attacking.
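
The price penalty factor idea can be sketched for a single unit: a factor h scales the emission curve into cost units so that fuel cost C(P) and emission E(P) collapse into one objective F(P) = C(P) + h·E(P). The quadratic coefficients and the max/max variant of the factor below are illustrative assumptions, not the paper's linear interpolated model.

```python
# illustrative quadratic fuel-cost and emission curves for one thermal unit
def fuel_cost(p):
    return 0.01 * p * p + 2.0 * p + 10.0      # $/h

def emission(p):
    return 0.005 * p * p + 1.0 * p + 5.0      # kg/h

p_max = 100.0
# max/max price penalty factor: ratio of fuel cost to emission at full output
h = fuel_cost(p_max) / emission(p_max)

def combined_objective(p):
    # single objective blending cost and emission via the penalty factor
    return fuel_cost(p) + h * emission(p)
```

An optimizer such as GWO then minimizes combined_objective over the unit outputs subject to the load-balance and generation-limit constraints; the paper's contribution is a linearly interpolated h rather than this fixed max/max ratio.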

Findings

A linear interpolated price penalty model is developed based on simple analytical geometry equations that blend the two non-commensurable objectives. The GWO algorithm reports a new optimum thermal generation schedule for a feasible solution under different operational strategies, better than earlier reports in terms of solution quality.

Practical implications

The proposed method seems to be a promising optimization tool for the utilities, thereby modifying their operating strategies to generate electricity at minimum energy cost and pollution levels. Thus, a strategic balance is derived among economic development, energy cost and environmental sustainability.

Originality/value

A single optimization tool is used for both quadratic and non-convex cost characteristics of the thermal model. The GWO algorithm has discovered the best cost-effective and environmentally sustainable generation dispatch.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 41 no. 1
Type: Research Article
ISSN: 0332-1649

Keywords

1 – 10 of 18