Search results

1 – 10 of over 1000
Article
Publication date: 10 August 2021

Deepa S.N.

Abstract

Purpose

Models developed in previous studies were limited by entrapment in local minima; to overcome this, this study developed a new intelligent ubiquitous computational model that learns with the gradient descent learning rule and operates with auto-encoders and decoders to attain better energy optimization. The ubiquitous machine learning computational model performs training better than regular supervised or unsupervised learning computational models with deep learning techniques, resulting in better learning and optimization for the considered problem domain of cloud-based internet of things (IoT). This study aims to improve network quality and the data accuracy rate during network transmission using the developed ubiquitous deep learning computational model.

Design/methodology/approach

In this research study, a novel intelligent ubiquitous machine learning computational model is designed and modelled to maintain the optimal energy level of cloud IoT in sensor network domains. The model learns with the gradient descent learning rule and operates with auto-encoders and decoders to attain better energy optimization. A new unified deterministic sine-cosine algorithm is developed for optimizing the weight parameters of the ubiquitous machine learning model.
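
The abstract does not give the update equations of the unified deterministic sine-cosine algorithm. As a point of reference, the sketch below shows the standard sine-cosine position update that such a weight optimizer builds on, with the model's weights flattened into a candidate vector; the function names, bounds and toy fitness are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sca_optimize(fitness, dim, n_agents=20, max_iter=100, a=2.0, seed=0):
    """Minimal standard sine-cosine algorithm (SCA) sketch for weight optimization.

    fitness: callable mapping a flattened weight vector (dim,) to a scalar loss.
    Returns the best weight vector found and its loss.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(n_agents, dim))   # candidate weight vectors
    losses = np.array([fitness(x) for x in X])
    best, best_loss = X[losses.argmin()].copy(), losses.min()

    for t in range(max_iter):
        r1 = a - t * (a / max_iter)                    # linearly decreasing amplitude
        for i in range(n_agents):
            r2 = rng.uniform(0, 2 * np.pi, dim)
            r3 = rng.uniform(0, 2, dim)
            r4 = rng.uniform(0, 1, dim)
            sine_step = r1 * np.sin(r2) * np.abs(r3 * best - X[i])
            cosine_step = r1 * np.cos(r2) * np.abs(r3 * best - X[i])
            X[i] = X[i] + np.where(r4 < 0.5, sine_step, cosine_step)
            loss = fitness(X[i])
            if loss < best_loss:                       # track the destination point
                best, best_loss = X[i].copy(), loss
    return best, best_loss

# Toy objective; an auto-encoder reconstruction error would take its place.
best_w, err = sca_optimize(lambda w: np.sum(w ** 2), dim=10)
```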

Findings

The newly developed ubiquitous model is used for finding the network energy and optimizing it in the considered sensor network model. During progressive simulation, residual energy, network overhead, end-to-end delay, network lifetime and the number of live nodes are evaluated. The results show that the ubiquitous deep learning model yields better metrics owing to its appropriate cluster selection and minimized route selection mechanism.

Research limitations/implications

In this research study, a novel ubiquitous computing model, built from a new optimization algorithm called the unified deterministic sine-cosine algorithm and a deep learning technique, was applied to maintain the optimal energy level of cloud IoT in sensor networks. The deterministic Lévy flight concept is applied in developing the new optimization technique, which determines the parametric weight values of the deep learning model. The ubiquitous deep learning model is designed with auto-encoders and decoders, and the weights of their corresponding layers are tuned to optimal values with the optimization algorithm. The modelled ubiquitous deep learning approach was applied to determine the network energy consumption rate and thereby optimize the energy level, increasing the lifetime of the considered sensor network model. For all the considered network metrics, the ubiquitous computing model proved more effective and versatile than approaches from earlier research studies.

Practical implications

The developed ubiquitous computing model with deep learning techniques can be applied to any type of cloud-assisted IoT, including wireless sensor networks, ad hoc networks, radio access technology networks and heterogeneous networks. Practically, the developed model facilitates computing the optimal energy level of the cloud IoT for any considered network model, which helps maintain a better network lifetime and reduce the end-to-end delay of the networks.

Social implications

The social implication of the proposed research study is that it reduces energy consumption and increases the network lifetime of cloud IoT-based sensor network models. This helps people at large to have a better transmission rate with minimized energy consumption, and it also reduces transmission delay.

Originality/value

In this research study, the network optimization of cloud-assisted IoT sensor network models is modelled and analysed using machine learning models as a kind of ubiquitous computing system. Ubiquitous computing models with machine learning techniques yield intelligent systems and enable users to make better and faster decisions. In the communication domain, predictive and optimization models created with machine learning accelerate new ways of determining solutions to problems. Considering the importance of learning techniques, the ubiquitous computing model is designed based on a deep learning strategy, and the learning mechanism adapts itself to attain a better network optimization model.

Details

International Journal of Pervasive Computing and Communications, vol. 18 no. 1
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 14 September 2022

Mythili Boopathi, Meena Chavan, Jeneetha Jebanazer J. and Sanjay Nakharu Prasad Kumar

Abstract

Purpose

The Denial of Service (DoS) attack is a category of intrusion that devours various services and resources of an organization through the dispersal of unusable traffic, so that legitimate users cannot benefit from its services. In general, DoS attackers conceal themselves by compromising several victim machines and mimicking authentic network traffic, which makes the attack more complex to detect. These issues and the demerits of existing DoS attack recognition schemes in the cloud motivate the invention of a new attack recognition method.

Design/methodology/approach

This paper proposes a DoS attack detection scheme, termed the sine cosine anti coronavirus optimization (SCACVO)-driven deep maxout network (DMN). A recorded log file is considered in this method for the attack detection process. Significant features are chosen based on Pearson correlation in the feature selection phase, an oversampling scheme is applied in the data augmentation phase, and the attack detection is then done using the DMN. The DMN is trained by the SCACVO algorithm, which is formed by combining the sine cosine optimization and anti-corona virus optimization techniques.
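
The abstract names the deep maxout network (DMN) without defining its layer arithmetic. As background, a maxout unit outputs the element-wise maximum of k affine pieces; a minimal forward pass, with layer sizes chosen only for illustration:

```python
import numpy as np

def maxout_layer(x, W, b):
    """Maxout layer: W has shape (k, out_dim, in_dim), b has shape (k, out_dim).
    Each output unit is the maximum over k affine functions of the input."""
    z = np.einsum('koi,i->ko', W, x) + b   # (k, out_dim) affine pieces
    return z.max(axis=0)                   # element-wise max over the k pieces

rng = np.random.default_rng(0)
x = rng.normal(size=16)                          # e.g. selected log-file features
W1, b1 = rng.normal(size=(3, 8, 16)), np.zeros((3, 8))
h = maxout_layer(x, W1, b1)                      # hidden maxout activation
```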

Findings

The SCACVO-based DMN achieves a maximum testing accuracy, true positive rate and true negative rate of 0.9412, 0.9541 and 0.9178, respectively.

Originality/value

The proposed model detects DoS attacks accurately and improves the effectiveness of detection.

Details

International Journal of Pervasive Computing and Communications, vol. 19 no. 5
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 27 April 2020

Saroj Kumar, Dayal R. Parhi, Manoj Kumar Muni and Krishna Kant Pandey

Abstract

Purpose

This paper aims to incorporate a hybridized advanced sine-cosine algorithm (ASCA) and advanced ant colony optimization (AACO) technique for optimal path search with control over multiple mobile robots in static and dynamic unknown environments.

Design/methodology/approach

The controller for ASCA and AACO is designed and implemented through MATLAB simulation coupled with real-time experiments in various environments. Whenever the sensors detect obstacles, ASCA is applied to find their global best positions within the sensing range, following which AACO is activated to choose the next stand-point. In this way, the robot travels to the specified target point.
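
The abstract does not detail how AACO picks the next stand-point. The sketch below shows the classical ant colony transition rule (pheromone weight times heuristic desirability) that an advanced ACO variant would refine; the pheromone values and heuristic are illustrative assumptions.

```python
import numpy as np

def choose_standpoint(tau, eta, alpha=1.0, beta=2.0, rng=None):
    """Classical ACO transition rule: P_j is proportional to tau_j**alpha * eta_j**beta.

    tau: pheromone level of each candidate stand-point.
    eta: heuristic desirability, e.g. 1 / distance-to-target.
    """
    rng = rng or np.random.default_rng()
    weights = (tau ** alpha) * (eta ** beta)
    probs = weights / weights.sum()
    return rng.choice(len(tau), p=probs)

tau = np.array([0.5, 1.2, 0.8])          # pheromone on three candidate points
eta = 1.0 / np.array([2.0, 1.0, 3.0])    # closer candidates are more desirable
nxt = choose_standpoint(tau, eta)        # index of the next stand-point
```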

Findings

Navigational analysis is carried out by implementing the technique developed here using single and multiple mobile robots. Its efficiency is authenticated through comparison between simulation and experimental results. Further, the proposed technique is found to be more efficient than existing methodologies, achieving a significant improvement of about 10.21 per cent in path length along with better control over the robots.

Originality/value

The systematic presentation of the proposed technique will attract a wide readership among researchers who apply AI techniques.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 7 April 2022

Tian-Jian Luo

Abstract

Purpose

Steady-state visual evoked potential (SSVEP) has been widely used in electroencephalogram (EEG)-based non-invasive brain-computer interfaces (BCIs) due to its high accuracy and information transfer rate (ITR). To recognize the SSVEP components in collected EEG trials, many recognition algorithms based on template matching of training trials have been proposed and applied in recent years. This paper presents a comparative survey of SSVEP recognition algorithms based on template matching of training trials.

Design/methodology/approach

To survey and compare the recently proposed recognition algorithms for SSVEP, this paper regards the conventional canonical correlation analysis (CCA) as the baseline, and selects individual template CCA (ITCCA), multi-set CCA (MsetCCA), task-related component analysis (TRCA), latent common source extraction (LCSE) and the sum of squared correlations (SSCOR) for comparison.
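
For readers unfamiliar with the CCA baseline: each candidate stimulus frequency is paired with sine/cosine reference harmonics, and the frequency whose references correlate most strongly with the multi-channel EEG trial is selected. A minimal sketch under assumed sampling parameters:

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y
    (rows are samples), computed via QR decompositions and an SVD."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def cca_ssvep_classify(eeg, freqs, fs, n_harmonics=2):
    """eeg: (n_samples, n_channels) trial; returns index of detected frequency."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in freqs:
        ref = np.column_stack([fn(2 * np.pi * f * h * t)
                               for h in range(1, n_harmonics + 1)
                               for fn in (np.sin, np.cos)])
        scores.append(max_canonical_corr(eeg, ref))
    return int(np.argmax(scores))

# Illustrative 8-channel, 1 s trial at 250 Hz with an embedded 10 Hz component.
rng = np.random.default_rng(0)
fs, freqs = 250, [8.0, 10.0, 12.0]
t = np.arange(fs) / fs
eeg = rng.normal(scale=0.5, size=(fs, 8)) + np.sin(2 * np.pi * 10.0 * t)[:, None]
detected = cca_ssvep_classify(eeg, freqs, fs)   # -> 1, the 10 Hz target
```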

Findings

For the horizontal comparison of the six surveyed recognition algorithms, this paper adopts the "Tsinghua JFPM-SSVEP" data set and compares the average recognition performance on it. The compared measures include recognition accuracy, ITR, correlation coefficient and R-square values under different durations of SSVEP stimulus presentation. Based on the optimal duration of stimulus presentation, the author also compares the efficiency of the six algorithms. To measure the influence of different parameters, the number of training trials, the number of electrodes and the usage of filter-bank preprocessing are compared in an ablation study.

Originality/value

Based on the comparative results, this paper analyses the advantages and disadvantages of the six compared SSVEP recognition algorithms with respect to application scenarios, real-time requirements and computational complexity. Finally, the author gives the algorithm selection range for the recognition of real-world online SSVEP-BCI.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 16 no. 1
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 16 November 2020

Narinder Singh, S.B. Singh, Essam H. Houssein and Muhammad Ahmad

Abstract

Purpose

The purpose of this study is to investigate the effects and possible future prediction of COVID-19. The dataset considered for this purpose comprises the following attributes: age, gender, systolic blood pressure, HDL cholesterol, diabetes and its medication, whether the patient suffers from heart disease, takes anti-cough agents or is sensitive to cough-related issues, any chronic kidney disease, physical contact with foreign returnees and social distancing, all used for predicting the risk of COVID-19.

Design/methodology/approach

This work implements a meta-heuristic algorithm on the aforementioned dataset for analysing the risk of being infected with COVID-19. The authors propose a simple yet effective risk prediction model based on a nature-inspired hybrid of particle swarm optimization (PSO) and the sine cosine algorithm (SCA), termed HPSOSCA.
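
The abstract does not spell out how PSO and SCA are combined in HPSOSCA. One common hybridization pattern, sketched below purely as an assumption, lets the social pull toward the global best oscillate with SCA's decaying sine/cosine term inside the PSO velocity update:

```python
import numpy as np

def hpsosca_step(X, V, pbest, gbest, t, max_iter,
                 w=0.7, c1=1.5, c2=1.5, a=2.0, rng=None):
    """One illustrative hybrid PSO/SCA update: the social pull toward gbest
    oscillates with a decaying sine/cosine term instead of a plain random weight."""
    rng = rng or np.random.default_rng()
    n, dim = X.shape
    r1 = a - t * (a / max_iter)                      # SCA decay schedule
    r2 = rng.uniform(0, 2 * np.pi, (n, dim))
    r4 = rng.uniform(size=(n, dim))
    osc = np.where(r4 < 0.5, np.sin(r2), np.cos(r2)) # sine or cosine branch
    V = (w * V
         + c1 * rng.uniform(size=(n, dim)) * (pbest - X)
         + c2 * r1 * osc * (gbest - X))
    return X + V, V
```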

Findings

The simulated results on the different cases discussed in the dataset section reveal which categories of individuals may have the disease and at what level. The experimental results reveal that the proposed model can predict the percentage of risk with an overall accuracy of 88.63%, sensitivity of 87.23%, specificity of 89.02%, precision of 69.49%, recall of 87.23%, F-measure of 77.36% and G-mean of 88.12%, with 41 true positive, 146 true negative, 18 false positive and 6 false negative cases. The proposed model provides a quite stable prediction of COVID-19 risk across different categories of individuals.
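
The reported metrics are mutually consistent with the stated confusion counts (TP = 41, TN = 146, FP = 18, FN = 6), as the following check reproduces:

```python
import math

tp, tn, fp, fn = 41, 146, 18, 6
accuracy    = (tp + tn) / (tp + tn + fp + fn)        # 0.8863
sensitivity = tp / (tp + fn)                         # 0.8723 (= recall)
specificity = tn / (tn + fp)                         # 0.8902
precision   = tp / (tp + fp)                         # 0.6949
f_measure   = 2 * precision * sensitivity / (precision + sensitivity)  # 0.7736
g_mean      = math.sqrt(sensitivity * specificity)   # 0.8812
```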

Originality/value

This work develops, for the first time, a novel HPSOSCA model based on PSO and SCA for the prediction of COVID-19 disease. The convergence rate of the proposed model is higher than those reported in the literature, and it produces better accuracy in a computationally efficient fashion. The obtained outputs are: accuracy (88.63%), sensitivity (87.23%), specificity (89.02%), precision (69.49%), recall (87.23%), F-measure (77.36%), G-mean (88.12%), Tp (41), Tn (146), Fp (18) and Fn (6). The recommendations to reduce disease outbreaks are as follows: to control this epidemic in various regions, it is important to appropriately manage patients suspected of having the disease, immediately identify and isolate the source of infection, cut off the transmission route and prevent viral transmission from potential patients or virus carriers.

Details

World Journal of Engineering, vol. 19 no. 1
Type: Research Article
ISSN: 1708-5284

Article
Publication date: 30 June 2020

Sajad Ahmad Rather and P. Shanthi Bala

Abstract

Purpose

In this paper, a newly proposed hybrid algorithm, namely the constriction coefficient-based particle swarm optimization and gravitational search algorithm (CPSOGSA), is employed for training multi-layer perceptrons (MLPs) to overcome their problems of sensitivity to initialization, premature convergence and stagnation in local optima.

Design/methodology/approach

In this study, the exploration of the search space is carried out by the gravitational search algorithm (GSA), and the optimization of candidate solutions, i.e. exploitation, is performed by particle swarm optimization (PSO). For training the MLP, CPSOGSA uses a sigmoid fitness function to find the proper combination of connection weights and neural biases that minimizes the error. In addition, a matrix encoding strategy provides a one-to-one correspondence between the weights and biases of the MLP and the agents of CPSOGSA.
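
A minimal sketch of such an encoding and the resulting fitness call, with layer sizes assumed for illustration (the CPSOGSA update itself is omitted):

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def decode(agent, n_in, n_hid, n_out):
    """Unpack a flat agent vector into MLP weight matrices and bias vectors,
    giving a one-to-one correspondence between agent entries and parameters."""
    i = 0
    W1 = agent[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = agent[i:i + n_hid];                             i += n_hid
    W2 = agent[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = agent[i:i + n_out]
    return W1, b1, W2, b2

def fitness(agent, X, y, n_in, n_hid, n_out):
    """MSE of the sigmoid MLP encoded by `agent` -- the quantity the optimizer minimizes."""
    W1, b1, W2, b2 = decode(agent, n_in, n_hid, n_out)
    out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    return np.mean((out - y) ** 2)

# Illustrative XOR-style check of the encoding length and fitness call.
n_in, n_hid, n_out = 2, 4, 1
dim = n_in * n_hid + n_hid + n_hid * n_out + n_out
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
err = fitness(np.random.default_rng(0).normal(size=dim), X, y, n_in, n_hid, n_out)
```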

Findings

The experimental findings show that CPSOGSA is a better MLP trainer than other stochastic algorithms because it provides superior results in terms of resolving stagnation in local optima and convergence speed. It gives the best results for the breast cancer, heart, sine function and sigmoid function datasets compared with the other participating algorithms, and very competitive results for the remaining datasets.

Originality/value

CPSOGSA performed effectively in overcoming the stagnation-in-local-optima problem and increasing the overall convergence speed of the MLP. CPSOGSA is a hybrid optimization algorithm with powerful global exploration capability and high local exploitation power. In the research literature, little work is available where CPSO and GSA have been utilized for training MLPs. The only related research paper is by Mirjalili et al. (2012), who used standard PSO and GSA for training simple feedforward neural networks (FNNs); however, that work employed only three datasets and used only the MSE performance metric for evaluating the efficiency of the algorithms. In this paper, eight different standard datasets and five performance metrics are utilized for investigating the efficiency of CPSOGSA in training MLPs. In addition, a non-parametric pair-wise statistical test, namely the Wilcoxon rank-sum test, is carried out at a 5% significance level to statistically validate the simulation results. Besides, eight state-of-the-art meta-heuristic algorithms are employed for comparative analysis of the experimental results to further strengthen the authenticity of the experimental setup.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 2
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 12 January 2023

Zhixiang Chen

Abstract

Purpose

The purpose of this paper is to propose a novel improved teaching-learning-based optimization (TLBO) algorithm to enhance its convergence ability and solution accuracy, making it more suitable for solving large-scale optimization issues.

Design/methodology/approach

Utilizing multiple cooperation mechanisms in the teaching and learning processes, an improved TLBO named CTLBO (collectivism teaching-learning-based optimization) is developed. This algorithm introduces a new preparation phase before the teaching and learning phases and applies multiple teacher-learner cooperation strategies in the teaching and learning processes. Applying a modularization idea, six variants of CTLBO are constructed based on the configuration structure of its operators. For identifying the best configuration, 30 general benchmark functions are tested. Then, three experiments using CEC2020 (2020 IEEE Congress on Evolutionary Computation) constrained optimization problems are conducted to compare CTLBO with other algorithms. Finally, a large-scale industrial engineering problem is taken as the application case.
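
For context on what CTLBO extends: in the original TLBO, the teacher phase moves each learner toward the best solution relative to the class mean, keeping a move only if it improves fitness. A minimal sketch of that baseline phase (CTLBO's preparation phase and cooperation strategies are not reproduced here):

```python
import numpy as np

def teacher_phase(pop, fitness, rng=None):
    """Standard TLBO teacher phase: move learners toward the teacher (best
    solution) relative to the class mean, with greedy acceptance."""
    rng = rng or np.random.default_rng()
    scores = np.array([fitness(x) for x in pop])
    teacher = pop[scores.argmin()]
    mean = pop.mean(axis=0)
    for i in range(len(pop)):
        tf = rng.integers(1, 3)               # teaching factor, 1 or 2
        r = rng.uniform(size=pop.shape[1])
        cand = pop[i] + r * (teacher - tf * mean)
        if fitness(cand) < scores[i]:         # keep only improving moves
            pop[i] = cand
    return pop

pop = np.random.default_rng(0).uniform(-5, 5, size=(20, 10))
pop = teacher_phase(pop, lambda x: np.sum(x ** 2))   # one phase on a sphere function
```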

Findings

The experiment with the 30 general unconstrained benchmark functions indicates that CTLBO-c is the best configuration among all variants of CTLBO. The three experiments using CEC2020-constrained optimization problems show that CTLBO is a powerful algorithm for solving large-scale constrained optimization problems. The industrial engineering application case shows that CTLBO and its variant CTLBO-c can effectively solve the large-scale real problem, while the accuracies of TLBO and the other meta-heuristic algorithms are far lower, revealing that CTLBO and its variants can far outperform the other algorithms. CTLBO is an excellent algorithm for solving large-scale complex optimization issues.

Originality/value

The innovation of this paper lies in its improvement strategies: the original TLBO with its two-phase teaching-learning mechanism is changed into the new algorithm CTLBO with a three-phase multiple-cooperation teaching-learning mechanism, a self-learning mechanism in teaching and a group teaching mechanism. CTLBO has important application value in solving large-scale optimization problems.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 16 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 5 May 2015

Dariusz Zieliński, Piotr Lipnicki and Wojciech Jarzyna

Abstract

Purpose

In dispersed generation systems, power electronic converters couple energy sources to the power grid. The requirements of Transmission System Operators become difficult to meet as the share of distributed energy sources in the total energy balance increases. These requirements are intended to allow increased penetration of distributed generation sources without compromising power system stability and reliability. Therefore, in addition to controlling active and reactive power and stabilizing voltage and frequency, modern power electronic converters should support the power grid in dynamic states and in the presence of nonlinear distortions. The paper aims to discuss these issues.

Design/methodology/approach

The research methodology used in this paper is based on three steps: mathematical modelling and simulation studies; experiments on a laboratory test stand; and analysis and evaluation of the obtained results, leading to the conclusions.

Findings

This work evaluates six different synchronization algorithms with respect to their ability to deal with notching. The authors identified two algorithms, the αβ-Filter and the Voltage Controlled Oscillator, which successfully cope with notch distortions. The other algorithms, used previously for voltage dips, operate improperly when the grid voltage contains notching disturbances.

Research limitations/implications

The paper presents results of the synchronization algorithms in the presence of nonlinear notching interference. These studies were performed using an original hardware-software power grid emulator, a real-time dSPACE platform and a power electronic converter. This methodology allowed exact and accurate evaluation of the synchronization methods in the presence of complex nonlinear phenomena in the power grid and the power electronic converter. The results demonstrated that the best algorithms were the αβ-Filter and the Voltage Controlled Oscillator.

Originality/value

In this paper, different synchronization algorithms have been tested. These included the classical Phase Locked Loop with Synchronous Reference Frame (SRF-PLL) as well as modified algorithms developed by the authors, which displayed high robustness to notching interference. During the tests, the previously developed original test rig was used, allowing software-hardware emulation of grid phenomena.
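
For readers outside the field, the classical SRF-PLL drives the q-axis component of the Park-transformed grid voltage to zero with a PI controller, thereby locking onto the grid phase. A minimal discrete-time sketch with illustrative gains (the authors' notch-robust variants are not reproduced):

```python
import numpy as np

def srf_pll(va, vb, vc, fs, kp=100.0, ki=5000.0, f_nom=50.0):
    """Classical SRF-PLL: abc -> dq Park transform, PI control of v_q -> 0.
    Returns the estimated phase at each sample."""
    theta, integ = 0.0, 0.0
    theta_log = np.empty(len(va))
    for n in range(len(va)):
        # q-axis voltage from the Park transform at the current phase estimate
        vq = (2 / 3) * (-np.sin(theta) * va[n]
                        - np.sin(theta - 2 * np.pi / 3) * vb[n]
                        - np.sin(theta + 2 * np.pi / 3) * vc[n])
        integ += ki * vq / fs                       # PI controller on v_q
        omega = 2 * np.pi * f_nom + kp * vq + integ
        theta = (theta + omega / fs) % (2 * np.pi)  # integrate frequency to phase
        theta_log[n] = theta
    return theta_log

fs, f = 10_000, 50.0
t = np.arange(fs) / fs                              # 1 s of balanced three-phase grid
va = np.cos(2 * np.pi * f * t)
vb = np.cos(2 * np.pi * f * t - 2 * np.pi / 3)
vc = np.cos(2 * np.pi * f * t + 2 * np.pi / 3)
theta = srf_pll(va, vb, vc, fs)                     # converges to the grid angle
```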

Details

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, vol. 34 no. 3
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 7 July 2022

Subhradip Mukherjee, R. Kumar and Siddhanta Borah

Abstract

Purpose

This paper aims to incorporate an intelligent particle swarm optimization (IPSO) controller to realize an optimum path in unknown environments. The fitness function of IPSO is designed with intelligent design parameters, solving the path navigation problem of an autonomous wheeled robot moving towards a target point while avoiding obstacles in any unknown environment.

Design/methodology/approach

The controller starts from randomly oriented positions together with all other position information and a fitness function. By evaluating each position's best values, the local best values are obtained, and finally the global best value is updated as the current value after comparing the local best values.
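
The local-best/global-best bookkeeping described above is the standard PSO loop; a minimal sketch follows, with the paper's intelligent design parameters replaced by a toy objective:

```python
import numpy as np

def pso(fitness, dim, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard PSO: track each particle's local best, compare them to keep
    the global best, and update velocities toward both."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1, 1, (n, dim))               # randomly oriented positions
    V = np.zeros((n, dim))
    pbest, pbest_val = X.copy(), np.array([fitness(x) for x in X])
    g = pbest_val.argmin()                          # global best from local bests
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n, dim))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = X + V
        vals = np.array([fitness(x) for x in X])
        improved = vals < pbest_val                 # update local bests first,
        pbest[improved], pbest_val[improved] = X[improved], vals[improved]
        if pbest_val.min() < gbest_val:             # then the global best
            g = pbest_val.argmin()
            gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    return gbest, gbest_val

best, val = pso(lambda x: np.sum(x ** 2), dim=5)    # toy objective for illustration
```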

Findings

The path navigation of the proposed controller has been compared with the particle swarm optimization algorithm, the BAT algorithm, the flower pollination algorithm, the invasive weed algorithm and the genetic algorithm in multiple challenging environments. In simulation, the proposed controller shows a percentage deviation in path length of about 14.54% and a percentage deviation in travel time of about 4%. IPSO is applied to optimize the design parameters for path navigation of the wheeled robot in the different simulation environments.

Originality/value

A hardware model with a 32-bit ARM board interfaced with a global positioning system (GPS) module, an ultrasonic module and a ZigBee wireless communication module is designed to implement IPSO. In real time, the IPSO controller shows a percentage deviation in path length of about 9%.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 1
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 30 November 2021

Ning Yu, Lin Nan and Tao Ku

Abstract

Purpose

How to make accurate action decisions based on visual information is one of the important research directions for industrial robots. The purpose of this paper is to design a highly optimized hand-eye coordination model that improves the robot's on-site decision-making ability.

Design/methodology/approach

The combination of an inverse reinforcement learning (IRL) algorithm and a generative adversarial network effectively reduces the dependence on expert samples, and the robot can obtain decision-making performance whose degree of optimization is not lower than, and may even be higher than, that of the expert samples.
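
In the GAIL-style formulation, this pairing usually means a discriminator is trained to separate expert state-action pairs from the policy's, and its output serves as the learned reward. A minimal logistic-discriminator sketch under that assumption (the policy update itself is omitted):

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

class Discriminator:
    """Logistic discriminator D(s, a): trained so D -> 1 on expert pairs and
    D -> 0 on policy pairs; -log(1 - D) is then used as the learned reward."""
    def __init__(self, dim, lr=0.1, seed=0):
        self.w = np.random.default_rng(seed).normal(scale=0.1, size=dim + 1)
        self.lr = lr

    def prob(self, sa):
        sa = np.atleast_2d(sa)
        return sigmoid(sa @ self.w[:-1] + self.w[-1])

    def train_step(self, expert_sa, policy_sa):
        # Gradient ascent on log D(expert) + log(1 - D(policy))
        for sa, label in ((expert_sa, 1.0), (policy_sa, 0.0)):
            p = self.prob(sa)
            self.w[:-1] += self.lr * ((label - p) @ sa) / len(sa)
            self.w[-1] += self.lr * np.mean(label - p)

    def reward(self, sa):
        return -np.log(1.0 - self.prob(sa) + 1e-8)  # policy reward signal

rng = np.random.default_rng(1)
expert = rng.normal(loc=1.0, size=(64, 6))   # illustrative state-action pairs
policy = rng.normal(loc=0.0, size=(64, 6))
D = Discriminator(dim=6)
for _ in range(200):
    D.train_step(expert, policy)
r = D.reward(policy)                         # reward used to update the policy
```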

Findings

The performance of the proposed model is verified in a simulation environment and in a real scene. By monitoring the reward distribution of the reward function and the trajectory of the robot, the proposed model is compared with other existing methods. The experimental results show that the proposed model has better decision-making performance when less expert data is available.

Originality/value

A robot hand-eye cooperation model based on improved IRL is proposed and verified. Empirical investigations in real experiments reveal that, overall, the proposed approach tends to improve real efficiency by more than 10% compared with alternative hand-eye cooperation methods.

Details

Industrial Robot: the international journal of robotics research and application, vol. 49 no. 5
Type: Research Article
ISSN: 0143-991X
