Search results

1 – 10 of 127
Open Access
Article
Publication date: 25 March 2021

Fareed Sheriff

This paper presents the Edge Load Management and Optimization through Pseudoflow Prediction (ELMOPP) algorithm, which aims to solve problems detailed in previous algorithms;…

Abstract

Purpose

This paper presents the Edge Load Management and Optimization through Pseudoflow Prediction (ELMOPP) algorithm, which aims to solve problems identified in previous algorithms. Using machine learning with nested long short-term memory (NLSTM) modules and graph theory, the algorithm predicts near-future traffic flow from past flow and traffic patterns, and uses those predictions to inform its real-time decisions, maximizing present traffic flow while decreasing future traffic congestion.
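As a rough illustration of the prediction component described above, the sketch below uses a stacked LSTM in PyTorch as a stand-in for the paper's nested LSTM (NLSTM) modules; the network size, the four-approach intersection and the green-time allocation rule are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

class FlowPredictor(nn.Module):
    """Stand-in for an NLSTM module: predicts next-interval traffic flow
    on each approach road from a window of past flows."""
    def __init__(self, n_roads=4, hidden=64, layers=2):
        super().__init__()
        # A stacked LSTM is used here as a simplified proxy for nested LSTM cells.
        self.lstm = nn.LSTM(input_size=n_roads, hidden_size=hidden,
                            num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, n_roads)

    def forward(self, past_flows):            # (batch, window, n_roads)
        out, _ = self.lstm(past_flows)
        return self.head(out[:, -1])          # predicted flow per approach road

# Hypothetical usage: allocate green time in proportion to predicted demand.
model = FlowPredictor()
window = torch.rand(1, 12, 4)                 # 12 past intervals, 4 approach roads
pred = model(window)
green_share = torch.softmax(pred, dim=-1)     # fraction of the cycle per approach
```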

Design/methodology/approach

ELMOPP was tested against the ITLC and OAF traffic management algorithms using a single-intersection simulation modeled after the one presented in the ITLC paper.

Findings

The collected data supports the conclusion that ELMOPP statistically significantly outperforms both algorithms in throughput rate, a measure of how many vehicles are able to exit inroads every second.

Originality/value

Furthermore, while ITLC requires GPS transponders and GPS, and OAF requires speed sensors and radio, ELMOPP uses only traffic light camera footage, which is almost always readily available, in contrast to GPS and speed sensors.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Article
Publication date: 8 June 2023

Jianhua Zhang, Liangchen Li, Fredrick Ahenkora Boamah, Shuwei Zhang and Longfei He

This study aims to deal with the case adaptation problem associated with continuous data by providing a non-zero base solution for knowledge users in solving a given situation.

Abstract

Purpose

This study aims to deal with the case adaptation problem associated with continuous data by providing a non-zero base solution for knowledge users in solving a given situation.

Design/methodology/approach

Firstly, the neighbourhood transformation of the initial case base and the view similarity between the problem and the existing cases are examined, and the cases whose view similarity reaches or exceeds a predefined threshold are used as the adaptation cases. Secondly, using the decision rule set of the decision space, a deterministic decision model of the distance between the problem and the set of lower-approximation objects under each choice class of the adaptation set is applied to extract the decision rule set of the case condition space. Finally, the solution elements of the problem are reconstructed using the rule set and the values of the problem's conditional elements.
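A minimal, hypothetical sketch of the retrieval step described above: cases whose view similarity to the target problem meets a threshold are kept as the adaptation set. The similarity measure, threshold and data below are placeholders, not the paper's NRS-based construction.

```python
import numpy as np

def view_similarity(problem, case_conditions):
    """Placeholder similarity on the condition attributes (not the paper's measure)."""
    return 1.0 / (1.0 + np.linalg.norm(problem - case_conditions))

def adaptation_set(problem, case_base, threshold=0.6):
    """Keep every case whose similarity to the problem is at or above the threshold."""
    return [c for c in case_base if view_similarity(problem, c["conditions"]) >= threshold]

# Hypothetical usage: the retained cases' solutions would then be combined
# (in the paper, via decision rules extracted over the adaptation set).
case_base = [{"conditions": np.array([0.2, 0.8]), "solution": 1.3},
             {"conditions": np.array([0.9, 0.1]), "solution": 0.4}]
problem = np.array([0.25, 0.75])
candidates = adaptation_set(problem, case_base)
```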

Findings

The findings suggest that the classic knowledge-matching approach provides the user with the most similar knowledge/cases but with relatively low satisfaction. It also reveals that non-zero adaptation based on human–computer interaction suffers from strong subjectivity and low adaptation efficiency.

Research limitations/implications

In this study, the multi-case inductive adaptation of the problem to be solved is carried out by analyzing and extracting the law of the effect of the centralized conditions on the decision-making of the adaptation. The adaptation process is more rigorous, with less subjective influence, better reliability and higher application value. The approach described in this research can directly change the original data set, which is more beneficial for enhancing problem-solving accuracy while broadening the application area of the adaptation mechanism.

Practical implications

The examination of the calculation cases confirms the innovation of this study in comparison to the traditional method of matching cases with tacit knowledge extrapolation.

Social implications

The algorithm models established in this study develop theoretical directions for a multi-case induction adaptation study of tacit knowledge.

Originality/value

This study designs a multi-case induction adaptation scheme for exogenous cases with implicit knowledge by combining neighbourhood rough sets (NRS) and case-based reasoning (CBR). A game-theoretic combinatorial assignment method is applied to calculate the case view and the view similarity, with threshold screening.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 22 March 2024

Douglas Ramalho Queiroz Pacheco

This study aims to propose and numerically assess different ways of discretising a very weak formulation of the Poisson problem.

Abstract

Purpose

This study aims to propose and numerically assess different ways of discretising a very weak formulation of the Poisson problem.

Design/methodology/approach

We use integration by parts twice to shift smoothness requirements to the test functions, thereby allowing low-regularity data and solutions.
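For orientation, a standard very weak (ultra-weak) formulation of the Poisson problem with Dirichlet data is sketched below; this is the textbook form obtained by integrating by parts twice, not a reproduction of the paper's specific discretisations.

```latex
% Standard very weak form of  -\Delta u = f  in \Omega,  u = g  on \partial\Omega.
% Integrating by parts twice against  v \in H^2(\Omega) \cap H^1_0(\Omega)
% moves all derivatives onto the test function, so u is sought only in L^2(\Omega):
\[
  \text{find } u \in L^2(\Omega):\qquad
  -\int_{\Omega} u \,\Delta v \,\mathrm{d}x
  \;=\;
  \int_{\Omega} f\, v \,\mathrm{d}x
  \;-\;
  \int_{\partial\Omega} g\, \partial_n v \,\mathrm{d}s
  \qquad \forall\, v \in H^2(\Omega)\cap H^1_0(\Omega).
\]
```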

Findings

Various conforming discretisations are presented and tested, with numerical results indicating good accuracy and stability in different types of problems.

Originality/value

This is one of the first articles to propose and test concrete discretisations for very weak variational formulations in primal form. The numerical results, which include a problem based on real MRI data, indicate the potential of very weak finite element methods for tackling problems with low regularity.

Details

Engineering Computations, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 1 May 2024

Mamun Mishra and Bibhuti Bhusan Pati

Islanding detection has become a serious concern due to the extensive integration of renewable energy sources. The non-detection zone (NDZ) and system-specific applicability…

Abstract

Purpose

Islanding detection has become a serious concern due to the extensive integration of renewable energy sources. The non-detection zone (NDZ) and system-specific applicability, the two major issues with islanding detection methods, are addressed here. The purpose of this paper is to devise an islanding detection method with zero NDZ that is applicable to all types of renewable energy sources, using the sequence components of the point of common coupling voltage.

Design/methodology/approach

A parameter based on the sequence components of the point of common coupling voltage is derived to devise an islanding detection method, and this parameter is analysed using the wavelet transform. Various operating conditions, both islanding and non-islanding, are considered for several test systems to evaluate the performance of the proposed method. All simulations are carried out in the Simulink/MATLAB environment.
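A minimal sketch of the kind of processing described above: three-phase PCC voltage phasors are converted to symmetrical (sequence) components with the Fortescue transform, a sequence-based parameter is formed, and its discrete wavelet decomposition is inspected. The specific parameter, the placeholder data and the wavelet settings are illustrative assumptions, not the paper's.

```python
import numpy as np
import pywt

a = np.exp(2j * np.pi / 3)                      # Fortescue operator
A = np.array([[1, 1, 1],
              [1, a, a**2],
              [1, a**2, a]]) / 3                # phases -> (zero, positive, negative) sequence

def sequence_components(v_abc):
    """Symmetrical components of three-phase PCC voltage phasors."""
    return A @ v_abc

# Hypothetical monitoring loop: track a sequence-based parameter over time
# and inspect its wavelet detail coefficients for an islanding signature.
v_abc_series = np.random.randn(3, 1024) + 1j * np.random.randn(3, 1024)  # placeholder phasors
v0, v1, v2 = sequence_components(v_abc_series)
param = np.abs(v2) / np.abs(v1)                 # e.g. negative/positive sequence ratio (illustrative)
coeffs = pywt.wavedec(param, 'db4', level=4)    # detail coefficients highlight abrupt changes
```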

Findings

The results showed that the proposed method has zero NDZ for both inverter- and synchronous generator-based renewable energy sources. In addition, the proposed method performs satisfactorily with respect to the IEEE 1547 standard requirements.

Originality/value

Performance of the proposed method has been tested in several test systems and is found to be better than some conventional methods.

Details

World Journal of Engineering, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1708-5284

Article
Publication date: 18 April 2024

Stefano Costa, Eugenio Costamagna and Paolo Di Barba

A novel method for modelling permanent magnets is investigated based on numerical approximations with rational functions. This study aims to introduce the AAA algorithm and other…

Abstract

Purpose

A novel method for modelling permanent magnets is investigated based on numerical approximations with rational functions. This study aims to introduce the AAA algorithm and other recently developed, cutting-edge mathematical tools, which provide outstandingly fast and accurate numerical computation of potentials and vector fields.

Design/methodology/approach

First, the AAA algorithm is briefly introduced along with its main variants and other advanced mathematical tools involved in the modelling. Then, the analysis of a circular Halbach array with one pole pair is carried out by means of the AAA-least squares method, focusing on the vector potential and flux density in the bore and validating the results against classic finite element software. Finally, the investigation is completed by a finite difference analysis.
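For readers unfamiliar with AAA, the following is a bare-bones sketch of its core greedy/least-squares iteration (support-point selection, Loewner matrix, barycentric weights from the smallest singular vector). It omits the pole cleanup and the least-squares extensions used in the paper, and is written in Python rather than the authors' MATLAB.

```python
import numpy as np

def aaa(F, Z, tol=1e-13, mmax=50):
    """Barebones AAA rational approximation of samples F taken at points Z."""
    Z = np.asarray(Z, dtype=complex)
    F = np.asarray(F, dtype=complex)
    J = np.arange(len(Z))                          # indices not yet used as support points
    zj, fj = [], []
    wj = np.array([], dtype=complex)
    R = np.full(F.shape, F.mean())                 # current approximant evaluated on Z
    for _ in range(mmax):
        j = J[np.argmax(np.abs(F[J] - R[J]))]      # greedy pick: worst residual
        zj.append(Z[j]); fj.append(F[j])
        J = J[J != j]
        C = 1.0 / (Z[J, None] - np.array(zj)[None, :])    # Cauchy matrix
        A = (F[J, None] - np.array(fj)[None, :]) * C      # Loewner matrix
        _, _, Vh = np.linalg.svd(A)
        wj = Vh[-1].conj()                         # weights = smallest right singular vector
        R = F.copy()
        R[J] = (C @ (wj * np.array(fj))) / (C @ wj)       # barycentric evaluation off support
        if np.max(np.abs(F - R)) <= tol * np.max(np.abs(F)):
            break
    return np.array(zj), np.array(fj), wj

# Example: rational approximation of exp(z) sampled on the unit circle
Z = np.exp(2j * np.pi * np.linspace(0, 1, 200, endpoint=False))
zj, fj, wj = aaa(np.exp(Z), Z)
```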

Findings

AAA methods for field analysis prove to be strikingly fast and accurate. Results are in excellent agreement with those provided by the finite element model, and the very good agreement with those from finite differences suggests future improvements. They are also easy to program: the MATLAB code is less than 200 lines long. This indicates that they can provide an effective tool for rapid analysis.

Research limitations/implications

AAA methods in magnetostatics are novel, but their extension to analogous physical problems seems straightforward. Because the method is meshless, it is unlikely that local non-linearities can be considered. An aspect of particular interest, left for future research, is the capability of handling inhomogeneous domains, i.e. solving general interface problems.

Originality/value

The authors use cutting-edge mathematical tools for the modelling of complex physical objects in magnetostatics.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0332-1649

Open Access
Article
Publication date: 27 June 2022

Saida Mancer, Abdelhakim Necir and Souad Benchaira

The purpose of this paper is to propose a semiparametric estimator for the tail index of Pareto-type random truncated data that improves the existing ones in terms of mean square…

Abstract

Purpose

The purpose of this paper is to propose a semiparametric estimator for the tail index of Pareto-type random truncated data that improves the existing ones in terms of mean square error. Moreover, we establish its consistency and asymptotic normality.

Design/methodology/approach

To construct a root mean squared error (RMSE)-reduced estimator of the tail index, the authors use the semiparametric estimator of the underlying distribution function given by Wang (1989). This allows the corresponding tail process to be defined and a weak approximation to it to be provided. By means of a functional representation of the given estimator of the tail index and by using this weak approximation, the authors establish the asymptotic normality of the RMSE-reduced estimator.
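For orientation only, the classical Hill estimator for untruncated Pareto-type data is recalled below; the paper's estimator adapts this idea to randomly right-truncated samples via Wang's (1989) semiparametric distribution estimator, so the formula is background, not the proposed estimator.

```latex
% Background: the classical Hill estimator of the tail index gamma,
% built from the k largest order statistics X_{n-k:n} <= ... <= X_{n:n}.
\[
  \widehat{\gamma}_{k}
  \;=\;
  \frac{1}{k}\sum_{i=1}^{k}\log\frac{X_{n-i+1:n}}{X_{n-k:n}} .
\]
```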

Findings

On the basis of a semiparametric estimator of the underlying distribution function, the authors propose a new estimation method for the tail index of Pareto-type distributions under random right truncation. Compared with existing estimators, it behaves well in terms of both bias and RMSE. A useful weak approximation of the corresponding tail empirical process allows both the consistency and the asymptotic normality of the proposed estimator to be established.

Originality/value

A new semiparametric tail (empirical) process for truncated data is introduced, a new estimator for the tail index of Pareto-type truncated data is proposed and the asymptotic normality of this estimator is established.

Details

Arab Journal of Mathematical Sciences, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1319-5166

Article
Publication date: 22 March 2024

Yahao Wang, Zhen Li, Yanghong Li and Erbao Dong

In response to the challenge of reduced efficiency or failure of robot motion planning algorithms when faced with end-effector constraints, this study aims to propose a new…

Abstract

Purpose

In response to the challenge of reduced efficiency or failure of robot motion planning algorithms when faced with end-effector constraints, this study aims to propose a new constraint method to improve the performance of sampling-based planners.

Design/methodology/approach

In this work, a constraint method (TC method) based on the idea of cross-sampling is proposed. This method uses the tangent space in the workspace to approximate the constraint manifold and projects the entire sampling process into the workspace for constraint correction. It avoids the extensive computational work of repeatedly evaluating the inverse Jacobian matrix in the configuration space, while retaining the sampling properties of the sampling-based algorithm.
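The generic projection step below illustrates the idea of correcting a workspace sample toward the constraint surface through a first-order (tangent-space) linearisation; the constraint, its Jacobian and the Gauss-Newton update are placeholders for illustration and do not reproduce the authors' TC method.

```python
import numpy as np

def project_to_constraint(x, constraint, jacobian, tol=1e-6, max_iter=20):
    """Gauss-Newton projection of a workspace sample x onto {x : constraint(x) = 0}."""
    x = np.asarray(x, dtype=float).copy()
    for _ in range(max_iter):
        c = np.atleast_1d(constraint(x))
        if np.linalg.norm(c) < tol:
            break
        J = np.atleast_2d(jacobian(x))
        # minimum-norm correction derived from the constraint's linearisation
        x -= J.T @ np.linalg.solve(J @ J.T, c)
    return x

# Hypothetical end-effector constraint: keep the tool at height z = 0.5
constraint = lambda x: np.array([x[2] - 0.5])
jacobian = lambda x: np.array([[0.0, 0.0, 1.0]])
sample = project_to_constraint(np.array([0.3, -0.2, 0.9]), constraint, jacobian)
```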

Findings

Simulation results demonstrate that the performance of the planner using the TC method under end-effector constraints surpasses that of other methods. Physical experiments further confirm that the TC-Planner does not cause excessive constraint errors that might lead to task failure. Moreover, field tests on robots underscore the effectiveness and excellent performance of the TC-Planner, thereby advancing the autonomy of robots in power-line connection tasks.

Originality/value

This paper proposes a new constraint method combined with the rapidly-exploring random trees algorithm to generate collision-free trajectories that satisfy the constraints for a high-dimensional robotic system under end-effector constraints. In a series of simulation and experimental tests, the planner using the TC method performs efficiently under end-effector constraints. Tests on a power distribution live-line operation robot also show that the TC method can greatly aid the robot in completing operation tasks with end-effector constraints. This helps robots perform tasks with complex end-effector constraints, such as grinding and welding, more efficiently and autonomously.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 6 May 2024

Ahmed Taibi, Said Touati, Lyes Aomar and Nabil Ikhlef

Bearings play a critical role in the reliable operation of induction machines, and their failure can lead to significant operational challenges and downtime. Detecting and…

Abstract

Purpose

Bearings play a critical role in the reliable operation of induction machines, and their failure can lead to significant operational challenges and downtime. Detecting and diagnosing these defects is imperative to ensure the longevity of induction machines and to prevent costly downtime. The purpose of this paper is to develop a novel approach for the diagnosis of bearing faults in induction machines.

Design/methodology/approach

To identify the different fault states of the bearing accurately and efficiently, the original bearing vibration signal is first decomposed into several intrinsic mode functions (IMFs) using variational mode decomposition (VMD). The IMFs that contain more noise information are selected using the Pearson correlation coefficient, and the discrete wavelet transform (DWT) is then used to filter these noisy IMFs. Second, the composite multiscale weighted permutation entropy (CMWPE) of each component is calculated to form the feature vector. Finally, the feature vector is reduced using the locality-sensitive discriminant analysis algorithm and fed into the support vector machine model for training and classification.
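A hypothetical sketch of the middle of this pipeline: IMFs that correlate weakly with the raw signal are treated as noisier and denoised with a DWT soft threshold before features are built and classified with an SVM. The VMD decomposition itself is assumed to have been done upstream (e.g. with a package such as vmdpy), and the correlation threshold, wavelet settings and simple statistical features are illustrative stand-ins for the paper's CMWPE-based features.

```python
import numpy as np
import pywt
from scipy.stats import pearsonr
from sklearn.svm import SVC

def wavelet_denoise(x, wavelet="db4", level=3):
    """DWT soft-threshold denoising of one IMF."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise level from finest details
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def denoise_imfs(signal, imfs, corr_threshold=0.5):
    """Denoise the IMFs that correlate weakly with the raw signal (treated as noisier)."""
    out = []
    for imf in imfs:
        r = abs(pearsonr(signal, imf)[0])
        out.append(wavelet_denoise(imf) if r < corr_threshold else imf)
    return out

def simple_features(imfs):
    """Crude per-IMF statistics standing in for the paper's CMWPE features."""
    return np.concatenate([[np.std(m), np.ptp(m)] for m in imfs])

# clf = SVC(kernel="rbf").fit(X_train, y_train)   # training on labelled fault data
```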

Findings

The obtained results showed the ability of the VMD-DWT algorithm to reduce the noise of raw vibration signals and demonstrated that the proposed method can effectively extract different fault features from vibration signals.

Originality/value

This study suggests a new VMD-DWT method to reduce the noise of the bearing vibration signal. The proposed approach for bearing fault diagnosis of induction machines, based on VMD-DWT and CMWPE, is highly effective, and its effectiveness has been verified using experimental data.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 25 December 2023

Umair Khan, William Pao, Karl Ezra Salgado Pilario, Nabihah Sallih and Muhammad Rehan Khan

Identifying the flow regime is a prerequisite for accurately modeling two-phase flow. This paper aims to introduce a comprehensive data-driven workflow for flow regime…

Abstract

Purpose

Identifying the flow regime is a prerequisite for accurately modeling two-phase flow. This paper aims to introduce a comprehensive data-driven workflow for flow regime identification.

Design/methodology/approach

A numerical two-phase flow model was validated against experimental data and was used to generate dynamic pressure signals for three different flow regimes. First, four distinct methods were used for feature extraction: discrete wavelet transform (DWT), empirical mode decomposition, power spectral density and the time series analysis method. Kernel Fisher discriminant analysis (KFDA) was used to simultaneously perform dimensionality reduction and machine learning (ML) classification for each set of features. Finally, the Shapley additive explanations (SHAP) method was applied to make the workflow explainable.
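A hypothetical sketch of the DWT feature-extraction step (minimum and maximum of the coefficients at each decomposition level, the features the SHAP analysis later highlights). Since KFDA is not available in scikit-learn, an RBF kernel approximation followed by linear discriminant analysis is used here as a rough stand-in, not the paper's implementation.

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.kernel_approximation import Nystroem
from sklearn.pipeline import make_pipeline

def dwt_features(pressure_signal, wavelet="db4", levels=4):
    """Min/max of the DWT coefficients at every decomposition level."""
    coeffs = pywt.wavedec(pressure_signal, wavelet, level=levels)
    return np.array([stat(c) for c in coeffs for stat in (np.min, np.max)])

# Kernelised discriminant stand-in for KFDA: explicit kernel features + LDA.
clf = make_pipeline(Nystroem(kernel="rbf", n_components=100), LinearDiscriminantAnalysis())
# clf.fit(np.vstack([dwt_features(s) for s in training_signals]), flow_regime_labels)
```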

Findings

The results highlighted that the DWT + KFDA method exhibited the highest testing and training accuracy, at 95.2% and 88.8%, respectively. The results also include a virtual flow regime map to facilitate the visualization of features in two dimensions. Finally, SHAP analysis showed that the minimum and maximum values extracted at the fourth and second signal decomposition levels of the DWT are the best flow-distinguishing features.

Practical implications

This workflow can be applied to opaque pipes fitted with pressure sensors to achieve flow assurance and automatic monitoring of two-phase flow occurring in many process industries.

Originality/value

This paper presents a novel flow regime identification method by fusing dynamic pressure measurements with ML techniques. The authors’ novel DWT + KFDA method demonstrates superior performance for flow regime identification with explainability.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0961-5539

Open Access
Article
Publication date: 26 December 2023

Mehmet Kursat Oksuz and Sule Itir Satoglu

Disaster management and humanitarian logistics (HT) play crucial roles in large-scale events such as earthquakes, floods, hurricanes and tsunamis. Well-organized disaster response…

Abstract

Purpose

Disaster management and humanitarian logistics (HT) play crucial roles in large-scale events such as earthquakes, floods, hurricanes and tsunamis. Well-organized disaster response is essential for effectively managing medical centres, staff allocation and casualty distribution during emergencies. To address this issue, this study aims to introduce a multi-objective stochastic programming model to enhance disaster preparedness and response, focusing on the critical first 72 h after earthquakes. The purpose is to optimize the allocation of resources, temporary medical centres and medical staff to save lives effectively.

Design/methodology/approach

This study uses stochastic programming-based dynamic modelling and a discrete-time Markov chain to address uncertainty. The model considers potential road and hospital damage as well as distance limits, and introduces an α-reliability level for untreated casualties. It divides the initial 72 h into four periods to capture earthquake dynamics.
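The toy example below shows how a discrete-time Markov chain can propagate a casualty health-state distribution across the model's four planning periods within the first 72 h; the states and transition probabilities are invented for illustration and are not taken from the study.

```python
import numpy as np

# Hypothetical health states: treated, stable, critical, deceased
P = np.array([[0.95, 0.05, 0.00, 0.00],     # row i: P(next state | current state i)
              [0.30, 0.55, 0.10, 0.05],
              [0.10, 0.20, 0.50, 0.20],
              [0.00, 0.00, 0.00, 1.00]])

dist = np.array([0.0, 0.6, 0.4, 0.0])       # illustrative casualty mix just after the earthquake
for period in range(4):                      # four planning periods within the first 72 h
    dist = dist @ P
    print(f"period {period + 1}: {np.round(dist, 3)}")
```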

Findings

Using a real case study in Istanbul’s Kartal district, the model’s effectiveness is demonstrated for earthquake scenarios. Key insights include optimal medical centre locations, required capacities, necessary medical staff and casualty allocation strategies, all vital for efficient disaster response within the critical first 72 h.

Originality/value

This study innovates by integrating stochastic programming and dynamic modelling to tackle post-disaster medical response. The use of a Markov Chain for uncertain health conditions and focus on the immediate aftermath of earthquakes offer practical value. By optimizing resource allocation amid uncertainties, the study contributes significantly to disaster management and HT research.

Details

Journal of Humanitarian Logistics and Supply Chain Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2042-6747
