Search results

1 – 10 of over 1000
Article
Publication date: 1 June 2000

A. Savini



Abstract

Gives introductory remarks about chapter 1 of this group of 31 papers from the ISEF 1999 Proceedings, devoted to methodologies for field analysis in the electromagnetic community. Observes that the theory behind computer package implementation contributes to clarification. Discusses the areas covered by some of the papers, such as artificial intelligence using fuzzy logic. Includes applications such as permanent magnets and looks at eddy current problems. States that the finite element method is currently the most popular method used for field computation. Closes by pointing out the amalgam of topics.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 19 no. 2
Type: Research Article
ISSN: 0332-1649


Article
Publication date: 1 August 2016

Hongbin Mu, Wei Wei, Alexandrina Untaroiu and Qingdong Yan


Abstract

Purpose

Traditional three-dimensional numerical methods require a long time for transient computational fluid dynamics simulation of the oil-filling process in hydrodynamic braking. The purpose of this paper is to investigate reconstruction and prediction methods for the pressure field on blade surfaces, in order to develop an accurate and rapid numerical method for solving the transient internal flow in a hydrodynamic retarder.

Design/methodology/approach

Dynamic braking performance during the oil-filling process was simulated and validated against experimental results. With the proper orthogonal decomposition (POD) method, the dominant modes of the transient pressure distribution on the blades were extracted from the computed flow data using their spatio-temporal structural features, and the pressure field on the blades was reconstructed. Based on an approximate model (AM) combined with POD, the transient pressure field on the blades was then predicted. The causes of reconstruction and prediction error were analyzed separately.
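The POD reconstruction step described above can be illustrated with a generic snapshot-matrix SVD. This is a minimal sketch with random stand-in data and assumed array shapes, not the authors' CFD implementation:

```python
# Minimal POD sketch: stack pressure snapshots as columns, decompose by SVD,
# and reconstruct the field from a few dominant modes.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_snapshots = 500, 60                      # surface nodes x time samples (assumed)
P = rng.standard_normal((n_points, n_snapshots))     # stand-in for CFD pressure snapshots

P_mean = P.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(P - P_mean, full_matrices=False)

k = 5                                                # number of dominant POD modes retained
P_reconstructed = P_mean + U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

energy = (s[:k] ** 2).sum() / (s ** 2).sum()         # fraction of fluctuation energy captured
print(f"{k} modes capture {energy:.1%} of the snapshot energy")
```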

Findings

Results show that reconstruction with only a few dominant POD modes represents all flow samples with high accuracy. The POD method provides an efficient simplification for accurate prediction of the instantaneous variation of the pressure field in a hydrodynamic retarder, especially at the stage of high oil-filling rate.

Originality/value

The paper presents a novel numerical method combining POD and AM approaches for rapid and accurate prediction of braking characteristics during the oil-filling period, based on computed flow data.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 26 no. 6
Type: Research Article
ISSN: 0961-5539


Article
Publication date: 8 January 2021

Ashok Naganath Shinde, Sanjay L. Nalbalwar and Anil B. Nandgaonkar


Abstract

Purpose

In today’s digital world, real-time health monitoring is becoming one of the most important challenges in the field of medical research. Body signals such as the electrocardiogram (ECG), electromyogram and electroencephalogram (EEG) are produced in the human body. Their continuous monitoring generates a huge amount of data, and thus an efficient method is required to reduce the size of the acquired data. Compressed sensing (CS) is one of the techniques used for this purpose. It is mostly used in applications where the data volume is huge or the acquisition process is too expensive to gather a vast number of samples at the Nyquist rate. This paper aims to propose the Lion Mutated Crow search Algorithm (LM-CSA) to improve the performance of the CS framework.

Design/methodology/approach

A new CS algorithm is exploited in this paper, in which the compression process undergoes three stages: design of a stable measurement matrix, signal compression and signal reconstruction. The compression stage follows a fixed working principle: signal transformation, computation of Θ and normalization. As the main contribution, the Θ value is evaluated using a new “enhanced bi-orthogonal wavelet filter.” The enhancement lies in the scaling coefficients, which are optimally tuned for the compression. Tuning these coefficients is the main difficulty, and hence this work adopts a meta-heuristic strategy. A new hybrid algorithm is therefore introduced to solve the above-mentioned optimization problem. The proposed algorithm, named the “Lion Mutated Crow search Algorithm (LM-CSA),” is a hybridization of the crow search algorithm (CSA) and the lion algorithm (LA), introduced to enhance the performance of the compression model.
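The three-stage CS pipeline referred to above (measurement matrix design, compression, reconstruction) can be sketched generically. The sketch below uses a random Gaussian matrix and orthogonal matching pursuit as stand-ins; the paper's enhanced bi-orthogonal wavelet filter and LM-CSA tuning are not reproduced here:

```python
# Generic compressed-sensing sketch: measure a sparse signal with a random
# Gaussian matrix, then reconstruct it with orthogonal matching pursuit.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
n, m, k = 256, 80, 10                               # signal length, measurements, sparsity (assumed)

x = np.zeros(n)                                     # k-sparse test signal in the sparsifying domain
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)      # stable measurement matrix
y = Phi @ x                                         # compressed measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(Phi, y)
x_hat = omp.coef_
print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```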

Findings

Finally, the proposed LM-CSA model is compared with the traditional models in terms of certain error measures such as mean error percentage (MEP), symmetric mean absolute percentage error (SMAPE), mean absolute scaled error, mean absolute error (MAE), root mean square error, L1-norm, L2-norm and infinity-norm. For ECG analysis, under bior 3.1, LM-CSA is 56.6, 62.5 and 81.5% better than the bi-orthogonal wavelet in terms of MEP, SMAPE and MAE, respectively. Under bior 3.7 for ECG analysis, LM-CSA is 0.15% better than the genetic algorithm (GA), 0.10% superior to particle swarm optimization (PSO), 0.22% superior to firefly (FF), 0.22% superior to CSA and 0.14% superior to LA in terms of L1-norm. Further, for EEG analysis, LM-CSA is 86.9 and 91.2% better than the traditional bi-orthogonal wavelet under bior 3.1. Under bior 3.3, LM-CSA is 91.7 and 73.12% better than the bi-orthogonal wavelet in terms of MAE and MEP, respectively. Under bior 3.5 for EEG, the L1-norm of LM-CSA is 0.64% superior to GA, 0.43% superior to PSO, 0.62% superior to FF, 0.84% superior to CSA and 0.60% better than LA.
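The error measures quoted above can be computed as in the short sketch below, which assumes the standard definitions (the paper's exact definition of MEP may differ):

```python
# Standard error-measure definitions for comparing a signal x with its
# reconstruction x_hat (assumed forms, not taken from the paper).
import numpy as np

def error_metrics(x, x_hat):
    e = x - x_hat
    return {
        "MEP":   100.0 * np.abs(e).sum() / np.abs(x).sum(),
        "SMAPE": 100.0 * np.mean(2.0 * np.abs(e) / (np.abs(x) + np.abs(x_hat))),
        "MAE":   np.mean(np.abs(e)),
        "RMSE":  np.sqrt(np.mean(e ** 2)),
        "L1":    np.linalg.norm(e, 1),
        "L2":    np.linalg.norm(e, 2),
        "Linf":  np.linalg.norm(e, np.inf),
    }

x = np.linspace(1.0, 2.0, 100)
x_hat = x + 0.01 * np.random.default_rng(0).standard_normal(100)
print(error_metrics(x, x_hat))
```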

Originality/value

This paper presents a novel CS framework using the LM-CSA algorithm for EEG and ECG signal compression. To the best of the authors’ knowledge, this is the first work to use LM-CSA with an enhanced bi-orthogonal wavelet filter to enhance the CS capability as well as to reduce the errors.

Details

International Journal of Pervasive Computing and Communications, vol. 18 no. 5
Type: Research Article
ISSN: 1742-7371


Article
Publication date: 19 December 2019

Waqar Ahmed Khan, S.H. Chung, Muhammad Usman Awan and Xin Wen



Abstract

Purpose

The purpose of this paper is to conduct a comprehensive review of the noteworthy contributions made in the area of the feedforward neural network (FNN) to improve its generalization performance and convergence rate (learning speed); to identify new research directions that will help researchers design new, simple and efficient algorithms and help users implement optimally designed FNNs for solving complex problems; and to explore the wide applications of the reviewed FNN algorithms in solving real-world management, engineering and health sciences problems, demonstrating the advantages of these algorithms in enhancing decision making for practical operations.

Design/methodology/approach

The FNN has gained much popularity during the last three decades; therefore, the authors have focused on algorithms proposed during this period. The selected databases were searched with the popular keywords “generalization performance,” “learning rate,” “overfitting” and “fixed and cascade architecture.” Combinations of the keywords were also used to obtain more relevant results. Articles that were duplicated across databases, not in English, or that matched the keywords but fell outside the scope were discarded.

Findings

The authors studied a total of 80 articles and classified them into six categories according to the nature of the algorithms proposed in these articles, which aimed at improving the generalization performance and convergence rate of FNNs. Reviewing and discussing all six categories would make the paper too long; therefore, the authors further divided the six categories into two parts (i.e. Part I and Part II). The current paper, Part I, investigates two categories that focus on learning algorithms (i.e. gradient learning algorithms for network training and gradient-free learning algorithms). The remaining four categories, which mainly explore optimization techniques, are reviewed in Part II (i.e. optimization algorithms for learning rate, bias and variance (underfitting and overfitting) minimization algorithms, constructive topology neural networks and metaheuristic search algorithms). For the sake of simplicity, the paper entitled “Machine learning facilitated business intelligence (Part II): Neural networks optimization techniques and applications” is referred to as Part II. This results in a division of the 80 articles into 38 for Part I and 42 for Part II. After discussing the FNN algorithms with their technical merits and limitations, along with real-world management, engineering and health sciences applications for each category, the authors suggest seven new future directions (three in Part I and the other four in Part II) which can contribute to strengthening the literature.

Research limitations/implications

The FNN contributions are numerous and cannot be covered in a single study. The authors remain focused on learning algorithms and optimization techniques, along with their application to real-world problems, that aim to improve the generalization performance and convergence rate of FNNs by computing optimal hyperparameters, connection weights and hidden units, selecting an appropriate network architecture rather than relying on trial-and-error approaches, and avoiding overfitting.

Practical implications

This study will help researchers and practitioners to understand in depth the merits and limitations of existing FNN algorithms, the research gaps, the application areas and the changes in research studies over the last three decades. Moreover, users who have gained in-depth knowledge of how these algorithms are applied in the real world may apply appropriate FNN algorithms to obtain optimal results in the shortest possible time, and with less effort, for their specific application problems.

Originality/value

The existing literature surveys are limited in scope because they compare algorithms, study algorithm application areas or focus on specific techniques. This implies that the existing surveys concentrate on some specific algorithms or their applications (e.g. pruning algorithms, constructive algorithms, etc.). In this work, the authors propose a comprehensive review of the different categories, along with their real-world applications, that may affect FNN generalization performance and convergence rate. This makes the classification scheme novel and significant.

Details

Industrial Management & Data Systems, vol. 120 no. 1
Type: Research Article
ISSN: 0263-5577


Article
Publication date: 26 June 2009

George J. Besseris


Abstract

Purpose

The aim of this paper is to circumvent the multi‐distribution effects and small sample constraints that may arise in unreplicated‐saturated fractional factorial designs during construction blueprint screening.

Design/methodology/approach

A simple additive ranking scheme is devised based on converting the responses of interest to rank variables regardless of the nature of each response and the optimization direction that may be issued for each of them. Collapsing all ranked responses to a single rank response, appropriately referred to as “Super‐Ranking”, allows simultaneous optimization for all factor settings considered.

Research limitations/implications

The Super‐Rank response is treated by Wilcoxon's rank sum test or Mann‐Whitney's test, aiming to establish possible factor‐setting differences by exploring their statistical significance. An optimal value for each response is predicted.
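A minimal sketch of the ranking-and-testing scheme, as read from the abstract, is given below. The response values and target are made up for an eight-run design; the actual case-study data are not reproduced:

```python
# Super-Ranking sketch: rank each response in its own optimization direction,
# sum the ranks into a single "Super-Rank", and test factor-setting differences
# with the Mann-Whitney (Wilcoxon rank-sum) test.
import numpy as np
from scipy.stats import rankdata, mannwhitneyu

# Eight runs of an L8 design, three responses (illustrative values only)
y_larger  = np.array([12.0, 15.0, 9.0, 14.0, 11.0, 16.0, 10.0, 13.0])   # larger-is-better
y_smaller = np.array([0.8, 0.5, 1.1, 0.6, 0.9, 0.4, 1.0, 0.7])          # smaller-is-better
y_nominal = np.array([5.1, 4.9, 5.4, 5.0, 5.2, 4.8, 5.3, 5.1])          # nominal-is-best (target 5.0)

super_rank = (
    rankdata(-y_larger)                  # high values get good (low) ranks
    + rankdata(y_smaller)                # low values get good ranks
    + rankdata(np.abs(y_nominal - 5.0))  # close-to-target values get good ranks
)

# Compare the Super-Rank between the two levels of one factor column
level = np.array([0, 0, 0, 0, 1, 1, 1, 1])
stat, p = mannwhitneyu(super_rank[level == 0], super_rank[level == 1])
print("Mann-Whitney U:", stat, "p-value:", p)
```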

Practical implications

It is stressed, by example, that the model may simultaneously handle any number of quality characteristics. A case study based on a real geotechnical engineering project is used to illustrate how this method may be applied to optimize simultaneously three quality characteristics, one belonging to each of the three possible cases, i.e. “nominal‐is‐best”, “larger‐is‐better” and “smaller‐is‐better”. For this reason, a screening set of experiments is performed on a professional CAD/CAE software package making use of an L8(2^7) orthogonal array in which all seven factor columns are saturated by group excavation controls.

Originality/value

The statistical nature of this method is discussed in comparison with results produced by the desirability method for the case of exhausted degrees of freedom for the error. The case study itself is a unique paradigm from the area of construction operations management.

Details

International Journal of Quality & Reliability Management, vol. 26 no. 6
Type: Research Article
ISSN: 0265-671X


Article
Publication date: 18 April 2017

David Binion and Xiaolin Chen


Abstract

Purpose

This paper aims to describe a method for efficient frequency domain model order reduction. The method attempts to combine the desirable attributes of Krylov reduction and proper orthogonal decomposition (POD) and is termed Krylov enhanced POD (KPOD).

Design/methodology/approach

The KPOD method couples Krylov’s moment-matching property with POD’s data generalization ability to construct reduced models capable of maintaining accuracy over wide frequency ranges. The method is based on generating a sequence of state- and frequency-dependent Krylov subspaces and then applying POD to extract a single basis that generalizes the sequence of Krylov bases.
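The KPOD construction can be illustrated on a toy first-order system: Krylov bases are generated at several expansion frequencies and POD extracts a single generalized basis from the stacked bases. The shapes, frequencies and system below are assumptions for illustration, not the authors' finite-element implementation:

```python
# KPOD-style sketch: frequency-dependent Krylov bases followed by POD (SVD).
import numpy as np

rng = np.random.default_rng(2)
n, m = 200, 8                                         # full order, Krylov vectors per frequency (assumed)
A = -np.diag(np.linspace(1.0, 50.0, n)) + 0.01 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

def krylov_basis(s):
    """Orthonormal basis of span{(sI-A)^-1 b, (sI-A)^-2 b, ...} at frequency s."""
    Minv = np.linalg.inv(s * np.eye(n) - A)
    V = np.empty((n, m), dtype=complex)
    v = b.astype(complex)
    for j in range(m):
        v = Minv @ v
        V[:, j] = v
    Q, _ = np.linalg.qr(V)
    return Q

frequencies = 1j * np.array([1.0, 10.0, 40.0])        # expansion points along the jw-axis
stacked = np.hstack([krylov_basis(s) for s in frequencies])

U, sigma, _ = np.linalg.svd(stacked, full_matrices=False)
k = 10
V_kpod = U[:, :k]                                     # single generalized reduced basis
A_r = V_kpod.conj().T @ A @ V_kpod                    # reduced operator
print("reduced order:", A_r.shape)
```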

Findings

The frequency response of a pre-stressed microelectromechanical system resonator is used as an example to demonstrate KPOD’s ability in frequency domain model reduction, with KPOD exhibiting a 44 per cent efficiency improvement over POD.

Originality/value

The results indicate that KPOD greatly outperforms POD in accuracy and efficiency, making the proposed method a potential asset in the design of frequency-selective applications.

Details

Engineering Computations, vol. 34 no. 2
Type: Research Article
ISSN: 0264-4401


Article
Publication date: 31 May 2022

Dipankar Das


Abstract

Purpose

To run a job guarantee public policy scheme, it is important to know the aspiration level, or reference point, of labor, and accordingly the sequence of labor hours and wages is to be prepared. The existing job guarantee schemes consider the same wage rate for all types of jobs; as a result, it becomes difficult to identify the reference point. The present work aims to propose a job guarantee scheme in which different types of jobs have different wage rates. The paper explains the choice problem between labor and leisure at different wage rates and proposes complete computational tools to be incorporated into job guarantee schemes. The paper also gives a mechanism for preparing the list of jobs and corresponding wage rates while maintaining a balance between labor and leisure, where productive activities measure labor hours and labor welfare measures leisure hours. Lastly, the paper provides the analytical tools to interpret the ex-post data of job guarantee public policy schemes.

Design/methodology/approach

The paper is based on the coordination game and its welfare implications in job guarantee public policy schemes.

Findings

The present paper provides initial work on practically measuring the choice between labor and leisure at different wage rates. This helps in obtaining the equilibrium strategies, namely the combination of labor hours and wage rate, between the policymaker and the labor, and thereby in implementing job guarantee schemes. For example, to run a basic income policy successfully, the basic income calculation should be made with due care; otherwise, there will be a downward trend in the basic income and the welfare of labor will be reduced, because labor would have to be supplied in excess to meet the target income.

Originality/value

This paper derives theories and explains how the equilibrium in this coordination game can be achieved, and how the policy of job guarantee schemes can be implemented in practice. In the MGNREGA scheme, the public institution declares different categories of jobs with different wage rates, the categories being classified with respect to the hours required to complete the job. The public institution therefore declares different lists, or a sequence of pairs of labor hours and wage rates. Moreover, the list is stochastic, because it can also be changed by the inclusion of offers from the market. The labor has to select from the list. The challenge on the part of the public institution is to prepare the list in such a way that the inclusion of market offers will not distort the equilibrium of the coordination game. An important method is proposed here to analyze the ex-post data of job offers so that the future sequence of job offers can be prepared with due care. One objective of the policymaker is to prepare the list of job offers in such a way that the labor supply converges to a point and does not deviate if the wage rate increases further. This objective balances the distribution of funds between the existing registered labor and new entrants into the job guarantee schemes.

Details

Journal of Economic Studies, vol. 50 no. 4
Type: Research Article
ISSN: 0144-3585


Article
Publication date: 1 June 2000

P. Di Barba


Abstract

Introduces papers from this area of expertise from the ISEF 1999 Proceedings. States that the goal herein is to identify devices or systems able to provide prescribed performance. Notes that 18 papers from the Symposium are grouped in the area of automated optimal design. Describes the main challenges that condition the future development of computational electromagnetics. Concludes by itemizing the range of applications in this third chapter, from small actuators to the optimization of induction heating systems.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 19 no. 2
Type: Research Article
ISSN: 0332-1649


Article
Publication date: 20 April 2015

Mário Rui Tiago Arruda and Dragos Ionut Moldovan


Abstract

Purpose

The purpose of this paper is to report the implementation of an alternative time integration procedure for the dynamic non-linear analysis of structures.

Design/methodology/approach

The time integration algorithm discussed in this work corresponds to a spectral decomposition technique implemented in the time domain. As in the case of the modal decomposition in space, the numerical efficiency of the resulting integration scheme depends on the possibility of uncoupling the equations of motion. This is achieved by solving an eigenvalue problem in the time domain that only depends on the approximation basis being implemented. Complete sets of orthogonal Legendre polynomials are used to define the time approximation basis required by the model.
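A hedged sketch of the general idea follows, using one common Galerkin-in-time construction that may differ in detail from the paper's formulation: the response over a time slab is expanded in Legendre polynomials, the small time operators are assembled, and the equations are uncoupled through an eigenvalue problem in the time domain.

```python
# Spectral-in-time sketch with a Legendre basis (assumed construction).
import numpy as np
from numpy.polynomial import legendre as leg

p = 6                                        # order of the Legendre time basis (assumed)
tq, wq = leg.leggauss(p + 2)                 # Gauss-Legendre quadrature on [-1, 1]
I = np.eye(p + 1)

Phi  = np.stack([leg.legval(tq, I[i]) for i in range(p + 1)], axis=1)            # P_j(tq)
dPhi = np.stack([leg.legval(tq, leg.legder(I[i])) for i in range(p + 1)], axis=1)
phi0 = np.array([leg.legval(-1.0, I[i]) for i in range(p + 1)])                  # P_j(-1)

M_t = Phi.T @ (wq[:, None] * Phi)                            # time "mass" matrix
D_t = Phi.T @ (wq[:, None] * dPhi) + np.outer(phi0, phi0)    # derivative term + weak initial condition

# Diagonalizing the small time operator uncouples the time-discretized equations:
# each eigenpair defines an independent spatial problem, the analogue in time of
# modal decomposition in space.
lam, Q = np.linalg.eig(np.linalg.solve(M_t, D_t))
print("time-domain eigenvalues:", np.round(lam, 3))
```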

Findings

A classical example with a known analytical solution is presented to validate the model in both linear and non-linear analysis, and the efficiency of the numerical technique is assessed. Comparisons are made with the classical Newmark method applied to the solution of both linear and non-linear dynamics. The mixed time integration technique presents some interesting features that make its application to the analysis of non-linear dynamic systems very attractive. It corresponds in essence to a modal decomposition technique implemented in the time domain and, as in the case of the modal decomposition in space, the numerical efficiency of the resulting integration scheme depends on the possibility of uncoupling the equations of motion.

Originality/value

One of the main advantages of this technique is the possibility of considering relatively large time step increments, which enhances the computational efficiency of the numerical procedure. Due to its characteristics, the method is well suited to parallel processing, a feature to be explored in the near future.

Details

Engineering Computations, vol. 32 no. 2
Type: Research Article
ISSN: 0264-4401


Article
Publication date: 1 June 2021

Hannan Amoozad Mahdiraji, Madjid Tavana, Pouya Mahdiani and Ali Asghar Abbasi Kamardi


Abstract

Purpose

Customer differences and similarities play a crucial role in service operations, and service industries need to develop various strategies for different customer types. This study aims to understand the behavioral pattern of customers in the banking industry by proposing a hybrid data mining approach with rule extraction and service operation benchmarking.

Design/methodology/approach

The authors analyze customer data to identify the best customers using a modified recency, frequency and monetary (RFM) model and K-means clustering. The number of clusters is determined with a two-step K-means quality analysis based on the Silhouette, Davies–Bouldin and Calinski–Harabasz indices and the evaluation based on distance from average solution (EDAS). The best–worst method (BWM) and the total area based on orthogonal vectors (TAOV) are used next to sort the clusters. Finally, the associative rules and the Apriori algorithm are used to derive the customers' behavior patterns.
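The segmentation step can be sketched as below: RFM-style features, K-means over a range of cluster counts, and the three cluster-quality indices used to choose the number of clusters. The data are synthetic stand-ins; BWM, TAOV, EDAS and the Apriori rule mining are not reproduced here:

```python
# RFM segmentation sketch: scale features, run K-means for several k, and
# report silhouette, Davies-Bouldin and Calinski-Harabasz scores.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, davies_bouldin_score, calinski_harabasz_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Stand-in RFM table: recency (days), frequency (count), monetary (amount)
rfm = np.column_stack([
    rng.integers(1, 365, 1000),
    rng.integers(1, 50, 1000),
    rng.gamma(2.0, 500.0, 1000),
]).astype(float)
X = StandardScaler().fit_transform(rfm)

for k in range(3, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k,
          round(silhouette_score(X, labels), 3),
          round(davies_bouldin_score(X, labels), 3),
          round(calinski_harabasz_score(X, labels), 1))
```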

Findings

As a result of implementing the proposed approach in the financial service industry, customers were segmented and ranked into six clusters by analyzing 20,000 records. Furthermore, frequent customer financial behavior patterns were recognized based on demographic characteristics and financial transactions of customers. Thus, customer types were classified as highly loyal, loyal, high-interacting, low-interacting and missing customers. Eventually, appropriate strategies for interacting with each customer type were proposed.

Originality/value

The authors propose a novel hybrid multi-attribute data mining approach for rule extraction and the service operations benchmarking approach by combining data mining tools with a multilayer decision-making approach. The proposed hybrid approach has been implemented in a large-scale problem in the financial services industry.
