Search results

1 – 10 of over 8000
Article
Publication date: 12 August 2020

Ngoc Le Chau, Ngoc Thoai Tran and Thanh-Phong Dao

Compliant mechanisms have been receiving great interest in precision engineering. However, analytical methods for their behavior analysis remain a challenge…

Abstract

Purpose

Compliant mechanisms have been receiving great interest in precision engineering. However, analytical methods for their behavior analysis remain a challenge because of unclear kinematic behaviors. In particular, design optimization for compliant mechanisms becomes an important task as problems grow more and more complex. Therefore, the purpose of this study is to design a new hybrid computational method that integrates statistics, a numerical method, computational intelligence and optimization.

Design/methodology/approach

A tensural bistable compliant mechanism is used to demonstrate the efficiency of the developed method. A pseudo model of the mechanism is designed and simulations are planned to retrieve the data sets. The main contributions of the design variables are analyzed by analysis of variance to initialize several new populations. Next, the objective functions are transformed into desirability values, which are the inputs of a fuzzy inference system (FIS). The FIS modeling is aimed at initializing a single combined objective function (SCOF). Subsequently, an adaptive neuro-fuzzy inference system is developed to model the relation between the main geometrical parameters and the SCOF. Finally, the SCOF is maximized by the lightning attachment procedure optimization algorithm to yield a global optimum.
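As an illustration of the desirability step described above, the following minimal Python sketch maps two raw responses onto [0, 1] desirabilities and combines them into a single score. A geometric mean stands in for the FIS combination used in the paper, and all variable names, ranges and values are hypothetical:

```python
import numpy as np

def desirability(y, y_min, y_max, weight=1.0):
    # Larger-is-better Derringer-style desirability: maps a raw
    # response y onto [0, 1] relative to its acceptable range.
    d = np.clip((y - y_min) / (y_max - y_min), 0.0, 1.0)
    return d ** weight

def combined_objective(desirabilities):
    # Aggregate the individual desirabilities into one score via the
    # geometric mean (a stand-in for the paper's FIS combination).
    d = np.asarray(desirabilities, dtype=float)
    return float(d.prod() ** (1.0 / d.size))

# Two hypothetical responses for a bistable mechanism design.
d_disp = desirability(2.4, y_min=1.0, y_max=3.0)   # displacement, maximize
d_force = desirability(0.9, y_min=0.0, y_max=1.0)  # normalized force score
scof = combined_objective([d_disp, d_force])
```

A metaheuristic (such as the lightning attachment procedure optimizer named in the abstract) would then maximize `scof` over the design variables.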

Findings

The results prove that the present method outperforms a combination of fuzzy logic and the Taguchi method. Non-parametric tests confirm that the present method is also superior to other algorithms. The proposed computational method is a useful, systematic method that can be applied to compliant mechanisms with complex structures and to multiple-constraint optimization problems.

Originality/value

The novelty of this work is a new approach that combines statistical techniques, a numerical method, computational intelligence and a metaheuristic algorithm. The method is capable of solving multi-objective optimization problems for compliant mechanisms with nonlinear complexity.

Details

Engineering Computations, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 8 May 2018

Zeeshan Ahmad, Yaoliang Song and Qiang Du

Direction-of-arrival (DOA) estimation for wideband sources has attracted growing interest in the recent decade because wideband sources are incorporated in many…

Abstract

Purpose

Direction-of-arrival (DOA) estimation for wideband sources has attracted growing interest in the recent decade because wideband sources are incorporated in many real-world applications such as communication systems, radar, sonar and acoustics. One way to estimate the DOAs of wideband signals is to decompose them into narrowband signals using the discrete Fourier transform (DFT) and then apply well-established narrowband algorithms to each signal; the results are then averaged to yield the final DOAs. These techniques require scanning the full band of the wideband sources, which ultimately degrades resolution and increases complexity. This paper aims to propose a new DOA estimation methodology that solves these problems.
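The decomposition step mentioned above can be sketched in a few lines. This covers only the DFT split into narrowband bins, not the sub-band selection criterion that is the paper's contribution; the array size and bin count are arbitrary:

```python
import numpy as np

def decompose_to_narrowband(snapshots, n_bins):
    # Per-sensor DFT along the time axis splits the wideband data into
    # n_bins narrowband components; a narrowband DOA estimator
    # (e.g. MUSIC) could then be applied to each bin and the
    # per-bin estimates averaged.
    spectrum = np.fft.fft(snapshots, n=n_bins, axis=1)
    return spectrum.T  # shape: (n_bins, num_sensors)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 256))          # 4-sensor array, 256 samples
narrowband = decompose_to_narrowband(x, 64)
```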

Design/methodology/approach

The new DOA estimation methodology is based on the incoherent signal subspace method (ISSM). The proposed approach presents a criterion for selecting a single sub-band of the decomposed narrowband signals instead of scanning the whole signal spectrum. The DOAs of the wideband signals are then estimated using the selected sub-band only; hence the method is named single sub-band ISSM (SSB-ISSM).

Findings

The computational complexity of the proposed method is much lower than that of traditional DFT-based methods. The effectiveness and advantages of the proposed methodology are theoretically investigated, and computational complexity is also addressed.

Originality/value

To verify the theoretical analysis, computer simulations are implemented, and comparisons with other algorithms are made. The simulation results show that the proposed method achieves better performance and accurately estimates the DOAs of wideband sources.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 37 no. 3
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 22 June 2010

Imam Machdi, Toshiyuki Amagasa and Hiroyuki Kitagawa

The purpose of this paper is to propose general parallelism techniques for holistic twig join algorithms to process queries against Extensible Markup Language (XML…

Abstract

Purpose

The purpose of this paper is to propose general parallelism techniques for holistic twig join algorithms to process queries against Extensible Markup Language (XML) databases on a multi‐core system.

Design/methodology/approach

The parallelism techniques comprised data and task parallelism. For data parallelism, the paper adopted stream‐based partitioning for XML to partition the XML data as the basis of parallelism on multiple CPU cores. The XML data partitioning was performed at two levels. The first level created buckets to provide data independence and balance loads among the CPU cores; each bucket was assigned to a CPU core. Within each bucket, a second level of XML data partitioning created finer partitions to provide finer parallelism. Each CPU core performed the holistic twig join algorithm on its own finer partitions in parallel with the other CPU cores. In task parallelism, the holistic twig join algorithm was decomposed into two main tasks, which were pipelined to create parallelism. The first task adopted the data parallelism technique and its outputs were transferred to the second task periodically. Since data transfers incur overheads, the size of each data transfer needed to be estimated carefully to achieve optimal performance.
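The two-level partitioning idea can be sketched as follows. The per-partition work here is a placeholder sum rather than an actual holistic twig join, and round-robin bucketing is just one simple load-balancing choice:

```python
from concurrent.futures import ThreadPoolExecutor

def partition(items, n_buckets):
    # Level 1: round-robin bucketing to balance load across cores;
    # each bucket would be assigned to one CPU core.
    buckets = [[] for _ in range(n_buckets)]
    for i, item in enumerate(items):
        buckets[i % n_buckets].append(item)
    return buckets

def process_bucket(bucket, chunk=8):
    # Level 2: split the bucket into finer partitions and process
    # each one; the real work would be a twig join over XML streams.
    finer = [bucket[i:i + chunk] for i in range(0, len(bucket), chunk)]
    return [sum(part) for part in finer]  # placeholder per-partition work

data = list(range(100))
buckets = partition(data, n_buckets=4)
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_bucket, buckets))
```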

Findings

The data and task parallelism techniques contribute to good performance, especially for queries having complex structures and/or higher query selectivity. The performance of data parallelism can be further improved by task parallelism. Significant performance improvement is attained for queries with higher selectivity because more of the output computed by the second task is produced in parallel with the first task.

Research limitations/implications

The proposed parallelism techniques primarily deal with executing a single long‐running query for intra‐query parallelism, partitioning XML data on‐the‐fly and allocating partitions to CPU cores statically. They presume that no dynamic XML data updates occur during parallel execution.

Practical implications

The effectiveness of the proposed parallel holistic twig joins relies fundamentally on some system parameter values that can be obtained from a benchmark of the system platform.

Originality/value

The paper proposes novel techniques to increase parallelism by combining data and task parallelism for high performance. To the best of the authors' knowledge, this is the first paper to parallelize holistic twig join algorithms on a multi‐core system.

Details

International Journal of Web Information Systems, vol. 6 no. 2
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 7 September 2015

M. V. A. Raju Bahubalendruni, Bibhuti Bhusan Biswal, Manish Kumar and Radharani Nayak

The purpose of this paper is to determine the significant influence of assembly predicate consideration on optimal assembly sequence generation (ASG) in terms of search…

Abstract

Purpose

The purpose of this paper is to determine the significant influence of assembly predicate consideration on optimal assembly sequence generation (ASG) in terms of search space, computational time and the possibility of producing practically infeasible assembly sequences. An appropriate assembly sequence results in minimal lead time and low assembly cost. ASG is a complex combinatorial optimisation problem that deals with several assembly predicates to produce an optimal assembly sequence. The consideration of each assembly predicate strongly influences the search space and thereby the computational time needed to achieve a valid assembly sequence. Often, ignoring an assembly predicate leads to an inappropriate assembly sequence that may not be physically possible; at other times, a predicate assumption drastically raises the search space and the computational time.

Design/methodology/approach

The influence of assuming and of considering different assembly predicates on optimal assembly sequence generation is clearly illustrated with examples using the part concatenation method.

Findings

The presence of physical attachments and the type of assembly liaisons decide which assembly predicates should be considered, reducing the complexity of the problem formulation and the overall computational time.

Originality/value

Most of the time, assembly predicates are ignored to reduce computational time, without considering their impact on the assembly sequence problem irrespective of the assembly attributes. The current research proposes directions for predicate consideration based on the assembly configuration for effective and efficient ASG.

Article
Publication date: 8 July 2020

Wasiq Ullah, Faisal Khan and Muhammad Umair

The purpose of this paper is to investigate an alternative simplified analytical approach for the design of electric machines. Numerical-based finite element method (FEM…

Abstract

Purpose

The purpose of this paper is to investigate an alternative simplified analytical approach for the design of electric machines. The numerical finite element method (FEM) is a powerful tool for accurate modelling and electromagnetic performance analysis of electric machines. However, computational complexity, magnetic saturation, complex stator structures and time consumption compel researchers to adopt alternative analytical models for the initial design of electric machines, especially flux switching machines (FSMs).

Design/methodology/approach

In this paper, a simplified lumped parameter magnetic equivalent circuit (LPMEC) model is presented for the newly developed segmented PM consequent pole flux switching machine (SPMCPFSM). The LPMEC model accounts for the influence of all machine parts while modelling only a quarter of the machine, which helps reduce computational complexity, computational time and storage requirements without affecting overall accuracy. Furthermore, the inductance calculation is performed in the rotor and stator frames of reference for accurate estimation of the self-inductance, mutual inductance and dq-axis inductance profiles using the Park transformation.
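The Park transformation referred to above is standard; as a quick sketch (independent of the LPMEC model itself), the abc-to-dq0 mapping can be written as:

```python
import numpy as np

def park_transform(x_abc, theta):
    # abc -> dq0 Park transform (amplitude-invariant form);
    # theta is the electrical rotor angle in radians.
    a = 2.0 * np.pi / 3.0
    T = (2.0 / 3.0) * np.array([
        [np.cos(theta),  np.cos(theta - a),  np.cos(theta + a)],
        [-np.sin(theta), -np.sin(theta - a), -np.sin(theta + a)],
        [0.5, 0.5, 0.5],
    ])
    return T @ np.asarray(x_abc)

# A balanced three-phase set aligned with theta = 0 maps onto the d-axis.
theta = 0.0
i_abc = [np.cos(theta),
         np.cos(theta - 2.0 * np.pi / 3.0),
         np.cos(theta + 2.0 * np.pi / 3.0)]
i_dq0 = park_transform(i_abc, theta)  # approximately [1, 0, 0]
```

Applying this transform to flux-linkage samples at a sweep of rotor angles is one common way to extract the dq-axis inductance profile.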

Findings

The developed LPMEC model is validated against corresponding finite element analysis using the JMAG commercial FEA package v18.1, showing good agreement with an accuracy of ∼98.23%; the Park transformation precisely estimates the inductance profile in the rotor and stator frames of reference.

Practical implications

The model is developed for high-speed brushless AC applications.

Originality/value

The proposed SPMCPFSM enhances electromagnetic performance owing to its partitioned PM configuration, which distinguishes it from conventional designs. Moreover, the developed LPMEC model reduces computational time by solving only a quarter of the machine.

Article
Publication date: 2 May 2017

Kannan S. and Somasundaram K.

Due to the large number of non-uniform transactions per day, money laundering detection (MLD) is a time-consuming and difficult process. The major purpose of the proposed…

Abstract

Purpose

Due to the large number of non-uniform transactions per day, money laundering detection (MLD) is a time-consuming and difficult process. The major purpose of the proposed auto-regressive (AR) outlier-based MLD (AROMLD) is to reduce the time consumed in handling large-sized, non-uniform transactions.

Design/methodology/approach

The AR-based outlier design produces consistent, asymptotically distributed results that enhance demand-forecasting abilities. In addition, the inter-quartile range (IQR) formulations proposed in this paper support the detailed analysis of time-series data pairs.
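The AR-plus-IQR idea can be sketched as follows: compute residuals of a simple AR(1) model and flag those falling outside the Tukey fences. The AR coefficient is fixed here for brevity (a real detector would estimate it from the data), and the injected jump is synthetic:

```python
import numpy as np

def ar1_residuals(x, phi=0.9):
    # Residuals of a fixed AR(1) model: e_t = x_t - phi * x_{t-1}.
    x = np.asarray(x, dtype=float)
    return x[1:] - phi * x[:-1]

def iqr_outliers(residuals, k=1.5):
    # Flag residuals outside the Tukey fences [Q1 - k*IQR, Q3 + k*IQR].
    q1, q3 = np.percentile(residuals, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return np.flatnonzero((residuals < lo) | (residuals > hi))

rng = np.random.default_rng(1)
series = rng.standard_normal(200).cumsum() * 0.1  # smooth synthetic series
series[120] += 50.0                               # inject a suspicious jump
flags = iqr_outliers(ar1_residuals(series))       # jump appears at residual 119
```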

Findings

The high dimensionality of the predictions and the difficulty of characterizing the relationships and differences between data pairs make time-series mining a complex task. The presence of domain invariance in time-series mining motivates the regressive formulation for outlier detection. Deep analysis of the time-varying process and the demands of forecasting combine the AR and IQR formulations into an effective outlier detector.

Research limitations/implications

The present research focuses on detecting outliers in past financial transactions using the AR model. Predicting the possibility of an outlier in future transactions remains a major open issue.

Originality/value

The lack of prior segmentation in ML detection leads to dimensionality problems, and the absence of a boundary isolating normal from suspicious transactions imposes further limitations. The lack of deep analysis and the high time consumption are overcome by the regression formulation.

Details

Journal of Money Laundering Control, vol. 20 no. 2
Type: Research Article
ISSN: 1368-5201


Content available
Article
Publication date: 18 August 2020

Slavcho Shtrakov

In this paper we study a class of complexity measures, induced by a new data structure for representing k-valued functions (operations), called the minor decision diagram…

Abstract

In this paper we study a class of complexity measures induced by a new data structure, called the minor decision diagram, for representing k-valued functions (operations). When values are assigned to some variables of a function, the resulting functions are called subfunctions; when some variables are identified, the resulting functions are called minors. The sets of essential variables in subfunctions of f are called separable in f.

We examine the maximal separable subsets of variables and their conjugates, introduced in the paper, proving that each such set has at least one conjugate. The essential arity gap gap(f) of the function f is the minimal number of essential variables in f that become fictive when distinct essential variables in f are identified. We also investigate separable sets of variables in functions with non-trivial arity gap. This allows us to solve several important algebraic, computational and combinatorial problems about finite-valued functions.
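For the Boolean case (k = 2), the notions of subfunction, minor and essential variable can be sketched directly; the function chosen below is illustrative only:

```python
from itertools import product

def subfunction(f, n, var, value):
    # Assign `value` to variable `var` of an n-ary Boolean function f,
    # yielding an (n-1)-ary subfunction.
    def g(*args):
        full = list(args[:var]) + [value] + list(args[var:])
        return f(*full)
    return g

def minor(f, n, i, j):
    # Identify variables i and j (i < j): variable j is replaced by
    # variable i, yielding an (n-1)-ary minor.
    def g(*args):
        full = list(args)
        full.insert(j, args[i])
        return f(*full)
    return g

def essential_vars(f, n):
    # A variable is essential if flipping its value can change f.
    ess = set()
    for v in range(n):
        f0, f1 = subfunction(f, n, v, 0), subfunction(f, n, v, 1)
        for args in product((0, 1), repeat=n - 1):
            if f0(*args) != f1(*args):
                ess.add(v)
                break
    return ess

# f(x0, x1, x2) = x0 XOR x1: variable x2 is fictive (non-essential).
f = lambda x0, x1, x2: x0 ^ x1
m = minor(f, 3, 0, 1)  # identify x0 and x1: m(x, x2) = x ^ x = 0
```

Identifying the two essential variables of XOR makes both fictive at once, which is the kind of drop the essential arity gap measures.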

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964


Content available
Article
Publication date: 2 September 2019

Juliana Padilha Leitzke and Hubert Zangl

This paper aims to present an approach based on electrical impedance tomography spectroscopy (EITS) for the determination of water and ice fraction in low-power…

Abstract

Purpose

This paper aims to present an approach based on electrical impedance tomography spectroscopy (EITS) for the determination of water and ice fractions in low-power applications such as autarkic wireless sensors, which require a reconstruction approach of low computational complexity and a low number of electrodes. This paper also investigates how the electrode design can affect the reconstruction results in tomography.

Design/methodology/approach

EITS is performed using a non-iterative method called the optimal first-order approximation. In addition, a planar electrode geometry is used instead of the traditional circular electrode geometry. Such a structure allows the system to identify materials placed in the region above the sensor, which need not be confined in a pipe. For the optimization, the mean squared error (MSE) between the reference images and the reconstructed images was calculated.

Findings

The authors demonstrate that even with a low number of four electrodes and a low complexity reconstruction algorithm, a reasonable reconstruction of water and ice fractions is possible. Furthermore, it is shown that an optimal distribution of the sensor electrodes can help to reduce the MSE without any costs in terms of computational complexity or power consumption.

Originality/value

This paper shows through simulations that the reconstruction of ice and water mixtures is possible and that the electrode design is a topic of great importance, as they can significantly affect the reconstruction results.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 38 no. 5
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 13 May 2014

Fabian Andres Lara-Molina, João Maurício Rosário, Didier Dumur and Philippe Wenger

The purpose of this paper is to address the synthesis and experimental application of a generalized predictive control (GPC) technique on an Orthoglide robot.

Abstract

Purpose

The purpose of this paper is to address the synthesis and experimental application of a generalized predictive control (GPC) technique on an Orthoglide robot.

Design/methodology/approach

The control strategy is composed of two control loops. The inner loop aims at linearizing the nonlinear robot dynamics using feedback linearization. The outer loop tracks the desired trajectory based on GPC strategy, which is robustified against measurement noise and neglected dynamics using Youla parameterization.

Findings

The experimental results show the benefits of the robustified predictive control strategy on the dynamical performance of the Orthoglide robot in terms of tracking accuracy, disturbance rejection, attenuation of noise acting on the control signal and parameter variation without increasing the computational complexity.

Originality/value

The paper shows the implementation of the robustified predictive control strategy in real time with low computational complexity on the Orthoglide robot.

Details

Industrial Robot: An International Journal, vol. 41 no. 3
Type: Research Article
ISSN: 0143-991X


Content available
Article
Publication date: 2 December 2019

Christophe Schinckus

The term “agent-based modelling” (ABM) is a buzzword which is widely used in the scientific literature even though it refers to a variety of methodologies implemented in…

Abstract

Purpose

The term “agent-based modelling” (ABM) is a buzzword widely used in the scientific literature, even though it refers to a variety of methodologies implemented in different disciplinary contexts. The numerous works dealing with ABM require clarification to better understand the lines of thinking paved by this approach in economics. All modelling is a means and a source of knowledge, and this epistemic function can vary with the methodology. This paper presents four major ways (deductive, abductive, metaphorical and phenomenological) of implementing an agent-based framework to describe economic systems. ABM generates numerous debates in economics and opens room for epistemological questions about the micro-foundations of macroeconomics; before dealing with that issue, the purpose of this paper is to identify the kinds of ABM found in economics.

Design/methodology/approach

The profusion of works dealing with ABM requires clarification to better understand the lines of thinking paved by this approach in economics. This paper offers a conceptual classification outlining the major trends of ABM in economics.

Findings

There are four categories of ABM in economics.

Originality/value

This paper suggests a methodological categorization of ABM works in economics.

Details

Journal of Asian Business and Economic Studies, vol. 26 no. 2
Type: Research Article
ISSN: 2515-964X
