Search results

1–10 of over 12,000
Article
Publication date: 13 August 2021

Manju V.M. and Ganesh R.S.

Multiple-input multiple-output (MIMO) technology combined with multi-user massive MIMO is a well-known approach to achieving high spectral efficiency in wideband systems, and this work targets the…

Abstract

Purpose

Multiple-input multiple-output (MIMO) technology combined with multi-user massive MIMO is a well-known approach to achieving high spectral efficiency in wideband systems, and this work targets the detection of MIMO signals. The increasing data rates, with multiple antennas and multiple users sharing the communication channel simultaneously, lead to higher capacity requirements and increased complexity. Thus, different detection algorithms have been developed for massive MIMO.

Design/methodology/approach

This paper analyzes the literature on various detection algorithms and techniques for MIMO detectors. It reviews several research papers and exhibits the significance of each detection method.

Findings

This paper provides details of the performance analysis of MIMO detectors and reveals the best value for each performance measure. Finally, it outlines the research issues that future researchers can pursue in massive MIMO detection.

Originality/value

This paper presents a detailed review of massive MIMO detection across different algorithms and techniques. The survey mainly focuses on the different types of channels used in MIMO detection and on the number of antennas used in transmitting signals from source to destination and vice versa. The performance measures and the best performance of each of the detectors are described.
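
As a concrete anchor for the detector families such a survey covers, the sketch below implements one classical baseline, a linear minimum mean square error (MMSE) uplink detector, in Python/NumPy. This is a minimal illustration under assumed parameters (64 base-station antennas, 8 users, QPSK, 10 dB SNR), not an algorithm from the surveyed papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: 64 base-station antennas, 8 single-antenna users, QPSK.
n_rx, n_users, snr_db = 64, 8, 10.0
H = (rng.standard_normal((n_rx, n_users))
     + 1j * rng.standard_normal((n_rx, n_users))) / np.sqrt(2)
symbols = (rng.choice([-1.0, 1.0], n_users)
           + 1j * rng.choice([-1.0, 1.0], n_users)) / np.sqrt(2)

noise_var = 10.0 ** (-snr_db / 10.0)
noise = np.sqrt(noise_var / 2) * (rng.standard_normal(n_rx)
                                  + 1j * rng.standard_normal(n_rx))
y = H @ symbols + noise

# Linear MMSE filter: W = (H^H H + sigma^2 I)^{-1} H^H.
W = np.linalg.solve(H.conj().T @ H + noise_var * np.eye(n_users), H.conj().T)
estimate = W @ y

# Per-user hard decision back onto the QPSK alphabet.
detected = (np.sign(estimate.real) + 1j * np.sign(estimate.imag)) / np.sqrt(2)
print("symbol errors:", int(np.sum(detected != symbols)))
```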

Details

International Journal of Pervasive Computing and Communications, vol. 19 no. 2
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 12 August 2020

Ngoc Le Chau, Ngoc Thoai Tran and Thanh-Phong Dao

Compliant mechanisms have been receiving great interest in precision engineering. However, analytical methods for their behavior analysis remain a challenge because there…

Abstract

Purpose

Compliant mechanisms have been receiving great interest in precision engineering. However, analytical methods for their behavior analysis remain a challenge because of unclear kinematic behaviors. In particular, design optimization for compliant mechanisms becomes an important task as problems grow more and more complex. Therefore, the purpose of this study is to design a new hybrid computational method. The hybridized method is an integration of statistics, numerical methods, computational intelligence and optimization.

Design/methodology/approach

A tensural bistable compliant mechanism is used to clarify the efficiency of the developed method. A pseudo model of the mechanism is designed and simulations are planned to retrieve the data sets. The main contributions of the design variables are analyzed by analysis of variance to initialize several new populations. Next, the objective functions are transformed into desirabilities, which are the inputs of the fuzzy inference system (FIS). The FIS modeling is aimed at initializing a single-combined objective function (SCOF). Subsequently, an adaptive neuro-fuzzy inference system is developed to model the relation between the main geometrical parameters and the SCOF. Finally, the SCOF is maximized by the lightning attachment procedure optimization algorithm to yield a global optimum.
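
To make the single-combined objective function concrete, here is a minimal sketch of the desirability idea under illustrative assumptions: two hypothetical responses of a bistable mechanism (stroke to maximize, stress to minimize) are mapped onto [0, 1] desirabilities and fused by a geometric mean. The paper's actual SCOF comes from a fuzzy inference system; this linear desirability mapping is a simplified stand-in.

```python
import numpy as np

def desirability(y, lo, hi):
    """Linear 'larger is better' desirability: 0 at lo, 1 at hi."""
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

def scof(stroke_mm, stress_mpa):
    """Single-combined objective from two hypothetical responses."""
    d_stroke = desirability(stroke_mm, 0.0, 2.0)            # maximise stroke
    d_stress = 1.0 - desirability(stress_mpa, 50.0, 200.0)  # minimise stress
    return float(np.sqrt(d_stroke * d_stress))              # geometric-mean fusion

# Candidate designs: the optimizer (LAPO in the paper) would maximise scof().
print(scof(stroke_mm=1.6, stress_mpa=90.0))
print(scof(stroke_mm=1.9, stress_mpa=160.0))
```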

Findings

The results prove that the present method is better than a combination of fuzzy logic and the Taguchi method. Non-parametric tests confirm that the present method is also superior to other algorithms. The proposed computational method is a useful systematic method that can be applied to compliant mechanisms with complex structures and multiple-constrained optimization problems.

Originality/value

The novelty of this work is a new approach combining statistical techniques, numerical methods, computational intelligence and a metaheuristic algorithm. The method is capable of solving multi-objective optimization problems for compliant mechanisms with nonlinear complexity.

Details

Engineering Computations, vol. 38 no. 4
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 8 May 2018

Zeeshan Ahmad, Yaoliang Song and Qiang Du

Direction-of-arrival (DOA) estimation for wideband sources has attracted growing interest in the recent decade because wideband sources are incorporated in many real-world…

Abstract

Purpose

Direction-of-arrival (DOA) estimation for wideband sources has attracted growing interest in the recent decade because wideband sources are incorporated in many real-world applications such as communication systems, radar, sonar and acoustics. One way to estimate the DOAs of wideband signals is to decompose them into narrowband signals using the discrete Fourier transform (DFT) and then apply well-established narrowband algorithms to each signal. Afterwards, the results are averaged to yield the final DOAs. These techniques require scanning the full band of the wideband sources, ultimately degrading resolution and increasing complexity. This paper aims to propose a new DOA estimation methodology to solve these problems.

Design/methodology/approach

The new DOA estimation methodology is based on the incoherent signal subspace method (ISSM). The proposed approach presents a criterion to select a single sub-band from the decomposed narrowband signals instead of scanning the whole signal spectrum. The DOAs of the wideband signals are then estimated using the selected sub-band; hence the name single sub-band ISSM (SSB-ISSM).
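
For orientation, the sketch below shows the generic incoherent-ISSM skeleton that SSB-ISSM builds on: block the array data, decompose each block into narrowband bins with a DFT, and run narrowband MUSIC on one chosen bin rather than averaging over the whole spectrum. The array geometry, toy data and selected bin index are assumptions for illustration; the paper's actual sub-band selection criterion is not reproduced here.

```python
import numpy as np

def music_spectrum(R, n_sources, pos_wavelengths, angles_deg):
    """Narrowband MUSIC pseudo-spectrum from a spatial covariance matrix R."""
    _, vecs = np.linalg.eigh(R)                  # eigenvalues ascending
    En = vecs[:, : R.shape[0] - n_sources]       # noise subspace
    spec = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * pos_wavelengths * np.sin(theta))
        spec.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(spec)

# Toy data: 8-sensor half-wavelength ULA, white noise standing in for signals.
rng = np.random.default_rng(1)
n_sensors, n_snapshots, n_fft = 8, 64, 32
x = rng.standard_normal((n_sensors, n_snapshots * n_fft))

# Decompose into narrowband bins with a blockwise DFT, then use ONE bin
# (the single sub-band idea) instead of averaging over the whole spectrum.
X = np.fft.fft(x.reshape(n_sensors, n_snapshots, n_fft), axis=2)
bin_k = 5                                        # assumed selected sub-band
Xk = X[:, :, bin_k]
Rk = Xk @ Xk.conj().T / n_snapshots              # sub-band covariance

angles = np.arange(-90, 91)
spec = music_spectrum(Rk, n_sources=2,
                      pos_wavelengths=0.5 * np.arange(n_sensors),
                      angles_deg=angles)
print("spectrum peak at", angles[np.argmax(spec)], "degrees")
```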

Findings

The computational complexity of the proposed method is much lower than that of traditional DFT-based methods. The effectiveness and advantages of the proposed methodology are theoretically investigated, and computational complexity is also addressed.

Originality/value

To verify the theoretical analysis, computer simulations are implemented, and comparisons with other algorithms are made. The simulation results show that the proposed method achieves better performance and accurately estimates the DOAs of wideband sources.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 37 no. 3
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 12 August 2021

Wasiq Ullah, Faisal Khan, Muhammad Umair and Bakhtiar Khan

This paper aims to review analytical methodologies, i.e. lumped parameter magnetic equivalent circuit (LPMEC), magnetic co-energy (MCE), Laplace equations (LE), Maxwell stress…

Abstract

Purpose

This paper aims to review analytical methodologies, i.e. the lumped parameter magnetic equivalent circuit (LPMEC), magnetic co-energy (MCE), Laplace equations (LE), the Maxwell stress tensor (MST) method and sub-domain modelling, for the design of a segmented PM (SPM) consequent-pole flux switching machine (SPMCPFSM). Electric machines, especially flux switching machines (FSMs), are accurately modeled using numerical finite element analysis (FEA) tools; however, the expensive hardware setup, repeated iterative process, complex stator design and non-linear permanent magnet (PM) behavior increase computational time and complexity.

Design/methodology/approach

This paper reviews various alternative analytical methodologies for electromagnetic performance calculation. Among them, the no-load phase flux linkage is computed using the LPMEC, the cogging torque using magnetic co-energy, the radial and tangential magnetic flux density (MFD) components using LE, and the instantaneous torque using the MST method. The sub-domain model solves the complete electromagnetic performance, i.e. MFD and torque behaviour.
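
As a reminder of what the MST step computes: the instantaneous torque follows from integrating the product of the radial and tangential air-gap flux density components around the air gap, T = (L_stk r² / μ0) ∮ B_r B_t dθ. The sketch below evaluates this numerically for placeholder field waveforms and dimensions; in the paper the components would come from the LE/sub-domain field solution.

```python
import numpy as np

MU0 = 4e-7 * np.pi            # vacuum permeability (H/m)
L_STK, R_GAP = 0.05, 0.03     # placeholder stack length and air-gap radius (m)

theta = np.linspace(0.0, 2.0 * np.pi, 3600, endpoint=False)
dtheta = theta[1] - theta[0]

# Placeholder air-gap flux density components (T); in the paper these come
# from the Laplace-equation / sub-domain field solution.
B_r = 0.9 * np.cos(4 * theta)
B_t = 0.2 * np.cos(4 * theta - 0.5)

# Maxwell stress tensor torque: T = (L r^2 / mu0) * integral of Br*Bt dtheta.
torque = L_STK * R_GAP**2 / MU0 * np.sum(B_r * B_t) * dtheta
print(f"instantaneous torque ~ {torque:.1f} N.m")
```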

Findings

The reviewed analytical methodologies are validated against the widely accepted FEA using the JMAG commercial FEA package v18.1 and show good agreement. The comparison reveals that the sub-domain model not only removes the need for multiple validation techniques but also provides better results by accounting for the influence of all machine parts, which helps reduce computational complexity, computational time and drive storage, with an overall accuracy of ∼99%. The authors therefore recommend the sub-domain model for the initial design stage of the SPMCPFSM when high accuracy and low computational cost are the primary requirements.

Practical implications

The model is developed for high-speed brushless AC applications.

Originality/value

The SPMCPFSM enhances electromagnetic performance owing to its segmented PM configuration, which distinguishes it from conventional designs. Moreover, the developed analytical methodologies for the SPMCPFSM reduce computational time compared with FEA.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 40 no. 3
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 22 June 2010

Imam Machdi, Toshiyuki Amagasa and Hiroyuki Kitagawa

The purpose of this paper is to propose general parallelism techniques for holistic twig join algorithms to process queries against Extensible Markup Language (XML) databases on a…

Abstract

Purpose

The purpose of this paper is to propose general parallelism techniques for holistic twig join algorithms to process queries against Extensible Markup Language (XML) databases on a multi‐core system.

Design/methodology/approach

The parallelism techniques comprised data and task parallelism. For data parallelism, the paper adopted stream-based partitioning for XML to partition XML data as the basis of parallelism on multiple CPU cores. The XML data partitioning was performed at two levels. The first level created buckets to provide data independence and balance loads among CPU cores; each bucket was assigned to a CPU core. Within each bucket, the second level of XML data partitioning created finer partitions to provide finer-grained parallelism. Each CPU core performed the holistic twig join algorithm on its own finer partitions in parallel with the other CPU cores. For task parallelism, the holistic twig join algorithm was decomposed into two main tasks, which were pipelined to create parallelism. The first task adopted the data parallelism technique, and its outputs were transferred to the second task periodically. Since data transfers incur overheads, the size of each data transfer needed to be estimated carefully to achieve optimal performance.
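
The pipelined two-task structure can be sketched generically as below, with Python's multiprocessing standing in for the multi-core runtime. The per-bucket work and the merge step are placeholders for the holistic twig join and its post-processing; the batched queue transfers mirror the paper's point that transfer sizes carry overhead and need tuning.

```python
import multiprocessing as mp

def task1_worker(bucket, out_queue, batch_size=4):
    """Task 1: process one bucket (data parallelism), emit results in batches.

    Each transfer to task 2 has overhead, so batch_size should be tuned."""
    batch = []
    for item in bucket:
        batch.append(item * item)          # stand-in for per-partition join work
        if len(batch) >= batch_size:
            out_queue.put(batch)
            batch = []
    if batch:
        out_queue.put(batch)
    out_queue.put(None)                    # signal: this worker is done

def task2_consumer(out_queue, n_workers):
    """Task 2: runs in parallel with task 1, merging batches as they arrive."""
    done, total = 0, 0
    while done < n_workers:
        batch = out_queue.get()
        if batch is None:
            done += 1
        else:
            total += sum(batch)            # stand-in for join output processing
    print("merged result:", total)

if __name__ == "__main__":
    data = list(range(100))
    n_cores = 4
    # First-level partitioning: one bucket per CPU core.
    buckets = [data[i::n_cores] for i in range(n_cores)]
    q = mp.Queue()
    workers = [mp.Process(target=task1_worker, args=(b, q)) for b in buckets]
    for w in workers:
        w.start()
    task2_consumer(q, n_cores)
    for w in workers:
        w.join()
```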

Findings

The data and task parallelism techniques contribute to good performance, especially for queries having complex structures and/or higher query selectivity. The performance of data parallelism can be further improved by task parallelism. Significant performance improvement is attained for queries with higher selectivity because more of the output computation in the second task is performed in parallel with the first task.

Research limitations/implications

The proposed parallelism techniques primarily deal with executing a single long-running query for intra-query parallelism, partitioning XML data on-the-fly and allocating partitions to CPU cores statically. During parallel execution, it is assumed that no dynamic XML data updates occur.

Practical implications

The effectiveness of the proposed parallel holistic twig joins relies fundamentally on some system parameter values that can be obtained from a benchmark of the system platform.

Originality/value

The paper proposes novel techniques that increase parallelism by combining data and task parallelism to achieve high performance. To the best of the authors' knowledge, this is the first paper to parallelize holistic twig join algorithms on a multi-core system.

Details

International Journal of Web Information Systems, vol. 6 no. 2
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 7 September 2015

M. V. A. Raju Bahubalendruni, Bibhuti Bhusan Biswal, Manish Kumar and Radharani Nayak

The purpose of this paper is to find out the significant influence of assembly predicate consideration on optimal assembly sequence generation (ASG) in terms of search space…

Abstract

Purpose

The purpose of this paper is to determine the significant influence of assembly predicate consideration on optimal assembly sequence generation (ASG) in terms of search space, computational time and the possibility of producing practically infeasible assembly sequences. An appropriate assembly sequence results in minimal lead time and low assembly cost. ASG is a complex combinatorial optimisation problem that deals with several assembly predicates to arrive at an optimal assembly sequence. The consideration of each assembly predicate strongly influences the search space and thereby the computational time needed to obtain a valid assembly sequence. Often, ignoring an assembly predicate leads to an inappropriate assembly sequence that may not be physically possible; at other times, a predicate assumption drastically enlarges the search space and the computational time.

Design/methodology/approach

The influence of assuming and considering different assembly predicates on optimal assembly sequence generation has been clearly illustrated with examples using the part concatenation method.
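
A toy illustration of the effect, assuming a hypothetical four-part assembly: enumerating sequences by part concatenation and then applying precedence predicates shows how strongly each predicate prunes the search space. The parts and constraints below are invented for illustration, not taken from the paper.

```python
from itertools import permutations

parts = ["base", "shaft", "bearing", "cap"]

# Hypothetical precedence predicates: (a, b) means a must be assembled before b.
precedence = {("base", "shaft"), ("shaft", "bearing"), ("bearing", "cap")}

def feasible(seq, predicates):
    order = {p: i for i, p in enumerate(seq)}
    return all(order[a] < order[b] for a, b in predicates)

all_seqs = list(permutations(parts))
valid = [s for s in all_seqs if feasible(s, precedence)]
print(f"search space without predicates: {len(all_seqs)} sequences")
print(f"after precedence predicates:     {len(valid)} sequences")  # 4! -> 1 here
```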

Findings

The presence of physical attachments and the type of assembly liaisons decide which assembly predicates should be considered to reduce the complexity of the problem formulation and the overall computational time.

Originality/value

Most of the time, assembly predicates are ignored to reduce computational time, without considering their impact on the assembly sequence problem irrespective of assembly attributes. The current research proposes directions for predicate consideration based on the assembly configuration for effective and efficient ASG.

Article
Publication date: 7 July 2020

Wasiq Ullah, Faisal Khan and Muhammad Umair

The purpose of this paper is to investigate an alternative simplified analytical approach for the design of electric machines. The numerical finite element method (FEM) is a…

Abstract

Purpose

The purpose of this paper is to investigate an alternative simplified analytical approach for the design of electric machines. The numerical finite element method (FEM) is a powerful tool for accurate modelling and electromagnetic performance analysis of electric machines. However, computational complexity, magnetic saturation, complex stator structure and time consumption compel researchers to adopt alternative analytical models for the initial design of electric machines, especially flux switching machines (FSMs).

Design/methodology/approach

In this paper, a simplified lumped parameter magnetic equivalent circuit (LPMEC) model is presented for the newly developed segmented PM consequent-pole flux switching machine (SPMCPFSM). The LPMEC model accounts for the influence of all machine parts while modelling only a quarter of the machine, which helps reduce computational complexity, computational time and drive storage without affecting overall accuracy. Furthermore, the inductance calculation is performed in the rotor and stator frames of reference for accurate estimation of the self-inductance, mutual inductance and dq-axis inductance profiles using the Park transformation.
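
The Park transformation step can be shown directly. The sketch below uses the power-invariant form of the transform (conventions vary) to map balanced three-phase quantities from the stationary abc frame to the rotating dq0 frame, which is how dq-axis inductance profiles are typically extracted; the numbers are illustrative, not machine data from the paper.

```python
import numpy as np

def park(abc, theta):
    """Power-invariant Park transform: stationary abc -> rotating dq0."""
    k = np.sqrt(2.0 / 3.0)
    T = k * np.array([
        [np.cos(theta),  np.cos(theta - 2*np.pi/3),  np.cos(theta + 2*np.pi/3)],
        [-np.sin(theta), -np.sin(theta - 2*np.pi/3), -np.sin(theta + 2*np.pi/3)],
        [1/np.sqrt(2),   1/np.sqrt(2),               1/np.sqrt(2)],
    ])
    return T @ abc

# Balanced three-phase flux linkages sampled at rotor angle theta: the whole
# (scaled) amplitude lands on the d-axis; q- and zero-components vanish.
theta = 0.7
flux_abc = np.array([np.cos(theta),
                     np.cos(theta - 2*np.pi/3),
                     np.cos(theta + 2*np.pi/3)])
print(park(flux_abc, theta))   # ~ [sqrt(3/2), 0, 0]
```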

Findings

The developed LPMEC model is validated against corresponding FEA using the JMAG commercial FEA package v18.1, showing good agreement with an accuracy of ∼98.23%, and the Park transformation precisely estimates the inductance profile in the rotor and stator frames of reference.

Practical implications

The model is developed for high-speed brushless AC applications.

Originality/value

The proposed SPMCPFSM enhances electromagnetic performance owing to its partitioned PM configuration, which distinguishes it from conventional designs. Moreover, the developed LPMEC model reduces computational time by solving only a quarter of the machine.

Article
Publication date: 2 May 2017

Kannan S. and Somasundaram K.

Due to the large volume of non-uniform transactions per day, money laundering detection (MLD) is a time-consuming and difficult process. The major purpose of the proposed…

Abstract

Purpose

Due to the large volume of non-uniform transactions per day, money laundering detection (MLD) is a time-consuming and difficult process. The major purpose of the proposed auto-regressive (AR) outlier-based MLD (AROMLD) is to reduce the time consumed in handling large-sized non-uniform transactions.

Design/methodology/approach

The AR-based outlier design produces consistent, asymptotically distributed results that enhance demand-forecasting abilities. Besides, the inter-quartile range (IQR) formulations proposed in this paper support detailed analysis of time-series data pairs.
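
A minimal sketch of the AR-plus-IQR combination on synthetic data: fit an autoregressive model by least squares, then flag time points whose residuals fall outside the inter-quartile fences. The AR order and the 1.5 × IQR fence are conventional placeholder choices, not the AROMLD parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic daily transaction totals with a few injected anomalies.
n = 300
series = 100 + np.cumsum(rng.normal(0, 1, n))
series[[50, 180, 240]] += 40                     # injected "suspicious" spikes

# Fit an AR(p) model by least squares on lagged values.
p = 3
X = np.column_stack([series[p - k - 1 : n - k - 1] for k in range(p)])
A = np.column_stack([np.ones(len(X)), X])        # intercept + p lags
y = series[p:]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
residuals = y - A @ coef

# IQR fences on the residuals mark the outliers.
q1, q3 = np.percentile(residuals, [25, 75])
iqr = q3 - q1
mask = (residuals < q1 - 1.5 * iqr) | (residuals > q3 + 1.5 * iqr)
print("flagged time indices:", np.where(mask)[0] + p)
```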

Findings

The high dimensionality of the prediction task and the difficulty of capturing the relationships and differences between data pairs make time-series mining a complex task. The presence of domain invariance in time-series mining motivates the regressive formulation for outlier detection. The deep analysis of the time-varying process and the demands of forecasting combine the AR and IQR formulations for effective outlier detection.

Research limitations/implications

The present research focuses on detecting outliers in past financial transactions using the AR model. Predicting the possibility of an outlier in future transactions remains a major issue.

Originality/value

Without prior segmentation, ML detection suffers from high dimensionality. Besides, the absence of a boundary isolating normal from suspicious transactions imposes limitations. The lack of deep analysis and the high time consumption are overcome by using the regression formulation.

Details

Journal of Money Laundering Control, vol. 20 no. 2
Type: Research Article
ISSN: 1368-5201

Open Access
Article
Publication date: 17 August 2020

Slavcho Shtrakov

In this paper we study a class of complexity measures induced by a new data structure for representing k-valued functions (operations), called the minor decision diagram. When…

Abstract

In this paper we study a class of complexity measures induced by a new data structure for representing k-valued functions (operations), called the minor decision diagram. When assigning values to some variables in a function, the resulting functions are called subfunctions; when identifying some variables, the resulting functions are called minors. The sets of essential variables in subfunctions of f are called separable in f.

We examine the maximal separable subsets of variables and their conjugates, introduced in the paper, proving that each such set has at least one conjugate. The essential arity gap gap(f) of a function f is the minimal number of essential variables in f that become fictive when identifying distinct essential variables in f. We also investigate separable sets of variables in functions with non-trivial arity gap. This allows us to solve several important algebraic, computational and combinatorial problems about finite-valued functions.
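
For readers new to the terminology, the sketch below contrasts the two constructions on an illustrative 3-variable Boolean (k = 2) function: a subfunction fixes a variable to a constant, while a minor identifies two variables. The example function is invented for illustration, not taken from the paper.

```python
from itertools import product

def f(x1, x2, x3):
    # Illustrative 3-variable Boolean function.
    return (x1 and x2) or x3

# Subfunction: assign a value to a variable, e.g. x3 := 0.
sub = lambda x1, x2: f(x1, x2, 0)        # reduces to x1 AND x2

# Minor: identify two variables, e.g. x2 := x1.
minor = lambda x1, x3: f(x1, x1, x3)     # reduces to x1 OR x3

for a, b in product([0, 1], repeat=2):
    print(a, b, "sub:", int(sub(a, b)), "minor:", int(minor(a, b)))
```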

Details

Applied Computing and Informatics, vol. 17 no. 1
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 25 July 2019

Juliana Padilha Leitzke and Hubert Zangl

This paper aims to present an approach based on electrical impedance tomography spectroscopy (EITS) for the determination of water and ice fraction in low-power applications such…

Abstract

Purpose

This paper aims to present an approach based on electrical impedance tomography spectroscopy (EITS) for the determination of water and ice fraction in low-power applications such as autarkic wireless sensors, which require a low computational complexity reconstruction approach and a low number of electrodes. This paper also investigates how the electrode design can affect the reconstruction results in tomography.

Design/methodology/approach

EITS is performed by using a non-iterative method called optimal first order approximation. In addition to that, a planar electrode geometry is used instead of the traditional circular electrode geometry. Such a structure allows the system to identify materials placed on the region above the sensor, which do not need to be confined in a pipe. For the optimization, the mean squared error (MSE) between the reference images and the obtained reconstructed images was calculated.
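
The computational shape of a non-iterative reconstruction is worth seeing: a regularized reconstruction matrix is precomputed once offline, so the online step per frame is a single matrix-vector product, which suits autarkic low-power sensors. The sketch below uses a random toy sensitivity matrix and Tikhonov regularization as a generic linearized-EIT stand-in for the paper's optimal first order approximation; the MSE against the reference image is the optimization criterion mentioned above.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy linearized model: m measurements from a 4-electrode system, n pixels.
m, n = 12, 100                      # few electrodes -> few independent measurements
J = rng.standard_normal((m, n))     # stand-in sensitivity (Jacobian) matrix
x_true = np.zeros(n)
x_true[40:45] = 1.0                 # assumed "ice" region in a water background

b = J @ x_true + 0.01 * rng.standard_normal(m)

# Offline: precompute the Tikhonov-regularized reconstruction matrix once.
lam = 0.1
R = np.linalg.solve(J.T @ J + lam * np.eye(n), J.T)

# Online: one matrix-vector product per frame -> low computational complexity.
x_rec = R @ b

mse = np.mean((x_rec - x_true) ** 2)  # the MSE criterion used for optimization
print(f"reconstruction MSE: {mse:.4f}")
```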

Findings

The authors demonstrate that, even with as few as four electrodes and a low-complexity reconstruction algorithm, a reasonable reconstruction of water and ice fractions is possible. Furthermore, it is shown that an optimal distribution of the sensor electrodes can help reduce the MSE without any cost in terms of computational complexity or power consumption.

Originality/value

This paper shows through simulations that the reconstruction of ice and water mixtures is possible and that electrode design is a topic of great importance, as it can significantly affect the reconstruction results.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 38 no. 5
Type: Research Article
ISSN: 0332-1649
