Search results

1 – 10 of over 14000
Article
Publication date: 13 August 2021

Manju V.M. and Ganesh R.S.

Abstract

Purpose

Multiple-input multiple-output (MIMO), combined with multi-user massive MIMO, is a well-known approach to achieving high spectral efficiency in wideband systems, and a central receiver task is detecting the transmitted MIMO signals. Increasing data rates, with multiple antennas and multiple users sharing the communication channel simultaneously, lead to higher capacity requirements and increased complexity. Thus, different detection algorithms have been developed for massive MIMO.
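
As an illustration of what a detection step computes, a minimal sketch of the classic linear minimum mean square error (MMSE) detector, a common baseline in massive MIMO comparisons, is given below; the survey itself covers many detectors, and the sizes and names here are illustrative only.

```python
import numpy as np

# Hedged sketch: a linear MMSE detector for y = H x + n, with H the
# (Nr x Nt) channel matrix and sigma2 the noise power. This is a common
# baseline, not a method singled out by the survey.

def mmse_detect(H, y, sigma2):
    """Linear MMSE estimate: x_hat = (H^H H + sigma2 I)^-1 H^H y."""
    Nt = H.shape[1]
    G = H.conj().T @ H + sigma2 * np.eye(Nt)
    return np.linalg.solve(G, H.conj().T @ y)   # soft symbol estimates

# Toy usage: 4 users, 16 base-station antennas, QPSK symbols.
rng = np.random.default_rng(0)
H = (rng.standard_normal((16, 4)) + 1j * rng.standard_normal((16, 4))) / np.sqrt(2)
x = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=4) / np.sqrt(2)
y = H @ x + 0.1 * (rng.standard_normal(16) + 1j * rng.standard_normal(16))
print(np.round(mmse_detect(H, y, 0.01), 2))     # hard decisions would follow
```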

Design/methodology/approach

This paper analyzes the literature on detection algorithms and techniques for MIMO detectors. It reviews several research papers and highlights the significance of each detection method.

Findings

This paper provides a performance analysis of MIMO detectors and reports the best value achieved for each performance measure. Finally, it outlines open research issues in massive MIMO detection that future researchers can pursue.

Originality/value

This paper presents a detailed review of massive MIMO detection across different algorithms and techniques. The survey focuses mainly on the types of channels used in MIMO detection and on the number of antennas used to transmit signals from source to destination and vice versa. The performance measures and the best performance of each detector are described.

Details

International Journal of Pervasive Computing and Communications, vol. 19 no. 2
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 5 June 2019

Gang Li, Shuo Jia and Hong-Nan Li

Abstract

Purpose

The purpose of this paper is to present a comprehensive theoretical efficiency evaluation of a nonlinear analysis method based on the Woodbury formula, considering both the efficiency of solving the linear equations in each incremental step and the choice of iterative algorithm.

Design/methodology/approach

First, this study employs time complexity theory to quantitatively compare the efficiency of the Woodbury formula with that of the LDLT factorization method, a commonly used method for solving linear equations. Moreover, the performance of the iterative algorithm also significantly affects the efficiency of the analysis. Thus, the three-point method, with a convergence order of eight, is employed to solve the equilibrium equations of the nonlinear analysis method based on the Woodbury formula, aiming to improve on the iterative performance of the Newton–Raphson (N–R) method.
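
To make the complexity argument concrete, the sketch below shows the Woodbury identity reusing a solver for the elastic matrix A while forming only a small m × m system for a rank-m inelastic update; the helper names are hypothetical and this is a simplified illustration, not the paper's implementation.

```python
import numpy as np

# (A + U C V)^{-1} b = A^{-1} b - A^{-1} U (C^{-1} + V A^{-1} U)^{-1} V A^{-1} b
# When m << n, only m extra back-substitutions and one m x m solve are needed,
# instead of refactorizing the full n x n matrix at each incremental step.

def woodbury_solve(solve_A, U, C, V, b):
    """Solve (A + U C V) x = b, where solve_A applies A^{-1} (e.g. a cached LDLT)."""
    y = solve_A(b)                 # A^{-1} b : one back-substitution
    Z = solve_A(U)                 # A^{-1} U : m back-substitutions
    S = np.linalg.inv(C) + V @ Z   # small m x m "capacitance" matrix
    return y - Z @ np.linalg.solve(S, V @ y)

# Toy check against a direct solve (solve_A refactorizes here for brevity;
# in practice the factorization of A would be cached and reused).
rng = np.random.default_rng(1)
n, m = 200, 5
A = 4 * np.eye(n) + 0.01 * rng.standard_normal((n, n))
U = 0.1 * rng.standard_normal((n, m))
V = 0.1 * rng.standard_normal((m, n))
C, b = np.eye(m), rng.standard_normal(n)
x = woodbury_solve(lambda r: np.linalg.solve(A, r), U, C, V, b)
assert np.allclose((A + U @ C @ V) @ x, b)
```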

Findings

First, the results show that the asymptotic time complexity of the Woodbury formula is much lower than that of the LDLT factorization method when the number of inelastic degrees of freedom (IDOFs) is much smaller than the number of DOFs, indicating that the Woodbury formula is more efficient for local nonlinear problems. Moreover, the time complexity comparison of the N–R method and the three-point method indicates that the three-point method is more efficient for local nonlinear problems with large-scale structures or a larger ratio of the number of IDOFs to the number of DOFs.

Originality/value

This study theoretically evaluates the efficiency of the nonlinear analysis method based on the Woodbury formula and quantitatively identifies the conditions under which each of the compared methods applies. The comparison provides a theoretical basis for selecting algorithms for different nonlinear problems.

Details

Engineering Computations, vol. 36 no. 4
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 7 August 2017

Daniel Mejia, Diego A. Acosta and Oscar Ruiz-Salguero

Abstract

Purpose

Mesh parameterization is central to reverse engineering, tool path planning and other applications. This work synthesizes parameterizations with unconstrained borders and overall minimal combined angle and area distortion. This study aims to assess the sensitivity of the minimized distortion to the weighting of area and angle distortion.

Design/methodology/approach

A mesh parameterization that does not constrain borders is implemented by performing: isometric maps of each triangle to the plane Z = 0; an affine transform within the plane Z = 0 to glue the triangles back together; and a Levenberg–Marquardt minimization of a nonlinear penalty function F over the parameters of the first two transformations, discouraging triangle flips and angle or area distortion. F is a convex weighted combination of area distortion (weight: α with 0 ≤ α ≤ 1) and angle distortion (weight: 1 − α).
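
A minimal sketch of how such a convex weighted penalty can be handed to a Levenberg–Marquardt solver follows; the per-triangle residuals are toy stand-ins under stated assumptions, not the paper's actual distortion terms.

```python
import numpy as np
from scipy.optimize import least_squares  # method="lm" is Levenberg-Marquardt

def residuals(uv_flat, alpha, tris, ref_areas, ref_angles):
    """Stack area and angle residuals with convex weights alpha and 1 - alpha.

    Levenberg-Marquardt minimizes a sum of squares, so the convex weights
    enter as square roots. The residuals below are illustrative stand-ins.
    """
    uv = uv_flat.reshape(-1, 2)
    r_area, r_angle = [], []
    for (i, j, k), a0, t0 in zip(tris, ref_areas, ref_angles):
        e1, e2 = uv[j] - uv[i], uv[k] - uv[i]
        cross = e1[0] * e2[1] - e1[1] * e2[0]
        r_area.append(0.5 * cross - a0)                       # signed area penalizes flips
        r_angle.append(np.arctan2(abs(cross), e1 @ e2) - t0)  # angle at vertex i
    return np.concatenate([np.sqrt(alpha) * np.array(r_area),
                           np.sqrt(1.0 - alpha) * np.array(r_angle)])

# Over a full mesh one would call, for some initial layout uv0:
# least_squares(residuals, uv0.ravel(), method="lm",
#               args=(alpha, tris, ref_areas, ref_angles))
```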

Findings

The parameterization algorithm presented here has linear complexity [𝒪(n), n = number of mesh vertices]. The sensitivity analysis permits fine-tuning of the weight parameter, which achieves overall bijective parameterizations in the studied cases; no theoretical guarantee of bijectivity is given in this manuscript. The algorithm has equal or superior performance compared with the ABF, LSCM and ARAP algorithms on the Ball, Cow and Gargoyle data sets. Additional correct results of this algorithm alone are presented for the Foot, Fandisk and Sliced-Glove data sets.

Originality/value

The devised free-boundary nonlinear mesh parameterization method does not require a valid initial parameterization and produces locally bijective parameterizations in all of our tests. A formal sensitivity analysis shows that the resulting parameterization is more stable, i.e. the UV mapping changes much less, when the algorithm prioritizes angle preservation over area preservation. The algorithm presented in this study belongs to the class that parameterizes meshes with holes. This study also presents a complexity analysis comparing the present algorithm with 12 competing ones.

Details

Engineering Computations, vol. 34 no. 6
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 29 April 2021

Mohamed Haddache, Allel Hadjali and Hamid Azzoune

Abstract

Purpose

The study of skyline queries has received considerable attention from database researchers since the end of the 2000s. Skyline queries are an appropriate tool for helping users make intelligent decisions over multidimensional data when different, and often contradictory, criteria must be taken into account. Based on the concept of Pareto dominance, the skyline process extracts the most interesting objects (those not dominated in the sense of Pareto) from a set of data. Skyline computation methods often yield a result set so large that it is uninformative for end users and hard to exploit. The purpose of this paper is to tackle this problem, known as the large-size skyline problem, and to propose a solution based on an appropriate refining process.
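
For reference, a minimal sketch of the Pareto-dominance test and the naive skyline filter it induces is shown below (the paper's contribution is refining this result set, not computing it); it assumes lower values are better on every dimension.

```python
# Pareto dominance: a dominates b iff a is no worse on every dimension
# and strictly better on at least one. The naive filter is O(n^2).

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Toy usage: hotels as (price, distance_to_beach); lower is better for both.
hotels = [(50, 8), (80, 2), (60, 5), (90, 9), (60, 4)]
print(skyline(hotels))  # (60, 5) is dominated by (60, 4); (90, 9) by (50, 8)
```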

Design/methodology/approach

The problem of skyline refinement is formalized in the fuzzy formal concept analysis setting. Then, an ideal fuzzy formal concept is computed with respect to particular defined criteria. By leveraging the elements of this ideal concept, the size of the computed skyline can be reduced.

Findings

An appropriate and rational solution to the problem of interest is discussed. A tool, named SkyRef, is then developed, and extensive experiments are conducted with it on both synthetic and real datasets.

Research limitations/implications

The authors have conducted experiments on synthetic and some real datasets to show the effectiveness of the proposed approaches. However, thorough experiments on large-scale real datasets remain desirable to characterize the tool's performance and execution time.

Practical implications

The developed tool, SkyRef, has potential applications in many domains that require decision-making or personalized recommendation and where the size of the skyline must be reduced. In particular, SkyRef can be used in several real-world areas such as economics, security, medicine and services.

Social implications

This work is applicable in any domain that requires decision-making, such as hotel finders, restaurant recommenders, candidate recruitment, etc.

Originality/value

This study combines two research fields: artificial intelligence (formal concept analysis) and databases (skyline queries). The key elements of the proposed solution to the skyline refinement problem are borrowed from fuzzy formal concept analysis, which makes the solution semantically clearer and more rational. This study also opens the door to using formal concept analysis and its extensions to solve other issues related to skyline queries, such as relaxation.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 14 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 15 June 2010

Zhendong Liu, Hengwu Li and Daming Zhu

Abstract

Purpose

The purpose of this paper is to design an algorithm for predicting RNA secondary structure whose time and space complexity are reduced compared with other relevant algorithms.

Design/methodology/approach

Dynamic programming algorithms require considerable time and space, making it very difficult to predict RNA secondary structures of more than 1,000 bases. Algorithms restricted to nested RNA secondary structures cannot predict structures containing pseudoknots, so a fast algorithm for predicting RNA secondary structures with pseudoknots is urgently needed. Based on the greedy principle, a model is designed to solve the problem.
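
A minimal sketch of a greedy pairing scheme in this spirit follows; the scoring table and best-first selection are illustrative assumptions rather than the authors' exact model. Because nesting is not enforced, crossing (pseudoknotted) pairs can be accepted.

```python
# Hedged sketch: greedily accept the highest-scoring base pairs, each base
# pairing at most once. Crossing pairs are allowed, so pseudoknots can form.

PAIR_SCORE = {("G", "C"): 3, ("C", "G"): 3, ("A", "U"): 2,
              ("U", "A"): 2, ("G", "U"): 1, ("U", "G"): 1}

def greedy_pairs(seq, min_loop=3):
    candidates = sorted(
        ((PAIR_SCORE[(seq[i], seq[j])], i, j)
         for i in range(len(seq)) for j in range(i + 1 + min_loop, len(seq))
         if (seq[i], seq[j]) in PAIR_SCORE),
        reverse=True)                       # best-scoring pairs first
    used, pairs = set(), []
    for score, i, j in candidates:
        if i not in used and j not in used:
            used.update((i, j))
            pairs.append((i, j))
    return sorted(pairs)

print(greedy_pairs("GGGAAAUCCC"))  # -> [(0, 7), (1, 8), (2, 9)], a small helix
```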

Findings

A greedy algorithm is presented to predict RNA secondary structure.

Research limitations/implications

The problem of predicting RNA secondary structure including pseudoknots is NP-complete.

Practical implications

The paper presents a valuable method for predicting RNA secondary structure.

Originality/value

The new algorithm needs O(n³) time and O(n) space; the experimental results indicate that the algorithm has good accuracy and sensitivity.

Details

Kybernetes, vol. 39 no. 6
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 22 June 2010

Imam Machdi, Toshiyuki Amagasa and Hiroyuki Kitagawa

Abstract

Purpose

The purpose of this paper is to propose general parallelism techniques for holistic twig join algorithms to process queries against Extensible Markup Language (XML) databases on a multi‐core system.

Design/methodology/approach

The parallelism techniques comprised data and task parallelism. For data parallelism, the paper adopted stream-based partitioning for XML to partition the XML data as the basis of parallelism on multiple CPU cores. The XML data partitioning was performed at two levels. The first level created buckets to establish data independence and balance loads among CPU cores; each bucket was assigned to a CPU core. Within each bucket, a second level of partitioning created finer partitions to provide finer-grained parallelism. Each CPU core performed the holistic twig join algorithm on its own finer partitions in parallel with the other cores. For task parallelism, the holistic twig join algorithm was decomposed into two main tasks, which were pipelined to create parallelism. The first task adopted the data parallelism technique, and its outputs were transferred to the second task periodically. Since data transfers incurred overhead, the size of each transfer needed to be estimated carefully to achieve optimal performance.
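
A minimal sketch of this two-level scheme, with threads standing in for CPU cores and a toy stand-in for the twig join itself, might look as follows (all function names are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor
from queue import Queue
from threading import Thread

def twig_join_partition(partition):
    # Toy stand-in for the holistic twig join run on one fine partition.
    return [f"match:{x}" for x in partition]

def run_pipeline(buckets, workers=4, batch_size=2):
    out, results = Queue(), []

    def merger():                       # second task, pipelined with the first
        while True:
            batch = out.get()
            if batch is None:
                return
            results.extend(batch)       # stand-in for the real output step

    t = Thread(target=merger)
    t.start()
    with ThreadPoolExecutor(max_workers=workers) as pool:   # one bucket per "core"
        for matches in pool.map(twig_join_partition, buckets):
            for i in range(0, len(matches), batch_size):    # periodic transfers,
                out.put(matches[i:i + batch_size])          # sized to amortize overhead
    out.put(None)                       # signal end of stream to the merger
    t.join()
    return results

print(run_pipeline([[1, 2, 3], [4, 5], [6]]))
```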

Findings

The data and task parallelism techniques contribute to good performance, especially for queries with complex structures and/or higher selectivity. The performance of data parallelism can be further improved by task parallelism. Significant performance improvement is attained for queries with higher selectivity because more of the second task's output computation is performed in parallel with the first task.

Research limitations/implications

The proposed parallelism techniques primarily deal with executing a single long-running query for intra-query parallelism, partitioning XML data on the fly and allocating partitions to CPU cores statically. It is assumed that no dynamic XML data updates occur during parallel execution.

Practical implications

The effectiveness of the proposed parallel holistic twig joins relies fundamentally on some system parameter values that can be obtained from a benchmark of the system platform.

Originality/value

The paper proposes novel techniques that increase parallelism by combining data and task parallelism to achieve high performance. To the best of the authors' knowledge, this is the first work to parallelize holistic twig join algorithms on a multi-core system.

Details

International Journal of Web Information Systems, vol. 6 no. 2
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 18 November 2013

Jaroslav Pokorný

Abstract

Purpose

This paper considers schemaless XML data stored in column-oriented storage, particularly in C-store. Axes of the XPath language are studied, and the design and analysis of algorithms for processing the XPath fragment XP{*, //, /} are described in detail. The paper aims to discuss these issues.

Design/methodology/approach

A two-level model of C-store based on XML-enabled relational databases is assumed. The axes of the XPath language in this environment were studied previously by Cástková and Pokorný, and the associated algorithms have been used to implement the XPath fragment XP{*, //, /}.

Findings

The main advantage of this approach is that the algorithms implementing axis evaluation are mostly of logarithmic complexity in n, where n is the number of nodes of the XML tree associated with an XML document. A low-level memory system enables estimating the number of invocations of the two abstract operations that provide the interface to external memory.
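
To illustrate where the logarithmic behavior comes from, the sketch below uses interval (start, end) node labels kept sorted by start, so a descendant step reduces to binary searches over the label column; the paper's actual C-store layout and abstract memory operations are not reproduced here.

```python
from bisect import bisect_left, bisect_right

# Nodes of a tiny XML tree as (start, end, tag) interval labels, sorted by
# start. In a tree, start < s < end implies containment, so the descendants
# of a node occupy one contiguous range found by two O(log n) binary searches.
nodes = [(1, 10, "a"), (2, 5, "b"), (3, 4, "c"), (6, 9, "b"), (7, 8, "c")]
starts = [n[0] for n in nodes]

def descendants(node):
    start, end, _ = node
    lo = bisect_right(starts, start)   # first node starting after `start`
    hi = bisect_left(starts, end)      # first node starting at/after `end`
    return nodes[lo:hi]

print(descendants(nodes[3]))           # descendants of the second <b>: [(7, 8, 'c')]
```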

Originality/value

The paper extends the approach of querying XML data stored in column-oriented storage to the XPath fragment that uses only child and descendant axes, and estimates the complexity of evaluating its queries.

Details

International Journal of Web Information Systems, vol. 9 no. 4
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 15 October 2018

Yongquan Zhou, Ying Ling and Qifang Luo

Abstract

Purpose

This paper aims to present an improved whale optimization algorithm (WOA) based on a Lévy flight trajectory, called the LWOA algorithm, for solving engineering optimization problems. The LWOA makes the WOA faster and more robust, significantly enhancing it. In the LWOA, the Lévy flight trajectory enhances the capability of jumping out of local optima and helps smoothly balance the exploration and exploitation of the WOA. It has been successfully applied to five standard engineering optimization problems. The simulation results on classical engineering design problems and a real application demonstrate the superiority of the LWOA in solving challenging problems with constrained and unknown search spaces, compared with the basic WOA and other available solutions.
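
For context, a minimal sketch of how a Lévy flight step is commonly generated (Mantegna's algorithm) is given below; the LWOA's exact update rule is not reproduced, and the perturbation shown in the final comment is only an illustrative assumption.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Heavy-tailed Levy step via Mantegna's algorithm: mostly small moves,
    occasionally a long jump that helps escape local optima."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

# In a WOA-style update the step might perturb a whale's position, e.g.:
# x_new = x + 0.01 * levy_step(x.size) * (x - x_best)
print(levy_step(3))
```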

Design/methodology/approach

In this paper, an improved WOA based on a Lévy flight trajectory, called the LWOA algorithm, is presented for solving engineering optimization problems.

Findings

The LWOA has been successfully applied to five standard engineering optimization problems. The simulation results on classical engineering design problems and a real application demonstrate its superiority in solving challenging problems with constrained and unknown search spaces, compared with the basic WOA and other available solutions.

Originality/value

An improved WOA based on a Lévy flight trajectory, called the LWOA algorithm, is proposed for the first time.

Details

Engineering Computations, vol. 35 no. 7
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 7 April 2022

Tian-Jian Luo

Abstract

Purpose

Steady-state visual evoked potential (SSVEP) has been widely used in electroencephalogram (EEG)-based non-invasive brain-computer interfaces (BCIs) due to its high accuracy and information transfer rate (ITR). To recognize the SSVEP components in collected EEG trials, many recognition algorithms based on template matching of training trials have been proposed and applied in recent years. This paper presents a comparative survey of SSVEP recognition algorithms based on template matching of training trials.

Design/methodology/approach

To survey and compare the recently proposed recognition algorithms for SSVEP, this paper regards conventional canonical correlation analysis (CCA) as the baseline and selects individual template CCA (ITCCA), multi-set CCA (MsetCCA), task-related component analysis (TRCA), latent common source extraction (LCSE) and the sum of squared correlations (SSCOR) for comparison.
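
A minimal sketch of the CCA baseline, correlating a trial with sine/cosine references at each candidate stimulus frequency and picking the best match, is given below; the array shapes and the omission of harmonic weighting and filter-bank preprocessing are simplifying assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_corr(X, Y):
    """First canonical correlation between trial X and reference Y."""
    a, b = CCA(n_components=1).fit_transform(X, Y)
    return np.corrcoef(a[:, 0], b[:, 0])[0, 1]

def classify_ssvep(trial, freqs, fs, n_harmonics=2):
    """trial: (samples, channels) EEG; returns the best-matching frequency."""
    t = np.arange(trial.shape[0]) / fs
    scores = []
    for f in freqs:
        ref = np.column_stack([fn(2 * np.pi * f * h * t)
                               for h in range(1, n_harmonics + 1)
                               for fn in (np.sin, np.cos)])
        scores.append(cca_corr(trial, ref))
    return freqs[int(np.argmax(scores))]

# Example call: classify_ssvep(eeg_trial, freqs=[8.0, 10.0, 12.0, 15.0], fs=250)
```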

Findings

For the head-to-head comparison of the six surveyed recognition algorithms, this paper adopts the "Tsinghua JFPM-SSVEP" dataset and compares average recognition performance on it. The compared measures include recognition accuracy, ITR, correlation coefficient and R-squared values under different durations of SSVEP stimulus presentation. Based on the optimal stimulus duration, the efficiency of the six compared algorithms is also evaluated. To measure the influence of different parameters, the number of training trials, the number of electrodes and the use of filter-bank preprocessing are compared in an ablation study.

Originality/value

Based on the comparative results, this paper analyzes the advantages and disadvantages of the six compared SSVEP recognition algorithms, considering application scenarios, real-time requirements and computational complexity. Finally, the author gives a selection guide for recognition algorithms in real-world online SSVEP-BCI.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 16 no. 1
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 15 November 2018

Siqi Li and Yimin Deng

Abstract

Purpose

The purpose of this paper is to propose a new algorithm for autonomous-navigation path planning of unmanned aerial vehicles, with fast and stable performance, based on pigeon-inspired optimization (PIO) and quantum entanglement (QE) theory.

Design/methodology/approach

PIO is a biomimetic swarm intelligence optimization algorithm inspired by the natural homing behavior of pigeons. In this paper, the QEPIO model is devised by merging the basic PIO algorithm with the dynamics of QE in a two-qubit XXZ Heisenberg system.
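
For context, a minimal sketch of the basic PIO map-and-compass operator that QEPIO builds on is shown below, minimizing a toy function; the quantum-entanglement dynamics from the paper are not reproduced.

```python
import numpy as np

# Map-and-compass operator of basic PIO: each pigeon's velocity decays with
# the map factor R while being pulled toward the global best position.

def pio_map_compass(f, bounds, n_pigeons=30, R=0.2, T=100, seed=2):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_pigeons, 2))          # pigeon positions
    V = np.zeros_like(X)
    g = X[np.argmin([f(x) for x in X])]              # global best so far
    for t in range(1, T + 1):
        V = V * np.exp(-R * t) + rng.random(X.shape) * (g - X)
        X = np.clip(X + V, lo, hi)
        g = min(np.vstack([X, [g]]), key=f)          # keep the best ever seen
    return g

print(pio_map_compass(lambda x: np.sum(x ** 2), (-5.0, 5.0)))  # ~ [0, 0]
```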

Findings

Comparative experimental results against the genetic algorithm, particle swarm optimization and the traditional PIO algorithm are given to show the convergence speed and robustness of the proposed QEPIO algorithm.

Practical implications

The QEPIO algorithm holds broad adoption prospects, in both military and commercial applications, because it does not rely on an inertial navigation system (INS).

Originality/value

This research solves path-planning problems by applying a quantum effect, in a new way, to the design of the model's parameters for unmanned aerial vehicle path planning.

Details

Aircraft Engineering and Aerospace Technology, vol. 91 no. 1
Type: Research Article
ISSN: 1748-8842
