Search results

1 – 10 of 37
Open Access
Article
Publication date: 24 October 2021

Piergiorgio Alotto, Paolo Di Barba, Alessandro Formisano, Gabriele Maria Lozito, Raffaele Martone, Maria Evelina Mognaschi, Maurizio Repetto, Alessandro Salvini and Antonio Savini

Abstract

Purpose

Inverse problems in electromagnetism, namely, the recovery of sources (currents or charges) or system data from measured effects, are usually ill-posed or, in the numerical formulation, ill-conditioned and require suitable regularization to provide meaningful results. To test new regularization methods, there is a need for benchmark problems whose numerical properties and solutions are well known. Hence, this study aims to define a benchmark problem suitable for testing new regularization approaches and to solve it with different methods.

Design/methodology/approach

To assess the reliability and performance of different solving strategies for inverse source problems, a benchmark problem of current synthesis is defined and solved by means of several regularization methods in a comparative way; subsequently, an approach based on an artificial neural network (ANN) is considered as a viable alternative to classical regularization schemes. The solution of the underlying forward problem is based on a finite element analysis.
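The abstract does not name the individual regularization methods compared; purely as a generic illustration of one classical scheme commonly applied to such ill-conditioned inverse source problems, here is a minimal sketch of zeroth-order Tikhonov regularization (the function name and test data are illustrative, not from the paper):

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Solve min ||A x - b||^2 + alpha ||x||^2 via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
```

For A equal to the identity and alpha = 1, the solution is b/2: the penalty term shrinks the unregularized solution toward zero, which is the stabilizing effect sought for ill-conditioned lead field matrices.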

Findings

The paper provides a very detailed analysis of the proposed inverse problem in terms of numerical properties of the lead field matrix. The solutions found by different regularization approaches and an ANN method are provided, showing the performance of the applied methods and the numerical issues of the benchmark problem.

Originality/value

The value of the paper is to provide the numerical characteristics and issues of the proposed benchmark problem in a comprehensive way, by means of a wide variety of regularization methods and an ANN approach.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 40 no. 6
Type: Research Article
ISSN: 0332-1649

Open Access
Article
Publication date: 29 July 2019

Ren Yang, Qi Song and Pu Chen

Abstract

Purpose

The purpose of this paper is to establish and implement a direct topological reanalysis algorithm for general successive structural modifications, based on the updating matrix triangular factorization (UMTF) method for non-topological modification proposed by Song et al. [Computers and Structures, 143(2014):60-72].

Design/methodology/approach

In this method, topological modifications are viewed as a union of symbolic and numerical changes of the structural matrices. The numerical part is handled by UMTF, which directly updates the matrix triangular factors. For the symbolic change, an integral structure consisting of all potential nodes/elements is introduced to avoid side effects on efficiency during successive modifications. Necessary pre- and post-processing steps are also developed for memory-economic matrix manipulation.
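The UMTF algorithm itself is not reproduced in this abstract. As a loose, generic illustration of the underlying idea of updating triangular factors rather than refactorizing from scratch, here is a standard rank-one Cholesky update (a textbook technique, not the authors' method):

```python
import numpy as np

def cholesky_update(L, v):
    """Return lower-triangular L' with L' L'^T = L L^T + v v^T (rank-one update)."""
    L, v = L.copy(), v.copy()
    n = len(v)
    for k in range(n):
        r = np.sqrt(L[k, k] ** 2 + v[k] ** 2)
        c, s = r / L[k, k], v[k] / L[k, k]   # rotation that zeroes v[k]
        L[k, k] = r
        if k + 1 < n:
            L[k + 1:, k] = (L[k + 1:, k] + s * v[k + 1:]) / c
            v[k + 1:] = c * v[k + 1:] - s * L[k + 1:, k]
    return L
```

The update costs O(n^2) instead of the O(n^3) of a full refactorization, which is the same kind of saving a reanalysis method targets.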

Findings

The new reanalysis algorithm is applicable to successive general structural modifications of arbitrary amplitudes and locations. It explicitly updates the factor matrices of the modified structure and thus guarantees the same accuracy as a full direct analysis while greatly enhancing efficiency.

Practical implications

Examples including evolutionary structural optimization and sequential construction analysis show the capability and efficiency of the algorithm.

Originality/value

This innovative paper makes direct topological reanalysis applicable to successive structural modifications in many different areas.

Details

Engineering Computations, vol. 36 no. 8
Type: Research Article
ISSN: 0264-4401

Open Access
Article
Publication date: 16 May 2022

Mohammad Reza Fathi, Mohsen Torabi and Somayeh Razi Moheb Saraj

Abstract

Purpose

Apitourism is a form of tourism that deals with the culture and traditions of rural communities and can be considered one of the most sustainable methods of development and tourism. Accordingly, this study aims to identify the key factors and plausible scenarios of Iranian apitourism in the future.

Design/methodology/approach

This study is applied research. First, by examining the theoretical foundations and interviewing experts, the key factors affecting the future of Iranian apitourism were identified. These factors were then screened using a binomial test. Critical uncertainty and DEMATEL techniques were both used to select the final drivers.
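For readers unfamiliar with DEMATEL, its core computation is short: normalize the expert direct-influence matrix X, form the total-relation matrix T = X(I - X)^(-1), and read each factor's prominence and relation off the row and column sums of T. A minimal sketch (the influence matrix below is made up for illustration, not taken from the study):

```python
import numpy as np

def dematel(direct):
    """DEMATEL: prominence (R + C) and relation (R - C) from a direct-influence matrix."""
    n = len(direct)
    s = max(direct.sum(axis=1).max(), direct.sum(axis=0).max())
    X = direct / s                          # normalized direct-influence matrix
    T = X @ np.linalg.inv(np.eye(n) - X)    # total-relation matrix
    R, C = T.sum(axis=1), T.sum(axis=0)
    return R + C, R - C
```

One useful sanity check: the relation values (R - C) always sum to zero, because the total given influence equals the total received influence.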

Findings

Two drivers, “apitourism information system and promotional activities” and “organizing ecological infrastructure,” were selected for scenario planning using critical uncertainty and DEMATEL techniques. Based on these two drivers, four scenarios were developed: Golden Beehive, Expectancy, Anonymous Bee and Black Beehive. Each scenario represents a possible future situation for apitourism. According to the criteria of trend compliance, fact-based plausibility and compliance with current data, the “Black Beehive” scenario was selected as the most likely. The “Golden Beehive” scenario shows the best case in terms of the apitourism information system, implementation of promotional activities, and organizing and providing ecological infrastructure. The “Black Beehive” scenario, on the other hand, describes an isolated and vulnerable system.

Originality/value

Developing plausible Iranian apitourism scenarios helps key stakeholders and actors develop flexible plans for various situations.

Details

Journal of Tourism Futures, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2055-5911

Open Access
Article
Publication date: 19 May 2022

Akhilesh S Thyagaturu, Giang Nguyen, Bhaskar Prasad Rimal and Martin Reisslein

Abstract

Purpose

Cloud computing originated in central data centers that are connected to the backbone of the Internet. The network transport to and from a distant data center incurs long latencies that hinder modern low-latency applications. In order to flexibly support the computing demands of users, cloud computing is evolving toward a continuum of cloud computing resources that are distributed between the end users and a distant data center. The purpose of this review paper is to concisely summarize the state-of-the-art in the evolving cloud computing field and to outline research imperatives.

Design/methodology/approach

The authors identify two main dimensions (or axes) of development of cloud computing: the trend toward flexibility of scaling computing resources, which the authors denote as Flex-Cloud, and the trend toward ubiquitous cloud computing, which the authors denote as Ubi-Cloud. Along these two axes of Flex-Cloud and Ubi-Cloud, the authors review the existing research and development and identify pressing open problems.

Findings

The authors find that extensive research and development efforts have addressed some Ubi-Cloud and Flex-Cloud challenges resulting in exciting advances to date. However, a wide array of research challenges remains open, thus providing a fertile field for future research and development.

Originality/value

This review paper is the first to define the concept of the Ubi-Flex-Cloud as the two-dimensional research and design space for cloud computing research and development. The Ubi-Flex-Cloud concept can serve as a foundation and reference framework for planning and positioning future cloud computing research and development efforts.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 1 March 2022

Jaehyuk Choi and Rong Chen

Abstract

Risk parity, also known as equal risk contribution, has recently gained increasing attention as a portfolio allocation method. However, solving for the portfolio weights must resort to numerical methods, as an analytic solution is not available. This study improves two existing iterative methods: the cyclical coordinate descent (CCD) method and the Newton method. The authors enhance the CCD method by simplifying the formulation using a correlation matrix and imposing an additional rescaling step. They also suggest an improved initial guess, inspired by the CCD method, for the Newton method. Numerical experiments show that the improved CCD method performs best and is approximately three times faster than the original CCD method, saving more than 40% of the iterations.
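As background, here is a minimal sketch of the original (unimproved) CCD iteration for equal risk contribution, stated on a covariance matrix; the paper's correlation-matrix reformulation, rescaling step and Newton variant are not reproduced. Each coordinate update solves the quadratic fixed-point condition x_i (Σx)_i = b_i σ(x):

```python
import numpy as np

def risk_parity_ccd(cov, b=None, tol=1e-10, max_iter=1000):
    """Cyclical coordinate descent for equal-risk-contribution portfolio weights."""
    n = cov.shape[0]
    b = np.full(n, 1.0 / n) if b is None else b   # risk budgets (equal by default)
    x = 1.0 / np.sqrt(np.diag(cov))               # common starting guess
    for _ in range(max_iter):
        x_prev = x.copy()
        for i in range(n):
            sigma_x = np.sqrt(x @ cov @ x)            # current portfolio volatility
            c = cov[i] @ x - cov[i, i] * x[i]         # off-diagonal contribution
            x[i] = (-c + np.sqrt(c * c + 4.0 * cov[i, i] * b[i] * sigma_x)) / (2.0 * cov[i, i])
        if np.max(np.abs(x - x_prev)) < tol:
            break
    return x / x.sum()                            # normalize to portfolio weights
```

For two assets, the equal-risk-contribution weights are proportional to the inverse volatilities, which gives a convenient closed-form check of the iteration.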

Details

Journal of Derivatives and Quantitative Studies: 선물연구, vol. 30 no. 2
Type: Research Article
ISSN: 1229-988X

Open Access
Article
Publication date: 21 July 2023

M. Neumayer, T. Suppan, T. Bretterklieber, H. Wegleiter and Colin Fox

Abstract

Purpose

Nonlinear solution approaches for inverse problems require fast simulation techniques for the underlying sensing problem. In this work, the authors investigate finite element (FE) based sensor simulations for the inverse problem of electrical capacitance tomography. Two known computational bottlenecks are the assembly of the FE equation system as well as the computation of the Jacobian. Here, existing computation techniques like adjoint field approaches require additional simulations. This paper aims to present fast numerical techniques for the sensor simulation and computations with the Jacobian matrix.

Design/methodology/approach

For the FE equation system, a solution strategy based on Green’s functions is derived. Its relation to the solution of a standard FE formulation is discussed. A fast stiffness matrix assembly based on an eigenvector decomposition is shown. Based on the properties of the Green’s functions, Jacobian operations are derived, which allow the computation of matrix–vector products with the Jacobian for free, i.e., no additional solves are required. This is demonstrated by a Broyden–Fletcher–Goldfarb–Shanno-based image reconstruction algorithm.
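The authors' Green's-function-based Jacobian operations are specific to their FE formulation and are not reproduced here. Purely as a generic illustration of the concept of matrix-free Jacobian–vector products, which avoids assembling the Jacobian explicitly, here is a forward finite-difference sketch:

```python
import numpy as np

def jvp_fd(F, x, v, eps=1e-6):
    """Approximate the Jacobian-vector product J(x) v by a forward difference."""
    return (F(x + eps * v) - F(x)) / eps
```

For an elementwise map F(x) = x**2 the Jacobian is diag(2x), so the exact product is 2*x*v, which the finite difference matches to O(eps). Matrix-free products like this are exactly what quasi-Newton reconstruction loops such as BFGS need.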

Findings

MATLAB-based time measurements of the new methods show a significant acceleration of all calculation steps compared to reference implementations with standard methods. For the Jacobian operations, for example, improvement factors well over 100 were found.

Originality/value

The paper presents new methods for known computational tasks in solving inverse problems. A particular advantage is the coherent derivation and elaboration of the results. The approaches may also be applicable to other inverse problems.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 42 no. 5
Type: Research Article
ISSN: 0332-1649

Open Access
Article
Publication date: 22 November 2022

Kedong Yin, Yun Cao, Shiwei Zhou and Xinman Lv

Abstract

Purpose

The purposes of this research are to study the theory and method of multi-attribute index system design and establish a set of systematic, standardized, scientific index systems for the design optimization and inspection process. The research may form the basis for a rational, comprehensive evaluation and provide the most effective way of improving the quality of management decision-making. It is of practical significance to improve the rationality and reliability of the index system and provide standardized, scientific reference standards and theoretical guidance for the design and construction of the index system.

Design/methodology/approach

Using modern methods such as complex networks and machine learning, a system for the quality diagnosis of index data and the classification and stratification of index systems is designed. This guarantees the quality of the index data, realizes the scientific classification and stratification of the index system, reduces the subjectivity and randomness of the design of the index system, enhances its objectivity and rationality and lays a solid foundation for the optimal design of the index system.

Findings

Based on ideas from statistics, system theory, machine learning and data mining, the present research focuses on “data quality diagnosis” and “index classification and stratification,” clarifying the classification standards and data-quality characteristics of index data. A data-quality diagnosis system of “data review – data cleaning – data conversion – data inspection” is established. Using decision trees, interpretative structural modeling, cluster analysis, K-means clustering and other methods, a classification and stratification system of indicators is designed to reduce the redundancy of indicator data and improve the quality of the data used. Finally, the scientific and standardized classification and hierarchical design of the index system can be realized.
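Of the methods listed, K-means clustering is the simplest to sketch. Here is a small Lloyd's-algorithm implementation with deterministic farthest-point initialization, included only to illustrate the clustering step, not the paper's full index-classification pipeline:

```python
import numpy as np

def kmeans(X, k, n_iter=100):
    """Lloyd's K-means with deterministic farthest-point initialization."""
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min(np.linalg.norm(X[:, None] - np.array(centers)[None], axis=2), axis=1)
        centers.append(X[d.argmax()])        # farthest point becomes the next seed
    centers = np.array(centers, dtype=float)
    for _ in range(n_iter):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                        for j in range(k)])  # keep old center if a cluster empties
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```

In an index-system setting, each row of X would be an indicator's feature vector and the resulting labels would group redundant indicators together.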

Originality/value

The innovative contributions and research value of the paper are reflected in three aspects. First, a method system for index data quality diagnosis is designed, and multi-source data fusion technology is adopted to ensure the quality of the multi-source, heterogeneous and mixed-frequency data of the index system. Second, a systematic quality-inspection process for missing data is designed, based on systematic thinking about the whole and the individual. Aiming at the accuracy, reliability and feasibility of the patched data, a quality-inspection method for patched data based on inversion thinking and a unified representation method for data fusion based on a tensor model are proposed. Third, the modern method of unsupervised learning is used to classify and stratify the index system, which reduces the subjectivity and randomness of the design of the index system and enhances its objectivity and rationality.

Details

Marine Economics and Management, vol. 5 no. 2
Type: Research Article
ISSN: 2516-158X

Open Access
Article
Publication date: 21 March 2024

Hedi Khedhiri and Taher Mkademi

Abstract

Purpose

In this paper we discuss complex matrix quaternions (biquaternions) and deal with some abstract methods in mathematical complex matrix analysis.

Design/methodology/approach

We introduce and investigate the complex space $\mathbb{H}_{\mathbb{C}}$ consisting of all 2 × 2 complex matrices of the form
$$\xi=\begin{pmatrix} z_1+iw_1 & z_2+iw_2 \\ -z_2+iw_2 & z_1-iw_1 \end{pmatrix},\qquad (z_1,w_1,z_2,w_2)\in\mathbb{C}^4.$$

Findings

We develop on HC a new matrix holomorphic structure for which we provide the fundamental operational calculus properties.

Originality/value

We give necessary and sufficient conditions, in terms of Cauchy–Riemann-type quaternionic differential equations, for the holomorphicity of a function of one complex matrix variable $\xi\in\mathbb{H}_{\mathbb{C}}$. In particular, we show that there is a rich family of holomorphic functions of one matrix quaternion variable.

Details

Arab Journal of Mathematical Sciences, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1319-5166

Open Access
Article
Publication date: 7 September 2015

Hubert Zangl and Stephan Mühlbacher-Karrer

Abstract

Purpose

The purpose of this paper is to reduce the artifacts in fast Bayesian reconstruction images in electrical tomography. This is in particular important with respect to object detection in electrical tomography applications.

Design/methodology/approach

The authors suggest applying the Box–Cox transformation in Bayesian linear minimum mean square error (BMMSE) reconstruction to better accommodate the non-linear relation between the capacitance matrix and the permittivity distribution. The authors compare the results of the original algorithm with the modified algorithm and with the ground truth, in both simulation and experiments.
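Both ingredients of this approach are standard and can be sketched briefly; the snippet below shows the Box–Cox transform and a generic Bayesian linear MMSE estimator, with all means and covariances assumed given. It is an illustration of the two building blocks, not the authors' full reconstruction pipeline:

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox power transform; lam = 0 reduces to the logarithm."""
    return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def lmmse_estimate(mu_x, C_xy, C_yy, y, mu_y):
    """Linear MMSE estimate: x_hat = mu_x + C_xy C_yy^{-1} (y - mu_y)."""
    return mu_x + C_xy @ np.linalg.solve(C_yy, y - mu_y)
```

In the paper's setting the capacitance measurements would be Box–Cox-transformed before the linear estimate is formed, making the transformed data better matched to the linear model.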

Findings

The results show a 50 percent reduction of the mean square error caused by artifacts in low-permittivity regions. Furthermore, the algorithm does not increase the computational complexity significantly, so the hard real-time constraints can still be met. The authors demonstrate that the algorithm also works with limited observation angles. This allows for object detection in real time, e.g. in robot collision avoidance.

Originality/value

This paper shows that the extension of BMMSE by applying the Box–Cox transformation leads to a significant improvement in the quality of the reconstructed image while hard real-time constraints are still met.

Details

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, vol. 34 no. 5
Type: Research Article
ISSN: 0332-1649

Open Access
Article
Publication date: 7 August 2019

Markus Neumayer, Thomas Suppan and Thomas Bretterklieber

Abstract

Purpose

The application of statistical inversion theory provides a powerful approach for solving estimation problems including the ability for uncertainty quantification (UQ) by means of Markov chain Monte Carlo (MCMC) methods and Monte Carlo integration. This paper aims to analyze the application of a state reduction technique within different MCMC techniques to improve the computational efficiency and the tuning process of these algorithms.

Design/methodology/approach

A reduced state representation is constructed from a general prior distribution. For sampling, the Metropolis–Hastings (MH) algorithm and the Gibbs sampler are used. Efficient proposal generation techniques and techniques for conditional sampling are proposed and evaluated on an exemplary inverse problem.
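To make the acceptance rule the abstract refers to concrete, here is a minimal random-walk Metropolis–Hastings sampler; the paper's reduced-state proposal kernels and Gibbs conditionals are not reproduced, and the target below is a made-up example:

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_samples, step=1.0, seed=0):
    """Random-walk MH: propose x + step*N(0,1), accept with prob min(1, pi'/pi)."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_target(x0)
    samples = np.empty(n_samples)
    accepted = 0
    for k in range(n_samples):
        prop = x + step * rng.standard_normal()
        lp_prop = log_target(prop)
        if np.log(rng.random()) < lp_prop - lp:   # MH acceptance rule in log space
            x, lp = prop, lp_prop
            accepted += 1
        samples[k] = x
    return samples, accepted / n_samples
```

Targeting log pi(x) = -x^2/2 yields samples whose mean and standard deviation approach 0 and 1; the acceptance rate reported alongside the chain is the quantity one tunes via the step size.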

Findings

For the MH algorithm, high acceptance rates can be obtained with a simple proposal kernel. For the Gibbs sampler, an efficient technique for conditional sampling was found. The state reduction scheme stabilizes the ill-posed inverse problem, allowing a solution without a dedicated prior distribution. The state reduction is suitable for representing general material distributions.

Practical implications

The state reduction scheme and the MCMC techniques can be applied in different imaging problems. The stabilizing nature of the state reduction improves the solution of ill-posed problems. The tuning of the MCMC methods is simplified.

Originality/value

The paper presents a method to improve the solution process of inverse problems within the Bayesian framework. The stabilization of the inverse problem due to the state reduction improves the solution. The approach simplifies the tuning of MCMC methods.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 38 no. 5
Type: Research Article
ISSN: 0332-1649
