Search results
1 – 10 of over 23,000

Arman Shojaei, Bijan Boroomand and Farshid Mossaiby
Abstract
Purpose
The purpose of this paper is to present a simple meshless solution method for challenging engineering problems such as those with high wave numbers or convection-diffusion ones with high Peclet number. The method uses a set of residual-free bases in a local form.
Design/methodology/approach
The residual-free bases, called here exponential basis functions, are found so that they satisfy the governing equations within each subdomain. Compatibility between the subdomains is satisfied weakly by enforcing the local approximation of the main state variables to pass through the data at the nodes surrounding the central node of the subdomain. The central state variable is first recovered from the approximation and then re-assigned to the central node to construct the associated equation. This leads to the least compatibility required in the solution, e.g. C0 continuity in Laplace problems.
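To make the "residual-free" idea concrete, here is a minimal sketch (not the authors' implementation): for the 2-D Laplace operator, u(x, y) = exp(a·x)·cos(a·y) satisfies the governing equation exactly for any real a, so any linear combination of such bases carries zero residual inside a subdomain.

```python
import numpy as np

# Residual-free basis for the 2-D Laplace operator (illustrative sketch,
# not the authors' code): u(x, y) = exp(a*x) * cos(a*y) satisfies
# u_xx + u_yy = 0 exactly, since u_xx = a^2 u and u_yy = -a^2 u.

def exp_basis(x, y, a):
    """One exponential basis function satisfying Laplace's equation."""
    return np.exp(a * x) * np.cos(a * y)

def laplacian_fd(f, x, y, h=1e-3):
    """Central finite-difference Laplacian, used only to verify the residual."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4.0 * f(x, y)) / h**2

a = 2.0
res = laplacian_fd(lambda x, y: exp_basis(x, y, a), 0.3, 0.7)
print(abs(res))  # essentially zero: the basis is residual-free
```

Because the residual vanishes identically, the method only needs to enforce compatibility between subdomains, not the governing equation itself.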
Findings
The authors show that one can solve a variety of problems with regular and irregular point distributions at a high convergence rate, and demonstrate that this is impossible to achieve using the finite element method. Problems with Laplace and Helmholtz operators, as well as elasto-static problems, are solved to demonstrate the effectiveness of the method. A convection-diffusion problem with a high Peclet number and problems with high wave numbers are among the examples. In all cases, results with a high rate of convergence are obtained with a moderate number of nodes per cloud.
Originality/value
The paper presents a simple meshless method which not only is capable of solving a variety of challenging engineering problems but also yields results with high convergence rate.
Abstract
Convergence rates for high‐order finite‐element polynomials are discussed in the context of problems arising in cylindrical co‐ordinate systems with azimuthal symmetry. It is shown that expected rates of convergence can only be obtained by the proper choice of interpolation function.
Lokman Gunduz and Mustafa Kemal Yilmaz
Abstract
Purpose
This paper aims to examine the convergence pattern of residential house prices in a panel of 55 major cities in Turkey over the 2010–2018 period and to investigate the determinants of convergence club formation.
Design/methodology/approach
The authors applied the log t-test to identify the convergence clubs and estimated an ordered logit model to determine the key drivers.
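The log t-test referred to above (due to Phillips and Sul) can be sketched as follows; the panel below is synthetic and the 30% trimming fraction is the usual convention, not necessarily this paper's setting.

```python
import numpy as np

# Sketch of the Phillips-Sul log t-test on a synthetic converging panel.
# Synthetic data, not the paper's 55-city house-price panel.
rng = np.random.default_rng(0)
N, T = 20, 100
t = np.arange(1, T + 1)
mu = 100.0 * 1.01 ** t                       # common trend
delta = rng.normal(0.0, 0.2, size=N)         # city-specific loadings
X = mu * (1.0 + np.outer(delta, t ** -1.0))  # decaying loadings -> convergence

# Relative transition paths h_it and cross-sectional distance H_t
h = X / X.mean(axis=0)
H = ((h - 1.0) ** 2).mean(axis=0)

# Regress log(H_1/H_t) - 2*log(log t) on log t over the trimmed sample
r = 0.3
s = slice(int(r * T), T)
y = np.log(H[0] / H[s]) - 2.0 * np.log(np.log(t[s]))
Z = np.column_stack([np.ones(t[s].size), np.log(t[s])])
a_hat, b_hat = np.linalg.lstsq(Z, y, rcond=None)[0]
print(b_hat)  # b_hat >= 0 indicates convergence to a single club
```

In practice the sign of the fitted slope is assessed with a one-sided t-test; clubs are then found by repeatedly applying the test to subgroups.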
Findings
The results suggest that there are five convergence clubs and confirm the heterogeneity of the Turkish housing market. Istanbul, the commercial capital, and Mugla, an attractive tourist destination, are at the top of the housing market, followed by cities located in the western part of the country, particularly along the Aegean and Mediterranean coasts. Moreover, the ordered logit model results point out that differences in employment rate, climate, population density and having a metropolitan municipality play a significant role in determining convergence club membership.
Practical implications
Large-scale policy measures that aim to increase employment opportunities in the rural cities of the central and eastern provinces and that provide lower land prices and property taxes in the metropolitan cities of Turkey can help mitigate some of the divergence in house prices across cities.
Originality/value
The novelty of this study lies in employing a new city-level data set covering 55 cities in Turkey, by far the largest city coverage among emerging market economies used to implement the log t-test. It also contributes to the literature on the city-specific determinants of convergence club formation in the case of an emerging economy.
Martha del Pilar Rodríguez, Klender Cortez and Alma Berenice Méndez
Abstract
This chapter aims to analyze whether member countries of the Pacific Alliance agreement showed economic and financial convergence during the 2010–2016 period. The sample consists of four Latin American countries that are members of the Alianza del Pacífico (Pacific Alliance): Mexico, Chile, Colombia, and Peru. We use an economic convergence index (ECI) to classify the degree of the countries’ convergence with respect to a given monetary area, considering the size of their economies, and compute three criteria: (1) nominal variables (used to define the Maastricht criteria), namely inflation, long-term interest rates, public debt, fiscal deficit as percentages of gross domestic product (GDP), and exchange rate volatility; (2) real and cyclical variables such as real GDP growth, the gap between real and potential GDP, unemployment, current account balance as a percentage of GDP, and short-term interest rates; and (3) a conditional combination that unequally weights the nominal and real variables. We also use correlation analysis to compare coefficients. The results are analyzed in the medium term in terms of descriptive statistics of the real and nominal variables, convergence indexes, and correlation analysis. They show that the Pacific Alliance countries under study are converging in terms of nominal variables such as the interest rate, the exchange rate, fiscal deficits, and government debt. It can also be observed that convergence occurs in the real and weighted variables, although to a lesser magnitude. The real variables related to GDP growth and foreign trade adjust less quickly than the nominal ones.
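The flavor of such a weighted index can be sketched as follows; the variables, deviation values, weights, and normalization below are illustrative assumptions, not the chapter's exact ECI.

```python
import numpy as np

# Illustrative weighted convergence index: absolute deviations of each
# country's indicators from a reference area are normalized per criterion
# and combined with unequal weights on nominal vs. real criteria.
# All numbers and weights here are assumptions, not the chapter's ECI.
countries = ["Mexico", "Chile", "Colombia", "Peru"]
# rows: countries; columns: |deviation from reference| for
# [inflation, long rate, debt/GDP] (nominal) and [GDP gap, unemployment] (real)
dev = np.array([
    [1.2, 0.8, 10.0, 0.5, 1.0],
    [0.6, 0.5,  5.0, 0.3, 0.8],
    [1.5, 1.1, 12.0, 0.7, 1.5],
    [0.9, 0.7,  8.0, 0.4, 1.2],
])
dev_norm = dev / dev.max(axis=0)             # scale each criterion to (0, 1]
w = np.array([0.25, 0.25, 0.2, 0.15, 0.15])  # nominal criteria weighted more
eci = dev_norm @ w                           # lower = closer to convergence
for c, v in zip(countries, eci):
    print(c, round(float(v), 3))
```

With these made-up deviations the index simply ranks countries by weighted distance from the reference; the chapter's ECI additionally conditions on the size of each economy.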
Montfort Mlachila and Sarah Sanya
Abstract
Purpose
The purpose of this paper is to answer one important question: in the aftermath of a systemic banking crisis, can the expected deviations in credit supply, liquidity, and other bank characteristics become entrenched, in the sense that they do not converge back to “normal”?
Design/methodology/approach
Using a panel data set of commercial banks in the Mercosur during the period 1990-2006, the authors analyze the impact of crises on four sets of financial indicators of bank behavior and outcomes – profitability, maturity preference, credit supply, and risk taking. The authors employ convergence methodology – which is often used in the growth literature – to identify the evolution of bank behavior in the region after crises.
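A stylized version of a convergence regression of the kind borrowed from the growth literature can be sketched as below; the data and variable names are synthetic, not the authors' Mercosur panel or specification.

```python
import numpy as np

# Stylized beta-convergence regression: regress the average growth of a
# bank indicator (e.g. credit supply) on its initial level. A negative
# slope means banks revert toward a common "normal" level.
# Synthetic data, not the Mercosur panel.
rng = np.random.default_rng(1)
n_banks, T = 50, 16

x0 = rng.uniform(1.0, 10.0, n_banks)        # initial level of the indicator
x_star = 5.0                                # common "normal" level
lam = 0.15                                  # speed of convergence per year
xT = x_star + (x0 - x_star) * np.exp(-lam * T) + rng.normal(0, 0.05, n_banks)

growth = (np.log(xT) - np.log(x0)) / T
Z = np.column_stack([np.ones(n_banks), np.log(x0)])
a_hat, b_hat = np.linalg.lstsq(Z, growth, rcond=None)[0]

# b_hat < 0: banks starting above "normal" grow more slowly -> convergence;
# the implied speed can be recovered from the slope.
speed = -np.log(1.0 + T * b_hat) / T if 1.0 + T * b_hat > 0 else float("nan")
print(b_hat, speed)
```

The appeal of this framework, as the paper notes, is that conditioning variables (institutional and macroeconomic factors) can be added to the regression to identify what drives persistent deviation from "normality".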
Findings
A key finding of the paper is that bank risk-taking behavior is significantly modified, leading to a prolonged reduction of intermediation to the private sector in favor of less risky government securities and a preference for high levels of excess liquidity well after the crisis. This can be attributed to macroeconomic and institutional volatility, which has nurtured a relatively high level of risk aversion among banks in the Mercosur.
Originality/value
To the best of the authors’ knowledge, using convergence methodology is a relatively novel approach in this area. An added advantage of using this approach over others currently used in the literature is that the authors can empirically quantify the rate of convergence and the institutional and macroeconomic factors that condition the convergence. Moreover, the methodology allows one to identify – in some hierarchical order – factors that condition persistent deviation from “normality.” The lessons learned from the Mercosur case study are useful for countries that suffered systemic banking crises in the aftermath of the global financial crisis.
M. Vaz Jr, E.L. Cardoso and J. Stahlschmidt
Abstract
Purpose
Parameter identification is a technique which aims at determining material or other process parameters based on a combination of experimental and numerical techniques. In recent years, heuristic approaches, such as genetic algorithms (GAs), have been proposed as possible alternatives to classical identification procedures. The present work shows that particle swarm optimization (PSO), as an example of such methods, is also appropriate for the identification of inelastic parameters. The paper aims to discuss these issues.
Design/methodology/approach
PSO is a class of swarm intelligence algorithms which attempts to reproduce the social behaviour of a generic population. In parameter identification, each particle is associated with coordinates in the search space corresponding to a set of material parameters, upon which velocity operators with random components are applied, leading the particles to cluster together at convergence.
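A minimal PSO loop in the spirit of this description (the sphere function stands in for the identification objective; the constriction coefficients are the usual textbook values, not necessarily the paper's settings):

```python
import numpy as np

# Minimal particle swarm optimizer: each particle is a point in the search
# space, and velocity updates with random components pull the swarm toward
# personal and global bests until the particles cluster at convergence.
# Textbook constriction coefficients; not the paper's exact settings.
def pso(f, dim, n=30, iters=200, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    chi, c1, c2 = 0.7298, 1.49618, 1.49618
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x))
        x = x + v
        val = np.array([f(p) for p in x])
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

best, best_val = pso(lambda p: float(np.sum(p ** 2)), dim=2)
print(best_val)  # near 0 for the sphere function
```

In a real identification run, `f` would evaluate the mismatch between simulated and experimental response for the material parameters encoded in each particle.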
Findings
PSO has proved to be a viable alternative for the identification of inelastic parameters owing to its robustness (achieving the global minimum with high tolerance for variations of the population size and control parameters) and, in contrast to GAs, its higher convergence rate and smaller number of control variables.
Originality/value
PSO has so far been applied mostly in electrical and industrial engineering. This paper extends the field of application of the method to the identification of inelastic material parameters.
Sajad Ahmad Rather and P. Shanthi Bala
Abstract
Purpose
In this paper, a newly proposed hybrid algorithm, the constriction coefficient-based particle swarm optimization and gravitational search algorithm (CPSOGSA), is employed to train a multi-layer perceptron (MLP), with the aim of overcoming the MLP's sensitivity to initialization, premature convergence, and stagnation in local optima.
Design/methodology/approach
In this study, exploration of the search space is carried out by the gravitational search algorithm (GSA), while optimization of candidate solutions, i.e. exploitation, is performed by particle swarm optimization (PSO). For training the multi-layer perceptron (MLP), CPSOGSA uses a sigmoid fitness function to find the proper combination of connection weights and neural biases that minimizes the error. In addition, a matrix encoding strategy is utilized to provide a one-to-one correspondence between the weights and biases of the MLP and the agents of CPSOGSA.
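The one-to-one matrix-encoding idea can be sketched like this; the layer sizes and the MSE-based fitness below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

# Sketch of the matrix-encoding idea: an optimizer agent is a flat vector,
# decoded one-to-one into the weight matrices and bias vectors of a
# one-hidden-layer MLP with sigmoid activations. Shapes are illustrative.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def decode(agent, n_in, n_hid, n_out):
    """Map a flat agent vector one-to-one onto MLP weights and biases."""
    i = 0
    W1 = agent[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = agent[i:i + n_hid]; i += n_hid
    W2 = agent[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = agent[i:i + n_out]
    return W1, b1, W2, b2

def fitness(agent, X, y, n_in, n_hid, n_out):
    """MSE of the decoded MLP -- the quantity the swarm would minimize."""
    W1, b1, W2, b2 = decode(agent, n_in, n_hid, n_out)
    out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    return float(np.mean((out - y) ** 2))

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 6, 1
dim = n_in * n_hid + n_hid + n_hid * n_out + n_out
agent = rng.normal(size=dim)
X, y = rng.random((10, n_in)), rng.random((10, n_out))
f_val = fitness(agent, X, y, n_in, n_hid, n_out)
print(dim, f_val)
```

Any population-based optimizer (here, CPSOGSA) can then evolve such agent vectors directly, with `fitness` as the objective.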
Findings
The experimental findings show that CPSOGSA is a better MLP trainer than the other stochastic algorithms considered, as it provides superior results in resolving stagnation in local optima and convergence speed problems. Besides, it gives the best results for the breast cancer, heart, sine function and sigmoid function datasets compared to the other participating algorithms. Moreover, CPSOGSA also provides very competitive results for the remaining datasets.
Originality/value
CPSOGSA performed effectively in overcoming stagnation in local optima and in increasing the overall convergence speed of the MLP. CPSOGSA is a hybrid optimization algorithm that combines powerful global exploration capability with high local exploitation power. In the research literature, little work is available in which CPSO and GSA have been utilized for training MLPs; the only closely related study is by Mirjalili et al. (2012), who used standard PSO and GSA for training simple FNNs. However, that work employed only three datasets and used only the MSE performance metric for evaluating the efficiency of the algorithms. In this paper, eight different standard datasets and five performance metrics have been utilized for investigating the efficiency of CPSOGSA in training MLPs. In addition, a non-parametric pair-wise statistical test, the Wilcoxon rank-sum test, has been carried out at a 5% significance level to statistically validate the simulation results. Besides, eight state-of-the-art meta-heuristic algorithms were employed for comparative analysis of the experimental results to further strengthen the authenticity of the experimental setup.
Hui‐Yuan Fan, Junhong Liu and Jouni Lampinen
Abstract
Purpose
The purpose of this paper is to improve the existing differential evolution (DE) mutation operator so as to accelerate its convergence.
Design/methodology/approach
A new general donor form for the mutation operation in DE is presented, which defines the donor as a convex combination of the triplet of individuals selected for mutation. Three new donor schemes are deduced from this form.
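The general donor form can be sketched as below: a convex combination of the selected triplet, here combined with a classical scaled difference term. The particular weights and the difference term are illustrative assumptions, not the paper's three deduced schemes.

```python
import numpy as np

# Sketch of a donor defined as a convex combination of the selected triplet
# (x_r1, x_r2, x_r3): weights are non-negative and sum to one, so the donor
# lies inside the triplet's convex hull. The added scaled difference mimics
# classical DE mutation; the exact scheme details are assumptions here.
def convex_donor(x1, x2, x3, lambdas):
    lam = np.asarray(lambdas, dtype=float)
    lam = lam / lam.sum()                   # enforce a convex combination
    return lam[0] * x1 + lam[1] * x2 + lam[2] * x3

def mutate(x1, x2, x3, lambdas, F=0.8):
    """Mutant vector: convex donor plus a scaled difference of two members."""
    return convex_donor(x1, x2, x3, lambdas) + F * (x2 - x3)

rng = np.random.default_rng(0)
x1, x2, x3 = rng.random(5), rng.random(5), rng.random(5)
d = convex_donor(x1, x2, x3, [0.5, 0.3, 0.2])
# The donor is componentwise bounded by the triplet:
print(np.all(d >= np.min([x1, x2, x3], axis=0)))
```

Setting the weights to (1, 0, 0) recovers the classical DE/rand/1 base vector, so the convex form strictly generalizes the original operator.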
Findings
The three donor schemes were empirically compared with the original DE version and three existing variants of DE by using a suite of nine well‐known test functions, and were also demonstrated by a practical application case – training a neural network to approximate aerodynamic data. The obtained numerical simulation results suggested that these modifications to the mutation operator could improve the DE's convergence performance in both the convergence rate and the convergence reliability.
Research limitations/implications
Further research is still needed to explain adequately why it was possible to improve both the convergence rate and the convergence reliability of DE to this extent, despite the well‐known “No Free Lunch” theorem. Further research is also needed to outline more distinctly the particular class of problems to which the current observations can be generalized.
Practical implications
More complicated engineering problems could be solved sub‐optimally, even where their true optimal solution may never be reached given current computer capability.
Originality/value
Though DE has demonstrated considerably better convergence performance than other evolutionary algorithms (EAs), its convergence rate is still far from what scientists hope for. On the one hand, a higher convergence rate is always desirable for any optimization method used in seeking the global optimum of a non‐linear objective function. On the other hand, since all EAs, including DE, work with a population of solutions rather than a single solution, many evaluations of candidate solutions are required in the optimization process. If evaluation of candidate solutions is too time‐consuming, the overall optimization cost may become too expensive. One often has to limit the algorithm to operate within an acceptable time, which may not be enough to find the global optimum (or optima) but is enough to obtain a sub‐optimal solution. Therefore, it remains necessary to investigate new strategies to improve the current DE algorithm.
Abstract
Purpose
It has been usual to prefer an enrichment pattern independent of the mesh when applying singular functions in the Generalized/eXtended finite element method (G/XFEM). When modeling crack tip singularities through extrinsic enrichment, this choice has been understood as the only way to surpass the typically poor convergence rate obtained with the finite element method (FEM) on uniform or quasi-uniform meshes conforming to the crack. The purpose of this study is therefore to revisit the topological enrichment strategy in the light of the higher-order continuity obtained with a smooth partition of unity (PoU). To verify the impact of smoothness on the blending phenomenon, a series of numerical experiments is conceived to compare two GFEM versions: the conventional one, based on piecewise continuous PoUs, and another which considers PoUs with high regularity.
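The notion of a smooth (high-regularity) PoU can be illustrated with a generic Shepard construction built from C∞ bump weights; this is not the authors' specific G/XFEM PoU, only a sketch of the property at stake.

```python
import numpy as np

# Generic Shepard partition of unity built from C-infinity bump weights:
# normalizing the weights guarantees the PoU property (they sum to one),
# and the smoothness of each weight carries over to the PoU functions.
# Illustrative construction, not the authors' specific G/XFEM PoU.
def bump(r):
    """C-infinity weight supported on |r| < 1."""
    r = np.abs(np.asarray(r, dtype=float))
    w = np.zeros_like(r)
    inside = r < 1.0
    w[inside] = np.exp(-1.0 / (1.0 - r[inside] ** 2))
    return w

nodes = np.linspace(0.0, 1.0, 6)            # cloud of nodes on [0, 1]
h = 0.4                                     # support radius (overlapping)
x = np.linspace(0.05, 0.95, 50)             # evaluation points

W = np.array([bump((x - xi) / h) for xi in nodes])   # raw smooth weights
phi = W / W.sum(axis=0)                              # Shepard normalization
print(np.allclose(phi.sum(axis=0), 1.0))             # PoU: sums to one
```

Replacing the bump weights with piecewise-linear hat weights would reproduce the conventional C0 PoU, which is the comparison the study sets up.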
Design/methodology/approach
The stress approximations in the crack tip vicinity are qualified by focusing on crack severity parameters. For this purpose, the material forces method, which originates from configurational mechanics, is employed. Some attempts to improve the solution using different polynomial enrichment schemes, besides the singular one, are discussed with the aim of verifying the transition/blending effects. A classical two-dimensional problem of linear elastic fracture mechanics (LEFM) is solved, considering pure mode I and mixed-mode loadings.
Findings
The results reveal that, in the presence of smooth PoUs, topological enrichment can still be considered a suitable strategy for extrinsic enrichment: first, because such an enrichment pattern can still treat the crack independently of the mesh and, under certain conditions, deliver some advantage in convergence rates when compared to the conventional FEM; and second, because the topological pattern demands fewer degrees of freedom and affects conditioning less than the geometrical strategy.
Originality/value
Several outputs are presented, considering estimations for the…
Abstract
Purpose
Increasing carbon productivity is an effective way to reduce carbon emissions while boosting economic prosperity. For the appropriate formulation and enforcement of energy-saving and carbon-emission-reduction policies in various sectors, it is of great significance to investigate the evolution characteristics and convergence modes of carbon productivity across the manufacturing sectors.
Design/methodology/approach
Using the slack-based measure directional distance function (SBM-DDF) and the global Malmquist–Luenberger (GML) productivity index, this paper measures the carbon productivities of 29 manufacturing subsectors in Shanghai, China, from 2001 to 2016 under a total factor framework. Furthermore, based on convergence theories, it empirically examines the convergence of carbon productivity across these manufacturing sectors.
Findings
The measurement results suggest that the carbon productivities of the manufacturing sectors in Shanghai show an increasing tendency on the whole, with technical efficiency, rather than technological change, making the main contribution to the increase. It is found that there is no obvious σ-convergence across the manufacturing sectors in Shanghai, but there is both absolute β-convergence and conditional β-convergence. Moreover, there is heterogeneity in convergence characteristics between the clean sectors and the polluting sectors. The findings also show that firm size and industry structure have significant positive impacts on the growth of the carbon productivities of the manufacturing sectors, whereas the impacts of capital deepening and energy consumption structure are significantly negative.
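The σ- and absolute β-convergence tests invoked here can be sketched on a synthetic sector panel (illustrative data, not the Shanghai measurements):

```python
import numpy as np

# Sketch of sigma- and absolute beta-convergence tests on a synthetic
# panel of sector carbon productivities (not the Shanghai data).
rng = np.random.default_rng(2)
n_sec, T = 29, 16
p0 = rng.uniform(0.5, 2.0, n_sec)           # initial sector productivity
# Sectors converge toward a common level of 1.0 over time:
p = np.array([1.0 + (p0 - 1.0) * np.exp(-0.1 * t) for t in range(T)]).T

# sigma convergence: does cross-sector dispersion of log productivity fall?
sigma = np.log(p).std(axis=0)

# absolute beta convergence: average growth regressed on the initial level
growth = (np.log(p[:, -1]) - np.log(p[:, 0])) / (T - 1)
Z = np.column_stack([np.ones(n_sec), np.log(p[:, 0])])
a_hat, b_hat = np.linalg.lstsq(Z, growth, rcond=None)[0]
print(sigma[-1] < sigma[0], b_hat < 0)   # both hold for this toy panel
```

In the toy panel both tests agree; the paper's interesting finding is precisely that they can disagree (β-convergence without obvious σ-convergence), and conditional β-convergence adds sector covariates such as firm size to the regression.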
Originality/value
This paper measures the carbon productivities of the manufacturing subsectors by applying the SBM-DDF and the GML index, so as to improve accuracy. It provides an insight into the convergence of carbon productivity across the manufacturing sectors.