Search results

1–10 of over 24,000
Article
Publication date: 30 June 2020

Sajad Ahmad Rather and P. Shanthi Bala

Abstract

Purpose

In this paper, a newly proposed hybrid algorithm, constriction coefficient-based particle swarm optimization and gravitational search algorithm (CPSOGSA), has been employed for training multi-layer perceptrons (MLPs) to overcome their sensitivity to initialization, premature convergence, and stagnation in local optima.

Design/methodology/approach

In this study, exploration of the search space is carried out by the gravitational search algorithm (GSA), while optimization of candidate solutions, i.e. exploitation, is performed by particle swarm optimization (PSO). For training the multi-layer perceptron (MLP), CPSOGSA uses a sigmoid fitness function to find the combination of connection weights and neural biases that minimizes the error. In addition, a matrix encoding strategy provides a one-to-one correspondence between the weights and biases of the MLP and the agents of CPSOGSA.
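The encoding and fitness evaluation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the single hidden layer, the layer sizes and the function names are assumptions.

```python
import numpy as np

def decode_agent(agent, n_in, n_hidden, n_out):
    """Map a flat optimizer agent vector onto MLP weights and biases
    (a one-to-one encoding, as the abstract describes)."""
    i = 0
    W1 = agent[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = agent[i:i + n_hidden];                                i += n_hidden
    W2 = agent[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = agent[i:i + n_out]
    return W1, b1, W2, b2

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mse_fitness(agent, X, y, n_hidden):
    """Fitness of one agent = mean squared error of the sigmoid MLP it encodes;
    the optimizer (CPSOGSA in the paper) would minimize this value."""
    n_in, n_out = X.shape[1], y.shape[1]
    W1, b1, W2, b2 = decode_agent(agent, n_in, n_hidden, n_out)
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)
    return np.mean((out - y) ** 2)
```

Any population-based optimizer can then treat `mse_fitness` as a black-box objective over flat agent vectors of length `n_in*n_hidden + n_hidden + n_hidden*n_out + n_out`.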

Findings

The experimental findings indicate that CPSOGSA is a better MLP trainer than the other stochastic algorithms because it provides superior results in terms of escaping local optima and convergence speed. It gives the best results for the breast cancer, heart, sine function and sigmoid function datasets compared with the other participating algorithms, and very competitive results for the remaining datasets.

Originality/value

CPSOGSA performed effectively in overcoming stagnation in local optima and in increasing the overall convergence speed of the MLP. CPSOGSA is a hybrid optimization algorithm that combines powerful global exploration capability with high local exploitation power. Little work is available in the research literature in which CPSO and GSA have been utilized for training MLPs; the only closely related paper, by Mirjalili et al. in 2012, used standard PSO and GSA for training simple FNNs, employed only three datasets and evaluated the algorithms with the MSE metric alone. In this paper, eight standard datasets and five performance metrics have been utilized to investigate the efficiency of CPSOGSA in training MLPs. In addition, a non-parametric pair-wise statistical test, the Wilcoxon rank-sum test, has been carried out at a 5% significance level to statistically validate the simulation results, and eight state-of-the-art meta-heuristic algorithms were employed for comparative analysis of the experimental results to further strengthen the experimental setup.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 2
Type: Research Article
ISSN: 1756-378X

Keywords
Article
Publication date: 6 February 2020

Sajad Ahmad Rather and P. Shanthi Bala

Abstract

Purpose

The purpose of this paper is to investigate the performance of the chaotic gravitational search algorithm (CGSA) in solving mechanical engineering design problems, including welded beam design (WBD), compression spring design (CSD) and pressure vessel design (PVD).

Design/methodology/approach

In this study, ten chaotic maps were combined with the gravitational constant to increase the exploitation power of the gravitational search algorithm (GSA) while preserving the adaptive capability of the gravitational constant. The chaotic maps were also used to overcome the premature convergence and stagnation in local minima of standard GSA.
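One common way to couple a chaotic map with GSA's gravitational constant is sketched below. This is an illustration of the general CGSA idea under assumed names and an assumed blending rule; the abstract does not specify the exact coupling used in the paper.

```python
import math

def logistic_map(x):
    """One step of the logistic map, one of the ten chaotic maps the
    abstract mentions; it stays chaotic on (0, 1) at parameter 4."""
    return 4.0 * x * (1.0 - x)

def chaotic_G(G0, alpha, t, T, chaos):
    """Hypothetical sketch: standard GSA's decaying gravitational constant,
    G(t) = G0 * exp(-alpha * t / T), scaled by a chaotic value in (0, 1]."""
    return G0 * math.exp(-alpha * t / T) * chaos
```

At each iteration `t`, the optimizer would advance the map state with `logistic_map` and feed the result into `chaotic_G`, so the effective gravitational constant inherits both the adaptive decay and the chaotic perturbation.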

Findings

The chaotic maps have shown efficient performance on the WBD and PVD problems and competitive results on the CSD problem. Moreover, the experimental results indicate that CGSA outperforms the other participating algorithms in terms of convergence speed, cost function minimization, design variable optimization and successful constraint handling.

Research limitations/implications

The use of chaotic maps in standard GSA opens new research directions for GSA, particularly in convergence and time complexity analysis. Moreover, CGSA can be used for solving infinite impulse response (IIR) parameter tuning and economic load dispatch problems in electrical sciences.

Originality/value

The hybridization of chaotic maps and evolutionary algorithms for solving practical engineering problems is an emerging topic in metaheuristics. In the literature, researchers have used some chaotic maps, such as the logistic, Gauss and sinusoidal maps, more extensively than others. This work, however, uses ten different chaotic maps for engineering design optimization. In addition, a non-parametric statistical test, the Wilcoxon rank-sum test, was carried out at a 5% significance level to statistically validate the simulation results, and 11 state-of-the-art metaheuristic algorithms were used for comparative analysis of the experimental results to further strengthen the experimental setup.

Article
Publication date: 4 April 2008

Erwin Stein and Gautam Sagar

Abstract

Purpose

The purpose of this paper is to examine quadratic convergence of finite element analysis for hyperelastic material at finite strains via Abaqus‐UMAT as well as classification of the rates of convergence for iterative solutions in regular cases.

Design/methodology/approach

Different formulations for the stiffness – the Hessian form of the free energy functionals – are systematically given for obtaining the rate-independent analytical tangent and the numerical tangent, as well as rate-dependent tangents using the objective Jaumann rate of the Kirchhoff stress tensor as used in Abaqus. The convergence rates for available element types in Abaqus are computed and compared for simple but significant nonlinear elastic problems, using the 8-node linear brick (B-bar) element – also with hybrid pressure formulation and with incompatible modes – the 20-node quadratic brick element with corresponding modifications, as well as the 6-node linear triangular prism element and the 4-node linear tetrahedral element with modifications.

Findings

By using the Jaumann rate of the Kirchhoff stress tensor for both rate-dependent and rate-independent problems, quadratic or nearly quadratic convergence is achieved for most of the elements used via the Abaqus-UMAT interface. However, when the rate-independent analytical tangent is used for rate-independent problems, convergence is not assured at all for all elements and the considered problems.

Originality/value

For the first time, the convergence properties of the 3D finite elements available in Abaqus are systematically treated for elastic material at finite strain via Abaqus-UMAT.

Details

Engineering Computations, vol. 25 no. 3
Type: Research Article
ISSN: 0264-4401

Keywords
Article
Publication date: 8 October 2018

Atul Mishra and Sankha Deb

Abstract

Purpose

Assembly sequence optimization is a difficult combinatorial optimization problem that must simultaneously satisfy various feasibility constraints and optimization criteria. Applications of evolutionary algorithms have shown considerable promise in terms of lower computational cost and time, but challenges remain, such as achieving the global optimum in the fewest iterations, fast convergence and robustness/consistency in finding the global optimum. With these challenges in mind, this study proposes an improved flower pollination algorithm (FPA) and a hybrid genetic algorithm (GA)-FPA.

Design/methodology/approach

In view of the slower convergence rate and higher computational time of the previous discrete FPA, this paper presents an improved hybrid FPA with a different representation scheme, a new initial population generation strategy and modifications to the local and global pollination rules. Different optimization objectives are considered, including direction changes, tool changes, assembly stability, base component location and feasibility. The parameter settings of the hybrid GA-FPA are also discussed.
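The abstract does not give the modified pollination rules; a minimal, hypothetical sketch of a permutation-preserving local-pollination step for assembly sequences (the operator and names are assumptions, not the authors' rules) could look like this.

```python
import random

def local_pollination(seq, donor):
    """Hypothetical discrete local-pollination step: pick one component of
    `seq` at random and shift it toward the position it occupies in `donor`,
    a permutation-preserving analogue of the continuous FPA update."""
    child = list(seq)                      # keep the parent sequence intact
    comp = random.choice(child)            # randomly chosen component
    i, j = child.index(comp), donor.index(comp)
    child.insert(j, child.pop(i))          # move it toward the donor's ordering
    return child
```

Because the operator only relocates an existing element, every child remains a valid permutation of the assembly components, so feasibility checks (tool changes, stability, etc.) can be applied afterwards as in the paper's objective list.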

Findings

When compared with the previous discrete FPA and with GA, the memetic algorithm (MA), harmony search and the improved FPA (IFPA), the proposed hybrid GA-FPA gives promising results with respect to higher global best fitness, higher average fitness, faster convergence (especially relative to the previously developed variant of FPA) and, most importantly, improved robustness/consistency in generating globally optimal solutions.

Practical implications

It is anticipated that using the proposed approach, assembly sequence planning can be accomplished efficiently and consistently with reduced lead time for process planning, making it cost-effective for industrial applications.

Originality/value

A different representation scheme, a new initial population generation strategy and modifications to the local and global pollination rules are introduced in the IFPA. Moreover, hybridization with GA is proposed to improve convergence speed and the robustness/consistency of finding globally optimal solutions.

Details

Assembly Automation, vol. 39 no. 1
Type: Research Article
ISSN: 0144-5154

Keywords
Article
Publication date: 1 September 2005

Sangkyun Kim and Choon Seong Leem

Abstract

Purpose

To provide a strategic model that helps enterprise executives solve the managerial problems of planning, implementing and operating information security in business convergence environments.

Design/methodology/approach

A risk analysis method and baseline controls of BS7799 were used to generate security patterns of business convergence. With the analysis of existing enterprise architecture (EA) methods, the framework of the enterprise security architecture was designed.

Findings

The adaptive framework, including the security patterns with quantitative factors, enterprise security architecture with 18 dimensions, and reference models in business convergence environments, is provided.

Research limitations/implications

Information assets and baseline controls should be subdivided to provide more detailed risk factors and weight factors of each business convergence strategy. Case studies should be performed continuously to consolidate contents of best practices.

Practical implications

With the enterprise security architecture provided in this paper, an enterprise that tries to create a value-added business model using a convergence model can adapt itself to mitigate security risks and reduce potential losses.

Originality/value

This paper outlined the business risks in convergence environments using risk analysis and baseline controls. It is arguably the first attempt to adapt the EA approach to help enterprise executives solve the security problems of business convergence.

Details

Industrial Management & Data Systems, vol. 105 no. 7
Type: Research Article
ISSN: 0263-5577

Keywords
Article
Publication date: 1 March 2013

Wenyu Chen, Wangyang Bian and Ru Zeng

Abstract

Purpose

The purpose of this paper is to show that theoretical proofs of the solution convergence of ant colony optimization (ACO) algorithms have significant theoretical and practical value.

Design/methodology/approach

This paper adapts the basic ACO algorithm framework and proves convergence for two important ACO subclass algorithms, ACO_bs,τmin and ACO_bs,τmin(t), i.e. best-so-far variants with a fixed and a time-varying pheromone lower bound, respectively.

Findings

This paper shows that when the minimum pheromone trail value decays to 0 at a logarithmic rate, the algorithms are guaranteed to find an optimal solution. Even if the randomness and bias of these stochastic algorithms are perturbed infinitesimally, the algorithms can still obtain an optimal solution.
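The logarithmic decay condition can be illustrated with a simple pheromone lower bound schedule. The constant and the exact form are illustrative assumptions, not the paper's schedule; the point is that the bound shrinks slowly enough that every solution keeps a nonzero selection probability at any finite iteration.

```python
import math

def tau_min(t, c=1.0):
    """Logarithmically decaying pheromone lower bound:
    tau_min(t) = c / ln(t + e).  It tends to 0 as t grows, but only at
    logarithmic speed, which is the regime the convergence proofs require."""
    return c / math.log(t + math.e)
```

A MAX-MIN-style ACO would clamp every pheromone value from below by `tau_min(t)` after each update, so no edge's selection probability ever collapses to zero in finite time.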

Originality/value

This paper focuses on the analysis and proof of the convergence theory of ACO subclass algorithms in order to explore the internal mechanism of the ACO algorithm.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 32 no. 2
Type: Research Article
ISSN: 0332-1649

Keywords
Article
Publication date: 1 August 1999

Gh. Juncu

Abstract

The paper analyses the preconditioning of non-linear nonsymmetric equations with approximations of the discrete Laplace operator. The test problems are non-linear 2-D elliptic equations that describe natural convection (Darcy flow) in a porous medium. The standard second-order accurate finite difference scheme is used to discretize the model equations. The discrete approximations are solved with a double iterative process using the Newton method as the outer iteration and preconditioned generalised conjugate gradient (PGCG) methods as the inner iteration. Three PGCG algorithms, CGN, CGS and GMRES, are tested. Preconditioning with discrete Laplace operator approximations consists of replacing the solve with the preconditioner by a few iterations of an appropriate iterative scheme. Two such iterative algorithms are tested: incomplete Cholesky (IC) and multigrid (MG). The numerical results show that MG preconditioning leads to mesh independence. CGS is the most robust algorithm, but its efficiency is lower than that of GMRES.
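The double iterative process can be sketched as an outer Newton loop that delegates each linear Newton system to an inner solver. Here a direct `np.linalg.solve` stands in for the paper's PGCG inner iteration; the function name and structure are illustrative assumptions.

```python
import numpy as np

def newton_outer(F, J, x0, inner_solve, tol=1e-10, max_outer=20):
    """Outer Newton iteration: at each step the linear system
    J(x) dx = -F(x) is handed to `inner_solve`, which in the paper's
    setting would be a preconditioned generalised conjugate gradient
    (PGCG) method such as CGN, CGS or GMRES."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        r = F(x)
        if np.linalg.norm(r) < tol:   # outer convergence test on the residual
            break
        x = x + inner_solve(J(x), -r)  # inner iteration: solve the Newton system
    return x
```

Swapping `inner_solve` for an iterative Krylov routine with an IC or MG preconditioner reproduces the structure described in the abstract without changing the outer loop.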

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 9 no. 5
Type: Research Article
ISSN: 0961-5539

Keywords
Article
Publication date: 9 April 2019

Mohammad Mortezazadeh and Liangzhu (Leon) Wang

Abstract

Purpose

The purpose of this paper is the development of a new density-based (DB) semi-Lagrangian method to speed up the conventional pressure-based (PB) semi-Lagrangian methods.

Design/methodology/approach

The semi-Lagrangian-based solvers are typically PB, i.e. semi-Lagrangian pressure-based (SLPB) solvers, where a Poisson equation is solved to obtain the pressure field and ensure a divergence-free flow field. As an elliptic-type equation, the Poisson equation often relies on an iterative solution, so it can create a challenge for parallel computing and a computing-speed bottleneck. This study proposes a new DB semi-Lagrangian method, the semi-Lagrangian artificial compressibility (SLAC) method, which replaces the Poisson equation with a hyperbolic continuity equation containing an added artificial compressibility (AC) term, so a time-marching solution is possible. Without the Poisson equation, the proposed SLAC solver is faster, particularly for cases with more computational cells, and better suited for parallel computing.
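The time-marching replacement of the Poisson solve can be illustrated by a single artificial-compressibility pressure update, dp/dt + β ∇·u = 0, discretized explicitly. This is a generic sketch, not the authors' SLAC implementation; the collocated grid layout and the parameter β are assumptions.

```python
import numpy as np

def ac_pressure_update(p, u, v, dx, dy, dt, beta):
    """One explicit artificial-compressibility pressure step:
    p_new = p - dt * beta * div(u), with div(u) from central differences
    on interior nodes.  This replaces the elliptic Poisson solve of a
    pressure-based (SLPB) solver with a purely local, time-marching update."""
    div = np.zeros_like(p)
    div[1:-1, 1:-1] = ((u[1:-1, 2:] - u[1:-1, :-2]) / (2.0 * dx)
                       + (v[2:, 1:-1] - v[:-2, 1:-1]) / (2.0 * dy))
    return p - dt * beta * div
```

Because the update touches each cell independently, it parallelizes trivially (e.g. under OpenMP or on a GPU), which is the computational advantage the abstract emphasizes.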

Findings

The study compares the accuracy and computing speed of the SLPB and SLAC solvers on the lid-driven cavity flow and step-flow problems. It shows that the proposed SLAC solver achieves the same results as the SLPB solver, with a 3.03 times speed-up before OpenMP parallelization and a 3.35 times speed-up for the large grid case (512 × 512) after parallelization. The speed-up can improve further for larger cases, because the condition number of the coefficient matrices of the Poisson equation grows with the grid size.

Originality/value

This paper proposes a method of avoiding solving the Poisson equation, a typical computing bottleneck for semi-Lagrangian-based fluid solvers by converting the conventional PB solver (SLPB) to the DB solver (SLAC) through the addition of the AC term. The method simplifies and facilitates the parallelization process of semi-Lagrangian-based fluid solvers for modern HPC infrastructures, such as OpenMP and GPU computing.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 29 no. 6
Type: Research Article
ISSN: 0961-5539

Keywords
Article
Publication date: 1 February 1990

R.K. Singh, T. Kant and A. Kakodkar

Abstract

This paper demonstrates the capability of the staggered solution procedure for coupled fluid-structure interaction problems. Three possible computational paths for coupled problems are described and critically examined for a variety of coupled problems with different types of mesh partitioning schemes. The results are compared with results reported for the continuum mechanics priority approach – a method which was very popular until recently. Optimum computational paths and mesh partitionings for two-field problems are indicated. The staggered solution procedure is shown to be quite effective when the optimum path and partitionings are selected.

Details

Engineering Computations, vol. 7 no. 2
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 4 November 2014

ShiYang Zhao and Pu Xue

Abstract

Purpose

The purpose of the paper is to improve the calculability of a continuum damage failure model of composite laminates based on Tsai-Wu criteria.

Design/methodology/approach

A technique based on viscous regularization, together with a characteristic element length and the fracture energies of the fiber and matrix, is used in the model.

Findings

The calculability of the material model is improved, and the modified model predicts the behavior of composite structures better.

Originality/value

The convergence problem and the mesh-softening problem are the main concerns for the calculability of a numerical model. To improve convergence, a technique based on viscous regularization of the damage variable is used. Meanwhile, a characteristic element length and the fracture energies of the fiber and matrix are added to the damage constitutive equation to reduce the mesh sensitivity of the numerical results. Finally, a laminated structure with damage is implemented using a User Material Subroutine in ABAQUS/Standard, and the mesh sensitivity and the value of the viscosity are discussed.
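Viscous regularization of a damage variable is commonly implemented as a Duvaut-Lions-type update, d_v_dot = (d - d_v)/η, integrated with backward Euler. The abstract does not give the paper's exact form, so the sketch below is a minimal standard version under that assumption.

```python
def viscous_regularize(d, d_v, dt, eta):
    """Backward-Euler step of the regularised damage variable d_v for
    d_v_dot = (d - d_v) / eta  (Duvaut-Lions-type viscous regularization):
        d_v_new = (dt * d + eta * d_v) / (dt + eta)
    The constitutive update then uses d_v instead of the raw damage d,
    which smooths the tangent stiffness and helps implicit convergence."""
    return (dt * d + eta * d_v) / (dt + eta)
```

For small viscosity `eta` (relative to the increment `dt`), `d_v` tracks `d` closely and the regularization vanishes; larger `eta` trades some accuracy for robustness, which is why the paper discusses the choice of the viscosity value.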

Details

Multidiscipline Modeling in Materials and Structures, vol. 10 no. 4
Type: Research Article
ISSN: 1573-6105

Keywords