Search results
1 – 10 of 452

Roshith Mittakolu, Sarma L. Rani and Dilip Srinivas Sundaram
Abstract
Purpose
A higher-order implicit shock-capturing scheme is presented for the Euler equations based on time linearization of the implicit flux vector rather than the residual vector.
Design/methodology/approach
The flux vector is linearized through a truncated Taylor-series expansion whose leading-order implicit term is an inner product of the flux Jacobian and the vector of differences between the current and previous time step values of conserved variables. The implicit conserved-variable difference vector is evaluated at cell faces by using the reconstructed states at the left and right sides of a cell face and projecting the difference between the left and right states onto the right eigenvectors. Flux linearization also facilitates the construction of implicit schemes with higher-order spatial accuracy (up to third order in the present study). To enhance the diagonal dominance of the coefficient matrix and thereby increase the implicitness of the scheme, wave strengths at cell faces are expressed as the inner product of the inverse of the right eigenvector matrix and the difference in the right and left reconstructed states at a cell face.
Findings
The accuracy of the implicit algorithm at Courant–Friedrichs–Lewy (CFL) numbers greater than unity is demonstrated for a number of test cases comprising one-dimensional (1-D) Sod’s shock tube, quasi 1-D steady flow through a converging-diverging nozzle, and two-dimensional (2-D) supersonic flow over a compression corner and an expansion corner.
Practical implications
The algorithm has the advantage that it does not entail spatial derivatives of the flux Jacobian, so the implicit flux can be readily evaluated using Roe’s approximate Jacobian. As a result, this approach readily facilitates the construction of implicit schemes with high-order spatial accuracy, such as Roe-MUSCL.
Originality/value
A novel finite-volume-based higher-order implicit shock-capturing scheme was developed that uses time linearization of fluxes at cell interfaces.
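As a rough illustration of the wave-strength projection described in this abstract, namely evaluating alpha = R^-1 (U_R - U_L) at a cell face with Roe-averaged right eigenvectors, here is a minimal 1-D Euler sketch. It is illustrative only, not the authors' implementation.

```python
import numpy as np

GAMMA = 1.4

def roe_wave_strengths(UL, UR):
    """Wave strengths alpha = R^{-1} (UR - UL) at a cell face, using
    Roe-averaged right eigenvectors of the 1-D Euler flux Jacobian."""
    def primitives(U):
        rho, m, E = U
        u = m / rho
        p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
        H = (E + p) / rho          # specific total enthalpy
        return rho, u, H

    rhoL, uL, HL = primitives(UL)
    rhoR, uR, HR = primitives(UR)

    # Roe averages (sqrt-density weighting)
    wL, wR = np.sqrt(rhoL), np.sqrt(rhoR)
    u = (wL * uL + wR * uR) / (wL + wR)
    H = (wL * HL + wR * HR) / (wL + wR)
    a = np.sqrt((GAMMA - 1.0) * (H - 0.5 * u * u))   # Roe sound speed

    # Right eigenvectors for the u-a, u, u+a waves (columns)
    R = np.array([[1.0,        1.0,          1.0],
                  [u - a,      u,            u + a],
                  [H - u * a,  0.5 * u * u,  H + u * a]])

    dU = np.asarray(UR, float) - np.asarray(UL, float)
    return np.linalg.solve(R, dU)    # alpha = R^{-1} dU
```

Because the first row of R is all ones, the wave strengths sum to the density difference across the face, which gives a quick consistency check.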
Jarraya Abdessalem, Dammak Fakhreddine, Abid Said and Haddar Mohamed
Abstract
Purpose
This paper aims to describe shape optimization of hyperelastic axisymmetric structures using an exact sensitivity method.
Design/methodology/approach
The whole shape optimization process is carried out by integrating a closed geometric shape in the real space R2 with boundaries defined by B-spline curves. An exact sensitivity analysis and a mathematical programming method (SQP: Sequential Quadratic Programming) are implemented. The design variables are the control-point coordinates, which minimize the von Mises criterion under the constraint that the total material volume of the structure remains constant. The feasibility of the proposed methods is demonstrated by two numerical examples. Results show that the exact Jacobian yields a significant reduction in computing time.
Findings
Numerical examples are presented to illustrate its performance.
Originality/value
In this work, the sensitivity is computed using two numerical methods: an efficient finite-difference scheme and the exact Jacobian.
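The exact-versus-finite-difference comparison this abstract describes can be sketched on a toy objective. The surrogate below is hypothetical (it stands in for a von Mises functional of control-point coordinates); the point is only that an analytic gradient and a central-difference gradient should agree while the exact one avoids the extra function evaluations.

```python
import numpy as np

def von_mises_surrogate(x):
    """Toy stand-in for a von Mises stress functional of the
    control-point coordinates x (hypothetical objective)."""
    return np.sum(x**2) + np.prod(np.sin(x))

def exact_gradient(x):
    """Analytic (exact) sensitivity of the surrogate objective:
    d/dx_i = 2 x_i + cos(x_i) * prod_{j != i} sin(x_j)."""
    g = 2.0 * x
    s = np.sin(x)
    for i in range(len(x)):
        g[i] += np.cos(x[i]) * np.prod(np.delete(s, i))
    return g

def fd_gradient(f, x, h=1e-6):
    """Central finite-difference sensitivity: 2*len(x) extra evaluations."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g
```

For a costly finite element solve, each of those extra evaluations is a full analysis, which is where the computing-time reduction of the exact Jacobian comes from.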
Laurent Gerbaud, Zié Drissa Diarra, Herve Chazal and Lauric Garbuio
Abstract
Purpose
The paper deals with the exact computation of the Jacobian of a time criterion from a numerical simulation of power electronics structures, for sizing by gradient-based optimization algorithms.
Design/methodology/approach
The Runge–Kutta 44 scheme is used to solve the state equations. The generic approach combines numerical and symbolic approaches. The modelling of the static converter is based on ideal switches.
Findings
The paper extends the state equations to differentiate any state variable with respect to a sizing parameter. The integral expressions used for some sizing performances (e.g. average or RMS values) mix symbolic and numerical approaches. Choices are made for the derivatives of extrema, whose search is not a continuous process. An object-oriented implementation allows a generic formulation of some design performances.
Research limitations/implications
The paper aims to propose and test formulations of sizing criteria and their gradients, so the modelling of the study case is carried out manually. Owing to the generic modelling approach used for the power electronics, the model is not completely continuous, so the derivatives with respect to some parameters (e.g. switch controls) must be computed by finite differences. However, as the global behaviour is continuous, this is not critical.
Practical implications
The proposed formulations can be easily applied to simple static converter applications. For applications with large state equations, it should be possible to use the basic switch model employed in power electronics simulation tools. The solving process and the sizing criteria formulation (with their derivatives) are generic and can be instantiated for any study.
Originality/value
The approach proposes formulations giving a numerical sizing dynamic model with a Jacobian computed, if possible, by an exact derivation useful for optimization studies. The approach gives fast simulation and fast computation of the derivatives by combining numerical and analytical approaches.
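Extending the state equations with their parameter derivatives, as this abstract describes, can be sketched on a scalar example. For x' = f(x, a) the sensitivity s = dx/da obeys s' = (df/dx) s + df/da, and both equations are integrated together with a classical RK4 scheme (a generic illustration, not the paper's object-oriented implementation).

```python
import numpy as np

def rk4_step(f, y, t, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def augmented_rhs(a):
    """State x' = -a x together with its sensitivity s = dx/da,
    which obeys s' = (df/dx) s + df/da = -a s - x."""
    def f(t, y):
        x, s = y
        return np.array([-a * x, -a * s - x])
    return f

def integrate(a, x0, t_end, n):
    """Integrate the augmented (state + sensitivity) system with RK4."""
    y = np.array([x0, 0.0])      # initial sensitivity is zero
    dt = t_end / n
    f = augmented_rhs(a)
    for i in range(n):
        y = rk4_step(f, y, i * dt, dt)
    return y
```

For this linear test case the exact values are x(t) = x0 e^{-a t} and s(t) = -t x0 e^{-a t}, so the numerical sensitivity can be checked directly.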
Camillo Genesi and Mario Montagna
Abstract
Purpose
The purpose of this work is to show some efficient techniques for performing PV-PQ node-type switching in multiple power flow computations.
Design/methodology/approach
Reactive generation limits of generation buses must be taken into account to obtain realistic power flow solutions. This may become computationally demanding when many power flow computations are required, as in contingency screening or Monte Carlo simulations. In the present paper, the implementation of efficient PV-PQ node-type switching is examined, with particular emphasis on computational efficiency. Several methods are proposed and compared on the basis of computation speed and accuracy.
Findings
Tests show the efficiency of the proposed methods with reference to actual networks with up to 800 buses.
Originality/value
The classical method of (partial) re-factorisation is not very efficient when many power flow solutions are to be evaluated. In the present work, a different approach is proposed; it is based on grounding each PV node by a fictitious short-circuit branch which is removed when the node type is changed to PQ. This operation is carried out by compensation of the solution and combined with the modifications required for contingency simulation.
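For context, the basic switching rule that all such methods accelerate can be sketched as follows: after a power-flow iteration, a PV bus whose reactive output violates a limit is switched to PQ with Q pinned at that limit. This is the generic textbook logic only; the paper's compensation-based implementation of the switch is not reproduced here.

```python
def enforce_q_limits(bus_types, q_injections, q_min, q_max):
    """Switch PV buses whose reactive injection violates its limits to PQ,
    pinning Q at the violated limit. All arguments are parallel lists
    indexed by bus number (sketch only). Returns the updated types and
    a dict of pinned Q values for the switched buses."""
    pinned = {}
    for i, t in enumerate(bus_types):
        if t != "PV":
            continue
        if q_injections[i] > q_max[i]:
            bus_types[i] = "PQ"
            pinned[i] = q_max[i]
        elif q_injections[i] < q_min[i]:
            bus_types[i] = "PQ"
            pinned[i] = q_min[i]
    return bus_types, pinned
```

In a full solver this check runs between Newton iterations, and the cost the paper targets is the re-factorisation (or, in its approach, the compensation update) that each type change triggers.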
Benoit Delinchant, Frédéric Wurtz, João Vasconcelos and Jean-Louis Coulomb
Abstract
Purpose
The purpose of this paper is to make models easily accessible for testing and comparing the optimization algorithms we develop.
Design/methodology/approach
To this end, the paper proposes an optimization framework based on software components, web services, and plugins to exploit these models in different environments.
Findings
The paper illustrates the discussion with optimizations in Matlab™ and R (www.r-project.org) of a transformer described and exploitable from the internet.
Originality/value
The originality lies in making the coupling of simulation models and optimization algorithms easy to implement using software components, web services, and plugins.
Abstract
Purpose
The initial stiffness method has been extensively adopted for elasto‐plastic finite element analysis. The main problem associated with the initial stiffness method, however, is its slow convergence, even when it is used in conjunction with acceleration techniques. The Newton‐Raphson method has a rapid convergence rate, but its implementation resorts to non‐symmetric linear solvers, and hence the memory requirement may be high. The purpose of this paper is to develop more advanced solution techniques which may overcome the above problems associated with the initial stiffness method and the Newton‐Raphson method.
Design/methodology/approach
In this work, the accelerated symmetric stiffness matrix methods, which cover the accelerated initial stiffness methods as special cases, are proposed for non‐associated plasticity. Within the computational framework for the accelerated symmetric stiffness matrix techniques, some symmetric stiffness matrix candidates are investigated and evaluated.
Findings
Numerical results indicate that for the accelerated symmetric stiffness methods, the elasto‐plastic constitutive matrix, which is constructed by mapping the yield surface of the equivalent material to the plastic potential surface, appears to be appealing. Even when combined with the Krylov iterative solver using a loose convergence criterion, they may still provide good nonlinear convergence rates.
Originality/value
Compared to the work by Sloan et al., the novelty of this study is that a symmetric stiffness matrix is proposed to be used in conjunction with acceleration schemes and it is shown to be more appealing; it is assembled from the elasto‐plastic constitutive matrix by mapping the yield surface of the equivalent material to the plastic potential surface. The advantage of combining the proposed accelerated symmetric stiffness techniques with the Krylov subspace iterative methods for large‐scale applications is also emphasized.
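The acceleration idea underlying such schemes can be sketched on a scalar problem: a modified-Newton iteration that reuses a fixed (initial) stiffness, combined with Aitken Delta-squared relaxation of the update. This is one common acceleration scheme, shown for illustration; the paper's symmetric stiffness construction from the plastic potential surface is not reproduced here.

```python
def accelerated_initial_stiffness(residual, K0_inv, u0, tol=1e-10, max_it=200):
    """Modified-Newton iteration u <- u + omega * du with du = -K0^{-1} r(u),
    where the relaxation factor omega is updated by the scalar Aitken rule.
    Returns the converged solution and the iteration count (scalar sketch)."""
    u = u0
    omega = 1.0
    du_prev = None
    for it in range(max_it):
        r = residual(u)
        if abs(r) < tol:
            return u, it
        du = -K0_inv * r
        if du_prev is not None and abs(du - du_prev) > 0.0:
            omega = -omega * du_prev / (du - du_prev)   # Aitken update
        u = u + omega * du
        du_prev = du
    return u, max_it
```

On r(u) = u^3 - 2 with the initial stiffness K0 = r'(u0) held fixed, the plain iteration converges slowly, while the Aitken-relaxed version reaches the root 2^(1/3) in a handful of iterations.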
Javier Principe and Ramon Codina
Abstract
Purpose
The purpose of this paper is to describe a finite element formulation to approximate thermally coupled flows using both the Boussinesq and the low Mach number models with particular emphasis on the numerical implementation of the algorithm developed.
Design/methodology/approach
The formulation, which allows us to consider convection-dominated problems using equal-order interpolation for all the variables of the problem, is based on the subgrid scale concept. The full Newton linearization strategy gives rise to a monolithic treatment of the coupling of variables, whereas some fixed-point schemes permit the segregated treatment of velocity-pressure and temperature. A relaxation scheme based on the Armijo rule has also been developed.
Findings
A full Newton linearization turns out to be very efficient for steady-state problems and very robust when combined with a line search strategy. A segregated treatment of velocity-pressure and temperature proves more appropriate for transient problems.
Research limitations/implications
A fractional step scheme, splitting also momentum and continuity equations, could be further analysed.
Practical implications
The results presented in the paper are useful to decide the solution strategy for a given problem.
Originality/value
The numerical implementation of a stabilized finite element approximation of thermally coupled flows is described. The implementation algorithm is developed considering several possibilities for the solution of the discrete nonlinear problem.
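The Armijo-rule relaxation mentioned in this abstract is a backtracking line search enforcing sufficient decrease, f(x + a p) <= f(x) + c a grad(x)^T p, wrapped around the Newton update. A generic sketch, not the paper's finite element implementation:

```python
import numpy as np

def armijo_step(f, grad, x, p, alpha0=1.0, c=1e-4, rho=0.5, max_back=30):
    """Backtracking line search enforcing the Armijo sufficient-decrease
    condition along the search direction p."""
    fx = f(x)
    slope = np.dot(grad(x), p)
    a = alpha0
    for _ in range(max_back):
        if f(x + a * p) <= fx + c * a * slope:
            return a
        a *= rho
    return a

def newton_with_armijo(f, grad, hess, x, tol=1e-10, max_it=50):
    """Newton iteration with the Armijo-relaxed step length."""
    for _ in range(max_it):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = -np.linalg.solve(hess(x), g)     # Newton direction
        x = x + armijo_step(f, grad, x, p) * p
    return x
```

Far from the solution the backtracking loop damps the step (the robustness the abstract reports), while near the solution the full step a = 1 is accepted and quadratic convergence is retained.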
Sez Atamturktur and Ismail Farajpour
Abstract
Purpose
Physical phenomena interact with each other in ways such that one phenomenon cannot be analyzed without considering the others. To account for such interactions between multiple phenomena, partitioning has become a widely implemented computational approach. Partitioned analysis involves the exchange of inputs and outputs from constituent models (partitions) via iterative coupling operations, through which the individually developed constituent models are allowed to affect each other’s inputs and outputs. Partitioning, whether multi-scale or multi-physics in nature, is a powerful technique that can yield coupled models able to predict the behavior of a system more complex than the individual constituents themselves. The paper aims to discuss these issues.
Design/methodology/approach
Although partitioned analysis has been a key mechanism in developing more realistic predictive models over the last decade, its iterative coupling operations may lead to the propagation and accumulation of uncertainties and errors that, if unaccounted for, can severely degrade the coupled model predictions. This problem can be alleviated by reducing uncertainties and errors in individual constituent models through further code development. However, finite resources may limit code development efforts to just a portion of possible constituents, making it necessary to prioritize constituent model development for efficient use of resources. Thus, the authors propose here an approach along with its associated metric to rank constituents by tracing uncertainties and errors in coupled model predictions back to uncertainties and errors in constituent model predictions.
Findings
The proposed approach evaluates the deficiency (relative degree of imprecision and inaccuracy), importance (relative sensitivity) and cost of further code development for each constituent model, and combines these three factors in a quantitative prioritization metric. The benefits of the proposed metric are demonstrated on a structural portal frame using an optimization-based uncertainty inference and coupling approach.
Originality/value
This study proposes an approach and its corresponding metric to prioritize the improvement of constituents by quantifying the uncertainties, bias contributions, sensitivities, and cost of the constituent models.
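A prioritization of this kind can be sketched as a benefit-per-cost score over the three factors the abstract names. The specific combination below (deficiency times importance divided by cost) is an assumption for illustration; the paper's actual metric may combine the factors differently.

```python
def prioritize(constituents):
    """Rank constituent models for further code development by an assumed
    score = deficiency * importance / cost. Each constituent is a dict with
    'name', 'deficiency', 'importance' and 'cost' entries (sketch only)."""
    scored = [(c["name"], c["deficiency"] * c["importance"] / c["cost"])
              for c in constituents]
    return sorted(scored, key=lambda t: -t[1])   # highest score first
```

The ranking directs limited development resources to the constituent whose improvement buys the largest reduction in coupled-prediction uncertainty per unit cost.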
Wilma Polini and Andrea Corrado
Abstract
Purpose
The purpose of this paper is to carry out a tolerance analysis with geometric tolerances by means of the Jacobian model. Tolerance analysis is an important task in designing and manufacturing high-precision mechanical assemblies, and it has received considerable attention in the literature. The Jacobian model is one of the methods proposed in the literature for tolerance analysis, but it cannot deal with geometric tolerances for mechanical assemblies. Geometric tolerances may not be neglected, as they significantly influence the functional requirements of assemblies.
Design/methodology/approach
This paper presents how to deal with geometric tolerances when a tolerance analysis is carried out by means of a Jacobian model, for 2D and 3D assemblies in which the geometric tolerances applied to the components involve only translational deviations. The three proposed approaches modify the expression of the stack-up function to overcome the shortcoming of the Jacobian model that geometric errors cannot be processed.
Findings
The proposed approach has been applied to a case study. The results show that, when a statistical approach is implemented, the Jacobian model with the three developed methods gives results very similar to those of other models from the literature, such as the vector loop and variational models.
Research limitations/implications
In particular, the proposed approach may be applied only when the applied geometric tolerances involve translational variations in 3D assemblies.
Practical implications
Tolerance analysis is a valid tool for foreseeing geometric interferences among the components of an assembly before the physical assembly is built, thereby reducing manufacturing costs.
Originality/value
The original contribution of the paper consists of three methods that enable the Jacobian model to account for form and geometric deviations.
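The core of any Jacobian-based stack-up, which the paper's three methods extend to geometric deviations, is a linearized sensitivity chain: the functional-requirement variation is a Jacobian row applied to the component deviations. A minimal 1-D sketch of the worst-case and statistical (root-sum-square) combinations, not the paper's extended formulation:

```python
import numpy as np

def stackup(J, tol):
    """Linearized stack-up through a row J of Jacobian sensitivities:
    worst-case and root-sum-square (statistical) variation of the
    functional requirement for component tolerances tol (sketch only)."""
    J = np.asarray(J, float)
    tol = np.asarray(tol, float)
    worst = np.sum(np.abs(J) * tol)           # all deviations at limits
    rss = np.sqrt(np.sum((J * tol) ** 2))     # independent random deviations
    return worst, rss
```

The statistical result is always tighter than the worst case, which is why the case-study comparison in the abstract is framed in terms of the statistical approach.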
Jean-Jacques Forneron and Serena Ng
Abstract
This paper considers properties of an optimization-based sampler for targeting the posterior distribution when the likelihood is intractable. It uses auxiliary statistics to summarize information in the data and does not directly evaluate the likelihood associated with the specified parametric model. Our reverse sampler approximates the desired posterior distribution by first solving a sequence of simulated minimum distance problems. The solutions are then reweighted by an importance ratio that depends on the prior and the volume of the Jacobian matrix. By a change of variable argument, the output consists of draws from the desired posterior distribution. Optimization always results in acceptable draws. Hence, when the minimum distance problem is not too difficult to solve, combining importance sampling with optimization can be much faster than the method of Approximate Bayesian Computation that bypasses optimization.
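The reverse-sampler recipe can be sketched on a toy case where the minimum distance problem has a closed form: the Gaussian location model y ~ N(theta, 1) with the sample mean as auxiliary statistic. Here each simulated minimum-distance solution is theta_hat_b = mean(y_obs) - mean(eps_b), the Jacobian volume is 1, and the draws are reweighted by the prior alone. This is an illustrative toy only, not the authors' code.

```python
import numpy as np

def reverse_sampler_location(y_obs, n_draws, n_sim, prior_pdf, rng):
    """Reverse-sampler sketch for y ~ N(theta, 1) with the sample mean
    as auxiliary statistic. Each draw solves (in closed form) a simulated
    minimum-distance problem; draws are then importance-reweighted by the
    prior (the Jacobian volume is 1 for this location model)."""
    s_obs = y_obs.mean()
    draws = np.empty(n_draws)
    for b in range(n_draws):
        eps = rng.standard_normal(n_sim)     # simulated shocks at theta = 0
        draws[b] = s_obs - eps.mean()        # exact min-distance solution
    w = prior_pdf(draws)
    return draws, w / w.sum()                # self-normalized weights
```

Every optimization produces an acceptable draw, so unlike accept/reject ABC no simulated draw is wasted; the cost shifts from rejection to solving the minimum distance problem.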