Search results

1 – 10 of 142
Open Access
Article
Publication date: 21 December 2023

Rafael Pereira Ferreira, Louriel Oliveira Vilarinho and Americo Scotti

Abstract

Purpose

This study aims to propose and evaluate improvements to the basic-pixel algorithm (a strategy that generates continuous trajectories to fill out an entire surface) towards performance gains. A further objective is to investigate the operational efficiency and effectiveness of the enhanced version compared with conventional strategies.

Design/methodology/approach

For the first objective, the proposed methodology is to apply the proposed improvements to the basic-pixel strategy, test it on three demonstrative parts and statistically evaluate its performance using the trajectory distance criterion. For the second objective, the enhanced-pixel strategy is compared with conventional strategies in terms of trajectory distance, build time and the number of arc starts and stops (operational efficiency), as well as adherence to the nominal geometry of a part (operational effectiveness).

Findings

The results showed that the improvements proposed to the basic-pixel strategy could generate continuous trajectories with shorter distances and comparable build times (operational efficiency). Regarding operational effectiveness, the parts built by the enhanced-pixel strategy presented lower dimensional deviation than those built by the other strategies studied. The enhanced-pixel strategy therefore appears to be a good candidate for building more complex printable parts while delivering operational efficiency and effectiveness.

Originality/value

This paper presents an evolution of the basic-pixel strategy (a space-filling strategy), introducing new elements into the algorithm and demonstrating the resulting performance improvement. An interesting comparison between the enhanced-pixel strategy and conventional strategies is also presented in terms of operational efficiency and effectiveness.

Details

Rapid Prototyping Journal, vol. 30 no. 11
Type: Research Article
ISSN: 1355-2546

Article
Publication date: 20 June 2016

Sajan Kapil, Prathamesh Joshi, Hari Vithasth Yagani, Dhirendra Rana, Pravin Milind Kulkarni, Ranjeet Kumar and K.P. Karunakaran

Abstract

Purpose

In the additive manufacturing (AM) process, the physical properties of products made by fractal toolpaths are better than those of products made by conventional toolpaths. It is also desirable to minimize the number of tool retractions. The purpose of this study is to describe three different methods of generating fractal-based computer numerical control (CNC) toolpaths for area filling of a closed curve with minimal or zero tool retractions.

Design/methodology/approach

This work describes three different methods of generating fractal-based CNC toolpaths for area filling of a closed curve with minimal or zero tool retractions. In the first method, a large fractal square is placed over the outer boundary and the unwanted remainder of the curve is trimmed away. To reduce the number of retractions, the ends of the trimmed toolpath are connected in such a way that overlap with the existing toolpath is avoided. In the second method, the fractal is trimmed as in the first method, but the ends of the trimmed toolpath are connected so that overlap occurs at the boundaries only. The toolpath in the third method is a combination of fractal and zigzag curves. This toolpath can fill a given connected area in a single pass without any tool retraction or toolpath overlap, within a tolerance equal to the stepover of the toolpath.
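
The first two methods, overlaying a space-filling curve and trimming it to the region, can be sketched in a few lines. The sketch below is illustrative only and is not the authors' algorithm: it uses a Hilbert curve as the space-filling fractal, a caller-supplied point-in-region test, and counts each break in the trimmed path as one tool retraction.

```python
def hilbert_d2xy(order, d):
    """Map distance d along a Hilbert curve of the given order to a grid
    cell (x, y); consecutive d values give adjacent cells."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                 # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y


def trim_toolpath(order, inside):
    """Overlay the space-filling curve on the bounding grid, keep only the
    cells inside the target region, and count the gaps: each gap in the
    kept sequence corresponds to one tool retraction."""
    n = 1 << order
    kept, retractions, prev_in = [], 0, True
    for d in range(n * n):
        p = hilbert_d2xy(order, d)
        if inside(p):
            if kept and not prev_in:
                retractions += 1
            kept.append(p)
            prev_in = True
        else:
            prev_in = False
    return kept, retractions
```

Connecting the loose ends of the kept segments, as the first two methods described above do, is what then reduces the retraction count.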

Findings

The generated toolpaths have several applications in AM and constant Z-height surface finishing. Experiments have been performed to verify the toolpaths by depositing material using the hybrid layered manufacturing process.

Research limitations/implications

The third toolpath method is suitable only for the hybrid layered manufacturing process, because its toolpath overlapping tolerance may not be adequate for other AM processes.

Originality/value

Development of a CNC toolpath for AM, specifically hybrid layered manufacturing, that can completely fill any arbitrary connected area in a single pass while maintaining a constant stepover.

Details

Rapid Prototyping Journal, vol. 22 no. 4
Type: Research Article
ISSN: 1355-2546

Article
Publication date: 4 September 2018

Muhannad Aldosary, Jinsheng Wang and Chenfeng Li

Abstract

Purpose

This paper aims to provide a comprehensive review of uncertainty quantification methods supported by evidence-based comparison studies. Uncertainties are widely encountered in engineering practice, arising from such diverse sources as heterogeneity of materials, variability in measurement, lack of data and ambiguity in knowledge. Academia and industry have long researched uncertainty quantification (UQ) methods to quantitatively account for the effects of various input uncertainties on the system response. Despite the rich literature, UQ is not an easy subject for novice researchers and practitioners, as many different methods and techniques coexist with inconsistent input/output requirements and analysis schemes.

Design/methodology/approach

This confusing state of affairs significantly hampers the research progress and practical application of UQ methods in engineering. In the context of engineering analysis, UQ research efforts are concentrated in two largely separate fields: structural reliability analysis (SRA) and the stochastic finite element method (SFEM). This paper provides a state-of-the-art review of SRA and SFEM, covering both technology and application aspects. Moreover, unlike standard survey papers that focus primarily on description and explanation, a thorough and rigorous comparative study is performed to test all UQ methods reviewed in the paper on a common set of representative examples.

Findings

Over 20 uncertainty quantification methods in the fields of structural reliability analysis and stochastic finite element methods are reviewed and rigorously tested on carefully designed numerical examples. They include FORM/SORM, importance sampling, subset simulation, the response surface method, surrogate methods, polynomial chaos expansion, the perturbation method and the stochastic collocation method, among others. The review and comparison tests draw conclusions not only on the accuracy and efficiency of each method but also on its applicability to different types of uncertainty propagation problems.
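
As a minimal illustration of two of the methods listed, crude Monte Carlo sampling against the closed-form FORM result, consider a linear limit state in standard normal space, the one case where FORM is exact. This is a hedged sketch, not drawn from the paper's examples:

```python
import math
import random


def form_pf_linear(beta):
    """Exact failure probability for a linear limit state in standard
    normal space: pf = Phi(-beta), which is what FORM returns."""
    return 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))


def monte_carlo_pf(g, ndim, n, seed=0):
    """Crude Monte Carlo estimate of P[g(u) <= 0] for u ~ N(0, I)."""
    rng = random.Random(seed)
    fails = sum(
        1 for _ in range(n)
        if g([rng.gauss(0.0, 1.0) for _ in range(ndim)]) <= 0.0)
    return fails / n


# Linear limit state with reliability index beta = 3 (pf ~ 1.35e-3):
beta = 3.0
g = lambda u: beta - (u[0] + u[1]) / math.sqrt(2.0)
```

For this linear case the two estimates agree; the paper's comparisons concern the nonlinear and high-dimensional cases where they do not.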

Originality/value

The research fields of structural reliability analysis and stochastic finite element methods have largely been developed separately, although both tackle uncertainty quantification in engineering problems. For the first time, all major uncertainty quantification methods in both fields are reviewed and rigorously tested on a common set of examples. Critical opinions and concluding remarks are drawn from the rigorous comparative study, providing objective evidence-based information for further research and practical applications.

Details

Engineering Computations, vol. 35 no. 6
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 3 July 2017

Andrea Da Ronch, Marco Panzeri, M. Anas Abd Bari, Roberto d’Ippolito and Matteo Franciolini

Abstract

Purpose

The purpose of this paper is to document an efficient and accurate approach to generate aerodynamic tables using computational fluid dynamics. This is demonstrated in the context of a concept transport aircraft model.

Design/methodology/approach

Two design-of-experiments algorithms combined with surrogate modelling are investigated. An adaptive algorithm is compared with an industry-standard algorithm used as a benchmark. Numerical experiments are obtained by solving the Reynolds-averaged Navier–Stokes equations on a large computational grid.
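
The contrast between a one-shot and an adaptive design of experiments can be sketched in one dimension. The sketch below is illustrative only (the paper's algorithm and surrogate differ): the surrogate is piecewise linear, and for simplicity the error is measured against the true response directly rather than estimated by cross-validation.

```python
def adaptive_doe(f, a, b, n_init=3, n_total=9):
    """Greedy 1-D adaptive sampling: start from a coarse uniform design,
    then repeatedly sample the midpoint of the interval where the
    piecewise-linear surrogate's error against f is largest."""
    xs = [a + (b - a) * i / (n_init - 1) for i in range(n_init)]
    ys = [f(x) for x in xs]
    while len(xs) < n_total:
        def midpoint_error(i):
            xm = (xs[i] + xs[i + 1]) / 2.0
            return abs(f(xm) - (ys[i] + ys[i + 1]) / 2.0)
        i = max(range(len(xs) - 1), key=midpoint_error)
        xm = (xs[i] + xs[i + 1]) / 2.0
        xs.insert(i + 1, xm)
        ys.insert(i + 1, f(xm))
    return xs, ys
```

Each new experiment is spent where the current surrogate is worst, which is the intuition behind the order-of-magnitude error reduction reported below.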

Findings

This study demonstrates that a surrogate model built upon an adaptive design of experiments strategy achieves a higher prediction capability than that built upon a traditional strategy. This is quantified in terms of the sum of the squared error between the surrogate model predictions and the computational fluid dynamics results. The error metric is reduced by about one order of magnitude compared to the traditional approach.

Practical implications

This work lays the ground to obtain more realistic aerodynamic predictions earlier in the aircraft design process at manageable costs, improving the design solution and reducing risks. This may be equally applied in the analysis of other complex and non-linear engineering phenomena.

Originality/value

This work explores the potential benefits of an adaptive design of experiment algorithm within a prototype working environment, whereby the maximum number of experiments is limited and a large parameter space is investigated.

Details

Aircraft Engineering and Aerospace Technology, vol. 89 no. 4
Type: Research Article
ISSN: 1748-8842

Article
Publication date: 29 June 2012

Muhammad Aamir Raza and Wang Liang

Abstract

Purpose

During any design phase, the associated process variations and uncertainties can cause the design to deviate from its expected performance. The purpose of this paper is to propose a robust design optimization (RDO) strategy for the 3D grain design of a dual thrust solid rocket motor (DTRM) under uncertainties in design parameters.

Design/methodology/approach

The methodology combines the design of complex 3D grain geometry with a hybrid optimization approach, using a genetic algorithm for global search and simulated annealing for local refinement, while accounting for uncertainties in the design parameters. The robustness of the optimized design is assessed for worst-case parameter deviations through sensitivity analysis via stochastic Monte Carlo simulation, considering the variance about the design parameters' means.
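
The global-then-local structure of such a hybrid can be sketched as follows. This is a generic illustration, not the authors' implementation; cheap random search stands in for the genetic algorithm stage:

```python
import math
import random


def hybrid_optimize(f, bounds, seed=0):
    """Minimise f over [lo, hi]: a coarse global stage (random search,
    standing in for the genetic algorithm) followed by a simulated
    annealing local stage."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Global stage: best of 300 random candidates.
    x = min((rng.uniform(lo, hi) for _ in range(300)), key=f)
    best = x
    temp = 1.0
    # Local stage: annealed random walk around the global candidate.
    for _ in range(2000):
        cand = min(hi, max(lo, x + rng.gauss(0.0, 0.1 * (hi - lo))))
        delta = f(cand) - f(x)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = cand
            if f(x) < f(best):
                best = x
        temp *= 0.995
    return best
```

The global stage narrows the design space; the annealed local stage then refines the candidate, mirroring the division of labour described above.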

Findings

An important achievement of this methodology is its ability to evaluate and optimize propulsion system performance in the complex scenario of an intricate 3D geometry under uncertainty. The study shows that the objective of maximizing the average thrust at both levels could be achieved by the proposed optimization technique while satisfying the constraint conditions. The technique also proved to be of great help in reducing the design space for optimization and increasing the computational quality.

Originality/value

This is the first paper to address dual thrust solid rocket motor grain design under uncertainties using a robust design and hybrid optimization approach.

Details

Aircraft Engineering and Aerospace Technology, vol. 84 no. 4
Type: Research Article
ISSN: 0002-2667

Article
Publication date: 22 March 2013

Yanfeng Xing

Abstract

Purpose

The key control characteristics (KCCs) are very important for controlling the dimensional quality of the final product. The purpose of this paper is to propose an optimization algorithm and design rules for KCCs by optimizing the KCCs of 2D and 3D workpieces based on equations and candidate locating points.

Design/methodology/approach

This paper analyzes the optimization process for 2D and 3D rectangular workpieces based on equations and candidate locating points using the fruit fly optimization algorithm (FOA). To reduce the number of variables in the algorithm, an improved fruit fly optimization algorithm (IFOA) is presented. Moreover, by comparing different objective functions, the Euclidean norm of the inverse Jacobian is chosen as the objective function for optimizing KCCs. Finally, a side frame assembly case is presented to illustrate the design and optimization of KCCs through IFOA, and the results show that the proposed method is efficient and precise.

Findings

The paper provides some reasonable conclusions for the design and optimization of KCCs.

Originality/value

This paper designs and optimizes KCCs of fixtures and parts to improve dimensional quality of the final product.

Details

Kybernetes, vol. 42 no. 3
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 7 September 2015

Amer F Rafique, Qasim Zeeshan, Ali Kamran and Liang Guozhu

Abstract

Purpose

The paper aims to extend the knowledge base for the design and optimization of the Star grain, which is well known for its simplicity, reliability and efficiency. The Star grain configuration has been among the most extensively used configurations for the past 60 years. To bridge the gap, the hitherto unexplored areas, namely the treatment of ballistic constraints, non-neutral traces, freedom from generalized design equations and sensitivity analysis of the optimum design point, are treated in detail. The foremost purpose is to expand the design domain by considering the entire convex Star family under both neutral and non-neutral conditions.

Design/methodology/approach

This research effort optimizes the Star grain configuration for use in solid rocket motors with a ballistic objective function (effective total impulse) and parametric modelling of the entire convex Star grain family using a solid modelling module. Internal ballistics calculations are performed using the equilibrium pressure method. The optimization process combines a Latinized-hypercube-generated initial population with a swarm intelligence optimizer's ability to search the design space. Candidate solutions are passed to the solid modelling module to simulate the burning process. The optimal design points and the critical geometrical and ballistic parameters (throat diameter, burn rate, characteristic velocity and propellant density) are then tested for sensitivities through Monte Carlo simulation.
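
Latinized (Latin hypercube) generation of the initial population can be sketched as follows; this is a generic illustration, not the paper's code:

```python
import random


def latin_hypercube(n, dims, seed=0):
    """Generate n points in the unit hypercube so that every dimension is
    stratified: each of the n equal-width strata contains exactly one
    point, with strata paired across dimensions by random permutation."""
    rng = random.Random(seed)
    cols = []
    for _ in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)
        cols.append([(k + rng.random()) / n for k in strata])
    return [tuple(col[i] for col in cols) for i in range(n)]
```

Scaling each coordinate to the bounds of a design variable gives a stratified initial population of the kind handed to the swarm optimizer.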

Findings

The proposed approach takes the design of the Star grain configuration to a new level through the introduction of parametric modelling and sensitivity analysis, thus offering practical optimum design points for use in various mission scenarios. The proposed design and optimization process provides essential data sets that can be useful prior to the production of a large number of solid rocket motors. The results also support the adequacy and practicality of the design from an engineering perspective.

Research limitations/implications

Results showed that a few design parameters are sensitive to uncertainties. These uncertainties can be investigated in future work using a robust design method.

Practical implications

Monte Carlo simulation can prove vital when a large number of motor units are produced, and it highlights the need to obtain statistical data during manufacturing.

Originality/value

This paper fulfils the long-sought requirement of freedom from generalized sets of design equations for commonly used Star grain configurations.

Details

Aircraft Engineering and Aerospace Technology: An International Journal, vol. 87 no. 5
Type: Research Article
ISSN: 0002-2667

Article
Publication date: 18 November 2019

Guanying Huo, Xin Jiang, Zhiming Zheng and Deyi Xue

Abstract

Purpose

Metamodeling is an effective method for approximating the relations between input and output parameters when significant experimental and simulation effort is required to collect the data needed to build those relations. This paper aims to develop a new sequential sampling method for adaptive metamodeling using data with highly nonlinear relations between input and output parameters.

Design/methodology/approach

In this method, the Latin hypercube sampling method is used to generate the initial data, and the kriging method is used to construct the metamodel. The input parameter values at which the next output data are collected, used to update the current metamodel, are determined based on the quality of the data in both the input and output parameter spaces. Uniformity is used to evaluate data in the input parameter space; leave-one-out errors and sensitivities are used to evaluate data in the output parameter space.
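
The leave-one-out error criterion can be sketched as follows. For brevity the kriging model is replaced by an ordinary least-squares line, so `fit_line` is an illustrative stand-in, not the paper's metamodel:

```python
def fit_line(xs, ys):
    """Ordinary least-squares line, a stand-in for the kriging model."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return lambda x: a + b * x


def leave_one_out_errors(xs, ys, fit):
    """For each sample, refit the metamodel without it and record the
    prediction error at the held-out point; large errors flag regions
    where the next sample is most informative."""
    errs = []
    for i in range(len(xs)):
        model = fit(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        errs.append(abs(model(xs[i]) - ys[i]))
    return errs
```

The point with the largest held-out error marks the region of the input space where the current metamodel is least trustworthy.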

Findings

The new method has been compared with existing methods to demonstrate its effectiveness both in approximation and in solving global optimization problems. Finally, an engineering case is used to verify the method further.

Originality/value

This paper provides an effective sequential sampling method for adaptive metamodeling to approximate highly nonlinear relations between input and output parameters.

Details

Engineering Computations, vol. 37 no. 3
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 16 April 2018

Jinglai Wu, Zhen Luo, Nong Zhang and Wei Gao

Abstract

Purpose

This paper aims to study sampling methods (or design of experiments), which have a large influence on the performance of the surrogate model. To improve the adaptability of modelling, a new sequential sampling method, termed the sequential Chebyshev sampling method (SCSM), is proposed in this study.

Design/methodology/approach

High-order polynomials are used to construct the global surrogate model, retaining the advantages of traditional low-order polynomial models while overcoming their disadvantage in accuracy. First, the zeros of the Chebyshev polynomial with the highest allowable order are used as sampling candidates to improve the stability and accuracy of the high-order polynomial model. Second, initial sampling points are selected from the candidates using a coordinate alternation algorithm, which keeps the initial sampling set uniformly distributed. Third, a fast sequential sampling scheme based on the space-filling principle is developed to collect more samples from the candidates, with the order of the polynomial model updated during this procedure. The final surrogate model is the polynomial with the largest adjusted R-squared after the sequential sampling terminates.
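
Two ingredients named above, Chebyshev zeros as sampling candidates and the adjusted R-squared used for model selection, can be sketched as follows (an illustrative sketch, not the SCSM implementation):

```python
import math


def chebyshev_nodes(n, a=-1.0, b=1.0):
    """Zeros of the degree-n Chebyshev polynomial, mapped to [a, b];
    clustering near the interval ends suppresses the Runge oscillation
    that makes high-order polynomial fits unstable on uniform grids."""
    return [0.5 * (a + b)
            + 0.5 * (b - a) * math.cos((2 * k + 1) * math.pi / (2 * n))
            for k in range(n)]


def adjusted_r2(y, yhat, n_params):
    """Adjusted R-squared of a fit with n_params coefficients; it
    penalises extra model terms, so it can compare candidate polynomial
    models of different order."""
    n = len(y)
    ybar = sum(y) / n
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - (ss_res / (n - n_params)) / (ss_tot / (n - 1))
```

Because the adjusted R-squared discounts the residual reduction that comes merely from adding terms, it is a reasonable stopping criterion when the polynomial order grows during sampling.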

Findings

The SCSM shows better efficiency, accuracy and stability than several popular sequential sampling methods, e.g. the LOLA-Voronoi algorithm, the global Monte Carlo method from the SED toolbox and the Halton sequence.

Originality/value

The SCSM performs well in building high-order surrogate models, with high stability and accuracy, which may substantially reduce the cost of solving complicated engineering design or optimisation problems.

Details

Engineering Computations, vol. 35 no. 2
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 7 November 2016

Diogo Tenório Cintra, Ramiro Brito Willmersdorf, Paulo Roberto Maciel Lyra and William Wagner Matos Lira

Abstract

Purpose

The purpose of this paper is to present a methodology for parallel simulation that employs the discrete element method (DEM) and improves cache performance using Hilbert space-filling curves (HSFC).

Design/methodology/approach

The methodology is well suited to large-scale engineering simulations and considers modelling restrictions due to memory limitations related to the problem size. An algorithm based on mapping indexes, which does not use excessive additional memory, is adopted to enable the contact search procedure for highly scattered domains. The parallel solution strategy uses the recursive coordinate bisection method in the dynamic load balancing procedure. The proposed memory access control aims to improve the data locality of a dynamic set of particles. The numerical simulations presented here contain up to 7.8 million particles and use a visco-elastic contact model with a rolling friction assumption.
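
The cache-locality idea, ordering particles by their position along a Hilbert curve so that spatial neighbours sit close together in memory, can be sketched with the standard 2-D index-mapping algorithm. This is an illustration, not the paper's implementation:

```python
def hilbert_index(order, x, y):
    """Distance of grid cell (x, y) along a Hilbert curve of the given
    order (grid side 2**order); cells that are close in space get close
    indices, so sorting by this key improves data locality."""
    n = 1 << order
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                 # rotate/flip the quadrant
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d


def sort_particles(particles, cell, order):
    """Reorder particles so that spatial neighbours become contiguous in
    memory; `cell` maps a particle to its (x, y) grid cell."""
    return sorted(particles, key=lambda p: hilbert_index(order, *cell(p)))
```

Re-sorting periodically as particles move keeps contact-search loops sweeping through nearly contiguous memory, which is the source of the cache improvement reported below.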

Findings

A real landslide is adopted as reference to evaluate the numerical approach. Three-dimensional simulations are compared in terms of the deposition pattern of the Shum Wan Road landslide. The results show that the methodology permits the simulation of models with a good control of load balancing and memory access. The improvement in cache performance significantly reduces the processing time for large-scale models.

Originality/value

The proposed approach enables the application of DEM to several large-scale practical engineering problems. It also introduces the use of HSFC to optimize memory access in DEM simulations.

Details

Engineering Computations, vol. 33 no. 8
Type: Research Article
ISSN: 0264-4401
