Search results
1 – 10 of over 1000

Rafael Castro-Triguero, Enrique Garcia-Macias, Erick Saavedra Flores, M.I. Friswell and Rafael Gallego
Abstract
Purpose
The purpose of this paper is to capture the actual structural behavior of the longest timber footbridge in Spain by means of a multi-scale model updating approach in conjunction with ambient vibration tests.
Design/methodology/approach
In a first stage, a numerical pre-test analysis of the full bridge is performed, using standard beam-type finite elements with isotropic material properties. This approach offers a first structural model in which optimal sensor placement (OSP) methodologies are applied to improve the system identification process. In particular, the effective independence (EFI) method is used to determine the optimal locations of a set of sensors. Ambient vibration tests are conducted to determine experimentally the modal characteristics of the structure. The identified modal parameters are compared with those values obtained from this preliminary model. To improve the accuracy of the numerical predictions, the material response is modeled by means of a homogenization-based multi-scale computational approach. In a second stage, the structure is modeled by means of three-dimensional solid elements with the above material definition, capturing realistically the full orthotropic mechanical properties of wood. A genetic algorithm (GA) technique is adopted to calibrate the micromechanical parameters which are either not well-known or susceptible to considerable variations when measured experimentally.
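The effective independence step described above can be sketched in a few lines. The following is a minimal illustration of the generic EFI method, not the authors' implementation: candidate sensor locations are pruned one at a time by dropping the row of the mode-shape matrix with the smallest effective independence value, i.e. the smallest diagonal entry of the projector onto the target modes.

```python
import numpy as np

def efi_sensor_placement(phi, n_sensors):
    """Effective independence (EFI) sensor placement.

    phi: (n_candidates, n_modes) mode-shape matrix sampled at the
    candidate sensor DOFs. The candidate contributing least to the
    linear independence of the target modes (smallest diagonal entry
    of P = phi (phi^T phi)^-1 phi^T) is dropped repeatedly until
    n_sensors locations remain.
    """
    keep = list(range(phi.shape[0]))
    while len(keep) > n_sensors:
        p = phi[keep]
        # diag of the projection matrix, computed without forming P fully
        ed = np.einsum('ij,ji->i', p, np.linalg.solve(p.T @ p, p.T))
        keep.pop(int(np.argmin(ed)))
    return keep
```

In the paper's setting, `phi` would be the mode-shape matrix of the preliminary beam-type finite element model restricted to feasible sensor locations; here it is just a placeholder array.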
Findings
An overall good agreement is found between the results of the updated numerical simulations and the corresponding experimental measurements. The longitudinal and transverse Young's moduli, sliding and rolling shear moduli, density and natural frequencies are computed by the present approach. The obtained results reveal the potential predictive capabilities of the present GA/multi-scale/experimental approach to capture accurately the actual behavior of complex materials and structures.
Originality/value
The uniqueness and importance of this structure lead to an intensive study of its structural behavior. Ambient vibration tests are carried out under environmental excitation. Extraction of modal parameters is obtained from output-only experimental data. The EFI methodology is applied for OSP on a large-scale structure. Information coming from several length scales, from sub-micrometer dimensions to macroscopic scales, is included in the material definition. The strong differences found between the stiffness along the longitudinal and transverse directions of wood lumber are incorporated in the structural model. A multi-scale model updating approach is carried out by means of a GA technique to calibrate the micromechanical parameters which are either not well known or susceptible to considerable variations when measured experimentally.
Damijan Markovic, Rainer Niekamp, Adnan Ibrahimbegović, Hermann G. Matthies and Robert L. Taylor
Abstract
Purpose
To provide a computational strategy for highly accurate analyses of non‐linear inelastic behaviour for heterogeneous structures in civil and mechanical engineering applications.
Design/methodology/approach
Adapts recent developments on mathematical formulations of multi‐scale problems to the recently developed component technology based on C++ generic template programming.
Findings
Provides an understanding of how theoretical hypotheses, concerning essentially the multi‐scale interface conditions, affect the computational precision of the strategy.
Practical implications
The present approach allows very precise modelling of multi‐scale aspects in structural mechanics problems and can serve as an essential tool in the search for an optimal structural design.
Originality/value
Provides all the ingredients for constructing an efficient multi‐scale computational framework, from the theoretical formulation to the implementation for parallel computing. It is addressed to researchers and engineers analysing composite structures under extreme loading.
Emad Samadiani and Yogendra Joshi
Abstract
Purpose
The purpose of this paper is to review the available reduced order modeling approaches in the literature for predicting the flow and especially the temperature fields inside data centers in terms of the involved design parameters.
Design/methodology/approach
This paper begins with a motivation for flow/thermal modeling needs for designing an energy‐efficient thermal management system in data centers. Recent studies on air velocity and temperature field simulations in data centers through computational fluid dynamics/heat transfer (CFD/HT) are reviewed. Meta‐modeling and reduced order modeling are tools to generate accurate and rapid surrogate models for a complex system. These tools, with a focus on low‐dimensional models of turbulent flows, are reviewed. Reduced order modeling techniques based on turbulent coherent structure identification, in particular the proper orthogonal decomposition (POD), are explained and reviewed in more detail. Then, the available approaches for rapid thermal modeling of data centers are reviewed. Finally, recent studies on generating POD‐based reduced order thermal models of data centers are reviewed, and representative results are presented and compared for a case study.
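As a rough illustration of the POD machinery the review covers (a generic sketch, not any specific model from the surveyed papers): a snapshot matrix of sampled fields is mean-centred and factorised by a thin SVD; the leading left singular vectors form the reduced basis, and any field is approximated by its projection onto that basis.

```python
import numpy as np

def pod_basis(snapshots, k):
    """POD via thin SVD of a mean-centred snapshot matrix.

    snapshots: (n_dof, n_snap) columns of sampled fields (e.g. rack
    inlet temperatures). Returns the mean field, the leading k POD
    modes (n_dof, k) and the fraction of variance ("energy") captured.
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    u, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    energy = (s[:k] ** 2).sum() / (s ** 2).sum()
    return mean.ravel(), u[:, :k], energy

def pod_reconstruct(field, mean, modes):
    """Approximate a field by its projection onto the reduced basis."""
    return mean + modes @ (modes.T @ (field - mean))
```

The reduced order models discussed in the review go further, mapping design parameters to the POD coefficients; the factorisation above is only the common starting point.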
Findings
It is concluded that low‐dimensional models are needed in order to predict the multi‐parameter-dependent thermal behavior of data centers accurately and rapidly for design and control purposes. POD‐based techniques have shown strong approximation capability for multi‐parameter thermal modeling of data centers. It is believed that wavelet‐based techniques, due to their ability to separate coherent from incoherent structures (something that POD cannot do), can be considered new promising tools for reduced order thermal modeling of complex electronic systems such as data centers.
Originality/value
The paper reviews different numerical methods and provides the reader with some insight for reduced order thermal modeling of complex convective systems such as data centers.
Weixin Zhang, Zhao Liu, Yu Song, Yixuan Lu and Zhenping Feng
Abstract
Purpose
To improve the speed and accuracy of the turbine blade film cooling design process, the most advanced deep learning models were introduced in this study to identify the most suitable architecture for the prediction task. This paper aims to create a generative surrogate model that can be applied to multi-objective optimization problems.
Design/methodology/approach
The latest backbone in the field of computer vision (Swin-Transformer, 2021) was introduced and improved as the surrogate function for prediction of the multi-physics field distribution (film cooling effectiveness, pressure, density and velocity). The basic samples were generated by the Latin hypercube sampling method, and the numerical method adopted for the calculation was first validated experimentally. The training and testing samples were calculated at experimental conditions. Finally, the surrogate model's predictions were verified by experiment in a linear cascade.
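The Latin hypercube sampling step is a standard technique; a minimal numpy version (a sketch of the generic method, not the authors' code) stratifies each parameter range into `n_samples` equal bins, draws exactly one point per bin, and shuffles the bin order independently per dimension:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=None):
    """Basic Latin hypercube sample on the unit hypercube [0, 1)^d.

    Each dimension is split into n_samples equal strata; exactly one
    point falls in each stratum, and the stratum order is shuffled
    independently per dimension.
    """
    rng = np.random.default_rng(seed)
    strata = np.tile(np.arange(n_samples), (n_dims, 1))
    strata = rng.permuted(strata, axis=1).T          # (n_samples, n_dims)
    return (strata + rng.random((n_samples, n_dims))) / n_samples
```

The unit-cube samples would then be rescaled to the physical ranges of the film-cooling design parameters before running the CFD cases.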
Findings
The results indicated that, compared with the multi-scale Pix2Pix model, the Swin-Transformer U-Net model presented higher accuracy and computing speed in the prediction of contour results. The computation time for each step of the Swin-Transformer U-Net model is one-third that of the original model, especially in the case of multi-physics field prediction. The correlation index reached more than 99.2% and the first-order error was lower than 0.3% for the multi-physics fields. The predictions of the data-driven surrogate model are consistent with the computational fluid dynamics results, and both are very close to the experimental results. The application of the Swin-Transformer model to enlarging the different structure samples will reduce the cost of numerical calculations as well as experiments.
Research limitations/implications
The number of U-Net layers and the sample scale have a proper relationship according to equation (8). Too many U-Net layers will lead to unnecessary nonlinear variation, whereas too few layers will lead to insufficient feature extraction. In the case of the Swin-Transformer U-Net model, an incorrect number of U-Net layers will reduce the prediction accuracy. The multi-scale Pix2Pix model achieves higher accuracy in predicting a single physical field, but its calculation speed is too slow. The Swin-Transformer model is fast in prediction and training (nearly three times faster than the multi-scale Pix2Pix model), but the predicted contours have more noise. The neural network predictions and numerical calculations are consistent with the experimental distribution.
Originality/value
This paper creates a generative surrogate model that can be applied to multi-objective optimization problems. A generative adversarial network with the new backbone is chosen to extend the output from a single contour to multi-physics fields, which generates more results simultaneously than traditional surrogate models and reduces the time cost. It is also more applicable to multi-objective spatial optimization algorithms. The Swin-Transformer surrogate model is three times faster in computation than the multi-scale Pix2Pix model, and its predictions of the multi-physics fields are more accurate.
Fuyuan Gong, Yuya Takahashi and Koichi Maekawa
Abstract
Purpose
This paper aims to propose a multi-scale simulation approach for the concrete macro-mechanical damage caused by mixed micro-pore pressures, such as the coupled alkali–silica reaction (ASR) and freeze-thaw cycles (FTC).
Design/methodology/approach
The micro-physical events are computationally modeled by considering the coupling effect between ASR gel and condensed water in the mixed pressure and motion. The pressures and transport of pore substances are also linked with the concrete matrix deformation at the macro-scale through a poro-mechanical approach, and they affect each other reciprocally. Once cracking occurs in the nonlinear analysis, the micro-events (water and gel motion) and the macro-mechanics interact mutually. Finally, different sequences of combined ASR and FTC are simulated.
Findings
The multi-chemo-mechanistic computation can reproduce complex events in pore structures and, further, the macroscopic damage. The results show that ASR can reduce the FTC expansion for non-air-entrained concrete, but may increase the frost damage for air-entrained concrete. The simulation is shown to reproduce the observed phenomena.
Originality/value
This paper numerically clarifies the strong linkage between macro-mechanical deformation and micro-chemo-physical events for concrete composites under coupled ASR and FTC.
Guilherme Fonseca Gonçalves, Rui Pedro Cardoso Coelho and Igor André Rodrigues Lopes
Abstract
Purpose
The purpose of this research is to establish a robust numerical framework for the calibration of macroscopic constitutive parameters, based on the analysis of polycrystalline RVEs with computational homogenisation.
Design/methodology/approach
This framework is composed of four building-blocks: (1) the multi-scale model, consisting of polycrystalline RVEs, where the grains are modelled with anisotropic crystal plasticity, and computational homogenisation to link the scales, (2) a set of loading cases to generate the reference responses, (3) the von Mises elasto-plastic model to be calibrated, and (4) the optimisation algorithms to solve the inverse identification problem. Several optimisation algorithms are assessed through a reference identification problem. Thereafter, different calibration strategies are tested. The accuracy of the calibrated models is evaluated by comparing their results against an FE2 model and experimental data.
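The inverse identification step can be illustrated on a deliberately simplified stand-in problem; everything below is hypothetical (the bilinear model, the parameter bounds, the numbers), and the grid-refinement search merely stands in for the GA/LIPO-type optimisers assessed in the paper. A von Mises response with linear hardening is calibrated so that its stress-strain curve matches a reference response which, in the actual framework, would come from homogenised polycrystalline RVE simulations.

```python
import numpy as np

E = 70e3  # assumed known elastic modulus (MPa)

def macro_model(strain, sigma_y, hardening):
    """Bilinear elasto-plastic response (von Mises, linear hardening)."""
    eps_y = sigma_y / E
    return np.where(strain <= eps_y,
                    E * strain,
                    sigma_y + hardening * (strain - eps_y))

def calibrate(strain, sigma_ref, bounds, rounds=4, n=21):
    """Derivative-free calibration by iterative grid refinement.

    Minimises the squared mismatch between the macroscopic model and
    the reference (homogenised) response over (sigma_y, hardening),
    zooming the search box around the best point each round.
    """
    (lo1, hi1), (lo2, hi2) = bounds
    for _ in range(rounds):
        sy = np.linspace(lo1, hi1, n)
        h = np.linspace(lo2, hi2, n)
        S, H = np.meshgrid(sy, h, indexing='ij')
        loss = ((macro_model(strain[:, None, None], S, H)
                 - sigma_ref[:, None, None]) ** 2).sum(axis=0)
        i, j = np.unravel_index(np.argmin(loss), loss.shape)
        best = sy[i], h[j]
        span1, span2 = (hi1 - lo1) / n, (hi2 - lo2) / n
        lo1, hi1 = best[0] - 2 * span1, best[0] + 2 * span1
        lo2, hi2 = best[1] - 2 * span2, best[1] + 2 * span2
    return best
```

The paper's framework replaces both pieces: the reference curves come from crystal-plasticity RVEs via computational homogenisation, and the search is carried out by the optimisation algorithms it benchmarks.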
Findings
In the initial tests, the LIPO optimiser performs best. Good accuracy is obtained with the calibrated constitutive models. The computing time needed by the FE2 simulations is five orders of magnitude larger than that of the standard macroscopic simulations, demonstrating that this framework is suitable for obtaining efficient micro-mechanics-informed constitutive models.
Originality/value
This contribution proposes a numerical framework, based on FE2 and macro-scale single element simulations, where the calibration of constitutive laws is informed by multi-scale analysis. The most efficient combination of optimisation algorithm and definition of the objective function is studied, and the robustness of the proposed approach is demonstrated by validation with both numerical and experimental data.
Abstract
Purpose
Clothing patterns play a dominant role in costume design and have become an important link in the perception of costume art. Conventional clothing pattern design relies on experienced designers. Although the quality of conventionally designed clothing patterns is very high, the ratio of input time to output amount is relatively low. To break through this bottleneck, this paper proposes a novel approach based on a generative adversarial network (GAN) model for automatic clothing pattern generation, which not only reduces the dependence on experienced designers but also improves the input-output ratio.
Design/methodology/approach
Given that clothing patterns have high requirements for global artistic perception and local texture details, this paper improves the conventional GAN model in two respects: a multi-scale discriminator strategy is introduced to handle the local texture details, and a self-attention mechanism is introduced to improve the global artistic perception. The improved GAN, called the multi-scale self-attention improved generative adversarial network (MS-SA-GAN) model, is used for high-resolution clothing pattern generation.
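For readers unfamiliar with the self-attention mechanism mentioned above, here is a minimal numpy sketch of scaled dot-product self-attention (the generic operation, not the paper's exact layer). In a generator, every spatial position attends to every other position, which is what helps the network coordinate global structure rather than only local texture:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a set of feature vectors.

    x: (n, d_model); wq/wk/wv: (d_model, d_head) learned projections.
    Each output row is a weighted mix of ALL input rows, so distant
    positions can influence each other directly.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]), axis=-1)
    return attn @ v, attn
```

In practice this sits inside the GAN as a trainable layer over feature maps; the numpy version only shows the arithmetic.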
Findings
To verify the feasibility and effectiveness of the proposed MS-SA-GAN model, a crawler is designed to acquire standard clothing patterns dataset from Baidu pictures, and a comparative experiment is conducted on our designed clothing patterns dataset. In experiments, we have adjusted different parameters of the proposed MS-SA-GAN model, and compared the global artistic perception and local texture details of the generated clothing patterns.
Originality/value
Experimental results show that the clothing patterns generated by the proposed MS-SA-GAN model are superior to those of conventional algorithms on several local texture detail indices. In addition, a group of clothing design professionals was invited to evaluate the global artistic perception through a valence-arousal scale. The scale results show that the proposed MS-SA-GAN model achieves better global artistic perception.
Sadik Lafta Omairey, Peter Donald Dunning and Srinivas Sriramula
Abstract
Purpose
The purpose of this study is to enable performing reliability-based design optimisation (RBDO) for a composite component while accounting for several multi-scale uncertainties using a large representative volume element (LRVE). This is achieved using an efficient finite element analysis (FEA)-based multi-scale reliability framework and sequential optimisation strategy.
Design/methodology/approach
An efficient FEA-based multi-scale reliability framework used in this study is extended and combined with a proposed sequential optimisation strategy to produce an efficient, flexible and accurate RBDO framework for fibre-reinforced composite laminate components. The proposed RBDO strategy is demonstrated by finding the optimum design solution for a composite component under the effect of multi-scale uncertainties while meeting a specific stiffness reliability requirement. Performing this with the double-loop approach is computationally expensive because of the number of uncertainties and function evaluations required to assess the reliability. Thus, a sequential optimisation concept is proposed, which starts by finding a deterministic optimum solution, then assesses the reliability and shifts the constraint limit to a safer region. This is repeated until the desired level of reliability is reached, followed by a final probabilistic optimisation to reduce the mass further and meet the desired level of stiffness reliability. In addition, the proposed framework uses several surrogate models to replace expensive FE function evaluations during optimisation and reliability analysis. The numerical example is also used to investigate the effect of using different sizes of LRVEs, compared with a single RVE. In future work, other problem-dependent surrogates such as Kriging will be used to allow predicting lower probabilities of failure with high accuracy.
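The sequential loop described above (deterministic optimisation, reliability assessment, constraint shifting, repeat) can be illustrated on a deliberately tiny stand-in problem. Everything here is hypothetical and unrelated to the composite component in the paper: a tension bar whose cross-section area is minimised subject to a stress reliability constraint under a random load, with Monte Carlo simulation playing the role of the reliability analysis.

```python
import numpy as np

def sequential_rbdo(mu_F=100.0, sd_F=10.0, sigma_allow=250.0,
                    target_rel=0.99, n_mc=200_000, seed=0):
    """Sequential (decoupled) RBDO for a toy tension bar.

    Minimise area A (mass proxy) s.t. stress F/A <= sigma_allow with
    reliability >= target_rel, F ~ N(mu_F, sd_F). Deterministic
    optimisation and Monte Carlo reliability assessment alternate;
    the constraint limit is shifted until the target is met.
    """
    rng = np.random.default_rng(seed)
    F = rng.normal(mu_F, sd_F, n_mc)    # fixed sample across iterations
    shift = 1.0
    for _ in range(50):
        A = shift * mu_F / sigma_allow          # deterministic optimum
        rel = np.mean(F / A <= sigma_allow)     # reliability assessment
        if rel >= target_rel:
            return A, rel
        shift *= 1.02                           # shift limit to safer region
    return A, rel
```

The paper's version replaces the closed-form deterministic optimum with a surrogate-assisted optimiser, and the Monte Carlo step with the multi-scale reliability framework over LRVE uncertainties; only the decoupled loop structure is the same.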
Findings
The integration of the developed multi-scale reliability framework with the sequential RBDO optimisation strategy is shown to be computationally feasible, and the use of LRVEs leads to less conservative designs than the use of a single RVE, i.e. up to 3.5% weight reduction in the case of the 1 × 1 RVE optimised component. This is because the LRVE provides a representation of the spatial variability of uncertainties in a composite material while capturing a wider range of uncertainties at each iteration.
Originality/value
Fibre-reinforced composite laminate components designed using reliability and optimisation have been investigated before, but the two have not previously been combined in a comprehensive multi-scale RBDO. Therefore, this study combines the probabilistic framework with an optimisation strategy to perform multi-scale RBDO and demonstrates its feasibility and efficiency for a fibre-reinforced polymer component design.
Gijeong Seo, Md. RU Ahsan, Yousub Lee, Jong-Ho Shin, Hyungjun Park and Duck Bong Kim
Abstract
Purpose
Due to the complexity of and variations in additive manufacturing (AM) processes, there is a level of uncertainty that creates critical issues in quality assurance (QA), which must be addressed by time-consuming and cost-intensive tasks. This deteriorates the process repeatability, reliability and part reproducibility. So far, many AM efforts have been performed in an isolated and scattered way over several decades. In this paper, a systematically integrated holistic view is proposed to achieve QA for AM.
Design/methodology/approach
A systematically integrated view is presented to ensure the predefined part properties before/during/after the AM process. It consists of four stages, namely, QA plan, prospective validation, concurrent validation and retrospective validation. As a foundation for QA planning, a functional workflow and the required information flows are proposed by using functional design models: IDEF0 (Icam DEFinition for Function Modeling).
Findings
The functional design model of the QA plan provides the systematically integrated view that can be the basis for inspection of AM processes for the repeatability and qualification of AM parts for reproducibility.
Research limitations/implications
A powder bed fusion process was used to validate the feasibility of this QA plan. Feasibility was demonstrated under many assumptions; real validation is not included in this study.
Social implications
This study provides an innovative and transformative methodology that can lead to greater productivity and improved quality of AM parts across industries. Furthermore, the QA guidelines and functional design models provide the foundation for the development of a QA architecture and management system.
Originality/value
This systematically integrated view and the corresponding QA plan can pose fundamental questions to the AM community and initiate new research efforts in the in-situ digital inspection of AM processes and parts.
Abstract
Purpose
The purpose of this paper is to outline the extensive multi-scale and multi-physics challenges when simulating future aircraft and offer strategies to help deal with some of these challenges.
Design/methodology/approach
To help address the multi-scale challenges, both the handling of turbulence and of geometry is considered in a hierarchical, zonal fashion.
Findings
Such modelling of geometry is necessary to deal more economically with the increasingly coupled nature of many aerodynamic problems, and with the drive towards ever-increasing levels of geometrical complexity and scale.
Originality/value
The proposed unified framework could be exploited all the way from initial fast preliminary design to final numerical test, involving various bespoke combinations of hierarchical components.