Search results
1 – 10 of 65
Ryley McConkey, Nikhila Kalia, Eugene Yee and Fue-Sang Lien
Abstract
Purpose
Industrial simulations of turbulent flows often rely on Reynolds-averaged Navier-Stokes (RANS) turbulence models, which contain numerous closure coefficients that need to be calibrated. This paper aims to address this issue by proposing a semi-automated calibration of these coefficients using a new framework (referred to as turbo-RANS) based on Bayesian optimization.
Design/methodology/approach
The authors introduce the generalized error and default coefficient preference (GEDCP) objective function, which can be used with integral, sparse or dense reference data for the purpose of calibrating RANS turbulence closure model coefficients. Then, the authors describe a Bayesian optimization-based algorithm for conducting the calibration of these model coefficients. An in-depth hyperparameter tuning study is conducted to recommend efficient settings for the turbo-RANS optimization procedure.
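As a rough illustration of how such a Bayesian calibration loop operates (this is not the turbo-RANS implementation), the sketch below fits a Gaussian-process surrogate to a toy objective and selects new coefficient samples by expected improvement; the quadratic objective and its assumed optimum of 0.55 stand in for a full RANS run scored against reference data.

```python
import math
import numpy as np

# Standard normal CDF, vectorized over arrays
norm_cdf = np.vectorize(lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

def objective(c):
    # Stand-in for a CFD run scored against reference data (assumed optimum 0.55)
    return (c - 0.55) ** 2

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel on scalar inputs
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, jitter=1e-6):
    # Gaussian-process posterior mean and variance at the query points Xs
    K = rbf(X, X) + jitter * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = 1.0 - np.einsum("ji,jk,ki->i", Ks, Kinv, Ks)
    return mu, np.maximum(var, 1e-12)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, 3)           # initial coefficient samples
y = objective(X)
grid = np.linspace(0.0, 1.0, 201)      # candidate coefficient values

for _ in range(20):
    # Standardize observations so the unit-variance GP prior is well scaled
    ym, ys = y.mean(), y.std()
    mu, var = gp_posterior(X, (y - ym) / ys, grid)
    sd = np.sqrt(var)
    best = (y.min() - ym) / ys
    z = (best - mu) / sd
    # Expected improvement acquisition (minimization form)
    ei = (best - mu) * norm_cdf(z) + sd * np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)
    c_next = grid[int(np.argmax(ei))]
    if np.any(np.isclose(c_next, X)):  # converged onto an existing sample
        break
    X = np.append(X, c_next)
    y = np.append(y, objective(c_next))

c_best = float(X[np.argmin(y)])        # calibrated coefficient estimate
```

In the actual framework, the GEDCP objective would replace the toy quadratic and each evaluation would involve a full RANS solve, which is why minimizing the number of evaluations matters.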
Findings
The authors demonstrate that the performance of the k-ω shear stress transport (SST) and generalized k-ω (GEKO) turbulence models can be efficiently improved via turbo-RANS for three example cases: predicting the lift coefficient of an airfoil; predicting the velocity and turbulent kinetic energy fields for a separated flow; and predicting the wall pressure coefficient distribution for flow through a converging-diverging channel.
Originality/value
To the best of the authors’ knowledge, this work is the first to propose and provide an open-source black-box calibration procedure for turbulence model coefficients based on Bayesian optimization. The authors propose a data-flexible objective function for the calibration target. The open-source implementation of the turbo-RANS framework includes OpenFOAM, Ansys Fluent, STAR-CCM+ and solver-agnostic templates for user application.
Yupaporn Areepong and Saowanit Sukparungsee
Abstract
Purpose
The purpose of this paper is to investigate and review the impact of the use of statistical quality control (SQC) development and analytical and numerical methods on average run length for econometric applications.
Design/methodology/approach
This study used several academic databases to survey and analyze the literature on SQC tools, their characteristics and applications. The surveys covered both parametric and nonparametric SQC.
Findings
This survey paper reviews the literature on both control charts and the methodology for evaluating the average run length (ARL); the SQC charts covered can be applied to any data. Nonparametric control charts are an effective alternative to standard control charts, and mixed nonparametric control charts can overcome the assumptions of normality and independence. In addition, several analytical and numerical methods exist for determining the ARL, namely the Markov chain, martingale, numerical integral equation and explicit formula approaches, which are less time-consuming while remaining accurate. New mixed parametric and nonparametric control charts are effective alternatives for econometric applications.
Originality/value
Mixed nonparametric control charts can be applied to any data, with no restrictions on the use of the proposed control chart. This is particularly relevant for econometric problems, whose data usually exhibit volatility and fluctuation. Furthermore, to obtain the ARL as a performance measure, an explicit formula for the ARL of time series data can be derived using the integral equation, and its accuracy can be verified against the numerical integral equation.
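As a concrete example of one of the ARL evaluation methods surveyed, the sketch below applies the Markov chain approach (in the style of Brook and Evans) to a two-sided EWMA chart for normally distributed observations; the smoothing constant, control limit and shift size are illustrative choices, not values from the paper.

```python
import math
import numpy as np

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ewma_arl(lam=0.1, h=0.5, mu=0.0, m=101):
    """ARL of a two-sided EWMA chart via the Markov chain approximation.

    lam : EWMA smoothing constant
    h   : control limit on the EWMA statistic (chart signals when |Z| > h)
    mu  : process mean shift in standard-deviation units (0 = in control)
    m   : number of discretized transient states
    """
    w = 2.0 * h / m                        # width of each state interval
    mid = -h + (np.arange(m) + 0.5) * w    # state midpoints
    Q = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            # P(Z_t lands in state j | Z_{t-1} = mid[i]), with X_t ~ N(mu, 1)
            upper = (-h + (j + 1) * w - (1.0 - lam) * mid[i]) / lam
            lower = (-h + j * w - (1.0 - lam) * mid[i]) / lam
            Q[i, j] = norm_cdf(upper - mu) - norm_cdf(lower - mu)
    # The ARL vector solves (I - Q) L = 1; start the chart at Z_0 = 0
    arl = np.linalg.solve(np.eye(m) - Q, np.ones(m))
    return float(arl[m // 2])

arl_in = ewma_arl(mu=0.0)    # in-control ARL (should be large)
arl_out = ewma_arl(mu=1.0)   # ARL under a one-sigma shift (should be small)
```

The same transient-state construction extends to CUSUM and many nonparametric charts, which is why the Markov chain method appears so often in this literature.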
Zhangtao Peng, Qian Fang, Qing Ai, Xiaomo Jiang, Hui Wang, Xingchun Huang and Yong Yuan
Abstract
Purpose
A risk-based method is proposed to identify the dominant influencing factors of secondary lining cracking in an operating mountain tunnel with weak surrounding rock.
Design/methodology/approach
Based on the inspection data from a mountain tunnel in Southwest China, a lognormal proportional hazard model is established to describe the statistical distribution of secondary lining cracks. Then, the model parameters are obtained by using the Bayesian regression method, and the importance of influencing factors can be sorted based on the absolute values of the parameters.
Findings
The results show that the order of importance of the influencing factors of secondary lining cracks is as follows: location of the crack on the tunnel profile, rock mass grade of the surrounding rock, time to completion of the secondary lining, and void behind the secondary lining. Accordingly, the location of the crack on the tunnel profile and rock mass grade of the surrounding rock are the two most important influencing factors of secondary lining cracks in the investigated mountain tunnel, and appropriate maintenance measures should be focused on these two aspects.
Originality/value
This study provides a general and effective reference for identifying the dominant influencing factors of secondary lining cracks to guide the targeted maintenance in mountain tunnels.
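The ranking step can be illustrated with a drastically simplified stand-in for the paper's method: below, an ordinary least-squares fit on log-transformed crack widths replaces the Bayesian lognormal proportional hazard regression, and synthetic data with assumed effect sizes replace the tunnel inspection records. Factors are then sorted by the absolute values of their coefficients, mirroring the importance ordering described above.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# Hypothetical standardized influencing factors (names follow the abstract)
factors = ["crack_location", "rock_mass_grade", "lining_age", "void_behind_lining"]
X = rng.normal(size=(n, 4))
true_beta = np.array([0.9, 0.6, 0.3, 0.1])   # assumed effect sizes, largest first
log_width = X @ true_beta + rng.normal(scale=0.2, size=n)

# Least-squares fit with an intercept column (stand-in for Bayesian regression)
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, log_width, rcond=None)
beta_hat = coef[1:]

# Rank factors by the absolute value of their estimated coefficients
ranking = [factors[i] for i in np.argsort(-np.abs(beta_hat))]
```

With standardized covariates, comparing absolute coefficient magnitudes is a defensible importance measure; the paper's Bayesian treatment additionally yields credible intervals for each coefficient.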
Guilherme Fonseca Gonçalves, Rui Pedro Cardoso Coelho and Igor André Rodrigues Lopes
Abstract
Purpose
The purpose of this research is to establish a robust numerical framework for the calibration of macroscopic constitutive parameters, based on the analysis of polycrystalline RVEs with computational homogenisation.
Design/methodology/approach
This framework is composed of four building-blocks: (1) the multi-scale model, consisting of polycrystalline RVEs, where the grains are modelled with anisotropic crystal plasticity, and computational homogenisation to link the scales, (2) a set of loading cases to generate the reference responses, (3) the von Mises elasto-plastic model to be calibrated, and (4) the optimisation algorithms to solve the inverse identification problem. Several optimisation algorithms are assessed through a reference identification problem. Thereafter, different calibration strategies are tested. The accuracy of the calibrated models is evaluated by comparing their results against an FE2 model and experimental data.
Findings
In the initial tests, the LIPO optimiser performs best. Good accuracy is obtained with the calibrated constitutive models. The computing time needed by the FE2 simulations is five orders of magnitude larger than that of the standard macroscopic simulations, demonstrating that this framework is suitable for obtaining efficient micro-mechanics-informed constitutive models.
Originality/value
This contribution proposes a numerical framework, based on FE2 and macro-scale single element simulations, where the calibration of constitutive laws is informed by multi-scale analysis. The most efficient combination of optimisation algorithm and definition of the objective function is studied, and the robustness of the proposed approach is demonstrated by validation with both numerical and experimental data.
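For readers unfamiliar with LIPO, the optimiser highlighted in the findings, the sketch below shows its core idea on a one-dimensional stand-in objective: a Lipschitz constant estimated from the samples gives an upper bound on the function, and only candidates whose bound could beat the incumbent are evaluated. The quadratic objective and search interval are illustrative assumptions, not the paper's identification problem.

```python
import numpy as np

def f(x):
    # Stand-in calibration objective (maximization form), optimum at x = 0.3
    return -(x - 0.3) ** 2

rng = np.random.default_rng(2)
X = [float(rng.uniform(0.0, 1.0))]
Y = [f(X[0])]

for _ in range(200):
    x = float(rng.uniform(0.0, 1.0))
    # Adaptive Lipschitz estimate: largest observed slope between samples
    k = max(
        (abs(Y[i] - Y[j]) / abs(X[i] - X[j])
         for i in range(len(X)) for j in range(i)),
        default=1.0,
    )
    k = max(k, 1e-6)
    # Lipschitz upper bound on f at the candidate point
    ub = min(Y[i] + k * abs(x - X[i]) for i in range(len(X)))
    if ub >= max(Y):          # evaluate only if the candidate could improve
        X.append(x)
        Y.append(f(x))

x_best = X[int(np.argmax(Y))]
```

The acceptance filter is what makes the method attractive when each evaluation is an expensive multi-scale simulation: most random candidates are discarded without ever calling the objective.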
Zhihong Jiang, Jiachen Hu, Xiao Huang and Hui Li
Abstract
Purpose
Current reinforcement learning (RL) algorithms face issues such as low learning efficiency and poor generalization performance, which significantly limit their practical application to real robots. This paper adopts a hybrid model-based and model-free policy search method with multi-timescale value function tuning, allowing robots to learn complex motion planning skills in multi-goal and multi-constraint environments with only a few interactions.
Design/methodology/approach
A goal-conditioned model-based and model-free search method with multi-timescale value function tuning is proposed in this paper. First, the authors construct a multi-goal, multi-constrained policy optimization approach that fuses model-based policy optimization with goal-conditioned, model-free learning. Soft constraints on states and controls are applied to ensure fast and stable policy iteration. Second, an uncertainty-aware multi-timescale value function learning method is proposed, which constructs a multi-timescale value function network and adaptively chooses the value function planning timescales according to the value prediction uncertainty. It implicitly reduces the value representation complexity and improves the generalization performance of the policy.
Findings
The algorithm enables physical robots to learn generalized skills in real-world environments through a handful of trials. The simulation and experimental results show that the algorithm outperforms other relevant model-based and model-free RL algorithms.
Originality/value
This paper combines goal-conditioned RL and the model predictive path integral method into a unified model-based policy search framework, which improves the learning efficiency and policy optimality of motor skill learning in multi-goal and multi-constrained environments. An uncertainty-aware multi-timescale value function learning and selection method is proposed to overcome long horizon problems, improve optimal policy resolution and therefore enhance the generalization ability of goal-conditioned RL.
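The model predictive path integral (MPPI) component mentioned above can be sketched in isolation. Below, a 1-D point mass with single-integrator dynamics is driven to a goal by sampling perturbed control sequences, weighting them by exponentiated negative cost and updating the nominal sequence; the dynamics, cost terms and all parameters are illustrative assumptions, not the paper's robot model.

```python
import numpy as np

rng = np.random.default_rng(3)
H, K, dt, lam = 20, 256, 0.1, 0.5   # horizon, rollouts, time step, temperature
goal = 1.0

def rollout_costs(x0, U):
    # Single-integrator dynamics x_{t+1} = x_t + u_t * dt, vectorized over rollouts
    pos = x0 + dt * np.cumsum(U, axis=1)
    return ((pos - goal) ** 2).sum(axis=1) + 0.01 * (U ** 2).sum(axis=1)

x = 0.0
u_nom = np.zeros(H)                  # nominal control sequence
for step in range(50):               # closed-loop execution with replanning
    eps = rng.normal(scale=1.0, size=(K, H))
    costs = rollout_costs(x, u_nom[None, :] + eps)
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    u_nom = u_nom + w @ eps          # path integral update of the nominal plan
    x = x + u_nom[0] * dt            # apply the first control
    u_nom = np.roll(u_nom, -1)       # shift the plan one step forward
    u_nom[-1] = 0.0
```

The paper's contribution is to condition this kind of planner on goals and to blend it with model-free value learning; the snippet covers only the sampling-based planning half.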
Xiaolong Lyu, Dan Huang, Liwei Wu and Ding Chen
Abstract
Purpose
Parameter estimation in complex engineering structures typically necessitates repeated calculations using simulation models, leading to significant computational costs. This paper aims to introduce an adaptive multi-output Gaussian process (MOGP) surrogate model for parameter estimation in time-consuming models.
Design/methodology/approach
The MOGP surrogate model is established to replace the computationally expensive finite element method (FEM) analysis during the estimation process. We propose a novel adaptive sampling method for MOGP inspired by the traditional expected improvement (EI) method, aiming to reduce the number of required sample points for building the surrogate model. Two mathematical examples and an application in the back analysis of a concrete arch dam are tested to demonstrate the effectiveness of the proposed method.
Findings
The numerical results show that the proposed method requires a relatively small number of sample points to achieve accurate estimates. The proposed adaptive sampling method combined with the MOGP surrogate model shows an obvious advantage in parameter estimation problems involving expensive-to-evaluate models, particularly those with high-dimensional output.
Originality/value
A novel adaptive sampling method for establishing the MOGP surrogate model is proposed to accelerate the procedure of solving large-scale parameter estimation problems. This modified adaptive sampling method, based on the traditional EI method, is better suited for multi-output problems, making it highly valuable for numerous practical engineering applications.
Abstract
Purpose
Once regional financial risks erupt, they not only affect the stability and security of the financial system in the region but can also trigger a comprehensive financial crisis, damage the national economy and undermine social stability. Therefore, it is necessary to regulate regional financial risks through artificial intelligence methods.
Design/methodology/approach
In this manuscript, we scrutinize the loan data pertaining to aggregated regional financial risks and proffer an ARIMA-SVR loan data regression model, amalgamating traditional statistical regression methods with a machine learning framework. This model initially employs the ARIMA model to accomplish historical data fitting and subsequently utilizes the resultant error as input for SVR to refine the non-linear error. Building upon this, it integrates with the original data to derive optimized prediction results.
Findings
The experimental findings reveal that the ARIMA-SVR (Autoregressive Integrated Moving Average-Support Vector Regression) method advanced in this discourse surpasses the individual methods in terms of the RMSE (root mean square error) and MAE (mean absolute error) indices, and is also superior to the deep learning LSTM method.
Originality/value
An ARIMA-SVR framework for the financial risk recognition is proposed. This presentation furnishes a benchmark for future financial risk prediction and the forecasting of associated time series data.
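The hybrid idea, a linear time series model plus a kernel machine on its residuals, can be sketched with stand-ins: least-squares autoregression replaces ARIMA, kernel ridge regression replaces SVR, and a noisy chaotic logistic-map series replaces the loan data. The two predictions are summed, as in the model described above.

```python
import numpy as np

# Synthetic nonlinear series (stand-in for the loan-risk data)
rng = np.random.default_rng(4)
s = np.empty(300)
s[0] = 0.3
for i in range(1, 300):
    s[i] = np.clip(3.9 * s[i - 1] * (1.0 - s[i - 1])
                   + rng.normal(scale=0.01), 0.01, 0.99)

p = 5                                   # autoregressive order
X = np.column_stack([s[i:len(s) - p + i] for i in range(p)])  # lag matrix
y = s[p:]                               # one-step-ahead targets
n_train = 200

# Stage 1: linear AR fit (stand-in for ARIMA)
coef, *_ = np.linalg.lstsq(X[:n_train], y[:n_train], rcond=None)
ar_pred = X @ coef
resid = y - ar_pred                     # nonlinear error left over

# Stage 2: RBF kernel ridge regression on the residuals (stand-in for SVR)
def krr_fit_predict(Xtr, ytr, Xte, gamma=1.0, alpha=1e-2):
    d_tr = ((Xtr[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    d_te = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    dual = np.linalg.solve(np.exp(-gamma * d_tr) + alpha * np.eye(len(Xtr)), ytr)
    return np.exp(-gamma * d_te) @ dual

resid_pred = krr_fit_predict(X[:n_train], resid[:n_train], X[n_train:])

# Stage 3: combine linear and nonlinear predictions on the held-out segment
hybrid = ar_pred[n_train:] + resid_pred
truth = y[n_train:]
rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
rmse_hybrid = rmse(hybrid, truth)
rmse_ar = rmse(ar_pred[n_train:], truth)
```

Whenever the residual carries predictable nonlinear structure, as it does here, the second-stage correction lowers the out-of-sample RMSE relative to the linear model alone.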
Abstract
Purpose
The purpose of this paper is to investigate the vehicle-based sensor effect and pavement temperature on road condition assessment, as well as to compute a threshold value for the classification of pavement conditions.
Design/methodology/approach
Four sensors were placed on the vehicle’s control arms and one inside the vehicle to collect vibration acceleration data for analysis. The Analysis of Variance (ANOVA) tests were performed to diagnose the effect of the vehicle-based sensors’ placement in the field. To classify road conditions and identify pavement distress (point of interest), the probability distribution was applied based on the magnitude values of vibration data.
Findings
Results from ANOVA indicate that pavement sensing patterns from the sensors placed on the front control arms were statistically significant, and there is no difference between the sensors placed on the same side of the vehicle (e.g., left or right side). A reference threshold (i.e., 1.7 g) was computed from the distribution fitting method to classify road conditions and identify the road distress based on the magnitude values that combine all acceleration along three axes. In addition, the pavement temperature was found to be highly correlated with the sensing patterns, which is noteworthy for future projects.
Originality/value
The paper investigates the effect of pavement sensors’ placement in assessing road conditions, emphasizing the implications for future road condition assessment projects. A threshold value for classifying road conditions was proposed and applied in class assignments (I-17 highway projects).
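The threshold computation can be sketched as follows, with simulated tri-axial accelerations in place of the field data: magnitudes are formed from the three axes, a distribution is fitted, and a high quantile becomes the classification cutoff. The normal fit with a mean-plus-three-sigma rule is an illustrative assumption; the paper's 1.7 g value comes from fitting its own field measurements.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10_000
# Simulated accelerations in g units (the vertical axis includes gravity)
ax = rng.normal(0.0, 0.15, n)
ay = rng.normal(0.0, 0.15, n)
az = 1.0 + rng.normal(0.0, 0.25, n)

# Magnitude combining acceleration along all three axes
mag = np.sqrt(ax ** 2 + ay ** 2 + az ** 2)

# Distribution-based threshold: mean plus three standard deviations
mu, sigma = mag.mean(), mag.std()
threshold = float(mu + 3.0 * sigma)

# Samples exceeding the threshold are flagged as candidate distress points
events = int(np.count_nonzero(mag > threshold))
```

In practice the fitted distribution family and the chosen quantile would be selected from the data, since vibration magnitudes are typically right-skewed rather than normal.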
Reza Hajipour Farsangi, Ghadir Mahdavi, Majid Jafari Khaledi, Murat Büyükyazıcı and Mitra Ghanbarzadeh
Abstract
Purpose
This study aims to price the risk contribution of general Takaful at the level of tariff cells, considering a spatial dependency framework.
Design/methodology/approach
Three different models, including a generalized linear model, a generalized linear mixed model (GLMM) and a spatial generalized linear mixed model (SGLMM), according to the actuarial modeling of general Takaful, are used to price pure risk contribution (PRC).
Findings
The results reveal that the SGLMM yields more accurate predictions of the PRC compared to the other models, emphasizing the significance of spatial modeling in this context. Following the estimation of the PRC, the gross contribution according to the mechanism of Takaful models is calculated considering the spatial model.
Practical implications
Considering the similarities between Takaful and insurance, this study addresses the pricing of general Takaful within different Takaful models through a spatial dependency framework, so that its practical implications are applicable to running the Takaful business in both Islamic and non-Islamic countries.
Originality/value
Most studies consider only the social or practical view of Takaful. This study contributes to the broader knowledge and understanding of Takaful by presenting a conceptual understanding of Takaful and then investigating the practical application of pricing risk contribution using innovative modeling of claim frequency and severity at the level of tariff cells.
Sarath Radhakrishnan, Joan Calafell, Arnau Miró, Bernat Font and Oriol Lehmkuhl
Abstract
Purpose
Wall-modeled large eddy simulation (LES) is a practical tool for solving wall-bounded flows with less computational cost by avoiding the explicit resolution of the near-wall region. However, its use is limited in flows that have high non-equilibrium effects like separation or transition. This study aims to present a novel methodology of using high-fidelity data and machine learning (ML) techniques to capture these non-equilibrium effects.
Design/methodology/approach
A precursor to this methodology has already been tested in Radhakrishnan et al. (2021) for equilibrium flows using LES of channel flow data. In the current methodology, the high-fidelity data chosen for training includes direct numerical simulation of a double diffuser that has strong non-equilibrium flow regions, and LES of a channel flow. The ultimate purpose of the model is to distinguish between equilibrium and non-equilibrium regions, and to provide the appropriate wall shear stress. The ML system used for this study is gradient-boosted regression trees.
Findings
The authors show that the model can be trained to make accurate predictions for both equilibrium and non-equilibrium boundary layers. For example, the authors find that the model is very effective for corner flows and flows that involve relaminarization, while performing rather poorly in recirculation regions.
Originality/value
Data from relaminarization regions help the model to better capture this phenomenon and to provide an appropriate boundary condition accordingly. This motivates the authors to continue the research in this direction by adding more non-equilibrium phenomena to the training data, so as to capture recirculation as well.
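Gradient-boosted regression trees, the ML system named above, can be sketched from scratch with depth-1 stumps under squared loss. The inputs and the wall shear stress target below are made-up smooth functions standing in for the DNS/LES training data; a library implementation would normally replace this hand-rolled loop.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2000
u = rng.uniform(0.0, 1.0, n)            # e.g. a wall-parallel velocity input
g = rng.uniform(-1.0, 1.0, n)           # e.g. a pressure-gradient input
X = np.column_stack([u, g])
# Made-up smooth "wall shear stress" target with a little noise
tau = u ** 1.5 + 0.3 * np.tanh(2.0 * g) + rng.normal(scale=0.02, size=n)

def fit_stump(X, r):
    """Best axis-aligned single split minimizing squared error on residual r."""
    best = None
    for f in range(X.shape[1]):
        for s in np.quantile(X[:, f], np.linspace(0.05, 0.95, 19)):
            left = X[:, f] <= s
            if left.all() or (~left).all():
                continue
            lv, rv = r[left].mean(), r[~left].mean()
            err = ((r[left] - lv) ** 2).sum() + ((r[~left] - rv) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, f, s, lv, rv)
    return best[1:]

def predict_stump(stump, X):
    f, s, lv, rv = stump
    return np.where(X[:, f] <= s, lv, rv)

# Gradient boosting: each stump fits the current residual, which is the
# negative gradient of the squared loss with respect to the predictions
pred = np.full(n, tau.mean())
lr = 0.1
for _ in range(200):
    stump = fit_stump(X, tau - pred)
    pred = pred + lr * predict_stump(stump, X)

rmse_train = float(np.sqrt(np.mean((tau - pred) ** 2)))
```

Deeper trees and a held-out validation split would be used in practice; the appeal of tree ensembles for wall modeling is that inference is cheap enough to call at every wall face of an LES at every time step.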