Search results

1 – 10 of 122
Article
Publication date: 13 June 2024

Ryley McConkey, Nikhila Kalia, Eugene Yee and Fue-Sang Lien

Industrial simulations of turbulent flows often rely on Reynolds-averaged Navier-Stokes (RANS) turbulence models, which contain numerous closure coefficients that need to be…

Abstract

Purpose

Industrial simulations of turbulent flows often rely on Reynolds-averaged Navier-Stokes (RANS) turbulence models, which contain numerous closure coefficients that need to be calibrated. This paper aims to address this issue by proposing a semi-automated calibration of these coefficients using a new framework (referred to as turbo-RANS) based on Bayesian optimization.

Design/methodology/approach

The authors introduce the generalized error and default coefficient preference (GEDCP) objective function, which can be used with integral, sparse or dense reference data for the purpose of calibrating RANS turbulence closure model coefficients. Then, the authors describe a Bayesian optimization-based algorithm for conducting the calibration of these model coefficients. An in-depth hyperparameter tuning study is conducted to recommend efficient settings for the turbo-RANS optimization procedure.
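As a rough illustration of the calibration idea described above, the sketch below tunes a single SST coefficient against a reference lift value with off-the-shelf Bayesian optimization (scikit-optimize). The solver call run_rans_case(), the reference lift value and the simple "misfit plus default-coefficient penalty" objective are hypothetical stand-ins, not the paper's GEDCP function or the turbo-RANS code.

```python
# Illustrative sketch only: calibrating one k-omega SST coefficient against a
# reference lift value with Bayesian optimization (scikit-optimize). The
# solver call run_rans_case() is a hypothetical placeholder, and the objective
# is a loose "misfit + preference for the default coefficient" penalty,
# not the paper's GEDCP function.
from skopt import gp_minimize

C_L_REF = 0.85             # hypothetical reference lift coefficient
BETA_STAR_DEFAULT = 0.09   # default value of the SST coefficient beta*

def run_rans_case(beta_star):
    """Placeholder for a RANS run (OpenFOAM, Fluent, ...) returning lift."""
    return 0.85 + 0.4 * (beta_star - 0.09)   # toy response surface

def objective(x):
    beta_star = x[0]
    misfit = (run_rans_case(beta_star) - C_L_REF) ** 2
    default_penalty = 1e-3 * (beta_star - BETA_STAR_DEFAULT) ** 2
    return misfit + default_penalty

result = gp_minimize(objective, [(0.05, 0.15)], n_calls=20, random_state=0)
print("calibrated beta* =", result.x[0])
```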

Findings

The authors demonstrate that the performance of the k-ω shear stress transport (SST) and generalized k-ω (GEKO) turbulence models can be efficiently improved via turbo-RANS, for three example cases: predicting the lift coefficient of an airfoil; predicting the velocity and turbulent kinetic energy fields for a separated flow; and predicting the wall pressure coefficient distribution for flow through a converging-diverging channel.

Originality/value

To the best of the authors’ knowledge, this work is the first to propose and provide an open-source black-box calibration procedure for turbulence model coefficients based on Bayesian optimization. The authors propose a data-flexible objective function for the calibration target. The open-source implementation of the turbo-RANS framework includes OpenFOAM, Ansys Fluent, STAR-CCM+ and solver-agnostic templates for user application.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 34 no. 8
Type: Research Article
ISSN: 0961-5539


Article
Publication date: 21 December 2023

Majid Rahi, Ali Ebrahimnejad and Homayun Motameni

Taking into consideration the current human need for agricultural produce such as rice that requires water for growth, the optimal consumption of this valuable liquid is…

Abstract

Purpose

Taking into consideration the current human need for agricultural produce such as rice, which requires water for growth, the optimal consumption of this valuable liquid is important. Unfortunately, the traditional use of water by humans for agricultural purposes contradicts the concept of optimal consumption. Therefore, designing and implementing a mechanized irrigation system is of the highest importance. This system includes hardware equipment such as liquid altimeter sensors, valves and pumps, for which failure is an inherent phenomenon that causes faults in the system. Naturally, these faults occur at random (probabilistic) time intervals, and an exponentially distributed probability function is used to simulate these intervals. Thus, before the implementation of such a high-cost system, its evaluation during the design phase is essential.

Design/methodology/approach

The proposed approach included two main steps: offline and online. The offline phase included the simulation of the studied system (i.e. the irrigation system of paddy fields) and the acquisition of a data set for training machine learning algorithms such as decision trees to detect, locate (classification) and evaluate faults. In the online phase, C5.0 decision trees trained in the offline phase were used on a stream of data generated by the system.
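A minimal sketch of the offline-train / online-classify split described above is given below; scikit-learn's CART decision tree stands in for C5.0, and the sensor features and fault rule are invented purely for illustration.

```python
# Sketch of the offline-train / online-classify split: scikit-learn's CART
# decision tree stands in for C5.0, and the sensor features and fault rule
# below are invented purely for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Offline phase: data from a simulated irrigation system -> (features, fault label)
X_train = rng.normal(size=(500, 3))                            # e.g. level sensor, valve, pump signals
y_train = (X_train[:, 0] + X_train[:, 2] > 1.5).astype(int)    # toy fault rule
clf = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)

# Online phase: classify each incoming sample from the running system
def classify_stream(sample_stream):
    for sample in sample_stream:
        yield int(clf.predict(sample.reshape(1, -1))[0])

stream = (rng.normal(size=3) for _ in range(10))
print(list(classify_stream(stream)))
```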

Findings

The proposed approach is a comprehensive online component-oriented method, which is a combination of supervised machine learning methods to investigate system faults. Each of these methods is considered a component determined by the dimensions and complexity of the case study (to discover, classify and evaluate fault tolerance). These components are placed together in the form of a process framework so that the appropriate method for each component is obtained based on comparison with other machine learning methods. As a result, depending on the conditions under study, the most efficient method is selected in the components. Before the system implementation phase, its reliability is checked by evaluating the predicted faults (in the system design phase). Therefore, this approach avoids the construction of a high-risk system. Compared to existing methods, the proposed approach is more comprehensive and has greater flexibility.

Research limitations/implications

As the dimensions of the problem expand, the model verification space based on automata grows exponentially.

Originality/value

Unlike the existing methods that only examine one or two aspects of fault analysis such as fault detection, classification and fault-tolerance evaluation, this paper proposes a comprehensive process-oriented approach that investigates all three aspects of fault analysis concurrently.

Article
Publication date: 28 May 2024

Kuo-Yi Lin and Thitipong Jamrus

Motivated by recent research indicating the significant challenges posed by imbalanced datasets in industrial settings, this paper presents a novel framework for Industrial…


Abstract

Purpose

Motivated by recent research indicating the significant challenges posed by imbalanced datasets in industrial settings, this paper presents a novel framework for Industrial Data-driven Modeling for Imbalanced Fault Diagnosis, aiming to improve fault detection accuracy and reliability.

Design/methodology/approach

This study addresses the challenge of imbalanced datasets in predicting hard drive failures in a way that is both innovative and comprehensive. By integrating data enhancement techniques with cost-sensitive methods, the research pioneers a solution that directly targets the intrinsic issues posed by imbalanced data, a common obstacle in predictive maintenance and reliability analysis.

Findings

In real industrial environments, there is a critical demand for addressing the issue of imbalanced datasets. When faced with limited data for rare events or a heavily skewed distribution of categories, it becomes essential for models to effectively mine insights from the original imbalanced dataset. This involves employing techniques like data augmentation to generate new insights and rules, enhancing the model’s ability to accurately identify and predict failures.

Originality/value

Previous research has highlighted the complexity of diagnosing faults within imbalanced industrial datasets, often leading to suboptimal predictive accuracy. This paper bridges this gap by introducing a robust framework for Industrial Data-driven Modeling for Imbalanced Fault Diagnosis. It combines data enhancement and cost-sensitive methods to effectively manage the challenges posed by imbalanced datasets, further innovating with a bagging method to refine model optimization. The validation of the proposed approach demonstrates superior accuracy compared to existing methods, showcasing its potential to significantly improve fault diagnosis in industrial applications.
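As a loose illustration of combining cost-sensitive learning with bagging on an imbalanced dataset, the sketch below uses scikit-learn's class_weight option and BaggingClassifier on synthetic data; it is not the paper's framework and omits the data-enhancement step.

```python
# Loose illustration of cost-sensitive learning plus bagging on an imbalanced
# dataset (the data-enhancement step is omitted). The synthetic data stand in
# for drive-failure logs; this is not the paper's framework.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Heavily imbalanced synthetic data: roughly 2% positive (failure) samples
X, y = make_classification(n_samples=5000, weights=[0.98, 0.02], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

base = DecisionTreeClassifier(class_weight="balanced")   # cost-sensitive base learner
model = BaggingClassifier(base, n_estimators=50, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```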

Details

Industrial Management & Data Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0263-5577


Open Access
Article
Publication date: 2 September 2024

Yupaporn Areepong and Saowanit Sukparungsee

The purpose of this paper is to investigate and review the impact of the use of statistical quality control (SQC) development and analytical and numerical methods on average run…

Abstract

Purpose

The purpose of this paper is to review developments in statistical quality control (SQC) and the analytical and numerical methods used to evaluate the average run length, with a focus on econometric applications.

Design/methodology/approach

This study used several academic databases to survey and analyze the literature on SQC tools, their characteristics and applications. The surveys covered both parametric and nonparametric SQC.

Findings

This survey reviews the literature on both control charts and the methodologies for evaluating the average run length (ARL), and the SQC charts covered can be applied to any data. Nonparametric control charts are an effective alternative to standard control charts, and mixed nonparametric control charts can overcome the assumptions of normality and independence. In addition, several analytical and numerical methods are available for determining the ARL, namely Markov chains, martingales, numerical integral equations and explicit formulas, which are less time-consuming while maintaining accuracy. New mixed parametric and nonparametric control charts are effective alternatives for econometric applications.
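For readers unfamiliar with the ARL, the short sketch below estimates the in-control ARL of an EWMA chart by Monte Carlo simulation. The paper's contribution is to evaluate the ARL analytically (explicit formulas, integral equations) rather than by simulation, so this is only a conceptual illustration with example chart settings.

```python
# Conceptual illustration of the ARL: Monte Carlo estimate of the in-control
# average run length of an EWMA chart for i.i.d. standard normal data. The
# smoothing constant and control-limit width are example values.
import numpy as np

def ewma_run_length(lam=0.1, L=2.7, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    sigma_z = np.sqrt(lam / (2 - lam))        # asymptotic std dev of the EWMA statistic
    z, t = 0.0, 0
    while True:
        t += 1
        z = lam * rng.standard_normal() + (1 - lam) * z
        if abs(z) > L * sigma_z:              # out-of-control signal
            return t

rng = np.random.default_rng(0)
arl0 = np.mean([ewma_run_length(rng=rng) for _ in range(2000)])
print(f"estimated in-control ARL ~ {arl0:.0f}")
```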

Originality/value

Mixed nonparametric control charts can be applied to all data, with no restriction on the use of the proposed control chart; this is particularly relevant for the volatile and fluctuating data that typically arise in econometric problems. Furthermore, to obtain the ARL as a performance measure, an explicit formula for the ARL of time series data can be derived using the integral equation, and its accuracy can be verified against the numerical integral equation.

Details

Asian Journal of Economics and Banking, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2615-9821


Article
Publication date: 24 May 2024

Hamidreza Najafi, Ahmad Golrokh Sani and Mohammad Amin Sobati

In this study, a different approach is introduced to generate the kinetic sub-model for the modeling of solid-state pyrolysis reactions based on the thermogravimetric (TG…

Abstract

Purpose

In this study, a different approach is introduced to generate the kinetic sub-model for the modeling of solid-state pyrolysis reactions based on the thermogravimetric (TG) experimental data over a specified range of heating rates. Gene Expression Programming (GEP) is used to produce a correlation for the single-step global reaction rate as a function of determining kinetic variables, namely conversion, temperature, and heating rate.
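A hedged sketch of the general idea follows: fitting a symbolic expression for the reaction rate as a function of conversion, temperature and heating rate. gplearn's genetic-programming symbolic regression is used here as a close relative of (not a substitute for) GEP, and the synthetic TG-style data are invented for illustration only.

```python
# Hedged sketch: learning an algebraic rate correlation r = f(conversion,
# temperature, heating rate) by symbolic regression. gplearn implements
# genetic-programming symbolic regression, a close relative of GEP, and the
# synthetic TG-style data below are invented for illustration only.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
alpha = rng.uniform(0.05, 0.95, 300)          # conversion
T = rng.uniform(500.0, 900.0, 300)            # temperature [K]
beta = rng.choice([5.0, 10.0, 20.0], 300)     # heating rate [K/min]
rate = (1.0 - alpha) * np.exp(-3000.0 / T) * beta**0.1   # toy single-step rate law

X = np.column_stack([alpha, T, beta])
model = SymbolicRegressor(population_size=1000, generations=20, random_state=0)
model.fit(X, rate)
print(model._program)      # the evolved algebraic correlation
```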

Design/methodology/approach

For a case study on coal pyrolysis, a coefficient of determination (R²) of 0.99 was obtained with the generated model against the experimental benchmark data. Comparison of the model results with the experimental data demonstrates the applicability, reliability and convenience of GEP as a powerful tool for modeling solid-state pyrolysis reactions.

Findings

The resulting kinetic sub-model is highly efficient, simple, accurate and computationally attractive, which facilitates the CFD simulation of real pyrolyzers under isothermal and non-isothermal conditions.

Originality/value

The generated kinetic model takes the final form of an algebraic correlation which, compared with conventional kinetic models, offers several advantages: it is relatively simpler, more accurate and numerically more efficient. These characteristics make the proposed model computationally attractive when used as a sub-model in CFD applications to simulate real pyrolyzers under complex heating conditions.

Details

Engineering Computations, vol. 41 no. 4
Type: Research Article
ISSN: 0264-4401


Article
Publication date: 10 June 2024

Zhangtao Peng, Qian Fang, Qing Ai, Xiaomo Jiang, Hui Wang, Xingchun Huang and Yong Yuan

A risk-based method is proposed to identify the dominant influencing factors of secondary lining cracking in an operating mountain tunnel with weak surrounding rock.

Abstract

Purpose

A risk-based method is proposed to identify the dominant influencing factors of secondary lining cracking in an operating mountain tunnel with weak surrounding rock.

Design/methodology/approach

Based on the inspection data from a mountain tunnel in Southwest China, a lognormal proportional hazard model is established to describe the statistical distribution of secondary lining cracks. Then, the model parameters are obtained by using the Bayesian regression method, and the importance of influencing factors can be sorted based on the absolute values of the parameters.
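The sketch below is a rough, hypothetical analogue of this step: a Bayesian log-normal regression of a positive crack-severity measure on four encoded factors, with factors ranked by the magnitude of their posterior-mean coefficients. It is not the paper's lognormal proportional hazard model, and the feature encoding and data are invented.

```python
# Hypothetical analogue only: Bayesian log-normal regression of a positive
# crack-severity measure on four encoded factors, ranking factors by the
# absolute value of their posterior-mean coefficients (PyMC). This is not the
# paper's lognormal proportional hazard model; data and encoding are invented.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 4))                     # e.g. crack location, rock mass grade, lining age, void
true_beta = np.array([0.8, 0.5, 0.2, 0.1])
y = np.exp(X @ true_beta + rng.normal(scale=0.3, size=n))   # synthetic positive response

with pm.Model():
    beta = pm.Normal("beta", 0.0, 1.0, shape=4)
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.LogNormal("y_obs", mu=pm.math.dot(X, beta), sigma=sigma, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)

post_mean = idata.posterior["beta"].mean(dim=("chain", "draw")).values
print("factor ranking (most to least important):", np.argsort(-np.abs(post_mean)))
```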

Findings

The results show that the order of importance of the influencing factors of secondary lining cracks is as follows: location of the crack on the tunnel profile, rock mass grade of the surrounding rock, time to completion of the secondary lining, and void behind the secondary lining. Accordingly, the location of the crack on the tunnel profile and rock mass grade of the surrounding rock are the two most important influencing factors of secondary lining cracks in the investigated mountain tunnel, and appropriate maintenance measures should be focused on these two aspects.

Originality/value

This study provides a general and effective reference for identifying the dominant influencing factors of secondary lining cracks to guide the targeted maintenance in mountain tunnels.

Details

International Journal of Structural Integrity, vol. 15 no. 4
Type: Research Article
ISSN: 1757-9864


Article
Publication date: 26 July 2024

Guilherme Fonseca Gonçalves, Rui Pedro Cardoso Coelho and Igor André Rodrigues Lopes

The purpose of this research is to establish a robust numerical framework for the calibration of macroscopic constitutive parameters, based on the analysis of polycrystalline RVEs…

Abstract

Purpose

The purpose of this research is to establish a robust numerical framework for the calibration of macroscopic constitutive parameters, based on the analysis of polycrystalline RVEs with computational homogenisation.

Design/methodology/approach

This framework is composed of four building-blocks: (1) the multi-scale model, consisting of polycrystalline RVEs, where the grains are modelled with anisotropic crystal plasticity, and computational homogenisation to link the scales, (2) a set of loading cases to generate the reference responses, (3) the von Mises elasto-plastic model to be calibrated, and (4) the optimisation algorithms to solve the inverse identification problem. Several optimisation algorithms are assessed through a reference identification problem. Thereafter, different calibration strategies are tested. The accuracy of the calibrated models is evaluated by comparing their results against an FE2 model and experimental data.
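As a minimal illustration of the inverse-identification building block (4), the sketch below calibrates two von Mises parameters so that a 1D elastic/linear-hardening response matches a reference curve, using scipy's Nelder-Mead in place of the optimisers compared in the paper; all material values and the reference data are assumptions.

```python
# Minimal inverse-identification sketch: fit a yield stress and linear
# hardening modulus so a 1D elasto-plastic response matches a reference curve.
# The reference data and material values are assumptions, and scipy's
# Nelder-Mead is used in place of the optimisers compared in the paper.
import numpy as np
from scipy.optimize import minimize

E = 200.0e3   # Young's modulus [MPa]

def uniaxial_stress(strain, sigma_y, H):
    """1D elastic / linear-hardening stress response."""
    eps_y = sigma_y / E
    return np.where(strain <= eps_y, E * strain, sigma_y + H * (strain - eps_y))

strain = np.linspace(0.0, 0.05, 50)
sigma_ref = uniaxial_stress(strain, 300.0, 2.0e3)   # stand-in for the homogenised RVE response

def objective(x):
    sigma_y, H = x
    return np.sum((uniaxial_stress(strain, sigma_y, H) - sigma_ref) ** 2)

res = minimize(objective, x0=[250.0, 1.0e3], method="Nelder-Mead")
print("calibrated [sigma_y, H] =", res.x)
```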

Findings

In the initial tests, the LIPO optimiser performs the best. Good accuracy is obtained with the calibrated constitutive models. The computing time needed by the FE2 simulations is five orders of magnitude larger than that of the standard macroscopic simulations, demonstrating that this framework is well suited to obtaining efficient micro-mechanics-informed constitutive models.

Originality/value

This contribution proposes a numerical framework, based on FE2 and macro-scale single element simulations, where the calibration of constitutive laws is informed by multi-scale analysis. The most efficient combination of optimisation algorithm and definition of the objective function is studied, and the robustness of the proposed approach is demonstrated by validation with both numerical and experimental data.

Details

Engineering Computations, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-4401


Article
Publication date: 11 June 2024

Zhihong Jiang, Jiachen Hu, Xiao Huang and Hui Li

Current reinforcement learning (RL) algorithms are facing issues such as low learning efficiency and poor generalization performance, which significantly limit their practical…

Abstract

Purpose

Current reinforcement learning (RL) algorithms are facing issues such as low learning efficiency and poor generalization performance, which significantly limit their practical application in real robots. This paper adopts a hybrid model-based and model-free policy search method with multi-timescale value function tuning, aiming to allow robots to learn complex motion planning skills in multi-goal and multi-constraint environments with only a few interactions.

Design/methodology/approach

A goal-conditioned model-based and model-free search method with multi-timescale value function tuning is proposed in this paper. First, the authors construct a multi-goal, multi-constrained policy optimization approach that fuses model-based policy optimization with goal-conditioned, model-free learning. Soft constraints on states and controls are applied to ensure fast and stable policy iteration. Second, an uncertainty-aware multi-timescale value function learning method is proposed, which constructs a multi-timescale value function network and adaptively chooses the value function planning timescales according to the value prediction uncertainty. It implicitly reduces the value representation complexity and improves the generalization performance of the policy.

Findings

The algorithm enables physical robots to learn generalized skills in real-world environments through a handful of trials. The simulation and experimental results show that the algorithm outperforms other relevant model-based and model-free RL algorithms.

Originality/value

This paper combines goal-conditioned RL and the model predictive path integral method into a unified model-based policy search framework, which improves the learning efficiency and policy optimality of motor skill learning in multi-goal and multi-constrained environments. An uncertainty-aware multi-timescale value function learning and selection method is proposed to overcome long horizon problems, improve optimal policy resolution and therefore enhance the generalization ability of goal-conditioned RL.
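For readers unfamiliar with the model predictive path integral (MPPI) idea mentioned above, the toy sketch below samples control sequences, weights them by exponentiated negative rollout cost and averages the first control. The 1D point-mass dynamics, goal and temperature are hypothetical, and the paper's goal conditioning and value-function guidance are omitted.

```python
# Toy illustration of the MPPI idea: sample control sequences, weight them by
# the exponentiated negative rollout cost and average the first control. The
# 1D point-mass dynamics, goal and temperature are hypothetical; goal
# conditioning and value-function guidance from the paper are omitted.
import numpy as np

def mppi_step(state, goal, horizon=20, samples=256, lam=1.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    noise = rng.normal(scale=0.5, size=(samples, horizon))   # candidate control sequences
    costs = np.zeros(samples)
    for k in range(samples):
        x = state
        for t in range(horizon):
            x = x + 0.1 * noise[k, t]            # simple point-mass rollout
            costs[k] += (x - goal) ** 2
    weights = np.exp(-(costs - costs.min()) / lam)
    weights /= weights.sum()
    return float(weights @ noise[:, 0])          # cost-weighted first control

rng = np.random.default_rng(0)
state, goal = 0.0, 1.0
for _ in range(30):
    state += 0.1 * mppi_step(state, goal, rng=rng)
print("final state ~", round(state, 3))
```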

Details

Robotic Intelligence and Automation, vol. 44 no. 4
Type: Research Article
ISSN: 2754-6969


Article
Publication date: 15 July 2024

Xiaolong Lyu, Dan Huang, Liwei Wu and Ding Chen

Parameter estimation in complex engineering structures typically necessitates repeated calculations using simulation models, leading to significant computational costs. This paper…

Abstract

Purpose

Parameter estimation in complex engineering structures typically necessitates repeated calculations using simulation models, leading to significant computational costs. This paper aims to introduce an adaptive multi-output Gaussian process (MOGP) surrogate model for parameter estimation in time-consuming models.

Design/methodology/approach

The MOGP surrogate model is established to replace the computationally expensive finite element method (FEM) analysis during the estimation process. The authors propose a novel adaptive sampling method for MOGP inspired by the traditional expected improvement (EI) method, aiming to reduce the number of sample points required to build the surrogate model. Two mathematical examples and an application to the back analysis of a concrete arch dam are tested to demonstrate the effectiveness of the proposed method.
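A hedged sketch of a surrogate with expected-improvement-style adaptive sampling follows: scikit-learn's single-output Gaussian process stands in for the MOGP, and expensive_model() is a hypothetical placeholder for the FEM misfit being approximated.

```python
# Hedged sketch: a Gaussian process surrogate with expected-improvement (EI)
# style adaptive sampling for a one-parameter estimation problem. A
# single-output GP stands in for the MOGP, and expensive_model() is a
# hypothetical placeholder for the FEM misfit being approximated.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_model(theta):
    """Placeholder for an expensive FEM run; returns a scalar misfit."""
    return (theta - 0.3) ** 2 + 0.05 * np.sin(10.0 * theta)

X = np.array([[0.0], [0.5], [1.0]])                   # initial design
y = np.array([expensive_model(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True)
for _ in range(10):                                   # adaptive sampling loop
    gp.fit(X, y)
    cand = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
    mu, sd = gp.predict(cand, return_std=True)
    z = (y.min() - mu) / np.maximum(sd, 1e-9)
    ei = (y.min() - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_new = cand[np.argmax(ei)]
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_model(x_new[0]))

print("estimated parameter ~", X[np.argmin(y)][0])
```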

Findings

The numerical results show that the proposed method requires a relatively small number of sample points to achieve accurate estimates. The proposed adaptive sampling method combined with the MOGP surrogate model shows an obvious advantage in parameter estimation problems involving expensive-to-evaluate models, particularly those with high-dimensional output.

Originality/value

A novel adaptive sampling method for establishing the MOGP surrogate model is proposed to accelerate the procedure of solving large-scale parameter estimation problems. This modified adaptive sampling method, based on the traditional EI method, is better suited for multi-output problems, making it highly valuable for numerous practical engineering applications.

Details

Engineering Computations, vol. 41 no. 6
Type: Research Article
ISSN: 0264-4401


Open Access
Article
Publication date: 22 March 2024

Geming Zhang, Lin Yang and Wenxiang Jiang

The purpose of this study is to introduce the top-level design ideas and the overall architecture of earthquake early-warning system for high speed railways in China, which is…

Abstract

Purpose

The purpose of this study is to introduce the top-level design ideas and the overall architecture of earthquake early-warning system for high speed railways in China, which is based on P-wave earthquake early-warning and multiple ways of rapid treatment.

Design/methodology/approach

The paper describes the key technologies involved in the development of the system, such as P-wave identification and earthquake early-warning, multi-source seismic information fusion and earthquake emergency treatment technologies. The paper also presents test results for the system, which show that it is fully functional and that its major performance indicators meet the design requirements.
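As background for the P-wave identification step, the sketch below implements a classic short-term/long-term average (STA/LTA) onset trigger. The paper does not state that this is its algorithm, and the window lengths, threshold and synthetic trace are assumptions.

```python
# Background sketch: a classic short-term / long-term average (STA/LTA)
# trigger for picking a P-wave onset. The paper's own identification
# algorithm is not specified here; window lengths, threshold and the
# synthetic trace are assumptions.
import numpy as np

def sta_lta_trigger(trace, fs, sta_s=0.5, lta_s=10.0, threshold=4.0):
    sta_n, lta_n = int(sta_s * fs), int(lta_s * fs)
    energy = trace ** 2
    for i in range(lta_n, len(trace)):
        sta = energy[i - sta_n:i].mean()
        lta = energy[i - lta_n:i].mean()
        if lta > 0 and sta / lta > threshold:
            return i / fs                 # trigger time [s]
    return None

fs = 100                                              # sampling rate [Hz]
rng = np.random.default_rng(0)
trace = rng.normal(scale=0.1, size=30 * fs)           # 30 s of background noise
trace[15 * fs:] += np.sin(2.0 * np.pi * 5.0 * np.arange(15 * fs) / fs)  # arrival at t = 15 s
print("P-wave trigger at ~", sta_lta_trigger(trace, fs), "s")
```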

Findings

The study demonstrates that the high speed railway earthquake early-warning system serves as an important technical tool for high speed railways to cope with the threat that earthquakes pose to operational safety. The key technical indicators of the system show excellent performance: the first report time of the P-wave is less than three seconds; from the first arrival of the P-wave to the beginning of train braking, the total delay of onboard emergency treatment is 3.63 seconds with 95% probability; and the average total delay for power failures triggered by substations is 3.3 seconds.

Originality/value

The paper provides a valuable reference for the research and development of earthquake early-warning system for high speed railways in other countries and regions. It also contributes to the earthquake prevention and disaster reduction efforts.
