Search results
1 – 10 of over 24,000

Michal Frivaldsky, Miroslav Pavelek, Pavol Spanik, Dagmar Faktorova and Gabriela Spanikova
Abstract
Purpose
The purpose of this paper is to study the performance of the approximated model of biological tissue for development of complex 3D models. The comparison of results from the complex model of liver tissue and results from the approximated model is provided to validate the proposed approximation method.
Design/methodology/approach
The proposed model of hepatic tissue (respecting its heterogeneous character down to the microstructure of the hepatic lobules) is used to analyze the current field distribution within this tissue. Initially, the complex model of the tissue structure is presented, respecting its heterogeneous and complicated composition. Subsequently, the procedure for approximating the complex model is described. The main motivation is the need for a simple, fast and accurate simulation model that can then be used within more complex models of human organs to investigate the negative impacts of electrosurgical equipment on heterogeneous tissue structures. For these purposes, the complex and approximated models are compared and evaluated against each other.
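The homogenization step behind such an approximation can be sketched in a few lines: a heterogeneous region is replaced by a single effective conductivity, bounded by the classical series/parallel (Wiener) mixing rules. This is only an illustrative sketch, not the paper's model; the volume fractions and conductivities below are hypothetical, not measured liver data.

```python
# Hedged sketch: collapsing a heterogeneous tissue region into one effective
# conductivity, the basic idea behind replacing a complex lobule-resolving
# model with an approximated one. Conductivities (S/m) are illustrative.

def effective_conductivity(fractions, sigmas):
    """Return (lower, upper) Wiener bounds on effective conductivity.

    fractions : volume fractions of each constituent (must sum to 1)
    sigmas    : conductivity of each constituent in S/m
    """
    assert abs(sum(fractions) - 1.0) < 1e-9
    upper = sum(f * s for f, s in zip(fractions, sigmas))        # parallel layers (arithmetic mean)
    lower = 1.0 / sum(f / s for f, s in zip(fractions, sigmas))  # series layers (harmonic mean)
    return lower, upper

# Hypothetical two-phase tissue: 80% parenchyma, 20% connective tissue.
lo, hi = effective_conductivity([0.8, 0.2], [0.52, 0.10])
print(f"effective sigma in [{lo:.3f}, {hi:.3f}] S/m")
```

The effective conductivity of the full heterogeneous model should fall between these bounds, which gives one quick sanity check on an approximated model.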
Findings
The obtained results can be exploited to analyze the probability of injury formation in sensitive tissue structures, and the approximated model can serve to streamline otherwise complex and time-consuming analyses.
Research limitations/implications
Research limitations include the development of a precise yet fast electromagnetic simulation model of biological tissue.
Practical implications
The practical implications concern the optimization of electrosurgical procedures.
Originality/value
The originality of the paper lies in its approximation method for modeling organic tissue.
H. Medellín, J. Corney, J.B.C. Davies, T. Lim and J.M. Ritchie
Abstract
This paper presents a novel approach to rapid prototyping based on the octree decomposition of 3D geometric models. The proposed method, referred to as OcBlox, combines an octree modeller, an assembly planning system, and a robotic assembly cell into an integrated system that builds approximate prototypes directly from 3D model data. Given an exact 3D model, the system generates an octree decomposition of it, which approximates the shape with cubic units referred to as “Blox”. These units are automatically assembled to obtain an approximate physical prototype. The paper details the algorithms used to generate the octree's assembly sequence and demonstrates the feasibility of the OcBlox approach by describing a single-resolution example of a prototype built with this automated system. An analysis of the approach's potential to decrease the manufacturing time of physical components is presented. Finally, the potential of OcBlox to support complex overhanging geometry is discussed.
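The core decomposition step can be sketched as follows. This is a hedged illustration, not the authors' implementation: `inside` is a hypothetical point-membership test, a sphere stands in for an exact 3D model, and the corner/center point sampling is an approximation that can miss thin features (a production system would use exact geometric tests).

```python
# Hedged sketch of octree decomposition in an OcBlox-style pipeline:
# recursively subdivide a cube and keep the sub-cubes that lie fully inside
# the shape, down to a minimum "Blox" size.

def octree_blox(inside, origin, size, min_size):
    """Return a list of (origin, size) cubes approximating the shape."""
    pts = [(origin[0] + dx * size, origin[1] + dy * size, origin[2] + dz * size)
           for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)]
    pts.append((origin[0] + size / 2, origin[1] + size / 2, origin[2] + size / 2))
    hits = sum(1 for p in pts if inside(p))
    if hits == len(pts):               # all samples inside: emit one Blox
        return [(origin, size)]        # (exact for convex shapes)
    if hits == 0 or size <= min_size:  # apparently outside, or resolution limit
        return []
    half = size / 2                    # mixed cube: recurse into 8 octants
    blox = []
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                child = (origin[0] + dx * half, origin[1] + dy * half, origin[2] + dz * half)
                blox += octree_blox(inside, child, half, min_size)
    return blox

# A sphere of radius 3 centered in an 8x8x8 root cube stands in for a model.
sphere = lambda p: (p[0] - 4) ** 2 + (p[1] - 4) ** 2 + (p[2] - 4) ** 2 <= 9
cubes = octree_blox(sphere, (0.0, 0.0, 0.0), 8.0, 1.0)
print(len(cubes), "Blox, total volume", sum(s ** 3 for _, s in cubes))
```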
Zhiyuan Huang, Haobo Qiu, Ming Zhao, Xiwen Cai and Liang Gao
Abstract
Purpose
Popular regression methodologies are impractical for obtaining accurate metamodels of high-dimensional practical problems, since the computational time increases exponentially as the number of dimensions rises. The purpose of this paper is to use a support vector regression with high-dimensional model representation (SVR-HDMR) model to obtain accurate metamodels for high-dimensional problems from only a few sampling points.
Design/methodology/approach
High-dimensional model representation (HDMR) is a general set of quantitative model assessment and analysis tools for efficiently deducing high-dimensional input–output system behavior. The support vector regression (SVR) method can approximate the underlying functions from a small subset of sample points. The Dividing Rectangles (DIRECT) algorithm is a deterministic sampling method.
Findings
This paper proposes a new form of HDMR that integrates SVR, termed SVR-HDMR. An intelligent sampling strategy, namely the DIRECT method, is adopted to improve the efficiency of SVR-HDMR.
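The decomposition that SVR-HDMR builds on can be sketched with a first-order (cut-)HDMR: f(x) ≈ f(c) + Σᵢ [fᵢ(xᵢ) − f(c)], where each univariate component fᵢ is fitted from samples along the axis through a cut point c. This is a hedged stand-in: the paper fits the components with SVR and chooses samples with DIRECT, while here plain piecewise-linear interpolation on a fixed grid substitutes for both.

```python
# Hedged sketch of first-order cut-HDMR. Linear interpolation stands in for
# the SVR component fits, and a uniform grid stands in for DIRECT sampling.
import bisect

def fit_hdmr(f, cut, grids):
    """Tabulate the univariate components f_i along each axis through `cut`."""
    f0 = f(cut)
    comps = []
    for i, grid in enumerate(grids):
        vals = []
        for xi in grid:
            x = list(cut)
            x[i] = xi
            vals.append(f(x))
        comps.append((grid, vals))
    return f0, comps

def eval_hdmr(f0, comps, x):
    """Evaluate f(x) ~ f0 + sum_i (f_i(x_i) - f0) by linear interpolation."""
    total = f0
    for (grid, vals), xi in zip(comps, x):
        j = min(max(bisect.bisect_right(grid, xi) - 1, 0), len(grid) - 2)
        t = (xi - grid[j]) / (grid[j + 1] - grid[j])
        total += (1 - t) * vals[j] + t * vals[j + 1] - f0
    return total

# An additively separable test function, for which first-order HDMR is exact
# up to interpolation error.
f = lambda x: x[0] ** 2 + 3 * x[1] + 1
grid = [k / 10 for k in range(11)]
f0, comps = fit_hdmr(f, [0.5, 0.5], [grid, grid])
print(eval_hdmr(f0, comps, [0.3, 0.7]), "vs exact", f([0.3, 0.7]))
```

Note the sampling cost: first-order HDMR needs only a handful of samples per dimension (linear growth), which is the reason this family of decompositions scales to high-dimensional problems.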
Originality/value
Compared to other metamodeling techniques, SVR-HDMR significantly improves accuracy and efficiency, and it helps engineers understand the essence of the underlying problems visually.
Silvana Maria B. Afonso, Bernardo Horowitz and Marcelo Ferreira da Silva
Abstract
Purpose
The purpose of this paper is to propose physically based varying-fidelity surrogates to be used in the structural design optimization of space trusses. The main aim is to demonstrate their efficiency in reducing the number of high-fidelity (HF) runs in the optimization process.
Design/methodology/approach
In this work, surrogate models are built for space truss structures. The study uses functional as well as physical surrogates. In the latter, a grid analogy of the space truss is used, thereby drastically reducing the analysis cost. Global and local approaches are considered; the latter requires a globalization scheme, sequential approximate optimization (SAO), to ensure convergence.
Findings
Physically based surrogates are proposed. Classical techniques, namely Taylor series and kriging, are also implemented for comparison purposes. A parameter study is needed to select the best kriging model to use as a surrogate. A test case was considered for optimization and several surrogates were built. The CPU time is reduced, compared with the HF solution, for all surrogate-based optimizations performed. The best result was achieved by combining the proposed physical model with additive corrections in an SAO strategy in which C1 continuity was imposed at each trust-region center. Some guidance for other engineering applications is given.
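A minimal SAO loop of the kind referenced here can be sketched under strong simplifying assumptions: the surrogate is a one-dimensional finite-difference quadratic rather than the paper's physical grid model, and the objective is a toy function, not a truss analysis.

```python
# Hedged sketch of a sequential approximate optimization (SAO) loop: build a
# cheap local model around the trust-region center, minimize it inside the
# region, then move/expand or shrink the region depending on whether the true
# (expensive) objective actually improved.

def sao_minimize(f, x0, radius=1.0, h=1e-4, tol=1e-8, max_iter=100):
    x, fx = x0, f(x0)
    for _ in range(max_iter):
        # Local quadratic surrogate from finite differences at the center.
        g = (f(x + h) - f(x - h)) / (2 * h)
        H = (f(x + h) - 2 * fx + f(x - h)) / h ** 2
        # Minimize the surrogate within the trust region [x-radius, x+radius].
        step = -g / H if H > 0 else (-radius if g > 0 else radius)
        step = max(-radius, min(radius, step))
        x_new, f_new = x + step, f(x + step)
        if f_new < fx:                 # true objective improved: accept, expand
            x, fx, radius = x_new, f_new, radius * 2
        else:                          # surrogate misled us: reject, shrink
            radius *= 0.5
        if radius < tol:
            break
    return x, fx

x_opt, f_opt = sao_minimize(lambda x: (x - 2.0) ** 2 + 1.0, x0=10.0)
print(f"x* ~ {x_opt:.4f}, f* ~ {f_opt:.4f}")
```

The accept/shrink logic is what the globalization scheme contributes: convergence is driven by the true objective, so even a crude surrogate cannot pull the iterates to a false optimum.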
Originality/value
This is the first time that physically based surrogates for the optimum design of space truss systems have been used in the SAO framework. Physical surrogates typically exhibit better generalization properties than other surrogate forms, produce faster solutions, and do not suffer from the curse of dimensionality when used in approximate optimization strategies.
Duncai Lei, Xiannian Kong, Siyu Chen, Jinyuan Tang and Zehua Hu
Abstract
Purpose
The purpose of this paper is to investigate, numerically and experimentally, the dynamic responses of a spur gear pair with unloaded static transmission error (STE) excitation, and the influences of system factors including mesh stiffness, error excitation and torque on the dynamic transmission error (DTE).
Design/methodology/approach
A simple lumped-parameter dynamic model of a gear pair considering time-varying mesh stiffness, backlash and unloaded STE excitation is developed. The STE is calculated from the tooth profile deviation measured under the unloaded condition. A four-square gear test rig is designed to measure and analyze the DTE and vibration responses of the gear pair. The dynamic responses of the gear transmission are then studied numerically and experimentally.
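A single-degree-of-freedom model of the kind described can be sketched as m·x″ + c·x′ + k(t)·g(x − e(t)) = F, where x is the DTE along the line of action, k(t) a time-varying mesh stiffness, e(t) the STE excitation and g(·) a dead-zone backlash function. All parameter values below are illustrative assumptions, not the paper's measured data.

```python
# Hedged sketch of a lumped-parameter spur gear model with time-varying mesh
# stiffness, backlash and STE excitation, integrated with semi-implicit Euler.
import math

def simulate_dte(steps=20000, dt=1e-6):
    m, c, F, b = 0.05, 40.0, 200.0, 2e-5          # mass, damping, load, half-backlash
    k = lambda t: 2e8 * (1 + 0.2 * math.sin(2 * math.pi * 2000 * t))  # mesh stiffness
    e = lambda t: 5e-6 * math.sin(2 * math.pi * 2000 * t)             # unloaded STE
    g = lambda d: d - b if d > b else (d + b if d < -b else 0.0)      # backlash dead zone
    acc = lambda t, x, v: (F - c * v - k(t) * g(x - e(t))) / m
    x, v, xs = 0.0, 0.0, []
    for i in range(steps):
        t = i * dt
        v += acc(t, x, v) * dt                    # semi-implicit Euler step
        x += v * dt
        xs.append(x)
    return xs

dte = simulate_dte()
print(f"steady-state mean DTE ~ {sum(dte[-2000:]) / 2000:.2e} m")
```

With these illustrative values the mesh frequency (2 kHz) sits well below the contact natural frequency (about 10 kHz), so the steady response tracks e(t) plus the static deflection, which is the quasi-static regime; raising speed or dropping the load in this model is how tooth-separation behavior would be explored.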
Findings
The predicted numerical DTE matches well with the experimental results. When the real unloaded STE excitation is used without any approximation, the dynamic response is dominated by the mesh frequency and its higher-order harmonic components, which may not be attributable to assembly error. Sub-harmonic and super-harmonic resonant behaviors are excited by the higher-order harmonic components of the STE. Tooth separation is not necessarily prevented even when the gear pair operates under high speed and heavy load.
Originality/value
This study helps to improve the modeling of spur gear transmission dynamics and provides a reference for understanding the influence of mesh stiffness, STE excitation and system torque on vibration behavior.
Matthew Powers and Brian O'Flynn
Abstract
Purpose
Rapid sensitivity analysis and near-optimal decision-making in contested environments are valuable requirements when providing military logistics support. Port of debarkation denial motivates maneuver from strategic operational locations, further complicating logistics support. Simulations enable rapid concept design, experiment and testing that meet these complicated logistic support demands. However, simulation model analyses are time consuming as output data complexity grows with simulation input. This paper proposes a methodology that leverages the benefits of simulation-based insight and the computational speed of approximate dynamic programming (ADP).
Design/methodology/approach
This paper describes a simulated contested logistics environment and demonstrates how output data informs the parameters required for the ADP dialect of reinforcement learning (aka Q-learning). Q-learning output includes a near-optimal policy that prescribes decisions for each state modeled in the simulation. This paper's methods conform to DoD simulation modeling practices complemented with AI-enabled decision-making.
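The tabular Q-learning referred to here can be sketched on a toy problem. This is a hedged stand-in, not the paper's model: a small line-world substitutes for the simulated contested-logistics state space, with states 0..N−1, actions {move left, move right}, and a reward only at the destination state; alpha, gamma and epsilon are the usual Q-learning knobs, not the paper's values.

```python
# Hedged sketch of tabular Q-learning: learn a near-optimal policy that moves
# toward a goal state, using the standard update
#   Q[s][a] += alpha * (r + gamma * max_a' Q[s'][a'] - Q[s][a]).
import random

def q_learn(n_states=10, episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.1):
    random.seed(0)
    Q = [[0.0, 0.0] for _ in range(n_states)]     # Q[state][action]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:                  # episode ends at the goal
            # epsilon-greedy action selection: 0 = left, 1 = right
            a = random.randrange(2) if random.random() < epsilon else \
                max((0, 1), key=lambda act: Q[s][act])
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else -0.01   # goal reward, step cost
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learn()
policy = [max((0, 1), key=lambda act: Q[s][act]) for s in range(len(Q))]
print("greedy policy (1 = move toward goal):", policy)
```

The output of interest is exactly what the paper describes: a near-optimal policy prescribing a decision for each state, read off greedily from the learned Q-table.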
Findings
This study demonstrates the use of simulation output data as a means of state-space reduction to mitigate the curse of dimensionality, since massive amounts of simulation output data otherwise become unwieldy. This work also demonstrates how Q-learning parameters reflect simulation inputs, so that simulation model behavior can be compared to near-optimal policies.
Originality/value
Fast computation is attractive for sensitivity analysis while divorcing evaluation from scenario-based limitations. The United States military is eager to embrace emerging AI analytic techniques to inform decision-making but is hesitant to abandon simulation modeling. This paper proposes Q-learning as an aid to overcome cognitive limitations in a way that satisfies the desire to wield AI-enabled decision-making combined with modeling and simulation.
Hailiang Su, Fengchong Lan, Yuyan He and Jiqing Chen
Abstract
Purpose
The meta-model method has been widely used in structural reliability optimization design. Its main limitation is that the error introduced by the meta-model approximation is difficult to quantify, which makes the reliability evaluation in the optimization results inaccurate. Exploiting the local efficiency of the proxy model, this paper aims to propose a local effective constrained response surface method (LEC-RSM) based on a meta-model.
Design/methodology/approach
The operating mechanism of LEC-RSM is to calculate an index of local relative importance based on numerical theory, capture the most effective area in the entire design space, and select important analysis domains for sample changes. To improve the efficiency of the algorithm, a constrained efficient set algorithm (ESA) is introduced, in which sample-point validity is identified based on the reliability information obtained in the previous cycle, and boundary sampling points that violate the constraint conditions are then ignored or eliminated.
Findings
The computational power of the proposed method is demonstrated by solving two mathematical problems and an actual engineering optimization problem involving a car collision. LEC-RSM achieves the optimal performance more easily, with fewer function evaluations and fewer algorithm iterations.
Originality/value
This paper proposes a new RSM technique based on a proxy model to complete the reliability design. The originality of this paper lies in increasing the sampling points by identifying the local importance of the analysis domain, and in introducing the constrained ESA to improve the efficiency of the algorithm.
Hailiang Su, Fengchong Lan, Yuyan He and Jiqing Chen
Abstract
Purpose
Because of its high computational efficiency, the response surface method (RSM) has been widely used in structural reliability analysis. However, for a highly nonlinear limit state function (LSF), the approximate accuracy of the failure probability depends mainly on the design point, and the response surface function composed of the initial experimental points rarely fits the LSF exactly. Inaccurate design points therefore usually introduce errors into the traditional RSM. The purpose of this paper is to present a hybrid method combining an adaptive moving-experimental-points strategy with RSM, describing a new response surface using the downhill simplex algorithm (DSA-RSM).
Design/methodology/approach
In DSA-RSM, the operating principle of the basic DSA, in which local descending vectors are automatically generated, was studied. The search strategy of the basic DSA was then modified, and the RSM approximate model was reconstructed by combining the direct-search advantage of the DSA with the reliability mechanism of response surface analysis.
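The response-surface reliability idea underlying such methods can be sketched as follows: approximate a limit state function g(x) by a surrogate fitted around a design point, then estimate the failure probability P(g < 0) from the cheap surrogate. This is a hedged illustration, not the paper's algorithm: a first-order surrogate and crude Monte Carlo stand in for the DSA-driven experimental-point updates, and g, the design point and all parameters are invented for the example.

```python
# Hedged sketch of surrogate-based reliability analysis: fit a linear
# response surface to g at a design point, then Monte Carlo both the true
# LSF and the surrogate to compare failure probabilities.
import random

def g(x):                       # illustrative limit state: failure when g < 0
    return 3.0 - x[0] ** 2 - 0.5 * x[1]

def fit_linear_surrogate(fn, x0, h=1e-5):
    """First-order Taylor surrogate of fn around x0 via finite differences."""
    f0 = fn(x0)
    grad = []
    for i in range(len(x0)):
        xp = list(x0)
        xp[i] += h
        grad.append((fn(xp) - f0) / h)
    return lambda x: f0 + sum(gi * (xi - x0i) for gi, xi, x0i in zip(grad, x, x0))

def failure_prob(fn, n=100_000):
    """Crude Monte Carlo estimate of P(fn(X) < 0) for standard normal X."""
    random.seed(1)
    fails = sum(fn([random.gauss(0, 1), random.gauss(0, 1)]) < 0 for _ in range(n))
    return fails / n

x0 = [1.0, 0.5]                 # hypothetical design point near the limit state
surrogate = fit_linear_surrogate(g, x0)
pf_true = failure_prob(g)
pf_surr = failure_prob(surrogate)
print(f"Pf (true LSF)  ~ {pf_true:.4f}")
print(f"Pf (surrogate) ~ {pf_surr:.4f}")
```

The gap between the two estimates is precisely the surrogate-induced error that design-point-moving strategies such as DSA-RSM aim to shrink: a better-placed response surface drives the surrogate's failure probability toward the true one.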
Findings
The computational power of the proposed method is demonstrated by solving four structural reliability problems, including an actual engineering problem involving a car collision. Compared with other structural reliability analysis methods, the modified DSA-interpolated response surface approach shows good convergence and computational accuracy.
Originality/value
This paper proposes a new RSM technique based on a proxy model to complete the reliability analysis. The originality of this paper lies in an improved RSM that judiciously adjusts the positions of the experimental points using the DSA principle, making the fitted response surface closer to the actual limit state surface.
Cindy S. H. Wang and Shui Ki Wan
Abstract
This chapter extends the univariate forecasting method proposed by Wang, Luc, and Hsiao (2013) to forecast multivariate long memory models subject to structural breaks. The approach needs neither to estimate the parameters of this multivariate system nor to detect the structural breaks; the only step is to employ a VAR(k) model to approximate the multivariate long memory model subject to structural breaks. This reduces the computational burden substantially and avoids estimating the parameters of the multivariate long memory model, which can lead to poor forecasting performance. Moreover, when there are multiple breaks, when the breaks occur close to the end of the sample, or when the breaks occur at different locations for the time series in the system, our VAR approximation approach solves the issue of spurious breaks in finite samples, even though the exact orders of the multivariate long memory process are unknown. Insights from our theoretical analysis are confirmed by a set of Monte Carlo experiments, through which we demonstrate that our approach provides a substantial improvement over existing multivariate prediction methods. Finally, an empirical application to multivariate realized volatility illustrates the usefulness of our forecasting procedure.
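The approximation idea can be illustrated with a deliberately simplified univariate stand-in for the VAR(k): fit a plain AR(2) by least squares to a series containing a mean break and forecast from it, with no break detection and no long-memory estimation. The data-generating process and all coefficients below are synthetic illustrations, not the chapter's setup or its realized-volatility data.

```python
# Hedged sketch: approximate an unknown process with breaks by an ordinary
# autoregression fitted by OLS, then forecast one step ahead from the fit.
import random

def fit_ar2(y):
    """OLS fit of y_t = c + a1*y_{t-1} + a2*y_{t-2} via 3x3 normal equations."""
    rows = [[1.0, y[t - 1], y[t - 2]] for t in range(2, len(y))]
    targets = y[2:]
    # Build X'X and X'y, then solve by Gaussian elimination with pivoting.
    A = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * t for r, t in zip(rows, targets)) for i in range(3)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
        for r in range(i + 1, 3):
            m = A[r][i] / A[i][i]
            A[r] = [a - m * c for a, c in zip(A[r], A[i])]
            b[r] -= m * b[i]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                       # back substitution
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
    return x                                  # [c, a1, a2]

# Synthetic AR(1)-type series with a mean break at t = 300.
random.seed(42)
y = [0.0, 0.0]
for t in range(2, 600):
    mean_t = 0.0 if t < 300 else 2.0
    mean_p = 0.0 if t - 1 < 300 else 2.0
    y.append(mean_t + 0.6 * (y[-1] - mean_p) + random.gauss(0, 0.3))

c, a1, a2 = fit_ar2(y)
forecast = c + a1 * y[-1] + a2 * y[-2]        # one-step-ahead forecast
print(f"AR(2) fit: c={c:.3f}, a1={a1:.3f}, a2={a2:.3f}, forecast={forecast:.3f}")
```

The point of the sketch is that the autoregression absorbs both the persistence and the break without ever modeling either explicitly, which is the computational shortcut the VAR(k) approximation exploits.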
Abstract
Purpose
In many problems involving decision-making under uncertainty, the underlying probability model is unknown but partial information is available. In some approaches to this problem, the available prior information is used to define an appropriate probability model for the system uncertainty through a probability density function. When the prior information is available as a finite sequence of moments of the unknown probability density function (PDF) defining the appropriate probability model for the uncertain system, the maximum entropy (ME) method derives a PDF from an exponential family to define an approximate model. This paper aims to investigate some optimality properties of the ME estimates.
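The ME construction can be sketched for the simplest case: with one moment constraint (a known mean μ on [0, 1]), the maximum-entropy density has the exponential-family form p(x) = exp(−λx)/Z, and λ is chosen so the model's mean matches μ. Numerical quadrature and bisection below stand in for a general-purpose ME solver, and μ = 0.3 is an arbitrary example value.

```python
# Hedged sketch of the maximum-entropy method under a single moment
# constraint: solve for the exponential-family parameter lam so that the
# mean of p(x) ~ exp(-lam*x) on [0, 1] equals the prescribed moment.
import math

def me_mean(lam, n=10_000):
    """Mean of p(x) proportional to exp(-lam*x) on [0,1], midpoint quadrature."""
    xs = [(i + 0.5) / n for i in range(n)]
    w = [math.exp(-lam * x) for x in xs]
    return sum(x * wi for x, wi in zip(xs, w)) / sum(w)

def solve_lambda(mu, lo=-50.0, hi=50.0, tol=1e-10):
    """Bisection: me_mean is strictly decreasing in lam, so bracket and bisect."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if me_mean(mid) > mu:
            lo = mid          # model mean too large: need larger lam
        else:
            hi = mid
    return (lo + hi) / 2

lam = solve_lambda(0.3)
print(f"lambda ~ {lam:.4f}, achieved mean ~ {me_mean(lam):.4f}")
```

With m moment constraints the same idea generalizes to p(x) = exp(−Σⱼ λⱼxʲ)/Z and an m-dimensional root-finding problem, which is the m-parameter exponential family the paper's error bounds concern.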
Design/methodology/approach
For n > m, the exact model can be best approximated by one of an infinite number of unknown PDFs from an n-parameter exponential family. The upper bound of the divergence distance between any PDF from this family and the m-parameter exponential family PDF defined by the ME method is derived. A measure of the adequacy of the model defined by the ME method is thus provided.
Findings
These results may be used to establish confidence intervals on the estimate of a function of the random variable when the ME approach is employed. Additionally, it is shown that, when working with large samples of independent observations, a PDF can be defined from an exponential family to model the uncertainty of the underlying system with measurable accuracy. Finally, a relationship with maximum likelihood estimation is established for this case.
Practical implications
The so‐called known moments problem addressed in this paper has a variety of applications in learning, blind equalization and neural networks.
Originality/value
An upper bound is derived for the error in approximating an unknown density function f(x) by its ME estimate, obtained as a PDF p(x, α) from an m-parameter exponential family based on m moment constraints. The error bound helps decide whether the number of moment constraints is adequate for modeling the uncertainty in the system under study. In turn, this allows one to establish confidence intervals on an estimate of some function of the random variable X, given the known moments. It is also shown how, when working with a large sample of independent observations instead of precisely known moment constraints, a density from an exponential family can be defined to model the uncertainty of the underlying system with measurable accuracy. In this case, a relationship to ML estimation is established.