Search results

1 – 10 of 240
Article
Publication date: 4 July 2016

Xiongfeng Zhu, Zheng Guo, Zhongxi Hou, Xianzhong Gao and Juntao Zhang

The purpose of this study is to present a methodology for parameters’ sensitivity analysis of solar-powered airplanes.

Abstract

Purpose

The purpose of this study is to present a methodology for parameters’ sensitivity analysis of solar-powered airplanes.

Design/methodology/approach

The study focuses on the preliminary design and parameter relations of a heavier-than-air, solar-powered, high-altitude long-endurance unmanned aerial vehicle. The methodology is founded on the balance between energy production and energy requirement. An analytic expression with four generalized parameters is derived to determine whether the airplane can sustain flight at a specified altitude. The sensitivities of the four generalized parameters with respect to altitude are then analyzed. Finally, to demonstrate the methodology, a case study is given on the parameter sensitivity analysis of a prototype solar-powered airplane.
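
The abstract does not reproduce the analytic expression itself; as a rough illustration of the kind of energy-balance sensitivity study described, the sketch below computes normalized finite-difference sensitivities of a simple day/night energy margin. The margin model, every parameter name and all baseline values are assumptions made for demonstration only, not the paper's four generalized parameters.

```python
# Illustrative sketch only: normalized finite-difference sensitivities of a
# solar aircraft's day/night energy margin.  The margin model, parameter names
# and baseline values are assumptions, not the paper's analytic expression.
import math

def energy_margin(p):
    # Level-flight power required (standard relation), in W; weight in N
    power = (1.0 / p["eta_prop"]) * math.sqrt(
        2.0 * p["weight"] ** 3 * p["c_d"] ** 2
        / (p["rho"] * p["wing_area"] * p["c_l"] ** 3)
    )
    day_h = 24.0 - p["night_h"]
    harvested = p["eta_sc"] * p["irradiance"] * p["wing_area"] * day_h   # Wh/day
    required = power * day_h + power * p["night_h"] / p["eta_batt"]      # Wh/day
    usable_storage = p["batt_mass"] * p["e_batt"] * p["eta_batt"]        # Wh at night
    night_need = power * p["night_h"]                                    # Wh at night
    # Margin is limited by whichever constraint (daily balance or storage) binds
    return min(harvested - required, usable_storage - night_need)

baseline = dict(eta_prop=0.72, eta_sc=0.20, eta_batt=0.95, irradiance=250.0,
                wing_area=8.0, weight=180.0, rho=0.4, c_l=1.0, c_d=0.05,
                night_h=10.0, batt_mass=6.0, e_batt=250.0)

m0 = energy_margin(baseline)
for name in ("night_h", "e_batt", "eta_sc", "irradiance"):
    h = 0.01 * baseline[name]
    hi, lo = dict(baseline), dict(baseline)
    hi[name] += h
    lo[name] -= h
    dm = (energy_margin(hi) - energy_margin(lo)) / (2.0 * h)
    # Normalized sensitivity: (dM / M0) / (dx / x0)
    print(f"{name:>10}: {dm * baseline[name] / m0:+.2f}")
```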

Findings

When the presented methodology is applied, the flight altitude of the prototype airplane proves most sensitive to the nighttime duration and the energy density of the batteries.

Practical implications

It is not easy to design a solar-powered airplane that realizes high-altitude, long-endurance flight. Given the current state of the art, the methodology offers a way to identify the most critical parameters that need prior consideration and immediate development.

Originality/value

This paper provides an analytical methodology for analyzing the parameters’ sensitivities of solar-powered airplanes, which can benefit the preliminary design of a solar-powered airplane.

Details

Aircraft Engineering and Aerospace Technology: An International Journal, vol. 88 no. 4
Type: Research Article
ISSN: 1748-8842

Keywords

Article
Publication date: 1 June 2003

B.J. Henz, K.K. Tamma, R. Kanapady, N.D. Ngo and P.W. Chung

The resin transfer molding process for composites manufacturing consists of either of two considerations, namely, the fluid flow analysis through a porous fiber preform where the…


Abstract

The resin transfer molding (RTM) process for composites manufacturing involves two principal considerations: the fluid flow analysis through a porous fiber preform, where the location of the flow front is of fundamental importance, and the combined flow/heat transfer/cure analysis. In this paper, continuous sensitivity formulations are developed for the process modeling of composites manufactured by RTM to predict, analyze and optimize the manufacturing process. Attention is focused here on developments for isothermal flow simulations, and various illustrative examples of sensitivity analysis for practical applications are presented, which help the formulations serve as a design tool in the process modeling stages.
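
The paper's continuous sensitivity formulations are not given in the abstract; as a toy stand-in, the sketch below uses the classical one-dimensional constant-pressure fill solution x_f(t) = sqrt(2*K*dP*t/(phi*mu)) and checks its analytical flow-front sensitivity to permeability against a finite-difference estimate. All values are assumptions.

```python
# Toy 1-D illustration (not the paper's continuous sensitivity formulation):
# flow-front position for constant-pressure RTM filling and its sensitivity
# to the preform permeability K.  All values are assumptions.
import math

def front(K, dP=1.0e5, phi=0.5, mu=0.1, t=60.0):
    # Darcy flow into a 1-D preform at constant injection pressure dP:
    # x_f(t) = sqrt(2 * K * dP * t / (phi * mu))
    return math.sqrt(2.0 * K * dP * t / (phi * mu))

K0 = 1.0e-10                        # assumed permeability, m^2
xf = front(K0)
exact = xf / (2.0 * K0)             # d(x_f)/dK from the closed form
h = 1.0e-3 * K0
fd = (front(K0 + h) - front(K0 - h)) / (2.0 * h)   # central finite difference
print(f"x_f = {xf:.3f} m, dx_f/dK: exact = {exact:.3e}, finite diff = {fd:.3e}")
```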

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 13 no. 4
Type: Research Article
ISSN: 0961-5539

Keywords

Article
Publication date: 5 September 2016

Amin Helmzadeh and Shahram M. Kouhsari

The purpose of this paper is to propose an efficient method for detection and modification of erroneous branch parameters in real time power system simulators. The aim of the…

Abstract

Purpose

The purpose of this paper is to propose an efficient method for detecting and correcting erroneous branch parameters in real-time power system simulators. The aim of the proposed method is to minimize the sum of squared errors (SSE) due to mismatches between simulation results and corresponding field measurements. Assuming that the network configuration is known, a limited number of erroneous branch parameters are detected and corrected in an optimization procedure.

Design/methodology/approach

A novel formulation that utilizes the network voltages and the last modified admittance matrix of the simulation model is proposed to identify suspected branch parameters, that is, those most likely to be responsible for large values of SSE. Using a Gauss-Newton (GN) optimization method, the detected parameters are modified to minimize the value of SSE. The sensitivities required in the optimization procedure are calculated numerically by the real-time simulator. In addition, by implementing an efficient orthogonalization method, the most effective parameter is selected from a set of correlated parameters to avoid singularity problems.
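
The abstract names the Gauss-Newton procedure without spelling it out; the sketch below is a generic GN loop with a numerically estimated Jacobian standing in for the sensitivities that the paper obtains from the real-time simulator. The toy branch-measurement model and all values are assumptions.

```python
# Generic Gauss-Newton sketch (not the authors' implementation): adjust two
# suspected branch parameters to minimize the SSE between "simulated" and
# "measured" quantities.  The toy measurement model and values are assumptions.
import numpy as np

def simulate(p):
    # Hypothetical branch "measurements" for admittance y = g + jb with a
    # fixed 1.0 pu voltage drop across the branch: [P, Q, |I|]
    g, b = p
    return np.array([g, -b, np.hypot(g, b)])

true_p = np.array([2.0, -5.0])            # assumed true conductance/susceptance (pu)
measured = simulate(true_p)               # stands in for field measurements
p = np.array([3.0, -4.0])                 # erroneous initial branch parameters

for _ in range(20):
    res = simulate(p) - measured          # residual vector
    # Numerical Jacobian of the residuals (stands in for simulator sensitivities)
    J = np.empty((res.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = 1e-6 * max(abs(p[j]), 1e-3)
        J[:, j] = (simulate(p + dp) - simulate(p - dp)) / (2.0 * dp[j])
    step = np.linalg.solve(J.T @ J, -J.T @ res)   # Gauss-Newton step
    p = p + step
    if np.linalg.norm(step) < 1e-10:
        break

sse = float(np.sum((simulate(p) - measured) ** 2))
print("recovered parameters:", p, " SSE:", sse)
```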

Findings

Unlike state estimation-based methods, the proposed method does not need the mathematical functions of measurements to simulation model parameters. The method can enhance other parameter estimation methods that are based on state estimation. Simulation results demonstrate the high efficiency of the proposed optimization method.

Originality/value

Procedures for detecting and correcting incorrect branch parameters in real-time simulators are investigated.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 35 no. 5
Type: Research Article
ISSN: 0332-1649

Keywords

Article
Publication date: 21 June 2023

Margarita Ntousia, Ioannis Fudos, Spyridon Moschopoulos and Vasiliki Stamati

Objects fabricated using additive manufacturing (AM) technologies often suffer from dimensional accuracy issues and other part-specific problems. This study aims to present a…

Abstract

Purpose

Objects fabricated using additive manufacturing (AM) technologies often suffer from dimensional accuracy issues and other part-specific problems. This study aims to present a framework for estimating the printability of a computer-aided design (CAD) model, that is, the probability that the model will be fabricated correctly via an AM technology for a specific application.

Design/methodology/approach

This study predicts the dimensional deviations of the manufactured object per vertex and per part using a machine learning approach. The input to the error prediction artificial neural network (ANN) is per-vertex information extracted from the mesh of the model to be manufactured. The output of the ANN is the estimated average per-vertex error for the fabricated object. This error is then used, along with other global and per-part information, in a framework for estimating the printability of the model, that is, the probability of it being fabricated correctly with a certain AM technology for a specific application domain.
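
As a minimal sketch of the error-prediction step, with assumed per-vertex features, synthetic data and scikit-learn's MLPRegressor (none of which are specified in the abstract):

```python
# Minimal sketch with assumed features and synthetic data (not the authors'
# network or feature set): predict a per-vertex dimensional error from
# per-vertex mesh features, then average it over a part's vertices.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical per-vertex features: x, y, z, local curvature, wall thickness
X = rng.uniform(size=(5000, 5))
# Synthetic "measured" per-vertex deviation in mm (stand-in for scan data)
y = 0.05 * X[:, 3] + 0.02 * X[:, 2] - 0.03 * X[:, 4] + rng.normal(0.0, 0.005, 5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
ann.fit(X_tr, y_tr)

per_vertex_pred = ann.predict(X_te)       # predicted deviation at each test vertex
print("mean absolute error (mm):", float(np.mean(np.abs(per_vertex_pred - y_te))))
print("estimated average per-vertex deviation (mm):", float(per_vertex_pred.mean()))
```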

Findings

A thorough experimental evaluation was conducted on binder jetting technology for both the error prediction approach and the printability estimation framework.

Originality/value

This study presents a method for predicting dimensional errors with high accuracy, along with a completely novel approach for estimating the probability that a CAD model can be fabricated without significant failures or errors that would make it inappropriate for a specific application.

Details

Rapid Prototyping Journal, vol. 29 no. 9
Type: Research Article
ISSN: 1355-2546

Keywords

Article
Publication date: 13 November 2018

Xuchun Ren and Sharif Rahman

This paper aims to present a new method, named as augmented polynomial dimensional decomposition (PDD) method, for robust design optimization (RDO) and reliability-based design…

Abstract

Purpose

This paper aims to present a new method, named the augmented polynomial dimensional decomposition (PDD) method, for robust design optimization (RDO) and reliability-based design optimization (RBDO) subject to mixed design variables comprising both distributional and structural design variables.

Design/methodology/approach

The method involves a new augmented PDD of a high-dimensional stochastic response for statistical moments and reliability analyses; an integration of the augmented PDD, score functions, and finite-difference approximation for calculating the sensitivities of the first two moments and the failure probability with respect to distributional and structural design variables; and standard gradient-based optimization algorithms.
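
The augmented PDD expansion itself is beyond a short example; as a simplified illustration of the two sensitivity ingredients named above, the sketch below estimates the sensitivity of a mean and a failure probability with respect to a distributional design variable (the mean of a Gaussian input) via the score-function identity, and with respect to a structural design variable via central finite differences. The response function and all values are assumptions.

```python
# Simplified illustration (not the augmented-PDD formulation): Monte Carlo
# sensitivities of a mean and a failure probability.  The response y(X, d),
# the Gaussian input X ~ N(mu, sigma^2) with distributional design variable mu
# and the structural design variable d are all assumptions.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, d = 1.0, 0.3, 2.0
x = rng.normal(mu, sigma, 200_000)

def response(x, d):
    return d * x**2 - 3.0                 # toy performance function; failure if > 0

y = response(x, d)
fail = (y > 0.0).astype(float)

# Distributional sensitivity via the score function:
# d/dmu E[g(X)] = E[g(X) * (X - mu) / sigma^2] for X ~ N(mu, sigma^2)
score = (x - mu) / sigma**2
d_mean_d_mu = np.mean(y * score)
d_pf_d_mu = np.mean(fail * score)

# Structural sensitivity via central finite differences (common random numbers)
h = 1e-3
d_pf_d_d = (np.mean(response(x, d + h) > 0) - np.mean(response(x, d - h) > 0)) / (2 * h)

print(f"E[y] = {y.mean():.3f},  P_f = {fail.mean():.4f}")
print(f"dE[y]/dmu (score function)    = {d_mean_d_mu:.3f}")
print(f"dP_f/dmu  (score function)    = {d_pf_d_mu:.3f}")
print(f"dP_f/dd   (finite difference) = {d_pf_d_d:.3f}")
```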

Findings

New closed-form formulae are presented for the design sensitivities of moments that are simultaneously determined along with the moments. A finite-difference approximation integrated with the embedded Monte Carlo simulation of the augmented PDD is put forward for design sensitivities of the failure probability.

Originality/value

In conjunction with the multi-point, single-step design process, the new method provides an efficient means to solve a general stochastic design problem entailing mixed design variables with a large design space. Numerical results, including a three-hole bracket design, indicate that the proposed methods provide accurate and computationally efficient sensitivity estimates and optimal solutions for RDO and RBDO problems.

Article
Publication date: 10 May 2019

Marzieh Jafari and Khaled Akbari

This paper aims to measure the sensitivity of the structure’s deformation numerical model (NM) related to the various types of the design parameters, which is a suitable method…

Abstract

Purpose

This paper aims to measure the sensitivity of a structure's deformation numerical model (NM) with respect to the various types of design parameters, which provides a suitable basis for selecting parameters so that model updating takes less time.

Design/methodology/approach

In this research, a variance-based sensitivity analysis (VBSA) approach is proposed to measure the sensitivity of the NM of structures. The contribution of the structure's measured quantities (such as design parameter values and geometry) to the output of the NM is studied using the first-order and total-order sensitivity indices developed by Sobol'. A data set of parameters generated under different distributions (such as Gaussian or uniform) and different orders is used as input, and the resulting deformation variables of the NM as output, in the Sobol' index estimation procedure. To verify the VBSA results, a gradient-based sensitivity analysis (SA), formulated as a global SA method, was developed to measure the global sensitivity of the NM and then applied to the NM results of a tunnel.
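
As a compact sketch of the variance-based part, using Saltelli-style sampling with Jansen's estimators on a toy stand-in for the tunnel NM (parameter names, ranges and the response are assumptions):

```python
# Sketch of variance-based SA: first-order and total-order Sobol' indices via
# Saltelli-style sampling and Jansen's estimators.  The "deformation" function
# is a toy stand-in for the tunnel NM; parameter ranges are assumptions.
import numpy as np

rng = np.random.default_rng(2)
k, n = 3, 20_000                                   # number of parameters, base samples

def deformation(p):
    e_mod, density, pressure = p.T                 # toy non-additive response
    return pressure / e_mod + 0.1 * density * pressure

low = np.array([1.0, 1.8, 0.5])                    # assumed parameter ranges
high = np.array([5.0, 2.4, 2.0])
A = low + (high - low) * rng.uniform(size=(n, k))
B = low + (high - low) * rng.uniform(size=(n, k))
fA, fB = deformation(A), deformation(B)
var = np.var(np.concatenate([fA, fB]))

for i, name in enumerate(["E_modulus", "density", "pressure"]):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                            # A with column i taken from B
    fABi = deformation(ABi)
    s1 = np.mean(fB * (fABi - fA)) / var           # first-order index (Saltelli 2010)
    st = 0.5 * np.mean((fA - fABi) ** 2) / var     # total-order index (Jansen 1999)
    print(f"{name:>10}:  S1 = {s1:.2f},  ST = {st:.2f}")
```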

Findings

Based on the estimated indices, it is concluded that the deformation functions derived from the tunnel's NM are usually non-additive. The parameters with the greatest effect on the deformation functions were identified; these are the ones best included in the group of updating parameters so that model updating avoids a time-consuming process. The SA procedure also detected interactions between the selected parameters and other parameters that are worth considering in the model-updating procedure. Approximately 27 per cent of the parameters were found to have no effect on any of the objective functions and can be excluded from the parameter candidates for model updating. Finally, the indices obtained from the VBSA were confirmed during validation by the gradient-based indices.

Practical implications

The introduced method was applied to the NM of a circular lined tunnel created with the Fast Lagrangian Analysis of Continua software.

Originality/value

This paper applies a global statistical method to the results of the NM of a soil structure with a complex system, enabling parameter selection that avoids a time-consuming model-updating process.

Article
Publication date: 16 November 2020

Chhuonvuoch Koem and Sarintip Tantanee

Cambodia is considered one of the countries that are most vulnerable to adverse effects of climate change, particularly floods and droughts. Kampong Speu Province is a frequent…

Abstract

Purpose

Cambodia is considered one of the countries that are most vulnerable to adverse effects of climate change, particularly floods and droughts. Kampong Speu Province is a frequent site of calamitous flash floods. Reliable sources of flash flood information and analysis are critical in efforts to minimize the impact of flooding. Unfortunately, Cambodia does not yet have a comprehensive program for flash flood hazard mapping, with many places such as Kampong Speu Province having no such information resources available. The purpose of this paper is, therefore, to determine flash flood hazard levels across all of Kampong Speu Province using analytical hierarchy process (AHP) and geographical information system (GIS) with satellite information.

Design/methodology/approach

The integrated AHP–GIS analysis in this study encompasses ten parameters in the assessment of flash flood hazard levels across the province: rainfall, geology, soil, elevation, slope, stream order, flow direction, distance from drainage, drainage density and land use. The study uses a 10 × 10 pairwise matrix in AHP to compare the relative importance of each parameter and find each parameter’s weight. Finally, a flash flood hazard map is developed displaying all areas of Kampong Speu Province classified into five levels, with Level 5 being the most hazardous.
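
As a minimal sketch of the AHP weighting step, using an assumed 3 x 3 pairwise comparison matrix in place of the paper's 10 x 10 matrix; weights come from the principal eigenvector and consistency is checked via the consistency ratio:

```python
# AHP sketch: criterion weights from a pairwise comparison matrix via the
# principal eigenvector, plus a consistency-ratio check.  The 3x3 matrix is
# an assumed example (e.g. rainfall vs slope vs drainage density), not the
# paper's 10x10 matrix.
import numpy as np

A = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   2.0],
              [1/5.0, 1/2.0, 1.0]])     # Saaty 1-9 scale comparisons

eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, i].real)
w /= w.sum()                            # normalized criterion weights

n = A.shape[0]
lam_max = eigvals[i].real
ci = (lam_max - n) / (n - 1)            # consistency index
ri = 0.58                               # Saaty's random index for n = 3
print("weights:", np.round(w, 3), " consistency ratio:", round(ci / ri, 3))
```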

Findings

This study reveals that high and very high flash flood hazard levels are identified in the northwest part of Kampong Speu Province, particularly in Aoral, Phnum Srouch and Thpong districts and along Prek Thnot River and streams.

Originality/value

The flash flood hazard map developed here provides a wealth of information that can be invaluable for implementing effective disaster mitigation, improving disaster preparedness and optimizing land use.

Details

International Journal of Disaster Resilience in the Built Environment, vol. 12 no. 5
Type: Research Article
ISSN: 1759-5908

Keywords

Article
Publication date: 15 August 2016

Ourania Theodosiadou, Vassilis Polimenis and George Tsaklidis

This paper aims to present the results of further investigating the Polimenis (2012) stochastic model, which aims to decompose the stock return evolution into positive and…

Abstract

Purpose

This paper aims to present the results of further investigating the Polimenis (2012) stochastic model, which decomposes the stock return evolution into positive and negative jumps and a Brownian noise (white noise), taking into account different noise levels. The paper provides a sensitivity analysis of the model (through the analysis of its parameters) and applies this analysis to Google and Yahoo returns during the periods 2006-2008 and 2008-2010 by means of the third central moment of the Nasdaq index. Moreover, the paper studies the behavior of the calibrated jump sensitivities of a single stock as the market skew changes. Finally, simulations are provided for the estimation of the jump beta coefficients, assuming that the jumps follow Gamma distributions.

Design/methodology/approach

In the present paper, the model proposed in Polimenis (2012) is considered and further investigated. The sensitivity of the parameters for the Google and Yahoo stocks during 2006-2008, estimated by means of the third (central) moment of the Nasdaq index, is examined, and consequently the calibration of the model to the returns is studied. The associated robustness is also examined for the period 2008-2010. A similar sensitivity analysis was studied in Polimenis and Papantonis (2014), but unlike that reference, where the analysis is carried out while the market skew is kept constant, with an emphasis on jointly estimating jump sensitivities for many stocks, here the authors study the behavior of the calibrated jump sensitivities of a single stock as the market skew changes. Finally, simulations are carried out for the estimation of the jump beta coefficients, assuming that the jumps follow Gamma distributions.
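
As a small sketch of the moment inputs only (not the Polimenis (2012) calibration system (S1) itself), the code below computes the third central moment of a market index and a mixed third moment of a stock with the market from synthetic daily returns; the mixed-moment definition used is one common convention and is an assumption here.

```python
# Sketch of the moment inputs only (not the calibration system (S1)):
# third central moment of a market index and a mixed third moment of a stock
# with the market, from synthetic daily returns.  The mixed-moment definition
# E[(r_s - mu_s) * (r_m - mu_m)^2] is an assumed convention.
import numpy as np

rng = np.random.default_rng(3)
n = 750                                                    # ~3 years of daily returns
r_m = rng.normal(0.0003, 0.012, n) - 0.004 * rng.binomial(1, 0.05, n)  # skewed "market"
r_s = 0.9 * r_m + rng.normal(0.0, 0.01, n)                 # synthetic "stock"

def third_central(x):
    return np.mean((x - x.mean()) ** 3)

def mixed_third(x, m):
    return np.mean((x - x.mean()) * (m - m.mean()) ** 2)

print("market third central moment:", third_central(r_m))
print("stock third central moment :", third_central(r_s))
print("mixed third moment (stock, market):", mixed_third(r_s, r_m))
```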

Findings

A sensitivity analysis of the model proposed in Polimenis (2012) is presented. In Section 2, the paper ascertains the sensitivity of the calibrated parameters related to Google and Yahoo returns as the third (central) market moment is varied. The authors demonstrate the limits of the third moment of the stock, and of its mixed third moment with the market, within which real solutions of (S1) can be obtained. In addition, the authors conclude that (S1) cannot have real solutions when the stock return time series has a highly positive third moment while the third moment of the market is significantly negative. Generally, a positive third moment of the stock combined with a negative third moment of the market can only be explained by assuming an adequate degree of asymmetry in the values of the beta coefficients. In such situations, the model may be expanded to include a correction for the idiosyncratic third moment in the fourth equation of (S1). Finally, in Section 4, it is noted that the distribution of the estimation error of the coefficients cannot be considered normal and that the variance of these errors increases as the variance of the noise increases.

Originality/value

As mentioned in the Findings, the paper demonstrates the limits of the third moment of the stock, and of its mixed third moment with the market, within which real solutions of the main system of equations (S1) can be obtained. It is concluded that (S1) cannot have real solutions when the stock return time series has a highly positive third moment while the third moment of the market is significantly negative. Generally, a positive third moment of the stock combined with a negative third moment of the market can only be explained by assuming an adequate degree of asymmetry in the values of the beta coefficients. In such situations, the proposed model should be expanded to include a correction for the idiosyncratic third moment in the fourth equation of (S1). Finally, it is noted that the distribution of the estimation error of the coefficients cannot be considered normal and that the variance of these errors increases as the variance of the noise increases.

Details

The Journal of Risk Finance, vol. 17 no. 4
Type: Research Article
ISSN: 1526-5943

Keywords

Article
Publication date: 24 August 2019

Yangtian Li, Haibin Li and Guangmei Wei

To present the models with many model parameters by polynomial chaos expansion (PCE), and improve the accuracy, this paper aims to present dimension-adaptive algorithm-based PCE…

Abstract

Purpose

To represent models with many parameters by polynomial chaos expansion (PCE) and to improve its accuracy, this paper aims to present a dimension-adaptive algorithm-based PCE technique and to verify its feasibility by taking solid rocket motor ignition at low temperature as an example.

Design/methodology/approach

The main steps of this work are as follows: presenting a two-step dimension-adaptive algorithm; improving the accuracy of the resulting PCE surrogate model by computing the PCE coefficients with the dimension-adaptive algorithm; and applying the proposed method to uncertainty quantification (UQ) of solid rocket motor ignition at low temperature to verify its feasibility.
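
The dimension-adaptive coefficient computation is not detailed in the abstract; as a minimal PCE sketch under stated assumptions, the code below fits a fixed-order Hermite-chaos surrogate of a toy two-variable model by ordinary least squares and reads the mean and variance off the coefficients.

```python
# Minimal PCE sketch (fixed truncation, ordinary least squares) -- it does not
# reproduce the paper's dimension-adaptive coefficient computation.  The toy
# model, inputs and truncation order are assumptions.
import math
from itertools import product

import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(4)

def model(x1, x2):
    return np.exp(0.3 * x1) + 0.5 * x1 * x2       # stand-in for the expensive simulation

order = 3
multi_idx = [(i, j) for i, j in product(range(order + 1), repeat=2) if i + j <= order]

def basis(x1, x2):
    # Orthonormal probabilists' Hermite polynomials He_i(x1)*He_j(x2)/sqrt(i!*j!)
    cols = []
    for i, j in multi_idx:
        ci = np.zeros(i + 1)
        ci[i] = 1.0
        cj = np.zeros(j + 1)
        cj[j] = 1.0
        norm = math.sqrt(math.factorial(i) * math.factorial(j))
        cols.append(hermeval(x1, ci) * hermeval(x2, cj) / norm)
    return np.column_stack(cols)

x1, x2 = rng.normal(size=200), rng.normal(size=200)        # training points
coeff, *_ = np.linalg.lstsq(basis(x1, x2), model(x1, x2), rcond=None)

mean_pce = coeff[multi_idx.index((0, 0))]                  # mean = constant-term coefficient
var_pce = float(np.sum(coeff**2) - mean_pce**2)            # variance = sum of the rest squared
print(f"PCE mean = {mean_pce:.3f}, PCE variance = {var_pce:.3f}")
```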

Findings

The results indicate that, compared with conventional non-intrusive methods, the proposed method significantly improves computational accuracy while still meeting the efficiency requirement.

Originality/value

This paper proposes an approach that obtains an optimal non-uniform grid, avoiding the issues of overfitting and underfitting.
