Search results
21–30 of over 47,000
Abstract
In this paper, the Sensitivity Method (SM) was used for the identification of boundary conditions. Particular attention was paid to the Levenberg-Marquardt (L-M) regularization method, because inverse problems of the electromagnetic field are typically not well conditioned. It was shown that, using some information about the expected solution, the L-M regularization method gives satisfactory results even in cases where singular value analysis (SVA) fails. The identified boundary conditions were compared with the results obtained using the direct least squares (LS) method.
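The abstract gives no implementation details, but the damped normal-equation update at the heart of the L-M method can be sketched as follows; the exponential forward model and all parameter values below are invented stand-ins for the electromagnetic boundary-condition problem:

```python
import numpy as np

# Hypothetical forward model: two parameters p mapped to observations.
def residuals(p, t, y_obs):
    return p[0] * np.exp(-p[1] * t) - y_obs

def jacobian(p, t):
    return np.column_stack([np.exp(-p[1] * t),
                            -p[0] * t * np.exp(-p[1] * t)])

def levenberg_marquardt(p, t, y_obs, mu=1e-2, n_iter=50):
    for _ in range(n_iter):
        r = residuals(p, t, y_obs)
        J = jacobian(p, t)
        # Regularized normal equations: (J^T J + mu I) delta = -J^T r
        delta = np.linalg.solve(J.T @ J + mu * np.eye(len(p)), -J.T @ r)
        p_new = p + delta
        if np.sum(residuals(p_new, t, y_obs)**2) < np.sum(r**2):
            p, mu = p_new, mu * 0.5      # accept step, reduce damping
        else:
            mu *= 2.0                    # reject step, increase damping
    return p

t = np.linspace(0.0, 2.0, 30)
p_true = np.array([2.0, 1.5])
y_obs = p_true[0] * np.exp(-p_true[1] * t)   # noiseless synthetic data
p_hat = levenberg_marquardt(np.array([1.0, 1.0]), t, y_obs)
```

The damping parameter `mu` plays the regularizing role the abstract refers to: large values bias the step toward gradient descent and stabilize an ill-conditioned `J.T @ J`.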
E. HINTON, N.V.R. RAO and J. SIENZ
Abstract
This paper deals with structural shape and thickness optimization of axisymmetric shell structures loaded symmetrically. The finite element stress analysis uses newly developed linear, quadratic and cubic variable-thickness C0 elements based on axisymmetric Mindlin-Reissner shell theory. An integrated approach is used to carry out the whole shape optimization process in a fully automatic manner. A robust, versatile and flexible mesh generator is incorporated, with facilities for generating either uniform or graded meshes with constant, linear or cubic variation of thickness, pressure, etc. The midsurface geometry and thickness variation of the axisymmetric shell structure are defined using cubic splines passing through certain key points. The design variables are chosen as the coordinates and/or the thickness at the key points. Variable linking procedures are also included. Sensitivity analysis is carried out using either a semi-analytical method or a global finite difference method. The objective of the optimization is the weight minimization of the structure. Several examples are presented illustrating optimal shapes and thickness distributions for various shells. The changes in the bending, membrane and shear strain energies during the optimization process are also monitored.
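The global finite difference method mentioned above can be illustrated with a minimal sketch; the ring-area weight model below is an invented stand-in for the shell finite element model, not the authors' formulation:

```python
import numpy as np

# Toy objective: weight of a shell discretized into rings, linear in the
# thickness design variables t (ring areas a are assumed given data).
def weight(t, a, rho=7800.0):
    return rho * np.dot(a, t)

def fd_sensitivities(f, t, h=1e-6):
    """Global finite-difference gradient df/dt_i (forward differences)."""
    g = np.zeros_like(t)
    f0 = f(t)
    for i in range(len(t)):
        tp = t.copy()
        tp[i] += h
        g[i] = (f(tp) - f0) / h
    return g

a = np.array([0.10, 0.25, 0.15])       # ring areas, m^2 (invented)
t = np.array([0.004, 0.006, 0.005])    # thicknesses, m (invented)
grad = fd_sensitivities(lambda x: weight(x, a), t)
```

For this linear weight model the exact gradient is `rho * a`, which makes the finite-difference result easy to verify; in the semi-analytical alternative the derivative of the stiffness matrix would be differenced instead of the whole response.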
Terry R. Collins, Manuel D. Rossetti, Heather L. Nachtmann and James R. Oldham
Abstract
Purpose
To investigate the application of multi‐attribute utility theory (MAUT) to aid in the decision‐making process when performing benchmarking gap analysis.
Design/methodology/approach
MAUT is selected to identify the overall best‐in‐class (BIC) performer for performance metrics involving inventory record accuracy within a public sector warehouse. A traditional benchmarking analysis is conducted on 14 industry warehouse participants to determine industry best practices for the four critical warehouse metrics of picking and inventory accuracy, storage speed, and order cycle time. Inventory and picking tolerances are also investigated in the study. A gap analysis is performed on the critical metrics and the absolute BIC is used to measure performance gaps for each metric. The gap analysis results are then compared to the MAUT utility values, and a sensitivity analysis is performed to compare the two methods.
Findings
The results indicate that a MAUT-based approach is advantageous in its ability to consider all critical metrics in a benchmarking study. The MAUT approach allows priorities to be assigned, analyzes the subjectivity of those decisions, and provides a framework for identifying a single performer as best across all critical metrics.
Research limitations/implications
This research study uses additive utility theory (AUT), which is only one of multiple decision-theory techniques.
Practical implications
A new approach to determine the best performer in a benchmarking study.
Originality/value
Traditional benchmarking studies use gap analysis to identify a BIC performer over a single critical metric. This research integrates a mathematically driven decision analysis technique to determine the overall best performer over multiple critical metrics.
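As a rough illustration of the additive utility scoring such a study builds on, the sketch below normalizes invented warehouse metrics and picks an overall best-in-class performer; the warehouse names, metric values and weights are all hypothetical, not taken from the study:

```python
# Hypothetical metrics for three warehouses (all values invented):
# picking accuracy (%), inventory accuracy (%), storage speed (units/hr),
# order cycle time (hr).  Higher is better for all but cycle time.
metrics = {
    "W1": [99.2, 98.5, 120.0, 4.0],
    "W2": [99.0, 99.1, 150.0, 3.5],
    "W3": [98.5, 97.0, 110.0, 5.0],
}
weights = [0.3, 0.3, 0.2, 0.2]            # assumed priorities (sum to 1)
higher_is_better = [True, True, True, False]

def additive_utility(scores):
    """Additive MAUT: weighted sum of linearly normalized attributes."""
    total = {name: 0.0 for name in scores}
    for j in range(len(weights)):
        col = [v[j] for v in scores.values()]
        lo, hi = min(col), max(col)
        for name, vals in scores.items():
            u = (vals[j] - lo) / (hi - lo)    # scale to [0, 1]
            if not higher_is_better[j]:
                u = 1.0 - u                   # invert cost-type metrics
            total[name] += weights[j] * u
    return total

u = additive_utility(metrics)
best = max(u, key=u.get)
```

Unlike single-metric gap analysis, the weighted sum yields one ranking across all four metrics at once, and the weights are the natural handle for the sensitivity analysis the paper describes.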
Mathias Le Guyadec, Laurent Gerbaud, Emmanuel Vinot and Benoit Delinchant
Abstract
Purpose
Thermal modelling of an electrical machine is difficult because the thermal behavior depends on the machine's geometry, the materials used and its manufacturing process. In this paper, such a thermal model is used during the optimization-based sizing of a hybrid electric vehicle (HEV). This paper aims to deal with the sensitivities of the thermal parameters on the temperatures inside the electrical machine, to allow assessment of the influence of thermal parameters that are hard to estimate.
Design/methodology/approach
A sensitivity analysis by Sobol indices is used to assess the sensitivities of the thermal parameters on the electrical machine temperatures. As the optimization process needs fast computations, a lumped-parameter thermal network (LPTN) is used for the thermal modelling of the machine because of its speed. This speed also benefits the Sobol method, which requires many calls to the thermal model. The model is further used in a global model of a hybrid vehicle.
Findings
The difficulty lies in modelling the machine thermally over the whole validity domain of the sizing problem. The Sobol indices make it possible to identify where the modelling effort should be concentrated.
Research limitations/implications
The significance of the Sobol indices depends on the number of model calls and on the type of index computed (first-order, total, etc.). Therefore, the quality of the thermal sensitivity analysis is a compromise between computation time and modelling accuracy.
Practical implications
Thermal modelling of an electrical machine in a sizing process by optimization.
Originality/value
The use of Sobol indices for the sensitivity analysis of the thermal parameters of an electrical machine.
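A minimal sketch of a Sobol first-order sensitivity computation, of the kind the abstract describes, is shown below; it uses the Saltelli pick-freeze estimator on an invented linear stand-in for the thermal model (the real LPTN is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Invented linear stand-in for the LPTN thermal model:
    # temperature rise dominated by the first parameter.
    return 3.0 * x[:, 0] + 1.0 * x[:, 1]

n, d = 100_000, 2
A = rng.uniform(size=(n, d))           # two independent sample matrices
B = rng.uniform(size=(n, d))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

# Saltelli pick-freeze estimator of the first-order indices S_i
S1 = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                # freeze column i from matrix B
    S1.append(np.mean(yB * (model(ABi) - yA)) / var_y)
# Analytic values for this model: S1 ~ [0.9, 0.1]
```

The cost is `(d + 2) * n` model calls, which is exactly why the abstract insists on a fast thermal model: each index requires a fresh batch of evaluations.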
Ourania Theodosiadou, Vassilis Polimenis and George Tsaklidis
Abstract
Purpose
This paper aims to present the results of further investigating the Polimenis (2012) stochastic model, which decomposes the stock return evolution into positive and negative jumps and Brownian noise (white noise), taking into account different noise levels. The paper provides a sensitivity analysis of the model (through the analysis of its parameters) and applies this analysis to Google and Yahoo returns during the periods 2006-2008 and 2008-2010, by means of the third central moment of the Nasdaq index. Moreover, the paper studies the behavior of the calibrated jump sensitivities of a single stock as market skew changes. Finally, simulations are provided for the estimation of the jump beta coefficients, assuming that the jumps follow Gamma distributions.
Design/methodology/approach
In the present paper, the model proposed in Polimenis (2012) is considered and further investigated. The sensitivity of the parameters for the Google and Yahoo stocks during 2006-2008, estimated by means of the third (central) moment of the Nasdaq index, is examined, and consequently the calibration of the model to the returns is studied. The associated robustness is also examined for the period 2008-2010. A similar sensitivity analysis was carried out in Polimenis and Papantonis (2014), but unlike that reference, where the analysis is done while market skew is kept constant, with an emphasis on jointly estimating jump sensitivities for many stocks, here the authors study the behavior of the calibrated jump sensitivities of a single stock as market skew changes. Finally, simulations are carried out for the estimation of the jump beta coefficients, assuming that the jumps follow Gamma distributions.
Findings
A sensitivity analysis of the model proposed in Polimenis (2012) is presented. In Section 2, the paper examines the sensitivity of the calibrated parameters for the Google and Yahoo returns as the third (central) market moment varies. The authors demonstrate the limits within which the third moment of the stock, and its mixed third moment with the market, must lie for the main system of equations (S1) to have real solutions. In addition, the authors conclude that (S1) cannot have real solutions when the stock return time series has a highly positive third moment while the third moment of the market is significantly negative. Generally, a positive third moment of the stock combined with a negative third moment of the market can only be explained by assuming an adequate degree of asymmetry in the beta coefficients. In such situations, the model may be expanded to include a correction for idiosyncratic third moment in the fourth equation of (S1). Finally, in Section 4, it is shown that the distribution of the estimation errors of the coefficients cannot be considered normal, and that the variance of these errors increases as the variance of the noise increases.
Originality/value
As mentioned in the Findings, the paper demonstrates the limits of the third moment of the stock and its mixed third moment with the market so as to get real solutions from the main system of equations (S1). It is concluded that (S1) cannot have real solutions when the stock return time series appears to have highly positive third moment, while the third moment of the market is significantly negative. Generally, the positive value of the third moment of the stock combined with the negative value of the third moment of the market can only be explained by assuming an adequate degree of asymmetry of the values of the beta coefficients. In such situations, the model proposed should be expanded to include a correction for idiosyncratic third moment in the fourth equation of (S1). Finally, it is noticed that the distribution of the error estimation of the coefficients cannot be considered to be normal, and the variance of these errors increases as the variance of the noise increases.
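One building block of such a model, the third-moment contribution of Gamma-distributed jumps, can be checked by simulation; the jump betas, noise level and Gamma parameters below are invented for illustration and are not calibrated to any of the paper's data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2_000_000

# Invented parameters: positive/negative jump betas, noise level,
# and Gamma shape/scale pairs for the two jump components.
bp, bn, sigma = 1.0, 0.8, 0.02
kp, tp = 2.0, 0.010
kn, tn = 2.0, 0.015

# Return = positive jumps - negative jumps + Brownian (white) noise
r = (bp * rng.gamma(kp, tp, n)
     - bn * rng.gamma(kn, tn, n)
     + sigma * rng.standard_normal(n))

m3_sample = np.mean((r - r.mean())**3)
# The Gaussian part contributes no skew; Gamma(k, s) has third
# central moment 2*k*s**3, scaled by the cubed jump betas.
m3_theory = bp**3 * 2*kp*tp**3 - bn**3 * 2*kn*tn**3
```

This is why the third moment identifies the jump betas in such models: the symmetric noise drops out, leaving only the asymmetry between the two jump components.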
S.A. Oke and O.E. Charles‐Owaba
Abstract
Purpose
The purpose of this paper is to develop an analytical approach to testing the sensitivity of a maintenance-scheduling model. A model without sensitivity analysis remains a paper exercise that cannot advance to wider application. Thus, simulating the simultaneous scheduling of maintenance and operations in a resource-constrained environment is very important for quality problems, and especially for maintenance.
Design/methodology/approach
The paper uses an existing model and presents a sensitivity analysis by utilising an optimal initial starting transportation tableau. This is used as input into the Gantt charting model employed in the traditional production scheduling system. The degree of responsiveness of the model parameters is tested.
Findings
The paper concludes that some of these parameters and variables are sensitive to changes in values while others are not.
Research limitations/implications
The maintenance engineering community has access to various optimal models in the resource-constrained operations and maintenance arena. However, those models lack the sensitivity analysis that the present authors provide here. The work is significant because the parameters are given boundary values, so users know where the model can be applied once its constraints are considered.
Originality/value
Testing the sensitivity of the parameters of a maintenance-scheduling model in a multi-variable operations and maintenance environment with resource constraints is a novel approach. An optimal solution has to be tested for robustness, given the complexity of the variables and criteria. Testing the model parameters in this way is a relatively new approach in the maintenance engineering discipline, and the work opens a wide field of research opportunity for the maintenance scheduling community.
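In the same spirit, the sensitivity of a transportation-based schedule to a cost parameter can be probed numerically; the small problem below is invented and uses a generic LP solver rather than the authors' transportation tableau and Gantt charting model:

```python
import numpy as np
from scipy.optimize import linprog

supply = np.array([30.0, 40.0])          # e.g. crew hours available
demand = np.array([20.0, 25.0, 25.0])    # e.g. job requirements

def solve_transport(cost):
    """Solve the balanced transportation LP: min sum(c*x), row/col sums fixed."""
    m, n = cost.shape
    A_eq, b_eq = [], []
    for i in range(m):                   # each source ships its full supply
        row = np.zeros(m * n); row[i*n:(i+1)*n] = 1.0
        A_eq.append(row); b_eq.append(supply[i])
    for j in range(n):                   # each destination gets its demand
        col = np.zeros(m * n); col[j::n] = 1.0
        A_eq.append(col); b_eq.append(demand[j])
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq)
    return res.fun, res.x.reshape(m, n)

cost = np.array([[4.0, 6.0, 8.0],
                 [5.0, 3.0, 7.0]])
base_cost, plan = solve_transport(cost)

# Sensitivity probe: raise one unit cost by 1 and re-solve
cost_hi = cost.copy(); cost_hi[0, 0] += 1.0
pert_cost, _ = solve_transport(cost_hi)
```

Comparing `base_cost` with `pert_cost` shows whether the optimal schedule absorbs the perturbation or re-routes: here the assignment is unchanged and the objective simply shifts by the perturbed flow times the cost change, i.e. the parameter is "insensitive" in the structural sense the abstract describes.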
Vineeta Nigam, Tripta Thakur, V.K. Sethi and R.P. Singh
Abstract
Purpose
The purpose of this paper is to benchmark the Indian mobile telecommunication service providers for relative efficiency. A method for benchmarking the performance of mobile telecom utilities based on data envelopment analysis (DEA) is presented. The paper discusses the relationship between quality performance and benchmarking, and the results include performance efficiencies and a sensitivity-based classification of utilities. A peer-to-peer comparison of inefficient with efficient utilities is also provided. Based on these results, inefficient utilities can develop strategic plans to improve performance.
Design/methodology/approach
The authors use DEA to measure the comparative efficiencies of mobile telecom companies; two different DEA models, CCR and BCC, are applied to evaluate the relative efficiency of mobile telecom operators in India. A sensitivity-based classification of utilities is carried out by removing one or more inputs or outputs from the base model to construct a new DEA model. Comparing the DEA efficiencies of the base model with those of the structurally perturbed models shows the impact on efficiency. Data include annual and quarterly reports covering various quality parameters.
Findings
DEA is used to derive the benchmarks based on a comparison of 126 utilities, which include public sector undertaking (PSU) operators (MTNL and BSNL) and private operators of the Indian mobile telecommunication sector. The results include performance efficiencies and a peer-to-peer comparison of inefficient utilities with efficient utilities. Based on these results, inefficient utilities can develop strategic plans to improve their performance. A sensitivity analysis, based on removing one or more variables from the base model to determine the changes in DEA efficiencies, is performed to identify which parameters a utility should strengthen to improve performance.
Practical implications
Benchmarking of service utilities in the telecom sector is virtually non‐existent at the national level in India. This research identifies the different variables and then a model is prepared for benchmarking of the service providers in India. Based on the efficiency analysis, benchmarks can be set, and utility efficiency scores can be obtained based on the set benchmarks. These scores can help develop a strategic plan for mitigating the factors that contribute to the system inefficiencies.
Originality/value
This paper is one of the few published studies that benchmark the performance of mobile telecom services in India, and it is among the first works to use parameters specific to Indian mobile telecom utilities.
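The input-oriented CCR envelopment program at the heart of such a study can be sketched as a small linear program; the three-operator, single-input/single-output data below are invented for illustration only:

```python
import numpy as np
from scipy.optimize import linprog

# Invented single-input/single-output data for three operators
X = np.array([[2.0], [4.0], [3.0]])   # inputs  (e.g. operating cost)
Y = np.array([[2.0], [2.0], [3.0]])   # outputs (e.g. subscriber base)

def ccr_efficiency(o):
    """Input-oriented CCR envelopment model for DMU o:
    min theta  s.t.  sum_j lam_j x_j <= theta*x_o,  sum_j lam_j y_j >= y_o."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                   # variables: [theta, lam]
    A_ub, b_ub = [], []
    for i in range(X.shape[1]):                   # input constraints
        A_ub.append(np.r_[-X[o, i], X[:, i]]); b_ub.append(0.0)
    for r in range(Y.shape[1]):                   # output constraints
        A_ub.append(np.r_[0.0, -Y[:, r]]); b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

effs = [ccr_efficiency(o) for o in range(len(X))]
```

For these data, operators 1 and 3 sit on the efficient frontier (score 1.0) and operator 2 scores 0.5; the sensitivity classification in the paper repeats this computation with inputs or outputs removed from the model.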
Konstanty M. Gawrylczyk and Piotr Putek
Abstract
Describes an algorithm for recognizing cracks and flaws on the surface of a conducting plate. The algorithm is based on sensitivity analysis in finite elements, which determines the influence of geometrical parameters on some local quantities used as the objective function. The method is similar to that of circuit analysis, based on differentiation of the stiffness matrix. The algorithm works iteratively using a gradient method, with the sensitivity analysis providing the gradient of the goal function. The sensitivity algorithm allows the sensitivity with respect to x and y to be calculated, so the nodes can be properly displaced, modelling complicated defect shapes. The examples show that sensitivity analysis applied to the recognition of cracks and flaws provides very good results, even for complicated flaw shapes.
C. Angulo, E. Garcia Vadillo and J. Canales
Abstract
In this paper an application of structural optimization to the design of structures with constraints on frequencies and mode shapes is presented. The objective is to obtain an optimum design by making adequate changes in the structure to modify its dynamic characteristics. The method is based on an iterative optimization process that includes structural analysis by the Finite Element Method (FEM), sensitivity analysis, and optimization techniques. An efficient and accurate method is used to calculate the sensitivities of the dynamic behaviour of the structure: the sensitivity analysis is accomplished using a semi-analytical procedure based on the Nelson method. A Sequential Linear Programming (SLP) algorithm is used to solve the optimization problem. Convergence of the minimization process is assured within a small number of iterations. The method is validated by means of two application examples.
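For distinct eigenvalues of the generalized problem K phi = lambda M phi with mass-normalized modes, the eigenvalue derivative is phi^T (dK/dp - lambda dM/dp) phi, the standard result that semi-analytical procedures such as Nelson's build on (Nelson's method proper also supplies eigenvector derivatives, not shown here). The two-degree-of-freedom system below is an invented check of this formula against finite differences:

```python
import numpy as np
from scipy.linalg import eigh

# Invented 2-DOF spring-mass system; p scales one spring stiffness.
def K(p):
    return np.array([[p + 2.0, -2.0],
                     [-2.0,     2.0]])
M = np.diag([1.0, 2.0])

p0, h = 3.0, 1e-6
lam, phi = eigh(K(p0), M)              # phi columns are mass-normalized
dK = (K(p0 + h) - K(p0 - h)) / (2*h)   # semi-analytical: FD on the matrix
dM = np.zeros((2, 2))                  # mass does not depend on p here

# Eigenvalue derivative of mode i: phi_i^T (dK - lam_i * dM) phi_i
dlam = np.array([phi[:, i] @ (dK - lam[i]*dM) @ phi[:, i] for i in range(2)])

# Cross-check against a global finite difference on the eigenvalues
lam_p = eigh(K(p0 + h), M, eigvals_only=True)
lam_m = eigh(K(p0 - h), M, eigvals_only=True)
dlam_fd = (lam_p - lam_m) / (2*h)
```

The "semi-analytical" label refers to differencing only the element matrices while keeping the modal projection analytic, which is cheaper and better conditioned than differencing the whole eigensolution.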
U. Dinesh Kumar, Haritha Saranga, José E. Ramírez‐Márquez and David Nowicki
Abstract
Purpose
Six sigma has evolved from a method, or set of techniques, into a movement focused on business-process improvement. Business processes are transformed through the successful selection and implementation of competing six sigma projects. However, the effort to implement a six sigma process improvement initiative does not by itself guarantee success. To meet aggressive schedules and tight budget constraints, a successful six sigma project needs to follow the proven define, measure, analyze, improve, and control methodology. Any slip in schedule or cost overrun is likely to offset the potential benefits of implementing six sigma projects. The purpose of this paper is to focus on six sigma projects targeted at improving overall customer satisfaction, called Big Q projects. The aim is to develop a mathematical model to select one or more six sigma projects that yield the maximum benefit to the organization.
Design/methodology/approach
This research provides the identification of important inputs and outputs for six sigma projects that are then analyzed using data envelopment analysis (DEA) to identify projects, which result in maximum benefit. Maximum benefit here provides a Pareto optimal solution based on inputs and outputs directly related to the efficiency of the six sigma projects under study. A sensitivity analysis of efficiency measurement is also carried out to study the impact of variation in projects' inputs and outputs on project performance and to identify the critical inputs and outputs.
Findings
DEA, often used for relative efficiency analysis and productivity analysis, is now successfully constructed for six sigma project selection.
Practical implications
Provides a practical approach to guide the selection of six sigma projects for implementation, especially for companies with limited resources. The sensitivity analysis discussed in the paper helps to understand the uncertainties in project inputs and outputs.
Originality/value
This paper introduces DEA as a tool for six sigma project selection.