Search results
1–10 of over 6,000
Keith M. Mueller and Scott A. Burns
Abstract
The numerical treatment of non‐linear engineering phenomena often involves some sort of mathematical simplification. In many cases, the system under investigation is linearized about an operating point using the linear part of the Taylor series expansion. This allows a local representation for use in incremental or iterative methods using well‐established computational tools for linear algebra. Demonstrates a different linearization technique that generally provides a higher quality fit to a certain class of functions than the standard Taylor linearization. This class of functions is general enough to represent all systems of algebraic equations. Presents a graphical demonstration of the quality of fit, along with a discussion of why this alternative linearization provides a high‐quality fit. Also presents an engineering application of the linearization.
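The contrast can be sketched on a toy function. The paper's specific alternative technique is not reproduced here; the log-space (monomial) fit below is only an illustrative stand-in, and `f` and `x0` are arbitrary choices:

```python
# Standard first-order Taylor linearization about an operating point x0:
#   f(x) ≈ f(x0) + f'(x0) * (x - x0)
f  = lambda x: x ** 3 + 1.0 / x           # example function (arbitrary choice)
df = lambda x: 3 * x ** 2 - 1.0 / x ** 2  # its derivative
x0 = 1.5
taylor = lambda x: f(x0) + df(x0) * (x - x0)

# An illustrative alternative (not necessarily the paper's technique):
# linearize ln f against ln x, giving a monomial fit c * x**a that
# matches f in value and slope at x0.
a = x0 * df(x0) / f(x0)                   # logarithmic slope d(ln f)/d(ln x) at x0
c = f(x0) / x0 ** a
monomial = lambda x: c * x ** a
```

Both fits agree with f at x0; for functions with power-law-like behaviour, the monomial form tends to stay close over a wider range than the tangent line.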
Abstract
Purpose
The purpose of this paper is to state a new formulation of the programme‐styled framework of pansystems research and related expansions.
Design/methodology/approach
Pansystems‐generalized extremum principle (0**: (dy/dx=0)**) is presented with recognitions to various logoi of philosophy, mathematics, technology, systems, cybernetics, informatics, relativity, biology, society, resource, communications and related topics: logic, history, humanities, aesthetics, journalism, IT, AI, TGBZ* <truth*goodness*beauty*Zen*>, etc. including recent rediscoveries of 50 or so pansystems logoi.
Findings
A keynote of the paper is to develop the deep logoi of analytic mathematics, analytic mechanics, variational principles, Hilbert's sixth/23rd problems, pan‐axiomatization to encyclopedic principles and various applications. The 0**‐universal connections embody the transfield, internet‐styled academic tendency of pansystems exploration.
Originality/value
The paper includes topics: history megawave, pansystems sublation‐modes, pan‐metaphysics, pansystems dialogs with logoi of 100 thinkers or so, and pansystems‐sublation for a series of logoi concerning the substructure of encyclopedic dialogs such as systems, derivative, extremum, quantification, variational principle, equation, symmetry, OR, optimization, approximation, yinyang, combination, normality‐abnormality, framework, modeling, simulation, relativity, recognition, practice, methodology, mathematics, operations and transformations, quotientization, product, clustering, Banach completeness theorem, Weierstrass approximation theorem, Jackson approximation theorem, Taylor theorem, approximation transformation theorems due to Walsh‐Sewell mathematical school, Hilbert problems, Cauchy theorem, theorems of equation stability, function theory, logic, paradox, axiomatization, cybernetics, dialectics, multistep decision, computer, synergy, vitality and the basic logoi for history, ethics, economics, society OR, aesthetics, journalism, institution, resource and traffics, AI, IT, etc.
Abstract
The social sciences are really the “hard sciences” and the physical sciences are the “easy” sciences. One of the great contributors to making the job of the social scientist very difficult is the lack of fundamental dimensions on the basis of which absolute (i.e. ratio) scales can be formulated and in which relationships could be realized as the [allegedly] coveted equations of physics. This deficiency leads directly to the uses of statistical methods of various types. However, it is possible, as shown, to formulate equations and to use them to obtain ratio/absolute scales and relationships based on them. This paper uses differential/integral equations, fundamental ideas from the processing view of the brain‐mind, multiple scale approximation via Taylor series, and basic reasoning, some of which may be formulated as infinite‐valued logic and which is related to probability theory (the theoretical basis of statistics), to resolve some of the basic issues relating to learning theory, the roles of nature and nurture in intelligence, and the measurement of intelligence itself, and leads to the correct formulation of the potential‐actual type behaviors (specifically intelligence) and a dynamical‐temporal model of intelligence development. Specifically, it is shown that: (1) the basic model for intelligence in terms of genetics and environment has to be multiplicative, which corresponds to a logical AND, and is not additive; (2) the related concept of “genetics” creating its own environment is simply another way of saying that the interaction of genetics and environment is multiplicative, as in (1); (3) the timing of environmental richness is critical and must be modeled dynamically, e.g. in the form of a differential equation; (4) path functions, not point functions, must be used to model such phenomena; (5) an integral equation formulation shows that intelligence at any time t is a sum over time of the past interaction of intelligence with environmental and genetic factors; (6) intelligence is about 100 per cent inherited on a global absolute (ratio) scale, which is the natural (dimensionless) scale for measuring variables in social science; (7) the nature of the approximation assumptions implicit in statistical methods leads to “heritability” calculations in the neighborhood of 0.5, and, short of controlled randomized experiments such as those in animal studies, these are expected sheerly due to the methods used; (8) concepts from AI, psychology, epistemology and physics coincide in many respects except for the terminology used, and these concepts can be modeled nonlinearly.
William E. Balson and Gordon Rausser
Abstract
Purpose
Risk-based clearing has been proposed by Rausser et al. (2010) for over-the-counter (OTC) derivatives. This paper aims to illustrate the application of risk-based margins to a case study of the mortgage-backed securities derivative portfolio of the American International Group (AIG) during the period 2005-2008. There exists sufficient publicly available information to examine AIG’s derivative portfolio and how that portfolio would depend on conjectural changes in margin requirements imposed on its OTC derivative positions. Generally, such data on OTC derivative portfolio positions are unavailable in the public domain, and thus, the AIG data provide a unique opportunity for an objective evaluation.
Design/methodology/approach
This paper uses modern financial methodology to evaluate risk-based margining and collateralization for AIG's major OTC derivative portfolio.
Findings
This analysis reveals that a risk-based margin procedure would have led to earlier margin calls of greater magnitude initially than the collateral calls actually faced by AIG Financial Products (AIGFP). The total margin ultimately required by the risk-based procedure, however, is similar in magnitude to the collateral calls faced by AIGFP by August 2008. It is likely that a risk-based clearing procedure applied to AIG's OTC contracts would have led to AIG undertaking significant hedging and liquidation of its OTC positions well before the losses built up to the point they had, perhaps avoiding the federal government's orchestrated restructuring that occurred in September 2008.
Originality/value
There have been no published risk-based evaluations of a major OTC derivative portfolio for any company, let alone AIG.
Abstract
Purpose
This paper determines a simple transformation that nearly linearizes the bond price formula. The transformed price can be used to derive a highly accurate approximation of the change in a bond price resulting from a change in interest rates.
Design/methodology/approach
A logarithmic transformation exactly linearizes the price function for a zero coupon bond and a reciprocal transformation exactly linearizes the price function for a perpetuity. A power law transformation combines aspects of both types of transformations and provides a superior approximation of the bond price sensitivity for both short-term and long-term bonds.
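The two exact linearizations described above can be checked numerically; the maturity `T` and coupon `c` below are arbitrary illustrative values:

```python
import math

T, c = 10.0, 5.0
zero = lambda y: (1 + y) ** (-T)   # zero-coupon bond price at yield y
perp = lambda y: c / y             # perpetuity price at yield y

for y in (0.01, 0.03, 0.07):
    # Logarithmic transform: ln P = -T * ln(1 + y), exactly linear in ln(1 + y)
    assert abs(math.log(zero(y)) + T * math.log(1 + y)) < 1e-12
    # Reciprocal transform: 1/P = y/c, exactly linear in y
    assert abs(1.0 / perp(y) - y / c) < 1e-12
```

A coupon bond is a blend of these two limiting cases, which is why a power-law transform interpolating between them can fit both short and long maturities; the paper's specific power-law formula is not reproduced here.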
Findings
It is demonstrated that the new formula, based on the power-law transformation, is a much better approximation than either the traditional duration-convexity approximation or the more recently developed approximations based on a logarithmic transformation of the price function.
Originality/value
The new formula can be used by risk managers to perform stress-testing on bond portfolios. It can easily be inverted, making it possible to relate the distribution of prices (which are observable in the market) to the distribution of yields (which are numerical solutions that are not directly observable).
Abstract
Purpose
The purpose of this paper is to expose computational methods as applied to engineering systems and evolutionary processes with randomness in external actions and inherent parameters.
Design/methodology/approach
In total, two approaches are distinguished, both relying on solvers from deterministic algorithms. Probabilistic analysis refers to approximating the response by a Taylor series expansion about the mean input. Alternatively, stochastic simulation implies random sampling of the input and statistical evaluation of the output.
Findings
Beyond the characterization of random response, methods of reliability assessment are discussed. Concepts of design improvement are presented. Optimization for robustness diminishes the sensitivity of the system to fluctuating parameters.
Practical implications
Deterministic algorithms available for the primary problem are utilized for stochastic analysis by statistical Monte Carlo sampling. The computational effort for the repeated solution of the primary problem depends on the variability of the system and is usually high. Alternatively, the analytic Taylor series expansion requires extending the primary solver to compute derivatives of the response with respect to the random input. The method is restricted to the computation of output mean values and variances/covariances, with the effort determined by the number of random input variables. The results of the two methods are comparable within the domain of applicability.
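The two routes can be contrasted on a toy scalar response; the function `g` and the input distribution below are arbitrary illustrative choices standing in for a deterministic solver:

```python
import math
import random

# Scalar response g(x) = exp(x) with random input x ~ N(mu, sigma^2)
mu, sigma = 0.1, 0.05
g, dg = math.exp, math.exp               # response and its derivative

# Analytic route: first-order Taylor expansion about the mean input
taylor_mean = g(mu)
taylor_var  = (dg(mu) * sigma) ** 2

# Statistical route: Monte Carlo sampling, one "solver call" per sample
random.seed(0)
samples = [g(random.gauss(mu, sigma)) for _ in range(200_000)]
mc_mean = sum(samples) / len(samples)
mc_var  = sum((s - mc_mean) ** 2 for s in samples) / (len(samples) - 1)
```

For small input variability the two agree closely; the Taylor route needs one derivative per random input, while the sampling route needs many repeated solver calls but is not limited to means and variances.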
Originality/value
The present account addresses the main issues related to the presence of randomness in engineering systems and processes. They comprise the analysis of stochastic systems, reliability, design improvement, optimization and robustness against randomness of the data. The analytical Taylor approach is contrasted to the statistical Monte Carlo sampling throughout. In both cases, algorithms known from the primary, deterministic problem are the starting point of stochastic treatment. The reader benefits from the comprehensive presentation of the matter in a concise manner.
R.M. Kapila Tharanga Rathnayaka and D.M.K.N. Seneviratna
Abstract
Purpose
Time series analysis is an essential methodology comprising tools for analyzing time series data to identify meaningful characteristics for making future judgments. The purpose of this paper is to propose a new hybrid statistical approach (HTS_UGM(1,1)), based on a Taylor series approximation and the unbiased GM(1,1) model, for short-term forecasting of time series data under poor, incomplete and uncertain information.
Design/methodology/approach
Gray forecasting is a dynamical methodology that can be classified into different categories according to function. The newly proposed methodology combines three components: the first-order unbiased GM(1,1), a Markov chain and a Taylor approximation. In addition, two traditional gray models, GM(1,1) and unbiased GM(1,1), are used for comparison. The main objective of this study is short-term forecasting of gold price demand, based on data taken from the Central Bank of Sri Lanka from October 2017 to December 2017.
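A minimal sketch of the classical GM(1,1) building block (without the unbiased correction, the Markov chain or the Taylor stage) might look like this; the input series is synthetic, not the Sri Lankan gold-price data:

```python
import math

def gm11_forecast(x0, steps=1):
    """Classical GM(1,1) gray model: fit on series x0, forecast `steps` ahead."""
    n = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(n)]                # accumulated (AGO) series
    z1 = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]   # background values
    # Least squares for a, b in the gray equation  x0[k] = -a * z1[k] + b
    szz = sum(z * z for z in z1)
    sz  = sum(z1)
    szy = sum(z * y for z, y in zip(z1, x0[1:]))
    sy  = sum(x0[1:])
    m   = n - 1
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    # Time-response function of the accumulated series
    def x1_hat(k):
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    # Forecasts of the original series by inverse accumulation (differencing)
    return [x1_hat(n - 1 + s) - x1_hat(n - 2 + s) for s in range(1, steps + 1)]
```

The hybrid HTS_UGM(1,1) of the paper layers its corrections on top of this kind of base model; those stages are not reproduced here.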
Findings
The error analysis results suggest that the new proposed HTS_UGM(1,1) is highly accurate (error below 10 percent), with the lowest RMSE values for one-step-ahead as well as weekly forecasts, compared with the separate gray forecasting methodologies.
Originality/value
The findings suggest that the new proposed hybrid approach is a more suitable and effective way of forecasting time series indices in the short term than the separate time series forecasting methodologies.
Jayantha Pasdunkorale A. and Ian W. Turner
Abstract
An existing two‐dimensional finite volume technique is modified by introducing a correction term to increase the accuracy of the method to second order. It is well known that the accuracy of the finite volume method strongly depends on the order of the approximation of the flux term at the control volume (CV) faces. For highly orthotropic and anisotropic media, first order approximations produce inaccurate simulation results, which motivates the need for better estimates of the flux expression. In this article, a new approach to approximate the flux term at the CV face is presented. The discretisation involves a decomposition of the flux and an improved least squares approximation technique to calculate the derivatives of the dependent function on the CV faces for estimating both the cross diffusion term and a correction for the primary flux term. The advantage of this method is that any arbitrary unstructured mesh can be used to implement the technique without considering the shapes of the mesh elements. It was found that the numerical results agreed well with the available exact solution for a representative transport equation in highly orthotropic media, and with the benchmark solutions obtained on a fine mesh for anisotropic media. Previously proposed CV techniques are compared with the new method to highlight its accuracy for different unstructured meshes.
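A least-squares derivative estimate of the kind used at the CV faces can be sketched as follows (2D, scattered neighbour samples); the point layout is an arbitrary illustration, and the paper's flux decomposition and correction term are not reproduced here:

```python
def ls_gradient(p0, u0, pts, vals):
    """Least-squares gradient of a scalar u at p0 from scattered 2D samples.
    Minimizes sum_i (u_i - u0 - g . (p_i - p0))^2 over g = (gx, gy)
    via the 2x2 normal equations."""
    sxx = sxy = syy = bx = by = 0.0
    for (x, y), u in zip(pts, vals):
        dx, dy, du = x - p0[0], y - p0[1], u - u0
        sxx += dx * dx
        sxy += dx * dy
        syy += dy * dy
        bx  += dx * du
        by  += dy * du
    det = sxx * syy - sxy * sxy
    return ((syy * bx - sxy * by) / det,   # gx
            (sxx * by - sxy * bx) / det)   # gy
```

For an exactly linear field the estimate recovers the gradient exactly, independent of element shape; on general meshes the quality of this derivative estimate is what drives the accuracy of the cross-diffusion and correction terms.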
Victor M. Pérez, John E. Renaud and Layne T. Watson
Abstract
Purpose
To reduce the computational complexity per step from O(n²) to O(n) for optimization based on quadratic surrogates, where n is the number of design variables.
Design/methodology/approach
Applying nonlinear optimization strategies directly to complex multidisciplinary systems can be prohibitively expensive when the complexity of the simulation codes is large. Increasingly, response surface approximations (RSAs), and specifically quadratic approximations, are being integrated with nonlinear optimizers in order to reduce the CPU time required for the optimization of complex multidisciplinary systems. For evaluation by the optimizer, RSAs provide a computationally inexpensive lower fidelity representation of the system performance. The curse of dimensionality is a major drawback in the implementation of these approximations as the amount of required data grows quadratically with the number n of design variables in the problem. In this paper a novel technique to reduce the magnitude of the sampling from O(n²) to O(n) is presented.
Findings
The technique uses prior information to approximate the eigenvectors of the Hessian matrix of the RSA and only requires the eigenvalues to be computed by response surface techniques. The technique is implemented in a sequential approximate optimization algorithm and applied to engineering problems of variable size and characteristics. Results demonstrate that a reduction in the data required per step from O(n²) to O(n) points can be accomplished without significantly compromising the performance of the optimization algorithm.
Originality/value
A reduction in the time (number of system analyses) required per step from O(n²) to O(n) is significant, even more so as n increases. The novelty lies in how only O(n) system analyses can be used to approximate a Hessian matrix whose estimation normally requires O(n²) system analyses.
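The idea can be sketched generically: if approximate eigenvector directions V are carried over from prior iterations, only the n eigenvalues need fresh data, one central second difference per direction (2n + 1 function evaluations in total). This is an illustrative stand-in, not the authors' exact algorithm:

```python
def hessian_from_eigvecs(f, x, V, h=1e-3):
    """O(n)-sample Hessian estimate: second differences of f along the
    columns of V, assumed to approximate the Hessian's eigenvectors."""
    n = len(x)
    fx = f(x)
    lam = []
    for i in range(n):                       # one curvature per direction
        xp = [x[j] + h * V[j][i] for j in range(n)]
        xm = [x[j] - h * V[j][i] for j in range(n)]
        lam.append((f(xp) - 2.0 * fx + f(xm)) / h ** 2)
    # Reassemble H ~ V diag(lam) V^T
    return [[sum(V[r][k] * lam[k] * V[c][k] for k in range(n))
             for c in range(n)] for r in range(n)]
```

A full finite-difference Hessian would instead require O(n²) evaluations, one per entry; the saving comes from trusting the carried-over eigenvector directions.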