Search results: 1–10 of over 2000
THE general theorems given in Sections 4 and 6 include, from the fundamental point of view, all that is required for the analysis of redundant structures. However, to facilitate practical calculations it is helpful to develop more explicit methods and formulae. To find these is the purpose of this Section.
This paper describes a scheme which enables an electronic digital computer to deal directly with matrices and matrix instructions. It enables the transformation between the specification of matrix calculations on paper and the actual operations within the computer to be carried out in easy and concise terms. Using this scheme the paper develops the appropriate programmes of instructions to be given to the computer for the calculations involved when applying the Argyris matrix method for the analysis of stresses and displacements in arbitrary elastic structures. In order to introduce the reader to the technique a programme for a simple structure is given in Part I. General purpose programmes applicable to more complex structures are given in Parts II and III.
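The kind of matrix calculation the paper programs can be illustrated with a minimal sketch of the matrix displacement method: assemble a global stiffness matrix from element contributions and solve for the unknown displacements. The two-element spring model, the stiffness values and the load below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal sketch: two axial (spring) elements in series, node 0 fixed,
# a point load at node 2. Stiffness values are illustrative.
k1, k2 = 100.0, 200.0

# Assemble the global stiffness matrix from element contributions.
K = np.zeros((3, 3))
for (i, j), k in [((0, 1), k1), ((1, 2), k2)]:
    K[np.ix_([i, j], [i, j])] += k * np.array([[1, -1], [-1, 1]])

# Apply the support condition (node 0 fixed) and solve K u = f
# for the free degrees of freedom.
free = [1, 2]
f = np.array([0.0, 10.0])  # 10-unit load at node 2
u = np.linalg.solve(K[np.ix_(free, free)], f)
print(u)  # displacements at nodes 1 and 2
```

The same assembly-then-solve pattern scales directly to the larger structures treated in Parts II and III.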
This chapter aims to clarify the growth and distribution of China’s economy over 1987–2000, with fixed capital, on an input-output table basis. Since fixed capital data are not sufficiently available, one has to estimate fixed capital coefficients. At the outset, this chapter outlines the Sraffa–Fujimori method, which simulates the maximum growth path and estimates the marginal fixed capital coefficients on that path. Next, the marginal fixed capital coefficients of China’s economy are estimated. Finally, the wage-profit curves of China’s economy are drawn, and some further features obtained from our observations are discussed.
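In models of this family, the maximum balanced growth rate is tied to the dominant (Perron) eigenvalue of the input coefficient matrix. A minimal sketch, using an illustrative two-sector matrix rather than China's actual coefficients:

```python
import numpy as np

# Hedged sketch: for a closed Leontief-type model with input matrix A
# (here standing in for combined current-input and fixed-capital
# coefficients), the maximum uniform growth rate is 1/lambda - 1,
# where lambda is the Perron eigenvalue. Numbers are illustrative.
A = np.array([[0.3, 0.2],
              [0.1, 0.4]])

eigvals = np.linalg.eigvals(A)
lam = max(eigvals.real)     # Perron eigenvalue of a nonnegative matrix
g_max = 1.0 / lam - 1.0     # maximum balanced growth rate
print(g_max)
```

The actual Sraffa–Fujimori computation distinguishes current inputs from fixed capital, but the eigenvalue structure above is the common core.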
Every “structural model” is defined by its set of covariance and mean expectations. These expectations are the source of parameter estimates, fit statistics, and substantive interpretation. The recent chapter by Cortina, Pant, and Smith-Darden (this volume; in F. Dansereau & F. J. Yammarino (Eds.), Research in multi-level issues, Vol. 4. Oxford, England: Elsevier) shows how a formal investigation of the data covariance matrix of longitudinal data can lead to an improved understanding of the estimates of covariance terms among linear growth models. The investigations presented by Cortina et al. (this volume) are reasonable and potentially informative for researchers using linear change growth models. However, it is quite common for behavioral researchers to consider more complex models, in which case a variety of more complex techniques for the calculation of expectations will be needed. In this chapter we demonstrate how available computer programs, such as Maple, can be used to automatically create algebraic expectations for the means and covariances of any structural model. The examples presented here can be used for a latent growth model of any complexity, including linear and nonlinear processes and any number of longitudinal measurements.
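The chapter uses Maple; the same symbolic derivation of model-implied expectations can be sketched in SymPy (used here only as a stand-in). This toy version builds the implied covariance matrix of a linear latent growth model with three occasions; the symbol names and time codings (0, 1, 2) are illustrative assumptions.

```python
import sympy as sp

# Hedged SymPy sketch (standing in for Maple): algebraic covariance
# expectations of a linear latent growth model, Sigma = L*Phi*L' + theta*I.
phi00, phi11, phi01, theta = sp.symbols('phi00 phi11 phi01 theta')

Lam = sp.Matrix([[1, 0], [1, 1], [1, 2]])          # intercept and slope loadings
Phi = sp.Matrix([[phi00, phi01], [phi01, phi11]])  # latent covariance matrix
Sigma = Lam * Phi * Lam.T + theta * sp.eye(3)      # implied covariance

print(sp.simplify(Sigma[1, 1]))  # variance expectation at the second occasion
```

Replacing `Lam` with nonlinear basis functions, or enlarging it for more occasions, yields the corresponding expectations automatically, which is the chapter's central point.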
The purpose of this paper is to analyze the content of the statements that are released by the Federal Open Market Committee (FOMC) after its meetings, identify the main textual associative patterns in the statements and examine their impact on the US treasury market.
Latent semantic analysis (LSA), a language processing technique that allows recognition of the textual associative patterns in documents, is applied to all the statements released by the FOMC between 2003 and 2014, so as to identify the main textual “themes” used by the Committee in its communication to the public. The importance of the main identified “themes” is tracked over time, before examining their (collective and individual) effect on treasury market yield volatility via time-series regression analysis.
We find that FOMC statements incorporate multiple, multifaceted and recurring textual themes, with six of them being able to characterize most of the communicated monetary policy in the authors’ sample period. The themes are statistically significant in explaining the variation in three-month, two-year, five-year and ten-year treasury yields, even after controlling for monetary policy uncertainty and the concurrent economic outlook.
The main research implication of the authors’ study is that the LSA can successfully identify the most economically significant themes underlying the Fed’s communication, as the latter is expressed in monetary policy statements. The authors feel that the findings of the study would be strengthened if the analysis was repeated using intra-day (tick-by-tick or five-minute) data on treasury yields.
The authors’ findings are consistent with the notion that the move to “increased transparency” by the Fed is important and meaningful for financial and capital markets, as suggested by the significant effect that the most important identified textual themes have on treasury yield volatility.
This paper makes a timely contribution to a fairly recent stream of research that combines specific textual and statistical techniques so as to conduct content analysis. To the best of their knowledge, the authors’ study is the first that applies the LSA to the statements released by the FOMC.
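The LSA step described above reduces to a truncated SVD of a term-document matrix. A minimal sketch with plain NumPy, on a toy matrix rather than actual FOMC text:

```python
import numpy as np

# Hedged LSA sketch: truncated SVD of a small term-document matrix
# (rows = terms, columns = statements). The counts are toy data.
X = np.array([[2, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 2, 0, 1],
              [0, 0, 1, 2]], dtype=float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                # keep the k strongest latent "themes"
themes = Vt[:k]      # each statement's loading on each theme
print(themes.shape)  # (2, 4): 2 themes x 4 statements
```

Tracking each column of `themes` over time, and regressing yield volatility on those loadings, mirrors the paper's design at sketch level.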
This paper is concerned with rank analysis of the rectangular matrix of a homogeneous set of incremental equations, regarded as an element of a continuation method. The rank analysis is based on the known fact that every rectangular matrix can be transformed into echelon form. By inspection of the rank, correct control parameters are chosen, which allows not only for rounding limit and turning points but also for branch‐switching near bifurcation points.
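The rank inspection the paper relies on can be sketched as row reduction of a rectangular matrix to echelon form with partial pivoting; the function and test matrix below are illustrative, not the paper's incremental equations.

```python
import numpy as np

# Hedged sketch: numerical rank of a rectangular matrix via reduction
# to row echelon form with partial pivoting.
def echelon_rank(A, tol=1e-10):
    A = A.astype(float).copy()
    m, n = A.shape
    rank = 0
    for col in range(n):
        if rank == m:
            break
        # pick the largest available pivot in this column
        p = rank + np.argmax(np.abs(A[rank:, col]))
        if abs(A[p, col]) < tol:
            continue  # no usable pivot: this column adds no rank
        A[[rank, p]] = A[[p, rank]]  # swap the pivot row up
        A[rank + 1:] -= np.outer(A[rank + 1:, col] / A[rank, col], A[rank])
        rank += 1
    return rank

B = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])  # second row is a multiple of the first
print(echelon_rank(B))           # 1
```

In a continuation setting, a drop in this rank near a solution point is the signal used to switch control parameters.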
This paper aims to address not only technical and economic challenges in the electrical distribution system but also the environmental impact and the depletion of conventional energy resources caused by rapidly growing economic development and the resulting rise in energy consumption.
Generally, the network reconfiguration (NR) problem is designed to minimize power loss. Here, it is devised to maximize power loss reduction by simultaneous NR and distributed generation (DG) placement. A loss sensitivity factor procedure incorporated in the problem formulation identifies the nodes most sensitive to DG placement. An adaptive weighted improved discrete particle swarm optimization (AWIDPSO) is proposed for ascertaining a feasible solution.
In AWIDPSO, the adaptively varying inertia weight widens the range of candidate solutions explored in the global search space and obtains the optimum solution in fewer iterations. Moreover, it provides a solution for integrating a greater amount of DG optimally into the existing distribution network (DN).
AWIDPSO appears to be a promising optimization tool for optimal DG placement in the existing DN, for DG placement after NR, and for simultaneous NR and DG sizing and placement. Thus, a strategic balance is struck among economic development, energy consumption, environmental impact and the depletion of conventional energy resources.
In this study, a standard 33-bus distribution system has been analyzed for optimal NR in the presence of DG using the developed framework. The power loss in the DN is reduced considerably by introducing new and innovative approaches and technologies.
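The core mechanism the abstract describes, a particle swarm with an adaptively varying inertia weight, can be sketched with a standard linearly decreasing weight on a stand-in objective. The sphere function, parameter values and seed below are illustrative assumptions; the actual AWIDPSO operates on a discrete NR/DG loss model.

```python
import numpy as np

# Hedged PSO sketch with a linearly decreasing inertia weight:
# large w early favours global exploration, small w late favours
# local refinement. Objective is a stand-in (sphere function).
rng = np.random.default_rng(0)

def sphere(x):
    return np.sum(x * x, axis=1)

n, dim, iters = 20, 2, 100
w_max, w_min, c1, c2 = 0.9, 0.4, 2.0, 2.0

x = rng.uniform(-5, 5, (n, dim))
v = np.zeros((n, dim))
pbest, pval = x.copy(), sphere(x)
g = pbest[np.argmin(pval)].copy()  # global best position

for t in range(iters):
    w = w_max - (w_max - w_min) * t / iters  # inertia weight decays over time
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = x + v
    f = sphere(x)
    better = f < pval
    pbest[better], pval[better] = x[better], f[better]
    g = pbest[np.argmin(pval)].copy()

print(sphere(g[None])[0])  # best objective value found, near 0
```

A discrete variant for NR would replace the continuous position update with switch-state encoding, but the weight schedule is the same.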
Gives introductory remarks about Chapter 1 of this group of 31 papers from the ISEF 1999 Proceedings, on methodologies for field analysis in the electromagnetic community. Observes that the theory behind computer package implementation contributes to clarification. Discusses the areas covered by some of the papers, such as artificial intelligence using fuzzy logic. Includes applications such as permanent magnets and looks at eddy‐current problems. States that the finite element method is currently the most popular method used for field computation. Closes by pointing out the amalgam of topics.
TO establish the strength and stiffness of certain types of structure it is necessary to use a large displacement theory, i.e. one in which allowance is made for the redistribution of the loading effects as a consequence of the deformation produced by the loads. The post‐buckling behaviour of panels, cylinders and other types of structure under compressive end loads requires such a theory; another important category is constituted by thin plates under normal pressure, the pressure being resisted partly by tensions in the plane of the plate, in the way that membranes resist pressure, and partly by the bending resistance or stiffness of the plate.
A new algorithm for reducing the profile and root‐mean‐square wavefront of sparse matrices with a symmetric structure is presented. Our numerical experiments show an overall better performance than the widely used reverse Cuthill‐McKee, Gibbs‐King and Sloan algorithms. The new algorithm is fast, simple and useful in engineering analysis where it can be employed to derive efficient orderings for both profile and frontal solution schemes.
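One of the baselines named here, reverse Cuthill-McKee, is simple enough to sketch: a breadth-first traversal that visits neighbours in order of increasing degree, then reverses the resulting ordering. The adjacency structure below is an illustrative toy graph.

```python
from collections import deque

# Hedged sketch of the reverse Cuthill-McKee ordering (a baseline the
# new algorithm is compared against), on a symmetric sparsity pattern
# given as a dict of neighbour lists.
def rcm(adj):
    degree = {v: len(adj[v]) for v in adj}
    visited, order = set(), []
    # start each component from a vertex of minimum degree
    for start in sorted(adj, key=lambda v: degree[v]):
        if start in visited:
            continue
        visited.add(start)
        q = deque([start])
        while q:
            v = q.popleft()
            order.append(v)
            # enqueue unvisited neighbours by increasing degree
            for u in sorted(adj[v], key=lambda w: degree[w]):
                if u not in visited:
                    visited.add(u)
                    q.append(u)
    return order[::-1]  # reverse for the "reverse" CM ordering

adj = {0: [2], 1: [2, 3], 2: [0, 1], 3: [1]}
print(rcm(adj))
```

Profile and wavefront of the reordered matrix, rather than bandwidth alone, are the quantities the paper's algorithm targets.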