Search results
1 – 10 of over 8,000
Abstract
Gives a bibliographical review of the finite element methods (FEMs) applied for the linear and nonlinear, static and dynamic analyses of basic structural elements from the theoretical as well as practical points of view. The range of applications of FEMs in this area is wide and cannot be presented in a single paper; the paper therefore aims to give the reader an encyclopaedic view on the subject. The bibliography at the end of the paper contains 2,025 references to papers, conference proceedings and theses/dissertations dealing with the analysis of beams, columns, rods, bars, cables, discs, blades, shafts, membranes, plates and shells that were published in 1992-1995.
Reza Hajipour Farsangi, Ghadir Mahdavi, Majid Jafari Khaledi, Murat Büyükyazıcı and Mitra Ghanbarzadeh
Abstract
Purpose
This study aims to price the risk contribution of general Takaful at the level of tariff cells, considering a spatial dependency framework.
Design/methodology/approach
Three different models, including a generalized linear model, a generalized linear mixed model (GLMM) and a spatial generalized linear mixed model (SGLMM), according to the actuarial modeling of general Takaful, are used to price pure risk contribution (PRC).
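The frequency-severity decomposition that underlies the pure risk contribution can be sketched in a few lines. The sketch below uses raw per-cell averages and invented tariff-cell names purely for illustration; the paper estimates these quantities with GLM, GLMM and SGLMM regressions rather than cell means.

```python
# Pure risk contribution (PRC) per tariff cell via the standard
# frequency-severity decomposition. Records below are hypothetical.
from collections import defaultdict

# (tariff_cell, claim_count, exposure_years, total_claim_amount)
policies = [
    ("urban_young", 2, 1.0, 3200.0),
    ("urban_young", 0, 1.0, 0.0),
    ("rural_old",   1, 0.5, 800.0),
    ("rural_old",   0, 1.0, 0.0),
]

counts = defaultdict(float)
exposure = defaultdict(float)
amounts = defaultdict(float)
for cell, n, e, s in policies:
    counts[cell] += n
    exposure[cell] += e
    amounts[cell] += s

def pure_risk_contribution(cell):
    freq = counts[cell] / exposure[cell]  # expected claims per unit exposure
    sev = amounts[cell] / counts[cell] if counts[cell] else 0.0  # mean cost per claim
    return freq * sev                     # PRC = frequency x severity

print(pure_risk_contribution("urban_young"))  # -> 1600.0
```

In the paper, the gross contribution is then built on top of the PRC according to the chosen Takaful model; the cell-average estimator above is only a stand-in for the fitted spatial model.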
Findings
The results reveal that the SGLMM yields more accurate predictions of the PRC compared to the other models, emphasizing the significance of spatial modeling in this context. Following the estimation of the PRC, the gross contribution according to the mechanism of Takaful models is calculated considering the spatial model.
Practical implications
Considering the similarities between Takaful and insurance, this study addresses the pricing of general Takaful within different Takaful models through a spatial dependency framework, so that its practical implications apply to running the Takaful business in both Islamic and non-Islamic countries.
Originality/value
Most studies consider only the social or practical view of Takaful. This study contributes to the broader knowledge and understanding of Takaful by presenting a conceptual understanding of Takaful and then investigating the practical application of pricing risk contribution using innovative modeling of claim frequency and severity at the level of tariff cells.
Rabe Alsafadie, Mohammed Hjiaj, Hugues Somja and Jean‐Marc Battini
Abstract
Purpose
The purpose of this paper is to present eight local elasto‐plastic beam element formulations incorporated into the corotational framework for two‐noded three‐dimensional beams. These formulations capture the warping torsional effects of open cross‐sections and are suitable for the analysis of the nonlinear buckling and post‐buckling of thin‐walled frames with generic cross‐sections. The paper highlights the similarities and discrepancies between the different local element formulations. The primary goal of this study is to compare all the local element formulations in terms of accuracy, efficiency and CPU‐running time.
Design/methodology/approach
The definition of the corotational framework for a two-noded three-dimensional beam element is presented, based upon the works of Battini. The definitions of the local element kinematics and displacement shape functions are developed based on both Timoshenko and Bernoulli assumptions, and considering low-order as well as higher-order terms in the second-order approximation of the Green-Lagrange strains. Element force interpolations and generalized stress resultant vectors are then presented for both mixed-based Timoshenko and Bernoulli formulations. Subsequently, the local internal force vector and tangent stiffness matrix are derived using the principle of virtual work for displacement-based elements and the two-field Hellinger-Reissner assumed stress variational principle for mixed-based formulations, respectively. A full comparison and assessment of the different local element models are performed by means of several numerical examples.
Findings
This study shows that the higher-order elements are more accurate than the low-order ones, and that the higher-order mixed-based Bernoulli element seems to require the fewest finite elements to model the structural behavior accurately; it therefore allows some reduction of the CPU time compared to the other converged solutions, where a larger number of elements is needed to discretize the structure efficiently.
Originality/value
The paper reports computation times for each model in order to assess their relative efficiency. The effect of the numbers of Gauss points along the element length and within the cross‐section are also investigated.
Donald J. Schepker and Paul D. Bliese
Abstract
Panel data, where observations of entities are repeated over time, are common in strategic management research. However, explorations of the role of time on predictors of interest are often unexplored. In this chapter, we illustrate how the use of mixed-effect growth models can enhance theory and research in strategic management by exploring changes in outcomes of interest over time. Mixed-effects models allow for testing both within and between effects, while also calculating specific intercepts (firm average values) and slopes (trajectories of specific firms over time) using empirical Bayes estimates. We also illustrate how a discontinuous growth model could be used to assess differences in firm intercepts and slopes surrounding exogenous events (e.g., global pandemics) without requiring a control group.
Abstract
Purpose
This study is a replication of Tosi and Greckhamer's work examining how uncertainty avoidance, power distance, individualism and masculinity/femininity are related to total CEO pay, the ratio of variable to total CEO pay and the ratio of CEO pay to the pay of the lowest level employees in 23 countries. Its main purpose is to investigate whether the replication confirms, questions or extends the results of the original study (TG2004).
Design/methodology/approach
Tosi and Greckhamer used generalized least squares (GLS) to analyze the relationships between Hofstede's four cultural dimensions and CEO compensation. In the replication, the author used GLS to retest the original seven hypotheses with more recent data from Hofstede and test the same hypotheses relying on cultural values and practices scores from the GLOBE study. Further, using firm-level data unavailable for the original study, the author analyzed fixed and random effects in mixed models.
Findings
The replication generally confirms the findings of the original study for the effects of power distance, individualism and masculinity on CEO total pay. Results are mixed or indicate the lack of significant effect for other relationships.
Research limitations/implications
This study reexamines the effects of country-level contextual variables in the area of CEO compensation.
Originality/value
The replication presents firm-level CEO compensation and firm performance data from 21 countries, extending the original study and unveiling possible spurious effects.
Roger N. Conway and Ron C. Mittelhammer
Abstract
In the last two decades there has been considerable progress in the development of alternative estimation techniques to ordinary least squares (OLS) regression. The search for alternative estimators has no doubt been motivated by the observation of erratic OLS estimator behavior in cases where there are too few observations, multicollinearity problems, or simply “information-poor” data sets. Imprecise and unreliable OLS coefficient estimates have been the result.
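Ridge regression is one of the best-known alternatives of this kind: it trades a little bias for a large variance reduction when predictors are collinear. A minimal sketch on synthetic data (ridge is named here as a representative alternative estimator, not necessarily the one the authors study; the data are invented):

```python
# Ridge regression vs. OLS under near-perfect multicollinearity.
import numpy as np

rng = np.random.default_rng(0)
n = 30
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)          # nearly collinear with x1
X = np.column_stack([np.ones(n), x1, x2])
y = 1.0 + 2.0 * x1 + 3.0 * x2 + 0.1 * rng.normal(size=n)

def ridge(X, y, lam):
    # beta = (X'X + lam*I)^{-1} X'y ; lam = 0 recovers OLS.
    # (For simplicity the intercept is penalized too; in practice it
    # is usually left unpenalized.)
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

b_ols = ridge(X, y, 0.0)     # erratic: large offsetting coefficients
b_ridge = ridge(X, y, 1.0)   # shrunken, stabilized estimates
```

The ridge solution norm is non-increasing in the penalty, which is exactly the stabilization the paragraph above alludes to.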
Glenn W. Harrison and J. Todd Swarthout
Abstract
We take Cumulative Prospect Theory (CPT) seriously by rigorously estimating structural models using the full set of CPT parameters. Much of the literature only estimates a subset of CPT parameters, or more simply assumes CPT parameter values from prior studies. Our data are from laboratory experiments with undergraduate students and MBA students facing substantial real incentives and losses. We also estimate structural models from Expected Utility Theory (EUT), Dual Theory (DT), Rank-Dependent Utility (RDU), and Disappointment Aversion (DA) for comparison. Our major finding is that a majority of individuals in our sample locally asset integrate. That is, they see a loss frame for what it is, a frame, and behave as if they evaluate the net payment rather than the gross loss when one is presented to them. This finding is devastating to the direct application of CPT to these data for those subjects. Support for CPT is greater when losses are covered out of an earned endowment rather than house money, but RDU is still the best single characterization of individual and pooled choices. Defenders of the CPT model claim, correctly, that the CPT model exists “because the data says it should.” In other words, the CPT model was born of a wide range of stylized facts culled from parts of the cognitive psychology literature. If one is to take the CPT model seriously and rigorously then it needs to do a much better job of explaining the data than we see here.
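For concreteness, the "full set of CPT parameters" refers to functional forms of the kind introduced by Tversky and Kahneman (1992): a power value function with loss aversion and an inverse-S probability weighting function. The sketch below uses the well-known TK92 point estimates purely for illustration (not the estimates from this paper) and, for brevity, applies the same weighting function to gains and losses:

```python
# CPT building blocks with Tversky-Kahneman (1992) parameter values.
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Power value function; lam > 1 encodes loss aversion."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

def weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Loss aversion in action: a 50/50 gamble over +100 / -100 has a
# negative CPT valuation even though its expected value is zero.
mixed = weight(0.5) * value(100) + weight(0.5) * value(-100)
```

Structural estimation then fits alpha, beta, lambda and gamma jointly to observed choices, rather than assuming the values used above.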
Mário Rui Tiago Arruda and Dragos Ionut Moldovan
Abstract
Purpose
The purpose of this paper is to report the implementation of an alternative time integration procedure for the dynamic non-linear analysis of structures.
Design/methodology/approach
The time integration algorithm discussed in this work corresponds to a spectral decomposition technique implemented in the time domain. As in the case of the modal decomposition in space, the numerical efficiency of the resulting integration scheme depends on the possibility of uncoupling the equations of motion. This is achieved by solving an eigenvalue problem in the time domain that only depends on the approximation basis being implemented. Complete sets of orthogonal Legendre polynomials are used to define the time approximation basis required by the model.
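The uncoupling property rests on the orthogonality of the Legendre basis. A quick, independent numerical check of that orthogonality on [-1, 1] (not the paper's code; the quadrature order is an arbitrary choice):

```python
# Orthogonality of Legendre polynomials, checked with Gauss-Legendre
# quadrature, which is exact for the polynomial degrees involved here.
import numpy as np
from numpy.polynomial import legendre as leg

nodes, weights = leg.leggauss(8)          # 8-point Gauss-Legendre rule

def inner(i, j):
    """Inner product <P_i, P_j> on [-1, 1]."""
    ci = [0] * i + [1]                    # coefficient vector selecting P_i
    cj = [0] * j + [1]
    return np.sum(weights * leg.legval(nodes, ci) * leg.legval(nodes, cj))

# <P_i, P_j> = 0 for i != j, and 2/(2i+1) for i == j:
print(inner(2, 3))   # ~ 0
print(inner(2, 2))   # ~ 2/5
```

It is this orthogonality, carried into the time-domain eigenvalue problem, that allows the equations of motion to be uncoupled.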
Findings
A classical example with a known analytical solution is presented to validate the model in both linear and non-linear analysis, and the efficiency of the numerical technique is assessed through comparisons with the classical Newmark method applied to both linear and non-linear dynamics. The mixed time integration technique presents some interesting features that make its application to the analysis of non-linear dynamic systems very attractive.
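The classical Newmark scheme used as the benchmark can be sketched for an undamped single-degree-of-freedom oscillator (average-acceleration variant, beta = 1/4, gamma = 1/2; the test problem below is illustrative, not the paper's example):

```python
# Newmark time stepping for u'' + omega^2 * u = 0.
import math

def newmark_sdof(omega, u0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    u, v = u0, v0
    a = -omega ** 2 * u                       # initial acceleration from the EOM
    for _ in range(n_steps):
        # Newmark predictors for displacement and velocity
        u_p = u + dt * v + dt ** 2 * (0.5 - beta) * a
        v_p = v + dt * (1 - gamma) * a
        # Enforce the EOM at the new step:
        # (1 + beta*dt^2*omega^2) * a_new = -omega^2 * u_p
        a_new = -omega ** 2 * u_p / (1 + beta * dt ** 2 * omega ** 2)
        u = u_p + beta * dt ** 2 * a_new
        v = v_p + gamma * dt * a_new
        a = a_new
    return u, v

omega = 2 * math.pi                           # natural period T = 1
u_num, _ = newmark_sdof(omega, 1.0, 0.0, dt=0.001, n_steps=1000)
# After one full period the exact solution returns to u = cos(2*pi) = 1.
```

The average-acceleration variant is unconditionally stable and introduces no numerical damping, which is why it is the usual reference for comparisons like the one in this paper.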
Originality/value
One of the main advantages of this technique is the possibility of considering relatively large time step increments which enhances the computational efficiency of the numerical procedure. Due to its characteristics, this method is well suited to parallel processing, one of the features that have to be conveniently explored in the near future.
Ahmed K. Noor and Jeanne M. Peters
Abstract
A computational procedure is presented for the efficient non‐linear dynamic analysis of quasi‐symmetric structures. The procedure is based on approximating the unsymmetric response vectors, at each time step, by a linear combination of symmetric and antisymmetric vectors, each obtained using approximately half the degrees of freedom of the original model. A mixed formulation is used with the fundamental unknowns consisting of the internal forces (stress resultants), generalized displacements and velocity components. The spatial discretization is done by using the finite element method, and the governing semi‐discrete finite element equations are cast in the form of first‐order non‐linear ordinary differential equations. The temporal integration is performed by using implicit multistep integration operators. The resulting non‐linear algebraic equations, at each time step, are solved by using iterative techniques. The three key elements of the proposed procedure are: (a) use of mixed finite element models with independent shape functions for the stress resultants, generalized displacements, and velocity components and with the stress resultants allowed to be discontinuous at interelement boundaries; (b) operator splitting, or restructuring of the governing discrete equations of the structure to delineate the contributions to the symmetric and antisymmetric vectors constituting the response; and (c) use of a two‐level iterative process (with nested iteration loops) to generate the symmetric and antisymmetric components of the response vectors at each time step. The top‐ and bottom‐level iterations (outer and inner iterative loops) are performed by using the Newton—Raphson and the preconditioned conjugate gradient (PCG) techniques, respectively. The effectiveness of the proposed strategy is demonstrated by means of a numerical example and the potential of the strategy for solving more complex non‐linear problems is discussed.
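The inner (bottom-level) iterations in (c) use the preconditioned conjugate gradient method, which can be sketched generically; the Jacobi preconditioner and the small test system below are illustrative choices, not the authors' implementation:

```python
# Jacobi-preconditioned conjugate gradient for a symmetric positive
# definite system A x = b.
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=200):
    M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner: M = diag(A)
    x = np.zeros_like(b)
    r = b - A @ x                     # initial residual
    z = M_inv * r                     # preconditioned residual
    p = z.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            return x
        z_new = M_inv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])    # small SPD test matrix
x = pcg(A, np.array([1.0, 2.0]))          # exact solution: [1/11, 7/11]
```

In the paper's two-level scheme, an outer Newton-Raphson loop supplies the linearized systems that an inner loop of this kind then solves at each time step.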
Abstract
The adoption of a model‐building approach to marketing is today inevitable, due to improvements in hardware and software and the increased professionalisation of marketing and its techniques. Aggregate response models are focused upon, particularly the issues of which responses are realistic and should be modelled, how the response can be expressed and how a choice can be made between options available. The traditional model‐building process is described, and the inclusion of correct variables found to be critical, the primary means of doing this being statistical analysis. Simple expressions perform as effectively as more complex ones, and should be used if able to give operationally meaningful results. Cross‐correlation analysis and biased estimation techniques provide good guides to usable variables and their effects.