Search results
1 – 10 of over 55,000
K.C. Chellamuthu and Nathan Ida
Two different ‘a posteriori’ error estimation techniques are proposed in this paper. The effectiveness of the error estimates in adaptive mesh refinement for 2D and 3D…
Abstract
Two different ‘a posteriori’ error estimation techniques are proposed in this paper. The effectiveness of the error estimates in adaptive mesh refinement for 2D and 3D electrostatic problems is also analyzed with numerical test results. The post‐processing method employs an improved solution to estimate the error, whereas the gradient of field method utilizes the gradient of the field solution to estimate the ‘a posteriori’ error. The gradient of field method is computationally inexpensive, since it solves a local problem on a patch of elements. The error estimates are tested by solving a set of self‐adjoint boundary value problems in 2D and 3D using a hierarchical minimal‐tree‐based mesh refinement algorithm. The numerical test results and the performance evaluation establish the effectiveness of the proposed error estimates for adaptive mesh refinement.
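The recovery idea behind a gradient-of-field indicator can be sketched generically. The following is a minimal 1D illustration of our own (not the authors' implementation): a smoothed gradient is recovered by averaging element gradients at the nodes, and the element-wise mismatch between the raw and recovered gradients serves as a cheap refinement indicator.

```python
import numpy as np

# Illustrative recovery-type ("gradient of field") error indicator in 1D.
# Hypothetical setup: nodal values of a linear FE solution on a graded mesh.
x = np.linspace(0.0, 1.0, 9) ** 2          # graded node coordinates
u = np.sin(np.pi * x)                       # stand-in for a computed FE solution

h = np.diff(x)                              # element sizes
g = np.diff(u) / h                          # piecewise-constant element gradients

# Recovered (smoothed) nodal gradient: average of adjacent element gradients.
g_rec = np.empty_like(x)
g_rec[1:-1] = 0.5 * (g[:-1] + g[1:])
g_rec[0], g_rec[-1] = g[0], g[-1]

# Element indicator: size-weighted mismatch between recovered and raw gradient.
g_rec_mid = 0.5 * (g_rec[:-1] + g_rec[1:])  # recovered gradient at midpoints
eta = np.sqrt(h) * np.abs(g_rec_mid - g)

# Flag elements whose indicator exceeds a fraction of the maximum.
refine = eta > 0.5 * eta.max()
print("indicators:", np.round(eta, 4))
print("elements flagged for refinement:", np.flatnonzero(refine))
```

Because the indicator only uses gradients already available on a patch of elements, no global re-solve is needed, which is what makes such indicators computationally inexpensive.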
Manuel E. Rademaker, Florian Schuberth and Theo K. Dijkstra
The purpose of this paper is to enhance consistent partial least squares (PLSc) to yield consistent parameter estimates for population models whose indicator blocks contain a…
Abstract
Purpose
The purpose of this paper is to enhance consistent partial least squares (PLSc) to yield consistent parameter estimates for population models whose indicator blocks contain a subset of correlated measurement errors.
Design/methodology/approach
Correction for attenuation as originally applied by PLSc is modified to include a priori assumptions on the structure of the measurement error correlations within blocks of indicators. To assess the efficacy of the modification, a Monte Carlo simulation is conducted.
Findings
In the presence of population measurement error correlation, estimated parameter bias is generally small for original and modified PLSc, with the latter outperforming the former for large sample sizes. In terms of the root mean squared error, the results are virtually identical for both original and modified PLSc. Only for relatively large sample sizes, high population measurement error correlation, and low population composite reliability are the increased standard errors associated with the modification outweighed by a smaller bias. These findings are regarded as initial evidence that original PLSc is comparatively robust with respect to misspecification of the structure of measurement error correlations within blocks of indicators.
Originality/value
Introducing and investigating a new approach to address measurement error correlation within blocks of indicators in PLSc, this paper contributes to the ongoing development and assessment of recent advancements in partial least squares path modeling.
This paper clarifies the nature of the error term in formative measurement models, which had been misinterpreted in prior research.
Abstract
Purpose
To clarify the nature of the error term in formative measurement models, which had been misinterpreted in prior research.
Design/methodology/approach
The error term in formative measurement models is analytically contrasted with the measurement errors typically found in reflective measurement models.
Findings
It is demonstrated that, unlike in reflective measurement, the error term in formative models is not measurement error but rather a disturbance representing non‐modeled causes. It is also shown that, under certain circumstances, the inclusion of an error term is not necessary/appropriate.
Research limitations/implications
Focus is only on first‐order measurement models; higher‐order specifications are not considered.
Originality/value
The paper helps researchers in their initial specification of formative measurement models as well as their evaluation of the subsequent model estimates, leading to better specifications for formative constructs.
Ahmed K. Noor and Jeanne M. Peters
Error indicators are introduced as part of a simple computational procedure for improving the accuracy of the finite element solutions for plate and shell problems. The procedure…
Abstract
Error indicators are introduced as part of a simple computational procedure for improving the accuracy of the finite element solutions for plate and shell problems. The procedure is based on using an initial (coarse) grid and a refined (enriched) grid, and approximating the solution for the refined grid by a linear combination of a few global approximation vectors (or modes) which are generated by solving two uncoupled sets of equations in the coarse grid unknowns and the additional degrees of freedom of the refined grid. The global approximation vectors serve as error indicators since they provide quantitative pointwise information about the sensitivity of the different response quantities to the approximation used. The three key elements of the computational procedure are: (a) use of mixed finite element models with discontinuous stress resultants at the element interfaces; (b) operator splitting, or additive decomposition of the finite element arrays for the refined grid into the sum of the coarse grid arrays and correction terms (representing the refined grid contributions); and (c) application of a reduction method through successive use of the finite element method and the classical Bubnov—Galerkin technique. The finite element method is first used to generate a few global approximation vectors (or modes). Then the amplitudes of these modes are computed by using the Bubnov—Galerkin technique. The similarities between the proposed computational procedure and a preconditioned conjugate gradient (PCG) technique are identified and are exploited to generate from the PCG technique pointwise error indicators. The effectiveness of the proposed procedure is demonstrated by means of two numerical examples of an isotropic toroidal shell and a laminated anisotropic cylindrical panel.
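The reduction step described above can be sketched in generic form. In the toy example below, the global approximation vectors are plain Krylov vectors rather than the coarse-grid/correction modes of the paper (an assumption of ours), but the Bubnov–Galerkin amplitude computation is the same: project the large system onto the span of a few vectors and solve the small reduced system.

```python
import numpy as np

# Generic sketch of a reduction method with Bubnov-Galerkin amplitudes.
# The basis vectors here are illustrative Krylov vectors, not the
# coarse-grid/correction modes generated by the paper's procedure.
rng = np.random.default_rng(1)
n = 60
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)          # SPD stand-in for a refined-grid matrix
f = rng.standard_normal(n)

u_exact = np.linalg.solve(K, f)      # full refined-grid solve (for comparison)

# A few global approximation vectors, orthonormalized.
V = np.column_stack([f, K @ f, K @ (K @ f)])
V, _ = np.linalg.qr(V)

# Bubnov-Galerkin: solve the small reduced system for the mode amplitudes.
a = np.linalg.solve(V.T @ K @ V, V.T @ f)
u_red = V @ a

rel_err = np.linalg.norm(u_red - u_exact) / np.linalg.norm(u_exact)
print("reduced-basis relative error:", rel_err)
```

The reduced system is only 3x3, so the expensive work is the handful of matrix-vector products used to build the basis; this is the same economy the paper exploits, and the Krylov structure hints at the noted kinship with preconditioned conjugate gradients.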
Yves Konkel, Ortwin Farle, Andreas Köhler, Alwin Schultschik and Romanus Dyczij‐Edlinger
The purpose of this paper is to compare competing adaptive strategies for fast frequency sweeps for driven and waveguide‐mode problems and give recommendations for practical…
Abstract
Purpose
The purpose of this paper is to compare competing adaptive strategies for fast frequency sweeps for driven and waveguide‐mode problems and give recommendations for practical implementations.
Design/methodology/approach
The paper first summarizes the theory of adaptive strategies for multi‐point (MP) sweeps and then evaluates the efficiency of such methods by means of numerical examples.
Findings
The authors' numerical tests give clear evidence of exponential convergence. In the driven case, highly resonant structures lead to pronounced pre‐asymptotic regions, followed by almost immediate convergence. Bisection and greedy point‐placement methods behave similarly. Incremental indicators are trivial to implement and perform as well as residual‐based methods.
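The bisection point-placement idea with an incremental indicator can be illustrated in isolation. The sketch below is a simplified stand-in of our own: it is not tied to the authors' reduced-order models, and the resonant transfer function merely imitates an expensive field solve. Each step bisects the frequency interval where the linear interpolant of existing samples disagrees most with a fresh evaluation at the midpoint, stopping once the worst mismatch falls below a tolerance.

```python
import numpy as np

# Hypothetical resonant transfer function standing in for a full-wave solve.
def response(f):
    return abs(1.0 / (1.0 - (f / 3.1) ** 2 + 0.02j * f))

def adaptive_sweep(f_lo, f_hi, tol=1e-3, max_pts=200):
    """Bisection point placement with an incremental error indicator:
    an interval's indicator is the mismatch between the linear interpolant
    of its endpoints and the response evaluated at the midpoint."""
    pts = {f_lo: response(f_lo), f_hi: response(f_hi)}
    while len(pts) < max_pts:
        fs = sorted(pts)
        best, best_err = None, 0.0
        for a, b in zip(fs[:-1], fs[1:]):
            mid = 0.5 * (a + b)
            err = abs(0.5 * (pts[a] + pts[b]) - response(mid))
            if err > best_err:
                best, best_err = mid, err
        if best_err < tol:            # self-adaptive termination criterion
            break
        pts[best] = response(best)    # place the new frequency point
    return sorted(pts)

freqs = adaptive_sweep(1.0, 5.0)
print(len(freqs), "frequency points placed")
```

Points automatically cluster around the resonance, which is the behaviour that makes such sweeps efficient and fully automatic in practice.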
Research limitations/implications
While the underlying reduction methods can be extended to any kind of affine parameter‐dependence, the numerical tests of this paper are for polynomial parameter‐dependence only.
Practical implications
The present paper describes self‐adaptive point‐placement methods and termination criteria to make MP frequency sweeps more efficient and fully automatic.
Originality/value
The paper provides a self‐adaptive strategy that is efficient and easy to implement. Moreover, it demonstrates that exponential convergence rates can be reached in practice.
Yuzhe Liu, Jun Wu, Liping Wang, Jinsong Wang, Dong Wang and Guang Yu
The purpose of this study is to develop a modified parameter identification method and a novel measurement method to calibrate a 3 degrees-of-freedom (3-DOF) parallel tool head…
Abstract
Purpose
The purpose of this study is to develop a modified parameter identification method and a novel measurement method to calibrate a 3 degrees-of-freedom (3-DOF) parallel tool head. This parallel tool head is a parallel mechanism module in a five-axis hybrid machine tool. The proposed parameter identification method, named the Modified Singular Value Decomposition (MSVD) method, aims to overcome the difficulty of choosing the algorithm parameter in the regularization identification method. The novel measurement method, named the vector projection (VP) method, is developed to expand the measurement range of self-made measurement instruments.
Design/methodology/approach
The Newton iterative algorithm based on the least squares method is analyzed using the singular value decomposition. Based on this analysis, the MSVD method is proposed. The VP method transforms the angle measurement into a displacement measurement by taking full advantage of the 3-DOF parallel tool head's ability to move in the X–Y plane.
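The abstract does not spell out the MSVD algorithm, but the generic idea behind stabilizing an ill-conditioned least-squares identification step with the SVD can be sketched. In the hypothetical example below (our assumption, not the paper's method), a tiny singular value in the identification Jacobian makes the plain least-squares update blow up along a near-unidentifiable parameter direction, while truncating small singular values keeps the update bounded.

```python
import numpy as np

# Sketch of SVD-truncated least squares, the generic idea behind
# regularizing an ill-conditioned identification Jacobian. The actual
# MSVD method and its parameter selection are specific to the paper.
rng = np.random.default_rng(0)

# Hypothetical Jacobian with a tiny singular value and a noisy residual.
J = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1e-9],
              [1.0, 1.0, 0.0]])
r = J @ np.array([0.2, -0.1, 0.05]) + 1e-6 * rng.standard_normal(4)

U, s, Vt = np.linalg.svd(J, full_matrices=False)

# Plain least squares amplifies noise along the tiny singular direction...
dp_naive = Vt.T @ ((U.T @ r) / s)

# ...whereas truncating singular values below a threshold keeps the
# parameter update bounded, at the cost of ignoring that direction.
keep = s > 1e-6 * s.max()
dp_trunc = Vt.T[:, keep] @ ((U[:, keep].T @ r) / s[keep])

print("naive update:    ", dp_naive)
print("truncated update:", dp_trunc)
```

The practical difficulty the MSVD method targets is precisely the choice of such a threshold (or regularization parameter); the value used above is arbitrary.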
Findings
The kinematic calibration approach is verified by calibration simulations, a Rotation Tool Center Point accuracy test and an experiment of machining an “S”-shaped test specimen.
Originality/value
The kinematic calibration approach with the MSVD method and VP method could be successfully applied to the 3-DOF parallel tool head and other 3-DOF parallel mechanisms.
A. Rieger and P. Wriggers
Several a posteriori error indicators and error estimators for frictionless contact problems are compared. In detail, residual based error estimators, error indicators relying on…
Abstract
Several a posteriori error indicators and error estimators for frictionless contact problems are compared. In detail, residual based error estimators, error indicators relying on superconvergence properties and error estimators based on duality principles are investigated. The methods are applied to 2D solids undergoing finite deformations with nonlinear elastic material behaviour. A penalization technique is applied to enforce multilateral boundary conditions due to contact. The approximate solution of the problem is obtained by using the finite element method. Several numerical results are reported to show the applicability of the adaptive algorithm to the considered problems.
This chapter draws from an understanding of measurement error to address practical issues that arise in measurement and research design in the day-to-day conduct of research. The…
Abstract
This chapter draws from an understanding of measurement error to address practical issues that arise in measurement and research design in the day-to-day conduct of research. The topics include constructs and measurement error, the measure development process, and the indicators of measurement error. The discussion covers types of measurement error, types of measures, and common scenarios in conducting research, linking measurement to research design.
Jörg Henseler, Christian M. Ringle and Marko Sarstedt
Research on international marketing usually involves comparing different groups of respondents. When using structural equation modeling (SEM), group comparisons can be misleading…
Abstract
Purpose
Research on international marketing usually involves comparing different groups of respondents. When using structural equation modeling (SEM), group comparisons can be misleading unless researchers establish the invariance of their measures. While methods have been proposed to analyze measurement invariance in common factor models, an analogous approach for composite models has been lacking. The purpose of this paper is to present a novel three-step procedure to analyze the measurement invariance of composite models (MICOM) when using variance-based SEM, such as partial least squares (PLS) path modeling.
Design/methodology/approach
A simulation study allows us to assess the suitability of the MICOM procedure to analyze the measurement invariance in PLS applications.
Findings
The MICOM procedure appropriately identifies no, partial, and full measurement invariance.
Research limitations/implications
The statistical power of the proposed tests requires further research, and researchers using the MICOM procedure should take potential type-II errors into account.
Originality/value
The research presents a novel procedure to assess measurement invariance in the context of composite models. Researchers in international marketing and other disciplines need to conduct this kind of assessment before undertaking multigroup analyses. They can use the MICOM procedure as a standard means of assessing measurement invariance.
Chensen Ding, Xiangyang Cui, Chong Li, Guangyao Li and Guoping Wang
In traditional adaptive analysis, a finite element method (FEM) analysis on a coarse mesh produces the initial solution. The result is then post-processed to identify the regions that should be refined…
Abstract
Purpose
In traditional adaptive analysis, a finite element method (FEM) analysis on a coarse mesh produces the initial solution. The result is then post-processed to identify the regions that should be refined, and those regions are refined once. The new mesh is used to compute the solution of the first refinement. After several iterations of this procedure, a result closer to the true solution is obtained, but the process is time-consuming, making the adaptive scheme impractical for engineering applications. The paper aims to address these issues.
Design/methodology/approach
Based on FEM, this paper proposes a multi-level refinement strategy consisting of a refinement indicator and a refinement scheme. The proposed indicator uses the maximum difference of strain energy density among the elements associated with a node, and divides all nodes into several categories according to this value. Depending on the category a node belongs to, the adjacent elements are refined a different number of times, rather than by a binary refine-or-not decision.
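The node-classification indicator described above can be sketched on a toy mesh. In the example below, the mesh connectivity, strain energy density values, thresholds, and number of categories are all illustrative assumptions of ours, not the paper's values; only the mechanism matches the description: compute per-node spread of strain energy density, then bin nodes into refinement levels.

```python
import numpy as np

# Sketch of the node-classification refinement indicator (thresholds and
# binning are assumptions, not the paper's values).
# Hypothetical 1D-like mesh: each node lists its attached elements, and
# each element carries a strain energy density from a coarse FEM solve.
node_elems = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2, 3], 3: [2, 3]}
sed = np.array([1.0, 1.2, 4.0, 4.1])   # strain energy density per element

# Indicator: maximum difference of SED among the elements at each node.
indicator = {n: max(sed[e] for e in es) - min(sed[e] for e in es)
             for n, es in node_elems.items()}

# Bin nodes into refinement levels relative to the largest indicator, so
# different elements get refined a different number of times in one pass.
i_max = max(indicator.values())
levels = {n: (2 if v > 0.5 * i_max else 1 if v > 0.1 * i_max else 0)
          for n, v in indicator.items()}
print("indicators:", indicator)
print("refinement levels:", levels)
```

Assigning multiple refinement levels in a single pass is what lets the overall adaptive loop converge in two iterations instead of many refine-solve cycles.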
Findings
The multi-level refinement strategy makes full use of the numerical calculations already performed: the whole adaptive analysis needs only two iterations, while other schemes must iterate more times. It requires far fewer numerical calculations yet approaches a more accurate solution, making adaptive analysis more practical for engineering.
Originality/value
By taking full advantage of the available numerical calculations, the multi-level refinement strategy reduces the whole adaptive analysis to only two iterations, whereas other schemes must iterate more times; it thus reaches a more accurate solution with far fewer calculations, making adaptive analysis more practical for engineering.