Search results
1 – 10 of over 4,000 results

Peter J. Hosie and Roger C. Smith
Abstract
Purpose
The purpose of this paper is to raise and critically analyse controversial issues facing the future directions of the academic discipline organisational behaviour (OB).
Design/methodology/approach
Specifically, the commercial benefits for basic and applied OB research conducted by academics are considered. Arguments are advanced which cast doubt on the discipline's current directions.
Findings
Proponents of traditional research in this field are accused of methodological myopia, inaccessibility, lack of relevance to practitioners and an inability to integrate research with successful practice. Such shortcomings have the potential to render OB theories, research and recommended practices irrelevant in many commercial environments.
Practical implications
Better integration of popularist management practices and ideas with traditional research techniques is recommended to produce more business-focussed outcomes. New modes of investigation are proposed that adopt dynamic research methodologies based on “coarse grained theorising” using the “3p” test of performance, productivity and profitability. In this context, coarse grained theorising must be capable of verification in the field with tangible commercial benefits.
Originality/value
Narrowing the theory‐practice gulf requires a more concerted effort to embrace practitioner-generated ideas and develop them into theories closely related to organisational concerns rather than purely academic predilections. In this situation, only the most robust of existing theories, those with utility for organisations, would survive and continue to be promulgated. A future scenario for OB is envisaged in which hybridised theorising and research are developed and communicated to a wider practitioner audience.
Abstract
Purpose
This paper aims to introduce an original application of the corrected response surface method (CRSM) in the context of the optimal design of a permanent magnet synchronous machine used as an integrated starter generator. This method makes it possible to carry out this design in a very efficient manner, in comparison with conventional optimization approaches.
Design/methodology/approach
The search for optimal conditions is achieved by the joint use of two multi-physics models of the machine to be optimized. The first model describes the physical behaviour of the machine most finely; it is called the “fine model”. The second model describes the same physical phenomena as the fine model but must be much quicker to evaluate; to minimize its evaluation time, it is considerably simplified, and it is called the “coarse model”. The lightness of the coarse model allows it to be used intensively by conventional optimization algorithms, while the fine reference model makes it possible to recalibrate the results obtained from the coarse model at any point, mainly at the end of each classical optimization. Because the two models are defined differently, they do not return the same output values for the same input configuration. The approach described in this study therefore corrects the coarse-model outputs by constructing an adjustment (correction) response surface, which gives the method its name. The entire optimization load can then be carried by the coarse model adjusted by this correction response surface.
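The correction-response-surface idea can be sketched in a few lines. The two models below are hypothetical 1-D stand-ins, not the machine models of the paper: the "fine" model is assumed expensive and accurate, the "coarse" model fast but systematically biased.

```python
import numpy as np

# Hypothetical 1-D stand-ins: the "fine" model plays the expensive,
# accurate simulation; the "coarse" model is cheap but shifted.
def fine(x):
    return (x - 2.0) ** 2 + 0.3 * np.sin(3 * x)

def coarse(x):
    return (x - 1.6) ** 2

# Evaluate the fine model at only a few points and fit a quadratic
# adjustment (correction) response surface to the coarse-model error.
pts = np.linspace(0.0, 4.0, 5)
corr = np.polyfit(pts, fine(pts) - coarse(pts), 2)

def corrected_coarse(x):
    return coarse(x) + np.polyval(corr, x)

# The whole optimization load is carried by the corrected coarse model;
# only the 5 fine-model evaluations above were needed.
grid = np.linspace(0.0, 4.0, 4001)
x_opt = grid[np.argmin(corrected_coarse(grid))]
```

The corrected coarse model recovers a near-optimal design while the fine model is called only to build (and, in the full method, to recalibrate) the correction surface.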
Findings
The application of this method shows satisfactory results, in particular in comparison with those obtained with a traditional optimization approach based on a single (fine) model. The CRSM approach thus converges much more quickly toward the optimal configurations. In addition, using response surfaces for optimization retains the modeling data, making it possible to reuse them, if necessary, in subsequent optimal design studies. Numerous tests show that the approach is relatively robust to variations in many important operating parameters.
Originality/value
The CRSM technique is an indirect multi-model optimization method. This paper presents the application of this relatively undeveloped optimization approach, which combines the features and benefits of indirect efficient global optimization techniques and multi-model space mapping methods.
D. Lahaye, A. Canova, G. Gruosso and M. Repetto
Abstract
Purpose
This work aims to present a multilevel optimization strategy based on manifold‐mapping combined with multiquadric interpolation for the coarse model construction.
Design/methodology/approach
In the proposed approach the coarse model is obtained by interpolating the fine model with multiquadrics at a small number of points. As the algorithm iterates, the response surface model is improved by enriching the set of interpolation points.
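A minimal sketch of the coarse-model construction, assuming a hypothetical 1-D "fine" model in place of the finite element simulation: multiquadric radial basis functions interpolate the fine model at a few centers, and the point set is enriched with each coarse-model minimizer.

```python
import numpy as np

# Hypothetical 1-D "fine" model standing in for the FE simulation.
def fine(x):
    return (x - 3.0) ** 2 + np.sin(2 * x)

def multiquadric_model(centers, shape=1.0):
    """Coarse model: multiquadric interpolation of the fine model at `centers`."""
    c = np.asarray(centers, float)
    phi = np.sqrt((c[:, None] - c[None, :]) ** 2 + shape ** 2)
    w = np.linalg.solve(phi, fine(c))
    def coarse(x):
        x = np.atleast_1d(np.asarray(x, float))
        return np.sqrt((x[:, None] - c[None, :]) ** 2 + shape ** 2) @ w
    return coarse

# Start from a small point set and enrich it with each coarse-model
# minimizer, as in the iterations described above.
centers = [0.0, 2.0, 4.0, 6.0]
grid = np.linspace(0.0, 6.0, 601)
for _ in range(3):
    coarse = multiquadric_model(centers)
    x_new = float(grid[np.argmin(coarse(grid))])
    if min(abs(x_new - c) for c in centers) > 0.05:   # avoid near-duplicates
        centers.append(x_new)

coarse = multiquadric_model(centers)
```

Each fine-model call adds one interpolation point, so the surrogate sharpens exactly where the optimizer explores; the trust-region stabilization of the full algorithm is omitted here.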
Findings
This approach makes it possible to solve TEAM Workshop Problem 25 accurately using as few as 33 finite element simulations. Furthermore, it allows a robust sizing optimization of a cylindrical voice‐coil actuator with seven design variables.
Research limitations/implications
Further analysis is required to gain a better understanding of the role that the initial coarse-model accuracy plays in the convergence of the algorithm. The proposed model allows such an analysis to be carried out by varying the number of points included in the initial response surface model. The effect of the trust‐region stabilization in the presence of manifolds of equivalent solutions is also a topic for further investigation.
Originality/value
Unlike the closely related space‐mapping algorithm, the manifold‐mapping algorithm is guaranteed to converge to a fine model optimal solution. By combining it with multiquadric response surface models, its applicability is extended to problems for which other kinds of coarse model, such as lumped-parameter approximations, are tedious or impossible to construct.
Ahmed Abou-Elyazied Abdallh and Luc Dupré
Abstract
Purpose
Magnetic material properties of an electromagnetic device (EMD) can be recovered by solving a coupled experimental–numerical inverse problem. To ensure the highest possible accuracy of the inverse problem solution, all the physics of the EMD needs to be modeled by a complex numerical model. However, such fine models demand a high computational time. Alternatively, less accurate coarse models can be used, at the cost of high expected recovery errors. The purpose of this paper is to present an efficient methodology for reducing the effect of stochastic modeling errors on the inverse problem solution.
Design/methodology/approach
The recovery error in the electromagnetic inverse problem solution is reduced using the Bayesian approximation error approach coupled with an adaptive Kriging-based model. The accuracy of the forward model is assessed and adapted a priori using the cross-validation technique.
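The adaptive Kriging part can be illustrated in miniature. The 1-D forward model below is hypothetical, standing in for the EMD simulation, and the Bayesian approximation error step is not reproduced; the sketch only shows a Kriging surrogate whose accuracy is assessed a priori by leave-one-out cross-validation and refined near the worst-predicted sample.

```python
import numpy as np

# Hypothetical 1-D forward model standing in for the EMD simulation.
def forward(x):
    return np.sin(2 * x) + 0.5 * x

def kriging(xs, ys, length=0.8, nugget=1e-12):
    """Simple interpolating Kriging (Gaussian-kernel) surrogate."""
    xs = np.asarray(xs, float)
    K = np.exp(-((xs[:, None] - xs[None, :]) / length) ** 2)
    alpha = np.linalg.solve(K + nugget * np.eye(len(xs)), np.asarray(ys, float))
    return lambda x: np.exp(
        -((np.atleast_1d(np.asarray(x, float))[:, None] - xs[None, :]) / length) ** 2
    ) @ alpha

def loo_cv_errors(xs, ys):
    """Leave-one-out cross-validation errors: accuracy assessed a priori."""
    errs = []
    for i in range(len(xs)):
        model = kriging(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        errs.append(abs(model(xs[i])[0] - ys[i]))
    return errs

xs = list(np.linspace(0.0, 3.0, 5))
ys = [float(forward(x)) for x in xs]

# Adapt the surrogate: sample near the worst cross-validated point
# (the +0.3 offset is a hypothetical refinement rule for this sketch).
worst = int(np.argmax(loo_cv_errors(xs, ys)))
x_new = xs[worst] + 0.3
xs.append(x_new)
ys.append(float(forward(x_new)))
model = kriging(xs, ys)
```

In the paper's methodology such a cross-validated surrogate replaces the fine forward model inside the inverse problem loop.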
Findings
The adaptive Kriging-based model appears to be an efficient technique for modeling EMDs used in inverse problems. Moreover, using the proposed methodology, the recovery error in the electromagnetic inverse problem solution is greatly reduced with relatively little computational time and memory storage.
Originality/value
The proposed methodology not only improves the accuracy of the inverse problem solution but also reduces the computational time and the memory storage. Furthermore, to the best of the authors' knowledge, this is the first time the adaptive Kriging-based model has been combined with the Bayesian approximation error approach for stochastic modeling error reduction.
Raja Rajeshwari B. and Sivakumar M.V.N.
Abstract
Purpose
Fracture properties depend on the type of material, the method of testing and the type of specimen. The purpose of this paper is to evaluate fracture properties by adopting a stable test method, the wedge splitting test.
Design/methodology/approach
Coarse aggregate of three different sizes (20 mm, 16 mm and 12.5 mm), three coarse-to-fine aggregate (CA:FA) ratios (50:50, 45:55, 40:60), the presence of steel fibers, and specimens with and without a guide notch were chosen as the parameters of the study.
Findings
Load-crack mouth opening displacement curves indicate that, for both fibrous and non-fibrous mixes, a higher volume and a larger size of coarse aggregate yield higher fracture energy.
Originality/value
For all volumes of coarse aggregate, specimens with 12.5 mm aggregate achieved the highest peak load and an abrupt post-peak drop. A decrease in the coarseness of the internal structure of the concrete (λ) resulted in an increase in fracture energy.
Marco Gallegati and James B. Ramsey
Abstract
In this chapter we perform a Monte Carlo simulation study of the errors-in-variables model examined in Ramsey, Gallegati, Gallegati, and Semmler (2010) using a wavelet multiresolution approximation approach. Unlike previous studies applying wavelets to the errors-in-variables problem, we use a sequence of multiresolution approximations of the variable measured with error, ranging from finer to coarser scales. Our results indicate that multiscale approximations based on the coarser scales provide an unbiased, asymptotically efficient estimator that also possesses good finite sample properties.
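The mechanism can be illustrated with a Haar multiresolution approximation (block means of 2^level samples are exactly the Haar scaling coefficients up to normalization). The data-generating process below is a hypothetical illustration, not the chapter's design: a smooth regressor observed with error produces the classical attenuation bias in OLS, which largely disappears at a coarse scale because averaging shrinks the measurement noise while preserving the smooth signal.

```python
import numpy as np

def haar_approximation(v, level):
    """Haar multiresolution approximation: replace each block of 2**level
    samples by its mean (details at finer scales are discarded)."""
    v = np.asarray(v, float)
    return v.reshape(-1, 2 ** level).mean(axis=1)

rng = np.random.default_rng(0)
n, beta = 1024, 1.0

x_true = 3.0 * np.sin(np.linspace(0, 4 * np.pi, n))   # smooth true regressor
x_obs = x_true + rng.normal(0, 2.0, n)                # measured with error
y = beta * x_true + rng.normal(0, 0.1, n)

def ols_slope(x, y):
    return float(np.cov(x, y)[0, 1] / np.var(x, ddof=1))

# Attenuation bias: regressing on the noisy regressor shrinks the slope.
slope_raw = ols_slope(x_obs, y)

# At a coarse scale the noise variance is averaged down by 2**level while
# the smooth signal survives, so the slope estimate moves back toward beta.
slope_coarse = ols_slope(haar_approximation(x_obs, 4), haar_approximation(y, 4))
```

The coarse-scale slope is markedly closer to the true beta than the raw estimate, which is the effect the chapter's Monte Carlo study quantifies with general wavelet families.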
Uchechi G. Eziefula, Hyginus E. Opara and Bennett I. Eziefula
Abstract
Purpose
This paper aims to investigate the 28-day compressive strength of concrete produced with aggregates from different sources.
Design/methodology/approach
The coarse aggregates were crushed granite and natural local stones mined from Umunneochi, Lokpa and Uturu, Isuakwato, respectively, in Abia State, Nigeria. Fine aggregate (river sand) and another coarse aggregate (river stone) were dredged from the Otammiri River in Owerri, Imo State, Nigeria. The nominal mix ratios were 1:1:2, 1:2:4 and 1:3:6, and the water–cement ratios were 0.45, 0.5, 0.55 and 0.6.
Findings
The compressive strength of the granite, river stone and local stone concretes ranged from 17.79 to 38.13, 15.37 to 34.57 and 14.17 to 31.96 N/mm², respectively. Compressive strength increased with decreasing water–cement ratio and increasing cement content.
Practical implications
Granite concrete should be used in reinforced-concrete construction, especially when a cube compressive strength of 30 N/mm2 or higher is required.
Originality/value
Granite concrete exceeded the target compressive strength for all the concrete specimens, whereas river stone concrete and local stone concrete failed to achieve the target strength for some mix proportions and water–cement ratios.
Piotr Putek, Guillaume Crevecoeur, Marian Slodička, Roger van Keer, Ben Van de Wiele and Luc Dupré
Abstract
Purpose
The purpose of this paper is to solve an inverse problem of structure recognition arising in eddy current testing (ECT) – type NDT. For this purpose, the space mapping (SM) technique with an extraction based on the Gauss‐Newton algorithm with Tikhonov regularization is applied.
Design/methodology/approach
The aim is a computationally fast recognition procedure for defects, since monitoring produces a large number of data points that need to be analyzed by a 3D eddy current model. In the SM optimization, the finite element method (FEM) is used as the fine model, while a model based on an integral method, the volume integral method (VIM), serves as the coarse model. This approach, an example of a two‐level optimization method, shifts the optimization load from a time-consuming, accurate model to a less precise but faster coarse surrogate.
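The two-level idea can be sketched with hypothetical 1-D responses in place of the FEM and VIM models, and a secant update in place of the paper's regularized Gauss-Newton extraction step: aggressive space mapping drives the parameter-extraction map toward the coarse-model optimum.

```python
import numpy as np

# Hypothetical 1-D responses: `fine` plays the role of the FEM model,
# `coarse` that of the volume integral method (VIM).
def fine(x):
    return (x - 2.2) ** 2

def coarse(x):
    return (x - 2.0) ** 2

grid = np.linspace(0.0, 4.0, 4001)
z_star = grid[np.argmin(coarse(grid))]        # coarse-model optimum

def extract(x):
    """Parameter extraction: coarse input whose response matches fine(x).
    The search is one-sided here only to pin down a single branch."""
    search = grid[grid >= z_star]
    return search[np.argmin(np.abs(coarse(search) - fine(x)))]

# Aggressive space mapping: solve extract(x) = z_star with a secant update
# (a stand-in for the regularized Gauss-Newton step of the paper).
x0, x1 = z_star, z_star + 0.1
for _ in range(10):
    g0, g1 = extract(x0) - z_star, extract(x1) - z_star
    if abs(g1 - g0) < 1e-12:
        break
    x0, x1 = x1, x1 - g1 * (x1 - x0) / (g1 - g0)
```

All iterations evaluate the cheap coarse model over the grid; the expensive fine model is called only once per iterate, which is where the reported CPU-time savings come from.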
Findings
The application of this method shortens the evaluation time required to provide proper parameter estimation of surface defects.
Research limitations/implications
Only specific kinds of surface defects were considered in this work. The reconstruction of arbitrarily shaped defects using real measurement data from an ECT system can therefore be treated in further research.
Originality/value
The paper investigates the eddy current inverse problem. The aggressive space mapping method requires a suitable coarse model; here, for the purpose of 3D defect reconstruction, a reduced VIM approach was applied. From a practical viewpoint, the authors demonstrate that the two‐level inversion procedure saves up to 50 percent of CPU time in comparison with optimization by means of a regularized Gauss‐Newton algorithm on the same FE model.
Andrew Thelen, Leifur Leifsson, Anupam Sharma and Slawomir Koziel
Abstract
Purpose
Dual-rotor wind turbines (DRWTs) are a novel type of wind turbine that can capture more power than their single-rotor counterparts. Because their surrounding flow fields are complex, evaluating a DRWT design requires accurate predictive simulations, which incur high computational costs. Currently, no design optimization framework exists for DRWTs. Since the design optimization of DRWTs requires numerous model evaluations, the purpose of this paper is to identify computationally efficient design approaches.
Design/methodology/approach
Several algorithms are compared for the design optimization of DRWTs. The algorithms vary widely in approach and include a direct derivative-free method as well as three surrogate-based optimization methods: two approximation-based approaches and one variable-fidelity approach with coarse-discretization low-fidelity models.
Findings
The proposed variable-fidelity method required significantly lower computational cost than the derivative-free and approximation-based methods. Large computational savings come from using the time-consuming high-fidelity simulations sparingly and performing the majority of the design space search using the fast variable-fidelity models.
Originality/value
Due to the complex simulations and the large number of designable parameters, the design of DRWTs requires the use of numerical optimization algorithms. This work presents a novel and efficient design optimization framework for DRWTs using computationally intensive simulations and variable-fidelity optimization techniques.
Tanuja Gupta and M. Chakradhara Rao
Abstract
Purpose
This study aims to practically determine the optimum proportion of aggregates to attain the desired strength of geopolymer concrete (GPC) and then compare the results using established analytical particle packing methods. The investigation further aims to assess the influence of various amounts of recycled aggregate (RA) on properties of low-calcium fly ash-based GPC of grade M25.
Design/methodology/approach
Fine and coarse aggregates were blended in various proportions, and the proportion yielding the maximum packing density was selected as the optimum; the results were compared with analytical models, namely the Modified Toufar Model (MTM) and the J. D. Dewar Model. The RAs for this study were produced in the laboratory and used in amounts of 0%, 50% and 100%. A 12 M NaOH solution was mixed with Na2SiO3 in the ratio 1:2. The concrete was cured at 60 °C and 90 °C for 24, 48 and 72 h.
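The experimental selection rule is simply "pick the blend with the maximum packing density". The packing-density figures below are hypothetical illustrative values, chosen only so that the maximum falls at the 60:40 coarse-to-fine proportion reported in the findings; they are not the study's measurements.

```python
# Hypothetical bulk packing-density measurements (kg/m^3) for
# coarse:fine aggregate blends -- illustrative values only.
measurements = {
    (70, 30): 1850,
    (65, 35): 1885,
    (60, 40): 1910,
    (55, 45): 1895,
    (50, 50): 1870,
}

# The optimum proportion is the blend with the maximum packing density.
optimum = max(measurements, key=measurements.get)
print(optimum)  # -> (60, 40)
```

The analytical particle packing models (MTM, Dewar) predict such an optimum from component properties instead of measuring each blend.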
Findings
The experimentally obtained optimum proportion of coarse to fine aggregate was 60:40 for all amounts of RA. The MTM yielded coarse-to-fine aggregate ratios of 40:60, 45:55 and 55:45, and the Dewar Model 55:45, 35:65 and 60:40, respectively, for 0%, 100% and 50% RA. The compressive strength of the GPC increased with the curing regime, and the ultrasonic pulse velocity displayed a similar trend.
Originality/value
GPC with 50% RA may be considered for use, as it exhibited superior properties compared with GPC with 100% RA and was comparable to GPC with natural aggregates. Furthermore, compressive strength is correlated with split tensile strength and ultrasonic pulse velocity.