Search results
1 – 10 of over 3000

Nathan P. Podsakoff, Wei Shen and Philip M. Podsakoff
Abstract
Since the publication of Venkatraman and Grant's (1986) article two decades ago, considerably more attention has been directed at establishing the validity of constructs in the strategy literature. However, recent developments in measurement theory indicate that strategy researchers need to pay additional attention to whether their constructs should be modeled as having formative or reflective indicators. Therefore, the purpose of this chapter is to highlight the differences between formative and reflective indicator measurement models, and discuss the potential role of formative measurement models in strategy research. First, we systematically review the literature on construct measurement model specification. Second, we assess the extent of measurement model misspecification in the recent strategy literature. Our assessment of 257 constructs in the contemporary strategy literature suggests that many important strategy constructs are more appropriately modeled as having formative indicators than as having reflective indicators. Based on this review, we identify some common errors leading to measurement model misspecification in the strategy domain. Finally, we discuss some implications of our analyses for scholars in the strategic management field.
Thi Thu Ha Nguyen, Salma Ibrahim and George Giannopoulos
Abstract
Purpose
The use of models for detecting earnings management, through both accrual and real manipulation, is commonplace in the academic literature. The purpose of the current study is to compare the power of these models in a United Kingdom (UK) sample of 19,424 firm-year observations over the period 1991–2018.
Design/methodology/approach
The authors add artificially induced manipulation of revenues and expenses of between zero and ten percent of total assets to random samples of 500 firm-year observations within the full sample. Two alternative samples are used: one with no reversal of the manipulation (sample 1) and one with reversal in the following year (sample 2).
Findings
The authors find that real earnings manipulation models have lower power than accrual earnings manipulation models when manipulating discretionary expenses and revenues. Furthermore, the real earnings manipulation model for detecting overproduction is highly misspecified, which artificially inflates its power. The authors examine an alternative model for detecting discretionary expense manipulation that generates higher power than the Roychowdhury (2006) model. Modified real manipulation models (Srivastava, 2019) are used as a robustness check; the authors find these to be more misspecified in some cases and less so in others. The authors extend the analysis to a setting in which earnings management is known to occur, i.e. around benchmark-beating, and find consistent evidence of accrual and some forms of real manipulation in this sample using all models examined.
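The accrual-based detection models benchmarked in studies like this are typically variants of the Jones (1991) family, which back out discretionary accruals as regression residuals. As a hedged illustration (the exact specification, deflators and sample construction used in the paper may differ), a minimal cross-sectional modified-Jones sketch:

```python
import numpy as np

def discretionary_accruals(total_accruals, lagged_assets, d_rev, d_rec, ppe):
    """Estimate discretionary accruals as residuals of a modified Jones (1991)
    cross-sectional regression. Inputs are 1-D arrays for one industry-year.
    Illustrative sketch following the textbook formulation, not necessarily
    the specification used in the paper above."""
    y = total_accruals / lagged_assets
    X = np.column_stack([
        1.0 / lagged_assets,              # scaled intercept term
        (d_rev - d_rec) / lagged_assets,  # change in cash sales
        ppe / lagged_assets,              # gross property, plant and equipment
    ])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coef                   # residuals = discretionary accruals
```

Discretionary accruals far from zero flag potential manipulation; power comparisons of the kind the study performs rest on how often such residuals detect artificially induced manipulation.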
Research limitations/implications
This study contributes to the literature by providing evidence of the misspecification of models currently used to detect real earnings manipulation.
Practical implications
Based on the findings, the authors recommend caution in interpreting any findings when using these models in future research.
Originality/value
The findings contribute to the earnings management literature, guided by agency theory.
Nii Ayi Armah and Norman R. Swanson
Abstract
In this chapter we discuss model selection and predictive accuracy tests in the context of parameter and model uncertainty under recursive and rolling estimation schemes. We begin by summarizing some recent theoretical findings, with particular emphasis on the construction of valid bootstrap procedures for calculating the impact of parameter estimation error. We then discuss the Corradi and Swanson (2002) (CS) test of (non)linear out-of-sample Granger causality. Thereafter, we carry out a series of Monte Carlo experiments examining the properties of the CS and a variety of other related predictive accuracy and model selection type tests. Finally, we present the results of an empirical investigation of the marginal predictive content of money for income, in the spirit of Stock and Watson (1989), Swanson (1998) and Amato and Swanson (2001).
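The recursive and rolling schemes the chapter studies differ only in whether the estimation window expands or slides. A minimal sketch of out-of-sample forecast comparison in that spirit — a restricted AR(1) benchmark against a model adding one lagged predictor — is shown below; the actual CS test statistic and its bootstrap-based inference are considerably more involved:

```python
import numpy as np

def oos_mse(y, x, scheme="rolling", window=60):
    """One-step-ahead out-of-sample MSEs of a restricted AR(1) model for y
    versus an unrestricted model adding lagged x, under a recursive
    (expanding) or rolling (fixed-width) estimation scheme.
    Illustrative sketch; formal predictive accuracy tests also account
    for parameter estimation error, e.g. via the bootstrap."""
    errs_r, errs_u = [], []
    for t in range(window, len(y)):
        start = 0 if scheme == "recursive" else t - window
        yy, xx = y[start:t], x[start:t]
        Xr = np.column_stack([np.ones(len(yy) - 1), yy[:-1]])  # AR(1) regressors
        Xu = np.column_stack([Xr, xx[:-1]])                    # add lagged x
        br, *_ = np.linalg.lstsq(Xr, yy[1:], rcond=None)
        bu, *_ = np.linalg.lstsq(Xu, yy[1:], rcond=None)
        errs_r.append(y[t] - np.array([1.0, yy[-1]]) @ br)
        errs_u.append(y[t] - np.array([1.0, yy[-1], xx[-1]]) @ bu)
    return float(np.mean(np.square(errs_r))), float(np.mean(np.square(errs_u)))
```

If x has out-of-sample predictive content for y (as money may for income), the unrestricted MSE should fall below the restricted one; a formal test rather than a raw comparison is needed for inference.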
Alistair Brandon-Jones and Desiree Knoppen
Abstract
Purpose
The purpose of this paper is to report on research into the impact of two sequential dimensions of strategic purchasing – purchasing recognition and purchasing involvement – on the development and deployment of dynamic capabilities. The authors also examine how such dynamic capabilities impact on both cost and innovation performance, and how their effects differ for service as opposed to manufacturing firms.
Design/methodology/approach
The authors test hypotheses using structural equation modeling of survey data from 309 manufacturing and service firms.
Findings
From a dynamic capability perspective, the analysis supports the positive relationships between purchasing recognition, purchasing involvement, and dynamic capability in the form of knowledge scanning. The authors also find support for the positive impact of knowledge scanning on both cost and innovation performance. From a contingency perspective, the data support hypothesized differences by industry, whereby service-based firms experience stronger positive linkages in the model than manufacturing-based firms. Finally, emerging from the data, the authors explore a re-enforcing effect from cost performance to purchasing involvement, something that is in line with the dynamic capabilities perspective but not typically addressed in operations management (OM) research.
Originality/value
The research offers a number of theoretical and managerial contributions, including being one of a relative few examples of empirical assessment of dynamic capability development and deployment; examining the enablers of dynamic capability in addition to the more commonly addressed performance effect; assessing the contingency effect of firm type for dynamic capabilities; and uncovering a return (re-enforcing) effect between performance and enablers of dynamic capabilities.
Florian Schuberth, Manuel Elias Rademaker and Jörg Henseler
Abstract
Purpose
The purpose of this study is threefold: (1) to propose partial least squares path modeling (PLS-PM) as a way to estimate models containing composites of composites and to compare the performance of the PLS-PM approaches in this context, (2) to provide and evaluate two testing procedures to assess the overall fit of such models and (3) to introduce user-friendly step-by-step guidelines.
Design/methodology/approach
A simulation is conducted to examine the PLS-PM approaches and the performance of the two proposed testing procedures.
Findings
The simulation results show that the two-stage approach, its combination with the repeated indicators approach, and the extended repeated indicators approach perform similarly; however, only the two-stage approach is Fisher consistent. Moreover, the simulation shows that guidelines neglecting model fit assessment miss an important opportunity to detect misspecified models. Finally, the results show that both testing procedures based on the two-stage approach allow the fit of such models to be assessed.
Practical implications
Analysts who estimate and assess models containing composites of composites should use the authors’ guidelines, since the majority of existing guidelines neglect model fit assessment and thus omit a crucial step of structural equation modeling.
Originality/value
This study contributes to the understanding of the discussed approaches. Moreover, it highlights the importance of overall model fit assessment and provides insights about testing the fit of models containing composites of composites. Based on these findings, step-by-step guidelines are introduced to estimate and assess models containing composites of composites.
Abstract
This paper gives a selective review on some recent developments of nonparametric methods in both continuous and discrete time finance, particularly in the areas of nonparametric estimation and testing of diffusion processes, nonparametric testing of parametric diffusion models, nonparametric pricing of derivatives, nonparametric estimation and hypothesis testing for nonlinear pricing kernel, and nonparametric predictability of asset returns. For each financial context, the paper discusses the suitable statistical concepts, models, and modeling procedures, as well as some of their applications to financial data. Their relative strengths and weaknesses are discussed. Much theoretical and empirical research is needed in this area, and more importantly, the paper points to several aspects that deserve further investigation.
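As a concrete instance of the basic nonparametric machinery such reviews survey, the Nadaraya–Watson kernel regression estimator fits in a few lines. This is a sketch with a Gaussian kernel and a user-supplied bandwidth; nonparametric drift and diffusion estimation for continuous-time models builds on the same local-weighting idea:

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, h):
    """Nadaraya-Watson kernel regression estimate of E[y | x] at each point
    of x_grid, using a Gaussian kernel with bandwidth h. Sketch only; in
    practice h is chosen by, e.g., cross-validation."""
    u = (x_grid[:, None] - x[None, :]) / h   # pairwise scaled distances
    w = np.exp(-0.5 * u ** 2)                # Gaussian kernel weights
    return (w @ y) / w.sum(axis=1)           # locally weighted average of y
```

Away from the boundary and with a small bandwidth, the estimate tracks the true regression function without any parametric assumption on its form.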
Abstract
Purpose
The purpose of this paper is to describe common questionable research practices (QRPs) engaged in by management researchers who use confirmatory factor analysis (CFA) as part of their analysis.
Design/methodology/approach
The authors describe seven questionable analytic practices and then review one year of journal articles published in three top-tier management journals to estimate the base rate of these practices.
Findings
The authors find that CFA analyses are characterized by a high base rate of QRPs, with one practice occurring in over 90 percent of all assessed articles.
Research limitations/implications
The findings of this paper call into question the validity and trustworthiness of results reported in much of the management literature.
Practical implications
The authors provide tentative guidelines of how editors and reviewers might reduce the degree to which the management literature is characterized by these QRPs.
Originality/value
This is the first paper to estimate the base rate of six QRPs relating to the widely used analytic tool referred to as CFA in the management literature.
Abstract
The adoption of a model‐building approach to marketing is today inevitable, due to improvements in hardware and software and the increased professionalisation of marketing and its techniques. Aggregate response models are focused upon, particularly the issues of which responses are realistic and should be modelled, how the response can be expressed and how a choice can be made between options available. The traditional model‐building process is described, and the inclusion of correct variables found to be critical, the primary means of doing this being statistical analysis. Simple expressions perform as effectively as more complex ones, and should be used if able to give operationally meaningful results. Cross‐correlation analysis and biased estimation techniques provide good guides to usable variables and their effects.
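A "simple expression" in the paper's sense might be a concave logarithmic response fitted by ordinary least squares. The sales-versus-advertising form below is an assumed illustration, not a model taken from the paper:

```python
import numpy as np

def fit_log_response(adspend, sales):
    """Fit a simple concave aggregate response model,
    sales = a + b * ln(adspend), by ordinary least squares.
    Illustrative sketch of the kind of 'simple expression' that can
    perform as well as more complex functional forms."""
    X = np.column_stack([np.ones_like(adspend), np.log(adspend)])
    (a, b), *_ = np.linalg.lstsq(X, sales, rcond=None)
    return a, b
```

The concavity of the log captures diminishing returns to spend while keeping the coefficients operationally interpretable.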
Mohamed F. Omran and Florin Avram
Abstract
This paper relaxes the assumption of conditional normal innovations used by Fornari and Mele (1997) in modelling the asymmetric reaction of the conditional volatility to the arrival of news. We compare the performance of the Sign and Volatility Switching ARCH model of Fornari and Mele (1997) and the GJR model of Glosten et al. (1993) under the assumption that the innovations follow the Generalized Student’s t distribution. Moreover, we hedge against the possibility of misspecification by basing the inferences on the robust variance-covariance matrix suggested by White (1982). The results suggest that using more flexible distributional assumptions on the financial data can have a significant impact on the inferences drawn.
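The asymmetry both models capture is that negative return shocks raise next-period conditional volatility more than positive shocks of equal size. A minimal sketch of the GJR variance recursion follows; the parameter values are assumptions for illustration, and the paper's estimation additionally uses Generalized Student's t innovations and White's (1982) robust covariance matrix:

```python
import numpy as np

def gjr_variance(eps, omega, alpha, gamma, beta, sigma2_0):
    """Conditional variance path of the GJR model of Glosten et al. (1993):
    sigma2_t = omega + (alpha + gamma * 1[eps_{t-1} < 0]) * eps_{t-1}^2
               + beta * sigma2_{t-1}.
    Parameters here are illustrative, not estimates from the paper."""
    sigma2 = np.empty(len(eps) + 1)
    sigma2[0] = sigma2_0
    for t in range(len(eps)):
        leverage = alpha + (gamma if eps[t] < 0 else 0.0)  # asymmetry term
        sigma2[t + 1] = omega + leverage * eps[t] ** 2 + beta * sigma2[t]
    return sigma2
```

With gamma > 0, a negative shock produces a strictly larger next-period variance than a positive shock of the same magnitude, which is the "asymmetric reaction to news" the abstract refers to.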