Currently, most of the empirical management, marketing, and psychology articles in the leading journals in these disciplines are examples of bad science practice. Bad science practice includes mismatching case-focused (actor-focused) theory and variable-based data analysis with null hypothesis significance tests (NHST) of directional predictions (i.e., symmetric models proposing that increases in each of several independent X's associate with increases in a dependent Y). Good science practice includes matching case-focused theory with case-focused data analytic tools and using somewhat precise outcome tests (SPOT) of asymmetric models. Good science practice achieves the requisite variety necessary for deep explanation, description, and accurate prediction. Based on a thorough review of the relevant literature, Hubbard (2016) concludes that reporting NHST results (e.g., that observed standardized partial regression betas for X's differ from zero or that two means differ) is an example of corrupt research. Hubbard (2017) expresses disappointment over the tepid response to his book. The pervasive teaching and use of NHST is one ingredient explaining the indifference: "I can't change just because it's [NHST] wrong." The fear of submission rejection is another reason for rejecting asymmetric modeling and SPOT. Reporting findings from both bad and good science practices may be necessary until asymmetric modeling and SPOT receive wider acceptance than they hold at present.
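The symmetric-versus-asymmetric distinction above can be made concrete with a small sketch. The contrast below is illustrative only: the data and cut-off values are hypothetical, and the `consistency` helper only approximates the spirit of an asymmetric SPOT-style test (as in fsQCA's consistency measure, which Woodside's case-based approach draws on) with crisp thresholds rather than calibrated set memberships.

```python
def pearson_r(xs, ys):
    # Symmetric NHST-style summary: one coefficient describes the whole
    # X-Y cloud, treating increases and decreases interchangeably.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def consistency(xs, ys, x_cut, y_cut):
    # Asymmetric, case-focused check: among cases high in X, what share
    # are also high in Y? Cases low in X are deliberately ignored --
    # the claim "high X is sufficient for high Y" is one-sided.
    high_x = [(x, y) for x, y in zip(xs, ys) if x >= x_cut]
    if not high_x:
        return None
    return sum(1 for _, y in high_x if y >= y_cut) / len(high_x)

# Hypothetical data with a triangular, asymmetric pattern:
# high X cases all show high Y, but low X cases show mixed Y.
xs = [1, 1, 2, 2, 3, 6, 7, 7, 8, 9]
ys = [1, 8, 2, 7, 3, 7, 8, 9, 8, 9]

r = pearson_r(xs, ys)                      # symmetric summary
c = consistency(xs, ys, x_cut=6, y_cut=7)  # asymmetric, case-focused
print(f"Pearson r = {r:.2f}")
print(f"Consistency of 'high X -> high Y' = {c:.2f}")
```

In data like these, the symmetric coefficient is only moderate (the low-X cases with high Y pull it down), while the asymmetric consistency of "high X leads to high Y" is perfect — the kind of case-level regularity that a directional NHST on the full sample obscures.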
Woodside, A.G. (2018), "Embracing the Paradigm Shift from Variable-Based to Case-Based Modeling", Improving the Marriage of Modeling and Theory for Accurate Forecasts of Outcomes (Advances in Business Marketing and Purchasing, Vol. 25), Emerald Publishing Limited, pp. 1-18. https://doi.org/10.1108/S1069-096420180000025003
Copyright © 2018 Emerald Publishing Limited