Andreas Schwab and William H. Starbuck
Abstract
Null-hypothesis significance tests (NHST) are a very troublesome methodology that dominates the quantitative empirical research in strategy and management. Inherent limitations and inappropriate applications of NHST impede the accumulation of knowledge and fill academic journals with meaningless “findings,” and they corrode researchers' motivation and ethics. Inherent limitations of NHST include the use of point null hypotheses, meaningless null hypotheses, and dichotomous truth criteria. Misunderstanding of NHST has often led to applications to inappropriate data and misinterpretation of results.
Researchers should move beyond the ritualistic and often inappropriate use of NHST. The chapter does not advocate a best way to do research, but suggests that researchers need to adapt their methods to reflect specific contexts and to use evaluation criteria that are meaningful for those contexts. Researchers need to explain the rationales that guided the selection of evaluation measures and they should avoid excessively complex models with many variables. The chapter also offers four more focused recommendations: (1) Compare proposed hypotheses with naïve hypotheses or the outcomes of alternative treatments. (2) Acknowledge the uncertainty that attends research findings by stating confidence limits for parameter estimates. (3) Show the substantive relevance of findings by reporting effect sizes – preferably with confidence limits. (4) Use statistical methods that are robust against deviations from assumptions about population distributions and the representativeness of samples.
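Recommendations (2) and (3) above — stating confidence limits for parameter estimates and reporting effect sizes with confidence limits — can be sketched in code. The example below is a minimal, hypothetical illustration: it computes Cohen's d for two simulated groups and attaches percentile-bootstrap confidence limits. All data and parameter choices are invented for illustration and are not drawn from the chapter.

```python
import numpy as np

rng = np.random.default_rng(42)

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1) +
                  (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

def bootstrap_ci(a, b, stat=cohens_d, n_boot=2_000, alpha=0.05):
    """Percentile-bootstrap confidence limits for an effect-size statistic."""
    boots = [stat(rng.choice(a, len(a)), rng.choice(b, len(b)))
             for _ in range(n_boot)]
    return np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Hypothetical treatment/control scores
treatment = rng.normal(0.5, 1.0, 80)
control = rng.normal(0.0, 1.0, 80)
d = cohens_d(treatment, control)
lo, hi = bootstrap_ci(treatment, control)
print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Reporting the interval rather than a bare p-value conveys both the substantive size of the effect and the uncertainty attending it, which is the chapter's point.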
Abstract
Constructing and evaluating behavioral science models is a complex process. Decisions must be made about which variables to include, which variables are related to each other, the functional forms of the relationships, and so on. The last 10 years have seen a substantial extension of the range of statistical tools available for use in the construction process. The progress in tool development has been accompanied by the publication of handbooks that introduce the methods in general terms (Arminger et al., 1995; Tinsley & Brown, 2000a). Each chapter in these handbooks cites a wide range of books and articles on specific analysis topics.
Abstract
Currently, most of the empirical management, marketing, and psychology articles in the leading journals in these disciplines are examples of bad science practice. Bad science practice includes mismatching case- (actor-) focused theory and variable-data analysis with null hypothesis significance tests (NHST) of directional predictions (i.e., symmetric models proposing that increases in each of several independent X’s associate with increases in a dependent Y). Good science includes matching case-focused theory with case-focused data analytic tools and using somewhat precise outcome tests (SPOT) of asymmetric models. Good science practice achieves the requisite variety necessary for deep explanation, description, and accurate prediction. Based on a thorough review of relevant literature, Hubbard (2016) concludes that reports of NHST results (e.g., that observed standardized partial regression betas for X’s differ from zero or that two means differ from zero) are examples of corrupt research. Hubbard (2017) expresses disappointment over the tepid response to his book. The pervasive teaching and use of NHST is one ingredient explaining the indifference: “I can’t change just because it’s [NHST] wrong.” The fear of submission rejection is another reason for rejecting asymmetric modeling and SPOT. Reporting findings from both bad and good science practices may be necessary until asymmetric modeling and SPOT receive wider acceptance than they hold at present.
Arch G. Woodside, Gábor Nagy and Carol M. Megehee
Abstract
This chapter elaborates on the usefulness of embracing complexity theory, modeling outcomes rather than directionality, and modeling complex rather than simple outcomes in strategic management. Complexity theory includes the tenet that most antecedent conditions are neither sufficient nor necessary for the occurrence of a specific outcome. Identifying a firm by individual antecedents (i.e., noninnovative vs. highly innovative, small vs. large size in sales or number of employees, or serving local vs. international markets) provides shallow information in modeling specific outcomes (e.g., high sales growth or high profitability) – even if directional analyses (e.g., regression analysis, including structural equation modeling) indicate that the independent (main) effects of the individual antecedents relate to outcomes directionally – because firm (case) anomalies to main effects almost always occur. Examples: a number of highly innovative firms have low sales while others have high sales, and a number of noninnovative firms have low sales while others have high sales. Breaking away from the current dominant logic of directionality testing – null hypothesis significance testing (NHST) – to embrace somewhat precise outcome testing (SPOT) is necessary for extracting highly useful information about the causes of anomalies – associations opposite to expected and “statistically significant” main effects. The study of anomalies extends to identifying the occurrences of four-corner strategy outcomes: firms doing well in favorable circumstances, firms doing badly in favorable circumstances, firms doing well in unfavorable circumstances, and firms doing badly in unfavorable circumstances. Models of four-corner strategy outcomes advance strategic management beyond the current dominant logic of directional modeling of single outcomes.
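The contrarian-case idea – firms that run against a “statistically significant” main effect – can be made concrete with a small quintile cross-tabulation. The sketch below uses simulated data (all variable names and numbers are hypothetical, not the chapter's): even with a clear positive main effect between innovativeness and sales, cases appear in both contrarian corners.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical firm data: innovativeness and sales growth are positively
# correlated in the main effect, yet anomalies (contrarian cases) remain.
n = 500
innov = rng.normal(size=n)
sales = 0.4 * innov + rng.normal(size=n)

def quintile(x):
    """Assign each value to a quintile 1..5."""
    cuts = np.quantile(x, [0.2, 0.4, 0.6, 0.8])
    return np.searchsorted(cuts, x) + 1

qi, qs = quintile(innov), quintile(sales)

# Contrarian cases occupy the off-diagonal corners of the quintile cross-tab:
high_innov_low_sales = np.sum((qi == 5) & (qs == 1))
low_innov_high_sales = np.sum((qi == 1) & (qs == 5))
print("highly innovative firms with low sales:", high_innov_low_sales)
print("noninnovative firms with high sales:  ", low_innov_high_sales)
```

A regression on these data would report a significant positive main effect, yet both corner counts are nonzero – exactly the anomalies the chapter argues directional modeling glosses over.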
Abstract
This chapter identifies research advances in theory and analytics that contribute successfully to the primary need to be filled to achieve scientific legitimacy: configurations that include accurate explanation, description, and prediction – prediction here refers to predicting future outcomes and outcomes of cases in samples separate from the samples of cases used to construct models. The MAJOR PARADOX: can the researcher construct models that achieve accurate prediction of outcomes for individual cases that also are generalizable across all the cases in the sample? This chapter presents a way forward for solving the major paradox. The solution here includes philosophical, theoretical, and operational shifts away from variable-based modeling and null hypothesis statistical testing (NHST) to case-based modeling and somewhat precise outcome testing (SPOT). These shifts are now occurring in the scholarly business-to-business literature.
Abstract
Purpose
The purpose of this paper is to describe how and why to shift away from bad science practices now dominant in research in marketing to good science practices.
Design/methodology/approach
The essay includes details of theory construction and the use of symmetric tests to illustrate bad science practices. In contrast, the essay includes case-based asymmetric theory construction and testing to illustrate good science practices.
Findings
Researchers in marketing science should not report null hypothesis significance tests. They should report somewhat precise outcome tests, avoid using multiple regression analysis (MRA), and instead use Boolean-algebra-based algorithms to predict cases of interest.
Research limitations/implications
Given the widespread dominance of bad science practices (e.g. MRA and structural equation modeling), the inclusion of both bad and good science practices may be necessary during the transition years of 2015–2025 (e.g. Ordanini et al., 2014).
Practical implications
Good science practices fit reality much more closely than bad science practices. Asymmetric modeling includes recognizing that separate models are necessary for positive vs negative outcomes because the antecedents of each often differ.
Originality/value
This essay presents details of why and how researchers need to embrace a new research paradigm that is helpful for ending bad science practices that are now dominant in research in marketing.
Martin Götz and Ernest H. O’Boyle
Abstract
The overall goal of science is to build a valid and reliable body of knowledge about the functioning of the world and how applying that knowledge can change it. As personnel and human resources management researchers, we aim to contribute to the respective bodies of knowledge to provide both employers and employees with a workable foundation to help with those problems they are confronted with. However, what research on research has consistently demonstrated is that the scientific endeavor possesses existential issues including a substantial lack of (a) solid theory, (b) replicability, (c) reproducibility, (d) proper and generalizable samples, (e) sufficient quality control (i.e., peer review), (f) robust and trustworthy statistical results, (g) availability of research, and (h) sufficient practical implications. In this chapter, we first sing a song of sorrow regarding the current state of the social sciences in general and personnel and human resources management specifically. Then, we investigate potential grievances that might have led to it (i.e., questionable research practices, misplaced incentives), only to end with a verse of hope by outlining an avenue for betterment (i.e., open science and policy changes at multiple levels).
Abstract
Purpose
Colleges and universities conduct regular surveys that provide space for local questions, including library‐related items. Unfortunately these surveys often use incomparable metrics and scales. This study seeks to examine techniques to take advantage of such surveys to supply practical results.
Design/methodology/approach
Effect size meta‐analysis is a statistical method used to combine such disparate results. This method and other statistical tools were used to extract significant findings from the survey results, looking at such library constructs as physical access, analysis (the ability to determine information quality and relevance), collection quality and quantity, retrieval, hours and staff.
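As a rough illustration of the inverse-variance logic behind combining disparate survey results, the sketch below pools three hypothetical standardized effect sizes under a fixed-effect model. The numbers are invented; a real meta-analysis of incomparable metrics and scales also requires first converting each survey's result to a common effect-size metric.

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance weighted fixed-effect meta-analysis.

    Returns the pooled effect size and its 95% confidence limits.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical standardized effects from three incomparable surveys
effects = [0.30, 0.45, 0.25]    # e.g., Cohen's d per survey
variances = [0.02, 0.05, 0.03]  # sampling variance of each estimate
pooled, lo, hi = fixed_effect_meta(effects, variances)
print(f"pooled d = {pooled:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Weighting each survey by the inverse of its sampling variance gives more precise surveys more influence, which is what allows results from differently sized samples to be combined defensibly.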
Findings
The paper describes the meta‐analysis of three separate surveys which contained library‐related data responses, and conclusions subsequently drawn from that analysis.
Research limitations/implications
This paper assumes that the reader possesses some understanding of basic statistical concepts, such as means, variance, standardized scores, and null hypothesis significance testing (NHST). It assumes a “good‐enough” approach to library assessment, one that strives for the greatest possible statistical accuracy, reliability, and validity given the time and resource limitations within which most academic libraries operate.
Originality/value
The method provides a practical, sustainable, and effective library assessment technique using data from Radford University. The use of freeware to undertake the analysis also makes it financially viable.
Huat Bin (Andy) Ang and Arch G. Woodside
Abstract
This study applies asymmetric rather than conventional symmetric analysis to advance theory in occupational psychology. The study applies systematic case-based analyses to model complex relations among conditions (i.e., configurations of high and low scores for variables) in terms of set memberships of managers. The study uses Boolean algebra to identify configurations (i.e., recipes) reflecting complex conditions sufficient for the occurrence of outcomes of interest (e.g., high versus low financial job stress, job strain, and job satisfaction). The study applies complexity theory tenets to offer a nuanced perspective concerning the occurrence of contrarian cases – for example, identifying different cases (e.g., managers) with high membership scores in a variable (e.g., core self-evaluation) who have low job satisfaction scores, and different cases with low membership scores in the same variable who have high job satisfaction. In a large-scale empirical study of managers (n = 928) in four (contextual) segments of the farm industry in New Zealand, this study tests the fit and predictive validities of set membership configurations for simple and complex antecedent conditions that indicate high/low core self-evaluations, job stress, and high/low job satisfaction. The findings support the conclusion that complexity theory in combination with configural analysis offers useful insights for explaining nuances in the causes and outcomes of high stress as well as low stress among farm managers. Some findings support and some are contrary to symmetric relationship findings (i.e., highly significant correlations that support main effect hypotheses).
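The Boolean-algebra configural analysis described here rests on two standard fuzzy-set measures: consistency (the degree to which a recipe's membership is a subset of the outcome's membership) and coverage (how much of the outcome the recipe accounts for). The sketch below illustrates both with invented membership scores for five managers; the condition names and values are hypothetical and are not the study's data.

```python
import numpy as np

def consistency(condition, outcome):
    """Sufficiency consistency: degree to which condition membership
    is a fuzzy subset of outcome membership."""
    return np.sum(np.minimum(condition, outcome)) / np.sum(condition)

def coverage(condition, outcome):
    """Coverage: share of outcome membership the condition accounts for."""
    return np.sum(np.minimum(condition, outcome)) / np.sum(outcome)

# Hypothetical fuzzy membership scores for five managers
high_cse = np.array([0.9, 0.8, 0.2, 0.7, 0.1])    # core self-evaluation
novice = np.array([0.2, 0.1, 0.8, 0.3, 0.9])      # hypothetical condition
low_stress = np.array([0.8, 0.9, 0.3, 0.6, 0.4])  # outcome: low job stress

# A configuration (recipe) combines conditions with fuzzy AND (minimum);
# negating a condition means taking 1 minus its membership score.
recipe = np.minimum(high_cse, 1 - novice)  # high CSE AND not-novice
print(f"consistency = {consistency(recipe, low_stress):.2f}")
print(f"coverage    = {coverage(recipe, low_stress):.2f}")
```

A recipe with high consistency but modest coverage is still informative: it is sufficient for the outcome among the cases it describes, even though other recipes produce the same outcome in other cases – the asymmetric logic the study contrasts with symmetric correlations.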