Editorial: Introducing “short notes on methods”

International Journal of Organization Theory & Behavior

ISSN: 1093-4537

Article publication date: 10 November 2022

Issue publication date: 10 November 2022


Citation

Secchi, D. (2022), "Editorial: Introducing “short notes on methods”", International Journal of Organization Theory & Behavior, Vol. 25 No. 3/4, pp. 93-97. https://doi.org/10.1108/IJOTB-12-2022-240

Publisher

Emerald Publishing Limited

Copyright © 2022, Emerald Publishing Limited


Do you know when it is appropriate to use Cronbach’s α and when to use McDonald’s ω instead? When is it relevant to disclose information about the dependence/independence of observations in case study research? Why should you not arbitrarily fix alpha (the Type I error rate) and beta (the Type II error rate) when conducting statistical power analyses? How can you make sure your interview coding shows clear connections to your results?

Over the time I have served as Editor-in-Chief of the International Journal of Organization Theory and Behavior (IJOTB), I have found myself asking authors questions such as the ones above more often than I expected. In general, I have realized that there is much uncertainty surrounding methods and methodology [1]. This is not limited to evident procedural or calculation errors; it extends to data presentation and reporting, to the disclosure of information and sometimes to a general misinterpretation of how and when a particular method should (or should not) be used. Some of these uncertainties are resolved over the editorial and peer-review processes, while others are so pervasive as to prevent publication. I tend to believe that the articles submitted to IJOTB can be considered part of a wider phenomenon. In other words, I have no reason to doubt that the articles I read as editor are a sample of the same population of articles submitted to other management, human resource management and organizational behavior (OB) journals. Hence, the uncertainties on methods and methodology are not problems typical of IJOTB, but they may well extend to other journals. This is not a criticism of authors, reviewers or fellow journal editors; it is simply an observation drawn from my work as editor.

In the following, I first outline the pieces that should fall into place for an article’s method-related information to be deemed satisfactory. I then write a few words on whether the lack of any of those “pieces” constitutes an issue and why. Some additional reflections follow on why method-related uncertainty emerges among scholars. The last part of this editorial presents what IJOTB’s editorial team has agreed to offer to help prospective authors with methods and methodology.

Four guiding principles: replicability, justification, consistency and robustness

Independent of the methodological perspective, there are many choices to be made about how to conduct a research study. An article should be written in such a way that the details of the method are sufficient for the procedure to be replicated by anyone reading it. At the same time, each choice made should be justified by reference either to the literature or to best practices. However, replicability and justification are not enough. Not only must (1) the principles underlying methodological choices (i.e. justification) and (2) the details presented (i.e. replicability) be clear, but execution also needs to be consistent with these two and as robust as possible. Consistency means that the declared principles are applied without deviations or, when deviations occur, that they are disclosed in full and motivated. Robustness refers to the absence of calculation and/or application problems.

The four principles briefly outlined above probably require further consideration. First of all, the above is not to be read as advocacy of full alignment with existing or more popular methods, i.e. those for which standard research practice may be more advanced and easier to report in an article. The point I am making is different. I am suggesting that, independent of the chosen method, authors make sure that the reasons behind their choices are transparent and make sense. It is true that, especially when deviating from the most popular methods, there is a need to explain, detail and justify the choices made. For example, if one were to apply cognitive event analysis (e.g. Steffensen, 2012, 2013), it would be legitimate to expect a relatively broader methodological section. This is because such a method would be new to the OB field; hence, it may require additional details and justification. This information may be difficult to find – in this case, it comes from cognitive linguistics – and to convey effectively to an OB readership.

At the same time, popular methods face the opposite challenge. Here, sources can be too many, sometimes presenting inconsistent messages and/or leading authors to believe they can skip some of the information on the assumption that the “average colleague” would know about it. This is the case, for example, of PLS-SEM (partial least squares structural equation modeling; Hair et al., 2019). This method has recently seen a boost in popularity owing to its malleability and relative ease of use. Proponents regard it as a “silver bullet” (Hair et al., 2011) for working with data that other methods (namely, covariance-based SEM) have difficulty handling. The growing popularity of this method is one of its main strengths and, at the same time, its main disadvantage. This is for two reasons. One is that some scholars may be tempted to skip diagnostic tests or to repeat tests they found in the literature without asking “why” questions. Both approaches are difficult to categorize as sound scientific practice and are not (in general) to be recommended. The other reason is that some scholars may use the method when it is unnecessary or inappropriate (e.g. when there is no path model to be tested), thus ending up with troubling results.

Why should one bother?

Now, the question is whether method-related uncertainties are a problem. Put differently, is the lack of reproducibility, justification, robustness and/or consistency troublesome for scientific articles? The answer is yes. One obvious reason relates to understanding, i.e. making sure readers have a fair idea of why a research study was conducted the way it was. Another is transparency, or the ability to track all aspects of the process that led to the results. Being open about possible deviations from protocols or standard practice is extremely important in these cases. In line with that, one additional reason is being able to fully assess the limitations of a research study and to outline how these reflect (or stem from) methodological choices.

The standard narrative relates methods concerns to the reproducibility of a study. If research cannot be replicated, then it becomes impossible to establish whether the knowledge gained applies to different contexts and, hence, whether it is generalizable. In a Popperian world, this is essential. And, while it is undoubtedly a notable aspect to consider, it is not the only one and probably not the most important. When published information is such that readers could reproduce the study if they wanted to, it means that authors have been transparent about their procedures. However, it tells little (if anything) about (1) consistency with methods principles or (2) robustness (flawlessness) of execution. The message here is that abandoning any of the four principles outlined in this editorial is troublesome for scientific practice.

These are some of the reasons why there should always be an emphasis on methodology in papers. I am far from arguing that this is the most important or the only aspect that matters, but it is certainly one that requires mindful attention.

A love–hate relationship

I am personally very passionate about the methods I use. As a computational simulation scholar and a quantitative researcher, I have a tendency to go to the core of problems, whether that means learning a new statistical technique, compiling a new software application, learning a new programming language or giving myself time to refresh my linear algebra. At the same time, having worked with a large number of colleagues, I know I am usually called into a research team precisely because of this attitude. Hence, I know that most scholars have an ambivalent (love–hate) relationship with their methods.

In the above, I have outlined a series of aspects that concern the description, justification and application of methods in OB papers. I have then suggested that failing to take care of these aspects may result in significant problems. Given the emphasis that modern science places on methodological aspects, one may wonder why such issues are still so widespread. This is a very difficult question to answer. An easy way out would be to blame authors, reviewers and editors for a poor job or a lack of competence. However, this is a cheap answer, and it does not do justice to the work we all do as authors, reviewers and editors. Hence, the answer should be sought elsewhere. Perhaps there are two distinct but related aspects to this issue.

To use a euphemism, methodology is not everyone’s favorite. Quite the opposite: many do not look forward to tackling the repetitive, attention-craving, demanding and often disappointing tasks related to methods. On top of these, any method-related choice requires engaging deeply with the philosophy that underpins its relevance, applicability and rationale. While very few really deal with the philosophy, everyone must deal with the tasks. One can frame the above in terms of intrinsic motivation (Ryan and Deci, 2000) and state that few have an inner drive to confront the challenges that methods present. Scholars are usually driven either by theoretical quests or by finding interesting (better if surprising) results in their data. In both cases, the method is a means to an end rather than something that scholars enjoy performing per se.

If one aspect relates to scholars and their motivation, the other relates to the sources in which methods and methodology are presented and discussed. There are dedicated journals as well as specialized outlets where this information can be found. Unfortunately, the information is usually presented in a technical fashion, which makes it difficult to access straightforwardly. In principle, we should all be trained in mathematics and statistics and hence be able to read most quantitative methods papers. In practice, for most management scholars, mathematics and statistics are something they dealt with during their PhD program or, even farther down memory lane, during their master’s education. And math-related knowledge deteriorates very quickly unless it is continuously practiced. Qualitative research is different, and the problems there relate more to maintaining an up-to-date knowledge base; but some sources engage in deep philosophical discussions, and those are a distant memory too, for some of us at least. For computational simulation, the problem is different yet again, in that the field evolves at such a rapid pace that scholars have to keep themselves informed if they want to model at all. As an example, take two of the questions at the beginning of this editorial. Reliability coefficients and their options are discussed by Zinbarg et al. (2005) in a very clear article that is nonetheless probably difficult to follow in full, given its explicit reliance on mathematics. On the problem of dependence in multiple case study research, an active approach to this methodology is required (e.g. see Oliveira and Secchi, 2021).
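To make the first of those questions slightly more concrete, the following is a minimal sketch in Python, not drawn from Zinbarg et al. (2005) or from any IJOTB recommendation: it contrasts Cronbach’s α (computed from item and total-score variances) with McDonald’s ω total, derived from a one-factor model. The use of the third-party factor_analyzer package, the simulated data and the variable names are illustrative assumptions; the point is only that the two coefficients diverge when items do not relate equally to the underlying factor.

```python
# Minimal sketch (assumptions: one-factor model, `factor_analyzer` installed,
# simulated item data). Not a substitute for reading the methodological sources.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer


def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)


def mcdonald_omega(items: pd.DataFrame) -> float:
    """omega_total from a single-factor model:
    (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses)."""
    fa = FactorAnalyzer(n_factors=1, rotation=None)
    fa.fit(items)
    loadings = fa.loadings_[:, 0]
    uniquenesses = fa.get_uniquenesses()
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + uniquenesses.sum())


# Hypothetical five-item scale with unequal loadings on one latent trait
rng = np.random.default_rng(1)
latent = rng.normal(size=300)
items = pd.DataFrame(
    {f"item{i}": latent * w + rng.normal(scale=1.0, size=300)
     for i, w in enumerate([0.9, 0.8, 0.7, 0.6, 0.3])}
)
print(f"alpha = {cronbach_alpha(items):.3f}, omega = {mcdonald_omega(items):.3f}")
```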

On top of the problems above comes software. If one can rarely afford to overlook updates on the theoretical side of a method’s evolution, software can never be disregarded. It has become essential independent of the chosen method, so much so that it impinges on the success of applications and even on whether they can be performed at all. Not only does software keep updating at a very fast pace, but it may also change altogether. Some methods can be performed with some software packages and not with others, forcing scholars to continuously learn about them.

A method companion

How is it possible to make methods information more accessible without watering down its content? The initiative presented in this editorial – called Short Notes on Methods (SNM) – is a response to some of the reflections above. As a way to support IJOTB’s prospective authors and the OB community at large, the editorial team has decided to introduce a new article type. This is a relatively short account of a specific aspect of a method (qualitative, quantitative or computational simulation), designed to describe when it is appropriate to apply it and how to do so in a practice-oriented fashion. SNM do not aim at presenting or describing all aspects of a given method but are designed to have a rather specific focus. For this reason, SNM are shorter than regular articles (5,000 instead of 7,000–9,000 words). This should allow for conciseness, enhanced readability and a more to-the-point communication style. An SNM can be dedicated to a specific aspect of a method, a common problem, a summary of best practices or an outline of popular mistakes, to name just a few options. This means that they are considered “notes” and not “research articles.” Yet they undergo initial scrutiny by the Editor-in-Chief before they can be sent to peer review, just like any other article submitted to this journal. In other words, even if they are shorter than regular articles (they are indeed notes), they count as a regular publication. Their contribution to knowledge is simply of a different kind.

SNM are an attempt to make method-related information more accessible and, at the same time, to help with specific aspects of it. For example, the current issue (Vol. 25, Nos 3/4) hosts the first SNM published in this journal. Ganesh and Srivastava describe the use of multilevel confirmatory factor analysis (MCFA) in a way that presents its inner mechanisms together with its applicability. While confirmatory factor analysis (CFA) is extremely popular, its range of applicability has been limited over the years. When data are clustered in multiple levels of analysis and the independence of observations is unlikely or breached, there is scope for MCFA to be applied. This SNM satisfies the conditions above in that it discusses the specific application of a method, does so with rigor and, at the same time, makes it accessible by presenting a data-based exemplification.
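As a hedged illustration of the kind of preliminary check that motivates a move from single-level CFA to MCFA – not taken from Ganesh and Srivastava’s note – one can estimate the intraclass correlation, ICC(1), of an item or scale score with an intercept-only mixed model: a clearly non-zero ICC signals that observations nested in the same cluster are not independent. The column names (“score”, “team”) and the use of statsmodels below are illustrative assumptions.

```python
# Minimal sketch (assumptions: hypothetical columns "score" and "team",
# statsmodels available). A null mixed model gives the between-group and
# residual variances needed for ICC(1); a non-trivial ICC suggests the
# independence assumption behind single-level CFA is breached.
import pandas as pd
import statsmodels.formula.api as smf


def icc1(df: pd.DataFrame, value_col: str = "score", group_col: str = "team") -> float:
    """ICC(1) = between-group variance / (between-group + residual variance)."""
    model = smf.mixedlm(f"{value_col} ~ 1", data=df, groups=df[group_col])
    result = model.fit(reml=True)
    var_between = result.cov_re.iloc[0, 0]   # random-intercept (between-group) variance
    var_within = result.scale                # residual (within-group) variance
    return var_between / (var_between + var_within)


# Usage with a hypothetical data frame of individual scores nested in teams:
# df = pd.read_csv("survey.csv")
# print(f"ICC(1) = {icc1(df):.2f}")  # sizeable values point toward multilevel modeling
```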

There is a broad range of areas that SNM can cover. They are open to traditional as well as unconventional research methods, and they can range from quantitative to qualitative, to computational simulation and to aspects related to the mix of these methods. They can also focus on epistemological or ontological points that should be considered when making methodological choices. Additionally, they can review software that is relevant to methods in OB.

This editorial serves the purpose of presenting SNM to IJOTB’s readership, and it is an invitation to make use of the SNM we will publish. There is no firm plan to host at least one SNM in each and every issue of the journal, although that is our (loose) goal. You are more than welcome to get in touch with the editor or with any member of the editorial team if you are interested in publishing an SNM.

Note

1. In this editorial, I use the word “method” to indicate specific application procedures, while “methodology” refers to the general approach followed by the researcher.

References

Hair, J.F., Ringle, C.M. and Sarstedt, M. (2011), “PLS-SEM: indeed a silver bullet”, Journal of Marketing Theory and Practice, Vol. 19 No. 2, pp. 139-152.

Hair, J.F., Risher, J.J., Sarstedt, M. and Ringle, C.M. (2019), “When to use and how to report the results of PLS-SEM”, European Business Review, Vol. 31 No. 1, pp. 2-24.

Oliveira, N. and Secchi, D. (2021), “Theory building, case dependence, and researchers’ bounded rationality: an illustration from studies of innovation diffusion”, Sociological Methods and Research, forthcoming.

Ryan, R.M. and Deci, E.L. (2000), “Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being”, American Psychologist, Vol. 55 No. 1, p. 68.

Steffensen, S.V. (2012), “Care and conversing in dialogical systems”, Language Sciences, Vol. 34 No. 5, pp. 513-531.

Zinbarg, R.E., Revelle, W., Yovel, I. and Li, W. (2005), “Cronbach’s α, Revelle’s β, and McDonald’s ωH: their relations with each other and two alternative conceptualizations of reliability”, Psychometrika, Vol. 70 No. 1, pp. 123-133.

Further reading

Steffensen, S.V. (2013), “Human interactivity: problem-solving, solution-probing and verbal patterns in the wild”, in Cowley, S.J. and Vallée-Tourangeau, F. (Eds), Cognition beyond the Brain: Computation, Interactivity and Human Artifice, Springer, Dordrecht, pp. 195-221.
