Abstract
Purpose
The study aims to promote the use of qualitative methods in service research by investigating how these methods are reported in service journals, how the level of reporting has evolved and whether methodological reporting influences the downloads or citations received by qualitative articles.
Design/methodology/approach
Methodological reporting practices were identified through content analysis of 318 qualitative articles published in three major service research journals and comparison with prior methodological literature. Regression analysis was used to test how the level of methodological reporting influences article downloads and citations.
Findings
The study identifies 29 reporting practices related to 9 key methodological reporting areas. The overall level of methodological reporting in published qualitative articles has increased over time. While differences in the level of reporting between service journals persist, they are narrowing. The level of methodological reporting did not influence downloads or citations of qualitative articles.
Research limitations/implications
Service scholars using qualitative methods should pay attention to methodological reporting as it can improve the chances of being published. Factors such as theoretical contributions are likely to have a greater influence on article impact than methodological reporting.
Originality/value
No prior study has explored methodological reporting practices across different qualitative methodologies or how reporting influences article impact. For authors, reviewers and editors, the study provides an inventory of reporting practices relevant for evaluating qualitative articles, which should lower barriers for qualitative methods in service research by providing practical guidelines on what to focus on when reporting and assessing qualitative research.
Citation
Valtakoski, A. and Glaa, B. (2024), "Beyond templates: methodological reporting practices and their impact in qualitative service research", Journal of Service Management, Vol. 35 No. 6, pp. 66-108. https://doi.org/10.1108/JOSM-06-2023-0253
Publisher
Emerald Publishing Limited
Copyright © 2024, Aku Valtakoski and Besma Glaa
License
Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
Introduction
Qualitative research builds on non-numerical, unstructured data and employs an inductive or abductive mode of inference in which theoretical insights emerge from a careful, often interpretive analysis of rich data (Bansal et al., 2018). Qualitative methodologies, such as case study methods, grounded theory, action research and ethnography, have long been employed in service research. Recently, scholars have called for greater use of qualitative methods (Baron et al., 2014; Epp and Otnes, 2021; Ostrom et al., 2015; Witell et al., 2020), given that these methods are argued to be more suitable than quantitative approaches for addressing emerging complex, cross-disciplinary topics such as healthcare service improvement, transformative services and customer experience (Bansal et al., 2018; Eisenhardt et al., 2016; McColl-Kennedy et al., 2015). Despite these calls, qualitative methods remain underrepresented in the service research literature (Benoit et al., 2017; Lages et al., 2013; Valtakoski, 2020).
One significant reason for the relative lack of qualitative service research is the difficulty of evaluating such research, which stems from several factors. First, far from constituting a unified paradigm, qualitative research comprises a plethora of research traditions (Bansal et al., 2018; Cunliffe, 2011), each with its own evaluative criteria for assessing methodological rigor (Easterby-Smith et al., 2008; Symon et al., 2018; Welch and Piekkari, 2017). Rigor, or the quality of qualitative methods (Amis and Silk, 2008; Symon et al., 2018), serves the same purpose as methodological rigor in quantitative methods: to convince the reader of the credibility of the theoretical claims based on empirical research (cf. Gephart, 2004; Ketokivi and Mantere, 2010). Second, given the predominance of quantitative methods in business research, there tends to be a bias against qualitative research because the assessment of its methodological rigor differs so much from that of typical quantitative research (Pratt, 2008). Unlike quantitative research, which is nearly uniformly based on positivist epistemology and assumptions of objectivity, qualitative research often builds on more subjective research approaches such as constructivist and interpretive epistemologies (Amis and Silk, 2008; Guba and Lincoln, 1994), each of which has its own criteria for assessing methodological rigor.
To alleviate these issues, qualitative scholars often feel compelled to adopt methodological templates, such as the Eisenhardtian case study (Eisenhardt, 1989) or the “Gioia method” (Gioia et al., 2013), that provide a comprehensive blueprint for conducting and reporting qualitative research. While these templates have advantages and can further the use of qualitative methods – they can improve methodological rigor by systematizing qualitative research, provide new scholars with clear guidelines for conducting research and suggest transparent criteria for evaluation (Corley et al., 2021; Jonsen et al., 2018) – their use also comes with drawbacks. Rigid, unreflective use of templates can lead to a lack of methodological adaptation and imagination (Eisenhardt et al., 2016) and to the discarding of the core strengths of qualitative methods. Template use also tends to inhibit the diversity of qualitative research by imposing a “one best way” to conduct such research (Cornelissen, 2017) and to promote quasi-transparency by hiding methodological details important for the inference process and the emergence of insights (Pratt et al., 2020, 2022). Finally, most extant templates are based on positivist or constructivist epistemological assumptions (Mees-Buss et al., 2022), making their applicability to interpretivist or critical qualitative research dubious.
The purpose of this study is to contribute to the proliferation and diversity of qualitative methods in service research by focusing on methodological reporting practices. Rather than directly addressing evaluative criteria, these practices concern how qualitative methodology and findings are motivated and reported (Pratt, 2009; Piekkari et al., 2010). Examples of reporting practices include elaboration of the methodology used (Aguinis and Solarino, 2019; Köhler, 2016), the way the findings are written up (Jonsen et al., 2018; Zilber and Zanoni, 2022) and the kinds of rhetorical devices used to convince the reader (Golden-Biddle and Locke, 1993). Given that the evaluators and readers of qualitative articles can only assess what is explicated in a written article, reporting practices are critical for establishing methodological transparency (Pratt et al., 2020) and building confidence in the empirical findings (Pratt et al., 2022).
Drawing on prior methodological literature on qualitative research and content analysis of 318 qualitative articles in three major service journals, we put forth an inventory of reporting practices applicable across various types of qualitative methodology. Our goal is to provide service scholars with a toolbox of reporting practices that can be employed selectively in all types of qualitative research. We further seek to analyze how the use of reporting practices has evolved over time, and whether there are differences in the extent of employed practices between methodologies and journals. We also test whether the level of methodological reporting matters for article impact measured in terms of article downloads and citations. We address three research questions:
What methodological reporting practices are applicable across qualitative methodologies?
How has the use of reporting practices evolved in qualitative service research and are there differences between methodologies and journals?
How does the level of reporting practices affect article downloads and citations?
Our intention here is not to push qualitative service scholars toward adopting uniform criteria for judging the quality of their work (cf. Cornelissen, 2017; Pratt et al., 2022). Nor do we attempt to put forth a methodological template to be used in all qualitative research. Instead, similar to Tracy (2010) and Verleye (2019), we seek to foster the diversity of qualitative methods employed in service research by offering an elaboration of reporting practices that are generally expected in qualitative papers. We also wish to help service scholars who are more quantitatively oriented to understand what to pay attention to when evaluating qualitative studies, particularly in the case of an unfamiliar methodology. We thus hope to reduce impediments to the proliferation of qualitative methods in the service research field by providing authors, editors and reviewers with guidance on which methodological details are important to report and assess.
The structure of the paper is as follows. We first briefly review methodological literature on epistemology, methodological rigor and methodological reporting in qualitative research. Next, we describe the methodology used in our review to identify methodological reporting practices in qualitative service research. These data are then used to trace the evolution of reporting practices, differences in reporting practices between journals and qualitative methodologies as well as the impact on article downloads and citations. Finally, we discuss the implications of the identified inventory of reporting practices and analysis findings for service scholars.
Assessing qualitative research: epistemology, rigor and methodological reporting
Qualitative research builds on non-numerical data such as interviews, documents, field observations, images and video recordings. Such data have distinctive advantages over quantitative data: they are open-ended – qualitative scholars do not need to rely on predetermined theoretical constructs; the data are concrete and vivid, making the insights developed from such data more persuasive and grounded; and the data are rich and nuanced, allowing scholars to capture details and mechanisms invisible to quantitative methods (Graebner et al., 2012). These characteristics allow qualitative researchers to study phenomena in detail and up-close (Yin, 2003), enable the development of new theoretical understandings and explanations (Eisenhardt, 1989), provide a deep understanding of the lived experience of research subjects (Sandberg, 2005) and facilitate longitudinal study of complex processes with multiple stakeholders (Langley et al., 2013).
Epistemology in qualitative research
Far from being a unified paradigm, qualitative research encompasses numerous epistemological perspectives and methodological traditions (Bansal et al., 2018; Cunliffe, 2011). Epistemology refers to basic assumptions about the knowledge we can gain about a studied phenomenon. To illustrate how different these perspectives are and how their inherent assumptions influence the evaluation of qualitative research, we next briefly introduce three commonly adopted epistemological perspectives in qualitative research (cf. Amis and Silk, 2008; Welch and Piekkari, 2017): positivism, constructivism and interpretivism. Although the three do not present an exhaustive list of possible epistemological positions (cf. Cunliffe, 2011; Guba and Lincoln, 1994), they encapsulate well the diversity of qualitative research.
Positivism
Positivism refers to a philosophical perspective that assumes that an objective reality exists irrespective of an observer (cf. Gephart, 2004). Epistemologically, positivism, also referred to as foundationalism (Amis and Silk, 2008), assumes that we can attain perfect, certain knowledge of reality through observation and measurement; it is thus aligned with the common perspective adopted in quantitative research (Johnson et al., 2006). Positivist research assumes that the researcher can act as an objective observer detached from the phenomenon and can thus be largely written out of the discussion of methodology.
The positivist perspective is commonly adopted by service scholars who use case study methodology and is explicitly or implicitly the epistemological foundation of many of the common case study methodologies (cf. Beverland and Lindgreen, 2010; Eisenhardt, 1989; Gibbert et al., 2008; Yin, 2003). A typical positivist study seeks to develop a theoretical framework or to explain relationships between constructs as a precursor for a quantitative study. An example of a positivist study in the service literature is van Riel and Lievens’ (2004) study of decision-making in new service development in the high-tech sector. Their firm-level study builds on a tentative conceptual framework based on prior literature, uses extant theory to justify case selection, follows an Eisenhardtian case study design, explicitly addresses validity and reliability and expresses its results in the form of propositions. While the authors mention how they conducted interviews and feedback workshops, the paper omits any other signs of subjectivity. Relying on key informants, the study does not address possible differences in the subjective points of view between informants.
Constructivism
Following the seminal work of Lincoln and Guba (1985), constructivist research dispenses with the notion of an absolute, objective reality that we can know perfectly. Instead, the perspective suggests that reality cannot be discussed without reference to the person observing it, implying that there is no single objective reality. Epistemologically, constructivism suggests that we can only attain knowledge of reality imperfectly, by accounting for different views (Johnson et al., 2006), and only in a probabilistic sense, for example, through triangulation (Jick, 1979). The constructivist perspective also accepts that the values and biases of researchers are inherent to the research process, yet their influence can be minimized through careful and systematic methodology and researcher reflexivity (cf. Amis and Silk, 2008; Johnson et al., 2006; Mees-Buss et al., 2022).
While less common than positivist qualitative studies, constructivism is relatively widely employed in management research. Common methodological approaches include naturalistic inquiry (Lincoln and Guba, 1985), grounded theory (Glaser and Strauss, 1967) and the “Gioia method” (Gioia et al., 2013). While constructivist research can be used to build explanations, it typically emphasizes the inclusion of different points of view (Mees-Buss et al., 2022), seeking to uncover local truths (Järvensivu and Törnroos, 2010).
An example of constructivist service research is the study of co-creation roles of vulnerable customers by Sharma et al. (2017). While providing some theoretical framing, the paper does not develop a preliminary theoretical framework or hypotheses. The study seeks to address subjective customer perspectives instead of objective outcomes and explicitly builds on simultaneous data collection and analysis (cf. Glaser and Strauss, 1967), triangulation and the use of the Gioia method for analyzing and presenting the results. The findings are not formalized as propositions or even graphical frameworks but instead elaborated at length.
Interpretivism
Interpretivism constitutes a clear departure from the positivist and constructivist perspectives on qualitative research. Instead of assuming the existence of an objective reality, interpretivism suggests the world is socially constructed and that all knowledge relies on continuous negotiations between individuals (Sandberg, 2005). This implies that epistemologically we cannot gain knowledge of reality itself, but we can seek to understand the subjective meanings individuals ascribe to reality (Gephart, 2004). Individuals' lived experiences and subjective points of view thus become the focus of research. Interpretivism also embraces subjectivity and the active role of the researcher in the research process. Interpretivist research also typically emphasizes dialogical interaction between researchers and research subjects and often seeks to give voice to marginalized groups (Amis and Silk, 2008). The most common qualitative research methodology building on the interpretivist perspective is ethnography (von Koskull, 2020).
Interpretivist qualitative research often faces an uphill battle in an environment dominated by positivist research (Cornelissen, 2017; Cunliffe, 2011). Despite this, methodological scholars still point out the potential value of the perspective and advocate its use to provide a “thick description” of important phenomena (Bansal et al., 2018). A modest number of service scholars have adopted the interpretivist perspective. One example of interpretive service research is Suquet’s (2010) ethnographic study of fare evasion in public transportation conducted from the inspectors’ point of view. In contrast to the other two types of studies, Suquet explicitly writes himself into the article; he participated in the phenomenon described in the paper and describes the phenomenon in detail, while employing a dramaturgical approach to interpret and organize his observations. Instead of narrowly stated findings, the study discusses key issues arising from the evidence at length.
Methodological rigor in qualitative research
As demonstrated by the above examples, all qualitative articles seek, to differing degrees, to convince their readers of the credibility of the emerging theoretical claims through contextualization: providing a rich, believable description of a phenomenon (Corley et al., 2021; Ketokivi and Mantere, 2010). This contrasts with quantitative research, which generally relies on idealization: simplification of a phenomenon to enable the use of statistics and adherence to specific analysis procedures and methods (Ketokivi and Mantere, 2010). Relatedly, the criteria used to assess the methodological rigor of research differ considerably between quantitative and qualitative methods. Whereas quantitative studies have recourse to clearly defined and universally accepted criteria such as construct validity and reliability, the criteria used to evaluate qualitative research are less well-defined and vary greatly between different methodologies (Symon et al., 2018; Welch and Piekkari, 2017).
For example, the criteria applied to assess the methodological rigor of positivist qualitative studies follow closely the traditional criteria of quantitative research: validity and reliability (Amis and Silk, 2008). Subsequently, many authors have sought to elaborate specific criteria for assessing validity and reliability in qualitative research (Beverland and Lindgreen, 2010; Gibbert et al., 2008; Gibbert and Ruigrok, 2010; Gnyawali and Song, 2016). By contrast, constructivist research suggests that validity and reliability should be replaced with the notion of trustworthiness, which builds on the credibility, transferability, dependability and confirmability of a study (Lincoln and Guba, 1985; Amis and Silk, 2008). Although much less has been written on these criteria than on positivist notions of rigor, some authors have sought to elaborate detailed criteria for them (Järvensivu and Törnroos, 2010; Lincoln and Guba, 1986; Welch and Piekkari, 2017). As noted by Sandberg (2005), “few studies have sought to identify the criteria that could be used for justifying results produced within interpretive approaches in a systematic way.” While some scholars have sought to elaborate on what “rigor” might mean for interpretive research (Golden-Biddle and Locke, 1993; Jarzabkowski et al., 2014; Leitch et al., 2010; Zilber and Zanoni, 2022), there is considerable divergence on what criteria should be applied. Examples of criteria suggested for interpretivist research include a commitment to authenticity, researcher reflexivity and catalytic validity (cf. Lincoln and Guba, 1986; Sandberg, 2005).
Given this diversity and the contingency of criteria on epistemological perspective and specific methodology, assessing the methodological rigor of qualitative research can be challenging (Corley et al., 2021; Pratt et al., 2022). The lack of clear criteria can be particularly bewildering to quantitative scholars who are used to definite rules and heuristics for judging the credibility of empirical research. Due to these challenges in assessment, many qualitative scholars have felt compelled to adopt methodological templates (Harley and Cornelissen, 2022; Köhler et al., 2022). These templates are “standardized ways of conducting research that are used as formulas for shaping the methods themselves, especially data collection and analysis” (Pratt et al., 2022, p. 2).
While methodological templates can improve the efficiency of research by suggesting a standardized research process and a blueprint for reporting the methodology and findings, thus potentially easing the communication of results to other scholars (Corley et al., 2021; Köhler et al., 2022), they also come with considerable downsides. Their nonreflexive use can lead to inflexible research that becomes detached from the strengths of qualitative data and the opportunities arising from its open-endedness (Eisenhardt et al., 2016; Graebner et al., 2012). Template use may also lead to quasi-transparency when certain methodological details are reported but some aspects important for understanding the inferential process are not expected to be disclosed (Mees-Buss et al., 2022; Pratt et al., 2022). On the level of research communities, requiring scholars to adhere to specific methodological templates can reduce methodological diversity and hence diminish the potential for original research and novel insights (Corley et al., 2021; Cornelissen, 2017). Since most of the commonly employed methodological templates are based on case study research and assumptions of positivist or constructivist epistemology (Mees-Buss et al., 2022; Pratt et al., 2022), they can hinder the adoption of more interpretive or critical approaches to qualitative research (Cornelissen, 2017).
Methodological reporting in qualitative research
Similar to Tracy (2010), Verleye (2019) and Pratt et al. (2022), we argue that to embrace the full diversity of qualitative methods we must forgo strict methodological templates and accept that valid research designs and criteria for assessing the rigor of qualitative research depend on the specific epistemology and methodology. Yet, the medium for communicating methodological details and findings, the research article, remains constant for all methodologies. Prior authors have noted that methodological reporting is often lackluster in qualitative research. For example, Gephart (2004) notes that authors commonly underspecify the employed qualitative methods, suggesting that the primary issue is with insufficient reporting rather than meeting specific evaluative criteria. Similarly, Graebner et al. (2012) suggest that instead of relying on templates, authors should “be more precise regarding how and when constructs and relationships emerge during the research process.” Hence, methodological reporting is crucial regardless of what specific methods were applied and how their rigor is assessed.
Our study thus focuses on methodological reporting practices, such as the data sources used in a study or how interviews were conducted. Prior methodological literature has commonly discussed the use of reporting practices to enhance methodological rigor. However, prior contributions have generally focused on specific qualitative methodologies, such as case studies (Beverland and Lindgreen, 2010; Gibbert and Ruigrok, 2010), interpretive research (Leitch et al., 2010; Sandberg, 2005) or ethnography (Golden-Biddle and Locke, 1993; Jarzabkowski et al., 2014). While some authors have sought to outline a more general approach for evaluating qualitative research (Jonsen et al., 2018; Pratt et al., 2022; Reay et al., 2019; Tracy, 2010), prior literature has not systematically explored reporting practices applicable across different qualitative methodologies.
Methodology
To identify commonly employed methodological reporting practices, we analyzed the use of these practices in qualitative service articles. The resulting data were also used to analyze the evolution of reporting practices over time, and the impact of reporting on article downloads and citations. Our research process paralleled previous studies that have explored the impact of methodological quality (e.g. Benoit et al., 2017; Kumar et al., 2017) and consisted of five steps: (1) extracting bibliometric data; (2) identifying relevant qualitative articles; (3) identifying and categorizing reporting practices based on prior methodological literature and content analysis of selected qualitative articles; (4) coding the remaining articles with respect to the reporting practices; (5) statistical analysis of the evolution of reporting practices and impact on downloads and citations.
Data collection
Our study focused on qualitative research published in three prominent service journals: Journal of Service Research (JSR, 2020 impact factor 10.67), Journal of Service Management [1] (JOSM, impact factor 11.77) and Journal of Services Marketing (JSM, impact factor 4.47). These journals were selected because they are widely recognized as influential publications in the field, and their combined coverage spans a broad range of service-related topics. JOSM focuses mostly on service management and innovation, while JSM leans toward service marketing, and JSR covers all service topics. By analyzing the qualitative research published in these journals, we hoped to identify similarities and differences in the reporting of qualitative methods and variations in their impact.
We extracted bibliometric data on articles published in the three journals between 1998 and 2020 from the Scopus database, excluding non-research items such as editorials and retraction notes. We selected this period for three reasons. First, JSR was first published in 1998, which allowed us to include all three journals for the widest possible period. Second, this period is characterized by relative stability in the methodologies employed in service research (Valtakoski, 2020), which facilitated meaningful comparisons across the years. Third, the limitation ensured the availability of bibliometric data for all included articles. In total, we retrieved full bibliometric data, including all references, for 2,282 articles (JSR 564 articles, JOSM 647 articles and JSM 1,071 articles). The bibliometric data were processed using the bibliometrix package for R (Aria and Cuccurullo, 2017).
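Purely as an illustration of what this processing step might look like with bibliometrix, consider the following minimal sketch; the export file name and the document-type filter are our assumptions, not the authors' actual pipeline.

```r
# Illustrative sketch (not the authors' code): converting a Scopus export
# covering the three journals, 1998-2020, into a bibliographic data frame
library(bibliometrix)

M <- convert2df("scopus_export.csv", dbsource = "scopus", format = "csv")

# Keep research articles only, dropping editorials, errata and similar items
M <- M[M$DT == "ARTICLE", ]

# Descriptive overview of the resulting corpus
res <- biblioAnalysis(M)
summary(res, k = 10)
```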
Identification of qualitative articles
We employed a multi-stage screening process to identify articles that use qualitative methods. Initially, we reviewed the titles and abstracts of all articles published in the three journals. If the titles and abstracts did not provide sufficient information, we also reviewed the introduction and methodology sections of the articles. We excluded articles that were purely quantitative, used formal modelling or did not collect any empirical data. We also omitted mixed methods articles with a dominant quantitative part, as well as articles that were purely conceptual, literature reviews, editorials, commentaries or focused on methodology. Finally, we excluded purely conceptual articles that used cases only as illustrations of theoretical concepts. After applying these exclusion criteria, we had 318 relevant qualitative research articles for further analysis (49 in JSR, 134 in JOSM and 135 in JSM).
Identification of reporting practices
The process of analyzing the reporting practices in qualitative research proceeded as depicted in Figure 1. We first reviewed extant literature on qualitative methodology to identify tentative areas of methodological reporting. Based on this preliminary understanding, we proceeded to analyze the reporting practices employed in the qualitative service articles. We began by independently reading and analyzing 18 articles chosen for their representativeness of different methodologies, journals and time periods, focusing on the introduction, methodology, findings and discussion sections of the articles to find reporting practices. Yet, we often had to read the entire article due to considerable divergence in how the employed methods were reported. We then discussed our conclusions together to create a tentative list of reporting practices and a corresponding codebook for further analysis, while abductively developing a categorization of the practices that corresponds with the prior methodological literature. During subsequent analysis, the codebook was updated and altered whenever we encountered practices that would not fit into one of the existing categories. We also updated the categorization of the practices based on further analysis and discussions. In the end, we identified 29 specific reporting practices, categorized into 9 areas of methodological reporting.
Article coding
The rest of the identified articles were coded in stages, as illustrated in Figure 1. When coding the articles, we simply noted the use of reporting practices; we did not rate the level of reporting, since this would require assuming specific evaluative criteria. Even with this approach, the use of reporting practices varied considerably across the articles. This diversity was further heightened by the number of different qualitative methodologies employed in the articles. We thus encountered challenges commonly noted in the prior literature on the evaluation of qualitative research; even when focusing on the loosely defined reporting practices, identifying and evaluating them was often laborious and difficult to do consistently. One of the main difficulties was the sheer heterogeneity of methodological reporting. Particularly in earlier publications, methodological details were frequently missing or scattered throughout the text, which compelled us to read the entire paper to find the necessary details. While the consistency of reporting has improved in more recent articles, we still encountered many instances of inadequate or vague methodological reporting.
Following each independent coding phase, we compared our results and discussed any disagreements. Insights from these discussions were used to refine the codebook. In general, the disagreements were minor and typically attributable to human error, as it was easy to miss methodological details that were reported in unexpected sections of an article. Despite having relatively simple criteria to apply, we still often had to read “between the lines” and use subjective judgment of authors' intentions. Consequently, it was virtually impossible to be fully consistent when coding the articles. We would expect these difficulties to become even more pronounced when assessing the methodology against some specific rigor criteria. These experiences highlight the inherent challenges of evaluating qualitative research.
We tested for interrater reliability at two stages during the coding process. The initial test indicated that further refinement of the codebook was necessary. The final codebook was tested by coding 44 additional randomly selected articles, resulting in an agreement of 79.2% and a Cohen’s kappa of 0.59. Although the kappa value is below the recommended threshold of 0.6, the correlation of the aggregated reporting index variables (0.84) between the authors suggested sufficient reliability for our exploratory purposes.
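For readers less familiar with these statistics, the following base-R sketch shows how raw agreement and Cohen's kappa, which corrects agreement for chance, are computed from two coders' binary ratings; the rating vectors are hypothetical.

```r
# Hypothetical binary ratings by two coders on the same set of items
coder1 <- c(1, 0, 1, 1, 0, 1, 0, 0, 1, 1)
coder2 <- c(1, 0, 1, 0, 0, 1, 0, 1, 1, 1)

tab <- table(coder1, coder2)
n   <- sum(tab)
po  <- sum(diag(tab)) / n                      # observed (raw) agreement
pe  <- sum(rowSums(tab) * colSums(tab)) / n^2  # agreement expected by chance
kappa <- (po - pe) / (1 - pe)                  # Cohen's kappa
round(c(agreement = po, kappa = kappa), 3)
```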
After the second testing stage, the remaining 193 articles were distributed approximately evenly between the two authors and coded independently. We also jointly reviewed all training and test datasets to determine their final coding. All data were combined into a final dataset of 318 coded articles.
Dataset on reporting practices
To analyze the evolution and impact of reporting practices, we formed a longitudinal dataset consisting of the identified qualitative articles. The main variable of interest was the overall level of reporting practices, captured as the reporting index variable. This composite index was calculated as the sum of the 29 reporting practices. In other words, it captures how many of the identified practices were used in an article, treating all practices as equally important. The index was scaled to the interval 0–10 for easier interpretation [2].
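As a concrete illustration of this index construction, consider the following sketch, with simulated data standing in for the manually coded articles:

```r
# Simulated coding matrix: 318 articles x 29 binary reporting practices
# (1 = practice used). In the actual study these values come from coding.
set.seed(1)
practices <- matrix(rbinom(318 * 29, 1, 0.4), nrow = 318, ncol = 29)

# Reporting index: count of practices used, rescaled to the 0-10 interval,
# weighting all 29 practices equally
reporting_index <- rowSums(practices) / 29 * 10
summary(reporting_index)
```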
We used two dependent variables. Consistent with previous research examining article impact in marketing research (Kumar et al., 2017; Stremersch et al., 2015), we recorded the total number of citations received by an article in the Scopus database, a common indicator of an article’s importance within a research field. Articles that receive a high number of citations are typically considered to have a substantial impact, given that they are widely referenced by other scholars. We also included article downloads as an intermediary dependent variable (cf. Benoit et al., 2017). The number of downloads reflects interest in an article that is not yet fully reflected in citations; we expected articles that attract high interest within the scholarly community to receive more downloads. Unfortunately, article usage data were not available from Scopus for all identified articles. Consequently, we turned to the Web of Science database, which provided usage data for most of the remaining articles. However, neither database had usage data for some articles. Hence, we imputed the missing usage data using the number of downloads reported on journal homepages and download data available on Scopus.
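This source-preference logic can be sketched as follows; the column names and exact merge order are our illustrations based on the description above, not the authors' code.

```r
library(dplyr)

# Toy data: NA marks a source with no usage data for a given article
articles <- tibble(
  id                 = 1:4,
  downloads_wos      = c(1200, NA, 3400, NA),
  downloads_homepage = c(NA, 950, NA, NA),
  downloads_scopus   = c(1100, 900, 3000, 450)
)

# Take the first available source per article, in order of preference
articles <- mutate(articles,
                   downloads = coalesce(downloads_wos,
                                        downloads_homepage,
                                        downloads_scopus))
```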
Since reporting practices can obviously vary depending on the employed qualitative methodology, we included methodology as a control variable. This variable was based on the methodology explicitly stated by the article authors or, alternatively, on our own analysis of the methodology. We distinguish between seven methodologies: action research, case study, critical incident technique (CIT), grounded theory, interview study, ethnography and netnography. Interview study refers to articles such as Sharma et al. (2009) that are based on interviews of individuals but do not explicitly subscribe to a specific methodology (cf. Gephart, 2004).
We coded the purpose of each article using the typology by Graebner et al. (2012), generally based on the explicitly stated purpose of the article. We differentiated between theory development, capturing the lived experience of research subjects, elaboration of complex processes, using qualitative study as an illustration of theoretical concepts and understanding of linguistic phenomena.
Following prior research (Kumar et al., 2017; Stremersch et al., 2015), further article, journal and author characteristics were included as control variables, elaborated in Table 1. The descriptive statistics of all variables are reported in Table 2.
Findings
Methodological reporting practices in qualitative service research
Our analysis of the qualitative service articles revealed 29 commonly employed methodological reporting practices, divided into 9 areas of methodological reporting. These are described in Table 3. In the following, we briefly discuss the key areas of methodological reporting. The Appendix further elaborates on the individual reporting practices and provides examples of their use in service research.
Methodology justification
Methodology justification involves providing an answer to the key question Why is this qualitative approach appropriate for this study? Prior literature on qualitative methods emphasizes the importance of justifying the use of qualitative methodology (Beverland and Lindgreen, 2010; Piekkari et al., 2010), as well as establishing the study’s connection to prior theory and literature (cf. Ketokivi and Choi, 2014). The need to clearly identify the chosen methodological perspective is also highlighted in accounts of interpretive methods (Leitch et al., 2010; Zilber and Zanoni, 2022) as well as a general discussion on the rigor of qualitative methods (Jonsen et al., 2018; Pratt et al., 2022). In line with these prior accounts, we identified three practices related to methodology justification: comparison with prior literature, consideration for the studied phenomenon and comparison with plausible alternative methodologies.
Justification of empirical context
Given the reliance on purposeful data collection rather than random sampling, qualitative research needs to explain the reasons for choosing to study a specific empirical context and research subjects (Pratt et al., 2022; Siggelkow, 2007). By research subjects, we mean the focal individuals, organizations or other cases that the study addresses. Therefore, authors need to address the key questions Why is this empirical context appropriate for the study? and Why were these research subjects chosen to be studied? The importance of motivating the choice of research context has been highlighted in the literature on positivist qualitative research (Gibbert and Ruigrok, 2010; Piekkari et al., 2010) and interpretivist research (Golden-Biddle and Locke, 1993), as well as in general literature on qualitative methods (Pratt et al., 2022). Both positivist (Gibbert and Ruigrok, 2010; Gnyawali and Song, 2016) and general methodological literature (Pratt, 2009) also emphasize providing a rationale for choosing a specific number and type of research subjects, which may be based on a priori theoretical reasons (Eisenhardt, 1989). We distinguish four reporting practices related to justifying the empirical context choices: motivating the choice of general context, providing a rationale for choosing specific research subjects, providing grounds for including a specific number of subjects and explaining why the subjects were varied in a specific way.
Selection process
Given the non-random nature of research subject selection, qualitative scholars commonly elaborate on the logic, process and practical details of this selection. The central question is How were the research subjects identified and contacted? Addressing this question requires authors not only to explain how relevant research subjects were identified and contacted but also to elaborate on the overall logic of subject selection; for example, the application of replication or contrast logic in subject selection (Eisenhardt et al., 2016). Explaining the selection process is common in positivist qualitative studies (Eisenhardt, 1989; Gnyawali and Song, 2016) but is also relevant for interpretivist research (Sandberg, 2005) and constructivist research – consider, for example, the concept of theoretical sampling in grounded theory (Fendt and Sachs, 2008). We discovered three reporting practices addressing these issues: description of the overall process of selection, elaboration of how the subjects were identified and contacted in practice and explanation of the selection logic across multiple waves of data collection.
Researcher role
While positivist qualitative research typically seeks to exclude the researcher as an active and visible participant in the research process, describing the position and role of the researcher with respect to the studied phenomenon and research subjects is crucial for most qualitative research methodologies (Bansal and Corley, 2011). This area of reporting seeks to answer the question What was the researcher’s position in relation to the subjects and the research process? As noted by Bansal et al. (2018), “it is important for researchers to be forthright about their role.” Disclosing the researchers’ position is common in constructivist research (Lincoln and Guba, 1985) and, in particular, in interpretivist research, which emphasizes the instrumental role of researchers in the interpretation of the phenomenon (Cunliffe, 2011; Golden-Biddle and Locke, 1993). We identified four reporting practices related to the researcher role: describing the relation to research subjects, elaborating the actions of researchers during data collection, describing the actions of researchers during data analysis and providing an honest account of changes and failures in the research process.
Data collection
Elaborating the details of data collection is crucial for all qualitative research. Positivist case study scholars (Gibbert and Ruigrok, 2010), constructivists (Lincoln and Guba, 1985) and interpretivist scholars (Zilber and Zanoni, 2022) all agree on the need to describe data collection in sufficient detail. Hence, reporting needs to address two main questions: Which data sources were used? and How exactly were the data collected? In positivist research, the description of data collection methods typically plays the same role as in quantitative research: to ensure that generally accepted rules have been followed, which is supposed to improve the validity and reliability of the conclusions (Eisenhardt, 1989) and potentially even enable study replication (cf. Aguinis and Solarino, 2019). By contrast, constructivist qualitative research seeks to establish study credibility through prolonged engagement with research subjects and triangulation between different researchers and data sources (Lincoln and Guba, 1986; Welch and Piekkari, 2017). Similarly, discussion on data collection in interpretivist studies typically highlights the extent of exposure to the phenomenon and research subjects to enhance the authenticity of the study (Golden-Biddle and Locke, 1993; Jarzabkowski et al., 2014). We identified three reporting practices related to data collection: identification of used data sources, description of the overall data collection process and elaboration of the practical procedures used in data collection.
Data analysis
Given the reliance on inductive and abductive modes of inference (Ketokivi and Mantere, 2010), crafting a theoretical argument based on empirical findings requires particularly careful deliberation, as induction and abduction are always logically incomplete. Scholars need to address the questions How were the data analyzed? and How did the researchers arrive at their conclusions based on empirical findings and prior literature? Given the open-endedness and iteration common in qualitative research, data analysis is typically not an activity performed only once during the research process but a continuous process that runs in parallel with and influences data collection, in which authors seek the best possible theoretical explanation for their empirical observations (Dubois and Gadde, 2002; Eisenhardt, 1989; Golden-Biddle and Locke, 1993). Consequently, positivist (Gibbert et al., 2008; Gibbert and Ruigrok, 2010), constructivist (Gephart, 2004; Amis and Silk, 2008; Gioia et al., 2013) and interpretivist approaches to qualitative research (Golden-Biddle and Locke, 1993; Leitch et al., 2010; Zilber and Zanoni, 2022) all emphasize the importance of elaborating the data analysis process and procedures. We recognized five reporting practices related to data analysis: description of the overall analysis process, explanation of the inferential approach used, description of how prior literature informed the analysis, elaboration of how data were used in the analysis and demonstration of the chain of evidence.
Rich description of empirical context
Given the emphasis on contextualization as the central persuasion strategy of qualitative research (Ketokivi and Mantere, 2010), most qualitative approaches highlight the need to provide a rich description of the studied phenomenon and subjects. The two key questions to be addressed are Does the study provide a vivid description of the phenomenon? and Does the study give voice to research participants? Scholars are expected to provide a sufficiently detailed description of the phenomenon, including the point of view of the research subjects (Pratt, 2009). However, the purpose of these descriptions varies between the three epistemological perspectives. For constructivist studies, thick description is crucial for the transferability of the findings (Lincoln and Guba, 1986; Johnson et al., 2006) yet is typically central for the justification of the conclusions rather than an important research objective per se (Lincoln and Guba, 1986). A rich description is critical to interpretivist studies, as their objective is commonly the research subjects’ understanding of a phenomenon, which requires understanding and elaborating the phenomenon from their point of view (Jarzabkowski et al., 2014; Leitch et al., 2010). By contrast, in positivist studies, the description of empirical context plays a relatively minor role, given the preoccupation with seeking generalizability of findings, and is mostly used to establish external validity (cf. Gibbert and Ruigrok, 2010). We distinguish two reporting practices for describing the empirical context: providing quotations from interviewees and using all available data to describe the phenomenon.
Addressing biases
Given the reliance on researchers themselves as a key instrument for data collection, qualitative methodology scholars suggest that research should demonstrate awareness of potential biases that might have influenced the research process and the findings (Bansal et al., 2018; Golden-Biddle and Locke, 1993). Authors need to address the questions What type of biases could influence the research process? and What was done to address these biases during the research process? What this elaboration entails depends on the epistemological perspective. For positivist studies, addressing biases is related to addressing the typical notions of validity and reliability (Beverland and Lindgreen, 2010; Gibbert and Ruigrok, 2010). Constructivist studies, while eschewing these positivist notions, still highlight the importance of specific actions, such as triangulation, to ensure the trustworthiness of a study (Lincoln and Guba, 1986). The perspective acknowledges that researchers can be biased, but that these biases can be addressed through specific research strategies such as triangulation (Lincoln and Guba, 1985, ch. 11). Interpretivist studies, by contrast, embrace subjectivity and the potential biases brought to the study by the researcher. However, even interpretivist researchers should practice critical self-reflection when reporting their research findings (Golden-Biddle and Locke, 1993; Jonsen et al., 2018). Two reporting practices related to addressing biases were uncovered: identifying potential sources of bias and using specific procedures to address biases.
Ethical concerns
Tracy (2010) suggests that ethical reflection on the possible harm that researchers may cause to the research subjects is a crucial area of methodological reporting in qualitative research. Thus, qualitative scholars should address the question How did the researchers deal with ethical issues during research? While virtually no positivist methodological authors address the ethical dimensions of qualitative research, ethical considerations are more common in constructivist and interpretivist research traditions (cf. Guba and Lincoln, 1994). This stark contrast is to be expected given the different assumptions regarding the researchers’ position and role toward the research subjects. For example, interpretivist research often makes moral concerns a guiding principle of research (Amis and Silk, 2008; Leitch et al., 2010) or even a criterion for assessing the quality of the research. Of course, certain ethical practices such as preserving the anonymity of informants are also relevant for positivist studies. We delineated three reporting practices related to ethical concerns: description of data confidentiality, elaboration of the negotiation process between researchers and research subjects and noting the approval of an external ethics board.
Use of reporting practices
Table 4 summarizes the use of reporting practices categorized by employed methodology, organized along the typical epistemological positions related to each methodology. Based on this evidence, we can make the following observations.
First, confirming our arguments, we found that all reporting practices were employed across the different methodologies. However, as could be expected, not all practices were equally common across all methodologies. For example, providing an a priori rationale for research subject selection tends to be more important for positivist-oriented qualitative research. Nevertheless, the findings suggest that all reporting practices can be relevant regardless of the specific methodology. Of course, authors need to ensure that the reported methodological details meet the expectations of the relevant epistemological perspective and rigor criteria.
Second, while most articles did provide a summary description of data analysis, few studies reported in detail how the analytical process proceeded from empirical data to theoretical insights. Regrettably, many articles simply cited commonly used methodological sources and briefly described a generic process such as open and axial coding phases or within-case and cross-case analyses without further elaborating how exactly the data were used to arrive at conclusions. While specific details of the analysis process go beyond the scope of this review, many qualitative studies could improve on the reporting of this part of the methodology.
Third, although relatively many articles (80) did note the preservation of privacy and confidentiality through the anonymity of research subjects, further discussion of the ethical aspects of research was rare. Only 12 articles discussed how the impact of research and ethical issues were negotiated with research subjects, and only 8 articles (mostly in the healthcare context) noted the approval of an external ethical review board. While not tracked as a specific reporting practice, details of data management beyond anonymity were disclosed in only a couple of articles. In conclusion, ethical issues are largely ignored in methodological reporting in qualitative service research.
Evolution of methodological reporting
Figure 2 depicts the evolution of the overall level of methodological reporting in the three included service journals, together with trendlines based on regression analysis. Two conclusions can be drawn from the diagram. First, there is a noticeable upward trend in the reporting index in all three journals, indicating that the level of methodological reporting in published articles has been rising. Second, on average, articles published in JSR exhibit a higher level of reporting compared to the other two journals. Nevertheless, the gap between the three journals has been gradually narrowing, suggesting a convergence in the extent of reporting across these journals.
We used OLS regression analysis to test for the statistical significance of these patterns. Using the reporting index as a dependent variable, we tested how the index scores are explained by publication year, the number of pages in the article, journal, purpose and type of methodology. The results of this analysis are reported in Table 5.
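Expressed in R, this specification might look as follows; the data frame and variable names are our own illustrations, with base categories set to match the comparisons reported below (JOSM for journal, action research for methodology).

```r
# Illustrative reporting-level model (cf. Table 5): OLS of the reporting
# index on publication year, article length, journal, purpose and methodology
articles$journal     <- relevel(factor(articles$journal), ref = "JOSM")
articles$methodology <- relevel(factor(articles$methodology),
                                ref = "action research")

m_reporting <- lm(reporting_index ~ year + pages + journal + purpose +
                    methodology, data = articles)
summary(m_reporting)
```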
The overall fit of the model is decent (R2 = 0.372). The results confirm the descriptive findings, revealing a positive and statistically significant relationship between publication year and the reporting index (b = 0.063, p < 0.001). This indicates that the standards for methodological reporting in published qualitative service research have gradually risen. The number of pages of an article has a positive and significant effect on the reporting index (b = 0.055, p < 0.01), consistent with the expectation that more space allows for more extensive methodological reporting.
The results confirm that the level of methodological reporting in published articles varies across journals. Research published in JSR typically has a higher reporting index (b = 0.947, p < 0.001) than articles appearing in JOSM. Methodological reporting in articles published in JSM tends to be somewhat more extensive than in those published in JOSM (b = 0.412, p = 0.058).
Apart from the base category of action research, the effects of qualitative methodology on the level of methodological reporting are relatively minor. CIT and ethnography have roughly equally high coefficients. This is interesting, given the very different nature of these methodologies. For CIT, this may be attributed to the quasi-quantitative nature of the methodology and its largely standardized procedures for data collection and analysis that provide a ready “template” authors can follow. By contrast, ethnography as a methodology emphasizes author reflexivity and relies the least on methodological templates, forcing authors to explicate their methodological choices in detail. Unsurprisingly, articles which use qualitative methods only for illustration tend to use significantly more limited methodological reporting than other research types (b = −1.364, p < 0.001). The differences between the other types of qualitative study purposes are not significant.
Impact of reporting practices on article downloads and citations
The scientometric literature suggests that article citation counts are lognormally distributed (Stringer et al., 2010; Thelwall, 2016). Therefore, following a similar approach to Kumar et al. (2017), we employed OLS regression using the logarithms of total article downloads and citations as dependent variables. To avoid undefined values when applying the logarithm to zero counts, we added 1 to the total number of citations and downloads before the transformation.
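A sketch of the transformation and the citation model follows; the control variable names are placeholders standing in for the article, journal and author characteristics listed in Table 1.

```r
# Add 1 before taking logs so that uncited/undownloaded articles (0 counts)
# remain defined
articles$log_citations <- log(articles$citations + 1)
articles$log_downloads <- log(articles$downloads + 1)

# Citation model with a quadratic age term to allow the inverted-U pattern;
# control variable names are illustrative
m_cit <- lm(log_citations ~ reporting_index + age + I(age^2) + journal +
              n_authors + top_university + avg_h_index + us_author,
            data = articles)
summary(m_cit)
```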
The analyses were conducted hierarchically; the results are reported in Table 6. As seen from Use Model 1, the coefficient for the reporting index, while positive, is not significant (b = 0.064, p = 0.230). While the p-value of the coefficient is somewhat improved in Use Model 2 which adds topic indicators (b = 0.081, p = 0.094), the result is still not significant, suggesting that reporting practices do not influence the use of qualitative service articles. As expected, article age has an inverted-U-shaped relationship with article downloads. Articles appearing in JSR are more likely to be downloaded than articles appearing in JOSM (b = 0.606, p < 0.01), while the difference between JOSM and JSM is nonsignificant.
Next, we examined the relationship between methodological reporting and the citations received by qualitative articles. As seen from Citation Model 1, the coefficient for reporting index was small and non-significant (b = 0.014, p = 0.801), suggesting that the extent of reporting practices does not influence article impact in terms of citations. Article age again has an inverted-U-shaped relationship with article citations. Articles that appear in JSR tend to receive more citations compared to JOSM articles (b = 0.751, p < 0.01), while the difference between citations of JSM and JOSM articles is nonsignificant. Consistent with previous research (Rosenzweig et al., 2016; Stremersch et al., 2015), we find that the authors’ characteristics influence article citations. Specifically, the number of authors (b = 0.135, p < 0.05), the inclusion of a top university author (b = 0.635, p < 0.05) and the average h index of the authors (b = 0.047, p < 0.05) are all positively related to increased citation counts. Interestingly, articles with US authors tend to receive fewer citations than articles with only non-US authors (b = −0.388, p < 0.05).
Citation Model 2 adds the topic indicators, resulting in improvement in the overall model fit without significantly altering the regression coefficients or their significance, which suggests that the above findings hold across service research topics. Finally, Citation Model 3 adds article downloads as an explanatory factor of article citations. Unsurprisingly, article downloads are strongly related to article citations (b = 0.838, p < 0.001). The overall explanatory power of the model is also considerably increased (R2 = 0.739). Given that article age and journal are already key factors linked to article downloads, their direct effects on article citations are diminished. The non-significant coefficient of the methodological index (b = −0.040, p = 0.214) further indicates that methodological reporting does not significantly impact article citations.
Robustness checks
To test the robustness of our findings, we conducted identical analyses using negative binomial regression, a commonly used alternative estimation strategy to study article citations (cf. Stremersch et al., 2015). The results were consistent with our main analyses (correlation of significant standardized coefficients r = 0.99), strengthening our conclusions regarding the impact of reporting practices.
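A sketch of this alternative estimation, under the same hypothetical `articles` data frame as above; statsmodels exposes negative binomial regression through its formula API, with the raw citation counts as the outcome rather than their logarithm.

```python
# Negative binomial regression on raw citation counts (robustness check);
# same hypothetical `articles` data frame and column names as above.
import statsmodels.formula.api as smf

nb_model = smf.negativebinomial(
    "citations ~ reporting_index + article_age + I(article_age**2)"
    " + num_pages + C(journal)",
    data=articles,
).fit()
print(nb_model.summary())
```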
To examine the potential influence of outliers, we reran the analyses after excluding the most cited article in the dataset (Oliva and Kallenberg, 2003, with 1,475 citations). This exclusion halved the skewness of the article citations variable. The results of the regression analyses were essentially unchanged (correlation of significant standardized coefficients r = 0.99), indicating that the main findings were not influenced by this outlier.
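The outlier check can be illustrated as follows (again a sketch over the hypothetical `articles` data frame): drop the single most cited article, compare the skewness of the citation distribution, and refit the models on the reduced sample.

```python
# Compare citation skewness with and without the most cited article.
from scipy.stats import skew

print("skewness, full sample:", skew(articles["citations"]))
trimmed = articles.drop(articles["citations"].idxmax())
print("skewness, outlier excluded:", skew(trimmed["citations"]))
# The regression models would then be re-estimated on `trimmed`.
```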
To explore potential variations in the impact on citations between journals, we conducted a supplementary analysis using data exclusively from JSM and JOSM, as these journals have comparable stature. Although the reduced sample size lowered significance levels, the directions and magnitudes of the explanatory variable coefficients remained consistent with the main analysis, indicating the robustness of the findings across journal contexts. To account for the time lag in citation accumulation, we conducted an additional analysis by excluding the most recent publications (those published in 2019 and 2020). Again, the main results were left unchanged.
Finally, to further validate our findings we conducted additional tests using alternative sources of citation data. We employed the total citations recorded on Google Scholar and the Plum Analytics usage score as alternatives to article citation and download data, respectively. The results of these analyses remained consistent with our main findings, confirming the reliability and stability of our results across different data sources.
Discussion
Our study set out to identify methodological reporting practices of qualitative service research that are relevant across different epistemological positions and qualitative methodologies. We also sought to analyze how the level of reporting practices has evolved over time and whether reporting affects downloads and citations of qualitative articles. Through the analysis of qualitative articles in three key service journals, we identified 29 distinctive reporting practices. We noted that while most practices are equally applicable, some are likely to be more relevant to certain epistemologies and methods. We also observed that the level of reporting on data analysis often leaves much to be desired and that few articles report extensively on ethical issues.
Our analysis of the overall data indicated that the level of methodological reporting in published qualitative articles has risen over the past two decades. Additionally, our findings show that the level of reporting varies between service journals, with JSR exhibiting the most extensive reporting of qualitative methods. This finding concurs with prior studies by Gibbert et al. (2008) and Goffin et al. (2019), who find that the highest-ranked journals typically require a higher level of methodological rigor.
We found no evidence for the influence of methodological reporting on downloads or citations of qualitative service research articles. Methodological reporting can thus be considered a “hygiene factor” that needs to be sufficient to get published but does not otherwise affect the impact of an article. Other factors such as actual theoretical or practical contributions are likely to be more influential for the impact of qualitative research (Bansal and Corley, 2011; Holmlund et al., 2020). This finding concurs with Hoorani et al. (2019), who found that methodological quality in case study research does not affect article impact. Similarly, Benoit et al. (2017) found no major difference in article impact across various methodological characteristics. However, since journal reputation does affect the impact of an article, methodological reporting can indirectly influence citations by improving the chances of being published in a higher-quality journal, which is likely to generate more citations. This type of indirect effect has been suggested by Bluhm et al. (2011).
Implications for service scholars
Our study contributes to the proliferation of qualitative methods in service research and answers the call for further diversity in qualitative service research (Baron et al., 2014; Witell et al., 2020) by elaborating on how qualitative methods are reported across different perspectives. Our findings have several specific implications for service scholars.
First, we present an inclusive view of what methodological reporting entails in qualitative research beyond methodological templates. We put forth an inventory of reporting practices applicable across different qualitative methodologies and epistemological positions, which hopefully will encourage service scholars to venture outside common methodological paths and templates (cf. Pratt et al., 2022). This inclusive view is also important given the extant diversity of qualitative methods used in service research (Benoit et al., 2017; Witell et al., 2020); an approach focused on a single methodology would only provide a limited understanding of crucial aspects of methodological reporting. Our view helps service scholars employing different methodologies to understand similarities in key areas of reporting and thus to build a common language and understanding between scholars using different qualitative methodologies as well as between qualitative and quantitative service scholars. Despite this, we emphasize that we do not advocate universal evaluative criteria for qualitative service research. Similar to Pratt et al. (2022), Tracy (2010) and Verleye (2019), we highlight the need to carefully match the reporting practices with the expectations of the relevant methodological perspective.
Second, we found that the level of methodological reporting expected from qualitative research has increased over the past two decades, and this trend is likely to continue. Service scholars should thus pay attention to methodological reporting to improve their chances of getting published. Our list of reporting practices can help service scholars to identify which areas of reporting they may need to improve.
Third, the level of methodological reporting varies between service journals. Scholars should carefully consider the journal to which they submit their qualitative research and ensure that it meets the expected level of reporting for that journal. This may include benchmarking against the methodological reporting practices of articles previously published in the journal. However, given the rising standards of methodological reporting, we recommend that service scholars use the very latest qualitative research as benchmarks. Relatedly, we identified several articles during our review that demonstrated exemplary reporting of qualitative methods. These are provided as a web-based attachment as a reference for service scholars seeking to benchmark their reporting practices.
Lastly, while our findings indicate that the level of methodological reporting does not directly influence the impact of qualitative articles, higher-ranked journals tend to publish articles with more thorough methodological reporting. Since articles in these journals typically receive more citations, the level of reporting may indirectly influence the impact of a qualitative article.
Implications for reviewers and editors
Our study concerns the assessment of qualitative research and thus contributes to the methodological understanding of editors and reviewers. First, by providing an inclusive view of methodological reporting practices applicable across different qualitative methodologies, our study seeks to foster the variety and inclusiveness of service research with respect to qualitative methods. Our findings should help service scholars to evaluate qualitative research irrespective of their own background by providing an inventory of methodological reporting practices that are important for judging the rigor of qualitative research. This is important since assessing the rigor of qualitative research can be challenging due to the diversity of epistemologies and methodologies involved (Bansal et al., 2018). While rigor criteria always depend on the specific methodology, the identified reporting practices provide a guideline on which methodological details are likely to matter.
Second, our review found that reporting on the data analysis process and on the ethical aspects of qualitative methodology is often lackluster. Editors may want to consider what standards should be maintained in these two areas of reporting and whether they should be raised.
Third, we note that the number of pages in an article has a positive relationship with the extent of its methodological reporting. If service journal editors wish to encourage more thorough use of reporting practices, a special page allowance could be provided for qualitative articles to ensure sufficient space for reporting methodology more extensively.
Limitations and future research
Although our inventory of reporting practices may seem like another methodological template for qualitative research, it is not intended as a universal list of criteria. As argued by methodological scholars (Verleye, 2019; Zilber and Zanoni, 2022), there are always viable alternative ways to present qualitative research that give varying weight to the specific types of reporting practices. This is also suggested by general accounts of qualitative methods (Pratt et al., 2022; Reay et al., 2019). Hence, the identified reporting practices should be used in conjunction with careful consideration of the context and purpose of a study, based on a solid understanding of the specific criteria applied within a specific methodological tradition (Jonsen et al., 2018; Pratt et al., 2022).
As noted above, the theoretical contributions and other contents of an article are likely to influence the impact of a qualitative article more than methodological reporting. Future research could thus seek to include an assessment of an article’s contribution and compare it with the impact of reporting practices, as well as explore the possible interaction between the two. Further research would also be needed to elaborate on how specific reporting practices are used for specific qualitative methodologies. Similar to Pratt (2008), future studies could also seek to survey and analyze the expectations of service journals and reviewers with respect to methodological reporting practices.
Given time and resource limitations, we did not analyze the detailed rhetorical devices used in methodological reporting, which may also affect the assessment of methodological rigor (cf. Golden-Biddle and Locke, 1993; Jonsen et al., 2018; Zilber and Zanoni, 2022). Our findings thus reflect overall reporting practices rather than specific styles or techniques. Future studies could therefore conduct a detailed analysis of the use and impact of rhetorical strategies in the methodological reporting of qualitative research.
Tables
Description of included control variables
Variable | Motivation for inclusion | Derivation |
---|---|---|
Article characteristics | ||
Article age | Older articles have had more time to be cited | 2023 − publication year |
Article age squared | The relationship between article age and citations can be non-linear (Stremersch et al., 2015) | (Article age)^2 |
Article topic | The topic of an article is likely to affect its interest to other scholars and, consequently, its subsequent citations | The latent Dirichlet allocation (LDA) technique was used to determine the main topic of each article based on its abstract text (cf. Hannigan et al., 2019). Topic dummy variables indicate the main topic of the article based on a seeded LDA analysis with 10 predetermined topic categories; a minimal code sketch follows this table |
Abstract length | A longer abstract provides more room to explain the study contents and contributions to readers, which may raise interest in the article | The number of words in the abstract listed in Scopus |
Number of pages | Longer papers provide more space for methodological reporting | The number of pages reported in Scopus |
Number of keywords | Articles with more keywords may be easier to find and thus have more visibility | The number of keywords listed in Scopus |
Number of references | The more references a paper cites, the more chance there is that other scholars find the paper through these citations. Moreover, citing more references may also lend credibility to the article | The number of references reported in Scopus |
Journal | The prestige of a journal can affect the impact of an article | Two dummy variables denoting article publication in JSR and JSM, treating JOSM as the reference category |
Author characteristics | ||
Number of authors | Having multiple authors may influence the number of citations of an article (Stremersch et al., 2015) | The number of authors listed in Scopus |
US author | US institutions are generally seen as prestigious and therefore articles written by authors affiliated with such institutions can be seen as more credible (Stremersch et al., 2015) | A dummy variable denoting whether at least one of the authors of the article had a US affiliation listed in Scopus |
European author | As above but for European institutions | A dummy variable denoting whether at least one of the authors of the article had a European affiliation listed in Scopus |
Top university author | Top global business university authors may attract more attention and seem more credible than other authors (Stremersch et al., 2015) | A dummy variable denoting that at least one of the authors was affiliated with a top 100 university found on the Financial Times business school ranking |
Author mean h-index | Authors with a strong publication record and high citation counts are often seen as more credible and influential in their field and thus more likely to be cited | Mean value of article’s authors' h indices (Hirsch, 2005), calculated based on all articles appearing in the three service journals |
Source(s): This table is created by the authors of this paper
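As a rough illustration of the topic-derivation step described in the table, the sketch below assigns each article a main topic from its abstract and expands this into dummy variables. It uses standard (unseeded) LDA from scikit-learn, whereas the authors used a seeded LDA with 10 predetermined categories, so this is an approximation; the column names are hypothetical.

```python
# Approximate sketch of deriving topic dummies from abstracts with LDA;
# the authors used a *seeded* LDA variant, which this standard
# implementation does not replicate exactly. Column names are hypothetical.
import pandas as pd
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

abstracts = articles["abstract"].fillna("")
counts = CountVectorizer(stop_words="english", max_features=2000).fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=10, random_state=0)
topic_weights = lda.fit_transform(counts)  # article-by-topic probabilities

# Dominant topic per article, expanded into dummy variables for regression.
articles["main_topic"] = topic_weights.argmax(axis=1)
topic_dummies = pd.get_dummies(articles["main_topic"], prefix="topic")
```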
Descriptive statistics and correlations
Variable | Mean | (sd) | [min, max] | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | Reporting index scaled | 3.88 | (1.18) | [0.69, 7.24] | |||||||||
2 | log(Citations) | 3.55 | (1.06) | [0.69, 7.30] | −0.04 | ||||||||
3 | log(Downloads) | 3.46 | (0.90) | [0, 6.44] | 0.21 | 0.61 | |||||||
4 | Article age | 11.5 | (5.98) | [3, 25] | −0.38 | 0.32 | −0.28 | ||||||
5 | Abstract length | 231.7 | (71.70) | [73, 620] | 0.07 | −0.15 | 0.16 | −0.41 | |||||
6 | Number of pages | 17.63 | (6.28) | [7, 41] | 0.17 | 0.11 | 0.22 | −0.09 | 0.11 | ||||
7 | Number of references | 66.31 | (26.63) | [4, 174] | 0.27 | 0.04 | 0.28 | −0.37 | 0.21 | 0.34 | |||
8 | Number of keywords | 5.03 | (1.51) | [0, 11] | 0.17 | −0.16 | 0.18 | −0.42 | 0.23 | 0.07 | 0.20 | ||
9 | Number of authors | 2.61 | (1.21) | [1, 9] | 0.08 | 0.05 | 0.16 | −0.32 | 0.10 | 0.23 | 0.15 | 0.15 | |
10 | Author mean h index | 3.85 | (3.18) | [0.67, 17] | −0.03 | 0.24 | 0.13 | 0.04 | 0.07 | 0.04 | 0.04 | 0.08 | 0.01 |
Categorical variable | N | (%) | Categorical variable | N | (%) |
---|---|---|---|---|---|
Methodology | | | Study purpose | | |
Case study | 151 | (47.5) | Theory development | 201 | (63.2) |
Interview study | 98 | (30.8) | Experience understanding | 77 | (24.2) |
Critical incident technique | 25 | (7.9) | Process study | 23 | (7.2) |
Ethnography | 17 | (5.3) | Illustration | 12 | (3.8) |
Netnography | 12 | (3.8) | Linguistic phenomenon | 5 | (1.6) |
Grounded theory | 11 | (3.5) | |||
Action research | 4 | (1.3) | |||
US author | 62 | (19.5) | |||
European author | 185 | (58.2) | |||
Top university author | 27 | (8.5) |
Note(s): N = 318; Correlations in italic significant at p < 0.05
Source(s): This table is created by the authors of the paper
Identified methodological reporting areas and practices
Area of methodological reporting | Reporting practice | Description |
---|---|---|
Method justification Why is this qualitative methodology appropriate for this study? | 1. Comparison with prior literature | Identifying and describing specific issues and gaps in prior substantive literature |
2. Consideration of the studied phenomenon | Elaborating the nature of the studied phenomenon from ontological and epistemological perspectives | |
3. Comparison with plausible alternative methodologies | Comparing suggested qualitative methodology with previously used or potentially other plausible methodologies | |
Empirical justification Why is this empirical context appropriate for the study? Why were these research subjects chosen to be studied? | 4. Identify, define and motivate the overall empirical context | Elaboration of how the suggested empirical context is suitable for the study |
5. Describe the rationale for choosing specific research subjects | Explanation for why specific research subjects were included in the study in terms of their characteristics and suitability for the study | |
6. Provide grounds for the number of research subjects included in the study | Disclose the number and rationale for research subjects included in the empirical study | |
7. Explain why the research subjects were varied | Elaboration on the variance of included research subject characteristics and how these are related to the research purpose and questions | |
Selection process How were the research subjects identified and contacted? | 8. Describe the overall process of selecting research subjects | Description of the selection process of research subjects for the empirical study |
9. Describe how the subjects were identified and contacted in practice | Elaboration of the practical actions taken by researchers to identify, contact and onboard research subjects | |
10. Explain the logic for multiple waves of subject selection | Description of the rationale used to select further research subjects during the empirical study | |
Researcher role What was the researcher’s position in relation to the subjects and research process? | 11. Describing the relation of the researchers to the research subjects | Description of how the researchers were related to the research subjects, particularly disclosing any pre-existing relationship |
12. Elaborating the researchers' actions during data collection | Description of what specific researchers did during data collection with respect to research subjects and among themselves | |
13. Disclose the researchers' actions during data analysis | Description of how specific researchers contributed to data analysis, particularly in the case of divided roles | |
14. Identify any iteration and surprises faced during the research process | Elaboration of any surprises encountered during the research process and description of changes made to research design and procedures | |
Data collection What data sources were used? How exactly were the data collected? | 15. Identify all data sources used in the research | Description of all used data sources |
16. Describe the overall data collection process | Elaboration of data collection in terms of timeline, phases and methods | |
17. Elaboration of the practical data collection procedures | Detailed description of how the data were collected in practice | |
Data analysis How were the data analyzed? How did the researchers arrive at their conclusions based on empirical findings and prior literature? | 18. Description of the overall data analysis process | Elaboration of the data analysis process in terms of phases and methodologies employed |
19. Explanation of the inferential process employed in data analysis | Description of the logic and specific approaches employed during data analysis | |
20. Explain how prior literature was used during data analysis | Elaboration of how prior literature, theories and models informed the data analysis | |
21. Describe how empirical data were employed during data analysis | Elaboration of how different types of data were used during data analysis | |
22. Demonstrate the chain of evidence | Illustration of how the theoretical insights are related to empirical data | |
Rich description Does the study provide a vivid description of the phenomenon? Does the study give voice to research participants? | 23. Quotations from interviewees | Illustrative quotations from study participants in their own words |
24. Rich description of the phenomenon using all available data | The use of all available data to illustrate the empirical context and specific research subjects | |
Addressing biases What type of biases could influence the research process? What was done to address these biases during the research process? | 25. Identification of potential sources of biases | Discussion on possible factors and sources of bias, related to authors, research subjects, or study design and philosophical foundations |
26. Use of specific procedures to avoid bias | Description of specific actions taken by the researchers to avoid or mitigate the effects of possible biases | |
Ethical concerns How did the researchers deal with ethical issues during research? | 27. Description of data confidentiality | Description of whether the data were treated as confidential, thus resorting to practices such as anonymity or pseudonyms to preserve the privacy of the research subjects |
28. Elaboration of a possible negotiation process between researchers and research subjects | Description of any negotiations between the researchers and research subjects related to privacy or potential impact on subjects | |
29. Approval of ethical review board | Disclosure of whether an external ethical review board was approached to gain permission for the study |
Source(s): This table is created by the authors of this paper
Use of reporting practices across different methodologies
Reporting practice | CIT | Action research | Case study | Interview study | Grounded theory | Netnography | Ethnography | Total | (%) |
---|---|---|---|---|---|---|---|---|---|
Method justification | |||||||||
Comparison with prior literature | 18 | 1 | 83 | 55 | 6 | 6 | 11 | 180 | (56.6) |
Consideration of the studied phenomenon | 9 | 2 | 48 | 35 | 3 | 7 | 9 | 113 | (35.5) |
Comparison with plausible alternative methodologies | 2 | 1 | 7 | 11 | – | 2 | 1 | 24 | (7.5) |
Empirical justification | |||||||||
Justifying the overall empirical context | 12 | 2 | 114 | 68 | 6 | 12 | 15 | 229 | (72.0) |
Describe the rationale for choosing specific research subjects | 15 | 3 | 128 | 77 | 9 | 11 | 14 | 257 | (80.8) |
Provide grounds for the number of subjects included in study | 5 | 3 | 38 | 16 | 4 | 1 | 2 | 69 | (21.7) |
Explain why specific research subjects were varied | – | – | 22 | 1 | – | – | 1 | 24 | (7.5) |
Selection process | |||||||||
Describe the overall process of selecting research subjects | 22 | 2 | 96 | 77 | 8 | 7 | 10 | 222 | (69.8) |
Describe how the subjects were identified and contacted in practice | 18 | – | 58 | 53 | 3 | 8 | 8 | 148 | (46.5) |
Explain the logic for multiple waves of subject selection | 3 | – | 19 | 26 | 3 | 2 | 1 | 54 | (17.0) |
Researcher role | |||||||||
Describing the relation of the researchers to the research subjects | 3 | 3 | 14 | 9 | 2 | 1 | 5 | 37 | (11.6) |
Elaborating the specific actions of the researchers during the data collection process | 5 | 1 | 30 | 21 | 3 | 2 | 7 | 69 | (21.7) |
Disclose the actions of researchers during the data analysis process | 14 | – | 39 | 41 | 3 | 6 | 5 | 108 | (34.0) |
Identify any iteration and surprises faced during the research process | 3 | – | 7 | 2 | 2 | 1 | 1 | 16 | (5.0) |
Data collection | |||||||||
Identify all data sources used in the research | 24 | 3 | 145 | 97 | 11 | 12 | 17 | 309 | (97.2) |
Describe the overall data collection process | 24 | 4 | 144 | 98 | 11 | 12 | 17 | 310 | (97.5) |
Elaboration of the practical data collection procedures | 15 | 1 | 87 | 73 | 6 | 8 | 13 | 203 | (63.8) |
Data analysis | |||||||||
Description of the overall data analysis process | 24 | 3 | 123 | 88 | 9 | 12 | 15 | 274 | (86.2) |
Explanation of the inferential process | 10 | – | 56 | 36 | 3 | 7 | 9 | 121 | (38.1) |
Explain how prior literature was used | 2 | – | 56 | 33 | 3 | 1 | 4 | 99 | (31.1) |
Describe how empirical data informed analysis | – | – | 11 | 5 | – | 1 | 3 | 20 | (6.3) |
Demonstrate the chain of evidence | 9 | – | 32 | 30 | 4 | 1 | 8 | 84 | (26.4) |
Rich description | |||||||||
Quotations from interviewees | 21 | 1 | 113 | 89 | 10 | 10 | 16 | 260 | (81.8) |
Rich description of phenomenon using all available data | 7 | 2 | 22 | 18 | 1 | 5 | 10 | 65 | (20.4) |
Addressing biases | |||||||||
Identification of possible sources of biases | 3 | 1 | 54 | 27 | 5 | 3 | 6 | 99 | (31.1) |
Use of specific procedures to avoid bias | 18 | – | 77 | 42 | 3 | 6 | 9 | 155 | (48.7) |
Ethical considerations | |||||||||
Description of data confidentiality | 3 | 3 | 36 | 24 | 3 | 5 | 6 | 80 | (25.2) |
Elaboration of negotiation between researchers and research subjects | 1 | – | 2 | 5 | – | 1 | 3 | 12 | (3.8) |
Approval of ethical review board noted | – | – | 2 | 2 | 1 | 1 | 2 | 8 | (2.5) |
Total articles | 25 | 4 | 151 | 98 | 11 | 12 | 17 | 318 |
Note(s): The methodologies are grouped under positivist, constructivist and interpretivist epistemologies
Source(s): This table is created by the authors of this paper
Results of OLS regression on reporting index
Variable | b | (s.e.) | p |
---|---|---|---|
Publication year | 0.063 | (0.013) | *** |
Number of pages | 0.055 | (0.017) | ** |
Number of references | 0.000 | (0.003) | 0.875 |
Journal (baseline = JOSM) | |||
JSM | 0.412 | (0.216) | 0.058 |
JSR | 0.947 | (0.223) | *** |
Author variables | |||
Number of authors | −0.026 | (0.059) | 0.662 |
Top university author | 0.024 | (0.271) | 0.930 |
Author from US | −0.580 | (0.195) | ** |
Author from Europe | −0.277 | (0.147) | 0.061 |
Author mean h index | −0.028 | (0.021) | 0.168 |
Methodology (baseline = action research) | |||
Case study | 0.551 | (0.234) | * |
CIT | 0.898 | (0.304) | ** |
Ethnography | 0.885 | (0.423) | * |
Grounded theory | 0.522 | (0.467) | 0.264 |
Interview study | 0.613 | (0.259) | * |
Netnography | 0.720 | (0.411) | 0.081 |
Study type (baseline = experience) | |||
Illustration | −1.364 | (0.352) | *** |
Linguistics | 0.074 | (0.734) | 0.919 |
Process | −0.132 | (0.237) | 0.578 |
Theory | −0.232 | (0.146) | 0.114 |
(Intercept) | −123.990 | (26.437) | *** |
Topic indicators | Included | ||
F(29,288) | 5.890 | ||
R2 | 0.372 | ||
Adjusted R2 | 0.309 | ||
Note(s): Heteroskedasticity-robust standard errors in parentheses; ***p < 0.001; **p < 0.01; *p < 0.05; All VIFs < 4
Source(s): This table is created by the authors of this paper
Results of regression analysis on article downloads and citations
Variable | Use Model 1 | Use Model 2 | Citation Model 1 | Citation Model 2 | Citation Model 3 | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
b | (s.e.) | p | b | (s.e.) | p | b | (s.e.) | p | b | (s.e.) | p | b | (s.e.) | p | |
Article variables | |||||||||||||||
Article age | 0.132 | (0.038) | ** | 0.160 | (0.035) | *** | 0.188 | (0.039) | *** | 0.220 | (0.039) | *** | 0.086 | (0.031) | ** |
Article age squared | −0.006 | (0.002) | *** | −0.006 | (0.001) | *** | −0.005 | (0.002) | ** | −0.005 | (0.002) | ** | 0.000 | (0.001) | 0.864 |
Number of pages | 0.026 | (0.013) | * | 0.026 | (0.013) | * | 0.008 | (0.015) | 0.605 | 0.007 | (0.015) | 0.671 | −0.015 | (0.010) | 0.136 |
Abstract length | 0.001 | (0.001) | 0.382 | 0.001 | (0.001) | 0.322 | 0.000 | (0.001) | 0.829 | 0.000 | (0.001) | 0.688 | 0.000 | (0.001) | 0.681 |
Number of keywords | 0.036 | (0.034) | 0.296 | 0.030 | (0.031) | 0.338 | −0.031 | (0.039) | 0.425 | −0.038 | (0.036) | 0.294 | −0.063 | (0.024) | * |
Number of references | 0.002 | (0.002) | 0.226 | 0.002 | (0.002) | 0.314 | 0.003 | (0.002) | 0.103 | 0.003 | (0.002) | 0.154 | 0.001 | (0.001) | 0.356 |
Journal (baseline = JOSM) | |||||||||||||||
JSM | 0.072 | (0.170) | 0.674 | 0.164 | (0.174) | 0.347 | −0.120 | (0.202) | 0.554 | −0.030 | (0.205) | 0.883 | −0.168 | (0.131) | 0.203 |
JSR | 0.606 | (0.210) | ** | 0.649 | (0.197) | ** | 0.751 | (0.226) | ** | 0.746 | (0.219) | ** | 0.203 | (0.137) | 0.139 |
Author variables | |||||||||||||||
Number of authors | 0.033 | (0.044) | 0.463 | 0.007 | (0.043) | 0.879 | 0.135 | (0.053) | * | 0.107 | (0.051) | * | 0.101 | (0.033) | ** |
Top university author | 0.159 | (0.242) | 0.511 | 0.200 | (0.222) | 0.369 | 0.635 | (0.261) | * | 0.655 | (0.246) | ** | 0.487 | (0.122) | *** |
Author from USA | −0.110 | (0.153) | 0.473 | −0.105 | (0.154) | 0.494 | −0.388 | (0.176) | * | −0.354 | (0.174) | * | −0.266 | (0.121) | * |
Author from Europe | 0.047 | (0.115) | 0.685 | 0.075 | (0.112) | 0.504 | −0.094 | (0.130) | 0.469 | −0.057 | (0.127) | 0.651 | −0.120 | (0.094) | 0.205 |
Author mean h index | 0.013 | (0.044) | 0.497 | 0.025 | (0.017) | 0.143 | 0.047 | (0.018) | * | 0.051 | (0.018) | ** | 0.030 | (0.013) | * |
Study type (baseline = experience) | |||||||||||||||
Illustration | 0.675 | (0.283) | * | 0.508 | (0.300) | 0.091 | 0.498 | (0.320) | 0.121 | 0.386 | (0.325) | 0.236 | −0.040 | (0.198) | 0.840 |
Linguistics | 0.019 | (0.365) | 0.959 | 0.008 | (0.341) | 0.981 | −0.197 | (0.435) | 0.650 | −0.151 | (0.386) | 0.696 | −0.158 | (0.388) | 0.685 |
Process | 0.422 | (0.194) | * | 0.391 | (0.165) | * | 0.409 | (0.248) | 0.100 | 0.407 | (0.235) | 0.084 | 0.080 | (0.181) | 0.660 |
Theory | 0.184 | (0.110) | 0.096 | 0.123 | (0.108) | 0.257 | 0.161 | (0.114) | 0.160 | 0.146 | (0.117) | 0.215 | 0.043 | (0.091) | 0.639 |
Reporting index | 0.063 | (0.052) | 0.230 | 0.081 | (0.048) | 0.094 | 0.014 | (0.056) | 0.801 | 0.023 | (0.050) | 0.650 | −0.040 | (0.032) | 0.214 |
log(Article use) | 0.838 | (0.054) | *** | ||||||||||||
Methodology indicators | Included | Included | Included | Included | Included | ||||||||||
Topic indicators | Included | Included | Included | ||||||||||||
(Intercept) | 1.484 | (0.613) | * | 1.465 | (0.704) | * | 1.418 | (0.831) | 0.089 | 1.346 | (0.897) | 0.134 | 0.117 | (0.565) | 0.835
R2 | 0.317 | 0.421 | 0.386 | 0.447 | 0.739 | ||||||||||
R2 diff | 0.104 | *** | 0.061 | *** | 0.292 | *** | |||||||||
Adjusted R2 | 0.261 | 0.353 | 0.335 | 0.383 | 0.708 | ||||||||||
Max VIF | 4.034 | 4.543 | 4.034 | 4.543 | 5.359 |
Note(s): Heteroskedasticity-robust standard errors in parentheses; ***p < 0.001; **p < 0.01; *p < 0.05
Source(s): This table is created by the authors of this paper
Methodological reporting practices in qualitative service research
Reporting practice | Description | Rationale | Examples |
---|---|---|---|
Method justification: why is this qualitative approach appropriate for this study? | |||
Comparison with prior literature | Identifying and describing specific issues and gaps in prior substantive literature | To motivate the choice of qualitative methodology based on the need to develop theory further or to explore the phenomenon from a previously underrepresented perspective | “The study was well suited for case research because service development in goods-dominant firms is a contemporary phenomenon for which scant academic research has been published […] In addition, our objective was to isolate and characterize factors that enable successful service development and factors that could not be manipulated to observe their influence on performance” (Neu and Brown, 2005) |
Consideration for the ontology and epistemology of the studied phenomenon | Elaborating the specific nature of the studied phenomenon from ontological and epistemological perspectives | To motivate the choice of qualitative methodology to study this specific phenomenon by linking it to the strengths of qualitative methods | “Due to the dynamic and complex character of the phenomenon that we are studying, the research design must enable the capture of the process character of the servicescape, the interaction processes between customer and service environment, and must be of an exploratory nature.” (Pareigis et al., 2012) |
Comparison with plausible alternative methods | Explicitly comparing suggested qualitative methodology with previously used or potential other plausible methodologies | To strengthen the argument for choosing specific qualitative methodology by eliminating alternative methodologies as less suitable | “Sequential service quality in service encounters is difficult to explore using quantitative methods. Multiple respondents and various views at different levels in the service chain are required. For this reason, a qualitative case-study methodology was employed – thus allowing an exploration of different perceptions of sequential service quality.” (Svensson, 2006) |
Empirical justification: why is this empirical context appropriate for the study? Why were these research subjects chosen to be studied? | |||
Identify, define and motivate the overall empirical context | Elaboration of how the suggested empirical context is suitable for the study | To establish that the context is relevant to study the phenomenon and is amenable for pursuing the research questions | “The setting for the study was the retail sector. The retail sector was deemed appropriate for a variety of reasons including labor intensity, frequency of customer-contact, and size/economic importance.” (Harris and Daunt, 2013) |
Motivate the choice of specific research subjects for the study | Identifying why specific research subjects were included in the study in terms of their characteristics and suitability for the overall study | To demonstrate that the selected research subjects are relevant for the study purpose and research questions | “The key criteria used to select the organisations were: it is a not-for-profit organisation in the mental healthcare sector; it is operating in the community based mental health sector for at least ten years, and it demonstrated evidence of co-creation with its customers”. (Sharma et al., 2017) |
Motivate the number of subjects included in study | Disclose the number and rationale for research subjects included in the empirical study | To demonstrate why a specific number of research subjects is suitable for studying the research questions | “Given this complexity, we adopted a case study approach incorporating multiple qualitative data sources to identify and illuminate these various perspectives […] In order to simplify this complexity, we centered our outlook upon the perspective of one particular ecosystem actor. This provided opportunities to identify subtle interactions between actors, and key institutional arrangements, which may otherwise remain latent, given their nesting within a larger food waste service ecosystem” (Baron et al., 2018) |
Motivate why specific research subjects were varied | Elaborating how the research subjects included in the study vary in their characteristics and how these are related to the research purpose and questions | To illustrate why different research subjects are included in the study | “We selected cases that differed in their content, business domain and nature of solution, so as to expand the external generalizability of the findings” (Hakanen and Jaakkola, 2012) “It was important to achieve a broad perspective of third place activity in fashion physical stores across different market segments to track and assess variance. […] The study therefore adopts McKinsey-BoF’s Sales Price Index approach to classify the mass, mid, and luxury fashion segments used.” (Alexander, 2019) |
Selection process: how were the research subjects identified and contacted? | |||
Describe the overall process of selecting research subjects | Description of how research subjects were chosen for the empirical study | To describe the overall research design and selection of research subjects | “We used a mix of purposive and snowball sampling to recruit health-care service providers (eight men and six women) who were encouraged to implement the telehealth service. Participants were invited to participate via a global email invitation and the second author recruited in-person during regularly scheduled clinical team meetings to explain the aims of the study.” (Daskalopoulou et al., 2020) |
Describe how the subjects were identified and contacted in practice | Elaboration of the practical actions taken by researchers to identify, contact and onboard research subjects | To ensure the vividness of the study by demonstrating how the cases were found | “Participation in the study was self-selected. A notification of the opportunity to participate was posted in the restaurant one week before the interview commencement date. Each respondent received a $25 gift certificate to the restaurant.” (Rosenbaum, 2009) |
Explain the logic for multiple waves of subject selection | Description of the rationale used to select further research subjects during the empirical study | To motivate the addition of specific kind of research subjects during the research | “Multiple cases served as “replication” logic for our results, as contrary replication (observing cases where certain practices were not enacted), or as elimination of alternative explanations” (Cabiddu et al., 2019) “Participants were recruited through referrals and snowballing and data saturation was reached” (Davey and Grönroos, 2019) |
Researcher role: what was the researcher’s position in relation to the subjects and research process? | |||
Describe the relationship of the researchers to the research subjects | Description of how the researchers were related to the research subjects, particularly disclosing any pre-existing relationship | To disclose possible motivations and incentives for studying specific research subjects, and how the subjects may benefit from the research | “The researchers were involved in a collaborative research project that occurred between March 2010 and October 2013” (Sajtos et al., 2018) “We obtained the primary data through our consulting and managerial experience. […] Over the course of ten years, one of the writers was involved in a real case, which we use to show an example of a mortgage-loan-based bundling strategy that has also been pursued by other banks” (Barrutia Legarreta and Echebarria Miguel, 2004) |
Disclose the specific actions of the researchers during the data collection process | Description of what specific researchers did during data collection with respect to research subjects and among themselves | To clarify the position of the researcher in the empirical study, and to indicate how the researcher might have influenced data collection | “After every interview, the interview team conducted reflexive discussions about the content, language, and informant to consider various and opposing interpretations.” (Sieg et al., 2012) “The researchers volunteered for a nonprofit that serves individuals in homeless pathways to provide tangible help as well as develop empathy, rapport, and understanding.” (Blocker and Barrios, 2015) |
Disclose the actions of researchers during the data analysis process | Description of how specific researchers contributed to data analysis, particularly in the case of divided roles | To elaborate how the researchers engaged in analysis and how their actions and interactions might have influenced the analysis | “Two researchers with experience in the field were involved in the iterative process of defining and delineating these practices, seeking contradictory views, redundancies and new insights throughout the process of analysis […] Two other researchers offered an outside perspective, questioning the interpretations and providing regular feedback on the analysis” (Vink et al., 2019) |
Identify any iteration and surprises faced during the research process | Elaboration of any surprises encountered during the research process and description of what kind of changes were made to research design and procedures | To demonstrate how the researchers reflexively reconsidered and changed their research design and procedures | “The reliability of the interview topic guide was improved by reformulating questions that caused misinterpretation in the early interviews to make them clearer to the interviewees” (Schmidt et al., 2007) “Owing to the interpretive nature of the research, we made minor modifications to the interview protocol as the research progressed” (Sharma and Conduit, 2016) |
Data collection: what data sources were used? How exactly were the data collected? | |||
Identify all data sources used in the research | Description of all used data sources | To convince readers that the collected data were sufficient for the study purpose and questions | “Data collection involved semi-structured interviews with HR representatives/line managers and employees and an employee focus group in Oxygen.” (Hurrell and Scholarios, 2014) “The design for this project involved a longitudinal, qualitative case study, based on informal discussions, objective productivity data, a simple five-question survey and semi-structured interviews” (Dean and Indrianti, 2020) |
Describe the overall data collection process | Elaboration of data collection in terms of timeline, phases and different methods | To provide an overview of the methodology used to collect data and its suitability for the research purpose | “The study was conducted in Brazil over a period of eight months in 2017, employing three complementary data collection methods: observation, phenomenological interviews and a diary method” (Becker et al., 2020) |
Explain the practical data collection procedures employed during research | Detailed description of how the data were collected in practice | To demonstrate the suitability and sufficiency of the data collection methods with respect to the research purpose and questions | “We began our interviews with general questions pertaining to informants’ likes and dislikes, occupations, family history and lifestyles. We structured our discussion to gradually arrive at their transgression experiences and finally their narratives of forgiveness. We probed for details of the transgression from its inception to its current state—that is, partially or fully resolved, or still in limbo. Informants discussed their thoughts, feelings and reactions at different stages of the transgression. They also described their relationship prior to the transgression, how the provider addressed it, and the informant’s feelings and thoughts in relation to any recovery efforts. We encouraged elaboration by repeating or paraphrasing comments back to them and asked follow-up questions to secure complete responses.” (Tsarenko et al., 2019) |
Data analysis: how were the data analyzed? How did the researchers arrive at their conclusions based on empirical findings and prior literature? | |||
Description of the overall data analysis process | Elaboration of the data analysis process in terms of phases and methodologies employed | To provide an overview of the analysis process and its suitability for the research purpose | “The data were analyzed in two stages. The first was within-case analysis, involving write-ups of each case, and the second was cross-case analysis, which involved searches for cross-case patterns.” (Löfberg et al., 2010) |
Describe the inferential process employed in data analysis | Description of the logic and specific approaches employed during data analysis | To disclose in detail how the analysis proceeded and what specific logic was employed during the analysis process | “We used the coded and categorized data to perform a thematic content analysis through an inductive process in which we progressed from categorization to abstraction, comparison, and integration” (Ordanini et al., 2011) “In line with the abductive research approach (Dubois and Gadde 2014), three rounds of data analysis followed the first-order and second-order analysis guidelines [entailing] ongoing “cycling between emergent data, themes, concepts and dimensions and the relevant literature (Gioia et al., 2013) […] The data analysis process at each of the three rounds was divided into […] four stages.” (Chandler et al., 2019) |
Explain how literature was used during data analysis | Elaboration of how prior literature, theories and models informed the data analysis | To establish what role prior literature played in data analysis in relation to the collected data | “Given the purpose of the article, we turned to Goffman’s dramaturgical approach to social life […]. By paying attention to the clues actors see, the frames they use and how they define situations throughout everyday interactions, Goffman gave us the conceptual tools to organize our field observations.” (Suquet, 2010) |
Describe how empirical data were employed during data analysis | Elaboration of exactly how different types of data were used during data analysis | To disclose how different types of data were analyzed and used to arrive at conclusions | “The texts in the interview transcripts that appeared relevant were then highlighted and coded on the basis of phrases used by respondents […] Observations and documents were coded likewise. […] Interviews were used as the main source of data while observation and documentation were used for corroborating evidence” (Sharma et al., 2017) |
Demonstrate the chain of evidence: how empirical data led to the conclusions | Illustration of how the theoretical insights are related to empirical data | To highlight the relationship between empirical data and emerging theoretical insights | “Examples of the coding and interpretations of each practice are summarized in Table 2.” (McColl-Kennedy et al., 2015) “The results characterize on-demand services as being highly available, responsive and scalable. […] The findings are supported by evidence from the individual cases, summarized in Table 3.” (Van der Burg et al., 2019) |
Rich description: does the study provide a vivid description of the phenomenon? Does the study give voice to research participants? | |||
Quotations from interviewees | Illustrative quotations from study participants in their own words | To give voice to research subjects or informants and to enhance the vividness of the empirical evidence | “‘There were at least two or three persons [consultants] changing during these five months, I found that not very comfortable that people were changing from the [SP] side.’ Team Member SC (Case A)” (Breidbach et al., 2013) |
Rich description of phenomenon using all available data | The use of all available data to illustrate the empirical context and specific research subjects | To enhance the verisimilitude and vividness of the empirical context and subjects | “Several men gave the researcher poems, readings, short books and novels they had written that spoke to issues discussed in the course [example of story provided in article]” (Hill et al., 2016) “John explained how wearing t-shirts/hoodies that say ‘I’m a troll’ [photograph in article] turns the stigma of being a misfit into a point of community pride” (Blocker and Barrios, 2015) |
Addressing biases: what type of biases could influence the research process? What was done to address these biases during the research process? | |||
Identification of possible sources of biases | Discussion on possible factors and sources of bias, related to authors, research subjects, or study design and philosophical foundations | To demonstrate the authors' reflection on the possible biases that could affect the findings of the study | “The cross-sectional nature of this research might have produced biased results” (Alam and Perry, 2002) “Table 2 lists 10 sample statements illustrating our use of simultaneous coding […] certain cells are left blank, indicating instances where coding was omitted because of ambiguity in the statement which compromised the ability to code it reliably.” (Lyons and Brennan, 2019) |
Use of specific procedures to avoid bias | Description of specific actions taken by the researchers to avoid or mitigate the effects of possible biases | To convince readers that the influence of biases has not overly affected the findings of the study | “To confirm the findings, researchers organized a meeting with the entrepreneurs, in which the researchers presented an initial theoretical framework and their managerial implications. In return, the researchers received feedback that helped to partly confirm findings, but also to identify discrepancies that required revisiting the data,” (Wallin and Fuglsang, 2017) “Inter-rater agreement reached acceptable levels for percentage of agreement across judges (0.94)” (Abney et al., 2017) |
Ethical concerns: how did the researchers deal with ethical issues during research? | |||
Description of data confidentiality | Description of whether the data were treated as confidential, thus resorting to practices such as anonymity or pseudonyms to preserve the privacy of the research subjects | To demonstrate that ethical obligations toward the privacy of research subjects are maintained | “All interviewees and companies were guaranteed confidentiality, anonymity and non-attribution.” (Schmidt et al., 2007) “We have designed safeguards to protect the identity of conscripted participants as we disseminate accessible data.” (Keeling et al., 2013) |
Negotiation process between researchers and research subjects described | Description of any negotiations between the researchers and research subjects related to privacy or potential impact on subjects | To establish that research engaged research subjects on equal footing and attempted to establish common rules for the conduct of research | “All participants provided written informed consent for participating. To acknowledge participants’ autonomy, we emphasized in writing – via a cover letter and in the diary – that participation was voluntary and that participants could withdraw from the study at any stage.” (Engström and Elg, 2015) |
Approval of ethical review board noted | Disclosure of whether an external ethical review board was approached to gain permission for the study | To demonstrate that the ethical aspects of the study were assessed by an external party and the required ethical permission was granted for the study | “Ethics approval for the methodology as well as permission to publish the results from the social service provider was obtained,” (Hepi et al., 2017) “The ethical practice of qualitative research was ensured by […] gaining IRB approval” (Torres et al., 2018) |
Source(s): This table is created by the authors of this paper
Notes
1. Until 2008, JOSM was published as the International Journal of Service Industry Management.
2. For example, an article that employed 9 reporting practices has a scaled reporting index of 9/29 × 10 ≈ 3.10.
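In code, the scaling in note 2 amounts to a one-line rescaling of the practice count to a 0–10 range (a sketch; the function name is hypothetical):

```python
# Rescale the raw count of reporting practices (0-29) to a 0-10 index.
def scaled_reporting_index(practices_used: int, total_practices: int = 29) -> float:
    return practices_used / total_practices * 10

assert round(scaled_reporting_index(9), 2) == 3.1  # matches the worked example in note 2
```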
References
Abney, A.K., White, A., Shanahan, K.J. and Locander, W.B. (2017), “In their shoes: co-creating value from deaf/hearing perspectives”, Journal of Services Marketing, Vol. 31 Nos 4/5, pp. 313-325, doi: 10.1108/jsm-05-2016-0201.
Aguinis, H. and Solarino, A.M. (2019), “Transparency and replicability in qualitative research: the case of interviews with elite informants”, Strategic Management Journal, Vol. 40 No. 8, pp. 1291-1315, doi: 10.1002/smj.3015.
Alam, I. and Perry, C. (2002), “A customer-oriented new service development process”, Journal of Services Marketing, Vol. 16 No. 6, pp. 515-534, doi: 10.1108/08876040210443391.
Alexander, B. (2019), “Commercial, social and experiential convergence: fashion's third places”, Journal of Services Marketing, Vol. 33 No. 3, pp. 257-272, doi: 10.1108/jsm-04-2018-0116.
Amis, J.M. and Silk, M.L. (2008), “The philosophy and politics of quality in qualitative organizational research”, Organizational Research Methods, Vol. 11 No. 3, pp. 456-480, doi: 10.1177/1094428107300341.
Aria, M. and Cuccurullo, C. (2017), “bibliometrix: an R-tool for comprehensive science mapping analysis”, Journal of Informetrics, Vol. 11 No. 4, pp. 959-975, doi: 10.1016/j.joi.2017.08.007.
Bansal, P. and Corley, K. (2011), “The coming of age for qualitative research: embracing the diversity of qualitative methods”, Academy of Management Journal, Vol. 54 No. 2, pp. 233-237, doi: 10.5465/amj.2011.60262792.
Bansal, P., Smith, W.K. and Vaara, E. (2018), “New ways of seeing through qualitative research”, Academy of Management Journal, Vol. 61 No. 4, pp. 1189-1195, doi: 10.5465/amj.2018.4004.
Baron, S., Warnaby, G. and Hunter-Jones, P. (2014), “Service(s) marketing research: developments and directions”, International Journal of Management Reviews, Vol. 16 No. 2, pp. 150-171, doi: 10.1111/ijmr.12014.
Baron, S., Patterson, A., Maull, R. and Warnaby, G. (2018), “Feed people first: a service ecosystem perspective on innovative food waste reduction”, Journal of Service Research, Vol. 21 No. 1, pp. 135-150, doi: 10.1177/1094670517738372.
Barrutia Legarreta, J.M. and Echebarria Miguel, C. (2004), “Collaborative relationship bundling: a new angle on services marketing”, International Journal of Service Industry Management, Vol. 15 No. 3, pp. 264-283, doi: 10.1108/09564230410540935.
Becker, L., Jaakkola, E. and Halinen, A. (2020), “Toward a goal-oriented view of customer journeys”, Journal of Service Management, Vol. 31 No. 4, pp. 767-790, doi: 10.1108/josm-11-2019-0329.
Benoit, S., Scherschel, K., Ates, Z., Nasr, L. and Kandampully, J. (2017), “Showcasing the diversity of service research: theories, methods, and success of service articles”, Journal of Service Management, Vol. 28 No. 5, pp. 810-836, doi: 10.1108/josm-05-2017-0102.
Beverland, M. and Lindgreen, A. (2010), “What makes a good case study? A positivist review of qualitative case research published in Industrial Marketing Management, 1971-2006”, Industrial Marketing Management, Vol. 39 No. 1, pp. 56-63, doi: 10.1016/j.indmarman.2008.09.005.
Blocker, C.P. and Barrios, A. (2015), “The transformative value of a service experience”, Journal of Service Research, Vol. 18 No. 3, pp. 265-283, doi: 10.1177/1094670515583064.
Bluhm, D.J., Harman, W., Lee, T.W. and Mitchell, T.R. (2011), “Qualitative research in management: a decade of progress”, Journal of Management Studies, Vol. 48 No. 8, pp. 1866-1891, doi: 10.1111/j.1467-6486.2010.00972.x.
Breidbach, C.F., Kolb, D.G. and Srinivasan, A. (2013), “Connectivity in service systems: does technology-enablement impact the ability of a service system to co-create value?”, Journal of Service Research, Vol. 16 No. 3, pp. 428-441, doi: 10.1177/1094670512470869.
Cabiddu, F., Moreno, F. and Sebastiano, L. (2019), “Toxic collaborations: Co-destroying value in the B2B context”, Journal of Service Research, Vol. 22 No. 3, pp. 241-255, doi: 10.1177/1094670519835311.
Chandler, J.D., Danatzis, I., Wernicke, C., Akaka, M.A. and Reynolds, D. (2019), “How does innovation emerge in a service ecosystem?”, Journal of Service Research, Vol. 22 No. 1, pp. 75-89, doi: 10.1177/1094670518797479.
Corley, K., Bansal, P. and Yu, H. (2021), “An editorial perspective on judging the quality of inductive research when the methodological straightjacket is loosened”, Strategic Organization, Vol. 19 No. 1, pp. 161-175, doi: 10.1177/1476127020968180.
Cornelissen, J.P. (2017), “Preserving theoretical divergence in management research: why the explanatory potential of qualitative research should be harnessed rather than suppressed”, Journal of Management Studies, Vol. 54 No. 3, pp. 368-383, doi: 10.1111/joms.12210.
Cunliffe, A.L. (2011), “Crafting qualitative research: Morgan and Smircich 30 years on”, Organizational Research Methods, Vol. 14 No. 4, pp. 647-673, doi: 10.1177/1094428110373658.
Daskalopoulou, A., Go Jefferies, J. and Skandalis, A. (2020), “Transforming technology-mediated health-care services through strategic sense-giving”, Journal of Services Marketing, Vol. 34 No. 7, pp. 909-920, doi: 10.1108/jsm-11-2019-0452.
Davey, J. and Grönroos, C. (2019), “Health service literacy: complementary actor roles for transformative value co-creation”, Journal of Services Marketing, Vol. 33 No. 6, pp. 687-701, doi: 10.1108/jsm-09-2018-0272.
Dean, A. and Indrianti, N. (2020), “Transformative service research at the BoP: the case of Etawa goat farmers in Indonesia”, Journal of Services Marketing, Vol. 34 No. 5, pp. 665-681, doi: 10.1108/jsm-07-2019-0251.
Dubois, A. and Gadde, L.-E. (2002), “Systematic combining: an abductive approach to case research”, Journal of Business Research, Vol. 55 No. 7, pp. 553-560, doi: 10.1016/s0148-2963(00)00195-8.
Dubois, A. and Gadde, L.-E. (2014), “‘Systematic combining’—a decade later”, Journal of Business Research, Vol. 67 No. 6, pp. 1277-1284, doi: 10.1016/j.jbusres.2013.03.036.
Easterby-Smith, M., Golden-Biddle, K. and Locke, K. (2008), “Working with pluralism: determining quality in qualitative research”, Organizational Research Methods, Vol. 11 No. 3, pp. 419-429, doi: 10.1177/1094428108315858.
Eisenhardt, K.M. (1989), “Building theories from case study research”, Academy of Management Review, Vol. 14 No. 4, pp. 532-550, doi: 10.2307/258557.
Eisenhardt, K.M., Graebner, M.E. and Sonenshein, S. (2016), “Grand challenges and inductive methods: rigor without rigor mortis”, Academy of Management Journal, Vol. 59 No. 4, pp. 1113-1123, doi: 10.5465/amj.2016.4004.
Engström, J. and Elg, M. (2015), “A self-determination theory perspective on customer participation in service development”, Journal of Services Marketing, Vol. 29 Nos 6/7, pp. 511-521, doi: 10.1108/jsm-01-2015-0053.
Epp, A.M. and Otnes, C.C. (2021), “High-quality qualitative research: getting into gear”, Journal of Service Research, Vol. 24 No. 2, pp. 163-167, doi: 10.1177/1094670520961445.
Fendt, J. and Sachs, W. (2008), “Grounded theory method in management research: users' perspectives”, Organizational Research Methods, Vol. 11 No. 3, pp. 430-455, doi: 10.1177/1094428106297812.
Gephart, R.P. (2004), “Qualitative research and the Academy of Management Journal”, Academy of Management Journal, Vol. 47 No. 4, pp. 454-462, doi: 10.5465/amj.2004.14438580.
Gibbert, M. and Ruigrok, W. (2010), “The ‘what’ and ‘how’ of case study rigor: three strategies based on published work”, Organizational Research Methods, Vol. 13 No. 4, pp. 710-737, doi: 10.1177/1094428109351319.
Gibbert, M., Ruigrok, W. and Wicki, B. (2008), “What passes as a rigorous case study?”, Strategic Management Journal, Vol. 29 No. 13, pp. 1465-1474, doi: 10.1002/smj.722.
Gioia, D.A., Corley, K.G. and Hamilton, A.L. (2013), “Seeking qualitative rigor in inductive research: notes on the Gioia methodology”, Organizational Research Methods, Vol. 16 No. 1, pp. 15-31, doi: 10.1177/1094428112452151.
Glaser, B.G. and Strauss, A.L. (1967), The Discovery of Grounded Theory: Strategies for Qualitative Research, Aldine, Chicago.
Gnyawali, D.R. and Song, Y. (2016), “Pursuit of rigor in research: illustration from coopetition literature”, Industrial Marketing Management, Vol. 57, pp. 12-22, doi: 10.1016/j.indmarman.2016.05.004.
Goffin, K., Åhlström, P., Bianchi, M. and Richtnér, A. (2019), “State-of-the-art: the quality of case study research in innovation management”, Journal of Product Innovation Management, Vol. 36 No. 5, pp. 586-615, doi: 10.1111/jpim.12492.
Golden-Biddle, K. and Locke, K. (1993), “Appealing work: an investigation of how ethnographic texts convince”, Organization Science, Vol. 4 No. 4, pp. 595-616, doi: 10.1287/orsc.4.4.595.
Graebner, M.E., Martin, J.A. and Roundy, P.T. (2012), “Qualitative data: cooking without a recipe”, Strategic Organization, Vol. 10 No. 3, pp. 276-284, doi: 10.1177/1476127012452821.
Guba, E.G. and Lincoln, Y.S. (1994), “Competing paradigms in qualitative research”, in Denzin, N.K. and Lincoln, Y.S. (Eds), Handbook of Qualitative Research, Sage Publications, Vol. 2, pp. 105-117.
Hakanen, T. and Jaakkola, E. (2012), “Co-creating customer-focused solutions within business networks: a service perspective”, Journal of Service Management, Vol. 23 No. 4, pp. 593-611, doi: 10.1108/09564231211260431.
Hannigan, T.R., Haans, R.F.J., Vakili, K., Tchalian, H., Glaser, V.L., Wang, M.S., Kaplan, S. and Jennings, P.D. (2019), “Topic modeling in management research: rendering new theory from textual data”, Academy of Management Annals, Vol. 13 No. 2, pp. 586-632, doi: 10.5465/annals.2017.0099.
Harley, B. and Cornelissen, J. (2022), “Rigor with or without templates? The pursuit of methodological rigor in qualitative research”, Organizational Research Methods, Vol. 25 No. 2, pp. 239-261, doi: 10.1177/1094428120937786.
Harris, L. and Daunt, K. (2013), “Managing customer misbehavior: challenges and strategies”, Journal of Services Marketing, Vol. 27 No. 4, pp. 281-293, doi: 10.1108/08876041311330762.
Hepi, M., Foote, J., Finsterwalder, J., Moana-o-Hinerangi, M.-H., Carswell, S. and Baker, V. (2017), “An integrative transformative service framework to improve engagement in a social service ecosystem: the case of He Waka Tapu”, Journal of Services Marketing, Vol. 31 Nos 4/5, pp. 423-437, doi: 10.1108/jsm-06-2016-0222.
Hill, R.P., Capella, M.L., Rapp, J.M. and Gentlemen, G. (2016), “Antiservice as guiding maxim: tough lessons from a maximum security prison”, Journal of Service Research, Vol. 19 No. 1, pp. 57-71, doi: 10.1177/1094670515593914.
Hirsch, J.E. (2005), “An index to quantify an individual's scientific research output”, Proceedings of the National Academy of Sciences, Vol. 102 No. 46, pp. 16569-16572, doi: 10.1073/pnas.0507655102.
Holmlund, M., Witell, L. and Gustafsson, A. (2020), “Viewpoint: getting your qualitative service research published”, Journal of Services Marketing, Vol. 34 No. 1, pp. 111-116, doi: 10.1108/jsm-11-2019-0444.
Hoorani, B.H., Nair, L.B. and Gibbert, M. (2019), “Designing for impact: the effect of rigor and case study design on citations of qualitative case studies in management”, Scientometrics, Vol. 121 No. 1, pp. 285-306, doi: 10.1007/s11192-019-03178-w.
Hurrell, S.A. and Scholarios, D. (2014), “‘The people make the brand’: reducing social skills gaps through person-brand fit and human resource management practices”, Journal of Service Research, Vol. 17 No. 1, pp. 54-67, doi: 10.1177/1094670513484508.
Järvensivu, T. and Törnroos, J.-Å. (2010), “Case study research with moderate constructionism: conceptualization and practical illustration”, Industrial Marketing Management, Vol. 39 No. 1, pp. 100-108, doi: 10.1016/j.indmarman.2008.05.005.
Jarzabkowski, P., Bednarek, R. and Lê, J.K. (2014), “Producing persuasive findings: demystifying ethnographic textwork in strategy and organization research”, Strategic Organization, Vol. 12 No. 4, pp. 274-287, doi: 10.1177/1476127014554575.
Jick, T.D. (1979), “Mixing qualitative and quantitative methods: triangulation in action”, Administrative Science Quarterly, Vol. 24 No. 4, pp. 602-611, doi: 10.2307/2392366.
Johnson, P., Buehring, A., Cassell, C. and Symon, G. (2006), “Evaluating qualitative management research: towards a contingent criteriology”, International Journal of Management Reviews, Vol. 8 No. 3, pp. 131-156, doi: 10.1111/j.1468-2370.2006.00124.x.
Jonsen, K., Fendt, J. and Point, S. (2018), “Convincing qualitative research: what constitutes persuasive writing?”, Organizational Research Methods, Vol. 21 No. 1, pp. 30-67, doi: 10.1177/1094428117706533.
Keeling, D., Khan, A. and Newholm, T. (2013), “Internet forums and negotiation of healthcare knowledge cultures”, Journal of Services Marketing, Vol. 27 No. 1, pp. 59-75, doi: 10.1108/08876041311296383.
Ketokivi, M. and Choi, T. (2014), “Renaissance of case research as a scientific method”, Journal of Operations Management, Vol. 32 No. 5, pp. 232-240, doi: 10.1016/j.jom.2014.03.004.
Ketokivi, M. and Mantere, S. (2010), “Two strategies for inductive reasoning in organizational research”, Academy of Management Review, Vol. 35 No. 2, pp. 315-333, doi: 10.5465/amr.35.2.zok315.
Köhler, T. (2016), “From the editors: on writing up qualitative research in management learning and education”, Academy of Management Learning and Education, Vol. 15 No. 3, pp. 400-418, doi: 10.5465/amle.2016.0275.
Köhler, T., Smith, A. and Bhakoo, V. (2022), “Templates in qualitative research methods: origins, limitations, and new directions”, Organizational Research Methods, Vol. 25 No. 2, pp. 183-210, doi: 10.1177/10944281211060710.
Kumar, V., Sharma, A. and Gupta, S. (2017), “Accessing the influence of strategic marketing research on generating impact: moderating roles of models, journals, and estimation approaches”, Journal of the Academy of Marketing Science, Vol. 45 No. 2, pp. 164-185, doi: 10.1007/s11747-017-0518-9.
Lages, C.R., Simões, C.M.N., Fisk, R.P. and Kunz, W.H. (2013), “Knowledge dissemination in the global service marketing community”, Managing Service Quality: An International Journal, Vol. 23 No. 4, pp. 272-290, doi: 10.1108/msq-03-2013-0048.
Langley, A., Smallman, C., Tsoukas, H. and Van de Ven, A.H. (2013), “Process studies of change in organization and management: unveiling temporality, activity, and flow”, Academy of Management Journal, Vol. 56 No. 1, pp. 1-13, doi: 10.5465/amj.2013.4001.
Leitch, C.M., Hill, F.M. and Harrison, R.T. (2010), “The philosophy and practice of interpretivist research in entrepreneurship: quality, validation, and trust”, Organizational Research Methods, Vol. 13 No. 1, pp. 67-84, doi: 10.1177/1094428109339839.
Lincoln, Y.S. and Guba, E.G. (1985), Naturalistic Inquiry, Sage Publications, Beverly Hills.
Lincoln, Y.S. and Guba, E.G. (1986), “But is it rigorous? Trustworthiness and authenticity in naturalistic evaluation”, New Directions for Program Evaluation, Vol. 1986 No. 30, pp. 73-84, doi: 10.1002/ev.1427.
Löfberg, N., Witell, L. and Gustafsson, A. (2010), “Service strategies in a supply chain”, Journal of Service Management, Vol. 21 No. 4, pp. 427-440, doi: 10.1108/09564231011066079.
Lyons, P. and Brennan, L. (2019), “Assessing value from business-to-business services relationships: temporality, tangibility, temperament, and trade-offs”, Journal of Service Research, Vol. 22 No. 1, pp. 27-43, doi: 10.1177/1094670518805569.
McColl-Kennedy, J.R., Gustafsson, A., Jaakkola, E., Klaus, P., Radnor, Z.J., Perks, H. and Friman, M. (2015), “Fresh perspectives on customer experience”, Journal of Services Marketing, Vol. 29 Nos 6/7, pp. 430-435, doi: 10.1108/jsm-01-2015-0054.
Mees-Buss, J., Welch, C. and Piekkari, R. (2022), “From templates to heuristics: how and why to move beyond the Gioia methodology”, Organizational Research Methods, Vol. 25 No. 2, pp. 405-429, doi: 10.1177/1094428120967716.
Neu, W.A. and Brown, S.W. (2005), “Forming successful business-to-business services in goods-dominant firms”, Journal of Service Research, Vol. 8 No. 1, pp. 3-17, doi: 10.1177/1094670505276619.
Oliva, R. and Kallenberg, R. (2003), “Managing the transition from products to services”, International Journal of Service Industry Management, Vol. 14 No. 2, pp. 160-172, doi: 10.1108/09564230310474138.
Ordanini, A., Miceli, L., Pizzetti, M. and Parasuraman, A. (2011), “Crowd-funding: transforming customers into investors through innovative service platforms”, Journal of Service Management, Vol. 22 No. 4, pp. 443-470, doi: 10.1108/09564231111155079.
Ostrom, A.L., Parasuraman, A., Bowen, D.E., Patrício, L., Voss, C.A. and Lemon, K. (2015), “Service research priorities in a rapidly changing context”, Journal of Service Research, Vol. 18 No. 2, pp. 127-159, doi: 10.1177/1094670515576315.
Pareigis, J., Echeverri, P. and Edvardsson, B. (2012), “Exploring internal mechanisms forming customer servicescape experiences”, Journal of Service Management, Vol. 23 No. 5, pp. 677-695, doi: 10.1108/09564231211269838.
Piekkari, R., Plakoyiannaki, E. and Welch, C. (2010), “‘Good’ case research in industrial marketing: insights from research practice”, Industrial Marketing Management, Vol. 39 No. 1, pp. 109-117, doi: 10.1016/j.indmarman.2008.04.017.
Pratt, M.G. (2008), “Fitting oval pegs into round holes: tensions in evaluating and publishing qualitative research in top-tier North American journals”, Organizational Research Methods, Vol. 11 No. 3, pp. 481-509, doi: 10.1177/1094428107303349.
Pratt, M.G. (2009), “From the editors: for the lack of a boilerplate: tips on writing up (and reviewing) qualitative research”, Academy of Management Journal, Vol. 52 No. 5, pp. 856-862, doi: 10.5465/amj.2009.44632557.
Pratt, M.G., Kaplan, S. and Whittington, R. (2020), “The tumult over transparency: decoupling transparency from replication in establishing trustworthy qualitative research”, Administrative Science Quarterly, Vol. 65 No. 1, pp. 1-19, doi: 10.1177/0001839219887663.
Pratt, M.G., Sonenshein, S. and Feldman, M.S. (2022), “Moving beyond templates: a bricolage approach to conducting trustworthy qualitative research”, Organizational Research Methods, Vol. 25 No. 2, pp. 211-238, doi: 10.1177/1094428120927466.
Reay, T., Zafar, A., Monteiro, P. and Glaser, V. (2019), “Presenting findings from qualitative research: one size does not fit all!”, in Zilber, T.B., Amis, J.M. and Mair, J. (Eds), Research in the Sociology of Organizations, Emerald Publishing, Vol. 59, pp. 201-216.
Rosenbaum, M.S. (2009), “Exploring commercial friendships from employees' perspectives”, Journal of Services Marketing, Vol. 23 No. 1, pp. 57-66, doi: 10.1108/08876040910933101.
Rosenzweig, S., Grinstein, A. and Ofek, E. (2016), “Social network utilization and the impact of academic research in marketing”, International Journal of Research in Marketing, Vol. 33 No. 4, pp. 818-839, doi: 10.1016/j.ijresmar.2016.02.002.
Sajtos, L., Kleinaltenkamp, M. and Harrison, J. (2018), “Boundary objects for institutional work across service ecosystems”, Journal of Service Management, Vol. 29 No. 4, pp. 615-640, doi: 10.1108/josm-01-2017-0011.
Sandberg, J. (2005), “How do we justify knowledge produced within interpretive approaches?”, Organizational Research Methods, Vol. 8 No. 1, pp. 41-68, doi: 10.1177/1094428104272000.
Schmidt, S., Tyler, K. and Brennan, R. (2007), “Adaptation in inter-firm relationships: classification, motivation, calculation”, Journal of Services Marketing, Vol. 21 No. 7, pp. 530-537, doi: 10.1108/08876040710824889.
Sharma, P., Tam, J.L.M. and Kim, N. (2009), “Demystifying intercultural service encounters: toward a comprehensive conceptual framework”, Journal of Service Research, Vol. 12 No. 2, pp. 227-242, doi: 10.1177/1094670509338312.
Sharma, S. and Conduit, J. (2016), “Cocreation culture in health care organizations”, Journal of Service Research, Vol. 19 No. 4, pp. 438-457, doi: 10.1177/1094670516666369.
Sharma, S., Conduit, J. and Rao Hill, S. (2017), “Hedonic and eudaimonic well-being outcomes from co-creation roles: a study of vulnerable customers”, Journal of Services Marketing, Vol. 31 Nos 4/5, pp. 397-411, doi: 10.1108/jsm-06-2016-0236.
Sieg, J.H., Fischer, A., Wallin, M.W. and von Krogh, G. (2012), “Proactive diagnosis: how professional service firms sustain client dialogue”, Journal of Service Management, Vol. 23 No. 2, pp. 253-278, doi: 10.1108/09564231211226132.
Siggelkow, N. (2007), “Persuasion with case studies”, Academy of Management Journal, Vol. 50 No. 1, pp. 20-24, doi: 10.5465/amj.2007.24160882.
Stremersch, S., Camacho, N., Vanneste, S. and Verniers, I. (2015), “Unraveling scientific impact: citation types in marketing journals”, International Journal of Research in Marketing, Vol. 32 No. 1, pp. 64-77, doi: 10.1016/j.ijresmar.2014.09.004.
Stringer, M.J., Sales-Pardo, M. and Amaral, L.A.N. (2010), “Statistical validation of a global model for the distribution of the ultimate number of citations accrued by papers published in a scientific journal”, Journal of the American Society for Information Science and Technology, Vol. 61 No. 7, pp. 1377-1385, doi: 10.1002/asi.21335.
Suquet, J. (2010), “Drawing the line: how inspectors enact deviant behaviors”, Journal of Services Marketing, Vol. 24 No. 6, pp. 468-475, doi: 10.1108/08876041011072582.
Svensson, G. (2006), “Sequential service quality in service encounter chains: case studies”, Journal of Services Marketing, Vol. 20 No. 1, pp. 51-58, doi: 10.1108/08876040610646572.
Symon, G., Cassell, C. and Johnson, P. (2018), “Evaluative practices in qualitative management research: a critical review”, International Journal of Management Reviews, Vol. 20 No. 1, pp. 134-154, doi: 10.1111/ijmr.12120.
Thelwall, M. (2016), “The discretised lognormal and hooked power law distributions for complete citation data: best options for modelling and regression”, Journal of Informetrics, Vol. 10 No. 2, pp. 336-346, doi: 10.1016/j.joi.2015.12.007.
Torres, E.N., Lugosi, P., Orlowski, M. and Ronzoni, G. (2018), “Consumer-led experience customization: a socio-spatial approach”, Journal of Service Management, Vol. 29 No. 2, pp. 206-229, doi: 10.1108/josm-06-2017-0135.
Tracy, S.J. (2010), “Qualitative quality: eight ‘big-tent’ criteria for excellent qualitative research”, Qualitative Inquiry, Vol. 16 No. 10, pp. 837-851, doi: 10.1177/1077800410383121.
Tsarenko, Y., Strizhakova, Y. and Otnes, C.C. (2019), “Reclaiming the future: understanding customer forgiveness of service transgressions”, Journal of Service Research, Vol. 22 No. 2, pp. 139-155, doi: 10.1177/1094670518802060.
Valtakoski, A. (2020), “The evolution and impact of qualitative research in Journal of Services Marketing”, Journal of Services Marketing, Vol. 34 No. 1, pp. 8-23, doi: 10.1108/jsm-12-2018-0359.
van der Burg, R.-J., Ahaus, K., Wortmann, H. and Huitema, G.B. (2019), “Investigating the on-demand service characteristics: an empirical study”, Journal of Service Management, Vol. 30 No. 6, pp. 739-765, doi: 10.1108/josm-01-2019-0025.
van Riel, A.C.R. and Lievens, A. (2004), “New service development in high tech sectors: a decision-making perspective”, International Journal of Service Industry Management, Vol. 15 No. 1, pp. 72-101, doi: 10.1108/09564230410523349.
Verleye, K. (2019), “Designing, writing-up and reviewing case study research: an equifinality perspective”, Journal of Service Management, Vol. 30 No. 5, pp. 549-576, doi: 10.1108/josm-08-2019-0257.
Vink, J., Edvardsson, B., Wetter-Edman, K. and Tronvoll, B. (2019), “Reshaping mental models – enabling innovation through service design”, Journal of Service Management, Vol. 30 No. 1, pp. 75-104, doi: 10.1108/josm-08-2017-0186.
von Koskull, C. (2020), “Increasing rigor and relevance in service research through ethnography”, Journal of Services Marketing, Vol. 34 No. 1, pp. 74-77, doi: 10.1108/jsm-03-2019-0143.
Wallin, A.J. and Fuglsang, L. (2017), “Service innovations breaking institutionalized rules of health care”, Journal of Service Management, Vol. 28 No. 5, pp. 972-997, doi: 10.1108/josm-04-2017-0090.
Welch, C. and Piekkari, R. (2017), “How should we (not) judge the ‘quality’ of qualitative research? A re-assessment of current evaluative criteria in International Business”, Journal of World Business, Vol. 52 No. 5, pp. 714-725, doi: 10.1016/j.jwb.2017.05.007.
Witell, L., Holmlund, M. and Gustafsson, A. (2020), “Guest editorial: a new dawn for qualitative service research”, Journal of Services Marketing, Vol. 34 No. 1, pp. 1-7, doi: 10.1108/jsm-11-2019-0443.
Yin, R.K. (2003), Case Study Research: Design and Methods, 3rd ed., Sage Publications, Thousand Oaks, CA.
Zilber, T.B. and Zanoni, P. (2022), “Templates of ethnographic writing in organization studies: beyond the hegemony of the detective story”, Organizational Research Methods, Vol. 25 No. 2, pp. 371-404, doi: 10.1177/1094428120944468.
Acknowledgements
The authors wish to thank the editor and two anonymous reviewers for their insightful comments and suggestions on previous versions of this paper.