Contesting conformity: how and why academics may oppose the conforming influences of intra-organizational performance evaluations

Hans Englund (Örebro University, School of Business, Örebro, Sweden)
Jonas Gerdin (Örebro University, School of Business, Örebro, Sweden)

Accounting, Auditing & Accountability Journal

ISSN: 0951-3574

Article publication date: 22 May 2020

Issue publication date: 10 July 2020


Abstract

Purpose

The purpose of this paper is to develop a theoretical model elaborating on the type of conditions that can inhibit (or at least temporarily hold back) “reactive conformance” in the wake of an increasing reliance on quantitative performance evaluations of academic research and researchers.

Design/methodology/approach

A qualitative study of a research group at a Swedish university that was recurrently exposed to quantitative performance evaluations of its research activities.

Findings

The empirical findings show how the research group under study exhibited a surprisingly high level of non-compliance and non-conformity in relation to what was deemed important and legitimate by the prevailing performance evaluations. Based on this, we identify four important qualities of pre-existing research/er ideals that seem to make them particularly resilient to an infiltration of an “academic performer ideal,” namely that they are (1) central and since-long established, (2) orthogonal to (i.e. very different from) the academic performer ideal as materialized by the performance measurement system, (3) largely shared within the research group and (4) externally legitimate. The premise is that these qualities form an important basis and motivation for not only criticizing, but also contesting, the academic performer ideal.

Originality/value

Extant research generally finds that the proliferation of quantitatively oriented performance evaluations within academia makes researchers adopt a new type of academic performer ideal which promotes research conformity and superficiality. This study draws upon, and adds to, an emerging literature that has begun to problematize this “reactive conformance-thesis” through identifying four qualities of pre-existing research/er ideals that can inhibit (or at least temporarily hold back) such “reactive research conformance.”


Citation

Englund, H. and Gerdin, J. (2020), "Contesting conformity: how and why academics may oppose the conforming influences of intra-organizational performance evaluations", Accounting, Auditing & Accountability Journal, Vol. 33 No. 5, pp. 913-938. https://doi.org/10.1108/AAAJ-03-2019-3932

Publisher


Emerald Publishing Limited

Copyright © 2020, Hans Englund and Jonas Gerdin

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

There is a growing stream of literature discussing the experiences and effects of an ongoing neoliberalization of academia, where various forms of journal rankings, league tables and performance evaluation schemes are increasingly mobilized as a means of governing academic work (e.g. Clarke and Knights, 2015; Gendron, 2015; Hopwood, 2008; Martin-Sardesai et al., 2017b; Parker, 2012; ter Bogt and Scapens, 2012; Tourish and Willmott, 2015). A general presumption in this literature is that the proliferation of this type of performance measurement system (PMS) generates “reactive conformance” (Espeland and Sauder, 2007; Pollock et al., 2018). That is, researchers recurrently exposed to PMSs tend to “conform and perform to the [PMS] criteria” (Gioia and Corley, 2002, p. 110; see also, e.g. Alvesson and Spicer, 2016; Mingers and Willmott, 2013; Parker, 2011, 2012; Sauder and Espeland, 2009), and even become “academic performers” (Gendron, 2008) to whom “satisfying [PMS] criteria is becoming as important, and possibly more important, than paying attention to the substance of the work undertaken” (Willmott, 1995, p. 1024; see also Gendron, 2015; Parker, 2012). The underpinning premise is that PMSs can both force academics to comply with PMS criteria so as to get access to important resources from key constituents (Espeland and Sauder, 2007; Gendron, 2015; Willmott, 1995), and make them comply out of “free will” through appealing to their inner motivations to be “progressive, efficient and competitive” (see, e.g. Davies and Petersen, 2005; Gendron, 2008).

Notwithstanding these important insights, a recent stream of literature has begun to problematize this “reactive conformance thesis” (see, e.g. Gerdin and Englund, 2019; Pollock et al., 2018). In this paper, we build upon and extend this latter literature by drawing upon an interview study of a research group at a Swedish university that was regularly exposed to bibliometrics-based performance evaluations. Although this research group came out very poorly in a major intra-organizational research assessment, they exhibited a surprisingly high level of non-compliance and non-conformity in relation to what was deemed important and legitimate by the prevailing PMS. Indeed, there are a few earlier accounts in the extant literature indicating that academics may “carve out spaces for thinking otherwise” (Archer, 2008a, p. 265; see also Clegg, 2008; Ylijoki and Ursin, 2013). However, this literature remains, to date, less detailed about how and why such non-conformity may come about (Kalfa et al., 2018). Based on this, the purpose of this paper is to develop a theoretical model which analytically disentangles the type of conditions that can inhibit (or at least temporarily hold back) reactive conformance in the wake of an increasing reliance on quantitative performance evaluations of academic research and researchers.

Overall, our emergent model highlights the importance of pre-existing cognitive scripts/schemas (Gioia and Poole, 1984; Harris, 1994) in the form of research/er ideals, i.e. an integrated set of norms, values and ideas which not only expresses what (a) good research/er is, but also permeates day-to-day research practices. Specifically, we identify four important qualities of such ideals that seem to make them particularly resilient to an infiltration of an academic performer ideal, namely that they are (1) central and since-long established, (2) orthogonal to (i.e. very different from) the academic performer ideal as materialized by the PMS, (3) largely shared within the research group and (4) externally legitimate. In fact, when pre-existing and alternative research/er ideals comprise these qualities, we find that they may form an important basis and motivation for not only criticizing and delegitimizing, but also for contesting, the academic performer ideal.

Arguably, these findings contribute to extant knowledge by adding further insights to the stream of studies showing that individual academics (Archer, 2008a, b; Clarke and Knights, 2015; Clegg, 2008) and academic institutions (Gerdin and Englund, 2019) may carve out some space for “acting otherwise.” Specifically, we analytically disentangle important qualities of extant research/er ideals that help explain how and why such non-conforming behavior may come about. In so doing, our findings also provide theoretical nuance to the reactive conformance thesis which has thus far dominated the literature (see, e.g. Espeland and Sauder, 2007; Gendron, 2008, 2015). Again, we know a fair amount about how the design and use of PMSs can generate reactive conformance (see, e.g. Englund and Gerdin, 2019, and the literature review in the next section), but considerably less about what makes some extant research/er ideals less vulnerable to the infiltration and colonization of the academic performer ideal. Related to this, our findings also add to the ongoing research-practice gap debate (see, e.g. Baldvinsdottir et al., 2010; Tucker and Parker, 2014; Tucker and Schaltegger, 2016) by analytically disentangling how and why academics may come to prioritize research with policy and practice implications over research specifically designated for publication in the international refereed journals rewarded by the PMSs.

The remainder of the paper is organized as follows. The next section reviews extant literature on the mechanisms of reactive conformity (Espeland and Sauder, 2007), after which we turn to the literature that has begun to contest this reactive conformity thesis. Next, we account for our empirical research context, and methods for data collection and analysis. After that, we turn to our empirical material and show how and why the research group studied was able to, at least temporarily, contest the conforming influence of a PMS. In a concluding section, we outline our results and contributions.

2. Mechanisms of reactive conformity

There is a large and growing literature that takes an interest in how and why academic performer ideals may ingratiate themselves, and take root, even in academia, whose practices are traditionally seen as driven by other ideals (see, e.g. Parker, 2012; Willmott, 1995). In this literature, as noted by Kallio et al. (2016, p. 687), it is typically assumed that PMSs and their underlying ideals are largely in conflict “with traditional academic values such as freedom, autonomy and belonging to a community.” Nevertheless, Kallio et al. argue, the proliferation of this type of PMS contributes to constructing a new academic ideal. That is, a type of academic performer ideal that is composed of “a set of ideas and practices which stress the search for technical optimality via the most efficient input/output ratio” (Gendron, 2008, p. 99; see also Ball, 2012; Davies and Petersen, 2005).

Drawing upon Gendron (2008), PMSs can be said to operate at two levels so as to generate this type of reactive conformity (Espeland and Sauder, 2007), namely disciplinary and self-disciplinary levels. When PMSs operate at a disciplinary level, academic institutions and individual academics feel coerced to comply with the criteria materialized by the PMSs (Englund and Gerdin, 2019; Henkel, 2005; Larner and Le Heron, 2005), because if they do not, they risk losing particular rewards or facing different types of sanctions/punishments [see also Gendron's (2015) discussion about the alleged consequences of the paying-off mentality in academia]. Along these lines, for example, Espeland and Sauder (2007, p. 13) showed that even small differences in positional status as manifested in public university rankings made law schools adapt to the evaluation criteria “not only to [influence] the reactions of prospective students, but also to other constituents such as trustees, boards of visitors, and alumni, all of whom provide financial and administrative support to the schools” (see also Parker, 2012, 2013). And as shown by Agyemang and Broadbent (2015), this recurrent exposure to quantitative performance evaluation may even make universities feel obliged to develop internal PMSs that amplify, rather than dissipate, the control signals from external actors (see also Martin-Sardesai et al., 2017a). Along these lines, for example, Gendron (2008, 2015) and others (Alvesson and Spicer, 2016; Clarke and Knights, 2015; Power, 1999; Shore, 2008) have pointed out that tenure and other promotion decisions in universities are increasingly related to the number of “hits” achieved in highly ranked research journals. And accordingly, academics, as well as PhD students (Pelger and Grottke, 2015), tend to adapt to what is deemed important and legitimate in these highly esteemed journals (see also Hopwood, 2008; Mingers and Willmott, 2013; Northcott and Linacre, 2010). Or as put by Parker (2012, p. 1160), “[a]s researchers compete to obtain entry for their work into the highest ranked journals, they increasingly tend towards homogenization of subject matter, research design, methodologies and theories as they mimic what is seen as the recipe for gaining entry.”

However, it should be noted that not only are tangible rewards and punishments such as (the lack of) promotions and performance-based pay at stake, but also more symbolic values related to the academics' self-esteem and perceptions about legitimacy (see, e.g. Ball, 2003). As argued by Gendron (2015, p. 172), for example, PMSs “can be conceived as categorizing and disciplinary mechanisms, which engender the fear of being ‘abnormal’ and not productive enough.” That is, through making particular qualities of academics (in)visible and (ab)normal, and through ordering them on a single scale in terms of who is better/worse (see, e.g. Espeland and Stevens, 1998, 2008), PMSs create strong pressures to adapt to the ways in which academics are (or want to be) displayed (see also Ball, 2003; Englund and Gerdin, 2019; Lynch, 2006).

When PMSs operate at a self-disciplinary level (Gendron, 2008), individuals do not primarily adapt their behavior to the academic performer ideal because of coercive (external) pressures, but rather because the PMSs re-present something that is perceived as natural, relevant and even desirable (Davies and Bansel, 2010; Davies and Petersen, 2005; Morrissey, 2015; Shore, 2008). That is, rather than being viewed as essentially “external objects” that have to be managed so as to secure/avoid particular tangible or symbolic rewards/punishments, the PMSs embody an ideal that academics can identify with and want to achieve (Englund and Gerdin, 2019). Or as put by Archer (2008b, pp. 388–389):

The power of this form of governance lies in the ways it is not simply imposed, but is taken up internally by subjects who learn not only to perform to external audit, but to enact a form of self-governance (a governmentality of the soul, as Rose 1990 describes it). The value of subjects, within neoliberal regimes, is thus assessed by their ability to produce particular products within specified timescales and parameters.

And accordingly, when academics want to realize themselves as good researchers, they tend to conform their work to the ideals expressed by the PMSs (Ball, 2012; Peseta et al., 2017). Along these lines, for example, Ylijoki and Ursin's (2013) interview study of some 40 academics identified several narratives among academics which cherished the virtues of success, dynamism, flexibility and entrepreneurial spirit [1], all of which form part of, and are constituted by, the academic performer ideal (see also Davies and Petersen, 2005). Similarly, while the academics in Knights and Clarke's (2014, p. 352, emphasis in original) study expressed ambivalence about the demands of PMSs, “many of us still remain addicted to the pursuit of a solid sense of self that, however illusive, drives us to aspire to be recognized by external adjudicators [… And accordingly] techniques which are both performative and panoptic contribute to a form of self-regulation that is extremely compelling and seductive.” And as suggested above, when academics feel this type of inner commitment to scoring high in the PMSs, they tend to conform more to the evaluation criteria, thereby leading to less risky, innovative, and practice-relevant research (see also Annisette et al., 2015; Butler and Spoelstra, 2012; Northcott and Linacre, 2010; Parker, 2012).

3. Contesting the reactive conformity thesis

As noted earlier, there has emerged a stream of literature that has begun to problematize the “reactive conformity thesis” described in the previous section. For example, Pollock et al. (2018) investigated organizations exposed to multiple rankings and found that this heterogeneity provided greater room for strategic maneuvering where evaluated organizations deployed a number of response tactics. As the authors argue, “simply conforming would be implausible. An entity reacting equally and indiscriminately to increased and diverse pressures would risk being pulled in different directions” (p. 56). In a similar vein, Gerdin and Englund (2019) found that while PMSs contribute to constituting the relative positional status of evaluated units (Espeland and Stevens, 1998, 2008; Miller and Power, 2013), a status that is difficult to oppose (Espeland and Sauder, 2007; Gendron, 2008; Parker, 2011; Sauder and Espeland, 2009), such units may nevertheless undertake a number of public response tactics so as to try to affect how imposed ranks are construed among important stakeholders.

Other parts of this literature have taken an interest in how researchers, despite being exposed to the academic performer ideal (re)produced by PMSs, may indeed carve out spaces for thinking and acting otherwise (see also the discussions in Davies and Petersen, 2005; Lynch, 2015). For example, Clegg (2008, p. 343) concluded based on an interview study that “[d]espite all the pressure of performativity, individuals have created spaces for the exercise of principled personal autonomy and agency.” In a similar vein, Archer (2008a, p. 282) noted that while it is difficult, if not impossible, to exist outside the neoliberal context that the proliferation of PMSs engenders, “at the micro level, I would argue that there are important moments and spaces of resistance.”

In the literature, however, there is also evidence of more profound opposition to the academic performer ideal. For example, Ylijoki and Ursin (2013, p. 1139) identified a narrative which they referred to as “resistance”, and argued that “[r]esistance is often linked with a sort of nostalgic yearning for the past. Traditional academic values and ideals such as academic freedom, autonomy, collegiality and the Humboldtian model of the university, are appreciated and used as a reference point in struggling against the current changes” (see also Anderson, 2008). Along these lines, Archer (2008a) found that her respondents located their academic identities in temporal terms by comparing the “present” with a similar type of traditional “past” referred to as “the golden age.” And as Archer concludes in a related paper about younger academics (2008b, p. 401):

[T]he finding that they are attempting to adopt critical and reflexive positions in relation to dominant practices and are trying to resist the drive for performativity through the taking up of more ‘traditional’ academic discourses (e.g. around notions of collegiality), might be interpreted as reflecting the ongoing power and resilience of ‘traditional’ constructions of academic identity/culture that, rather than being under threat and on the brink of disappearance, continue to be actively taken up and reworked by the next generation of academics.

So overall, while the dominant part of the literature observes or foresees that the increasing reliance on PMSs in academia will engender research conformity through both disciplinary and self-disciplinary processes (Englund and Gerdin, 2019; Gendron, 2008; see Section 2 above), a growing stream of research has begun to problematize this reactive conformance thesis (e.g. Archer, 2008a, b; Clegg, 2008; Gerdin and Englund, 2019; Pollock et al., 2018). And an important insight is that attempts to carve out a space for acting otherwise seem to be fed and formed by pre-existing research/er ideals of some sort (e.g. the Humboldtian ideal referred to above). In fact, it seems as if such ideals can work as cognitive scripts/frames (Gioia and Poole, 1984; Boland and Pondy, 1986) or interpretative schemes (DiMaggio, 1997; Harris, 1994) which shape how researchers make sense of, and act upon, performance measurements and evaluations (see also Abrahamsson et al., 2016; Bay, 2018; Englund et al., 2013). However, this stream of literature is thus far less explicit about how and why such pre-existing ideal(s) may form the basis for contesting, or at least delaying, the colonization of the academic performer ideal. We also have limited knowledge about what more precisely makes some such pre-existing ideals more likely to engender non-conforming behavior while others tend to become (un)consciously adapted to the academic performer ideal (as argued in the dominant part of the literature). These knowledge gaps in the previous literature made us pose the following more specific research question:

What qualities of pre-existing research/er ideals make them more resilient to the colonization of the academic performer ideal?

As noted earlier, we will address this intriguing question by analyzing the empirical data collected from a research group that was continually exposed to bibliometric performance evaluations and came out poorly in a major intra-organizational research assessment, yet showed a remarkably high level of critical reflexivity and non-conforming behavior. Before so doing, however, we will briefly describe the empirical context and how we have collected and analyzed the data.

4. Research context and method

The study reported upon in this paper forms part of a larger research program that aims at exploring potential identity effects of the ongoing neoliberalization of academia. In particular, and based mainly on extant critical literature that stresses the pervasive and largely dysfunctional effects of working with PMSs to govern academic researchers, the program set out to explore how and why (if at all) researchers' identities are reconstituted as an effect of working with such control systems in academia. Within this research program, we have to date conducted some 50 interviews with academics and administrators from different universities in Sweden. Moreover, we have observed meetings between faculty and administrators, and collected various forms of documents. It was during the interviews in this larger program that we came across the particular research group attended to here: a group that was recurrently exposed to quantitative performance evaluations of its research activities and that came out poorly in a major intra-organizational research assessment. Interestingly though, our initial interviews suggested that, rather than conforming to the criteria constituted by the PMS, the group largely contested the evaluation results and the academic performer ideal (Gendron, 2008) that such results materialized.

Based on this, a more systematic study of this particular research group was made, in which we focused on how and why its members perceived and responded to the PMS in particular ways. That is, we focused on the “insiders’ viewpoints” (Bazeley, 2013; Schwandt, 1994) with the aim of better understanding how and why they seemed able to defy the performative pressure. Apart from leading to a re-phrasing of our initial research question (now focusing more on how and why non-conforming behavior may come about), this emerging focus also led us to consult the small but growing literature that has begun to problematize the reactive conformance thesis (Pollock et al., 2018) that has dominated the literature thus far (see Sections 2 and 3 above for more details about these streams).

In total, we conducted in-depth interviews with nine members of this particular group, ranging from new post-docs to senior professors (for an overview of the respondents, see Table 1). Two researchers at a time were involved in conducting the interviews, which lasted between one and one and a half hours [2]. The interviews were semi-structured, beginning with an assurance on our part that the study followed the ethical guidelines issued by the Swedish Research Council. After that, we went through four themes on which interviewees were given an unconstrained opportunity to elaborate. The themes were (1) how the interviewees viewed what is good research and who is a good researcher; (2) how their research group, and its research, was regarded by external stakeholders and within the university, respectively; (3) their views on the design and use of intra-organizational PMSs; and (4) how these PMSs, if at all, influenced different aspects of their research work (including choices of research questions, methods, theories, research collaborations, and publication channels). Each interview was digitally recorded and transcribed.

Drawing upon Ahrens and Chapman (2006) and others (Bazeley, 2013; Silverman, 2011), we analyzed the data through working back and forth between an emergent research question, the empirical observations, and extant literature(s). However, after having reached a preliminary understanding of the material, we conducted the following more systematic coding of it. Based on our preliminary findings showing more indications of interviewees contesting rather than conforming to the PMS criteria, we first systematically looked for in vivo expressions (see, e.g. Bazeley, 2013) in the transcripts which had this character (see Section 5 below for representative examples). We also systematically looked for expressions that indicated why such contesting of the PMSs could come about.

In the second phase of the coding, these in vivo expressions were systematically compared to identify commonalities and differences. As suggested above, a striking commonality was that essentially all interviewees strongly opposed the PMSs and the academic performer ideal that they materialized. This overarching coding led us to develop the notion of “contesting conformance.” Importantly though, the material not only showed that they strongly opposed the PMSs, but also why they did so. For example, some utterances suggested that the academic performer ideal was largely in conflict with central and since-long held research/er ideals within the research group, while others implied that researchers in the group, in contrast to the outcomes of the intra-organizational research assessments, perceived that they had a good standing among different external stakeholders.

As a means of making sense of these potential explanations as to why they were able to defy the performative pressure, we mobilized the notion of cognitive scripts/schemas (Gioia and Poole, 1984; Harris, 1994; see also Sections 3, 5 and 6). That is, we saw the research/er ideals as important cognitive scripts which shaped (and hence could potentially explain) how they came to interpret and act upon the PMSs. In total, we identified four different, yet interrelated, qualities of such cognitive scripts through comparing and collapsing the different in vivo expressions into a set of second-order concepts. These second-order concepts, and some representative examples of first-order (in vivo) expressions from which the second-order concepts were derived, are summarized in Table 2.

The emergent second-order concepts have been continually discussed among the authors and mirrored against other parts of the interview data and the research literature so as to further specify and validate their meaning and significance. In these discussions, we have on the one hand tried to respect the situated complexity of the specific setting under study (cf. Miles and Huberman, 1994). On the other hand, and based on the fact that every single case embraces a certain degree of universality (Bazeley, 2013), we have tried to generate second-order concepts “capable of instructive transferability to other settings” (Walton, 1992, p. 126). That is, concepts that may be meaningful and significant for understanding other academic settings in which researchers oppose the use of PMSs as well.

As we present the findings next, we intentionally organize the material around a large number of quotes for two main reasons: first, to empirically substantiate each emerging concept, and second, to retain, as far as possible, data in its original and contextual form (cf. Bazeley, 2013). Consistent with our commitment to confidentiality, though, a few minor changes have been made to some of the quotes so that individual interviewees cannot be identified. The same applies to quotes derived from written material (such as internal reports, memos and strategy documents), so as to ensure the anonymity of the focal organization studied, referred to here as Central University.

5. Emergent findings

As suggested earlier, extant research on the use of PMSs in academia has generally argued that such systems tend to lead to conformity and superficiality (see, e.g. Espeland and Sauder, 2007; Gendron, 2008, 2015; Parker, 2012). In contrast, the present study draws upon, and extends, the stream of research that has begun to problematize this “reactive conformity thesis” (see, e.g. Gerdin and Englund, 2019; Pollock et al., 2018). Specifically, we will next illustrate and theoretically elaborate on how and why the research group under study showed a surprisingly high level of non-compliance and non-conformity in relation to what was deemed important and legitimate by the prevailing PMS. Before doing so, however, we will briefly describe the PMS context in which the focal group was located.

5.1 PMS context

Consistent with experiences from other parts of the world, such as North America (Acker and Webber, 2017; Gendron, 2015; Malsch and Tessier, 2015), Europe (Mingers and Willmott, 2013; Morrissey, 2013, 2015; ter Bogt and Scapens, 2012) and Australia (Davies and Bansel, 2010; Martin-Sardesai and Guthrie, 2018; Parker, 2012), Swedish universities have undergone dramatic transformations in how they are governed. Generally speaking, such transformations have revolved around increased marketization, whereby competition has been promoted as a mechanism for rendering the universities more accountable, transparent, and efficient (see, e.g. Krücken et al., 2018).

While a large number of such competitively oriented reforms have taken place over the past decades in Sweden, two are particularly relevant here. First, there has been a dramatic increase in the share of competition-based project funding at the national level. In fact, while project funding via various forms of research councils and other external funders made up some 30% before the reforms (with the rest consisting of direct funding from the government to the universities), it today makes up more than half of the total funding (Kivistö et al., 2019). And importantly, as this type of funding is based on ex ante assessments of called-for or proposed research activities, in which different research activities are pitted against each other, it has arguably made the landscape more competitive for researchers in Sweden.

Second, when it comes to the research funding that is allocated directly from the government to the universities, this was previously distributed mainly in the form of institutional (or “block”) funding – a form of funding where research-performing organizations receive funding that is not tied to any particular research activities (cf. Hicks, 2012). In 2009, though, a performance-based research funding (PBRF) system was introduced (cf. Hicks, 2012; Zacharewicz et al., 2018), where 10% of the funding was allocated based on ex post performance (a share that was doubled to 20% in 2014). As a result, part of the government-based research funding is now related to how a particular university performs relative to other universities on two indicators, namely the number of publications and citations, and the amount of external funding won (Hammarfelt et al., 2016). This transformation has arguably increased the pressure to publish in peer-reviewed international journals, in particular those included in Web of Science (as normalized citation scores from WoS are used in the indicators).

As suggested elsewhere (Hammarfelt et al., 2016), and as will be shown in more detail below, these (and other) national-level transformations have tended to trickle down to the level of individual universities, where they work as important “incentive systems.” In fact, at Central University, the national-level systems are typically mobilized as important arguments for the type of PMS developed and used within the university itself [3]. Or, as suggested in one of the internal research evaluation reports:

[Central University] experiences a reality where responsible political authorities and central research funders increasingly emphasize that exposure to competition […] increases. As a result, [… it is important that we] establish transparent systems for continuous internal follow-ups of results and explicit principles for quality-based prioritization and resource allocation. (Central University Research Evaluation Report, 2010)

In a similar manner, the faculty board to which the focal research group belonged expressed their strong conviction that market-based competition between research groups would lead to efficient resource allocation, high research productivity and high quality (see also Agyemang and Broadbent, 2015; Archer, 2008a, b; Ball, 2012; ter Bogt and Scapens, 2012; Gendron, 2015; Knights and Clarke, 2014; Lynch, 2015). As noted in one of their strategic documents, for instance:

Flexibility and dynamism is created by the, in competition, issuing of time-limited funding for research. This makes it possible to redistribute funding relatively fast between [research] areas if necessary at the same time as successful research is rewarded. […] In conclusion, the faculty board expects clear effects in the form of a good publication frequency, successful applications for external funding and commitment to collaboration activities. (Faculty Board Strategic Plan, 2014-2016)

In line with this, the board developed and used a PMS that included three key indicators, namely the number of publications, citations, and external research grants won (cf. the national level PMS briefly described above). Moreover, through advanced bibliometric methods (including scientific field adjustments of research production and citations), research groups were made comparable and “rankable” in terms of their overall research performance (see also ter Bogt and Scapens, 2012; Gendron, 2008, 2015). Encouraged by the Vice Chancellor, the faculty board also linked financial incentives to these performance indicators. For example, they made the following statement in a follow-up report:

With the purpose of increasing the awareness and knowledge about publication issues, the faculty board has with the assistance of the university’s bibliometric expert [NN] followed up the development of publication volume within our area of responsibility. In order to stimulate and reward quality in research, the board has during the year allocated the by the vice-chancellor assigned funds to the [chosen] research groups in the form of [i] stimulation funds based on publication [ii] rewarding funds based on external grants and [iii] rewarding funds based on the level of citations.

As suggested by the quote, the PMS was thus not only used for regular evaluations of research groups, but also for allocating strategically oriented research funds. For example, the faculty board used it to define research excellence, based on which certain rewards were conferred:

The stimulation funding allocated based on publication has departed from a model where publication in journals included in Web of Science (WoS) with a high impact factor has been stimulated. […] Publications that have an impact factor in WoS which belongs to the 10 percent highest within the research field were further rewarded with a four times higher amount than the rest of the articles. [… Also,] the 10 percent most cited articles within each research area have been rewarded. (Faculty Board Annual Report, 2015)

All in all, therefore, it is evident that Central University at the time of the study had a well-developed PMS for governing research activities; one that was highly in line with what is stressed at the national level in Sweden (and, indeed, with what has been reported from other parts of the world, see, e.g. Acker and Webber, 2017; Davies and Bansel, 2010; Gendron, 2015; Martin-Sardesai et al., 2017a, 2017b; Mingers and Willmott, 2013; Morrissey, 2013, 2015; Parker, 2012; ter Bogt and Scapens, 2012). It is also evident that this PMS context was highly present among the members of the focal research group. In fact, all interviewees regarded these recurrent evaluations as a critical and increasing part of being an academic researcher. For example, one member of the research group argued that “It has become more serious […] a harsher climate,” while another thought that “The competition is very tough nowadays, not least for post-docs.” Or as pinpointed by yet another researcher in the group, “When I started as a PhD student, I had no idea that academic life was so much about ‘strategic publishing’ and career management. […] It's much more industrialized than I thought.”

Also, all interviewees had a clear view of what it meant to be an “academic performer” (Gendron, 2008) as materialized by the internal PMS. For instance, one researcher explained that “[The system] rewards those who publish by themselves, and apply for and get [external] research grants,” while another argued that “It's about having publications and citations in the right journals.” Again, however, while the researchers in the group were highly aware of the PMS, and the academic performer ideal it nurtured, we nevertheless find a surprisingly high level of non-conforming behavior. It is to this interesting finding that we turn next. More specifically, we will in the following three sections demonstrate the empirical foundations of the emergent theoretical model depicted in Figure 1. Starting from the left, we will begin by illustrating the pivotal roles of four important, yet largely unexplored, qualities of pre-existing research/er ideals (Section 5.2). After that, we demonstrate how these ideals formed the basis for critical reflection (Section 5.3), and contestation (Section 5.4), of the academic performer ideal as materialized by the PMS.

5.2 Extant research/er ideals as inhibitors of reactive conformance

As suggested by the left-hand side of Figure 1, the pre-existence of alternative research/er ideals in the focal group was an important condition that inhibited the academic performer ideal from infiltrating and taking over the construction of what (a) good research/er is (see also Archer, 2008a, b; Ylijoki and Ursin, 2013). Specifically, we analytically disentangled four important qualities of these ideals that arguably made them particularly resilient to change, namely that they were (1) central to, and since-long established in, the research group in question, (2) orthogonal in relation to the academic performer ideal, (3) largely shared among members in the focal group and (4) perceived to be highly legitimate among important external stakeholders. Table 2 shows how these four qualities of the alternative ideals (Column 2) were derived from the interview data (Column 1), and how and why they inhibited reactive conformance to the academic performer ideal as materialized by the intra-organizational PMS (Column 3).

As suggested by the table (and Figure 1), a first important quality of extant research/er ideals in the group was that they were central and since-long established. The underpinning premise is that such ideals work as cognitive scripts (Gioia and Poole, 1984) or interpretative schemas (DiMaggio, 1997; Harris, 1994) that inform how members look upon their unit and identify with it (Ashforth et al., 2008). They are also important for understanding how group members construe and act upon key issues and events (Blanz et al., 1998; Brown, 2015), including the interpretation of the design and uses of PMSs (see, e.g. Abrahamsson et al., 2016; Bay, 2018; Englund et al., 2013). And when such research/er ideals are deemed very central to the group in question, and have also guided their day-to-day research conduct for a long time, they are arguably more resilient to the colonization of an academic performer ideal than when the opposite conditions prevail (i.e. when extant ideals are peripheral, newly adopted, and not yet being continually reproduced in and through daily conduct).

One such central and since-long established ideal was the belief that research/ers should contribute to improving practice in some respect. For example, when asked directly what made her/him choose to become a researcher, one interviewee replied: “Oh, that's easy to tell—it's to be able to contribute to a better society, to be useful, full stop.” In a similar vein, two other interviewees expressed this ideal of making practical contributions to society in terms of “The overall goal of our scientific discipline is to create a better world” and “Since we do design science—i.e. action research—it's about trying to make things better” (see also Table 2 for further representative examples of first-order in vivo expressions).

We also find strong evidence of a second central and since-long established research/er ideal within the group, namely to become part of and contribute to the research community within their academic discipline. In fact, one interviewee said that “[o]ur research leader safeguards us and says that we can publish in journals which have readers interested in what we do, even if those journals do not count in the internal performance evaluation system.” In a similar vein, another interviewee defended their extant ideal by arguing that “[i]t's about your identity as a researcher. If the university management defines a ‘good researcher’ as your score on bibliometric data, that becomes constitutive of your self-image. But if you associate with the standards within your research community [as we do], that becomes your identity.”

A second quality of extant research/er ideals that arguably inhibited reactive conformance was that these ideals were perceived to be orthogonal in relation to the academic performer ideal as materialized by the PMS (see Figure 1 and Table 2). That is, not only was it important that alternative research/er ideals were central and since-long established, it also seemed crucial that these ideals stood in stark contrast to the academic performer ideal (cf. Butler and Spoelstra, 2012, 2014; Mingers and Willmott, 2013; Norris and O'Dwyer, 2004; Northcott and Linacre, 2010). The premise is twofold. First, this quality made it more difficult for the performer ideal to, on a more or less subconscious level, incrementally infiltrate extant ideals of what constituted (a) good research/er in the group in question (cf. Archer, 2008a; Davies and Petersen, 2005). Rather, orthogonality of ideals made the alternatives visible and differences explicit. Or as formulated by one of the interviewees: “I have a particular view on research, and a particular driving force [namely, to make contributions to practice]. And then you get a large gap between that which we are expected to do [as defined by the PMS] and that which has an impact on practice. It's a huge gap.” Table 2 provides further evidence of this perceived “research-practice gap” (e.g. Baldvinsdottir et al., 2010; Tucker and Parker, 2014), i.e. between the extant ideal of contributing to practice and that of the academic performer stressing the importance of publications in the “right” research journals.

Second, and related to this, an orthogonal relationship between extant ideals and that of an academic performer is likely to spark critical discussions and reflexivity (cf. Archer, 2008a; Englund and Gerdin, 2018; Knights and Clarke, 2014). The following two quotes illustrate how perceived gaps between, on the one hand, the academic performer ideal and, on the other hand, the already-established ideals of practical relevance and being a respected part of the intra-disciplinary research community, respectively, generated such critical reflexivity.

  • There is a risk that focus is on counting publications and you forget why you are here in the first place, i.e. forget about the content of research. Instead of counting publications, we should focus on the interesting things that we actually develop. […] We talk much about this within our research group.

  • There is a discrepancy between how we view ourselves as an academic discipline [which is essentially positive] and how we are regarded by top management [which is negative]. This really annoys us. I mean, internationally, the big names [i.e. very highly esteemed researchers] know who we are and what we do. […] So it is some kind of disrespect shown by the university management.

A third quality of already-established research ideal(s) which arguably made them more resilient to the colonization of the academic performer ideal was that they were shared within the research group (see Figure 1 and Table 2 for representative first-order in vivo expressions). The underpinning premise is that when extant (non-performer) ideals are shared, the members of the group will experience less uncertainty as to what behavior is deemed (un)important and (in)appropriate. Several of the interviewees referred to these shared ideals as a type of culture. For example, one interviewee argued that “The overall goal of our scientific discipline is to create a better world. I think we have that kind of culture […] And this culture is reinforced by everyone taking part in this, and applying research methods that require close collaboration with practice, e.g. design science and action research.” Another interviewee also used the term culture to pinpoint what was not seen as part of the intra-group research/er ideals: “It is not part of our culture [within the research group] to follow up our research [in quantitative terms].”

Figure 1 also suggests a fourth and final important quality of extant in-group research/er ideal(s) that arguably made them resilient to colonization of the academic performer ideal, namely that they were perceived to have high legitimacy among important external stakeholders. The underpinning premise is that academics, like human beings in general, are motivated by a desire to see themselves and their groups in a positive light (Ashforth et al., 2008; Haslam and Ellemers, 2005). Accordingly, the continuous positive feedback they received from important others made them feel good about who they were and what they were already doing.

Along these lines, a number of interviewees expressed the importance of getting positive feedback about their work from practicing professionals. For example, one interviewee stressed that “It is important that you get requests from the wider society, [signaling] that what you do research-wise is interesting and means something for actors outside the university,” while another argued that “When you have shown that some things work in practice… that is really rewarding.”

In a similar vein, many of the interviewees expressed the importance and satisfaction of being a legitimate part of the wider research community within their specific scientific discipline. These perceptions of external legitimacy for who they were and what they were doing research-wise were expressed in many ways. Some interviewees referred more generally to a perceived positive image of the group within their research community, both nationally and internationally (see Table 2 for empirical illustrations). Others referred to the many inquiries about collaboration that they received from external stakeholders as a sign of their legitimacy and good standing within their research community. For example, one interviewee concluded that “Our research is recognized internationally, and we get a lot of questions if we want to join in on applications [for external research grants]. In fact, I just got an invitation to chair an international research network. And many of us [in our research group] get similar kinds of inquiries.” Similarly, another interviewee proudly noted that “If you look at our standing within our research community, we come out really good. We get a lot of invitations to participate in research applications, in conferences, and to take positions as editors.”

Importantly, however, the group not only got symbolic types of appreciation (e.g. in the form of recognition from practitioners and peers within their research community) but also more tangible ones. For example, they successfully attracted external research grants from an important funding agency that appreciated (and required) collaboration with practitioners. In a similar vein, they were able to attract research grants from particular public agencies that were responsible for issues related to their research foci. In this respect, therefore, the group's perceived legitimacy among external stakeholders was very important as it provided them with great opportunities to fund the type of research they deemed important. Or put in the opposite way, thanks to this external legitimacy, the group became less dependent on that part of intra-organizational research funding which was allocated based on current research performance as defined by the PMS (see Section 5.1 for a short description of the PMS).

So to conclude the line of reasoning thus far, the presence of central and since-long established, shared, orthogonal, and externally legitimate ideals of what was deemed (a) good research/er was important for understanding how and why members of the group were less inclined to conform to the academic performer ideal as materialized by the intra-organizational PMS. And importantly, as will be dealt with in more detail next, we find that these alternative ideals formed an important basis and motivation for not only critically reflecting upon, but also contesting, the academic performer ideal per se.

5.3 Extant research/er ideals as a basis for critical reflection of the academic performer ideal

Despite the apparent appeal of rankings and research performance evaluations as primary and neutral “measures of success and achievement of organizations as well as individuals” (Gendron, 2008, p. 99), their underpinning academic performer ideal did not stand uncontested in the research group investigated. On the contrary, our empirical findings suggest that comparisons between the since-long established research/er ideals within the group and the academic performer ideal gave rise to a fundamental and multifaceted critique of the latter (see mid-part of Figure 1).

Overall, we identify four related, yet distinct, areas of criticism, all of which were closely connected to and formed by the since-long established and shared research/er ideals of the group. A first area of criticism emanated from the perceived reductionist quality of intra-organizational research performance evaluations. That is, it was believed that they were one-sided and failed to capture what was deemed good research within the group, specifically in terms of a lack of focus on research content per se. The quotes below illustrate their frustration over what was above referred to as the “research-practice gap,” i.e. that they felt that the internal research performance evaluations did not recognize their central and since-long held research/er ideal of making significant practical contributions to organizations and society (see Table 2; see also the discussions in Northcott and Linacre, 2010; Parker, 2012; Tucker and Parker, 2014, about this issue):

  • We put a lot of effort into interacting and communicating with practitioners, but this is not captured in the internal performance evaluations. I think this is a pity. […] I guess, we should do these tasks at our spare time, hide them somehow, because we're expected to write journal papers.

  • All the things you have done with and for the surrounding society are not recognized whatsoever [in the research assessments. …] No one cares about this.

  • No one is interested in the content of research … if it is important [for society]. You do not pose those kinds of questions today.

  • No one is interested in following up the practical impact of our research, which is a pity in my view.

The researchers in the group also criticized the fact that the PMSs did not capture their recognition and relatively good standing within their own research community (cf. Table 2 above). For example, one researcher noted that “I think that the image of our research is positive among our peers [within our research community] as opposed to that within our university. This is so because those within the university do not have a clue about what type of research we do … they just make their judgment based on the numbers. They do not look at the [research] content and its impact.” In a similar vein, another researcher said that “We came out really bad in the internal research assessment. And this is due to the type of publication statistics used. […] We had a lot of publications that, in our community, count as good, e.g. conference papers and book chapters. […] But according to the new [internal] system, you're not supposed to publish like that.”

A second and related area of criticism was directed towards the type of research usually conducted in the kind of journals deemed important in the internal PMS, namely journals indexed in Web of Science. The premise is that much of this type of research did not match the type valued within the group (see Table 2). As one member argued:

  • The risk is that in order to publish fast enough, you do not have time to do the type of studies that we do where we are close to practice. It takes a lot of time to collect and analyze data and then to publish. [… Instead,] the systems steer towards another type of research where researchers use surveys on students in order to test hypotheses [… where] they use a well-known theory and add one variable […] I do not think this is good research and the fact that you publish a lot is not a good indicator of you being a good researcher.

In a similar vein, another researcher in the group argued that “Since I think that research should make a difference [for practice], I myself am totally uninterested in [research] models that are only models for their own sake,” while yet another researcher opposed the type of research published in these journals: “The big mainstream journals are not very innovative in my view. [Rather,] they want you to use established theoretical frameworks […] I find smaller journals more innovative” (see also the criticism forwarded by, e.g. Gendron, 2015; Hopwood, 2008; Mingers and Willmott, 2013; Northcott and Linacre, 2010).

A third and related area of criticism, emanating from their since-long established view of what (a) good research/er is, concerned what they saw as a tendency towards “strategic” publishing. That is, since significant contributions to practice and their own research community were highly valued in the group, it became logical to oppose a perceived tendency to “slice up” research, thereby taking lower risks and increasing the likelihood of scoring well in the performance evaluations (see also the discussions in Gendron, 2015; Parker, 2012). Along these lines, for example, one interviewee said that “I dislike when researchers take a research area and subdivide it into small pieces and then elaborate on details so as to be able to publish in many outlets,” while another argued that “I think that it's more about strategic publishing nowadays which is unfortunate. One research question is subdivided into four publications. I do not think it was like this in the old days.” In a similar vein, yet another interviewee opposed the emergence of so-called “career researchers”: “I react to pure ‘career researchers’. I do not say that because I'm jealous. I mean, I have made a career and published many papers, but I dislike people who aim for as many publications as possible, and to be as famous as possible.” As illustrated by the following quotes, members of the research group also opposed “strategic” citing, and questioned the legitimacy of the number of citations as a measure of research quality in the first place:

  • I think it is dangerous when researchers apply different kinds of strategies just to succeed in the bibliometric follow ups, such as when research fellows cite one another or even cite one's own work […] You get well cited, but I think it's problematic when you cite only with that purpose in mind. […] I strongly oppose this.

  • The point of departure for choosing research areas cannot be how easy it will be to get citations. Of course, citations are important to achieve impact [in the research community], but citations cannot be the motivating force.

  • It is not a valuation of how smart people are or how hard they have been working, because many are smart and hardworking but did not get the winning ticket. While others have, without too much effort, had a stroke of luck and become the most cited researcher this year. And that is what is valued as good research.

A fourth and final area of criticism concerned the orientation towards comparison and competition that the PMSs materialized. As noted in the literature, the reduction of a multifaceted organizational reality into single quantitative performance measurements not only makes particular attributes and dimensions of this reality (in)visible, it also “creates relations between [these] attributes and dimensions where value is revealed in comparison” (Espeland and Stevens, 1998, p. 317; see also Espeland and Sauder, 2007). However, the idea that the value of the research conducted should, or even could, be compared with that of others was far from unchallenged within the group. On the contrary, one member of the group argued that “I'm not interested in comparing with others … I do not see it [i.e. my work] as a competition,” while another expressed that “I do not think that we should compare with others […] and I never do it myself.” As illustrated by the following quotes, many members also saw immediate risks in formally comparing research output for evaluative purposes, both between individual researchers and between scientific disciplines (see also ter Bogt and Scapens, 2012):

  • We do not make comparisons within the research group. That would cause a bad climate, […] because then we start competing with each other instead of being one group.

  • Comparisons between individual researchers risk leading to different types of cooperation. You begin seeking collaboration partners more strategically. Research content becomes less important and we become less inclusive when it comes to junior colleagues [… Instead] you seek collaboration with those who publish a lot.

  • Comparisons of scientific disciplines risk hampering cooperation and instead one tends to isolate one's own discipline.

  • It's reckless to compare us [i.e. our research discipline] with other disciplines.

All in all, then, we find that perceived gaps between central and since-long established ideals of what constitutes (a) good research/er and the academic performer ideal enabled the formulation of a severe critique of the latter; a critique that led the group to contest rather than conform to the performer ideal, as will be illustrated next.

5.4 Contesting conformity to the academic performer ideal

As suggested by the right-hand side of the theoretical model depicted in Figure 1, the conscious and critical reflection on the performer ideal largely prevented the members of the group from feeling obliged or seduced to “conform and perform to the rankings criteria” (Gioia and Corley, 2002, p. 110), above referred to as reactive conformance (cf. Espeland and Sauder, 2007; Gendron, 2008). Admittedly, our empirical evidence suggests that researchers in the group to some extent adapted their behavior to better match the criteria of the PMS. For example, if it was possible to choose a research outlet that was indexed in Web of Science, they would typically do so, since only publications in Web of Science “count” in the internal performance evaluations. Nevertheless (and again), we also find a surprisingly high level of non-conforming behavior within the research group. To illustrate, consider the following utterances:

  • I think that many [researchers within the group] do not bother about the “rules” [stipulated by the PMS].

  • At the end of the day, it's about what is important and fun to do. That is the yardstick that you should return to [even though it contradicts the intentions materialized by the PMS].

  • I know which type of papers that are easy to get published in particular journals, but this does not make me write this kind of papers even though it hampers my career.

  • I would not be happier if I got “4” instead of “3” in some internal performance evaluation … I would not give a shit.

  • The changes we undertake [in response to the intra-organizational research assessments] are rather “cosmetic”.

  • For your daily life as a researcher, your scores on the internal [performance evaluation] systems are totally unimportant.

In other words, while all researchers in the group were highly aware of the academic performer ideal, and some of them at least superficially adapted their behavior to this ideal, the research group was first and foremost characterized by non-conforming behavior. Or as one of the group members argued: “I know that it sounds as if we try to blame others, but the measurements are so strange. […] So the overall reaction [from the group] was to take a defense position and also to make fun of the assessment,” while another said that “The recognition you have within your international research community may be very different from the score you get on our internal performance evaluations. They are completely different things. […] And as far as I'm concerned our international standing is what matters. [… Furthermore,] these systems change over time. So as I see it, it's only a ‘fabrication’, it's a game which is not for real.”

6. Conclusions and contributions

As suggested earlier, a small but growing stream of research (e.g. Archer, 2008a; Clegg, 2008; Gerdin and Englund, 2019; Pollock et al., 2018) has begun to problematize the reactive conformance thesis which has dominated the literature thus far (e.g. Alvesson and Spicer, 2016; Espeland and Sauder, 2007; Gendron, 2008, 2015; Mingers and Willmott, 2013; Parker, 2012). The current paper builds on and extends this emerging literature by drawing upon the findings from an empirical study of a research group at a Swedish university that was exposed to, but nevertheless seemed able to contest, a strong pressure to conform to PMS criteria.

Overall, our findings stress the importance of pre-existing and alternative research/er ideals for understanding such contestations (see the emergent model in Figure 1). The premise is that such pre-existing ideals enabled the research group to formulate a severe critique of the academic performer ideal underpinning the PMS criteria. Importantly though, and as has already been suggested elsewhere (e.g. Acker and Webber, 2017; Alvesson and Spicer, 2016), this type of criticism does not necessarily lead to non-compliance. On the contrary, even though some academics may certainly turn their criticism into a contestation of the performer ideal (see, e.g. Anderson, 2008; Clegg, 2008), many studies have shown that such criticism often remains on an ideological level (see, e.g. Clarke et al., 2012; Kalfa et al., 2018; Teelken, 2012). That is, it remains a more or less overt criticism based on one's own pre-existing ideals, while in practice one nevertheless complies with the PMS criteria. Interestingly though, and again, we still know rather little about when and why such critique results in contestation rather than compliance (Kalfa et al., 2018).

Contributing towards such an understanding, we identify four qualities of a pre-existing and alternative research/er ideal that will increase the likelihood that researchers will not only be critical of, but also contest, the performer ideal. As described in detail above (Section 5.2), we refer to such qualities as the ideals being (1) central and since-long established, (2) orthogonal to the academic performer ideal, (3) largely shared within the group, and (4) externally legitimate. Next, we will explain in more detail why these qualities are likely to, at least temporarily, disarm the disciplinary and self-disciplinary influences of PMSs (see Gendron, 2008; Englund and Gerdin, 2019; and also Section 2).

To begin with, existing research/er ideals can be seen as resilient to change because of how they work as cognitive scripts (Gioia and Poole, 1984) or interpretative schemes (DiMaggio, 1997; Harris, 1994) which frame how issues and events, including the design and use of PMSs (see, e.g. Abrahamsson et al., 2016; Bay, 2018; Englund et al., 2013; Norris and O'Dwyer, 2004), will be interpreted. And when such pre-existing research/er ideals are very central to the group, and since-long established, they form a solid standard for what is deemed appropriate behavior, against which other ideals can be compared and evaluated.

Furthermore, and relatedly, when these pre-existing and highly valued ideals are very different from, or even contradictory to, the academic performer ideal, this orthogonality is likely to spark overt and critical reflection on the latter (cf. Archer, 2008a; Englund and Gerdin, 2018). This is so because orthogonality of ideals puts actors in a better position to “see through” and critically judge the underpinning premises of the academic performer ideal. And this, in turn, decreases the probability that researchers will gradually and unconsciously come to view the academic performer ideal as natural, relevant and even desirable (cf. Davies and Petersen, 2005; Gendron, 2008; Lynch, 2015). That is, academics are less likely to become self-disciplining subjects who see no alternative but to reproduce the academic performer ideal.

However, we also propose that while it may be necessary that academics are able to “see through” the “extremely compelling and seductive” premises of the academic performer ideal (Knights and Clarke, 2014, p. 352), this may not be a sufficient condition. Rather, our emergent findings also suggest the importance of pre-existing research/er ideals being shared within the group. As previous research has shown (Abrahamsson and Gerdin, 2006; Seo and Creed, 2002), the opposite condition – i.e. structural multiplicity – provides a set of alternative ways of thinking and acting that may be politically exploited by particular group members who want to overthrow the since-long established research/er ideals (see also Englund and Gerdin, 2018). Not least, this applies to researchers who see PMSs “as a means to strengthen one's position in academia” (Ylijoki and Ursin, 2013, p. 1142), i.e. as something that enables careering through academia (see, e.g. Alvesson and Spicer, 2016; Clarke and Knights, 2015). But when structural homogeneity prevails, it becomes more difficult for the academic performer ideal to infiltrate and get hold of individual members of the group. Or as one interviewee put it, the group “becomes an external shell” behind which the individual researchers can take shelter.

In addition, our evidence suggests the importance of pre-existing research/er ideals being perceived as highly legitimate among external stakeholders. The premise is twofold. First, when this is the case, the basic need for a positive self-esteem (cf. Ashforth et al., 2008; Haslam and Ellemers, 2005) can be satisfied by sources other than those emanating from high scores in intra-organizational PMSs. That is, an alternative and external basis for legitimacy arguably mitigates the “fear of abnormality, the stress of being perceived as a low performer, and perhaps even the stigma of laziness” (Gendron, 2015, p. 173; see also Archer, 2008a) which low scores on intra-organizational PMSs typically engender. Second, perceived legitimacy among external stakeholders can broaden the array of potential funders of the type of research the group deems important. That is, external legitimacy may imply that the research group becomes less dependent on, and thus affected by, the symbolic and tangible rewards and punishments that follow from (not) adhering to the criteria of (a) good research/er as constituted by PMSs [4]. And as a result, they become less vulnerable to the (self-)disciplining influence of these PMSs (see also Section 2).

Taken together, these findings arguably add further nuance to the reactive conformance thesis that has dominated the literature thus far, by analytically disentangling important qualities of already-established research/er ideals that potentially make them less vulnerable to the conforming influence of the academic performer ideal. In particular, we contribute to overcoming the apparent divide in the literature between those who argue that the proliferation of PMSs in academia will engender reactive conformity (see, e.g. Alvesson and Spicer, 2016; Espeland and Sauder, 2007; Gendron, 2008, 2015; Mingers and Willmott, 2013; Parker, 2012), and those who find that academics may indeed carve out spaces for thinking and acting otherwise (see, e.g. Archer, 2008a, b; Clegg, 2008; Ylijoki and Ursin, 2013). This is so because our findings neither support the idea that the academic performer ideal acts as a monolithic force that coerces or seduces researchers into adapting to the criteria stipulated by PMSs in a more or less deterministic way (see Section 2), nor that academics will generally contest this performer ideal. Rather, we pinpoint important conditions that will decrease the likelihood that this performer ideal will infiltrate and take over pre-existing research/er ideals by means of disciplining and/or self-disciplining forms of governance (cf. Englund and Gerdin, 2019; Gendron, 2008; Martin-Sardesai et al., 2017b).

In so doing, we also extend the growing literature that has begun to problematize the reactive conformity thesis (see, e.g. Archer, 2008a, b; Clegg, 2008; Gerdin and Englund, 2019; Pollock et al., 2018). Our study not only confirms the idea that academics may indeed carve out spaces for thinking and acting otherwise (including strategies for “gaming” the system, see, e.g. Lynch, 2015; Martin-Sardesai and Guthrie, 2018; Parker, 2011), but also theoretically elaborates on how and why such non-conforming behavior may come about. In fact, our study identifies four qualities of extant research/er ideals (i.e. cognitive scripts), each of which seems to constitute a necessary, but not sufficient, condition for explaining non-conforming research/er behavior. That is, not only is it important that such ideals are central and since-long established (to make them less fluid and inclined to change), but also that they are orthogonal (to enable critical reflection on alternatives), shared within the group (to provide a sense of unity and belongingness), and externally legitimate (to provide alternative sources of symbolic and tangible resources upon which the group is dependent) (see also Englund and Gerdin, 2019).

Finally, these findings add to the ongoing debate on the so-called research-practice gap (see, e.g. Baldvinsdottir et al., 2010; Tucker and Parker, 2014; Tucker and Schaltegger, 2016). An important argument in this debate has been that PMSs may – due to how they incentivize publications in international refereed journals – contribute to creating a climate where researchers feel (just as they did in the current case) that they are “forced” away from research that engages more directly with practice (Tucker et al., 2019; Tucker and Parker, 2014; van Helden, 2019). The premise is, as suggested by Tucker and Parker (2014, p. 127) and others (see, e.g. Laughlin, 2011; van Helden, 2019), that such journals tend to favor studies emphasizing theoretical rigor over those “that deliberately set out to encourage publication of research that engages with policy and practice.”

And indeed, in one sense, our findings corroborate this risk, as the systems per se were perceived by our interviewees to leave little room for more practice-oriented research. In another sense, though, they point to how researchers who find the research-practice gap problematic may indeed find ways to carve out spaces for conducting more practice-oriented research, even when such research is largely orthogonal to that which is prioritized by the PMS. That is, and again, they point to the importance of having (or developing) alternative (orthogonal) research/er ideals that are central, largely shared among members of a research group, and deemed legitimate among important external stakeholders. As such, our findings provide insights into one potential way of overcoming the research-practice gap; one that could become increasingly important if universities continue to evaluate researchers (mainly) based on their contributions to other researchers rather than to practice.

However, it should be acknowledged that these findings suffer from the usual limitations of approaching a general research problem in a very specific empirical context. From such a perspective, there is always a risk that the findings become too context specific, and hence hard to generalize or transfer to other settings. Again though, while we have intentionally tried to stay close (and true) to the insiders' viewpoint on the studied phenomenon – and hence have taken seriously the notion of context – the emergent model as such (see Figure 1 above) should arguably be transferable to other settings. The reason is that the conceptual building blocks of the model are not necessarily tied to the particular context under study. Rather, they point to a number of general qualities of cognitive scripts, i.e. to whether such scripts are, for example, central and since-long established, largely shared among members of a group, and/or perceived as legitimate by important others. Moreover, while it is certainly an empirical (and contextual) question whether particular research groups display such qualities, the concepts per se constitute “general class labels” (Bazeley, 2013) that lend themselves well to situational contingency. That is, they may be more or less prevalent in different research groups, and hence it may be a question of degree rather than of absolute (binary) distinction.

Having said that though, our emergent model is just a first step towards better understanding the particular qualities that may make certain cognitive scripts more resilient than others to colonization by the academic performer ideal. To confirm the relevance and transferability of our particular findings, more research is needed. Such research could also shed further light on whether the conditions highlighted in the model merely delay the colonization of the performer ideal, or whether they imply that academics may fully or partly contest its conforming influence (see also the discussions in Archer, 2008a, b). And if they can, at least partly, do so also in the long run, what would such “hybrid” researcher ideals look like [5], and what type of behavior would they engender? These are interesting and rewarding questions to address in future studies.

Figures

Figure 1. Emergent theoretical model

Table 1. List of interviewees

Position            | Gender | Age-span | Years since receiving a PhD
Professor           | Male   | 61–70    | 31–40
Professor           | Male   | 61–70    | 31–40
Professor           | Male   | 31–40    | 11–15
Professor           | Female | 51–60    | 11–15
Associate professor | Female | 51–60    | 6–10
Assistant professor | Female | 41–50    | 0–5
Assistant professor | Male   | 41–50    | 6–10
Assistant professor | Female | 51–60    | 6–10
Assistant professor | Male   | 31–40    | 0–5

Table 2. Emergent second-order concepts and explanations of how and why extant research/er ideals may inhibit reactive conformance to the academic performer ideal materialized by PMSs

Column headings: Representative examples of first-order (in vivo) expressions | Second-order concepts (qualities of alternative research/er ideals) | Explanations of how and why qualities of alternative research/er ideals inhibit reactive conformance to PMS ideals

Expressions about the importance of making a difference in practice
  1. I think that university management has a very elitist view on research […] but does not care about its impact on practice, something that we traditionally have been good at
  2. It's about making a difference [in practice]
  3. I am commissioned by society to deliver knowledge useful in practice
  4. To explore what works for society. These people get a better situation because I've shown what works
  5. You want to make a difference to someone, in some respect. […] So for me, research is about engagement in societal issues
Second-order concept: (1) Central and since-long established
Explanation: The more central and since-long established research/er ideals are, the more persistent to change they become

Expressions about perceived gaps between research/er ideals
  1. I feel that the measurement and evaluation “cloud” gets larger and larger, and this makes it even more important for me to decide where to go […] and I have chosen to read and do what I find interesting [rather than to focus on writing journal publications per se]
  2. It is important that people outside the research community take an interest in what we do. After all, that's what it is all about in my view. There is no point in writing papers for one another
  3. It all depends on whose standard you use. […] Either you see yourself as commissioned by society, or by the university [… who] expect me to work on my CV with the right publications
Second-order concept: (2) Orthogonal in relation to performer ideal
Explanation: The more orthogonal research/er ideals are in relation to the academic performer ideal, the more likely they are to prompt critical reflection and conscious contestation of this ideal

Expressions of unity within the research group
  1. We are very assured [that we do the right thing]. So the [central] control system does not pervade within our group
  2. What we have in common in our research group is that everybody is very close to the practices that we investigate
  3. We have a common view on what research should be about
  4. Our research leader always talks about us as a group, that is, that we as a group has published through particular channels
Second-order concept: (3) Shared within research group
Explanation: The more shared research/er ideals are within research groups, the more likely they are to reduce uncertainty as to what is (not) deemed legitimate behaviour

Expressions about a perceived legitimacy within own research community
  1. I think we are highly respected in our own research community
  2. When it comes to my research group, our standing in the international research community is really good, not least considering the relatively small size of the group
  3. I know that our research focused on [X] has a very good position [within our research community]. The researchers in question have introduced some path-breaking perspectives. [… And], these new perspectives are now becoming more established
  4. I think that we are seen as innovative and entrepreneurial, in particular among our Swedish counterparts at other universities
Second-order concept: (4) Perceived as legitimate among important external stakeholders
Explanation: The stronger the perceived legitimacy of alternative research/er ideals is among important (external) stakeholders, the more likely they are to inhibit a deterioration of these ideals through being an alternative and significant source of symbolic and tangible appreciations

Notes

1. Note that Ylijoki and Ursin (2013) also identified a number of other narratives, such as those focusing on resistance and loss of job content.

2. The project group that undertook the interviews consisted of three researchers, none of whom was a member of the research group studied.

3. Note that in this study we focus specifically on the PMS used for following up research groups, although there were also follow-ups of individuals, e.g. when setting wages.

4. It should also be noted, though, that all interviewees held permanent positions as teachers within the university. Hence, a contestation of the academic performer ideal would not lead to them losing their jobs altogether. As insightfully noted by one anonymous reviewer, however, this could happen to researchers on, e.g. tenure tracks, thereby affecting their willingness and ability to contest the academic performer ideal.

5. See, e.g. the recent works of Knights and Clarke (2014) and others (Malsch and Tessier, 2015; Messner, 2015; Ylijoki and Ursin, 2013), which report on various types of ambivalence and tensions related to the ongoing exposure of academics to PMSs.

References

Abrahamsson, G. and Gerdin, J. (2006), “Exploiting institutional contradictions: the role of management accounting in continuous improvement implementation”, Qualitative Research in Accounting and Management, Vol. 3 No. 2, pp. 126-144.

Abrahamsson, G., Englund, H. and Gerdin, J. (2016), “On the (re) construction of numbers and operational reality: a study of face-to-face interactions”, Qualitative Research in Accounting and Management, Vol. 13 No. 2, pp. 159-188.

Acker, S. and Webber, M. (2017), “Made to measure: early career academics in the Canadian university workplace”, Higher Education Research and Development, Vol. 36 No. 3, pp. 2541-2554.

Agyemang, G. and Broadbent, J. (2015), “Management control systems and research management in universities: an empirical and conceptual exploration”, Accounting, Auditing & Accountability Journal, Vol. 28 No. 7, pp. 1018-1046.

Ahrens, T. and Chapman, C.S. (2006), “Doing qualitative field research in management accounting: positioning data to contribute to theory”, Accounting, Organizations and Society, Vol. 31 No. 8, pp. 819-841.

Alvesson, M. and Spicer, A. (2016), “(Un)Conditional surrender?: why do professionals willingly comply with managerialism?”, Journal of Organizational Change Management, Vol. 29 No. 1, pp. 29-45.

Anderson, G. (2008), “Mapping academic resistance in the managerial university”, Organization, Vol. 15 No. 2, pp. 251-270.

Annisette, M., Cooper, C. and Gendron, Y. (2015), “Living in a contradictory world: CPAs admission to SSCI”, Critical Perspectives on Accounting, Vol. 31, pp. 1-4.

Archer, L. (2008a), “The new neoliberal subjects? Young/er academics' constructions of professional identity”, Journal of Education Policy, Vol. 23 No. 3, pp. 265-285.

Archer, L. (2008b), “‘Younger academics’ constructions of ‘authenticity’, ‘success’ and professional identity”, Studies in Higher Education, Vol. 33 No. 4, pp. 385-403.

Ashforth, B.E., Harrison, S.H. and Corley, K.G. (2008), “Identification in organizations: an examination of four fundamental questions”, Journal of Management, Vol. 34 No. 3, pp. 325-374.

Baldvinsdottir, G., Mitchell, F. and Nørreklit, H. (2010), “Issues in the relationship between theory and practice in management accounting”, Management Accounting Research, Vol. 21 No. 2, pp. 79-82.

Ball, S.J. (2003), “The teacher's soul and the terrors of performativity”, Journal of Education Policy, Vol. 18 No. 2, pp. 215-228.

Ball, S.J. (2012), Foucault, Power, and Education, Routledge, New York.

Bay, C. (2018), “Makeover accounting: investigating the meaning-making practices of financial accounts”, Accounting, Organizations and Society, Vol. 64, pp. 44-54.

Bazeley, P. (2013), Qualitative Data Analysis: Practical Strategies, Sage, London.

Blanz, M., Mummendey, A., Mielke, R. and Klink, A. (1998), “Responding to negative social identity: a taxonomy of identity management strategies”, European Journal of Social Psychology, Vol. 28 No. 5, pp. 697-729.

Boland Jr, R.J. and Pondy, L.R. (1986), “The micro dynamics of a budget-cutting process: modes, models and structure”, Accounting, Organizations and Society, Vol. 11 Nos 4-5, pp. 403-422.

Brown, A.D. (2015), “Identities and identity work in organizations”, International Journal of Management Reviews, Vol. 17 No. 1, pp. 20-40.

Butler, N. and Spoelstra, S. (2012), “Your excellency”, Organization, Vol. 19 No. 6, pp. 891-903.

Butler, N. and Spoelstra, S. (2014), “The regime of excellence and the erosion of ethos in critical management studies”, British Journal of Management, Vol. 25 No. 3, pp. 538-550.

Clarke, C.A. and Knights, D. (2015), “Careering through academia: securing identities or engaging ethical subjectivities?”, Human Relations, Vol. 68 No. 12, pp. 1865-1888.

Clarke, C., Knights, D. and Jarvis, C. (2012), “A labour of love? Academics in business schools”, Scandinavian Journal of Management, Vol. 28 No. 1, pp. 5-15.

Clegg, S. (2008), “Academic identities under threat?”, British Educational Research Journal, Vol. 34 No. 3, pp. 329-345.

Davies, B. and Bansel, P. (2010), “Governmentality and academic work: shaping the hearts and minds of academic workers”, Journal of Curriculum Theorizing, Vol. 26 No. 3, pp. 5-20.

Davies, B. and Petersen, E.B. (2005), “Neoliberal discourse in the academy: the forestalling of (collective) resistance”, Learning and Teaching in the Social Sciences, Vol. 2 No. 2, pp. 77-98.

DiMaggio, P. (1997), “Culture and cognition”, Annual Review of Sociology, Vol. 23 No. 1, pp. 263-287.

Englund, H. and Gerdin, J. (2018), “Management accounting and the paradox of embedded agency: a framework for analyzing sources of structural change”, Management Accounting Research, Vol. 38, pp. 1-11.

Englund, H. and Gerdin, J. (2019), “Performative technologies and teacher subjectivities: a conceptual framework”, British Educational Research Journal, Vol. 45 No. 3, pp. 502-517.

Englund, H., Gerdin, J. and Abrahamsson, G. (2013), “Accounting ambiguity and structural change”, Accounting, Auditing & Accountability Journal, Vol. 26 No. 3, pp. 423-448.

Espeland, W.N. and Sauder, M. (2007), “The reactivity of rankings: how public measures recreate social worlds”, American Journal of Sociology, Vol. 113 No. 1, pp. 1-40.

Espeland, W.N. and Stevens, M.L. (1998), “Commensuration as a social process”, Annual Review of Sociology, Vol. 24 No. 1, pp. 313-343.

Espeland, W.N. and Stevens, M.L. (2008), “A sociology of quantification”, European Journal of Sociology/Archives Européennes de Sociologie, Vol. 49 No. 3, pp. 401-436.

Gendron, Y. (2008), “Constituting the academic performer: the spectre of superficiality and stagnation in academia”, European Accounting Review, Vol. 17 No. 1, pp. 97-127.

Gendron, Y. (2015), “Accounting academia and the threat of the paying-off mentality”, Critical Perspectives on Accounting, Vol. 26, pp. 168-176.

Gerdin, J. and Englund, H. (2019), “Contesting commensuration: public response tactics to performance evaluation of academia”, Accounting, Auditing & Accountability Journal, Vol. 32 No. 4, pp. 1098-1116.

Gioia, D.A. and Corley, K.G. (2002), “Being good versus looking good: business school rankings and the Circean transformation from substance to image”, The Academy of Management Learning and Education, Vol. 1 No. 1, pp. 107-120.

Gioia, D.A. and Poole, P.P. (1984), “Scripts in organizational behavior”, Academy of Management Review, Vol. 9 No. 3, pp. 449-459.

Hammarfelt, B., Nelhans, G., Eklund, P. and Åström, F. (2016), “The heterogeneous landscape of bibliometric indicators: evaluating models for allocating resources at Swedish universities”, Research Evaluation, Vol. 25 No. 3, pp. 292-305.

Harris, S.G. (1994), “Organizational culture and individual sensemaking: a schema-based perspective”, Organization Science, Vol. 5 No. 3, pp. 309-321.

Haslam, S.A. and Ellemers, N. (2005), “Social identity in industrial and organizational psychology: concepts, controversies and contributions”, International Review of Industrial and Organizational Psychology, Vol. 20 No. 1, pp. 39-118.

Henkel, M. (2005), “Academic identity and autonomy in a changing policy environment”, Higher Education, Vol. 49 Nos 1-2, pp. 155-176.

Hicks, D. (2012), “Performance-based university research funding systems”, Research Policy, Vol. 41 No. 2, pp. 251-261.

Hopwood, A.G. (2008), “Changing pressures on the research process: on trying to research in an age when curiosity is not enough”, European Accounting Review, Vol. 17 No. 1, pp. 87-96.

Kalfa, S., Wilkinson, A. and Gollan, P.J. (2018), “The academic game: compliance and resistance in universities”, Work, Employment and Society, Vol. 32 No. 2, pp. 274-291.

Kallio, K.-M., Kallio, T.G.J., Tienari, J. and Hyvönen, T. (2016), “Ethos at stake: performance management and academic work in universities”, Human Relations, Vol. 69 No. 3, pp. 685-709.

Kivistö, J., Pekkola, E., Berg, L.N., Hansen, H.F., Geschwind, L. and Lyytinen, A. (2019), “Performance in higher education institutions and its variations in Nordic policy”, in Reforms, Organizational Change and Performance in Higher Education, Palgrave Macmillan, Cham, pp. 37-67.

Knights, D. and Clarke, C.A. (2014), “It's a bittersweet symphony, this life: fragile academic selves and insecure identities at work”, Organization Studies, Vol. 35 No. 3, pp. 335-357.

Krücken, G., Engwall, L. and De Corte, E. (2018), “Introduction to the special issue on ‘university governance and creativity’”, European Review, Vol. 26 No. S1, pp. S1-S5.

Larner, W. and Le Heron, R. (2005), “Neo-liberalizing spaces and subjectivities: reinventing New Zealand universities”, Organization, Vol. 12 No. 6, pp. 843-862.

Laughlin, R.C. (2011), “Accounting research, policy and practice: worlds together or worlds apart?”, in Evans, E., Burritt, R. and Guthrie, J. (Eds), Bridging the Gap Between Academic Accounting Research and Professional Practice. Centre for Accounting, Governance and Sustainability, University of South Australia and the Institute of Chartered Accountants of Australia, Sydney, pp. 23-30.

Lynch, K. (2006), “Neo-liberalism and marketisation: the implications for higher education”, European Educational Research Journal, Vol. 5 No. 1, pp. 1-17.

Lynch, K. (2015), “Control by numbers: new managerialism and ranking in higher education”, Critical Studies in Education, Vol. 56 No. 2, pp. 190-207.

Malsch, B. and Tessier, S. (2015), “Journal ranking effects on junior academics: identity fragmentation and politicization”, Critical Perspectives on Accounting, Vol. 26, pp. 84-98.

Martin-Sardesai, A. and Guthrie, J. (2018), “Human capital loss in an academic performance measurement system”, Journal of Intellectual Capital, Vol. 19 No. 1, pp. 53-70.

Martin-Sardesai, A., Irvine, H., Tooley, S. and Guthrie, J. (2017a), “Organizational change in an Australian university: responses to a research assessment exercise”, The British Accounting Review, Vol. 49 No. 4, pp. 399-412.

Martin-Sardesai, A., Irvine, H., Tooley, S. and Guthrie, J. (2017b), “Government research evaluations and academic freedom: a UK and Australian comparison”, Higher Education Research and Development, Vol. 36 No. 2, pp. 372-385.

Messner, M. (2015), “Research orientation without regrets”, Critical Perspectives on Accounting, Vol. 26, pp. 76-83.

Miles, M.B. and Huberman, A.M. (1994), Qualitative Data Analysis: An Expanded Sourcebook, Sage, Thousand Oaks, CA.

Miller, P. and Power, M. (2013), “Accounting, organizing and economizing: connecting accounting research and organization theory”, Academy of Management Annals, Vol. 7 No. 1, pp. 557-605.

Mingers, J. and Willmott, H. (2013), “Taylorizing business school research: on the ‘one best way’ performative effects of journal ranking lists”, Human Relations, Vol. 66 No. 8, pp. 1051-1073.

Morrissey, J. (2013), “Governing the academic subject: Foucault, governmentality and the performing university”, Oxford Review of Education, Vol. 39 No. 6, pp. 797-810.

Morrissey, J. (2015), “Regimes of performance: practices of the normalised self in the neoliberal university”, British Journal of Sociology of Education, Vol. 36 No. 4, pp. 614-634.

Norris, G. and O'Dwyer, B. (2004), “Motivating socially responsive decision making: the operation of management controls in a socially responsive organisation”, The British Accounting Review, Vol. 36 No. 2, pp. 173-196.

Northcott, D. and Linacre, S. (2010), “Producing spaces for academic discourse: the impact of research assessment exercises and journal quality rankings”, Australian Accounting Review, Vol. 20 No. 1, pp. 38-54.

Parker, L.D. (2011), “University corporatisation: driving redefinition”, Critical Perspectives on Accounting, Vol. 22 No. 4, pp. 434-450.

Parker, L.D. (2012), “Beyond the ticket and the brand: imagining an accounting research future”, Accounting and Finance, Vol. 52 No. 4, pp. 1153-1182.

Parker, L.D. (2013), “Contemporary university strategising: the financial imperative”, Financial Accountability and Management, Vol. 29 No. 1, pp. 1-25.

Pelger, C. and Grottke, M. (2015), “What about the future of the academy? Some remarks on the looming colonisation of doctoral education”, Critical Perspectives on Accounting, Vol. 26, pp. 117-129.

Peseta, T., Barrie, S. and McLean, J. (2017), “Academic life in the measured university: pleasures, paradoxes and politics”, Higher Education Research and Development, Vol. 36 No. 3, pp. 453-457.

Pollock, N., D'Adderio, L., Williams, R. and Leforestier, L. (2018), “Conforming or transforming? How organizations respond to multiple rankings”, Accounting, Organizations and Society, Vol. 64, pp. 55-68.

Power, M. (1999), “Research assessment exercise: a fatal remedy?”, History of the Human Sciences, Vol. 12 No. 4, pp. 135-137.

Sauder, M. and Espeland, W.N. (2009), “The discipline of rankings: tight coupling and organizational change”, American Sociological Review, Vol. 74 No. 1, pp. 63-82.

Schwandt, T.A. (1994), “Constructivist, interpretivist approaches to human inquiry”, Handbook of Qualitative Research, Vol. 1, pp. 118-137.

Seo, M.G. and Creed, W.D. (2002), “Institutional contradictions, praxis, and institutional change: a dialectical perspective”, Academy of Management Review, Vol. 27 No. 2, pp. 222-247.

Shore, C. (2008), “Audit culture and Illiberal governance: universities and the politics of accountability”, Anthropological Theory, Vol. 8 No. 3, pp. 278-298.

Silverman, D. (2011), Interpreting Qualitative Data: A Guide to the Principles of Qualitative Research, SAGE, London.

Teelken, C. (2012), “Compliance or pragmatism: how do academics deal with managerialism in higher education? A comparative study in three countries”, Studies in Higher Education, Vol. 37 No. 3, pp. 271-290.

ter Bogt, H.J. and Scapens, R.W. (2012), “Performance management in universities: effects of the transition to quantitative measurement systems”, European Accounting Review, Vol. 21 No. 3, pp. 1-47.

Tourish, D. and Willmott, H. (2015), “In defiance of folly: journal rankings, mindless measures and the ABS guide”, Critical Perspectives on Accounting, Vol. 26, pp. 37-46.

Tucker, B. and Parker, L. (2014), “In our ivory towers? The research-practice gap in management accounting”, Accounting and Business Research, Vol. 44 No. 2, pp. 104-143.

Tucker, B.P. and Schaltegger, S. (2016), “Comparing the research-practice gap in management accounting: a view from professional accounting bodies in Australia and Germany”, Accounting, Auditing & Accountability Journal, Vol. 29 No. 3, pp. 362-400.

Tucker, B.P., Waye, V. and Freeman, S. (2019), “The use and usefulness of academic research: an EMBA perspective”, The International Journal of Management Education, Vol. 17 No. 3, p. 100314.

van Helden, J. (2019), “The practical relevance of public sector accounting research; time to take a stand”, Public Money & Management, Vol. 39 No. 8, pp. 595-598.

Walton, J. (1992), “Making the theoretical case”, in Ragin, C. and Becker, H. (Eds), What is a Case? Exploring the Foundations of Social Inquiry, Cambridge University Press, New York, pp. 121-137.

Willmott, H. (1995), “Managing the academics: commodification and control in the development of university education in the U.K.”, Human Relations, Vol. 48 No. 9, pp. 993-1027.

Ylijoki, O.H. and Ursin, J. (2013), “The construction of academic identity in the changes of Finnish higher education”, Studies in Higher Education, Vol. 38 No. 8, pp. 1135-1149.

Zacharewicz, T., Lepori, B., Reale, E. and Jonkers, K. (2018), “Performance-based research funding in EU Member States—a comparative assessment”, Science and Public Policy, Vol. 46 No. 1, pp. 105-115.

Acknowledgements

We gratefully acknowledge the useful comments made by the two anonymous referees, Sabina du Rietz at CEROC (Centre for Empirical Research on Organizational Control), Örebro University, and seminar participants at SCORE (Stockholm Centre for Organizational Research).

Funding: Financial support was provided by The Swedish Research Council (Project no. 421-2014-740), the Jan Wallander and Tom Hedelius foundation (P2015-0031:1), and Örebro University.

Corresponding author

Jonas Gerdin can be contacted at: jonas.gerdin@oru.se
