The purpose of this paper is to develop, implement, test and further enhance a framework for measuring organizational change initiatives.
The conceptual part of the framework is based on the structured analysis of existing literature. The framework was further developed during an action research (AR) study where the authors developed, implemented, evaluated and improved the measurement system for organizational change initiatives.
The academic literature is rich in conceptual articles specifying the required characteristics of a “good” measurement system and frameworks for how organizations should measure performance. However, academia provides less empirical evidence of how these performance measurement systems can be implemented, evaluated and improved. In this paper, the authors present a study where the developed measurement system has been implemented, evaluated and improved. The results, both in terms of how the framework itself worked and in terms of the response from the case organization, are positive.
The framework has been implemented in two different, major change initiatives in one case organization. While the results are truly encouraging, the framework needs to be further tested and refined in more organizations.
There is a gap between academic perception and practical reality regarding how organizations should measure performance in general, as well as how they should measure organizational change initiatives. The presented, and empirically tested, framework measures both the results of the change initiative (effectiveness) and the actual change process (efficiency), as well as the perception of the change initiative and process among different key stakeholders.
This is the first developed, implemented and further improved measurement system for organizational change which measures both the efficiency and effectiveness of the change initiative (process).
Naslund, D. and Norrman, A. (2019), "A performance measurement system for change initiatives: An action research study from design to evaluation", Business Process Management Journal, Vol. 25 No. 7, pp. 1647-1672. https://doi.org/10.1108/BPMJ-11-2017-0309
Emerald Publishing Limited
Copyright © 2019, Dag Naslund and Andreas Norrman
Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
Both practitioners and academics frequently use clichés such as “what gets measured gets done” and “the only constant thing in business is change.” Thus, it is not surprising that several authors discuss the importance of organizational change as well as the importance of measuring change initiatives (Taskinen and Smeds, 1999; Bourne et al., 2003a, b; Grote, 2008; Elving et al., 2011; Parkes and Davern, 2011; Jääskeläinen and Sillanpää, 2013). However, when it comes to actual frameworks for measuring organizational change initiatives, only a few conceptual frameworks exist (Taskinen and Smeds, 1999; Teh and Pang, 1999; Taskinen, 2003). The empirical research concerning implementation and evaluation of these systems is inadequate and the knowledge of the impact a measurement system could have on organizational change initiatives is limited (Franco-Santos et al., 2012).
The situation is similar for performance measurement systems (PMS) in general. Over the last 30 years, academic authors have provided various models and frameworks for how organizations could measure performance (see e.g. Neely, 1999; Bititci et al., 2012; Choong, 2013a, b; Yadav and Sagar, 2013; Choong, 2014; Parida et al., 2015). Yet, the empirical evidence of their application is often less well described. Neely et al. (2000) describe four phases in the PMS lifecycle: design, implementation, use, and evaluation of the system to maintain it (see Figure 1). Not surprisingly, the design phase is by far the most frequently described. Nudurupati et al. (2011, p. 281) write: “[…] implementation as well as using and updating PMS has received attention only in recent years.” However, the lack of articles dealing with the implementation of measurement systems, and the fact that articles tend to be more descriptive than analytical, is a recurring theme in literature reviews over the years (Neely, 1999; Brignall and Modell, 2000; Bourne et al., 2003a; Nudurupati et al., 2011; Tung et al., 2011; Bititci et al., 2012; Franco-Santos et al., 2012; Gopal and Thakkar, 2012; Choong, 2014a, b; Parida et al., 2015; Maestrini et al., 2017).
Thus, a gap seems to exist between academic perception and practical reality when it comes to PMS – both in general as well as for organizational change initiatives. Significant complications range from strategic to operational issues and there are problems with definitions, terminology and lack of standards (see e.g. Neely, 1999).
Few rigorous empirical studies exist, and the impression is that many articles are of questionable academic quality since data collection and data analysis are often lacking in description (see e.g. Tung et al., 2011; Franco-Santos et al., 2012; Brignall and Modell, 2000). Trustworthiness suffers when the connection between the goal and the approach of the study is unclear. Bourne et al. (2003a, p. 20) indicated the lack of serious implementation research, stating it is an “[…] important deficiency in our knowledge of performance measurements.”
Another problem is the lack of theoretical foundation in many published articles (Perego and Hartmann, 2009). Choong (2013a) argues that the field is neither theoretically nor conceptually developed. One could also argue that performance measurement is not an established academic discipline, as articles dealing with the topic are published across a variety of disciplines. This can create an issue: performance measurement is a systems problem, yet disciplinary expertise is organized in silos, and fields that rely on systems approaches tend to be less well developed in academic settings. Despite the increased number of articles, the field has evolved little. Neely (1999), for example, summarized the main problems in the field 20 years ago, and his summary is not significantly different from similar reviews in more recent articles (see e.g. Yadav and Sagar, 2013; Choong, 2014a, b). Furthermore, the influence of practitioners and consultants may not lead to positive consequences for the academic development of the field (see e.g. Bourne et al., 2003b).
Thus, while there is consensus that more empirical research is needed regarding PMS for organizations in general as well as for organizational change initiatives, it may be beneficial to begin with the latter as it represents a smaller system. Furthermore, given the seemingly increasing importance of organizational change, it is surprising that more research does not exist regarding measurement systems for organizational change. The purpose of this paper is therefore to develop a measurement system for organizational change initiatives.
In exploratory research, where conceptual development is in its formative stage, case studies can provide depth and richness allowing the researchers to search for patterns to help them understand what is happening, and how and why it is done (Ellram, 1996; Stuart et al., 2002; Yin, 2003; Marshall and Rossman, 2006). This research has been conducted as an AR study. AR is a form of case study that places increased emphasis on relevance (Naslund, 2002). AR deals with real-world organizational problems and thus projects should, ideally, contribute both to practice and science (Argyris, 1993; Avison et al., 1999; Ellis and Kiely, 2000; Gummesson, 2000; Coughlan and Coghlan, 2002; Raelin and Coghlan, 2006).
The ideal problem domains for AR are thus those where the researcher is actively involved, the knowledge can be immediately applied, and the research process links theory and practice (Baskerville and Wood-Harper, 1996; Susman and Evered, 1978). The outcome is typically both action and research (Coughlan and Coghlan, 2002), where research is used to inform practice, and practice is used to inform research (Näslund et al., 2010). Ross et al. (2006), for example, refer to four distinguishable characteristics of AR which are equally applicable to our study: it emphasizes the complex and multivariate nature of the problem domain; it simultaneously addresses solving a practical problem (e.g. evaluating change initiatives) and expanding research knowledge (developing a measurement system); it is a collaborative effort between researchers and managers; and it is primarily applied to understand various aspects of change.
Despite the potential of AR in applied fields such as operations and SCM, comparatively few AR articles are published in leading journals (Naslund, 2002; Frankel et al., 2005; Näslund et al., 2010). One reason for this reluctance to adopt AR can be attributed to the lack of rigor in some previously published works (Voss et al., 2002). On the other hand, there is a continuous discussion and increased demand for more relevance in research, as too few published articles include both good research and workable answers for managers (see e.g. Alvesson, 1996; McCutcheon and Meredith, 1993; Markides, 2007; Shapiro et al., 2007). Toffel (2016, p. 1) wrote: “Much of today’s business school scholarship is far removed from the actual practice of management.” Naturally, being applied and relevant cannot and should not be an excuse for doing research that is not rigorous. In this paper, we follow the guidelines for AR suggested by Näslund et al. (2010): rigorous AR should include a detailed discussion of three major categories: design, data collection and data analysis.
2.1 Design aspects
The case organization in this study is the Swedish Transport Administration, which is responsible for the overall long-term infrastructure planning of road, rail, sea and air transport. Based on a pre-study, the researchers and the headquarters (HQ) of the case organization decided that an AR study over two and a half years would be the best research approach. The case organization is implementing several change initiatives, and thus the idea of developing a measurement system in order to track the progress and results of these initiatives was appealing to them. In the spirit of relevant and rigorous AR, we could advance theory and practice in a collaborative manner.
A key aspect of a rigorous research design is the adequate description of the unit of analysis. In AR, the unit of analysis is treated as an active object. The unit of analysis in this study is a change initiative in the case organization. The change initiatives were selected by HQ. The first change initiative, alpha (α), concerns the implementation of a new information system to support long-term planning of infrastructure investments. The second change initiative, beta (β), is similar in nature but larger in scale, affecting more people. It involves the implementation of a new IT system, but also new work routines for maintenance, covering both planning and operations. Many divisions of the case organization will be affected by these two implementations: planning and maintenance, as well as both road and railways. From a general perspective, these are quite common change initiatives: the implementation of a cross-functional information system to improve corporate processes.
2.2 Data collection and analysis aspects
AR projects are often characterized as cyclical in nature – corresponding to the cyclic loop of learning – with phases of planning, action (implementing), observing (evaluating), and overall analysis and reflection as a basis for new planning and action (Ballantyne, 2004; Coghlan and Brannick, 2001). A difference between AR and other forms of case studies is the involvement of the researcher(s) in the case. Unlike more traditional forms where the researcher participates as a passive observer outside the subject of investigation, s/he becomes an active participant in AR (Checkland, 1993; Naslund, 2002; Schein, 1987). In this cycle, the researcher is involved in the actual project, and then steps aside to meticulously reflect and analyze what happened in the organization (Daudelin, 1996). AR therefore requires a combination of participative action and critical reflection from the researcher since he/she both contributes to, and evaluates, the change process during the participation (Dick, 2001; Naslund, 2002; Ballantyne, 2004; Kates and Robertson, 2004). Thorough understanding and analysis constitute a key requirement for taking new action. This cyclical approach, with significant phases of reflection and analysis, is a vital difference between AR and consulting and thus these cyclical steps have to be clearly described in the article (see Table I and case description).
Naturally, this fact has consequences not only for the required skills of the researcher, but also for how the researcher conducts the research part of the change project. Multiple forms of data collection methods as well as triangulation are also recommended (Silverman, 1993; Coughlan and Coghlan, 2002). Similarly, a team-based approach is encouraged, as a research team can increase the rigor of data collected in terms of reliability as well as reduce the risk of bias (investigator triangulation). Thus, Baskerville and Wood-Harper (1996) recommend two or more researchers relating to the same phenomenon. Furthermore, throughout the research process, thoughts and ideas of the research are shared and discussed with the participating organization(s). Joint project reviews will enhance the understanding and also take the learning forward – for both the researcher and the organization (Gummesson, 2004; Raelin and Coghlan, 2006). In Table I we have summarized the main steps in the AR cycle based on activities performed by the researchers, HQ (case organization headquarters) and the two change initiative cases (α and β). The description in Table I also strives to follow the main steps in the performance measurement lifecycle.
3. The case
Given the cyclical nature of an AR project, we have structured the case presentation according to the four phases in the performance measurement lifecycle (Neely, 1999). From a research perspective, each phase was in itself a cycle in the AR project, and each phase often consisted of several research loops with various activities. Thus, we first present a comprehensive description of how the PMS was designed. Then we describe the implementation of the system, followed by analysis and evaluation, before we discuss future research.
3.1 The design phase
A significant part of the design phase included a comprehensive review of the literature – both related to performance measurements in general and regarding organizational change initiatives. Although no formal definition of PMS exists and the field thus suffers from some confusion as to the meaning of PMS (Franco-Santos et al., 2007), an often cited definition is the one by Neely et al. (1995, p. 81), who define the performance measurement concept as “[…] the set of metrics used to quantify the efficiency and effectiveness of actions.” However, the specific meaning of the terms effectiveness and efficiency is not particularly clear (Choong, 2013a), and there is no consensus on how to actually classify categories of performance metrics/measures (Braz et al., 2011).
Given this deficit in the literature, one could argue that it is not truly known which aspects are important for a measurement system or what a “good” measurement system looks like. Still, existing articles indicate areas of importance. A measurement system should be based on, aligned with, as well as support, the organizational strategy (Gomes et al., 2011; Choong, 2013a). Similarly, a balanced approach is often highlighted as key for a successful PMS (Kaplan and Norton, 1992; Kanji, 2002; Tung et al., 2011). The balanced approach also stresses the importance of leading vs lagging measures. Since financial measures are lagging, they can be misleading as potential problems in the processes may not show up instantly in the financial results (Neely et al., 2005; Tangen, 2005; Tung et al., 2011; Franco-Santos et al., 2012; Choong, 2013a, b; Taticchi et al., 2013).
The importance of systems theory/systems thinking is emphasized by many authors who also, almost ironically, notice that most of the conceptual systems do not seem to be founded on systems thinking/systems theory (e.g. Franco-Santos and Bourne, 2005; Franco-Santos et al., 2007; Taticchi et al., 2010; Yadav and Sagar, 2013; Choong, 2013b, 2014b). On the contrary, most systems seem to be founded on a traditional, analytical approach – both from a measuring and a management/strategic perspective. Another ideal characteristic is a cross-functional, process-based PMS (e.g. Kueng, 2000; Glavan, 2011; Wieland et al., 2015). The core cross-functional processes are the link between strategy and operations, and thus it is via measuring process performance that organizations can develop measurement systems founded on and aligned with the organizational strategy (Näslund, 1999). Finally, the importance of including customer and stakeholder perspectives is often mentioned. However, Neely (1999) argues that an often vague definition of the customer makes it complicated to capture effectiveness. The term “stakeholder” is equally problematic, and for that reason it is important to define key stakeholders in order to measure performance from different key stakeholders’ perspectives (Franco-Santos et al., 2012; Choong, 2014b; Melnyk et al., 2014).
3.1.1 PMS for measuring organizational change initiatives
In a similar manner, it is critical for organizations to understand how to better manage and cope with change (Geanuracos and Meiklejohn, 1993; Szamosi and Duxbury, 2002). Taskinen and Smeds (1999) even argue that the efficient and effective management of change is becoming more important for staying competitive than the effective management of operations. Teh and Pang (1999) describe organizational transformation as a complex process, and therefore suggest that performance measures can act as a compass and guide the organization through the change. Prosci (2012) states that the success of change management is represented by the degree to which the change objectives are realized – in other words, by measuring. Having a change measurement system is a way of assessing and monitoring the present situation and enables identification of flaws and gaps between the “as is” and the “should be” (Barbosa and Musetti, 2011; Fiorentino, 2010).
One important aspect when measuring change initiatives is to define the outcome – what does success look like? Methods of measuring the success of organizational change are needed in order to evaluate the value of any new frameworks (By, 2004). Sullivan et al. (2011) argue that too few organizations sufficiently emphasize the long-term measurement of change in the final phase of institutionalization. Thus, in order to avoid the results of change being short-lived, they suggest that change efforts need to be measured over a long period of time. Neely et al.’s (2000) ideas of a process-based system where both effectiveness and efficiency are emphasized can also be applied – with both quantitative and qualitative measures.
Measuring change readiness is a key aspect. In order to assess change readiness organizations should evaluate five success factors for mastering change management: executives’ support; commitment to develop end-to-end process strategy; the ability to develop a convincing business case; deciding on change management methodology; and the ability and previous record in maintaining the engagement and involvement of all stakeholders (Sabri and Verma, 2015, p. 133). However, they do not present any measurement or assessment tool for the evaluation.
Regularly communicating the importance of change in all stages of the change initiative is crucial to individual adoption of change and to motivating members to continue working on the change initiative (Whelan-Berry and Somerville, 2010). Communicating goals and performance targets, expected behavior and feedback actions should be motivating and encouraging. The act of measuring signals that the change is important enough to be monitored; it can be a way to communicate effectively with employees (and potentially other stakeholders), and ultimately a way of affecting people’s commitment to change (Barbosa and Musetti, 2011). Sabri and Verma (2015) add the importance of acknowledging and rewarding new behavior, as well as the importance of updating performance measures to reflect new performance baselines.
3.1.2 Steps of planned change initiatives
While different approaches to organizational change exist (e.g. planned, emergent, contingency), we follow the planned approach (Burnes, 1996; By, 2004). The planned approach emphasizes the importance of understanding the different states an organization will have to pass through in order to move from an unsatisfactory state to an identified desired state (Elrod II and Tippett, 2002). Although criticized, the planned approach is well established and held to be effective (By, 2004). It was initiated by Lewin (1947), who proposed that a successful change initiative should involve three distinct steps: unfreezing the present level, moving to the new level and refreezing this new level. This three-step model is broad, and over the years authors have operationalized it into several sub-steps with slightly different terminology (Judson, 1991; Kanter et al., 1992; Kotter, 1995; Galpin, 1996; Kettinger et al., 1997; Armenakis et al., 1999; Luecke, 2003; Fernandez and Rainey, 2006; Greer and Ford, 2009; Ackerman Anderson and Anderson, 2010; Kickert, 2014; Sabri and Verma, 2015) (Table II).
During the Summer of 2016, we developed the framework for our measurement system – the change initiative measurement system (CIMS – see Figure 2) – based on several key aspects:
It is founded on Neely’s ideas of a measurement system capturing process improvements both in terms of effectiveness (customer) and efficiency (resources, time, cost, quality), as well as internal environment improvements (positive cultural change, more satisfied employees, etc.). The measurement system should also capture the effectiveness and efficiency of the change project itself – how the change initiative was conducted and its progress toward change. It is important to identify warning signals (red flags) before certain aspects impair the project’s development; if these potential threats are identified, corrective action can be taken as early as possible.
The framework also strives to capture the status of the change initiative from different key stakeholders’ perspectives. The main stakeholders mentioned in most change management literature are top management, change leaders/agents and future users.
From a theoretical perspective, we had identified a gap as measurement systems related to organizational change efforts almost constitute a white space in academic literature. From a practical perspective, the case organization needs a system to evaluate change efforts.
From a managerial perspective, the AR approach means that the system will be collaboratively developed, designed, evaluated and redesigned. The systematic AR approach will enhance both relevance and rigor of the research.
The steps we strive to measure are originally based on the literature review. The final model was developed in close collaboration with the project management teams in the case organization. Thus, the intention of the CIMS is to measure and compare the different stakeholders’ perception of all four steps in a change initiative (see Figures 4 and 5):
change readiness (unfreeze) – divided into five sub-steps;
implementation (move) – divided into four sub-steps;
institutionalizing (refreeze) – divided into three sub-steps; and
The intention is also to present detailed data/information on each step and the different dimensions of the sub-steps to selected stakeholders and finally to provide analysis and conclusions on the overall status.
For each of the four steps in the change initiative, the plan is to conduct two rounds of measuring. The first two measurement rounds measured the initial step: change readiness. Though the literature review did not identify any measurement systems for change initiatives, we found some studies focused on change readiness (Armenakis et al., 2007; Holt et al., 2007) or on certain aspects of change in order to test different hypotheses in statistical models (e.g. Greer and Ford, 2009; van der Voet, 2015). Based on their indicators, we developed a first draft of our CIMS. We used three questions to capture the status of each sub-step. Some questions originated from, or were modified based upon, requirements from the case organization. We used a seven-point Likert scale, ranging from strongly disagree to strongly agree, as the response format.
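The question structure described above – three seven-point Likert items per sub-step, grouped under the steps of the planned change model – can be sketched as a simple data structure. This is a hypothetical illustration only: the step keys, sub-step names and item texts below are invented placeholders, not the actual CIMS wording.

```python
# Hypothetical sketch of a CIMS-style question bank: each step of the
# planned change model has sub-steps, and each sub-step is probed by
# three seven-point Likert items. All texts are invented placeholders.
CIMS = {
    "change_readiness": {  # unfreeze - five sub-steps in the actual system
        "problem_analysis": [
            "The underlying problem has been clearly analyzed.",
            "The need for this change has been explained to me.",
            "I understand why the current situation is unsatisfactory.",
        ],
        # ... remaining sub-steps elided ...
    },
    "implementation": {},      # move - four sub-steps
    "institutionalizing": {},  # refreeze - three sub-steps
}

LIKERT = list(range(1, 8))  # 1 = strongly disagree .. 7 = strongly agree

def item_count(cims):
    """Total number of survey items defined in the question bank."""
    return sum(len(items) for step in cims.values() for items in step.values())
```

With the full question bank populated, `item_count` would give the survey length; here only one illustrative sub-step is filled in.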
In a collaborative manner during several meetings with the project management teams, the key stakeholders for this project were decided to be: management, supervision committee, project management and employees (future users). Although the core stakeholder groups were the same, exact definitions, size and constitution of the groups vary slightly between the two projects due to different project characteristics (Table III).
The supervision committee for project α consisted of 26 members representing the different functions (road, rail, planning and maintenance) and geographical regions of the case organization. Project β had one steering committee (10 members) and another stakeholder group, called the activity committee, with 11 members. These committees were to receive information about the change initiative in order to provide structured feedback in planned meetings.
3.2 The implementation phase
The second phase in the performance measurement lifecycle is implementation. We implemented the system by collecting data in the form of web surveys. To date, we have conducted two rounds of measuring for both change initiatives. Survey I included background information on the respondents, such as type of stakeholder and general view on personal readiness for change (see Appendix). The majority of the questions related to change readiness/unfreeze, while some related to implement/move. A few open questions were included to allow respondents to provide comments. The questionnaires were refined in several iterations in collaboration with the case organization. We conducted a pilot test with three members of the project management team in α. To send out the surveys, we used an existing IT system the organization normally uses for internal web surveys. For the first change initiative, α, Survey I was sent to 311 relevant respondents divided among the different stakeholder groups (see Table III) in October 2016. A reminder was sent out after 10 days, and after 20 days the survey closed. We received 199 answers (64 percent response rate).
Survey I for change initiative β was developed in a similar manner but we also used the experience from α to facilitate the development. The survey was sent out to 882 respondents before the Summer holidays in 2017 with two reminders after the holidays. We received 515 answers (58 percent response rate).
We transformed the answers into different graphs (“a thermometer”) in order to visualize the results. The graphs give stakeholders insights into how all stakeholder groups perceive the project in terms of change readiness (see Figure 3). The graphs also provide initial insights on the next step: implement.
More detailed graphs show the results for each sub-question (see Figure 4). The project management teams also received all detailed data, e.g. a system-generated report with statistics for each question as well as free text answers.
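The kind of aggregation behind such overview graphs can be sketched in a few lines. This is a hypothetical illustration, not the system actually used in the study: it computes the mean Likert score per stakeholder group and sub-step, which could then be rendered as a “thermometer” per group. Group and sub-step names are invented for the example.

```python
from collections import defaultdict
from statistics import mean

# Each response: (stakeholder_group, sub_step, Likert score 1..7).
# Groups and sub-steps below are illustrative, not from the actual study.
responses = [
    ("end_user",     "problem_analysis", 3),
    ("end_user",     "problem_analysis", 4),
    ("project_mgmt", "problem_analysis", 6),
    ("end_user",     "goal_development", 2),
    ("project_mgmt", "goal_development", 5),
]

def thermometer(responses):
    """Mean Likert score per (stakeholder group, sub-step) pair."""
    buckets = defaultdict(list)
    for group, sub_step, score in responses:
        buckets[(group, sub_step)].append(score)
    return {key: round(mean(scores), 2) for key, scores in buckets.items()}

scores = thermometer(responses)
# End users rate problem analysis at (3 + 4) / 2 = 3.5 on the 7-point scale.
```

Comparing these per-group means side by side is what makes gaps visible, e.g. end users scoring a sub-step markedly lower than the project management team.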
3.3 The use phase
The purpose of a measurement system is to provide relevant and actionable feedback to managers and other stakeholders in order to evaluate and improve their processes. Neely et al. (2000) argue that this step is seldom developed. In this study, we addressed this third phase in the PMS lifecycle. The analysis, and the resulting feedback, addressed different levels: the implications for each change initiative, the implications for the measurement system, “higher level” learnings for the case organization, and the evaluation of the overall research project. In short, the analysis was conducted in three steps.
First, the researchers sent the results (graphs and reports) as well as the raw data to the project management teams. Second, the initial analysis of the data was conducted by the two researchers – first individually and then combined. Simultaneously, the project management team conducted their own, independent analysis. Third, in a collaborative manner, we then met to discuss the results as well as the implications for the project, the case organization and the measurement system.
Survey I provided a snapshot of the existing situation regarding step 1 – change readiness. An interesting observation is that the results were similar for both change initiatives. There was a relatively strong support among different stakeholders for change and a supportive environment and culture for change. However, most other indicators for change readiness were weak in both change initiatives. Specific issues included a weak problem analysis, goal development, project priority and project awareness. The last issue was especially noticeable for the end users. In general, most of the barriers for change initiatives listed in literature were visible and thanks to the measurement model, the project management team understood, early in the implementation, that they had not properly addressed the issues.
Top management support, for example, is identified in theory as one of the most critical success factors for change initiatives (see e.g. Näslund, 2013). The measurements signaled significant problems with top management support. First, the results indicated that top management did not consider themselves truly supportive of the change initiative. Second, the project management teams were hesitant in terms of how they perceived top management support. This is a major problem for the change initiatives. Even more troubling, the supervision committees considered top management support to be significantly lacking.
The analysis of the results provided several “aha moments” for project management. We discussed corrective actions to improve the information to the end users. Focus was on communicating the vision and goal of the change as well as the underlying problem analysis (which, in itself, could have been improved). Such communication can decrease the potential level of resistance and also increase support. We also found that there has to be communication “up” in the case organization, as top management clearly was not very aware or supportive of the change initiative and its potential implications. Similarly, as a result of the survey, it was decided that the project sponsor (management) had to be more involved in terms of actively supporting the projects.
For both change initiatives, a brief summary of the measurement results, and main findings of the analysis, was sent out to all respondents. For the second project β, we also posted a brief article on the intranet. The project teams conducted workshops to communicate the results and to discuss the impact.
3.3.1 Implications for the measurement system
The measurement system works better than expected. The system provides snapshots of the existing situation and with Survey II, we could also identify both positive and negative developments. Survey II was sent out in November 2017 (α) and in April 2018 (β) (Figure 5).
In an ideal situation, there would be improvements based on actions taken from the analysis of Survey I. Even though the actual results for the change initiatives were mixed, the project teams were very satisfied with the system itself. One project manager said “it provides evidence, it confirmed my gut feeling” in response to the decrease in top management support for β. We also saw positive aspects, such as increased project awareness due to the information campaign after Survey I. This aspect resembles the cost of quality approach, where issues can be addressed before they become significant problems. In short, the system captures the temperature of the change initiative at various stages and also highlights positive and negative developments.
3.3.2 “Higher level” learnings for the case organization
We have given several presentations at HQ. In May 2018, we presented the results and the analysis to the top management group at HQ. The focus was on higher level learning for the case organization. Significant research exists regarding critical success factors for change initiatives (see e.g. Näslund, 2013). One framework to classify success factors for change initiatives is the purpose, process and people (3P) framework (see Figure 6).
The purpose has to be clear, there has to be a burning platform, and top management has to be fully behind and support the initiative in order to give it a high probability of success. The organization has to be ready for change. Process refers to the more “hard” aspects of the change, such as structure, maps and resources. People refers to the more “soft” aspects of change. While all the aspects are important, properly working with the purpose aspects is most probably required before organizations can deal with the process aspects. Similarly, while nobody would question the importance of people in change management, one could also argue that it is easier to get employees to buy into a change project if they fully understand the nature of the change. A clear purpose, and a solid approach to the process aspects, could facilitate buy-in from employees. Thus, the 3Ps, to some extent, work as predecessors, with a strong recommendation to focus on the purpose and process aspects in order to get the people aspect to work.
Using this framework to analyze the case organization, we realized that it lacks in the purpose category: problem analysis, goal development and top management support. In discussions with both project groups and with HQ, additional problems were identified. The organization has a silo mentality in how projects are developed, prioritized and managed. As a result, too many projects are started but far from all are finished. It furthermore lacks in how it monitors project progress and how it allocates and reallocates resources depending on project progress. In conclusion, the measurement system highlighted several issues, and in discussions we could identify root causes that explain them. Thus, the organization needs to work on more structural aspects, which it may have suspected before the measurement system; now, however, the issues are explicitly confirmed.
3.4 The maintain phase
The final stage of the lifecycle of a PMS is to continue developing the system. In this research project, the CIMS was first refined between the two rounds of measuring the step “unfreeze/change readiness.” Based on analysis, feedback and collaboration after Survey I, we made some minor adjustments to the system. The results and analysis furthermore highlighted certain additional aspects that could be addressed in the next version, e.g. the need for improved communication and for increased top management support. The CIMS will be further implemented when we measure the steps “implementation/move” and “institutionalize/refreeze.”
4. Evaluation of research
In order to evaluate the overall AR project, it is important that all parties involved are satisfied with the results and the nature of collaboration. In this project, we have followed all four phases in the PMS lifecycle. The measurement system was developed, implemented, used and maintained in close collaboration with the case organization. We have followed AR philosophy with several loops of action, reflection and new action. Results, impacts and reflections were discussed in several meetings and feedback has been sent out to respondents.
We also evaluated and analyzed on a “higher” level what the case organization can learn from this measurement system in order to improve how they work in change initiatives. The research project highlighted issues of a general nature, and, thus, there may be lessons learned for other change initiatives in the organization. These issues include an improved focus on change readiness, such as a better problem analysis and goal development before launching an initiative – issues related to the Purpose aspect of our 3P framework for a successful change initiative.
Other aspects are efforts to truly secure top management support (e.g. resource commitment and an active sponsor before and during the launch) and a communication plan both up and down the organization. These aspects also relate to the Process and People aspects of our framework. Furthermore, the measurement in itself, as well as communication of the results, may increase awareness of the change initiative and thus, potentially, decrease resistance. These lessons for a generic change initiative thus primarily include activities conducted in advance to minimize and correct potential problems. They reinforce the importance of a properly defined purpose and of change readiness according to change management theory.
4.1 Rigor, validity and future research
A criticism of AR is that it resembles consulting rather than rigorous research (Baskerville and Wood-Harper, 1996; Gummesson, 2000; Coughlan and Coghlan, 2002). Ozanne and Saatcioglu (2008) add other criticisms, such as inappropriate application of methods, poor training of researchers, inadequate time in the field, weak research relationships and shallow participation. Researchers must therefore diligently address these dimensions when reporting their use of AR for publication purposes. Furthermore, by discussing its particular strengths, such as its extreme relevance and unique access, and how the research project approaches issues of rigor and quality, we can increase the appreciation for AR (Coughlan and Coghlan, 2002; Näslund, 2008; Ozanne and Saatcioglu, 2008).
Romano and Formentini (2012) refer to Levin (2003), who proposed four criteria for evaluating AR quality. First, in terms of participation, our research reflects the close interaction developed with the case organization. Second, the research was inspired by a real-world problem related to organizational change. Third, the research has followed a collaborative, cyclical process with regular meetings between researchers and organizational members in order to develop the measurement system and to analyze and reflect on results and actions. Furthermore, AR issues “warrants for action” as the participants have been active in defining the existing problems and the steps required to deal with them. Finally, the research has resulted in a workable solution – a measurement system which has been implemented and analyzed.
Future research will include more rounds of measuring in the existing case organization to further develop and refine the measuring system. We also want to implement and test the system in other organizations in different industries. Finally, another avenue to explore is to conduct statistical tests and analyses when we have implemented the system in other organizations.
5. Concluding discussion
The review of literature indicated a lack of empirical articles dealing with implementation, use and evaluation of PMS. The systems described in conceptual articles do not seem to truly exist in practice. There is a gap between academic ideas and practical reality. Furthermore, PMS is not an established academic discipline and thus the theoretical foundation is weak.
In this project, we have designed a PMS for change initiatives based on existing theory, implemented and used a first version of the PMS in close collaboration with the case organization, and evaluated the results in order to maintain it. We have identified corrective actions and analyzed the lessons learned on a higher, more generic level for the case organization. These lessons, primarily related to change readiness and Purpose in our 3P framework, largely focus on activities conducted in advance in order to reduce future change management problems. Following the tradition of AR, our research has contributed both to science and to practice. From a practical perspective, the case organization has a system they can use to measure change initiatives. From a research perspective, we have conducted an empirical, longitudinal study of all phases in the performance measurement lifecycle, performed a robust analysis and rigorously described all aspects of the study. Given the lack of such studies, this is a major contribution to the existing measurement literature.
The action research process
| Phase | Researchers | HQ | Researchers + α | Researchers + β |
| --- | --- | --- | --- | --- |
| Pre-planning | Theory studies (performance measurement systems); develop research proposal | Accept and fund research proposal | | |
| Planning | Theory studies (performance measurement + change initiatives); develop and design conceptual measurement system (CIMS) | Identify change initiatives within case organization | Selling idea to larger project group; develop specific questions for all phases in the change initiative | Selling idea to larger project group; develop specific questions for all phases in the change initiative |
| Action and implementation | Develop actual web survey and send out | | Implement Survey I | Implement Survey I |
| Observing and evaluation | Evaluate responses (individually and combined) | | Leaders of project management team evaluated responses individually | Top project management team evaluated responses individually |
| Analysis and reflection | On three levels: for each change initiative; for the case organization; on meta level (theoretical contribution) | Results were shared with HQ in seminar form | Results were analyzed and discussed with researchers; actions were developed for next phase (maintain) | Results were analyzed and discussed with researchers; actions were developed for next phase (maintain) |
Overview of different sources related to Lewin (1947)
| Lewin (1947) | Judson (1991) | Kanter et al. (1992) | Kotter (1995) | Galpin (1996) | Kettinger et al. (1997) | Armenakis et al. (1999) | Luecke (2003) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1. Unfreezing | 1. Analyzing and planning change | 1. Analyze the organization and its need for change | 1. Establishing a sense of urgency | 1. Establishing the need to change | 1. Envision | 1. Discrepancy (we need to change) | 1. Mobilize energy and commitment |
| | 2. Communicating the change | 2. Create a shared vision and a common direction | 2. Forming a powerful guiding coalition | 2. Developing and disseminating a vision of planned change | 2. Initiate | 2. Self-efficacy (we have the capability to successfully change) | 2. Develop a shared vision of how to organize and manage |
| | 3. Gaining acceptance of new behaviors | 3. Separate from the past | 3. Creating a vision | 3. Diagnosing and analyzing the current situation | 3. Diagnose | 3. Personal valence (it is in our best interest to change) | 3. Identify leadership |
| | | 4. Create a sense of urgency | 4. Communicating the vision | 4. Generating recommendations | 4. Redesign | 4. Principal support (those affected are behind the change) | |
| | | 5. Support a strong leader role | | | | 5. Appropriateness (the desired change is right for the focal organization) | |
| | | 6. Line up political sponsorship | | | | | |
| 2. Moving | 4. Changing from status quo to a desired state | 7. Craft an implementation plan | 5. Empowering others to act on the vision | 5. Detailing recommendations | 5. Reconstruct | | 4. Focus on short-term results, not activities |
| | | 8. Develop enabling structures | 6. Planning for creating short-term wins | 6. Pilot testing the recommendations | | | 5. Start change at the periphery |
| | | 9. Communicate, involve people and be honest | | 7. Preparing recommendations for roll-out | | | 6. Institutionalize success through policies, systems and structures |
| | | | | 8. Rolling out recommendations | | | |
| 3. (Re)freezing | 5. Consolidating and institutionalizing the new state | 10. Reinforce and institutionalize change | 7. Consolidating improvements and producing still more change | 9. Measuring, reinforcing and refining the change | 6. Evaluate | | 7. Monitor and adjust strategies in response to problems in the change process |
| | | | 8. Institutionalizing new approaches | | | | |

| Lewin (1947) | Fernandez and Rainey (2006) | Greer and Ford (2009) | Ackerman Anderson and Anderson (2010) | Kickert (2014) | Sabri and Verma (2015) | Sub-steps in our model |
| --- | --- | --- | --- | --- | --- | --- |
| 1. Unfreezing | 1. Ensure the need | 1. Problem analysis | 1. Prepare to lead the change | 1. Establish sense of urgency, ensure the need for change, build internal support | 1.1 Assess organization change readiness | 1.1 Problem analysis |
| | | 2. Action planning | 2. Create organizational vision, commitment and capability | | 1.2 Identify need, transformation team, success criteria and change roadmap | 1.2 Get need for change jointly accepted |
| | 3. Build internal support | | 3. Assess the situation to determine design requirements | 2. Develop a vision and strategy, provide a plan | 1.3 Develop communication plan | 1.3 Formulate and communicate vision/strategy of change |
| | 4. Ensure top management support | | 4. Design the desired state | 3. Communicate the change, empower employees for action | 1.4 Articulate cultural support plan (educational need, organizational structural alignment) | 1.4 Ensure management support |
| | | | 5. Analyze the impact | 4. Ensure top management support and commitment, create a guiding coalition | | 1.5 Ensure change recipient support |
| 2. Moving | 2. Provide a plan | 3. Skills development | 6. Plan and organize for implementation | 5. Build external support | 2.1 Execute change plan | 2.1 Develop implementation plan |
| | 5. Build external support | | 7. Implement the change | 6. Provide resources | 2.2 Evaluate supply chain transformation progress | 2.2 Create organization and get resources |
| | 6. Provide resources | | | | | 2.3 Develop skills |
| 3. (Re)freezing | 7. Institutionalize change | 4. Behaviour management | 8. Celebrate and integrate the new state | 7. Institutionalize change | 3.1 Improve SC transformation progress | 3.1 Management control, monitoring and adjustment |
| | 8. Pursue comprehensive change | 5. Management control | 9. Learn and course correct | 8. Pursue comprehensive change | 3.2 Update SC performance measures and anchor the new behavior in the culture | 3.2 Incentives/rewards |
| | | | | | | 3.3 Institutionalize change |
| | | 6. Outcomes | | | | 4. Outcomes |
Stakeholder groups in the change initiatives
| Case α | Case β |
| --- | --- |
| Management (27 persons identified) | Management (64) |
| Supervision committee (26) | Steering committee (10) |
| | Activity committee (11) |
| Project management team (9) | Project management team (8) |
| | Sub-project participants (9) |
| Future users (249) | Future users (771) |
| Total identified stakeholders: 311 | Total identified stakeholders: 882 |
| Response Rate I: 199/311 (64%) | Response Rate I: 515/882 (58.4%) |
| Response Rate II: 168/354 (47.5%) | Response Rate II: 422/873 (48.3%) |
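The response rates reported above are straightforward arithmetic (responses divided by identified stakeholders at the time of the survey). A minimal Python sketch reproduces the reported percentages; the `response_rate` helper is our illustration, not part of the paper's measurement system:

```python
def response_rate(responses: int, stakeholders: int) -> float:
    """Return the survey response rate as a percentage, rounded to one decimal."""
    return round(100 * responses / stakeholders, 1)

# Figures as reported for cases alpha and beta in the stakeholder table.
rates = {
    ("alpha", "Survey I"): response_rate(199, 311),   # 64.0%
    ("alpha", "Survey II"): response_rate(168, 354),  # 47.5%
    ("beta", "Survey I"): response_rate(515, 882),    # 58.4%
    ("beta", "Survey II"): response_rate(422, 873),   # 48.3%
}
```

Note that the denominators differ between Survey I and Survey II because the set of identified stakeholders changed between the two measurement rounds.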
Ackerman Anderson, L. and Anderson, D. (2010), The Change Leader’s Roadmap: How to Navigate Your Organization’s Transformation, Wiley.
Alvesson, M. (1996), “Leadership studies: from procedure and abstraction to reflexivity and situation”, Leadership Quarterly, Vol. 7 No. 4, pp. 455-485.
Argyris, C. (1993), “Social theory for action: how individuals and organizations learn to change”, Industrial and Labour Relations Review, Vol. 46 No. 2, pp. 426-427.
Armenakis, A., Harris, S. and Feild, H. (1999), “Paradigms in organizational change: change agent and change target perspectives”, in Golembiewski, R. (Ed.), Handbook of Organizational Behavior, 2nd ed., Marcel Dekker, New York, NY, pp. 631-658.
Armenakis, A.A., Bernerth, J.B., Pitts, J.P. and Walker, H.J. (2007), “Organizational change recipients’ beliefs scale: development of an assessment instrument”, The Journal of Applied Behavioral Science, Vol. 43 No. 4, pp. 481-505.
Avison, D., Lau, F., Myers, M. and Nielsen, P.A. (1999), “Action research – to make academic research relevant, researchers should try out their theories with practitioners in real situations and real organizations”, Communications of the ACM, Vol. 42 No. 1, pp. 94-97.
Ballantyne, D. (2004), “Action research reviewed: a market-oriented approach”, European Journal of Marketing, Vol. 38 Nos 3/4, pp. 321-337.
Barbosa, D. and Musetti, M. (2011), “The use of performance measurement in logistics change process”, International Journal of Productivity and Performance Management, Vol. 60 No. 4, pp. 339-359.
Baskerville, R.L. and Wood-Harper, A.T. (1996), “A critical perspective on action research as a method for information systems research”, Journal of Information Technology, Vol. 11 No. 3, pp. 235-246.
Bititci, U., Garengo, P., Dörfler, V. and Nudurupati, S. (2012), “Performance measurement: challenges for tomorrow”, International Journal of Management Reviews, Vol. 14, pp. 302-327.
Brignall, S. and Modell, S. (2000), “An institutional perspective on performance measurement and management in the ‘new public sector’”, Management Accounting Research, Vol. 11 No. 1, pp. 281-306.
Bourne, M., Neely, A., Mills, J. and Platts, K. (2003a), “Implementing performance measurement systems: a literature review”, International Journal of Business Performance Management, Vol. 5 No. 1, pp. 1-24.
Bourne, M., Neely, A., Mills, J. and Platts, K. (2003b), “Why some performance measurement initiatives fail: lessons from the change management literature”, International Journal of Business Performance Management, Vol. 5 Nos 2-3, pp. 245-269.
Braz, R., Scavarda, L. and Martins, R. (2011), “Reviewing and improving performance measurement systems: an action research”, International Journal of Production Economics, Vol. 133 No. 2, pp. 751-760.
Burnes, B. (1996), “No such thing as … a ‘one best way’ to manage organizational change”, Management Decision, Vol. 34 No. 10, pp. 11-18.
By, R.T. (2004), “Organisational change management: a critical review”, Journal of Change Management, Vol. 5 No. 4, pp. 369-380.
Checkland, P. (1993), Systems Thinking, Systems Practice, John Wiley & Sons, Chichester.
Choong, K.K. (2013a), “Understanding the features of performance measurement system: a literature review”, Measuring Business Excellence, Vol. 17 No. 4, pp. 102-121.
Choong, K.K. (2013b), “Are PMS meeting the measurement needs of BPM? A literature review”, Business Process Management Journal, Vol. 19 No. 3, pp. 535-574.
Choong, K.K. (2014a), “Has this large number of performance measurement publications contributed to its better understanding? A systematic review for research and applications”, International Journal of Production Research, Vol. 52 No. 14, pp. 4174-4197.
Choong, K.K. (2014b), “The fundamentals of performance measurement systems”, International Journal of Productivity and Performance Management, Vol. 63 No. 7, pp. 879-922.
Coghlan, D. and Brannick, T. (2001), Doing Action Research in Your Own Organization, SAGE, London.
Coughlan, P. and Coghlan, D. (2002), “Action research for operations management”, International Journal of Operations & Production Management, Vol. 22 No. 2, pp. 220-240.
Daudelin, M.W. (1996), “Learning from experience through reflection”, Organizational Dynamics, Vol. 24 No. 3, pp. 36-48.
Dick, B. (2001), “Action research: action and research”, in Sankaran, S., Dick, B., Passfield, R. and Swepson, P. (Eds), Effective Change Management Using Action Learning and Action Research, Southern Cross University Press, Lismore.
Elrod, P.D. II and Tippett, D.D. (2002), “The ‘death valley’ of change”, Journal of Organizational Change Management, Vol. 15 No. 3, pp. 273-291.
Ellis, J.H.M. and Kiely, J.A. (2000), “Action inquiry strategies: taking stock and moving forward”, Journal of Applied Management Studies, Vol. 9 No. 1, pp. 83-94.
Ellram, L.M. (1996), “The use of the case study method in logistics research”, Journal of Business Logistics, Vol. 17 No. 2, pp. 93-138.
Elving, W., Hansma, L. and De Boer, M. (2011), “Bohica: bend over, here it comes again… construction and test of a change fatigue instrument”, Teorija in Praksa, Vol. 48 No. 6, pp. 1628-1647.
Fernandez, S. and Rainey, H.G. (2006), “Managing successful organizational change in the public sector”, Public Administration Review, Vol. 66 No. 2, pp. 168-176.
Fiorentino, R. (2010), “Performance measurement in strategic changes”, in Epstein, M.J., Manzoni, J.-F. and Davila, A. (Eds), Performance Measurement and Management Control: Innovative Concepts and Practices (Studies in Managerial and Financial Accounting, Volume 20), Emerald Group Publishing Limited, pp. 253-283.
Franco-Santos, M. and Bourne, M. (2005), “An Examination of the literature relating to issues affecting how companies manage through measures”, Production Planning and Control, Vol. 16 No. 2, pp. 114-124.
Franco-Santos, M., Lucianetti, L. and Bourne, M. (2012), “Contemporary performance measurement systems: a review of their consequences and a framework for research”, Management Accounting Research, Vol. 23, pp. 79-119.
Franco-Santos, M., Kennerley, M., Micheli, P., Martinez, V., Mason, S., Marr, B., Gray, D. and Neely, A. (2007), “Towards a definition of a business performance measurement system”, International Journal of Operations and Production Management, Vol. 27 No. 8, pp. 784-801.
Frankel, R., Näslund, D. and Bolumole, Y. (2005), “The ‘White Space’ of logistics research: a look at the role of methods usage”, Journal of Business Logistics, Vol. 26 No. 2, pp. 185-209.
Galpin, T. (1996), The Human Side of Change: A Practical Guide to Organization Redesign, Jossey-Bass, San Francisco, CA.
Geanuracos, J. and Meiklejohn, I. (1993), “Performance measurement: the new agenda – using non-fiscal indicators to improve profitability”, Business Intelligence, London.
Glavan, L.M. (2011), “Understanding process performance measurement systems”, Business Systems Research, Vol. 2 No. 2, pp. 1-56.
Gomes, C., Yasin, M. and Lisboa, J. (2011), “Performance measurement practices in manufacturing firms revisited”, International Journal of Operations & Production Management, Vol. 31 No. 1, pp. 5-30.
Gopal, P.R.C. and Thakkar, J. (2012), “A review on supply chain performance measures and metrics: 2000-2011”, International Journal of Productivity and Performance Management, Vol. 61 No. 5, pp. 518-547.
Greer, B.M. and Ford, M.W. (2009), “Managing change in supply chains: a process comparison”, Journal of Business Logistics, Vol. 30 No. 2, pp. 47-63.
Grote, G. (2008), “Diagnosis of safety culture: a replication and extension towards assessing ‘safe’ organizational change processes”, Safety Science, Vol. 46 No. 3, pp. 450-460.
Gummesson, E. (2000), Qualitative Methods in Management Research, 2nd ed., Sage Publications, London.
Gummesson, E. (2004), “Qualitative research in marketing – road-map for a wilderness of complexity and unpredictability”, European Journal of Marketing, Vol. 39 Nos 3/4, pp. 309-327.
Holt, D.T., Armenakis, A.A., Feild, H.S. and Harris, S.G. (2007), “Readiness for organizational change: the systematic development of a scale”, The Journal of Applied Behavioral Science, Vol. 43 No. 2, pp. 232-255.
Jääskeläinen, A. and Sillanpää, V. (2013), “Overcoming challenges in the implementation of performance measurement: case studies in public welfare services”, International Journal of Public Sector Management, Vol. 26 No. 6, pp. 440-454.
Judson, A. (1991), Changing Behavior in Organizations: Minimizing Resistance to Change, Basil Blackwell, Cambridge, MA.
Kanji, G.K. (2002), Measuring Business Excellence, Routledge.
Kanter, R.M., Stein, B.A. and Jick, T.D. (1992), The Challenge of Organizational Change, The Free Press, New York.
Kaplan, R.S. and Norton, D.P. (1992), “The balanced scorecard – measures that drive performance”, Harvard Business Review, Vol. 70 No. 1, pp. 71-79.
Kates, S. and Robertson, J. (2004), “Adapting action research to marketing: a dialogic argument between theory and practice”, European Journal of Marketing, Vol. 38 Nos 3/4, pp. 418-432.
Kettinger, W.J., Teng, J.T.C. and Guha, S. (1997), “Business process change: a study of methodologies, techniques, and tools”, MIS Quarterly, Vol. 21 No. 1, pp. 55-80.
Kickert, W.J.M. (2014), “Specificity of change management in public organizations: conditions for successful organizational change in Dutch ministerial departments”, American Review of Public Administration, Vol. 44 No. 6, pp. 693-717.
Kotter, J.P. (1995), “Leading change: why transformation efforts fail”, Harvard Business Review, Vol. 73 No. 2, pp. 59-67.
Kueng, P. (2000), “PPMS: a tool to support process-based organizations”, Total Quality Management, Vol. 11 No. 1, pp. 67-85.
Levin, M. (2003), “Action research and the research community”, Concepts and Transformation, Vol. 8 No. 3, pp. 275-280.
Lewin, K. (1947), “Frontiers in group dynamics: concepts, method and reality in social sciences, social equilibria and social change”, Human Relations, Vol. 1, pp. 5-42.
Luecke, R. (2003), Managing Change and Transition, Harvard Business School Press, Boston, MA.
McCutcheon, D. and Meredith, J. (1993), “Conducting case study research in operations management”, Journal of Operations Management, Vol. 11 No. 3, pp. 239-256.
Markides, C. (2007), “In search of ambidextrous professors”, Academy of Management Journal, Vol. 50 No. 4, pp. 762-768.
Marshall, C. and Rossman, G.B. (2006), Designing Qualitative Research, Sage Publication, Thousand Oaks, CA.
Maestrini, V., Luzzini, D., Maccarrone, P. and Caniato, F. (2017), “Supply chain performance measurement systems: a periodic review and research agenda”, International Journal of Production Economics, Vol. 183, pp. 299-315.
Melnyk, B., Gallagher-Ford, L., Long, L. and Fineout-Overholt, E. (2014), “The establishment of evidence-based practice competencies for practicing registered nurses and advanced practice nurses in real-world clinical settings: proficiencies to improve healthcare quality, reliability, patient outcomes, and costs”, Worldviews on Evidence-Based Nursing, Vol. 11 No. 1, pp. 5-15.
Näslund, D. (1999), “Bridging the gap between strategy and operations – a process based framework”, Dissertation, Lund University.
Naslund, D. (2002), “Logistics needs qualitative research-especially action research”, International Journal of Physical Distribution and Logistics Management, Vol. 32 No. 5, pp. 321-338.
Näslund, D. (2008), “Action research: rigorous research approach or Scandinavian excuse for consulting?”, Northern Lights in Logistics and Supply Chain Management, Copenhagen Business School Press, pp. 99-116.
Näslund, D. (2013), “Lean and Six Sigma – critical success factors revisited”, International Journal of Quality and Service Sciences, Vol. 5 No. 1, pp. 86-100.
Näslund, D., Kale, R. and Paulraj, A. (2010), “Action research in supply chain management-a framework for relevant and rigorous research”, Journal of Business Logistics, Vol. 31 No. 2, pp. 331-355.
Neely, A. (1999), “The performance measurement revolution: why now and what next?”, International Journal of Operations & Production Management, Vol. 19 No. 2, pp. 205-228.
Neely, A., Gregory, M. and Platts, K. (1995), “Performance measurement system design: a literature review and research agenda”, International Journal of Operations & Production Management, Vol. 15 No. 4, pp. 80-116.
Neely, A., Gregory, M. and Platts, K. (2005), “Performance measurement system design: a literature review and research agenda”, International Journal of Operations & Production Management, Vol. 25 No. 12, pp. 1228-1263.
Neely, A., Mills, J., Platts, K., Richards, H., Gregory, M., Bourne, M. and Kennerley, M. (2000), “Performance measurement system design: developing and testing a process-based approach”, International Journal of Operations & Production Management, Vol. 20 No. 10, pp. 1119-1145.
Nudurupati, S.S., Bititci, U.S., Kumar, V. and Chan, F.T.S. (2011), “State of the art literature review on performance measurement”, Computers and Industrial Engineering, Vol. 60 No. 2, pp. 279-290.
Ozanne, J.L. and Saatcioglu, B. (2008), “Participatory action research”, Journal of Consumer Research, Vol. 35 No. 3, pp. 423-439.
Parida, A., Kumar, U., Galar, D. and Stenström, C. (2015), “Performance measurement and management for maintenance: a literature review”, Journal of Quality in Maintenance Engineering, Vol. 21 No. 1, pp. 2-33.
Parkes, A. and Davern, M. (2011), “A challenging success: a process audit perspective on change”, Business Process Management Journal, Vol. 17 No. 6, pp. 876-897.
Perego, P. and Hartmann, F.G.H. (2009), “Aligning performance measurement systems with strategy: the case of environmental strategy”, Abacus, Vol. 45 No. 4, pp. 397-428.
Prosci (2012), Prosci 2012 Edition of Best Practices in Change Management Benchmarking Report, Prosci, Loveland, CO, available at: www.change-management.com/tutorial-2012-bp-obstacles.htm
Raelin, J.A. and Coghlan, D. (2006), “Developing managers as learners and researchers: using action learning”, Journal of Management Education, Vol. 30 No. 5, pp. 670-689.
Romano, P. and Formentini, M. (2012), “Designing and implementing open book accounting in buyer–supplier dyads: a framework for supplier selection and motivation”, International Journal of Production Economics, Vol. 137 No. 1, pp. 68-83.
Ross, A., Buffa, F. and Dröge, C. (2006), “Supplier evaluation in a dyadic relationship: an action research approach”, Journal of Business Logistics, Vol. 27 No. 2, pp. 75-101.
Sabri, E. and Verma, L. (2015), “Mastering change management for successful supply chain transformation (Chapter 5)”, IGI Global, available at: www.igiglobal.com/ondemand
Schein, E. (1987), The Clinical Perspective in Fieldwork, Sage, Thousand Oaks, CA.
Shapiro, D., Kirkman, B. and Courtney, H. (2007), “Perceived causes and solutions of the translation problem in management research”, Academy of Management Journal, Vol. 50 No. 2, pp. 249-266.
Silverman, D. (1993), Interpreting Qualitative Data – Methods for Analysing Talk, Text, and Interaction, Sage Publications, London.
Stuart, I., McCutcheon, D., Handfield, R., McLachlin, R. and Samson, D. (2002), “Effective case research in operations management: a process perspective”, Journal of Operations Management, Vol. 20 No. 5, pp. 419-433.
Sullivan, K., Kashiwagi, D. and Lines, B. (2011), “Organizational change models: a critical review of change management processes”, COBRA 2011 – Proceedings of RICS Construction and Property Conference, pp. 256-266.
Susman, G. and Evered, R. (1978), “An assessment of the scientific merits of action research”, Administrative Science Quarterly, Vol. 23, pp. 582-603.
Szamosi, L.T. and Duxbury, L. (2002), “Development of a measure to assess organizational change”, Journal of Organizational Change Management, Vol. 15 No. 2, pp. 184-201.
Tangen, S. (2005), “Demystifying productivity and performance”, International Journal of Productivity and Performance Management, Vol. 54 No. 1, pp. 34-46.
Taskinen, T. (2003), “Improving change management capabilities in manufacturing: from theory to practice”, Production Planning & Control, Vol. 14 No. 2, pp. 201-211.
Taskinen, T. and Smeds, R. (1999), “Measuring change project management in manufacturing”, International Journal of Operations & Production Management, Vol. 19 No. 11, pp. 1168-1187.
Taticchi, P., Tonelli, F. and Cagnazzo, L. (2010), “Performance measurement and management: a literature review and a research agenda”, Measuring Business Excellence, Vol. 14 No. 1, pp. 4-18.
Taticchi, P., Tonelli, F. and Pasqualino, R. (2013), “Performance measurement of sustainable supply chains”, International Journal of Productivity and Performance Management, Vol. 62 No. 8, pp. 782-804.
Teh, A. and Pang, L.C. (1999), “Performance measurement for public sector organizational transformation”, International Journal of Business Performance Management, Vol. 1 No. 4, pp. 433-454.
Thakkar, J.J. (2012), “SCM based performance measurement system: a preliminary conceptualization”, Decision, Vol. 39 No. 3, pp. 5-43.
Toffel, M.W. (2016), “Enhancing the practical relevance of research”, Harvard Business School Working Paper No. 16-082.
Tung, A., Baird, K. and Schoch, P.H. (2011), “Factors influencing the effectiveness of performance measurement systems”, International Journal of Operations & Production Management, Vol. 31 No. 12, pp. 1287-1310.
van der Voet, J. (2015), “Change leadership and public sector organizational change: examining the interactions of transformational leadership style and red tape”, The American Review of Public Administration, Vol. 46 No. 6, pp. 660-682.
Voss, C., Tsikriktsis, N. and Frohlich, M. (2002), “Case research in operations management”, International Journal of Operations & Production Management, Vol. 22 No. 2, pp. 195-219.
Whelan-Berry, K.S. and Somerville, K.A. (2010), “Linking change drivers and the organizational change process: a review and synthesis”, Journal of Change Management, Vol. 10 No. 2, pp. 175-193.
Wieland, U., Fischer, M., Pfitzner, M. and Hilbert, A. (2015), “Process performance measurement system – towards a customer-oriented solution”, Business Process Management Journal, Vol. 21 No. 2, pp. 312-328.
Yadav, N. and Sagar, M. (2013), “Performance measurement and management frameworks: research trends of the last two decades”, Business Process Management Journal, Vol. 19 No. 6, pp. 941-971.
Yin, R.K. (2003), Case Study Research: Design and Methods, 3rd ed., Sage Publications, Thousand Oaks, CA.
Bourne, M. and Neely, A. (2000), “Designing, implementing and updating performance measurement systems”, International Journal of Operations & Production Management, Vol. 20 No. 7, pp. 754-771.
Brewer, P.C. and Speh, T.W. (2000), “Using the balanced scorecard to measure supply chain performance”, Journal of Business Logistics, Vol. 21 No. 1, pp. 75-93.