The purpose of this paper is to increase understanding of how managers attempt to make purposeful use of innovation management self-assessments (IMSA) and performance information (PI).
An interpretative perspective on purposeful use serves as the analytical framework, and the paper is based on empirical material from two research projects exploring the use of IMSA and PI in three case companies. Based on the empirical data, consisting of interviews and observations of workshops and project meetings, a qualitative content analysis was conducted.
The findings indicate that how managers achieve a purposeful use of PI is related to their approach toward how to use the specific PI at hand, and two basic approaches are analytically separated: a rule-based approach and a reflective approach. Consequently, whether the right thing is being measured also becomes a question of how the PI is actually interpreted and used. Thus, the extensive focus on what to measure and how to measure it loses its edge unless equal attention is given to how managers are able to use the PI to make knowledgeable decisions regarding what actions to take to achieve the desired changes.
Given the results, it is a managerial responsibility to ensure that all managers who are supposed to engage in using the PI are given roles in the self-assessments that are aligned with the level of knowledge they possess, or can access.
How managers purposefully use PI is key to understanding the potential impact of self-assessments.
Johansson, P.E., Blackbright, H., Backström, T., Schaeffer, J. and Cedergren, S. (2019), "Let us measure, then what? Exploring purposeful use of innovation management self-assessments: A case study in the technology industry", International Journal of Quality & Reliability Management, Vol. 36 No. 10, pp. 1734-1749. https://doi.org/10.1108/IJQRM-09-2018-0243
Emerald Publishing Limited
Copyright © 2019, Peter E. Johansson, Helena Blackbright, Tomas Backström, Jennie Schaeffer and Stefan Cedergren
Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial & non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
“You are what you measure” is an expression meaning that what you are putting the spotlight on when doing performance measurements and audits is also what will likely be prioritized activities in the organization (Hauser and Katz, 1998). What happens, however, when people in the organization have different views of – or even do not understand – what is actually being measured? Who will you then become? In this paper, we focus on how first-line and middle managers engage in a particular innovation management self-assessment (IMSA) that measures innovative climate in organizations (Ekvall, 1996) and how they engage to achieve a purposeful use of the performance information (PI) generated by the self-assessment.
Deploying different kinds of performance measurement and self-assessments is a common practice for leveraging organizational development (Bourne et al., 2017; Choong, 2014; Dickson, 2008; Ford and Evans, 2006; Moynihan and Pandey, 2010; Radnor and Barnes, 2007; Samuelsson and Nilsson, 2002), and one particular area of performance measurement that has grown during the last decade is IMSA (Birchall et al., 2011; Björkdahl and Holmén, 2016; Chiesa et al., 1996; Ekvall, 1996; Karlsson, 2015; Radnor and Noke, 2002, 2006). Previous research has predominantly focused on what aspects should be measured and how they can be measured (Karlsson, 2015). However, over the last seven years, we have worked with different organizations in the technology industry, spanning from small companies with fewer than ten employees to large multinational corporations with more than thirty thousand employees, all of which have used different IMSA. Despite the differences in setting, they all share problems with analyzing and contextualizing the PI generated by the IMSA, and/or with the organizational integration and transformation of PI into more knowledgeable actions – that is, a purposeful use (Kroll, 2015; Moynihan, 2009). These are problems that they share with the organizations described in previous research on the use of PI in the context of performance management and self-assessments (Chiesa et al., 1996; Kerssens-van Drongelen and Bilderbeek, 1999; Loch and Tapper, 2002; Moynihan, 2009; Panizzolo et al., 2010; Samuelsson and Nilsson, 2002; Tarí, 2010; van der Wiele and Brown, 1999). Thus, regardless of what area is being measured, the PI in itself does not change anything, which raises questions about how the output of self-assessments – the PI – is actually used. A large proportion of previous research adopts a variance theorizing approach (Langley et al., 2013) and has generated valuable know-that knowledge on the phenomenon of PI use.
In this paper, we instead adopt a process studies approach (Langley et al., 2013), aiming to gain know-how knowledge regarding first-line and middle managers' use of self-assessments. Following this, the purpose of the paper is to increase the understanding of how managers operating in a technology industry context engage in enacting a purposeful use of IMSA and PI, and two research questions (RQ) are addressed in the paper. The first RQ is descriptive, and the second analytical:
RQ1. How do managers engage in IMSA and PI use?
RQ2. How do managers achieve a purposeful use of IMSA and PI?
The findings indicate that how managers engage in IMSA and achieve a purposeful use of PI depends on their approach toward how to use the specific PI at hand, and two basic approaches are analytically separated: a rule-based approach and a reflective approach. The two approaches are, in turn, dependent on managers' conceptions and knowledge regarding what is being measured – in this case, innovative climate – which seems to be crucial, especially when the assessments cover such tacit dimensions of organizational life. Managers who act on PI based on a reflective approach show evidence of a solid understanding of the measured area. The findings also indicate the importance of being able to contextualize and operationalize the PI. This means that managers, in addition to solid knowledge, need to be able to further analyze the PI, relate it to the specific circumstances under which it was generated and take active measures as to how to progress. By contrast, managers acting on PI based on a rule-based approach express a limited understanding of what is being measured. Even though they are long-time experienced managers, highly skilled in their areas of expertise, they are novices in the measured area when they lack a basic understanding of what is being measured. As a consequence, these managers are not able to make sense of the PI and tend to act on it in a rule-based manner. Previous research indicates that the amount of job experience is a key factor influencing how PI is used. However, as many different performance indicators, both hard and soft, are being measured in organizations, it is important to differentiate and not expect managers to have a readiness to make sense of and act on all types of measures, which supports the suggestion made by Moynihan and Pandey (2010) that PI use is task dependent.
It is, therefore, reasonable to conclude that job experience matters in one sense, but it is the amount of experience in the measured area that is crucial, not the job in itself.
Performance information use based on self-assessments and performance management
Previous research on IMSA and PI use is relatively limited compared with the research on the use of self-assessments in the context of quality management (Ford and Evans, 2006; Samuelsson and Nilsson, 2002; Tarí, 2010; van der Wiele and Brown, 1999) and performance management (Dickson, 2008; Holm, 2018; Kroll, 2015; Moynihan, 2005; Moynihan and Pandey, 2010; Taylor, 2014) – a common practice deployed in the public sector. Even though the assessment areas are not the same, it is reasonable to believe that the practices are partly similar. Globerson (1985) stated early on that one major issue with performance management systems is to design feedback loops that respond to differences between measures of the current state and the expected state. That is to say, the outcomes of measurements need to be fed back into the organization and to the intended recipients of PI, who are supposed to take action on the grounds of the given information. However, the character and quality of the PI vary, and thus so do the possibilities to take action based on it. Holm (2018) also points out that multiple goals are usually measured at the same time, yet few studies have explored the strategies managers use in choosing which measures to prioritize and act on. Performance management systems generally generate so-called hard measures (e.g. sick leave, grades), whereas self-assessments also generate different kinds of so-called soft measures (Dalton et al., 1980), which often focus on tacit dimensions of organizational life and therefore become even more open to interpretation.
The challenge of making sense of soft measures is well illustrated in a case study by Svensson and Klefsjö (2006), in which the authors explored the implementation of a self-assessment and found that the employees engaged in the self-assessment did not understand its purpose, or for whom it was conducted. Furthermore, the employees also expressed concern about not receiving proper training in how to conduct the self-assessments, and the questions used in the assessment were difficult to understand. The managers, for their part, underestimated the resources required to conduct the self-assessment.
The findings of Svensson and Klefsjö provide one possible explanation of why managers in general are reluctant to integrate PI into their routines and decisions (Holm, 2018). Given the challenges organizations experience in making use of PI, the research of Ford and Evans (2006) on the role of follow-up of self-assessments and Moynihan's (2009) research on purposeful PI use provide knowledge on some critical aspects to take into consideration. "Follow-up" refers to "the means by which organizations develop and implement interventions once a need for change is realized" (Ford and Evans, 2006, p. 590), and their research indicates that in high-performing organizations the top management team is involved in initiating follow-up activities and there is strong engagement from the CEO, compared to low-performing organizations, in which lower levels of management are the main drivers. "Purposeful use" of PI refers to the knowledgeable decisions managers – or other users of PI – make based on PI in order to improve organizational practice (Moynihan, 2005). In Kroll's (2015) systematic literature review of drivers of PI use, a number of drivers with a strong impact on PI use are identified: measurement-system maturity, stakeholder involvement, leadership support, support capacity, innovative culture and goal clarity. A second set of drivers was identified as promising impact factors, including learning routines, user (prosocial) motivation, networking behavior and political support (Kroll, 2015). Based on the review, Kroll suggests that one area in need of further research is the role of the potential users of PI, which is supported by Moynihan and Pandey (2010), who suggest that managers with task-specific responsibilities are more prone to using PI than managers with more general responsibilities.
They conclude, “Managers rarely learn directly from quantitative numbers, but from interpreting these numbers, making sense of what they mean given their knowledge of the context in which they work. Individuals with a deep knowledge of tasks are therefore advantaged in the ability to apply performance data” (Moynihan and Pandey, 2010, p. 854). Furthermore, Kroll (2015) suggests that purposeful use of PI more likely occurs when managers are a part of the design and contextual adaptation of measures and when they are committed to a performance-based steering philosophy.
Following this, for a self-assessment such as an IMSA to have an impact on organizational behavior, people need to be able to take action based on the PI generated as its output. Consequently, the purposeful use of PI is a critical work activity that, by necessity, requires its own set of skills and competences (Johansson, 2017).
An interpretative perspective on managers’ engagement and enactment of PI
One way of approaching how managers engage in IMSA and PI use is to view purposeful use as a critical work activity that managers – or other users of PI – need to conduct. From an interpretative perspective, individual actors engage in tasks based on their conceptions of the specific task, that is, the meaning an aspect of reality takes on for an individual actor (Sandberg, 2000; Sandberg and Targama, 2007). Following this, a reason why the same kind of work is performed in different ways and with different levels of quality is that the understanding and interpretation of work may vary, and as stated by Sandberg (2000), “workers’ knowledge, skills, and other attributes used in accomplishing work are preceded by and based upon their conceptions of work” (p. 20). Johansson and Osterman (2017) identified, in a previous study, that actors’ approach to key concepts is vital to the actions that are taken when solving a task, and two approaches are distinguished: first, a rule-based approach, which is characterized by having a fixed view of the concepts, and second, a reflective approach, which is characterized by sensitivity to the specific context and work process at hand.
How a task is solved depends, then, on the actors' ability to perform – in other words, their competence as related to the specific field of expertise (Ellström, 2011). However, it is also necessary to identify what kinds of tasks need to be conducted, and the context in which tasks are embedded (Sandberg, 2000). Based on this, competence is an interplay between how tasks are interpreted and the acquired knowledge and skills that the individual possesses to perform a specific task. When new to a field, many individual actors tend to act in a linear but at the same time fragmented way, as they are highly dependent on established routines and conventions designed to provide guidance for their actions (Dreyfus, 2004; Johansson, 2017). Having a limited set of knowledge and skills within a specific competence domain means that one simplifies and expresses a black-and-white picture of the area (Johansson, 2017). Thus, the level of knowledge influences an individual's conceptions (Sandberg, 2000; Sandberg and Targama, 2007). Furthermore, Sandberg and Pinnington (2009) conclude that individual attributes – such as knowledge and skills – constitute only one aspect of the competence used to perform certain work. Instead, competence in use emerges in any given situation as ways of being, based on the individual's self-understanding (e.g. identity as a researcher), his or her understanding of work, relations to other people and the different tools that are accessible.
In the context of engaging in IMSA, this means that individuals' conception of the goal and purpose of an IMSA to a large degree frames their potential use of PI. The individual's conception thereby shapes both his or her understanding of the goals and how he or she perceives his or her part in the IMSA process. It is, therefore, the individual's conception of his or her part in the assessment process that makes up, forms and organizes how he or she engages in the process of achieving the assessment goals (Sandberg, 2000).
This paper is based on empirical material from two research projects exploring the use of IMSA as a means to increase organizational innovativeness. The first research project, the pilot, spanned nine months in 2014; the second, the research project, had a duration of three years, with longitudinal data collected over a period of 18 months. The projects were designed as exploratory longitudinal case studies (Gerring, 2007) and adopted an interactive research approach (Ohlsson and Johansson, 2010; Svensson et al., 2007).
Selection of case companies
The projects were conducted in close collaboration with three companies, below referred to as Cases A, B and C. At the start of the pilot, there was already a well-established relationship between the research group and the participating companies through an interest organization with approximately 130 member organizations. Some of the company representatives involved in the research project had collaborated with the researchers on a regular basis over several years. All three companies are located in the same geographical region. Altogether, this enabled a close interaction, which was considered an important condition for a case study design that required accessibility, openness and trust between the participants as well as toward the researchers. Furthermore, several initial meetings were held between the researchers and each of the companies to ensure that there was a mutual interest in the area researched, which further enabled close access to the key respondents.
Cases A, B and C
Case A is a division within a large multinational company with more than 10,000 employees around the world. The company pioneered the technology within its market in the 1950s and remains a leader in technological innovation and market share to this day. Over the past 50 years, the company has achieved the vast majority of technology breakthroughs in its field, and the ability to be innovative is stressed as increasingly important. Furthermore, the company was in an expansive stage in which a new strategy was to be developed and processed. Participation in the research project was seen as part of this development process, with the aim of developing the innovative climate and increasing the awareness of and support for innovation within the organization.
Case B is also part of a large multinational company with more than 10,000 employees around the world. The company is a global leader within its technology field and has the broadest portfolio in its industry, delivering innovative products and services that set new standards. Much of its development is conducted within the context of delivery order projects, in a way that constantly contributes to the development of the platform of system solutions and products the company has available. In parallel, the company is developing the future solutions and products that will form the basis for the next platform to be used in future projects. Both kinds of development are conducted largely with the same resources, i.e. without any pure R&D operations. Within this context, it was considered central to strengthen innovation capability, and the research project was considered part of that.
Case C is a branch office of a company that operates on the national market and offers technical IT solutions, services and products to customers who are developing products with high IT content. The company has customers in a wide range of industries, such as energy, defense, life science and telecom. The primary reason for taking part in the research project was the importance of employees' development and the provision of conditions that allow them to work in an innovative way.
The IMSA – continuous self-assessments of innovative climate
IMSA refers to the conduct of a systematic self-assessment of innovation management practices and conditions of an organizational entity such as a team, a department or an organization (Moultrie et al., 2007). More specifically, the IMSA that has been used in the three cases was developed by a research group linked to the research project. The IMSA, which uses a self-assessment procedure (Radnor and Noke, 2006), is based on measures related to organizational climate for innovation (inspired by the research of Ekvall, 1996). The IMSA comprises ten predefined items integrated into a web-based interface, and internal assessors undertake continuous assessments on a weekly basis.
Within each of the three cases, the IMSA was implemented on a departmental or group level, meaning that the employees within each department or group had been selected as internal assessors. Each assessor was expected to continuously estimate how well the statements corresponded to his or her perceptions of the week that had passed, and the tool provided auto-generated feedback based on his or her individual assessment results. The assessment process was used in both the pilot project and the subsequent research project. One person in each assessment group, below referred to as the feedback provider, was also provided a report of the PI representing the entire assessment group. The PI contained information about the last assessment, comparisons with prior assessments, and trend charts visually displaying the development over time.
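To illustrate the kind of aggregation such a feedback report involves, the following is a minimal sketch. The item names, the 1–5 scale and the two-week data are illustrative assumptions only; the actual IMSA items, scale and report format are not specified here.

```python
from statistics import mean

# Hypothetical weekly assessments: one dict per assessor, mapping an
# (invented) item id to a score on an assumed 1-5 scale.
week_prior = [
    {"time_for_ideas": 2, "openness": 3},
    {"time_for_ideas": 3, "openness": 4},
]
week_latest = [
    {"time_for_ideas": 4, "openness": 3},
    {"time_for_ideas": 3, "openness": 5},
]

def group_pi(assessments):
    """Aggregate individual assessments into group-level PI (mean per item)."""
    items = assessments[0].keys()
    return {item: mean(a[item] for a in assessments) for item in items}

def pi_report(prior, latest):
    """For each item, return (latest group mean, change vs prior week),
    mimicking a feedback provider's comparison with the prior assessment."""
    prior_pi, latest_pi = group_pi(prior), group_pi(latest)
    return {item: (latest_pi[item], latest_pi[item] - prior_pi[item])
            for item in latest_pi}

report = pi_report(week_prior, week_latest)
# e.g. report["time_for_ideas"] -> (3.5, 1.0): a group mean of 3.5,
# up 1.0 from the prior week
```

A sequence of such per-week group means over 18 months is what the trend charts described above would display.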
Collecting the empirical material
Apart from undertaking the IMSA, a major activity in both research projects was a research intervention, during which the empirical material used in this paper was collected. To explore how managers and other employees engaged with the IMSA and used the PI, a total of six assessment groups were formed as a part of the projects. The intervention comprised a series of workshops intended to support the respondents in their use of the IMSA and the PI, and these can be seen as a kind of learning forum (Moynihan, 2005).
Throughout the research project, empirical material was collected by using a mixed-methods approach (Merriam and Tisdell, 2015). In total, data were collected in 3 start-up meetings with 8 participants, 28 interviews with 24 participants, 6 workshops with 16 participants, 2 project meetings with 9 participants and 1 observation of one participant.
The unit of analysis in this paper is the six managers – the feedback providers – who were the receivers of the PI from the IMSA, and their daily practice of IMSA and PI use.
In this paper, the empirical material is primarily based on a series of interviews with the managers, and field notes and audio recording from the meetings in which the managers participated: workshops and project meetings (Figure 1).
A series of interviews with the six managers – two from each company – was conducted over a period of 18 months. The first round of interviews was conducted during 2014–2016 and the second during 2017. A semi-structured interview guide was used, covering questions regarding the managers' conceptions of innovation, how they work with innovation, and more specific questions concerning how they engage in the IMSA. The interviews lasted from thirty to ninety minutes and were audio recorded and transcribed. On one occasion, an additional interview was conducted with one of the feedback providers when he left the company. Moreover, an additional interview with a manager in Case A was conducted due to a major upcoming reorganization.
Workshops and project meetings
The second source of empirical material, triangulating the interview material, is the audio recordings and field notes from the six workshops and the different kinds of project meetings conducted at the companies. In the initial phase of the research project, three project meetings, one at each company, were conducted. The observations of each of these meetings were documented in writing by the researchers. Furthermore, in the pilot project, as well as in the subsequent research project, the group discussions taking place as a part of the workshops with representatives from the three companies were audio recorded and partially transcribed. Each workshop lasted four hours. The group discussions that took place during the workshops proved to be an important set of data, as the respondents were able to provide their reflections on the self-assessments, how they engaged with the PI and important contextual impact factors over time.
Analyzing the empirical material
Based on the interviews and the documentation of workshops, a content analysis was used to analyze the empirical material (Merriam and Tisdell, 2015). The intention of the content analysis was to identify critical aspects of how an IMSA is used, how the IMSA was conceived by those taking part in the internal assessment process, and how the participants used the PI it provides. The choice of a content analysis was made for primarily two reasons: first, to reveal insights into how middle managers engage with IMSA and PI that are difficult to gain with other methodologies and that prior research has had difficulty describing; and second, to avoid a fragmentation of the empirical material that might otherwise give an oversimplified picture of the self-assessment situation as a whole (Saunders et al., 2012). Due to ethical considerations, the actual identity of the case companies has been removed, and the respondents have been given fictitious names.
The merit of qualitative case studies is not that they are easily replicable; instead, their strength lies in the possibility to discover critical aspects by following leads identified throughout the study, which tend to be neglected or left out when conducting research in a more structured way. That is, this paper leans more toward theorizing than toward testing already established theories (Swedberg, 2012).
In this section, RQ1 is addressed. To provide a thick description of how the participants engage with the continuous self-assessments as a part of the IMSA, as well as how their engagement changed over time, we use the case of John at Case A as a storyline. First, John's conception of innovation relative to the IMSA and his situation when he first entered the pilot project are described. The case description then continues with a general account of how John's engagement in different stages of the continuous assessments changed during the projects. Intertwined with the case of John, comparisons with other respondents – primarily the other feedback providers – are made in order to put the case of John into context.
In each participating case company, a number of feedback providers were appointed. John and William at Company A are the only two participants who took part in both the pilot and the subsequent research project. The case of John is used because it provides a rich description of how the managers engage in the IMSA and how this changes over time due to different circumstances, and it further describes the contextual setting and its impact on his PI use in practice. Due to the longitudinal research design, we have been able to track changes over the course of time and the emergence of purposeful use.
Change of conception – from reliability in measurements to an instrument for increased awareness
John is a highly skilled technician who works in advanced engineering at a large multinational corporation. The concept of innovation is, in this context, to a large extent conceived as something reserved for technology-based product innovation mainly run by a limited group of researchers in R&D. When first entering the pilot project in 2014, John also expressed a view of innovation that primarily associated the concept with technology-based products. However, John showed great interest in how to support innovativeness in the organization and was very open-minded toward a broader view of the concept of innovation.
Reliability in measurements – focus on what and how to measure
During one of the first workshops in the pilot project in 2014, where participants from different companies met, the importance of trusting the IMSA process in itself emerged as an important condition. John, like the other participants in the workshop, showed great interest in the IMSA per se. Both the design of the continuous self-assessments and the reliability of the PI on a numerical level were frequently discussed. Participants from the different case companies were trying hard to make sense of the assessment scale. John put a lot of technical focus on the assessments, scale, result plotting and deviation, even discussing whether the accuracy of the results could be considered at risk due to the assessors' varying states of mind and attitudes. What if some people were by nature positive and some by nature negative? How could the results be interpreted in a reliable way if someone always considered things to have been good and someone else always considered everything to have been bad? What if some assessors always, every week, stated that they had had plenty of time to work with novel ideas, whereas someone else with the same amount of time at hand, or perhaps even more, would answer that there had not been enough time to work with novel ideas? A fair amount of time during the workshop was also spent discussing which weekday to select for the assessments. Was there a mentality connected to certain weekdays that risked distorting the results? Would Mondays be bad assessment days because people came to work rested and happy after the weekend and thereby underestimated the challenges they had confronted the previous week? Or would it be the other way around: that people, after a good weekend with friends and family, would be a bit depressed by having to return to a new long workweek, and that their Monday mood would have a negative impact on the assessment results?
The discussions went on and were approached from different perspectives in the search for a reliable approach toward the assessment results.
Thus, at that time, John was struggling to find a way to make the analysis of the assessments fit the standards of an engineer. For example, at one of the workshops during the pilot project, he asked for extended access to the PI so that he could do experimental calculations on the data to see if he could gain a deeper understanding of the internal relations between metrics, or find other interesting patterns. He was given such access after a discussion about the reliability and usefulness of the assessment results on a numerical level.
Change of conception – an instrument for increased awareness
John belonged to the group of participants who showed the greatest trust in the continuous assessments and the IMSA process. He expressed an openness to address the PI from another, less technical, perspective, and by the end of the pilot, John had dramatically changed his approach toward the assessment results, coming to treat them primarily as a base for increased awareness and discussions around the focus area. Thus, his focus shifted from the result level per se to the relation between changes in the PI and specific organizational events.
Furthermore, at the end of the pilot, when looking back on the actual impact of his participation, it is possible to detect changes in John's conception of innovation. His reasoning in relation to the PI reveals how he adopted a broader approach toward the concept of innovation and developed knowledge about the area. Increased accessibility of the concept of innovation – for everyone, not only R&D staff – was, he argued, the most valuable result:
“It has especially defused the concept of innovation. It made the concept a bit more accessible to everyone. It is no longer only a few super technicians who work with innovation – it has become more accessible to the common person. It has made it more acceptable to talk about innovation, that innovation is not something that should be treated separately and needs to cost a lot of money, but can actually be very ingrained into our work.”
John’s change in conception can be described as a movement from a rule-based approach to the key concepts of innovation toward a more reflective approach (Johansson and Osterman, 2017), which offered him more options in how to engage with and use the IMSA.
Engagement in the self-assessment process
John (Case A), David (Case B), as well as Robert and Tom (Case C), have throughout the project shown engagement, openness and trust in the workshops they have been able to participate in. They have all shown an open mind and a willingness to share both positive and negative experiences with the other participants, as well as to listen to and take in others’ thoughts. Paul (Case B), on the other hand, seemed to struggle with the wider approach toward the concept of innovation that was used in the project. He seemed, to a lesser extent than the other feedback providers, to trust the conceptual and procedural base of the project. More often than the others, he took a critical, questioning perspective toward the situation. The following conversation, taken from one of the workshops in 2016, indicates the difference in attitude between David and Paul, both representing Case B:
David: “For me, it (the continuous assessments) is a reminder that we should maybe talk a little more about this in our meetings.”
Paul: “But then the question is, are you driven by the fact that you want to come here (to the workshop) without a bad conscience, or are you driven by the fact that you really want to […] if I may put it roughly.”
David: “No, I am driven by the fact that I really want to improve myself.”
By only observing the specific situation, it is not possible to judge whether the conceptions of the two are expressions of a personal attitude toward the project or the area in general, or if it is an attitude based on previous experience with similar situations. Yet, regardless of the cause of the differences in their conceptions, the participants clearly conceive the situation very differently, which most likely influences their and others’ behavior in this situation.
Throughout the projects, managerial commitment was identified as a critical part of purposeful use, and across the three cases there are large differences in managerial commitment. John and William, in Case A, must be considered as having a strong managerial interest and great freedom to act within the organization. William not only provided active organizational support to the project; he was also an active part in both projects and was, in reality, the projects’ company sponsor and champion. Robert and Tom (Case C) and David (Case B), on the contrary, stressed the need for more commitment from senior management and the negative impact of its absence:
Robert: “If our organization had been interested and been driving this, it would have become a more natural part of our business, but the commitment disappeared in the staff turnover, and our current manager hardly knows what this is. This is not something I get feedback on that will benefit me and that increases the risk that it falls behind and will not be prioritized.”
Although David is really making an effort to work with the assessment results – both in terms of analyzing them and in terms of regularly discussing the feedback with the assessment group – he experiences that he does not have the mandate to take necessary actions based on the PI of the IMSA. The assessment area itself is important to him in his role as a manager but, according to his experience, a much stronger linkage to the organization is required in order to utilize it:
David: “There is a lack of managerial commitment.” […] “There is no long-term idea. No purpose. What are we supposed to do with it? What should we change?”
Thus, even with support from employees in the organization, and even though the assessments provided David with valuable learning as a manager, without actual commitment from senior management the continuous assessments have difficulty becoming an integrated part of organizational practice.
In contrast, John and William not only seem to have had managerial and organizational support for their PI use; William also seems to have had both the required knowledge and the position to enable the necessary management commitment and organizational support. Here, his internal work of contextualizing the continuous assessments – spreading, explaining and linking the PI to other organizational activities – seems to have been of critical importance for the managerial conception of the assessments as something of organizational relevance.
Purposeful use of performance information
During the final interview in 2017, John described one of the more interesting results of Case A’s participation in the pilot project: the positive change in perceptions of time and time planning in his assessment group. His experience was that, without any additional resources being provided, the pilot made people gain greater control, on an individual level, over their time and time planning. When asked what made their participation in the pilot successful, John argued that attitude was critical:
Our attitude has been that this isn’t a big thing, and we aimed to integrate it as part of daily business, and that made it easier to get acceptance from higher-ups in the organization. We already had an engagement and drive to work with questions like this in the organization […] but it is important to have the attitude that it doesn’t have to be so overwhelming – it’s more of a mindset.
John, William (Case A) and David (Case B) succeeded very well in their efforts to use and integrate the PI in their assessment groups, and they consciously engaged in trying to integrate the PI into the everyday life of those groups. In line with the advice given in the workshops, they analyzed and integrated feedback from the continuous assessments in established meetings with their assessment groups. For John, however, it was easier to work with this feedback integration in the pilot project than in the later research project. In the pilot, he worked closer to his assessment group in a daily work context, which allowed him to use more subtle methods, such as bringing up subjects related to the innovation climate at the coffee machine, so that issues concerning innovation and climate became integrated into other daily discussions. This kind of feedback integration was more difficult for John in the research project, where he did not share his daily work context with the rest of the assessors. David shares a similar experience: due to organizational changes later in the research project, he no longer shared his daily work context with his assessment group, although he is really making an effort to find a routine for feedback integration in the new situation.
Consequently, Case A was able, in a way that none of the other companies were, to utilize the continuous assessments not only in the everyday setting of the assessment group but also in the wider context of their change processes:
William: “The most important thing for us is the engagement in the group that showed the least commitment and drive in the assessments. […] Therefore, this was a trigger to actually see that here is a material that we can use […] and it has now become a trigger for each department to show how we can even out this engagement difference between departments. […] and thus, it became a catalyst to engage the whole organization. Therefore, on Friday, we will present our actions to increase the commitment of each department in the organization, to each department manager. Concrete actions, what we should do; this has made us reflect upon the way we conduct our strategic work.”
By contrast, Tom and Robert at Company C, along with Paul at Company B, have faced major difficulties in integrating the PI in their workgroups. Although they express a very positive attitude toward the idea of using the IMSA, which concerns important areas of development in their organizations, Tom and Robert recurrently show signs of lacking the practical knowledge of how to analyze, share and communicate the PI with the assessment group. Thus, they show signs of not fully understanding how the PI relates to organizational needs and goals, and lack the practical knowledge required to apply it to their own context. The challenges some of the participants faced in integrating the assessments are well illustrated in the following conversation from one of the workshops:
Robert: “It is very difficult to integrate it into our everyday work. It is easy to make a single effort. Now I am going to do this, and you get it done. But it’s more difficult to change your own everyday situation and actually integrate it into something that gives it continuity.”
Mark: “I haven’t done much since last time, I have filled in the assessments, but that’s all.”
Stephen: “Same here. You take the assessments every week but otherwise, we haven’t done anything.”
Instead of seeking support from colleagues or employees, the uncertainty in how to use the PI creates an inertia that, at best, preserves the status quo and, at worst, leads to unwillingness among the employees to engage in these kinds of measurements in the future.
Enactment of purposeful use:
Change of conception – from reliability in measurements to an instrument for increased awareness:
Developing a broader approach toward the concept of innovation, and addressing PI from another, less technical, perspective. Focusing on the relation between changes in the PI and specific organizational events.
A reflective approach emerges, as compared with a rule-based approach, offering more options in how to engage with and use the IMSA.
Engagement in the self-assessment process:
Open-mindedness and a willingness to share both positive and negative experiences with other participants, as well as to listen to and take in others’ thoughts.
Trust in the conceptual and procedural base of the IMSA.
It is necessary to have a mandate to take the necessary actions based on the PI of the IMSA. Without actual commitment from senior management, the continuous assessments have difficulty becoming an integrated part of organizational practice.
The internal work of contextualizing the continuous assessments, spreading, explaining and linking the PI to other organizational activities seems to be of critical importance for the managerial conception of this as something of organizational relevance.
Purposeful use of PI:
Analyzing and integrating feedback from the continuous assessments in established organizational routines of the assessment groups; thus, consciously engaging in trying to integrate the PI into the everyday life of the assessment groups. Using subtle methods, such as bringing up subjects related to the innovation climate at the coffee machine, so that issues concerning innovation and climate become integrated into daily discussions.
Having the practical knowledge of how to analyze, share and communicate the PI with the assessment group. Thus, purposeful use implies an understanding of how the PI relates to organizational needs and goals, and practical knowledge about how to apply it to one’s own context.
Discussion and conclusions
In a time when organizations face the challenge of being dependent on a continuous flow of new knowledge in order to survive, performance measurements – in this case an IMSA – are supposed to provide organizations with increased knowledge about a specific assessed area. This increased knowledge about its current state, weaknesses or strengths, is often supposed to form the basis for positive changes regarding how the organization is to take future actions (Karlsson, 2015). However, as indicated in this paper, there are several challenges on this road, which will be further discussed in this section.
Previous research indicates that the amount of job experience is a key factor influencing how PI is used (Kroll, 2015). However, as many different performance indicators, both hard and soft, are measured in organizations at the same time (Holm, 2018), it is important to differentiate among them and not expect managers to have a readiness to make sense of, and take action on, all types of measures; this supports the suggestion made by Moynihan and Pandey (2010) that PI use is task dependent. It is reasonable to conclude that job experience matters in one sense. However, it is also important to emphasize that it is the amount of experience and knowledge acquired in the measured area that makes the difference, not the job in and of itself. Furthermore, even though a certain level of knowledge seems to be necessary for the use of PI, nothing in the empirical material clearly shows that the knowledge acquired by the feedback provider guarantees a purposeful use of PI, while lacking knowledge undoubtedly has a negative impact on its use.
Still, as an answer to RQ2 – how managers achieve a purposeful use of PI – the findings of this paper provide insights into how engagement is related to managers’ approach toward how to use the specific PI at hand. Using the framework of Johansson and Osterman (2017), two basic approaches are analytically separated: a rule-based approach and a reflective approach. The two approaches are in turn related to managers’ conceptions of what is being measured – in this case, innovative climate – which seems to be crucial, especially when the assessments cover such tacit dimensions of organizational life. Well in line with Sandberg’s (2000) theory of competence, the feedback providers’ conceptions of the assessed area affect their ability – that is, their competence – to reason about and conceptualize the engagement with and use of the PI, and thus how to purposefully use it. Even though they are long-experienced managers, highly skilled in their areas of expertise, they still become novices (Dreyfus, 2004) in the measured area when they lack a basic understanding of what is being measured. As a consequence, these managers are not able to fully make sense of the PI and tend to use it based on a rule-based approach. Hence, lacking foundational knowledge about what to do and how to do it, and acting on a rule-based approach, limits, or even prevents, the integration of PI in its organizational context, and thereby becomes a hindrance to purposeful use.
By contrast, the managers who were able to take actions based on the PI show signs of a reflective approach and a higher level of knowledge regarding the assessed area of innovative climate. Moreover, they were able to integrate the PI into organizational practice and possessed the ability to disregard the prescribed ways of using the IMSA when appropriate, bringing flexibility and contextual adaptability, which attenuates the negative effects of, for example, the contextual discontinuity seen in this study. In the context of innovation, contextual adaptability must be considered extremely important, since innovative work requires an openness to change and to deviate from what has been planned.
In the cases where items were measured but not purposefully used by the managers, the measurements seemed to have a limited impact on the organization. Consequently, whether or not the right item is being measured also becomes a question of how the PI is actually being used. Thus, the extensive focus on what to measure and how to measure it (Björkdahl and Holmén, 2016; Chiesa et al., 1996) becomes edgeless unless equal attention is given to how managers are able to use the PI to make knowledgeable decisions regarding what actions to take to achieve the desired changes. In this case, when the focus is on soft measures (Dalton et al., 1980), it becomes even more important to make sure managers have a sufficient level of knowledge to interpret and analyze the PI.
By focusing on how PI can be purposefully used, instead of mainly on what and how to measure (Karlsson, 2015), the findings presented in this paper provide new guidance to managers. As performance measures, both hard and soft, are open to interpretation, the way managers make sense of PI becomes a critical issue to take into consideration (Sandberg, 2000). In the context of the case organizations, using an IMSA measuring innovative climate, few participants had any professional experience of what it takes to create a conducive innovative climate, which means that, regardless of what the PI indicates, few are able to make in-depth sense of it, thus increasing the likelihood of inertia. Following this, a conclusion is that achieving purposeful use of PI benefits from managers deploying a reflective approach. A critical question, then, is how a reflective approach can be trained for and acquired by the users of PI.
Based on the overall findings of the paper, we suggest that introducing a new self-assessment system in an organization comes with a managerial responsibility to make sure that all intended users or participants of an IMSA are assigned roles in the self-assessments that are aligned with the level of knowledge they possess, so that they are able to actually perform the tasks they are assigned to do. Otherwise, when adequate knowledge in the organization and authentic commitment from senior managers are lacking, there is a great risk that this kind of initiative becomes yet another managerial initiative that fades out, adding to the pile of failed organizational change efforts.
To verify the findings and conclusions made in this paper, additional research is needed. It would also be of value to conduct empirical studies in sectors other than the technology industry, in order to reach more generalizable conclusions. Furthermore, based on the findings and conclusions of the paper, it would be of great interest for future research to explore how the interpretative perspective used here could guide the design of a distributed and individualized leadership, both in the assessment process and in innovation management in general. For example, the initial preparations of the self-assessment could be explored with regard to what would be required to provide a more personalized support system, with distributed ownership and individualized sub-goals linked to the overall organizational purpose of the assessment – aspects that are most likely also relevant for several areas of innovation work outside the assessment situation.
Birchall, D., Chanaron, J.-J., Tovstiga, G. and Hillenbrand, C. (2011), “Innovation performance measurement: current practices, issues and management challenges”, International Journal of Technology Management, Vol. 56 No. 1, pp. 1-20.
Björkdahl, J. and Holmén, M. (2016), “Innovation audits by means of formulating problems”, R&D Management, Vol. 46 No. 5, pp. 842-856.
Bourne, M., Franco-Santos, M., Micheli, P. and Pavlov, A. (2017), “Performance measurement and management: a system of systems perspective”, International Journal of Production Research, Vol. 56 No. 8, pp. 2788-2799.
Chiesa, V., Coughlan, P. and Voss, C.A. (1996), “Development of a technical innovation audit”, The Journal of Product Innovation Management, Vol. 13 No. 2, pp. 105-136.
Choong, K.K. (2014), “Has this large number of performance measurement publications contributed to its better understanding? A systematic review for research and applications”, International Journal of Production Research, Vol. 52 No. 14, pp. 4174-4197.
Dalton, D.R., Todor, W.D., Spendolini, M.J., Fielding, G.J. and Porter, L.W. (1980), “Organization structure and performance: a critical review”, The Academy of Management Review, Vol. 5 No. 1, pp. 49-64.
Dickson, G.T. (2008), “Performance measurement and performance: management of innovative products”, doctoral thesis, University of Bath, Bath.
Dreyfus, S. (2004), “The five-stage model of adult skill acquisition”, Bulletin of Science Technology & Society, Vol. 24 No. 3, pp. 177-181.
Ekvall, G. (1996), “Organizational climate for creativity and innovation”, European Journal of Work and Organizational Psychology, Vol. 5 No. 1, pp. 105-123.
Ellström, P.E. (2011), “Informal learning at work: conditions, processes and logics”, in Malloch, M., Cairns, L., Evans, K. and O’Connor, B. (Eds), The Sage Handbook of Workplace Learning, Sage Publishing, Boston, pp. 105-119.
Ford, M.W. and Evans, J.R. (2006), “The role of follow‐up in achieving results from self‐assessment processes”, International Journal of Quality & Reliability Management, Vol. 23 No. 6, pp. 589-606.
Gerring, J. (2007), Case Study Research: Principles and Practices, Cambridge University Press, New York, NY.
Globerson, S. (1985), “Issues in developing a performance criteria system for an organization”, International Journal of Production Research, Vol. 23 No. 4, pp. 639-646.
Hauser, J. and Katz, G. (1998), “Metrics: you are what you measure!”, European Management Journal, Vol. 16 No. 5, pp. 517-528.
Holm, J.M. (2018), “Successful problem solvers? Managerial performance information use to improve low organizational performance”, Journal of Public Administration Research and Theory, Vol. 28 No. 3, pp. 303-320.
Johansson, P.E. (2017), “Organizing viable development work in operations”, in Backström, T., Fundin, A. and Johansson, P.E. (Eds), Innovative Quality Improvements in Operations: Introducing Emergent Quality Management, Springer International Publishing, Cham, pp. 49-65.
Johansson, P.E. and Osterman, C. (2017), “Conceptions and operational use of value and waste in lean manufacturing – an interpretivist approach”, International Journal of Production Research, Vol. 55 No. 23, pp. 6903-6915.
Karlsson, H. (2015), Innovation Auditing, The Audit & The Auditor, Mälardalen University Press, Västerås.
Kerssens-van Drongelen, I.C. and Bilderbeek, J. (1999), “R&D performance measurement: more than choosing a set of metrics”, R&D Management, Vol. 29 No. 1, pp. 35-46.
Kroll, A. (2015), “Drivers of performance information use: systematic literature review and directions for future research”, Public Performance & Management Review, Vol. 38 No. 3, pp. 459-486.
Langley, A., Smallman, C., Tsoukas, H. and Van De Ven, A.H. (2013), “Process studies of change in organization and management: unveiling temporality, activity, and flow”, Academy of Management Journal, Vol. 56 No. 1, pp. 1-13.
Loch, C.H. and Tapper, S.U.A. (2002), “Implementing a strategy-driven performance measurement system for an applied research group”, Journal of Product Innovation Management, Vol. 19 No. 3, pp. 185-198.
Merriam, S. and Tisdell, E. (2015), Qualitative Research. A Guide to Design and Implementation, John Wiley & Sons, San Francisco, CA.
Moultrie, J.P., Clarkson, J. and Probert, D. (2007), “Development of a design audit tool for SMEs”, Journal of Product Innovation Management, Vol. 24 No. 4, pp. 335-368.
Moynihan, D.P. (2005), “Goal-based learning and the future of performance management”, Public Administration Review, Vol. 65 No. 2, pp. 203-216.
Moynihan, D.P. (2009), “Through a glass, Darkly”, Public Performance & Management Review, Vol. 32 No. 4, pp. 592-603.
Moynihan, D.P. and Pandey, S.K. (2010), “The big question for performance management: why do managers use performance information?”, Journal of Public Administration Research and Theory, Vol. 20 No. 4, pp. 849-866.
Ohlsson, J. and Johansson, P. (2010), “Interactive research as a strategy for practice-based learning: designing competence development and professional growth in local school practice”, in Billett, S. (Ed.), Learning through Practice. Models, Traditions, Orientations and Approaches, Springer International, London and New York, NY, pp. 240-255.
Panizzolo, R., Biazzo, S. and Garengo, P. (2010), “New product development assessment: towards a normative-contingent audit”, Benchmarking: An International Journal, Vol. 17 No. 2, pp. 173-194.
Radnor, Z.J. and Barnes, D. (2007), “Historical analysis of performance measurement and management in operations management”, International Journal of Productivity and Performance Management, Vol. 56 Nos 5/6, pp. 384-396.
Radnor, Z.J. and Noke, H. (2002), “Innovation compass: a self-audit tool for the new product development process”, Creativity and Innovation Management, Vol. 11 No. 2, pp. 122-132.
Radnor, Z.J. and Noke, H. (2006), “Development of an audit tool for product innovation: the innovation compass”, International Journal of Innovation Management, Vol. 10 No. 1, pp. 1-18.
Samuelsson, P. and Nilsson, L.-E. (2002), “Self-assessment practices in large organisations: experiences from using the EFQM excellence model”, International Journal of Quality & Reliability Management, Vol. 19 No. 1, pp. 10-23.
Sandberg, J. (2000), “Understanding human competence at work: an interpretative approach”, Academy of Management Journal, Vol. 43 No. 1, pp. 9-25.
Sandberg, J. and Pinnington, A.H. (2009), “Professional competence as ways of being: an existential ontological perspective”, Journal of Management Studies, Vol. 46 No. 7, pp. 1138-1170.
Sandberg, J. and Targama, A. (2007), Managing Understanding in Organizations, Sage Publications, London.
Saunders, M., Lewis, P. and Thornhill, A. (2012), Research Methods for Business Students, Prentice Hall, Essex.
Svensson, L., Ellström, P.-E. and Brulin, G. (2007), “Introduction – on interactive research”, International Journal of Action Research, Vol. 3 No. 3, pp. 233-249.
Svensson, M. and Klefsjö, B. (2006), “TQM-based self-assessment in the education sector: experiences from a Swedish upper secondary school project”, Quality Assurance in Education, Vol. 14 No. 4, pp. 299-323.
Swedberg, R. (2012), “Theorizing in sociology and social science: turning to the context of discovery”, Theory and Society, Vol. 41 No. 1, pp. 1-40.
Tarí, J.J. (2010), “Self‐assessment processes: the importance of follow‐up for success”, Quality Assurance in Education, Vol. 18 No. 1, pp. 19-33.
Taylor, J. (2014), “Organizational culture and the paradox of performance management”, Public Performance & Management Review, Vol. 38 No. 1, pp. 7-22.
van der Wiele, T. and Brown, A. (1999), “Self-assessment practices in Europe and Australia”, International Journal of Quality & Reliability Management, Vol. 16 No. 3, pp. 238-252.
Nilsson, F., Regnell, B., Larsson, T. and Ritzén, S. (2010), “Measuring for innovation – a guide for innovative teams”, Applied Innovation Management, No. 2, pp. 1-30.