Implementation issues of a design management indicator system: A case study of four product development companies

Paula Görgen Radici Fraga (Universidade Federal do Rio Grande do Sul, Porto Alegre, Brazil)
Maurício Moreira e Silva Bernardes (Universidade Federal do Rio Grande do Sul, Porto Alegre, Brazil)
Darli Rodrigues Vieira (Université du Québec à Trois-Rivières, Trois-Rivières, Canada)
Milena Chang Chain (Université du Québec à Trois-Rivières, Trois-Rivières, Canada)

International Journal of Productivity and Performance Management

ISSN: 1741-0401

Publication date: 11 June 2018

Abstract

Purpose

The purpose of this paper is to present and discuss the process and results achieved from the implementation of a design management indicator system in four product development companies.

Design/methodology/approach

To this end, instruments and techniques for implementing and collecting composite data were adopted.

Findings

The implementation made it possible to test the system metrics, and the analysis of the results enabled the identification of factors that hinder a successful implementation.

Originality/value

Design is being recognized as providing significant economic, social, and environmental benefits, and as it becomes a part of the management process, it can have an impact on business performance. Therefore, information sharing through indicator systems that consider factors that generate reliable and quantifiable information has become fundamental.

Citation

Radici Fraga, P., Bernardes, M., Vieira, D. and Chain, M. (2018), "Implementation issues of a design management indicator system", International Journal of Productivity and Performance Management, Vol. 67 No. 5, pp. 890-915. https://doi.org/10.1108/IJPPM-01-2017-0009

Publisher: Emerald Publishing Limited

Copyright © 2018, Emerald Publishing Limited


Introduction

Many researchers who promote and study design have developed research that shows design’s impact on companies and the economy and its role as a value adder (Chiva and Alegre, 2009; D’Ippolito, 2014; Gemser and Leenders, 2001; Moultrie and Livesey, 2014; Mozota, 2003). According to Mozota (2003), design must be recognized as a creative and managerial process to be integrated into a company, modifying its traditional structure through the updating of management processes. In this manner, design can become part of the management process, providing benefits that impact business performance and help the company guarantee its long-term market position (Best et al., 2010).

A change in management reinforces the collaboration and flexibility needed for organizational consolidation (Sanchez, 2006). One foundational requirement for this change to successfully occur is a well-structured decision-making process (Chiu, 2002). For this process, it is important to formulate and implement a measurement and control system that generates reliable and quantifiable information. This system becomes the key to business performance measurement, and the factors that drive its success are known as performance indicators (Velimirović et al., 2011).

Within this approach of measurement and control, and aiming to broaden the study of design as a tool that can help improve companies’ business performance, a project was developed in Brazil to conceive an indicator system to assist design management. This paper presents and discusses issues related to the process of implementing this system in four medium to large product development companies, with the intention of contributing to the process of implementing indicator systems suitable for design management in such companies.

Performance measurement

Performance measures are an organization’s vital signs, highlighting its strengths and weaknesses and enabling the comparison of results, the identification of effective mechanisms for decision making, and improvement in performance (Davis and Albright, 2004; Delorme and Châtelain, 2011). Therefore, the development of such measures must link strategies, resources, and processes (Hronec, 1993). This link forms a crucial relationship between strategy and day-to-day operations and becomes the basis of the company’s competitive advantage (Vieites and Calvo, 2011). In other words, it allows the quantification of an action’s efficiency and effectiveness (Valmohammadi and Servati, 2011).

As a foundation, the performance measurement process has metrics that characterize and evaluate business performance, providing indications of the extent to which the company is driving itself toward reaching its goals (Taylor and Kristensen, 2013; Velimirović et al., 2011). These metrics are called performance indicators (Eckerson, 2011). They show what is important to the company, comparing goals over time and playing a key role in the business perspective (Melnyk et al., 2014) by focusing on operational, tactical, and strategic aspects (Eckerson, 2011; Parmenter, 2010). This monitoring helps create a culture of performance improvement and promote organizational learning, communication, and strategic alignment, ensuring that the organization’s key players work to achieve the same objectives (Micheli and Mari, 2014; Barbuio, 2007).

Performance indicators uniquely determine what an organization must do to increase its performance (Shabaninejad et al., 2014). Therefore, their selection should be made such that they serve to encourage improvements within the company (Jovanović et al., 2012). The results provided by the indicators should be part of the information system at all levels of the organization (Kaplan and Norton, 1996). They should involve all stakeholders, promoting their greater ownership of the indicators and ensuring everybody’s effective participation (Schirnding, 2002).

Indicator system for design management

The indicator system studied was conceived in the Innovation, Competitiveness, and Design Project (Bernardes et al., 2015) and is based on three master’s theses (Dziobczenski, 2012; Fraga, 2016; Plentz, 2014). Its structure includes 26 indicators divided into five analytical categories (Table I), which aim to evaluate the following issues:

  1. Consumer response: the extent to which the company delivers what consumers expect (Hill and Jones, 1998).

  2. Efficiency: how efficient the company is (Hill and Jones, 1998).

  3. Innovation: the company’s innovative capacity (Hill and Jones, 1998).

  4. Quality: how competitively the company is performing, based on the quality of its products and processes (Hill and Jones, 1998).

  5. Outcome: the company’s financial results (Kaplan and Norton, 1996).

To construct the system (Plentz, 2014), 33 employees from the product development, marketing, engineering, sales, strategic planning, financial, and information technology sectors of five product development companies were invited to select 20 indicators – out of 72 available – and distribute them into five categories. The objective was to select four indicators per category (“obligatory”). However, in some categories, there was no consensus among the employees, so the contested indicators were kept in the system and designated as “optional.”

The system’s score was constructed based on 20 indicators, which can total up to 100 points (Plentz, 2014). The company selects four indicators per category; the result obtained with the formula of each indicator is transformed into a score from 0 to 5 (Table I). Finally, the sum of the results obtained in the five categories generates the Design Management Composite Indicator (DMCI), indicating the company’s degree of design management (Figure 1) (Table II).
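To make the calculation concrete, the following minimal Python sketch (an illustration based on the description above, not the project’s actual tooling) maps raw indicator results to 0-5 scores and sums them into the DMCI; the band boundaries are simplified, and the raw results used are hypothetical:

def score(value, bands):
    """Map a raw indicator result (%) to a 0-5 score using ascending
    (lower_bound, score) bands; boundary conventions are simplified."""
    result = 0
    for lower, band_score in bands:
        if value >= lower:
            result = band_score
    return result

def category_total(indicator_results):
    """Sum the 0-5 scores of a category's four selected indicators (0-20 points)."""
    return sum(score(value, bands) for value, bands in indicator_results)

def dmci(categories):
    """Sum the five category totals into the 0-100 composite indicator."""
    return sum(category_total(results) for results in categories.values())

# Bands from Table II for "Revenue obtained from the sale of new products":
revenue_new_products = [(0, 0), (1, 1), (11, 2), (21, 3), (31, 4), (41, 5)]
assert score(25.0, revenue_new_products) == 3  # a 25% result scores 3 points

# Bands shared by the "Revenue variation", "ROI", and "EBITDA margin" rows:
outcome_bands = [(0, 0), (1, 1), (5, 2), (10, 3), (15, 4), (20, 5)]

# One hypothetical category collection (four raw indicator results, in %):
demo = {"Outcome": [(12.0, outcome_bands), (18.0, outcome_bands),
                    (7.5, outcome_bands), (22.0, outcome_bands)]}
print(dmci(demo))  # 14; a full collection covers all five categories (0-100)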

Implementation of an indicator system

The process of building understanding and commitment through engagement is a key step in developing a set of performance measures; without understanding “why” something is being measured, its use will be flawed (Neely et al., 2002). In this sense, the implementation phase is pivotal: when it is not planned correctly, it demands greater time and effort (Neely et al., 2002), causing focus to be lost and the process to fail.

According to Damschroder et al. (2009, p. 3), “[…] implementation is the constellation of processes intended to get an intervention into use within an organization.” It is a period of transition during which the members of the organization involved in the process become stronger, more consistent, and more committed to the change (Klein and Sorra, 1996). According to Klein and Sorra (1996), implementation is the critical gateway between the organizational decision to adopt and use an intervention in the company’s day-to-day operations.

By its very nature, implementation is a social process of using or integrating interventions, one that is intertwined with the context (the characteristics of the internal and external environment) in which it occurs (Damschroder et al., 2009; Rabin et al., 2008). Implementation is understood to be a complex task, since it occurs through transformation, behavioral change, and the restructuring of organizational contexts (Fixsen et al., 2005). For this reason, when beginning an implementation process, it is necessary to answer a few critical questions about its viability. Some of these questions relate to the progress of the implementation in real time, the status and potential influence of contextual factors, the participants’ response to the process, and the adaptations necessary to achieve change (Stetler et al., 2006). Fixsen et al. (2005, p. vi) argue that the implementation process is more successful when:

  1. carefully selected professionals receive frequent training and performance appraisals; and

  2. organizations provide the infrastructure necessary for appropriate training, supervision, regular processes, and evaluations of outcomes.

Implementation effectiveness is a construct that describes the quality and consistency of an intervention’s use within the organization (Klein and Sorra, 1996). In this sense, different groups and relationships are crucial to the implementation: sometimes, the flow of strategy – going from management to the front line – will be the vital link; at other times, the public’s participation will be the key to success (Pawson et al., 2005). The success of an implementation thus depends on the success of the entire sequence of mechanisms through which it develops.

Some of these mechanisms include the following: the publication of performance measures on bulletin boards or in news bulletins; the existence of a forum in which performance can be discussed; and the opportunity to connect the implementation to an existing initiative (Neely et al., 2002). Bammer (2005) adds to this, stressing that collaboration is also a key point for implementation. Pawson et al. (2005, p. 32) emphasize that:

[…] implementation is not a question of everyone stopping doing A and starting to do B. It is about individuals, teams and organizations taking account of all the complex and inter-related elements of the programme theory that have been exposed by the review and applying these to their particular contexts […].

When designing implementation, it is therefore important to be mindful of how to obtain, construct, and interpret the results to reflect the perceptions of both individuals and the organization (Damschroder et al., 2009). Dealing with this complexity – the uncertainties and changes inherent to the process – involves defining the limits to the approach that will be taken, such as, for example, what and who will be included in the implementation (Bammer, 2005).

Research method

To achieve the study’s objective, instruments and techniques for the analysis and collection of composite data were used. This study adopted action research (Thiollent, 2011) as a strategy, the semi-structured interview (Lodico et al., 2010) for the evaluations, and the content analysis method (Bardin, 2011) for the results. The objects of the study were four medium to large Brazilian product development companies (Table III).

This study was developed in three distinct stages (Figure 2). A detailed description of the methods applied and the results achieved is presented in the following paragraphs.

Implementation of the indicator system for design management

The implementation process occurred in two phases: planning and execution. In the planning phase, the instructional materials were developed, comprising a booklet (presenting the indicators, their collection methods, and their characteristics) and a data collection spreadsheet. The implementation actions were then planned, in sequence, with the companies’ managers. The execution phase began with the training of the companies: in companies A, B, and C, the strategic planning analyst was trained (because the three companies belonged to the same corporate group, this person would be responsible for implementation and data collection across all of them); in company D, the product development supervisor and the marketing manager were trained and then replicated the training internally. The sequence of activities developed thereafter was as follows:

  1. delivery of the instructional material via physical (paper) and digital media for it to be distributed to the sectors involved in the data collection;

  2. presentation of the indicator system with an explanation of the indicators, their peculiarities, and procedures for data collection;

  3. indication of the deadline for delivery of the data collections, with monthly collections between April and September 2015 and digital submission on the 30th day of each month;

  4. selection of the 20 indicators that would structure the system in each company;

  5. delivery of the first data collection;

  6. interviews for evaluating the implementation process;

  7. delivery of the second data collection;

  8. meeting for adjustments in the implementation process (when necessary); and

  9. convening a workshop to identify possible decisions to be made in relation to the products, based on the results of the indicator system for design management.

The problems in the implementation process in companies A, B, and C emerged during activity 4 (selection of 20 indicators), with employee conflicts and a lack of employee understanding about the system’s content and the need to introduce it into the company. To solve these problems, it was stipulated that the three companies’ financial data (13 indicators) would be collected by the strategic planning analyst and the other data (7 indicators) would be each company’s responsibility.

The delivery of the first data collection was delayed by four months, indicating the need for a meeting to discuss implementation problems. The analyst and the corporate manager of strategic planning and management participated in the meeting and explained the difficulties they had encountered in collecting and delivering the data. During the meeting, it was decided to decompose the indicators to identify and generate solutions to the difficulties that were occurring (Table IV).

At the end of the meeting, the first collection, for the month of May, was partially delivered; it consisted only of the data that were the responsibility of the analyst. The company committed to delivering complete data by September 2015.

In an attempt to obtain the seven missing indicators and resolve each company’s doubts about the system and data collections, a meeting with company A’s design coordinator, company B’s manager of innovation and product development, and company C’s marketing analyst was convened. Each company’s difficulties were individually addressed, and the outcomes obtained are summarized in Table V.

By the end of the meeting, the questions had been answered, and commitments to collect and deliver the data had been made. The employees requested that the instructional material be sent, alleging that they had not received it; it was sent digitally. The outcomes from the data collections of companies A, B, and C for the period from April to September 2015 are compiled in Table VI. The lack of complete collections made it impossible to analyze the behavior of the indicators and the DMCI for these companies. It was likewise impossible to hold the workshop to identify possible product-related decisions based on the results of the design management indicator system.

Company D

As in companies A, B, and C, the problems in company D’s implementation process also began during activity 4 (selection of the 20 indicators) because there were doubts about the indicators (Table VII).

Once the questions had been answered and the data collected from April to September 2015 were obtained, the indicator system was calculated. The results obtained by Company D were analyzed by compiling the values obtained in the system’s five categories (Table VIII). The sum of the scores generated the composite indicator for each period analyzed.
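As a check on this composition, the April category scores in Table VIII sum exactly to the composite value reported for that month: 13 + 20 + 15 + 11 + 16 = 75.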

The behavior of the categories was plotted (Figure 3) so that the trends could be observed. The composite indicator was analyzed separately (Figure 4).
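As a complement, the short sketch below (a hypothetical reconstruction assuming Python with matplotlib, not the tooling used in the study) re-plots the category trends of Figure 3 directly from the Table VIII data:

import matplotlib.pyplot as plt

# Company D's monthly category scores, transcribed from Table VIII
months = ["Apr", "May", "Jun", "Jul", "Aug", "Sep"]
categories = {
    "Consumer response": [13, 8, 8, 9, 9, 9],
    "Efficiency": [20, 19, 9, 10, 7, 7],
    "Innovation": [15, 5, 5, 5, 10, 5],
    "Quality": [11, 11, 14, 13, 14, 13],
    "Outcome": [16, 17, 19, 15, 10, 10],
}

for name, scores in categories.items():
    plt.plot(months, scores, marker="o", label=name)

plt.ylim(0, 20)  # each category ranges from 0 to 20 points
plt.ylabel("Category score")
plt.title("General behavior of the categories (cf. Figure 3)")
plt.legend()
plt.show()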

Category 1 (Consumer response) tells the company how well it is meeting customer expectations. The values of the indicators provide information about both product delivery and the relationship between the company and its public, which includes both customers (retailers) and end consumers. The variation in the category, on a scale of 0-20 points, shows that the company had an above-average result only in April. The indicators responsible for this performance were “Variation in the number of views of the site” and “Ratio of new customers.” The same could not be observed in May and June, when a drop in these indicators meant that the category’s scores were the lowest of the period analyzed. From July onward, there was a small increase in the values, influenced by the increase in “Revenue obtained from the sale of new products.”

The indicators from Category 2 (Efficiency) evaluate how efficient the company is, that is, whether it is able to produce more outputs (products and/or services) with fewer inputs (resources such as materials, information, technology, and employees), thus increasing its competitiveness. The variation in this category shows that the company’s efficiency efforts were concentrated in April and May. In August and September, the “Projects that complied with the budget” indicator positively influenced this category, preventing its result from falling even lower.

In Category 3 (Innovation), the values of the indicators provide information that makes it possible to evaluate the company’s innovative capacity, that is, to what extent its innovation process makes it more competitive in the market. This category addresses a very delicate subject – there is a certain resistance from the companies to invest in this aspect – and it showed a peculiarity compared to the previous two categories: the results of its indicators oscillated from one extreme to the other; that is, a score of 0 or 5 was obtained. This oscillation may represent the company’s effort (dedication), or lack thereof, in relation to the themes associated with the indicators (e.g. radical innovations and investments in R&D). The category performed highly in April, driven by the “Internal radical innovations,” “New patented products,” and “Profit obtained with new products” indicators. Average performance was observed in August, with a balance between scores of 5 for the “Investment in R&D” and “Profit obtained with new products” indicators and scores of 0 for the “Internal radical innovations” and “New patented products” indicators. In the other months, the scores were below the mean, supported by the uniformity of values for the “Profit obtained with new products” indicator.

The indicators of Category 4 (Quality) evaluated how competitive the company is based on the quality of its products and processes. This category had average performance in the period analyzed. Three of its four indicators had results with few variations. What led to the decrease in the monthly scores of the category was the “Hours training the production staff” indicator, which had much lower results than the other indicators in this category.

The indicators in Category 5 (Outcome) enable an evaluation of the company’s financial results arising from its operational activities. This category varied the most of all the categories in the system. The scores obtained in the analyzed period were high – above average in almost every month. However, there was a decline from July onward, influenced by the “Revenue variation” indicator; this decline worsened in August and September with the low results of the “EBITDA margin” indicator.

The composite indicator for design management aims to show the company its current degree of design management, making it possible to devise actions to improve the results when necessary. The data shown in Table VIII were plotted in Figure 4 so that the composite indicator’s monthly behavior could be analyzed more effectively and inferences concerning the indicator could be constructed.

The composite indicator of company D had a high score in April, leveraged by the high performance in all categories, particularly in “Efficiency,” “Innovation,” and “Outcome.” From May onward, the indicator’s score began to decrease, primarily influenced by the drop in the results of the “Consumer response” and “Innovation” categories. The decrease in the scores of the “Efficiency” and “Outcome” categories in August and September meant that the values of the indicator were average and below average, respectively. Following these observations, it was concluded that the company has been reducing its degree of design management; therefore, special care is necessary with the “Consumer response,” “Efficiency,” and “Outcome” categories.

Evaluation of the implementation process

The data for evaluating the implementation process of the indicator system in the companies were collected through the semi-structured interview technique (Table AI of this paper). The objective was to collect the participants’ perceptions and evaluations – difficulties, points of improvement, and suggestions – regarding the activities and methods used in the implementation. The strategic planning analyst (companies A, B, and C) and the product development supervisor and marketing manager (company D) were interviewed. To facilitate our analysis, the issues were grouped according to the topics being addressed.

Companies A, B, and C

Regarding the implementation process, the interviewee explained that there were difficulties in the implementation and data collection, particularly in relation to the company’s unmapped processes. One Company A employee involved in the implementation had been dismissed. Regarding the sufficiency or insufficiency of the amount of time for the implementation and data collection, the interviewee said “It was sufficient. Internal issues were responsible for what did not get done.”

In the set of questions about instructional material, the interviewee indicated that the informative material (booklet) helped in the implementation process because the concepts were translated into the same language for everyone involved. However, when questioned about the digital material (table), he noted that “Because our reality is slightly more complex, the collection table becomes irrelevant.”

Regarding data collection, the interviewee stated that most of the indicators informed were extracted from the company’s own system and that it was already customary to collect them. In the question about the sectors’ contribution to data collection, the following was revealed:

We had surprises. Areas that were not even participating directly in the process – Financial Planning, for example – became a great source of help. On the other hand, areas closely linked to the project, for example, the Marketing department, began accepting indicators throughout the process, which later became very complex to measure, and in some cases, they are still not being measured.

The employee answered the last question of this set positively, asserting “I believe that everything has been very well elaborated.”

With regard to the topic about the indicator system, the first question concerned the reason for selecting the indicators, and the answer to this question was as follows:

The main reason for the selection was the relevance to the process. At first, we even thought of choosing those that were easier to measure, but as things progressed, we ended up choosing those that seemed to be most suited to the process.

The same answer was observed for three questions: “if the company would use more indicators”; “if the company considers any obligatory indicators to be irrelevant”; and “if any indicators would be added to the selected set.” The interviewee stated that it was difficult to answer the questions because the system is not solid enough to allow such evaluations. When asked about the differences between the system used by the company and the implemented system, the response was as follows:

Basically, our indicators are linked to the financial perspective of the Balanced Scorecard (BSC), that is, economic-financial indicators. Process indicators are our Achilles heel; however, we are improving them for various areas and processes in the company.

Regarding the aspect of the indicator system’s aggregating information for the company, the interviewee positively stated: “Certainly! It accelerates the creation of the indicators and the culture of indicator management for the product development area.”

Company D

The interview with company D showed that no difficulties were encountered in either the implementation or the data collection. The interviewees stated that the satisfactory progress of the process resulted from the strategy they developed for presenting the indicator system and for the data collection. The supervisor described the procedure used as follows:

[…] what I did, I distributed (the indicators) to the areas responsible for the figures of the controls, so to speak, and the staff promptly returned with the figures requested. […] some had questions, “Ah, but what is this for?”, but they were the ones who had not been participating since the beginning of the project. Yes, there was questioning, but the reason for it was then explained. Then, from there, it was straightforward.

The manager added that the fact that the company already had its indicators greatly facilitated the data collection and that in addition to explaining the reasons for the collection, special care was taken so that the employees would not consider it to be rework. Regarding those involved in the implementation, the same people remained from beginning to end, and the sales, commercial, marketing, safety, HR, production, finance, distribution, cost, and product development sectors participated. Regarding the time invested in the implementation, the interviewees stated that it was sufficient and that “[…] the first time is always slightly more complicated in relation to trying to understand (the indicators) and knowing where to get the information. After the first reading, everyone now knows.”

Concerning the questions about the instructional material, the interviewees emphasized that the informative material (booklet) was very explanatory, simple, and easy to understand. When asked about the digital material (data collection table), both stated that “[…] it was all straightforward, no mysteries.”

On the data collection theme, the supervisor explained that 18 of the 20 indicators were collected based on existing data, and only the “Orders delivered within the period stipulated” and “Productivity” indicators required more work. Regarding the contribution of the sectors involved in the implementation, there was no resistance, but there was bewilderment regarding the need for the data; the sectors wanted to know the purpose for which they would be used: “‘Ah, but where is this from?’; hence, I explained that it is a project in conjunction with the university, and so, after that, they obtained the data.”

Regarding the questions about the indicator system, the motivation for selecting the 20 indicators was the “utilization […] it was not because of the ease of access (or lack thereof) to the information but instead as a way of measuring and having a better notion in our understanding of the indicators.” When asked whether the company would use more indicators, the answer was affirmative. Regarding the irrelevance of any indicator, the manager pointed to “Estimated market share” as being a complicated indicator:

[…] because we do not have a direct competitor, and we operate in several segments. So, the market that we consider is the total market of our products. […] actually, it will not tell us very much because I will have to obtain it by category, it will be very distorted.

Regarding the addition of some indicators to the system, the manager also suggested some related to budgetary control. When questioned about the differences between the system used by the company and the implemented system, the supervisor stressed the issue of the automatic generation of data, without the need for the spreadsheet. Referring to the fact that the indicator system aggregates information for the company, the possibility of comparing the company with others and the construction of the company’s snapshot in relation to the aspects of innovation, competitiveness, and design were emphasized.

Workshop for analysis of the indicator system’s outcomes

After data collection was completed, the values of company D’s indicator system were generated, and the results and instructions for reading them were presented in the “Workshop for identifying possible product-related decisions to be made, based on the results of the design management indicator system.” The workshop dynamic followed the cyclical experiential learning model (Braus and Monroe, 1994), in which participants engage with a topic in an interactive way; have a set amount of time to process and internalize the information received; develop correlational reasoning, extending knowledge to other situations (generalization); and apply what was learned.

During the exercise, the participants made a connection between the results obtained and the company’s day-to-day activities, generating insights about possible decisions that would contribute to improving the indicators. After understanding the purpose of each indicator, it was possible to apply the knowledge acquired through reflection and discussion concerning the behavior of the indicators and the possible reasons that they showed variations (both positive and negative). To finalize the workshop, the participants were presented with the value of the composite indicator obtained by the company and its variation in the collection period. The participants listed some factors as guiding these variations:

  1. the crisis in the country, which hit the company in the middle of May 2015;

  2. the time of new product launches, which would justify the positive peaks found in some indicators;

  3. seasonality, which can either positively or negatively affect production and the amount of labor employed;

  4. the critical months for sending product shipments to other regions of the country, which directly affects both production and the number of returns; and

  5. the sector’s national and international fairs, which influence sales.

After the end of the workshop, the deliberations were analyzed, and summary tables were constructed that listed each of the system’s categories, the indicators selected by the company, and the possible decisions indicated. Many decisions were generated, and to present them all would exceed the scope of this paper. Nevertheless, to illustrate the results of the workshop, the analysis of the “New product designs executed within the period stipulated” indicator from the “Efficiency” category is presented.

By analyzing this indicator, the company observed that to improve its performance, it could analyze the product development process and the sectors involved; create mechanisms for greater control of project deadlines, thus identifying bottlenecks; restructure product development to make it more efficient; ensure better decisions at the beginning of projects; identify and list the reasons for project development delays; identify and list the reasons for delays in the commercial area (regarding the projects); and eliminate small delays in each sector to avoid compromising project launch dates.

Issues related to the implementation process

The development of a measurement system requires design, review and acceptance, implementation, and use (Bourne et al., 2000). From this list of activities, implementation is the point at which most measurement initiatives fail (Neely et al., 2002). It is the phase in which systems and procedures are put into practice to collect and process the data that enable the measurements to be regularly performed (Bourne et al., 2000). It represents a transitional period for all those involved and begins from the change in behavior and the restructuring of the organizational contexts (Fixsen et al., 2005; Klein and Sorra, 1996).

In an implementation process, the guidelines should consider, among other things, information security, the influence of contextual factors, and the response of the participants in the process (SANS Institute, 2003; Stetler et al., 2006). In this regard, analyzing the two implementations – the first in companies A, B, and C and the second in company D – shows that both received the same initial assistance yet had completely different outcomes in the finalization stage. The actions taken in each company determined, respectively, the failure and the success of the implementation.

Workshops are one strategy that can be used for the effective dissemination of information (Tanner and Hale, 2002). The workshop dynamic allowed company D to visualize its daily processes. Moreover, individual analysis of the indicators is important for obtaining ideas for actions that can improve their results, whereas more comprehensive, systemic analyses call attention to global actions, in which work on one indicator triggers a series of developments in others.

Based on the results obtained in the study and in the literature on the topics addressed, it was possible to create a list of issues related to the implementation of the design management indicator system (Table IX). The objective is to create an understanding of these issues and the agents involved. The individual description of each issue does not determine its occurrence in isolation; in other words, one issue can trigger others:

  1. Lack of understanding of the importance of implementing the indicator system: this factor is related to the fact that the organization does not understand the purpose and benefits of the indicator system (Marble, 2003; Mendibil and Macbryde, 2006). The organization’s members do not see improvements that can be linked to the system’s use, and there is a strong inclination toward skepticism (Niven, 2006), i.e., toward understanding the system as a trend that the company has tried out without seriously intending to implement it (Othman et al., 2006), causing them to abandon the implementation process.

    Despite the various requests for data delivery and the researcher’s availability to clear up any doubts, the failure to prioritize the system and the lack of understanding of its importance were obstacles that strongly impacted data collection in companies A, B, and C. There were employee conflicts and a lack of employee understanding of the system’s content and the need to introduce it into the company. Simultaneously, the data collection began to be seen as “data collection for the study” and not as the foundation for developing an indicator system for the company that would enable support for decision making related to the design management processes and whose result would stimulate innovation and increase competitiveness. In company D, whose team recognized the usefulness and objectives of the indicator system, the implementation was more effective.

  2. Lack of commitment and involvement from senior management: this factor refers to the level of involvement and support from managers in the implementation of the indicator system (Mendibil and Macbryde, 2006). No initiative in an organization, regardless of its potential, has any chance of succeeding without the support of senior management (Bourne et al., 2002; Niven, 2005). Senior managers are responsible not only for communicating the initiative but also for clearly presenting its importance to the organization as a whole (Kaplan and Norton, 2001; Mendibil and Macbryde, 2006).

    Although the managers in companies A, B, and C had seen the system from its conception, they were not sufficiently involved in guiding its implementation process. The managers’ participation was limited to interventions only when implementation problems became so severe that the process was undermined. It was observed in company D, however, that when managers are committed and guide the process, it rolls out more effectively, allowing any negative impact from other obstacles to be overcome or at least minimized.

  3. The organization is in an unstable phase: this issue refers to a moment of instability experienced by the company that can influence the implementation of measurement systems (Marble, 2003). This instability may be related to, inter alia, projects such as reorganizations, mergers, or acquisitions; factors such as market uncertainties; and the company’s economic situation (Bourne et al., 2005). In most cases, this unstable environment transfers a great deal of stress to management, distracting managers from the implementation process (Waal and Counet, 2009).

    Because of the restructuring that was occurring in units of companies A, B, and C, nobody involved gave priority to the implementation and collection of data from the system. Furthermore, during the implementation period, there were major instabilities and uncertainties related to the market and the economic situation of both the company and the country. These issues, all of which arose at the same time, caused delays in deliveries, and the requests for data caused fatigue among the employees involved.

  4. There is resistance from members of the organization: resistance to change – or something new – is also a challenge. The implementation of an indicator system involves changes such as an investment of time, a culture of responsibility for the data generated, and a commitment to the information provided, among others (Nair, 2004; Pereira and Melão, 2012). These changes are not always well received by the organization’s members, as the addition of a new and unknown tool raises questions about its function (i.e. whether or not it will work) and shakes up work habits that can sometimes be deeply rooted in the employees, preventing them from innovating and applying what is new (Pereira and Melão, 2012). Therefore, employee commitment can be considered an important facilitator of implementation (Greiling, 2010).

    In companies A, B, and C, the collections were divided between the strategic planning analyst and the employees representing the companies, with the former responsible for the data collection of 13 indicators from each of the three companies and the latter responsible for the collection of 7 indicators from their company of origin. Since the seven indicators referred to data that had not previously been collected at the company, the employees responsible for the collection had to create a different data-entry routine. During the meeting to discuss their doubts, the employees said that they saw the system as “more work to be done”; i.e., they did not feel motivated to try out the new tool, preferring to work with the system they already used. Even after all of our explanations and demonstrations, no priority was placed on the data collection because the employees still considered the system’s implementation to be irrelevant.

  5. The system is not used for day-to-day decision making throughout the organization: the implementation of performance measurement systems requires the creation of conditions that allow them to be incorporated into decision making at different levels of the organization (Lantelme and Formoso, 2000). These conditions are related to the procedures, rules, routines, skills, and attitudes of everybody there. This implies changing how management is conducted within the organization, creating a more transparent and participatory environment (Lantelme and Formoso, 2000; Othman et al., 2006). When the information generated by the system is not used in the organization’s day-to-day management, corrective measures may not be taken in time, resulting in a failure to achieve the organization’s objectives (Nair, 2004; Othman et al., 2006). Organizations that make an indicator system part of everyone’s day-to-day functions – i.e., part of the company’s performance communication culture – are guided toward success (Nair, 2004).

    The difficulties encountered in the implementation also occurred because some managers from companies A, B, and C did not understand that the indicator system should be used for decision making at all levels of the company, not just at the managerial level. Decision making occurs on a daily basis in each employee action, and an indicator system acts as a compass, guiding actions so that they are aligned with the company’s strategic objectives.

  6. There are difficulties in obtaining the data to calculate the indicators: the process of collecting and sorting data can be challenging. For the most part, organizations do not suffer from a lack of data but rather a failure to identify the correct and relevant sources of data that will help improve their performance (Nair, 2004). Generally, the employees who provide the most reliable data about a process are the ones who are directly involved in this process (Lohman et al., 2004). These employees’ cooperation thus becomes fundamental, and communicating the importance and priority of the system’s implementation can improve the data collection and delivery process (Lohman et al., 2004; Nair, 2004).

    This issue is strongly linked to (4), “There is resistance from members of the organization,” as the employees of companies A, B, and C had no sense of responsibility for the data generated and did not feel motivated to use it as a source of information. The employees failed to comply with their data collection in a comprehensive manner, preventing their companies from advancing to later research stages.

  7. Insufficient training: for the implementation of an indicator system to be properly carried out, the people involved must learn about it (Niven, 2006). The essence of any initiative related to the use of an indicator system is to encourage people throughout the organization to implement it and take advantage of the information it generates (Fixsen et al., 2005). If these people do not have an in-depth understanding of the tool, the chances of success are reduced (Mendibil and Macbryde, 2006; Niven, 2006).

    Because of the centralization of data collection in a single employee, there was insufficient training in companies A, B, and C for everyone involved. The employee who received the training did not pass the information on, resulting in a failure to understand the purpose of the system, the data that should be collected, the type of information that could be generated from these data, and how the system could help and/or interfere with the company’s day-to-day operations. Furthermore, the insecurity generated by this lack of knowledge and understanding created disbelief about the initiative.

  8. Lack of a proper implementation team: for an indicator system to be properly implemented and to connect individuals, create new behaviors, and improve communication, a team of people must be involved (Marble, 2003; Niven, 2006). In this sense, many initiatives have failed simply because they were led by ineffective teams or because no team was assigned to the process (Niven, 2006; Othman et al., 2006).

    The choices made by the managers of companies A, B, and C at the beginning of the implementation determined the path along which it developed. They were advised to create a multidisciplinary team so that through the participation of a representative from each sector of the companies, the system could be disseminated and the data collected and delivered. However, the position adopted by the managers was that there would be only one person responsible for collecting and delivering the data. Delegating the responsibility for collections to a single employee resulted in a heavy workload for them, along with communication problems between them and each company’s various sectors. This led to incomplete data delivery.

    At company D, however, all the sectors were involved in the implementation process. Each sector received both the instructional material and an explanation of the reason for the data collection. The contribution of other sectors and the integration of the company culminated in a successful implementation.

  9. The entire organization does not use the system: when an indicator system is implemented arbitrarily, i.e., from top to bottom (from senior management down to employees), the result may be a lack of employee understanding and employee commitment to the initiative (Nair, 2004; Othman et al., 2006). Involving employees in the process of implementing the system can reduce resistance and increase the use of the performance measures in the company’s day-to-day activities (Kaplan and Norton, 2001; Franco and Bourne, 2003; Nair, 2004).

    The implementation of the system in companies A, B, and C did not occur in such a way that it became part of the employees’ day-to-day routine. It occurred in an imposed and unstructured way, causing discomfort and resistance in the provision of data. As a result, there was a lack of understanding of the initiative, little acceptance of the system and low commitment to the goals of the implementation process.

  10. Lack of planning and communication: this factor refers to an issue that makes the implementation of the system more complicated, causing it to take longer and leaving people confused and unsatisfied (Niven, 2005, 2006; Othman et al., 2006). The development of an indicator system requires a precise development plan to guide the team through the process (Hwang et al., 2013; Speckbacher et al., 2003).

    In companies A, B, and C, there was no planning or communication of the initiative to implement the indicator system. During the meeting to discuss their doubts, the employees stated that they had received neither the instructional material nor the collection timeline and that there was no way to resolve doubts about the system because they did not know with whom they should speak. These facts show that even when employees were interested in collaborating, a lack of organization and communication caused the implementation to fail.

Conclusions

This paper sought to present the process and outcomes achieved with the implementation of an indicator system for design management and the issues that arose during this implementation. The results obtained in companies A, B, and C demonstrated that design is not treated strategically there. They also made it possible to identify that design’s usefulness as an articulating, multidisciplinary activity – one that integrates strategic and operational plans according to the company’s vision and mission (Stoner and Freeman, 1994) – remains unknown.

We emphasize that the implementation of an indicator system does not end with data collection. Implementation must become part of a company’s routine (Lantelme and Formoso, 2000; Nair, 2004) because it involves continual reassessment, adjustment, and improvement to fine-tune both the process and the system. However, if companies do not cultivate a sense of the system’s permanence and maintenance, the system will not work properly. Experience has shown that the implementation strategy should involve the entire organization; when an indicator system is available to the whole company, it can promote integration and lead those involved toward the same objective (Kaplan and Norton, 2001; Franco and Bourne, 2003; Nair, 2004; Niven, 2006; Othman et al., 2006).

The results obtained show the strong influence of the human factor on the implementation process. Its effectiveness depends on two capacities: creating new knowledge, i.e., identifying, acquiring, and processing relevant information, and transmitting that knowledge by providing fast and accurate information (Hwang et al., 2013; Speckbacher et al., 2003). Issues related to indicators, metrics, and nomenclatures, for example, are easily resolved through a detailed analysis of the problem. However, because of their importance in the implementation process, issues connected to human factors can become an insurmountable obstacle, compromising the entire process to the point of making it unfeasible (Kaur et al., 2012; Habtoor, 2016).

It is recommended that future studies create comparisons between the results of companies before and after the use of an indicator system; evaluate the role of the system to identify improvements in the processes with a view toward superior performance in the companies’ innovation, competitiveness, and design; generate a set of best practices used by companies that function as a benchmarking tool for problem solving; and develop a computational system that allows the comparison of results between companies in a benchmarking process.

Figures

Figure 1: Calculation of the indicator system

Figure 2: Research stages

Figure 3: General behavior of the categories

Figure 4: Behavior of the “composite indicator for design management”

Table I: Characteristics of the indicator system for design management

Category/indicator | Formula for calculation | Criterion
Consumer response
Estimated market share | (estimated company sales volume / estimated total sales volume of the market) × 100 | Optional
Complaints about new products | (number of new products sold that received complaints / total number of new products sold) × 100 | Optional
Variation in the number of views of the site | (number of views of the site in the current period / number of views of the site in the previous period) × 100 | Optional
Repeat purchase rate | (number of customers who bought more than once / total number of customers in the period) × 100, or (net revenue from customers who purchased more than once in the period / net revenue in the current period) × 100 | Optional
Ratio of new customers | (number of new customers in the period / company’s total number of customers) × 100 | Optional
Revenue obtained from the sale of new products | (net (or gross) revenue from new products / total net (or gross) revenue in the current period) × 100 | Obligatory
Efficiency
New product designs executed within the period stipulated | (number of projects executed within the period stipulated / total number of projects in the period) × 100 | Obligatory
Orders delivered within the period stipulated | (number of orders delivered within the period stipulated / total number of orders sold) × 100 | Obligatory
Waste of materials | (cost of the wasted material / total cost of the raw material) × 100 | Obligatory
Projects that complied with the budget | (number of projects within the budget / total number of projects in the period) × 100 | Optional
Finalized product designs | (number of product projects finalized / total number of product projects in the period) × 100 | Optional
Productivity | (production achieved / production capacity installed) × 100 | Optional
Innovation
Internal radical innovations | (number of radical innovation projects / total number of projects in the period) × 100 | Obligatory
New products patented | (number of invention patents / total number of new products) × 100 | Obligatory
Investment in research and development (R&D) | (investment in R&D / net revenue in the current period) × 100 | Obligatory
Profit obtained with new products | (net (or gross) profit obtained with new products / total net (or gross) profit) × 100 | Obligatory
Quality
Hours of rework | (hours of rework / total hours worked) × 100 | Optional
Rate of returns with return of merchandise | (value of products returned in the period / net revenue in the current period) × 100 | Optional
Variation in the rate of rejection | ((products rejected in the current period / total products produced in the current period) ÷ (products rejected in the previous period / total products produced in the previous period) − 1) × 100 | Optional
Compliance with the checklist | (checklist items complied with / total number of items on the checklist) × 100 | Optional
Accident frequency rate | (number of accidents with work accident communication (CAT) / total man-hours worked) × 100 | Optional
Hours training the production staff | total hours training production workers during the year / total number of workers involved in production | Optional
Outcome
Revenue variation | (net (or gross) revenue in the current period / net (or gross) revenue in the previous period) × 100 | Obligatory
Return on investment (ROI) | (net profit / total investment) × 100 | Obligatory
Earnings before interest, tax, depreciation, and amortization (EBITDA) margin | (EBITDA / net revenue in the current period) × 100 | Obligatory
Revenue per employee | net (or gross) revenue in the current period / number of employees | Obligatory
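To illustrate how the least direct of these formulas is read, consider hypothetical figures for the “Variation in the rate of rejection” indicator: 5 of 200 products rejected in the current period against 4 of 210 in the previous period gives ((5/200) ÷ (4/210) − 1) × 100 = (1.3125 − 1) × 100 ≈ 31.3%; that is, the rejection rate worsened by roughly 31% relative to the previous period, which Table II maps to a score of 0 (⩾10%).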

Table II: Range of scores of the indicator system for design management

Category/indicator | 0 | 1 | 2 | 3 | 4 | 5
Consumer response
Estimated market share | <1% | 1%⩽x<11% | 11%⩽x<21% | 21%⩽x<31% | 31%⩽x<41% | ⩾41%
Complaints about new products | ⩾81% | 61%⩽x<81% | 41%⩽x<61% | 21%⩽x<41% | 1%⩽x<21% | <1%
Variation in the number of views of the site | <1% | 1%⩽x<21% | 21%⩽x<41% | 41%⩽x<61% | 61%⩽x<81% | ⩾81%
Repeat purchase rate | <1% | 1%⩽x<21% | 21%⩽x<41% | 41%⩽x<61% | 61%⩽x<81% | ⩾81%
Ratio of new customers | <1% | 1%⩽x<2% | 2%⩽x<3% | 3%⩽x<4% | 4%⩽x<5% | ⩾5%
Revenue obtained from the sale of new products | <1% | 1%⩽x<11% | 11%⩽x<21% | 21%⩽x<31% | 31%⩽x<41% | ⩾41%
Efficiency
New product designs executed within the period stipulated | <1% | 1%⩽x<21% | 21%⩽x<41% | 41%⩽x<61% | 61%⩽x<81% | ⩾81%
Orders delivered within the period stipulated | <1% | 1%⩽x<21% | 21%⩽x<41% | 41%⩽x<61% | 61%⩽x<81% | ⩾81%
Waste of materials | ⩾81% | 61%⩽x<81% | 41%⩽x<61% | 21%⩽x<41% | 1%⩽x<21% | <1%
Projects that complied with the budget | <1% | 1%⩽x<21% | 21%⩽x<41% | 41%⩽x<61% | 61%⩽x<81% | ⩾81%
Finalized product designs | <1% | 1%⩽x<21% | 21%⩽x<41% | 41%⩽x<61% | 61%⩽x<81% | ⩾81%
Productivity | <1% | 1%⩽x<21% | 21%⩽x<41% | 41%⩽x<61% | 61%⩽x<81% | ⩾81%
Innovation
Internal radical innovations | <1% | 1%⩽x<2% | 2%⩽x<3% | 3%⩽x<4% | 4%⩽x<5% | ⩾5%
New products patented | <1% | 1%⩽x<2% | 2%⩽x<3% | 3%⩽x<4% | 4%⩽x<5% | ⩾5%
Investment in R&D | <0.5% | 1%⩽x<2% | 2%⩽x<3% | 3%⩽x<4% | 4%⩽x<5% | ⩾5%
Profit obtained with new products | <1% | 1%⩽x<2% | 2%⩽x<3% | 3%⩽x<4% | 4%⩽x<5% | ⩾5%
Quality
Hours of rework | ⩾81% | 61%⩽x<81% | 41%⩽x<61% | 21%⩽x<41% | 1%⩽x<21% | <1%
Rate of returns with return of merchandise | ⩾5% | 4%⩽x<5% | 3%⩽x<4% | 2%⩽x<3% | 1%⩽x<2% | <1%
Variation in the rate of rejection | ⩾10% | 10%>x⩾7.5% | 7.5%>x⩾5% | 5%>x⩾2.5% | 2.5%>x⩾1% | <1%
Compliance with the checklist | <1% | 1%⩽x<21% | 21%⩽x<41% | 41%⩽x<61% | 61%⩽x<81% | ⩾81%
Accident frequency rate | ⩾60% | 50%⩽x<60% | 40%⩽x<50% | 20%⩽x<40% | 10%⩽x<20% | <10%
Hours training the production staff | <10 h | 10 h⩽x<21 h | 21 h⩽x<31 h | 31 h⩽x<41 h | 41 h⩽x<50 h | >50 h
Outcome
Revenue variation | <1% | 1% to 5% | 5%<x⩽10% | 10%<x⩽15% | 15%<x⩽20% | >20%
ROI | <1% | 1% to 5% | 5%<x⩽10% | 10%<x⩽15% | 15%<x⩽20% | >20%
EBITDA margin | <1% | 1% to 5% | 5%<x⩽10% | 10%<x⩽15% | 15%<x⩽20% | >20%
Revenue per employee | ⩽10K R$/employee | 10K<x⩽25K | 25K<x⩽50K | 50K<x⩽100K | 100K<x⩽200K | >200K R$/employee

Table III: Characterization of the companies studied

Characteristic | Company A | Company B | Company C | Company D
No. of employees | 644 | 833 | 692 | 482
Branch of activity | Manual tools | Cleaning implements | Housewares | Games and toys
Portfolio of products | Paint brushes, paint rollers, spatulas, adhesive tape, sandpaper, etc. | Brooms, mops, dustpans, brushes, buckets, sponges, cloths, gloves, etc. | Pots, wastebaskets, organizing boxes, plant vases, etc. | Educational toys, games, school supplies, tricycles, etc.

Source: Elaborated by the author, based on the data supplied by the companies

Problems encountered during data collection – companies A, B, and C

Problem Indicators related Solution
Questions about values (the value collected refers to the company's partial share of the market) Estimated market share It was established that the market-share value made available by the company, whether partial or complete, would be used
Difficulty in collecting the data (the values are generated outside the company) Estimated market share The requirement was relaxed to accept the percentage value reported by the external audit
Questions about concepts Revenue obtained from the sale of new products It was established that a new product is one that contains originality in its form, function, or raw material (including improvements)
Internal radical innovations The definition was in the booklet: “every product, process, or service that is new to the company and that leads to performance and cost improvements; that is, radical innovations ‘within’ the company”
Absence of data (the companies do not control them) New product designs executed within the period stipulated The researcher approached the sectors of each company responsible for these controls to verify whether the data existed
Finalized product designs
Investment in R&D The manager committed to creating a cost center for “Investment in R&D”
Hours of rework Data collection was begun
Partial absence of data (only one company controls them) Waste of materials In company C, all of the material, reused or sold, will be considered waste; company A will begin controlling the data
The data exist but were not collected Variation in the rate of rejection The manager ordered the delivery of the data

Source: Elaborated by the author

Outcomes of the individual meetings with the representatives of the companies

Summary of the indicators delivered by companies A, B, and C

Problems encountered in the collection – company D

Problem Related indicators Solution
Questions about the collection period Investment in R&D It was established that the period for analysis would be the most recent months (12, 6, 3, or 1), as specified for each indicator
Revenue obtained from the sale of new products
All of the “Outcome” category
Questions about concepts Compliance with the checklist It was defined that the checklist would refer to the list of steps involved in the development of the product

Source: Elaborated by the author

Result for the categories of the system and the composite indicator

Indicator system for design management
Category April May June July August September
Category 1: Consumer response 13 8 8 9 9 9
Category 2: Efficiency 20 19 9 10 7 7
Category 3: Innovation 15 5 5 5 10 5
Category 4: Quality 11 11 14 13 14 13
Category 5: Outcome 16 17 19 15 10 10
Composite Indicator 75 60 55 52 50 44

Source: Elaborated by the author
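The monthly figures in the table are consistent with the composite indicator being the plain sum of the five category scores (April: 13 + 20 + 15 + 11 + 16 = 75). A minimal sketch under that assumption (names illustrative):

```python
# Minimal sketch, assuming (as the monthly values suggest) that the
# composite indicator is the sum of the five category scores.

APRIL_SCORES = {
    "Consumer response": 13,
    "Efficiency": 20,
    "Innovation": 15,
    "Quality": 11,
    "Outcome": 16,
}


def composite_indicator(category_scores: dict[str, int]) -> int:
    """Sum the category scores into a single composite indicator."""
    return sum(category_scores.values())


print(composite_indicator(APRIL_SCORES))  # 75, matching April in the table
```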

Issues related to implementation

Issue Related literature Evidence
(1) Lack of understanding of the importance of implementing the indicator system Marble (2003), Mendibil and Macbryde (2006), Niven (2006), Othman et al. (2006) Employee conflicts and a lack of employee understanding of the system’s content and the need to introduce it into the company
The indicator system was not understood as a potential support for decision making related to the design management processes
(2) Lack of commitment and involvement from senior management Kaplan and Norton (2001), Bourne et al. (2002), Niven (2005), Mendibil and Macbryde (2006) Managers did not guide the implementation process for the system but intervened only when problems arose
(3) The organization is in an unstable phase Marble (2003), Bourne et al. (2005), Waal and Counet (2009) The company was going through a restructuring in its units
Instability and uncertainties arose in the market and the economic situation of both the company and the country
(4) There is resistance from members of the organization Nair (2004), Greiling (2010), Pereira and Melão (2012) Employees did not see any increase in knowledge with the calculation of the indicators and the use of a new system, considered the data collection to be useless and a waste of time, and did not provide the requested data
Even after all of the interventions, employees did not prioritize the data collection, considering the implementation of the system to be irrelevant
(5) The system is not used for day-to-day decision making throughout the organization Lantelme and Formoso (2000), Nair (2004), Othman et al. (2006) Managers saw the system as an aid to decision making only at the managerial level, not at all levels of the company
(6) There are difficulties in obtaining the data to calculate the indicators Lohman et al. (2004), Nair (2004) Employees had no sense of responsibility for the data generated
Employees were not motivated to use the data as a source of information
Some of the data requested in the system were not controlled by the company
(7) Insufficient training Fixsen et al. (2005), Mendibil and Macbryde (2006), Niven (2006) The lack of sufficient training in and a proper introduction to the system led to a lack of understanding among employees
(8) Lack of a proper implementation team Marble (2003), Niven (2006), Othman et al. (2006) The employee involved in the implementation carried a heavy workload, and communication problems arose between her/him and the different sectors of each company
(9) The entire organization does not use the system Kaplan and Norton (2001), Franco and Bourne (2003), Nair (2004), Othman et al. (2006) The implementation of the system did not become part of the employees' day-to-day routine; instead, it was imposed, causing discomfort and resistance to providing data
(10) Lack of planning and communication Niven (2005), Niven (2006), Othman et al. (2006), Hwang et al. (2013), Speckbacher et al. (2003) Employees had no knowledge of the dates of data delivery, the system’s operation and purpose, or how the information generated could help with their day-to-day work

Source: Elaborated by the author

Interview protocol classified by topic addressed

Topic addressed Protocol question
Implementation process (1) Was there any difficulty with the implementation or data collection of the indicators? What doubts and problems were encountered? How were they resolved?
(4) Were the same people involved in the implementation from the beginning to the end of the process? If not, why was there a change?
(6) With regard to the amount of time invested in the implementation and data collection, do you think that the time dedicated to implementing the system was sufficient or insufficient? Explain.
(15) Reflecting on the implementation and results obtained, do you have any suggestions for improving the system?
Instructional material (2) Did the informative material (booklet) help in the process? How?
(3) Did the digital material (data collection table) help in the process? How? Do you believe it was suitable for your data collection routines and informational needs?
Data collection (5) Were the data that were already available in the company collected through systems or control procedures that were already in use? Which ones?
(7) Did the sectors involved in the implementation and data collection make a positive contribution to the data collection (facilitated collection)? If not, what happened?
(8) With regard to the requested information, categories, and indicators themselves, do you believe that this presentation sequence is logical and leads to an understanding of each element of the system?
About the UFRGS ICD indicator system (9) What is the reason for selecting this set of indicators?
(10) Would the company use more indicators if it were possible to do so? Which ones?
(11) Is there any required indicator that the company considers irrelevant? Why?
(12) Would you add any indicator to this set? Which one?
(13) Was the company already using an indicator system? If so, what differences between them can you observe?
(14) Do you believe that the ICD indicator system will add information to your company? In what way?

References

Bammer, G. (2005), “Integration and implementation sciences: building a new specialization”, Ecology and Society, Vol. 10 No. 6, pp. 95-107.

Barbuio, F. (2007), “Performance measurement: a practical guide to KPIs and benchmarking in public broadcasters”, Commonwealth Broadcasting Association, available at: www.cba.org.uk/wp-content/uploads/2012/04/PerformanceMeasurementAPracticalGuide.pdf (accessed July 12, 2015).

Bardin, L. (2011), Análise de Conteúdo, Edições 70, São Paulo.

Bernardes, M.M.S., Oliveira, G.G. and van der Linden, J.C.S. (2015), “Project: in pursuit of guidelines to increase competitiveness in the Brazilian industry through innovative product design management”, Journal of Modern Project Management, Vol. 2 No. 3, pp. 62-75.

Best, K., Kootstra, G. and Murphy, D. (2010), “Design management and business in Europe: a closer look”, Design Management Review, Vol. 21 No. 2, pp. 26-35.

Bourne, M., Kennerley, M. and Franco-Santos, M. (2005), “Managing through measures: a study of impact on performance”, Journal of Manufacturing Technology Management, Vol. 16 No. 4, pp. 373-395.

Bourne, M., Neely, A., Platts, K. and Mills, J. (2002), “The success and failure of performance measurement initiatives: perceptions of participating managers”, International Journal of Operations & Production Management, Vol. 22 No. 11, pp. 1288-1310.

Bourne, M., Mills, J., Wilcox, M., Neely, A. and Platts, K. (2000), “Designing, implementing and updating performance measurement systems”, International Journal of Operations & Production Management, Vol. 20 No. 7, pp. 754-771.

Braus, J.A. and Monroe, M.C. (1994), “Designing effective workshops”, The Environmental Education Toolbox – Workshop Resource Manual, University of Michigan, Detroit, available at: www.naaee.net/sites/default/files/publications/eetoolbox/DesigningEffectiveWorkshops.pdf (accessed December 14, 2015).

Chiu, M.-L. (2002), “An organizational view of design communication in design collaboration”, Design Studies, Vol. 23 No. 2, pp. 187-210.

Chiva, R. and Alegre, J. (2009), “Investment in design and firm performance: the mediating role of design management”, Journal of Product Innovation Management, Vol. 26 No. 4, pp. 424-440.

D’Ippolito, B. (2014), “The importance of design for firms’ competitiveness: a review of the literature”, Technovation, Vol. 34 No. 11, pp. 716-730.

Damschroder, L.J., Aron, D.C., Keith, R.E., Kirsh, S.R., Alexander, J.A. and Lowery, J.C. (2009), “Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science”, Implementation Science, Vol. 4 No. 50, pp. 1-15.

Davis, S. and Albright, T. (2004), “An investigation of the effect of balanced scorecard implementation on financial performance”, Management Accounting Research, Vol. 15 No. 2, pp. 135-153.

Delorme, P. and Châtelain, O. (2011), “Policy steering: the role and use of performance measurement indicators”, Aid Delivery Methods Programme, available at: www.dochas.ie/Shared/Files/4/Guide_on_Performance_Measurement.pdf (accessed May 5, 2015).

Dziobczenski, P.R.N. (2012), “Diretrizes para a proposição de um sistema de indicadores para a gestão de design de empresas desenvolvedoras de produtos”, Master's dissertation, 137 pp., Programa de Pós-Graduação em Design, Universidade Federal do Rio Grande do Sul, Porto Alegre.

Eckerson, W.W. (2011), Performance Dashboards: Measuring, Monitoring, and Managing Your Business, John Wiley & Sons, Hoboken, NJ.

Fixsen, D.L., Naoom, S.F., Blase, K.A., Friedman, R.M. and Wallace, F. (2005), Implementation Research: A Synthesis of the Literature, National Implementation Research Network, Tampa, FL.

Fraga, P.G.R. (2016), “Validação e implementação de sistema de indicadores de inovação, competitividade e design em empresas desenvolvedoras de produtos”, Master's dissertation, 159 pp., Programa de Pós-Graduação em Design, Universidade Federal do Rio Grande do Sul, Porto Alegre.

Franco, M. and Bourne, M. (2003), “Factors that play a role in ‘managing through measures’”, Management Decision, Vol. 41 No. 8, pp. 698-710.

Gemser, G. and Leenders, M.A.A.M. (2001), “How integrating industrial design in the product development process impacts on company performance”, Journal of Product Innovation Management, Vol. 18 No. 1, pp. 28-38.

Greiling, D. (2010), “Balanced scorecard implementation in German non-profit organizations”, International Journal of Productivity and Performance Management, Vol. 59 No. 6, pp. 534-554.

Habtoor, N. (2016), “Influence of human factors on organisational performance: quality improvement practices as a mediator variable”, International Journal of Productivity and Performance Management, Vol. 65 No. 4, pp. 460-484.

Hill, C.W.L. and Jones, G.R. (1998), Strategic Management: An Integrated Approach, Houghton Mifflin, Boston, MA.

Hronec, S.M. (1993), Vital Signs: Using Quality, Time, and Cost Performance Measurements to Chart Your Company’s Future, Amacon, New York, NY.

Hwang, Y., Kettinger, W.J. and Yi, M.Y. (2013), “A study on the motivational aspects of information management practice”, International Journal of Information Management, Vol. 33 No. 1, pp. 177-184.

Jovanović, J., Krivokapić, Z. and Vujović, A. (2012), “Process establishing of performance management”, Proceedings of the International Conference Center for Quality, Faculty of Engineering, University of Kragujevac, Kragujevac, pp. 305-314.

Kaplan, R.S. and Norton, D.P. (1996), The Balanced Scorecard: Translating Strategy into Action, Harvard Business Press, Boston, MA.

Kaplan, R.S. and Norton, D.P. (2001), The Strategy-Focused Organization: How Balanced Scorecard Companies Thrive in the New Business Environment, Harvard Business Press, Boston, MA.

Kaur, M., Singh, K. and Ahuja, I.S. (2012), “An evaluation of the synergic implementation of TQM and TPM paradigms on business performance”, International Journal of Productivity and Performance Management, Vol. 62 No. 1, pp. 66-84.

Klein, K.J. and Sorra, J.S. (1996), “The challenge of innovation implementation”, The Academy of Management Review, Vol. 21 No. 4, pp. 1055-1080.

Lantelme, E. and Formoso, C.T. (2000), “Improving performance through measurement: the application of lean production and organizational learning principles”, Proceedings of the 8th International Group for Lean Construction Conference, University of Sussex, Brighton, August 17-19.

Lodico, M.G., Spaulding, D.T. and Voegtle, K.H. (2010), Methods in Educational Research: From Theory to Practice, 2nd ed., John Wiley & Sons, San Francisco, CA.

Lohman, C., Fortuin, L. and Wouters, M. (2004), “Designing a performance measurement system: a case study”, European Journal of Operational Research, Vol. 156 No. 2, pp. 267-286.

Marble, R. (2003), “A system implementation study: management commitment to project management”, Information & Management, Vol. 41 No. 1, pp. 111-123.

Melnyk, S.A., Bititci, U., Platts, K., Tobias, J. and Andersen, B. (2014), “Is performance measurement and management fit for the future?”, Management Accounting Research, Vol. 25 No. 2, pp. 173-186.

Mendibil, K. and Macbryde, J. (2006), “Factors that affect the design and implementation of team-based performance measurement systems”, International Journal of Productivity and Performance Management, Vol. 55 No. 2, pp. 118-142.

Micheli, P. and Mari, L. (2014), “The theory and practice of performance measurement”, Management Accounting Research, Vol. 25 No. 2, pp. 147-156.

Moultrie, J. and Livesey, F. (2014), “Measuring design investment in firms: conceptual foundations and exploratory UK survey”, Research Policy, Vol. 43 No. 3, pp. 570-587.

Mozota, B.B. (2003), Design Management: Using Design to Build Brand Value and Corporate Innovation, Allworth Press, New York, NY.

Nair, M. (2004), Essentials of Balanced Scorecard, Wiley, New York, NY.

Neely, A., Bourne, M., Mills, J., Platts, K. and Richards, H. (2002), Strategy and Performance: Getting the Measure of Your Business, Cambridge University Press, Cambridge.

Niven, P. (2005), Balanced Scorecard Diagnostics: Maintaining Maximum Performance, John Wiley & Sons, Hoboken, NJ.

Niven, P. (2006), Balanced Scorecard Step-by-Step: Maximizing Performance and Maintaining Results, John Wiley & Sons, Hoboken, NJ.

Othman, R., Ahmad, A.K.D., Che, Z.S., Abdullah, N. and Hamzah, N. (2006), “A case study of balanced scorecard implementation in a Malaysian company”, Journal of Asia-Pacific Business, Vol. 7 No. 2, pp. 55-72.

Parmenter, D. (2010), Key Performance Indicators: Developing, Implementing, and Using Winning KPIs, John Wiley & Sons, Hoboken, NJ.

Pawson, R., Greenhalgh, T., Harvey, G. and Walshe, K. (2005), “Realist review: a new method of systematic review designed for complex policy interventions”, Journal of Health Services Research & Policy, Vol. 10 No. 1, pp. 21-34.

Pereira, M.M. and Melão, N.F. (2012), “The implementation of the balanced scorecard in a school district”, International Journal of Productivity and Performance Management, Vol. 61 No. 8, pp. 919-939.

Plentz, N.D. (2014), “Proposição de um sistema de indicadores de inovação, competitividade e design voltado para empresas desenvolvedoras de produtos”, Master's dissertation, 175 pp., Programa de Pós-Graduação em Design, Universidade Federal do Rio Grande do Sul, Porto Alegre.

Rabin, B.A., Brownson, R.C., Haire-Joshu, D., Kreuter, M.W. and Weaver, N.L. (2008), “A glossary for dissemination and implementation research in health”, Journal of Public Health Management Practice, Vol. 14 No. 2, pp. 117-123.

Sanchez, R. (2006), “Integrating design into strategic management processes”, Design Management Review, Vol. 17 No. 4, pp. 10-17.

SANS Institute (2003), “A practical methodology for implementing a patch management process”, Boston, available at: www.sans.org/reading-room/whitepapers/bestprac/practical-methodology-implementing-patch-management-process-1206 (accessed March 2, 2016).

Schirnding, Y.V. (2002), Health in Sustainable Development Planning: The Role of Indicators, World Health Organization, Geneva.

Shabaninejad, H., Mirsalehian, M.H. and Mehralian, G. (2014), “Development of an integrated performance measurement (PM) model for pharmaceutical industry”, Iranian Journal of Pharmaceutical Research, Vol. 13 No. S1, pp. 207-215.

Speckbacher, G., Bischof, J. and Pfeiffer, T. (2003), “A descriptive analysis on the implementation of balanced scorecards in German-speaking countries”, Management Accounting Research, Vol. 14 No. 4, pp. 361-388.

Stetler, C.B., Legro, M.W., Wallace, C.M., Bowman, C., Guihan, M., Hagedorn, H., Kimmel, B., Sharp, N.D. and Smith, J.L. (2006), “The role of formative evaluation in implementation research and the QUERI experience”, Journal of General Internal Medicine, Vol. 21 No. S2, pp. S1-S8.

Stoner, J.A.F. and Freeman, R.E. (1994), Management, Prentice-Hall, New Delhi.

Tanner, J. and Hale, C. (2002), “The workshop as an effective method of dissemination: the importance of the needs of the individual”, Journal of Nursing Management, Vol. 10 No. 1, pp. 47-54.

Taylor, T.P. and Kristensen, S.A. (2013), “Performance measurement in global product development”, Proceedings of the 19th International Conference on Engineering Design – ICED, Sungkyunkwan University, Seoul, August 19-22.

Thiollent, M. (2011), Metodologia da Pesquisa-ação, Cortez, São Paulo.

Valmohammadi, C. and Servati, A. (2011), “Performance measurement system implementation using balanced scorecard and statistical methods”, International Journal of Productivity and Performance Management, Vol. 60 No. 5, pp. 493-511.

Velimirović, D., Velimirović, M. and Stanković, R. (2011), “Role and importance of key performance indicators measurement”, Serbian Journal of Management, Vol. 6 No. 1, pp. 63-72.

Vieites, A.G. and Calvo, J.L. (2011), “A study on the factors that influence innovation activities of Spanish big firms”, Scientific Research, Vol. 2 No. 1, pp. 8-19.

Waal, A.A. and Counet, H. (2009), “Lessons learned from performance management systems implementations”, International Journal of Productivity and Performance Management, Vol. 58 No. 4, pp. 367-390.

Acknowledgements

The authors thank the Foundation for Research Support of Rio Grande do Sul (FAPERGS) for the financial support in the development of this study.

Corresponding author

Paula Görgen Radici Fraga is the corresponding author and can be contacted at: paula.radici@ufrgs.br

About the authors

Paula Görgen Radici Fraga is a PhD student at the Federal University of Rio Grande do Sul (UFRGS) (since 2016) and holds a Master's degree in Design (2016), a Specialization in Controlling (2009), and an undergraduate degree in Business Administration (2007) from the same institution. Her research spans design, management, strategy, indicators, and instructional design, among other areas.

Maurício Moreira e Silva Bernardes is an Associate Professor at the Federal University of Rio Grande do Sul (UFRGS) in Brazil and the Vice Director of the product development center at the University’s School of Engineering. He holds a BA in Civil Engineering, an MSc and a PhD in Construction Management, and a post-doctorate degree in Design from the IIT Institute of Design in Chicago. Bernardes’ research areas include project and design management, design methods, and product development. He is also the Coordinator of the ICD project that aims at enhancing the competitiveness of the Brazilian industry through innovative solutions and design management.

Darli Rodrigues Vieira, PhD, is a Full Professor and holds the research chair in Management of Aeronautical Projects at the Management School, Université du Québec à Trois-Rivières (UQTR, Trois-Rivières, Québec, Canada). He is a former Professor at the Universidade Federal do Paraná and at the Instituto Tecnológico de Aeronáutica (ITA). His current research focuses on project management, logistics chain management, strategy and management of operations, and management of MRO (maintenance, repair, and overhaul).

Milena Chang Chain is a DBA candidate and a Researcher at the Chair in Management of Aeronautical Projects at the Université du Québec à Trois-Rivières. She holds a degree in Production Engineering from the Pontifical Catholic University of Paraná (PUC-PR), a degree in Business Administration from the Federal University of Paraná (UFPR), and an MBA in Logistics Management Systems from UFPR.