Abstract
Purpose
This paper aims to investigate the mechanisms for managing coordinated benchmarking projects and the outcomes achieved from such coordination. While there have been many independent benchmarking studies comparing the practices and performance of public sector organisations, there has been little research on initiatives that coordinate multiple benchmarking projects across public sector organisations or that report on the practices implemented and the results achieved. This research will be of interest to centralised authorities wishing to encourage and assist multiple organisations in undertaking benchmarking projects.
Design/methodology/approach
The study adopts a case study methodology. Data were collected on the coordinating mechanisms and the experiences of the individual organisations over a one-year period.
Findings
The findings show successful results (financial and non-financial) across all 13 benchmarking projects, indicating the success of a coordinated approach to managing multiple projects. The study concludes by recommending a six-stage process for coordinating multiple benchmarking projects.
Originality/value
This research gives new insights into the application and benefits of benchmarking because of the open access the research team had to the “Dubai We Learn” initiative. To the authors’ knowledge, the research is unique in being able to report accurately on the outcomes of 13 benchmarking projects, all of which used the TRADE benchmarking methodology.
Citation
Mann, R., Adebanjo, D., Abbas, A., El Kahlout, Z.M., Al Nuseirat, A.A. and Al Neaimi, H.K. (2021), "An analysis of a benchmarking initiative to help government entities to learn from best practices – the “Dubai We Learn” initiative", International Journal of Excellence in Government, Vol. 2 No. 1, pp. 2-23. https://doi.org/10.1108/IJEG-11-2018-0006
Publisher
Emerald Publishing Limited
Copyright © 2020, Robin Mann, Dotun Adebanjo, Ahmed Abbas, Zeyad Mohammad El Kahlout, Ahmad Abdullah Al Nuseirat, and Hazza Khalfan Al Neaimi.
License
Published in International Journal of Excellence in Government. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
1. Introduction
This paper presents the findings of a study into the operation of a coordinated programme of 13 benchmarking projects for public sector organisations. The research adopts a case study approach and studies the benchmarking initiative called “Dubai We Learn” (DWL), which was administered and facilitated by the Dubai Government Excellence Programme (DGEP) and the Centre for Organisational Excellence Research (COER), New Zealand. The DGEP is a programme of the General Secretariat of the Executive Council of Dubai that reports to the Prime Minister of the United Arab Emirates (UAE) and aims to raise the excellence of public sector organisations in Dubai.
To the authors’ knowledge this study is the first published account of this type of benchmarking initiative and provides unique research data as a result of closely monitoring the progress of so many benchmarking projects over a one-year period. All benchmarking projects used the same benchmarking methodology, TRADE benchmarking, which assisted in the coordination and monitoring of the projects. Lessons can be learned from this approach by institutions tasked with capability building and raising performance levels of groups of organisations.
Benchmarking is a versatile approach that has become a necessity for organisations to compete internationally and for the public service to meet the demands of its citizens. However, for all its versatility, there is a paucity of research showing how public sector organisations have undertaken benchmarking projects, whether independently or via a third-party coordinated approach, to identify and implement best practices and report the results. In the analysis of the DWL initiative, this study will address two important questions that shed new light on the application of benchmarking. These are:
How can centralising the coordination of multiple benchmarking projects in public sector organisations be successfully achieved?
What are the key success factors and challenges that underpin the process of co-ordinated benchmarking projects?
In addition, the research will summarise the achievements of the 13 individual projects, which is in itself a significant contribution to the benchmarking field. The paper begins with a literature review in Section 2, covering the importance of benchmarking, its use in the public sector and an overview of the TRADE benchmarking methodology. This is followed by the research aim and objectives in Section 3, the research methodology in Section 4, findings on the DWL process in Section 5 and findings on the DWL outcomes in Section 6. Finally, Section 7 presents a discussion and Section 8 concludes the paper.
2. Literature review
Benchmarking is an established management technique; it has been over 25 years since the publication of the first book on benchmarking by Dr Robert Camp (1989). However, the technique has continued to be popular and beneficial, as shown by a multinational review by Adebanjo et al. (2010) and its rating as the second most used management tool in 2015 in a global study of tools and techniques (Rigby and Bilodeau, 2015). According to Taschner and Taschner (2016), benchmarking has been widely adopted to identify gaps and underpin process improvement, and it has been defined as a structured process to enable improvement in organisational performance by adopting superior practices from organisations that have successfully deployed them (Moffett et al., 2008). A detailed discussion of benchmarking definitions and typology is beyond the scope of this paper and there are already several publications that have discussed them extensively (Chen, 2002; Panwar et al., 2013; Prašnikar et al., 2005).
2.1 Benchmarking and the public sector
For the public sector, the importance of benchmarking in maximising value for money for the public has long been recognised (Raymond, 2008). Its use has grown with the wide availability of benchmark data and best practice information from both local and international perspectives. In particular, the availability of international comparison data has led to pressure on governments to act and improve their international ranking. Examples of international metrics that are avidly monitored by governments and used to encourage benchmarking in the public sector include: the Programme for International Student Assessment study, comparing school systems across 72 countries (OECD, 2016); the National Innovation Index, comparing innovation across 126 countries (Cornell University, INSEAD and WIPO, 2016); the Global Competitiveness Report, comparing competitiveness across 138 countries (Schwab and Sala-i-Martin, 2017); the Ease of Doing Business report, comparing 190 countries (International Bank for Reconstruction and Development, 2016); and Government Effectiveness, comparing government governance and effectiveness across 209 countries (World Bank, 2016).
From an academic perspective, most benchmarking research has compared the practices and performance of organisations within a sector or across sectors rather than focussing on the benchmarking activities undertaken by the organisations themselves. For example, there have been benchmarking studies undertaken in tourism (Cano et al., 2001), water (Singh et al., 2011), health (May and Madritsch, 2009; Mugion and Musella, 2013; van Veen-Berkx et al., 2016), local councils (Robbins et al., 2016) and across the public sector on contract management (Rendon, 2015) and procurement (Holt and Graves, 2001).
2.2 Benchmarking models
While benchmarking data is often provided by third parties such as consultancies, trade associations and academic researchers, it is important that organisations themselves become proficient at undertaking benchmarking projects. The success of benchmarking projects depends on the ability to adopt a robust and suitable approach (Jarrar and Zairi, 2001) that includes not only obtaining benchmarks but also learning and implementing better practices. There are many benchmarking models or methodologies that can be used to guide benchmarking projects. Anand and Kodali (2008) stated that there were more than 60 benchmarking models, including those developed by academics (research-based), consultants (expert-based) or individual organisations (bespoke organisation-based). Table 1 lists examples of different benchmarking models and classifies them by the number of benchmarking steps that they recommend from starting to finishing a benchmarking project. While examining some of these models, Partovi (1994) argued that, although the number of benchmarking steps differs, the core of the models is similar. Further examination of the models indicates that while many of them suggest a number of main stages and associated steps, only two (TRADE and Xerox) provide detailed and published sequential steps that are clearly defined and guide the benchmarking process step-by-step from beginning to end.
With respect to the DWL initiative, the adopted model was the TRADE benchmarking methodology developed by Mann (2015). While TRADE consists of five main stages (Figure 1), each of these stages is split into four to nine steps, enabling project teams to be guided from one step to the next and project progress to be easily tracked.
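To illustrate how such a stage-and-step structure lends itself to centralised progress tracking, the following is a minimal sketch in Python. The five stage names follow Figure 1, but the example steps, step counts and progress calculation are illustrative assumptions, not the published TRADE step list.

```python
from dataclasses import dataclass, field

@dataclass
class Stage:
    """One TRADE stage with its ordered steps (step names here are placeholders)."""
    name: str
    steps: list
    completed: set = field(default_factory=set)  # indices of finished steps

    def progress(self) -> float:
        return len(self.completed) / len(self.steps) if self.steps else 0.0

@dataclass
class BenchmarkingProject:
    entity: str
    stages: list

    def overall_progress(self) -> float:
        # Simple average of stage completion, as a coordinator might report it.
        return sum(s.progress() for s in self.stages) / len(self.stages)

# Hypothetical usage: steps are ticked off as bi-monthly reports arrive.
project = BenchmarkingProject(
    entity="Example Entity",
    stages=[
        Stage("Terms of Reference", ["Define aim", "Form team", "Plan project"]),
        Stage("Review", ["Assess current state", "Select measures"]),
        # ... Acquire, Deploy and Evaluate stages would follow the same pattern
    ],
)
project.stages[0].completed.update({0, 1})
print(f"{project.entity}: {project.overall_progress():.0%} complete")
```

A coordinating body could update such a record from each team’s bi-monthly report to see at a glance which stage every project has reached.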
3. Research aim and objectives
The aim of this research was to define a comprehensive framework for assessing the success of a benchmarking process while also identifying the key success factors associated with each stage of the benchmarking process. The study also aims to understand how the organisations and projects operate within a centralised and coordinated structure, characterised by shared support resources and time constraints. The key objectives, which supported the project aim, were as follows:
Evaluate the success of the benchmarking projects within the context of coordinated support.
Investigate the key support resources required for an effective coordinated programme of benchmarking projects in different public-sector organisations.
The uniqueness of this research was the study of a coordinated approach for multiple benchmarking projects that all used the same benchmarking methodology from concept to completion (including the implementation of best practices). The closest examples to this study are networks of organisations that have been provided with services to assist with best practice sharing, finding benchmarking partners and comparing performance, for example, the New Zealand Benchmarking Club (Mann and Grigg, 2004) and the Dutch Operating Room Benchmarking Collaborative (van Veen-Berkx et al., 2016). However, these networks largely left it to the member organisations to decide how to use these services, with no specific monitoring of individual benchmarking projects that may have been undertaken by the network members.
4. Research methodology
The adopted research methodology was the case study, which has the advantage of providing in-depth analysis (Gerring, 2006). Case studies allow the researcher to collect rich data and, consequently, develop a rich picture based on multiple insights from multiple perspectives (Thomas, 2016). Furthermore, the case study methodology enables the researcher to retain the meaningful characteristics of real-life events such as organisational processes and managerial activity (Yin, 2009).
The DWL initiative was selected as the case for research because of the ease of access to data by the authors and the fact that this was the only known case that met the aims and objectives as set out in this paper. In deciding the most suitable organisations to participate in the co-ordinated initiative, the DGEP publicised its desire to promote benchmarking in public organisations and then invited different government agencies to indicate their interest. A total of 36 projects were tendered for consideration to be part of the DWL programme and 13 of these were selected. The projects were selected based on their potential benefits to the applying organisation, the government as a whole, and the citizens/residents of Dubai Emirate. The commitment of the government organisations, including their mandatory presence at all programme events, was also a consideration.
The selected projects were then monitored by the research team over a one-year period at the end of which the project teams were required to submit a benchmarking report showing how the project was conducted and the results achieved. The research team had direct access to the project teams and project data at all times.
4.1 Data collection
Data collection was based on document analysis and notes taken at meetings with each team and at events where the teams presented their projects. For this study, the following were carried out:
Each benchmarking team submitted bi-monthly reports and a project management spreadsheet, consisting of over 20 worksheets, which they used to manage their benchmarking projects. The worksheets recorded all the benchmarking tools they used, such as fishbone diagrams, SWOT analysis, benchmarking partner selection tables, site visit questions, best practice selection grids and action plans. This information enabled the research team to evaluate the “benchmarking journey” of each team.
Three progress sharing days were held at which each of the benchmarking organisations gave a presentation of their projects. These events were attended by three members of the research team and notes were taken.
Two members of the research team met with each benchmarking team in the days before or after each progress sharing day and before the team’s final presentation at the close of the project. These two-hour meetings enabled a more in-depth understanding of the activities of the benchmarking teams and of the centralised support that they required. Each team was met four times and consequently, a total of 52 meetings were held across all 13 organisations.
At the end of the project, each team submitted a comprehensive benchmarking project report that detailed the purpose of the project; the findings from each of the five stages of the benchmarking methodology; the actions implemented and results achieved; the project benefits, non-financial and financial; the strengths and weaknesses of the project; and finally a review of the positive points and challenges of the centralised co-ordination of the projects.
At the end of the project, each team gave a final presentation and this event was attended by all members of the research team.
4.2 Data analysis
The analysis of data was carried out in several ways. Analysis of the notes taken during the 52 meetings and those from the progress sharing days enabled an understanding of the centralised support activities that were found to be most beneficial and the aspects of centralised co-ordination that were found to be less beneficial. Details from the project reports and notes from the final presentations gave a clear indication, and in many cases quantification, of the benchmarking successes achieved by each of the 13 organisations. In addition, analysis of the bi-monthly reports and project management spreadsheets enabled an understanding of which benchmarking teams progressed quickly and why, while also indicating the on-going challenges faced by teams that did not progress as quickly. All these sources of data were analysed for common themes/statements. An analysis of the effectiveness of the co-ordinated initiative was carried out by comparing the individual outcomes of the 13 projects to identify evidence of success and factors that supported the success achieved. Documentary evidence in the form of the project reports of all 13 projects was used to analyse how the individual benchmarking project teams managed the balance between adhering to a centralised structure, the individuality of their projects and the different organisational structures and cultures in the 13 organisations.
The collection of data from multiple sources of evidence has been identified as an important approach for delivering robust analysis based on the ability to triangulate data and therefore identify important themes based on converging lines of enquiry (Yin, 2009; Patton, 1987).
5. Findings on the Dubai We Learn process
5.1 The “Dubai We Learn” initiative
From the DGEP perspective, benchmarking is considered a very powerful tool for organisational learning and knowledge sharing. Consequently, DGEP launched the initiative with the aims of promoting a benchmarking culture in the public sector, improving government performance, building human resource capability and promoting the image of Dubai.
In preparation for starting the benchmarking initiative, all government entities were requested to tender potential projects and teams for consideration by the DGEP and COER. The benchmarking teams would comprise between four and eight members, with each team member expected to spend between half a day and a full day on the project per week. Each project would have a sponsor who would typically be a senior executive or director and who would take overall responsibility for the project. While the sponsor would not be expected to be a member of the team, they would ensure that the project teams had the necessary time and resources required to complete their projects.
Table 2 presents the organisations that took part in the initiative and their key achievements as a result of their DWL benchmarking project. The one-year projects all commenced in October 2015. The support services provided by COER and DGEP were:
Two three-day training workshops on the TRADE best practice benchmarking methodology.
A full set of training materials in Arabic and English, including a benchmarking manual and a TRADE project management system.
Centralised tracking and analysis of all projects, with each team submitting bi-monthly progress reports, a project management spreadsheet and documents.
Desktop research to identify best practices and potential benchmarking partners was conducted for each benchmarking team to supplement their own search for best practices.
Three progress sharing days were held at which each project team gave a presentation on their progress to-date. This was an opportunity for sharing and learning between teams and an opportunity for the teams to receive expert feedback.
Face-to-face meetings with the project teams were scheduled for the week before or after the progress sharing days and a week before the closing sharing day at which a final presentation was given. This enabled detailed input and analysis before the sharing days and detailed feedback after the sharing days.
Two meetings were held to provide added assistance and learning specifically for the team leaders and benchmarking facilitators of each team.
All teams were required to complete a benchmarking report and deliver a final presentation on their project.
A book, “Achieving Performance Excellence through Benchmarking and Organisational Learning” (Mann et al., 2017), was produced describing in detail how the 13 projects were undertaken and the results achieved.
5.2 Programme monitoring and completion
At each progress sharing day, each benchmarking team gave an 8-minute presentation showing the progress they had made. Other teams and experts from COER were then able to vote on which teams had achieved the most progress since the last progress sharing day. The sharing days were held in November 2015, January 2016 and April 2016.
A closing sharing day was held in October 2016. For the closing sharing day, each benchmarking team was required to submit a detailed benchmarking report (showing how the project was conducted and the results achieved) and give a 12-minute presentation of the final outcomes of their project. Each project was then assessed by judging how well it followed each stage of the methodology. Table 3 shows the criteria and grading scale used to assess the terms of reference (TOR) stage. A similar level of detailed criteria and grading scale was applied to the other stages of the benchmarking process.
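As a rough illustration of how a star-based grading scale like that in Table 3 can be applied, the sketch below maps per-criterion star counts to the grade bands shown there. The criterion names are abbreviated from Table 3 and the scores are hypothetical; in practice, the grading was a qualitative expert judgement rather than an automated calculation.

```python
def grade(stars: int) -> str:
    """Map a star count (1-7) to the grade bands used in Table 3."""
    if stars == 7:
        return "Commendation (role model)"
    if stars >= 5:
        return "Commendation (excellent)"
    if stars >= 3:
        return "Proficient"
    return "Incomplete"

# Hypothetical per-criterion judgements for one team's TOR stage.
criteria_scores = {
    "Clarity of the project": 6,
    "Value/importance of the project": 7,
    "Project plan and management system": 4,
}

for criterion, stars in criteria_scores.items():
    print(f"{criterion}: {'*' * stars} -> {grade(stars)}")
```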
6. Findings on the Dubai We Learn outcomes
6.1 Benchmarking project outcomes
This section presents findings from the project reports, presentations and meetings involving the research team. First of all, the teams reviewed and refined their TOR through undertaking the review stage of the benchmarking process. This involved assessing their current performance and processes in their area of focus. Techniques such as brainstorming, SWOT analysis, fishbone analysis, holding stakeholder focus groups and analysing performance data were used. Once the key areas for improvement were identified or confirmed, the teams progressed to the acquire stage of the benchmarking process. This involved desk-top research to identify benchmark data and potential best practices. This was then followed by face-to-face meetings, video conferencing, written questionnaires, workshops and site visits. For example, General Directorate of Residency and Foreigners Affairs (GDRFA) visited five organisations while Dubai Corporation for Ambulance Services (DCAS), Dubai Electricity and Water Authority (DEWA), Dubai Courts, Dubai Municipality, Dubai Public Prosecution, Dubai Statistics Centre, Knowledge and Human Development Authority (KHDA) and Mohammed Bin Rashid Housing Establishment (MRHE) each visited four organisations. Roads and Transport Authority (RTA) visited three organisations, Dubai Land Department visited two organisations, and Dubai Police visited one organisation twice for in-depth site visits. The organisations visited included both local and international organisations from the private and public sectors, based in countries including Australia, Bahrain, China, Ireland, Singapore, South Korea, the UAE, the UK and the USA. Organisations visited included Emirates, Changi Airports, Cambridge University, Kellogg’s, GE, Zappos, the Supreme Court of Korea, Dubai Customs and DHL.
An analysis of the reports submitted by the teams at the end of the project indicated that these approaches were very successful in identifying proposed improvement actions. The individual reports indicated that each team identified between 30 and 99 potential actions to implement. For example, DEWA identified 73 improvement actions of which 35 were approved for implementation, and Dubai Statistics Centre identified 58 improvement actions of which 14 were approved for implementation. All 13 benchmarking project teams identified suitable improvement actions, which were approved for implementation and deployed partly or fully by the end of the one-year programme.
Deployment of actions was undertaken by the benchmarking teams themselves or the teams worked with relevant partners within their organisations to ensure successful deployment. For example, KHDA transferred the deployment of actions to three different teams across the organisation. Commendably, all 13 benchmarking teams went through the cycle of learning about benchmarking, developing relevant benchmarking skills, identifying areas for improvement, understanding current performance and practices, identifying and visiting benchmarking partners, and identifying and deploying improvement ideas within a 12-month period.
Data from the benchmarking reports and from presentations at the closing sharing day indicated that the teams enjoyed significant success. Table 2 summarises the key achievements of each project. For example, KHDA implemented 21 practices to improve employee happiness from 7.3 to 7.6, placing KHDA among the top 10% happiest organisations (according to the happiness@work survey), while GDRFA piloted a new passenger pre-clearance system (a world first) in Terminal 3 of Dubai Airport that reduced the processing time of documents/passport check-in to 7 seconds, an improvement of almost 80%. Dubai Police developed a new knowledge management plan incorporating 33 projects and resulting in savings to date of US$250,000, while DCAS launched the first advanced paramedic training course in the Gulf region. The new initiatives at Dubai Courts have resulted in 87% user satisfaction and savings of more than US$1,000,000 per year, while MRHE has successfully launched a 24/7 smart service for its residents. In addition, the changes to the procurement process at Dubai Municipality led to, among other improvements, 97% completion of purchase requisitions within 12.2 days (in contrast to previous performance of 74% completion within 15.5 days), a reduction in cancelled purchase requisitions from 848 to 248 and overall savings in excess of US$600,000 per year.
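For readers checking the quoted figures, the relative improvements reported in this section follow from simple percent-change arithmetic, as the sketch below shows. Note that the roughly 35-second baseline for the GDRFA check-in is inferred from the stated improvement of “almost 80%” and is not given in the source.

```python
def percent_reduction(before: float, after: float) -> float:
    """Relative reduction from a 'before' value to an 'after' value, in %."""
    return (before - after) / before * 100

# GDRFA: check-in reduced to 7 seconds, "almost 80%" faster, which implies
# a baseline of roughly 35 seconds (an inference, not a source figure).
print(f"GDRFA check-in: {percent_reduction(35, 7):.0f}% faster")          # 80%

# Dubai Municipality: requisition cycle time fell from 15.5 to 12.2 days.
print(f"Requisition cycle: {percent_reduction(15.5, 12.2):.0f}% shorter") # ~21%
```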
When assessing the success of each project after one year, using the assessment criteria shown in Table 3, it was considered that four teams had conducted role model projects, two teams had conducted excellent projects and the remainder had reached a level of proficiency. The teams rated highest had excelled in terms of the depth and richness of their benchmarking projects and had implemented impactful ideas and best practices.
Projects that were classed as proficient had shown a competent approach and had met or were on track to meet their aims and objectives. However, the quality of their analysis or their approach to, for example, engaging with stakeholders was less than that of the role model projects, or they had not fully completed the deploy or evaluate stage of the project. Table 4 provides a summary of Dubai Municipality’s project, which was assessed as a role model project.
6.2 Success factors for benchmarking
As part of their final report and presentation, each team was asked to identify the success factors for each stage of their benchmarking project. These were then collated and similar comments were combined to make a draft list, which was issued to all the teams for further feedback and then finalised. Sections 6.2.1 to 6.2.5 present the key success factors for each stage of the benchmarking methodology, summarised from the feedback provided by the teams.
In summary, over the course of the one-year initiative, all 13 benchmarking project teams successfully deployed benchmarking tools and techniques, identified improvement activities for their organisations and were considered to have met or exceeded expectations in terms of meeting their initial project aims and objectives.
Noticeably, the most successful teams had applied more of the success factors listed in Sections 6.2.1 to 6.2.5.
6.2.1 Key success factors for the terms of reference – plan the project
Define projects clearly and ensure that they fit within your organisational strategy.
Provide a clear description of the background to the project and put it into the context of the organisation’s overall strategy.
Each project needs a proper project management plan.
When identifying the need for the project, view this as an opportunity to gain the commitment of the various parties.
Have a good understanding of the issues facing the organisation before beginning a project.
Define the usefulness of the project from a long-term perspective.
A clear definition of the scope of a project will ensure that everyone has the same understanding of the purpose of the project.
Provide a detailed breakdown of the objectives of the project with objectives for each stage of TRADE.
Select an appropriate project team that have the right competencies and can spend time on the project.
Continually refine the TOR as the project develops.
Document in detail project risks and continually assess and mitigate these risks throughout the project.
Identify the project’s stakeholders and how they will benefit from the project.
Provide a regular bulletin to inform stakeholders about project progress and use other methods such as focus groups to actively obtain stakeholder opinion and ideas throughout the project.
Weekly or at most monthly progress reviews should be undertaken involving project team members and relevant stakeholders.
6.2.2 Key success factors for review current state
Thoroughly assess the current situation.
Interrogate the information gathered on current processes and systems by using techniques such as rankings, prioritisation matrices and cross-functional tables to determine the key priority areas to focus on.
A common understanding of the current situation by all team members quickly led to identifying appropriate benchmarking partners and finding solutions.
Self-assessments proved to be a powerful assessment tool for identifying problem areas at the start of the project and showing how much the organisation has improved at the end of the project.
Adequate time should be spent on defining relevant performance measures and targets for the project.
The selection of the right performance measures is critical to effectively measuring the success of the project.
6.2.3 Key success factors for acquire best practices
Carry out desktop research as a complement to site visits.
Benchmarking partners should be selected through selection criteria related to the areas for improvement.
Completing a benchmarking partners’ selection scoring table can be a lengthy process but it is very useful for clarifying what is needed in a benchmarking partner.
Involve other staff to conduct benchmarking interviews on the team’s behalf when opportunities arise (for example, during travel for other work purposes).
Undertake benchmarking visits outside the focal industry to gain a wide perspective of the issues involved.
Make sure that at least a few benchmarking partners are from outside the industry.
Record detailed notes on the learning from benchmarking partners.
Use standardised forms for the capture and sharing of information from site visits.
Share the learning from the benchmarking partners with your stakeholders.
Capture all ideas for improvement. Ideas may come from team members and stakeholders as well as from benchmarking partners.
6.2.4 Key success factors for deploy – communicate and implement best practices
Ensure that there is support for implementing changes in a short timeframe; otherwise, the enthusiasm of the team may suffer.
Provide clear descriptions of proposed actions, resources required, time-lines and likely impact.
Have a clear understanding of the needs of the organisation and ensure actions address these issues.
If the benchmarking team is not responsible for implementation, make sure the team has oversight of the implementation.
Communicate with relevant stakeholder groups when implementing the actions so that they understand the changes taking place and can provide feedback and ideas.
6.2.5 Key success factors for evaluate the benchmarking process and outcomes
Analyse project benefits including financial benefits. Financial benefits may include benefits accrued by stakeholders such as citizens.
A thorough evaluation of improvements undertaken with performance measured and showing benefits in line with or surpassing targets is necessary to demonstrate project success and get further support for future projects.
Lessons learned should be collated and applied to new projects.
Generate new ideas for future projects by reflecting on what has been implemented and learnt. Link the new projects to the strategy and operations of the organisation.
Share experience with other government entities to encourage them to do benchmarking.
Recognise the learning, growth and achievements of the benchmarking team.
7. Discussion
The outcomes of the DWL initiative have confirmed the value of having a co-ordinated programme of benchmarking projects in different government organisations. The DWL initiative can be summarised as a six-stage process. The first stage is the centralised selection of the individual participating organisations and their associated projects, while the second stage is the centralised training of all project teams across all organisations in the use of a suitable benchmarking methodology. The third stage is the deployment of the teams to undertake their individual projects with the support of sponsors in their organisations and the provision of appropriate benchmarking tools. The fourth stage is the on-going provision of centralised facilitation and external support to the benchmarking teams, which may include providing assistance in finding benchmarking partners and undertaking benchmarking research on a team’s behalf. The fifth stage is the central monitoring of the projects through a project management system, regular meetings and the provision of sharing events such as progress sharing days, and the sixth stage is the formal closing of the initiative, which may involve formal benchmarking reports being submitted and evaluated, with recognition given to completed projects and to those that were most successful. This six-stage process for co-ordinated benchmarking emerged directly from the DWL project and is a key contribution of this study, as no previous study has reported such an approach for co-ordinated benchmarking projects. For example, the benchmarking projects carried out by the New Zealand Benchmarking Club (Mann and Grigg, 2004) and the UK food industry (Mann et al., 1998) lacked such rigour and structure.
This six-stage process for coordinating benchmarking projects suggests that such an initiative can be successfully deployed, particularly in public sector organisations where centralisation may be easier to manage. While previous studies such as Cano et al. (2001) and Jarrar and Zairi (2001) have extolled the benefits of adopting a robust benchmarking methodology, this study has found that it is also possible to develop a robust process for coordinating multiple benchmarking projects that are being undertaken in different organisations. This approach differs significantly from the association-sponsored benchmarking approach promoted by Alstete (2000), which involves outsourcing the benchmarking process to a third party for a fee; consequently, the benchmarking organisations do not develop benchmarking competencies and do not have control over the process and the methods used. In contrast, the six-stage process defined above enables the benchmarking teams to take ownership of the project and develop their benchmarking capabilities. The ownership and strong involvement of key stakeholders from start to finish was considered crucial by the teams in ensuring that their recommendations were accepted and successfully implemented. The six-stage process can, therefore, provide a framework for central authorities that wish to implement benchmarking initiatives simultaneously across several entities.
7.1 Benefits of centralising and coordinating benchmarking initiatives
Centralising and coordinating benchmarking initiatives such as DWL has the potential to deliver significant benefits. As can be seen from the successes achieved by the different organisations that participated in the DWL initiative, significant process improvements and enhanced operational performance can be achieved simultaneously across several functions, thereby presenting system-wide improvement in government (or other centralised authorities). Similarly, the successes achieved can result in system-wide cultural change, which can otherwise be difficult to achieve. In essence, the use of benchmarking to support organisational improvement is likely to become entrenched. Perhaps more importantly, a second cycle of DWL has now started and many of the organisations that took part in the first cycle have signed up for new projects in different areas of operation. The second cycle was started by DGEP based on the success of the first cycle of projects. Consequently, within a timeline of less than two years, several government organisations that had little knowledge and experience of benchmarking have developed the requisite skills and successfully undertaken benchmarking projects. Such system-wide improvement can be more readily achieved by centralisation than by approaching each individual government organisation separately. The successes reported by the 13 benchmarking projects and the enthusiasm to get involved in new projects contrast with the suggestion of Putkiranta (2012) that benchmarking is losing popularity because of a difficulty in relating benchmarking to operational improvements. They also contrast with the conclusion of Bowerman et al. (2002) that many public sector benchmarking initiatives fail to quantify the benefits of benchmarking.
In addition, the centralised training and mentoring of the benchmarking teams in the use of the adopted benchmarking methodology is time and cost efficient and provides an environment for team members from different government organisations to interact, develop networks, provide mutual support and learn together. This is an important element of developing a culture supportive of benchmarking in large, disparate entities such as government departments. This development of culture change across several government organisations is important because it contrasts with published benchmarking studies, which usually compare performance data or practices within a sector or on a topic (such as Holt and Graves, 2001; Robbins et al., 2016). Such studies are usually undertaken by third parties and do not usually develop the benchmarking capabilities of organisations or help them to apply a full benchmarking process, including change management, implementation of best practices and evaluation of results. The study presented in this paper involves widespread culture change and the adoption of benchmarking in 13 public sector organisations. The snowballing and multiplier effect on government performance of adopting benchmarking capabilities identified in this study would not be possible with the type of benchmarking studies that have been predominant in the literature.
Furthermore, centralisation and co-ordination have the advantage of elevating the profile of benchmarking at high levels in government. Previous studies such as Holloway et al. (1998) had stressed the importance of benchmarking project teams having champions that will provide needed executive sponsorship for the team. The experience of the DGEP suggests that, in addition to each benchmarking team having an executive sponsor in their organisation, centralisation means that there will also be an executive sponsor with significant clout at high levels of central government.
A final benefit of centralising and co-ordinating benchmarking projects across several organisations is the generation of friendly competitiveness among the organisations. Within the context of the DGEP’s DWL initiative, this was achieved through the progress sharing days, where each team presented its progress and all the teams were able to vote for the most progressive team. Such progress sharing not only acts as a spur to other teams but also provides a forum for project teams to discuss any challenges they face and to receive support in finding solutions to such problems. Furthermore, as all the projects were structured to start at the same time, all project teams were at comparable stages throughout the duration of the initiative. This is a significant difference from the benchmarking approaches presented in studies such as Mann and Grigg (2004) and van Veen-Berkx et al. (2016). While these studies also involved multiple organisations and encouraged sharing of experience and practices, there was not the same pressure to achieve progress by a specified time-line. In the case of DWL, as all organisations were using the same methodology, it was easy to compare the progress the teams were making and the quality of their work at each stage of the benchmarking process. Perhaps more importantly, the DWL project gave the individual organisations the flexibility to focus their efforts on diverse projects (e.g. Dubai Land Department focussed on employee happiness while Dubai Police focussed on knowledge management). In contrast, previous studies reporting on multiple organisations (Mann and Grigg, 2004; Mann et al., 1998) have been less flexible and required all organisations to conduct benchmarking on similar issues or against a standard set of criteria such as the European Foundation for Quality Management excellence model. However, the flexibility inherent in the DWL approach also means that the different organisations adopted diverse measures of success relevant to their projects and, consequently, a cross-project statistical analysis of benchmarking-enabled improvements is not possible. Success for DWL projects was based on whether they achieved their stated aims and expected benefits as detailed in their TOR and, as indicated earlier, this was achieved for all projects.
7.2 The role of external facilitation and support
For the 13 government organisations that took part in the DWL project, the external facilitation and support provided by COER and DGEP played a crucial role in the successful delivery of their individual projects as well as of the centralised structure. In addition to providing assistance such as training, tracking progress, providing secondary research support, organising progress sharing days and holding individual meetings with the project teams, external facilitation is important in providing an impartial and “removed” mirror for the centralised process as well as for the individual benchmarking teams. The perceptions of the benchmarking teams on the adopted benchmarking methodology provide further justification for the selection of a robust methodology for benchmarking. Jarrar and Zairi (2001) noted that the success of benchmarking depends on the adoption of a robust benchmarking process, and this study can qualify this assertion further by proposing that such a robust process needs to be facilitated by detailed steps and supporting documents. The comments from the organisations that took part in the DWL project showed that the prescriptive nature and detailed steps of the adopted benchmarking methodology were central to success. This success is evident in the fact that all 13 organisations successfully deployed a benchmarking project at the first attempt.
7.3 Potential challenges of a centralised structure for benchmarking projects
While it has been argued that centralising and co-ordinating benchmarking projects across several organisations has significant advantages, there are also potential challenges that need to be understood and managed, arising from issues such as different project scopes, resource availability, competencies of team members and time-lines for implementation. One potential challenge is the difficulty project teams can have in working at the same pace. While some teams may find it relatively easy to progress their project, other teams may face difficulties and progress more slowly. Therefore, within a centralised structure, where all teams report their progress at the same time, teams that have experienced slow progress may come under undue pressure. A second challenge is the variety associated with the different projects. As can be seen from the DWL initiative, the nature of the projects differs across the different government organisations. While the centralised structure does support variety, it may not necessarily take into account the fact that some deployed improvements have a longer gestation and payback time than others. Therefore, at the closing sharing day, some projects had already been able to measure substantive success while others had only measured preliminary indicators of success, with definitive measures not expected for several months after the close of the formal DWL initiative.
8. Conclusion
This study has been based on the experience of the DGEP in the launch and management of its DWL benchmarking initiative. The study set out to achieve two objectives, which are revisited as follows:
8.1 Evaluate the success of the benchmarking projects within the context of coordinated support
The study assessed each project using criteria similar to those presented in Table 3 and obtained feedback from the teams on success factors, as listed in Sections 6.2.1 to 6.2.5. Based on this assessment approach, all projects were assessed as at least proficient in how they applied the benchmarking methodology, with four considered to be at role model status. The projects assessed highest were found to have applied more of the success factors.
8.2 Investigate the key support resources required for an effective coordinated programme of benchmarking projects in different public-sector organisations
A six-stage approach for providing support and coordinating projects was proposed, comprising: centralised selection of the individual participating organisations and their associated projects; centralised training of all project teams in using the same benchmarking methodology; deployment of the teams to undertake their individual projects with the support of project sponsors and the provision of appropriate benchmarking tools; the on-going provision of centralised facilitation and external support to the benchmarking teams; central monitoring of the projects through a project management system, regular meetings and the provision of sharing events; and finally, the formal closing of the initiative involving presentations, formal benchmarking reports and recognition of the project teams.
The findings from this study have several implications for practice and research. With respect to practice, the study suggests that central authorities such as governments may gain system-wide improvements by facilitating multiple benchmarking projects in several organisations. The study also suggests that enabling such simultaneous deployment of benchmarking projects would be an important way to expedite cultural change and the adoption and acceptance of improvement techniques such as benchmarking. Furthermore, centralisation and co-ordination offer the potential to exploit economies of scale with respect to activities such as training and project management. Finally, central authorities that wish to implement such benchmarking initiatives need to ensure that a robust facilitation and support package is made available to all participating organisations. For research, this study has started a new conversation about the mechanisms for managing multiple benchmarking projects by confirming that such approaches can be deployed on a larger scale and deliver positive results.
Finally, the study limitations and recommendations for future studies are presented. With respect to limitations, the study is based on the case of a government agency where it was relatively easy to recruit different public sector organisations to participate. This may not necessarily be the case in other countries where government structures and cultural inclinations may be different. Secondly, the study is based on public sector organisations and it is unclear to what extent the findings may be applicable to private sector organisations. Thirdly, the study was only for one year, and therefore only short-term benefits of the projects could be accurately assessed. It would be useful to revisit the projects after a few years to check whether all the envisaged benefits have materialised. Future studies could focus on long-term outcomes of such centralised benchmarking initiatives. Future studies could also evaluate the cultural impacts of such initiatives on participating organisations and personnel.
Type and number of steps of different benchmarking methodologies
Model or author’s name | Type | No. of benchmarking steps | Reference |
---|---|---|---|
APQC | Consultant | 4 stages comprising 10 steps | APQC (2009) |
Bendell | Consultant | 12 stages | Bendell, Boulter and Kelly (1993) |
Camp R | Consultant | 5 stages, 10 steps | Camp (1989) |
Codling | Consultant | 4 stages comprising 12 steps | Codling (1992) |
Harrington | Consultant | 5 stages comprising 20 steps | Harrington and Harrington (1996) |
TRADE/Mann | Consultant | 5 stages comprising 34 steps | Mann (2017) |
AT&T | Organisation | 9 and 12 stages (two models) | Spendolini (1992) |
ALCOA | Organisation | 6 steps | Bernowski (1991) |
Baxter | Organisation | 2 stages comprising 15 steps | Lenz et al. (1994) |
IBM | Organisation | 5 stages comprising 14 steps | Behara and Lemmink (1997); Partovi (1994) |
Xerox | Organisation | 4 stages comprising 10 steps and 39 sub-steps | Finnigan (1996) |
Yasin and Zimmerer | Academic | 5 stages comprising 10 steps | Yasin and Zimmerer (1995) |
Longbottom | Academic | 4 stages | Longbottom (2000) |
Carpinetti | Academic | 5 stages | Carpinetti and De Melo (2002) |
Fong et al. | Academic | 5 stages comprising 10 steps | Wah Fong, Cheng and Ho (1998) |
Participating government organisations and key achievements of DWL projects
Government entity | Key achievements of DWL projects within one year time frame |
---|---|
Dubai Corporation for Ambulance Services | Development and launch of an advanced paramedic training course. This will result in an increase in survival rates from 4% to 20% for out-of-hospital cardiac arrests and an increase in revenue from insurance claims of 45 million AED per year |
Dubai Courts | Transformed 39 personal status certification services, processing approximately 29,000 certificates per year, into smart services. This reduced processing time by 58% and saved 77% of the service cost |
Dubai Culture and Arts Authority | Development of a training plan and method of delivery to support all 48 staff assigned to work at the Etihad Museum |
Dubai Electricity and Water Authority | Major transformation in promotion and marketing of Shams Dubai leading to an increase in customer awareness from 55% to 90% and an overall outcome of 1,479% total growth of solar installation projects within 12 months |
Dubai Land Department | Implemented a range of initiatives to improve employee happiness and early results show an increase in employee happiness from 83% to 86% |
Dubai Municipality | Savings in excess of 2,000,000 AED per annum through a faster automated purchasing requisition process, removal of all 20,219 printed purchase requisitions and a reduction in the number of cancelled purchase requisitions from 848 to 248 annually |
Dubai Police | Development and implementation of a knowledge transfer process with 26 knowledge officers appointed and trained and roll out of 33 projects addressing knowledge gaps and producing savings/productivity gains in excess of 900,000 AED |
Dubai Public Prosecution | Identified the factors that are affecting the transfer of judicial knowledge between prosecutors, staff and stakeholders and implemented as follows: an e-library, internal webpages for sharing documents and knowledge, rewards for sharing knowledge, and a knowledge bulletin |
Dubai Statistics Centre | Undertook a comprehensive analysis of innovation maturity and developed and implemented an innovation management strategy and system based on best practices. Achieved certification to the innovation management standard CEN/TS 16555-1 and won the innovation award at the International Business Awards 2016 |
General Directorate of Residency and Foreigners Affairs Dubai | Piloting of a new passenger pre-clearance system in Terminal 3 Dubai Airport that reduced the processing time of documents/passport check-in to 7 seconds, an improvement of almost 80% |
Knowledge and Human Development Authority | 21 practices implemented within one year to improve employee happiness from 7.3 to 7.6 to place KHDA among the top 10% happiest organisations (according to the happiness@work survey) |
Mohamed Bin Rashid Enterprise for Housing | Developed a strategy on how to “reduce the number of customers visiting its service centres by 80%” by 2018 |
Road and Transport Authority (RTA) | RTA’s knowledge management (KM) maturity level increased from 3.7 in 2015 to 3.9 in 2016 and improvements to the expert locator system were made with an increase in subject matter experts from 45 to 65. Improvements included changes to the organisational structure to support KM and introducing a more effective KM strategy |
Criteria and grading system used to assess the terms of reference stage of TRADE
TOR criteria for assessment
Clarity of the project (review clarity of the project aim, scope and objectives)
Value/importance of the project (review whether expected benefits (non-financial and financial) and expected costs were provided; were these benefits specific and measurable, showing performance at the start of the project and expected performance at the end? Were expected benefits greater than expected costs?)
Purpose of the project fits the need (review the relationship between the background and the aim, scope and objectives)
Project plan and management system in place (review the TOR form, task worksheets, communication plan, minutes of meetings, planning documents and risk assessment and monitoring forms)
Selection of team members and a team approach (review whether team members’ job roles are related to the project topic; are all team members contributing to the project with responsibilities and tasks allocated? Have all team members been attending project meetings?)
Training of team members in benchmarking and other skills as required (review the TOR form to see if all team members have attended a TRADE training course or if other benchmarking training was provided; were other training needs for the project identified and training given as appropriate?)
Involvement of key stakeholders (review whether key stakeholders were identified and involved in the TOR stage via meetings or other activities; is there evidence of two-way rather than one-way communication with stakeholders?)
Review and refinement of project (review whether the TOR form, task worksheets and project plan have been reviewed and refined based on stakeholder involvement and growing project knowledge)
Project support from sponsor (review whether the sponsor holds a senior position and has supported the team’s requests and recommendations; review whether there has been regular involvement of the sponsor in meetings or other activities)
Adherence to the benchmarking code of conduct (BCoC) (review whether training on the BCoC has been provided; has a benchmarking agreement form been signed by all team members indicating adherence to the BCoC?)
Grading system | |
Commendation, seven stars ⋆⋆⋆⋆⋆⋆⋆ | Role model approach/deployment of TRADE steps in the TOR stage
Commendation, five to six stars ⋆⋆⋆⋆⋆⋆ | Excellent approach/deployment of TRADE steps in the TOR stage
Proficient, three to four stars ⋆⋆⋆⋆ | Competent approach/deployment of TRADE steps in the TOR stage
Incomplete, one to two stars ⋆⋆ | Deficient approach/deployment of TRADE steps
Summary of Dubai Municipality’s benchmarking project
Terms of reference | |||||
Aim: To identify and implement best practices in purchasing to increase the percentage of purchase requisitions processed within a target of 20 days from 74% to 85% | |||||
Review | |||||
The team conducted an in-depth study of their current procurement system and performance using analysis tools such as workload analysis, value stream analysis, influence-interest matrix, customer segmentation, fishbone diagram, process flowchart analysis and waste analysis. Of particular use in prioritising what to improve was the calculation in cost and time of each purchase stage (receiving purchase requisition, submitting and closing purchase requisition, closing until approval and approval until issuing purchasing order). A number of areas for improvement were identified including the elimination of non-value adding processes (37% were non-value adding), ensuring correctly detailed technical specifications, and automation of these processes | |||||
Acquire | |||||
Methods of learning: desk-top research (minimum of 33 practices reviewed), site visits/face to face meetings, phone calls Number of site visits: four Number of organisations interviewed (by site visit or phone calls): four Names of organisations interviewed (site visit or phone calls) and countries: Dubai statistics (UAE), Dubai Health Authority (UAE), Emirates Global Aluminium (UAE) and Dubai Civil Aviation Authority (UAE) Number of best practices/improvement ideas collected in total: 57 Number of best practices/improvements ideas recommended for implementation: 5 |
Deploy
Number of best practices/improvements approved for implementation: 5
Description of key best practices/improvements approved for implementation: three projects were approved and implemented, namely:
- Eliminating waste in the purchasing process
- Automating and improving how supplier information is obtained and used through the request for further information process
- Introducing separate technical and commercial evaluations for requisitions above one million AED, so that a bid is assessed on a commercial basis only when the technical requirements are met (to improve the efficiency and accuracy of the awarding process)
Two projects were approved for later implementation:
- Applying a service level agreement between the service provider (purchasing) and the service user (business units)
- Contracting with suppliers for long periods (three to five years)
Evaluate
Key achievement: Savings in excess of 2,000,000 AED per annum through a faster, automated purchase requisition process (from 74% of purchase requisitions completed within 15.5 days to 97% completed within 12.2 days), removal of all 20,219 printed purchase requisitions, a reduction in the number of purchase requisitions cancelled from 848 to 284 per annum and a reduction in the number of retenders from 630 to 407 per year. These achievements benefit all stakeholders (internal departments and suppliers) that use the purchasing system.
Non-financial benefits achieved within one year and expected future benefits:
- Improvement from 74% of purchase requisitions completed within 15.5 days to 97% completed within 12.2 days
- Improvement from 45% of purchase requisitions completing the bid evaluation stage within 11 days to 76% completed within 7.7 days
- Reduction in the number of purchase requisitions cancelled from 848 to 284 per annum
- Reduction in the number of purchase requisitions retendered from 630 to 407 per annum
- Reduction in the number of printed purchase requisitions from 20,219 to 0 pieces of paper per year
- Reduction from 309 to 278 minutes for buyers to perform their daily purchasing cycle, thus increasing productivity
Financial benefits achieved within one year and expected future benefits: financial benefits exceeding 2,000,000 AED per annum (a quick arithmetic check of the itemised figures appears after this list). Some of the specific savings were:
- Estimated saving of 1,305,013.90 AED per year as a result of a faster requisition process
- Estimated saving of 714,187 AED per year from 566 fewer purchase requisitions being cancelled
- Estimated saving of 173,676.15 AED per year from automated confirmations of purchase requisitions
- Estimated saving of 73,144 AED per year from 223 fewer retenders
- Estimated saving of 60,095 AED per year from having an automated dashboard for purchase evaluation
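As a consistency check, the itemised estimates above can simply be summed; they total roughly 2.33m AED, which supports the headline claim of savings exceeding 2,000,000 AED per annum. A minimal sketch using the reported figures:

# Itemised annual savings (AED) as reported above.
savings = {
    "faster requisition process": 1_305_013.90,
    "566 fewer cancelled requisitions": 714_187,
    "automated requisition confirmations": 173_676.15,
    "223 fewer retenders": 73_144,
    "automated purchase-evaluation dashboard": 60_095,
}
total = sum(savings.values())
print(f"Total itemised savings: {total:,.2f} AED per annum")  # 2,326,116.05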
Status of project
Date | Terms of reference | Review | Acquire | Deploy | Evaluate
Start: | 6 October 2015 | 27 October 2015 | 17 December 2015 | 24 March 2016 | 1 August 2016 |
Finish: | 29 October 2015 | 10 December 2015 | 23 February 2016 | 31 July 2016 | 26 September 2016 |
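The stage durations implied by these dates are easily computed; the sketch below (dates transcribed from the table) shows that the whole project ran for 356 days, just under one year:

from datetime import date

# Start and finish dates for each TRADE stage, transcribed from the table above.
milestones = {
    "Terms of reference": (date(2015, 10, 6), date(2015, 10, 29)),
    "Review": (date(2015, 10, 27), date(2015, 12, 10)),
    "Acquire": (date(2015, 12, 17), date(2016, 2, 23)),
    "Deploy": (date(2016, 3, 24), date(2016, 7, 31)),
    "Evaluate": (date(2016, 8, 1), date(2016, 9, 26)),
}

for stage, (start, finish) in milestones.items():
    print(f"{stage}: {(finish - start).days} days")

# Whole project: 6 October 2015 to 26 September 2016.
print(f"Total elapsed: {(date(2016, 9, 26) - date(2015, 10, 6)).days} days")  # 356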
References
Adebanjo, D., Abbas, A. and Mann, R. (2010), “An investigation of the adoption and implementation of benchmarking”, International Journal of Operations and Production Management, Vol. 30 No. 11, pp. 1140-1169.
Alstete, J. (2000), “Association‐sponsored benchmarking programs”, Benchmarking: An International Journal, Vol. 7 No. 3, pp. 200-205.
Anand, G. and Kodali, R. (2008), “Benchmarking the benchmarking models”, Benchmarking: An International Journal, Vol. 15 No. 3, pp. 257-291.
Behara, R. and Lemmink, J. (1997), “Benchmarking field services using a zero defects approach”, International Journal of Quality and Reliability Management, Vol. 14 No. 5, pp. 512-526.
Bendell, T., Boulter, L. and Kelly, J. (1993), Benchmarking for Competitive Advantage, Financial Times/Pitman Publishing, London.
Bernowski, K. (1991), “The benchmarking bandwagon”, Quality Progress, Vol. 24 No. 1, pp. 19-24.
Bowerman, M., Francis, G., Ball, A. and Fry, J. (2002), "The evolution of benchmarking in UK local authorities", Benchmarking: An International Journal, Vol. 9 No. 5, pp. 429-449.
Camp, R. (1989), Benchmarking: The Search for Industry Best Practices That Lead to Superior Performance, Quality Press, Milwaukee, WI.
Cano, M., Drummond, S., Miller, C. and Barclay, S. (2001), “Learning from others: benchmarking in diverse tourism enterprises”, Total Quality Management, Vol. 12 Nos 7/8, pp. 974-980.
Carpinetti, L. and De Melo, A. (2002), “What to benchmark? A systematic approach and cases”, Benchmarking: An International Journal, Vol. 9 No. 3, pp. 244-255.
Chen, H.L. (2002), “Benchmarking and quality improvement: a quality benchmarking deployment approach”, International Journal of Quality and Reliability Management, Vol. 19 No. 6, pp. 757-773.
Codling, S. (1992), Best Practice Benchmarking: The Management Guide to Successful Implementation, Gower, London.
Cornell University, INSEAD and WIPO (2016), The Global Innovation Index 2016: Winning with Global Innovation, Cornell University, INSEAD and WIPO, Ithaca, Fontainebleau and Geneva.
Finnigan, J. (1996), The Manager's Guide to Benchmarking, Jossey-Bass Publishers, San Francisco.
Gerring, J. (2006), Case Study Research: Principles and Practices, Cambridge University Press, Cambridge.
Harrington, H. (1996), The Complete Benchmarking Implementation Guide: Total Benchmarking Management, McGraw-Hill, New York, NY.
Harrington, H. and Harrington, J. (1996), High Performance Benchmarking: 20 Steps to Success, McGraw-Hill, New York, NY.
Holloway, J., Francis, G., Hinton, M. and Mayle, D. (1998), “Best practice benchmarking: delivering the goods?”, Total Quality Management, Vol. 9 Nos 4/5, pp. 121-125.
Holt, R. and Graves, A. (2001), “Benchmarking UK government procurement performance in construction projects”, Measuring Business Excellence, Vol. 5 No. 4, pp. 13-21.
International Bank for Reconstruction and Development (2016), Doing Business 2017: Equal Opportunity for All: Comparing Business Regulation for Domestic Firms in 190 Economies, World Bank, Washington, DC.
Jarrar, Y. and Zairi, M. (2001), “Future trends in benchmarking for competitive advantage: a global survey”, Total Quality Management, Vol. 12 Nos 7/8, pp. 906-912.
Lenz, S., Myers, S., Nordlund, S., Sullivan, D. and Vasista, V. (1994), “Benchmarking: finding ways to improve”, The Joint Commission Journal on Quality Improvement, Vol. 20 No. 5, pp. 250-259.
Longbottom, D. (2000), “Benchmarking in the UK: an empirical study of practitioners and academics”, Benchmarking: An International Journal, Vol. 7 No. 2, pp. 98-117.
Mann, R. (2015), “The history of benchmarking and its role in inspiration”, Journal of Inspiration Economy, Vol. 2 No. 2, p. 12.
Mann, R. (2017), "TRADE best practice benchmarking training manual", available at: www.bpir.com
Mann, R. and Grigg, N. (2004), "Helping the kiwi to fly: creating world-class organizations in New Zealand through a benchmarking initiative", Total Quality Management and Business Excellence, Vol. 15 Nos 5/6, pp. 707-718.
Mann, R., Adebanjo, O. and Kehoe, D. (1998), “Best practices in the food and drinks industry”, Benchmarking for Quality Management and Technology, Vol. 5 No. 3, pp. 184-199.
Mann, R., Adebanjo, O., Abbas, A., Al Nuseirat, A., Al Neaimi, H. and El Kahlout, Z. (2017), Achieving Performance Excellence through Benchmarking and Organisational Learning – 13 Case Studies from the 1st Cycle of Dubai We Learn's Excellence Makers Program, Dubai Government Excellence Program, Dubai.
May, D. and Madritsch, T. (2009), “Best practice benchmarking in order to analyze operating costs in the health care sector”, Journal of Facilities Management, Vol. 7 No. 1, pp. 61-73.
Moffett, S., Anderson-Gillespie, K. and McAdam, R. (2008), “Benchmarking and performance measurement: a statistical analysis”, Benchmarking: An International Journal, Vol. 15 No. 4, pp. 368-381.
Mugion, R. and Musella, F. (2013), “Customer satisfaction and statistical techniques for the implementation of benchmarking in the public sector”, Total Quality Management and Business Excellence, Vol. 24 Nos 5/6, pp. 619-640.
OECD (2016), PISA 2015 Results (Volume I): Excellence and Equity in Education, OECD Publishing, Paris.
Panwar, A., Nepal, B., Jain, R. and Prakash Yadav, O. (2013), "Implementation of benchmarking concepts in Indian automobile industry – an empirical study", Benchmarking: An International Journal, Vol. 20 No. 6, pp. 777-804.
Partovi, F. (1994), “Determining what to benchmark: an analytic hierarchy process approach”, International Journal of Operations and Production Management, Vol. 14 No. 6, pp. 25-39.
Patton, M. (1987), How to Use Qualitative Methods in Evaluation, Sage, London.
Prašnikar, J., Debeljak, Ž. and Ahčan, A. (2005), “Benchmarking as a tool of strategic management”, Total Quality Management and Business Excellence, Vol. 16 No. 2, pp. 257-275.
Putkiranta, A. (2012), “Benchmarking: a longitudinal study”, Baltic Journal of Management, Vol. 7 No. 3, pp. 333-348.
Raymond, J. (2008), “Benchmarking in public procurement”, Benchmarking: An International Journal, Vol. 15 No. 6, pp. 782-793.
Rendon, R. (2015), “Benchmarking contract management process maturity: a case study of the US navy”, Benchmarking: An International Journal, Vol. 22 No. 7, pp. 1481-1508.
Rigby, D. and Bilodeau, B. (2015), Management Tools and Trends 2015, Bain and Co, Boston, MA.
Robbins, G., Turley, G. and McNena, S. (2016), “Benchmarking the financial performance of local councils in Ireland”, Administration, Vol. 64 No. 1, pp. 1-27.
Schwab, K. and Sala-I-Martin, X. (2017), The Global Competitiveness Report 2016-2017, World Economic Forum.
Singh, M.R., Mittal, A.K. and Upadhyay, V. (2011), “Benchmarking of North Indian urban water utilities”, Benchmarking: An International Journal, Vol. 18 No. 1, pp. 86-106.
Spendolini, M. (1992), The Benchmarking Book, American Management Association, New York, NY.
Taschner, A. (2016), "Improving SME logistics performance through benchmarking", Benchmarking: An International Journal, Vol. 23 No. 7, pp. 1780-1797.
Thomas, G. (2016), How to Do Your Case Study, SAGE Publications, London.
van Veen-Berkx, E., de Korne, D., Olivier, O., Bal, R., Kazemier, G. and Gunasekaran, A. (2016), “Benchmarking operating room departments in The Netherlands: evaluation of a benchmarking collaborative between eight university medical centres”, Benchmarking: An International Journal, Vol. 23 No. 5, pp. 1171-1192.
Wah Fong, S., Cheng, E. and Ho, D. (1998), “Benchmarking: a general reading for management practitioners”, Management Decision, Vol. 36 No. 6, pp. 407-418.
World Bank (2016), The Worldwide Governance Indicators (WGI) Project, World Bank, Washington, DC.
Yasin, M. and Zimmerer, T. (1995), “The role of benchmarking in achieving continuous service quality”, International Journal of Contemporary Hospitality Management, Vol. 7 No. 4, pp. 27-32.
Yin, R. (2009), Case Study Research: Design and Methods, SAGE Publications, Thousand Oaks, CA.
Further reading
Adebanjo, D. and Mann, R. (2007), “Benchmarking”, BPIR Management Brief, Vol. 4 No. 5.
Burns, R. (2000), Introduction to Research Methods, SAGE Publications, London.
Codling, S. (1998), Benchmarking, Gower, London.
Frankfort-Nachmias, C. and Nachmias, D. (1992), Research Methods in the Social Sciences, Edward Arnold, London.
Hinton, M., Francis, G. and Holloway, J. (2000), "Best practice benchmarking in the UK", Benchmarking: An International Journal, Vol. 7 No. 1, pp. 52-61.
Simpson, M. and Kondouli, D. (2000), “A practical approach to benchmarking in three service industries”, Total Quality Management, Vol. 11 Nos 4/6, pp. 623-630.
Acknowledgements
Dubai We Learn is a Dubai Government Excellence Program (DGEP) initiative. Three of the authors, Dr Zeyad Mohammad El Kahlout, Dr Ahmad Abdullah Al Nuseirat and Dr Hazza Al Neaimi, worked at DGEP and were responsible for coordinating the program.