Abstract
Building on the concept of “impact literacy” established in a previous paper from Bayley and Phipps, here we extend the principles of impact literacy in light of further insights into sector practice. More specifically, we focus on three additions needed in response to the sector-wide growth of impact: (1) differential levels of impact literacy; (2) institutional impact literacy and environment for impact; and (3) issues of ethics and values in research impact. This paper invites the sector to consider the relevance of all dimensions in establishing, maintaining and strengthening impact within the research landscape. We explore implications for individual professional development, institutional capacity building and ethical collaboration to maximise societal benefit.
Keywords
Citation
Bayley, J. and Phipps, D. (2023), "Extending the concept of research impact literacy: levels of literacy, institutional role and ethical considerations", Emerald Open Research, Vol. 1 No. 3. https://doi.org/10.1108/EOR-03-2023-0005
Publisher
Emerald Publishing Limited
Copyright © 2019 Bayley, J. and Phipps, D.
License
This is an open access article distributed under the terms of the Creative Commons Attribution Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Background and previous work
Amidst a range of definitions (see Research Excellence Framework, 2019, p. 68; and the UK Research and Innovation page on Excellence with impact), impact can be most easily shorthanded to the provable effects of research in the real world. In our original paper (Bayley and Phipps, 2017), we outlined the associated concept of research impact literacy, defining it as the ability to “identify appropriate impact goals and indicators, critically appraise and optimise impact pathways, and reflect on the skills needed to tailor approaches across contexts” (page 3). Impact literacy thus reflects the understanding necessary to develop and execute meaningful, appropriate and realistic impact pathways to generate benefit in the ‘real world’. As the impact agenda matures, the sector has become increasingly aware of the complexities of research implementation: analysis of the UK’s 2014 Research Excellence Framework (REF) identified 3,709 unique pathways from research to impact across the 6,679 submitted case studies (King’s College London and Digital Science, 2015), confirming that such complexity precludes formula-driven or taxonomy-based methods of impact planning. We argued that in a field which precludes prescription and is characterised by the need for tailored approaches, it is essential that individuals are able to make informed choices appropriate to their specific context.
Impact literacy was conceptualised as the product of three intersecting elements: (1) the identification, assessment, evidencing and articulation of impact endpoints (“what”); (2) the practices that create impact (“how”); and (3) the successful integration of these by research impact practitioners (“who”). Ultimately an individual could be considered research impact literate if they understand the specific benefits being sought, the activities which would achieve these, and how approaches needed tailoring in specific contexts.
Since the first iteration of the work, there have been a range of sector-wide developments which have affected the research landscape and heralded a more intensive era of international investment in impact. For example, changes to the structure of UK research governance in April 2018 led to a new overarching body – UK Research and Innovation (UKRI) – seeking to more effectively coordinate programmes to “accelerate delivery of benefits and economic impact” (see UKRI themes and programmes). In parallel, and mirroring the experience of the UK, Australia completed its first Engagement and Impact assessment exercise (see Australian Research Council Engagement and Impact Assessment), whilst New Zealand completed another round of the Performance-Based Research Fund (see Performance-Based Research Fund update 2018), including options for describing community engagement and research impact. Further indications that impact is gaining pace internationally include numerous examples of existing practice being extended to include impact. For example, all publicly funded universities and colleges in Ontario, Canada, are required to have a ‘Strategic Mandate Agreement’ (see College and University Strategic Mandate Agreements, 2017-2020), expressing institutional priorities and how these align with government aims and student need. First introduced in 2014 (Phase 1) and refined in 2017 (Phase 2), these agreements will enter a third phase in 2020. Unlike Phase 2, Phase 3 will newly include requirements for indicators and metrics to assess all aspects of university performance, including research excellence, innovation and entrepreneurship as well as community and economic engagement.
Additionally, in response to growing international interest, in 2018 the International Network Of Research Management Societies (INORMS) council established a new international impact working group to review emerging cross-national needs and establish means to support global research management peers (chaired by the authors; see INORMS Research Impact and Stakeholder Engagement Working Group). The growth of impact is also demonstrated in the scale of personnel involved; within the UK particularly, the number of impact-related roles has grown rapidly and substantially; for instance, from its inception in 2014, membership of the Association of Research Managers and Administrators (ARMA) Impact special interest group grew to over 750 by 2019. In parallel there is a growing global discussion of impact, reflected in scholarly and practice-based literature, and of its related concepts of knowledge translation/mobilisation, commercialism, entrepreneurialism and innovation. However, whilst there is comparative transnational evidence that some characteristics of successful knowledge intermediaries (i.e. those who support evidence use) transcend contexts (Phipps et al., 2017), global trends towards non-academic benefit have yet to result in convergence of research impact practice.
Alongside these sector-wide changes, since the inception of the concept we (the authors) have been extensively engaged in training activities in various countries including Canada, the UK, Belgium, Denmark, Sweden, Switzerland, USA, Australia, New Zealand and Iran. With impact featuring so differently across these countries – varying from nascent appetite to formal policy edict – these experiences have highlighted a range of challenges and needs related to the development of literacy and broader capacity building. A natural sequela of these activities has been the development of literacy-based planning tools; a new impact literacy workbook supports researchers and research managers in planning impact at project inception, with specific attention on problem formulation, stakeholder mapping, impact indicators, knowledge mobilisation activities and associated barriers, facilitators and capabilities. However, whilst impact planning tools can help focus questions related to developing an impact strategy, if those tools are not (i) created/facilitated by impact literate staff, (ii) supported by an impact literate institution, and (iii) co-produced with non-academics, then plans will be at best generic. This risks disconnection from the needs of stakeholders, inattention to implementation challenges, and more fundamentally breaks the covenants of respect and mutual benefit between academic and civic beneficiary.
Throughout these experiences, the concept of research impact literacy continues to resonate within the research sector. However, taken together these points reflect a more impact-expectant and impact-expert sector, leading us to critically reflect on the original impact literacy figure and concept, which we first conceptualised late in 2015. Whilst the original paper reflected the basic elements of impact literacy, as the sector has moved on it is now necessary to refresh the concept, its application, and how it is situated within a learned, advancing academic environment. In this paper we outline three primary observations:
Differential levels of impact literacy
Institutional impact literacy and environment for impact
Issues of ethics and values in research impact
We conclude with a revised impact literacy diagram, alongside observations on how impacts extend beyond research-only effects, and how research- and assessment-centric approaches mask the broader impacts of higher education institutions.
Differential levels of impact literacy
Whilst the original conceptualisation was never intended to suggest a binary simplification of literate vs non-literate, more substantial exposure to the international research community has reaffirmed the need to reflect not just the existence of literacy, but the levels through which literacy may progress.
Shifting from a binary sense of impact literate vs. illiterate, literacy itself can range from a basic awareness through to a higher level comprehension. This proposition is reinforced by drawing on the parallels with health literacy. Guzys et al. (2015) identified a number of characteristics of health literacy which align with characteristics of integrated methods of creating research impact. These include the recognition that (health) literacy is complex, multifactorial and context dependent. Achieving (health) literacy requires involving end users in developing (health) literacy frameworks to distribute power between (health) providers and (health) consumers. Extending this parallel further, Chinn (2011) describes three progressive levels of health literacy (p. 61) as basic/functional (basic reading and writing and knowledge), communicative/interactive (skills to derive meaning from various forms of communication and apply information to changing circumstances) and critical literacy (higher level cognitive and social skills to critically analyse information and use this to exert control over situations).
The analogies to impact are clear: the need for fundamental knowledge and understanding, which in turn underpins the skills needed to apply and modify approaches in changing contexts, towards more advanced oversight and directive action. Research impact practitioners may build their level of literacy through the study of evidence derived from peer-reviewed literature, practice-based guidelines, grey literature and/or more tacit, experiential knowledge. This reflects a continuum of knowledge from rigorously proven practice through to personal insight or anecdote. As such, practitioners need to develop not only the knowledge, but the skills to judge the quality of evidence for application in practice. We therefore propose a tiered approach to impact literacy analogous to health literacy, wherein basic, intermediate and advanced literacy levels underpin progressive levels of integration and critique of available evidence (including implicit and explicit knowledge, literature, established good practice and other means to develop substantive insights) about impact and impact practices (see Table 1).
It is essential to note that neither literacy nor critical assessment skills unequivocally match job roles or seniority. Whilst there is a plausible expectation that literacy is higher in those holding more strategic roles, the complexity of impact and detail-orientation in operational roles may provide differential profiles within institutional hierarchies.
Institutional impact literacy and environment for impact
Whilst individual impact literacy is crucial, our ongoing engagement with the community as the impact agenda has magnified sector-wide has continually highlighted the critical and powerful role of the institution. The original article referenced institutional literacy and the broad alignment of who, what and how between individuals and institutions; however, the nature of the relationship between the two was not explored in further detail at that point. Impact is created and captured at the level of the research project, driven by government policy, and delivered within an institutional structure. However, the power and responsibility of the institution as an intermediary has remained significantly unexplored.
Institutions govern strategic investment and operationalisation of expectations related to external agendas (such as REF and research funding competitions), determine how impact is embedded in institutional processes, and employ (or not) personnel to support impact delivery. Without due consideration of the literacy of an institution and associated choices for impact service delivery, there are significant risks of the continuation of poor practice and negative consequences for both staff and social stakeholders. Accordingly, we identify two key components of the literacy narrative: (i) levels of impact literacy as they relate to institutions, and (ii) aspects of organisational practice that demonstrate a healthier approach to impact.
Levels of institutional impact literacy
Just as impact literacy at the individual level expresses the depth of understanding about research implementation, institutional impact literacy reflects the depth of understanding and thus the conditions the organisation creates for generating impact. Drawing on the levels outlined for individual literacy, we therefore summarise the levels of institutional literacy as supportive (basic, creating the conditions), enabling (intermediate, establishing more reflective and active practice), and driving (advanced, actively exploring and innovating institutional management). A summary of levels of institutional literacy is presented in Table 2.
Interdependency between individual and institutional impact literacy
Our engagement with the sector suggests there is an artificial separation of institutional and individual literacies, which overlooks their interdependency and symbiosis. Whilst an individual practitioner can build their own competencies (Bayley et al., 2018), the extent to which they can implement these depends on institutional context and appetite to do so. Any benefits will be capped or even nullified if the institution remains unchanged. Similarly, institutional drives to improve the non-academic benefit of research are ultimately proportionate to the abilities of those involved to deliver on higher level goals. Therefore, individual capacity building for impact needs to be supported by institutional approaches, and institutional capacity building needs to be informed by individual expertise. Thus a greater integration of institutional and individual impact literacy is required, both to reflect this interdependency and to address burgeoning tensions around researcher autonomy, professional staff development and institutional strategic development, all of which are responding to external policies, opportunities and pressures. Accordingly, a secondary tier of institutional thinking is the interdependency of literacy between individual and institution. These connections are summarised below in Table 3, which outlines how levels of individual and institutional literacy connect, the associated opportunities and risks, and potential risk management strategies.
Institutional impact health
Whilst literacy is required for individuals and institutions to make informed choices about impact paths, the organisational environment plays a key role in enabling literacy to drive action. We have observed a range of both positive and negative impact strategies, practices, processes and delivery structures, and have incrementally drawn these into a broader concept of institutional impact health. The impact agenda provides both opportunities and challenges for institutions, in terms of aligning efforts and resources, building capacity and capability, and enabling coproduction (Phipps et al., 2016). This suite of consistent observations led us to focus on the way impact is configured within universities, to support institutional awareness and decision making about how efforts are aligned, resourced, valued and connected. In turn, this led us to propose five particular areas of institutional management which contribute to healthier approaches to impact. High levels of each of the ‘Five Cs’ of institutional impact health depict a more positive and embedded institutional approach:
Five C’s of Institutional Impact Health
Commitment: The extent to which the organisation is committed to impact through strategy, systems, staff development and integrating impact into research and education processes.
Connectivity: The extent to which the organisational units work together, how they connect to an overall strategy, and how cohesive these connections are.
Coproduction: The extent, and quality, of engagement with non-academics to generate impactful research and meaningful effects.
Competencies: The impact-related skills and expertise within the institution, the development of those skills across individuals and teams, and the value placed on impact-related specialisms.
Clarity: How clearly staff within the institution understand impact, how impact extends beyond traditional expectations of academic research, and their role in delivering impact.
Examples of healthy and unhealthy institutional profiles for each ‘C’ are given below (Table 4).
From these principles we developed an ‘institutional health check’ tool (see Institutional Impact Health Workbook), targeted at those driving or managing impact within an institution (e.g. research impact practitioners, pro vice-chancellors/presidents for research), and enabling a process of self-assessment across these five areas.
Ethics of research impact
As the impact agenda becomes more deeply integrated into the academic landscape, it has unearthed a range of issues relating to ethics, biases and values. Impact in the international sector can be broadly divided into two underlying approaches: assessment versus mission. Jurisdictions such as the UK, the Netherlands and Australia have system-wide instrumentalised research impact assessment mechanisms (‘assessment driven’). For example, in the UK approximately £2 billion of government investment is allocated through the Research Excellence Framework. In the last cycle (REF, 2014), impact accounted for 20% of the allocation, which has been increased to 25% for the current (2021) round. Impact assessment is an increasingly mechanised means to support goals that include accountability, allocation, advocacy and analysis (Adam et al., 2018), and so it is imperative that those within the research sector critically understand how research can lead to change. Conversely, in countries where government assessment is not a dominant feature, impact is instead driven by institutional, researcher or funder goals (‘mission driven’), with Research Impact Canada and the US-based National Alliance for Broader Impacts exemplars of networks working within this mission brief.
However, this does not suggest that individuals within these countries simply align with these agendas; indeed many in the UK are personally mission driven, leading to tension where assessment dominates opportunities. Nor does it suggest one type is inherently more ethical or ‘better’ than another. Instead, it elucidates deeper complexities and considerations for impact literacy, particularly raising questions about the nature of decision making and positions of power within discourse. In both mission driven and assessment driven environments, pervasive expectations of academics – for example income from prestigious funders, high scoring impact case studies and high ranking journals – continue to anchor impact on academic need, even when goals are reflective of stakeholder issues. Whilst impact plans are a primary opportunity to consider stakeholder benefits, and case studies a prime opportunity to express these, the need to demonstrate attribution, causation and scale as connected directly to excellent research can turn altruism to instrumentalism and collaboration to territorialism. It is therefore expressly important that explicit and implicit biases engrained within the system are identified so that fair strategies can be developed.
Ethical considerations for research impact go beyond adhering to the compliance regulations of research ethics boards. Ethics includes considerations of the relationships between researchers and non-academic stakeholders and the values that underpin these endeavours. The literature has highlighted the challenges associated with power dynamics between researchers and non-academic partners. Power differentials have been considered by Sandra Nutley (Nutley et al., 2007) and are well documented in both community-based research (Muhammad et al., 2015) and reflections on cross-national methodology (Schulz et al., 2018). Values are an important element of knowledge mobilisation and evidence use, as articulated by Roger Pielke (Pielke, 2007, chapter 4), and alongside power and values, the types of impacts arising are important but often overlooked. The dominant discourse in the REF is that all research has positive impacts; however, this is not the case. Gemma Derrick and colleagues coined the term “grimpacts” to describe negative impacts arising from research (Derrick et al., 2018). Finally, the method of creating impact can itself have adverse effects. While more stakeholder engagement often remains unquestioned as a “best practice”, some research has investigated the challenges and costs of co-produced research, including personal and professional costs to the researcher and to stakeholders, as well as potential erosion of trust in the scientific enterprise (Oliver et al., 2019). Ultimately, whilst an impact plan or a case study can tell a persuasive story, efforts to drive impact should not be aimed at satisfying the reviewer or obtaining the grant funding, but at creating the conditions where socioeconomic impacts can manifest.
Revised impact literacy diagram
As a result of practice and application of the original concept, and the three key areas of reflection, we have revised the original impact literacy diagram, presented below (Figure 1). It maintains the “who”, “what” and “how” of impact, but is now extended to incorporate:
(i) Three levels of literacy along both individual and institutional poles.
(ii) Institutional literacy, focused on building organisational capacity/literacy (in parallel to the development of personal literacy/capacity).
(iii) A baseline understanding of ‘why’ impact is being pursued (mission vs. assessment driven), reflecting fundamental inceptive questions around ethics, values and power relationships and the overall purpose of the impact.
Conclusion
In 2017 we formally presented impact literacy as a concept, with the hope of anchoring attitudes to research implementation and management in a sector rapidly focusing on non-academic benefit. Whilst the concept has resonated, the increased sector-wide attention to impact has highlighted the need to extend the model to account for, address or prevent a range of consequences. Experiences since the original model have more keenly elucidated the nature and developmental pace of literacy, and the differentiation of levels of literacy across territories, institutions and individuals. In its refreshed state, the model still aims to anchor attitudes and ethics to healthy impact practice, and is now accompanied by a range of tools to do so.
As the impact agenda matures, grows internationally and deepens through scholarship and practice, we urge two final notes of caution. Firstly, whilst we have identified a range of issues relating to ethics, questions about appropriateness, value judgements, biases and power require far more intense examination. Academics and research managers alike must deliberately and transparently consider from the outset what and whose values are driving decision making. By rooting impact plans in the ethics of mutuality and respect by all stakeholders, biases can be more effectively surfaced and academic and non-academic voices equalised. However, without appropriate systemic and/or institutional policies, negotiated and inclusive decision making will be limited. Secondly, formal and financially-incentivised drivers to assess the impact of research – at the expense of impacts from other areas of academic practice – risk reducing parity between teaching, research and related scholarly activities. Agendas such as REF (in the UK) demand not only a causal link between research and real world effect, but effectively penalise cases which do not stem from research. Whilst this is an understandable mechanism for governing the review process, with no equivalently weighted expectations for impact arising from non-research activities, the latter effects are effectively screened out of sector-wide discourse. Incoming agendas such as the Knowledge Exchange Framework (in the UK) support broader exploration of the civic responsibilities of a university, but there remains an obscured understanding of the pathways connecting multiple areas of university activity (including research, teaching/learning, community engagement, student life, internationalisation) to social benefit. The caution here, particularly in an era of ranking deification, is that universities will disconnect rather than solidify connections between different areas of the organisation to deliver on discrete agendas.
The resulting institutional discordance both minimises impact potential and dilutes capacity for the totality of universities’ missions. Thus whilst in this paper we advance the concept of research impact literacy, this is not suggestive that research is the only path towards social benefit, nor that it should be pedestalled comparatively to other academic endeavours. Research is an important, but component, part of the academy, and social benefit is best served by intra- and inter-university activity, working in collaboration rather than competitive silos.
Data availability
Underlying data
No data are associated with this article.
Publisher’s note
This article was originally published on the Emerald Open Research platform hosted by F1000, under the “Quality Education for All” gateway.
The original DOI of the article was 10.35241/emeraldopenres.13140.2
This is Version 2 of the article. Version 1 is available as supplementary material.
Author roles
Bayley J: Conceptualization, Visualization, Writing - Original Draft Preparation, Writing - Review & Editing; Phipps D: Conceptualization, Visualization, Writing - Original Draft Preparation, Writing - Review & Editing
Amendments from Version 1
We have corrected the definitions for “Coproduction” and “Clarity” in the “Five C’s of Institutional Impact Health” section, as these were incorrectly assigned in the previous version 1.
The peer reviews of the article are included below.
Funding statement
The author(s) declared that no grants were involved in supporting this work.
Competing interests
JB is commissioned by Emerald Publishing Group as an Impact Literacy advisor. Emerald produced the Impact Literacy and Institutional Health workbooks listed in the article.
Reviewer response for version 1
Kathryn E.R. Graham, Performance Management and Evaluation Unit, Alberta Innovates, Edmonton, AB, Canada
Competing interests: No competing interests were disclosed.
This review was published on 25 September 2019.
This is an open access peer review report distributed under the terms of the Creative Commons Attribution Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Recommendation: approve.
I enjoyed reading the article and think the concept of impact literacy is important and needs to be explored both academically and practically. I am providing the additional comments as a complementary addition to the article.
In regard to the language of "impact literacy", practitioners in the field talk about building an "impact culture". I am uncertain academically about the relationship between "impact literacy" and "impact culture", however, when we talk about impact culture we talk about an individual's/organizational mindset and philosophy to implementing the impact steps and striving for excellence in implementation and best-in class performance. This follows through to self-sustainability where both the individual and the organization intrinsically create and maintain self-sustaining (health) learning systems.
On page 1 the authors talk about the definition of impact as "provable effects of research in the real world". One consideration is to refer to the OECD definition of Impact Evaluation/Assessment as
Impact: positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended.
An assessment of impact uses before/after and/or with/without comparison (DFID, Guidance on Evaluation and Review for DFID Staff, DFID, London, 2005). This definition ties in nicely with the discussion on page 8 that the dominant REF discourse is around positive impacts; it acknowledges that impact can also be negative, as well as direct or indirect and intended or unintended.
On page 3, the first paragraph highlights the importance of context and “fit for purpose”; this is supported by the teachings and tools of the International School on Research Impact Assessment: https://www.theinternationalschoolonria.com/resources.php
It also highlights the importance of impact planning - the proposal being that if you plan for impact you are more likely to achieve impact.
On page 4, another consideration is acknowledging the work of Horizon 2020, the EU framework programme for research and innovation, as this work is mission driven and includes evaluation.
In terms of institutional impact, there are practical examples of funders and other organizations integrating impact at an institutional level, for example:
American Evaluation Association (AEA) (2015). Evaluating outcomes of publicly-funded research, technology and development programs: Recommendations for improving current practice1.
Graham, K.E.R., Chorzempa, H. L., Valentine, P. A., Magnan, J. (2012). Evaluating health research impact: Development and implementation of the Alberta Innovates – Health Solutions impact framework 2.
In terms of mission versus assessment, one point about assessment that could be highlighted is that assessment is not only for allocation but also includes learning (i.e. what is the evidence for what works, and what does not work, under what conditions).
In the section on ethics reference could be made to the work on responsible research and responsible metrics.
A future consideration is understanding and expanding on the relationship between individual and institutional readiness and impact literacy.
- Is the topic of the opinion article discussed accurately in the context of the current literature?
Yes
- Are arguments sufficiently supported by evidence from the published literature?
Yes
- Are all factual statements correct and adequately supported by citations?
Yes
- Are the conclusions drawn balanced and justified on the basis of the presented arguments?
Yes
Impact Assessment, Evaluation, Health, Research and Innovation, Science of Science Policy
I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
References
1. Research, Technology and Development Evaluation Topical Interest Group: Evaluating outcomes of publicly-funded research, technology and development programs: Recommendations for improving current practice. American Evaluation Association (AEA). 2015.
2. Graham K, Chorzempa H, Valentine P, Magnan J: Evaluating health research impact: Development and implementation of the Alberta Innovates - Health Solutions impact framework. Research Evaluation. 2012; 21 (5): 354-367.
Reviewer response for version 1
Amanda Cooper, Queen’s University, Kingston, ON, Canada
Competing interests: No competing interests were disclosed.
This review was published on 04 July 2019.
This is an open access peer review report distributed under the terms of the Creative Commons Attribution Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Recommendation: approve.
This article addresses an area of increasing global importance: research impact agendas. The authors provide a model to conceptualize and measure impact literacy that couples individual efforts with organizational efforts. To date, most of the impact competencies and taxonomies have focused on individuals (Mallidou et al., 2018 1) or training programs (Straus et al., 2011 2), rather than taking a whole systems perspective that incorporates individual and organizational factors contributing to (or detracting from) knowledge mobilization (KMb) and research impact efforts. In fact, Mallidou et al. conclude their scoping review of KT competencies by lamenting that “to date, little research has been conducted on KT competencies at an organizational level and the identification of these competencies or organizational characteristics is certainly needed” (p.11). As such, it is clear that this article makes a significant contribution to the field (as does the emerging body of work by Bayley and Phipps that this work expands on). Within this article, Bayley and Phipps begin the challenging work of theory-building KT competencies in a model that attempts to integrate the organizational aspects that have been consistently ignored. I believe that this article is ready for indexing as is. So, the many thoughts and perspectives below are meant to push the authors even further in their thinking around impact literacy.
Is impact literacy synonymous with implementation expertise? The concept of impact literacy is intriguing, as is the consideration of various levels of literacy being considered on a continuum and in different combinations (matched, mismatched, offset) depending on the organization and individual involved. In the end, I wonder if research impact literacy really centers around the core competency of implementation expertise and, as such, wondered what implementation science might contribute to the model proposed here. You mention literacy in relation to research implementation on page 5, but I am wondering if it is emphasized enough.
What about conflicting benefits and goals among individuals, institutions, and stakeholders? On page 3, the authors argue that “Ultimately an individual could be considered research impact literate if they understand the specific benefits being sought, the activities which would achieve these, and how approaches needed tailoring in specific contexts”. An assumption that I think needs further interrogation by the authors is that the “benefits being sought” are common across different stakeholders, levels of the organization, or individuals. Often multi-stakeholder partnerships have conflicting goals that need to be addressed and negotiated openly (the authors do refer to Oliver’s new piece The Dark Side of Co-Production – which is the point I am getting at here).
Should the model explicitly include a third prong of literacy and readiness within end-user communities? Another question I had was about who impact literacy resides with – in this piece, it seems to focus on institutions and staff members (facilitating KT) within the organization. I wonder if the concept of impact literacy, and the requisite model, actually needs another dimension – the match and readiness of the end-user. I think this could be an interesting extension and could perhaps be represented in a three-dimensional fashion as Kitson et al (1998) 3 do with the PARHIS framework (although I acknowledge there are already a lot of elements included in the Bayley and Phipps model). I think leaving the end-user out has real implications for impact and KMb efforts. For instance, using Table 3 of the Bayley and Phipps model, let’s say that individual literacy is critical and institutional literacy driving (top left quadrant); this does not necessarily mean that impact with end-users is a certainty. Rather, it is the last part of the triangle (producer-mediator-user), the user, that in many ways determines the uptake and impacts that will happen. This point is not meant to take away from this innovative model and work. It is meant to think about how we might push and extend this model and concept even further. How does the research impact literacy of end-user communities enable or constrain KMb efforts and impact?
Where do the 5 Cs originate from? The article briefly mentions the experience of the authors, but do the 5 Cs arise explicitly from empirical work on co-production? This could be briefly expanded on.
How do partnerships across institutions affect this model? Since the model seems to focus on a single institution, I also wondered how this concept of impact literacy and the “matched, mismatched, offset” competency levels might play out across multi-university partnerships. This is an area that could yield interesting empirical work.
Great point about impact literacy not residing necessarily along hierarchical lines of seniority or institutional authority.
I wonder whether the 5 Cs should be organized slightly differently, with Competencies as a cross-cutting theme across the other four areas (commitment, connectivity, co-production, and clarity). Another way to inform and extend this model would be to connect it to Mallidou’s recent scoping review on individual KT competencies (19 competencies organized in relation to knowledge, skills, attitudes, and others). I think explicitly integrating these competencies in some way (even at the individual level) would strengthen the empirical underpinning of the proposed model. I do not suggest this is needed for indexing; once again, these are my thoughts on how the authors might further interrogate and extend their concept of research impact literacy and the relationship between individual and organizational factors.
I was also interested to see what types of competencies or factors would fall along each part of the model for HOW, WHO, WHAT. Something to consider in a future article.
My last line of inquiry for the authors to consider, would be their perspectives on how to test this model. Redman et al (2015) 4 propose a SPIRIT framework (a policy impact model), and table 3 on p.153 of the Redman piece includes a table of hypothetical scenarios of empirical approaches to confirming, refining, or refuting the framework. The table has four columns – empirical focus, hypothetical impact following interventions, implications for the model, and confirm/refine/refute the model. I would be interested to see Bayley and Phipps create a table like this for their model as it would lay the framework for researchers studying KT and KMb to test and build upon their work. Similarly – Kitson et al (2008) 5 include a number of key questions to test conceptual models that are relevant such as: “How do the elements…and sub-elements interrelate and interact with each other? Do the elements and sub-elements have equal weighting in getting evidence into practice?” (p.5). Kitson also includes three important questions to test the usefulness of any conceptual framework in empirical research: “1. Does the framework help organise empirical research in those areas where well-specified theories are not yet formulated? 2. Does empirical research drawing on the framework lead to new discoveries and better explanation of important phenomena? 3. Can the framework be applied to multiple levels of analysis in empirical research?” (p. 5). Thinking through studies to apply and test the framework across diverse contexts would be useful to researchers in the field of KT/KMb.
I really enjoyed reading this article, and the length of this review is indicative of how thought-provoking the concept of research impact literacy is to me! I cannot stress enough the contribution to the field that Bayley and Phipps are making by trailblazing tools and models from the ground up on how to go about understanding and measuring research impact. This article should be indexed – and I look forward to empirically interrogating it further in my work.
- Is the topic of the opinion article discussed accurately in the context of the current literature?
Yes
- Are arguments sufficiently supported by evidence from the published literature?
Yes
- Are all factual statements correct and adequately supported by citations?
Yes
- Are the conclusions drawn balanced and justified on the basis of the presented arguments?
Yes
Research impact; knowledge mobilization; research utilization
I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
References
1. Mallidou A, Atherton P, Chan L, Frisch N, et al.: Core knowledge translation competencies: a scoping review. BMC Health Services Research. 2018; 18 (1).
2. Straus SE, Brouwers M, Johnson D, Lavis JN, et al.: Core competencies in the science and practice of knowledge translation: description of a Canadian strategic training initiative. Implement Sci. 2011; 6: 127.
3. Kitson A, Harvey G, McCormack B: Enabling the implementation of evidence based practice: a conceptual framework. Qual Health Care. 1998; 7 (3): 149-58.
4. Redman S, Turner T, Davies H, Williamson A, et al.: The SPIRIT Action Framework: A structured approach to selecting and testing strategies to increase the use of research in policy. Soc Sci Med. 2015; 136-137: 147-55.
5. Kitson AL, Rycroft-Malone J, Harvey G, McCormack B, et al.: Evaluating the successful implementation of evidence into practice using the PARiHS framework: theoretical and practical challenges. Implement Sci. 2008; 3: 1.
Figures
Levels of individual impact literacy.
| Literacy level | Integration and critique of evidence | Illustrative description of level |
| --- | --- | --- |
| Basic | Aware | Aware of the evidence about practices and processes, understands there is a body of expertise, knowledge and tools which can underpin practice, but may not use or know how to draw them into practice. Likely understands impact at project (small scale) level |
| Intermediate | Engaged | Informed by and engaged with the evidence, understands there is a body of expertise, knowledge and tools which can underpin practice, knows how to draw on these and builds them into practice. Likely to be able to comprehend at a programme (higher order) level |
| Advanced | Critical | Critically engaged with the evidence, understands there is a body of expertise, knowledge and tools which can underpin practice and is able to (i) synthesize, (ii) critique and (iii) add to/extend it. Likely to be able to comprehend at a strategic and/or systems level |
Levels of institutional impact literacy.
| Literacy level | Integration and critique of evidence | Description of level |
| --- | --- | --- |
| Basic | Supportive | Institution recognises researchers must participate in impact-related activities (e.g. impact strategies in grant applications, impact assessment exercises) but has not developed institutional plans/strategies to actively develop impact literacy. Institution supports efforts of researchers but is not actively maximizing the creation and reporting of impacts. |
| Intermediate | Enabling | Institution has developed some policies/plans and is investing in efforts to enable researchers to create and report impact. Institutional policies strongly reflect external agendas, but institutions are not yet critically appraising external models and adapting to institutional context. |
| Advanced | Driving | Institutions have policies and strategies, are investing in these strategies and in personnel, and have established a cycle of critical stakeholder engagement to drive the ongoing development of impact services. |
Inter-relatedness of individual and institutional impact literacy levels.
| Institutional literacy | Individual: Critical | Individual: Engaged | Individual: Aware |
| --- | --- | --- | --- |
| Driving | Matched (advanced). Individual and institution both highly literate. RISKS: • Complacency • Literacy can be undermined by strategy or staff changes. ACTIONS: • Sustain and review literacy in tandem • Seek to engage with evidence to continually build/develop impact practice | Offset (institution dominant). Institution is driving literacy, with an engaged (non-critical) individual. RISKS: • Learning from individual experiences limited • Tensions implementing institutional strategy. ACTIONS: • Build individual capacity • Establish appropriate development opportunities for the individual • Revise staffing/resource allocation • Consider external guidance and support | Mismatched (individual underequipped). Institutional strategy is beyond individual capability. RISKS: • Strategy cannot be effectively implemented • Institutional capacity growth limited • Negative effects on staff morale and confidence. ACTIONS: • Build individual capacity • Revise staffing/resource allocation • Consider external guidance and support |
| Enabling | Offset (individual dominant). Individual is critically engaged with impact, within an institution which is enabling (but not driving) impact. RISKS: • Potential for impact capped • Impact practitioners unable to optimise performance. ACTIONS: • Enable individual to lead/inform development and implementation of strategy | Matched (moderate). Individual and institution both moderately literate. RISKS: • Complacency and no drive to grow literacy • Literacy can be undermined by strategy or staff changes. ACTIONS: • Monitor staff/institutional changes and build literacy in tandem | Offset (institution dominant). Institution is enabling impact, with an aware (but not engaged) individual. RISKS: • Learning from individual experiences limited • Tensions implementing institutional strategy. ACTIONS: • Build individual capacity • Establish appropriate development opportunities for the individual • Revise staffing/resource allocation • Consider external guidance and support |
| Supportive | Mismatched (individual overequipped). Individual capability is superior to institutional strategy. RISKS: • Innovation capped • Impact practitioners may move to a more impact literate institution. ACTIONS: • Use individual skills to inform institutional strategy • Develop strategic plans and investment to retain impact expertise | Offset (individual dominant). Individual is engaged with impact, within an institution which is supportive of (but not enabling) impact. RISKS: • Potential for impact capped • Impact practitioners unable to optimise performance. ACTIONS: • Enable individual to lead/inform development and implementation of strategy | Matched (basic). Individual and institution both have basic literacy. RISKS: • Complacency and no drive to grow • Literacy can be undermined by strategy or staff changes. ACTIONS: • Monitor staff/institutional changes and build literacy in tandem |
Examples of healthy and unhealthy practices (5 Cs).
| | HEALTHY | UNHEALTHY |
| --- | --- | --- |
| Commitment | Institution has an impact strategy, with dedicated leadership, support and resourcing. Impact is embedded across the research lifecycle, built appropriately into workloads, and fairly tied to career development and progression. | Institution has no strategic direction, with no/minimal leadership, support or resourcing. Impact receives attention periodically in response to external agendas only, with effort unrecognised in workload planning and progression. |
| Connectivity | Impact-related practices, personnel and goals are connected across the institution, with responsibility for impact shared across academic and non-academic staff. The activities of different people and teams are aligned, coordinated and delivered cohesively. | Impact-related practices, personnel and goals are disconnected across the institution. Impact is the responsibility of a single role or components of multiple but disconnected roles, and the activities of those who can support impact are not aligned. |
| Coproduction | The institution supports multiple and meaningful connections to external stakeholders, establishing partnerships to inform, guide, contribute to and use research. Stakeholder input is embedded strategically into projects and programmes, with end-user needs integrated into research plans. | Stakeholder relationships across the institution are superficial and tokenistic. Partnerships may be transient or short term to solve a particular research-led problem, rather than reflecting stakeholder needs. Institution lacks a coherent strategy for stakeholder engagement. |
| Competencies | Staff have the skills to deliver impact, with specialised expertise accessible where needed. The institution supports and invests in skills development for both academic and non-academic staff, providing training and development opportunities. | Staff are underskilled and under-confident to deliver impact. There is no or limited investment in skills development for academic or non-academic staff, with little opportunity for training and development. |
| Clarity | Staff understand what impact is, and how their role is connected to impact delivery. Institutional vision is unambiguous, with staff clear on formal internal and external requirements. Cross-disciplinary differences in impact are recognised, with strategic goals appropriately contextualised. | Staff do not understand impact, or how their role aligns. Institutional messaging is unclear, and there is a ‘one size fits all’ expectation of how impact is delivered across subject areas. |
References
Adam, P., Ovseiko, P.V., Grant, J. et al. (2018), “ISRIA statement: ten-point guidelines for an effective process of research impact assessment”, Health Res Policy Syst, Vol. 16 No. 1, p. 8, doi: 10.1186/s12961-018-0281-5.
Bayley, J.E. and Phipps, D. (2017), “Building the concept of research impact literacy”, Evidence & Policy: A Journal of Research, Debate and Practice, Vol. 15 No. 4, pp. 597–606, doi: 10.1332/174426417X15034894876108.
Bayley, J.E., Phipps, D., Batac, M. et al. (2018), “Development of a framework for knowledge mobilisation and impact competencies”, Evid Policy, Vol. 14 No. 4, pp. 725-738, doi: 10.1332/174426417X14945838375124.
Chinn, D. (2011), “Critical health literacy: a review and critical analysis”, Soc Sci Med, Vol. 73 No. 1, pp. 60-67, doi: 10.1016/j.socscimed.2011.04.004.
Derrick, G.E., Faria, R., Benneworth, P. et al. (2018), “Towards characterising negative impact: introducing Grimpact”, STI 2018 Conference Proceedings, available at: Reference Source.
Guzys, D., Kenny, A., Dickson-Swift, V. et al. (2015), “A critical review of population health literacy assessment”, BMC Public Health, Vol. 15, p. 215, doi: 10.1186/s12889-015-1551-6.
King's College London and Digital Science (2015), “The nature, scale and beneficiaries of research impact. An initial analysis of Research Excellence Framework (REF) 2014 impact case studies”, available at: Reference Source.
Muhammad, M., Wallerstein, N., Sussman, A.L. et al. (2015), “Reflections on researcher identity and power: the impact of positionality on community based participatory research (CBPR) processes and outcomes”, Crit Sociol (Eugene), Vol. 41 No. 7–8, pp. 1045-1063, doi: 10.1177/0896920513516025.
Nutley, S.M., Walter, I. and Davies, H.T.O. (2007), Using Evidence: How Research Can Inform Public Services, The Policy Press, Bristol, p. 376, doi: 10.2307/j.ctt9qgwt1.
Oliver, K., Kothari, A. and Mays, N. (2019), “The dark side of coproduction: do the costs outweigh the benefits for health research?”, Health Res Policy Syst, Vol. 17 No. 1, p. 33, doi: 10.1186/s12961-019-0432-3.
Phipps, D.J., Brien, D., Echt, L. et al. (2017), “Determinants of successful knowledge brokering: a transnational comparison of knowledge intermediary organizations”, Research for All, Vol. 1 No. 1, pp. 185-97, doi: 10.18546/RFA.01.1.15.
Phipps, D.J., Cummings, J., Pepler, D. et al. (2016), “The co-produced pathway to impact describes knowledge mobilisation processes”, J Community Engagem Scholarsh, Vol. 9 No. 1, pp. 31-40.
Pielke, R.A. (2007), The Honest Broker, Cambridge University Press, Cambridge, doi: 10.1017/CBO9780511818110.
Research Excellence Framework (2019), “Guidance on submissions”, available at: Reference Source
Schulz, J., Bahrami-Rad, D., Beauchamp, J. et al. (2018), “The origins of WEIRD psychology”, doi: 10.2139/ssrn.3201031.