“My words matter”: perspectives on evaluation from people who access and work in recovery colleges

Sophie Soklaridis (Department of Education, Centre for Addiction and Mental Health Queen Street Site, Toronto, Canada)
Rowen Shier (Department of Education, Centre for Addiction and Mental Health Queen Street Site, Toronto, Canada)
Georgia Black (Department of Education, Centre for Addiction and Mental Health Queen Street Site, Toronto, Canada)
Gail Bellissimo (Department of Education, Centre for Addiction and Mental Health Queen Street Site, Toronto, Canada)
Anna Di Giandomenico (SPOR Patient Partner, Toronto, Canada)
Sam Gruszecki (Department of Education, Centre for Addiction and Mental Health Queen Street Site, Toronto, Canada)
Elizabeth Lin (Department of Education, Centre for Addiction and Mental Health Queen Street Site, Toronto, Canada)
Jordana Rovet (Department of Education, Centre for Addiction and Mental Health Queen Street Site, Toronto, Canada)
Holly Harris (Department of Education, Centre for Addiction and Mental Health Queen Street Site, Toronto, Canada)

Mental Health and Social Inclusion

ISSN: 2042-8308

Article publication date: 7 March 2023

Issue publication date: 15 April 2024

Abstract

Purpose

The purpose of this co-produced research project was to conduct interviews with people working in, volunteering with and accessing Canadian recovery colleges (RCs) to explore their perspectives on what an evaluation strategy for RCs could look like.

Design/methodology/approach

This study used a participatory action research approach and involved semistructured interviews with 29 people involved with RCs across Canada.

Findings

In this paper, the authors share insights from participants about the purposes of RC evaluation; key elements of evaluation; and the most applicable and effective approaches to evaluation. Participants indicated that RC evaluations should use a personalized, humanistic and accessible approach. The findings suggest that evaluations can serve multiple purposes and have the potential to support both organizational and personal-recovery goals if they are developed with meaningful input from people who access and work in RCs.

Practical implications

The findings can be used to guide evaluations so that the aspects most important to those involved in RCs inform choices, decisions, priorities, developments and adaptations in RC evaluation processes and, ultimately, in programming.

Originality/value

A recent scoping review revealed that although coproduction is a central feature of the RC model, coproduction principles are rarely acknowledged in descriptions of how RC evaluation strategies are developed. Exploring coproduction processes in all aspects of the RC model, including evaluation, can further the mission of RCs, which is to create spaces where people can come together and engage in mutual capacity-building and collaboration.

Citation

Soklaridis, S., Shier, R., Black, G., Bellissimo, G., Di Giandomenico, A., Gruszecki, S., Lin, E., Rovet, J. and Harris, H. (2024), "“My words matter”: perspectives on evaluation from people who access and work in recovery colleges", Mental Health and Social Inclusion, Vol. 28 No. 2, pp. 134-143. https://doi.org/10.1108/MHSI-01-2023-0002

Publisher

Emerald Publishing Limited

Copyright © 2023, Sophie Soklaridis, Rowen Shier, Georgia Black, Gail Bellissimo, Anna Di Giandomenico, Sam Gruszecki, Elizabeth Lin, Jordana Rovet and Holly Harris.

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial & non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


Introduction

Recovery colleges (RCs) are mental health and well-being learning centers designed to support people who self-identify as being on personal-recovery journeys in pursuit of their wellness goals (Perkins et al., 2012). In contrast to traditional mental health services, RCs implement an adult educational framework that incorporates transformative and constructivist learning (Hoban, 2015). Emphasis on coproduction and co-learning is fundamental to RCs. Coproduction is a process in which people who access services are recognized and situated as experts and collaborate with service providers to design, facilitate and actualize processes, logistics and curricula (Durbin et al., 2019; Sommer et al., 2018). Coproduction aims for empowerment, whereby people involved with RCs exercise autonomy to define and shape their interactions with the services they use (Hunter and Ritchie, 2007).

Although coproduction is a central feature of the RC model, a recent scoping review found that coproduction principles are rarely described in accounts of how RC evaluation strategies are developed, and even when they are, reports do not detail the extent of coproduction (Lin et al., 2022a). Evaluation strategies developed without the meaningful involvement of people who access, volunteer or work in RCs may overlook aspects of RCs that those people consider important to evaluate and may limit the usefulness of quality improvement initiatives (Patton, 2017; Wolfe et al., 2020). Integrating coproduction processes into all aspects of the RC, including evaluation, promotes the mission of RCs: to create spaces where people come together and engage in mutual capacity-building and collaboration (Avelino, 2021).

Our qualitative research project explored the use of coproduction principles in RC evaluations through two simultaneous phases. In the first phase, we cocreated a scoping review that examined how RCs were evaluated and whether those evaluations were coproduced with people accessing RCs (Lin et al., 2022a, 2022b). In the second phase, we interviewed people accessing, volunteering and/or working in RCs to explore two research questions:

RQ1.

What do participants understand to be the most important elements of RCs for their personal recovery?

RQ2.

How do we create evaluation measures that are relevant to and impactful for people accessing RCs?

The first question is being addressed in a separate article (Harris et al., in preparation). In the current article, we address the second question by sharing insights from participants about the purposes of RC evaluation, key elements of RC evaluations and the most applicable and impactful approaches to RC evaluation. Research Ethics Board approval was obtained from the Centre for Addiction and Mental Health (REB 042/2020) and Ontario Shores Centre for Mental Health Sciences (REB 20-013-B).

Approach

Every aspect of this study was coproduced, from the research proposal to the writing of this manuscript. Through weekly meetings, the research team collaboratively finalized the research aims, cocreated the interview guide, provided feedback on the data collection process, coded deidentified data, codeveloped a coding dictionary and collaborated on analyzing the findings. Our team recognizes each member as a coauthor/coinvestigator, a view that is not yet common in the wider scientific community (Nelson and Petrova, 2022).

Our team consists of people representing diverse disciplines, areas of focus and learned/lived expertise, each with intersectional and complex experiences of privilege and oppression. We recognize and grapple with the deeply embedded power dynamics that exist in the research context in which we work. We recognize that advocating the inclusion of lived experience as a key element of knowledge production is an ongoing process, and we are committed to ongoing learning and accountability. We live with the tensions that working equitably and collaboratively in a hierarchical system entails, and during our weekly meetings, we take all opportunities to discuss how we can continue to critically analyze and challenge problematic power imbalances that exist within mental health care and health research spaces. We are especially mindful and intentional in working to dismantle power hierarchies through the language we use. Thus, throughout this paper, we use the words and phrases that participants used in their interviews rather than imposing our own language.

Methods

We used a participatory action research (PAR) approach that leveraged the collective knowledge of all coauthors and that placed equal value on personal and professional experiences (Schneider, 2012). Consistent with the principles of PAR, the lines between researcher and the researched were deliberately blurred and challenged. As a team, we were committed to working in the interest of people who participate in RCs who will live and work with the results.

PAR differs from other research methods in its focus on reflection, data collection and action that work together to improve health and reduce health inequities by involving the individuals affected in all stages of research. Specifically, our project was initiated and shaped by, with, about and for people who participate in RCs in partnership with academic researchers (Kindon et al., 2007; Whitley, 2014). PAR involves three core elements:

  1. active participation of knowledge users in the co-construction of knowledge;

  2. promotion of self-awareness and critical awareness leading to individual, collective or social change; and

  3. building of alliances among stakeholders in planning, implementing and disseminating research (McIntyre, 2007).

Recruitment

We used purposive and convenience sampling methods to identify individuals who were especially knowledgeable about, invested in or had experience with RC evaluations (Patton, 2002). Administrative personnel at RCs across Canada sent a general email to people who access, volunteer or work at their respective RCs. The email provided information about the study and instructed prospective participants to direct questions to a member of the research team who was not known to them and had no affiliation with any of the RCs. One RC also posted the information letter on its website. Through these strategies, 29 participants were recruited. To be eligible, participants had to:

  • be at least 18 years old;

  • be currently accessing, volunteering or working at a Canadian RC;

  • speak and understand English fluently; and

  • have the capacity to consent to research participation.

Data collection

Twenty-nine individuals were interviewed. We used a semistructured interview guide and conducted the interviews on Webex, a virtual conferencing platform (version 41.12.0.20899). A sample of evaluation-related questions and associated prompts extracted from the study’s larger interview guide can be found in Box 1. Consistent with PAR approaches, interview questions were responsive and were reshaped as the interviews progressed. Overall, interviews covered the following two areas:

  1. What features of the RC would be important to evaluate? and

  2. What would be the best way to go about evaluating these features (i.e. what methods would work in an RC context)?

Interviews lasted approximately 60 min and were conducted between December 2021 and June 2022. Interviews were audio-recorded and the recordings were stored on a secure server at the lead site:

Box 1. Sample interview questions

1. As we develop this evaluation framework, what elements of RCs do you think we should be evaluating to measure their success? Probe(s):

  • popularity of certain courses;

  • teaching styles of course facilitators; and

  • impact on your life.

2. In your opinion, how do you think this RC could be evaluated? Probe(s):

  • town halls with students;

  • one-on-one consultations with external folks;

  • a quick online survey with students;

  • cumulatively via a portfolio of student work/progress; and

  • all of the above.

Data analysis

All interviews were transcribed, deidentified and analyzed collectively by coauthors using a thematic analysis approach (Kiger and Varpio, 2020). Deidentified transcripts were imported into the qualitative-analysis software Dedoose (version 9.0.46). We followed the six steps of analysis outlined in Braun and Clarke’s (2006) seminal approach to qualitative analysis. First, we familiarized ourselves with the data through multiple readings of the interview transcripts (Step 1). Next, each of us identified interesting features in the transcripts and assigned each one a code. We then discussed the codes and, through consensus, agreed on code names that would be applied to all transcripts. Next, we created a coding dictionary that listed the codes and their definitions (Step 2). Once all the data were coded in Dedoose, the codes were grouped according to potential themes (Step 3). The team reviewed these themes and refined them by looking for patterns and ensuring that enough data existed to support the initial interpretations (Step 4). We then named each theme and articulated the overall story that each theme represented (Step 5). Finally, we produced an account of the story that the data tell (Step 6).

Results

Evaluation can be a difficult topic to describe and to explain in terms of what works, why, for whom and under what circumstances (Teitelbaum, 2020). Therefore, we were not surprised that many participants had difficulty answering specific questions about evaluating RCs and sometimes needed to be guided with explanations of evaluation, prompts or leading questions. Participants involved in RC facilitation or in conducting RC evaluations had more experience with the evaluation process and were able to provide more detailed answers.

As evidence suggests that RC evaluations rarely describe coproduction and that, even when they do, descriptions of its extent are lacking (Lin et al., 2022a), we intentionally centered the perspectives of people who access and facilitate RCs in this paper. We do this by exclusively sharing their quotes to illustrate themes derived from the larger sample. People who facilitated or attended RC courses constituted 96.6% (28 of 29 participants) of our sample. Overall, participants discussed three topics:

  1. multiple purposes of RC evaluation;

  2. key considerations for evaluations; and

  3. approaches to evaluation.

1. Multiple purposes of evaluation

Three main purposes of evaluation emerged from our interviews. First, participants highlighted that evaluations are a vehicle for advocating for program improvements and change. As articulated by one participant:

P001: I know more about [the organization], I have feedback from other people, and I know my feedback, for sure, so I am at the point that I am comfortable to speak up and explain and see whether [organizational leaders] think there is something they can do or something we can do together. I give them something to go with in order to better the services.

Second, some participants noted that evaluations could be a useful tool for ensuring that RC offerings meet the objectives stated in the course description:

P025: Some of the general feedback that I’ve received from folks is how we are able to manage time. So I think questions around, did we cover all the topics? Was there anything missing? […] Because I know that the facilitator guide is never going to match the handouts for [an RC course]. So then the facilitators can kind of decide on the fly what they’re going to leave out […] I would love to see some sort of structure around what the participants can expect.

Third, several participants shared that evaluation can be useful for monitoring self-improvement, suggesting that self-assessments are an effective way to gauge personal learning and growth:

P021: One thing actually that would be cool […] is some kind of self-assessment tool. Something you could work on more, like a workbook or a self-assessment tool or something like that. It would be neat if you could have a tool, checklist or a check-in for yourself […] Just to see your progression, and see if you are improving.

P001: The particular group that I took, there were quizzes and different things that you could review on your own. So those are really things that I really enjoyed as well.

Some participants also shared that they enjoyed seeing written records of progress, such as certificates:

P021: They post your certificates when you are done a course. It has your progress, like, 60% done this course […] It was good to see.

One participant expressed that receiving feedback would have been very useful in preparing for the work environment and for performance evaluations:

P025: I never had a performance evaluation with any of my peer support schooling […] So when I started my position at [RC], I had my three-month review […] I took it very hard […] I know that there are some people that are sensitive to feedback, which I know that I am. I think just not even being prepared for how that looks […] was the thing.

A few participants noted the challenges of conducting evaluations whose purpose is to meet multiple needs, such as those of people directly involved in RCs, of the organization and, when the RC receives philanthropic or research funding, of the funder:

P029: I am looking at things from the client perspective or the peer perspective, and [organizational leaders] have their view from the organization perspective, so it is not possible we are in sync all the time.

Given the diverse perspectives on the purposes of evaluation, understanding the reasons for evaluation requires ongoing discussion with everyone who is involved or affected. One participant involved in facilitation described that part of their role was to give the organization’s leadership feedback on these tensions between various groups:

P001: I will still voice my opinions, concerns, feedback, but I also would not insist on something when [the organization] has decided what they think they should do. At the end, they are the organization providing the services. I have to trust that they have a reason for what they are doing. But my obligation is to give them the feedback for them to consider.

2. Key considerations for evaluating RCs

Participants shared three key elements to consider when conducting RC evaluations. First, as one participant described, evaluations should be conducted “gently”:

P027: Be gentle […] because the person that wrote it, or the people that are writing it, you don’t know their background. You don’t know where they’re coming from. You don’t know what experiences they have and don’t have.

A second consideration that participants identified was the importance of making the evaluation process accessible. For example, one participant indicated that although online surveys might be straightforward and easy for some people, not everyone has access to a cell phone or a computer:

P029: I know I speak to quite a few people that don’t use a computer or cell phone, and they have a lot of feedback they want to give, but they don’t really have any way to do it.

Another participant explained that it could be difficult for people accessing RC courses to access a link to a survey that is provided in a virtual environment, for example, on presentation slides or in the chat room:

P021: They offered a link, from what I remember, at the end of their slide that they were showing that we could print off and were provided with, to fill out a survey. I never did it though because it wasn’t easy to get at.

The third consideration for RC evaluation that participants identified was the importance of offering incentives to complete evaluations as a way of acknowledging the value of this feedback:

P023: I think there should be a component of this that is incentivizing people to complete the survey. By incentivizing I mean it’s just a gift card to anywhere really […] It can have a big difference. I know for me sometimes I do studies and they give me a card to [a coffee shop]. Well I haven’t been to [a coffee shop] in a long time because I’m focused on my recovery and finances are tight […] So those little incentives really make a difference. They also help me see my participation as valuable, rather than just free. You give your opinion, and you’re not sure whether they’re listened to or not. But as soon as you bring in incentives, you realize, okay, people are paying attention. People really care about what’s going on. And it’s super helpful for the clients.

3. Approaches to evaluation

Several participants shared their perspectives on how evaluations are, or should be, conducted. Most participants indicated that post-course surveys and feedback forms were the most common methods of evaluation:

P019: When we have courses, we usually do a survey at the end of them. I assume that they gather that information to try to develop what we’re doing. I find that sometimes the questions are a little generic. Maybe they could try to be a little more pointed, so that you’re getting better answers.

P014: I think surveys are the best way […] I think generally surveys would do a good job of seeing what’s right, what’s wrong in the recovery colleges themselves.

Participants were aware of the potential time constraints of doing anything lengthier than a simple survey. Some suggested that following up with a brief interview might provide an opportunity to expand beyond a “tick box” approach to evaluation and garner more meaningful information:

P016: I think on a quarterly basis, identifying people who have been through the college to have an interview similar to this one would be a little bit more meaningful [than a written survey] in terms of developing or identifying really credible information that you could say, okay, as we are working on redesigning an area of our program, here is something to consider.

P024: [After] the post-survey, following up with and asking, “Would you be open to a contact to further discuss?” I think you would find a lot of people say, yeah, my words matter. They matter to me, and I’m glad someone is listening to them. I don’t want to tick boxes anymore. I’ve had enough of this clinical approach. I’m not my diagnosis. My words matter.

Conducting interviews was the second most discussed approach to evaluation. Although interviews are time-consuming, many participants believed that information gleaned from even a 5-min interview would have great value:

P027: The best thing that I would do personally is get them to do even a five-minute interview, like this saying, “What is your opinion on this?”

Others talked about the benefits of roundtables, groups or focus groups, while recognizing that sometimes people might be more comfortable sharing during a one-on-one interview:

P015: When we come into these courses, and when we get together as a group, it is like we have a deeper relationship almost automatically, because people are willing to share things that they wouldn’t with somebody they just met. So, I think when you can engage people in a group setting like this, it is helpful, and it is even more helpful when it is on the basis of mental health, addictions or something like that. So […] in terms of sharing […] there would be a lot of benefit that could come from it. I also think that there are people that might be more comfortable with the one-on-one format, and kind of sharing their true thoughts about things.

A couple of participants described how town halls could be useful for program improvement and evaluation. One participant shared this experience of attending a town hall:

P010: It was great. We exchanged ideas, mainly for improvement. We recognized what went well […] I signed up for the next one […] because I would like to keep that consistent. It is another way of contributing and staying connected. If you don’t participate and expect things to happen without contributing […] it is not realistic. It’s like going to vote.

Overall, the findings reflect the need for evaluation approaches to balance ease of access and respect for people’s time with space for more personalized and elaborate feedback through, for example, face-to-face engagement.

Discussion

Although coproduction is a central feature of the RC model, it is rare that reports describe how people who access and work in RCs are involved in developing evaluation processes (Lin et al., 2022a). In this project, we found that evaluations that are personalized, humanistic and accessible would be most meaningful to people who are directly involved in RCs. Building on the findings of a recent scoping review, which showed that RC evaluations can serve multiple purposes, this study reveals the potential of RC evaluations to support both organizational and personal-recovery goals if they are developed with meaningful input from people who access and work in RCs (Lin et al., 2022a).

RCs are designed to be safe, responsive spaces where people can share their perspectives and feel valued. The majority of participants in our project thought that the purpose of evaluation is program improvement and that feedback from people who access or work in RCs is an integral part of shaping and informing their evolution. Other participants emphasized the importance of critical reflection through evaluations as a form of advocacy to influence systemic change. These findings suggest that it is important for RC program staff to show people accessing RCs that their feedback is valuable and that it is being used to make changes that respond to their input. Demonstrating that feedback translates into program improvements has the potential to increase evaluation response rates, which is a common challenge for program evaluations (Fan and Yan, 2010).

In addition to garnering valuable information to inform program improvements and advocacy, evaluations can also support people in their personal-recovery journeys. RCs are designed to help people work toward their self-defined wellness goals, which often involve living a meaningful and contributing life (Leamy et al., 2011). Participants in our study noted how evaluations could be used as self-reflection tools, which in turn could support the pursuit of vocational opportunities. Specifically, evaluation can be an important part of monitoring progress toward goals, increasing self-awareness, providing positive reinforcement and identifying areas of opportunity. For those conducting evaluations, reflecting on the question “Does this evaluation benefit the people accessing this program?” ensures that programming responds to the needs of those who access it.

The findings also suggest that it is important for those evaluating RCs to consider the barriers to participation posed by evaluation processes that are lengthy or that involve disclosing sensitive information. Consistent with broader conversations about the ethics of paying people who access services for their contributions to health education and research (Feige and Choubak, 2019; Soklaridis et al., 2020), providing incentives (e.g. gift cards) for participating in evaluations acknowledges that participants’ feedback is valuable, ensures that they are compensated for their contributions and time and promotes overall engagement.

Participants also indicated that it is crucial to make it clear that participation in evaluations is optional and to increase the accessibility of evaluations for those who do wish to participate, particularly when evaluations are conducted online. Links to evaluation surveys must be easy to find, and help must be available to ensure that people have the digital literacy needed to actively participate. Offering choices wherever possible in how and when people can participate in evaluation can increase accessibility and thus improve response rates and quality. In addition, providing choice can strengthen the commitment of RC programming to honor the right to self-determination.

Although participants identified surveys as the most common, practical and efficient evaluation tool, they also noted the drawbacks of relying solely on this form of data collection. Surveys cannot fully capture a person’s emotional response and often provide few details about an experience. Some participants suggested combining surveys with semistructured interviews or town halls to better capture the complexity of individuals’ experiences with RCs. Giving those accessing RCs opportunities to discuss their feedback with an evaluator in interviews or town halls could solicit more robust and relevant information and convey the message that their voices matter.

According to a recent scoping review, nearly half of RC evaluations use a mixed methods approach (Lin et al., 2022a). The current findings suggest that mixed methods approaches have the potential to satisfy funder and organizational requirements that often consist of high-level metrics derived from surveys while also providing useful insights from narrative accounts that highlight why and how the RC is working and under what circumstances (Morgan, 1996). This approach also gives people accessing RCs an opportunity to tell evaluators what they believe to be important to inform quality improvement initiatives. The commitment to centering the voices of those who are most impacted by RCs by positioning them as a driver of programming rather than as an afterthought could increase evaluation response rates and lead to improvements that better reflect the communities that RCs serve (Fan and Yan, 2010).

Limitations

One limitation of this study relates to the varying levels of evaluation literacy among participants. Some had difficulty answering open-ended questions about evaluation and needed explanations, examples and prompts. Despite this challenge, the findings of our study can be viewed as the first step in mapping out general principles to underpin the development of RC evaluation strategies. Another limitation is that we did not collect demographic data; we chose not to because gathering this type of information may have interfered with our focus on building rapport in the interviews. Although demographic data are not required in this type of qualitative research, this information could have provided deeper insights into the participant sample. Finally, participation in this study was limited to people who could connect via phone or videoconference.

Conclusion

Our findings can be used to guide evaluations so that program improvements reflect what is most important to the people who access, volunteer and work in RCs (Patton, 2017). Further exploring coproduction processes in RC evaluation can deepen the commitment to shared decision-making in all aspects of the RC model.

References

Avelino, F. (2021), “Theories of power and social change. Power contestations and their implications for research on social change and innovation”, Journal of Political Power, Vol. 14 No. 3, pp. 425-448.

Braun, V. and Clarke, V. (2006), “Using thematic analysis in psychology”, Qualitative Research in Psychology, Vol. 3 No. 2, pp. 77-101.

Durbin, A., Kapustianyk, G., Nisenbaum, R., Wang, R., Aratangy, T., Khan, B. and Stergiopoulos, V. (2019), “Recovery education for people experiencing housing instability: an evaluation protocol”, International Journal of Social Psychiatry, Vol. 65 No. 6, pp. 468-478.

Fan, W. and Yan, Z. (2010), “Factors affecting response rates of the web survey: a systematic review”, Computers in Human Behavior, Vol. 26 No. 2, pp. 132-139.

Feige, S. and Choubak, M. (2019), “Compensating people with lived experience: highlights from the literature”, available at: www.atrium.lib.uoguelph.ca/xmlui/bitstream/handle/10214/17653/Feige_Choubak_PeerEngagementProject_FactSheet_2019.pdf?sequence=2&isAllowed=y

Hoban, D. (2015), “Developing a sound theoretical educational framework”, Recovery Colleges International Community of Practice, Proceedings of June 2015 Meeting, pp. 8-13, available at: www.researchintorecovery.com/files/2016%20RCICoP%20Proceedings%20June%202015%20meeting.pdf

Hunter, S. and Ritchie, P. (2007), Co-production and Personalisation in Social Care: Changing Relationships in the Provision of Social Care, Jessica Kingsley Publishers, London.

Kiger, M.E. and Varpio, L. (2020), “Thematic analysis of qualitative data: AMEE guide no. 131”, Medical Teacher, Vol. 42 No. 8, pp. 846-854.

Kindon, S., Pain, R. and Kesby, M. (2007), Participatory Action Research Approaches and Methods: Connecting People, Participation and Place, Routledge, Abingdon.

Leamy, M., Bird, V., Le Boutillier, C., Williams, J. and Slade, M. (2011), “Conceptual framework for personal recovery in mental health: systematic review and narrative synthesis”, British Journal of Psychiatry, Vol. 199 No. 6, pp. 445-452.

Lin, E., Harris, H., Black, G., Bellissimo, G., Di Giandomenico, A., Rodak, T., … Soklaridis, S. (2022a), “Evaluating recovery colleges: a co-created scoping review”, Journal of Mental Health, pp. 1-22, online ahead of print.

Lin, E., Harris, H., Gruszecki, S., Costa-Dookhan, K.A., Rodak, T., Sockalingam, S. and Soklaridis, S. (2022b), “Developing an evaluation framework for assessing the impact of recovery colleges: protocol for a participatory stakeholder engagement process and cocreated scoping review”, BMJ Open, Vol. 12 No. 3, e055289.

McIntyre, A. (2007), Participatory Action Research, Sage, Thousand Oaks, CA.

Morgan, D.L. (1996), “Focus groups”, Annual Review of Sociology, Vol. 22 No. 1, pp. 129-152.

Nelson, P. and Petrova, M.G. (2022), “Research assistants: scientific credit and recognized authorship”, Learned Publishing, Vol. 35 No. 3, pp. 423-427.

Patton, M.Q. (2002), Qualitative Research and Evaluation Methods, 3rd ed., Sage, Thousand Oaks, CA.

Patton, M.Q. (2017), Principles-Focused Evaluation: The Guide, Guilford Press, New York, NY.

Perkins, R., Repper, J., Rinaldi, M. and Brown, H. (2012), “Recovery colleges”, ImROC Briefing Paper 1, Implementing Recovery through Organisational Change, available at: www.imroc.org/resources/1-recovery-colleges/

Schneider, B. (2012), “Participatory action research, mental health service user research, and the hearing (our) voices projects”, International Journal of Qualitative Methods, Vol. 11 No. 2, pp. 152-165.

Soklaridis, S., de Bie, A., Cooper, R.B., McCullough, K., McGovern, B., Beder, M., … Agrawal, S. (2020), “Co-producing psychiatric education with service user educators: a collective autobiographical case study of the meaning, ethics, and importance of payment”, Academic Psychiatry, Vol. 44 No. 2, pp. 159-167.

Sommer, J., Gill, K. and Stein-Parbury, J. (2018), “Walking side-by-side: recovery colleges revolutionising mental health care”, Mental Health and Social Inclusion, Vol. 22 No. 1, pp. 18-26.

Teitelbaum, P. (2020), “Strengthening evaluation literacy: demystifying participatory and collaborative approaches to evaluation”, available at: www.tamarackcommunity.ca/hubfs/Resources/Publications/Strengthening%20Evaluation%20Literacy.pdf?hsCtaTracking=324e7bd5-6fff-46f3-9a8f-a3966d46e3c4%7C2c58bd55-8dfc-4481-a2e5-da7d9ef5b89d

Whitley, R. (2014), “Beyond critique: rethinking roles for the anthropology of mental health”, Culture, Medicine, and Psychiatry, Vol. 38 No. 3, pp. 499-511.

Wolfe, S.M., Long, P.D. and Brown, K.K. (2020), “Using a principles‐focused evaluation approach to evaluate coalitions and collaboratives working toward equity and social justice”, New Directions for Evaluation, Vol. 2020 No. 165, pp. 45-65.

Acknowledgements

The support and contributions of Sanjeev Sockalingam, Hema Zbogar and Kenya A. Costa-Dookhan are gratefully acknowledged.

Funding: This work was supported by the Canadian Institutes of Health Research under SPOR Catalyst Grant [# 424837].

Corresponding author

Sophie Soklaridis can be contacted at: sophie.soklaridis@camh.ca

About the authors

Sophie Soklaridis is based at the Department of Education, Centre for Addiction and Mental Health Queen Street Site, Toronto, Canada.

Rowen Shier is based at the Department of Education, Centre for Addiction and Mental Health Queen Street Site, Toronto, Canada.

Georgia Black is based at the Department of Education, Centre for Addiction and Mental Health Queen Street Site, Toronto, Canada.

Gail Bellissimo is based at the Department of Education, Centre for Addiction and Mental Health Queen Street Site, Toronto, Canada.

Anna Di Giandomenico is based at SPOR Patient Partner, Toronto, Canada.

Sam Gruszecki is based at the Department of Education, Centre for Addiction and Mental Health Queen Street Site, Toronto, Canada.

Elizabeth Lin is based at the Department of Education, Centre for Addiction and Mental Health Queen Street Site, Toronto, Canada.

Jordana Rovet is based at the Department of Education, Centre for Addiction and Mental Health Queen Street Site, Toronto, Canada.

Holly Harris is based at the Department of Education, Centre for Addiction and Mental Health Queen Street Site, Toronto, Canada.
