Evaluating team dynamics in interdisciplinary science teams

Sara Bolduc (Sara Bolduc Evaluation and Planning, LLC, Honolulu, Hawaii, USA)
John Knox (John Knox and Associates, Honolulu, Hawaii, USA)
E. Barrett Ristroph (Ristroph Law, Planning, and Research, Baton Rouge, Louisiana, USA)

Higher Education Evaluation and Development

ISSN: 2514-5789

Article publication date: 13 September 2022

Issue publication date: 7 November 2023


Abstract

Purpose

This article considers how the evaluation of research teams can better account for the challenges of interdisciplinarity, including larger team sizes, more diverse and permeable membership, and the tensions between institutional pressures on individuals to publish and the goals of the team.

Design/methodology/approach

An evaluation team was retained from 2015 to 2020 to conduct a comprehensive external evaluation of a five-year EPSCoR-funded program undertaken by an interdisciplinary research team. The formative portion of the evaluation involved monitoring the program’s developmental progress, while the summative portion tracked observable program outputs and outcomes as evidence of progress toward short- and long-term goals. The evaluation team systematically reviewed internal assessments and gathered additional data for an external assessment via periodic participation in team meetings, participant interviews and an online formative team survey (starting in Year 2).

Findings

Survey participants had a better understanding of the project’s “Goals and Vision” compared to other aspects. “Work Roles,” and particularly the timeliness of decision-making, were perceived to be a “Big Problem,” specifically in regard to heavy travel by key managers/leadership. For “Communication Channels,” Year 2 tensions included differing views on the extent to which management should be collaborative versus “hierarchical.” These concerns about communication suggest that differences in language, culture or status can affect the team’s efficiency and working relationships. “Authorship Credit/Intellectual Property” was raised most consistently each year as an area of concern.

Originality/value

The study presents a novel survey approach for the formative evaluation of team dynamics in interdisciplinary research teams.


Citation

Bolduc, S., Knox, J. and Ristroph, E.B. (2023), "Evaluating team dynamics in interdisciplinary science teams", Higher Education Evaluation and Development, Vol. 17 No. 2, pp. 70-81. https://doi.org/10.1108/HEED-10-2021-0069

Publisher

Emerald Publishing Limited

Copyright © 2022, Sara Bolduc, John Knox and E. Barrett Ristroph

License

Published in Higher Education Evaluation and Development. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction: interdisciplinary research team structures merit improved evaluation of team dynamics

1.1 The increasing importance of interdisciplinary research teams

Scientific research has historically been conducted by individual researchers, or by research teams within a single discipline focused on a specific project or objective. Increasingly, researchers are collaborating with colleagues in different fields as well as with nonacademic practitioners and community stakeholders (Newig et al., 2019). The trend is evident not only in science and engineering but also within the social sciences (Wuchty et al., 2007). We call this approach to research “interdisciplinary science research” and recognize that there are related though slightly different terms for this concept, including “collaborative research,” “21st-century science,” “participatory research” or “team science” (Grigorovich et al., 2019; Stokols et al., 2008). Our use of the term “interdisciplinary” places emphasis on collaboration between researchers from different academic disciplines. The interdisciplinary collaborative direction in research is part of a larger, 21st-century transformation in science, embracing concepts of “perspectivism” (including more than one perspective), “heterarchy” (a nonhierarchical system) and community psychology (Tebes et al., 2014; Falk-Krzesinski et al., 2010; Wagner et al., 2011; Bennett and Gadlin, 2012).

Interdisciplinarity in research is a response to trends in scientific research and research funding that promote collaboration to address complex challenges that cannot be addressed by individuals working in isolation or within a single discipline, such as water resource management (Lanier et al., 2018; Carr et al., 2018; Gibson et al., 2019), health delivery or climate change response. Key examples in the US are the National Science Foundation’s (NSF) “Established Program to Stimulate Competitive Research” (EPSCoR) and the National Institutes of Health’s (NIH) Transformative Research Awards Program. These examples are important because NIH is the national medical research agency (and thus one of the primary funders of health-related research in the United States), while NSF, another federal agency, funds approximately one-quarter of all federally supported basic research conducted by America’s colleges and universities (NSF, n.d.). NSF EPSCoR seeks to enhance the research competitiveness of targeted jurisdictions (e.g. states and territories) by strengthening Science, Technology, Engineering, and Math (STEM) capacity and capability (NSF EPSCoR, n.d.).

As the nature of research teams has evolved, so have the factors that affect the team’s ability to meet its research goals (Tannenbaum et al., 2012). Larger, more geographically dispersed teams may have more challenges communicating (O’Rourke et al., 2019; NRC, 2015; Lanier et al., 2018). Adding to these communication challenges is the likelihood that team members from different disciplines may not speak a “common language” (Carr et al., 2018; MacLeod, 2018; NRC, 2015; O’Rourke et al., 2019). Some disciplines and degrees may also be perceived with higher regard and status than others (e.g. M.D.s may be more highly regarded than Ph.D.s), which may contribute to tension (Urbanska et al., 2019). Interdisciplinary research teams operate in a more fluid, dynamic and complex environment than in the past, with more frequent changes. Large projects may span many years, during which time team members might leave to pursue new opportunities or retire. Some researchers may move to another institution, while other researchers may join the team at various points on the research program’s timeline. This reality can prove problematic in sustaining meaningful team interactions (NRC, 2015; Wageman et al., 2012, p. 305).

Interdisciplinary research team members may also contend with persistent institutional barriers and preexisting structures that disincentivize interdisciplinary work (Rhoten, 2004), such as an individualistic approach to knowledge generation and sharing (Gibson et al., 2019; O’Rourke et al., 2019). Many universities continue to use individualistic milestones to determine institutional advancement (Wageman et al., 2005; Brody et al., 2019). For example, achieving tenure may require sole-authored publications and being principal investigator on research proposals. Without a framework for negotiating promotion and tenure for interdisciplinary and collaborative research, there are limited incentives for pursuing this line of work (Klein and Falk-Krzesinski, 2017; Rhoten, 2004). Researchers may be focused on proposal deadlines, program deliverables or the underlying scientific problems that drive their research, as opposed to nurturing relationships with their colleagues (Bennett and Gadlin, 2012).

1.2 Shifting evaluation to account for team dynamics in interdisciplinary research teams

Evaluation of the efficacy of interdisciplinary research teams in achieving their intended goals is important to ensure that they are competitive with more traditional research teams (Strang and McLeish, 2015, p. 4). There is a need for more research on how evaluation could better assess team success in terms of both research outcomes and team dynamics (see Carr et al., 2018, p. 36). Research funders are interested in supporting interdisciplinary research and are willing to provide funding for the evaluation of interdisciplinary team science (Barka et al., 2016). Yet such evaluations can be challenging because of the potentially different epistemic viewpoints and standards of quality within these teams (Huutoniemi, 2010, p. 309; Strang and McLeish, 2015). The evaluation of interdisciplinary research teams must account for challenges unique to this mode of research to better assess the nuances of this approach.

Formative (process-oriented) evaluation is important because it can call attention to problems with team dynamics that may impede reaching research goals, thus giving a more holistic picture of the team’s progress and identifying opportunities for corrective action (Strang and McLeish, 2015, p. 5; Mâsse et al., 2008; Roelofs et al., 2019; NRC, 2015). Formative evaluation can support research programs by helping team members establish a deliberative space that facilitates positive team dynamics and by providing consistent, iterative feedback throughout the program to improve the process (Weiss, 1997).

NIH has developed a list of indicators to support the evaluation of both relationships among team members and team performance (Bennett et al., 2010). Relationship indicators include factors such as the existence of a dispute resolution process, adequate notice of problems and the degree of responsiveness to raised concerns. Performance indicators include factors such as timeliness, participant commitment and attitude, and synergy. The NIH Office of the Ombudsman has also created a template for interdisciplinary research teams that can facilitate dialogue to prevent or reduce conflicts (NIH, 2017). The template may be used to (1) build consensus regarding the expected contributions of each participant at the outset of the project; (2) generate mechanisms for routine communication among research team members; (3) outline supervisory roles and the creation of transparent personnel and decision-making processes and (4) enable increased inclusion and access to data management and ownership, authorship, credit, intellectual property and patent applications. The template can help teams consider how research goals could be redirected if unforeseen events or discoveries emerge during the research process.

This article expands on the existing literature on the formative evaluation of interdisciplinary research teams by presenting a case study of the design and implementation of a formative tool designed to evaluate team dynamics. The tool is a survey that authors Bolduc and Knox (the “evaluation team”) included in their evaluation of a five-year, federally funded, interdisciplinary science research program. The article also provides recommendations from this case study and the literature.

2. Methods

The evaluation team was retained from 2015 to 2020 to conduct a comprehensive external evaluation of a five-year NSF/EPSCoR-funded program undertaken by an interdisciplinary research team. While the science focus of the research was to study the effects of climate change on coral reefs (specifically at the genetic level), the program’s vision extended beyond applied marine biology to strengthening the institution’s capacity for research and training and to serving as a regional resource for addressing challenges related to climate change in the Pacific region and beyond. Led by a primary principal investigator and supported by two co-investigators, the team consisted of researchers with expertise in various aspects of marine science (genomics and oceanography) as well as researchers from other fields that could support other components of the project (such as bioinformatics, STEM education and workforce development, information technology and curational research). This research team was perhaps smaller (averaging 30 participants per year) than most research teams in other NSF/EPSCoR jurisdictions across the United States. However, the small scale of the project provided a unique opportunity to test innovative tools to assess participant satisfaction, team dynamics and overall project outcomes.

The formative portion of the evaluation involved monitoring the program’s developmental progress, while the summative portion tracked observable program outputs and outcomes as evidence of progress toward short- and long-term goals. The evaluation team systematically reviewed internal documents and gathered additional data for external assessment via periodic participation in program team meetings, participant interviews and annual online formative team surveys (starting in Year 2). Until the COVID-19 pandemic in 2020, one or both members of the evaluation team attended annual conferences where in-person meetings and interviews with team members took place.

The evaluation team used a survey to assess team dynamics among members. The survey design was built upon the 2017 NIH template for managing conflict in scientific collaborations, as well as factors relevant to team management identified in the literature (e.g. Bennett and Gadlin, 2012; Mathieu et al., 2008; Wageman et al., 2005). The survey was not designed to assess personality conflicts or leadership styles, but to consider participant satisfaction with intra-team communication and significant program-related management actions.

The survey was administered annually from Year 2 through Year 5, with participation as shown in Table 1. Although the survey was slightly modified over Years 2–5, the overarching format remained largely the same. It included four sections with questions based on factors recognized in the above-cited literature as predictors of interdisciplinary research team cohesion. The survey was designed around these four broad assumptions:

  1. Understanding of program vision and goals: Team members having a clear understanding of the program and the activities that will need to be accomplished can contribute to all team members working together toward a shared mission.

  2. Understanding of work roles: In a well-functioning team, members are clear about “who is doing what” and “who to see to get questions answered.”

  3. Having good communication channels: Effective communication within and outside the research team also contributes to enhanced functionality.

  4. Having clarity about authorship credit and intellectual property: While researchers have historically published within their own disciplines, interdisciplinary research may involve researchers from different fields and at different stages in their careers working together. Considerations for career advancement (such as author order or whether junior researchers might benefit more from sole-authored papers) need to be addressed early in the collaboration process.

Each section asked participants to indicate their agreement or disagreement with certain statements relating to these factors. For each survey question measuring the degree of agreement with a statement, respondents could mark “Strongly Agree,” “Somewhat Agree,” “Somewhat Disagree,” “Strongly Disagree” or “Don’t Know/Doesn’t Apply.” There were also questions for respondents to assess the performance of the program in addressing these factors overall as “Excellent,” “Good,” “Fair” or “Poor.” The survey also provided space for open-ended suggestions on how to improve any perceived problems, as well as space for general suggestions. The open-ended responses were grouped into themes, which were used to design follow-up discussions with researchers. The fifth-year survey included questions regarding how participants’ perceptions of these factors had changed or remained the same over the course of the project. The responses to the surveys and interviews were used to make recommendations to the research team each year.
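To make the tabulation concrete, the minimal sketch below (in Python) illustrates how overall ratings for one factor could be tallied into the percentage of respondents answering “Excellent” or “Good,” the statistic reported in Table 3. This is an illustration only, not the evaluation team’s actual analysis code; the function name and the sample responses are hypothetical.

```python
# Hypothetical sketch, not the evaluation team's actual analysis code:
# tally overall ratings for one factor into the share of respondents
# who answered "Excellent" or "Good" (the statistic reported in Table 3).
from collections import Counter

FAVORABLE = {"Excellent", "Good"}

def percent_excellent_or_good(ratings):
    """Return the whole-number percentage of ratings that are 'Excellent' or 'Good'."""
    counts = Counter(ratings)                      # tally each response option
    favorable = sum(counts[r] for r in FAVORABLE)  # "Excellent" + "Good"
    # All responses, including "Don't Know/Doesn't Apply", stay in the denominator,
    # matching how Table 3 is described in the text.
    return round(100 * favorable / len(ratings))

# Made-up responses for one factor in one survey year.
sample_ratings = (
    ["Excellent"] * 4 + ["Good"] * 5 + ["Fair"] * 2 + ["Don't Know/Doesn't Apply"]
)
print(percent_excellent_or_good(sample_ratings))   # -> 75 (for this made-up sample)
```

Applied per factor and per survey year, the same tally yields figures of the form reported in Table 3.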

Table 2 summarizes the survey format. The recommendations for research in the Discussion and Recommendations section draw from what the evaluation team provided to the research team each year.

3. Survey findings (results)

This section focuses on key survey findings, while the following section (4) provides recommendations based on these findings.

First survey year (Year 2) key results: Interdisciplinary research teams often experience personal tensions or friction, and this team was no different. While a majority of the team demonstrated a clear understanding of “Work Roles” and “Vision and Goals” (for both factors, 91% of respondents rated overall progress as “Excellent” or “Good”), frequent sources of tension concerned “Communication” and “Authorship Credit” (only 77% of respondents rated overall progress as excellent or good for communication, and 46% for authorship). Views differed among team members regarding the degree to which leadership should be heterarchical or hierarchical. A minority saw problems in virtually all aspects of communication, and significant portions of the team expressed concern about being “kept up to date on relevant information” and, to a lesser extent, about “differences in language, culture or status.” Finally, a substantial number of team researchers were particularly concerned about procedures for properly handling data or credit in the event that key researchers left the institution (which did occur). This demonstrated the need for “clear rules for authorship and credit.”

Year 3 key survey results: Overall, survey results shown in Table 3 suggest that participants rated the program more positively in Year 3 than in Year 2. Participants continued to demonstrate a good understanding of “Work Roles” as well as the program “Goals and Vision.” As in Year 2, concerns primarily pertained to specific factors classified as “Communication Channels” and “Authorship Credit and Intellectual Property.” There were also concerns related to achieving progress toward research goals and the timeliness of decision-making. As the research moved forward, some researchers expressed concern about how the collaborative goals of the program could impact their ability to remain competitive at the individual level (particularly for junior faculty seeking tenure).

Year 4 key survey results: As shown in Table 3, survey responses in Year 4 were fairly similar to those in Year 3. There were particular concerns associated with the institution’s small number of faculty members, including a request from junior faculty to expand mentoring capacity by enlisting senior faculty, and concerns about team members leaving the institution. In the evaluation report, the evaluation team recommended a program-wide faculty mentoring plan that could engage research mentors from beyond the institution. The institution adopted an ambitious plan to match each junior faculty member with two mentors: one from the local institution (for professional support such as applying for tenure) and one from outside (for research mentoring). The evaluation report showed that the number of publications stemming from the program was far below target goals. In response, the evaluation team suggested a Publications Plan with clear research topics that assigned specific roles and responsibilities and clarified the order of authorship.

Year 5 key survey results: In the Year 5 team survey, participants suggested (despite points of lingering concern) that overall team dynamics, with the exception of work role clarity, had improved over time. Some concerns about communication and authorship continued, although “Authorship” appeared to be less of a concern than in the previous year. However, a minority of participants were concerned about being compelled to accept second or third authorship given tenure pressures and, again, about the consequences if key people left the institution or program. The greatest ongoing concern raised in survey comments from Year 5 (the final year of the program) was insufficient mentoring for junior faculty. Unfortunately, COVID-19 disrupted the implementation of the mentoring plan, and certain participants indicated that they considered mentorship to be “too little, too late” at that point.

Table 3 shows the percentage of survey participants in each survey year who responded that program progress was “Excellent” or “Good” (as opposed to “Fair,” “Poor” or “Don’t Know/Doesn’t Apply”). The final column shows the percentage of survey participants who believed that the team improved (by factor) during their time with the team [1].

Overall: We consider progress on each factor as follows:

  1. After Year 2, program “Goals and Vision” was reported to be generally well understood. After Year 5, “Goals and Vision” were still generally well understood, especially personal research goals, although understanding had diminished for non-research-related program components and “overall” team research goals.

  2. In Year 2, “Work Roles,” and particularly the timeliness of decision-making, were perceived to be a “Big Problem,” specifically in regard to heavy travel by key managers/leadership. In Year 5, “Work Roles” continued to be a major concern for a minority of participants. These participants were concerned about the rules for storing program data, how data would be handled if researchers left the institution and the overall structure of decision-making authority.

  3. For “Communication Channels,” Year 2 tensions included differing views on the extent to which management should be collaborative versus “hierarchical,” as well as a desire to be kept up to date on relevant information. Year 2 concerns about communication suggest that differences in language, culture or status can affect the team’s efficiency and working relationships. Over time (by Year 5), the adequacy of junior faculty mentorship and open lines of communication (feeling free to bring up concerns/problems) continued to receive low ratings.

  4. Overall, “Authorship Credit/Intellectual Property” was raised most consistently each year as an area of concern. Specifically, there were concerns about how authorship would be attributed if researchers left the program, and about having to accept second or third authorship (particularly for those seeking tenure).

In summary, for this small interdisciplinary research team, “Communication Channels” and “Authorship Credit/Intellectual Property” were the most frequently encountered problems, albeit among a minority of participants. Competing perspectives may relate to generational differences in values and standard practices pertaining to authority versus collegiality. Another key finding is that satisfaction regarding all factors appeared to peak in Year 4 and then decline in Year 5. This decline may reflect increased dissatisfaction with the team, or it may indicate greater trust in, and willingness to share dissatisfaction with, the evaluation team in Year 5 than in earlier years. Further, it is not clear to what extent the COVID-19 pandemic factored into the decline in satisfaction in Year 5.

4. Discussion and recommendations

This section begins with general recommendations for improving team dynamics based on what evaluators learned through the case study. It then explains how the particular formative tool used for evaluation was useful in eliciting these findings. It closes with recommendations for continuing research to evaluate team dynamics.

First, it can be useful to hold a “team retreat” or workshop at the beginning of an interdisciplinary research program to establish clear team direction, expectations and understanding of each team member’s role (Carr et al., 2018). This retreat can address the team dynamics issues covered in our evaluation survey (“Vision and Goals,” “Work Roles,” “Communication Channels” and “Authorship Credit and Intellectual Property”) as well as how contingencies will be handled (such as the departure of team members from the institution). Each member could sign an agreement acknowledging program goals, member roles, communication channels and the team’s understanding of intellectual property, credit and contingencies. The retreat notes and results, as well as the member agreement, could become part of a team handbook distributed to existing and future team members.

A shift from hierarchy to heterarchy must not leave a vacuum in leadership. On the contrary, a particular type of leadership is required that will foster interdisciplinary communication and collaborative decision-making (even remotely), and help the team adapt as the research program evolves (Lanier et al., 2018; Gibson et al., 2019). There is a need for clear, transparent decision-making processes and assurances that information will be shared with all participants. There is also a need for spaces that allow for dialogue and even debate between researchers in different disciplines (Carr et al., 2018).

Frequent and regular team meetings are beneficial to improve communication and transparency. The evaluation team also found it helpful to attend regular team meetings to increase trust in the evaluator. Any rescheduling of meetings should be announced in a timely manner. Where members are in the same geographic location, establishing a neutral area or “lounge” space for informal meetings can be helpful (Gibson et al., 2019). Meetings may have the additional benefit of enhancing “team spirit” or appreciation for the work of members from other disciplines (Urbanska et al., 2019). Meetings should continue throughout the life of the program and avoid lagging toward the end of the term, particularly since team membership turnover can reduce clarity regarding work roles. If meetings of this frequency are not possible, then standardized monthly email updates may be useful as a proxy. A team website (or even a social media webpage) may also contribute to promoting enhanced communications.

There must be a balance between working as a team (and publishing collaboratively) and supporting junior researchers to meet institutional tenure-related milestones (which favor first-authored or even single-authored research). Institutional support is a significant dimension of fostering interdisciplinary research (Urbanska et al., 2019). In the example discussed in this article, the program intended to create a mentorship program in which tenured faculty were partnered with mentees (students, junior faculty and others at the early stages of their careers). As with team agreements and data management systems, mentorship actions should be implemented at the beginning of a program rather than in its latter stages. Institutions should consider consistent reward systems that define expectations upon appointment and prepare researchers to adopt and incorporate program criteria relating to collaboration into the research design and workflow (Klein and Falk-Krzesinski, 2017). At the funding level, there is a great need to provide smaller, less competitive funding pools for early career researchers, creating spaces to fail and/or build on their experiences (Gibson et al., 2019).

We want to highlight the utility of using a survey to assess team dynamics in interdisciplinary research teams as part of a formative evaluation. The survey we used served as a vehicle for timely exchanges of information that may otherwise never have surfaced. Regardless of their status, rank or role, team members were able to express themselves in a manner that strengthened the team’s capacity to function efficiently. Rather than simply being a soapbox to air dirty laundry or express discontent with particular people, the survey focused on factors recognized in the literature that commonly impede efficient team functionality. It also provided an opportunity to consider practical, implementable recommendations. Future research may build on this tool by assessing which factors (e.g. communication, authorship rules, etc.) have the most impact on achieving interdisciplinary research team goals, or by considering other potentially relevant factors.

One question that emerged during survey analysis concerned the influence of rapport and trust between interdisciplinary research teams and evaluation teams on the scope and depth of data collection. As multiyear projects progress and evolve over time, it may be difficult to differentiate between an increase in conflicts within an interdisciplinary research environment and an increased willingness to share conflicts with evaluation teams. This phenomenon should be explored in future research.

Finally, additional research is needed to better understand the effect of shifts in team membership. This fluidity may require evaluators to adapt their methods of data acquisition to evaluate a project throughout its life span.

5. Conclusion

There is much value in conducting a formative evaluation over the course of a sustained interdisciplinary science research project, through which evaluators can share findings with the research team and contribute to the improvement of the program in real time. An anonymous survey that captures multiple factors associated with potential conflicts within interdisciplinary science research teams can be an indispensable tool in this endeavor. The evaluation team’s survey captured some of the difficulties anticipated within interdisciplinary research teams, namely issues with the clarity of goals and vision, work roles, communication and the assignment of authorship credit. The evaluation team’s recommendations were based on measurable factors that can contribute to conflicts in interdisciplinary teams. The recommendations were quickly applied, as there were clear links between team member reports and the sources of their dissatisfaction.

Challenges related to team dynamics may be particularly acute at smaller, more geographically remote institutions, where team members may be more mobile and only a small pool of potential mentors exists. Establishing a team member agreement at the outset of a project anticipates likely problems and fosters intra-team understanding, helping to avoid or mitigate some of these challenges. Likewise, regular team communications throughout the course of the project will keep new members up to speed, illuminate decision-making processes and avoid or minimize festering concerns.

In the era of the COVID-19 pandemic and beyond, working online and negotiating collaborative workflows at a distance, with numerous competing distractions, are increasingly being normalized. Although this trend may have negative implications for interdisciplinary research team efficacy and dynamics (particularly in terms of in-person communication), it will undoubtedly also open up new corridors of cooperation and innovative opportunities for broader collaborations between institutions. Likewise, it may also create spaces where new forms of online interinstitutional mentoring can emerge.

There are also new opportunities for evaluators, not only to advance the methods presented in this paper but also to expand our reach to better serve highly remote and increasingly heterogeneous teams that are inevitably coming to terms with a new normal. Not only can evaluators do a better job of evaluating team dynamics, but they can also encourage research teams to develop team agreements and intra- and interinstitutional data management systems. Holding regular online meetings may become instrumental in supporting the work and visions of interdisciplinary science research teams.

Table 1. Participation in annual surveys

Year       Number receiving survey   Number responding   Response rate
2 (2017)   13                        13                  100%
3 (2018)   21                        16                  76%
4 (2019)   23                        14                  61%
5 (2020)   18                        12                  67%

Table 2. Survey format

Understanding of goals and vision

  • Agreement or disagreement with statements regarding clarity of goals and the team’s achievement of goals:
    o Clarity about overall research goals
    o Meeting overall research goals
    o Clarity about education and outreach goals
    o Meeting research and education goals
    o Clarity about personal research goals
    o Meeting personal research goals

  • Explanation of areas of disagreement or additional information on items that need “fixing”

  • Overall rating of how well the team is achieving this factor

Understanding of work roles

  • Agreement or disagreement with statements regarding clarity of work roles and how to submit grant reports:
    o Understanding expectations
    o Clear lines of decision-making
    o Personal clarity about reporting
    o Others’ clarity about reporting
    o Clarity about data storage/access

  • Explanation of areas of disagreement or additional information on items that need “fixing”

  • Overall rating of how well the team is achieving this factor

Communication channels

  • Agreement or disagreement with statements regarding the ability to communicate concerns and have them addressed:
    o Good cultural understanding among and across disciplines
    o Peer satisfaction about the program
    o Ability to freely bring up concerns
    o Attentiveness to raised concerns
    o Good lines of communication

  • Explanation of areas of disagreement or additional information on items that need “fixing”

  • Overall rating of how well the team is achieving this factor

  • Added questions regarding mentoring in Year 5

Authorship credit and intellectual property

  • Agreement or disagreement with statements regarding clarity of authorship rules and the balance between “team” authored publications and individual career needs for lead authorship:
    o Balancing joint authorship and career needs
    o Worrying about the author order
    o Rules for authorship being clear
    o Procedures in place if someone departs

  • Explanation of areas of disagreement or additional information on items that need “fixing”

  • Overall rating of how well the team is achieving this factor

Table 3. Survey findings

Percentage of respondents rating progress on each factor as “Excellent” or “Good,” by survey year, followed by overall improvement from Year 2 to 5**

People understand GOALS/VISION, make progress on them and work out issues related to goals when they may arise:
Year 2: 91%; Year 3: 100%; Year 4: 84%; Year 5: 82%; overall improvement: 72%

People understand WORK ROLES, function in those roles and work out issues related to work roles when they may arise:
Year 2: 91%; Year 3: 81%; Year 4: 85%; Year 5: 75%; overall improvement: 66%

COMMUNICATION channels between or among people with various roles and working out communication issues when they may arise:
Year 2: 77%; Year 3: 86%; Year 4: 77%; Year 5: 67%; overall improvement: 66%

Laying out good rules/procedures for AUTHORSHIP CREDIT and INTELLECTUAL PROPERTY issues and working out related issues when they may arise:
Year 2: 61%; Year 3: 75%; Year 4: 76%; Year 5: 72%; overall improvement: 57%

Note(s): **This represents the combined percentage of participants who indicated in Year 5 that the program was “Much Better” or “Somewhat Better” than when they joined. No numeric value was assigned to “Much Better” or “Somewhat Better”; these are qualitative terms of comparison.

Note

1. New members joined the team in Years 2, 3 and 4.

References

Barka, R.H., Kragt, M.E. and Robson, B.J. (2016), “Evaluating an interdisciplinary research project: lessons learned for organisations, researchers and funders”, International Journal of Project Management, Vol. 34 No. 8, pp. 1449-1459.

Bennett, L.M. and Gadlin, H. (2012), “Collaboration and team science: from theory to practice”, Journal of Investigative Medicine: The Official Publication of the American Federation for Clinical Research, Vol. 60 No. 5, pp. 768-775, doi: 10.2310/JIM.0b013e318250871d.

Bennett, L.M., Gadlin, H.L. and Levine-Finley, S. (2010), Collaboration & Team Science: A Field Guide, National Institutes of Health, Washington, DC, available at: https://idr.utep.edu/assets/docs/Resources/NIH%20TeamScience_FieldGuide.pdf.

Brody, A.A., Bryant, A.L., Perez, G.A. and Bailey, D.E. (2019), “Best practices and inclusion of team science principles in appointment promotion and tenure documents in research intensive schools of nursing”, Nursing Outlook, Vol. 67 No. 2, pp. 133-139.

Carr, G., Loucks, D.P. and Blöschl, G. (2018), “Gaining insight into interdisciplinary research and education programmes: a framework for evaluation”, Research Policy, Vol. 47, pp. 35-48.

Falk-Krzesinski, H.J., Börner, K., Contractor, N., Fiore, S.M., Hall, K.L., Keyton, J. and Uzzi, B. (2010), “Advancing the science of team science”, Clinical and Translational Science, Vol. 3 No. 5, pp. 263-266, doi: 10.1111/j.1752-8062.2010.00223.x.

Gibson, C., Stutchbury, T., Ikutegbe, V. and Michielin, N. (2019), “Challenge-led interdisciplinary research in practice: program design, early career research, and a dialogic approach to building unlikely collaborations”, Research Evaluation, Vol. 28 No. 1, pp. 51-62, doi: 10.1093/reseval/rvy039.

Grigorovich, A., Fang, M.L., Sixsmith, J. and Kontos, P. (2019), “Defining and evaluating transdisciplinary research: implications for aging and technology”, Disability and Rehabilitation: Assistive Technology, Vol. 14 No. 6, pp. 533-542, doi: 10.1080/17483107.2018.1496361.

Huutoniemi, K. (2010), “Evaluating interdisciplinary research”, in Frodeman, R., Klein, J.T. and Mitcham, C. (Eds), Oxford Handbook of Interdisciplinarity, Oxford University Press, Oxford, pp. 309-319.

Klein, J.T. and Falk-Krzesinski, H.J. (2017), “Interdisciplinary and collaborative work: framing promotion and tenure practices and policies”, Research Policy, Vol. 46 No. 6, pp. 1055-1061.

Lanier, A.L., Drabik, J.R., Heikkila, T., Bolson, J., Sukop, M.C., Watkins, D.W., Rehage, J., Mirchi, A., Engel, V. and Letson, D. (2018), “Facilitating integration in interdisciplinary research: lessons from a South Florida water, sustainability, and climate project”, Environmental Management, Vol. 62, pp. 1025-1037.

Mâsse, L.C., Moser, R.P., Stokols, D., Taylor, B.K., Marcus, S.E., Morgan, G.D., Hall, K.L., Croyle, R.T. and Trochim, W.M. (2008), “Methodologic contribution: measuring collaboration and transdisciplinary integration in team science”, American Journal of Preventive Medicine, Vol. 35 No. 2, pp. S151-S160, doi: 10.1016/j.amepre.2008.05.020.

MacLeod, M. (2018), “What makes interdisciplinarity difficult? Some consequences of domain specificity in interdisciplinary practice”, Synthese, Vol. 195, pp. 697-720, doi: 10.1007/s11229-016-1236-4.

Mathieu, J., Maynard, M.T., Rapp, T. and Gilson, L. (2008), “Team effectiveness 1997-2007: a review of recent advancements and a glimpse into the future”, Journal of Management, Vol. 34 No. 3, pp. 410-476.

National Institutes of Health (NIH) (2017), “Office of the Ombudsman”, Center for Cooperative Resolution, Sample Partnering Agreement Template, available at: https://ombudsman.nih.gov/sites/default/files/Sample%20Partnering%20Agreement%20Template.pdf.

National Research Council (NRC) (2015), Enhancing the Effectiveness of Team Science, National Academies Press, Washington, DC.

National Science Foundation (NSF) (n.d.), “About”, available at: https://www.nsf.gov/about/ (accessed 7 March 2022).

Newig, J., Jahn, S., Lang, D.J., Kahle, J. and Bergmann, M. (2019), “Linking modes of research to their scientific and societal outcomes. Evidence from 81 sustainability-oriented research projects”, Environmental Science and Policy, Vol. 101, pp. 147-155.

NSF EPSCoR (n.d.), “About EPSCoR”, available at: https://www.nsf.gov/od/oia/programs/epscor/index.jsp (accessed 7 March 2022).

O'Rourke, M., Crowley, S., Laursen, B., Robinson, B. and Vasko, S.E. (2019), “Disciplinary diversity in teams: integrative approaches from unidisciplinarity to transdisciplinarity”, in Hall, K.L., Vogel, A.L. and Croyle, R.T. (Eds), Strategies for Team Science Success, Ch. 2.

Rhoten, D. (2004), “Interdisciplinary research: trend or transition”, Items and Issues, Vol. 5 Nos 1-2, pp. 6-11.

Roelofs, S., Edwards, N., Viehbeck, S. and Anderson, C. (2019), “Formative, embedded evaluation to strengthen interdisciplinary team science: results of a 4-year, mixed methods, multi-country case study”, Research Evaluation, Vol. 28 No. 1, pp. 37-50, doi: 10.1093/reseval/rvy023.

Stokols, D., Hall, K.L., Taylor, B.K. and Moser, R.P. (2008), “The science of team science: overview of the field and introduction to the Supplement”, American Journal of Preventive Medicine, Vol. 35 No. 2, Supplement, pp. S77-S89, doi: 10.1016/j.amepre.2008.05.002.

Strang, V. and McLeish, T. (2015), Evaluating Interdisciplinary Research: A Practical Guide, Durham University, Durham, UK, available at: https://www.iasdurham.org/wp-content/uploads/2020/11/StrangandMcLeish.EvaluatingInterdisciplinaryResearch.July2015_2.pdf.

Tannenbaum, S.I., Mathieu, J.E., Salas, E. and Cohen, D. (2012), “Teams are changing: are research and practice evolving fast enough?”, Industrial and Organizational Psychology, Vol. 5 No. 1, pp. 2-24, doi: 10.1111/j.1754-9434.2011.01396.x.

Tebes, J.K., Thai, N.D. and Matlin, S.L. (2014), “Twenty-first century science as a relational process: from Eureka! to team science and a place for community psychology”, American Journal of Community Psychology, Vol. 53 Nos 3-4, pp. 475-490, doi: 10.1007/s10464-014-9625-7.

Urbanska, K., Huet, S. and Guimond, S. (2019), “Does increased interdisciplinary contact among hard and social scientists help or hinder interdisciplinary research?”, PLoS ONE, Vol. 14 No. 9, e0221907, doi: 10.1371/journal.pone.0221907.

Wageman, R., Hackman, J.R. and Lehman, E. (2005), “Team diagnostic survey: development of an instrument”, The Journal of Applied Behavioral Science, Vol. 41 No. 4, pp. 373-398, doi: 10.1177/0021886305281984.

Wageman, R., Gardner, H. and Mortensen, M. (2012), “The changing ecology of teams: new directions for teams research”, Journal of Organizational Behavior, Vol. 33 No. 3, pp. 301-315, doi: 10.1002/job.1775.

Wagner, C.S., Roessner, J.D., Bobb, K.K., Klein, J.T., Boyack, K.W., Keyton, J., Rafols, I. and Börner, K. (2011), “Approaches to understanding and measuring interdisciplinary scientific research (IDR): a review of the literature”, Journal of Informetrics, Vol. 5 No. 1, pp. 14-26, doi: 10.1016/j.joi.2010.06.004.

Weiss, C.H. (1997), Evaluation: Methods for Studying Programs and Policies, 2nd ed., Prentice-Hall, Upper Saddle River, NJ.

Wuchty, S., Jones, B.F. and Uzzi, B. (2007), “The increasing dominance of teams in production of knowledge”, Science, Vol. 316 No. 5827, pp. 1036-1039, doi: 10.1126/science.1136099.

Corresponding author

E. Barrett Ristroph can be contacted at: barrett@ristrophlaw.com
