The development and validation of the scholar–practitioner research development scale for students enrolled in professional doctoral programs

Amanda Rockinson-Szapkiw (College of Education, University of Memphis, Memphis, Tennessee, USA)

Journal of Applied Research in Higher Education

ISSN: 2050-7003

Publication date: 8 October 2018

Abstract

Purpose

The purpose of this paper is to develop and validate the scholar–practitioner research development scale (SPRDS), an instrument to assess research competencies of students enrolled in professional doctoral programs.

Design/methodology/approach

In this instrument development study, an expert panel established content validity. A factor analysis and an internal consistency analysis were then used to examine the validity and reliability of the instrument.

Findings

An expert panel deemed the scale content valid. Results of a factor analysis and an internal consistency analysis demonstrated that the scale, consisting of five subscales, is both valid and reliable.

Research limitations/implications

The current study provides evidence that the SPRDS is a valid and reliable instrument to assess professional doctoral students’ research competencies, which can be used to extend research on the development of doctoral students in professional doctorate programs.

Practical implications

The instrument can be a useful tool to assess students and to inform faculty and administrators about their students, the curriculum and program resources.

Social implications

Equipped with an instrument such as this, faculty and administrators are better able to assess students’ growth throughout the program and, in turn, design and deliver research curriculum and mentorship that assist students in developing as scholar–practitioners, which may ultimately lead to success in the program and beyond, benefiting society.

Originality/value

There is no formal or standardized scale to evaluate whether professional doctoral students are progressing and developing as scholar–practitioners through their professional doctoral programs, nor is there a standardized or universally adopted assessment to determine whether professional doctoral programs are meeting the goals and objectives they have set forth. Thus, the aim of this study was to develop, and to determine the validity and reliability of, a scale to measure a scholar–practitioner’s research competencies in a professional doctoral program.

Citation

Rockinson-Szapkiw, A. (2018), "The development and validation of the scholar–practitioner research development scale for students enrolled in professional doctoral programs", Journal of Applied Research in Higher Education, Vol. 10 No. 4, pp. 478-492. https://doi.org/10.1108/JARHE-01-2018-0011

Publisher: Emerald Publishing Limited

Copyright © 2018, Emerald Publishing Limited


“All doctoral programs have in common a structure of formal requirements and informal expectations for students” to become scholars (Golde and Dore, 2001, p. 34). According to national organizations (American Association of Colleges of Nursing (AACN), 2015; Carnegie Project on the Education Doctorate (CPED), 2009; Council of Graduate Schools, 2007), professional doctoral programs across disciplines (e.g. nursing, education, psychology, business) have been created to train scholar–practitioners. Scholar–practitioners, during their programs: develop a value of research, including an understanding and appreciation of research as a means to solve problems of practice and to advocate for social justice and equity; obtain skills and knowledge to critically consume (e.g. information literacy), apply and conduct research to inform practice; and gain the ability to disseminate research to professionals in the field to transform practice (Perry, 2015; Rolfe and Davies, 2009; Servage, 2009; Shulman et al., 2006). Thus, developing the competency to apply, conduct and disseminate research to solve problems in practice is essential to becoming a scholar–practitioner (Maxwell, 2003). The responsibility for scholar–practitioner development and related competencies lies not only with the student but also with doctoral program faculty and administrators (Lovitts, 2008). To promote this development, faculty and administrators need to understand the research competencies scholar–practitioners need and have tools to assess their development. The identification of competencies and assessment tools enables faculty and administrators to assess students’ development and, in turn, design curriculum and develop initiatives that promote this development, which is associated with students’ persistence (Gardner, 2009; Rockinson-Szapkiw et al., 2017) and quality research contributions to the profession (Walker et al., 2008).

Unfortunately, while a few assessments exist to examine research self-efficacy (Research Self-Efficacy Scale; Greeley et al., 1989), perceptions of the research training environment (Research Training Environment Scale-Revised; Gelso et al., 1996), interest in research (Interest in Research Questionnaire; Bishop and Bieschke, 1994) and the research competencies of research scientists (Research Competencies Scale; Swank and Lambie, 2016), a universally adopted assessment to determine whether students are developing research competencies as scholar–practitioners in professional doctoral programs does not exist. In fact, although there are a few discipline-specific articles on doctoral student research competencies (Peterson et al., 2010; Richardson, 2006), a list of research competencies for scholar–practitioners is at a nascent stage of development. Thus, the purpose of the present study is to review the literature on the research competencies scholar–practitioners need and then to develop and validate a scale to measure them.

The importance of scholar–practitioners’ research and the problems that exist

Development of research competencies as a scholar–practitioner is imperative for doctoral student success and, ultimately, for improving professional practice. Doctoral students’ development as scholar–practitioners is associated with the volition to persist in a program, and a lack of development from student to scholar–practitioner can result in attrition (Rockinson-Szapkiw and Spaulding, 2015; Rockinson-Szapkiw et al., 2017). Through doctoral education, future leaders of education, commerce and industry are created (Nettles and Millett, 2006). Attrition, which has hovered around 50 percent in doctoral programs for decades (Bowen and Rudenstine, 1992; Nettles and Millett, 2006), leads to a deficiency of leaders for the professions.

Moreover, the research competencies of a scholar–practitioner are vital for improving practice (e.g. business practice, clinical treatment or classroom instruction), for the quality of research and its ability to improve practice are limited by the student’s research competence (Walker et al., 2008). Unfortunately, research across disciplines such as the social sciences, behavioral sciences, nursing and education has been criticized, and the quality of the research has been called into question (Burkhardt and Schoenfeld, 2003; Walker et al., 2008; Wester et al., 2013). Additionally, what has been referred to as a research-practice gap exists (Murray, 2009). This gap exists for a myriad of reasons, including doctoral students graduating from professional doctoral programs without needed research competencies. Researchers have documented that students graduate without the ability to evaluate literature and apply research in practice, and without valuing research as relevant to practice (Gelso, 1993; Kaplan and Gladding, 2011; Stoltenberg et al., 2000; Wester and Borders, 2014). Stoltenberg et al. (2000) explained:

[…] we find the critical thinking associated with scientific training to be crucial to the development of effective practitioners. Indeed, the [student] runs the risk of believing sufficient facts are known to justify [an] […] intervention. It is at times disturbing how the same students who will dismiss the research literature as flawed will so readily embrace an approach espoused by a clinician who has no evidence of efficacy […] the [student] who lacks a depth and process appreciation of science can only accept or dismiss research results based on limited analytical abilities. Thus, the authority to which they become tied may merely be that of their own experience or that of their most influential teachers, or even worse, of the most charismatic and aggressive salesperson on the practitioner continuing-education-seminar circuit.

(p. 630)

Reasons for the lack of scholar–practitioner development, attrition and poor-quality research across professions are likely complex and multifaceted; however, these concerns do bring into question the training and research competencies of doctoral students. They highlight the need to identify and more systematically assess the research competencies doctoral students develop throughout their professional doctoral programs. For example, Lovitts (2008) argued that the process of becoming a scholar–practitioner is an important area where more research is needed, especially since development as a scholar–practitioner is associated with success for the doctoral student and the program (Gardner, 2008; Rockinson-Szapkiw et al., 2017; Walker et al., 2008). The development of a list of scholar–practitioners’ research competencies and an accompanying assessment tool is also an important step in improving the quality of research training in professional doctoral programs and, perhaps, the quality of research conducted in practice.

Research competency of scholar–practitioners

Before defining research competency, the term competency must first be defined. For this study, competency was defined as being “a professional qualified, capable, and able to understand and do certain things in an appropriate and effective manner,” which is inclusive of attitudes, knowledge, skills and behaviors (Rodolfa et al., 2005, p. 348). More specifically, drawing from the literature on professional doctoral programs, the research competency of a scholar–practitioner refers to the attitudes (e.g. view, evaluation), knowledge (e.g. know, understand), skills (e.g. ability, capability) and behavior (e.g. action) to apply, conduct and disseminate research appropriately and effectively to solve problems of practice, to improve practice and to advocate for social justice and equity (Perry, 2015; Rolfe and Davies, 2009; Servage, 2009; Shulman et al., 2006). Empirical research on discipline-specific research competency frameworks and scholar–practitioner development (sometimes referred to as practitioner-scholar or scholar development) was used to further delineate this construct and develop items for the scale.

For example, Rockinson-Szapkiw et al. (2017), in a qualitative study, focused on women’s development as scholars during professional doctoral programs. They concluded that scholar–practitioners, early in their programs, develop a value of research as a tool to improve practice and serve others. Through coursework and participation in collaborative scholarship, women develop research knowledge and skills that enable them to apply and conduct research in their workplaces. Finally, women develop a vision and intention to engage in research and communicate (e.g. disseminate) their knowledge to key stakeholders in the community and field. Similarly, Baker and Pifer (2014), in studying doctoral students enrolled in education and business programs, proposed that the transition from student to scholar occurs through three stages: knowledge consumption, knowledge creation and knowledge enactment. Through the first stage, which often occurs during coursework, doctoral students begin developing as scholars by learning how to consume and critically evaluate literature. As doctoral students move toward completing their capstone research project, they obtain knowledge about research and analytic methods. They learn how to conceptualize and conduct research investigations through collaborative and capstone research projects. Finally, the process of development as a scholar continues through the knowledge enactment stage, in which the doctoral student “seeks legitimacy and membership in the scholarly community by engaging in practices and behaviors” (Baker and Pifer, 2014, p. 140), such as communicating their research to various stakeholders.

Many others have written about the research-related competencies and characteristics of scholar–practitioners in professional doctoral programs, highlighting attitudinal, behavioral, skill and knowledge elements similar to those discussed in these two studies. Consistent with Rockinson-Szapkiw et al. (2017), CPED (2009) and AACN (2015), the literature notes that doctoral students enrolled in professional doctoral programs should develop a value of research as a tool to solve problems of practice and promote social justice. Through their professional doctoral programs, students develop the skills and knowledge of what some authors call Mode 2 knowledge generation (e.g. Lester, 2004; Rolfe and Davies, 2009). Students are expected to develop the knowledge and skills to conceptualize, design and conduct research investigations “according to accepted standards of rigor and quality” (Golde, 2006, p. 10), with the responsibility of generating knowledge that investigates “a particular professional topic or existing problem” (Colwill, 2012, p. 13) and is readily usable in practice. The CPED (2011) Consortium developed the idea of inquiry as practice to explain the skills and knowledge students develop as scholar–practitioners through their professional doctoral programs. CPED (2011) stated that scholar–practitioners should be able to engage in:

[…] the process of posing significant questions that focus on complex problems of practice and using data to understand the effects of innovation. As such, inquiry of practice requires the ability to gather, organize, judge, aggregate, and analyze situations, literature, and data with a critical lens.

(para. 2)

That is, scholar–practitioners need to understand “the usefulness of a number of relevant, related disciplinary fields” (Morley and Priest, 2001, p. 24), theories, research methods, designs and analytic approaches so that they can design research investigations “suited to the context of practice” (Willis et al., 2010, p. 25; Johnson and Hathaway, 2004; Lee et al., 2000; Perry, 2015; Taylor and Maxwell, 2004). Richardson (2006) noted that in order to do research such as this in education, scholar–practitioners should have several research competencies. Scholar–practitioners should: have discipline-specific knowledge; be able to think critically and theoretically about problems; be able to conceptualize research problems in a socially and professionally relevant manner; have the skills to design a research investigation using appropriate methods; have the skills to collect and analyze data; and be able to communicate research findings in a usable manner to various stakeholders, especially community partners. Likewise, in the discipline of psychology, Peterson et al. (2010) noted that students should graduate with competencies for conducting and using research in applied settings. Students should graduate from doctoral programs with the ability to evaluate literature and an understanding of research ethics. Finally, as Baker and Pifer (2014) note, additional researchers have stated that students enrolled in professional doctoral programs should understand that scholarly practitioner work needs to be shared with key stakeholders in their organization as well as the broader community of stakeholders (e.g. community partners, professional colleagues) (Archbald, 2008). In sum, Table I outlines the five areas of research competencies derived from the literature on scholar–practitioner development and research development in professional doctoral programs. These themes and the literature surrounding them were used to develop items for the scale aimed at measuring the research competency of a scholar–practitioner.

Purpose of the study

The scholar–practitioner research development scale (SPRDS) was developed and the following research questions were examined:

RQ1. What are the validity and dimensionality of the SPRDS when assessing the research competencies of scholar–practitioners?

RQ2. Does the SPRDS have good internal reliability?

Methodology

Design

To investigate the research questions, I used quantitative methods. A principal axis factoring method with direct oblimin rotation was used to examine the validity and dimensionality of the SPRDS. Given that the assumption of univariate normality, a condition necessary for multivariate normality, was not met, a principal axis factoring method was chosen as the most appropriate type of factor analysis (Fabrigar et al., 1999). Factors were rotated using a direct oblimin method to account for the correlation among factors (Fabrigar et al., 1999). Coefficient α (Cronbach and Shavelson, 2004) was used to investigate internal consistency.

Item and scale development

Through a review of the literature and of previously developed research scales (see Table I), 40 items were developed to assess the five areas of research competencies. Items were designed following Kline’s (2005) guidelines, with eight items developed for each area. All 40 items were positively worded, and each dealt with a single primary concept. Question style and scoring followed DeVellis’ (2012) recommendations; a five-point Likert-type response scale was used: strongly agree (5), agree (4), neutral (3), disagree (2) and strongly disagree (1). After the items were developed, a panel of experts consisting of 10 university faculty from public and private non-profit institutions across disciplines (e.g. education, counseling, psychology, nursing and business) was formed, which exceeded the recommended size (DeVellis, 2012). All panelists held terminal degrees within their disciplines and had at least five years’ experience teaching in a professional doctoral program. Via an online survey, the panelists were asked to evaluate each of the 40 items for its relevancy and necessity as a research competency for developing scholar–practitioners enrolled in a professional doctoral program. The panelists were also asked to evaluate and comment on the clarity of each item. The panelists rated each item on a three-point Likert-type scale for both relevancy and necessity: 0 for not relevant, 1 for reasonable but not completely relevant and 2 for relevant; and 0 for not necessary, 1 for reasonable but not completely necessary and 2 for necessary. Mean rating scores for each item were computed; any item that did not have a score of 1.5 or higher for both relevancy and necessity was deleted. The panelists reached a consensus that 22 of the items were both relevant and necessary for research competence (i.e. having a mean rating of 2); a further 8 items had mean scores between 1.5 and 2.0 with comments recommending revisions. Recommended revisions were made, and the resulting 30 items were sent to the panelists to rate again for both relevancy and necessity. Mean scores for each item were again computed; any item that did not have a score of 2.0 for both relevancy and necessity was deleted. In the second round of rating, panelists reached a consensus that 26 items were both relevant and necessary, with no further recommendations for revision. Thus, the scale sent to the participants consisted of 26 items, with three to six items per competency area.
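The two-round panel procedure reduces to a simple aggregation rule over mean ratings. The following is a minimal sketch (Python/pandas) of the round-one retention rule; the item labels and mean ratings are hypothetical placeholders, not the study’s actual panel data:

```python
# A minimal sketch of the round-one panel-rating rule described above.
# Item labels and mean ratings are hypothetical placeholders.
import pandas as pd

ratings = pd.DataFrame({
    "item": ["item_01", "item_02", "item_03", "item_04"],
    "relevancy_mean": [2.0, 1.8, 1.2, 2.0],  # mean across the 10 panelists
    "necessity_mean": [2.0, 1.6, 1.9, 2.0],
})

# Round 1: drop any item whose mean relevancy OR necessity is below 1.5.
retained = ratings[(ratings["relevancy_mean"] >= 1.5) &
                   (ratings["necessity_mean"] >= 1.5)]

# Retained items with any mean below 2.0 are revised before round 2,
# where only items with means of 2.0 on both criteria are kept.
to_revise = retained[(retained["relevancy_mean"] < 2.0) |
                     (retained["necessity_mean"] < 2.0)]
print(retained["item"].tolist())   # ['item_01', 'item_02', 'item_04']
print(to_revise["item"].tolist())  # ['item_02']
```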

Participants

Data were collected from doctoral students enrolled in professional doctoral programs at public, private for-profit and private non-profit institutions. The programs included Doctor of Education (EdD) programs in educational leadership, curriculum and instruction, instructional technology and design, higher education, special education and educational psychology (n=735); Doctor of Nursing Practice (DNP) programs (n=60); Doctor of Business Administration (DBA) programs (n=84); and doctoral programs in Counseling and Counselor Education and Supervision (n=252).

In total, 271 (62.6 percent) females and 162 (37.4 percent) males across various ethnicities (i.e. White = 300, 69.3 percent; African American = 97, 22.4 percent; American Indian = 9, 2.1 percent; Asian = 14, 3.2 percent; and Hispanic = 13, 3 percent) completed the SPRDS. Participants were enrolled in Doctor of Education (n = 341, 78.9 percent), DNP (n = 18, 4.1 percent) and DBA (n = 32, 7.4 percent) programs and doctoral programs in Counselor Education and Supervision (n = 42, 9.6 percent). On average, the participants had completed 44 (SD = 20.92) credit hours of their 60–63 credit hour programs. All but 19 of the participants had completed all of the research and analysis courses in their programs; all had taken at least one research and analysis course. The majority of the participants held full-time jobs while maintaining full-time student status (i.e. taking at least six credit hours or enrolled in dissertation/final project courses). Participants held professional roles such as principals, superintendents, student affairs personnel, instructional designers, school counselors, counselors, nurses, accountants, marketing managers and managers. Many indicated their intention to continue in their current professional roles upon completing their degrees.

Procedures

A sample of students (n=1,131) enrolled in professional doctoral programs across the USA was recruited. Following Dillman et al.’s (2009) recommendations, emails inviting participation in the study were sent to students and faculty associated with professional doctoral programs at nine institutions. Within the email invitation, the professional doctoral program was defined, and enrollment in a professional doctoral program was identified as a criterion for participation. In total, 492 students (43.5 percent) completed the survey, with 433 (38.28 percent) cases being complete and usable for data analysis. A total of 59 cases were removed because more than half of the responses were left unanswered, making the data unusable in these cases.

Results

Descriptive statistics

The descriptive statistics for the overall SPRDS and each subscale are presented in Table II. The distributions on each scale were positively skewed, and the univariate assumption of normality was violated (Field, 2009; Gravetter and Wallnau, 2014; Trochim and Donnelly, 2006). During the data analysis, two items did not demonstrate a salient factor loading (i.e. a rotated factor loading of at least 0.3, accounting for at least 9 percent of the variance on one factor) when the factor analysis was conducted; thus, the scale was reduced from 26 items to 24 items. The final SPRDS consists of 24 Likert-type scale items, with six items related to research skills, three items related to research dissemination, six items related to the value of research, three items related to evaluation and application skills, and six items related to research knowledge. As noted, the five-point Likert-type scale was used for all 24 items (i.e. strongly agree (5), agree (4), neutral (3), disagree (2) and strongly disagree (1)). Averaged scores on the overall scale and each subscale range from 1 to 5, with higher scores indicating higher levels of research competencies.
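Scoring is a straightforward averaging of items. The sketch below (Python) illustrates how the subscale and total scores could be computed; the column names q1 through q24 are assumptions, with the item-to-subscale mapping following the ordering shown in Table IV:

```python
# A minimal scoring sketch; assumes a DataFrame of responses with columns
# q1..q24 coded 1 (strongly disagree) to 5 (strongly agree). Column names
# are hypothetical; the item groupings follow Table IV.
import pandas as pd

SUBSCALES = {
    "value_of_research":      [f"q{i}" for i in range(1, 7)],    # items 1-6
    "evaluation_application": [f"q{i}" for i in range(7, 10)],   # items 7-9
    "research_knowledge":     [f"q{i}" for i in range(10, 16)],  # items 10-15
    "research_skills":        [f"q{i}" for i in range(16, 22)],  # items 16-21
    "research_dissemination": [f"q{i}" for i in range(22, 25)],  # items 22-24
}

def score_sprds(responses: pd.DataFrame) -> pd.DataFrame:
    """Return averaged subscale and total scores, each ranging from 1 to 5."""
    scores = pd.DataFrame(index=responses.index)
    for name, items in SUBSCALES.items():
        scores[name] = responses[items].mean(axis=1)
    all_items = [q for items in SUBSCALES.values() for q in items]
    scores["sprds_total"] = responses[all_items].mean(axis=1)
    return scores  # higher scores indicate higher research competency
```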

Validity and reliability analysis

Table III is a correlation matrix for the SPRDS. Inspection of the correlation coefficients in this matrix indicated no violation of the assumption of multicollinearity, and the Kaiser–Meyer–Olkin value for sampling adequacy was 0.92 (Kaiser, 1958). A significant Bartlett’s Test of Sphericity (χ2 = 8,040.11, p < 0.001) provided support for the factorability of the correlation matrix (Field, 2009). The data were thus deemed suitable for a principal axis factoring analysis.
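For readers who wish to run the same suitability checks on their own data, a minimal sketch using the Python factor_analyzer package follows; the study itself does not specify this software, and the DataFrame name items is an assumption:

```python
# A minimal sketch of the factorability checks reported above (Bartlett's
# test and KMO), using the factor_analyzer package. `items` is an assumed
# DataFrame of respondents x Likert-type items.
import pandas as pd
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

def check_factorability(items: pd.DataFrame) -> None:
    chi_square, p_value = calculate_bartlett_sphericity(items)
    kmo_per_item, kmo_overall = calculate_kmo(items)
    print(f"Bartlett's test: chi2 = {chi_square:.2f}, p = {p_value:.4g}")
    print(f"KMO sampling adequacy (overall) = {kmo_overall:.2f}")
    # Rules of thumb: a significant Bartlett's test and KMO >= 0.6
    # (here, 0.92) suggest the matrix is suitable for factor analysis.
```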

Four criteria were used to determine how many factors to extract. Inspection of Cattell’s scree plot revealed a clean break after the first factor, supporting a unidimensional solution. In contrast, the Kaiser–Guttman rule indicated a five-factor solution, as five factors possessed eigenvalues of 1.0 or higher. This was supported by the parallel analysis, which also suggested a five-factor solution. Finally, the interpretability of the solution was considered, with the most interpretable solution also being a five-factor solution. Factors were rotated using a direct oblimin method to account for the correlation among factors (Fabrigar et al., 1999). All but 2 of the 26 items loaded on one of the five factors and had extraction communalities (h2) of 0.5 or greater (Kline, 2005; Tabachnick and Fidell, 2013). Many items loaded strongly on a primary factor (i.e. above 0.5; Tabachnick and Fidell, 2013). Where cross-loading of items occurred, the items were examined to determine whether it was theoretically justifiable to retain them and whether any loading on a non-primary factor exceeded 0.32 (Tabachnick and Fidell, 2013). Thus, the decision was made to retain 24 items and remove the 2 items that did not load on a factor. Table IV contains the estimates of communalities (h2), that is, the proportion of variance explained by the factors, and the factor loadings.
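A sketch of how the extraction and rotation decisions above could be implemented is given below: a basic parallel analysis against random normal data, followed by principal axis factoring with direct oblimin rotation via factor_analyzer. Variable names are assumptions, and this simplified parallel analysis compares observed eigenvalues to mean random eigenvalues rather than to a percentile:

```python
# A minimal sketch of the extraction criteria and rotation described above.
# `items` is an assumed DataFrame of item responses.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

def parallel_analysis(items: pd.DataFrame, n_iter: int = 100, seed: int = 0) -> int:
    """Count factors whose observed eigenvalues exceed those of random data."""
    rng = np.random.default_rng(seed)
    n, k = items.shape
    observed = np.linalg.eigvalsh(np.corrcoef(items.T))[::-1]  # descending
    random_mean = np.zeros(k)
    for _ in range(n_iter):
        noise = rng.normal(size=(n, k))
        random_mean += np.linalg.eigvalsh(np.corrcoef(noise.T))[::-1]
    random_mean /= n_iter
    return int((observed > random_mean).sum())

def fit_paf_oblimin(items: pd.DataFrame, n_factors: int = 5) -> FactorAnalyzer:
    """Principal axis factoring with direct oblimin rotation."""
    fa = FactorAnalyzer(n_factors=n_factors, method="principal", rotation="oblimin")
    fa.fit(items)
    print(pd.DataFrame(fa.loadings_, index=items.columns).round(3))  # pattern matrix
    print("h2:", fa.get_communalities().round(3))  # extraction communalities
    return fa
```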

The research skills factor accounted for 40.79 percent of the item variance; the value of research for 11.27 percent; research dissemination for 8.98 percent; evaluation and application skills for 5.49 percent; and research knowledge for 4.32 percent. Together, the five factors accounted for 70.85 percent of the item variance. The five factors were moderately associated (see Table V).

Internal consistency was computed for the entire scale as well as for the subscales. Coefficient α for the SPRDS was 0.93, demonstrating good internal consistency. Each subscale also had good internal consistency (i.e. research skills = 0.90, research dissemination = 0.93, value of research = 0.88, evaluation and application skills = 0.82, research knowledge = 0.85).
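Coefficient α has a simple closed form and can be computed directly, as the short sketch below shows; the items DataFrame is again an assumption (one would pass the columns of a single subscale, or all 24 items for the total scale):

```python
# A minimal sketch of coefficient alpha (Cronbach and Shavelson, 2004)
# computed from its definition: alpha = k/(k-1) * (1 - sum(item var)/var(total)).
# `items` is an assumed DataFrame whose columns are one (sub)scale's items.
import numpy as np
import pandas as pd

def coefficient_alpha(items: pd.DataFrame) -> float:
    x = items.to_numpy(dtype=float)
    k = x.shape[1]                                # number of items
    item_variances = x.var(axis=0, ddof=1).sum()  # sum of item variances
    total_variance = x.sum(axis=1).var(ddof=1)    # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```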

Discussion

This study examined the dimensionality, validity and reliability of a scale created to measure research competency of a scholar–practitioner, which is inclusive of the attitudes (e.g. view, evaluation), knowledge (e.g. know, understand), skills (e.g. ability, capability) and behavior (e.g. action) to apply, conduct and disseminate research appropriately and effectively to solve problems of practice, to improve practice and to advocate for social justice and equity (Perry, 2015; Rolfe and Davies, 2009; Servage, 2009; Shulman et al., 2006). In this study, the SPRDS was developed, refined and tested with over 400 students enrolled in professional doctoral programs across disciplines such as business, counseling, nursing and education. Evidence from both an exploratory factor analysis and internal consistency analysis demonstrated that the self-report scale has both validity and reliability. The final scale was found to have five dimensions: value of research; evaluation and application skills; research knowledge; research skills; and research dissemination.

The value of research subscale represents a doctoral student’s belief that research is valuable and can be a useful tool to improve professional practice. Evaluation and application skills are the skills necessary to locate, critically evaluate and apply literature to practice, whereas research knowledge is the understanding of the theories and methodologies used as research is applied and conducted to solve problems encountered in practice. Research skills represent the skills needed to design and conduct an investigation to inform practice. Finally, research dissemination refers to the doctoral student’s interest or active engagement in scholarly practices and behaviors, namely, disseminating research to a variety of stakeholders. This dimensionality is consistent with research (e.g. Baker and Pifer, 2014; Perry and Imig, 2008; Rockinson-Szapkiw et al., 2017) suggesting that students enrolled in professional doctoral programs need to develop research attitudes, skills, knowledge and behaviors not merely to contribute to the theoretical literature read by the academic community but to improve practice.

Recommendations and limitations

Lovitts (2008) suggested that doctoral students’ success, including their development as a scholar–practitioner, is dependent upon the student, the program and its faculty. While faculty and administrators cannot control individual factors, they can and should influence student development as a scholar–practitioner through the program and cultural factors (Jones, 2013). The SPRDS, found to have both reliability and validity, has utility for faculty and administrators in professional doctoral programs. Students may also find it useful in assessing their own research competency development as they progress through their programs to identify areas in which they may want to seek further development.

The SPRDS provides a useful tool for faculty and administrators to assess, and, in turn, influence the development of doctoral students’ research competencies in their professional doctoral programs. Administered at the beginning and middle stages of a doctoral program, the SPRDS can be used to identify students with weak research competencies who are, potentially, at risk of attrition. Students may also use the measure to identify needed areas of growth and seek opportunities, such as joining communities of practice or taking additional research courses, to address these identified areas.

The SPRDS may also be useful to assess and inform curriculum and program supports aimed at students’ development as scholar–practitioners. Doctoral faculty and advisors play a fundamental role in doctoral students’ transition from students to scholar–practitioners, refining the role they will play in the profession (Barnes et al., 2010; McAlpine and Amundsen, 2011). Thus, faculty have the responsibility to design courses that promote the value of research, research skills and research knowledge. Through program-sponsored communities of practice, faculty can model their commitment to improving the lives of individuals through research and invite students to participate in faculty-led applied research projects or program evaluations that aim to improve practice. These specific courses, pedagogies or experiences can be evaluated with the SPRDS to determine if they are promoting students’ ability to consume, evaluate, design, conduct and disseminate research aimed at improving practice.

Using the scale may also be a way for faculty and administrators to assess whether the curriculum and culture within the program are consistent with the aim of a professional doctoral program. From the onset of the program, doctoral students should see research as a useful tool for practice, rather than something individuals perform that is specialized and confined to their field of study and the academic community. Indeed, McConnell (1984) stated that individuals “should be trained as thoroughly as possible for what they are to do, whether this is research or practice, and not what others wish that they do” (p. 366). Professional doctoral programs need to ensure they do not become overly focused on traditional research and instead ensure the program and culture are a place where students adopt a research-minded approach to practice (Johnson and Hathaway, 2004; Perry, 2015). The SPRDS provides a tool for both formative and summative assessment throughout professional doctoral programs. The scale may be one useful tool for faculty and researchers who desire to develop and assess systematic teaching and learning protocols that are empirically validated for professional doctoral programs across universities and applicable and useful in local contexts (Shulman, 2011).

While the SPRDS has reliability and validity and is useful, limitations do need to be acknowledged. In this study, the sample consisted of students from a limited number of programs and universities, with the majority of students pursuing degrees in the field of education. The majority of participants were also White and female. Future studies conducted to norm the SPRDS need to include students from a broader range of disciplines, consider international professional doctoral program students and include a more diverse demographic. The present study sample also consisted of students in the final stages of their doctoral programs. Future studies, thus, may examine the variance in research competencies and scholar–practitioner development across doctoral program stages. Additionally, as the development of, and debate about, the construct of the scholar–practitioner and its competencies continue, this scale may continue to be refined. The scale may also need to be refined for discipline-specific use. It is also important to recognize that while self-report measures can be useful in assessing research skills (Gilmore and Feldon, 2010), this self-report scale is limited in that it does not measure actual research competencies.

Conclusion

The current study provides evidence that the SPRDS is a valid and reliable instrument to assess the research competencies of students enrolled in professional doctoral programs. The instrument can be a useful tool to assess and inform faculty and administrators about their students’ development, the curriculum and program culture. Future research should use the scale to assess the influence that instructional elements and doctoral program initiatives have on students’ development of research competencies as scholar–practitioners. The scale may also be useful in future research to better understand elements that predict and inform the development of doctoral students as scholar–practitioners.

Table I. Literature themes

Areas of research competencies and supporting references

Attitude toward or value of research (e.g. research as: a tool relevant to practice; useful for helping people; a tool to investigate social problems; a tool to address issues of equity and social justice; a tool to find solutions to problems of practice): CPED (2009), Gelso (1993), Gelso et al. (2013), Richardson (2006), Rockinson-Szapkiw et al. (2017)

Critical evaluation and application of research (e.g. find information; critically evaluate research articles; think theoretically; use research and theory in an applied setting; conceptualization): Baker and Pifer (2014), Bieschke et al. (1993), CPED (2009), Greeley et al. (1989), Peterson et al. (2010), Richardson (2006)

Research knowledge (e.g. understand ethical guidelines for research in the profession; have substantive knowledge of the professional field, discipline-specific research and theory; understand the differences between various methods and designs; understand analysis procedures): Bieschke et al. (1993), Greeley et al. (1989), Berliner (2006), Golde (2006), Lee et al. (2000), Peterson et al. (2010), Richardson (2006), Shulman et al. (2006), Taylor and Maxwell (2004), Walker et al. (2008)

Research skills (e.g. investigate in an applied setting; design research; collect and analyze data; conduct research motivated by practice needs; present results; implementation): Greeley et al. (1989), Johnson and Hathaway (2004), Peterson et al. (2010), Richardson (2006), Taylor and Maxwell (2004), Walker et al. (2008), Wester and Borders (2014)

Research dissemination (e.g. generate professional knowledge to improve practice; communicate research effectively and clearly to various stakeholders; communicate about research to a variety of audiences): Archbald (2008), Baker and Pifer (2014), CPED (2009), Creswell (2015), Lester (2004), Perry (2015), Rolfe and Davies (2009), Shulman et al. (2006), Walker et al. (2008), Wester and Borders (2014)

Table II. Descriptive statistics

Scale or item M SD
Value of research 4.70 0.43
Evaluation and application skills 4.57 0.55
Research knowledge 4.46 0.53
Research skills 4.13 0.77
Research dissemination 4.21 0.91
SPRDS (total) 4.42 0.48

Note: n=433

Table III. Item correlation matrix

Table IV. Factor loadings and communalities

F1 F2 F3 F4 F5 h2
Value of research items (n=6)
1. Acquiring research knowledge and skills during my program is important 0.721 −0.034 −0.093 −0.068 0.116 0.574
2. Research can improve the lives of those served in my professional practice 0.668 −0.127 −0.086 −0.078 0.103 0.560
3. Research can improve my professional practice 0.831 0.085 0.005 0.062 −0.118 0.734
4. Research is useful to solve complex problems I face in my professional practice 0.795 0.043 −0.001 0.084 −0.080 0.674
5. Research is important to promote equity and social justice in my professional practice 0.859 0.037 0.014 0.127 −0.125 0.782
6. Disseminating my research to various audiences is important to improve professional practice 0.580 0.110 −0.084 −0.014 0.039 0.533
Evaluation and application skills (n=3)
7. I can apply empirical research to solve problems I encounter in my professional practice 0.136 0.563 0.038 0.153 −0.097 0.716
8. I can apply theory to solve problems I encounter in my professional practice −0.030 0.766 −0.094 −0.064 0.165 0.722
9. I can identify scholarly resources to solve problems I encounter in my professional practice −0.032 0.885 −0.028 0.005 −0.005 0.769
Research knowledge (n=6)
10. I understand ethical guidelines for research in my profession (e.g. obtain IRB approval, do not harm participants) 0.009 0.002 0.656 0.314 −0.095 0.575
11. I understand how to formulate questions to investigate problems in my professional practice 0.005 −0.012 0.685 0.207 −0.033 0.664
12. I understand research methods (e.g. quantitative, qualitative, and mixed) I can use to investigate problems in my professional practice 0.124 0.063 0.750 0.080 0.014 0.766
13. I understand analytic procedures to analyze data collected in my professional practice 0.138 −0.009 0.710 0.152 −0.027 0.767
14. I understand how theories and paradigms are used to develop investigations to solve problems in my professional practice 0.125 0.097 0.761 −0.015 0.021 0.717
15. I understand how to engage in the research process, from conceptualization to dissemination (e.g. communication to key stakeholders), to address problems in my professional practice 0.276 −0.038 0.566 0.059 −0.009 0.656
Research skills (n=6)
16. I can design meaningful research investigations to address problems in my professional practice 0.066 0.264 0.021 0.576 0.142 0.668
17. I can choose the appropriate method of inquiry (e.g. quantitative, qualitative and mixed) to address problems in my professional practice 0.072 0.169 0.078 0.687 0.156 0.726
18. I can conduct rigorous research investigations to address problems in my professional practice 0.087 0.166 0.086 0.727 0.123 0.761
19. I can interpret results from the data I analyze 0.041 0.002 −0.262 0.558 0.034 0.585
20. I can analyze data (e.g. quantitative, qualitative and mixed) that I collect to address problems in my professional practice −0.006 −0.086 −0.272 0.736 0.003 0.768
21. I can develop investigation questions to examine problems in my professional practice 0.023 −0.125 −0.277 0.676 0.083 0.668
Research dissemination (n=3)
22. I can communicate (e.g. present, write) the results of research investigations I conduct to key stakeholders 0.127 0.114 −0.008 −0.024 0.755 0.719
23. I can discuss the results of research investigations in light of empirical and theoretical literature, drawing connections between the practice and the knowledge of the profession 0.000 0.046 −0.001 0.052 0.905 0.872
24. I can communicate implications to improve practice based on the results of research investigations I conduct 0.002 0.045 0.366 0.043 0.905 0.872

Notes: Factor labels: F1, the value of research; F2, evaluation and application skills; F3, research knowledge; F4, research skills; F5, research dissemination

Table V. Factor correlation matrix

Factor 2 3 4 5
1 0.30* 0.26* 0.36* 0.31*
2 0.45* 0.48* 0.36*
3 0.48* 0.27*
4 0.52*

Notes: Factor labels: F1, the value of research; F2, evaluation and application skills; F3, research knowledge; F4, research skills; F5, research dissemination. *Correlations significant at the 0.05 level

References

American Association of Colleges of Nursing (AACN) (2015), “The essentials of doctoral education for advanced nursing practice”, available at: www.aacn.nche.edu/publications/position/DNPEssentials.pdf (accessed May 15, 2018).

Archbald, D. (2008), “Research versus problem solving for the education leadership doctoral thesis: implications for form and function”, Educational Administration Quarterly, Vol. 44 No. 5, pp. 704-739.

Baker, V.L. and Pifer, M.J. (2014), “Preparing for practice: parallel processes of identity development in stage 3 of doctoral education”, International Journal of Doctoral Studies, Vol. 9, pp. 137-154, available at: http://ijds.org/Volume9/IJDSv9p137-154Baker0623.pdf

Barnes, B., Williams, E. and Archer, S. (2010), “Characteristics that matter most: doctoral students’ perceptions of positive and negative advisor traits”, NACADA Journal, Vol. 30 No. 1, pp. 34-46.

Berliner, D. (2006), “Toward a future as rich as our past”, in Golde, C.M. and Walker, G.E. (Eds), Envisioning the Future of Doctoral Education, Jossey-Bass, San Francisco, CA, pp. 268-289.

Bieschke, K.J., Bishop, R.M. and Garcia, V.L. (1993), “A factor analysis of the research self-efficacy scale”, paper presented at the meeting of American Psychological Association, August, Toronto.

Bishop, R.M. and Bieschke, K.J. (1994), “Interest in research questionnaire”, unpublished scale, The Pennsylvania State University, University Park.

Bowen, W.G. and Rudenstine, N.L. (1992), In Pursuit of the PhD, Princeton University Press, Princeton, NJ.

Burkhardt, H. and Schoenfeld, A.H. (2003), “Improving educational research: toward a more useful, more influential and better-funded enterprise”, Educational Researcher, Vol. 32 No. 9, pp. 3-14.

Carnegie Project on the Education Doctorate (CPED) (2009), Working Principles for the Professional Practice Doctorate in Education, Carnegie Project on the Education Doctorate, College Park, MD.

Carnegie Project on the Education Doctorate (CPED) (2011), Design Concept Definitions, Carnegie Project on the Education Doctorate, College Park, MD.

Colwill, D.A. (2012), Educating the Scholar Practitioner in Organization Development, Information Age Publishing, Charlotte, NC.

Council of Graduate Schools (2007), Task Force on the Professional Doctorate, Council of Graduate Schools, Washington, DC.

Creswell, J.W. (2015), Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research, Merrill Prentice-Hall, Columbus, OH.

Cronbach, J.L. and Shavelson, R.J. (2004), “My current thoughts on coefficient alpha and successor procedures”, Educational and Psychological Measurement, Vol. 64 No. 3, pp. 391-418.

DeVellis, R.F. (2012), Scale Development: Theory and Applications, Sage, Thousand Oaks, CA.

Dillman, D.A., Smyth, J.D. and Christian, L.M. (2009), Internet, Mail, and Mixed Mode Surveys: The Tailored Design Method, 3rd ed., Wiley, New York, NY.

Fabrigar, L.R., Wegener, D.T., MacCallum, R.C. and Strahan, E.J. (1999), “Evaluating the use of exploratory factor analysis in psychological research”, Psychological Methods, Vol. 4 No. 3, pp. 272-299.

Field, A. (2009), Discovering Statistics Using SPSS, SAGE, London.

Gardner, S.K. (2008), “‘What’s too much and what’s too little?’ The process of becoming an independent researcher in doctoral education”, The Journal of Higher Education, Vol. 79 No. 3, pp. 326-350, doi: 10.1353/jhe.0.0007.

Gardner, S.K. (2009), “Student and faculty attributions of attrition in high and low-completing doctoral programs in the United States”, Higher Education, Vol. 58 No. 3, pp. 97-112.

Gelso, C.J. (1993), “On the making of a scientist-practitioner: a theory of research training in professional psychology”, Professional Psychology: Research and Practice, Vol. 24 No. 1, pp. 468-476.

Gelso, C.J., Baumann, E.C., Chui, H.T. and Savela, A.E. (2013), “The making of a scientist psychotherapist: the research training environment and the psychotherapist”, Psychotherapy, Vol. 50 No. 2, pp. 139-149, doi: 10.1037/a0028257.

Gelso, C.J., Mallinckrodt, B. and Judge, A.B. (1996), “Research training environment, attitudes toward research, and research self-efficacy: the revised research training environment scale”, The Counseling Psychologist, Vol. 24 No. 2, pp. 304-322.

Gilmore, J. and Feldon, D. (2010), “Measuring graduate students teaching and research skills through self-report: descriptive findings and validity evidence”, paper presented at the Annual Meeting of American Educational Research Association, Denver, CO.

Golde, C.M. (2006), “Preparing stewards of the discipline”, in Golde, C.M. and Walker, G.E. (Eds), Envisioning the Future of Doctoral Education, Jossey-Bass, San Francisco, CA, pp. 3-23.

Golde, C.M. and Dore, T.M. (2001), At Cross Purposes: What the Experiences of Today’s Doctoral Students Reveal about Doctoral Education, The Pew Charitable Trusts, Philadelphia, PA.

Gravetter, F. and Wallnau, L. (2014), Essentials of Statistics for the Behavioral Sciences, 8th ed., Wadsworth, Belmont, CA.

Greeley, A.T., Johnson, E., Seem, S., Braver, M., Dias, L., Evans, K. and Pricken, P. (1989), “Research self-efficacy scale”, unpublished scale, The Pennsylvania State University, University Park, PA.

Johnson, J. and Hathaway, W. (2004), “Training Christian practitioner-scholars: the regent university example”, Journal of Psychology and Christianity, Vol. 23 No. 4, pp. 331-337.

Jones, M. (2013), “Issues in doctoral studies – forty years of journal discussion: where have we been and where are we going?”, International Journal of Doctoral Studies, Vol. 8, pp. 83-104.

Kaiser, H.F. (1958), “The varimax criterion for analytic rotation in factor analysis”, Psychometrika, Vol. 23 No. 3, pp. 187-200.

Kaplan, D.M. and Gladding, S.T. (2011), “A vision for the future of counseling: the 20/20 principles for unifying and strengthening the profession”, Journal of Counseling & Development, Vol. 89 No. 3, pp. 367-372.

Kline, T.J.B. (2005), Psychological Testing: A Practical Approach to Design and Evaluation, Sage, Thousand Oaks, CA.

Lee, A., Green, B. and Brennan, M. (2000), “Organisational knowledge, professional practice and the professional doctorate at work”, in Garrick, J. and Rhodes, C.C. (Eds), Research and Knowledge at Work. Perspectives, Case-Studies and Innovative Strategies, Routledge, London, pp. 117-136.

Lester, S. (2004), “Conceptualising the practitioner doctorate”, Studies in Higher Education, Vol. 29 No. 6, pp. 757-770.

Lovitts, B. (2008), “The transition to independent research: who makes it, who doesn’t, and why”, The Journal of Higher Education, Vol. 79 No. 3, pp. 296-325.

McAlpine, L. and Amundsen, C. (2011), “Challenging the taken-for-granted: how research analysis might inform pedagogical practices and institutional policies related to doctoral education”, Studies in Higher Education, Vol. 37 No. 6, pp. 683-694, doi: 10.1080/03075079.2010.537747.

McConnell, S.C. (1984), “Doctor of psychology degree: from hibernation to reality”, Professional Psychology: Research and Practice, Vol. 15 No. 3, pp. 362-370.

Maxwell, T. (2003), “From first to second generation professional doctorate”, Studies in Higher Education, Vol. 28 No. 3, pp. 279-291.

Morley, C. and Priest, J. (2001), “Developing a professional doctorate in business administration: reflection and the ‘executive scholar’”, in Green, B., Maxwell, T.W. and Shanahan, P. (Eds), Doctoral Education and Professional Practice: The Next Generation?, Kardoorair Press, Armidale, pp. 163-185.

Murray, C.E. (2009), “A bridge for the research–practice gap in counseling”, Journal of Counseling & Development, Vol. 87 No. 1, pp. 108-116.

Nettles, M.T. and Millett, C.C. (2006), Three Magic Letters: Getting to PhD, The Johns Hopkins Press, Baltimore, MD.

Perry, J.A. (2015), “The Carnegie Project on the education doctorate”, Change: The Magazine of Higher Learning, Vol. 47 No. 3, pp. 56-61.

Perry, J.A. and Imig, D. (2008), “A steward of practice in education”, Change: The Magazine of Higher Learning, Vol. 40 No. 6, pp. 42-49.

Peterson, R.L., Peterson, D.R., Abrams, J.C., Stricker, G. and Ducheny, K. (2010), “The national council of schools and programs of professional psychology educational model – 2009”, in Kenkel, M.B. and Peterson, R.L. (Eds), Competency-Based Education for Professional Psychology, American Psychological Association, Washington, DC, pp. 13-42.

Richardson, V. (2006), “Stewards of a field, stewards of an enterprise: the doctorate in education”, in Golde, C.M. and Walker, G.E. (Eds), Envisioning the Future of Doctoral Education, Jossey-Bass, San Francisco, CA, pp. 251-267.

Rockinson-Szapkiw, A.J. and Spaulding, L.S. (2015), “The intersecting identities of female EdD students and their journey to persistence”, in Stead, V. (Ed.), The Education Doctorate (Ed.D): Issues of Access, Diversity, Social Justice, and Community Leadership, Peter Lang Publishing, New York, NY.

Rockinson-Szapkiw, A.J., Spaulding, L.S. and Lunde, R.M. (2017), “Women in distance doctoral programs: how they negotiate their identities as mothers, professionals, and academics in order to persist”, International Journal of Doctoral Studies, Vol. 12, pp. 49-71, available at: www.informingscience.org/Publications/3671

Rodolfa, E., Bent, R., Eisman, E., Nelson, P., Rehm, L. and Ritchie, P. (2005), “A cube model for competency development: implications for psychology educators and regulators”, Professional Psychology: Research and Practice, Vol. 36 No. 4, pp. 347-354.

Rolfe, G. and Davies, R. (2009), “Second generation professional doctorates in nursing”, International Journal of Nursing Studies, Vol. 46 No. 9, pp. 1265-1273.

Servage, L. (2009), “Alternative and professional doctoral programs: what is driving the demand?”, Studies in Higher Education, Vol. 34 No. 7, pp. 765-779.

Shulman, L. (2011), “The scholarship of teaching and learning: a personal account and reflection”, International Journal for the Scholarship of Teaching and Learning, Vol. 5 No. 1, pp. 1-7, doi: 10.20429/ijsotl.2011.050130.

Shulman, L.S., Golde, C.M., Bueschel, A.C. and Garabedian, K.J. (2006), “Reclaiming education’s doctorates: a critique and a proposal”, Educational Researcher, Vol. 35 No. 3, pp. 25-32.

Stoltenberg, C.D., Pace, T.M., Kashubeck-West, S., Biever, J.L., Patterson, T. and Welch, I.D. (2000), “Training models in counseling psychology: scientist-practitioner versus practitioner-scholar”, The Counseling Psychologist, Vol. 28 No. 5, pp. 622-640.

Swank, J.M. and Lambie, G.W. (2016), “Development of the research competencies scale”, Measurement and Evaluation in Counseling and Development, Vol. 49, pp. 91-108, doi: 10.1177/0748175615625749.

Tabachnick, B.G. and Fidell, L.S. (2013), Using Multivariate Statistics, Allyn & Bacon, Needham Heights, MA.

Taylor, N. and Maxwell, T. (2004), “Enhancing the relevance of a professional doctorate: the case of the doctor of education degree at the university of New England”, Asia-Pacific Journal of Cooperative Education, Vol. 5 No. 1, pp. 60-69.

Trochim, W.M. and Donnelly, J.P. (2006), The Research Methods Knowledge Base, 3rd ed., Atomic Dog, Cincinnati, OH.

Walker, G.E., Golde, C.M., Jones, L., Bueschel, A.C. and Hutchings, P. (2008), The Formation of Scholars: Rethinking Doctoral Education for the Twenty-First Century, Jossey-Bass, San Francisco, CA.

Wester, K.L. and Borders, L.D. (2014), “Research competencies in counseling: a Delphi study”, Journal of Counseling & Development, Vol. 92 No. 4, pp. 447-458.

Wester, K.L., Borders, L.D., Boul, S. and Horton, E. (2013), “Research quality: critique of quantitative articles in the journal of counseling & development”, Journal of Counseling & Development, Vol. 91 No. 3, pp. 280-290.

Willis, J., Inman, D. and Valenti, R. (2010), Completing a Professional Practice Dissertation: A Guide for Doctoral Students and Faculty, Information Age Press, Charlotte, NC.


Corresponding author

Amanda Rockinson-Szapkiw can be contacted at: dr.rockinsonszapkiw@gmail.com