This study is a quantitative exploration of a new construct the authors label “academic culture (AC).” Treating AC as a generalized latent variable composed of academic press (AP), disciplinary climate (DC), and teachers’ use of instructional time, the paper explores the construct’s potential to be a key mediator of school leaders’ influence on student learning. The study is guided by three hypotheses.
Responses to an online survey by 856 elementary teachers from 70 schools measured the three components of AC along with school leadership (SL). Provincial tests of writing, reading, and math were used as measures of student achievement (SA), and socioeconomic status (SES) served as a control variable. Data were summarized using descriptive statistics, and correlations were calculated among all variables. Analyses included intra-class correlation analysis, regression equations, confirmatory factor analysis, and structural equation modeling.
Evidence confirmed the study’s three hypotheses: first, AP, DC, and instructional time formed a general latent construct, AC; second, AC explained a significant proportion of the variance in SA, controlling for student SES; and third, AC was a significant mediator of SL’s influence on SA. Concepts and measures of academic optimism (AO) and AC are compared in the paper and implications for practice and future research are outlined.
This first study of AC explored the relationship between AC and SA. Although at least two AO studies have included measures of distributed leadership, minimal attention has been devoted to actually testing the claim that AO is amenable to the influence of explicit leadership practices (as distinct from enabling school structures) and is a powerful mediator of SL effects on student learning. Addressing this limitation of AO research to date, the present study included a well-developed measure of leadership practices and assessed the value of AC as a mediator of such practices.
Leithwood, K. and Sun, J. (2018), "Academic culture: a promising mediator of school leaders’ influence on student learning", Journal of Educational Administration, Vol. 56 No. 3, pp. 350-363. https://doi.org/10.1108/JEA-01-2017-0009
Copyright © 2018, Emerald Publishing Limited
School leader effects on student learning are largely indirect (Hallinger and Heck, 1996; Heck and Hallinger, 2010, 2014; Leithwood et al., 2006, 2010; Marks and Printy, 2003). Given the substantial theory and evidence accumulated in support of this claim, one of the most important goals for leadership research is to identify potentially powerful mediators or school-related variables that make significant, relatively direct, contributions to student learning, and are amenable to skillful leadership intervention. The practical value of such research lies in the guidance it provides leaders when choosing the focus of their improvement efforts; while leaders’ choices are myriad, only some of those choices pay off for students.
Although much more research is required before a comprehensive set of powerful individual leadership mediators is available, a second strand of this work is now underway. This strand probes below the surface of combinations of individual mediators, unearthing more fundamental properties of productive schools, in order to provide leaders with broader and more fundamental foci of attention for their improvement efforts. Reflecting key features of arguably the most fully developed corpus of work aimed at this goal – research on academic optimism (AO) – this study enquired about the potential of a second multi-variable construct we label “academic culture (AC)”.
The purpose of this study was to explore the viability of AC to serve as a mediator of school leaders’ influence on student learning. AC was created from evidence about three individual variables, academic press (AP), disciplinary climate (DC), and teachers’ use of instructional time (UIT).
Organizational culture has long been the object of research in many contexts, stimulated in no small measure by Schein’s early work (e.g. Schein, 2006). Interest in the nature and effects of school culture, in particular, was an important feature of the early “effective schools” research (Teddlie and Stringfield, 2007). Research on school culture has developed two distinct foci. One focus, represented, for example, in the early work of Little (1982), Lortie (1975), and Rosenholtz (1989) explored the contributions to student success of collaborative norms of work between and among teachers and school leaders.
The second strand of work on school culture has inquired about the conditions required to provide students with both a “safe and orderly,” as well as a “work-centered” school environment or ethos (Sammons, 1994; Mortimore et al., 1988; Reynolds et al., 1996; Teddlie and Stringfield, 2007). Our concept of AC is closely related to this second strand of school culture research, specifically the emphasis on a “work-centered” school environment. While justified on different theoretical premises, this study of AC parallels both procedural and methodological features of research on AO (e.g. Hoy et al., 2006; McGuigan and Hoy, 2006).
Focus and opportunity are the primary theoretical grounds on which we justify our attention to AC. AP and DC focus both teachers and students on the academic goals of the curriculum and help minimize distractions from pursuing those goals. The value of this focus can be found in theory and evidence about the motivational effects of setting difficult but attainable goals (Locke et al., 1990) and the heightened effort that results from perceiving high expectations from significant others (e.g. Rosenthal, 1987; Wineburg, 1987). Using most of the instructional time in the classroom for teaching and learning without many other distractions provides students with opportunities to be meaningfully engaged in acquiring those academic goals. We invoke the term “culture” as part of the label for AC to acknowledge the norms and beliefs that would lead a school staff to adopt an unrelenting focus on academic goals for all students and make the most of the time they spend with those students.
Academic press (AP)
Hoy et al. (1998) define AP as “a combination of teachers setting high, but reasonable goals, students responding positively to the challenge of these goals, and the principal supplying the resources and exerting influence to attain these goals” (p. 342). Among the early correlates of effective schools (Murphy et al., 1982), AP has been found to be positively related to achievement in all types of schools, including schools serving poor and minority students (Goddard et al., 2000; Hoy et al., 2006), with its effect stronger in low socioeconomic status (SES) high schools (Shouse, 1996). For low and middle SES schools, the greatest achievement effects follow from strong combinations of communality and AP (Shouse, 1996). Instructional leadership has a positive effect on student achievement (SA) through AP, even when controlling for SES (Alig-Mielcarek, 2003). Principals who are open, supportive, friendly, and establish high expectations but do not burden teachers with bureaucratic tasks typically have high levels of AP in their schools (e.g. Jacob, 2004). AP is significantly correlated with most dimensions of transformational school leadership (SL) (r=0.42-0.65; p<0.01) (Leithwood and Sun, 2012).
Disciplinary climate (DC)
DC is conceptualized as including students’ discipline concerns, class disruptions, student absenteeism and tardiness, counseling of students about discipline, students’ discipline experiences (e.g. a student having something stolen), the rules for behavior, race or cultural conflicts at the school, students’ behaviors and the punishments for misbehavior at the school, teachers’ behavior, and teacher-student relations (Ma and Willms, 2004). Its effects are larger than the effects of student SES (Ma and Crocker, 2007; Ma and Willms, 2004). DC, defined as school rules and compliance, was identified as the most important determinant of academic achievement by Ma and Klinger (2000).
The impact of DC varies with the student population. Ma’s (2003) study, using PISA data from 28,914 15-year-old students in 1,528 Canadian schools, revealed that the only outstanding school-level variable that significantly affected reading, mathematics, and science achievement was school DC. Schools that included primary and intermediate grades tended to have more favorable DC than either high schools or junior high schools (Ma and Willms, 2004). Rural schools showed more favorable results on four of the seven measures, but the differences were small (Ma and Willms, 2004).
Limited evidence supports a direct relationship between a school’s DC and distributed or flexible leadership. Anderson et al.’s (2007) case study research indicated that distributed leadership successfully reshaped school cultures, especially school DC, by involving many staff members (e.g. custodians, security personnel, teachers, the principal, and students) in the use of school-wide behavior plans. Benda’s (2000) study reported a direct relationship between a school’s DC and culture and the flexibility of its leadership (r=0.585).
Use of instructional time (UIT)
The value of this variable is supported by a significant body of international evidence about opportunity to learn, especially increasing the amount of academically engaged student time in class (Hossler et al., 1988). Teachers’ UIT includes teachers’ efforts to maximize time devoted to teaching and learning, create classroom conditions that allow for appropriate pace of instruction, and help students to take charge of their own learning in age-appropriate ways. Across OECD countries, the average learning time in regular school lessons is positively, but weakly, related to country average performance, while learning time in out-of-school lessons and individual study is negatively related to performance. The total amount of “time actually devoted to instruction” has moderate effects on student learning (e.g. Bellei, 2009). Total instructional time matters less than how the time is spent, on which subjects time is spent and the strength of the curriculum (OECD, 2011). Time on task is an important contributor to achievement. The content of the curriculum in which students spend time studying, “opportunity to learn,” has quite strong effects on learning (Tornroos, 2005; Wang, 1998).
There is limited direct evidence about leadership practices for optimizing instructional time. Buffering, one such strategy, protects teachers from many distractions they face from both inside and outside their classrooms and schools, and helps teachers devote their time to classroom instruction (e.g. DiPaola and Tschannen-Moran, 2005). Improving attendance is another strategy for maximizing learning time; a few studies (Marburger, 2006; OECD, 2013; Roby, 2004) have reported significant links between absenteeism (or attendance) and learning. Improved student learning, evidence suggests, depends on optimizing instructional uses of time in classrooms (Fullan et al., 2006). Available evidence indicates that the three variables subsumed by AC are significantly correlated with each other (Fauth et al., 2014; Marzano et al., 2003; Wang et al., 1993). For example, Le Blanc’s (2004) study found that improved school DC and a culture which pressed for academic excellence helped lengthen, maximize, and enhance the UIT in schools. In effective classrooms and schools, teachers limit disruptions to instruction and take effective disciplinary action when needed.
School leadership (SL) practices
The conceptualization and measure of leadership in this study was based on an extensively-tested model of leadership which integrates both instructional and transformational practices (Leithwood and Riehl, 2005; Leithwood and Louis, 2012; Leithwood et al., 2010; Sun and Leithwood, 2015; Hitt and Tucker, 2016). A reduced version (six items) of the 21 individual leadership practices included in the model was used, each item representing one of five categories or domains – setting directions, building relationships and developing people, organizational re-design, improving the instructional program, and securing accountability. The most fully developed version of this model now serves as Ontario’s leadership framework (Leithwood, 2012).
The first three domains reflect social theory suggesting that the performance of organizational members is a function of their motivation, ability, and the settings in which they work. Key functions of leaders therefore include assisting teachers and other organizational colleagues to further develop their motivations (one of the primary purposes of setting directions) and abilities (the purpose of building relationships and developing people) to accomplish organizational goals, as well as creating and sustaining supportive work settings (the goal of developing the organization to sustain desired practices).
The study tested three hypotheses:
H1. AP, DC, and instructional time will come together to form the general latent construct AC in schools.
H2. AC will explain a significant proportion of the variance in SA, controlling for student SES.
H3. AC will be a significant mediator of SL’s influence on SA.
Correlational analysis and structural equation modeling (SEM) were used to test the latent variable AC, its effects on student learning, and the indirect impact of SL on student learning outcomes mediated by AC, as outlined in the three hypotheses. The predictor variable was SL. AC was the mediating variable. The dependent variable was SA composed of averaged percentages of students achieving at Level 3 or greater in math, reading, and writing on provincial measures. Students’ SES was designated as the control variable. The unit of analysis was the school.
Source of data
This study used data collected as part of a larger evaluation of a provincially-sponsored project in Ontario, Canada, entitled “Leading SA: Networks for Learning” (LSA). At the time the data used for this study were collected, approximately 1,200 principals (about four-fifths of them, roughly 900, from elementary schools) from the majority of the province’s 72 school districts were project participants. Online surveys administered in Spring 2012 were available to all teachers in each principal participant’s school. The achieved sample for the study included 856 teachers in 70 elementary schools in which at least three teachers completed the survey.
Online teacher survey
This survey requested information about SL, AP, DC, and UIT (as well as a number of other school and classroom variables unrelated to this study). All items on the survey used a seven-point Likert scale.
The five-item scale for measuring AP was adapted from a scale used by Hoy and Tarter (1997) with an α coefficient of 0.94. A sample item from this scale included “My school sets high standards for academic success.”
DC was measured by six items adapted from earlier research by Ma and Willms (2004). A sample item was “Students do not start working for a long time after my lessons begin.” Reliability estimates for this scale in previous research ranged from 0.45 to 0.71 (Ma and Willms, 2004).
Ten items developed specifically for this study were used to measure teachers’ UIT. A sample item was “My classroom timetable includes large uninterrupted blocks of learning time.”
The six items used to measure SL reflected each of the five domains of the integrated leadership model described earlier. Previous uses of this measure report relatively high reliability (e.g. 0.93 in Leithwood and Louis, 2012).
Student achievement (SA)
Our measure of SA was the percentage of students in Grades 3 and 6 achieving at Level 3 or greater (the province uses a four-point scale, with Level 3 considered “satisfactory”) on provincial tests of math, reading, and writing in 2011. The mean for math was determined by averaging the percentages of students in Grades 3 and 6 at or above Level 3 in math; the means for reading and writing were determined in the same way for their respective subjects. We then computed a “language” measure as the mean of the writing and reading scores, yielding the two measures of SA in our study: language and math. These two means of percentages of students at Level 3 or greater were used to create the latent variable SA.
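The averaging steps above can be sketched as follows. This is an illustrative reconstruction with made-up percentages, not the study’s data, and the function names are ours:

```python
# Hedged sketch of the achievement (SA) measure construction described above.
# The percentages below are illustrative placeholders, not the study's data.

def subject_mean(grade3_pct: float, grade6_pct: float) -> float:
    """Average the percentages of Grade 3 and Grade 6 students at Level 3+."""
    return (grade3_pct + grade6_pct) / 2

def school_sa_measures(scores: dict) -> dict:
    """Return the two SA indicators used in the study: math and language.

    `scores` maps subject -> (grade3_pct, grade6_pct); language is the
    mean of the reading and writing subject means.
    """
    math_mean = subject_mean(*scores["math"])
    reading = subject_mean(*scores["reading"])
    writing = subject_mean(*scores["writing"])
    language = (reading + writing) / 2
    return {"math": math_mean, "language": language}

example = {"math": (62.0, 58.0), "reading": (70.0, 66.0), "writing": (68.0, 64.0)}
# math -> (62+58)/2 = 60.0; language -> mean(68.0, 66.0) = 67.0
```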
Student social economic status (SES)
Student SES data were provided by the school leaders based on their estimates of the approximate proportion of their students’ families earning at each of three income levels: less than $40,000; $41,000 to $80,000; and more than $80,000. We assigned a value to each of the three income levels: 1 for less than $40,000; 2 for $41,000 to $80,000; and 3 for more than $80,000. We also assigned a value from 1 to 4 for the proportion of students’ families in a school earning at each of these income levels: less than 20 percent of families (1), 21 to 50 percent (2), 51 to 75 percent (3), and more than 75 percent (4). School leaders were asked to tick the applicable cells in a 3×4 table, with each cell receiving a weight through the value assignments above. A principal’s response was scored by summing the weights of the ticked cells, yielding a single number for each school. These SES scores were then recoded to a 1-7 scale (1 = low income, 7 = high income). After recoding, SES scores ranged from 0.35 (minimum) to 7.00 (maximum), with a value of 3.5 at the 61st percentile.
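The mechanics of this grid scoring can be sketched as below. Note the paper does not spell out how a cell’s weight combines its income and proportion values; the sketch ASSUMES the weight is their product, purely for illustration:

```python
# Hedged sketch of the 3x4 SES scoring grid described above. The exact
# cell weights are not specified in the paper; here we ASSUME each cell's
# weight is the product of its income value (1-3) and its proportion
# value (1-4). Treat this as an illustration of the mechanics, not the
# authors' actual weighting scheme.

INCOME_VALUES = {"<40k": 1, "41k-80k": 2, ">80k": 3}
PROPORTION_VALUES = {"<20%": 1, "21-50%": 2, "51-75%": 3, ">75%": 4}

def ses_score(ticked_cells):
    """Sum the weights of the cells a principal ticked in the 3x4 grid.

    `ticked_cells` is a list of (income_band, proportion_band) pairs.
    """
    return sum(
        INCOME_VALUES[income] * PROPORTION_VALUES[prop]
        for income, prop in ticked_cells
    )

# e.g. a school where most families earn >80k and a few earn <40k:
score = ses_score([(">80k", "51-75%"), ("<40k", "<20%")])  # 3*3 + 1*1 = 10
```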
Means, standard deviations, and scale reliabilities (Cronbach’s α) were computed for all variables, along with bivariate correlations between all variables. Intra-class correlations (ICC) were calculated using SPSS 24 ANOVA random effects for AP, DC, teachers’ UIT, and SL to examine whether these variables, measured as teacher perceptions in each school, cluster significantly at the school level. ICC(1) (representing the proportion of variance attributable to group membership) and ICC(2) (representing the reliability of the group mean scores) were used to assess whether aggregation to the group level was appropriate (Bliese, 2000; Mierlo et al., 2009).
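The two ICC statistics can be computed from a one-way random-effects ANOVA, following the formulas in Bliese (2000). The sketch below uses synthetic ratings, not the survey data:

```python
# Minimal sketch of ICC(1)/ICC(2) from a one-way random-effects ANOVA,
# as used to justify aggregating teacher ratings to the school level.
# Ratings below are synthetic, standing in for the survey data.

def icc(groups):
    """Return (ICC1, ICC2) for a list of groups of individual ratings."""
    k = len(groups)                          # number of groups (schools)
    n_total = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n_total
    # Between- and within-group sums of squares, then mean squares
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    msb = ssb / (k - 1)
    msw = ssw / (n_total - k)
    n_bar = n_total / k                      # average group size
    icc1 = (msb - msw) / (msb + (n_bar - 1) * msw)
    icc2 = (msb - msw) / msb                 # reliability of group means
    return icc1, icc2

schools = [[5, 6, 5], [3, 3, 4], [6, 7, 6]]
icc1, icc2 = icc(schools)  # higher values -> stronger school-level clustering
```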
Next, a confirmatory factor analysis (CFA) using LISREL 9.2 was conducted to determine whether AP, DC, and teachers’ UIT could be considered a latent construct, AC, and whether our indicator variables, percentages of students achieving at Level 3 or greater in math and language could be considered the latent variable, SA.
Finally, SEM using LISREL 9.2 was performed to test the effects of AC on SA and the indirect effects of SL on SA mediated by AC and controlling for student SES. The χ2 test of model fit, root-mean-square-error of approximation (RMSEA), goodness-of-fit index (GFI), and the non-centrality parameter (NCP) were indices used to assess model fit (Schumacker and Lomax, 2013).
This section describes results aggregated across all 70 schools, including a summary of descriptive statistics for all variables (Table I), a report of relationships among all variables (Table II), the justification of the aggregation of the data to the school level (Table III), a report of CFA analysis of the latent variable AC and its impact on student learning (see Figure 1), and a report of the SEM analysis of the indirect impact of SL on SA mediated by AC (see Figure 2).
Descriptive statistics and scale reliabilities
Table I indicates that teachers’ ratings of their SL were moderately high (M=5.40; SD=0.69). Teachers’ mean rating of DC was also moderately high (M=5.41; SD=0.53), followed by instructional time (M=5.13; SD=0.70) and AP (M=4.64; SD=0.76). The standard deviations of these responses were moderate (0.53 to 0.76), indicating substantial agreement among respondents; teachers’ perceptions were more consistent about DC than about instructional time and AP. The internal reliability of all scales was relatively high: 0.85 for both AP and DC, 0.83 for teachers’ UIT, and 0.98 for SL.
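The scale reliabilities reported here are Cronbach’s α, which can be computed as in the following sketch; the item responses are made up for illustration:

```python
# Hedged sketch of the Cronbach's alpha computation behind the scale
# reliabilities reported above, using made-up item responses.

def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """`items` is a list of per-item response lists (same respondents).

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]  # each respondent's total
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Three items answered by four respondents on a 7-point scale:
items = [[5, 6, 4, 7], [5, 7, 4, 6], [6, 6, 5, 7]]
alpha = cronbach_alpha(items)  # approaches 1.0 when items covary strongly
```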
Relationships between variables
As Table II indicates, all three components of AC were positively correlated with SA in language, with Pearson correlation coefficients ranging from 0.25 to 0.39. The correlations of AP and DC with SA in language were higher than that of UIT with SA in language. A similar pattern appeared with SA in math. Student SES had a significant correlation with SA in language but not in math.
The three variables hypothesized to form the generalized latent variable AC were correlated with each other. The correlation between DC and instructional time (0.75) was higher than the other two correlations (0.60 and 0.65). This result supports our original rationale for creating AC, of which each of these three variables is a key component. Students’ SES was positively related to AP (0.28) but not to DC or instructional time.
Intra-class correlations (ICC) analysis
Table III reports the results of the ICC analysis. ICC(1) is commonly interpreted as the proportion of variance in a target variable that is accounted for by group membership (Bliese, 2000; McGraw and Wong, 1996; Snijders and Bosker, 1999). ICC(2) represents the reliability of the group mean scores and varies as a function of ICC(1) and group size; it tests the homogeneity of perceptions among teachers within a school. For a group-level construct to be reliable, ICC(1) values should be significant and, by convention, ICC(2) values should exceed 0.60 (Cohen et al., 2003). We ran four random-effects ANOVAs for the four variables: AP, DC, UIT, and SL (n=643). The F-test for each of the four observed variables was statistically significant (p<0.001), confirming school-level variability in these variables and suggesting that individual-level analyses would be inappropriate. The ICC(1) values were modest (0.35 for AP; 0.35 for DC; 0.20 for UIT; 0.26 for SL). The ICC(2) values were large (0.64 for AP; 0.63 for DC) or approaching large (0.44 for UIT; 0.45 for SL). Although the within-school agreement for SL and UIT did not exceed the 0.60 threshold recommended by Cohen et al. (2003), these results taken together indicated that aggregating the data to the school level was appropriate.
Confirmatory factor analysis (CFA)
To test H1, that AP, DC, and instructional time would come together to create the general latent construct AC, we conducted a CFA. Because a measurement model for a single latent variable was under-identified, both latent variables were included in one CFA model (see Schumacker and Lomax, 2016). Results support the hypothesis: the factor loading for AP was 0.72, for DC 0.91, and for instructional time 0.83. Results also support the hypothesis that the two indicator variables (percentages of students achieving at Level 3 or greater in math and language) would come together to create the latent dependent variable SA: very high factor loadings for math (0.89) and language (0.91) justify the creation of the latent variable.
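A quick way to see why the one-factor solution for AC is plausible (our own illustrative check, not an analysis from the paper): under a single-factor model with standardized loadings, the model-implied correlation between two indicators is the product of their loadings. Using the loadings above and the observed correlations reported in Table II:

```python
# Sanity check on the one-factor CFA result: implied correlation between
# two indicators = product of their standardized loadings. Loadings and
# observed correlations are the values reported in the paper.

loadings = {"AP": 0.72, "DC": 0.91, "UIT": 0.83}
observed = {("AP", "DC"): 0.65, ("AP", "UIT"): 0.60, ("DC", "UIT"): 0.75}

implied = {
    pair: round(loadings[pair[0]] * loadings[pair[1]], 2)
    for pair in observed
}
# implied: AP-DC 0.66, AP-UIT 0.60, DC-UIT 0.76 -- all close to observed
```

The close match between implied and observed correlations is consistent with the three variables sharing a single underlying factor.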
For the confirmatory factor model, all major fit indices confirm good model fit. First, a significant χ2 value relative to the degrees of freedom indicates that the observed and implied variance-covariance matrices differ (Schumacker and Lomax, 2013). The χ2 statistic was 4.35 and non-significant (p=0.36), indicating that the implied theoretical model adequately reproduces the sample variance-covariance relationships in the matrix. Second, a GFI above 0.95 indicates good fit (Schumacker and Lomax, 2013); in our model, GFI=0.97, so 97 percent of the S matrix is predicted by the reproduced matrix Σ – a good fit.
Third, the NCP was 0.35, which is small (NCP close to 0 indicating perfect fit). Finally, the RMSEA was 0.04, within the acceptable level of model fit (Schumacker and Lomax, 2013; Steiger, 2007). All of the goodness-of-fit indices, in sum, confirmed that our hypothesized model had good model fit.
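The NCP and RMSEA reported above can be recovered from the χ2 statistic. Note the degrees of freedom (df=4) are not stated explicitly in the paper; we infer df from NCP = χ2 − df, so treat it as an assumption:

```python
# Recovering the reported fit indices from chi-square. df = 4 is an
# ASSUMPTION inferred from NCP = chi2 - df = 0.35; n = 70 schools.
import math

def fit_indices(chi2: float, df: int, n: int):
    ncp = chi2 - df                                # non-centrality parameter
    rmsea = math.sqrt(max(ncp, 0.0) / (df * (n - 1)))
    return ncp, rmsea

ncp, rmsea = fit_indices(chi2=4.35, df=4, n=70)
# ncp = 0.35 and rmsea ~ 0.036, consistent with the reported 0.35 and 0.04
```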
Structural equation model
Our structural equation model consisted of:
one independent variable SL;
three observed endogenous indicator variables (AP, DC, and IT) defining the latent variable AC;
two observed endogenous indicator variables (students’ EQAO scores in math and literacy) defining SA, which was treated as the dependent variable; and
one control variable, student SES.
To avoid problems of multicollinearity and suppression due to the high correlations among the three components of AC, the latent mediating variable AC was created from the three observed variables (AP, DC, UIT). SL was hypothesized to have direct effects on AC and indirect effects on SA through AC. The exogenous control variable, SES, was hypothesized to have a direct effect on SA as well as an indirect effect on SA via its effect on AC. The second step in our structural model involved testing this theoretical model. H3 (SL has a significant positive impact on student learning when mediated by AC) was confirmed. AC had a significant direct effect on SA (b=0.39; p<0.05) (see Figure 2 for standardized coefficients). SL had a significant direct effect on AC (b=0.36; p<0.05). SES had a significant direct effect on SA (b=0.27; p<0.05). AC and SES together explained 15.8 percent of the variance in SA, with AC making the larger contribution (13 percent) to the achievement variance in the schools. SES and SL through AC explained 17.0 percent of the variance in SA; SL explained 14.2 percent of the variance in SA through AC, controlling for SES. Our model fit the data well, as indicated by a non-significant χ2 of 6.38 and by several other indices: RMSEA=0.00, NCP=4.96, and GFI=0.98.
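The indirect effects in the path model follow the usual product-of-paths rule: the effect of a predictor on SA through AC is the product of (predictor → AC) and (AC → SA). Using the coefficients reported above:

```python
# Product-of-paths computation of the indirect effects, using the path
# coefficients reported in the paper.

paths = {"SL->AC": 0.36, "SES->AC": 0.27, "AC->SA": 0.39}

sl_indirect = round(paths["SL->AC"] * paths["AC->SA"], 2)    # 0.36 * 0.39
ses_indirect = round(paths["SES->AC"] * paths["AC->SA"], 2)  # 0.27 * 0.39
# sl_indirect = 0.14, matching the reported SL -> AC -> SA estimate;
# ses_indirect = 0.11, close to the reported 0.10 (the paper's
# coefficients are themselves rounded, so small discrepancies are expected)
```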
Additional regression analysis using SPSS 24 found that none of the individual predictor variables (SL, AP, DC, or instructional time) was significantly related to SA when SES was entered into the equation. This result further supports the value of aggregating the three individual variables into the more fundamental (or underlying) variable, AC.
Conclusions and implications
Influenced by the goals and methodological tools reflected in research about AO (e.g. McGuigan and Hoy, 2006), the purpose of this study was to explore the value of another generalized latent variable, AC, as a mediator of school leaders’ influence on SA. Comprising AP, DC, and UIT, AC was initially justified by the focus on academic goals it encourages among teachers, students, and parents, along with the uninterrupted opportunity it provides for students to be meaningfully engaged in academic work for most of their time in class.
Several limitations of this study deserve to be highlighted. Evidence for this study was limited to elementary schools in one Canadian province. The contribution of AC to SA in secondary schools, often with less homogenous cultures (Firestone and Riehl, 2005; Firestone et al., 1984) and in other school systems with different policy and other contextual features remains unknown. So, it would be worthwhile examining AC in secondary school settings and in more diverse policy contexts. A more accurate measure of student SES also would be useful in future studies. As well, because data for the study were aggregated at the school level, individual SA results were not available, ruling out the use of hierarchical linear modeling to examine nested effects.
The study hypothesized that AP, DC, and UIT would form a general latent construct, AC; that AC would explain a significant proportion of the variance in SA, controlling for student SES; and that AC would be a significant mediator of school leaders’ influence on SA. Confirming these three hypotheses, results suggest that AC is a multi-variable construct worth the attention of school leaders as part of their improvement efforts.
Previous studies suggest that in school environments with academic emphasis and that are safe and orderly, teachers trust each other, students, and principals, and are more likely to be efficacious and committed to teaching, students, and schools (e.g. Jurewicz, 2004; McGuigan, 2005). This suggests that there might be other school variables related to AC that could form other latent constructs (such as teacher emotions), and future research could test their mediation effects or refine the construct of AC.
Three types of future research are suggested by the results of this study. The first type includes replicating the findings of this study at different levels of schooling and in different school system contexts. A second type includes the search for other promising latent mediators of leadership effects on student learning. Finally, as with some other strands of leadership research, additional research about AC using quasi-experimental designs would be both useful and not unduly complex to initiate.
Quasi-experimental designs are rarely used in the educational leadership field but the study of mediating variables lends itself to such designs more readily than many other foci. Research of this sort could be incorporated into the ongoing professional development of existing school leaders in their districts. Participants in professional development efforts could be introduced to both the concept of AC and research about leadership practices likely to enhance AC with the expectation of work aimed at improving AC in their schools. Research designs incorporating pre- and post-treatment data collection with participants and samples of similar leaders not included in such professional development would provide much more robust evidence about the causal influence of leaders on AC and the causal influence of AC on SA.
This focus on leadership mediators, in fact, is a much overdue and relatively rare framework for school leaders’ professional development. Within a single district, for example, professional development with mediators as the framework could be provided for half of the school leader population in Year 1, with the other half acting as controls; the control group would be provided with the same (or improved) professional development in Year 2.
The findings of this study also have two important implications for practice. First, results suggest that a school’s academic cultural environment may be a partial “antidote” to the often negative influence of low SES on student learning. This implication is especially relevant for leaders of schools serving a significant proportion of students in challenging circumstances. Second, the significant effects of AC (as with AO) on student learning found in this study offer a more nuanced and role-appropriate version of the common admonition that principals should focus almost all of their efforts directly on the improvement of curriculum and instruction – the common meaning of “instructional leadership.” Such a focus has always been a major challenge for most school leaders and, in our experience, too unrealistic for most secondary school leaders even to attempt. Evidence about the significant effects on SA of the types of variables included in this study offers indirect but powerful routes for school leaders to enhance instructional effects, routes focused more directly on creating supportive cultures and organizational routines. Productive conceptions of effective SL, especially the leadership of those in administrative roles, need to reflect the importance of organizational design and what Hoy and his colleagues refer to as the creation of “enabling bureaucracies” (Hoy and Sweetland, 2000).
Mean, standard deviation, and scale reliability for variables

| Variables | Mean | SD | Reliability | No. of items |
|---|---|---|---|---|
| School leadership (SL) | 5.40 | 0.69 | 0.98 | 6 |
| Academic culture (AC) | | | | |
| Disciplinary climate (DC) | 5.41 | 0.53 | 0.85 | 6 |
| Use of instructional time (UIT) | 5.13 | 0.70 | 0.83 | 10 |
| Academic press (AP) | 4.64 | 0.76 | 0.85 | 5 |
| Student achievement (SA) | | | | |
| Students’ socioeconomic status (SES) | 3.64 | 1.00 | | |
Notes: n=70 elementary schools. Reliability = Cronbach’s α calculated based on 851-856 individual teacher responses
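The reliabilities in the table are Cronbach’s α coefficients computed from individual teacher responses. As a minimal sketch of how α is obtained from a respondents-by-items score matrix (the data below are invented for illustration, not taken from the study):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 respondents rating 3 survey items on a Likert scale
scores = np.array([[4, 5, 4],
                   [3, 3, 2],
                   [5, 5, 5],
                   [2, 3, 3],
                   [4, 4, 5]])
alpha = cronbach_alpha(scores)
```

Values near or above 0.80, as for all scales in the table, are conventionally taken to indicate acceptable internal consistency.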
Relationships between all variables

| Variables | SL | AP | DC |
|---|---|---|---|
| School leadership (SL) | | | |
| Academic press (AP) | 0.50** | | |
| Disciplinary climate (DC) | 0.21 | 0.65** | |
| Use of instructional time (UIT) | 0.24** | 0.60** | 0.75** |
Notes: n=70 elementary schools. *,**Significant at 0.05, 0.01 levels, respectively (two-tailed)
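The two-tailed significance flags in the correlation matrix rest on the standard t-test of a Pearson correlation against zero. A brief sketch (the specific test procedure used in the study is not reported, so this is an illustration of the conventional approach, not a reproduction of the authors’ analysis):

```python
import math

def r_to_t(r: float, n: int) -> float:
    """t statistic for testing a Pearson correlation r against zero, df = n - 2."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

# r = 0.50 between SL and AP with n = 70 schools (df = 68);
# compare against the two-tailed critical value (~2.0 at p = 0.05)
t_stat = r_to_t(0.50, 70)
```

With n = 70, even moderate correlations such as 0.50 yield t statistics well beyond conventional critical values.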
Direct and indirect impacts of school leadership on student learning mediated by academic culture

| Path | Unstandardized estimate | SE | Critical ratio |
|---|---|---|---|
| AC → SA | 0.39 | 0.16 | 2.31 |
| SL → AC | 0.36 | 0.25 | 1.45 |
| SES → AC | 0.27 | 0.11 | 2.48 |
| SES → SA | 0.28 | 0.20 | 1.41 |
| SL → AC → SA | 0.14 | 0.01 | 1.42 |
| SES → AC → SA | 0.10 | 0.06 | 1.43 |
Note: Parameter estimates are statistically significant at p<0.05 when t or z>1.96
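The indirect effects in the table are products of their component paths (e.g., 0.36 × 0.39 ≈ 0.14 for SL → AC → SA). The study’s standard errors presumably come from the SEM software, but a Sobel-style approximation, sketched below, shows how an indirect effect and a critical ratio can be formed from the component estimates; it will not reproduce the table’s SEs exactly:

```python
import math

def sobel(a: float, se_a: float, b: float, se_b: float):
    """Indirect effect a*b and its Sobel-test critical ratio (z)."""
    indirect = a * b
    se = math.sqrt(a ** 2 * se_b ** 2 + b ** 2 * se_a ** 2)
    return indirect, indirect / se

# Component paths from the table: SL -> AC (a = 0.36) and AC -> SA (b = 0.39)
indirect, z = sobel(a=0.36, se_a=0.25, b=0.39, se_b=0.16)
```

As in the table, the product recovers the reported indirect effect of about 0.14, with a critical ratio below the 1.96 threshold.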
The Education Quality and Accountability Office is an arm’s-length crown agency of the Government of Ontario in Canada. This independent agency creates and administers large-scale assessments to measure Ontario students’ achievement in reading, writing and math at key stages of their education. More information can be found at www.eqao.com
More information about this project can be found at http://curriculum.org/LSA/home.shtml
Because the total number of teachers who had the opportunity to respond to these surveys is unknown, the overall response rate cannot be calculated.
The data set on which this study is based is a small portion of a much larger survey data set measuring the results of the Leading Student Achievement project. The results of modeling the multiple relationships among all the variables in the original data set were similar whether the analysis included schools where three or more teachers participated in the survey or schools where five or more teachers participated.
Alig-Mielcarek, J.M. (2003), “A model of school success: instructional leadership, academic press, and student achievement”, unpublished doctoral dissertation, Ohio State University, Columbus, OH.
Bellei, C. (2009), “Does lengthening the school day increase students’ academic achievement? Results from a natural experiment in Chile”, Economics of Education Review, Vol. 28 No. 5, pp. 629-640.
Benda, S.M. (2000), “The effect of leadership styles on the disciplinary climate and culture of elementary school”, PhD dissertation, Widener University, Chester, PA.
Bliese, P.D. (2000), “Within group agreement, non-independence and reliability: implications for data and analysis”, in Klein, K.J. and Kozlowski, S.W.J. (Eds), Multilevel Theory, Research, and Methods in Organizations: Foundations, Extensions, and New Directions, Jossey-Bass, San Francisco, CA, pp. 349-381.
Cohen, J., Cohen, P., West, S.G. and Aiken, L.S. (2003), Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences, 3rd ed., Erlbaum, Mahwah, NJ.
DiPaola, M.F. and Tschannen-Moran, M. (2005), “Bridging or buffering? The impact of schools’ adaptive strategies on student achievement”, Journal of Educational Administration, Vol. 43 No. 1, pp. 60-71.
Fauth, B., Decristan, J., Rieser, S., Klieme, E. and Büttner, G. (2014), “Student ratings of teaching quality in primary school: dimensions and prediction of student outcomes”, Learning and Instruction, Vol. 29, pp. 1-9.
Firestone, W.A. and Riehl, C. (2005), A New Agenda for Research in Educational Leadership, Critical Issues in Educational Leadership Series, Teachers College Press, New York, NY.
Firestone, W.A. and Wilson, B.L. (1984), “Culture of school is a key to more effective instruction”, NASSP Bulletin, Vol. 68 No. 476, p. 7.
Fullan, M., Hill, P. and Crévola, C. (2006), Breakthrough, Ontario Principals’ Council, Toronto.
Goddard, R.D., Hoy, W.K. and Hoy, A.W. (2000), “Collective teacher efficacy: its meaning, measure, and impact on student achievement”, American Educational Research Journal, Vol. 37 No. 2, pp. 479-507.
Hallinger, P. and Heck, R.H. (1996), “Reassessing the principal’s role in school effectiveness: a review of empirical research 1980-1995”, Educational Administration Quarterly, Vol. 32 No. 1, pp. 5-44.
Heck, R.H. and Hallinger, P. (2010), “Collaborative leadership effects on school improvement: integrating unidirectional-and reciprocal-effects models”, The Elementary School Journal, pp. 226-252.
Heck, R.H. and Hallinger, P. (2014), “Modeling the effects of school leadership on teaching and learning over time”, Journal of Educational Administration, Vol. 52 No. 5, pp. 653-681.
Hitt, D. and Tucker, P. (2016), “Systematic review of key leadership practices found to influence student achievement: a unified framework”, Review of Educational Research, Vol. 86 No. 2, pp. 531-569.
Hossler, C., Stage, F. and Gallagher, K. (1988), “The relationship of increased instructional time to student achievement”, Policy Bulletin: Consortium on Educational Policy Studies, Bloomington, IN.
Hoy, W.K. and Sweetland, S.R. (2000), “School bureaucracies that work: enabling, not coercive”, Journal of School Leadership, Vol. 10 No. 4, pp. 524-541.
Hoy, W.K. and Tarter, C.J. (1997), The Road to Open and Healthy Schools: A Handbook for Change, Elementary Edition, Corwin Press, Thousand Oaks, CA.
Hoy, W.K., Hannum, J. and Tschannen-Moran, M. (1998), “Organizational climate and student achievement: A parsimonious and longitudinal view”, Journal of School Leadership, Vol. 8 No. 4, pp. 336-359.
Hoy, W.K., Tarter, J.C. and Hoy, A.W. (2006), “Academic optimism of schools: a force for student achievement”, American Educational Research Journal, Vol. 43 No. 3, pp. 425-446.
Jacob, J.A. (2004), “A study of school climate and enabling bureaucracy in selected New York City public elementary schools”, unpublished PhD dissertation.
Jurewicz, M.M. (2004), “Organizational citizenship behaviors of middle school teachers: A study of their relationship to school climate and student achievement”, unpublished doctoral dissertation, College of William and Mary, Williamsburg, VA.
Leithwood, K. (2012), The Ontario Leadership Framework 2012 with a Discussion of the Research Foundations, Institute for Educational Leadership, Toronto.
Leithwood, K. and Louis, K.S. (2012), Linking Leadership to Student Learning, Jossey-Bass, San Francisco, CA.
Leithwood, K. and Riehl, C. (2005), “What we already know about successful school leadership”, in Firestone, W.A. and Riehl, C. (Eds), A New Agenda: Directions for Research on Educational Leadership, Teachers College Press, New York, NY, pp. 12-27.
Leithwood, K. and Sun, J. (2012), “The nature and effects of transformational school leadership: a meta-analytic review of unpublished research”, Educational Administration Quarterly, Vol. 48, pp. 387-423.
Leithwood, K., Aitken, R. and Jantzi, D. (2006), Making Schools Smarter: Leading with Evidence, Corwin, Thousand Oaks, CA.
Leithwood, K., Patten, S. and Jantzi, D. (2010), “Testing a conception of how leadership influences student learning”, Educational Administration Quarterly, Vol. 46, pp. 671-706.
Little, J.W. (1982), “Norms of collegiality and experimentation: workplace conditions of school success”, American Educational Research Journal, Vol. 19 No. 3, pp. 325-340.
Locke, E., Latham, G., Smith, K. and Wood, R. (1990), A Theory of Goal Setting and Task Performance, Prentice-Hall, Englewood Cliffs, NJ.
Lortie, D. (1975), Schoolteacher: A Sociological Study, University of Chicago Press, Chicago, IL.
McGraw, K.O. and Wong, S.P. (1996), “Forming inferences about some intraclass correlation coefficients”, Psychological Methods, Vol. 1 No. 1, pp. 30-46, doi: 10.1037/1082-989X.1.1.30.
McGuigan, L. and Hoy, W.K. (2006), “Principal leadership: creating a culture of academic optimism to improve achievement for all students”, Leadership and Policy in Schools, Vol. 5 No. 3, pp. 203-229.
McGuigan, L.M. (2005), “The role of enabling bureaucracy and academic optimism in academic achievement growth”, unpublished doctoral dissertation, The Ohio State University, Columbus, OH.
Ma, X. (2003), “Effects of early acceleration of students in mathematics on attitude and anxiety toward mathematics”, Journal for Research in Mathematics Education, Vol. 30 No. 5, pp. 438-464.
Ma, X. and Crocker, R. (2007), “Provincial effects on reading achievement”, The Alberta Journal of Educational Research, Vol. 53 No. 1, pp. 87-109.
Ma, X. and Klinger, D.A. (2000), “Hierarchical linear modelling of student and school effects on academic achievement”, Canadian Journal of Education, Vol. 105 No. 3, pp. 41-55.
Ma, X. and Willms, J.D. (2004), “School disciplinary climate: characteristics and effects on eighth grade achievement”, Alberta Journal of Educational Research, Vol. 50 No. 2, pp. 169-188.
Marburger, D.R. (2006), “Does mandatory attendance improve student performance?”, The Journal of Economic Education, Vol. 37 No. 2, pp. 148-155.
Marks, H.M. and Printy, S.M. (2003), “Principal leadership and school performance: an integration of transformational and instructional leadership”, Educational Administration Quarterly, Vol. 39, pp. 370-397.
Marzano, R.J., Marzano, J.S. and Pickering, D. (2003), Classroom Management that Works, Association for Supervision and Curriculum Development, Alexandria, VA.
Mortimore, P., Sammons, P., Stoll, L., Lewis, D. and Ecob, R. (1988), School Matters: The Junior Years, Open Books, Shepton Mallet.
Murphy, J.F., Weil, M., Hallinger, P. and Mitman, A. (1982), “Academic press: translating high expectations into school policies and classroom practices”, Educational Leadership, Vol. 40 No. 3, pp. 22-26.
OECD (2011), Lessons from PISA for the United States, Strong Performers and Successful Reformers in Education, OECD Publishing, available at: http://dx.doi.org/10.1787/9789264096660-en
OECD (2013), Lessons from PISA 2012 for the United States, Strong Performers and Successful Reformers in Education, OECD Publishing, available at: http://dx.doi.org/10.1787/9789264207585-en
Reynolds, D., Sammons, P., Stoll, L., Barber, M. and Hillman, J. (1996), “School effectiveness and school improvement in the United Kingdom”, School Effectiveness and School Improvement, Vol. 7 No. 2, pp. 133-158.
Rosenholtz, S. (1989), “Effective schools: interpreting the evidence”, American Journal of Education, Vol. 93 No. 3, pp. 352-388.
Roby, D.E. (2004), “Research on school attendance and student achievement: a study of Ohio schools”, Educational Research Quarterly, Vol. 28 No. 1, pp. 1-3, available at: http://search.proquest.com/docview/216183172
Rosenthal, R. (1987), “Pygmalion effects: existence, magnitude, and social importance”, Educational Researcher, Vol. 16, pp. 37-40.
Sammons, P. (1994), “Findings from school effectiveness research: some implications for improving the quality of schools”, in Ribbins, P. and Burridge, E. (Eds), Improving Education: Promoting Quality in Schools, Cassell, London.
Schein, E.H. (2006), Organizational Culture and Leadership, 3rd ed., John Wiley & Sons, New York, NY.
Schumacker, R.E. and Lomax, R.G. (2013), A Beginner’s Guide to Structural Equation Modeling, 4th ed., Routledge, New York, NY.
Shouse, R.C. (1996), “Academic press and sense of community: conflict, congruence, and implications for student achievement”, Social Psychology of Education, Vol. 1 No. 1, pp. 47-68.
Snijders, T.A.B. and Bosker, R.J. (1999), Multilevel Analysis: An Introduction to Basic and Advanced Multilevel Modeling, Sage Publications, Thousand Oaks, CA.
Steiger, J.H. (2007), “Understanding the limitations of global fit assessment in structural equation modeling”, Personality and Individual Differences, Vol. 42 No. 5, pp. 893-898.
Sun, J. and Leithwood, K. (2015), “Direction-setting school leadership practices: a meta-analytical review of evidence about their influence”, School Effectiveness and School Improvement, Vol. 26 No. 4, pp. 499-523, doi: 10.1080/09243453.2015.1005106.
Teddlie, C. and Stringfield, S. (2007), “A history of school effectiveness and school improvement research in the USA focusing on the past quarter century”, in Townsend, T. (Ed.), International Handbook of School Effectiveness and Improvement, Springer, Dordrecht, pp. 131-166.
Tornroos, J. (2005), “Mathematics textbooks, opportunity to learn and student achievement”, Studies in Educational Evaluation, Vol. 31, pp. 315-327.
van Mierlo, H., Vermunt, J.K. and Rutte, C.G. (2009), “Composing group-level constructs from individual-level survey data”, Organizational Research Methods, Vol. 12, pp. 368-392.
Wang, J. (1998), “Opportunity to learn: the impacts and policy implications”, Educational Evaluation and Policy Analysis, Vol. 20 No. 4, pp. 137-156.
Wang, M., Haertel, G.D. and Walberg, H.J. (1993), “Toward a knowledge base for school learning”, Review of Educational Research, Vol. 63 No. 3, pp. 249-294.
Wineburg, S. (1987), “The self-fulfillment of the self-fulfilling prophecy”, Educational Researcher, Vol. 16 No. 9, pp. 28-37.