ELA/literacy assessment: the good, the bad and the ugly

Terry Locke (University of Waikato, Hamilton, New Zealand)

English Teaching: Practice & Critique

ISSN: 1175-8708

Article publication date: 7 September 2015

Citation

Locke, T. (2015), "ELA/literacy assessment: the good, the bad and the ugly", English Teaching: Practice & Critique, Vol. 14 No. 2. https://doi.org/10.1108/ETPC-07-2015-0060

Publisher: Emerald Group Publishing Limited


Article Type: Editorial. From: English Teaching: Practice & Critique, Volume 14, Issue 2

The focus for this issue of English Teaching: Practice & Critique was assessment practice in the context of the English/literacy classroom, and the many forms it takes, for good or for ill. In framing the rationale for the issue, we mentioned Alison Wolf’s assertion that assessment is the most powerful single influence on teaching and learning (Wolf, 1995). This statement, made 20 years ago, is certainly true in today’s climate, where English teachers in a range of settings are finding ways of weathering standardized testing regimes. This theme was explored in a number of contributions to Volume 13, Number 1 of this journal, on the theme of “English curriculum in the current moment: Tensions between policy and professionalism”. It was also a recurrent theme at the recent joint IFTE/CEE Conference at Fordham University in July of this year.

There are a number of reasons why Wolf’s assertion makes sense. Assessment in its various forms is so pervasive in our classrooms and in society at large that it is hard to imagine schooling without it. As a species, we must have figured out a long time ago that, to survive, we had to engage in the exercise of judgment around the commensurability of an action or product with such criteria as utility, mastery, taste and ethical rightness. There have always been standards and standard-setters.

What does change is the balance of power: between those who are measured and those who measure them, and between those who measure and the forces that determine the assessment regimes (technologies in a Foucauldian sense) they work within. In simple terms, as Harlen (2006, p. 103) puts it, the normalized purpose of assessment is “to help learning and to summarize what has been learned”. She goes on to offer a definition of assessment that she regards as non-problematic: “It is generally agreed that assessment in the context of education involves deciding, collecting and making judgments about evidence relating to the goals of the learning being assessed” (Harlen, 2006, p. 103).

We invited prospective contributors to this issue to problematize this definition with evidence-based or conceptual studies that explored such questions as:

Q1. How are the learning goals that form the basis of assessment determined and by whom?

Q2. How do the technologies of assessment themselves initiate or define learning goals and the instruction that follows from them?

Q3. What role does the professional knowledge of the teacher play in the determination of these goals?

Q4. How is assessment “evidence” determined and who constructs this process?

Q5. What kinds of assessment evidence are deemed to be valid and by whom?

Q6. What role does data play in the assessment process and what data counts?

Q7. How is assessment practice being constructed by extrinsic accountability technologies/systems?

Q8. How are criteria established in relationship to the exercise of qualitative judgment in the various aspects of English/literacy as a field of knowledge?

Q9. In whose interests are these qualitative judgments made?

Q10. What role do students have in determining the assessment processes they are subject to?

In their article on “Assessing the field: students and teachers of writing in high-stakes literacy testing in Australia”, as if taking their cue from Wolf (1995), Frawley and McLean Davies begin by drawing attention to the form of standardized assessment that currently prevails in the Australian educational context, namely NAPLAN (National Assessment Program – Literacy and Numeracy), which tests all students at Years 3, 5, 7 and 9. Through a close examination of the writing component of NAPLAN testing, the authors explore the interface between such high-stakes testing, the construction of disciplinary knowledge (particularly in relation to writing) and pedagogical practice in subject English. They argue that at a time of curriculum instability, and in the absence of a clear literacy policy, high-stakes assessment practices are driving the way in which writing is constructed and taught, and reshaping teachers’ disciplinary and pedagogical content knowledge, with flow-on effects on their students.

In contrast to Frawley and McLean Davies’ contribution, the article by Manuel and Carter, “‘I had been given the space to grow’: An innovative model of assessment in subject English in New South Wales, Australia”, presents a positive example of high-stakes assessment in English. Like Frawley and McLean Davies, they begin by commenting on the way in which national and international standardized testing regimes have come to dominate the educational landscape in literacy and numeracy. These writers take the position that, because of subject English’s focus on the aesthetic, the expressive and the imaginative, it does not lend itself to the one-size-fits-all ways of measuring student achievement that tend to narrow the curriculum. The focus of their article is the English Extension 2 course, available to senior students within the New South Wales English curriculum. This course allows students to direct their own learning in producing artifacts that are the outcome of a rigorous process of self-chosen inquiry. For Manuel and Carter, the model of assessment underpinning this course challenges the definition of assessment from Harlen (2006) cited earlier, and the normative models of assessment that characterize standardized regimes, especially because of its focus on process as well as product. In the context of their article, they discuss the theoretical, philosophical and practical dimensions of the English Extension 2 course as a way of arguing for its uniqueness as an assessment model that encapsulates what, at its best, the subject has to offer. In doing so, they call on the voices of both teachers and students to support their case.

Bethan Marshall and Sue Brindley have researched and written about assessment in English for a number of years (Marshall, 2011). Their contribution to this issue, based on a case study in England, is entitled “‘Resisting the rage for certainty’: Dialogic assessment: A case study of one secondary English subject classroom in the UK”. As they state in the introduction, the article is focused on one teacher’s attempt to teach dialogically. In some ways, the article is a work in progress, an exploration of what dialogic teaching might look like and why it might be viewed as a way of practicing effective formative assessment. Drawing on the theories of Vygotsky and Bakhtin, the authors discuss ways in which non-hierarchical talk and dialogue can be productive of effective learning (in contrast to monologic classroom practices). The case-study teacher in question (Jonathan) was involved in a project led by Sue Brindley (CamTalk) to develop a resource to support secondary-school dialogic teaching, learning and assessment. Based on a narrative exploration (utilizing a range of data) of a number of key incidents from Jonathan’s teaching, practices such as cumulative dialogue, questioning, pace and timing, student listening and teacher directedness are problematized and discussed (especially in the context of the long shadow of an end-of-year, high-stakes examination). Marshall and Brindley conclude their article with some key points of practice (the role of the teacher; the classroom culture; implications for planning) and a provisional model of what dialogic assessment might look like.

The article by Lee, Mak and Burns, from the Hong Kong context, is also concerned with formative assessment, and is entitled “Bringing innovation to conventional feedback approaches in EFL secondary writing classrooms: A Hong Kong case study”. Like Marshall and Brindley’s article, this is a case study, with a focus on just two EFL teachers who implemented a range of innovative feedback approaches in their grade nine writing classrooms. The authors argue a case for the relative ineffectiveness of such conventional feedback strategies as single-draft marking and a focus on error correction. Responding to a relative lack of research on feedback in the EFL context that takes account of cultural factors, the authors invited teachers to include focused (as opposed to exhaustive) written corrective feedback (WCF), coded WCF, comment-only feedback (with a focus on rubric-based criteria) and peer evaluation in their repertoire of strategies. These strategies were new to both teachers. A range of data was collected, including teacher interviews, student questionnaires, focus-group interviews with students, pre- and post-intervention writing tests and classroom observations. In this small study, findings indicated that these approaches enhanced both student motivation and writing performance. Lee et al. conclude that focused WCF is especially helpful for low-proficiency EFL students, and that it can be desirable to defer the reporting of scores as a way of enhancing student motivation.

The contribution to this issue from Mehrabi Boshrabadi and Biria comes from the Iranian context, and focuses on a topic that concerns all teachers of English: the selection and evaluation of learning materials. Their article, entitled “Towards developing a multi-aspectual framework for systematic evaluation of locally prepared ELT materials”, begins by drawing attention to the way in which the inappropriate development of English language teaching (ELT) materials can have a detrimental effect on language learning as constructed by locally developed curricula. As the writers point out, in contrast to their L1 English-teaching colleagues, EFL teachers are limited in terms of their choice of reading/learning materials. EFL teachers are also more likely to be textbook dependent. In the study reported here, an attempt was made to develop a principle-driven, multi-aspectual evaluative framework, which was applied to materials commonly used by Iranian high-school students. In total, 120 high-school students, 60 ex-students and 30 EFL teachers were selected as participants and completed a 30-item questionnaire aimed at gauging the pedagogical effectiveness of various English textbooks, especially the Right Path to English series. Focus-group interviews were conducted as a source of triangulation to enhance the credibility and dependability of the participants’ evaluations. Findings suggested a discrepancy between the textbooks prescribed by the Iranian Ministry of Education and the needs of students, with important implications for EFL teachers, policy-makers and textbook developers.

Chien’s article on “College students’ awareness in organizational strategy use in English writing: A Taiwan-based study” is effectively an article in dialogue, focused as it is on the teaching of writing in English in the Taiwanese context. Locating itself in the field of contrastive rhetoric studies, the article focuses on the concept of culturally constructed organizational patterns and investigates the commonplace claim that second language (L2) writers are characterized by implicit, culturally driven presuppositions and values about academic writing in the first language (L1), which they tend to transfer to their academic writing in English. His study investigated the organizational strategies deployed by EFL students in their English writing in a number of Taiwanese universities. Data were collected from 50 high- and 50 low-achieving EFL students’ and 50 native English speakers’ (NESs’) written texts and complemented with interview data from the EFL students and their teachers. Based on an analysis of the text data, it was found that high-achieving EFL students and NESs were comparable in their deployment of a range of rhetorical strategies and textual features, and differed only in the presentation of background/contextual information. However, it was found that low-achieving EFL students differed markedly from high-achieving EFL students and NESs in a number of rhetorical strategies and textual features deployed. In addition, while the findings suggested that cultural differences did occur, Chinese writers’ English organizational strategies were in part influenced by their experiences as writers and by their teachers’ instructional practices. Indeed, there was considerable diversity in relation to flexibility and experience within the cultural group studied, which has implications for the teaching of writing in comparable settings.

A second article in dialogue in this issue, by Sadeghi and Richards, is also situated in the Iranian context and deals with “Teaching spoken English in Iran’s private language schools: Issues and options”. In some ways, in its focus on oral language, this article resonates with Marshall and Brindley (in this issue), but without a focus on the role of dialogue in the service of formative assessment. In this case, the focus is on the situation in Iran, where students have limited opportunities to develop skills in spoken English in the public school system, and turn to private providers (institutes), which advertise courses in English “conversation”. The article reports on a study in which 89 institute teachers completed a questionnaire on how they approached the teaching of spoken English. Questionnaire data were complemented with interviews and classroom observations. Findings suggest that the institutes’ spoken English courses reflect a limited understanding of oral interaction (as narrowly focused discussion skills) and fail to address major aspects of how to conduct a conversation. The authors suggest that one way of addressing this conceptual and pedagogical issue is to redesign learning based on a distinction between conversation and discussion. In the context of the article, they provide an overview of what conversation and discussion are and their identifying features. They also suggest the utilization of out-of-class opportunities as a means of enhancing the learning of spoken English.

As editor of this issue, I find that these seven articles, taken collectively, give rise to a number of reflections. The first is the strength of the centripetal power at work in standardized assessment regimes to produce (in a Bakhtinian sense) a uniform language for constructing and regulating the disciplinary practices of subject English teachers. We see this in Frawley and McLean Davies’ contribution. While the English Extension 2 course described by Manuel and Carter might be viewed as a kind of oasis in a desert of standardization, it will clearly not be immune from upward pressures from NAPLAN operating to reshape English teachers’ disciplinary knowledge in undesirable ways. Even the dialogic assessment practices described by Marshall and Brindley are not immune from high-stakes examinations, with their own ways of legitimizing certain kinds of response to text and not others. Having said that, of course, it is important for the health of subject English and English studies that beacons of exemplary practice are identified, trialed and evaluated as roads that, in the current milieu, are less taken.

A second reflection is that, in a strange kind of irony, in a number of contexts there may be more room for innovation in EFL than in L1 English classrooms. One purpose of this journal’s publishing articles from both contexts is the potential for a study from one to illuminate aspects of the other. For example, Sadeghi and Richards’ study from the Iranian context reminds all of us of the importance of oral language development and of ways in which oral language pedagogy can be productively conceptualized. At the same time, there is much in Marshall and Brindley’s single-teacher case study that suggests ways in which EFL teachers might review the way they think about the utilization of dialogue for a range of purposes in their classrooms. In another example, Chien’s study reminds us of the dangers of drawing on contrastive analysis theory to simplify the cultural dispositions non-“native” writers of English bring to acts of composition. In addition, Mehrabi Boshrabadi and Biria remind us of the importance of evaluating and selecting learning materials for both L1 and EAL students. However, an implication of their study is that EFL teachers may need to take a leaf out of the book of L1 English teachers, who at their best devise, in a principled way, materials customized to the backgrounds and needs of their students.

Terry Locke

University of Waikato, Hamilton, New Zealand

References

Harlen, W. (2006), “On the relationship between assessment for formative and summative purposes”, in Gardner, J. (Ed.), Assessment and Learning, Sage, London, pp. 103-118.

Marshall, B. (2011), Testing English: Formative and Summative Approaches to English Assessment, Continuum, London.

Wolf, A. (1995), Competence-Based Assessment, Open University Press, Buckingham.
