Search results

1 – 10 of over 2000
Article
Publication date: 22 August 2023

Nicole Brownlie, Katie Burke and Luke van der Laan

Abstract

Purpose

The current literature on school teacher-created summative assessment lacks a clear consensus regarding its definition and key principles. The purpose of this research was therefore to arrive at a cohesive understanding of what constitutes effective summative assessment.

Design/methodology/approach

Conducting a systematic literature review of 95 studies, this research adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. The objective was to identify the core principles governing effective teacher-created summative assessments.

Findings

The study identified five key principles defining effective summative assessment creation: validity, reliability, fairness, authenticity and flexibility.

Research limitations/implications

The expansiveness of education research is such that not all relevant studies may have been identified, particularly outside of mainstream databases. This study considered only the school environment, so contextual limitations will exist.

Originality/value

To the best of the authors’ knowledge, this study contributes original insights by proposing a holistic definition that can facilitate consensus-building in further research. The assimilation of core principles guided the development of quality indicators beneficial for teacher practice. The comprehensive definition, key principles and quality indicators offer a unique perspective on summative assessment discourse.

Open Access
Article
Publication date: 23 October 2020

Ben Alexander, Sean Owen and Cliff B. Thames

Abstract

Purpose

This post hoc observational study attempted to determine whether career and technical education (CTE) students in the state of Mississippi would academically benefit from taking multiple formative assessments in an online format prior to completing their summative exams. Most CTE students in Mississippi are required to take an end-of-course exam cataloged as the Mississippi Career and Planning Assessment System (MS-CPAS). Previously, MS-CPAS test score results did not affect school-wide accountability scores, but in recent years some of the guidelines were changed so that these summative test scores now play a vital role in school accountability and rankings.

Design/methodology/approach

This study examines both formative and summative online exam scores for more than 13,000 students who took an MS-CPAS assessment in the 2018 and 2019 school years.

Findings

The results of this study revealed that there were significant differences in summative exam scores for students who took two online formative practice tests when compared to groups of students who did not take any formative practice tests. This study also illustrated a positive correlation between those students' final online practice test scores and their summative exam scores.

Originality/value

These results would prove very beneficial to both CTE teachers and directors in helping them understand the benefits of introducing formative practice tests into their programs to boost student understanding.

Details

Asian Association of Open Universities Journal, vol. 15 no. 3
Type: Research Article
ISSN: 1858-3431

Book part
Publication date: 11 August 2021

Shannon Stuart and Tia Schultz

Abstract

This chapter provides evidence-based assessment techniques for students with autism spectrum disorder (ASD). An overview of formative and summative assessment, innovative formative assessment strategies for students with ASD, and innovative summative assessment strategies for students with ASD are included. Discussion includes case studies and clear examples of how technology can support the assessment process. Practitioners may combine the assessment supports presented in this chapter because each support addresses more than one characteristic or need.

Details

Traditional and Innovative Assessment Techniques for Students with Disabilities
Type: Book
ISBN: 978-1-83909-890-1

Open Access
Article
Publication date: 22 February 2024

Daniele Morselli

Abstract

Purpose

This article focuses on the assessment of entrepreneurship competence by selected vocational teachers in Italy. The exploratory research question addresses the extent to which entrepreneurship assessments are competence based, and the research seeks to identify fully fledged assessment programmes with both a formative and summative component, and the use of assessment rubrics. It also explores the extent to which entrepreneurship competence is referred to in school documentation and later assessed, and the tools and strategies used for such assessment.

Design/methodology/approach

This case study is part of a larger European research project promoted by Cedefop; in Italy it focused on six selected vocational IVET and CVET programmes and apprenticeship schemes. It used a wide range of instruments to ensure triangulation and multiple perspectives: analysis of policy documents and online interviews with experts and policy makers. At VET providers' premises it deployed analysis of school documents; observations of learning environments; and interviews and focus groups with teachers, directors and vice directors, learners and alumni (in schools) and with instructors, company tutors, employers, apprentices and alumni (in companies).

Findings

Assessment tasks were rarely embedded within fully fledged assessment programmes involving both formative and summative tasks and assessment rubrics for grading. Most of the time, entrepreneurship programmes lacked self-assessment, peer assessment and structured feedback, and did not involve learners in the assessment process. Some instructors coached the students but undertook no clear formative assessment. These findings suggest that the institutions have a testing culture with regard to assessment, at the level of both policy and practice. In most cases, entrepreneurship competence was not directly assessed, and learning outcomes were only loosely related to entrepreneurship.

Research limitations/implications

One limitation concerned the selection of the VET providers: these were chosen not at random, but because they ran programmes relevant to the development of entrepreneurship competence.

Practical implications

At the policy level, there is a need for new guidelines on competence development and assessment in VET, guidelines that are more aligned with educational research on competence development. To ensure the development of entrepreneurship competence, educators need in-service training and a community of practice.

Originality/value

So far, the literature has concentrated on entrepreneurship education at the tertiary level. Little is known about how VET instructors assess entrepreneurship competence. This study updates the picture of policy and practice in Italy, illustrating how entrepreneurship competence is developed in selected IVET and CVET programmes and apprenticeships.

Details

Education + Training, vol. 66 no. 10
Type: Research Article
ISSN: 0040-0912

Article
Publication date: 11 September 2017

Kristal Curry and Doug Smith

Abstract

Purpose

The purpose of this paper is to present results from three years of a longitudinal “Assessment Attitudes and Practices” survey collected from a large school district in the Southern USA.

Design/methodology/approach

This paper focuses on both formative and summative “assessment practices” results from secondary (middle and high school) social studies teachers.

Findings

There was no statistically significant difference between secondary social studies teachers’ use of assessments and that of secondary teachers in other disciplines, nor was there a statistically significant difference in assessment use by year. Results were ranked by how often teachers reported using each assessment practice and discussed in terms of the assessment practices recommended by the NCSS. Social studies teachers in this study were more likely to report using assessments of knowledge (including selected-response items) than performance-based assessment techniques (such as authentic assessments).

Research limitations/implications

The lack of statistically significant differences in assessment practices along disciplinary lines indicates homogeneity in the use of assessments that does not do justice to social studies.

Practical implications

Whether or not they used Common Core standards or taught in a 1:1 technological environment, teacher respondents reported using essentially the same assessments, perhaps because the high-stakes assessments did not change.

Social implications

There is a need for professional development that helps teachers see how performance-based assessments can be used to boost student performance on high-stakes assessments.

Originality/value

Studies of actual assessment practices (as opposed to ideas about how teachers should assess) are still quite rare, and they provide a helpful window into what is actually happening inside schools.

Details

Social Studies Research and Practice, vol. 12 no. 2
Type: Research Article
ISSN: 1933-5415

Open Access
Article
Publication date: 6 February 2024

Tiprawee Tongtummachat, Attasak Jaree and Nattee Akkarawatkhoosith

Abstract

Purpose

This article presents our experience in implementing the assessment for learning process (AfL) to enhance the teaching–learning quality, which has faced numerous challenges impacting educational quality. The effectiveness of this technique is demonstrated through a case study conducted in a core course of chemical engineering.

Design/methodology/approach

The article shares insights into the systematic course design and planning processes that were discussed and developed through AfL practices. Significant emphasis is placed on implementing formative and summative student self-assessment surveys as simple yet effective methods to meet this purpose. Quantitative data were collected and analyzed over three consecutive academic years (2020–2022) using various statistical parameters such as percentage, interquartile range and the program’s numerical goal (%G).

Findings

The AfL process, implemented via formative and summative surveys, significantly and effectively improved teaching–learning quality. These findings help educators identify appropriate teaching methods and recognize areas of weakness and strength, thereby facilitating continuous improvement in teaching–learning quality. Validation methods, including quizzes and numerical grades, were employed to verify in practice the outcomes obtained from the questionnaires.

Practical implications

The AfL techniques demonstrated in this study can be directly implemented or adapted for various educational fields to enhance the teaching–learning quality.

Originality/value

The practical implementation of AfL in an engineering context has rarely been reported, particularly in chemical engineering. This work demonstrates a practical implementation of AfL to enhance education in the engineering field.

Details

Journal of Research in Innovative Teaching & Learning, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2397-7604

Book part
Publication date: 3 December 2013

Abstract

Evaluation is the process by which we estimate how things should go, explore how things are going, and determine how things went in terms of course redesign. In this chapter, we examine formative and summative methods for assessing student learning and establishing teacher effectiveness and course quality. Evaluation is a subjective, value-laden process. To introduce the rigor needed to make it meaningful, evaluation should be multifaceted, planned in advance, made transparent to learners, and employ valid and reliable methods. Moving courses online presents both opportunities and challenges for evaluation. We explore ways to implement assessment to make full use of the advantages of technology while mitigating the problems associated with online delivery.

Details

Redesigning Courses for Online Delivery
Type: Book
ISBN: 978-1-78190-691-0

Open Access
Article
Publication date: 11 April 2023

Lolowa Almekhaini, Ahmad R. Alsuwaidi, Khaula Khalfan Alkaabi, Sania Al Hamad and Hassib Narchi

Abstract

Purpose

The Computer-Assisted Learning in Pediatrics Program (CLIPP) and the National Board of Medical Examiners Pediatric Subject Examination (NBMEPSE) are used to assess students’ performance during the pediatric clerkship. The International Foundations of Medicine (IFOM) assessment is organized by the NBME and taken before graduation. This study explores the ability of the CLIPP assessment to predict students’ performance in their NBMEPSE and IFOM examinations.

Design/methodology/approach

This cross-sectional study assessed correlation of students’ CLIPP, NBMEPSE and IFOM scores. Students’ perceptions regarding NBMEPSE and CLIPP were collected in a self-administered survey.

Findings

Among the 381 students enrolled, CLIPP, NBMEPSE and IFOM examination scores did not differ significantly between genders. The correlation between CLIPP and NBMEPSE scores was positive in both junior (r = 0.72) and senior (r = 0.46) clerkships, with a statistically significant relationship between them in a univariate model. Similarly, there was a statistically significant relationship between CLIPP and IFOM scores. In an adjusted multiple linear regression model that included gender, CLIPP scores were significantly associated with NBMEPSE and IFOM scores, and male gender was a significant predictor. Survey results reflected students’ satisfaction with both the NBMEPSE and CLIPP examinations.

Originality/value

Although students did not perceive a positive relationship between their performances in the CLIPP and NBMEPSE examinations, this study demonstrates the predictive value of formative CLIPP examination scores for future performance in both the summative NBMEPSE and IFOM examinations. Therefore, students with poor performance in CLIPP are likely to benefit from feedback and remediation in preparation for summative assessments.

Details

Arab Gulf Journal of Scientific Research, vol. 42 no. 2
Type: Research Article
ISSN: 1985-9899

Article
Publication date: 16 January 2023

Joseph S. Nadan, Abram Walton, Behzad Tabaei, Charles Edward Bryant and Natalie Shah

Abstract

Purpose

This paper aims to propose an innovative method for deploying a personalized, instructor-created, software-aided assessment system that will disrupt traditional learning environments by allowing students, confidentially and with indirect supervision from the instructor, to assess their knowledge and ability to achieve the course outcomes.

Design/methodology/approach

Through empirical evaluation in real-world educational settings, the authors examine the impact of augmenting human activity in the classroom with an innovative software platform to transform the learning process.

Findings

Findings indicate that this software-aided assessment system effectively augments human interactivity by providing timely instructor-designed feedback to increase knowledge retention and skillsets.

Practical implications

This study has shown that incorporating disruptive innovation through software-aided assessment systems increases the effectiveness of faculty in the classroom and enhances student learning and retention. Thus, a transformative software-aided assessment system design that incorporates artificial intelligence into the learning pathway should be pursued. These software-aided assessments are a disruptive innovation, as they are formative, frequent and require little direct involvement from the instructor.

Originality/value

To the best of the authors’ knowledge, this study is the first of its kind to incorporate artificial intelligence into the assessment process by analyzing the results of pilot programs at several universities. The results demonstrate how the use of software-aided transformative assessments in various courses has helped instructors assess students’ preparedness and track their learning progress. These software-aided systems are a first step in bringing disruptive innovation to the classroom, as these assessment instruments rapidly gauge learners’ knowledge and skills through short, easily created, multiple-choice tests with little direct engagement from the faculty.

Book part
Publication date: 22 August 2022

Sam Elkington

Abstract

Universities across the globe have had to rethink how the significant resources devoted to learning, teaching, and assessment might be reconfigured to better support student learning across different modes of delivery. A focus on ‘flexibility’ in assessment arrangements supports the need to be responsive to the requirements of a changing and increasingly uncertain higher education landscape across Africa and elsewhere in the world. This chapter explores how professors and lecturers in higher education can deliver effective assessment processes that meet the demands of online and blended learning environments. Flexibility in assessment is about responding to students’ individual learning needs as well as the needs of the curriculum. The key is making assessment relevant to the students. The proliferation of learning technologies and tools, coupled with the increasing diversification of student profiles and pathways through programmes, provides the context for developing flexible assessment. Here technology is a key enabler for personalised and active blended learning experiences. This chapter considers practical ideas and strategies for inclusive, authentic, and flexible assessment task design, delivering effective feedback, and ensuring quality and consistency within assessment processes – all of which are relevant in an era of COVID-19 pandemic disruptions in higher education.

Details

The Emerald Handbook of Higher Education in a Post-Covid World: New Approaches and Technologies for Teaching and Learning
Type: Book
ISBN: 978-1-80382-193-1
