Search results

1 – 10 of over 125,000
Book part
Publication date: 4 February 2008

Pamela A. Moss

Abstract

This chapter, completed in 1999, provides an overview and critical analysis of the validity research agenda undertaken by the National Board for Professional Teaching Standards (NBPTS) for its assessment to certify accomplished teachers at the end of its first decade of assessment development and implementation. The review is presented in three major sections: (a) an overview of the validity criteria underlying the review; (b) a description of the National Board's research agenda presented in its own terms, focusing first on the studies that were routinely carried out for each certificate and second on the “special studies” that were not part of the routine agenda; and (c) a series of six critical observations and explanations based on the validity issues described in the first section.

Details

Assessing Teachers for Professional Certification: The First Decade of the National Board for Professional Teaching Standards
Type: Book
ISBN: 978-0-7623-1055-5

Article
Publication date: 21 March 2023

Anne Reed

Abstract

Purpose

This paper examines the innovative potential of micro-credentials, which, arguably, would be compromised if not for a particular attribute of the digital format: evidence. Evidence allows an artifact of learning (e.g. a project or writing sample) to be included in a digital micro-credential. Micro-credentials that include evidence can support individualized learning, elucidate learners' qualifications, and make assessment and credentialing processes more inclusive.

Design/methodology/approach

This conceptual paper explores the subject of higher education micro-credentials, which are increasingly being offered as formal (albeit smaller and digital) credit-bearing credentials, far removed from the Open Digital Badge movement from which they originated. This paper presents a case for safeguarding the qualities of micro-credentials that allow for innovative practice, before micro-credentials become entirely subsumed into conventional assessment and credentialing practices.

Findings

A review of the literature indicates that evidence, when used effectively, can support the innovative potential of micro-credentials. This subject is examined from the perspective of three categories of evidence, which are identified and illustrated through specific examples from the literature.

Originality/value

This paper fulfills a need to address the features of micro-credentials that, if used effectively, can challenge traditional assessment and credentialing paradigms. Evidence is rarely discussed in the literature and has not been thoroughly examined from this perspective. Additionally, faculty who develop and implement micro-credentials face numerous challenges when attempting to include evidence in micro-credentials. This paper explores those challenges and offers several recommendations for practice.

Details

The International Journal of Information and Learning Technology, vol. 40 no. 5
Type: Research Article
ISSN: 2056-4880

Article
Publication date: 7 June 2021

Yoon Jeon (YJ) Kim, Yumiko Murai and Stephanie Chang

Abstract

Purpose

As maker-centered learning grows rapidly in school environments, there is an urgent need for new forms of assessment. The purpose of this paper is to report on the development and implementation of tools that support embedded assessment of maker competencies within school-based maker programs and to describe assessment approaches that offer alternatives to rubrics and portfolios.

Design/methodology/approach

This study used a design-based research (DBR) method, with researchers collaborating with US middle school teachers to iteratively design a set of tools that support implementation of embedded assessment. Based on teacher and student interviews, classroom observations, journal notes and post-implementation interviews, the authors report on the final phase of DBR, highlighting how teachers can implement embedded assessment in maker classrooms as well as the challenges that teachers face with assessment.

Findings

This study showed that embedded assessment can be implemented in a variety of ways and that flexible, adaptable assessment tools can play a crucial role in supporting teachers in this process. Additionally, though teachers expressed a strong desire for student involvement in the assessment process, the authors observed minimal student agency during implementation. Further study is needed to investigate how establishing classroom culture and norms around assessment may enable students to participate fully in assessment processes.

Originality/value

Due to the dynamic and collaborative nature of maker-centered learning, teachers may find it difficult to provide on-the-fly feedback. By employing an embedded assessment approach, this study explored a new form of assessment that is flexible and adaptable, allowing teachers to formally plan ahead while also adjusting in the moment.

Details

Information and Learning Sciences, vol. 122 no. 3/4
Type: Research Article
ISSN: 2398-5348

Book part
Publication date: 9 August 2012

Brian Patrick Green and Guangcheng Wang

Abstract

Most universities have relied on student evaluations as a source of evidence in their assessment of teaching performance. However, a complete evaluation of all dimensions of a faculty member's teaching requires multiple sources of evidence. The purpose of this chapter is to identify the sources of evidence that accounting chairs report they currently use to assess teaching. Calderon and Green first examined this issue in their 1997 study. However, their results may be outdated due to changes in accreditation requirements, teaching delivery methods, and the continued evolution of assessment tools. Responding department chairs report that peer observation is the most frequently used type of evidence, followed by course syllabi, in-class exams, and instructor course notes, with an average of 3.16 sources used beyond student evaluations. The source and quantity of evidence vary across different types of institutions. While Calderon and Green reported that most schools use ad hoc and subjective sources of evidence, respondents in this study focus more on instructor-supplied materials and direct evidence from inside the classroom.

Details

Advances in Accounting Education: Teaching and Curriculum Innovations
Type: Book
ISBN: 978-1-78052-757-4

Article
Publication date: 1 February 2001

Jeanette Purcell

Abstract

The development of competence-based assessment in the UK has been strongly influenced by the introduction, in the 1980s, of National Vocational Qualifications (NVQs) and Scottish Vocational Qualifications. The introduction of these qualifications has raised the profile of competence-based assessment and, arguably, its credibility. But it has also created some misconceptions. Attempts to centralise and prescribe criteria and processes have stifled innovation and have restricted the wider application of, and involvement in, competence-based assessment, particularly at the higher levels. This article describes the background to competence-based assessment and NVQs and identifies some of the misconceptions that exist in this area. Taking the Association of Accounting Technicians as a case study, the article aims to correct these misconceptions and demonstrate the real potential of competence-based assessment in vocational and professional contexts.

Details

Education + Training, vol. 43 no. 1
Type: Research Article
ISSN: 0040-0912

Book part
Publication date: 30 September 2019

Brad A. Schafer and Jennifer K. Schafer

Abstract

This chapter examines whether professional auditors’ affect toward client management influences fraud likelihood judgments and whether accountability and experience with fraud risk judgments moderate this effect. This research also explores the process by which affect influences fraud judgments by examining affect’s influence on the evaluation of fraud evidence cues. Results indicate that more positive affect toward the client results in lower fraud likelihood judgments. Accountability is found to moderate this effect, but only for experienced auditors. These findings have implications for fraud brainstorming sessions, where all staff levels provide input into fraud risk assessments and where client characteristics are especially salient. Importantly, results also support the proposition that affect impacts inexperienced auditors’ fraud assessments through errant attribution of client likability to evidence cues that refer to management, rather than by biasing all client-related evaluations. Together, these findings suggest that education and training can be improved to better differentiate relevant and irrelevant cues in fraud judgment.

Details

Advances in Accounting Behavioral Research
Type: Book
ISBN: 978-1-83867-346-8

Article
Publication date: 4 February 2014

Frank Cervone

Abstract

Purpose

Issues related to usability and creating effective and engaging user experiences on the internet continue to vex libraries and information agencies. Many organizations still do not have an on-going, sustainable usability assessment program in place. This should be a cause for concern because usability programs serve as a quality-control check on our ability to provide quality information. This is why evidence-based information practice is so important as a fundamental building block of a usability assessment plan. This paper aims to address these issues.

Design/methodology/approach

Through a review of the basic principles of evidence-based practice, the author shows how a web usability assessment methodology could be put in place at a library or information organization.

Findings

Using the principles of evidence-based practice, as well as a user-centered design perspective, can greatly enhance the ability of libraries and information organizations to develop effective web usability assessment programs.

Originality/value

While there has been a significant body of work in library and information science related to implementing evidence-based practice (EBP), there has been little specifically written about applying EBP to web usability assessment. This article fills that gap in the literature.

Details

OCLC Systems & Services, vol. 30 no. 1
Type: Research Article
ISSN: 1065-075X

Article
Publication date: 3 April 2017

Judy R. Wilkerson

Abstract

Purpose

Understanding and navigating the differences in standards, and the roots and rationales underlying accreditation reviews, is necessary for all institutions that seek multiple accreditations. The purpose of this paper is to demonstrate a method that helps institutional-level leaders and assessment practitioners analyze and align these differences in various national or international agency requirements and develop a framework for assessment and data collection. The proposed method is demonstrated using multiple accreditors’ standards from the USA.

Design/methodology/approach

Guided by a set of process questions, a review and content analysis of US national standards and the requirements of 12 accreditation agencies was conducted using web-based documentary sources. An operational definition of institutional quality was derived from the core themes that emerged. Examples of evidence matched to each core theme were outlined to suggest an assessment framework. The 12 US agency requirements were compared and contrasted with the core themes and validated.

Findings

In the USA, recognition requirements set by two national bodies, the US Department of Education and the Council for Higher Education Accreditation, drive the standards applied by the various agencies that accredit institutions and programs. Six themes emerged from their requirements, serving as a core framework for designing institutional assessment systems. The themes are student achievement and continuous improvement; curriculum quality; faculty; facilities, equipment and supplies; fiscal and administrative capacity; and student support services, admissions and information-gathering systems. While the 12 sampled accreditation agencies generally used these core themes, divergences were found in how they treated the themes in their published requirements.

Practical implications

Where multiple US or other accreditations are sought, the approach recommended could facilitate the work of institutional accreditation leaders and practitioners in establishing assessment systems that reduce redundancy while also maximizing efficiency in assessment and data collection.

Originality/value

There is little guidance in the literature on how institutional leaders and practitioners confronting the challenges of accreditation can negotiate multiple, and sometimes conflicting, sets of requirements. This paper demonstrates a possible solution strategy. Beyond the general utility of the demonstrated method, the findings and core assessment framework produced could be useful for institutions seeking accreditation through the agencies in the study sample, in both the USA and overseas.

Details

Quality Assurance in Education, vol. 25 no. 2
Type: Research Article
ISSN: 0968-4883

Article
Publication date: 13 February 2023

Laura Doyle, Lorna Montgomery, Sarah Donnelly, Kathryn Mackay and Bridget Penhale

Abstract

Purpose

Across the UK and Ireland, there is a range of processes and interventions offered to adults who, because of personal characteristics or life circumstances, require help to keep themselves safe from potential harm or abuse. The ways in which the statutory and voluntary sectors have chosen to safeguard these adults vary. Different models of intervention and the utilisation of a range of assessment tools, frameworks and approaches have evolved, often in response to policy and practice wisdom. Empirical research in this area is limited. The primary research purpose of the project on which this paper is based is to gather information on the range of tools and frameworks used in adult safeguarding practice across the UK and Ireland. In so doing, this paper seeks to contribute to, and inform, the future development of an evidence-based adult safeguarding assessment framework.

Design/methodology/approach

A team of academics from England, Scotland, Northern Ireland and Ireland set out to explore the possibility of adapting a pre-existing assessment framework, currently in use in family and childcare social work, to consider its utility in assessing carers involved in adult safeguarding referrals. This paper reports on a small pilot study which sought to inform the adaptation of this framework for use in adult safeguarding. This paper is based on a qualitative study involving 11 semi-structured telephone interviews with adult safeguarding social work managers and experienced practitioners. Two to four professionals from each region of England, Scotland, Northern Ireland and Ireland were interviewed to elicit their perceptions and experiences of engaging in adult safeguarding assessment processes and their views about models of assessment.

Findings

This study identified considerable variation within and between the nations under review in terms of the assessment frameworks and tools used in adult safeguarding practice. To a large extent, the assessment frameworks and tools in use were not evidence based or accredited. Participants acknowledged the value of using assessment frameworks and tools whilst also identifying barriers to undertaking effective assessments.

Originality/value

There is limited evidence available in the literature regarding the utility of assessment frameworks and tools in adult safeguarding practice. This primary research identifies four themes derived from professionals’ experiences of using such frameworks and identifies broader recommendations for policy and practice in this area.

Details

The Journal of Adult Protection, vol. 25 no. 2
Type: Research Article
ISSN: 1466-8203

Article
Publication date: 2 January 2020

Alistair Catterall

Abstract

Purpose

The purpose of this paper is to address the fact that, under current Education and Skills Funding Agency (ESFA) funding guidelines, diagnostic assessments for apprentices with additional learner needs are deemed an ineligible cost, which has the potential to reduce access to additional funding and support.

Design/methodology/approach

The approach of this paper is to critically evaluate the surrounding literature, government reports and the Mencap review produced since the introduction of the apprenticeship levy, and to present the implications of these funding guidelines for access to apprenticeships and the practical effects on apprentices’ experience and development.

Findings

The finding presented by this paper is that the classification of diagnostic assessments as an ineligible cost reduces both the quality of training delivered by providers and the assurance to apprentices that they will be fully supported from the start of their training.

Research limitations/implications

The limitation of this research was the minimal amount of government/ESFA documentation addressing this subject within apprenticeships.

Practical implications

The practical implications of this paper relate to the ongoing delivery of apprenticeship training in the UK and the detrimental effect of reducing access to diagnostic assessments for apprentices with undiagnosed additional learner needs under the current wording of the ESFA guidance.

Social implications

Government policy in this area is currently under review to address the treatment of diagnostic assessments as an ineligible cost for supporting apprentices with recognised additional learner needs.

Originality/value

The value of this paper lies in aligning with the recent Mencap review and collaboratively re-addressing the ESFA’s current positioning of diagnostic assessments for apprentices with undiagnosed learning difficulties and disabilities as an ineligible cost and a non-standardised requirement.

Details

Higher Education, Skills and Work-Based Learning, vol. 10 no. 4
Type: Research Article
ISSN: 2042-3896
