Search results

1 – 10 of over 8000
Open Access
Article
Publication date: 14 March 2024

Geethanjali Selvaretnam

Large classes pose challenges in managing different types of skills (e.g. maths, subject-specific knowledge, writing, confidence and communication), facilitating interactions…

Abstract

Purpose

Large classes pose challenges in managing different types of skills (e.g. maths, subject-specific knowledge, writing, confidence and communication), facilitating interactions, enabling active learning and providing timely feedback. This paper shares a design of a set of assessments for a large undergraduate economics course consisting of students from diverse cultural backgrounds. The benefits, challenges and learning experiences of students are analysed.

Design/methodology/approach

Students worked in groups to complete an assessment with several questions, which served as revision for the individual assessment the following week. Survey questionnaires with Likert-type and open-ended questions were used to analyse the learning and skill development that occurred because of the group work. Responses to the open-ended survey questions were coded and analysed by identifying themes and categorising the various issues that emerged.

Findings

This assessment design developed group working skills, created opportunities to interact and enhanced learning. The analysis of the responses found that working with peers enabled the students to generate their own feedback, clear doubts and learn to solve problems. Effective communication, planning meetings and working around the diverse group members’ strengths and weaknesses are some of the graduate skills developed in this group assessment. The challenges included arranging meetings, finalising assessments, engaging group members and unreliable technology. However, the students found ways to overcome these challenges.

Originality/value

This assessment design can be useful in higher education practice by introducing a mechanism for authentic collaborative practice. This paper adds to the literature on peer interactions and group work and supports effective learning at scale.

Details

Journal of Work-Applied Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2205-2062


Open Access
Article
Publication date: 6 February 2024

Tiprawee Tongtummachat, Attasak Jaree and Nattee Akkarawatkhoosith

This article presents our experience in implementing the assessment for learning process (AfL) to enhance the teaching–learning quality, which has faced numerous challenges…

Abstract

Purpose

This article presents our experience in implementing the assessment for learning (AfL) process to enhance teaching–learning quality, which has faced numerous challenges impacting educational quality. The effectiveness of this technique is demonstrated through a case study conducted in a core course of chemical engineering.

Design/methodology/approach

The article shares insights into the systematic course design and planning processes that were discussed and developed through AfL practices. Significant emphasis is placed on implementing formative and summative student self-assessment surveys as simple yet effective methods to meet this purpose. Quantitative data were collected and analyzed over three consecutive academic years (2020–2022) using various statistical parameters such as percentage, interquartile range and the program’s numerical goal (%G).

Findings

The AfL process via formative and summative surveys could significantly and effectively improve teaching–learning quality. These findings assist educators in identifying appropriate teaching methods and recognizing areas of weakness and strength, thereby facilitating continuous improvement in the teaching–learning quality. Validation methods, including quizzes and numerical grades, were employed to practically verify the outcome obtained from the questionnaires.

Practical implications

The AfL techniques demonstrated in this study can be directly implemented or adapted for various educational fields to enhance the teaching–learning quality.

Originality/value

The practical implementation of AfL in an engineering context has hardly been reported, particularly in chemical engineering. This work demonstrates the practical implementation of AfL to enhance education in the engineering field.

Details

Journal of Research in Innovative Teaching & Learning, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2397-7604


Open Access
Article
Publication date: 22 February 2024

Daniele Morselli

This article focuses on the assessment of entrepreneurship competence by selected vocational teachers in Italy. The exploratory research question addresses the extent to which…

Abstract

Purpose

This article focuses on the assessment of entrepreneurship competence by selected vocational teachers in Italy. The exploratory research question addresses the extent to which entrepreneurship assessments are competence based, and the research seeks to identify fully fledged assessment programmes with both a formative and summative component, and the use of assessment rubrics. It also explores the extent to which entrepreneurship competence is referred to in school documentation and later assessed, and the tools and strategies used for such assessment.

Design/methodology/approach

This case study is part of a larger European research project promoted by Cedefop; in Italy it focused on six selected vocational IVET and CVET programmes and apprenticeship schemes. A wide range of instruments was used to ensure triangulation and multiple perspectives: policy documents were analysed and online interviews were undertaken with experts and policy makers. At VET providers' premises, the study deployed analysis of school documents; observations of learning environments; and interviews and focus groups with teachers, directors and vice directors, learners and alumni (in schools), and with instructors, company tutors, employers, apprentices and alumni (in companies).

Findings

Assessment tasks were rarely embedded within fully fledged assessment programmes involving both formative and summative tasks and assessment rubrics for grading. Most of the time, entrepreneurship programmes lacked self-assessment, peer assessment and structured feedback and did not involve learners in the assessment process. Some instructors coached the students but undertook no clear formative assessment. These findings suggest institutions have a testing culture with regard to assessment, at the level of both policy and practice. In most cases, entrepreneurship competence was not directly assessed, and learning outcomes were only loosely related to entrepreneurship.

Research limitations/implications

One limitation concerned the selection of the VET providers: these were chosen not at random, but because they ran programmes relevant to the development of entrepreneurship competence.

Practical implications

At the policy level, there is a need for new guidelines on competence development and assessment in VET, guidelines that are more aligned with educational research on competence development. To ensure the development of entrepreneurship competence, educators need in-service training and a community of practice.

Originality/value

So far, the literature has concentrated on entrepreneurship education at the tertiary level. Little is known about how VET instructors assess entrepreneurship competence. This study updates the picture of policy and practice in Italy, illustrating how entrepreneurship competence is developed in selected IVET and CVET programmes and apprenticeships.

Details

Education + Training, vol. 66 no. 10
Type: Research Article
ISSN: 0040-0912


Open Access
Article
Publication date: 27 February 2024

Maryam R. Nezami, Mark L.C. de Bruijne, Marcel J.C.M. Hertogh and Hans L.M. Bakker

Societies depend on interconnected infrastructures that are becoming more complex over the years. Multi-disciplinary knowledge and skills are essential to develop modern…

Abstract

Purpose

Societies depend on interconnected infrastructures that are becoming more complex over the years. Multi-disciplinary knowledge and skills are essential to develop modern infrastructures, requiring close collaboration of various infrastructure owners. To effectively manage and improve inter-organizational collaboration (IOC) in infrastructure construction projects, collaboration status should be assessed continually. This study identifies the assessment criteria, forming the foundation of a tool for assessing the status of IOC in interconnected infrastructure projects.

Design/methodology/approach

A systematic literature study and in-depth semi-structured interviews with practitioners in interconnected infrastructure construction projects in the Netherlands are performed to identify the criteria for assessing the status of IOC in infrastructure construction projects, based on which an assessment tool is developed.

Findings

The assessment criteria identified through the literature and practitioners’ perspectives resulted in the design and development of a collaboration assessment tool. The tool consists of 12 criteria and 36 sub-criteria spanning three categories of collaborative capacity: individual, relational and organizational.

Originality/value

The assessment tool enables practitioners to monitor the status of IOC between infrastructure owners and assists them in making informed decisions to enhance collaboration. The assessment tool provides the opportunity to assess and analyze the status of collaboration based on three categories (i.e., individual, relational, and organizational).

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988


Open Access
Article
Publication date: 19 April 2024

Robert Wagenaar

Key to transnational higher education (HE) cooperation is building trust to allow for seamless recognition of studies. Building on the Tuning Educational Structures initiative…

Abstract

Purpose

Key to transnational higher education (HE) cooperation is building trust to allow for seamless recognition of studies. Building on the Tuning Educational Structures initiative (2001) and lessons learnt from the Organisation for Economic Cooperation and Development (OECD) Assessment of Learning Outcomes in Higher Education (AHELO) feasibility study, this paper offers a sophisticated approach developed by the European Union (EU) co-financed project Measuring and Comparing Achievements of Learning Outcomes in Europe (CALOHEE). The resulting frameworks evidence the quality and relevance of learning by applying transparent and reliable indicators at the overarching and disciplinary levels, and allow for transnational diagnostic assessments to identify the strengths and weaknesses of degree programmes.

Design/methodology/approach

The materials presented were developed from 2016 to 2023 using a bottom-up approach involving approximately 150 academics from more than 20 European countries, reflecting the full spectrum of academic fields. The process was based on intensive face-to-face debate and consultation with stakeholders, and was anchored in the academic literature and wide experience.

Findings

As a result, general (overarching) state-of-the-art reference frameworks have been prepared for the associate degree, bachelor, master and doctorate levels, as well as aligned qualifications reference frameworks and more detailed learning outcomes/assessment frameworks for 11 subject areas, offering a sound basis for quality assurance. As a follow-up, actual assessment formats for five academic fields have been developed to allow for measuring the actual level of learning at the institutional level from a comparative perspective.

Originality/value

The frameworks, as well as the assessment models and items, are highly innovative, both in content and in the strategy of their development, which involved renowned academics finding common ground. Their value is not limited to Europe but has global significance. The model developed is also relevant for micro-credentials in defining levels of mastery.

Details

Journal of International Cooperation in Education, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2755-029X


Open Access
Article
Publication date: 16 June 2023

Afeez Tunde Jinadu

Upholding assessment ethics is a common concern during annual public examination performance appraisal. Previous studies have focused more on examination stakeholders such as testees rather than proctors…

Abstract

Purpose

Upholding assessment ethics is a common concern during annual public examination performance appraisal. Previous studies have focused more on examination stakeholders such as testees rather than proctors; however, assessment ethics cannot be studied without considering proctor variables. Therefore, the study investigated the consistency of a structural equation model of security, environment, professionalism, testing and assessment ethics.

Design/methodology/approach

An ex-post facto design was adopted. A simple random sampling technique was employed to choose 90 proctors drawn from 45 colleges. The Proctors Examination Ethics Questionnaire (reliability = 0.86) was used to collect data for the study. The data collected were analysed using path analysis at the 0.05 significance level.

Findings

The six hypothesised paths significantly explained the consistency of the causal model. Test security, environment and professionalism accounted for both direct and indirect effects on assessment ethics. All model fit indices were established, supporting the testing and assessment model.

Research limitations/implications

Only a few proctor variables were studied; therefore, assessment ethics may not be explained beyond the proctor variables considered in this study.

Practical implications

Assessment ethics may not be violated if test security, the testing environment and professionalism are attended to during test administration, as shown in the study.

Social implications

The study adds to the knowledge base in the ethical areas of assessment: for 21st-century proctors to uphold testing and assessment ethics, security, environment and professionalism must be considered.

Originality/value

There was a positive causal effect of security, environment and professionalism on testing and assessment ethics among proctors in public examinations.

Details

Arab Gulf Journal of Scientific Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1985-9899


Open Access
Article
Publication date: 11 July 2023

Tadhg Stapleton, Kirby Jetter and Sean Commins

The purpose of this study was to provide an outline of the process of developing an on-road driving test route and rating form. Comprehensive evaluation of medical fitness to…

Abstract

Purpose

The purpose of this study was to outline the process of developing an on-road driving test route and rating form. A comprehensive evaluation of medical fitness to drive should comprise an off-road and an on-road assessment. Much research attention has focussed on the off-road phase of assessment, while there is less standardisation evident in the completion and measurement of the on-road phase of fitness-to-drive assessment.

Design/methodology/approach

A scholarship of practice approach was used to inform the development of an on-road test route and an associated generic on-road assessment tool that was guided by research evidence and best practice recommendations.

Findings

A step-by-step guide, outlining seven recommended phases in the development of an on-road route for the assessment of fitness to drive that aligns with best practice recommendations, was developed. A preliminary generic on-road assessment tool (the Maynooth–Trinity Driving Test), which includes higher-order cognition alongside elements of strategic, tactical and operational driving ability, was developed and piloted alongside the newly developed on-road test route.

Originality/value

This paper offers an overview of an approach to developing evidence-based on-road test routes and an associated generic assessment tool that may assist occupational therapists and on-road driving assessors in establishing a standard practice for testing on-road behaviour as part of a comprehensive approach to evaluating fitness to drive.

Details

Irish Journal of Occupational Therapy, vol. 51 no. 2
Type: Research Article
ISSN: 2398-8819


Open Access
Article
Publication date: 30 May 2023

Ya-Ping (Amy) Hsiao, Gerard van de Watering, Marthe Heitbrink, Helma Vlas and Mei-Shiu Chiu

In the Netherlands, thesis assessment quality is a growing concern for the national accreditation organization due to increasing student numbers and supervisor workload. However…


Abstract

Purpose

In the Netherlands, thesis assessment quality is a growing concern for the national accreditation organization due to increasing student numbers and supervisor workload. However, the accreditation framework lacks guidance on how to meet quality standards. This study aims to address these issues by sharing our experience, identifying problems and proposing guidelines for quality assurance for a thesis assessment system.

Design/methodology/approach

This study has two parts. The first part is a narrative literature review conducted to derive guidelines for thesis assessment based on observations made at four Dutch universities. The second part is a case study conducted in one bachelor’s psychology-related program, where the assessment practitioners and the vice program director analyzed the assessment documents based on the guidelines developed from the literature review.

Findings

The findings of this study include a list of guidelines based on the four standards. The case study results showed that the program meets most of the guidelines, as it has a comprehensive set of thesis learning outcomes, peer coaching for novice supervisors, clear and complete assessment information and procedures for both examiners and students, and a concise assessment form.

Originality/value

This study is original in that it demonstrates how to holistically ensure the quality of thesis assessments by considering the context of the program and paying more attention to validity (e.g. program curriculum and assessment design), transparency (e.g. integrating assessment into the supervision process) and the assessment expertise of teaching staff.

Details

Higher Education Evaluation and Development, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-5789


Open Access
Article
Publication date: 19 March 2024

Reijo Savolainen

To elaborate the nature of fact-checking in the domain of political information by examining how fact-checkers assess the validity of claims concerning the Russo-Ukrainian…

Abstract

Purpose

To elaborate the nature of fact-checking in the domain of political information by examining how fact-checkers assess the validity of claims concerning the Russo-Ukrainian conflict and how they support their assessments by drawing on evidence acquired from diverse sources of information.

Design/methodology/approach

Descriptive quantitative and qualitative content analysis of 128 reports written by the fact-checkers of Snopes, an established fact-checking organisation, during the period 24 February 2022 to 28 June 2023. For the analysis, nine evaluation grounds were identified, most of them inductively from the empirical material. It was examined how the fact-checkers employed these grounds while assessing the validity of claims and how the assessments were bolstered by evidence acquired from information sources such as newspapers.

Findings

Of the 128 reports, the share of assessments indicative of the invalidity of the claims was 54.7%, while the share of positive ratings was 26.7% and the share of mixed assessments was 15.6%. In the fact-checking, two evaluation grounds, namely the correctness of information and the verifiability of an event presented in a claim, formed the basis for the assessment. Depending on the topic of the claim, grounds such as temporal and spatial compatibility, as well as comparison by similarity and difference, occupied a central role. The most popular sources of information offering evidence for the assessments included statements of government representatives, videos and photographs shared on social media, newspapers and television programmes.

Research limitations/implications

As the study concentrated on fact-checking dealing with political information about a specific issue, the findings cannot be extended to concern the fact-checking practices in other contexts.

Originality/value

The study is among the first to characterise how fact-checkers employ evaluation grounds of diverse kinds while assessing the validity of political information.

Details

Journal of Documentation, vol. 80 no. 7
Type: Research Article
ISSN: 0022-0418


Open Access
Article
Publication date: 12 August 2022

Hesham El Marsafawy, Rumpa Roy and Fahema Ali

This study aims to identify the gap between the requirements of the accreditation bodies and the widely used learning management systems (LMSs) in assessing the intended learning…


Abstract

Purpose

This study aims to identify the gap between the requirements of the accreditation bodies and the widely used learning management systems (LMSs) in assessing the intended learning outcomes (ILOs). In addition, this study aims to introduce a framework, along with the evaluation of the functionality of the LMS, for measuring the ILO.

Design/methodology/approach

A qualitative method was deployed to examine the gap between the requirements of the accreditation standards and the LMS functionalities. The researchers collaborated to design a mechanism, develop a system architecture to measure the ILO in alignment with the accreditation standards and guide the development of the Moodle plugin. The appropriateness and effectiveness of the plugin were evaluated within the scope of assessment mapping and design. Focus group interviews were conducted to collect feedback from the instructors and program leaders regarding its implementation.

Findings

The results of this study indicate that there is no standardized mechanism to objectively measure course and program ILO using the existing LMS. The implementation of the plugin demonstrated the appropriateness and effectiveness of the system in generating ILO achievement reports, as confirmed by the users.

Originality/value

This study proposed a framework and developed a system architecture for the objective measurement of the ILO through direct assessment. The plugin was tested to generate consistent reports during the measurement of course and program ILO. The plugin has been implemented across Gulf University’s program courses, ensuring appropriate reporting and continuous improvement.

Details

Quality Assurance in Education, vol. 30 no. 4
Type: Research Article
ISSN: 0968-4883

