Search results

1 – 10 of over 33000
Article
Publication date: 8 September 2021

Teresa Michelle Pidduck and Nadia Bauer

Abstract

Purpose

Self-assessment (SA) and peer-assessment (PA) are considered useful tools in the development of lifelong learning and reflective skills. The authors implemented a teaching intervention using SA and PA amongst a large cohort of final-year undergraduate students. The purpose of this study was to investigate students' perceptions of online SA and PA, to understand the differences between these perceptions and to allow instructors to adopt differentiated instruction in developing a diverse student group's professional skills.

Design/methodology/approach

The research design adopted a mixed-methods approach, using surveys administered before and after the SA and PA intervention in a taxation module taught at a large public South African university. Through a series of open and closed questions, students' perceptions of SA and PA were analysed both quantitatively and qualitatively.

Findings

The findings show that student perceptions of SA and PA differed significantly, with perceptions of SA more positive than those of PA. The findings indicate that SA and PA still present a challenge in an online context for large student cohorts, despite improved tracking, faster feedback and anonymity.

Originality/value

The study contributes to the literature by analysing students' perceptions of SA and PA in an accounting education context and in an online setting in South Africa.

Details

Journal of Applied Research in Higher Education, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-7003

Book part
Publication date: 22 November 2012

Steve Gove

Abstract

This chapter presents two established pedagogical techniques to increase student engagement: simulations and peer assessment. The use of each technique, its benefits and drawbacks, and how content knowledge and student engagement increase are detailed. While each approach can be utilized independently to create active learning environments, this chapter illustrates the potential to extend these approaches further. An overview of an MBA-level elective on competitive analysis structured around a simulation and peer assessment is presented. The result is a highly interactive and engaging course in which the simulation and peer assessments achieve symbiotic benefits. Learning and performance in the simulation are enhanced by the application of competitive analyst reports, which are used by peer “clients.” Assessment in turn leads to greater insights into the simulation, and subsequently higher levels of performance on both the simulation and future analysis work. Insights on these instructional methods, their limitations and potential barriers to adoption are offered with the hope of inspiring others to utilize and experiment with novel approaches to further enhance learner engagement.

Details

Increasing Student Engagement and Retention Using Immersive Interfaces: Virtual Worlds, Gaming, and Simulation
Type: Book
ISBN: 978-1-78190-241-7

Book part
Publication date: 16 October 2020

Tara J. Shawver and William F. Miller

Abstract

The Giving Voice to Values (GVV) program takes a unique approach to ethics education by shifting the focus away from a philosophical analysis of why actions are unethical to a focus on how individuals can effectively voice their values to resolve ethical conflict. The authors explore how peer feedback and peer assessment, when implemented within a GVV module, can increase students’ understanding of ways to resolve ethical dilemmas, increase student engagement, and increase confidence in confronting unethical actions. The findings indicate that the use of peer feedback and assessment increases students’ understanding of ways to resolve ethical dilemmas and increases confidence in confronting unethical actions, and that student attitudes suggest assessing peers is a way to learn from one another and enhances students’ interaction and engagement in the course. The teaching methods described in this study can easily be implemented in any specific discipline or accounting ethics course.

Details

Research on Professional Responsibility and Ethics in Accounting
Type: Book
ISBN: 978-1-83867-669-8

Article
Publication date: 13 August 2018

Anand Agrawal and Damith C. Rajapakse

Abstract

Purpose

The purpose of this paper is to check the veracity of educators’ apprehensions about peer assessments by comparing them with the actual peer assessment scores. It also explores the levels of satisfaction and current usage of peer assessment tools among educators.

Design/methodology/approach

The first phase of this study aims to provide insights into educators’ apprehensions and their satisfaction and usage levels regarding peer assessments. The second phase involves analysis of peer assessment scores of 539 students in 117 teams. Findings from statistical analysis of peer assessment scores are compared against the apprehensions of educators.

Findings

The results do not support the apprehensions among educators about peer assessments. Findings on the usage, satisfaction levels of educators and their future intentions of using peer assessments are also presented in this paper.

Research limitations/implications

Studies with larger sample sizes, qualitative in-depth research on the experiences, designs and conditions of successful peer assessments, and studies based on users’ experiences of peer assessments would help in gaining richer insights in this area.

Practical implications

Results of this study indicate a need for educators to shed their apprehensions and adopt online or offline peer assessment tools with trust and confidence.

Originality/value

This study is important due to the existence of contrary views, inconsistent results and lack of adequate familiarity about the use, efficacy and practice of peer assessments. Though previous studies have tried to establish the reliability of peer assessments, this study finds that educators are still apprehensive about peer assessments. This is a unique study as no previous research has attempted a comparative study to check the veracity of the apprehensions of educators about peer assessments using the actual peer assessment scores.

Details

International Journal of Educational Management, vol. 32 no. 6
Type: Research Article
ISSN: 0951-354X

Article
Publication date: 26 June 2009

Darrall Thompson and Ian McGregor

Abstract

Purpose

Group‐based tasks or assignments, if well designed, can yield benefits for student employability and other important attribute development. However, there is a fundamental problem when all members of the group receive the same mark and feedback. Disregarding the quality and level of individual contributions can seriously undermine many of the educational benefits that groupwork can potentially provide. This paper aims to describe the authors' research and practical experiences of using self and peer assessment in an attempt to retain these benefits.

Design/methodology/approach

Both authors separately used different paper‐based methods of self and peer assessment and then used the same web‐based assessment tool. Case studies of their use of the online tool are described in Business Faculty and Design School subjects. Student comments and tabular data from their self and peer assessment ratings were compared from the two Faculties.

Findings

The value of anonymity when using the online system was found to be important for students. The automatic calculation of student ratings facilitated the self and peer assessment process for large classes in both design and business subjects. Students using the online system felt they were fairly treated in the assessment process as long as it was explained to them beforehand. Students exercised responsibility in the online ratings process by not over‐using the lowest rating category. Student comments and analysis of ratings implied that a careful and reflective evaluation of their group engagement was achieved online compared with the paper‐based examples quoted.

Research limitations/implications

This was not a control group study as the subjects in business and design were different for both paper‐based and online systems. Although the online system used was the same (SPARK), the group sizes, rating scales and self and peer assessment criteria were different in the design and business cases.

Originality/value

The use of paper‐based approaches to calculate a fair distribution of marks to individual group members was not viable for the reasons identified. The article shows that the online system is a very viable option, particularly in large student cohorts where students are unlikely to know one another.

Details

Education + Training, vol. 51 no. 5/6
Type: Research Article
ISSN: 0040-0912

Article
Publication date: 6 November 2009

Keith Willey and Anne Gardner

Abstract

Purpose

Self‐ and peer assessment has proved effective in promoting the development of teamwork and other professional skills in undergraduate students. However, in previous research approximately 30 percent of students reported that its use produced no perceived improvement in their teamwork experience. It was hypothesised that a significant number of these students were probably members of a team that would have functioned well without self‐ and peer assessment and hence the process did not improve their teamwork experience. This paper aims to report the testing of this hypothesis.

Design/methodology/approach

The paper reviews some of the literature on self‐ and peer assessment, outlines the online self‐ and peer assessment tool SPARKPLUS, and analyses the results of a post‐subject survey of students in a large multi‐disciplinary engineering design subject.

Findings

It was found that students who were neutral as to whether self‐ and peer assessment improved their teamwork experience cannot be assumed to be members of well‐functioning teams.

Originality/value

To increase the benefits for all students, it is recommended that self‐ and peer assessment focus on collaborative peer learning, not just assessment of team contributions. Furthermore, it is recommended that feedback sessions focus on learning, not just assessment outcomes, and that graduate attribute development be recorded and tracked by linking it to the categories required for professional accreditation.

Details

Campus-Wide Information Systems, vol. 26 no. 5
Type: Research Article
ISSN: 1065-0741

Article
Publication date: 8 May 2018

Zeynep Tatli, Nursel Uğur and Ünal Çakiroğlu

Abstract

Purpose

The purpose of this paper is to reveal the contribution of digital storytelling to the peer assessment experiences of pre-service teachers within their teaching practices.

Design/methodology/approach

The study was carried out as a special case study. Both qualitative and quantitative data-gathering tools were used together to investigate a special case in depth (Yıldırım and Şimşek, 2011; Çepni, 2007). In this study, the case investigated was the process whereby senior-year pre-service teachers enrolled in the faculty of education assessed themselves and their peers in the teaching practice course, using digital storytelling. The contributions of this assessment method to the experiences and personal development of the pre-service teachers were investigated through interviews with the pre-service teachers involved.

Findings

The results suggested that pre-service teachers’ perspectives were quite positive toward the use of digital storytelling for peer assessment in their teaching practices. The prominent contributions were that the assessments were easier to tolerate personally, as they did not take a direct form, and that the pre-service teachers considered peer assessment through digital storytelling an alternative means of assessment that effectively reflects the process. Receiving more detailed feedback about their classroom experiences and their teaching skills was helpful for pre-service IT teachers. Suggestions based on the findings are also included.

Originality/value

In the study, peer assessment through digital stories in the teaching practice courses offered the benefits of confidence, different perspectives, satisfaction and objectivity. These benefits can help pre-service teachers to focus on shortcomings in their teaching experiences and take steps to correct them. Further studies could examine digital storytelling in other assessment processes to reveal the potential of digital narratives in other domains as well.

Details

The International Journal of Information and Learning Technology, vol. 35 no. 3
Type: Research Article
ISSN: 2056-4880

Article
Publication date: 2 April 2019

Amir Ghiasi, Grigorios Fountas, Panagiotis Anastasopoulos and Fred Mannering

Abstract

Purpose

Unlike many other quantitative characteristics used to determine higher education rankings, opinion-based peer assessment scores and the factors that may influence them are not well understood. Using peer scores of US colleges of engineering as reported annually in US News and World Report (USNews) rankings, the purpose of this paper is to provide some insights into peer assessments by statistically identifying factors that influence them.

Design/methodology/approach

With highly detailed data, a random parameters linear regression is estimated to statistically identify the factors determining a college of engineering’s average USNews peer assessment score.

Findings

The findings show that a wide variety of college- and university-specific attributes influence average peer impressions of a university’s college of engineering, including the size of the faculty, the quality of admitted students, the quality of the faculty as measured by citation data, and other factors.

Originality/value

The paper demonstrates that average peer assessment scores can be readily and accurately predicted with observable data on the college of engineering and the university as a whole. In addition, the individual parameter estimates from the statistical modeling in this paper provide insights as to how specific college and university attributes can help guide policies to improve an individual college’s average peer assessment scores and its overall ranking.

Details

Journal of Applied Research in Higher Education, vol. 11 no. 3
Type: Research Article
ISSN: 2050-7003

Article
Publication date: 6 November 2009

Keith Willey and Anne Gardner

Abstract

Purpose

As a way of focusing curriculum development and learning outcomes universities have introduced graduate attributes, which their students should develop during their degree course. Some of these attributes are discipline‐specific, others are generic to all professions. The development of these attributes can be promoted by the careful use of self‐ and peer assessment. The authors have previously reported using the self‐ and peer assessment software tool SPARK in various contexts to facilitate opportunities to practise, develop, assess and provide feedback on these attributes. This research and that of the other developers identified the need to extend the features of SPARK, to increase its flexibility and capacity to provide feedback. This paper seeks to report the results of the initial trials to investigate the potential of these new features to improve learning outcomes.

Design/methodology/approach

The paper reviews some of the key literature with regard to self‐ and peer assessment, discusses the main aspects of the original online self‐ and peer assessment tool SPARK and the new version SPARKPLUS, reports and analyses the results of a series of student surveys to investigate whether the new features and applications of the tool have improved the learning outcomes in a large multi‐disciplinary Engineering Design subject.

Findings

It was found that using self‐ and peer assessment in conjunction with collaborative peer learning activities increased the benefits to students and improved engagement. Furthermore, it was found that the new features available in SPARKPLUS facilitated efficient implementation of additional self‐ and peer assessment processes (assessment of individual work and benchmarking exercises) and improved learning outcomes. The trials demonstrated that the tool assisted in improving students' engagement with and learning from peer learning exercises, in the collection and distribution of feedback, and in helping students identify their individual strengths and weaknesses.

Practical implications

SPARKPLUS facilitates the efficient management of self‐ and peer assessment processes even in large classes, allowing assessments to be run multiple times a semester without an excessive burden for the coordinating academic. While SPARKPLUS has enormous potential to provide significant benefits to both students and academics, it is necessary to caution that, although a powerful tool, its successful use requires thoughtful and reflective application combined with good assessment design.

Originality/value

It was found that the new features available in SPARKPLUS efficiently facilitated the development of new self‐ and peer assessment processes (assessment of individual work and benchmarking exercises) and improved learning outcomes.

Details

Campus-Wide Information Systems, vol. 26 no. 5
Type: Research Article
ISSN: 1065-0741

Article
Publication date: 13 May 2019

Bobby Hoffman

Abstract

Purpose

The purpose of this paper is to examine the influence of peer-assessment training as a catalyst to enhance student assessment knowledge and the ability to effectively evaluate reflective journal writing assignments when using the online peer assessment (PA) tool Expertiza.

Design/methodology/approach

Over a two-year period, end-of-unit assessment test scores and reflective writing samples from a peer-assessment participation group were compared to a no peer-assessment control group. Analysis of covariance was used to control for existing writing skill and ongoing feedback on writing samples.

Findings

No significant increases were observed in student assessment knowledge when participating in peer-assessment training. Comparison of matched participant samples revealed that after controlling for existing writing skill, students participating in PA graded reflective writing assignments significantly lower than instructor-graded assessments from students not afforded peer-assessment participation.

Research limitations/implications

First, no distinction was made as to whether giving or receiving PA influenced performance on the outcome measures. Second, students making multiple revisions based on feedback were not analyzed. Third, the Expertiza system does not control for the number of reviews performed, thus differential weighting of assessment outcomes may be realized unless all students submit and perform the same number of assessments. Finally, in the absence of any qualitative analysis of the factors students consider when grading writing samples, it is unknown how individual difference factors or adherence to scoring rubrics may have influenced the obtained results.

Practical implications

Students may be reticent about evaluating peers or may apply grading criteria beyond the mandatory evaluation rubrics. Clear distinctions should be provided to students indicating how instructional content aligns with the skills needed to conduct assessment. Training that addresses the theoretical and transactional components of PA is important, but teachers should recognize that when developing assessment skills, learners undergo a developmental catharsis related to building trust and establishing a secure and comfortable identity as an assessor. Peer review systems should quantify the relative contribution of each reviewer through the measurement of the frequency, timeliness and accuracy of the feedback, compared with instructor standards/evaluations.

Originality/value

This paper reduces the gap in the literature concerning how PA evolves over time and identifies factors related to the etiology of the peer-review process. In addition, the paper reveals new information regarding the calibration between instructor and peer evaluations.

Details

Journal of Applied Research in Higher Education, vol. 11 no. 4
Type: Research Article
ISSN: 2050-7003
