Search results

1 – 10 of over 19,000
Article
Publication date: 31 December 2019

Robert Williams and Monica Wallace

Abstract

Purpose

The purpose of this paper is primarily to identify factors that accounted for the differences in course evaluation and course performance in two sections of the same course taught by the same instructor. Potential contributors to these differences included critical thinking, grade point average (GPA) and homework time in the course. Secondarily, the authors also examined whether season of the year and academic status of students (1st year through 3rd year) might have contributed to differences in course ratings and exam performance. The data in the study included some strictly quantitative variables and some qualitative judgments subsequently converted to quantitative measures.

Design/methodology/approach

The outcome variables included student objective exam scores and course ratings on the University’s eight-item rating form. Variables that may have contributed to performance and course evaluation differences between the two groups included student effort in the course, GPA and critical thinking.

Findings

The higher-performing section obtained significantly higher scores on course exams than the lower-performing group and also rated the course significantly higher (average of 4.15 across the evaluation items) than the lower-performing section (3.64 average in item ratings). The two performance groups did not differ on critical thinking and GPA, but did differ significantly in hours spent per week outside of class studying for the course.

Originality/value

Although many studies have examined the predictive validity of course ratings, instructors are typically held responsible for both high and low student ratings. This particular action study suggests that it may be student effort rather than instructor behavior that has the stronger impact on both student performance and course evaluations.

Details

Journal of Applied Research in Higher Education, vol. 12 no. 4
Type: Research Article
ISSN: 2050-7003

Open Access
Article
Publication date: 1 June 2016

Abbas Zare-ee, Zuraidah Mohd Don and Iman Tohidian

Abstract

University students' ratings of teaching and teachers' performance are used in many parts of the world for the evaluation of faculty members at colleges and universities. Even though these ratings receive mixed reviews, there is little conclusive evidence on the role of the intervening variable of teacher and student gender in these ratings. Possible influences resulting from gender-related differences in different socio-cultural contexts, especially where gender combination in student and faculty population is not proportionate, have not been adequately investigated in previous research. This study aimed to examine Iranian university students' ratings of the professional performance of male and female university teachers and to explore the differences in male and female university students' evaluation of teachers of the same or opposite gender. The study was a questionnaire-based cross-sectional survey with a total of 800 randomly selected students in their different years of undergraduate study (307 male and 493 female students, reflecting the proportion of male and female students in the university) from different faculties at the University of Kashan, Iran. The participants rated male and female teachers’ performance in observing university regulations, relationship with colleagues, and relationships with students. The researchers used descriptive statistics, means comparison inferential statistics and focus-group interview data to analyze and compare the students’ ratings. The results of one-sample t-test, independent samples t-test, and Chi-square analyses showed that a) overall, male university teachers received significantly higher overall ratings in all areas than female teachers; b) male students rated male teachers significantly higher than female students did; and c) female students assigned a higher overall mean rating to male teachers than to female teachers but this mean difference was not significant. 
These results are discussed in relation to findings in the related literature and indicate that gender can be an important intervening variable in university students' evaluation of faculty members.
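
As an aside, the core comparison this abstract describes (an independent-samples t-test between ratings given by male and female student raters) can be sketched as follows. All numbers here are invented for illustration, not the study's data; only the group sizes (307 and 493) mirror the sample described above.

```python
# Hypothetical sketch: Welch's independent-samples t-statistic comparing
# mean teacher ratings from two groups of student raters. Invented data.
import math
import random
import statistics

random.seed(0)
# Simulated 5-point ratings of male teachers by two rater groups
male_raters = [min(5.0, max(1.0, random.gauss(4.1, 0.6))) for _ in range(307)]
female_raters = [min(5.0, max(1.0, random.gauss(3.8, 0.6))) for _ in range(493)]

def welch_t(a, b):
    """Welch's t-statistic for two independent samples (unequal variances)."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (statistics.fmean(a) - statistics.fmean(b)) / se

print(f"t = {welch_t(male_raters, female_raters):.2f}")
```

A positive t here would mirror the paper's finding (a): the first rater group assigns higher mean ratings than the second.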

Details

Learning and Teaching in Higher Education: Gulf Perspectives, vol. 13 no. 1
Type: Research Article
ISSN: 2077-5504

Article
Publication date: 26 June 2009

Darrall Thompson and Ian McGregor

Abstract

Purpose

Group‐based tasks or assignments, if well designed, can yield benefits for student employability and other important attribute developments. However, there is a fundamental problem when all members of the group receive the same mark and feedback: disregarding the quality and level of individual contributions can seriously undermine many of the educational benefits that groupwork can potentially provide. This paper aims to describe the authors' research and practical experiences of using self and peer assessment in an attempt to retain these benefits.

Design/methodology/approach

Both authors separately used different paper‐based methods of self and peer assessment and then used the same web‐based assessment tool. Case studies of their use of the online tool are described in Business Faculty and Design School subjects. Student comments and tabular data from their self and peer assessment ratings were compared from the two Faculties.

Findings

The value of anonymity when using the online system was found to be important for students. The automatic calculation of student ratings facilitated the self and peer assessment process for large classes in both design and business subjects. Students using the online system felt they were fairly treated in the assessment process as long as it was explained to them beforehand. Students exercised responsibility in the online ratings process by not over‐using the lowest rating category. Student comments and analysis of ratings implied that a careful and reflective evaluation of their group engagement was achieved online compared with the paper‐based examples quoted.

Research limitations/implications

This was not a control group study as the subjects in business and design were different for both paper‐based and online systems. Although the online system used was the same (SPARK), the group sizes, rating scales and self and peer assessment criteria were different in the design and business cases.

Originality/value

The use of paper‐based approaches to calculate a fair distribution of marks to individual group members was not viable for the reasons identified. The article shows that the online system is a very viable option, particularly in large student cohorts where students are unlikely to know one another.

Details

Education + Training, vol. 51 no. 5/6
Type: Research Article
ISSN: 0040-0912

Article
Publication date: 7 February 2019

Youngjin Lee

Abstract

Purpose

The purpose of this paper is to investigate an efficient means of estimating the ability of students solving problems in the computer-based learning environment.

Design/methodology/approach

Item response theory (IRT) and TrueSkill were applied to simulated and real problem solving data to estimate the ability of students solving homework problems in the massive open online course (MOOC). Based on the estimated ability, data mining models predicting whether students can correctly solve homework and quiz problems in the MOOC were developed. The predictive power of IRT- and TrueSkill-based data mining models was compared in terms of Area Under the receiver operating characteristic Curve.

Findings

The correlation between students’ ability estimated from IRT and TrueSkill was strong. In addition, IRT- and TrueSkill-based data mining models showed a comparable predictive power when the data included a large number of students. While IRT failed to estimate students’ ability and could not predict their problem solving performance when the data included a small number of students, TrueSkill did not experience such problems.

Originality/value

Estimating students’ ability is critical to determine the most appropriate time for providing instructional scaffolding in the computer-based learning environment. The findings of this study suggest that TrueSkill can be an efficient means for estimating the ability of students solving problems in the computer-based learning environment regardless of the number of students.
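
The IRT side of the comparison this abstract describes can be sketched with the simplest member of the family, the one-parameter (Rasch) model: ability is the value that maximizes the likelihood of a student's observed right/wrong answers given item difficulties. This is a minimal illustration with invented data, not the paper's actual model or MOOC dataset.

```python
# Minimal sketch of Rasch (1PL) IRT ability estimation. Invented data.
import math

def p_correct(theta, b):
    """Rasch model: probability that a student of ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_ability(responses, difficulties, lr=0.1, steps=500):
    """Maximum-likelihood ability estimate via gradient ascent on the
    log-likelihood. responses: 1/0 per item; difficulties: item b values."""
    theta = 0.0
    for _ in range(steps):
        grad = sum(r - p_correct(theta, b)
                   for r, b in zip(responses, difficulties))
        theta += lr * grad
    return theta

# A student answers 4 of 5 items, missing only the hardest one
theta_hat = estimate_ability([1, 1, 1, 1, 0], [-1.0, -0.5, 0.0, 0.5, 1.5])
print(f"estimated ability: {theta_hat:.2f}")
```

The paper's point about small samples follows from this setup: with few students there are too few responses per item to pin down the difficulties b, so the ML estimate degenerates, whereas TrueSkill's Bayesian updates remain well defined.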

Details

Information Discovery and Delivery, vol. 47 no. 2
Type: Research Article
ISSN: 2398-6247

Article
Publication date: 13 November 2017

David Messer

Abstract

Purpose

In the UK, concern frequently has been voiced that young people lack appropriate employability skills. One way to address this is to provide work based placements. In general, previous research findings have indicated that young people find such placements useful because of help with career choice and relevant skills. However, most studies are retrospective and involve sixth form or degree students. The purpose of this paper is to extend previous research by collecting information before and after the placements.

Design/methodology/approach

This investigation involved questionnaires with nearly 300 14-15-year-old students, who provided pre- and post-placement self-reports about their employability skills; their work-experience hosts provided ratings of employability skills at the end of the placement.

Findings

There was a significant increase in student ratings of their employability skills from before to after the placement, and although the employers gave slightly lower ratings of some employability skills than the students, the two sets of ratings were reasonably close. In addition, the students had high expectations of the usefulness of the placements and these expectations were fulfilled as reported in the post-placement questionnaire.

Originality/value

These positive findings extend knowledge of the effects of work-based placements by focussing on the opinions of the young people themselves, by using a pre- to post-placement design, by validating student self-reports with host employer ratings, and by focussing on a younger than usual age group.

Details

Education + Training, vol. 60 no. 1
Type: Research Article
ISSN: 0040-0912

Article
Publication date: 1 February 1990

John Arnold and Nigel Garland

Abstract

Sandwich placements in degree courses are often thought to have wide‐ranging benefits for students, employers and educational institutions (CNAA, 1984). Certainly, the student‐orientated goals of a sandwich year as defined by CNAA (1980) are wide‐ranging. A successful sandwich placement should enhance students' capacity to: relate theory to practice, make appropriate career decisions, work effectively with others, understand work situations, and benefit from final year studies. It should also contribute to the student's “personal development”, which means amongst other things their skills and self‐confidence (see also Day, Kelly, Parker and Parr, 1982).

Details

Management Research News, vol. 13 no. 2
Type: Research Article
ISSN: 0140-9174

Article
Publication date: 1 March 2003

Charles R. Emery, Tracy R. Kramer and Robert G. Tian

Abstract

A student evaluation of teaching effectiveness (SETE) is often the most influential information in promotion and tenure decisions at colleges and universities focused on teaching. Unfortunately, this instrument often fails to capture the lecturer’s ability to foster learning and to serve as a tool for improving instruction. In fact, it often serves as a disincentive to introducing rigour. This paper performs a qualitative (e.g. case studies) and quantitative (e.g. empirical research) literature review of student evaluations as a measure of teaching effectiveness. Problems are highlighted and suggestions offered to improve SETEs and to refocus teaching effectiveness on outcome‐based academic standards.

Details

Quality Assurance in Education, vol. 11 no. 1
Type: Research Article
ISSN: 0968-4883

Article
Publication date: 21 June 2011

Stephen Wilkins and Alun Epps

Abstract

Purpose

The purpose of this paper is to investigate the attitudes of students in the United Arab Emirates (UAE) towards non‐institutionally sanctioned student evaluation web sites, and to consider how educational institutions might respond to the demands of students for specific information.

Design/methodology/approach

The study involved a self‐completed questionnaire administered to 118 undergraduate students at a single university in the UAE.

Findings

Even though there exists no UAE‐based web site that carries student evaluations of faculty/teaching, 13 per cent of the survey participants had previously visited a site that held student ratings, 85 per cent said they would consider posting on one if it existed in the country, and just over a half of the students were in favour of such web sites being established in the UAE.

Research limitations/implications

Despite limitations, such as the sample size and convenience sampling strategy, it is clear that students appreciate information about course evaluations and that educational institutions should consider how students obtain this information.

Practical implications

The advent of student evaluation web sites in the UAE could bring a set of challenges and opportunities to educational institutions, but, whether they are established or not, institutions might benefit from developing effective strategies for the dissemination of course evaluation and other student‐related data in the near future.

Originality/value

Student evaluation web sites, such as RateMyProfessors.com, are popular in the USA, Canada and the UK, but it was unknown how students in a relatively conservative country such as the UAE would react to such web sites. Educational institutions can use the findings of this study to develop suitable policies and strategies that address the issues discussed herein.

Details

International Journal of Educational Management, vol. 25 no. 5
Type: Research Article
ISSN: 0951-354X

Open Access
Article
Publication date: 1 December 2006

John Morgan and Thomas Davies

Abstract

This paper reports results of analyses made at an all-female Gulf Arab university measuring the nature and extent of biases in students' evaluation of faculty. Comparisons are made with research reporting the nature of similar relationships in North America. Two issues are investigated: 1) What variables (if any) bias faculty evaluation results at an all-female Arab university? 2) Are biasing variables different in nature or magnitude to those reported at North America universities? Using the population of 13,300 faculty evaluation records collected over two school years at Zayed University, correlations of faculty evaluation results to nine potentially biasing factors are made. Results show biases to faculty evaluation results do exist. However, biases are small, and strikingly similar in nature to those reported at North American universities.
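
The analysis this abstract describes (correlating evaluation scores against potentially biasing factors) can be sketched as a Pearson correlation. The data and the "expected grade" factor below are invented for illustration; the study's nine factors and 13,300 records are not reproduced here.

```python
# Hypothetical sketch of a bias check: Pearson correlation between
# course evaluation scores and one potentially biasing factor.
import math
import random

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(1)
expected_grade = [random.uniform(2.0, 4.0) for _ in range(200)]
# Evaluations weakly tied to expected grade plus noise -> a small bias
evaluation = [0.3 * g + random.gauss(3.0, 0.5) for g in expected_grade]
print(f"r = {pearson_r(expected_grade, evaluation):.2f}")
```

A small but non-zero r of this kind corresponds to the paper's conclusion: biases exist but are modest in magnitude.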

Details

Learning and Teaching in Higher Education: Gulf Perspectives, vol. 3 no. 2
Type: Research Article
ISSN: 2077-5504

Article
Publication date: 1 December 2001

Larry Crumbley, Byron K. Henry and Stanley H. Kratchman

Abstract

The validity of student evaluation of teaching (SET) has been continually debated in the academic community. The primary purpose of this research is to survey student perceptions to provide evidence of inherent weaknesses in the use of SETs to measure and report teaching effectiveness accurately. The study surveyed over 500 undergraduate and graduate students enrolled in various accounting courses over two years at a large public university. Students were asked to rate several factors on their importance in faculty evaluations and to identify instructor traits and behaviors warranting lower ratings. The study provides further evidence that the use of student evaluations of teaching for personnel decisions is not appropriate. Students will punish instructors who engage in a number of well‐known learning/teaching techniques, which encourages instructors to increase SET scores by sacrificing the learning process. Other measures and methods should be employed to ensure that teaching effectiveness is accurately measured and properly rewarded. Using student data as a surrogate for teaching performance is an illusory performance measurement system.

Details

Quality Assurance in Education, vol. 9 no. 4
Type: Research Article
ISSN: 0968-4883
