
Search results

1 – 10 of over 18,000
Article
Publication date: 20 December 2019

Teacher vs student responsibility for course outcomes

Robert Williams and Monica Wallace


Abstract

Purpose

The purpose of this paper is primarily to identify factors that accounted for the differences in course evaluation and course performance in two sections of the same course taught by the same instructor. Potential contributors to these differences included critical thinking, grade point average (GPA) and homework time in the course. Secondarily, the authors examined whether season of the year and the academic status of students (1st year through 3rd year) might have contributed to differences in course ratings and exam performance. The data in the study included some strictly quantitative variables and some qualitative judgments subsequently converted to quantitative measures.

Design/methodology/approach

The outcome variables included student objective exam scores and course ratings on the University’s eight-item rating form. Variables that may have contributed to performance and course evaluation differences between the two groups included student effort in the course, GPA and critical thinking.

Findings

The higher-performing section obtained significantly higher scores on course exams than the lower-performing group and also rated the course significantly higher (average of 4.15 across the evaluation items) than the lower-performing section (3.64 average in item ratings). The two performance groups did not differ on critical thinking and GPA, but did differ significantly in hours spent per week outside of class in studying for the course.

Originality/value

Although many studies have examined the predictive validity of course ratings, instructors are typically held responsible for both high and low student ratings. This particular action study suggests that it may be student effort rather than instructor behavior that has the stronger impact on both student performance and course evaluations.

Details

Journal of Applied Research in Higher Education, vol. 12 no. 4
Type: Research Article
DOI: https://doi.org/10.1108/JARHE-04-2019-0092
ISSN: 2050-7003

Keywords

  • Student effort
  • Student grades
  • Student intelligence
  • Student ratings

Article
Publication date: 26 June 2009

Online self‐ and peer assessment for groupwork

Darrall Thompson and Ian McGregor


Abstract

Purpose

Group‐based tasks or assignments, if well designed, can yield benefits for student employability and other important attribute developments. However, there is a fundamental problem when all members of the group receive the same mark and feedback. Disregarding the quality and level of individual contributions can seriously undermine many of the educational benefits that groupwork can potentially provide. This paper aims to describe the authors' research and practical experiences of using self and peer assessment in an attempt to retain these benefits.

Design/methodology/approach

Both authors separately used different paper‐based methods of self and peer assessment and then used the same web‐based assessment tool. Case studies of their use of the online tool are described in Business Faculty and Design School subjects. Student comments and tabular data from their self and peer assessment ratings were compared across the two Faculties.

Findings

The value of anonymity when using the online system was found to be important for students. The automatic calculation of student ratings facilitated the self and peer assessment process for large classes in both design and business subjects. Students using the online system felt they were fairly treated in the assessment process as long as it was explained to them beforehand. Students exercised responsibility in the online ratings process by not over‐using the lowest rating category. Student comments and analysis of ratings implied that a careful and reflective evaluation of their group engagement was achieved online compared with the paper‐based examples quoted.

Research limitations/implications

This was not a control group study as the subjects in business and design were different for both paper‐based and online systems. Although the online system used was the same (SPARK), the group sizes, rating scales and self and peer assessment criteria were different in the design and business cases.

Originality/value

The use of paper‐based approaches to calculate a fair distribution of marks to individual group members was not viable for the reasons identified. The article shows that the online system is a very viable option, particularly in large student cohorts where students are unlikely to know one another.
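The abstract does not give SPARK's actual formulas, but the "automatic calculation of student ratings" it describes can be sketched in miniature. The sketch below is an assumption-laden illustration, not the tool's implementation: it derives one weighting factor per group member from a peer-rating matrix (a square-root moderation of each member's share of the group's total ratings, one commonly described variant) and uses it to scale a shared group mark into individual marks. All function names are hypothetical.

```python
import math

def spa_factors(ratings):
    """Compute one weighting factor per group member from a peer-rating matrix.

    ratings[i][j] is the rating member j gave member i (self-ratings included).
    Each member's factor is sqrt(own total / group mean total), so an even
    spread of contributions yields factors near 1.0; the square root damps
    the effect of extreme ratings.
    """
    totals = [sum(row) for row in ratings]
    mean_total = sum(totals) / len(totals)
    return [math.sqrt(t / mean_total) for t in totals]

def individual_marks(group_mark, factors, cap=100.0):
    """Moderate a shared group mark by each member's factor, capped at full marks."""
    return [min(group_mark * f, cap) for f in factors]
```

With this scheme a member rated above the group average receives a factor above 1.0 and so a mark above the shared group mark, which is the fairness property the abstract attributes to the automated calculation.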

Details

Education + Training, vol. 51 no. 5/6
Type: Research Article
DOI: https://doi.org/10.1108/00400910910987237
ISSN: 0040-0912

Keywords

  • Group dynamics
  • Self assessment
  • Online operations
  • Internet
  • Worldwide web

Article
Publication date: 20 May 2019

Estimating student ability and problem difficulty using item response theory (IRT) and TrueSkill

Youngjin Lee


Abstract

Purpose

The purpose of this paper is to investigate an efficient means of estimating the ability of students solving problems in the computer-based learning environment.

Design/methodology/approach

Item response theory (IRT) and TrueSkill were applied to simulated and real problem solving data to estimate the ability of students solving homework problems in the massive open online course (MOOC). Based on the estimated ability, data mining models predicting whether students can correctly solve homework and quiz problems in the MOOC were developed. The predictive power of IRT- and TrueSkill-based data mining models was compared in terms of Area Under the receiver operating characteristic Curve.

Findings

The correlation between students’ ability estimated from IRT and TrueSkill was strong. In addition, IRT- and TrueSkill-based data mining models showed a comparable predictive power when the data included a large number of students. While IRT failed to estimate students’ ability and could not predict their problem solving performance when the data included a small number of students, TrueSkill did not experience such problems.

Originality/value

Estimating students’ ability is critical to determine the most appropriate time for providing instructional scaffolding in the computer-based learning environment. The findings of this study suggest that TrueSkill can be an efficient means for estimating the ability of students solving problems in the computer-based learning environment regardless of the number of students.
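The abstract names the two estimators (IRT and TrueSkill) without giving their equations, so the following is only a minimal sketch of the simplest IRT variant, the one-parameter (Rasch) model, fitted by Newton-Raphson maximum likelihood; the paper's actual models, data and parameterisation are not shown here, and the function name is hypothetical.

```python
import math

def rasch_ability(responses, difficulties, iters=50, tol=1e-6):
    """Newton-Raphson MLE of ability theta under the 1PL (Rasch) model.

    responses[i] is 1 if problem i was solved correctly, else 0;
    difficulties[i] is that problem's difficulty parameter b_i, with
    P(correct) = 1 / (1 + exp(-(theta - b_i))).
    Note: the MLE diverges when every response is correct (or every one
    wrong); production systems bound theta or, as TrueSkill does, keep a
    Gaussian prior over skill that always yields a finite estimate.
    """
    theta = 0.0
    for _ in range(iters):
        probs = [1.0 / (1.0 + math.exp(-(theta - b))) for b in difficulties]
        score = sum(x - p for x, p in zip(responses, probs))  # gradient of log-likelihood
        info = sum(p * (1.0 - p) for p in probs)              # Fisher information
        step = score / info
        theta += step
        if abs(step) < tol:
            break
    return theta
```

The divergence noted in the docstring mirrors the abstract's finding: point-estimate IRT struggles with sparse data, while TrueSkill's Bayesian (mean, uncertainty) representation remains usable for small numbers of students.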

Details

Information Discovery and Delivery, vol. 47 no. 2
Type: Research Article
DOI: https://doi.org/10.1108/IDD-08-2018-0030
ISSN: 2398-6247

Keywords

  • Problem solving
  • User modeling
  • Prediction model
  • Educational data mining (EDM)
  • Log file analysis
  • Learning analytics (LA)

Article
Publication date: 8 January 2018

Work placements at 14-15 years and employability skills

David Messer


Abstract

Purpose

In the UK, concern has frequently been voiced that young people lack appropriate employability skills. One way to address this is to provide work-based placements. In general, previous research findings have indicated that young people find such placements useful because of help with career choice and relevant skills. However, most studies are retrospective and involve sixth form or degree students. The purpose of this paper is to extend previous research by collecting information before and after the placements.

Design/methodology/approach

This investigation involved questionnaires with nearly 300 14-15 year-old students, who provided pre- and post-placement self-reports about their employability skills; their work-experience hosts provided ratings of those skills at the end of the placement.

Findings

There was a significant increase in student ratings of their employability skills from before to after the placement, and although the employers gave slightly lower ratings of some employability skills than the students, the two sets of ratings were reasonably close. In addition, the students had high expectations of the usefulness of the placements and these expectations were fulfilled as reported in the post-placement questionnaire.

Originality/value

These positive findings extend knowledge of the effects of work-based placements by focussing on the opinions of the young people themselves, using a pre- to post-placement design, validating student self-reports against host employer ratings, and examining a younger than usual age group.

Details

Education + Training, vol. 60 no. 1
Type: Research Article
DOI: https://doi.org/10.1108/ET-11-2016-0163
ISSN: 0040-0912

Keywords

  • Employability skills
  • Work experience
  • Secondary school students
  • Work placements

Article
Publication date: 1 February 1990

Using Assessments of Sandwich Year and Academic Work Performance on a Business Studies Degree Course

John Arnold and Nigel Garland


Abstract

Sandwich placements in degree courses are often thought to have wide‐ranging benefits for students, employers and educational institutions (CNAA, 1984). Certainly, the student‐orientated goals of a sandwich year as defined by CNAA (1980) are wide‐ranging. A successful sandwich placement should enhance students' capacity to: relate theory to practice, make appropriate career decisions, work effectively with others, understand work situations, and benefit from final year studies. It should also contribute to the student's “personal development”, which means amongst other things their skills and self‐confidence (see also Day, Kelly, Parker and Parr, 1982).

Details

Management Research News, vol. 13 no. 2
Type: Research Article
DOI: https://doi.org/10.1108/eb028066
ISSN: 0140-9174

Book part
Publication date: 16 August 2011

Effects of Converting Student Evaluations of Teaching from Paper to Online Administration

Sharon M. Bruns, Timothy J. Rupert and Yue (May) Zhang


Abstract

Because of the potential cost savings that online teaching evaluations can deliver, many universities are converting their student evaluations of teaching to this mode of administration. In this study, we examine the effects of transitioning administration of student evaluations from paper-and-pencil to online. We use data from accounting and other departments in the College of Business Administration of a large private, research university that converted from paper to online administration of teaching evaluations. We examine the average teaching effectiveness rating and response rate for instructors who taught the same course before and after the conversion. In addition, we survey a sample of accounting students to investigate their responses to online teaching evaluations. We find a significant increase in the average effectiveness rating and a significant decrease in response rate. Furthermore, we find that those instructors with lower ratings under the paper administration experience the greatest increase in ratings when the evaluations are converted to online administration. In a follow-up survey, we find that students who are highly motivated and have higher grade point averages are more likely to complete and provide higher evaluations with the online administration. We discuss the implications of our results to accounting and business education literature.

Details

Advances in Accounting Education: Teaching and Curriculum Innovations
Type: Book
DOI: https://doi.org/10.1108/S1085-4622(2011)0000012010
ISBN: 978-1-78052-223-4

Book part
Publication date: 24 January 2002

An exploratory study of student beliefs in the response reliability of teaching evaluation instruments

Brian Patrick Green, Thomas G. Calderon and Michael Harkness


Abstract

Details

Advances in Accounting Education Teaching and Curriculum Innovations
Type: Book
DOI: https://doi.org/10.1108/S1085-4622(2002)00000040013
ISBN: 978-0-85724-052-1

Article
Publication date: 1 March 2003

Return to academic standards: a critique of student evaluations of teaching effectiveness

Charles R. Emery, Tracy R. Kramer and Robert G. Tian


Abstract

A student evaluation of teaching effectiveness (SETE) is often the most influential information in promotion and tenure decisions at colleges and universities focused on teaching. Unfortunately, this instrument often fails to capture the lecturer’s ability to foster learning and to serve as a tool for improving instruction. In fact, it often serves as a disincentive to introducing rigour. This paper performs a qualitative (e.g. case studies) and quantitative (e.g. empirical research) literature review of student evaluations as a measure of teaching effectiveness. Problems are highlighted and suggestions offered to improve SETEs and to refocus teaching effectiveness on outcome‐based academic standards.

Details

Quality Assurance in Education, vol. 11 no. 1
Type: Research Article
DOI: https://doi.org/10.1108/09684880310462074
ISSN: 0968-4883

Keywords

  • Teachers
  • Evaluation
  • Performance
  • Effectiveness
  • Students

Article
Publication date: 21 June 2011

Student evaluation web sites as potential sources of consumer information in the United Arab Emirates

Stephen Wilkins and Alun Epps


Abstract

Purpose

The purpose of this paper is to investigate the attitudes of students in the United Arab Emirates (UAE) towards non‐institutionally sanctioned student evaluation web sites, and to consider how educational institutions might respond to the demands of students for specific information.

Design/methodology/approach

The study involved a self‐completed questionnaire administered to 118 undergraduate students at a single university in the UAE.

Findings

Even though there exists no UAE‐based web site that carries student evaluations of faculty/teaching, 13 per cent of the survey participants had previously visited a site that held student ratings, 85 per cent said they would consider posting on one if it existed in the country, and just over half of the students were in favour of such web sites being established in the UAE.

Research limitations/implications

Despite limitations, such as the sample size and convenience sampling strategy, it is clear that students appreciate information about course evaluations and that educational institutions should consider how students obtain this information.

Practical implications

The advent of student evaluation web sites in the UAE could bring a set of challenges and opportunities to educational institutions, but, whether they are established or not, institutions might benefit from developing effective strategies for the dissemination of course evaluation and other student‐related data in the near future.

Originality/value

Student evaluation web sites, such as RateMyProfessors.com, are popular in the USA, Canada and the UK, but it was unknown how students in a relatively conservative country such as the UAE would react to such web sites. Educational institutions can use the findings of this study to develop suitable policies and strategies that address the issues discussed herein.

Details

International Journal of Educational Management, vol. 25 no. 5
Type: Research Article
DOI: https://doi.org/10.1108/09513541111146341
ISSN: 0951-354X

Keywords

  • Student evaluation web sites
  • Consumer information
  • Student attitudes
  • Higher education
  • United Arab Emirates
  • Web sites
  • Students
  • Function evaluation

Article
Publication date: 1 December 2001

Students’ perceptions of the evaluation of college teaching

Larry Crumbley, Byron K. Henry and Stanley H. Kratchman


Abstract

The validity of student evaluation of teaching (SET) has been continually debated in the academic community. The primary purpose of this research is to survey student perceptions for evidence of inherent weaknesses in the use of SETs to measure and report teaching effectiveness accurately. The study surveyed over 500 undergraduate and graduate students enrolled in various accounting courses over two years at a large public university. Students were asked to rate several factors on their importance in faculty evaluations and identify instructor traits and behaviors warranting lower ratings. The study provides further evidence that the use of student evaluations of teaching for personnel decisions is not appropriate. Students will punish instructors who engage in a number of well‐known learning/teaching techniques, which encourages instructors to increase SET scores by sacrificing the learning process. Other measures and methods should be employed to ensure that teaching effectiveness is accurately measured and properly rewarded. Using student data as a surrogate for teaching performance is an illusory performance measurement system.

Details

Quality Assurance in Education, vol. 9 no. 4
Type: Research Article
DOI: https://doi.org/10.1108/EUM0000000006158
ISSN: 0968-4883

Keywords

  • Evaluation
  • Higher education
  • Customer satisfaction
