Search results

1 – 10 of over 3000
Open Access
Article
Publication date: 1 June 2016

Abbas Zare-ee, Zuraidah Mohd Don and Iman Tohidian

University students' ratings of teaching and teachers' performance are used in many parts of the world for the evaluation of faculty members at colleges and universities. Even…

Abstract

University students' ratings of teaching and teachers' performance are used in many parts of the world for the evaluation of faculty members at colleges and universities. Even though these ratings receive mixed reviews, there is little conclusive evidence on the role of the intervening variable of teacher and student gender in these ratings. Possible influences resulting from gender-related differences in different socio-cultural contexts, especially where the gender composition of the student and faculty populations is not proportionate, have not been adequately investigated in previous research. This study aimed to examine Iranian university students' ratings of the professional performance of male and female university teachers and to explore the differences in male and female university students' evaluation of teachers of the same or opposite gender. The study was a questionnaire-based cross-sectional survey with a total of 800 randomly selected students in their different years of undergraduate study (307 male and 493 female students, reflecting the proportion of male and female students in the university) from different faculties at the University of Kashan, Iran. The participants rated male and female teachers’ performance in observing university regulations, relationships with colleagues, and relationships with students. The researchers used descriptive statistics, inferential statistics for comparing means, and focus-group interview data to analyze and compare the students’ ratings. The results of one-sample t-tests, independent-samples t-tests, and Chi-square analyses showed that a) male university teachers received significantly higher overall ratings in all areas than female teachers; b) male students rated male teachers significantly higher than female students did; and c) female students assigned a higher overall mean rating to male teachers than to female teachers, but this difference was not significant. These results are discussed in relation to findings in the related literature and indicate that gender can be an important intervening variable in university students' evaluation of faculty members.
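As a rough illustration of the comparisons named above (not the authors' own code or data), the sketch below runs a one-sample and an independent-samples t-test in Python; the rating values, scale, and group sizes are hypothetical.

```python
# Illustrative sketch only: the rating data, group sizes and scale are hypothetical,
# not taken from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ratings_male_teachers = rng.integers(1, 6, size=400)    # hypothetical 1-5 ratings of male teachers
ratings_female_teachers = rng.integers(1, 6, size=400)  # hypothetical 1-5 ratings of female teachers

# One-sample t-test: does the mean rating of male teachers differ from the scale midpoint (3)?
t1, p1 = stats.ttest_1samp(ratings_male_teachers, popmean=3)

# Independent-samples t-test: do male and female teachers receive different mean ratings?
t2, p2 = stats.ttest_ind(ratings_male_teachers, ratings_female_teachers, equal_var=False)

print(f"one-sample: t={t1:.2f}, p={p1:.3f}; independent samples: t={t2:.2f}, p={p2:.3f}")
```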

Details

Learning and Teaching in Higher Education: Gulf Perspectives, vol. 13 no. 1
Type: Research Article
ISSN: 2077-5504

Open Access
Article
Publication date: 1 December 2006

John Morgan and Thomas Davies

This paper reports results of analyses made at an all-female Gulf Arab university measuring the nature and extent of biases in students' evaluation of faculty. Comparisons are…

Abstract

This paper reports results of analyses conducted at an all-female Gulf Arab university measuring the nature and extent of biases in students' evaluation of faculty. Comparisons are made with research reporting the nature of similar relationships in North America. Two issues are investigated: 1) What variables (if any) bias faculty evaluation results at an all-female Arab university? 2) Are biasing variables different in nature or magnitude from those reported at North American universities? Using the population of 13,300 faculty evaluation records collected over two school years at Zayed University, correlations of faculty evaluation results with nine potentially biasing factors are computed. Results show that biases in faculty evaluation results do exist. However, the biases are small and strikingly similar in nature to those reported at North American universities.
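As a hedged illustration of the correlation screen described above, the sketch below correlates an overall evaluation score with a few potentially biasing factors; the factor names and all data are hypothetical stand-ins, not the study's variables.

```python
# Minimal sketch of a correlation screen of this kind; the factor names and all data
# are hypothetical stand-ins, not the study's actual variables or records.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 13300  # number of evaluation records mentioned in the abstract
factors = ["class_size", "expected_grade", "course_level", "time_of_day"]  # illustrative subset of factors
df = pd.DataFrame({name: rng.normal(size=n) for name in factors})
df["evaluation_score"] = rng.normal(loc=4.0, scale=0.5, size=n)  # hypothetical overall evaluation score

# Correlate each potentially biasing factor with the overall evaluation score
correlations = df.corr()["evaluation_score"].drop("evaluation_score")
print(correlations.round(3))
```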

Details

Learning and Teaching in Higher Education: Gulf Perspectives, vol. 3 no. 2
Type: Research Article
ISSN: 2077-5504

Open Access
Article
Publication date: 6 December 2018

Gregory Ching

Competition among higher education institutions has pushed universities to expand their competitive advantages. Based on the assumption that the core functions of universities are…

Abstract

Purpose

Competition among higher education institutions has pushed universities to expand their competitive advantages. Based on the assumption that the core functions of universities are academic, understanding the teaching–learning process with the help of student evaluation of teaching (SET) would seem to be a logical solution in increasing competitiveness. The paper aims to discuss these issues.

Design/methodology/approach

The current paper presents a narrative literature review examining how SETs work within the concept of service marketing, focusing specifically on the search, experience, and credence qualities of the provider. A review of the various factors that affect the collection of SETs is also included.

Findings

Relevant findings show the influence of students’ prior expectations on SET ratings. Therefore, teachers are advised to establish a psychological contract with the students at the start of the semester. Such an agreement should be negotiated, setting out the potential benefits of undertaking the course and a clear definition of acceptable performance within the class. Moreover, connections should be made between courses and subjects in order to provide an overall view of the entire program together with future career pathways.

Originality/value

Given the complex factors affecting SETs and the antecedents involved, there appears to be no single perfect tool to adequately reflect what is happening in the classroom. As different SETs may be needed for different courses and subjects, options such as faculty self-evaluation and peer-evaluation might be considered to augment current SETs.

Details

Higher Education Evaluation and Development, vol. 12 no. 2
Type: Research Article
ISSN: 2514-5789

Open Access
Article
Publication date: 23 December 2020

Stefan Dreisiebner, Anna Katharina Polzer, Lyn Robinson, Paul Libbrecht, Juan-José Boté-Vericad, Cristóbal Urbano, Thomas Mandl, Polona Vilar, Maja Žumer, Mate Juric, Franjo Pehar and Ivanka Stričević

The purpose of this paper is to demonstrate the rationale, technical framework, content creation workflow and evaluation for a multilingual massive open online course (MOOC) to…

Abstract

Purpose

The purpose of this paper is to demonstrate the rationale, technical framework, content creation workflow and evaluation for a multilingual massive open online course (MOOC) to facilitate information literacy (IL) considering cultural aspects.

Design/methodology/approach

A good-practice analysis formed the basis for the technical and content framework. The evaluation approach consisted of three phases: first, the students were asked to fill out a short self-assessment questionnaire and a shortened, adapted version of a standardized IL test. Second, they completed the full version of the IL MOOC. Third, they were asked to fill out the full version of a standardized IL test and a user experience questionnaire.
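To make the pre/post design concrete, the following sketch shows one way such a comparison could be run; the abstract does not specify the scoring or the statistical test, so the paired t-test, score scale and sample size here are assumptions.

```python
# Hypothetical pre/post comparison; the abstract does not give scores or the exact test,
# so the paired t-test, score scale and sample size below are assumptions for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
pre_scores = rng.normal(loc=55, scale=10, size=120)               # assumed pre-MOOC IL test scores (0-100)
post_scores = pre_scores + rng.normal(loc=8, scale=6, size=120)   # assumed post-MOOC scores with a gain

# Paired t-test: did the same participants score higher after completing the MOOC?
t, p = stats.ttest_rel(post_scores, pre_scores)
print(f"paired t-test: t={t:.2f}, p={p:.4f}, mean gain={(post_scores - pre_scores).mean():.1f}")
```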

Findings

The results show that, first, the designed workflow was suitable in practice and led to the implementation of a full-fledged MOOC. Second, the implementation itself provides implications for future projects developing multilingual educational resources. Third, the evaluation results show that participants achieved significantly higher results in a standardized IL test after attending the MOOC as mandatory coursework. Variations between the different student groups in the participating countries were observed. Fourth, self-motivation to complete the MOOC proved to be a challenge for students asked to attend the MOOC as a nonmandatory, out-of-classroom task. It seems that multilingual facilitation alone is not sufficient to increase active MOOC participation.

Originality/value

This paper presents an innovative approach of developing multilingual IL teaching resources and is one of the first works to evaluate the impact of an IL MOOC on learners' experience and learning outcomes in an international evaluation study.

Details

Journal of Documentation, vol. 77 no. 3
Type: Research Article
ISSN: 0022-0418

Open Access
Article
Publication date: 22 November 2019

Wu Chen, Xin Tang and Ting Mou

The purpose of this paper is to provide some reference for teachers who use KidsProgram or other graphic programming tool platforms for STEAM (science, technology, engineering…

Abstract

Purpose

The purpose of this paper is to provide some reference for teachers who use KidsProgram or other graphic programming tool platforms for game-based STEAM (science, technology, engineering, arts and mathematics) education at a distance. From the design of the STEAM class, teachers can learn more clearly how to use KidsProgram to stimulate students’ interest in programming and cultivate their ability to innovate and solve practical problems.

Design/methodology/approach

This paper explains the teaching design from ten aspects and implements it in a real class to observe the results. The ten aspects are situation creation, knowledge popularization, raising problems, analyzing problems, concept introduction, interface design, logic design, self-evaluation and mutual evaluation, teacher comments, and extension and innovation. Using the KidsProgram platform, this paper takes “The Missile Convey,” a sub-course of “Discovery Universe,” as an example. Through the situation created by the teacher, students brainstorm the dangers that the earth may encounter in the universe and then learn the relevant scientific knowledge. Next, students raise and analyze problems according to the situation under the guidance of the teacher. Through interaction with the teacher, students review the programming concepts and the usage of the corresponding coding blocks needed for the project, such as “random number.” They then carry out the interface design and logic design for the project and complete it. After that, the students use a self-evaluation form and a mutual-evaluation form to revise their work and then show and share their projects in front of the class. After self-evaluation and peer evaluation, the teacher makes a final summary evaluation and offers suggestions for improvement. The teaching results can be assessed from the students’ programming productions and interviews with them.
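KidsProgram itself is block-based, so the following is only a hypothetical Python analogue of the kind of “logic design” described above, showing how a “random number” block might drive the project's game logic; the game rules are invented for illustration.

```python
# KidsProgram is block-based; this is only a rough Python analogue of the "logic design"
# step, showing how a "random number" block might drive the project's game logic.
# The game rules below are invented for illustration, not taken from the course materials.
import random

def missile_round(grid_width: int = 10) -> bool:
    """One round: a hazard appears in a random column; the player fires at a guessed column."""
    hazard_column = random.randint(0, grid_width - 1)  # the "random number" block
    guess = random.randint(0, grid_width - 1)          # stand-in for the player's input
    return guess == hazard_column                      # hit if the columns match

hits = sum(missile_round() for _ in range(20))
print(f"Intercepted {hits} of 20 hazards")
```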

Findings

With an elaborate teaching design and appropriate teaching strategies, students can flexibly use multi-disciplinary knowledge of science, technology, engineering, art and mathematics to solve problems in the process of creation, which is conducive to cultivating and improving students’ overall competence in the KidsProgram classroom under the guidance of STEAM education. In other words, in this class students need to use engineering thinking to plan the whole project based on an understanding of scientific principles, design interfaces with artistic ideas, use mathematical knowledge for logical operations, and gradually solve technical problems by combining the above knowledge and methods.

Originality/value

KidsProgram has been a leading graphical programming tool platform in China in recent years. It deeply reconstructs the concept of Scratch, designed by MIT. Graphical programming, a method of programming by dragging and dropping blocks containing natural language, differs from traditional code programming. In this paper, the visualized cases from the class are demonstrated in the “interface design” and “logic design” aspects. The paper designs a course for STEAM education at a distance via KidsProgram, hoping to provide a reference for other research on the teaching of graphical programming tools.

Details

Asian Association of Open Universities Journal, vol. 14 no. 2
Type: Research Article
ISSN: 2414-6994

Open Access
Article
Publication date: 23 May 2019

John Garger, Paul H. Jacques, Brian W. Gastle and Christine M. Connolly

The purpose of this paper is to demonstrate that common method variance, specifically single-source bias, threatens the validity of a university-created student assessment of…

Abstract

Purpose

The purpose of this paper is to demonstrate that common method variance, specifically single-source bias, threatens the validity of a university-created student assessment of instructor instrument, suggesting that decisions made from these assessments are inherently flawed or skewed. Single-source bias leads to generalizations about assessments that might influence the ability of raters to separate multiple behaviors of an instructor.

Design/methodology/approach

Exploratory factor analysis, nested confirmatory factor analysis and within-and-between analysis are used to assess a university-developed, proprietary student assessment of instructor instrument to determine whether a hypothesized factor structure is identifiable. The instrument was developed over a three-year period by a university-mandated committee.
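As an illustration of the exploratory-factor-analysis step named above (not the study's actual instrument or data), a minimal sketch using scikit-learn might look like this; the number of items, raters and factors is hypothetical.

```python
# Minimal exploratory-factor-analysis sketch with scikit-learn; the number of items,
# raters and factors, and the responses themselves, are hypothetical.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
n_students, n_items = 500, 12  # hypothetical: 500 raters answering 12 questionnaire items
responses = rng.integers(1, 6, size=(n_students, n_items)).astype(float)  # 1-5 Likert responses

fa = FactorAnalysis(n_components=3, random_state=0)  # hypothesize three instructor-behavior factors
fa.fit(responses)
loadings = fa.components_.T  # items x factors loading matrix
print(np.round(loadings, 2))
```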

Findings

Findings suggest that common method variance, specifically single-source bias, resulted in the inability to identify hypothesized constructs statistically. Additional information is needed to identify valid instruments and an effective collection method for assessment.

Practical implications

Institutions are not guaranteed valid or useful instruments even if they invest significant time and resources to produce one. Without accurate instrumentation, there is insufficient information to assess constructs for teaching excellence. More valid measurement criteria can result from using multiple methods, altering collection times and educating students to distinguish multiple traits and behaviors of individual instructors more accurately.

Originality/value

This paper documents the three-year development of a university-wide student assessment of instructor instrument and carries development through to examining the psychometric properties and appropriateness of using this instrument to evaluate instructors.

Details

Higher Education Evaluation and Development, vol. 13 no. 1
Type: Research Article
ISSN: 2514-5789

Open Access
Article
Publication date: 30 October 2018

Tashfeen Ahmad

The purpose of this paper is to share the author’s viewpoint on how to increase student response rate in course evaluation surveys.

Abstract

Purpose

The purpose of this paper is to share the author’s viewpoint on how to increase student response rate in course evaluation surveys.

Design/methodology/approach

The approach is to highlight measures that increased the student response rate in online surveys evaluating the author’s teaching at The University of the West Indies, Jamaica.

Findings

This viewpoint suggests that the student response rate to course evaluations can be improved through effective communication by the lecturer. Examples of such communication are given in this paper.

Originality/value

This work will encourage lecturers to initiate more student engagement in order to improve the response rates of their teaching evaluations.

Details

PSU Research Review, vol. 2 no. 3
Type: Research Article
ISSN: 2399-1747

Open Access
Article
Publication date: 24 September 2021

Junko Winch

The purpose of this study is threefold: (1) to ascertain the purpose of university module evaluation questionnaires (MEQs) and their reliability; (2) to evaluate…

Abstract

Purpose

The purpose of this study is threefold: (1) to ascertain the purpose of university module evaluation questionnaires (MEQs) and their reliability; (2) to evaluate University X's MEQ; and (3) to suggest how universities may be able to support their teaching staff with scholarship activities using the MEQ project.

Design/methodology/approach

The purposes of university MEQs and their reliability were investigated through literature reviews. The seven statements of University X's MEQ were evaluated by three university academic staff. The study was conducted at a British university in the South East of England. This university interdisciplinary project lasted two months, running from 14/07/20 to 13/10/20.

Findings

The purposes of MEQs include (1) student satisfaction; (2) accountability for the university authority; and (3) teaching feedback and academic promotion for teaching staff. The evaluation of University X's MEQ indicated that the MEQ questions were unclear and therefore do not yield reliable student evaluation results. This topic may be of interest to university MEQ designers, lecturers, university Student Experience teams, University Executive Boards, university administrators and university HR senior management teams.

Originality/value

The following three points are considered original to this study: (1) MEQ purposes are summarised in relation to students, the university authority and teaching staff; (2) a British university's MEQ is evaluated; and (3) suggestions are provided on how lecturers' scholarship activities can be supported by a university-wide initiative and umbrella network. This practical knowledge may be of use to the faculty and administrators of higher education institutions.

Details

Higher Education Evaluation and Development, vol. 16 no. 2
Type: Research Article
ISSN: 2514-5789

Open Access
Article
Publication date: 3 July 2017

Leony Derick, Gayane Sedrakyan, Pedro J. Munoz-Merino, Carlos Delgado Kloos and Katrien Verbert

The purpose of this paper is to evaluate four visualizations that represent affective states of students.

Abstract

Purpose

The purpose of this paper is to evaluate four visualizations that represent affective states of students.

Design/methodology/approach

An empirical-experimental study approach was used to assess the usability of affective state visualizations in a learning context. The first study was conducted with students who had knowledge of visualization techniques (n=10). The insights from this pilot study were used to improve the interpretability and ease of use of the visualizations. The second study was conducted with the improved visualizations with students who had no or limited knowledge of visualization techniques (n=105).

Findings

The results indicate that usability, measured by perceived usefulness and insight, is overall acceptable. However, the findings also suggest that interpretability of some visualizations, in terms of the capability to support emotional awareness, still needs to be improved. The level of students’ awareness of their emotions during learning activities based on the visualization interpretation varied depending on previous knowledge of information visualization techniques. Awareness was found to be high for the most frequently experienced emotions and activities that were the most frustrating, but lower for more complex insights such as interpreting differences with peers. Furthermore, simpler visualizations resulted in better outcomes than more complex techniques.

Originality/value

Detection of affective states of students and visualizations of these states in computer-based learning environments have been proposed to support student awareness and improve learning. However, the evaluation of visualizations of these affective states with students to support awareness in real life settings is an open issue.

Details

Journal of Research in Innovative Teaching & Learning, vol. 10 no. 2
Type: Research Article
ISSN: 2397-7604

Open Access
Article
Publication date: 27 August 2019

Óscar Martín Rodríguez, Francisco González-Gómez and Jorge Guardiola

The purpose of this paper is to focus on the relationship between student assessment method and e-learning satisfaction. Which e-learning assessment method do students prefer? The…

Abstract

Purpose

The purpose of this paper is to focus on the relationship between student assessment method and e-learning satisfaction. Which e-learning assessment method do students prefer? The assessment method is an additional determinant of the effectiveness and quality that affects user satisfaction with online courses.

Design/methodology/approach

The study employs data from 1,114 students. The first set of data was obtained from a questionnaire on the online platform. The second set of information was obtained from external assessment reports by e-learning specialists. The satisfaction revealed by the students in their responses to the questionnaire is the dependent variable in the multivariate analysis. In order to estimate the influence of the independent variables on global satisfaction, we use the ordinary least squares technique. This method is the most appropriate for discrete dependent variables with multiple ordered categories, as is the case for the dependent variable.
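As a rough sketch of the OLS estimation described above (not the authors' model or data), satisfaction could be regressed on assessment-related predictors as follows; the predictor names, effect sizes and all values are hypothetical.

```python
# Rough sketch of an OLS fit of this kind with statsmodels; predictor names, effect sizes
# and all data are hypothetical, not the study's variables or results.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1114  # number of students reported in the abstract
df = pd.DataFrame({
    "coursework_weight": rng.uniform(0, 1, size=n),   # assumed share of coursework in the final mark
    "final_exam_used": rng.integers(0, 2, size=n),    # assumed indicator for a final exam
})
# Hypothetical 1-5 global satisfaction score (the dependent variable named in the abstract)
df["satisfaction"] = np.clip(
    np.round(3 + 1.5 * df["coursework_weight"] - 0.5 * df["final_exam_used"]
             + rng.normal(scale=0.7, size=n)),
    1, 5)

X = sm.add_constant(df[["coursework_weight", "final_exam_used"]])
result = sm.OLS(df["satisfaction"], X).fit()
print(result.params.round(3))
```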

Findings

The assessment method influences e-learning satisfaction, though only slightly. Students are reluctant to be assessed by a final exam. They prefer systems that award more importance to the assessment of coursework as part of the final mark.

Practical implications

Knowing the level of student satisfaction and the factors that influence it is helpful to the teachers for improving their courses.

Originality/value

In online education, student satisfaction is an indicator of the quality of the education system. Although previous research has analyzed the factors that influence e-student satisfaction, to the best of the authors’ knowledge, no previous research has specifically analyzed the relationship between assessment systems and general student satisfaction with the course.

Details

Higher Education Evaluation and Development, vol. 13 no. 1
Type: Research Article
ISSN: 2514-5789
