Search results

1 – 5 of 5
Article
Publication date: 13 July 2012

Mahmoud F. Alquraan

Abstract

Purpose

The purpose of this paper is to explore the assessment methods used in higher education to assess students' learning, and to investigate the effects of college and grading system on the assessment methods used.

Design/methodology/approach

This descriptive study investigates the assessment methods used by teachers in higher education to assess their students' learning outcomes. An instrument consisting of 15 items (each item is an assessment method) was distributed to 736 undergraduate students from four public universities in Jordan.

Findings

Findings show that the traditional paper‐pencil test is the most common method used to assess learning in higher education. Results also show that teachers in colleges of science and engineering and colleges of nursing use other assessment methods alongside traditional testing, such as real‐life tasks (authentic assessment), papers, and projects. The results also show that teachers use the same assessment methods regardless of the grading system (letters or numbers) used at their institutes.

Research limitations/implications

The sample of the study was limited to undergraduate students, and teachers' points of view about the frequency of use of assessment methods were not studied.

Practical implications

Higher education institutes should encourage teachers to use new and modern assessment methods alongside traditional paper‐pencil testing, and should investigate the reasons these newer methods are not used.

Originality/value

The paper should alert higher education institutes to the importance of developing the assessment process by learning their students' points of view about the assessment methods used. This will help to involve students in the learning process.

Details

Education, Business and Society: Contemporary Middle Eastern Issues, vol. 5 no. 2
Type: Research Article
ISSN: 1753-7983


Article
Publication date: 28 October 2014

Mahmoud F. Alquraan

Abstract

Purpose

This study aims to utilize the item response theory (IRT) rating scale model to analyze students' perceptions of assessment practices in two universities: one in Jordan and the other in the USA. The sample of the study consisted of 506 university students selected from both universities. Results show that the two universities still focus on paper-pencil testing to assess students' learning outcomes. The study recommends that higher education institutes encourage their teachers to use different assessment methods to assess students' learning outcomes.

Design/methodology/approach

The convenience sample consisted of 506 university students from the USA and Jordan, distributed according to their educational levels as follows: 83 freshmen, 139 sophomores, 157 juniors and 59 seniors. (Note: some students from both universities did not report their gender and/or their educational level). The USA university sample consisted of 219 students (43 males and 173 females) from three colleges (arts and sciences, education, and commerce and business) at a major university in the southeast of the USA. The study used the Students Perception of Assessment Practices Inventory developed by Alquraan (2007), and the rating scale model in the RUMM2020 program was used for the analysis.
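The rating scale model that RUMM2020 implements is Andrich's Rasch rating scale model, in which the probability of each response category depends on the person's ability, the item's location, and a set of thresholds shared across items. As a minimal illustrative sketch (not the paper's actual analysis; the parameter values below are arbitrary):

```python
import math

def rating_scale_probs(theta, delta, taus):
    """Andrich rating scale model: probabilities of categories
    x = 0..m for a person with ability theta on an item with
    location delta and shared thresholds taus = [tau_1..tau_m]."""
    # Category x has log-odds equal to the cumulative sum of
    # (theta - delta - tau_k) for k = 1..x; category 0 scores 0.
    logits = [0.0]
    total = 0.0
    for tau in taus:
        total += theta - delta - tau
        logits.append(total)
    exps = [math.exp(l) for l in logits]
    denom = sum(exps)
    return [e / denom for e in exps]

# Hypothetical person (theta=0.5) on an item at delta=0 with
# four categories (three thresholds).
probs = rating_scale_probs(theta=0.5, delta=0.0, taus=[-1.0, 0.0, 1.0])
```

Higher ability shifts probability mass toward the upper response categories, which is how the model orders persons and items on a common scale.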

Findings

Both universities, in Jordan and the USA, still focus more on the developmental (construction of assessment tasks), organizational and planning aspects of assessment processes than they do on assessment of learning and assessment methods (traditional and new). The assessment practices used most frequently in both universities, as reported by the students sampled, are: "(I27) I know what to study for the test in this class", "(I6) Teacher provides a good environment during test administration" and "(I21) My teacher avoids interrupting students as they are taking tests". This indicates that teachers in the selected universities tend to focus on the administrative and communicative aspects of assessment (e.g. providing a good environment during test administration) more than on using different assessment methods (e.g. portfolios, new technology, computers, peer and self-assessment) or even using assessment practices that help students learn in different ways (e.g. assessing students' prior knowledge and providing written feedback on graded tests).

Originality/value

This is a cross-cultural study focusing on the assessment of students' learning in higher education.

Details

Education, Business and Society: Contemporary Middle Eastern Issues, vol. 7 no. 4
Type: Research Article
ISSN: 1753-7983

Keywords

Article
Publication date: 13 July 2012

Kay Gallagher and James Pounder

Abstract

Details

Education, Business and Society: Contemporary Middle Eastern Issues, vol. 5 no. 2
Type: Research Article
ISSN: 1753-7983

Article
Publication date: 2 April 2019

Mahmoud AlQuraan

Abstract

Purpose

The purpose of this paper is to investigate the effect of insufficient effort responding (IER) on construct validity of student evaluations of teaching (SET) in higher education.

Design/methodology/approach

A total of 13,340 SET surveys collected by a major Jordanian university to assess teaching effectiveness were analyzed in this study. An IRT-based detection method was used to flag IER, and construct (factorial) validity was assessed using confirmatory factor analysis (CFA) and principal component analysis (PCA) before and after removing the detected IER responses.
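The before/after design described here can be sketched in a few lines. The sketch below is a simplified stand-in, not the paper's method: it uses simulated Likert data, a simple long-string index (flagging respondents who give the same answer to every item) in place of the paper's IRT-based detector, and the first eigenvalue's share of variance from a PCA of the inter-item correlation matrix as a rough indicator of factorial structure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_attentive, n_careless, n_items = 150, 30, 8

# Attentive respondents: 5-point responses driven by one latent trait.
theta = rng.normal(0.0, 1.0, n_attentive)
noise = rng.normal(0.0, 0.6, (n_attentive, n_items))
latent = theta[:, None] * 0.8 + noise
attentive = np.clip(np.round(latent * 1.2 + 3.0), 1, 5)

# Careless respondents: straight-liners picking one value for all items.
careless = np.tile(rng.integers(1, 6, n_careless)[:, None], (1, n_items))

data = np.vstack([attentive, careless]).astype(float)

def longstring(row):
    """Length of the longest run of identical consecutive responses."""
    best = run = 1
    for a, b in zip(row, row[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

# Flag respondents whose longest run spans the whole survey.
flags = np.array([longstring(r) == n_items for r in data])

def first_eig_share(x):
    """Share of variance on the first principal component."""
    corr = np.corrcoef(x, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)  # ascending order
    return eigvals[-1] / eigvals.sum()

share_before = first_eig_share(data)
share_after = first_eig_share(data[~flags])
```

Comparing `share_before` and `share_after` mirrors, in miniature, the paper's comparison of factor solutions before and after removing detected IER.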

Findings

The results of this study show that 2,160 SET surveys were flagged as insufficient effort responses out of 13,340 surveys. This figure represents 16.2 percent of the sample. Moreover, the results of CFA and PCA show that removing detected IER statistically enhanced the construct (factorial) validity of the SET survey.

Research limitations/implications

Since IER responses are often ignored by researchers and practitioners in industrial and organizational psychology (Liu et al., 2013), the results of this study strongly suggest that higher education administrations should give the necessary attention to IER, as SET results are used in making critical decisions.

Practical implications

The results of the current study recommend that universities carefully design online SET surveys and provide students with clear instructions in order to minimize students' engagement in IER. Moreover, since SET results are used in making critical decisions, higher education administrations should give the necessary attention to IER by examining the IER rate in their data sets and its consequences for data quality.

Originality/value

Reviewing the related literature shows that this is the first study that investigates the effect of IER on construct validity of SET in higher education using an IRT-based detection method.

Details

Journal of Applied Research in Higher Education, vol. 11 no. 3
Type: Research Article
ISSN: 2050-7003

Keywords

Article
Publication date: 8 November 2011

Mahmoud Alquraan and Abed Alnaser Aljarah

Abstract

Purpose

The purpose of this paper is to investigate the psychometric properties of a Jordanian version of the Metamemory in Adulthood (MIA) questionnaire of Dixon, Hultsch and Hertzog.

Design/methodology/approach

The sample for this study consisted of 656 students randomly selected from Yarmouk University‐Jordan. Translation‐back‐translation, classical test theory, IRT Rasch model, and confirmatory factor analysis procedures were used to evaluate the psychometric properties of a Jordanian version of the MIA (MIA‐Jo).

Findings

The results of these analyses show that 76 items (out of 108 original MIA items) provide sufficient evidence in support of the reliability and validity of the MIA‐Jo. The results also show that the MIA‐Jo has the same structure or subscales as the original MIA.

Research limitations/implications

The sample for this study consisted of 656 students randomly selected from Yarmouk University, Jordan. Therefore, the study recommends conducting further research on the MIA‐Jo using samples with a wider age range (up to 80 years) and drawn from other strata of Jordanian society.

Originality/value

This study is expected to provide researchers and educators in Jordan with a valid and reliable instrument to do more research on metamemory and its relationship with other cognitive variables.

Details

Education, Business and Society: Contemporary Middle Eastern Issues, vol. 4 no. 4
Type: Research Article
ISSN: 1753-7983

