Using Learning Analytics to evaluate the quality of multiple-choice questions: A perspective with Classical Test Theory and Item Response Theory

Jose Manuel Azevedo (CEOS.PP / ISCAP / P.PORTO, Instituto Politécnico do Porto, Porto, Portugal)
Ema P. Oliveira (Department of Psychology and Education, LabCom.IFP, Universidade da Beira Interior, Covilhã, Portugal)
Patrícia Damas Beites (Department of Mathematics and CMA-UBI, Universidade da Beira Interior, Covilhã, Portugal)

International Journal of Information and Learning Technology

ISSN: 2056-4880

Article publication date: 4 June 2019

Issue publication date: 23 July 2019

Abstract

Purpose

The purpose of this paper is to find appropriate forms of analysis of multiple-choice questions (MCQ) in order to obtain an assessment method that is as fair as possible to students. The authors intend to ascertain whether it is possible to control the quality of the MCQ contained in a bank of questions implemented in Moodle, presenting some evidence based on Item Response Theory (IRT) and Classical Test Theory (CTT). The techniques used can be considered a form of descriptive Learning Analytics, since they allow the measurement, collection, analysis and reporting of data generated from students' assessments.

Design/methodology/approach

A representative data set of students' grades from tests, randomly generated from a bank of questions implemented in Moodle, was used for the analysis. The data were extracted from the Moodle database using MySQL with an ODBC connector and collected in MS Excel™ worksheets, where appropriate macros programmed in VBA were used. The CTT analysis was performed with appropriate MS Excel™ formulas, and the IRT analysis was carried out with an MS Excel™ add-in.
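The CTT part of such an analysis is straightforward to reproduce outside a spreadsheet. The following is an illustrative sketch (in Python, not the authors' Excel™/VBA implementation) of the two classical item statistics on a hypothetical students-by-items matrix of 0/1 scored answers; the data and function names are invented for illustration.

```python
# Illustrative CTT statistics for a 0/1 response matrix:
# rows are students, columns are MCQ items.

def difficulty_index(responses, item):
    """Proportion of students answering the item correctly (p-value)."""
    answers = [row[item] for row in responses]
    return sum(answers) / len(answers)

def discrimination_index(responses, item, group_fraction=0.27):
    """Upper-lower group difference in the item's p-value,
    using the conventional 27% extreme groups."""
    totals = [sum(row) for row in responses]          # total score per student
    order = sorted(range(len(responses)), key=lambda i: totals[i])
    n = max(1, round(group_fraction * len(responses)))
    lower = [responses[i][item] for i in order[:n]]   # weakest students
    upper = [responses[i][item] for i in order[-n:]]  # strongest students
    return sum(upper) / n - sum(lower) / n

# Hypothetical data: 5 students x 3 items (1 = correct, 0 = incorrect)
data = [
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
    [1, 1, 1],
]
print(difficulty_index(data, 0))      # fraction of correct answers to item 0
print(discrimination_index(data, 2))  # upper-lower difference for item 2
```

An index computed only on items "having enough answers", as the paper does, simply requires filtering columns by response count before applying these functions.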

Findings

The Difficulty and Discrimination Indexes were calculated for all questions with enough answers. The majority of the questions presented acceptable values for these indexes, which leads to the conclusion that they are of good quality. The analysis also showed that the bank of questions has some internal consistency and, consequently, some reliability. Groups of questions with similar features were obtained, which is very important for teachers to develop tests that are as fair as possible.
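The internal-consistency finding can be illustrated with a standard reliability coefficient. The sketch below (not code from the paper) computes Kuder-Richardson 20 (KR-20), the special case of Cronbach's alpha for dichotomous items, on a hypothetical 0/1 response matrix.

```python
# Illustrative KR-20 internal-consistency coefficient for a
# students x items matrix of 0/1 scored MCQ answers.

def kr20(responses):
    """KR-20 reliability estimate for dichotomous items."""
    n_students = len(responses)
    k = len(responses[0])                      # number of items
    totals = [sum(row) for row in responses]   # each student's total score
    mean = sum(totals) / n_students
    var_total = sum((t - mean) ** 2 for t in totals) / n_students
    pq = 0.0                                   # sum of item variances p*(1-p)
    for item in range(k):
        p = sum(row[item] for row in responses) / n_students
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_total)

# Hypothetical data: 5 students x 4 items
data = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
]
print(round(kr20(data), 3))
```

Values closer to 1 indicate stronger internal consistency of the bank; "some internal consistency", as reported in the findings, corresponds to a moderate coefficient.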

Originality/value

The main contribution and originality of this research is the definition of groups of questions with similar features regarding their difficulty and discrimination properties. These groups allow the identification of difficulty levels in the bank of questions, thus enabling teachers to build tests, randomly generated with Moodle, that include questions of several difficulty levels, as they should. To the best of the authors' knowledge, there are no similar results in the literature.
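The grouping idea can be sketched as a simple binning of items by their Difficulty Index. The cut-offs below are conventional CTT rules of thumb, not the groupings defined in the paper, and the item names and p-values are hypothetical.

```python
# Illustrative grouping of question-bank items into difficulty levels
# from their Difficulty Index (p-value), so a randomly generated test
# can draw questions from every level.

def difficulty_level(p):
    """Conventional CTT rule-of-thumb cut-offs (assumed, not the paper's)."""
    if p >= 0.70:
        return "easy"
    if p >= 0.30:
        return "medium"
    return "hard"

def group_by_level(p_values):
    """Map level -> list of item ids, preserving item order."""
    groups = {"easy": [], "medium": [], "hard": []}
    for item, p in p_values.items():
        groups[difficulty_level(p)].append(item)
    return groups

# Hypothetical p-values for five items of the bank
p_values = {"Q1": 0.85, "Q2": 0.55, "Q3": 0.25, "Q4": 0.72, "Q5": 0.40}
print(group_by_level(p_values))
```

A test generator would then sample a fixed number of items from each group, which is how "questions with several difficulty levels" can be guaranteed in every randomly generated test.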

Citation

Azevedo, J.M., Oliveira, E.P. and Beites, P.D. (2019), "Using Learning Analytics to evaluate the quality of multiple-choice questions: A perspective with Classical Test Theory and Item Response Theory", International Journal of Information and Learning Technology, Vol. 36 No. 4, pp. 322-341. https://doi.org/10.1108/IJILT-02-2019-0023

Publisher

Emerald Publishing Limited

Copyright © 2019, Emerald Publishing Limited