Search results
Rajasekharan Pillai K. and Ashish Viswanath Prakash
Abstract
Purpose
The purpose of the study is to analyse the perception of students toward a computer-based exam on a custom-made digital device and their willingness to adopt the same for high-stake summative assessment.
Design/methodology/approach
This study followed an analytical methodology using survey design. A modified version of students’ perception of e-assessment questionnaire (SPEAQ) was used to elicit information from the subjects, who were drawn from a first-year post-graduate course in management and commerce, soliciting voluntary participation in the survey. SmartPLS 2.0 was the major analytical tool used to understand the theoretical robustness of observed and latent variables through structural equation modelling. The final model was retained based on the structural significance of the path coefficients.
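In PLS-based structural equation modelling, the significance of path coefficients is typically judged by bootstrapping. The following is a minimal illustrative sketch of that idea for a single path, using simulated data with hypothetical variable names (attitude predicting intention to adopt); it is not the authors' model or SmartPLS itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: latent "attitude" scores predicting "intention to adopt".
n = 200
attitude = rng.normal(size=n)
intention = 0.5 * attitude + rng.normal(scale=0.8, size=n)

def path_coefficient(x, y):
    """Standardized regression slope for a single-path model."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.polyfit(x, y, 1)[0])

# Bootstrap the path coefficient, as PLS-SEM software does for significance.
boots = []
for _ in range(1000):
    idx = rng.integers(0, n, size=n)
    boots.append(path_coefficient(attitude[idx], intention[idx]))

beta = path_coefficient(attitude, intention)
se = np.std(boots, ddof=1)
t_stat = beta / se
print(f"beta={beta:.3f}, se={se:.3f}, t={t_stat:.2f}")
```

A path is typically retained when its bootstrap t-statistic exceeds the critical value (about 1.96 for a 5% level), which is the sense in which the final model here was "retained based on the structural significance of the path coefficients".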
Findings
The results of the study offer ample evidence for the proposed theoretical relationship. The subjects of the study maintained a positive attitude toward e-assessment; hence, the introduction of e-exams for high-stakes assessment is suitable for current-generation students.
Research limitations/implications
The findings of the study may not be relevant to students who have not experienced e-learning processes, as e-assessment can only be effective when students have ample exposure to working on computers.
Practical implications
A major practical implication of the study is that e-exams will positively influence the outcome of education and effectiveness of the teaching–learning process. Technology, as an eclectic paradigm, can amplify the educational outcome by boosting the competency of students to meet challenges of any emerging situations.
Originality/value
The idea of an e-exam, using a custom-made device, is unprecedented. This paper offers convincing empirical evidence to academic administrators for integrating e-assessment with e-learning programs.
Stavros A. Nikou and Anastasios A. Economides
Abstract
Purpose
The purpose of this study is to compare the overall usability and user experience of desktop computers and mobile devices when used in a summative assessment in the context of a higher education course.
Design/methodology/approach
The study follows a between-groups design. The participants were 110 first-year undergraduate students from a European university. Students in the experimental group participated in the assessment using mobile devices, whereas students in the control group participated using desktop computers. After the assessment, students self-reported their experiences with computer-based assessment (CBA) and mobile-based assessment (MBA), respectively. The instruments used were the user experience questionnaire and the system usability scale.
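The system usability scale (SUS) mentioned above has a standard, published scoring rule: ten 1-5 Likert items, alternately positively and negatively worded, rescaled to a 0-100 score. A minimal sketch of that scoring (the example responses are invented):

```python
def sus_score(responses):
    """Standard System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items are positively worded (contribution = response - 1);
    even-numbered items are negatively worded (contribution = 5 - response).
    The summed contributions (0-40) are scaled to the 0-100 range.
    """
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Example: a fairly positive respondent.
print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]))  # → 80.0
```

Comparing mean SUS scores between the mobile and desktop groups is the kind of analysis that underlies the "overall score for the system usability" finding below.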
Findings
Attractiveness and novelty were reported significantly higher in the experimental group (MBA), while no significant differences were found between the two groups in terms of efficiency, perspicuity, dependability and stimulation. The overall score for the system usability was not found to differ between the two conditions.
Practical implications
The usability and user experience issues discussed in this study can inform educators and policymakers about the potential of using mobile devices in online assessment practices, as an alternative to desktop computers.
Originality/value
The study is novel, in that it provides quantitative evidence for the usability and user experience of both desktop computers and mobile devices when used in a summative assessment in the context of a higher education course. Study findings can contribute towards the interchangeable usage of desktop computers and mobile devices in assessment practices in higher education.
Abstract
Each of the four objectives can be applied within the military training environment. Military training often requires that soldiers achieve specific levels of performance or proficiency in each phase of training. For example, training courses impose entrance and graduation criteria, and awards are given for excellence in military performance. Frequently, training devices, training media, and training evaluators or observers also directly support the need to diagnose performance strengths and weaknesses. Training measures may be used as indices of performance, and to indicate the need for additional or remedial training.
Abstract
Purpose
Surveys that include skill measures may suffer from additional sources of error compared to those containing questionnaires alone. Examples are distractions such as noise or interruptions of testing sessions, as well as fatigue or lack of motivation to succeed. This paper aims to provide a review of statistical tools based on latent variable modeling approaches extended by explanatory variables that allow detection of survey errors in skill surveys.
Design/methodology/approach
This paper reviews psychometric methods for detecting sources of error in cognitive assessments and questionnaires. Aside from traditional item responses, new sources of data in computer-based assessment are available – timing data from the Programme for the International Assessment of Adult Competencies (PIAAC) and data from questionnaires – to help detect survey errors.
Findings
Some unexpected results are reported. Respondents who tend to use response sets have lower expected values on PIAAC literacy scales, even after controlling for scores on the skill-use scale that was used to derive the response tendency.
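A simple operationalization of the "response set" tendency discussed here is straightlining: a respondent giving (nearly) identical answers across a Likert battery. The sketch below flags respondents with near-zero within-person variability; the respondent data and the 0.5 cutoff are illustrative assumptions, not values from the PIAAC analysis.

```python
import statistics

def straightlining_index(likert_responses):
    """Within-respondent standard deviation across a Likert battery.

    A value of 0 means the respondent gave the identical answer to every
    item -- the classic 'response set' pattern discussed above.
    """
    return statistics.pstdev(likert_responses)

respondents = {
    "r1": [3, 3, 3, 3, 3, 3],   # straightliner
    "r2": [1, 4, 2, 5, 3, 4],   # differentiated responding
}
flagged = [rid for rid, resp in respondents.items()
           if straightlining_index(resp) < 0.5]   # illustrative cutoff
print(flagged)  # → ['r1']
```

Such an index can then enter a latent variable model as an explanatory covariate, which is how a response tendency derived from a skill-use scale can be related to expected literacy scores.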
Originality/value
The use of new sources of data, such as timing and log-file or process data information, provides new avenues to detect response errors. It demonstrates that large data collections need to better utilize available information and that integration of assessment, modeling and substantive theory needs to be taken more seriously.
C. Lamprecht and G.F. Nel
Abstract
In the light of the acceleration in the international and local information and knowledge revolution, the University of Stellenbosch (US) has introduced an e‐learning strategy to gain maximum benefit from the developments in information technology. In support of this strategy, the US has implemented WebCT as an electronic course management system. Subsequent consultations have revealed doubt among accounting lecturers and students about the effectiveness of WebCT assessment of tests in Financial Accounting. The purpose of the study was therefore to investigate this perception on the basis of the available literature, our own experience, categories of student learning and feedback from students. The WebCT assessment function was also contrasted with traditional assessment methods. It was concluded that although WebCT is not a quick fix, it could be implemented successfully in bigger classes, provided that innovative lecturers are responsible for these classes.
Abstract
Discusses the reappraisal at Leeds Polytechnic of methodology for developing teaching and learning materials in general, and independent learning materials in particular. Outlines two models for establishing production facilities and two models for designing and creating materials. Discusses the strategy adopted by the Polytechnic, which has led to the establishment of a staff‐access laboratory. Illustrates how a lecturer can develop a range of learning resources from word‐processed notes to fully interactive computer‐based material with integrated automatic assessment. Reflects on progress to date.
Abstract
This chapter presents “what we know” about the application of technology to instruction for students with learning and behavioral disabilities. Information is presented on research-based effective practices in technological interventions for teaching specific academic skills, delivering content at the secondary level and using technology as a tool for assessment. The chapter concludes with a discussion on Universal Design for Learning and the promises this paradigm holds for educating not only students with special needs, but all learners. The chapter begins where parents and teachers typically begin: the consideration of technology.
Dirk Ifenthaler and Muhittin ŞAHİN
Abstract
Purpose
This study aims to focus on providing a computerized classification testing (CCT) system that can easily be embedded as a self-assessment feature into the existing legacy environment of a higher education institution, empowering students with self-assessments to monitor their learning progress and following strict data protection regulations. The purpose of this study is to investigate the use of two different versions (without dashboard vs with dashboard) of the CCT system during the course of a semester; to examine changes in the intended use and perceived usefulness of two different versions (without dashboard vs with dashboard) of the CCT system; and to compare the self-reported confidence levels of two different versions (without dashboard vs with dashboard) of the CCT system.
Design/methodology/approach
A total of N = 194 students from a higher education institution in the area of economic and business education participated in the study. The participants were provided access to the CCT system as an opportunity to self-assess their domain knowledge in five areas throughout the semester. An algorithm was implemented to classify learners as masters or nonmasters, and a total of nine metrics were implemented for classifying the performance of learners. Instruments for collecting co-variates included the study interest questionnaire (Cronbach’s α = 0.90), the achievement motivation inventory (Cronbach’s α = 0.94), measures focusing on perceived usefulness and demographic data.
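The abstract does not specify the classification algorithm, but a common approach in computerized classification testing is Wald's sequential probability ratio test (SPRT), which accumulates evidence item by item until a master/nonmaster decision can be made. The sketch below is a generic SPRT illustration under assumed response probabilities, not the authors' implementation.

```python
import math

def sprt_classify(responses, p_master=0.8, p_nonmaster=0.5,
                  alpha=0.05, beta=0.05):
    """Sequential probability ratio test for master/nonmaster classification.

    `responses` is a list of 0/1 item scores. p_master and p_nonmaster are
    assumed probabilities of a correct answer under each hypothesis; alpha
    and beta are the tolerated error rates. Returns 'master', 'nonmaster',
    or 'undecided' if the item pool runs out first.
    """
    upper = math.log((1 - beta) / alpha)      # decision bound: master
    lower = math.log(beta / (1 - alpha))      # decision bound: nonmaster
    llr = 0.0                                 # running log-likelihood ratio
    for x in responses:
        if x:
            llr += math.log(p_master / p_nonmaster)
        else:
            llr += math.log((1 - p_master) / (1 - p_nonmaster))
        if llr >= upper:
            return "master"
        if llr <= lower:
            return "nonmaster"
    return "undecided"

print(sprt_classify([1, 1, 1, 1, 1, 1, 1, 1]))  # → master
print(sprt_classify([0, 0, 1, 0, 0, 0, 0, 0]))  # → nonmaster
```

A self-assessment dashboard can then surface the running decision state to the learner, which matches the monitoring purpose described above.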
Findings
The findings indicate that the students used the CCT system intensively throughout the semester. Students in a cohort with a dashboard available interacted more with the CCT system than students in a cohort without a dashboard. Further, findings showed that students with a dashboard available reported significantly higher confidence levels in the CCT system than participants without a dashboard.
Originality/value
The design of digitally supported learning environments requires valid formative (self-)assessment data to better support the current needs of the learner. While the findings of the current study are limited concerning one study cohort and a limited number of self-assessment areas, the CCT system is being further developed for seamless integration of self-assessment and related feedback to further reveal unforeseen opportunities for future student cohorts.
Kentaro Yamamoto and Mary Louise Lennon
Abstract
Purpose
Fabricated data jeopardize the reliability of large-scale population surveys and reduce the comparability of such efforts by destroying the linkage between data and measurement constructs. Such data result in the loss of comparability across participating countries and, in the case of cyclical surveys, between past and present surveys. This paper aims to describe how data fabrication can be understood in the context of the complex processes involved in the collection, handling, submission and analysis of large-scale assessment data. The actors involved in those processes, and their possible motivations for data fabrication, are also elaborated.
Design/methodology/approach
Computer-based assessments produce new types of information that enable us to detect the possibility of data fabrication, and therefore the need for further investigation and analysis. The paper presents three examples that illustrate how data fabrication was identified and documented in the Programme for the International Assessment of Adult Competencies (PIAAC) and the Programme for International Student Assessment (PISA) and discusses the resulting remediation efforts.
Findings
For two countries that participated in the first round of PIAAC, the data showed a subset of interviewers who handled many more cases than others. In Case 1, the average proficiency for respondents in those interviewers’ caseloads was much higher than expected and included many duplicate response patterns. In Case 2, anomalous response patterns were identified. Case 3 presents findings based on data analyses for one PISA country, where results for human-coded responses were shown to be highly inflated compared to past results.
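One of the screening ideas described in Case 1 - an interviewer whose caseload contains many verbatim-duplicate response patterns - is straightforward to operationalize. The sketch below computes, per interviewer, the share of cases whose full response pattern is an exact duplicate of another case in the same caseload; the record data are invented for illustration.

```python
from collections import Counter, defaultdict

def duplicate_rate_by_interviewer(records):
    """Share of exactly duplicated response patterns per interviewer caseload.

    `records` is a list of (interviewer_id, response_pattern) pairs, where
    the pattern is the concatenated item responses for one respondent. A
    high rate of verbatim duplicates is the anomaly described above.
    """
    caseloads = defaultdict(list)
    for interviewer, pattern in records:
        caseloads[interviewer].append(pattern)
    rates = {}
    for interviewer, patterns in caseloads.items():
        counts = Counter(patterns)
        duplicated = sum(c for c in counts.values() if c > 1)
        rates[interviewer] = duplicated / len(patterns)
    return rates

records = [
    ("A", "1011101"), ("A", "1011101"), ("A", "1011101"),  # suspicious
    ("B", "1001110"), ("B", "0111001"), ("B", "1100110"),  # varied
]
print(duplicate_rate_by_interviewer(records))  # → {'A': 1.0, 'B': 0.0}
```

In practice such a screen is combined with caseload size and the timing data mentioned below, since duplicates can also arise benignly on very short instruments.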
Originality/value
This paper shows how new sources of data, such as timing information collected in computer-based assessments, can be combined with other traditional sources to detect fabrication.
Abstract
Purpose
This study utilized the item response theory (IRT) rating scale model to analyze students’ perceptions of assessment practices in two universities: one in Jordan and the other in the USA. The sample of the study consisted of 506 university students selected from both universities. Results show that the two universities still focus on paper-and-pencil testing to assess students’ learning outcomes. The study recommends that higher education institutes encourage their teachers to use a variety of assessment methods to assess students’ learning outcomes.
Design/methodology/approach
The convenience sample consisted of 506 selected university students from the USA and Jordan, and participants were distributed according to their educational levels, thus: 83 freshmen, 139 sophomores, 157 juniors and 59 seniors. (Note: some students from both universities did not report their gender and/or their educational level). The USA university sample consisted of 219 students from three colleges at a major university in the southeast of the USA studying for arts and sciences, education and commerce and business qualifications, of whom 43 were males and 173 were females. The study used the Students Perception of Assessment Practices Inventory developed by Alquraan (2007), and for the purpose of this study, the RUMM2020 program was used for its rating scale model.
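The rating scale model fitted by RUMM2020 is Andrich's extension of the Rasch model, in which every item shares one set of category thresholds. As a minimal sketch, the category probabilities for one item can be computed directly from the model's defining equation; the person, item, and threshold values below are arbitrary illustration, not estimates from this study.

```python
import math

def rating_scale_probs(theta, delta, thresholds):
    """Category probabilities under Andrich's rating scale model.

    theta: person location; delta: item location; thresholds: category
    thresholds tau_1..tau_m shared across items. Returns the probability
    of responding in each of the m+1 categories.
    """
    numerators = [math.exp(0.0)]              # category 0
    cum = 0.0
    for tau in thresholds:
        cum += theta - (delta + tau)          # cumulative logit for category k
        numerators.append(math.exp(cum))
    z = sum(numerators)                       # normalizing constant
    return [n / z for n in numerators]

# A person slightly above the item's location on a 4-category item.
probs = rating_scale_probs(theta=0.5, delta=0.0, thresholds=[-1.0, 0.0, 1.0])
print([round(p, 3) for p in probs])
```

Fitting the model to the inventory's responses yields person and item locations on a common logit scale, which is what allows the assessment practices to be ranked by how frequently they are endorsed.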
Findings
Both universities, in Jordan and the USA, still focus more on the developmental (construction of assessment tasks), organizational and planning aspects of assessment processes than they do on assessments of learning and assessment methods (traditional and new assessment methods). The assessment practices that are used frequently in both universities based on the teachers sampled are: “(I27) I know what to study for the test in this class”, “(I6) Teacher provides a good environment during test administration” and “(I21) My teacher avoids interrupting students as they are taking tests”. This indicates that teachers in the selected universities have a tendency to focus on the administrative and communicative aspects of assessment (e.g. providing a good environment during test administration) more than on using different assessment methods (e.g. portfolios, new technology, computers, peer and self-assessment) or even using assessment practices that help students learn in different ways (e.g. assessing students’ prior knowledge and providing written feedback on the graded tests).
Originality/value
This is a cross-cultural study focusing on the assessment of student learning in higher education.