Search results

1 – 10 of over 5000
Open Access
Article
Publication date: 12 August 2022

Hesham El Marsafawy, Rumpa Roy and Fahema Ali


Abstract

Purpose

This study aims to identify the gap between the requirements of accreditation bodies and widely used learning management systems (LMSs) in assessing intended learning outcomes (ILOs). It also introduces a framework, together with an evaluation of LMS functionality, for measuring ILOs.

Design/methodology/approach

A qualitative method was deployed to examine the gap between the requirements of the accreditation standards and the LMS functionalities. The researchers collaborated to design a mechanism, develop a system architecture to measure the ILO in alignment with the accreditation standards and guide the development of the Moodle plugin. The appropriateness and effectiveness of the plugin were evaluated within the scope of assessment mapping and design. Focus group interviews were conducted to collect feedback from the instructors and program leaders regarding its implementation.

Findings

The results of this study indicate that existing LMSs offer no standardized mechanism to measure course and program ILOs objectively. The implementation of the plugin demonstrated the system's appropriateness and effectiveness in generating ILO achievement reports, which users confirmed.

Originality/value

This study proposed a framework and developed a system architecture for the objective measurement of ILOs through direct assessment. The plugin was tested and generated consistent reports when measuring course and program ILOs. It has been implemented across Gulf University's program courses, ensuring appropriate reporting and continuous improvement.

Details

Quality Assurance in Education, vol. 30 no. 4
Type: Research Article
ISSN: 0968-4883

Open Access
Article
Publication date: 1 June 2014

Bader Ahmed Abuid

Abstract

This paper proposes a systematic, well-defined student participation assessment scheme for college courses. The scheme supports the involvement of students in a variety of areas of participation within and outside the classroom, with the aim of improving their learning. It addresses mainly the challenges related to the practicality of the structure and design of the assessment, as well as the subjectivity of grading student participation. Areas of participation are widened to give faculty more accurate information about the conduct of each individual student, toward more objective assessment. In addition, the scheme gives faculty the flexibility to select areas that best fit the learning outcomes, the nature of the course, the availability of time and resources, and the class atmosphere. The proposed scheme was initiated and developed using feedback from the teaching staff of Nizwa College of Technology (NCT) through a survey and open discussion. The results indicate that over two-thirds of the surveyed staff agree with the concept of assessing participation and find the scheme design clear and systematic, while 82% of them perceive the scheme as effective in improving students' motivation and learning.

Details

Learning and Teaching in Higher Education: Gulf Perspectives, vol. 11 no. 1
Type: Research Article
ISSN: 2077-5504

Open Access
Article
Publication date: 27 August 2019

Óscar Martín Rodríguez, Francisco González-Gómez and Jorge Guardiola


Abstract

Purpose

The purpose of this paper is to examine the relationship between student assessment method and e-learning satisfaction: which e-learning assessment method do students prefer? The assessment method is an additional determinant of effectiveness and quality that affects user satisfaction with online courses.

Design/methodology/approach

The study employs data from 1,114 students. The first set of data was obtained from a questionnaire on the online platform; the second was obtained from external assessment reports by e-learning specialists. The satisfaction students reported in the questionnaire is the dependent variable in the multivariate analysis. To estimate the influence of the independent variables on global satisfaction, we use the ordinary least squares technique, which is appropriate for discrete dependent variables whose categories are ordered and numerous, as is the case for the dependent variable here.

Findings

The assessment method influences e-learning satisfaction, albeit only slightly. Students are reluctant to be assessed by a final exam and prefer systems that give more weight to coursework in the final mark.

Practical implications

Knowing the level of student satisfaction and the factors that influence it helps teachers improve their courses.

Originality/value

In online education, student satisfaction is an indicator of the quality of the education system. Although previous research has analyzed the factors that influence e-student satisfaction, to the best of the authors' knowledge no previous research has specifically analyzed the relationship between assessment systems and general student satisfaction with the course.

Details

Higher Education Evaluation and Development, vol. 13 no. 1
Type: Research Article
ISSN: 2514-5789

Open Access
Article
Publication date: 31 May 2019

Allen Z. Reich, Galen R. Collins, Agnes L. DeFranco and Suzanne L. Pieper


Abstract

Purpose

Because of the increasingly high expectations of accrediting organizations, calls for greater accountability from state governments and students' demand for an education that prepares them for a career, most hospitality programs are now required to have an effective assessment of learning outcomes process. The increasing popularity of this process is viewed as highly positive because it can be considered best practice in higher education. The paper aims to discuss this issue.

Design/methodology/approach

This is Part 2 of a two-part article that provides an overview of the justifications for implementing an assessment of learning outcomes process, the steps that were developed by two hospitality programs, and the experiences of the two programs during implementation.

Findings

The steps in a closed-loop assessment of learning outcomes process are relatively detailed; however, because of changes in expectations of stakeholders and the requirements of accreditors, they are now mandatory for most hospitality programs. Therefore, the choice is not whether to implement them, but when. From a competitive standpoint, it is to the program’s advantage to begin as soon as possible. Another factor to consider is that the implementation of a closed-loop assessment of learning outcomes process will take several years to complete.

Originality/value

This paper presents a critical view of one of the most important concepts in higher education, if not the most important: the closed-loop assessment of learning outcomes process. The information on the process provided here, and the experiences of the two programs, can hopefully shorten the learning curve for other hospitality programs.

Details

International Hospitality Review, vol. 33 no. 1
Type: Research Article
ISSN: 2516-8142

Open Access
Article
Publication date: 4 June 2019

Li Hsien Ooi and Arathai Din Eak


Abstract

Purpose

The purpose of this paper is to highlight how accreditation of prior experiential learning (APEL) is implemented, to describe the challenges faced by APEL assessors while assessing candidates and to suggest recommendations for improving the APEL process.

Design/methodology/approach

This paper is based on the critical reflections of two accreditation of prior experiential learning for admissions (APEL-A) assessors appointed from a Malaysian Qualifications Agency-approved assessment centre. Drawing on the assessors' experience adds depth and breadth to the study.

Findings

The study identified five challenges in the implementation of APEL-A: limited literature and records of existing practices; conceptualisation of the APEL process; a complicated and time-consuming APEL process; standards of acceptance that vary by discipline; and a lack of continuous training for APEL assessors. The four recommendations for improvement are: transparent and clear guidelines; consistency in practices and fairness to those from conventional learning; integration of APEL into the institution's academic policy; and continuous training for all APEL assessors.

Originality/value

Until now, little research has been done on the implementation of APEL in Malaysia. The number of learners enrolled through this form of assessment may be low but is growing. Feedback on the implementation of the APEL-A assessment process would greatly benefit the stakeholders involved in improving it, and the challenges highlighted, along with the recommendations put forth, may also be useful for its continuous improvement.

Details

Asian Association of Open Universities Journal, vol. 14 no. 1
Type: Research Article
ISSN: 2414-6994

Open Access
Article
Publication date: 3 May 2019

Allen Z. Reich, Galen R. Collins, Agnes L. DeFranco and Suzanne L. Pieper


Abstract

Purpose

Because of the increasingly high expectations of accrediting organizations, calls for greater accountability from state governments, and students' demand for an education that prepares them for a career, most hospitality programs are now required to have an effective assessment of learning outcomes process. The increasing popularity of this process is viewed as highly positive because it can be considered best practice in higher education. The paper aims to discuss this issue.

Design/methodology/approach

This is Part 1 of a two-part article that provides an overview of the justifications for implementing an assessment of learning outcomes process, the steps that were developed by two hospitality programs and the experiences of the two programs during implementation of the seven steps. Part 1 includes foundational principles of the process and the first three of the seven steps.

Findings

The steps in a closed-loop assessment of learning outcomes process are relatively detailed; however, because of changes in expectations of stakeholders and the requirements of accreditors, they are now mandatory for most hospitality programs. Therefore, the choice is not whether to implement them, but when to implement them. From a competitive standpoint, it is to the program’s advantage to begin as soon as possible. Another factor to consider is that the implementation of an effective closed-loop assessment of learning outcomes process will take several years to complete.

Originality/value

This paper presents a critical view of one of the most important concepts in higher education, if not the most important: the closed-loop assessment of learning outcomes process. The information on the process provided here, and the experiences of the two programs, can hopefully shorten the learning curve for other hospitality programs.

Details

International Hospitality Review, vol. 33 no. 1
Type: Research Article
ISSN: 2516-8142

Open Access
Article
Publication date: 26 July 2018

Fanny M.F. Lau and Gryphon Sou


Abstract

Purpose

The territory-wide system assessment (TSA) has been administered by the Hong Kong (HK) Education Bureau (EDB) since 2004. Since then, parents and teachers have questioned its necessity, value, usefulness, effectiveness and potential harm for schools, teachers and students. In 2015, the issue blew up when Kau Yan School's principal boycotted the tests, and a series of public and media discussions and various surveys followed across HK. After a review, the EDB announced in 2017 that a revised version of the TSA would be extended to Primary 3 students in HK. The purpose of this paper is to propose that the TSAs for Primary 3, Primary 6 and Secondary 3 need a further review to judge their necessity and usefulness.

Design/methodology/approach

This paper reviews the educational policy governing the administration of the TSA. Primary and secondary data from focus group meetings, press interviews (Bogdan and Biklen, 1982; Miles and Huberman, 1994; Ouiment et al., 2001) and public reports were analyzed. In addition, participant observation and theoretical reasoning (Nosich, 1982; Sou, 2000; Sou and Zhou, 2007) were applied for the critical review of this controversial test. A contrast study of the conflicting views of stakeholders in the education industry brings insights into this controversial educational policy in assessment for learning.

Findings

Stakeholders' perceptions of the shift from the TSA to the basic competency assessment (BCA) conflict and contrast. The governmental stakeholder, the EDB, stressed that the TSA is a low-stakes assessment that requires no extra practice by students. Non-governmental stakeholders (legislative councilors, school principals, teachers, parents and students) perceived it differently. Facing the opposition and grievances of these stakeholders, the EDB announced in January 2017 that the revised version of the TSA, the BCA, would be extended across HK in May 2017. Parents and legislative councilors were angry and asked for a review, or even cancellation, of the Primary 3 TSA.

Originality/value

This original study will initiate more thorough revisions of, and discussions about, the TSAs for Primary 3, Primary 6 and Secondary 3 in HK as a quality educational management step. While the TSA for Primary 3 has been reviewed and substantially revised, the community at large still asks for a further review of its necessity, usefulness and harm for parents, teachers and students. Since the underlying causes of student suicides have not been fully identified, the problem of over-drilling for the TSAs for Primary 3, Primary 6 and Secondary 3 needs to be satisfactorily resolved. Thus, the TSAs for Primary 6 and Secondary 3, like that for Primary 3, should be reviewed for probable revision.

Open Access
Article
Publication date: 12 May 2023

Dirk Ifenthaler and Muhittin ŞAHİN

Abstract

Purpose

This study focuses on providing a computerized classification testing (CCT) system that can easily be embedded as a self-assessment feature into the existing legacy environment of a higher education institution, empowering students to monitor their learning progress through self-assessment while following strict data protection regulations. The purpose of this study is to investigate the use of two versions of the CCT system (without dashboard vs with dashboard) over the course of a semester; to examine changes in the intended use and perceived usefulness of the two versions; and to compare the self-reported confidence levels associated with the two versions.

Design/methodology/approach

A total of N = 194 students from a higher education institution in the area of economic and business education participated in the study. Participants were given access to the CCT system as an opportunity to self-assess their domain knowledge in five areas throughout the semester. An algorithm was implemented to classify learners as masters or nonmasters, using a total of nine metrics to classify learner performance. Instruments for collecting covariates included the study interest questionnaire (Cronbach's α = 0.90), the achievement motivation inventory (Cronbach's α = 0.94), measures of perceived usefulness and demographic data.

Findings

The findings indicate that the students used the CCT system intensively throughout the semester. Students in a cohort with a dashboard available interacted more with the CCT system than students in a cohort without a dashboard. Further, findings showed that students with a dashboard available reported significantly higher confidence levels in the CCT system than participants without a dashboard.

Originality/value

The design of digitally supported learning environments requires valid formative (self-)assessment data to better support the current needs of the learner. While the findings of the current study are limited concerning one study cohort and a limited number of self-assessment areas, the CCT system is being further developed for seamless integration of self-assessment and related feedback to further reveal unforeseen opportunities for future student cohorts.

Details

Interactive Technology and Smart Education, vol. 20 no. 3
Type: Research Article
ISSN: 1741-5659

Open Access
Article
Publication date: 22 February 2024

Daniele Morselli

Abstract

Purpose

This article focuses on the assessment of entrepreneurship competence by selected vocational teachers in Italy. The exploratory research question addresses the extent to which entrepreneurship assessments are competence based, and the research seeks to identify fully fledged assessment programmes with both a formative and a summative component, and with assessment rubrics. It also explores the extent to which entrepreneurship competence is referred to in school documentation and later assessed, and the tools and strategies used for such assessment.

Design/methodology/approach

This case study is part of a larger European research project promoted by Cedefop; in Italy it focused on six selected vocational IVET and CVET programmes and apprenticeship schemes. It used a wide range of instruments to ensure triangulation and multiple perspectives: it analysed policy documents and conducted online interviews with experts and policy makers. At VET providers' premises it deployed analysis of school documents, observations of learning environments, and interviews and focus groups with teachers, directors and vice directors, learners and alumni (in schools) and with instructors, company tutors and employers, apprentices and alumni (in companies).

Findings

Assessment tasks were rarely embedded within fully fledged assessment programmes involving both formative and summative tasks and assessment rubrics for grading. Most of the time, entrepreneurship programmes lacked self-assessment, peer assessment and structured feedback, and did not involve learners in the assessment process. Some instructors coached the students but undertook no clear formative assessment. These findings suggest institutions have a testing culture with regard to assessment, at the level of both policy and practice. In most cases, entrepreneurship competence was not directly assessed, and learning outcomes were only loosely related to entrepreneurship.

Research limitations/implications

One limitation concerns the selection of the VET providers: they were chosen not at random, but because they ran programmes relevant to the development of entrepreneurship competence.

Practical implications

At the policy level, there is a need for new guidelines on competence development and assessment in VET, guidelines that are more aligned with educational research on competence development. To ensure the development of entrepreneurship competence, educators need in-service training and a community of practice.

Originality/value

So far, the literature has concentrated on entrepreneurship education at the tertiary level. Little is known about how VET instructors assess entrepreneurship competence. This study updates the picture of policy and practice in Italy, illustrating how entrepreneurship competence is developed in selected IVET and CVET programmes and apprenticeships.

Details

Education + Training, vol. 66 no. 10
Type: Research Article
ISSN: 0040-0912

Open Access
Article
Publication date: 30 September 2020

Aminudin Zuhairi, Maria Rowena Del Rosario Raymundo and Kamran Mir


Abstract

Purpose

Quality assurance (QA) in open and distance learning (ODL) has always been a universal concern of stakeholders. The quality of ODL is confronted with challenges in terms of the diversity of inputs, the processes, the complex supply chain management of ODL and the recent paradigm shift to online learning. Assuring the quality of ODL is a daunting task at the individual, institution and system levels. Completed before the beginning of the COVID-19 outbreak, this study aims to better understand the implementation of QA systems in three Asian open universities (OUs): the University of the Philippines Open University (UPOU); Universitas Terbuka (UT), Indonesia; and Allama Iqbal Open University (AIOU), Pakistan.

Design/methodology/approach

A qualitative method was employed, involving analysis of documents from the three Asian OUs and focus group discussions and interviews with management and staff. The data collected were then analyzed to draw conclusions and possible recommendations.

Findings

The findings of this study present good practices, challenges and room for improvement in the QA systems of the three Asian OUs. Focusing on students and stakeholders in their QA efforts, this study reveals that quality begins with one's inner self and is multidimensional. QA is principally viewed as continuous improvement, as mechanism and assessment, and as an effort to exceed the expectations of students and stakeholders. The recent challenge for QA is to embrace the delicate process of transforming ODL into an online digital system. The COVID-19 outbreak has further challenged QA implementation in ODL in higher education, taking it to the next level of complexity.

Practical implications

This study reveals the diversity in how OUs meet the societal needs of their respective stakeholders and addresses the challenges ahead for QA in ODL.

Originality/value

These findings are expected to enhance understanding of the theory and practice of QA in ODL and to contribute to the quality improvement of ODL programs.

Details

Asian Association of Open Universities Journal, vol. 15 no. 3
Type: Research Article
ISSN: 1858-3431
