Search results
1 – 10 of over 5,000

Lonneke H. Schellekens, Marieke F. van der Schaaf, Cees P.M. van der Vleuten, Frans J. Prins, Saskia Wools and Harold G.J. Bok
Abstract
Purpose
This study aims to report the design, development and evaluation of a digital quality assurance application designed to improve and ensure the quality of assessment programmes in higher education.
Design/methodology/approach
The application was developed using a design-based research (DBR) methodology. The application’s design was informed by a literature search and needs assessment of quality assurance stakeholders to ensure compliance with daily practices and accreditation requirements. Stakeholders from three study programmes evaluated the application.
Findings
As part of the development of the application, module- and programme-level dashboards were created to provide an overview of the programme’s outcomes, assessment methods, assessment metrics, self-evaluated quality indicators and assessment documents. The application was evaluated by stakeholders at the module and programme levels. Overall, the results indicated that the dashboards aided them in gaining insight into the assessment programme and its alignment with underlying assessments.
Practical implications
Visualisation of the assessment programme’s structure and content identifies gaps and opportunities for improvement, which can be used to initiate a dialogue and further actions to improve assessment quality.
Originality/value
The application developed facilitates a cyclical and transparent assessment quality assurance procedure that is continuously available to various stakeholders in quality assurance.
Umut Al, Pablo Andrade Blanco, Marcel Chiranov, Lina Maria Cruz Silva, Luba Nikolaeva Devetakova, Yulianto Dewata, Ieva Dryžaite, Fiona Farquharson, Maciej Kochanowicz, Tetiana Liubyva, Andrea López Naranjo, Quynh Truc Phan, Rocky Ralebipi-Simela, Irem Soydal, David Streatfield, Resego Taolo, Tâm Thị Thanh Trần and Yuliya Tkachuk
Abstract
Purpose
The purpose of this paper is to report on performance measurement and impact assessment progress made in 14 countries as part of the Global Libraries initiative, starting with the early country grants in Mexico and Chile. For the mature grants in Bulgaria, Botswana, Poland, Romania, Ukraine and Viet Nam, which were recently completed or are approaching completion, the nature of the country program is outlined before the impact assessment work is described and some recent results and conclusions are reported. A similar approach is adopted with the pilot and new grants in Colombia, Indonesia, South Africa, Turkey and Lithuania.
Design/methodology/approach
The country reports are presented as a series of case studies, in some cases supplementing those in an earlier special issue of this journal.
Findings
Where appropriate, recent country-specific survey findings are reported.
Practical implications
This paper shares Global Libraries IPA learning at country level with people in other countries who may be contemplating public library evaluation at regional, national or local level or who are interested in performance measurement and impact evaluation.
Originality/value
These case studies reflect concentrated impact assessment and performance measurement work at country level across a range of countries over more than 12 years.
Óscar Martín Rodríguez, Francisco González-Gómez and Jorge Guardiola
Abstract
Purpose
The purpose of this paper is to focus on the relationship between student assessment method and e-learning satisfaction. Which e-learning assessment method do students prefer? The assessment method is an additional determinant of the effectiveness and quality that affects user satisfaction with online courses.
Design/methodology/approach
The study employs data from 1,114 students. The first dataset was obtained from a questionnaire on the online platform; the second from external assessment reports written by e-learning specialists. The satisfaction revealed by the students in their questionnaire responses is the dependent variable in the multivariate analysis. To estimate the influence of the independent variables on global satisfaction, we use the ordinary least squares technique, the most appropriate method for discrete dependent variables with multiple ordered categories, as is the case for the dependent variable here.
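The estimation step described above can be sketched as follows. This is a minimal single-predictor illustration with invented data; the study's actual model uses multiple independent variables and 1,114 observations, and the variable names below are hypothetical.

```python
# Minimal ordinary least squares (OLS) sketch, pure Python, one predictor.
# The data are invented for illustration; they are not the study's data.

def ols_fit(x, y):
    """Return (intercept, slope) minimising the sum of squared residuals."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # slope = cov(x, y) / var(x), computed from deviation sums
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Hypothetical data: coursework weight in the final mark (%) vs.
# satisfaction on a 1-5 ordered scale
coursework_weight = [10, 20, 30, 40, 50, 60]
satisfaction = [2, 2, 3, 3, 4, 5]
b0, b1 = ols_fit(coursework_weight, satisfaction)
print(f"satisfaction ~ {b0:.2f} + {b1:.2f} * coursework_weight")
```

A positive slope here would mirror the paper's finding that students prefer systems giving more weight to coursework.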
Findings
The method influences e-learning satisfaction, even though only slightly. The students are reluctant to be assessed by a final exam. Students prefer systems that award more importance to the assessment of coursework as part of the final mark.
Practical implications
Knowing the level of student satisfaction and the factors that influence it is helpful to the teachers for improving their courses.
Originality/value
In online education, student satisfaction is an indicator of the quality of the education system. Although previous research has analyzed the factors that influence e-student satisfaction, to the best of the authors' knowledge, no previous research has specifically analyzed the relationship between assessment systems and general student satisfaction with the course.
John Garger, Paul H. Jacques, Brian W. Gastle and Christine M. Connolly
Abstract
Purpose
The purpose of this paper is to demonstrate that common method variance, specifically single-source bias, threatens the validity of a university-created student assessment of instructor instrument, suggesting that decisions made from these assessments are inherently flawed or skewed. Single-source bias leads to generalizations about assessments that might influence the ability of raters to separate multiple behaviors of an instructor.
Design/methodology/approach
Exploratory factor analysis, nested confirmatory factor analysis and within-and-between analysis are used to assess a university-developed, proprietary student assessment of instructor instrument to determine whether a hypothesized factor structure is identifiable. The instrument was developed over a three-year period by a university-mandated committee.
Findings
Findings suggest that common method variance, specifically single-source bias, resulted in the inability to identify hypothesized constructs statistically. Additional information is needed to identify valid instruments and an effective collection method for assessment.
Practical implications
Institutions are not guaranteed valid or useful instruments even if they invest significant time and resources to produce one. Without accurate instrumentation, there is insufficient information to assess constructs for teaching excellence. More valid measurement criteria can result from using multiple methods, altering collection times and educating students to distinguish multiple traits and behaviors of individual instructors more accurately.
Originality/value
This paper documents the three-year development of a university-wide student assessment of instructor instrument and carries development through to examining the psychometric properties and appropriateness of using this instrument to evaluate instructors.
Abstract
Purpose
Relying on a design science paradigm, the purpose of this paper is to describe the development and evaluation of items for an ICT artefact that supports the assessment of transversal professional competences within the validation of prior learning (VPL). To do so, the authors build a conceptual bridge between the Occupational Information Network (O*NET) and the European Qualifications Framework (EQF).
Design/methodology/approach
The study follows a design science research paradigm, in particular the participatory development of candidate items and their evaluation in a multi-stakeholder approach.
Findings
The authors find that a self-assessment of professional competences should comprise 160 items in order to cover the breadth and depth of the O*NET in the hierarchical taxonomy. Such a quantity of items sufficiently builds a conceptual bridge between the O*NET and the EQF.
Practical implications
When designing procedures for the VPL, it is imperative to bear in mind the purpose of the validation procedure, in order to determine relevant stakeholders and their needs in advance as well as the required language proficiency of the assessment instrument.
Social implications
The innovative value of this approach lies in the combination of an underlying hierarchical taxonomy with assessment items that are developed based on the qualification standards of different Austrian professions. Together with specific verbs that were adapted for each particular item, an innovative self-assessment is proposed. Thereby the authors aim to account for some of the mentioned shortcomings of the EQF.
Originality/value
This paper applies a design science paradigm to develop an ICT artefact that should support the VPL. By reflecting on the design process, the authors introduce a theoretical bridge between the O*NET and the EQF. Thereby the authors aim to account for some of the mentioned shortcomings of the EQF.
Rasha Ismail, Fadi Safieddine and Ashraf Jaradat
Abstract
Purpose
The setting up of e-universities has been slow-going. Much of this slow progress has been attributed to poor business models, branding, disruptive technologies, a lack of organisational structure that accommodates such challenges, and failure to integrate a blended approach. One of the many stumbling blocks is the handling of the evaluation process. E-university models do not provide much automation compared to the original brick-and-mortar classroom model of delivery. The underlying technologies may not have been supportive; however, the conditions are changing, and more evaluation tools are becoming available to academics. The paper aims to discuss these issues.
Design/methodology/approach
This paper identifies the extent of current online evaluation processes. In this process, the team reviews the case study of a UK e-university using an Adobe Connect learning model that mirrors much of the physical processes as well as online exams and evaluation tools. Using the Riva model, the paper compares the physical with the online evaluation processes for e-universities to identify differences between these processes and to evaluate the benefits of e-learning. As a result, the models can help us to identify the processes where improvements can take place through automation and to evaluate the impact of this change.
Findings
The paper concludes that this process can be significantly shortened and provide a fairer outcome, but some challenges remain for e-university processes to overcome.
Originality/value
This paper examines the vital quality assurance processes in academia as more universities move towards process automation, blended or e-university business models. Using the case study of Arden University online distance learning, the paper demonstrates, through modelling and analysis that the process of online automation of the evaluation process is achieved with significant efficiency.
Ayaka Noda, Angela Yung Chi Hou, Susumu Shibui and Hua-Chi Chou
Abstract
Purpose
The purpose of this paper is to examine how the Japanese and Taiwanese national quality assurance (QA) agencies, National Institution for Academic Degrees and Quality Enhancement (NIAD-QE) and Higher Education Evaluation and Accreditation Council of Taiwan (HEEACT), transform their respective frameworks in response to social demands, and analyze and compare the respective approaches for the key concepts of autonomy, accountability, improvement and transparency.
Design/methodology/approach
Using a qualitative document analysis approach, this paper initially examines the higher education system, major policies and QA developments, after which the methods associated with the QA restructuring transformations are outlined in terms of motivations, expectations and challenges. Finally, the NIAD-QE and HEEACT evaluation policies and frameworks are compared to assess how each has prepared to respond to emerging challenges.
Findings
During the QA framework restructuring, both the NIAD-QE and HEEACT struggled to achieve autonomy, accountability, improvements and transparency. While the new internal Japanese QA policy is assured through the external QA, the Taiwanese internal QA, which has a self-accreditation policy, is internally embedded with university autonomy emphasized. The QA policies in both the NIAD-QE and HEEACT have moved from general compliance to overall improvement, and both emphasize that accountability should be achieved through improvements. Finally, both agencies sought transparency through the disclosure of the QA process and/or results to the public and the enhancement of public communication.
Originality/value
This study gives valuable insights into the QA framework in Asian higher education institutions and how QA has been transformed to respond to social needs.
Göran Finnveden, Eva Friman, Anna Mogren, Henrietta Palmer, Per Sund, Göran Carstedt, Sofia Lundberg, Barbro Robertsson, Håkan Rodhe and Linn Svärd
Abstract
Purpose
Since 2006, higher education institutions (HEIs) in Sweden should, according to the Higher Education Act, promote sustainable development (SD). In 2016, the Swedish Government asked the Swedish higher education authority to evaluate how this work is proceeding. The authority chose to focus on education. This paper aims to report on this evaluation.
Design/methodology/approach
All 47 HEIs in Sweden were asked to write a self-evaluation report based on certain evaluation criteria. A panel was appointed consisting of academics and representatives of students and working life. The panel wrote an evaluation of each HEI and a report on general findings and recommendations, and gave an overall judgement of each HEI in one of two classes: the HEI has well-developed processes for integration of SD in education, or the HEI needs to develop its processes.
Findings
Overall, a mixed picture developed. Most HEIs could give examples of programmes or courses where SD was integrated. However, less than half of the HEIs had overarching goals for the integration of SD in education or a systematic follow-up of these goals. Even fewer worked specifically with pedagogy and didactics, teaching and learning methods and environments, sustainability competences or other characteristics of education for SD. Overall, only 12 out of 47 received the higher judgement.
Originality/value
This is a unique study in which all HEIs in a country are evaluated. This provides unique possibilities for identifying success factors and barriers. The importance of the leadership of the HEIs became clear.
Javier Mula-Falcón and Katia Caballero
Abstract
Purpose
Improving and assuring the quality of higher education has become a key element of policy agendas worldwide. To this end, a complete accountability system has been developed through various evaluation procedures. Specifically, this study analyzes the perceptions of university teaching staff on the impact of performance appraisal systems on their professional activity, health and personal lives.
Design/methodology/approach
The study adopted a nonexperimental descriptive and causal-comparative design using a questionnaire that was completed by a sample of 2,183 Spanish teachers. The data obtained were analyzed using descriptive statistics and comparisons of differences.
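The "comparisons of differences" between groups (e.g. by gender or professional category) can be sketched with a two-sample comparison. The paper does not specify which test statistics were used; the sketch below uses Welch's t statistic, and the scores and group labels are invented for illustration.

```python
# Illustrative group comparison using Welch's t statistic (unequal
# variances). The data are invented; this is not the study's analysis.
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)  # sample variances (n - 1 denominator)
    return (mean(a) - mean(b)) / sqrt(va / na + vb / nb)

# Hypothetical perceived-stress scores (higher = more stress)
group_women = [4.1, 4.5, 4.0, 4.7, 4.3]
group_men = [3.6, 3.9, 3.5, 4.0, 3.7]
t = welch_t(group_women, group_men)
print(f"Welch's t = {t:.2f}")  # positive t: first group reports higher stress
```

A large positive t here would correspond to the reported finding that women expressed particularly strong views; in practice the statistic would be compared against a t distribution with Welch-Satterthwaite degrees of freedom to obtain a p-value.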
Findings
The results show that, according to teachers, the evaluation criteria undermine the quality of their work by encouraging them to neglect teaching, increase scientific production and engage in unethical research practices. Their views also emphasize the social and health-related consequences of an increasingly competitive work climate, including increased stress levels. Finally, significant differences are observed regarding gender, professional category and academic discipline, with women, junior faculty and social sciences teachers expressing particularly strong views.
Originality/value
The originality of this study lies in the application of a method that contributes to the international debate through a national perspective (Spain) that has so far received little attention.
Abstract
Purpose
Saudi universities have incorporated capstone projects in the final year of undergraduate study. Although universities are following recommendations of the National Commission for Academic Accreditation and Assessment (NCAAA) and the Accreditation Board for Engineering and Technology (ABET), no detailed guidelines for the management and assessment of capstone projects are provided by these accreditation bodies. Variation in the management and assessment practices of capstone project courses makes it challenging to analyse students' capabilities to align with industry demands and realize Vision 2030. This study investigates the current practices for structure definition, management and assessment criteria used for capstone project courses at the undergraduate level in information technology (IT) programs at Saudi universities.
Design/methodology/approach
A web-based questionnaire is administered, using a web service commonly used for questionnaires and polls, to investigate the structure, management and assessment of capstone projects at the undergraduate level in software engineering, computer science and information technology (SECSIT) programs. In total, 42 faculty members (with experience of managing/advising capstone projects ranging from 1 to more than 10 years) from 22 Saudi universities (out of more than 30 universities offering SECSIT undergraduate programs) participated in the study.
Findings
The authors have identified that Saudi universities are facing challenges in the utilized process model, the distribution of work and marks, the knowledge-sharing approach and the assessment scheme. To cope with these challenges, the authors recommend the use of an incremental development process, the utilization of a project-driven approach, the development of a national-level digital archive and the implementation of a homogeneous assessment scheme.
Social implications
To contribute to national growth and to fulfill market demand, universities are recommended to align capstone project courses with the latest technology trends. Universities must collaborate with industry and update the structure and requirements of capstone project courses accordingly. This will further help to bridge the gap between industry and academia and will create a win–win scenario for all stakeholders.
Originality/value
Although universities are committed to increasing the innovative capacities of their students to enable them to contribute to economic and social growth, it is still hard to track knowledge creation and sharing at the national level. Variations in the management and assessment practices for capstone projects further intensify this challenge. Hence, there is a need for smart assessment and management of the software capstone projects being developed in Saudi universities. By incorporating the latest technologies, such unified management can facilitate the discovery of trends and patterns related to domain and complexity.