Innovative assignment rubrics for ODL courses: design, implementation and impact

Rames Mariapan (Open University Malaysia, Kuala Lumpur, Malaysia)

Asian Association of Open Universities Journal

ISSN: 2414-6994

Article publication date: 22 June 2018

Issue publication date: 12 April 2019


Abstract

Purpose

The purpose of this paper is to propose a new standard of assignment rubrics that minimizes varied interpretations and confusing expectations of the assignment outcome among all stakeholders, and to enhance assignment rubrics to function not only as a grading tool but also as a guiding tool for self-managed learning among open and distance learning (ODL) learners.

Design/methodology/approach

The paper looks into problems and issues related to assignment rubrics, such as varied interpretations, confusing expectations and the need for appropriate descriptions in the rubrics to reflect the proper learning outcome among the assignment stakeholders. To address these issues, the paper explores the new and improved requirements imposed to support the new assignment rubrics for courses in the university via a self-guided manual known as the Rubrics Formulation Guide.

Findings

Based on feedback received from the university's lecturers, who also functioned as moderators, the time taken to moderate the assignment rubrics was drastically reduced. In terms of grading, the clarity of assignment performance expectations among the learners improved: compared to the previous semester, there was a significant drop in applications for the remarking of assignments among May 2014 semester learners.

Practical implications

The paper discusses the implications of developing innovative rubrics that foster a common understanding and consistent expectations of what the final outcome of an assignment should be.

Originality/value

This paper expands the potential of assignment rubrics to both guide and grade.


Citation

Mariapan, R. (2018), "Innovative assignment rubrics for ODL courses: design, implementation and impact", Asian Association of Open Universities Journal, Vol. 13 No. 2, pp. 117-129. https://doi.org/10.1108/AAOUJ-01-2018-0009

Publisher

Emerald Publishing Limited

Copyright © 2018, Rames Mariapan

License

Published in the Asian Association of Open Universities Journal. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial & non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

Assessment is the process of measuring and evaluating a learner's academic achievement in a particular course through continuous assessments and examination (Brown and Knight, 1994). Assessment has always been an integral part of the learning process, whether in traditional, conventional learning or in open and distance learning (ODL). Most ODL institutions use a combination of formative evaluation via written assignments and summative evaluation via a term-end examination (Chaudhary and Dey, 2013). Formative assessment in the form of written assignments or essays represents a very flexible test format for assessing ODL learners and enables stakeholders to monitor whether outcome-oriented or outcome-based learning took place (Chaudhary and Dey, 2013; Oosterhof et al., 2008). Four main features of formative assessment facilitate this: embedding of assessment activities within teaching and learning processes; diversity of ongoing and authentic assessment; ongoing formative feedback; and clarity of expected outcomes through the assessment rubrics (Gikandi et al., 2011). By simple definition, assessment rubrics comprise a set of criteria and weightings used for grading (Ali and Fadzil, 2013). A common definition of a rubric is a document that articulates the expectations for an assignment by listing the criteria, or what counts, and describing levels of quality from excellent to poor (Reddy and Andrade, 2010). There are two general methods of rubrics design: holistic scoring rubrics and analytic scoring rubrics.
Nitko and Brookhart (2011) indicated that holistic scoring rubrics "rate or score the product or process as a whole without scoring parts or components separately," whereas analytic scoring rubrics "rate or score separate parts or characteristics of the product or process first, and then sum up these part scores to obtain a total score." Much has been written on the need for rubrics in ODL assignments, and research has indicated that the highest percentage of students strongly agreed that clear guidelines for course assignments and rubrics are important contributors to their learning satisfaction (Lee, 2014). Other than saving grading time and providing meaningful feedback, rubrics, if used properly, can promote better learning discussion experiences and encourage learners to fulfill two main attributes of ODL: becoming self-motivated and independent learners (Stevens and Levi, 2013).
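The distinction between the two scoring methods can be made concrete with a small sketch. The criteria, weights and marks below are hypothetical illustrations, not figures from the paper:

```python
# Sketch of the two scoring methods (hypothetical criteria and weights,
# not taken from the paper).

def analytic_score(part_scores, weights):
    """Analytic: score each criterion separately, then sum the weighted parts."""
    return sum(score * weight for score, weight in zip(part_scores, weights))

def holistic_score(overall_level, max_marks, max_level=4):
    """Holistic: one overall judgement, scaled to the assignment's maximum marks."""
    return overall_level / max_level * max_marks

# Three criteria scored on a 0-4 scale, weighted 2, 3 and 5:
print(analytic_score([3, 4, 2], [2, 3, 5]))  # 3*2 + 4*3 + 2*5 = 28
print(holistic_score(3, 40))                 # 3/4 of 40 marks = 30.0
```

The analytic method preserves per-criterion information (which part of the work was weak), which is why it supports the guiding function discussed later in the paper.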

2. Problem

Rubrics were introduced in Open University Malaysia's (OUM) course assignments to guide learners on how to do their assignments and to make grading simpler and more structured for graders or examiners (Ali and Fadzil, 2013). Hence, from the May 2010 semester, all subject-matter experts were given instructions and a guide for developing assignment questions accompanied by rubrics. As an ODL institution, OUM hires only a limited number of full-time academics; as such, 80 percent of these subject-matter experts are external academics from other higher learning institutions, most of which have a traditional classroom setting. Although the intention to use rubrics was noble, OUM moderators had to spend time "fixing" the rubrics received from the external subject-matter experts for some of the courses, as the rubrics they developed were open to varied interpretations and confusing expectations of the assignment outcome. Most of the external subject-matter experts developed their rubrics based on a traditional classroom setting; however, simply transferring assignment rubrics from the traditional classroom to the ODL environment is not always the best decision (Hemby et al., 2006). Jonsson (2014) observed that "rubrics can facilitate improvements if combined with other meta-cognitive learning activities" and added that "there is limited evidence supporting the claim that the use of rubrics by itself for self-learning leads to improvements in performance." Rubrics normally contain criteria with descriptions directed at learners; however, rubrics can also contain descriptions directed at other stakeholders such as graders, "for guidance and comment banks associated with the criteria" (Heinrich et al., 2012).
Rubrics can both teach and evaluate, but many of the rubrics received from the subject-matter experts emphasized only evaluation, indicating which level learners fall into, but did not show or teach learners how to achieve the highest level of performance (Hemby et al., 2006; Reddy and Andrade, 2010).

3. Objective

The objectives of this paper are to:

  • propose a new standard of assignment rubrics to minimize varied interpretations and confusing expectations of the assignment outcome among all stakeholders;

  • enhance the assignment rubrics to function not only as a grading tool but also as an assignment guiding tool for self-managed learning among ODL learners; and

  • implement the above innovations on the assignment rubrics.

4. Justification for innovation

Assignment rubrics come in various forms, some with complex descriptions of what learners need to achieve and some with simple, specific descriptions (Stevens and Levi, 2013). In a traditional learning setting, the lecturers and tutors, the subject-matter experts who crafted the rubrics and will eventually mark the assignments, have the opportunity to physically explain and deliberate on these descriptions in the classroom so that learners understand what is expected of them (Hemby et al., 2006). However, in an ODL setting, learners do not have the same privilege, as they have limited classroom time or rely entirely on online learning, where discussion of assignment expectations and rubrics might be limited or might not be as comprehensive and interactive as in a classroom setting. Another challenge is that in an ODL setting, assignment rubrics go through various processes, from development and moderation to teaching, learning and grading, by many different stakeholders, which exposes them to varied interpretations (Chaudhary and Dey, 2013). Thus, the ultimate aim of this paper is to create assignment rubrics suitable for ODL courses that enhance common understanding of the assignment outcome. To support this "common understanding" functionality, the rubrics have to focus not only on grading but also on guiding, which can be done by innovatively enhancing the criteria of the rubrics as well as standardizing the level-of-performance descriptions of the rubrics. These innovative assignment rubrics can be useful for various ODL institutions in supporting their learners in formative assessment, especially in assignment writing.

5. Design

5.1 Rubrics in OUM

OUM has used assignments as a component of continuous assessment since its establishment. As a move to improve assignment submission and grading, OUM incorporated assignment rubrics into the assignment questions starting from the May 2010 semester (Ali and Fadzil, 2013). The move also coincided with the implementation of the online Assignment Submission System and Assignment Grading System. The Assignment Submission System enables learners to view or download their assignment questions together with the rubrics from OUM's online learning management platform. Learners submit the soft copy of their completed assignment via this system, whereas the Assignment Grading System allows appointed graders to view learners' assignments and mark them according to the rubrics embedded in the system. The general rubrics structure used by OUM is shown in Figure 1.

Prior to publishing the assignment questions and rubrics online, a few processes take place each semester. First, the assignment questions and rubrics are developed by the appointed subject-matter experts a few months before the start of the semester. The assignment questions and rubrics then go through a moderation process conducted by OUM's internal academics or lecturers. It is during this process that the assignment questions and rubrics are edited, corrected and modified, if necessary, to be error free and elicit a clear understanding of the expectations. Once moderated, the assignment questions and rubrics are published via OUM's online learning management platform to be used by face-to-face (classroom) tutors, online tutors and learners for teaching and learning. Finally, OUM's appointed graders mark the assignments submitted by the learners. Figure 2 depicts the various stakeholders in the various processes or stages involving assignments in OUM.

Although the subject-matter experts were given Assignment Kits containing guidelines on how to develop the rubrics, the rubrics that some of them developed can be considered imperfect, being either too simple or missing crucial details. Such rubrics may be acceptable for publication and distribution in a traditional learning setting, as the subject-matter experts, who in turn become the classroom lecturers or tutors, have the luxury of explaining the intended meaning and expectations of the rubrics clearly to their learners. However, as mentioned earlier, this may not happen in an ODL setting, where the face-to-face tutors and online tutors, who comprise many different individuals, teach varying numbers of learners spread across the region. With such imperfect rubrics, the graders are at a loss as to how to mark the learners' assignments, as the descriptions in the rubrics do not reflect the learners' work. Thus, the moderators play an important role in modifying the assignment rubrics appropriately in order to minimize varied interpretations and confusing expectations among the various stakeholders. This modification process takes a lot of time, and the main issues encountered during moderation are discussed below.

5.2 Issues

5.2.1 Number of points needed

Some subject-matter experts developed assignment rubrics whose Levels of Performance (LoP) indicate that learners need to provide a certain number of points and elaborate or explain those points, as given in Figure 3.

The description of the number of points given in the LoP columns confuses the graders. The descriptions of performance for Levels 1, 2 and 3 are shown below:

  • Level 1: provides two Competitive Forces (CF) models with poor explanation.

  • Level 2: provides three CF models with fair explanation.

  • Level 3: provides four CF models with good explanation.

The graders were undecided or had difficulty grading if the learners provided four CF models but only two of their CF models had good explanations. Such learners may not fulfill the Level 3 requirement fully, but their effort is also not reflective of Levels 1 and 2; thus, the graders were unable to tick a suitable level for the learners.

5.2.2 A lot of descriptions in the LoP columns

Some of the assignment rubrics' LoPs developed by the subject-matter experts had many descriptions of performance that the learners needed to meet for a particular criterion, as shown in Figure 4.

Such descriptions of performance also posed problems for the graders. In the example given in Figure 4, Level 3 requires:

  • Description 1: two types of data storage are stated.

  • Description 2: only one type of the data storage is described.

  • Description 3: example of the data storage is given.

Thus, only if the learners meet all three descriptions of performance do they fulfill Level 3 of the criterion. A problem arises if the learners meet only two of the descriptions and do not manage to meet the third. In this case, the graders were again unable to tick a suitable level for the learners, as the other LoPs do not match the learners' efforts.

5.2.3 Simple criteria and LoP

Most subject-matter experts developed assignment rubrics whose criteria had only a simple description of what was required from the learners for the particular section. The descriptions of how to perform or write the essay section effectively were thus supposed to be indicated in the LoP columns. However, most subject-matter experts also developed the LoPs with simple descriptions of performance. This turns the assignment rubrics into a very weak or simple guide for writing the assignment, without thoroughly informing the learners of what they need to demonstrate in each section. Some subject-matter experts also wrote the description of performance for each LoP (other than Level 4, the Excellent Level) by guessing how the learners would fail to score on the particular criterion of the section. Thus, the moderators had to fix each LoP to reflect the degree to which the learners were able to demonstrate the learning outcomes for the particular section. Looking into this scenario, it was decided that the moderators were spending too much time fixing the LoP columns, which had previously suggested to the learners how not to score excellent-level marks (other than Level 4 or the Excellent Level), and the learners too were spending too much time working out how best to score on the particular criterion by analyzing the Level 4 or Excellent Level of the LoP, as shown in Figure 5.

5.3 New design of assignment rubrics

5.3.1 New approaches taken

After going through the issues, such as varied interpretations, confusing expectations and the need for appropriate criteria and LoPs to reflect the proper learning outcome among the assignment stakeholders, it was decided that a new standard of assignment rubrics had to be developed. The decision came after several meetings among OUM moderators (internal lecturers) on the issues they faced with assignment rubrics and measures to improve them. Starting from the May 2014 semester, the following requirements were imposed to support the new assignment rubrics for non-programming information technology (IT) courses at OUM:

  • all assignments should use only the analytic scoring rubric, as it is more suitable for non-programming IT courses, allowing evaluation of specific dimensions and elements of learners' responses for each criterion (Kane et al., 1997; Nitko and Brookhart, 2011);

  • the description of the criteria in the assignment rubrics has to be strengthened and should be detailed enough to guide the learners in completing the assignment (Wolf et al., 2008); and

  • since the criteria will be detailed enough, the words in the LoP columns will be standardized and customized to indicate only how near or far the learners are from achieving the requirements of the rubrics according to the criteria.
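The three requirements above can be sketched as a small data structure. The field names, scale labels and weight below are illustrative assumptions, not OUM's actual system:

```python
# Sketch of the new rubric structure (field names and scale labels are
# illustrative, not OUM's actual system): the detailed, checklist-like
# criterion description carries all the guidance, while one standardized
# scale is shared by every criterion's LoP columns.

STANDARD_LOP_SCALE = ["Low", "Fair", "Above average", "Excellent"]  # Levels 1-4

rubric = [
    {
        "criterion": ("Discuss five Competitive Forces models in Company ABC, "
                      "elaborating the strategy used and a relevant example "
                      "for each model."),
        "weight": 5,  # hypothetical marks multiplier for this section
    },
    # ... further criteria, each with its own detailed description
]

def section_marks(level, weight):
    """Marks for one section: the ticked level times the section's weight."""
    return level * weight

# A grader ticks "Above average" (Level 3) on the weight-5 criterion:
print(section_marks(3, rubric[0]["weight"]))  # 15
```

Because the guidance lives entirely in the criterion text, the LoP columns reduce to a shared scale, which is what makes moderation and grading simpler.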

5.3.2 Analytic scoring rubric

Previously, most of the subject-matter experts developed analytic-style scoring rubrics for the non-programming IT courses' assignments; however, a few holistic scoring rubrics were also submitted. Starting from the May 2014 semester, all subject-matter experts were requested to submit their assignment questions accompanied by analytic scoring rubrics only. Based on past research, analytic scoring rubrics are more suitable for the non-programming IT courses' assignments, as this type of rubric brings some objectivity to the evaluation of learners' performance on specific sections and learning outcomes, helps clarify assignments and avoid confusion, and provides a common ground for tutors and learners in understanding the assignments (Nitko and Brookhart, 2011; Gikandi et al., 2011; Stevens and Levi, 2013). Analytic scoring rubrics are also considered the most suitable to support the next initiative, the strengthening of the criteria description.

5.3.3 Strengthening of the criteria description

One of the most significant changes to the rubrics was the strengthening of the criteria description, in which the criteria have explicit and clear descriptions of what is expected for the particular section of the assignment rather than simple descriptions (Kearns, 2012). To do this, the criteria section contains detailed descriptions (like a checklist) of what learners are supposed to do for that section of the assignment (teach) and what they need to fulfill (taken from Level 4 or the Excellent Level). In other words, the criteria description is enhanced and merges in descriptions taken from the Excellent Level, as shown in Figure 6.

The new criteria will look like Figure 7. All the requirements have to be clearly stated in the criteria column; thus, learners do not need to read any other columns for guidance. The other columns (Low to Excellent) have standardized descriptions and only show where the learners' marks fall. With these changes, the moderators are no longer expected to spend time moderating the Low to Excellent Level columns extensively; instead, they just have to spend time on the criteria column during moderation and make changes or corrections if necessary.

5.3.4 Standardization of LoP columns

One of the problems with the previous semesters' rubrics was that the LoPs submitted by some of the subject-matter experts did not have consistent leveling or did not show appropriate incremental improvement between LoPs, as the subject-matter experts were too focused on what the learners could have done wrong for the particular section of their assignment in the LoPs (other than the Excellent Level). In addition, the previous assignment rubrics' LoPs had descriptions that indicated a certain number of points or many descriptions of performance that the learners needed to meet, which made it difficult for the graders to mark learners' assignments (if the learners only partially met the LoPs). Thus, the new strategy was to standardize the descriptions in the LoPs, which can now easily be done since the criteria section is explicit enough, as shown in Figure 7. To avoid confusion and overlap between the descriptions in the criteria and LoP columns during the new assignment rubrics' development, subject-matter experts were informed that the descriptions in the criteria section are specific pointers to what is required, whereas the words in the LoP columns are more like scales (poorly, minimally, good, in-depth, etc.). The descriptions in the criteria section can use words such as suitable, appropriate and relevant, but should avoid words that represent scales, such as great, intricate, deep, good and excellent.

As shown in Figure 8, with standard descriptions in the LoP columns, the previous problem of the grader being undecided on which marks or rubrics to tick for learners who only partially fulfill a certain LoP no longer arises.

As per the description of the criteria section in Figure 8, learners have to discuss five CF models in Company ABC by elaborating the strategies used and a relevant example for each model. Since the new LoP columns no longer contain words such as "Provides 4 CF models with good explanation" or "Provides 5 CF models with excellent explanation," as shown in Figure 3 previously, the grader is no longer bound to mark or tick an LoP column that does not represent the learner's effort for the particular assignment section. If a learner provided five CF models but the elaboration was not long enough, the marks could fall in the Above Average column. There were concerns that the standard descriptions in the LoP columns do not provide adequate formative feedback to the learners on their writing performance for the particular assignment section, but this is not an issue in OUM, as the Assignment Grading System allows the graders to provide specific feedback in the overall comments section, as shown in Figure 9.

6. Implementation

The new standard of assignment rubrics was suggested based on several meetings in which OUM moderators (internal lecturers) discussed their experiences and the challenges endured in fixing the previous rubrics submitted by the subject-matter experts. In the IT cluster, there are two types of courses: programming (inclusive of mathematics) and non-programming courses. The programming courses' assignments, such as C and Java, have structured and specific answers; thus, the previous rubrics did not pose any problem of misinterpretation. However, for the non-programming courses' assignments, which have a mixture of restricted- and extended-response sections, the new standard of assignment rubrics was very much needed.

To improve the understanding and clarity of the rubrics, it was decided that improvement efforts should start from the beginning, that is, during the development of the assignment question and rubrics by the subject-matter experts. As such, a new guideline for rubrics was introduced in the May 2014 semester and included in the Assignment Kits. Using the same concept as the modules or textbooks created by OUM for its ODL learners, which support self-managed and self-regulated learning, the new guideline was developed in the form of a self-guided manual to train subject-matter experts in how to prepare "good rubrics" suitable for ODL learners, as well as to understand why such rubrics are important and how they differ from other conventional rubrics. The self-guided manual is aptly named the Rubrics Formulation Guide (RFG), which the subject-matter experts can read at their own pace before working on creating the assignment question and rubrics.

The RFG was created not only to train the subject-matter experts in what kind of rubrics to produce but also to explain why such rubrics are needed for ODL learners, as well as to incorporate answers to questions that might predictably arise in the subject-matter experts' minds when reading the guideline. The contents of the guideline were decided after a few brainstorming sessions among the OUM moderators, during which the predictive questions were also discussed. The RFG content is divided into: how to develop the new rubrics' criteria and LoP columns; assigning appropriate weightage and maximum marks for the rubrics; frequently asked questions (FAQs); examples of assignment questions with rubrics; and rubrics templates for subject-matter experts for rubrics development. The Rubrics Templates consist of sample descriptions (in English and Bahasa Malaysia) for the criteria and LoP columns for Introduction and Conclusion sections, sections that need illustration, and sections that require learners to provide discussions, recommendations, justifications, etc. The Rubrics Templates section (only) can be accessed by the public at https://drive.google.com/open?id=0B6cnrxakGFneSDEzRXd5by1uMHc.

Past research has also emphasized that the clarity and appropriateness of language are central concerns among the stakeholders of assignment rubrics (Reddy and Andrade, 2010). As such, the RFG also included new instructions requesting subject-matter experts to use the past tense (example: was, were, provided, discussed, etc.) or past future tense (example: there is still room to improve, etc.), since the rubrics will also function as a grading tool showing how the learners have fared in their assignment work.

7. Impact

The new guideline for rubrics, the RFG, included in the Assignment Kits and distributed early in the May 2014 semester, received positive responses from the subject-matter experts. Although the subject-matter experts claimed that developing the assignment rubrics according to the new requirements took longer than for the previous semesters' rubrics, they understood and appreciated the improvements made to the rubrics. However, some of the subject-matter experts did raise two main concerns: the detailed criteria in the rubrics might function as an answer scheme and expose the answers to the learners, and the standardized version of the LoPs might not provide the comprehensive feedback needed by the learners. The subject-matter experts were assured that these concerns had been taken into consideration earlier, during the development of the RFG. For the first concern, about the revelation of answers through detailed criteria, the RFG's FAQ section notes that the details in the criteria only function as a guideline or focused scope for the learners to follow; without these details, the learners might provide inaccurate answers or explanations that are off-target. As for the second concern, that learners might not get comprehensive feedback since the descriptions in the LoP columns are quite standard and general, the subject-matter experts were briefed that OUM's Assignment Grading System has an Overall Comment section in which the graders can provide customized and specific feedback according to the learner's performance for each assignment section.

After the implementation of the new rubrics in the May 2014 semester, the moderators gave encouraging responses. Based on the feedback received (during an academic meeting) from ten of OUM's internal lecturers in the IT cluster, who also functioned as moderators, the time taken to moderate the assignment rubrics was drastically reduced. On average, it previously took about 3 h to moderate each assignment question and rubric; after the implementation of the new rubrics, the moderation took only 1 h on average. This is due to the improved rubrics design, whereby the moderators no longer spend much time moderating the LoP columns, which have standardized descriptions; the focus now is only on the criteria column, which needs to be checked for whether the description clearly exhibits the learning outcome for the particular section and contains clear details that guide the learners on how to achieve it.

However, the ten moderators, who also teach at OUM's Sri Rampai Learning Center, claimed that the time taken to discuss how to write the assignment and explain each section of the assignment rubrics in the classroom and online forum did not change significantly compared to previous semesters. This was also echoed by two external part-time tutors who teach at the Sri Rampai Learning Center when they were asked, during a tutor meet-and-greet session, about the impact of the new assignment rubrics. Upon deliberation, it was found that the learners were always eager to spend time during the classroom sessions getting information on how to write the assignment and score well, regardless of the type of assignment question in any semester; however, in the May 2014 semester, the classroom discussion did not require much time from the tutors to explain or resolve misinterpretations, confusion or lack of detail in the rubrics, compared to previous semesters.

In terms of grading, the clarity of the assignment performance expectations among the learners showed improvement. Compared to the previous semester, there was a significant drop in applications for the remarking of assignments among May 2014 semester learners. Table I shows the total number of applications for remarking made by learners for the non-programming courses' assignments from the January 2014 semester to the September 2015 semester.

In the January 2014 semester, the semester before the implementation of the new rubrics, a total of 48 applications were submitted by learners who were dissatisfied with their assignment results. For the May 2014 semester, the semester in which the new rubrics were implemented, the applications for remarking dropped to 35, a 27 percent drop from the previous semester, and the following September 2014 semester showed a larger drop of 40 percent (21 applications). This shows that the gap between expected performance and actual performance among learners has been reduced, as the rubrics' criteria provide clearer guidance and the LoP columns no longer have simple or confusing descriptions.
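The quoted percentage drops can be checked with a quick calculation; each drop is measured relative to the immediately preceding semester:

```python
# Verify the semester-on-semester drops in remarking applications:
# 48 (Jan 2014) -> 35 (May 2014) -> 21 (Sep 2014).

def pct_drop(previous, current):
    """Percentage decrease from one semester's count to the next."""
    return (previous - current) / previous * 100

print(round(pct_drop(48, 35)))  # Jan 2014 -> May 2014: 27 (percent)
print(round(pct_drop(35, 21)))  # May 2014 -> Sep 2014: 40 (percent)
```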

8. Conclusion

Formative assessment such as the written assignment is an integral part of ODL assessment. In the ODL setting, the assignment is well positioned to support ODL learners, as it allows them to apply their experience and knowledge in written form. It is of utmost pertinence that the assignment itself be reliable and consistent in relaying the targeted learning outcome, so that ODL learners can gain the most benefit from this assessment. Thus, ODL learners need formative assessment instruments that are more objective in nature and outcome-oriented or outcome-based. Rubrics fall nicely into place as such an assessment instrument. However, merely providing rubrics will not take ODL teaching and learning participants far. Rubrics themselves need to be "rich" in order to guide the learners objectively and independently, while at the same time reflecting a representative grade. In other words, rubrics should guide and grade. The innovative rubrics presented here support both, in addition to improving common understanding, promoting consistent expectations of the learning outcome and minimizing varied interpretations across the assignment stakeholders.

As outcome-based assessment is now very popular among ODL institutions, it is apt to consider the widespread use of rubrics as a one-stop assessment platform for guiding and grading. Assessment rubrics, which have long been used to make the teaching-learning process effective and to measure learning outcomes, would be one of the most effective ways to implement outcome-based formative assessment. It is hoped that this paper will give rise to more innovative rubrics that can support ODL assessment in general and outcome-based assessment in particular. In future, the use of these innovative rubrics can be expanded beyond IT courses to other fields of study, in order to support ODL institutions in measuring the achievement of course learning outcomes via formative assessment such as assignment writing.

Figures

Figure 1. Structure of rubrics

Figure 2. Stakeholders involved in various assignment stages in OUM

Figure 3. Number of points needed to be elaborated

Figure 4. More than one description of performance in LoP columns

Figure 5. Various LoPs referred to in assignment rubrics

Figure 6. Merger of descriptions from excellent level column to criteria column

Figure 7. Example of new criteria with detailed description

Figure 8. Standard descriptions in the LoPs

Figure 9. OUM assignment grading system

Table I. Total number of remarking applications for non-programming courses

Corresponding author

Rames Mariapan can be contacted at: mrames@oum.edu.my
