E-learning for introductory Computer Science concept on recursion applying two types of feedback methods in the learning assessment

Lynnard Mondigo (Department of Computer Science, University of the Philippines Cebu, Cebu, Philippines)
Demelo Madrazo Lao (Department of Computer Science, University of the Philippines Cebu, Cebu, Philippines)

Asian Association of Open Universities Journal

ISSN: 2414-6994

Article publication date: 6 November 2017


Abstract

Purpose

The purpose of this paper is to develop a web-based interactive learning object (ILO) for the introductory Computer Science (CS) concept of recursion and to compare two feedback methods in the learning assessment part.

Design/methodology/approach

A test-driven development (TDD) approach was used to develop the ILO. The authors adapted the Multimedia Educational Resource for Learning and Online Teaching (MERLOT) standard instrument to evaluate the ILO's effectiveness as an e-learning tool. Three respondents, drawn randomly from a list of pre-identified prospective evaluators, served as raters for MERLOT, while 32 student-respondents, first-year Math and CS undergraduate majors, were randomly assigned to one of the two ILO versions, each implementing one of the two feedback methods.

Findings

The ILO obtained mean ratings above 4 (on a scale of 1-5) in three MERLOT criteria, namely, potential effectiveness as a teaching tool, ease of use, and quality of content, the last being rated highest (mean=4.40, SD=0.53). The study also revealed that immediate feedback increases retention, while delayed feedback improves the generation of new knowledge. Respondents who viewed the ILO implementing immediate feedback in their first session had statistically significantly higher scores (mean=8.25, SD=0.80) than those who viewed the version with delayed feedback (mean=7.63, SD=0.89). In the second session, the same pattern was observed, although with higher mean scores. These results give evidence that the developed ILO met the standards for e-learning material and showed evidence of its effectiveness, preferably with immediate feedback implemented.

Research limitations/implications

Although the developed ILO can now be used in school as supplementary learning material for teaching the concept of recursion in an introductory CS subject, pilot testing of the web-based ILO with a larger sample of respondents can be pursued to validate its effectiveness as educational material for online distance learning. Furthermore, in designing and creating an ILO, the provision of feedback during the assessment stage is necessary to effect learning.

Originality/value

The study was the first to develop an ILO for the CS topic of recursion. The paper also compared which of two known feedback methods is better to implement in an ILO.


Citation

Mondigo, L. and Lao, D.M. (2017), "E-learning for introductory Computer Science concept on recursion applying two types of feedback methods in the learning assessment", Asian Association of Open Universities Journal, Vol. 12 No. 2, pp. 218-229. https://doi.org/10.1108/AAOUJ-02-2017-0019

Publisher: Emerald Publishing Limited

Copyright © 2017, Lynnard Mondigo and Demelo Madrazo Lao

License

Published in the Asian Association of Open Universities Journal. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

Students nowadays belong to the generation we call millennials, also referred to as "digital natives" (Prensky, 2001), being exposed to various multimedia resources and having experienced an unprecedented level of access to information brought about by the digital revolution. Digital natives, also known as "Generation Y" (Tkalac Verčič and Verčič, 2013), are those born in the 1980s and after, during the introduction of digital technologies. Tkalac Verčič and Verčič found in their study that digital natives prefer to use digital media in their personal lives, although this is not necessarily observed in their business lives.

A recent study by Akçayır et al. (2016) on levels of technological proficiency among university students revealed that university students actively use computers and the internet. The authors utilized an instrument known as the Digital Native Assessment Scale, developed by another group, to objectively characterize the university student-respondents as "digital natives." Their findings also showed no significant differences in university students' perceptions of themselves as "digital natives" attributable to gender or academic discipline.

Some earlier studies on "digital natives" focused on gathering evidence for popular press authors' claims that millennials think and learn differently from any generation that came before them (Thompson, 2013). Other groups studied what engages "digital natives" (Hakkarainen et al., 2015), while another researcher looked into the use of computer games for learning foreign languages among "digital natives" (Butler, 2015).

Since learning can be personalized using digital media, the foregoing research provides impetus to capitalize on this phenomenon among "Generation Y" learners as an alternative to traditional classroom instruction. Although learning efficacy and cognitive dimensions may vary among millennials using information and communication technologies (ICTs) in education (Bellini et al., 2016), there is no denying that their use of the internet is commonplace.

One approach that exploits the above-mentioned phenomena for educational purposes is e-learning. E-learning refers collectively to web-based learning, distributed learning, online learning, computer-assisted instruction, or internet-based learning, with the purpose of enhancing knowledge and performance (Ruiz et al., 2006). Ruiz et al. (2006) also pointed out that one advantage of applying e-learning to the teaching-learning activity is that it is learner-centered: the learner has control over several key aspects of learning, such as the content and the pace.

Specifically, a personalized e-learning environment has recently been tried to improve students' conceptual learning of basic computer programming (Chookaew et al., 2014). Analysis of the data showed that students who learned with the developed e-learning environment could develop an understanding of basic computer programming. Furthermore, the study revealed that students had a positive attitude toward the developed e-learning environment, which fits their personalized learning.

Another approach utilizes artificial intelligence planning techniques to create fully tailored learning routes, as sequences of learning objects (LOs) that fit the pedagogical and student requirements for personalized e-learning (Garrido et al., 2016). The researchers demonstrated with their experiments that scenarios with large courses and high numbers of students can be solved.

A more radical approach for Computer Science (CS) education, introduced only recently, is to design robot-oriented generative learning objects (GLOs), i.e. heterogeneous meta-programs, to teach CS topics such as programming (Štuikys et al., 2016). The novelty of this approach is the seamless integration of feature modeling and meta-programming, two known technologies, in designing robot-oriented GLOs and their supporting tools. While the traditional approach to creating a reusable LO separates the content from the context, the GLO approach focuses on the pedagogical form (or pattern) as the fundamental basis for reuse (www.igi-global.com/dictionary/generative-learning-objects-glos/12040).

The rapid advancement of modern technology, ICTs in particular, has transformed the face of education: digital technologies provide more extensive tools to increase the quality, effectiveness, and accessibility of higher education, including distance learning education (Matei and Vrabie, 2013). Information can now be accessed and obtained from almost anywhere. The internet makes this possible and serves as an alternate avenue for individuals who seek more knowledge, the type we cannot obtain merely from books or in the classroom alone. Students can search for answers to their queries at any time at their own convenience, which paves the way for them to learn at their own pace.

Self-paced learning basically means allowing students to learn the subject matter at their own speed, at their most convenient time, and of their own will. They can also choose to continue studying if they feel they can still absorb more of the knowledge being provided to them. Such learning material comes in various forms, one of which is the interactive learning object (ILO).

Due to their perceived usefulness and proven effectiveness as an educational tool and platform (Mo et al., 2015; Lai et al., 2015), computers are widely used by students to study, especially by those taking computer-related courses. Teachers must take advantage of this phenomenon by adapting their way of teaching to fit the modern trend. One approach is the use of the ILO, a powerful learning material that can serve as a learning tool in the current digital age.

For the above-mentioned reasons, this study aims to develop an ILO for teaching-learning recursion in a fundamental programming course in CS using the C programming language. The ILO must present the topic of recursion in a way that the learner can easily understand. Furthermore, two different types of feedback in the assessment part were compared and evaluated to determine which type yields more positive results.

2. Related works

2.1 ILO

An LO is defined as any entity, whether digital or non-digital, that is used to support learning, education, or training (IEEE Learning Technology Standards Committee, 2002). An ILO, more specifically, is a digital, self-contained, and reusable entity with a clear educational purpose.

While traditional content used to last for a few hours, learning objects have a time span ranging from 2 to 15 minutes (Wilson and Korn, 2007). Users gain knowledge actively by participating in investigative activities in the ILO.

Borrowing the idea of the ILO, an extension of the approach has been implemented in interactive learning environments (ILEs) for teaching social science courses (Ceresia, 2016). The study reported that the ILE and the related inquiry-based instructional approach appear to help students understand fundamental concepts quite easily.

2.2 Use of e-learning environment

The use of e-learning environments for teaching-learning activities abounds in past studies that showed positive results (Connolly et al., 2006) or proved effective (Polsani, 2006; Eikaas et al., 2006). The study of Connolly et al. (2006) used an online game-based approach to teach database design, a core concept in CS. They found that students who had access to the e-learning environment had higher exam scores than those who had none.

However, previous studies have also revealed that many students fail to realize that learning computer programming concepts is a combination of knowledge gaining and problem-solving skills. Instead, students view computer programming as a "purely technical activity" (Bennedsen and Carpersen, 2008; Kazimoglu et al., 2010; Liu et al., 2011). This perspective can be problematic, if left unchecked, since it might produce CS graduates without the proper skill set, specifically the problem-solving skills acquired through computer programming constructs (Kazimoglu et al., 2012).

Aside from CS education, other disciplines have embraced the use of e-learning for both education and training. For engineering education, Banday et al. (2014) presented the level of adoption of ICT and e-learning tools in engineering institutions of the state of Jammu and Kashmir, and subsequently made recommendations on ways to improve e-learning implementations in engineering education. In medicine, the applications of e-learning range from teaching complex biological processes, such as simulating the plasma glucose level after a simulated meal or during diabetes and simulating the ion transport leading to the resting and action potentials in nerves (Christ and Thews, 2016), to training orthopedic surgeons (Tarpada et al., 2016). The use of e-learning has even extended to helping endoscopists improve the diagnosis and detection of gastric cancer at an early stage (Yao et al., 2016).

2.3 Need for feedback

Feedback is defined as "the process of sharing observations, concerns and suggestions between persons or divisions of the organization with an intention of improving both personal and organizational performance" (University of North Texas in partnership with Texas Education Agency, 2008), and its main purpose is to improve performance. Providing feedback to learners of all kinds is a strong strategy that can be applied in any learning situation (University of North Texas in partnership with Texas Education Agency, 2008). The main purpose of providing feedback to students is to increase their knowledge, skills, abilities, and other characteristics, thereby effectively improving their academic performance.

2.3.1 Immediate feedback

A teaching strategy based on feedback given immediately with weekly assessment contributes to a noticeable reduction in the number of dropouts and improves academic results (Ghilay and Ghilay, 2015). The study showed that students' control over the learning process, and especially over the assessment process, gives a greater sense of security and more confidence, because it commits them to decide regularly on aspects related to their own benefit.

2.3.2 Delayed feedback

Delayed feedback is believed to be more effective than other feedback methods due to the delay retention effect (DRE), a phenomenon wherein feedback is delayed for a specific time period to purposely aid the memory retention of learners (Dihoff et al., 2003). In the early 1960s, Brackbill et al. (1963) demonstrated that delaying feedback across brief intervals promoted the retention of meaningful material. Butler et al. (2007) found that delayed feedback led to superior final test performance relative to immediate feedback in a multiple-choice type of test. Furthermore, in a study of memory performance among grade 6 children and adult college students in vocabulary learning, delayed feedback produced better final test performance regardless of whether the "lag to test" (or short retention interval) was controlled (Metcalfe et al., 2009).

However, other research findings have questioned the impact of the DRE on student retention and performance. Peeck et al. (1985) demonstrated that final exam scores for students in a typical instructional setting were very similar whether they received immediate or delayed feedback. Opitz et al. (2011) reported that delayed feedback is less effective for artificial language learning. Moreover, a study on the effect of delay on the regret of online learning algorithms revealed that delay increases the regret multiplicatively in adversarial problems and additively in stochastic problems (Joulani et al., 2013).

2.4 Recursion

The technique that programmers use to express operations in terms of themselves is called recursion. In the C programming language, this takes the form of a function that calls itself. A useful way to think of recursive functions is to imagine them as a process where one of the instructions is to "repeat the process" (cprogramming.com, Lesson 16: Recursion in C). In other words, recursion is simply a form of "self-referencing" or "self-calling." In computer programming, the (recursive) function is executed again inside the same function, i.e. the function call is part of the function body.
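
As a minimal illustration of this self-calling form in C (an example of ours, not the sample program used in the ILO), consider the classic factorial function:

#include <stdio.h>

/* Classic recursive factorial: the function body contains a call to
   the function itself, i.e. the "self-calling" described above. */
unsigned long factorial(unsigned int n)
{
    if (n <= 1)                      /* base case: stops the recursion */
        return 1UL;
    return n * factorial(n - 1);     /* recursive case: self-reference */
}

int main(void)
{
    printf("5! = %lu\n", factorial(5));  /* prints 5! = 120 */
    return 0;
}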

Recursion is one of the most important CS concepts and is included in almost all introductory CS courses. It is a central notion in CS mainly because it makes it possible to describe complex algorithms and data structures in a simple and elegant manner by applying the idea of self-reference to programming (Harvey and Wright, 1999).

3. Methods

3.1 Developing the ILO

An agile framework was observed in developing the ILO, as the requirements were dynamically identified; an initial set of requirements was specified, and the requirements changed in every iteration according to the suggestions of content experts (see Figure 1).

A test-driven development (TDD) approach was used in developing the ILO. In TDD, problems are solved incrementally: small features are developed first and the most complex features last. Every time a feature is developed, it is tested using drivers (i.e. "calling programs") and stubs (i.e. dummy pieces of code) before being integrated into the main ILO, to ensure that the ILO is bug free.
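
The paper does not show the actual drivers and stubs; a minimal sketch of what such a driver might look like in C, assuming a small hypothetical feature (sum_to) to be exercised before integration, is:

#include <assert.h>
#include <stdio.h>

/* Hypothetical small feature under test: 1 + 2 + ... + n, recursively. */
int sum_to(int n)
{
    return (n <= 0) ? 0 : n + sum_to(n - 1);
}

/* Driver ("calling program"): exercises the feature in isolation
   before it is integrated into the main ILO. */
int main(void)
{
    assert(sum_to(0) == 0);    /* boundary case */
    assert(sum_to(5) == 15);   /* typical case  */
    puts("All driver tests passed.");
    return 0;
}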

The ILO was conceptually designed with three main parts, namely, the "Introduction," the "Lesson Presentation" (i.e. the teaching-learning part), and the multiple-choice "Quiz" (i.e. the assessment part). The "Introduction" briefly introduces the topic of recursion and, at the same time, provides an overview of what to expect in the next few minutes of the viewing-learning session. The "Lesson Presentation" contains all the vital information the viewer-learner needs to understand how a recursive function works. This part implemented viewer-learner activities such as tracing a sample program, line by line, so that learners see how a recursive function runs and how it affects the execution of the program. All these activities made use of graphics and animation coupled with user interactivity.
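
The sample program traced in the ILO is not reproduced in the paper; the following hypothetical C program illustrates the kind of line-by-line behavior such a trace reveals, printing each call as it is entered and left:

#include <stdio.h>

/* Prints each entry and exit so the recursive calls can be traced
   as they stack up and then unwind. */
void countdown(int n)
{
    printf("entering countdown(%d)\n", n);
    if (n > 0)
        countdown(n - 1);    /* recursive call on a smaller problem */
    printf("leaving  countdown(%d)\n", n);
}

int main(void)
{
    countdown(3);    /* enters 3, 2, 1, 0, then leaves 0, 1, 2, 3 */
    return 0;
}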

One important design feature of the ILO was the implementation of a role-playing game (RPG) theme. In this approach, the viewer-learner can customize an avatar, i.e. a graphic icon that represents the viewer-learner's persona while viewing the ILO, at the start of every viewing-learning session.

Two similar versions of the ILO were developed, differing only in the "Quiz" part. One version implemented "Immediate Feedback" during assessment: whenever the viewer-learner selects a choice, a sound is produced, either an applause together with a popping "Correct" icon for a correct answer, or a buzzer for a wrong one. The viewer-learner is given multiple chances, with a corresponding penalty or deduction in the points earned for that item, until he/she figures out the correct answer. In other words, if previous attempts were made before selecting the correct answer, the viewer-learner cannot earn the full point for that question item.

The other version implemented "Delayed Feedback," in which the viewer-learner is not allowed to answer multiple times, i.e. is allowed only one answer per question. Regardless of whether the answer is correct, no sound is played, and the ILO proceeds right away to the next question item after a choice is made. The answers are checked only after all question items of the quiz have been answered. Then, all the quiz items are shown with a corresponding mark before each, either a check for a correct answer or an "X" for a wrong one. If the answer to a question item is correct, only the correct answer is shown; if the answer is wrong, the viewer-learner's answer for that item is shown side by side with the correct one.
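
The paper does not state the exact penalty scheme; the following C sketch captures the two scoring rules as described, with FULL_POINT and PENALTY as assumed, illustrative values:

#include <stdio.h>

#define FULL_POINT 1.0
#define PENALTY    0.25   /* assumed deduction per wrong attempt */

/* Immediate feedback: the learner keeps answering until correct, and
   each wrong attempt reduces the points earned for the item. */
double immediate_item_score(int wrong_attempts)
{
    double score = FULL_POINT - wrong_attempts * PENALTY;
    return (score < 0.0) ? 0.0 : score;
}

/* Delayed feedback: one answer per item, checked only after the whole
   quiz; the item is simply right (full point) or wrong (zero). */
double delayed_item_score(int is_correct)
{
    return is_correct ? FULL_POINT : 0.0;
}

int main(void)
{
    printf("%.2f\n", immediate_item_score(2));  /* 0.50 after 2 misses */
    printf("%.2f\n", delayed_item_score(0));    /* 0.00 for a miss */
    return 0;
}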

For both versions of the ILO, viewing time is within the learner's average attention span of 15 minutes (Wilson and Korn, 2007).

3.2 Experimental setup

For the respondents of this study, first-year BS Mathematics (BS Math) and BS Computer Science (BSCS) students were considered, since they already have a background in "Functions" but are unfamiliar with the concept of recursion, which made them an appropriate target population for the study.

A sample size of 32 students, 24 from BS Math I and 8 from BSCS I, was determined based on the assumption that this size yields an acceptable level of power for a test (i.e. >80 percent) when the underlying distribution of the data is approximately normal. The 32 respondents were then divided into two groups, with assignment to each group done randomly. One group was subjected to the ILO with immediate feedback and the other group to the ILO with delayed feedback. Each group consisted of 8 males and 8 females, for a total of 16 respondents per ILO version.
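
The paper does not show its power computation; for a two-sample comparison of means under the stated assumptions (normality, power 1−β>0.80), a standard sample-size formula that could underlie such a determination is

\[
n \approx \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^{2}\,\sigma^{2}}{\delta^{2}},
\]

where n is the number of respondents per group, σ the common standard deviation, δ the smallest mean difference worth detecting, and z the standard normal quantile.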

3.3 Evaluating the ILO as educational resource

To assess the developed ILO's acceptability as a web-based e-learning material, the Multimedia Educational Resource for Learning and Online Teaching (MERLOT) standard instrument was adapted and utilized in this study. For this purpose, a list of pre-identified prospective raters from three occupations, namely, (undergraduate) programming students, CS/IT (programming) faculty, and IT-industry practitioners, was prepared. One representative from each occupation was then picked randomly and asked to serve as an evaluator-rater using the MERLOT instrument.

This setup was implemented in order to obtain variety in the feedback and in the evaluation perspectives, based on the raters' varying levels of experience with software and their knowledge of recursion. Moreover, doing so is one way to determine whether the ratings are consistent, by measuring inter-rater agreement, or consensus, in the perceived utility of the developed ILO as an e-learning material despite the raters' diverse backgrounds.

4. Results and discussion

The developed ILO had an RPG theme to entice the students, considering that most students today love to play games. The RPG feel gives them a personal experience when viewing the ILO, as they are allowed to choose their own avatar. More importantly, the RPG theme lets them feel like they are playing instead of studying a lesson. This approach also promotes a more personalized learning environment, which we believe enhances learner engagement when viewing the ILO in particular, and learning in general.

4.1 Sharable Content Object Reference Model (SCORM) standard

The developed ILO passed the SCORM standard, a set of technical standards for e-learning software products. Specifically, the ILO met the following high-level requirements: accessibility, interoperability, durability, and reusability.

Given that the ILO is created in HTML format, it passes the "accessibility" requirement. In fact, the ILO can be either shared locally and run offline, or accessed online via its URL (http://bit.ly/recursion_delayed_final for the version with delayed feedback, and http://bit.ly/recursion_immediate_final for the version with immediate feedback).

The “interoperability” requirement was satisfied considering that the ILO can run on multiple platforms, operating systems, and web browsers.

The ILO also passed the "durability" standard, as it applied the latest technology available at the time of the study (i.e. HTML5), which is widely used by web developers. As such, the ILO does not run on an obsolete software platform, which prepares it to withstand technological changes.

For the "reusability" criterion, the fact that the developed ILO can be used by prospective users as many times as they wish, without diminishing its learning quality, effectively satisfies this standard.

4.2 MERLOT (adapted) questionnaire internal consistency

To assess the internal consistency, or reliability, of the questionnaire items adapted from the MERLOT standard instrument, Cronbach's α for each subject criterion was calculated using SPSS statistical software (see Table I). These computed indices determine how well the questionnaire items consistently measure the corresponding criteria: "ease of use" (or user friendliness of the ILO), "quality of content," and "potential effectiveness as a teaching tool." As Table I shows, the computed indices are well above 0.9 (α⩾0.9), which implies that the MERLOT (adapted) questionnaire, which uses Likert-scale questions, captured the aforementioned subject criteria reliably, or excellently by the rule of thumb (www.statisticshowto.com/cronbachs-alpha-spss/). Moreover, the computed indices give evidence that the MERLOT (adapted) questionnaire is a highly acceptable instrument for assessing the developed ILO.
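
For reference, the index that SPSS's reliability procedure computes is the standard Cronbach's α,

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),
\]

where k is the number of items in a criteria group, σ²_{Y_i} the variance of item i, and σ²_X the variance of the total score over the group.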

4.3 MERLOT evaluation

Table II shows the summary results of the ILO assessment using the MERLOT instrument, as well as Kendall's coefficient of concordance, W, for each of MERLOT's subject criteria. "Quality of content" is rated highest (M=4.40, SD=0.53) among the MERLOT standards, while "potential effectiveness as a teaching tool" (M=4.11, SD=0.84) is rated lowest. These results imply that the raters perceive the developed ILO to be above average, or "very good," in educational content and in the informational correctness of the content as presented. However, inter-rater agreement is fair (i.e. 0.33), which suggests that, although the raters are consistent in rating the ILO, they somewhat differ in the magnitude of their ratings. In contrast, while "potential effectiveness as a teaching tool" is rated lowest, the rating obtained is still above average, though the raters only slightly agreed (i.e. 0.19) in their assessment. This shows that the raters' diverse backgrounds and experiences greatly impacted this part of the MERLOT criteria, which is reasonable considering their different occupational orientations.
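
For reference, Kendall's coefficient of concordance for m raters ranking n items is, ignoring tie corrections,

\[
W = \frac{12\,S}{m^{2}(n^{3}-n)}, \qquad S = \sum_{i=1}^{n}\left(R_i - \bar{R}\right)^{2},
\]

where R_i is the sum of the ranks given to item i and R̄ their mean; here m=3 raters. W ranges from 0 (no agreement) to 1 (complete agreement).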

In addition, Kendall's W for the "ease of use" criterion is moderate (i.e. 0.47), which implies that the raters consistently rated the ILO as "user-friendly," although we cannot strongly assert this due to the small number of raters involved. Nevertheless, the developed ILO garnered high scores (i.e. above average) in the MERLOT assessment, with an overall Kendall's W of 0.42, which corresponds to moderate rater agreement. These results therefore provide evidence that the developed ILO met the MERLOT high standards for a web-based e-learning material.

4.4 Immediate vs delayed feedback

Table III shows that student-respondents who viewed the ILO with immediate feedback had statistically significantly higher embedded quiz scores on the first attempt (M=8.25, SD=0.80) than those who had delayed feedback on the first attempt (M=7.63, SD=0.89). Similarly, on the second attempt, their embedded quiz scores were statistically significantly higher (M=8.67, SD=0.69) than those of respondents who viewed the ILO with delayed feedback (M=8.13, SD=0.81). The results reveal that immediate feedback is more effective than delayed feedback. This observation is consistent with the outcome of Ghilay and Ghilay (2015), in which there was a significant reduction in the number of dropouts and an improvement in academic results when feedback was given immediately.

Comparing the performance of the BS Math and BSCS students who viewed the ILO version with immediate feedback, the latter got numerically higher scores, on average, in both attempts. However, the observed differences between the two groups are statistically insignificant (p-value>0.05, results not shown), implying that the corresponding statistical evidence is weak and the observed differences may be attributed to chance. Nevertheless, this outcome provides plausible evidence that the ILO met the "reusability" criterion of SCORM, i.e. it can be reused for learning across different course groups of students.

On the other hand, a comparison of scores between course groups for the ILO version with delayed feedback could not be done, since all the respondents randomly assigned to that version belonged to only one course group.

4.4.1 Wilcoxon signed-rank test

To determine whether there is real improvement in the quiz scores, or in the learning of the student-respondents in general, a Wilcoxon signed-rank test was used. Table IV indicates that student-respondents who viewed the ILO with immediate feedback showed less improvement, as evidenced by a lower number of positive ranks (i.e. 7) than those who viewed the ILO with delayed feedback (i.e. 14 in Table V), based on two attempts. However, those who viewed the ILO with immediate feedback also showed greater learning retention, as evidenced by a greater number of tied scores or ranks (i.e. 9) than those who viewed the ILO with delayed feedback (i.e. 2 in Table V). These results suggest that a learner viewing the ILO version with delayed feedback requires more viewing sessions than one viewing the version with immediate feedback to achieve enhanced learning retention.
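
For reference, the Wilcoxon signed-rank statistic underlying Tables IV and V ranks the absolute score differences d_i (second attempt minus first attempt), excluding ties (d_i = 0):

\[
T^{+} = \sum_{d_i > 0} R_i, \qquad T^{-} = \sum_{d_i < 0} R_i,
\]

where R_i is the rank of |d_i|; the test rejects "no change" when the smaller of T⁺ and T⁻ is sufficiently extreme. In Table IV, for example, the seven positive differences carry ranks summing to T⁺ = 28, with T⁻ = 0.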

On the other hand, neither ILO version resulted in learning degradation or a negative learning outcome after the second attempt, as manifested by zero observed negative ranks. This implies that, regardless of the type of feedback method implemented in the developed ILO, learning of the "recursion" concept is certainly realized. This finding is similar to the results among college students learning GRE-level words in the vocabulary-learning study of Metcalfe et al. (2009), where both immediate and delayed feedback yielded better results than no feedback.

5. Conclusion and recommendations

The development of a web-based ILO in this study was successful, as evidenced by the above-average scores, i.e. higher than 4.0 across the three subject criteria, on the MERLOT standard instrument, and by the positive learning outcome observed (with zero negative ranks) after two learning sessions. Our results further strengthen the claim that a feedback mechanism is vital to the learning process, especially when implemented in an ILO. Furthermore, we found that the ILO version implementing the immediate feedback method is better, in terms of likely higher learning retention for the viewer-learner, than the version implementing delayed feedback. Therefore, either of the two developed ILO versions can now be used as an educational resource, since both effected a positive learning outcome and met the MERLOT standards.

For future research, the study can be further improved by looking into more advanced concepts of "recursion." In addition, implementation of the "recursion" concept from the perspective of two different programming languages can be explored.

Figures

Figure 1. The systems development life cycle (SDLC) process implemented in this study

Table I. Cronbach's α values for the MERLOT (adapted) questionnaire items under each subject criterion or criteria group

Criteria group Cronbach’s αa
Quality of content 0.95
Ease of use 0.92
Potential effectiveness as a teaching tool 0.98

Note: aCalculated by SPSS version 22

Table II. Mean ratings and ratings' agreement for MERLOT criteria

MERLOT criteria n Mean SD Kendall’s Wa Remarkb
Quality of content 3 4.40 0.53 0.33 Fair agreement
Ease of use 3 4.24 0.72 0.47 Moderate agreement
Potential effectiveness as a teaching tool 3 4.11 0.84 0.19 Slight agreement

Notes: n=3. aCalculated by SPSS version 22; binterpretation adopted from Landis and Koch (for κ values) in Gisev et al. (2013)

Table III. Descriptive measures for quiz scores of student-respondents

Feedback method Mean SD Min. Max.
Immediate 8.25/8.67* 0.80/0.69 6.75/7.00 9.50/9.75
Delayed 7.63/8.13* 0.89/0.81 6.00/7.00 9.00/9.00

Notes: n=32. Values are given as first attempt/second attempt. *p-value<0.05

Table IV. SPSS output for the Wilcoxon signed-rank test on the student-respondents' embedded quiz scores from two attempts under the ILO version with immediate feedback

Second-first n Mean rank Sum of ranks
Negative ranks 0a 0.00 0.00
Positive ranks 7b 4.00 28
Ties 9c
Total 16

Notes: n=16. aFirst attempt>second attempt; bsecond attempt>first attempt; csecond attempt=first attempt

Table V. SPSS output for the Wilcoxon signed-rank test on the student-respondents' embedded quiz scores from two attempts under the ILO version with delayed feedback

Second-first n Mean rank Sum of ranks
Negative ranks 0a 0.00 0.00
Positive ranks 14b 7.50 105
Ties 2c
Total 16

Notes: n=16. aFirst attempt>second attempt; bsecond attempt>first attempt; csecond attempt=first attempt

References

Akçayır, M., Dündar, H. and Akçayır, G. (2016), “What makes you a digital native? Is it enough to be born after 1980?”, Computers in Human Behavior, Vol. 60, Part C, pp. 435-440, doi: 10.1016/j.chb.2016.02.089.

Banday, M.T., Ahmed, M. and Jan, T.R. (2014), “Applications of e-learning in engineering education: a case study”, Procedia – Social and Behavioral Sciences, Vol. 123, March, pp. 406-413, doi: 10.1016/j.sbspro.2014.01.1439.

Bellini, C.G.P., Isoni Filho, M.M., Moura Junior, P.J. and Pereira, R.C.F. (2016), “Self-efficacy and anxiety of digital natives in face of compulsory computer-mediated tasks: a study about digital capabilities and limitations”, Computers in Human Behavior, Vol. 59, June, pp. 49-57, doi: 10.1016/j.chb.2016.01.015.

Bennedsen, J. and Carpersen, M.E. (2008), “Exposing the programming process”, in Bennedsen, J., Carpersen, M.E. and Kolling, M. (Eds), Reflection on the Theory of Programming: Methods and Implementation, 1st ed., Springer Verlag, Berlin, pp. 6-16.

Brackbill, Y., Boblitt, W.E., Davlin, D. and Wagner, J.E. (1963), “Amplitude of response and the delay-retention effect”, Journal of Experimental Psychology, Vol. 66 No. 1, pp. 57-64, doi: 10.1037/h0043368.

Butler, A., Karpicke, J. and Roediger, H. (2007), “The effect of type and timing of feedback on learning from multiple-choice tests”, Journal of Experimental Psychology: Applied, Vol. 13 No. 4, pp. 273-281.

Butler, Y.G. (2015), “The use of computer games as foreign language learning tasks for digital natives”, System, Vol. 54, November, pp. 91-102, doi: 10.1016/j.system.2014.10.010.

Ceresia, F. (2016), “Interactive learning environments (ILEs) as effective tools for teaching social sciences”, Procedia – Social and Behavioral Sciences, Vol. 217, February, pp. 512-521, doi: 10.1016/j.sbspro.2016.02.031.

Chookaew, S., Panjaburee, P., Wanichsan, D. and Laosinchai, P. (2014), “A personalized e-learning environment to promote student’s conceptual learning on basic computer programming”, Procedia – Social and Behavioral Sciences, Vol. 116, February, pp. 815-819, doi: 10.1016/j.sbspro.2014.01.303.

Christ, A. and Thews, O. (2016), “Using numeric simulation in an online e-learning environment to teach functional physiological contexts”, Computer Methods and Programs in Biomedicine, Vol. 127, Part C, pp. 15-23, doi: 10.1016/j.cmpb.2016.01.012.

Connolly, T., Stansfield, M. and McLellan, E. (2006), “Using an online games-based learning approach to teach database design concepts”, Electronic Journal of e-Learning, Vol. 4 No. 1, pp. 103-110.

Dihoff, R.E., Brosvic, G.M. and Epstein, M.L. (2003), “The role of feedback during academic testing: the delay retention effect revisited”, The Psychological Record, Vol. 53 No. 4, pp. 533-548, available at: http://opensiuc.lib.siu.edu/tpr/vol53/iss4/2

Eikaas, T., Foss, B., Solbjorg, O. and Bjolseth, T. (2006), “Game-based dynamic simulations supporting technical education and training”, International Journal of Online Engineering, Vol. 2 No. 2, available at: http://online-journals.org/index.php/i-joe/article/view/325

Garrido, A., Morales, L. and Serina, I. (2016), “On the use of case-based planning for e-learning personalization”, Expert Systems with Applications, Vol. 60, Part C, pp. 1-15, doi: 10.1016/j.eswa.2016.04.030.

Ghilay, Y. and Ghilay, R. (2015), “FBL: feedback based learning in higher education”, Higher Education Studies, Vol. 5 No. 5, pp. 1-10, doi: 10.5539/hes.v5n5p1.

Gisev, N., Bell, J.S. and Chen, T.F. (2013), “Interrater agreement and interrater reliability: key concepts, approaches, and applications”, Research in Social and Administrative Pharmacy, Vol. 9 No. 3, pp. 330-338, doi: 10.1016/j.sapharm.2012.04.004.

Hakkarainen, K., Hietajärvi, L., Alho, K., Lonka, K. and Salmela-Aro, K. (2015), “Sociodigital revolution: digital natives vs digital immigrants”, in Wright, J.D. (Ed.), International Encyclopedia of the Social & Behavioral Sciences, Vol. 22, Scientific Publication Co., Elsevier, Amsterdam, pp. 918-923.

Harvey, B. and Wright, M. (1999), Simply Scheme: Introducing Computer Science, 2nd ed., The MIT Press, London, pp. 173-178.

IEEE Learning Technology Standards Committee (2002), “Draft Standard for Learning Object Metadata”, IEEE, Piscataway, NJ, available at: http://grouper.ieee.org/groups/ltsc/wg12/files/LOM_1484_12_1_v1_Final_Draft.pdf (accessed September 3, 2016).

Joulani, P., György, A. and Szepesvári, C. (2013), “Online learning under delayed feedback”, Proceedings of the 30th International Conference on Machine Learning: JMLR: W&CP, Vol. 28 No. 3, pp. 1453-1461.

Kazimoglu, C., Kiernan, M., Bacon, L. and Mackinnon, L. (2010), “Developing a game model for computational thinking and learning traditional programming through game-play”, in Sanchez, J. and Zhang, K. (Eds), Proceedings of E-Learn: World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 2010, Association for the Advancement of Computing in Education (AACE), Chesapeake, VA, pp. 1378-1386.

Kazimoglu, C., Kiernan, M., Bacon, L. and Mackinnon, L. (2012), “A serious game for developing computational thinking and learning introductory computer programming”, Procedia – Social and Behavioral Sciences, Vol. 47, February, pp. 1991-1999, doi: 10.1016/j.sbspro.2012.06.938.

Lai, F., Luo, R., Zhang, L., Huang, X. and Rozelle, S. (2015), “Does computer-assisted learning improve learning outcomes? Evidence from a randomized experiment in migrant schools in Beijing”, Economics of Education Review, Vol. 47, August, pp. 34-48, doi: 10.1016/j.econedurev.2015.03.005.

Liu, C.-C., Cheng, Y.-B. and Huang, C.-W. (2011), “The effect of simulation games on the learning of computational problem solving”, Computers & Education, Vol. 57 No. 3, pp. 1907-1918, doi: 10.1016/j.compedu.2011.04.002.

Matei, A. and Vrabie, C. (2013), “E-learning platforms supporting the educational effectiveness of distance learning programmes: a comparative study in administrative sciences”, Procedia – Social and Behavioral Sciences, Vol. 93, October, pp. 526-530, doi: 10.1016/j.sbspro.2013.09.233.

Metcalfe, J., Kornell, N. and Finn, B. (2009), “Delayed versus immediate feedback in children’s and adults’ vocabulary learning”, Memory & Cognition, Vol. 37 No. 8, pp. 1077-1087.

Mo, D., Huang, W., Shi, Y., Zhang, L., Boswell, M. and Rozelle, S. (2015), “Computer technology in education: evidence from a pooled study of computer assisted learning programs among rural students in china”, China Economic Review, Vol. 36, Part C, pp. 131-145, doi: 10.1016/j.chieco.2015.09.001.

Opitz, B., Ferdinand, N. and Mecklinger, A. (2011), “Timing matters: the impact of immediate and delayed feedback on artificial language learning”, Frontiers in Human Neuroscience, Vol. 5, February, pp. 1-9.

Peeck, J., van den Bosch, A.B. and Kreupeling, W.J. (1985), “Effects of informative feedback in relation to retention of initial responses”, Contemporary Educational Psychology, Vol. 10 No. 4, pp. 303-313, doi: 10.1016/0361-476x(85)90028-1.

Polsani, P. (2006), “Use and abuse of reusable learning objects”, Journal of Digital Information, Vol. 3 No. 4, available at: https://journals.tdl.org/jodi/index.php/jodi/article/view/89/88

Prensky, M. (2001), “Digital natives, digital immigrants part 1”, On the Horizon, Vol. 9 No. 5, pp. 1-6, doi: 10.1108/10748120110424816.

Ruiz, J.G., Mintzer, M.J. and Leipzig, R.M. (2006), “The impact of e-learning in medical education”, Academic Medicine, Vol. 81 No. 3, pp. 207-212, doi: 10.1097/00001888-200603000-00002.

Štuikys, V., Burbaitė, R., Bespalova, K. and Ziberkas, G. (2016), “Model-driven processes and tools to design robot-based generative learning objects for computer science education”, Science of Computer Programming, Vol. 129, November, pp. 48-71, doi: 10.1016/j.scico.2016.03.009.

Tarpada, S.P., Morris, M.T. and Burton, D.A. (2016), “E-learning in orthopedic surgery training: a systematic review”, Journal of Orthopaedics, Vol. 13 No. 4, pp. 425-430, doi: 10.1016/j.jor.2016.09.004.

Thompson, P. (2013), “The digital natives as learners: technology use patterns and approaches to learning”, Computers & Education, Vol. 65, July, pp. 12-33, doi: 10.1016/j.compedu.2012.12.022.

Tkalac Verčič, A. and Verčič, D. (2013), “Digital natives and social media”, Public Relations Review, Vol. 39 No. 5, pp. 600-602, doi: 10.1016/j.pubrev.2013.08.008.

University of North Texas in partnership with Texas Education Agency (2008), “Classroom best practices: providing feedback to students in the classroom”, available at: https://plc.moodle.bismarckschools.org/pluginfile.php/30754/mod_book/chapter/2100/feedback%20pdf%20for%20MS%20SBG.pdf (accessed September 3, 2016).

Wilson, K. and Korn, J.H. (2007), “Attention during lectures: beyond ten minutes”, Teaching of Psychology, Vol. 34 No. 2, pp. 85-89, doi: 10.1177/009862830703400202.

Yao, K., Uedo, N., Muto, M., Ishikawa, H., Cardona, H.J., Filho, E.C.C., Pittayanon, R., Olano, C., Yao, F., Parra-Blanco, A., Ho, S.H., Avendano, A.G., Piscoya, A., Fedorov, E., Bialek, A.P., Mitrakov, A., Caro, L., Gonen, C., Dolwani, S., Farca, A., Cuaresma, L.F., Bonilla, J.J., Kasetsermwiriya, W., Ragunath, K., Kim, S.E., Marini, M., Li, H., Cimmino, D.G., Piskorz, M.M., Iacopini, F., So, J.B., Yamazaki, K., Kim, G.H., Ang, T.L., Milhomem-Cardoso, D.M., Waldbaum, C.A., Carvajal, W.A.P., Hayward, C.M., Singh, R., Banerjee, R., Anagnostopoulos, G.K. and Takahashi, Y. (2016), “Development of an E-learning system for the endoscopic diagnosis of early gastric cancer: an international multicenter randomized controlled trial”, EBioMedicine, Vol. 9, July, pp. 140-147, doi: 10.1016/j.ebiom.2016.05.016.

Corresponding author

Demelo Madrazo Lao can be contacted at: dmlao1@up.edu.ph
