Abstract
Purpose
Feedback is crucial in the learning process, particularly in online interactions where learners and instructors are distantly located. Thus, this paper investigates the association between feedback strategies, an embedded course syllabus and learning improvement in the Sakai Learning Management System.
Design/methodology/approach
This paper uses a survey design to collect cross-sectional data from adult distance learning students. The data were analysed using descriptive statistics and a standard multiple regression model in Stata.
Findings
The results show that the feedback strategies of timing, mode, quality and quantity, as well as the embedded course syllabus, have a significant relationship with learning improvement. However, the feedback strategy of target is not significantly related to learning improvement, even though it recorded the highest mean score among the feedback strategies.
Originality/value
This paper has contributed to the extant literature by providing empirical evidence to support the constructivism theory of learning from a distance learning perspective in a developing country. The study has shown that if the feedback strategies are well managed and applied, they would make a considerable impact on distance education students' academic pursuits. Hence, the paper provides a pedagogical foundation for short and long-term distance learning policy.
Citation
Attiogbe, E.J.K., Oheneba-Sakyi, Y., Kwapong, O.A.T.F. and Boateng, J. (2023), "Assessing the relationship between feedback strategies and learning improvement from a distance learning perspective", Journal of Research in Innovative Teaching & Learning, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/JRIT-10-2022-0061
Publisher
Emerald Publishing Limited
Copyright © 2023, Esther Julia Korkor Attiogbe, Yaw Oheneba-Sakyi, O.A.T.F. Kwapong and John Boateng
License
Published in Journal of Research in Innovative Teaching & Learning. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
1. Introduction
Feedback enables teachers and students to improve their performance. Feedback is vital in learning, especially in distance education, because it facilitates interaction between instructors and learners, and such interaction is essential for critical reasoning and deep learning in distance education (Tagoe and Cole, 2020). Therefore, feedback strategies can enhance interaction and the exchange of information between lecturers, students and peers.
From a global perspective, feedback has been touted as essential to learning success. For example, in the USA, tasks, self-assessments, peer appraisals and periodic assignments with instantaneous feedback are effective strategies and key factors in online and distance education (Gaytan and McEwen, 2007). In Europe, Nicol and Macfarlane-Dick (2006, p. 205) proposed "principles of good feedback practice that engender student self-regulation and learning improvement." Similarly, Ypsilandis (2002) notes that feedback is an assistance mechanism and a key factor for successful learning; feedback is thus understood as an interaction between learners (peers) or as something generated by the learners themselves. An Australian study likewise found that feedback encouraged peer engagement and subsequently improved students' learning (Moore and Teather, 2013). From the African perspective, most of the literature is situated in the South African context; Ngwenya (2019), for example, explains that large class sizes prevent the effective provision of feedback to students. In Ghana, formative assessment and feedback give distance education students an opportunity to better appreciate the gap between their present and anticipated performance (Amoako, 2018). Additionally, the strategy or mechanism used to provide online feedback affects students' learning (Boateng et al., 2016), and feedback must be prompt in distance education to help students achieve better performance (Quansah et al., 2017).
It follows that feedback is valuable when it informs the student's direction and enhances student self-regulation (Brookhart, 2008; Price et al., 2010). Such a feedback strategy improves student learning and academic performance. Challenging feedback also leads to reflective thinking (Sternad, 2015) and is critical for adult learners in distance education. Although Learning Management Systems (LMSs) aid interaction between stakeholders in the learning environment, researchers are largely silent on the role that feedback plays in this interaction (Cavus, 2015). Badu-Nyarko and Amponsah (2016) contend that feedback provision is a challenge, with 75.5% of students more dissatisfied with tutorial feedback than with any other teaching and learning activity. A related study conducted in Hong Kong by Espasa et al. (2018) reported a lack of understanding of feedback by both learners and instructors in higher education. However, with the advent of online or blended learning, discussion tools such as chatrooms, forums and video meetings are enabled, and these can facilitate a rich feedback experience for lecturers and learners. Participants in a post-graduate hybrid class indicated that selecting the right tools in an LMS enabled quick feedback that enhanced student-lecturer participation (Asamoah and Oheneba-Sakyi, 2017).
The development of technology has also improved the provision of distance education in higher education institutions globally (Biney, 2020; Chen et al., 2021). Distance education provision in the USA rose from 25.9% in 2012 to 29.7% in 2015 (Seaman et al., 2018), and Trines (2018) reports that seven million students in Bangladesh, India, Iran, Pakistan, South Africa and Turkey use open and distance learning or digital learning. Digital learning delivers content through technology regardless of geographic location (Abumandour, 2022), increasing accessibility and bridging the equality gap in education. In this regard, the teacher facilitates the process irrespective of place, path or time, and students set their own pace to learn (Davis, 2020). Trends in education technology have enabled interaction through LMSs and Content Management Systems (CMSs). These programmes have been used extensively in universities and colleges in the USA, Europe and, more recently, Asia and Africa (Unwin et al., 2010; Dahlstrom et al., 2014; Kigundu, 2014; Larbi-Apau et al., 2017; Rucker and Frass, 2017). Using an LMS for teaching and learning has also been touted as the quickest way for students to be involved in their studies because of the ubiquity of smartphones and Internet availability (Ramírez-Correa et al., 2015). Involvement implies interacting with information and activities, between students and their peers and between students and their lecturers, through feedback.
LMSs have become an integral part of education, and their use will continue to influence the teaching and learning environment globally (Dubey and Sahu, 2022). Many universities have resorted to using LMSs intensively after COVID-19. Though much literature is available on feedback usage, information on how feedback is pursued and utilised via technology is limited. Thus, it is imperative to assess feedback strategies that will stimulate students' performance in an online environment from the Ghanaian perspective. Since limited studies have reported the influence of feedback strategies in LMSs and how students use this feedback to improve learning, this paper focuses on the phenomenon among students pursuing distance education in higher education at the University of Ghana (UG), which is the only university in Ghana that has successfully rolled out the Sakai LMS. The survey design is employed to collect numeric data for statistical analysis using the standard multiple regression techniques in Stata. The study findings provide empirical evidence for short- and long-term distance learning policies in the post-COVID-19 era. The rest of the paper includes a literature review and hypotheses, methodology, results, discussion, implications, a conclusion and limitations and suggestions for future research.
2. Literature review and hypotheses development
2.1 Constructivists' theory of learning
In a practical learning setting, both students and instructors or facilitators are involved in activities directed at the desired outcomes. Involvement means interacting with each other in the learning environment (Woo and Reeves, 2007). This concept of interactive learning is referred to as a constructivist approach to learning, and Powell and Kalina (2009, p. 249) describe the "strategies, tools, and practices" that are important for effective learning. Constructivism comprises two elements: cognitive constructivism (Piaget, 1953) and social constructivism (Vygotsky, 1978), both of which are used to achieve effective learning outcomes (Powell and Kalina, 2009; Liu and Chen, 2010). In constructivist environments, teacher reflectivity in practice encourages students to constantly evaluate their understanding of learning (Bada, 2015). Furthermore, in a constructivist learning context, the syllabus is broken down into observable learning units; the focus is on exciting students' curiosity, using basic materials as well as materials that allow students to explore ideas further; learning builds on students' existing knowledge; dialogic teaching and learning encourage students to co-construct knowledge; the role of the teacher is that of a facilitator; assessment is both formative and summative; the experiences gained in the learning environment lead to more knowledge creation; and group work is a primary focus (Bada, 2015; Deulen, 2013).
Such constructivist elements lead to new ways of learning for students. In other words, using feedback strategies in online learning should be embedded in the course syllabus to provide students with the ability to socially construct their own knowledge as well as that of their peers. Modern LMSs allow interaction and collaboration between and among learners, which in itself is a motivation for giving and receiving good feedback. Again, the propensity to engage in reflective learning as students receive feedback from all relevant stakeholders is likely to lead to improved learning. The ability to interact is a characteristic of a social constructivist environment where diversity is embraced for the purpose of learning (Powell and Kalina, 2009). Thus, students help each other with the curriculum and content, with the teacher facilitating the process.
2.2 Feedback strategies
Aoun et al. (2018) identify five feedback mechanisms: feedback given by colleagues (peers), summative feedback (grades) on assessments, generic (formative) feedback on assessments, feedback on lecture exercises and feedback on tutorial activities. These mechanisms were embedded in the university course curriculum they investigated, and several of them relate to formative and summative feedback. Verification is another type of feedback, confirming whether a response is right or wrong, and such a mechanism may involve further variables (Shute, 2008). Elaborative feedback can address a topic or a response from the learner, address particular errors (specific and directive), provide samples for guidance, or counsel the learner (general and facilitative) (Shute, 2008; Archer, 2010). In other words, elaborative feedback strategies can be verbal (written or oral).
A written feedback strategy sometimes involves the use of codes. McLeod and Mortimer (2012) argue that faculty or teachers use codes in providing feedback and that students must be equally aware of the codes if they are to use the feedback for effective learning. This is critical because if students do not know the types and forms of feedback available, how would they identify and use feedback? McLeod and Mortimer (2012) also agree with other researchers that assessment provided as an end-point measurement (summative) does not lead to effective learning (Sadler, 1998; Espasa et al., 2018).
Researchers are concerned with how students engage effectively with feedback that positively impacts the learning process (McLeod and Mortimer, 2012). In terms of useful feedback, the literature suggests that institutions that designed feedback instruments into their curricula benefitted students the most (McLeod and Mortimer, 2012; Ruohoniemi et al., 2017). This indicates that, in an online environment, streamlining how feedback is given to aid teaching and learning should be a matter of course rather than of selective use. Again, balancing the strategies so that learners benefit is key (Brown, 2018). Furthermore, Guasch et al. (2019) add corrective, suggestive and epistemic-suggestive feedback strategies to the types explained in the previous paragraph.
It is worth noting that much research on feedback has been conducted among undergraduate students, especially first-year students because feedback meetings are useful to students to adjust their learning progress in higher education (Cramp, 2011; Tsai, 2013; Crimmins et al., 2016; Ruohoniemi et al., 2017). On the contrary, Sadler (1998) argues that feedback is critical for teaching and learning and that it is relevant at all levels of education and should be part of the syllabus.
2.3 Learning improvement
Learning improvement is the achievement that students make in the learning process. Terenzini (2020) asserts that learning improvement is based on teaching and learning experiences and suggests six features that enable it: students improve when they come across thought-provoking concepts, ideas or people; when they actively engage with the challenge; when learning happens in supportive settings; when it inspires practical learning; when it involves other people; and when it encourages reflection. More so, peer assessment and the active engagement of peers in providing quality feedback lead to positive gains in learning improvement (Li et al., 2010), and discovery learning and redesign also lead to positive learning outcomes among distance education students (Ames, 2016). In other words, instructors can use different strategies over the course of a semester or programme to support learning processes. Adequate assessment practices are key to improvements in learning, but vaguely defined assessment practices undermine improvement (Fulcher et al., 2017). Thus, institutions must create and design supportive environments through specific goal-setting practices. According to Sánchez et al. (2020), innovative ways of teaching and learning, such as gamification, improve learning; others explore the use of blogs and wikis to meet the demands of distance students taking control of their learning (Beldarrain, 2006).
Distance learners need to use and act on feedback in an online setting if they want to improve their learning. Cavalcanti et al. (2021) note that feedback is important in distance learning because students and teachers are geographically separated; their systematic review found that about 65% of the studies examined showed that automatic feedback improves students' performance. It is therefore important to encourage continuous research on how to use and implement online feedback. In another study, Espasa et al. (2022) found that students in online learning environments preferred video feedback over text or audio feedback, perhaps because video feedback generates more of the interpersonal connection that distance students desire.
2.4 Conceptual framework and hypotheses
Figure 1 shows that there may be a strong link between feedback strategies (timing, mode, target, quality and quantity) and learning improvement. This assumption is consistent with the social constructivist belief that using an appropriate learning approach influences learners' behaviour and, as such, provides a relevant attitude (reflection) for learners' success (Lynch, 2016). From the constructivist approach, learners' experiences of group work lead to knowledge generation (Bada, 2015). Furthermore, the assertion is consistent with Al-Harthi's (2010) finding that students avoid uncertainty by preferring programmes with high structure and high interaction levels.
Empirically, feedback is related positively to learning outcomes (Fonseca and Chi, 2011). The study by Diab (2011) shows that the timing of the feedback influences learners' academic improvement. It has become essential to consider feedback as not just an option but a necessary process in the learning environment, so its provision must be timely (Diab, 2011). When feedback is provided late, it has the tendency to affect students' academic improvement. This suggests that timely feedback has the propensity to influence learning improvement, particularly in the online learning environment where there is no physical contact. Therefore, the following hypothesis has been proposed:
H1a. Feedback strategy (timing) is significantly related to learning improvement.
Hattie (2009) indicates that feedback correlates positively with academic achievement in traditional learning spaces, which is underscored by Hattie et al. (2017), who argue that useful feedback needs to be understood from the perspective of the amount of information received by learners rather than what is given. However, Hawe and Parr (2014) report that most feedback targets students' achievement rather than the learning itself. When giving feedback, then, it is important to consider what the feedback is about: feedback cannot be general but must be specific if the goal of improving learning is to be reached. Hence, the following hypothesis has been proposed:
H1c. Feedback strategy (target) is significantly related to learning improvement.
Brookhart (2012) contends that teachers ought to choose the feedback mode that is most appropriate and sufficient to ensure that the information meant to enhance learners' academic performance is delivered. Likewise, Brooks et al. (2019) distinguish feedback provided as verbal comments from teachers to their students from feedback transferred in written form. Double et al. (2020) also report that the mode of feedback may vary considerably, from detailed written and verbal reviews to quantitative performance ratings. These studies suggest that feedback mode may be related to students' academic performance, although the evidence is not conclusive. Therefore, the following hypothesis has been proposed:
H1b. Feedback strategy (mode) is significantly related to learning improvement.
Instructors' quality feedback exchanges with learners create integrated environments that identify, form, motivate and affect learning motivation (Jackson et al., 2013; Tan et al., 2019). Diab (2011) also shows that quality feedback influences learners' academic improvement, which Rotsaert et al. (2018) underscore by revealing that quality feedback has a more significant effect on learners' academic improvement. On the other hand, Hawe and Parr (2014) report that learning-intensive, quality feedback is not always useful for students' learning improvement; in other words, quality feedback is helpful, but not always. As a result, Tan et al. (2019) posit that the information provided should be sufficient in quantity as well as helpful in quality. Additionally, the volume of feedback provided by instructors facilitates a connected learning community (Deulen, 2013). It follows that the quality of the information being given is as important as its quantity (Hattie et al., 2017). Thus, to ensure that learner and instructor interact effectively to enhance learning improvement, neither the quality nor the quantity feedback strategy can be compromised. Hence, the following hypotheses have been proposed:
H1d. Feedback strategy (quality) is significantly related to learning improvement.

H1e. Feedback strategy (quantity) is significantly related to learning improvement.
Nicol (2010) establishes that feedback provided through an embedded course syllabus is significantly associated with learning improvement. Thus, there is a need to reconceptualise feedback in terms of how it is received by the learner rather than how the teacher gives it (Hattie et al., 2017). An embedded course syllabus helps spell out how feedback will be provided, so it is important to understand its relationship with learning improvement, as suggested by earlier studies (for example, Saba, 2002; Nicol, 2010). Therefore, the following hypothesis has been proposed:
H2. Embedded course syllabus is significantly related to learning improvement.
3. Methodology
3.1 Research design
The study adopted descriptive and cross-sectional survey designs. These designs help to describe the relationship between feedback strategies and learning improvement using cross-sectional data. Additionally, a quantitative approach was employed to collect numeric data during the survey; a quantitative approach is objective, supports a larger sample size and has higher credibility (Johnson and Christensen, 2004; Neuman, 2014). The quantitative data helped test the formulated hypotheses. The constructivist theory suggests that using an appropriate learning approach influences the learner's behaviour and success, implying a relationship. The chosen designs and approaches are, therefore, helpful for exploring that relationship and providing empirical evidence for the theory in the context of LMSs and learning improvement.
3.2 Population
The study's population comprises all third-year students in the four programmes at the Distance Education Department of the School of Continuing and Distance Education (University of Ghana). Third-year students were chosen because they had been using the Sakai LMS for at least two years. Among the public universities, only UG had all distance education programmes mounted on the Sakai LMS interface as of 2019, having migrated from KEWL to Sakai in 2014. Initial discussions revealed that only UG has adopted a 70% online teaching and learning model augmented with 30% face-to-face interaction for all distance learning students; UG should therefore have in place the well-structured ICT and LMS infrastructure needed to achieve the objectives of this research. Other universities have installed LMSs, such as the Kwame Nkrumah University of Science and Technology (KNUST) and the University of Cape Coast, which use Moodle; however, these universities did not have all courses mounted in the LMS. The researchers therefore settled on UG, whose LMS was fully operational, to conduct the study.
3.3 Sampling
Using the formula developed by Krejcie and Morgan (1970), the researchers determined a sample size of 355 across the four programmes, with a margin of error of 5% at a 95% confidence level, from a total population of 1,223. The distance learning students were naturally stratified by programme, so the sample was proportionately allocated to the four programmes based on data from the Academic Office: Bachelor of Arts was allocated 46.2%, BSc Administration 30%, Nursing 21% and IT 2.7%. The allocation for IT is low because few students at this level took the IT programme. Access to the full student list was critical to give all members of the population an equal chance of selection (Saunders et al., 2016). For the actual data collection, a simple random technique via a lottery process was used to select respondents (Thompson, 2012): in each lecture hall, the researchers wrote "yes" or "no" on pieces of paper and passed them around, and anyone who drew "yes" was given a questionnaire to fill out. The nursing students were the exception; because it was difficult to access all of them in a lecture hall at once due to the nature of their work, they were recruited conveniently.
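As an illustration of this calculation, the Stata sketch below implements the Krejcie and Morgan (1970) formula and the proportional allocation. The macro names are hypothetical; the population, confidence parameters and strata shares are those reported above, and the study's reported sample of 355 is taken as given.

```stata
* Sketch: Krejcie & Morgan (1970) minimum sample size and proportional
* allocation. Population and strata shares are as reported in the paper;
* macro names and rounding are illustrative assumptions.
local N    = 1223      // total population of third-year students
local chi2 = 3.841     // chi-square value, 1 df, 95% confidence
local P    = 0.5       // assumed population proportion (maximum variability)
local d    = 0.05      // margin of error
local s = (`chi2'*`N'*`P'*(1-`P')) / (`d'^2*(`N'-1) + `chi2'*`P'*(1-`P'))
display "Krejcie-Morgan minimum sample: " ceil(`s')

* Proportional allocation of the study's reported sample of 355
local n = 355
display "BA:        " round(0.462*`n')
display "BSc Admin: " round(0.300*`n')
display "Nursing:   " round(0.210*`n')
display "IT:        " round(0.027*`n')
```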
3.4 Instrumentation and data collection
The researchers adapted scales for feedback strategies (Aoun et al., 2018) and learning improvement (Terenzini, 2020) to create a five-point Likert scale questionnaire based on the study's objectives. The questionnaire was divided into four sections: Section I collected demographic data; Section II collected data on the feedback strategies adopted in Sakai online teaching and learning; Section III collected data on how the syllabus was designed to integrate feedback-embedded learning outcomes; and Section IV collected data on learning improvement. Samples of the items in the instrument are provided in the appendix. A pilot test was carried out with distance learning students whose courses were mounted on the Moodle LMS at KNUST's Kwabenya Campus to confirm construct validity and item reliability. Before the pilot test, the instrument was reviewed by two higher education learning experts to ensure content validity; their feedback was positive, with some minor suggestions that helped refine the final instrument.
A different university was chosen for the pilot test to ensure validity and reliability, and the respondents recruited for the pilot had all their courses mounted in Moodle at KNUST. We piloted the study at KNUST because the characteristics of its students matched those of UG students. A pilot study is a preparatory study that evaluates research designs, measures, processes, recruiting criteria and operational tactics for use in a subsequent, generally larger study (Moore et al., 2011). An external pilot is also preferred to an internal one (Machin et al., 2018): an internal pilot is used to review the sample size in an ongoing main study and is considered part of the main study's analysis and interpretation, whereas external-pilot research informs the construction of the definitive (main) study but its data are not included in the main analysis (Machin et al., 2018), as is the case in the present study.
3.5 Validity and reliability
Exploratory factor analysis (EFA) was conducted using principal components analysis with varimax rotation to identify the factors underlying the study constructs (Tabachnick and Fidell, 2014); the full EFA is presented in Table 1. Convergent validity and composite reliability were also assessed and are reported in Table 2. The Cronbach's alpha (CA) coefficients are all above the 0.70 threshold, indicating that the measuring items are internally consistent (Cooper and Schindler, 2008), and the composite reliability indices are also above 0.70. All AVE scores are greater than 0.5, implying convergent validity of the constructs (Hair et al., 2013). Furthermore, discriminant validity was assessed by calculating the square roots of the AVEs, which are greater than the inter-factor correlations (Table 2).
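As a rough sketch of how such checks run in Stata, the commands below perform a principal-components EFA with varimax rotation and compute Cronbach's alpha for each construct. The item names (ecs1-ecs8, fsu1-fsu17, fnli1-fnli8) are assumptions inferred from the item labels in Table 1, not the authors' actual variable names.

```stata
* Sketch: EFA via principal-component factoring with varimax rotation,
* plus internal-consistency checks. Item names are assumed stand-ins.
factor ecs1-ecs8 fsu1-fsu17 fnli1-fnli8, pcf   // principal-component factoring
rotate                                         // orthogonal varimax rotation (default)

* Cronbach's alpha per construct (acceptability threshold: 0.70)
alpha ecs1-ecs8, item
alpha fsu1-fsu17, item
alpha fnli1-fnli8, item
```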
3.6 Data analysis
The analysis was divided into two parts: descriptive and inferential statistics. The descriptive analysis comprised frequencies, means, standard deviations, skewness and kurtosis; a normality test was done using skewness and kurtosis. The inferential statistics were performed using standard multiple regression analysis to test the hypotheses. Thus, the following regression model was estimated:

TotalLIi = β0 + β1FBTimingi + β2FBModei + β3FBTargeti + β4FBQuali + β5FBQuanti + β6ECSi + εi

where:

TotalLIi = dependent variable (overall learning improvement);

β0 = constant/intercept: the value of the dependent variable when all the independent variables are equal to zero;

FBTiming, FBMode, FBTarget, FBQual, FBQuant and ECS = independent variables (the five feedback strategies and the embedded course syllabus);

β1, β2, β3, β4, β5 and β6 = coefficients/parameters: measure the change in the dependent variable resulting from a unit change in the corresponding independent variable;

εi = error term, representing variables not included in the regression model.
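In Stata, this model reduces to a single regress call, whose output corresponds to Table 5. The sketch below uses hypothetical variable names mirroring the constructs, since the paper does not report its actual variable names.

```stata
* Sketch: the standard multiple regression reported in Table 5.
* Variable names are hypothetical stand-ins for the study's constructs.
regress totalli fbtiming fbmode fbtarget fbqual fbquant ecs
```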
4. Results
Table 3 presents the results for the demographic characteristics. It can be observed that about 44% of the respondents are male while about 56% are female. Regarding marital status, 83% of respondents are single; 16% are married; 0.6% (representing two persons) are separated; and 0.3% (representing one person) are divorced. Again, 47% of the respondents are employed, 14.6% are self-employed and 38.3% are unemployed, meaning that most of the respondents (61.6%) are in some form of employment. With respect to the respondents' age, 46% are between 23 and 27 years, 28% are between 28 and 32 years, 19% are between 17 and 22 years, close to 6% are between 33 and 37 years, and less than 1% are 38 years or older. Concerning the programme pursued by the participants, 44% are pursuing a BA, almost 32% are studying BSc Administration, about 20% are BSc Nursing students and the remaining 4% are pursuing BSc Information Technology. This indicates that most of the students sampled are BA students.
Table 4 presents the descriptive statistics for the study constructs. The mean scores show that, among the feedback-related constructs, the embedded course syllabus recorded the highest rating (M = 3.19). Among the feedback strategies, target recorded the highest mean (M = 2.97), while mode recorded the lowest (M = 2.46). On the five-point scale, the constructs recorded mean scores around or above the scale mid-point, indicating general affirmation, while the standard deviations show the extent of variation in the responses obtained. The data set is considered to be normally distributed since skewness and kurtosis fall between −1 and +1 (Hilton et al., 2021). Hence, a regression analysis is carried out to ascertain the relationship between feedback strategies and learning improvement.
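A minimal Stata sketch of this descriptive screen follows; variable names are again hypothetical. Note that Stata reports raw kurtosis (approximately 3 for a normal distribution), so 3 must be subtracted to compare against the ±1 excess-kurtosis range used in Table 4.

```stata
* Sketch: means, SDs, skewness and kurtosis for the normality screen.
* Stata's kurtosis is raw (normal = 3); excess kurtosis = kurtosis - 3.
tabstat fbtiming fbmode fbtarget fbqual fbquant ecs totalli, ///
    statistics(mean sd skewness kurtosis) columns(statistics)
```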
Table 5 presents the regression results, with learning improvement as the dependent variable and the feedback strategies and embedded course syllabus as independent variables. The linear regression model posits that learning improvement depends on the feedback strategies (timing, mode, target, quality and quantity) and the embedded course syllabus. The R² shows the proportion of variation in learning improvement explained by the regressors: approximately 43% of the variation is explained by the feedback strategies and the embedded course syllabus (R² = 0.426; adjusted R² = 0.417). All the regressors in the model except feedback target are statistically significant. The p-value of the F-statistic (0.000) shows that the regressors are jointly statistically significant at the 5% significance level.
From Table 5, FS-timing has a significant positive relationship with learning improvement: all other variables held constant, a unit change in FS-timing may result in a 0.811 unit change in learning improvement. This implies that distance education students learning on the Sakai LMS may improve when they receive timely feedback. Further, FS-mode has a statistically significant positive relationship with learning improvement; holding all other variables constant, a unit change in FS-mode may result in a 0.851 unit change in learning improvement, suggesting that FS-mode significantly influences the level of learning improvement. Regarding FS-target, its relationship with learning improvement is positive but not significant, meaning that the change in learning improvement resulting from a unit change in FS-target is statistically negligible. FS-quality is significantly positively related to learning improvement: learning improvement may increase by 0.501 units as a result of a unit change in FS-quality, holding all other variables constant. FS-quantity is also significantly positively related to learning improvement; a one-unit change in FS-quantity may result in a 0.479 unit change in learning improvement, holding all other variables constant. This implies that the volume or quantity of feedback that distance education students derive from the Sakai LMS may significantly influence how they improve.
Additionally, the embedded course syllabus has a significant positive association with learning improvement. All other things being equal, a unit change in the embedded course syllabus may result in a 0.444 unit change in the level of learning improvement. This implies that when the course syllabus shows how feedback will be used in the online learning context, online distance learners' learning may improve.
To ensure that the independent variables employed in the regression model are not highly correlated, a multicollinearity test was conducted using the Variance Inflation Factor (VIF). The decision rule is that a VIF above 5 indicates severe multicollinearity, a VIF of 1 indicates no multicollinearity, and a VIF between 1 and 5 indicates moderate multicollinearity. From Table 6, since all VIFs are below 5, there is no severe multicollinearity in the regression model.
To obtain reliable results, free from heteroscedasticity and the influence of outliers that could affect the parameters of the linear multiple regression model, the Breusch–Pagan test for heteroscedasticity was conducted (Fornalski, 2015). The p-value of the chi-square statistic is 0.1535, which exceeds the 0.05 significance level. Hence, the null hypothesis of constant variance (homoscedasticity) is not rejected.
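Both diagnostics are standard one-line postestimation commands in Stata, run immediately after the regression; a minimal sketch:

```stata
* Sketch: postestimation diagnostics, run directly after -regress-.
estat vif       // variance inflation factors (Table 6); VIF > 5 flags trouble
estat hettest   // Breusch-Pagan/Cook-Weisberg test (reported chi2 p = 0.1535)
```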
Table 7 provides a summary of the hypothesis paths in the study. The findings revealed that timing has a significant positive effect on FNLI (H1a supported), and mode has a significant positive effect on FNLI (H1b supported). Further, target shows a non-significant positive effect on FNLI (H1c not supported); quality feedback has a significant positive effect on FNLI (H1d supported); and quantity feedback has a significant positive effect on FNLI (H1e supported). Finally, the embedded course syllabus (ECS) also shows a significant positive effect on FNLI (H2 supported).
5. Discussion
This study has examined the relationship between feedback strategies, an embedded course syllabus and learning improvement among UG distance learning students using the Sakai LMS. The empirical results show that all the feedback strategies (timing, mode, quality and quantity) except target have a significant relationship with learning improvement. This is contrary to the findings of Hawe and Parr (2014), who found that most feedback targeted learners' academic achievement rather than the learning itself. The present finding is also inconsistent with Hawe and Parr's (2014) demonstration that the quality of feedback that is learning-intensive is not always useful for students' learning improvement. The differences in the results could be due to the context of the study.
Furthermore, the current findings are consistent with Hattie's (2009) result that feedback correlates positively with academic achievement in traditional learning spaces. In terms of individual influence, the results indicated that the feedback-embedded course syllabus makes the largest contribution to learning improvement (Beta = 0.241), followed by feedback mode (Beta = 0.156) and timing (Beta = 0.146), with feedback quantity the least (Beta = 0.133). This supports the evidence that the quantity of feedback is necessary, as Hattie et al. (2017) argued that useful feedback needs to be understood from the perspective of the amount of information received by learners rather than what is given. In line with the constructivist approach, lecturers facilitate a connected learning community through the volume of feedback provided (Deulen, 2013). Again, the timing of feedback is critical, as delayed or untimely feedback negatively impacts students' learning: students end up not utilising delayed feedback because it becomes irrelevant by the time they receive it.
Brookhart (2012) recommended that teachers choose the feedback mode most appropriate and sufficient to ensure that the information meant to enhance learners' academic progress is actually received. The results of this study showed that feedback quality, timing and a feedback-embedded course syllabus had a statistically significant effect on students' learning. This result is similar to Rotsaert et al.'s (2018) finding that the quality of the feedback learners receive from their assessors has a greater effect on their academic improvement. Jackson et al. (2013) and Tan et al. (2019) found that teachers' quality feedback exchanges with learners create integrated environments that shape and sustain learners' motivation to learn. The current findings support this evidence.
Similarly, Brooks et al. (2019) distinguished feedback provided as verbal comments from teachers to their students from feedback transferred in written form; the present study, in contrast, considered both written and verbal feedback delivered through an online learning management system. The study found a statistically significant link between the embedded course syllabus, the feedback mode and learning improvement. The study's findings are consistent with those of Diab (2011), which depicted that both the quality and timing of feedback can play a significant role in learners' academic improvement. As in Nicol (2010), the findings of the current study show that feedback provided through an embedded course syllabus can significantly influence learning improvement; a structured course syllabus will further enhance the degree of autonomy at the students' disposal (Saba, 2002). Likewise, the present results agree with the claim of Double et al. (2020) that the mode of feedback may vary considerably, from detailed written and verbal reviews to quantitative performance ratings.
Additionally, the current findings support Hattie et al.'s (2017) argument that there is a need to reconceptualise feedback in terms of how it is received by the learner rather than how the teacher gives it. Using the feedback strategies and the embedded course syllabus to predict learning improvement, the estimated R² (0.426) showed that these variables explained 42.6% of the variation in learning improvement.
6. Implications
The study findings provide a pedagogical foundation for short- and long-term policy. Firstly, priority should be given to feedback strategies, particularly feedback quantity, mode, timing and quality, when the management of the various universities in Ghana runs online distance education programmes. When instituting measures aimed at improving students' learning outcomes, distance education practitioners making policy choices among the various feedback strategies could give less attention to feedback targets and focus instead on the other strategies. This is critical because the study results indicated that only the feedback target has an insignificant influence on learning improvement. Theoretically, the findings elucidate that feedback strategies applied in online distance education can vary in timing, mode and target, as well as in quality and quantity. This is in tandem with the studies of Hsu (2016) and Peters et al. (2018), who found among distance learners and on-campus students that online experimental learning relates positively to learning progress based on appropriate feedback.
The significant positive association between the embedded course syllabus and learning improvement is contrary to the view of Li and Gao (2016) that it is challenging to convey feedback through online learning systems and that this can negatively affect learning progress. Practically, the regression model developed in this study can be adapted or adopted by policymakers for planning. Other researchers can also adopt it in studying feedback strategies and their relationship with students' academic improvement, particularly at the public universities in Ghana that run online distance education programmes. The methodology adopted in this study can, therefore, be replicated to identify gaps that would warrant further studies in different dimensions.
7. Conclusion
Using the right feedback strategies (timing, mode, target, quality and quantity) can improve both teaching and learning when the Sakai LMS is used for distance education. The study has shown that feedback strategies, if well managed and applied, could have a considerable effect on the academic goals of distance education students. Additionally, evidence from several prior studies reviewed here showed that the use of the Sakai LMS is not pervasive in colleges and universities in most African countries. However, the narrative is gradually changing in Ghana, as UG has taken the lead in LMS usage for blended learning for other universities to follow. With the advent of COVID-19, opportunities are ripe for institutions to adopt LMSs. Additionally, there is now empirical evidence that a combination of feedback strategies (timing, mode, target, quality and quantity) and an embedded course syllabus, as a measure of an appropriate feedback mechanism, can yield reliable results in future studies. Finally, the study demonstrates that for adult distance education students to achieve academic progress through feedback provision, the management of distance education and lecturers should adopt an appropriate feedback culture that integrates all distance education students onto the Sakai LMS for high performance.
8. Limitations and suggestions for future research
Despite the significance of this study, its focus on a single university and country limits the generalisability of the findings. Future studies should therefore incorporate more universities and a larger sample to improve representativeness and allow the findings to inform policy beyond Ghana. Again, a comparative study of the usage of online platforms for teaching and learning could be carried out in both public and private universities in Ghana, assessing how well each sector utilises online platforms for teaching and learning.
Figure 1. Conceptual framework linking feedback strategies (timing, mode, target, quality and quantity) and embedded course syllabus to learning improvement
Table 1. Rotated matrix for study constructs (EFA)
Items | Loadings | Eigenvalues | % of variance | Cumulative % of variance |
---|---|---|---|---|
ECS | 4.64 | 6.75 | 24.06 | |
ECS4 | 0.75 | |||
ECS6 | 0.73 | |||
ECS7 | 0.73 | |||
ECS2 | 0.72 | |||
ECS3 | 0.69 | |||
ECS1 | 0.66 | |||
ECS5 | 0.63 | |||
ECS8 | 0.58 | |||
FSU | 2.47 | 5.66 | 39.12 | |
FSU2 | 0.77 | |||
FSU11 | 0.75 | |||
FSU13 | 0.75 | |||
FSU7 | 0.75 | |||
FSU6 | 0.75 | |||
FSU8 | 0.74 | |||
FSU17 | 0.73 | |||
FSU12 | 0.72 | |||
FSU16 | 0.72 | |||
FSU9 | 0.72 | |||
FSU10 | 0.71 | |||
FNLI | 2.30 | 3.90 | 44.28 | |
FNLI2 | 0.64 | |||
FNLI3 | 0.68 | |||
FNLI4 | 0.67 | |||
FNLI5 | 0.72 | |||
FNLI6 | 0.77 | |||
FNLI7 | 0.74 | |||
FNLI8 | 0.69 |
Table 2. Convergent validity, composite reliability and inter-factor correlation
Constructs | CA | CR | AVE | 1 | 2 | 3 | 4 | 5 | 6 |
---|---|---|---|---|---|---|---|---|---|
1. ECS | 0.75 | 0.85 | 0.69 | 0.83 | |||||
2. FST | 0.90 | 0.77 | 0.74 | 0.53** | 0.87 | ||||
3. FSM | 0.90 | 0.76 | 0.74 | 0.48** | 0.49** | 0.87 | |||
4. FSTG | 0.90 | 0.77 | 0.74 | 0.46** | 0.38** | 0.60** | 0.87 | ||
5. FSQ | 0.90 | 0.80 | 0.75 | 0.42** | 0.35** | 0.54** | 0.61** | 0.87 | |
6. LI | 0.95 | 0.87 | 0.62 | 0.47** | 0.49** | 0.50** | 0.60** | 0.61** | 0.79 |
Table 3. Demographic characteristics of respondents
Factor | Frequency (355) | Percent (%) |
---|---|---|
Gender | ||
Male | 157 | 44.2 |
Female | 198 | 55.8 |
Marital Status | ||
Single | 295 | 83.1 |
Married | 57 | 16.1 |
Divorced | 1 | 0.3 |
Separated | 2 | 0.6 |
Employment Status | ||
Employed | 167 | 47.0 |
Self-Employed | 52 | 14.6 |
Unemployed | 136 | 38.3 |
Age | ||
17–22 years | 68 | 19.2 |
23–27 years | 163 | 45.9 |
28–32 years | 99 | 27.9 |
33–37 years | 22 | 6.2 |
38–42 years | 2 | 0.6 |
43–50+ years | 1 | 0.3 |
Programme | ||
BA | 157 | 44.2 |
BSc ADMIN | 113 | 31.8 |
BSc NURSING | 72 | 20.3 |
BSc INFOTECH | 13 | 3.7 |
Table 4. Descriptive statistics
Variables | Mean | Std. Deviation | Skewness | Kurtosis |
---|---|---|---|---|
FS-Timing | 2.89 | 0.98 | −0.06 | −0.58 |
FS-Mode | 2.46 | 1.00 | 0.32 | −0.61 |
FS-Target | 2.97 | 0.84 | −0.02 | 0.03 |
FS-Quality | 2.81 | 0.93 | −0.09 | −0.40 |
FS-Quantity | 2.63 | 1.07 | 0.10 | −0.84 |
Embedded Syllabus | 3.19 | 0.76 | −0.30 | −0.11 |
Learning improvement | 3.31 | 0.85 | −0.53 | −0.10 |
Table 5. Regression results for learning improvement
Variables | Coefficients | Standard errors |
---|---|---|
FS-Timing | 0.811*** | (0.312) |
FS-Mode | 0.851*** | (0.313) |
FS-Target | 0.443 | (0.291) |
FS-Quality | 0.501*** | (0.146) |
FS-Quantity | 0.479** | (0.191) |
Embedded Course Syllabus | 0.444*** | (0.103) |
Constant | 13.062 | (5.427) |
Observations | 355 | |
R² | 0.426 |
Adjusted R² | 0.417 |
Prob > F | 0.000*** |
Note(s): Standard errors in parentheses ***p < 0.01, **p < 0.05, *p < 0.10
Table 6. Test for multicollinearity
Variable | VIF | 1/VIF |
---|---|---|
Feedback Mode | 1.91 | 0.522533 |
Feedback Timing | 1.89 | 0.529757 |
Feedback Target | 1.78 | 0.562271 |
Feedback Quantity | 1.46 | 0.684516 |
Feedback Quality | 1.07 | 0.934463 |
Mean VIF | 1.62 |
Table 7. Summary of hypotheses results
Hypotheses path | Hypotheses | t-values | Beta | p-values | Decisions |
---|---|---|---|---|---|
FS-Timing → FNLI | H1a | 2.604 | 0.146 | 0.010 | Supported |
FS-Mode → FNLI | H1b | 2.723 | 0.156 | 0.007 | Supported |
FS-Target → FNLI | H1c | 1.521 | 0.084 | 0.129 | Rejected |
FS-Qual → FNLI | H1d | 3.440 | 0.145 | 0.001 | Supported |
FS-Quant → FNLI | H1e | 2.510 | 0.133 | 0.013 | Supported |
ECS → FNLI | H2 | 4.304 | 0.241 | 0.000 | Supported |
Note(s): ***p < 0.01, **p < 0.05 and *p < 0.10
Appendix: Questionnaire
References
Abumandour, E.T. (2022), “Applying e-learning system for engineering education – challenges and obstacles”, Journal of Research in Innovative Teaching and Learning, Vol. 15 No. 2, pp. 150-169.
Al-Harthi, A.S. (2010), “Cultural differences in transactional distance preference by Arab and American distance learners”, Quarterly Review of Distance Education, Vol. 11 No. 4, pp. 257-267.
Ames, K. (2016), “Distance education and ‘discovery learning’ in first-year journalism: a case in subject improvement”, Asia Pacific Media Educator, Vol. 26 No. 2, pp. 214-225.
Amoako, I. (2018), “Formative assessment practices among distance education tutors in Ghana”, African Journal of Teacher Education, Vol. 7 No. 3, pp. 22-36.
Aoun, C., Vatanasakdakul, S. and Ang, K. (2018), “Feedback for thought: examining the influence of feedback constituents on learning experience”, Studies in Higher Education, Vol. 43 No. 1, pp. 72-95.
Archer, J.C. (2010), “State of the science in health professional education: effective feedback”, Medical Education, Vol. 44 No. 1, pp. 101-108.
Asamoah, M.K. and Oheneba-Sakyi, Y. (2017), “Constructivist tenets applied in ICT- mediated teaching and learning: higher education perspectives”, Africa Education Review, Vol. 14 Nos 3/4, pp. 196-211.
Bada, S.O. (2015), “Constructivism learning theory: a paradigm for teaching and learning”, IOSR Journal of Research and Method in Education, Vol. 5 No. 6, pp. 2320-7388.
Badu-Nyarko, S.K. and Amponsah, S. (2016), “Assessment of challenges in distance education at University of Ghana”, Indian Journal of Open Learning, Vol. 25 No. 2, pp. 87-103.
Beldarrain, Y. (2006), “Distance education trends: integrating new technologies to foster student interaction and collaboration”, Distance Education, Vol. 27 No. 2, pp. 139-153.
Biney, I.K. (2020), “Experiences of adult learners using Sakai learning management system in learning in Ghana”, Journal of Adult and Continuing Education, Vol. 26 No. 2, pp. 262-282.
Boateng, R., Boateng, S.L., Awuah, R.B., Ansong, E. and Anderson, A.B. (2016), “Video in learning in higher education: assessing perceptions and attitudes of students at the University of Ghana”, Smart Learning Environment, Vol. 3, pp. 1-13.
Brookhart, S.M. (2008), “Effective feedback”, Educational Leadership, ASCD, Vol. 128 No. 1, doi: 10.1016/j.ajic.2009.04.219.
Brookhart, S.M. (2012), “Teacher feedback in formative classroom assessment”, in Webbe, C. and Lupart, J. (Eds), Leading Student Assessment, Springer, pp. 225-239.
Brooks, C., Carroll, A., Gillies, R.M. and Hattie, J. (2019), “A Matrix of feedback for learning”, Australian Journal of Teacher Education, Vol. 44 No. 4, pp. 14-32.
Brown, A. (2018), “Engaging students as partners in developing online learning and feedback activities for first-year fluid mechanics”, European Journal of Engineering Education, Vol. 43 No. 1, pp. 26-39.
Cavalcanti, A.P., Barbosa, A., Carvalho, R., Freitas, F., Tsai, Y.S., Gašević, D. and Mello, R.F. (2021), “Automatic feedback in online learning environments: a systematic literature review”, Computers and Education: Artificial Intelligence, Vol. 2, p. 100027.
Cavus, N. (2015), “Distance learning and learning management systems”, Procedia - Social and Behavioral Sciences, Vol. 191, pp. 872-877.
Chen, C., Landa, S., Padilla, A. and Yur-Austin, J. (2021), “Learners' experience and needs in online environments: adopting agility in teaching”, Journal of Research in Innovative Teaching and Learning, Vol. 14 No. 1, pp. 18-31.
Cooper, C.R. and Schindler, P.S. (2008), Business Research Methods, 10th ed., McGraw-Hill, Boston.
Cramp, A. (2011), “Developing first-year engagement with written feedback”, Active Learning in Higher Education, Vol. 12 No. 2, pp. 113-124.
Crimmins, G., Nash, G., Oprescu, F., Liebergreen, M., Turley, J., Bond, R. and Dayton, J. (2016), “A written, reflective and dialogic strategy for assessment feedback that can enhance student/teacher relationships”, Assessment and Evaluation in Higher Education, Vol. 41 No. 1, pp. 141-153.
Dahlstrom, E., Brooks, D.C. and Bichsel, J. (2014), “The current ecosystem of learning management systems in Higher Education: student, faculty, and IT perspectives”, EDUCAUSE Research Report (Issue September 2014), doi: 10.13140/RG.2.1.3751.6005.
Davis, L. (2020), “Digital learning: what to know”, Evolving, available at: https://gosa.georgia.gov/what-digital-learning
Deulen, A.A. (2013), “Social constructivism and online learning environments: toward a theological model for christian educators”, Christian Education Journal: Research on Educational Ministry, Vol. 10 No. 1, pp. 90-98.
Diab, N.M. (2011), “Assessing the relationship between different types of student feedback and the quality of revised writing”, Assessing Writing, Vol. 16 No. 4, pp. 274-292.
Double, K.S., McGrane, J.A. and Hopfenbeck, T.N. (2020), “The impact of peer assessment on academic performance: a meta-analysis of control group studies”, Educational Psychology Review, Vol. 32, pp. 481-509.
Dubey, P. and Sahu, K.K. (2022), “Investigating various factors that affect students' adoption intention to technology-enhanced learning”, Journal of Research in Innovative Teaching and Learning, Vol. 15 No. 1, pp. 110-131.
Espasa, A., Guasch, T., Mayordomo, R.M., Martínez-Melo, M. and Carless, D. (2018), “A Dialogic Feedback Index measuring key aspects of feedback processes in online learning environments”, Higher Education Research and Development, Vol. 37 No. 3, pp. 499-513.
Espasa, A., Mayordomo, R.M., Guasch, T. and Martinez-Melo, M. (2022), “Does the type of feedback channel used in online learning environments matter? Students' perceptions and impact on learning”, Active Learning in Higher Education, Vol. 23 No. 1, pp. 49-63.
Fonseca, B.A. and Chi, M.T. (2011), “Instruction based on self-explanation”, Handbook of Research on Learning and Instruction, Routledge, pp. 310-335.
Fornalski, K. (2015), “Application of robust Bayesian regression analysis”, International Journal of Systems Science, Vol. 7 No. 4, pp. 314-333.
Fulcher, K.H., Smith, K.L., Sanchez, E.R.H., Ames, A.J. and Meixner, C. (2017), “Return of the pig: standards for learning improvement”, Research and Practice in Assessment, Vol. 11, pp. 5-17.
Gaytan, J. and McEwen, B.C. (2007), “Effective online instructional and assessment strategies”, The American Journal of Distance Education, Vol. 21 No. 3, pp. 117-132.
Guasch, T., Espasa, A. and Martinez-Melo, M. (2019), “The art of questioning in online learning environments: the potentialities of feedback in writing”, Assessment and Evaluation in Higher Education, Vol. 44 No. 1, pp. 111-123.
Hair, J.F., Ringle, C.M. and Sarstedt, M. (2013), “Partial least squares structural equation modeling: rigorous applications, better results and higher acceptance”, Long Range Planning, Vol. 46 No. 1/2, pp. 1-12.
Hattie, J. (2009), Visible Learning: A Synthesis of over 800 Meta-Analyses Relating to Achievement, Routledge, London.
Hattie, J., Gan, M. and Brooks, C. (2017), “Instruction based on feedback”, in Mayer, R.E. and Alexander, P.A. (Eds), Handbook of Research on Learning and Instruction, 2nd ed., Routledge, pp. 290-324.
Hawe, E. and Parr, J. (2014), “Assessment for learning in the writing classroom: an incomplete realisation”, Curriculum Journal, Vol. 25 No. 2, pp. 210-237.
Hilton, S.K., Arkorful, H. and Martins, A. (2021), “Democratic leadership and organizational performance: the moderating effect of contingent reward”, Management Research Review, Vol. 44 No. 7, pp. 1042-1058.
Hsu, T.C. (2016), “Effects of a peer assessment system based on a grid-based knowledge classification approach on computer skills training”, Journal of Educational Technology and Society, Vol. 19 No. 4, pp. 100-111.
Jackson, B., Whipp, P.R., Chua, K.P., Dimmock, J.A. and Hagger, M.S. (2013), “Students' tripartite efficacy beliefs in high school physical education: within-and cross-domain relations with motivational processes and leisure-time physical activity”, Journal of Sport and Exercise Psychology, Vol. 35, pp. 72-84.
Johnson, R.B. and Christensen, L. (2004), Educational Research: Quantitative, Qualitative and Mixed Approaches, 2nd ed., Allyn & Bacon, SC.
Kigundu, S. (2014), “Engaging e-learning in higher education: issues and challenges”, International Journal of Educational Sciences, Vol. 6 No. 1, pp. 125-132.
Krejcie, R.V. and Morgan, D.W. (1970), “Determining sample size for research activities”, Educational and Psychological Measurement, Vol. 30, pp. 607-610.
Larbi-Apau, J.A., Guerra-Lopez, I., Moseley, J.L., Spannaus, T. and Yaprak, A. (2017), “Educational technology-related performance of teaching faculty in higher education”, Journal of Educational Technology Systems, Vol. 46 No. 1, pp. 61-79.
Li, L. and Gao, F. (2016), “The effect of peer assessment on project performance of students at different learning levels”, Assessment and Evaluation in Higher Education, Vol. 41 No. 6, pp. 885-900.
Li, L., Liu, X. and Steckelberg, A.L. (2010), “Assessor or assessee: how student learning improves by giving and receiving peer feedback”, British Journal of Educational Technology, Vol. 41 No. 3, pp. 525-536.
Liu, C.C. and Chen, I.J. (2010), “Evolution of constructivism”, Contemporary Issues in Education Research, Vol. 3 No. 4, pp. 63-66.
Lynch, M. (2016), “Social constructivism in education”, The Edvocate, available at: https://www.theedadvocate.org/social-constructivism-in-education/
Machin, D., Campbell, M.J., Tan, S.B. and Tan, S.H. (2018), Sample Sizes for Clinical, Laboratory and Epidemiology Studies, John Wiley & Sons.
McLeod, G.W. and Mortimer, R.J.G. (2012), “Evaluating feedback mechanisms in the school of earth and environment”, Planet, Vol. 26 No. 1, pp. 36-41.
Moore, C. and Teather, S. (2013), “Engaging students in peer review: feedback as learning”, Issues in Educational Research, Vol. 23 No. 2, pp. 196-211.
Moore, C.G., Carter, R.E., Nietert, P.J. and Stewart, P.W. (2011), “Recommendations for planning pilot studies in clinical and translational research”, Clinical and Translational Science, Vol. 4 No. 5, pp. 332-337.
Ngwenya, J. (2019), “Accounting teachers' experiences of communal feedback in rural South Africa”, South African Journal of Education, Vol. 39 No. 2, pp. 1-10.
Nicol, D. (2010), “From monologue to dialogue: improving written feedback processes in mass higher education”, Assessment and Evaluation in Higher Education, Vol. 35 No. 5, pp. 501-517.
Nicol, D. and Macfarlane‐Dick, D. (2006), “Formative assessment and self‐regulated learning: a model and seven principles of good feedback practice”, Studies in Higher Education, Vol. 31 No. 2, pp. 198-218.
Neuman, W.L. (2014), Social Research Methods: Qualitative and Quantitative Approaches, 7th ed., Pearson Education, London.
Peters, O., Körndle, H. and Narciss, S. (2018), “Effects of a formative assessment script on how vocational students generate formative feedback to a peer's or their own performance”, European Journal of Psychology of Education, Vol. 33 No. 1, pp. 117-143.
Piaget, J. (1953), The Origins of Intelligence in Children, Basic Books.
Powell, K.C. and Kalina, C.J. (2009), “Cognitive and social constructivism: developing tools for an effective classroom”, Journal of Education, Vol. 130 No. 2, pp. 241-250.
Price, M., Handley, K., Millar, J. and O'Donovan, B. (2010), “Feedback: all that effort, but what is the effect?”, Assessment and Evaluation in Higher Education, Vol. 35 No. 3, pp. 277-289.
Quansah, F., Ankoma-sey, V.R. and Aheto, S.-P.K. (2017), “Involvement in assessment decisions in Ghana”, Asian Journal of Distance Education, Vol. 12 No. 1, pp. 17-24.
Ramírez-Correa, P.E., Arenas-Gaitán, J. and Rondán-Cataluña, F.J. (2015), “Gender and acceptance of e-learning: a multi-group analysis based on a structural equation model among college students in Chile and Spain”, PLoS ONE, Vol. 10 No. 10, pp. 1-17.
Rotsaert, T., Panadero, E. and Schellens, T. (2018), “Anonymity as an instructional scaffold in peer assessment: its effects on peer feedback quality and evolution in students' perceptions about peer assessment skills”, European Journal of Psychology of Education, Vol. 35 No. 1, pp. 75-99.
Rucker, R.D. and Frass, L.R. (2017), “Migrating learning management systems in Higher Education”, Journal of Educational Technology Systems, Vol. 46 No. 2, pp. 259-277.
Ruohoniemi, M., Forni, M., Mikkonen, J. and Parpala, A. (2017), “Enhancing quality with a research-based student feedback instrument: a comparison of veterinary students' learning experiences in two culturally different European universities”, Quality in Higher Education, Vol. 23 No. 3, pp. 249-263.
Saba, F. (2002), “Evolution of research in distance education: challenges of the online distance learning environment”, Second Conference on Research in Distance and Adult Learning in Asia, The Open University of Hong Kong, Hong Kong.
Sadler, D.R. (1998), “Formative assessment: revisiting the territory”, Assessment in Education: Principles, Policy and Practice, Vol. 5 No. 1, pp. 77-84.
Sánchez, S.P., Belmonte, J.L., Cabrera, A.F. and Núñez, J.A.L. (2020), “Gamification as a methodological complement to flipped learning—an incident factor in learning improvement”, Multimodal Technologies and Interaction, Vol. 4 No. 2, p. 12.
Saunders, M., Lewis, P. and Thornhill, A. (2016), Research Methods for Business Students, 7th ed., Pearson Education, New York.
Seaman, J., Allen, I.E. and Seaman, J. (2018), Grade Increase: Tracking Distance Education in the United States, Babson Survey Research Group.
Shute, V.J. (2008), “Focus on formative feedback”, Review of Educational Research, Vol. 78 No. 1, pp. 153-189.
Sternad, D. (2015), “A challenge-feedback learning approach to teaching international business”, Journal of Teaching in International Business, Vol. 26 No. 4, pp. 241-257.
Tabachnick, B.G. and Fidell, L.S. (2014), Using Multivariate Statistics, 6th ed., Pearson Education, London.
Tagoe, M.A. and Cole, Y. (2020), “Using the Sakai Learning Management System to change the way Distance Education nursing students learn: are we getting it right?”, Open Learning: The Journal of Open, Distance and E-Learning, Vol. 35 No. 3, pp. 201-221.
Tan, F.D., Whipp, P.R., Gagné, M. and Van Quaquebeke, N. (2019), “Students' perception of teachers' two-way feedback interactions that impact learning”, Social Psychology of Education, Vol. 22 No. 1, pp. 169-187.
Terenzini, P.T. (2020), Rethinking Effective Student Learning Experiences, Inside Higher Ed, available at: https://www.insidehighered.com/advice/2020/07/29/six-characteristics-promote-student-learning-opinion (accessed 28 August 2021).
Thompson, S.K. (2012), Sampling, 3rd ed., Wiley, PA.
Trines, S. (2018), Educating the Masses: The Rise of Online Education in Sub-saharan Africa and South Asia, World Education News Reviews, available at: https://wenr.wes.org/2018/08/educating-the-masses-the-rise-of-online-education
Tsai, C.W. (2013), “An effective online teaching method: the combination of collaborative learning with initiation and self-regulation learning with feedback”, Behaviour and Information Technology, Vol. 32 No. 7, pp. 712-723.
Unwin, T., Kleessen, B., Hollow, D., Williams, J.B., Oloo, L.M., Alwala, J., Mutimucuio, I., Eduardo, F. and Muianga, X. (2010), “Digital learning management systems in Africa: myths and realities”, Open Learning, Vol. 25 No. 1, pp. 5-23.
Vygotsky, L. (1978), “Interaction between learning and development”, Readings on the Development of Children, Vol. 23 No. 3, pp. 34-41.
Woo, Y. and Reeves, T.C. (2007), “Meaningful interaction in web-based learning: a social constructivist interpretation”, Internet and Higher Education, Vol. 10 No. 1, pp. 15-25.
Ypsilandis, G.S. (2002), “Feedback in distance education”, Computer Assisted Language Learning, Vol. 15 No. 2, pp. 167-181.