Abstract
Purpose
This systematic literature review paper critically examines the effectiveness of screencast feedback compared with text feedback in promoting student learning outcomes in online higher education. This paper aims to contribute to the ongoing discussion surrounding feedback modalities and their impact on online learning environments.
Design/methodology/approach
This paper adopts a systematic review approach to synthesise and analyse existing studies investigating the use of screencast feedback in online higher education settings. A comprehensive search and selection process was employed to identify relevant literature. The selected studies were then analysed for their methodologies, findings and implications. This paper seeks to provide an overview of the current state of research, highlighting the benefits, challenges and potential impacts of screencast feedback on student learning outcomes.
Findings
The findings of this paper suggest that while there is a positive perception of screencast feedback among students and instructors, the evidence base for drawing definitive conclusions about its superiority over text feedback is still in its infancy. Students generally appreciate the personalised, supportive and engaging nature of screencast feedback, particularly within the online learning context. However, challenges such as technical barriers and potential workload implications for instructors are also noted. Further empirical research is needed to comprehensively evaluate the comparative efficacy of screencast feedback, considering factors like online engagement, digital literacy and the impact on diverse student populations.
Research limitations/implications
This review underscores the need for larger, carefully designed studies that can provide conclusive insights into the genuine potential of screencast feedback within the distinctive landscape of online learning. Through rigorous inquiry, educators can identify the optimal strategies for harnessing the advantages of screencast feedback to enhance student learning outcomes in virtual classrooms.
Practical implications
Screencast feedback emerges as a promising avenue to foster meaningful connections between instructors and learners. The review highlights that screencast feedback engenders a more dialogic interaction between lecturers and students, resulting in personalised, supportive and engaging feedback experiences.
Social implications
The systematic review conducted underscores the positive reception of screencast feedback from both students and lecturers in this context. The findings are consistent with the principles of social constructivist theory, suggesting that the interactive and personalised nature of screencast feedback facilitates a richer educational experience for students, even within the confines of virtual classrooms (Vygotsky, 1978).
Originality/value
By systematically synthesising studies of screencast feedback in online settings, this review contributes new insights that can inform educational practices and pedagogical strategies in online learning environments.
Citation
Din Eak, A. and Annamalai, N. (2024), "Enhancing online learning: a systematic literature review exploring the impact of screencast feedback on student learning outcomes", Asian Association of Open Universities Journal, Vol. 19 No. 3, pp. 247-263. https://doi.org/10.1108/AAOUJ-08-2023-0100
Publisher
Emerald Publishing Limited
Copyright © 2024, Arathai Din Eak and Nagaletchimee Annamalai
License
Published in the Asian Association of Open Universities Journal. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) license. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this license may be seen at http://creativecommons.org/licences/by/4.0/legalcode
Introduction
Feedback plays a crucial role in the learning process of students in higher education (Brown, 2019; Henderson et al., 2019; Hyland and Hyland, 2019; Maslova et al., 2022; Winstone and Carless, 2019). It is a valuable tool for illuminating students’ strengths and identifying areas requiring improvement, facilitating their overall progress and development. Scholars such as Ramaprasad (1983) and Bardach et al. (2021) underscore feedback’s informative nature in revealing comprehension gaps. Furthermore, feedback’s significance extends beyond mere information provision; it motivates and nurtures students, effectively countering the disengagement and sense of isolation that can lead to academic setbacks (Miller and Corley, 2001; Taskiran and Yazici, 2021). Noteworthy studies conducted in higher education underscore the pivotal role of feedback in enhancing learner success and satisfaction (Elola and Oskoz, 2016; Espasa et al., 2018, 2019; Espasa and Meneses, 2010; Grigoryan, 2017; Moore, 2016; Ofoha, 2012).
Within the domain of online learning, feedback assumes an exceptionally pivotal role. Online learners place immense value on feedback (Kara et al., 2019; Musingafi et al., 2015), as it becomes a cornerstone of their journey towards success and satisfaction. This prominence is a direct consequence of the absence of direct interactions with instructors and peers, a hallmark of online education. Consequently, the motivational aspect of feedback becomes even more pronounced and indispensable in this setting. In other words, providing effective feedback to online learners is an imperative requirement (Simonson et al., 2019). It enables these learners to recognise their strengths and identify areas that warrant improvement, thereby sustaining their ongoing learning trajectory.
Moreover, this process fosters a heightened sense of motivation and engagement, giving learners a strong sense of support and genuine recognition within the unique online learning setting. Despite being acknowledged as a fundamental element of the learning process and highly valued by students, feedback in higher education, particularly in online educational settings, frequently falls short, resulting in limited improvements to revisions and subsequent assignments. It is often overlooked, misinterpreted or perceived as lacking clarity, connection and specificity (Crook et al., 2012; Douglas et al., 2016; Orsmond and Merry, 2011). These shortcomings are apparent in the low ratings attributed to feedback, as demonstrated by the findings of the National Student Experience surveys (Office for Students, 2022).
Traditional versus multimodal feedback: a comparative overview
Online learning heavily relies on text-based feedback on written assignments as the primary means of interaction (Simonson, 2022; Simonson et al., 2019). This reliance stems from the absence of direct interactions with instructors and peers, making assignments a crucial medium for students to articulate and convey their ideas and arguments proficiently (Kara et al., 2019; Simonson, 2022; Simonson et al., 2019). However, acquiring this skill is not without challenges, and the absence of adequate guidance might hinder students’ academic achievements.
Despite the widespread use of text-based feedback in higher education (Bahula and Kay, 2021; McCarthy, 2015), students criticise it for its inefficiency and lack of effectiveness (Cunningham and Link, 2021; Musingafi et al., 2015; Orsmond and Merry, 2011). Comments often provide only limited guidance and lack explicit examples of constructive feedback (Killingback et al., 2019). Moreover, online education offers instructors little capacity to interpret non-verbal cues or visually evaluate students’ understanding (Uribe and Vaughan, 2017). Consequently, individuals engaged in online learning frequently lack the essential assistance and constructive criticism required to enhance their proficiency in academic writing (Fidalgo et al., 2020; Ilonga et al., 2020; Musingafi et al., 2015). Without clear examples, students may struggle to identify specific areas for improvement and may not fully grasp how to make the necessary changes.
Providing impactful and high-quality feedback to students presents a formidable challenge (Winstone and Carless, 2019), particularly in online education environments where many programmes rely on text-based feedback strategies that lack the interpersonal cues that underlie effective social interaction (Seckman, 2018). The complexity of this challenge is rooted in the need to tailor feedback to the unique requirements of individual students, ensuring that it is specific, timely and consistently delivered. Achieving this level of precision necessitates instructors possessing an in-depth comprehension of students' needs and the expertise to furnish feedback that resonates meaningfully, thereby propelling student advancement.
Given the challenges associated with text-based feedback, there is growing interest in using technology to improve feedback practices. Students have responded positively to technology-assisted feedback, which has been associated with increased productivity (Bissell, 2017; Kim, 2018; Marriott and Teoh, 2012; Swartz and Gachago, 2018). Multimodal feedback, which involves presenting information through different senses, such as visuals, audio and video, has shown promising results in enhancing students’ learning experiences.
With advancements in technology, instructors can now utilise a wide range of existing and newly developed tools to effectively provide feedback in unprecedented ways (Campbell and Feldmann, 2017; Tyrer, 2021). Multimodal feedback encompasses a diverse range of strategies and necessitates the utilisation of multiple modes of communication (Campbell and Feldmann, 2017). Instructors are presented with numerous possibilities for implementing feedback on written assignments, such as audibly documenting their commentary while reviewing a student’s written work and offering feedback that would typically be annotated in the document’s margins. Feedback can also be delivered through recorded conversations with the student present, enhancing the traditional individual writing conference.
An alternative approach involves using an instructor-generated screencast, wherein the instructor records themselves reviewing the student’s paper, incorporating comments and posing clarifying inquiries. This method effectively showcases areas for potential improvement within the specific context of the paper. Students perceive auditory and visual stimuli as they hear the instructor’s voice and watch the instructor evaluate their work. Technology thus enables multimodal feedback, significantly influencing students’ perceptions of their work and the writing process. Multimodal feedback allows students to pause, revise, replay or revisit the feedback later, aligning with the cognitive theory of multimedia learning (Mayer, 2002, 2014), which emphasises the effective use of various multimedia elements to enhance comprehension.
However, the application of multimodal feedback in online education remains largely unexplored, warranting further investigation. While alternative methods have proven effective elsewhere, their application in online education requires more studies examining technology-assisted feedback, particularly through a multimodal approach, to enhance online learners’ learning and academic writing skills. By investigating how students effectively utilise multimodal feedback, online instructors can design feedback systems that address students’ writing challenges. Moreover, clear guidelines on effective feedback strategies can empower online instructors to better guide their students’ feedback processes.
Systematic reviews on screencast feedback: identifying gaps and defining research questions
Multimodal feedback, particularly screencast video feedback, has been extensively studied in higher education, primarily focusing on students’ perspectives. Bahula and Kay (2021) conducted a systematic review exploring students’ perceptions of video-based feedback in higher education, analysing fifty-eight peer-reviewed articles published between 2009 and 2019. Their findings revealed that video feedback positively influenced students’ perceptions, engagement and academic performance, highlighting benefits such as increased clarity, personalisation and the ability to convey non-verbal cues. However, their review did not specifically focus on online learning settings and the unique challenges online learners face.
Similarly, Penn and Brown’s (2022) systematic review compared screencast feedback to text feedback in higher education, examining their impact on student learning. Their review, which included 15 studies published between 2011 and 2020, found that screencast feedback was generally more effective than text feedback in terms of student satisfaction, engagement and academic performance. The authors attributed the success of screencast feedback to its ability to provide more detailed, personalised and interactive feedback compared to traditional text-based methods.
While Penn and Brown’s (2022) review included some of the literature items covered in the current study and provided valuable insights into the effectiveness of screencast feedback, it did not specifically address the impact of multimodal feedback on online learners’ satisfaction with assignment feedback. The current systematic review aims to extend the work of Penn and Brown (2022) by focusing on this specific context and incorporating more recent studies published after 2020.
The rapid advancements in technology and the increasing adoption of online learning necessitate an updated understanding of the impact of multimodal feedback on online learners’ satisfaction. The unique challenges online learners face warrant a targeted investigation into the effectiveness of multimodal feedback in this setting. By examining factors such as clarity, personalisation and interactivity, this review provides actionable insights and recommendations for optimising the design and implementation of multimodal feedback strategies for online learners.
The core objective of this systematic review is to investigate the following research question: Does multimodal feedback through screencast feedback effectively improve online students’ learning outcomes in higher education? By focusing on online learning environments and the specific components of multimodal feedback that contribute to their effectiveness, this review aims to provide valuable insights and recommendations for educational practitioners and researchers.
Given the nascent stage of multimodal feedback research within higher education, particularly in online learning environments, this study provides a unique opportunity for researchers to attain a profound and nuanced understanding. The investigation strives to contribute significantly to developing practical pedagogical and research frameworks. The outcomes of this inquiry have the potential to equip educational practitioners with invaluable insights, enabling them to adeptly craft and implement multimodal feedback strategies that yield tangible and meaningful results for online learners. This article will discuss how multimodal feedback, which involves presenting information through different senses, such as visuals, audio and video, can enhance students’ learning outcomes in online learning.
Materials and methods
According to Popay et al. (2006), systematic reviews in the social sciences should be conducted in three distinct stages. The first stage involved identifying the research question, databases, search terms and inclusion criteria; the researchers then conducted a comprehensive literature search to identify and select relevant studies that met these criteria. The second stage involved gathering and organising data consistently across studies using a data extraction form that captured key characteristics, methodology and results. The form also allowed the researchers to evaluate the quality and rigour of each study, which informed the assessment of the strength of their respective conclusions. In stage three, the researchers synthesised the findings from the selected studies to answer the research question. By adopting this approach, the study provides practitioners and scholars in higher education with a robust and reliable evaluation of the available empirical evidence on screencast video feedback.
Databases and search strategy
The present study used secondary data analysis to examine peer-reviewed research articles published in scholarly journals between 2013 and 2023. The decision to focus on articles published from 2013 onwards was based on several factors related to the rapid advancements in screencast technology and its adoption in higher education in recent years. While screencast technology has been available since the early 2000s (Udell, 2005), it has undergone significant improvements in terms of quality, accessibility and ease of use over the past decade (Mahoney et al., 2019; Noetel et al., 2021). These advancements have been driven by factors such as the increasing availability of high-speed internet connections, the development of user-friendly screencasting tools and the growing emphasis on technology-enhanced learning in higher education (Ghilay and Ghilay, 2015; Mohorovičić and Tijan, 2011). Furthermore, the rapid shift towards online and remote learning following the onset of the coronavirus disease 2019 (COVID-19) pandemic has accelerated the adoption of screencasting and other digital feedback methods in higher education (Carrillo and Flores, 2020; Mishra et al., 2020). In light of these developments, the researchers argue that limiting the review to studies published from 2013 onwards is critical for capturing the most relevant and contemporary research on the use of screencasting for feedback in higher education. This timeframe ensures that the analysis incorporates the latest advancements and reflects the current state of the field, thereby providing a robust and up-to-date understanding of the topic.
For this study, the researchers searched four online databases known for their credibility and scholarly standards: ScienceDirect, Scopus, ERIC and Google Scholar. These databases were selected for their complementary coverage of peer-reviewed literature and relevant journals in education and technology. According to Bramer et al. (2017) and Hartling et al. (2017), multiple databases can help ensure a more thorough literature search. Each database has strengths and limitations, and combining them can help minimise the risk of missing relevant studies.
A thesaurus and scoping search were utilised to identify relevant terminology for research on screencasting, feedback and higher education. To ensure a comprehensive inclusion of studies related to screencast feedback, the search strategy intentionally excluded the terms “student learning” and “student revision”. The search string employed was “(screencast* OR “screen cast*” OR “audio-visual” OR “screen capture”) AND (student* OR “higher education” OR college OR university* OR “post-secondary” OR postsecondary OR undergrad* OR postgrad* OR tertiary) AND (feedback OR “feed back”).” The use of wildcard symbols allowed for the identification of all variations of the terms “screencast”, “student” and “university”. In addition, the term “audio-visual” was added to reflect the use of audio and visual feedback, which is common with screencasting.
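As a rough illustration of how such a string can be assembled, the sketch below builds the query from its three concept groups. It is a generic illustration only: each database (Scopus, ERIC, ScienceDirect, Google Scholar) applies its own field codes and truncation syntax, and the helper function here is ours, not part of any database interface.

```python
# Sketch: assembling the Boolean search string from its three concept
# groups (screencasting, population, feedback). Generic syntax only;
# individual databases apply their own field codes and truncation rules.

screencast_terms = ['screencast*', '"screen cast*"', '"audio-visual"', '"screen capture"']
population_terms = ['student*', '"higher education"', 'college', 'university*',
                    '"post-secondary"', 'postsecondary', 'undergrad*', 'postgrad*', 'tertiary']
feedback_terms = ['feedback', '"feed back"']

def or_group(terms):
    """Join synonyms with OR and wrap the group in parentheses."""
    return "(" + " OR ".join(terms) + ")"

search_string = " AND ".join(
    or_group(group) for group in (screencast_terms, population_terms, feedback_terms)
)
print(search_string)
# (screencast* OR "screen cast*" OR "audio-visual" OR "screen capture") AND (student* OR ...) AND (feedback OR "feed back")
```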
Inclusion criteria
This review’s primary scope encompassed examining multimodal feedback’s impact on students’ learning outcomes within the context of writing and assignment coursework. The evaluation embraced direct measures, including criteria such as quality, clarity, coherence, organisation, grammar, spelling, punctuation and other pertinent aspects of writing. Additionally, indirect measures, such as student reflections and factors influencing the outcomes, like feedback delivery methods, were considered. Notably, only studies that offered feedback on written assessments, both formative and summative, in higher education settings via online platforms were considered. This selective criterion ensured consistency across the range of studies under scrutiny.
The search was confined to academic articles published in peer-reviewed journals, excluding unpublished dissertations and conference proceedings, as these are not subject to peer review. All studies published in English were included, irrespective of geographic origin. The search was completed in July 2023 to ensure an up-to-date and comprehensive literature review on screencast video feedback in higher education.
Selection process
A thorough search of common educational databases yielded a total of 457 articles. These articles underwent a meticulous review process, with two independent reviewers assessing the titles and abstracts to determine eligibility for full-text analysis. Figure 1 illustrates the literature search process, showing the screening and selection stages. Any discrepancies between the reviewers were resolved through consultation with a third reviewer. Of the initially screened articles, 376 were excluded for lacking a multimodal intervention, focusing on group feedback rather than individual feedback, not being about online learning contexts, or lacking emphasis on writing.
Following this initial screening, the remaining 81 articles were further categorised based on predefined inclusion criteria. The literature search results from the two reviewers are detailed in Table 1, which shows the categorisation of articles by relevance, uncertainty and irrelevance, along with post-discussion outcomes. Among these, 56 were subsequently excluded for not meeting the specified criteria. This led to a final selection of 25 articles for an in-depth full-text review. After rigorous scrutiny, nine articles were chosen for comprehensive data analysis and inclusion in the review. The selection process adhered to a high standard of rigour: interrater reliability was high, with a Cohen’s kappa of 0.91 and a percentage agreement of 95.15%. By employing such stringent selection criteria, the present study ensured that the chosen articles were highly relevant and aligned with the predetermined criteria for inclusion. The outcomes of this review yield valuable insights into the investigated topic and may carry implications for future research in this domain.
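For readers who want to see how these reliability figures are computed, the sketch below implements percentage agreement and Cohen’s kappa. The per-article ratings are not reported in the paper, so the example labels below are hypothetical; only the formulas correspond to the measures cited above.

```python
# Sketch of the interrater-reliability calculation. The toy ratings are
# hypothetical; only the formulas mirror the measures reported above.
from collections import Counter

def percent_agreement(r1, r2):
    """Proportion of items on which both reviewers gave the same rating."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Agreement corrected for chance, given each reviewer's base rates."""
    n = len(r1)
    p_o = percent_agreement(r1, r2)              # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    # Expected agreement if both reviewers rated independently
    p_e = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings: "R"elevant, "U"ncertain, "I"rrelevant
reviewer1 = ["R", "R", "I", "R", "I", "U", "R", "I"]
reviewer2 = ["R", "R", "I", "R", "I", "R", "R", "I"]
print(f"agreement = {percent_agreement(reviewer1, reviewer2):.2%}, "
      f"kappa = {cohens_kappa(reviewer1, reviewer2):.2f}")
```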
Data analysis
The initial phase of the review involved analysing all selected articles using key descriptors, encompassing factors such as publication year, country of origin, academic level, academic discipline, assessment methods, media modalities employed and feedback length. Subsequently, the review identified emerging themes by meticulously scrutinising the results and discussion sections of the articles. A convenience sample of five articles was open-coded first to ensure consistency and alignment with the emerging themes, establishing a robust foundation for the thematic framework. The constant comparative method (Strauss and Corbin, 1998) was then methodically applied to code the remaining articles, further strengthening the credibility of the thematic categorisation. The review centres on three principal themes: enhanced clarity and understanding, heightened engagement and motivation, and precision-oriented skill development. By drawing insights from the analysed articles, the review sheds light on the efficacy of multimodal feedback in enriching students’ learning outcomes and elevating their proficiency in writing skills.
Study characteristics
For this review, studies published between 2013 and 2023 were selected. In total, five studies (n = 5) were conducted in the United States of America, followed by Turkey (n = 1), the United Kingdom (n = 1), Australia (n = 1) and Ecuador (n = 1). Figure 2 illustrates the population of the studies selected for analysis, highlighting the geographical distribution and participant numbers. Three studies were conducted in a blended setting, and the other six were conducted in online learning settings. Students ranging from the first- to fourth-year undergraduate levels were included in the studies, and two studies included students at the postgraduate level. Sample sizes ranged from 20 to 180 participants, and a diverse range of subjects was covered (see Table 2).
An overview of study interventions
Table 3 reveals distinct comparative patterns among the studies: four studies (n = 4) compared text-only feedback with feedback comprising text and screencast elements. Additionally, two studies compared screencast feedback with text plus highlighting (n = 2). One study (n = 1) compared written feedback (WF) with a combination of screencast and WF, while another (n = 1) compared text and video feedback. A single study (n = 1) used screencast feedback alone as its intervention. Screencast feedback was most often delivered using the Jing software programme, which limits recordings to five minutes (n = 4).
Six studies were analysed in which the instructors provided feedback to participants (Borup et al., 2015; Cavaleri et al., 2019; Cheng and Li, 2020; Huachizaca and Yambay-Armijos, 2023; Lowenthal et al., 2022; Yiğit and Seferoğlu, 2021). One study did not report the researcher’s involvement (Harper et al., 2018), while the remaining two did not involve the researcher in providing feedback (Anson et al., 2016; Grigoryan, 2017).
Results and conclusion
Table 4 illustrates that the primary outcomes of the studies focused on factors that directly or indirectly impacted student learning. Students’ performance on assignments, essay revisions and grades were considered explicit indicators of student learning. In contrast, student satisfaction and motivation were considered implicit indicators of student learning and factors affecting student learning.
Direct evidence of student learning
The impact of feedback modality on student learning in academic writing skills, essay revisions and academic grades was investigated in six studies (Cavaleri et al., 2019; Cheng and Li, 2020; Grigoryan, 2017; Huachizaca and Yambay-Armijos, 2023; Lowenthal et al., 2022; Yiğit and Seferoğlu, 2021). Grigoryan (2017) observed no significant distinctions in revision quantity or type between text feedback and combined text-screencast feedback among fifty first-year writing composition students. Conversely, Cavaleri et al. (2019) noted that screencast videos garnered more content-related comments, while Microsoft Word comments predominantly concerned grammar and language usage. Positive improvements were observed in 84% of cases where screencast feedback and minimal marginal comments were used; according to a logistic regression analysis, this corresponded to 1.59 times higher odds of successful revisions. Interestingly, students with lower English proficiency improved more than their proficient counterparts.
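For readers unfamiliar with odds ratios, the standard interpretation can be made explicit. Assuming revision success was modelled as a binary outcome with feedback modality as a binary predictor (an illustrative specification on our part, not one reported by Cavaleri et al.), logistic regression estimates

$$\log\frac{p}{1-p} = \beta_0 + \beta_1 x, \qquad \mathrm{OR} = e^{\beta_1} \approx 1.59,$$

where $p$ is the probability of a successful revision and $x = 1$ indicates screencast feedback. An odds ratio of 1.59 means the odds $p/(1-p)$ are multiplied by 1.59: baseline odds of 1:1 (a 50% success probability), for example, would rise to 1.59:1, roughly a 61% probability.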
Yiğit and Seferoğlu (2021) demonstrated the efficacy of video feedback in aiding 43 undergraduates to revise assignments and incorporate feedback into subsequent drafts. Similarly, Huachizaca and Yambay-Armijos (2023) affirmed the advantages of audio-visual feedback (AVF) and WF for enhancing writing skills among higher education students in virtual classrooms. Their treatment, a combination of AVF and WF, effectively improved mechanics, including punctuation, capitalisation and spelling. Furthermore, the treatment group exhibited enhanced text organisation, particularly in the use of transition words within paragraphs. Consequently, the treatment positively influenced online writing skills. Lowenthal et al. (2022) similarly reported that students were satisfied with video feedback and associated it with heightened perceived learning. In summary, these studies collectively emphasise the significance of feedback modality in advancing student learning outcomes in writing skills, revisions and academic achievement.
When evaluating specific writing aspects, Grigoryan (2017) conducted a comprehensive analysis encompassing overall essay quality, task completion, content relevance, organisation, purpose, intended audience, mechanics and paragraph structure. This investigation yielded no statistically significant disparities in overall essay grades or other writing-related concerns; however, students who received combined screencast and text feedback on their initial drafts achieved notably higher scores in the categories of purpose and audience (Grigoryan, 2017). In another study, Borup et al. (2015) observed that feedback delivered through video communication tended to adopt a more conversational, supportive and elaborative tone than could practically be achieved with text-based communication. This observation was substantiated by their analysis of feedback comments, indicating that video feedback exhibited significantly higher word counts than textual comments (p < 0.01). Furthermore, video feedback featured a greater frequency of praise, support and efforts to build rapport with the recipient than text-based feedback. However, both students and instructors in their study concurred that while video feedback had its merits, textual feedback was more convenient and efficient.
Indirect evidence of student learning
Screencast feedback has been a subject of exploration in five distinct research studies (Anson et al., 2016; Cheng and Li, 2020; Harper et al., 2018; Lowenthal et al., 2022; Yiğit and Seferoğlu, 2021), where its impact on student perception was investigated. Notably, Yiğit and Seferoğlu (2021) highlighted the superiority of video feedback over text-based feedback, particularly within online learning environments. A prevailing sentiment among students was their favourable perception of screencast feedback, characterising it as more personalised, supportive, detailed, helpful and comprehensible than textual feedback. The personalised nature of screencast feedback held particular significance, imbuing students with a sense of direct engagement with their instructors. Consequently, this dynamic fostered increased interaction, allowing for follow-up inquiries arising from the feedback.
Moreover, students lauded screencast feedback for its practicality in tracking the instructor’s cursor movements and observing real-time corrections on the screen, bolstering its usefulness and clarity. Multiple researchers have underscored the pedagogical value of screencast feedback. Lowenthal et al. (2022), Anson et al. (2016), Harper et al. (2018) and Yiğit and Seferoğlu (2021) concurred that screencast feedback allows more comprehensive engagement, incorporating additional words and examples that serve as effective teaching aids.
However, this approach was not without its limitations. Some students found screencasts time-consuming, citing concerns such as having to download video files, while others felt the feedback videos tended to be overly long (Cheng and Li, 2020). Among studies delving into student preferences, Cheng and Li (2020) and Lowenthal et al. (2022) demonstrated a distinct preference for screencast feedback. However, it is worth noting that some students, as reported by Borup et al. (2015), still leaned towards text-based feedback.
Factors influencing student learning outcomes
In recent years, education has witnessed extensive research centred around screencast feedback. The focus of this research has been two-fold: understanding how lecturers perceive screencast feedback and delving into the language used in tutor feedback. Notably, Harper et al. (2018) revealed that screencast feedback empowers lecturers to offer more thorough insights while engaging students in a more impactful and less overwhelming manner than text-based feedback. This feedback mode has even exhibited advantages for students and lecturers with learning difficulties, such as dyslexia, owing to its auditory or verbal nature (Harper et al., 2018).
Exploring the language employed in tutor feedback, Anson et al. (2016), Cavaleri et al. (2019), Harper et al. (2018) and Yiğit and Seferoğlu (2021) conducted studies that highlighted the personalised and supportive nature of screencast feedback, rendering it more comprehensible than its textual counterpart (Harper et al., 2018; Yiğit and Seferoğlu, 2021). Additionally, the positive repercussions of screencast feedback extended to heightened student motivation and improved social presence, ultimately fostering enhanced learning outcomes (Lowenthal et al., 2022).
While numerous benefits have emerged from adopting screencast feedback, lecturers have raised some concerns, including initial apprehension and the time required to master this new method (Harper et al., 2018). Nonetheless, implementing screencast feedback has not negatively affected student learning. Diverse studies have also examined the language effects of feedback modality, revealing that screencast feedback stimulates student self-sufficiency and offers in-depth insights into content, structure and organisation (Anson et al., 2016; Cavaleri et al., 2019; Harper et al., 2018). Similarly, Cheng and Li (2020) affirmed the personalised and supportive attributes of screencast feedback, reinforcing its potential to foster a more robust lecturer-student relationship. Furthermore, the efficacy of screencast feedback extended to students and educators facing learning challenges, showcasing its effectiveness in addressing diverse learning needs, such as dyslexia.
Taking a step further, Harper et al. (2018) conducted a study that evaluated the synergy of audio and visual feedback in augmenting students' exposure to spoken language. This approach blended textual and screencast feedback, effectively enhancing students’ interactions with course material through personalised and supportive guidance. Similarly, Huachizaca and Yambay-Armijos (2023) explored the potency of audio-visual and WF in refining students’ writing skills. Their findings indicated that a combination of AVF + WF significantly influenced mechanics and the usage of transition words compared to text-based feedback.
Further enriching our understanding, Cavaleri et al. (2019) dissected screencast feedback by analysing feedback comments and student interviews. Their investigation unveiled that verbal explanations and the ability to perceive the lecturer’s sentiments could heighten student engagement beyond what text-based feedback offered. Additionally, students highlighted the engaging and supportive attributes of screencast feedback compared with text-based alternatives. In essence, the evolving landscape of educational feedback is profoundly shaped by the insights offered by these studies.
Discussion
The surge of interest in screencast feedback within higher education aligns notably with the paradigm shift towards online learning over the past decade. This discussion revolves around the effectiveness of screencast feedback in an online learning context, addressing its benefits, challenges and potential impact on student learning outcomes.
In online education, where face-to-face interactions are limited, the significance of adequate feedback mechanisms is amplified. The systematic review underscores the positive reception of screencast feedback from students and lecturers in this context. The findings are consistent with the principles of social constructivist theory, suggesting that the interactive and personalised nature of screencast feedback facilitates a richer educational experience for students, even within the confines of virtual classrooms (Vygotsky, 1978).
Through the lens of online learning, screencast feedback emerges as a promising avenue to foster meaningful connections between instructors and learners. The review highlights that screencast feedback engenders a more dialogic interaction between lecturers and students, resulting in personalised, supportive and engaging feedback experiences (Anson et al., 2016; Cavaleri et al., 2019; Harper et al., 2018). The very attributes students value, such as the easy-to-understand nature and detailed explanations, become crucial in an online setting where the absence of face-to-face cues might exacerbate the potential for misinterpretation.
However, the online context brings to the forefront certain challenges unique to screencast feedback. While students report the advantages of personalised and supportive feedback, some still prefer text feedback due to potential technical barriers and the need for effective listening and reading skills (Borup et al., 2015). These issues may become accentuated in online learning environments where technology literacy levels vary among students. Moreover, instructors express concerns about the additional workload and anxiety of producing compelling videos (Harper et al., 2018). These issues can be particularly pronounced in the already demanding landscape of online education.
The dynamics of screencast feedback’s effectiveness in online learning extend beyond traditional boundaries. The interactive nature of screencast feedback might mitigate the perceived detachment of online education, fostering a sense of instructor–student relationship crucial for student engagement and motivation. Moreover, the flexibility of asynchronous access to screencast feedback can accommodate diverse learning schedules, enhancing the convenience and utility of the feedback process for online students.
Nevertheless, the need for empirical research specifically tailored to the online learning environment is evident. While the review’s findings allude to the potential benefits of screencast feedback in enhancing student learning outcomes, the online context introduces variables that warrant further investigation. Factors such as online engagement, digital literacy and the impact of screencast feedback on different learning styles in virtual classrooms necessitate rigorous examination.
Screencast feedback’s role in enhancing student learning outcomes within online education is a topic of increasing significance. The review reflects a positive outlook on screencast feedback’s personalised, engaging and impactful attributes, especially in the virtual classroom. However, the challenges of technical barriers, varying student preferences and potential workload implications require further exploration. As online education evolves, understanding the intricate interplay of screencast feedback’s affordances and limitations will be pivotal in optimising its contribution to student learning in the digital realm.
Conclusion
In conclusion, the rise of screencast feedback within higher education is closely intertwined with the ongoing shift towards online learning, reflecting its growing relevance in this digital era. The systematic review reveals that screencast feedback offers substantial benefits, particularly in fostering personalised, interactive and engaging feedback experiences that resonate well with students and instructors. The alignment of screencast feedback with social constructivist principles highlights its potential to create a richer educational experience, even without traditional face-to-face interactions.
However, the transition to online education also brings unique challenges that must be addressed. While screencast feedback is generally well-received for its clarity and detailed explanations, issues such as technological barriers, varying levels of digital literacy and increased workload for instructors present significant obstacles. These challenges underscore the need for further empirical research to explore the nuanced dynamics of screencast feedback in online learning environments. Such research should focus on factors like student engagement, the effectiveness of feedback across different learning styles and the impact of digital literacy on the feedback process.
As online education expands, the role of screencast feedback in enhancing student learning outcomes becomes increasingly significant. By understanding and addressing its affordances and limitations, educators can optimise using screencast feedback to improve student engagement, comprehension and achievement in virtual classrooms. This will be crucial in ensuring that screencast feedback complements and enhances the evolving landscape of higher education in the digital age.
Tables
Table 1: Literature search results from two reviewers
Reviewer 1/Reviewer 2 | Relevant | Uncertain | Irrelevant |
---|---|---|---|
Relevant | A (n = 25) | B (n = 0) | D (n = 2) |
Uncertain | B (n = 0) | C (n = 0) | E (n = 0) |
Irrelevant | D (n = 2) | E (n = 0) | F (n = 16) |
Category | No. of articles | No. of articles post-discussion | Remarks |
---|---|---|---|
A | 25 | 9 | Accepted for review |
B | 0 | 0 | – |
C | 0 | 0 | – |
D | 2 | 0 | Re-categorised after discussion |
E | 2 | 0 | Re-categorised after discussion |
F | 16 | 16 | Excluded from review |
Source(s): Adapted from Chang and Kabilan (2022)
Table 2: Population demographics in studies selected for analysis
Author(s) | Location | Sample size | Level | Subject | Mode of teaching |
---|---|---|---|---|---|
Cheng and Li (2020) | USA | 54 students | Postgraduate | English Writing | Online |
Harper et al. (2018) | UK | 54 students, 9 lecturers | Undergraduate | Spanish and German | Online |
Yiğit and Seferoğlu (2021) | Turkey | 43 students | Undergraduate | Computer Education and Instructional Technologies | Online |
Cavaleri et al. (2019) | Australia | 20 students | Undergraduate | Not reported | Blended |
Anson et al. (2016) | USA | 141 students | Undergraduate | Sciences, Social Sciences and Humanities | Blended |
Grigoryan (2017) | USA | 50 students | Undergraduate | Composition courses | Online |
Huachizaca and Yambay-Armijos (2023) | Ecuador | 129 students | Undergraduate | English Writing | Online |
Borup et al. (2015) | USA | 180 students | Undergraduate | Educational Technology | Blended |
Lowenthal et al. (2022) | USA | 84 students | Postgraduate | Educational Technology | Online |
Source(s): Adapted from Penn and Brown (2022)
Table 3: The methods used by the selected studies
Author(s) | Feedback interventions | Screencast programme |
---|---|---|
Cheng and Li (2020) | Text vs Screencast | Screencast-O-Matic |
Harper et al. (2018) | Text vs (Text + Screencast) | Jing |
Yiğit and Seferoğlu (2021) | Text vs Screencast | Not reported |
Cavaleri et al. (2019) | Text vs (Screencast + Text) | Jing |
Anson et al. (2016) | Text vs (Screencast + Text) | Jing |
Grigoryan (2017) | Text vs (Text + Screencast) | Jing |
Huachizaca and Yambay-Armijos (2023) | Written vs (Screencast + Written) | Google Meet and Screencast-O-Matic |
Borup et al. (2015) | Text vs Video | Canva Video |
Lowenthal et al. (2022) | Screencast | Camtasia |
Source(s): Adapted from Penn and Brown (2022)
Table 4: Measures of results and tools employed by studies chosen for analysis
Author(s) | Primary Outcome/s | Instrument/s | Assignment/s |
---|---|---|---|
Cheng and Li (2020) | Student perceptions, effective feedback practices, revisions | Open-ended questions | Essays |
Harper et al. (2018) | Student and lecturer perceptions, feedback quality/quantity | Questionnaires, lecturer feedback reviews | Essays |
Yiğit and Seferoğlu (2021) | Effectiveness of feedback use | Feedback use | Academic assignments |
Cavaleri et al. (2019) | Student engagement, feedback quality/quantity | Revisions made, interviews, lecturer feedback reviews | Various academic assignments |
Anson et al. (2016) | Student perceptions, mediation of face, construction of identities | Questionnaires, interviews | 3–5 page essays |
Grigoryan (2017) | Essay revisions and writing skill | Revisions made | 1,000–1,300 word essays |
Huachizaca and Yambay-Armijos (2023) | Effectiveness of feedback practices, revisions | Pre- and post-interventions | Essays |
Borup et al. (2015) | Student perceptions | Open-ended surveys and interviews | Portfolio-based assignment |
Lowenthal et al. (2022) | Student perceptions | Surveys | Academic assignment |
Source(s): Adapted from Penn and Brown (2022)
References
Anson, C.M., Dannels, D.P., Laboy, J.I. and Carneiro, L. (2016), “Students' perceptions of oral screencast responses to their writing: exploring digitally mediated identities”, Journal of Business and Technical Communication, Vol. 30 No. 3, pp. 378-411, doi: 10.1177/1050651916636424.
Bahula, T. and Kay, R. (2021), “Exploring student perceptions of video-based feedback in higher education: a systematic review of the literature”, Journal of Higher Education Theory and Practice, Vol. 21 No. 4, doi: 10.33423/jhetp.v21i4.4224.
Bardach, L., Klassen, R.M., Durksen, T.L., Rushby, J.V., Bostwick, K.C.P. and Sheridan, L. (2021), “The power of feedback and reflection: testing an online scenario-based learning intervention for student teachers”, Computers and Education, Vol. 169, 104194, doi: 10.1016/j.compedu.2021.104194.
Bissell, L. (2017), “Screen-casting as a technology-enhanced feedback mode”, Journal of Perspectives in Applied Academic Practice, Vol. 5 No. 1, doi: 10.14297/jpaap.v5i1.223.
Borup, J., West, R.E. and Thomas, R. (2015), “The impact of text versus video communication on instructor feedback in blended courses”, Educational Technology Research and Development, Vol. 63 No. 2, pp. 161-184, doi: 10.1007/s11423-015-9367-8.
Bramer, W.M., Rethlefsen, M.L., Kleijnen, J. and Franco, O.H. (2017), “Optimal database combinations for literature searches in systematic reviews: a prospective exploratory study”, Systematic Reviews, Vol. 6, pp. 1-12, doi: 10.1186/s13643-017-0644-y.
Brown, S. (2019), “Using assessment and feedback to empower students and enhance their learning”, in Innovative Assessment in Higher Education. A Handbook for Academic Practitioners, pp. 50-63, doi: 10.4324/9780429506857-5.
Campbell, B.S. and Feldmann, A. (2017), “The power of multimodal feedback”, Journal of Curriculum, Teaching, Learning and Leadership in Education, Vol. 2 No. 2, p. 1.
Carrillo, C. and Flores, M.A. (2020), “COVID-19 and teacher education: a literature review of online teaching and learning practices”, European Journal of Teacher Education, Vol. 43 No. 4, pp. 466-487, doi: 10.1080/02619768.2020.1821184.
Cavaleri, M., Kawaguchi, S., Di Biase, B. and Power, C. (2019), “How recorded audio-visual feedback can improve academic language support”, Journal of University Teaching and Learning Practice, Vol. 16 No. 4, p. 6, doi: 10.53761/1.16.4.6.
Chang, S.L. and Kabilan, M.K. (2022), “Using social media as e-Portfolios to support learning in higher education: a literature analysis”, Journal of Computing in Higher Education, Vol. 36, pp. 1-28, doi: 10.1007/s12528-022-09344-z.
Cheng, D. and Li, M. (2020), “Screencast video feedback in online TESOL classes”, Computers and Composition, Vol. 58, 102612, doi: 10.1016/j.compcom.2020.102612.
Crook, A., Mauchline, A., Maw, S., Lawson, C., Drinkwater, R., Lundqvist, K., Orsmond, P., Gomez, S. and Park, J. (2012), “The use of video technology for providing feedback to students: can it enhance the feedback experience for staff and students?”, Computers and Education, Vol. 58 No. 1, pp. 386-396, doi: 10.1016/j.compedu.2011.08.025.
Cunningham, K.J. and Link, S. (2021), “Video and text feedback on ESL writing: understanding attitude and negotiating relationships”, Journal of Second Language Writing, Vol. 52, 100797, doi: 10.1016/j.jslw.2021.100797.
Douglas, T., Salter, S., Iglesias, M., Dowlman, M. and Eri, R. (2016), “The feedback process: perspectives of first and second year undergraduate students in the disciplines of education, health science and nursing”, Journal of University Teaching and Learning Practice, Vol. 13 No. 1, p. 3, doi: 10.53761/1.13.1.3.
Elola, I. and Oskoz, A. (2016), “Supporting second language writing using multimodal feedback”, Foreign Language Annals, Vol. 49 No. 1, pp. 58-74, doi: 10.1111/flan.12183.
Espasa, A. and Meneses, J. (2010), “Analysing feedback processes in an online teaching and learning environment: an exploratory study”, Higher Education, Vol. 59 No. 3, pp. 277-292, doi: 10.1007/s10734-009-9247-4.
Espasa, A., Guasch, T., Mayordomo, R., Martínez-Melo, M. and Carless, D. (2018), “A Dialogic Feedback Index measuring key aspects of feedback processes in online learning environments”, Higher Education Research and Development, Vol. 37 No. 3, pp. 499-513, doi: 10.1080/07294360.2018.1430125.
Espasa, A., Mayordomo, R.M., Guasch, T. and Martinez-Melo, M. (2019), “Does the type of feedback channel used in online learning environments matter? Students' perceptions and impact on learning”, Active Learning in Higher Education, Vol. 23 No. 1, doi: 10.1177/1469787419891307.
Fidalgo, P., Thormann, J., Kulyk, O. and Lencastre, J.A. (2020), “Students' perceptions on distance education: a multinational study”, International Journal of Educational Technology in Higher Education, Vol. 17, pp. 1-18, doi: 10.1186/s41239-020-00194-2.
Ghilay, Y. and Ghilay, R. (2015), “TBAL: technology-based active learning in higher education”, Journal of Education and Learning, Vol. 4 No. 4, pp. 10-18, doi: 10.5539/jel.v4n4p10.
Grigoryan, A. (2017), “Feedback 2.0 in online writing instruction: combining audio-visual and text-based commentary to enhance student revision and writing competency”, Journal of Computing in Higher Education, Vol. 29 No. 3, pp. 451-476, doi: 10.1007/s12528-017-9153-7.
Harper, F., Green, H. and Fernandez-Toro, M. (2018), “Using screencasts in the teaching of modern languages: investigating the use of Jing® in feedback on written assignments”, Language Learning Journal, Vol. 46 No. 3, pp. 277-292, doi: 10.1080/09571736.2015.1061586.
Hartling, L., Featherstone, R., Nuspl, M., Shave, K., Dryden, D.M. and Vandermeer, B. (2017), “Grey literature in systematic reviews: a cross-sectional study of the contribution of non-English reports, unpublished studies and dissertations to the results of meta-analyses in child-relevant reviews”, BMC Medical Research Methodology, Vol. 17, pp. 1-11, doi: 10.1186/s12874-017-0347-z.
Henderson, M., Ajjawi, R., Boud, D. and Molloy, E. (2019), “Why focus on feedback impact?”, in Henderson, M., Ajjawi, R., Boud, D. and Molloy, E. (Eds), The Impact of Feedback in Higher Education: Improving Assessment Outcomes for Learners, Springer International Publishing, pp. 3-14, doi: 10.1007/978-3-030-25112-3_1.
Huachizaca, V. and Yambay-Armijos, K. (2023), “Difference-in-difference estimation in combined feedback on writing skill: a quasi-experimental study”, Journal of Applied Research in Higher Education, Vol. 15 No. 5, pp. 1213-1235, doi: 10.1108/JARHE-01-2022-0017.
Hyland, K. and Hyland, F. (2019), Feedback in Second Language Writing: Contexts and Issues, Cambridge University Press, doi: 10.1017/9781108235652.
Ilonga, A., Ashipala, D.O. and Tomas, N. (2020), “Challenges experienced by students studying through open and distance learning at a higher education institution in Namibia: implications for strategic planning”, International Journal of Higher Education, Vol. 9 No. 4, pp. 116-127, doi: 10.5430/ijhe.v9n4p116.
Kara, M., Erdogdu, F., Kokoç, M. and Cagiltay, K. (2019), “Challenges faced by adult learners in online distance education: a literature review”, Open Praxis, Vol. 11 No. 1, pp. 5-22, doi: 10.5944/openpraxis.11.1.8.
Killingback, C., Ahmed, O. and Williams, J. (2019), “‘It was all in your voice’ -tertiary student perceptions of alternative feedback modes (audio, video, podcast, and screencast): a qualitative literature review”, Nurse Education Today, Vol. 72, pp. 32-39, doi: 10.1016/j.nedt.2018.10.002.
Kim, V. (2018), “Technology-enhanced feedback on student writing in the English-medium instruction classroom”, English Teaching, Vol. 73 No. 4, pp. 29-53, doi: 10.15858/engtea.73.4.201812.29.
Lowenthal, P.R., Fiock, H.S., Shreaves, D.L. and Belt, E.S. (2022), “Investigating students' perceptions of screencasting style of video feedback in online courses”, TechTrends, Vol. 66 No. 2, pp. 265-275, doi: 10.1007/s11528-021-00665-x.
Mahoney, P., Macfarlane, S. and Ajjawi, R. (2019), “A qualitative synthesis of video feedback in higher education”, Teaching in Higher Education, Vol. 24 No. 2, pp. 157-179, doi: 10.1080/13562517.2018.1467359.
Marriott, P. and Teoh, L.K. (2012), “Using screencasts to enhance assessment feedback: students' perceptions and preferences”, Accounting Education, Vol. 21 No. 6, pp. 583-598, doi: 10.1080/09639284.2012.725637.
Maslova, A., Koval, O., Kotliarova, V., Tkach, M. and Nadolska, Y. (2022), “On the way to successful learning and teaching: constructive feedback”, Journal of Higher Education Theory and Practice, Vol. 22 No. 6.
Mayer, R.E. (2002), “Cognitive theory and the design of multimedia instruction: an example of the two‐way street between cognition and instruction”, New Directions for Teaching and Learning, Vol. 2002 No. 89, pp. 55-71, doi: 10.1002/tl.47.
Mayer, R.E. (2014), “Multimedia instruction”, in Handbook of Research on Educational Communications and Technology, Springer, pp. 385-399, doi: 10.1007/978-1-4614-3185-5_31.
McCarthy, J. (2015), “Evaluating written, audio and video feedback in higher education summative assessment tasks”, Issues in Educational Research, Vol. 25 No. 2, pp. 153-169.
Miller, M.D. and Corley, K. (2001), “The effect of e-mail messages on student participation in the asynchronous on-line course: a research note”, Online Journal of Distance Learning Administration, Vol. 4 No. 3, pp. 17-24.
Mishra, L., Gupta, T. and Shree, A. (2020), “Online teaching-learning in higher education during lockdown period of COVID-19 pandemic”, International Journal Of Educational Research Open, Vol. 1, 100012, doi: 10.1016/j.ijedro.2020.100012.
Mohorovičić, S. and Tijan, E. (2011), “Using screencasts in computer programming courses”, Proceedings of the 22nd EAEEIE Annual Conference, Maribor.
Moore, R.L. (2016), “Interacting at a distance: creating engagement in online learning environments”, in Handbook of Research on Strategic Management of Interaction, Presence, and Participation in Online Courses, IGI Global, pp. 401-425, doi: 10.4018/978-1-4666-9582-5.ch016.
Musingafi, M.C., Mapuranga, B., Chiwanza, K. and Zebron, S. (2015), “Challenges for open and distance learning (ODL) students: experiences from students of the Zimbabwe Open University”, Journal of Education and Practice, Vol. 6 No. 18, pp. 59-66.
Noetel, M., Griffith, S., Delaney, O., Sanders, T., Parker, P., del Pozo Cruz, B. and Lonsdale, C. (2021), “Video improves learning in higher education: a systematic review”, Review of Educational Research, Vol. 91 No. 2, pp. 204-236, doi: 10.3102/0034654321990713.
Office for Students (2022), “National student survey - NSS”, available at: https://www.officeforstudents.org.uk/advice-and-guidance/student-information-and-data/national-student-survey-nss/ (accessed 15 May 2024).
Ofoha, D. (2012), “Assuring quality evaluation practices in open and distance learning system: the case of National Open University of Nigeria”, Africa Education Review, Vol. 9 No. 2, pp. 230-248, doi: 10.1080/18146627.2012.722394.
Orsmond, P. and Merry, S. (2011), “Feedback alignment: effective and ineffective links between tutors' and students' understanding of coursework feedback”, Assessment and Evaluation in Higher Education, Vol. 36 No. 2, pp. 125-136, doi: 10.1080/02602930903201651.
Penn, S. and Brown, N. (2022), “Is screencast feedback better than text feedback for student learning in higher education? A systematic review”, Ubiquitous Learning: An International Journal, Vol. 15 No. 2, pp. 1-18, doi: 10.18848/1835-9795/CGP/v15i02/1-18.
Popay, J., Roberts, H., Sowden, A., Petticrew, M., Arai, L., Rodgers, M., Britten, N., Roen, K. and Duffy, S. (2006), “Guidance on the conduct of narrative synthesis in systematic reviews”, A Product from the ESRC Methods Programme, Version 1, p. b92.
Ramaprasad, A. (1983), “On the definition of feedback”, Behavioral Science, Vol. 28 No. 1, pp. 4-13, doi: 10.1002/bs.3830280103.
Seckman, C. (2018), “Impact of interactive video communication versus text-based feedback on teaching, social, and cognitive presence in online learning communities”, Nurse Educator, Vol. 43 No. 1, pp. 18-22, doi: 10.1097/NNE.0000000000000448.
Simonson, M. (2022), Distance Learning, Vol. 19 No. 3, Information Age Publishing.
Simonson, M., Zvacek, S.M. and Smaldino, S. (2019), Teaching and Learning at a Distance: Foundations of Distance Education, 7th ed., Information Age Publishing, Charlotte, NC.
Strauss, A. and Corbin, J. (1998), Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory, Sage Publications, Thousand Oaks, CA.
Swartz, B. and Gachago, D. (2018), “Students' perceptions of screencast feedback in postgraduate research supervision”.
Taskiran, A. and Yazici, M. (2021), “Formative feedback in online distance language learning: boosting motivation with automated feedback”, in Motivation, Volition, and Engagement in Online Distance Learning, IGI Global, pp. 100-125, doi: 10.4018/978-1-7998-6557-5.ch006.
Tyrer, C. (2021), “The voice, text, and the visual as semiotic companions: an analysis of the materiality and meaning potential of multimodal screen feedback”, Education and Information Technologies, Vol. 26 No. 4, pp. 1-20, doi: 10.1007/s10639-021-10545-2.
Udell, J. (2005), What is Screencasting, O'Reilly Media, available at: http://digitalmedia.oreilly.com/pub/a/oreilly/digitalmedia/2005/11/16/what-is-screencasting.html?page=1 (accessed 15 May 2024).
Uribe, S.N. and Vaughan, M. (2017), “Facilitating student learning in distance education: a case study on the development and implementation of a multifaceted feedback system”, Distance Education, Vol. 38 No. 3, pp. 288-301, doi: 10.1080/01587919.2017.1369005.
Vygotsky, L. (1978), Mind in Society: Development of Higher Psychological Processes, Harvard University Press.
Winstone, N. and Carless, D. (2019), Designing Effective Feedback Processes in Higher Education: A Learning-Focused Approach, Routledge.
Yiğit, M.F. and Seferoğlu, S.S. (2021), “Effect of video feedback on students' feedback use in the online learning environment”, Innovations in Education and Teaching International, Vol. 60, pp. 1-11, doi: 10.1080/14703297.2021.1949165.