Abstract
Purpose
Smart learning analytics (Smart LA) – i.e. the process of collecting, analyzing and interpreting data on how students learn – has great potential to support opportunistic learning and offer better – and more personalized – learning experiences. The purpose of this paper is to provide an overview of the latest developments and features of Smart LA by reviewing relevant cases.
Design/methodology/approach
The paper studies several representative cases of Smart LA implementation, and highlights the key features of Smart LA. In addition, it discusses how instructors can use Smart LA to better understand the efforts their students make, and to improve learning experiences.
Findings
Ongoing research in Smart LA involves testing across various learning domains, learning sensors and LA platforms. Through the collection, analysis and visualization of learner data and performance, instructors and learners gain a more accurate understanding of individual learning behavior and of ways to effectively address learner needs. As a result, students can make better decisions when refining their study plans (either by themselves or in collaboration with others), and instructors obtain a convenient means of monitoring student progress. In summary, Smart LA promotes self-regulated and/or co-regulated learning by discovering opportunities for remediation, and by prescribing materials and pedagogy for remedial instruction.
Originality/value
Characteristically, Smart LA helps instructors give students effective and efficient learning experiences by integrating advanced learning analytics technology, fine-grained domain knowledge and locale-based information. This paper discusses notable cases illustrating the potential of Smart LA.
Citation
Kumar, K. and Vivekanandan, V. (2018), "Advancing learning through smart learning analytics: a review of case studies", Asian Association of Open Universities Journal, Vol. 13 No. 1, pp. 1-12. https://doi.org/10.1108/AAOUJ-12-2017-0039
Publisher
Emerald Publishing Limited
Copyright © 2018, Kinshuk and Vivekanandan Kumar
License
Published in the Asian Association of Open Universities Journal. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
Introduction
Since its emergence, learning analytics (LA) has continued to prove its importance to the field of education. LA involves measuring, collecting, analyzing and reporting contextual learner data for understanding and optimizing learning and learning environments (Siemens, 2012, p. 1382). LA has the potential to create better learning experiences customized to individual preferences, strengths and needs (Clow, 2013; Siemens, 2012). The customization of learning experiences can be intensified when all learning related activities are observed at finer levels of granularity – including episodes of study by students, episodes of assessment preparation and completion, and episodes of teaching – to generate awareness/insights about what learners know and how to promote constructive learning (Kumar, Kinshuk, Pinnell and Paulmani, 2017; Pinnell et al., 2017).
Recent learning models have moved the conversation from rote learning toward inclusive and personalized learning. This shift has exposed a critical question: how can learning be made practical and relevant to students? It is increasingly evident that tests and/or examinations based on memorization do not effectively help learners absorb and apply knowledge. Hwang et al. (2008) and Yahya et al. (2010) argue that when students perceive learning as meaningful and relevant, they are motivated to learn with enthusiasm. In addition, Lage et al. (2000) and Sampson et al. (2002) emphasize the importance of inclusive learning for all students. Accordingly, educational goals should focus on supporting students according to their individual strengths and weaknesses. Moreover, learning should be able to take place whenever and wherever an opportunity arises, and not be limited to a fixed time or location. Such an opportunistic learning process requires the ubiquitous presence of learner support, such as timely feedback or technological assistance. Contemporary learning models are moving toward more omnipresent and contextualized learning – where personalization ensures that students can learn at their own pace and receive customized feedback addressing individual strengths and weaknesses – following the theory that learning that happens spontaneously in real life is more difficult to forget (Sampson et al., 2002).
Correspondingly, the concept of “smart learning” has evolved as the conceptualization of learning changed. Experts have highlighted the importance of using technology to improve learning, following the emergence of adaptation and personalization as two key components of smart learning (Gros, 2016). A good technological design goes beyond simply using the latest technology in education, to nurturing smart behavior by amalgamating various technologies for collective use (Höjer and Wangel, 2015). Under the theme of “smart learning,” ideas such as “seamless learning” and “ubiquitous learning” have emerged. Sharples et al. (2014) argued that seamless learning allows a person to experience learning continuously “across a combination of locations, times, technologies and social settings.” Ubiquitous learning can be understood as learning experience that is more distributed across time and space, such that students can learn in environments where the line between work and play, and public and private, becomes blurred (Burbules, 2012). Essentially, “smart learning” does not merely emphasize technology-enhanced learning, but offers a way to improve learning experiences by integrating various technologies, environments and content.
Researchers and educators have turned their attention to the idea of incorporating an element of “smart” into LA. Smart learning analytics (Smart LA) is a subclass of LA that supports certain processes and characteristics of smart learning (Giannakos et al., 2016). Educators can use Smart LA to discover and analyze data on student behavior, instruction merit and the suitability of learning environments, gathering information from various sources to discern learning traces that can facilitate instructional support (Kinshuk, 2017). The development of big data, and new ways to connect and exchange information, has laid the foundation for more interactive and personalized learning. Data on learner capabilities and competencies, and the settings where students use specific technology, are useful for analyzing learner actions, interaction, trends in preferences and changes in ability levels. Consequently, instructors can identify each student’s learning trace – i.e. “a network of observed study activities that lead to a measurable chunk of learning” (Kumar, Kinshuk, Pinnell and Paulmani, 2017; Pinnell et al., 2017) – which is a major source of data for LA. Learning traces make personalization possible, such that individual students with distinct learning preferences can adopt different learning approaches even for the same learning activities.
Many researchers have studied areas such as how instructors can use LA to better understand student capability, how to design smart environments and how to apply existing technologies. However, few have explored the ability of Smart LA to promote engaging and personalized learning, i.e. smart learning. In the existing work, Boulanger et al. (2015) examined Smart LA using a case study on the Smart LA framework SCALE, and its effectiveness in improving student learning outcomes. Similarly, Giannakos et al. (2016) explored the concept of Smart LA using video-based learning.
This paper reviews cases relating to the implementation of Smart LA, and offers an overview of the latest developments and trends in Smart LA. It examines the use of various online platforms and/or software tools to facilitate smart learning through Smart LA. The paper begins by reviewing multiple case studies on Smart LA to identify key features. Subsequently, it surveys how instructors can use Smart LA to better understand student ability and learning progress, and thus promote more personalized learning.
Case studies of Smart LA
Procedure evaluation/E-learning/E-training Tool – PeT
PeT is a training and testing system designed for workers in the oil and gas industry. It aims to familiarize trainees with various emergency operating procedures (Boulanger et al., 2014). It adopts task-related knowledge structures, chunking and laying out tasks in a hierarchical model with an unlimited number of layers (Johnson et al., 1988). The procedural tasks for training are selected and prioritized by several parameters that include: the requirements of the institution’s training framework; the personal needs of a trainee as determined by himself/herself; and the recommendations of the PeT analytics system. Training and testing scenarios adapt to the profile of the trainees, changing as they make decisions within the PeT system. The PeT analytics system continuously tracks certain fine-grained data to provide information on user profiles. These include navigation pathways, session data, question data, deliberation before selecting a choice, and choice selection. For example, session data include the time taken by a learner to complete various activities within a session, and the data corresponding to the procedural activities attempted within the session; question data include not only the final answer, but also the navigational pathways of trainees as they proceed toward an answer in authentic environments (Boulanger et al., 2014). Accordingly, organizations can use such data to manage and optimize knowledge assets and human capital. Essentially, PeT creates a competence portfolio for each trainee based on experience and expertise. The portfolio helps to diagnose and predict whether the trainee has the proper skillset for executing tasks, and it also provides remedial instruction to address trainee weaknesses by analyzing the interaction between the trainee and the objects in an authentic (e.g. augmented reality, virtual reality and simulated) task environment. Trainees have access to both virtual reality episodes (of targeted steps in each procedure) and augmented reality experiences in collaborative task activities, which are part of an instructional design that employs the Assessing the Learning Strategies of AdultS instrument to customize activities based on prevalent learning strategies adopted by the trainees.
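To make this granularity concrete, the sketch below models the kinds of records such a system might keep; the class and field names are hypothetical illustrations, not PeT’s actual data model.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

# Hypothetical record types for the kinds of fine-grained data PeT is
# described as tracking; names are illustrative, not PeT's data model.

@dataclass
class QuestionEvent:
    question_id: str
    navigation_path: List[str]      # screens visited en route to an answer
    deliberation_seconds: float     # time spent before selecting a choice
    choice_selected: str
    correct: bool

@dataclass
class TrainingSession:
    trainee_id: str
    procedure_id: str               # emergency operating procedure trained
    started_at: datetime
    events: List[QuestionEvent] = field(default_factory=list)

    def total_deliberation(self) -> float:
        """Aggregate deliberation time, one input to a competence portfolio."""
        return sum(e.deliberation_seconds for e in self.events)

session = TrainingSession("t-042", "EOP-7", datetime.now())
session.events.append(QuestionEvent(
    "q1", ["overview", "valve-diagram", "q1"], 12.4, "close-valve-A", True))
print(session.total_deliberation())  # 12.4
```

Aggregates over such records (per session, per procedure, over time) would feed the competence portfolio described above.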
Augmented reality training/testing (ART)
ART provides an immersive training and testing environment for augmented-reality-oriented training, providing trainees with multimedia lessons using animation, interactive maps and customized learning activities. Bacca et al. (2015) introduced an ART application, Paint-cAR, to help students on a vocational training program in car maintenance. Specifically, it familiarizes students with the procedural steps for repairing the paint on a car, including safety measures and the identification of chemical tools or products. With Paint-cAR, students can interactively practice the procedure step by step in a virtual world. The application provides three modes – guided, evaluation and informative – for students to choose from, and offers flexibility in terms of the degree of guidance, the accessibility of information and the sequence of training. Students also get timely feedback when they take a wrong step. Bacca et al. (2015) found that Paint-cAR had positive effects on the dimensions of attention, relevance, confidence and satisfaction, which are core contributors to increasing motivation in learning. A well-designed ART application not only enhances trainees’ level of engagement, but also addresses the limitations of one-size-fits-all curricula, and hence supports a more personalized learning experience (Bacca et al., 2015). The pedagogical approach prescribed in ART can be applied in other domains as well.
CODing experience (CODEX)
CODEX is coding analytics software designed to capture the coding experiences of students in areas such as design, debugging, testing and optimization, from within an integrated development environment such as NetBeans (Seanosky et al., 2016; Kumar et al., 2018). It tracks programming activities, such as the design of Unified Modeling Language diagrams, debugging and even keystrokes, and records them securely. Using these data, CODEX attempts to recognize the design, coding, debugging, testing and optimization strategies of students. In addition, it analyzes the functionality of the programs designed by students, offering proactive feedback to help them develop as professional programmers. It can observe in real time the coding habits of students in different parts of the world who are working on similar problems, and connect compatible students (Kumar et al., 2018). Given that teachers are often unaware of the problems students face when solving programming problems, they can use CODEX to gain real-time awareness of the coding progress and habits of their students, and to offer just-in-time feedback. Students can get timely and highly contextual feedback from teachers, classmates and/or anyone anywhere on the planet working on a similar problem. Notably, this system offers students a global perspective. With students’ permission, the system can compile the coding habits of top students into a library and share them with novice programmers across the world.
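A client-side coding sensor in this spirit might look like the following minimal sketch; the event kinds and the habit summary are illustrative assumptions rather than CODEX’s actual interface.

```python
import time
from collections import Counter

# Minimal client-side coding sensor sketch; event kinds are assumptions.
class CodingSensor:
    def __init__(self, student_id: str):
        self.student_id = student_id
        self.events = []                 # (timestamp, kind, detail) tuples

    def record(self, kind: str, detail: str = "") -> None:
        """kind: 'keystroke', 'compile', 'test_run', 'debug_start', ..."""
        self.events.append((time.time(), kind, detail))

    def habit_summary(self) -> Counter:
        """Counts per event kind: a crude proxy for coding habits, e.g.
        how often a student runs tests relative to how much they type."""
        return Counter(kind for _, kind, _ in self.events)

sensor = CodingSensor("s-17")
for ch in "public class Main {}":
    sensor.record("keystroke", ch)
sensor.record("compile", "ok")
sensor.record("test_run", "2 passed, 1 failed")
print(sensor.habit_summary())
```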
Mixed-initiative dashboard (MI-DASH)
MI-DASH is a web-based learner-centric dashboard that offers query, visualization and self-reflection interfaces, and presents student learning progress, capacity and performance. It presents the results of analyses carried out by the Smart Competence Analytics on LEarning (SCALE) engine (Boulanger, Seanosky, Clemens, Kumar and Kinshuk, 2016; Boulanger, Seanosky, Pinnell, Bell, Kumar and Kinshuk, 2016). SCALE is a smart analytics technology that measures student competence by translating learning traces from data collected by different learning sensors, such as CODEX. MI-DASH can visualize student proficiency levels per learning activity and monitor confidence levels over time. Basically, it provides both an overview of student learning performance and details of student competencies. In addition, MI-DASH can identify student confidence levels, which can influence capability. This function is significant, given that knowledge of the correct steps and/or procedures does not necessarily translate into the confidence to execute them. The system computes how well students perform certain skills over time: the more consistently students use specific skills correctly, the more confident they become (Boulanger et al., 2014).
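One plausible reading of this consistency-based confidence idea (an assumption for illustration, not the published algorithm) is an exponentially weighted update, in which recent, repeated correct use raises confidence faster than isolated successes:

```python
# Exponentially weighted confidence per skill: consistent recent success
# dominates isolated or stale successes. Rate and prior are assumptions.

def update_confidence(confidence: float, correct: bool, rate: float = 0.2) -> float:
    """Move confidence toward 1.0 on a correct use, toward 0.0 otherwise."""
    target = 1.0 if correct else 0.0
    return (1 - rate) * confidence + rate * target

conf = 0.5                                  # neutral prior for a new skill
for outcome in [True, True, False, True, True, True]:
    conf = update_confidence(conf, outcome)
print(f"confidence after six observations: {conf:.2f}")   # 0.77
```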
Self-/Co-regulated learning (SCRL)
SCRL is a tool created to measure and improve students’ abilities during self-regulated and co-regulated learning (Pinnell et al., 2015). Self-regulated learning involves students being self-directed, taking the initiative to “set their goals, select and deploy tactics, monitor their own actions and evaluate their own effectiveness” (Zheng et al., 2015). Such students can manage their own learning tasks and engage with coursework in a robust manner (Boulanger et al., 2015). SCRL offers a way to rectify loopholes in “generic” contemporary educational systems by giving students more control over their personal learning process, and allowing them to set goals and design strategies to achieve those goals (Pinnell et al., 2015). SCRL works with different types of learning sensors, such as MI-WRITER and CODEX, for data collection, and it provides a dashboard to visualize student proficiency levels in various domains. In addition, the system offers students diverse initiatives that can help them enhance proficiencies or remedy deficiencies (Seanosky et al., 2016). Students can choose the types of competence they want to improve and receive guidance on ways to achieve success.
The system provides real-time monitoring and immediate feedback. It recommends changes to student initiatives whenever it detects possible obstacles to progress (Pinnell et al., 2015). Similarly, teachers can use the system to monitor student progress. They can employ an immersive communication system to create a learner group for students with similar goals, and create shared initiatives to address the specific weaknesses of certain groups of learners. These features promote effective self-regulated and co-regulated learning, which have proven to be essential components of successful online learning environments (Zheng et al., 2015).
Writing analytics – MI-WRITER and 2Write
Writing is a process best taught in a formative manner. However, most educational settings regard writing as a product and assess it in a summative manner. Writing analytics offers teachers the opportunity to observe student writing behavior at finer levels of granularity, providing insight into the evolution of writing competences such as grammatical accuracy, topic flow, transitions and vocabulary usage. Teachers can use MI-WRITER (Clemens et al., 2017) to gain insights into the writing processes of students, and to engage students in trace-based scaffolding for developing writing skills.
MI-WRITER is a learning sensor designed to measure English writing competences in terms of both language and content. On the language side, MI-WRITER can track data produced by students performing writing exercises, including grammatical and spelling errors, sentence patterns, the ability to correct errors and the speed of writing. On the content side, MI-WRITER records the flow and formation of topics. This process of analyzing various features of writing and quantifying student performance allows for real-time feedback on ways to enhance the efficiency and quality of English writing.
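As a rough illustration of how such language-side features might be quantified (the metric definitions below are assumptions, not MI-WRITER’s), consider:

```python
# Illustrative quantification of the language-side features described above;
# the metric definitions are our assumptions, not MI-WRITER's.

def writing_metrics(chars_typed: int, minutes: float,
                    spelling_errors: int, grammar_errors: int,
                    errors_corrected: int) -> dict:
    total_errors = spelling_errors + grammar_errors
    return {
        "speed_cpm": chars_typed / minutes,                # writing speed
        "error_rate": total_errors / max(chars_typed, 1),  # errors per char
        "self_correction": errors_corrected / max(total_errors, 1),
    }

print(writing_metrics(chars_typed=1800, minutes=25,
                      spelling_errors=6, grammar_errors=4, errors_corrected=7))
```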
2Write (Boulanger et al., 2018; Mitchnick et al., 2017; Kumar, Kinshuk, Pinnell and Paulmani, 2017; Kinshuk, 2017) is another writing analytics tool for enhancing English writing competence. It is compatible with learning management systems such as Moodle. 2Write collects data from students as and when they work on assignments, and provides a dashboard showing the areas to improve on. It captures every character a student enters, and performs real-time analysis. With the aid of this tool, students can self-monitor their progress and set their own goals to achieve desired writing abilities.
MUSIc eXperiences (MUSIX)
MUSIX employs LA to enhance both the music-related learning experiences of students and the teaching approaches of instructors (Guillot et al., 2015). The system collects data from sources such as music theory lectures, instrumental performances and vocal training. MUSIX analyses the collected data and displays the results on a dashboard that highlights student musical competencies. Subsequently, the system provides customized feedback based on the identified learning traces of individuals. It also features targeted instruction, games and quizzes to improve the competence levels of learners, and utilizes self-regulation and co-regulation techniques to help learners build a deeper understanding of their music skills and plan strategies to improve them.
The systems briefly reviewed above display some commonalities in analytics functions, system structure and user experience. The next section identifies these shared features, which can be seen as the pillars of Smart LA systems.
Key features of Smart LA
As the cases above show, Smart LA involves a thorough study of learners and learning environments. Smart LA requires various types of data to identify learning traces that inform the building of better learning environments. Researchers and/or teachers face the challenge of discovering relevant data sources, extracting usable information, and effectively analyzing and interpreting the data as they become available.
Teachers, students and parents can observe changes in learner competence levels by studying the corresponding learning traces. A learning trace is created when both a student’s fine-grained learning activities and study outcomes are compiled and interpreted at different points in time. These real-time results can support customized learning that meets individual learner needs. Such customization can be an individual endeavor (i.e. by a teacher, student or parent) or a collaborative effort. With Smart LA, researchers can reliably evaluate and predict the nature of learning occurring in different environments, including learner efforts and efficiencies. Consequently, various agencies can offer appropriate institutional support based on policies that target competency-based learning.
The following sections discuss key features of Smart LA, including how it provides essential insights into learning, how it embeds certain functionalities to create better learning experiences and how it makes personalized learning possible.
Learner awareness
Smart LA promotes acute learning awareness. As the case studies above indicate, learners are better served with information about their own progress in the learning process. Students who are aware of their learning behavior can track their study progress and seek adequate help to improve study habits (Ebner et al., 2015). Smart LA makes learners more aware of ways to achieve their best performance. One of the feedback channels of Smart LA is the visualization of learning progress. Such visualization should be carefully designed for seamless communication among students, teachers and the analytics system, such that effective pedagogical interventions become possible.
Various data sensors provide learner information such as performance levels, meta-cognitive skills, cognitive skills, learning strategies, affective states and physiological symptoms. Smart LA can use such data sets to personalize study experiences. In many cases, personalization begins with learner modeling, which extracts both the characteristics of learners and the pertinence of learning strategies from the raw data sets. In learner modeling, models emerge from observing the interactions between learners and their instructional environments, revealing pertinent learner information such as knowledge levels, weaknesses and misconceptions (Bull and Kay, 2010). A learner model continuously updates as data arrive from sensors, and Smart LA gives learners access to these models, which individualize learning experiences and allow learners to reflect on, plan and control their own learning.
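A minimal open learner model in this spirit, with a simplifying update rule of our own choosing, could be sketched as follows:

```python
from collections import defaultdict

# A minimal open learner model (cf. Bull and Kay, 2010): per-skill estimates
# updated as sensor observations arrive, and readable by the learner for
# reflection. The update rule and threshold are simplifying assumptions.

class LearnerModel:
    def __init__(self):
        self.skill = defaultdict(lambda: 0.5)       # neutral prior per skill

    def observe(self, skill: str, success: bool, weight: float = 0.25) -> None:
        target = 1.0 if success else 0.0
        self.skill[skill] += weight * (target - self.skill[skill])

    def weaknesses(self, threshold: float = 0.4) -> list:
        """Skills the model currently flags for remediation."""
        return [s for s, v in self.skill.items() if v < threshold]

model = LearnerModel()
for skill, ok in [("loops", True), ("recursion", False), ("recursion", False)]:
    model.observe(skill, ok)
print(dict(model.skill), model.weaknesses())   # recursion flagged
```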
Technology awareness
Technology is vital to Smart LA. Technology awareness ensures the best use of the software, and facilitates content customization and the personalization of learning. Users must understand the key functionalities of the various technologies and devices embedded in Smart LA and e-learning software in order to fully utilize the resources and dynamically optimize content to suit functionality (McKenny and Reeves, 2016). For example, instructors should know what kind of device or technology will best support their teaching. Functionality refers to the tools embedded in a device or software that help it accomplish its tasks (Gros, 2016). Functionality includes items such as display facility, audio and video capability, multi-language capacity and operating platform quality. In the MUSIX example above, the software uses both the audio capability and the MIDI connection between instrument and computer to collect and analyze learner data from vocal or instrumental practice (Guillot et al., 2015).
Another noteworthy issue is big data centralism. Smart LA is powerful in its ability to bring different sources of data together (Ebner et al., 2015). Advancements in technology mean that people can use different devices to access Smart LA software, while the aggregation of generalized data remains central. In addition, software such as MI-DASH and SCRL provide platforms for central data mining across various LA tools to facilitate self-/co-regulation. Continuous data analysis also helps pedagogical scientists, or the system itself, to acquire new knowledge structures from the analyzed data, and to derive better knowledge structures that support dynamic adaptation to learner needs (Clemens et al., 2017; Clemens, n.d.).
Location awareness
Location-based technologies facilitate location-based learning. The former help to identify the location of learners when they are using the software, and, according to Greer (2009), the latter transfers knowledge using the location-based intelligence of wireless interfaces and sensor networks, which continuously adapt to user location (cited in Martin and Ertzberger, 2013, p. 77). Current wireless technologies include positioning systems such as GPS, Wi-Fi, RFID chips and Bluetooth that can automatically detect user locations. Location – both the physical location of learners and the opportunities to learn – is essential to contextual learning (Brown et al., 2010). Opportunities for learning are increasingly location-sensitive. For instance, the ability to identify students in nearby locations with similar characteristics and learning preferences makes it possible to link them up for mutual learning benefit; a sketch of such matching follows below. A museum using QR codes is another example of a location-sensitive learning opportunity.
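The following sketch illustrates the peer-linking idea under simple assumptions (planar distance, set overlap of preferences); the function names and thresholds are hypothetical:

```python
import math

# Hypothetical sketch of location-aware peer matching: find students
# physically nearby whose learning preferences overlap with the requester's.

def distance_m(a, b):
    """Rough planar distance in metres between two (lat, lon) points."""
    dlat = (a[0] - b[0]) * 111_000
    dlon = (a[1] - b[1]) * 111_000 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon)

def nearby_peers(me, students, radius_m=200, min_overlap=2):
    matches = []
    for s in students:
        if s["id"] == me["id"]:
            continue
        close = distance_m(me["pos"], s["pos"]) <= radius_m
        shared = len(set(me["prefs"]) & set(s["prefs"]))
        if close and shared >= min_overlap:
            matches.append(s["id"])
    return matches

me = {"id": "a", "pos": (53.52, -113.52), "prefs": {"visual", "group", "quiz"}}
others = [
    {"id": "b", "pos": (53.5201, -113.5202), "prefs": {"visual", "group"}},
    {"id": "c", "pos": (53.60, -113.40), "prefs": {"visual", "group"}},
]
print(nearby_peers(me, others))   # ['b']: close by, two shared preferences
```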
Surrounding awareness
The idea of mobile learning emerged following advancements in location-aware technologies and devices. Mobile learning refers to the ease with which learning moves with learners, and is not restricted to learning facilitated by mobile devices (Walsh, 2011). Ally and Prieto-Blazquez (2014) further stressed that mobile learning is not about technology, but about putting learners at the center of learning. It is about learners who are “mobile.” Basically, technology provides the tools that enable learners to learn in various contexts, while mobile learning promotes the development of relevant learning activities based on both learner-specified objectives and certain context-aware knowledge structures of different domains. The term “context-aware” refers to providing users with information relevant to their tasks within specific contexts (Abowd et al., 1999, cited in Kinshuk et al., 2016).
Mobile learning engages students in the context of their environments, providing authentic interactive activities, and making informal learning easier (Martin and Ertzberger, 2013). Learning embedded in the surrounding environment helps students learn wherever they want. Consequently, real-life physical objects become essential vehicles for location-based adapted learning. For example, it has become common practice for museums to attach QR codes to different exhibition boards, in both indoor and outdoor settings. Learners with smartphones or equivalent devices can obtain additional information and interact with digital representations of displayed items by scanning the QR codes. Such mobile learning systems, when designed well, exemplify both ubiquitous and personalized learning (Kinshuk, 2017). For example, when a user repeatedly scans the same QR code, a well-designed personalized system should be able to identify the user’s learning progress and adapt subsequent information, rather than giving the same information every time.
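A minimal sketch of that repeat-scan behavior, with content tiers and names of our own invention, might be:

```python
from collections import defaultdict

# Sketch of the repeat-scan adaptation described above: each time the same
# user scans the same exhibit code, the system serves the next level of
# depth rather than repeating itself. Tiers and names are illustrative.

CONTENT_TIERS = {
    "exhibit-42": ["Overview of the exhibit",
                   "Historical context and provenance",
                   "Quiz: test what you remember"],
}
scan_counts = defaultdict(int)          # (user, code) -> scans so far

def on_scan(user: str, code: str) -> str:
    tiers = CONTENT_TIERS[code]
    tier = min(scan_counts[(user, code)], len(tiers) - 1)
    scan_counts[(user, code)] += 1
    return tiers[tier]

print(on_scan("u1", "exhibit-42"))   # first scan: overview
print(on_scan("u1", "exhibit-42"))   # second scan: deeper context
```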
Understandably, surrounding awareness cannot be divorced from cultivating smart learning environments. Combining the advantages of physical classrooms and various virtual learning environments – facilitated by the holistic internet of things and ubiquitous sensing devices – provides the full toolkit for context awareness (Kinshuk et al., 2016). With such awareness, learners can learn at their own pace, and teachers can monitor student progress, adapting feedback and assistance based on relevant big data from the respective smart learning environments. Boulanger et al. (2015) argue that collecting and analyzing learner data provides a better understanding of learning experiences, and facilitates tailormade teaching that introduces relevant and optimal learning conditions into the learning environment.
How can Smart LA give instructors a better understanding of their students?
Mapping learning outcomes to specific skills
The effectiveness of student assessment depends on the effectiveness of the design of the underlying system infrastructure (Yassine et al., 2016). Smart LA should include a dynamic course map that produces chronological lists of all learning outcomes, together with the best approaches to achieving learning goals and the methods of assessment. Ideally, learning outcomes should be further mapped to specific skills, to give instructors a better understanding of each individual student’s competence level in different areas. In addition, there should be a method or framework that helps to make the comprehensive knowledge base measurable or, more precisely, “quantifiable” (Loewen et al., 2015). Smart LA software uses embedded frameworks to map a baseline of quantified knowledge relevant to creating learning objectives for learners. Instructors can therefore use clear and measurable learning outcomes to easily and continuously track student proficiency levels in specific skills.
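One way such an outcome-to-skill mapping could be represented and queried is sketched below; the outcomes, skills and averaging rule are hypothetical illustrations:

```python
# Illustrative course map of the kind the text calls for: learning outcomes
# mapped to the specific skills they depend on, so outcome-level progress
# can be computed from skill-level measurements. Names are hypothetical.

COURSE_MAP = {
    "LO1: write a correct unit test": ["assertions", "test design"],
    "LO2: debug a failing program":   ["breakpoints", "stack reading"],
}

def outcome_progress(skill_scores: dict) -> dict:
    """Average measured skill proficiency per learning outcome."""
    return {
        outcome: sum(skill_scores.get(s, 0.0) for s in skills) / len(skills)
        for outcome, skills in COURSE_MAP.items()
    }

scores = {"assertions": 0.9, "test design": 0.7, "breakpoints": 0.4}
print(outcome_progress(scores))   # LO1 on track; LO2 flags remediation
```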
Dashboards for teachers
The Smart LA platform should have dashboards through which teachers can monitor individual and group learning progress. Visual feedback channels are important for giving teachers information about class performance, and about how well the Smart LA platform is enhancing learning (Ebner et al., 2015). On such dashboards, teachers can access triangulated data about the issues students face – for example, line charts of task-specific competence levels indicating skill history, or bar charts giving an overview of learner capacity across different skills. Teachers can use such visual aids to dynamically check student competence levels and detect pedagogic needs. More importantly, both teachers and learners can perceive learning as a highly active and dynamic growth process, rather than seeing it as a single “snapshot” (Kumar, Kinshuk, Pinnell and Paulmani, 2017). Teachers and students can follow a learning timeline and see how learning has influenced performance over a certain period.
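As a toy rendering of those two dashboard views on made-up data, using standard plotting tools:

```python
import matplotlib.pyplot as plt

# Toy rendering of the two dashboard views described above, on made-up data:
# a skill-history line chart and a per-skill capacity bar chart.

weeks = [1, 2, 3, 4, 5]
loop_history = [0.3, 0.45, 0.5, 0.65, 0.8]      # competence over time
capacity = {"loops": 0.8, "recursion": 0.35, "I/O": 0.6}

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(weeks, loop_history, marker="o")
ax1.set(title="Skill history: loops", xlabel="week", ylabel="competence")
ax2.bar(list(capacity), list(capacity.values()))
ax2.set(title="Current capacity by skill", ylim=(0, 1))
plt.tight_layout()
plt.show()
```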
Teachers can thus use the available data to tailor course offerings (Van Harmelen and Workman, 2012). Widely sourced analytics, not limited to student capability, give teachers a better understanding of student learning preferences, the technologies students have used and their preferred places to learn. For example, a teacher who knows the type of technology a learner is using can customize instruction to that technology (Kinshuk, 2017). Since Smart LA continuously records student preferences over long periods of time, teachers can more easily notice changes in, or predict, student preferences. Smart LA achieves the promise of tailormade educational opportunities based on student needs and abilities through data mining, interpretation and modeling (Van Harmelen and Workman, 2012).
IMS Caliper framework
The IMS Caliper framework tracks and shares online learning interaction data (Zheng et al., 2015). It comprises three major components – a learning events sensor, an analytics server and an analytics dashboard. The learning events sensor collects data from learning tools. The analytics server connects the various data types and engages learners in carrying out “initiatives-based intelligent interactions,” which encourage self-regulated learning. The analytics dashboard uses an interactive interface to report the learning activities carried out by students, and customizes initiatives and recommendations. The sketch below gives a concrete sense of the events such sensors emit; the two LA platforms described next adopt the architecture of the IMS Caliper framework.
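The following hand-written example mirrors the general shape of a Caliper event (actor, action, object, eventTime); it is illustrative only, with fictitious IDs, and is not output from a certified Caliper sensor:

```python
import json
from datetime import datetime, timezone

# Hand-written example mirroring the general shape of a Caliper event
# (actor, action, object, eventTime). Illustrative only: the IDs are
# fictitious and this is not output from a certified Caliper sensor.

event = {
    "@context": "http://purl.imsglobal.org/ctx/caliper/v1p1",
    "type": "NavigationEvent",
    "actor": {"id": "https://example.edu/users/554433", "type": "Person"},
    "action": "NavigatedTo",
    "object": {"id": "https://example.edu/lessons/2/pages/5",
               "type": "WebPage"},
    "eventTime": datetime.now(timezone.utc).isoformat(),
}
print(json.dumps(event, indent=2))   # what a sensor would send to the server
```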
Lambda
Lambda is an LA platform that centralizes the sensing, analysis and display of analytics data from different software tools. It is a competence management system measuring both learner proficiency and confidence levels (Seanosky et al., 2016). A wide range of learning domains can be reported on one platform, giving both teachers and learners a clearer and broader picture. The first part of Lambda concerns its sensing technologies. CODEX, MI-WRITER and SCRL are examples of client-side sensor-based technologies, which detect learning activities in different learning domains and provide raw data for further analysis (Boulanger et al., n.d.). The second part involves a processing and analysis engine: the processor receives raw data from the different sensors and makes sense of them according to stipulated learning outcomes. The final part covers visualization and reporting. LA aims to provide customized feedback that helps individual learners work on their own weaknesses, resulting in higher knowledge levels (Siemens, 2012). Essentially, while human feedback can be biased, Lambda provides an accurate analysis of learners based on corresponding performance data, which prevents students from overestimating or underestimating their capabilities (Seanosky et al., 2016).
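A toy end-to-end pass through those three layers, with our own names, data and proficiency rule, might look like this:

```python
# Toy pass through the three Lambda layers described above (sensing,
# processing against stipulated outcomes, reporting); all names, data
# and thresholds are illustrative assumptions.

def sense() -> list:
    """Client-side sensors emit raw learning events (CODEX-style)."""
    return [{"student": "s1", "skill": "testing", "correct": True},
            {"student": "s1", "skill": "testing", "correct": False},
            {"student": "s1", "skill": "debugging", "correct": True}]

def process(events: list) -> dict:
    """Server-side engine turns raw events into per-skill proficiency."""
    totals, correct = {}, {}
    for e in events:
        totals[e["skill"]] = totals.get(e["skill"], 0) + 1
        correct[e["skill"]] = correct.get(e["skill"], 0) + e["correct"]
    return {skill: correct[skill] / totals[skill] for skill in totals}

def report(proficiency: dict) -> None:
    """Visualization layer: a plain-text stand-in for the dashboard."""
    for skill, p in proficiency.items():
        print(f"{skill:10s} {'#' * int(p * 10):10s} {p:.0%}")

report(process(sense()))
```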
SCALE
SCALE is another LA platform that aims to collect learning traces from different learning domains and analyze the respective competency levels. Unlike Lambda, SCALE can be embedded in course management systems, such as Blackboard, and in several automated testing tools, which makes the detected data reliable and relevant to learning outcomes (Boulanger et al., 2015). However, SCALE and Lambda have generally similar architectures. Like Lambda, SCALE has sensing, analysis and visualization layers. In addition, it includes an extra layer, the “competency layer,” which converts the analysis results of various competencies and links them to learning outcomes (Boulanger, Seanosky, Pinnell, Bell, Kumar and Kinshuk, 2016).
Boulanger et al. (2015) reported an experimental study showing that SCALE has great potential to improve student performance. They found that classes that adopted the new e-learning tracing-oriented technologies generally performed better than classes that followed conventional teaching methods. In brief, SCALE centralizes the generation, interpretation and display of learning events. However, it is important to note that the lack of a control group weakened the validity of the experiment. The findings showed that the new design was likely to be effective in optimizing student learning experiences, although the time needed to adopt the system might lengthen the process.
Conclusion
The “smartness” of Smart LA depends on how adaptive the system is, and how well it is able to sense, infer, anticipate and promote self-learning/-organization (Uskov et al., 2017, p. 194). Smart LA strives to facilitate context-aware learning by tracking learner capability in as many learning activities as possible, and offering tailormade feedback to improve learning. It facilitates self-regulation, such that learners can visualize their personal learning progress and design learning plans for themselves.
Today, Smart LA and smart learning environments are indispensable. Smart learning environments use technology to ensure that learning can happen anywhere, and Smart LA enhances that “smartness” by addressing issues such as the extent of customization, the scale of ubiquity and the degree of self-/co-regulation. As it matures, Smart LA will have tools to offer students more intricate support. For instance, Smart LA could recommend to learners the most suitable learning or career paths based on their learning profiles (Boulanger et al., 2015). Such developments are focused on improving learning experiences and making learning more omnipresent and highly contextual for all learners.
References
Abowd, G.D., Dey, A.K., Brown, P.J., Davies, N., Smith, M. and Steggles, P. (1999), “Towards a better understanding of context and context-awareness”, in Gellersen, H.-W. (Ed.), Handheld and Ubiquitous Computing, Springer, Berlin and Heidelberg, available at: http://link.springer.com/chapter/10.1007/3-540-48157-5_29
Ally, M. and Prieto-Blazquez, J. (2014), “What is the future of mobile learning in education?”, International Journal of Educational Technology in Higher Education, Vol. 11 No. 1, pp. 142-151.
Bacca, J., Baldiris, S., Fabregat, R., Kinshuk and Graf, S. (2015), “Mobile augmented reality in vocational education and training”, Procedia Computer Science, Vol. 75, pp. 49-58.
Boulanger, D., Clemens, C., Seanosky, J., Fraser, S. and Kumar, V.S. (2018), “Performance analysis of a serial NLP pipeline for scaling analytics of academic writing process”, in Sampson, D., Ifenthaler, D., Spector, J.M., Isaías, P.I. and Sergis, S. (Eds), Learning Technologies for Transforming Teaching, Learning and Assessment at Large Scale, Springer, New York, NY.
Boulanger, D., Seanosky, J., Baddeley, M., Kumar, V. and Kinshuk (2014), “Learning analytics in the energy industry: measuring competences in emergency procedures”, 2014 IEEE 6th International Conference on Technology for Education (T4E), Clappana, December 18-21.
Boulanger, D., Seanosky, J., Kumar, V., Kinshuk, Panneerselvam, K. and Somasundaram, T.S. (2015), “Smart learning analytics”, in Chen, G., Kumar, V., Kinshuk, Huang, R. and Kong, S.C. (Eds), Emerging Issues in Smart Learning, Springer, Berlin, pp. 289-296.
Boulanger, D., Seanosky, J., Pinnell, C. and Bell, J. (n.d.), “LAMBDA: learning analytics”, available at: http://learninganalytics.ca/research/lambda/
Boulanger, D., Seanosky, J., Clemens, C., Kumar, V. and Kinshuk (2016), “SCALE: a smart competence analytics solution for English writing”, Proceedings of the 2016 IEEE 16th International Conference on Advanced Learning Technologies (ICALT), pp. 468-472, available at: http://doi.org/10.1109/ICALT.2016.108
Boulanger, D., Seanosky, J., Pinnell, C., Bell, J., Kumar, V. and Kinshuk (2016), “SCALE: a competence analytics framework”, in Li, Y., Chang, M., Kravcik, M., Popescu, E., Huang, R., Kinshuk and Chen, N.-S. (Eds), State-of-the-Art and Future Directions of Smart Learning, Springer Singapore, Singapore, pp. 19-30, available at: http://doi.org/10.1007/978-981-287-868-7_3 (accessed December 25, 2017).
Brown, E., Börner, D., Sharples, M., Glahn, C., Jong, T.D. and Specht, M. (2010), “Location-based and contextual mobile learning. A STELLAR small-scale study”, report, available at: www.stellarnet.eu/d/1/2/images/2/23/Sss6.pdf
Bull, S. and Kay, J. (2010), “Open learner model”, in Nkambou, R., Bourdeau, J. and Mizoguchi, R. (Eds), Advances in Intelligent Tutoring Systems, Springer-Verlag, Berlin and Heidelberg, pp. 301-322.
Burbules, N.C. (2012), “Ubiquitous learning and the future of teaching”, Encounters in Theory and History of Education, Vol. 1 No. 13, pp. 3-14.
Clemens, C., Kumar, V.S., Boulanger, D., Seanosky, J. and Kinshuk (2017), “Learning traces, competence, and causal inference for English composition”, in Essa, A., Spector, M., Huang, Y.M., Tortorella, R., Koper, R., Chang, T.W., Kumar, V.S., Li, Y.Y. and Zhang, Z. (Eds), Frontiers of Cyberlearning – Emerging Technologies for Teaching and Learning, Lecture Notes in Educational Technology, Springer Nature, Singapore.
Clow, D. (2013), “An overview of learning analytics”, Teaching in Higher Education, Vol. 18 No. 6, pp. 683-695.
Ebner, M., Taraghi, B., Saranti, A. and Schon, S. (2015), “Seven features of smart learning analytics – lessons learned from four years of research with learning analytics”, eLearning Papers, Vol. 40, pp. 51-55, available at: www.openeducationeuropa.eu/en/article/Assessment-certification-and-quality-assurance-in-open-learning_From-field_40_3?paper=164347
Giannakos, M., Sampson, D.G. and Kidzinski, L. (2016), “Introduction to smart learning analytics: foundations and developments in video-based learning”, Smart Learning Environments, Vol. 3 No. 12, doi: 10.1186/s40561-016-0034-2.
Greer, T. (2009), “Learning on the fourth screen: innovations in location-based learning”, available at: http://api.ning.com/files/Gs5aYnX-mNwygskcA58tG5pfiH6qaILCnF1GHra2VE_/locationbasedlearning.pdf
Gros, B. (2016), “The design of smart educational environments”, Smart Learning Environments, Vol. 3 No. 15, doi: 10.1186/s40561-016-0039-x.
Guillot, C., Guillot, R., Kumar, V. and Kinshuk (2015), “MUSIX: learning analytics in music teaching”, in Li, Y., Chang, M., Kravcik, M., Popescu, E., Huang., R., Kinshuk and Chen, N.S. (Eds), State-of-the-Art and Future Directions of Smart Learning, Springer, Singapore, pp. 269-273.
Höjer, M. and Wangel, J. (2015), “Smart sustainable cities: definition and challenges”, in Hilty, L. and Aebischer, B. (Eds), ICT Innovations for Sustainability: Advances in Intelligent Systems and Computing, Springer International Publishing, Switzerland, p. 310.
Hwang, G.-J., Tsai, C.-C. and Yang, S.J.H. (2008), “Criteria, strategies and research issues of context-aware ubiquitous learning”, Educational Technology & Society, Vol. 11 No. 2, pp. 81-91.
Johnson, P., Johnson, H., Waddington, R. and Shouls, A. (1988), “Task related knowledge structures: analysis, modelling and application”, Proceedings of the Fourth Conference of the British Computer Society on People and Computers IV, Cambridge University Press, New York, NY, pp. 35-62.
Kinshuk (2017), “Improving learning through smart learning analytics (PowerPoint Slides)”, presented at the Symposium on Open and Innovative Education, October 27, available at: www.ouhk.edu.hk/wcsprd/Satellite?pagename=OUHK/tcSingPage&c=C_PO&cid=1510016193961&lang=eng (accessed February 20, 2018).
Kinshuk, Chen, N.S. and Cheng, I.L. (2016), “Evolution is not enough: revolutionizing current learning environments to smart learning environments”, International Journal of Artificial Intelligence in Education, Vol. 26 No. 2, pp. 561-581.
Kumar, V.S., Kinshuk, Pinnell, C. and Paulmani, G. (2017), “Analytics in authentic learning”, in Chang, T.-W., Huang, R. and Kinshuk (Eds), Authentic Learning Through Advances in Technologies, Springer Nature, Singapore, pp. 75-89.
Kumar, V.S., Kinshuk, Pinnell, C. and Paulmani, G. (2018), “Analytics in authentic learning”, in Chang, T.W., Huang, R. and Kinshuk (Eds), Authentic Learning through Advances in Technologies, Lecture Notes in Educational Technology, Springer, Singapore, pp. 75-89.
Lage, M.J., Platt, G.J. and Treglia, M. (2000), “Inverting the classroom: a gateway to creating an inclusive learning environment”, The Journal of Economic Education, Vol. 31 No. 1, pp. 30-43.
Loewen, J., Loewen, D., Kinshuk and Suhonen, J. (2015), “Towards an ICT framework for providing inclusive learning objects for indigenous learners”, in Chen, G., Kumar, V., Kinshuk, Huang, R. and Kong, S.C. (Eds), Emerging Issues in Smart Learning, Springer, Berlin, pp. 345-352.
McKenny, S. and Reeves, T.C. (2016), “Educational design and construction: processes and technologies”, in Gros, B., Kinshuk and Maina, M. (Eds), The Future of Ubiquitous Learning: Learning Designs for Emerging Pedagogies, Springer, Berlin, pp. 131-151.
Martin, F. and Ertzberger, J. (2013), “Here and now mobile learning: an experimental study on the use of mobile technology”, Computers & Education, Vol. 68, pp. 76-85.
Mitchnick, D., Clemens, C., Kagereki, J., Kumar, V.S. and Fraser, S. (2017), “Measuring the written language disorder among students with attention deficit hyperactivity disorder”, Journal of Writing Analytics, Vol. 1 No. 1, available at: https://journals.colostate.edu/analytics/article/view/131
Pinnell, C., Bell, J., El-Bishouty, M. and Zheng, L. (2015), “SCRL: learning analytics”, available at: http://learninganalytics.ca/research/n-z/scrl/ (accessed December 25, 2017).
Pinnell, C., Paulmani, G., Kumar, V.S. and Kinshuk (2017), “Curricular and learning analytics: a big data perspective”, in Daniel, B. and Butson, R. (Eds), Big Data and Learning Analytics in Higher Education, Springer International, Switzerland, pp. 125-145, doi: 10.1007/978-3-319-06520-5_9.
Sampson, D., Karagiannidis, C. and Kinshuk (2002), “Personalised learning: educational, technological and standardization perspective”, Interactive Educational Multimedia, Vol. 4, pp. 24-39.
Seanosky, J., Boulanger, D., Pinnell, C., Bell, J., Forner, L., Baddeley, M., Kinshuk and Kumar, V.S. (2016), “Measurement of quality of a course: analysis to analytics”, in Gros, B., Kinshuk and Maina, M. (Eds), The Future of Ubiquitous Learning: Learning Designs for Emerging Pedagogies, Springer, Berlin, pp. 199-216.
Sharples, M., Adams, A., Ferguson, R., Gaved, M., McAndrew, P., Rienties, B., Weller, M. and Whitelock, D. (2014), Innovating Pedagogy 2014: Open University Innovation Report 3, The Open University, Milton Keynes.
Siemens, G. (2012), “Learning analytics: Envisioning a research discipline and a domain of practice”, paper presented at the 2nd International Conference on Learning Analytics and Knowledge, Vancouver, 29 April-2 May.
Uskov, V., Bakken, J., Heinemann, C., Rachakonda, R., Gudure, V., Thomas, A. and Bodduluri, D. (2017), “Building smart learning analytics system for smart university”, in Uskov, V., Howlett, R. and Jain, L. (Eds), Smart Education and e-Learning 2017, Springer, Cham, pp. 191-204.
Van Harmelen, M. and Workman, D. (2012), “Analytics for learning and teaching”, CETIS Analytical Series, Vol. 1 No. 3, available at: http://publications.cetis.org.uk/2012/516 (accessed December 25, 2017).
Walsh, A. (2011), “Blurring the boundaries between our physical and electronic libraries: location-aware technologies, QR codes and RFID tags”, The Electronic Library, Vol. 29 No. 4, pp. 429-437.
Yahya, S., Ahmad, E.A. and Jalil, K.A. (2010), “The definition and characteristics of ubiquitous learning: a discussion”, International Journal of Education and Development Using Information and Communication Technology, Vol. 6 No. 1, pp. 1-11.
Yassine, S., Kadry, S. and Sicilia, M.A. (2016), “Measuring learning outcomes effectively in smart learning environments”, Conference on Smart Solutions for Future Cities, Kuwait City, Kuwait, February 7-9, pp. 1-5.
Zheng, L., El-Bishouty, M.M., Pinnell, C., Bell, J., Kumar, V. and Kinshuk (2015), “A framework to automatically analyse regulation”, in Chen, G., Kumar, V., Kinshuk, Huang, R. and Kong, S.C. (Eds), Emerging Issues in Smart Learning, Springer, Berlin, pp. 23-30.
Further reading
Boulanger, D. and Seanosky, J. (n.d.), “SCALE: learning analytics”, available at: http://learninganalytics.ca/research/scale/ (accessed December 25, 2017).
Clemens, C. (n.d.), “MI-WRITER: learning analytics”, available at: http://learninganalytics.ca/research/mi-writer/ (accessed December 25, 2017).
Guillot, R. (n.d.), “ART: learning analytics”, available at: http://learninganalytics.ca/research/art/ (accessed December 25, 2017).
Kumar, V.S., Kinshuk, Somasundaram, T.S., Boulanger, D., Seanosky, J. and Vilela, M. (2015), “Big data learning analytics: a new perspective”, in Kinshuk and Huang, R. (Eds), Ubiquitous Learning Environments and Technologies, Springer Berlin Heidelberg, Berlin, pp. 139-158, doi: 10.1007/978-3-662-44659-1_8.
Kumar, V.S., Fraser, S.N. and Boulanger, D. (2017), “Discovering the predictive power of five baseline writing competences”, Journal of Writing Analytics, Vol. 1 No. 1, available at: https://journals.colostate.edu/analytics/article/view/107 (accessed December 25, 2017).
Seanosky, J. (n.d.a), “CODEX, learning analytics”, available at: http://learninganalytics.ca/research/codex/ (accessed December 25, 2017).
Seanosky, J. (n.d.b), “MI-DASH, learning analytics”, available at: http://learninganalytics.ca/research/mi-dash/ (accessed December 25, 2017).
Acknowledgements
This research was funded by the Industrial Research Chair and Discovery Grant programs under the Natural Sciences and Engineering Research Council (NSERC) of Canada.