Search results
Andy Nguyen, Joni Lämsä, Adinda Dwiarie and Sanna Järvelä
Abstract
Purpose
Self-regulated learning (SRL) is crucial for successful learning and lifelong learning in today’s rapidly changing world, yet research has shown that many learners need support for SRL. Recently, learning analytics has offered exciting opportunities for better understanding and supporting SRL. However, substantial endeavors are still needed not only to detect learners’ SRL processes but also to incorporate human values, individual needs and goals into the design and development of self-regulated learning analytics (SRLA). This paper aims to examine the challenges that lifelong learners faced in SRL, their needs and desirable features for SRLA.
Design/methodology/approach
This study triangulated data collected from three groups of educational stakeholders: focus group discussions with lifelong learners (n = 27), five teacher interviews and four expert evaluations. Groups of two or three learners discussed perceived challenges, support needs and willingness to share data, contextualized in each phase of SRL.
Findings
Lifelong learners in professional development programs face challenges in managing their learning time and motivation, and support for time management and motivation can improve their SRL. This paper proposes and evaluates a set of design principles for SRLA.
Originality/value
This paper presents a novel approach to theory-driven participatory design with multiple stakeholders, integrating learners’, teachers’ and experts’ perspectives in designing SRLA. The results of the study show how learners’ voices can be integrated into the design process of SRLA and offer a set of design principles for the future development of SRLA.
Joseph Nockels, Paul Gooding and Melissa Terras
Abstract
Purpose
This paper focuses on image-to-text manuscript processing through Handwritten Text Recognition (HTR), a Machine Learning (ML) approach enabled by Artificial Intelligence (AI). With HTR now achieving high levels of accuracy, we consider its potential impact on our near-future information environment and knowledge of the past.
Design/methodology/approach
In undertaking a more constructivist analysis, we identified gaps in the current literature through a Grounded Theory Method (GTM). This guided an iterative process of concept mapping through writing sprints in workshop settings. We identified, explored and confirmed themes through group discussion and a further interrogation of relevant literature, until reaching saturation.
Findings
Catalogued as part of our GTM, 120 published texts underpin this paper. We found that HTR enables accurate transcription and dataset cleaning, while broadening access to a variety of historical material. HTR contributes to a virtuous cycle of dataset production and can inform the development of online cataloguing. However, current limitations include dependency on digitisation pipelines, potential omission of archival history and entrenchment of bias. We also cite near-future HTR considerations: encouraging open access; integrating advanced AI processes and metadata extraction; legal and moral issues surrounding copyright and data ethics; crediting individuals’ transcription contributions; and HTR’s environmental costs.
Originality/value
Our research produces a set of best practice recommendations surrounding HTR use for researchers, data providers and memory institutions. This forms an initial, though not comprehensive, blueprint for directing future HTR research. In pursuing this, we deconstruct the narrative that HTR’s speed and efficiency will simply transform scholarship in archives.