“Trust us,” they said. Mapping the contours of trustworthiness in learning analytics

Sharon Slade (Independent Researcher, Bradford, UK)
Paul Prinsloo (College of Economic and Management Sciences, University of South Africa, Pretoria, South Africa)
Mohammad Khalil (Centre for the Science of Learning and Technology (SLATE), University of Bergen, Bergen, Norway)

Information and Learning Sciences

ISSN: 2398-5348

Article publication date: 18 October 2023

Issue publication date: 6 November 2023


Abstract

Purpose

The purpose of this paper is to explore and establish the contours of trust in learning analytics and to establish steps that institutions might take to address the “trust deficit” in learning analytics.

Design/methodology/approach

“Trust” has always been part and parcel of learning analytics research and practice, but concerns around privacy, bias, the increasing reach of learning analytics, the “black box” of artificial intelligence and the commercialization of teaching and learning suggest that we should not take stakeholder trust for granted. While there have been attempts to explore and map students’ and staff perceptions of trust, there is no agreement on the contours of trust. Thirty-one experts in learning analytics research participated in a qualitative Delphi study.

Findings

This study achieved agreement on a working definition of trust in learning analytics, and on factors that impact on trusting data, trusting institutional understandings of student success and the design and implementation of learning analytics. In addition, it identifies those factors that might increase levels of trust in learning analytics for students, faculty and broader society.

Research limitations/implications

The study is based on expert opinions; as such, there is a limit to how far the agreement reached represents a true consensus.

Originality/value

Trust cannot be assumed or taken for granted. This study is original because it establishes a number of concerns around the trustworthiness of learning analytics in respect of how data and student learning journeys are understood, and how institutions can address the “trust deficit” in learning analytics.


Citation

Slade, S., Prinsloo, P. and Khalil, M. (2023), "“Trust us,” they said. Mapping the contours of trustworthiness in learning analytics", Information and Learning Sciences, Vol. 124 No. 9/10, pp. 306-325. https://doi.org/10.1108/ILS-04-2023-0042

Publisher: Emerald Publishing Limited

Copyright © 2023, Sharon Slade, Paul Prinsloo and Mohammad Khalil.

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial & non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

Never trust a species that grins all the time. It’s up to something. – Terry Pratchett

Research shows that students (still) trust higher education institutions (HEIs) to protect their data and to use their data in alignment with the fiduciary duty and mandate of higher education (HE) (Slade et al., 2019; Tsai et al., 2021). As HEIs increasingly teach in online spaces that we “rent” from platform providers (Komljenovic, 2021; Sadowski, 2020; Williamson, 2021), using obscure artificial intelligence (AI)-driven analytical tools (Klein et al., 2019a, 2019b) protected by company intellectual property rights (Koshiyama et al., 2022), students may reconsider the trust they place in HE to protect and use their data with their interests at heart.

Many of the discourses surrounding “trust” in the collection, analysis and use of data (whether student data or in general) refer, inter alia, to the respective role of consent, the phrasing and length of Terms and Conditions and whether the collection, analysis and use are within the confines of law (Jones et al., 2020). Since 2019, there has also been an increase in the number of articles covering both “trust” and learning analytics (Jones et al., 2020). With increasing attention and research around associated issues such as bias and transparency in AI (Baneres et al., 2021), and the reality that learning analytics is increasingly unthinkable without AI, it is crucial that we understand “trust” in learning analytics – its contours, the factors impacting on it and what institutions of learning can do to prevent a move from trust to mistrust, distrust or untrust.

While trust has been part of learning analytics research from the beginning (Ferguson, 2012; Greller and Drachsler, 2012), the notion of “trust” was investigated “as is”, without attempting to critically engage with the definition and the different elements or contours of trust (de Fine Licht and Brülde, 2021) in the context of learning analytics. While trust in the context of learning analytics has been connected to, for example, how we understand student (learning) data and to institutional capacity and skill sets, this article also links trust in learning analytics to how institutions understand student success and retention. Though some current research (Tsai et al., 2021) explores how trust in learning analytics might be improved, this Delphi study brings together expert views on how trust in learning analytics can be improved and sustained among students, staff and broader society. As such, this article provides key insights into preventing the trust of students, staff and society from shifting to untrust or mistrust (see the discussion later on Marsh and Dibben, 2005), and offers these insights to a range of stakeholders such as, but not limited to, institutions, researchers and practitioners.

1.1 Understanding trust

Scholars do not generally agree on what trust is, what its primary traits are or how it develops (de Fine Licht and Brülde, 2021). The concept of trust presents “a number of problems and paradoxes […] [and] existing theory is not fully adequate for understanding why there is so much of it, why it occurs, and so forth” (Kimbrough, 2005, p. 1). Trust can be described as a form of reliance that the person or institution will deliver, based on an assessment of relevant competence, motivation and opportunity (de Fine Licht and Brülde, 2021). Trust’s different contours become visible when we consider Marsh and Dibben’s (2005) mapping of trust, distrust and untrust. They define trust as:

[…] the belief (or a measure of it) that a person (the trustee) will act in the best interests of another (the truster) in a given situation, even when controls are unavailable, and it may not be in the trustee’s best interests to do so. (p. 19)

“Mistrust” is defined as “misplaced trust” (Marsh and Dibben, 2005, p. 20), i.e. where the trustee should not have been trusted in the first place or later defaults on that trust. “Distrust,” in contrast, is an indication of how much belief there is that the trustee will actually work against the interests of the truster. Interestingly, Marsh and Dibben (2005) add another variation, that of “untrust”, referring to “how little the trustee is actually trusted” (p. 21; italics in the original). Untrust is not the opposite of trust, and can indicate some trust, but not enough to cooperate.

In the context of education, trust has always been central to the social contract between students, communities, educational providers and governments. However, there are increasing concerns that an “uncritical embrace of technology” subverts trust and goodwill (UNESCO, 2022, p. 9), and a new social contract for education is proposed based on principles of “non-discrimination, social justice, respect for life, human dignity and cultural diversity” encompassing “an ethic of care, reciprocity, and solidarity” (p. 4). As the research focus and practice of learning analytics mature, we should not shy away from exploring how the increasing datafication of learning and teaching changes the social contract and, specifically, trust, in the relations between students, the institution and broader society.

1.2 Trust in learning analytics: a brief overview

Research into trust in learning analytics covers a range of foci including, but not limited to, the role of HEIs as “information fiduciaries” (Jones et al., 2020); the role of stakeholders (e.g. teachers, students) in the design and implementation of learning analytics (Nazaretsky et al., 2022; Rodríguez-Triana et al., 2018; Slade et al., 2019); criteria for measuring the trustworthiness of learning analytics and specifically, AI in learning analytics (Baneres et al., 2021; Veljanova et al., 2022); and interrogation of the broader context and interests in the commercial value of the platformization of learning and of student data (Klein et al., 2019b).

One of the earliest studies emphasizing how the implementation process impacts on trust in learning analytics is the DELICATE framework and checklist by Drachsler and Greller (2016). Interestingly, Drachsler and Greller (2016) use “trust” in a variety of ways such as “trusted implementation” (p. 89), referring to the checklist “to support educational organizations in becoming trusted Learning Analytics users” (p. 90; italics added), “Trusted Knowledge Organizations” (p. 94; italics added), “trust in the service” (p. 95) – the latter referring to the obfuscation surrounding the black box of algorithms in the context of Google – and ensuring protection from abuse and “treatment in a trustful way” (p. 96). Ensuring the involvement and support of all stakeholders in the planning and implementation of learning analytics will prevent “a monitoring and surveillance system that will destroy any trusted teacher-learner relationship and scare(s) them of making mistakes and learn from them” (Drachsler and Greller, 2016, p. 96).

Jones et al. (2020) note that learning analytics are “economical and political tools” (p. 4) in helping “institutional actors run a highly bureaucratic institution that relies on vast and varied resources” (p. 5) and frame the responsibility of HEIs as “information fiduciaries” (p. 2). As information fiduciaries, HEIs have an obligation to advance student interests, support the educational mission of the institution, use data only when consistent with principles of intellectual freedom and use data only in a way that is consistent with profound trust between student and institution (Jones et al., 2020, p. 17).

A seminal article in the context of mapping trust in learning analytics is the research by Tsai et al. (2021) who state that the adoption of learning analytics in “complex educational systems is woven into sociocultural and technical challenges that have induced distrust in data and difficulties in scaling LA” (p. 81). This research focused on comparing perspectives of students and staff on trust in learning analytics in the context of one HEI.

Tsai et al. (2021) provide two examples of trust issues in learning analytics. With regard to the first, the subjectivity of numbers, the authors mention a range of concerns such as the dimensionality of data, how preprocessing and transformation of collected data may change its value/meaning, the measurement of behavior that can be quantified, reductionist approaches, the potential of digital discrimination “in a relentless process of classification and categorization for the purpose of inclusion or exclusion” (p. 83) and the inherent decontextualization in the analysis and presentation of data. The second issue impacting on trust in learning analytics derives from the “fear of power diminution” (p. 83) emerging from data extraction as a one-way process, and the normalization of surveillance as being “contradictory to educational values” (Tsai et al., 2021, p. 83), and threatening student agency.

Addressing the trust deficit in learning analytics, Tsai et al. (2021) discuss a number of inclusive approaches, such as involving a range of stakeholders and subgroups, especially teachers and students, in the decision-making processes surrounding the design and implementation processes, tool development and procurement processes. They suggest that codesigning models in the development and implementation of learning analytics would contribute significantly to ensuring trust in learning analytics.

Tsai et al. (2021) furthermore found that staff felt caught in the middle between management and student expectations and that the implementation of learning analytics did not consider their capacity to engage. Learning analytics was seen as adding workload, with staff also concerned about a lack of consultation, and the dangers that learning analytics might be used against teachers in course evaluations when not being complemented with “professional knowledge of learning and teaching design” (p. 89). Students expressed distrust “toward external parties that offer services to the university through accessing and processing data about students” (p. 90). The researchers also noted concerns about equity of treatment (with some students losing out on opportunities or excluded from resource allocation); the impact of surveillance; and how learning analytics might impact learner autonomy by taking away decisions from students and/or by spoon-feeding them. Another thread highlighted the potential misuse of learning analytics and the limitations of the quantification of learning.

Klein et al. (2019a) report on a number of factors that impact the adoption of learning analytics and found that the lack of a trustworthy data infrastructure was “the most commonly reported barrier to adoption” (p. 613). Untrustworthy data infrastructures were defined as a “proliferation of multiple and unintegrated campus technologies, which were deemed cumbersome and misaligned with user needs,” as well as “frequent lack of technological integration and accurate and timely data” (p. 613). They also refer to the impact of inadequate integration between systems on trust and how the lack of access to accurate easy-to-use data through effective visualization can erode trust. The increasing use of AI tools in learning analytics and specifically the use of AI in predictive modeling has contributed to a “notable lack of trust” (p. 616). Important in determining the trust deficit in learning analytics were concerns about the “absence of context” and how “data might be used to shape future outcomes” (p. 617).

In another article highlighting the impact of organizational factors on trust in learning analytics, Klein et al. (2019b) report that “organizational context and commitment, including structures, policies, processes, and leadership impact individual decisions to trust and adopt learning analytics tools” (p. 565). Interestingly, these authors also report that learning analytic tools are often underutilized or not mandated due to a lack of confidence in institutional decision-making where there is no overarching strategy for the procurement of technologies (Klein et al., 2019b). They also flag a perception that university administrators:

[…] are under the sway of vendor marketing; value learning analytics tool data over human resources and professional skills; and do not have a clear plan for implementation and use of these tools. (p. 585)

The authors propose that intentionally building trust is foundational in the successful implementation of learning analytics and that it is crucial to include the voices of all stakeholders and to clarify needs, roles and expectations instead of applying a top-down approach to building trust. Including all stakeholders should ensure consideration of workloads, the need for professional development and rewards and incentive schemes. Trust in learning analytics emerges at the intersections of:

[…] individual needs, interests, and trust and […] institutional capacity and readiness to create an environment in which use of these tools is valued, encouraged, and becomes a part of the organizational context. (Klein et al., 2019b, p. 590)

Several articles explore students’ perceptions of trust in the context of learning analytics. For example, de Quincey et al. (2019) investigated students’ perceptions and found that student trust and feelings of dependability were variable. The variability is attributed to:

the number of disclaimers that were included in the system as to the accuracy of the predictions and also the scores for the motivators were limited by the data available. (p. 361)

The authors conclude that students “will only engage with a system if they trust the data and understand how ‘scores’ are calculated” (p. 362).

Slade et al. (2019) discuss students’ willingness to exchange their data for benefits (real or perceived) in terms of their “trust in a particular context and their (real or perceived) control of their data” (p. 235). Their findings include reporting on students’ high levels of trust in the Open University (UK) and establish a link between the importance that students allocate to having control over the data collected and the purposes of that collection. Students’ trust in the Open University (UK) was based on the belief that the institution would “not use their personal data inappropriately or to share their data with third parties” (p. 242). Students’ trust in the institution outweighed more general privacy concerns. In the context of high levels of trust in the institution’s reputation and track record, students’ desire to have and maintain control of data becomes less important. The latter finding is confirmed in the research by Jones and Afnan (2019) who found that students’ trust in the university was based on the belief that “universities are seen as good institutions who employ trustworthy people” (p. 683). Though students trust universities, the research points out that this trust needs to be sustained through ensuring secure data and information systems to prevent breaches, controlling the sharing of the data and limiting the collection of data for learning analytics purposes to what is considered to be “useful […] but not personal” (p. 683). The research also made it clear that not informing students of the collection and use of their data and the selling of student data would “greatly decrease their trust” (p. 683) (also see Jones et al., 2020).

Finally, there are several articles attempting to map criteria for measuring trust in the context of learning analytics. For example, Nazaretsky et al. (2022) set out a number of dimensions that influence teachers’ trust in AI-based educational technology, namely, perceived benefits; the lack of human characteristics in AI-based educational technology; a lack of transparency; anxieties related to using AI-based educational technology; self-efficacy; the required shift in pedagogy resulting from using AI-based educational technology; AI-based EdTech’s perceived lack of transparency; preferred means to increase trust in AI-based educational technology; and the implications of the trade-offs between human advice and recommendation and AI-based educational technology. Of interest to this study are the following findings: the fear that adopting AI-based educational technology would mean significant changes in teachers’ practices and increases in workload, and the finding that teachers place less trust in AI-based educational technology than in human expertise and advice (also see the criteria proposed by Veljanova et al., 2022; the work by Baneres et al., 2021, on trustworthy AI; and the proposal of a Code of Practice for the development of trust in AI by Welsh and McKinney, 2015).

1.3 Research gap

The previous section provided a brief overview of some of the existing research into trust in learning analytics. This review established a number of concerns around the trustworthiness of learning analytics in respect of how data and student learning journeys are understood, the design and implementation of learning analytics and, more generally, how institutions can address the “trust deficit” in learning analytics – especially with regard to staff, students and broader society. What seems to be lacking is an exploration of understandings of trust and agreement on the contours of trust in learning analytics (i.e. on the elements and the importance of separate and mutually constitutive elements of trust). Though there is some research on student and staff perceptions of the “trustworthiness” of learning analytics, these perceptions are not explored in the context of broader societal concerns about the impact of the trustworthiness of learning analytics on the social contract between society, HEIs and students.

2. Methodology

2.1 Research design

Not all research methodologies are equally suited for all types of research questions and objectives (Al-Ababneh, 2020), and we acknowledge that in the case of mapping the contours of trust, a variety of methodologies may have provided informative results, albeit providing different contributions. The benefits of the Delphi technique include helping researchers to identify variables and develop propositions; helping to build and/or confirm theory; enabling experts to justify their views; and contributing to construct validity (Okoli and Pawlowski, 2004).

Although Delphi studies are often used as a quantitative technique, they are considered “well suited to rigorously capture qualitative data” (Skulmoski et al., 2007, p. 9). The approach has been described by Linstone and Turoff (1975) as a way to organize group communication efficiently to allow the group to deal with a difficult or complex topic as a collective. The four key features of Delphi, according to Skulmoski et al. (2007), are the anonymity of the participants; an iterative process; controlled feedback in which participants are informed of other participants’ perspectives; and an analytical aggregation of the group response.

2.1.1 Sampling size and criteria.

Skulmoski et al. (2007) propose a number of factors for consideration in deciding on sample size, such as having a heterogeneous or homogeneous sample (e.g. homogeneous groups require smaller samples, and bigger samples increase the complexity and difficulty of, e.g. reaching consensus). Bigger samples may reduce group error but are likely to increase the challenges in managing the process. Smaller samples often require follow-up research to verify the findings. Okoli and Pawlowski (2004, p. 19) state that:

The Delphi group size does not depend on statistical power, but rather on group dynamics for arriving at consensus among experts. Thus, the literature recommends 10–18 experts on a Delphi panel. (Also see Hsu and Sandford, 2007)

2.1.2 Purposeful sampling of expert panel.

Yousuf (2007, p. 87) states that “The information obtained by the Delphi study is only as good as the experts who participate.” According to Skulmoski et al. (2007), there are four requirements for expertise, namely, evidence of knowledge and experience of the issue under investigation; capacity and willingness to participate; sufficient time to participate; and effective communication skills. With this in mind, a purposive sampling approach was followed to identify published authors with a clear interest in trust in the context of learning analytics.

A search was conducted on March 25, 2022, on Web of Science and Scopus with search terms “trust” AND “learning analytics” of published journal articles and conference proceedings in the English language. WoS produced 52 articles and Scopus 47. The list of authors, article title, keywords and abstract were exported to Excel and the abstracts analyzed to establish the extent to which trust was discussed in the context of the article. The final corpus of articles resulted in an initial population sample of 136 authors. The email addresses of 121 authors were traced via publicly available data, and an initial invitation to participate in the Delphi study was sent on August 22, 2022. After removing those for whom a long term out of office or undeliverable message was received, 99 invitees received the survey invitation.

2.1.3 Methodological norms – reliability and validity/trustworthiness.

Hasson et al. (2000) propose that a Delphi study cannot be considered reliable because different samples of experts may produce different results. However, Hasson and Keeney (2011) suggest that the approach can be enhanced by focusing on credibility, dependability, confirmability and transferability. In this study, these were achieved via clear communication and feedback to participants; rigor in the selection of researchers with published peer-reviewed research on trust in learning analytics; and by keeping an audit trail of every step. This latter included documenting the selection process of the population sample, an interactive, documented process between the authors in formulating and finalizing the surveys for both rounds, piloting the survey for Round 1 and adapting as needed, and agreement around the coding of emerging themes.

2.1.4 Reaching consensus.

McPherson et al. (2018) report that there are varying opinions around consensus in Delphi studies – ranging from a bare majority (50%+) of agreement to 100% agreement. They suggest that 100% agreement is practically impossible, and also unnecessary, in establishing both the nuances and the importance of a particular topic. Hsu and Sandford (2007) refer to consensus as having a certain percentage of experts supporting a statement and propose that at least 70% need to rate three or higher on a four-point Likert scale (1 = strongly disagree – 4 = strongly agree).

For the purposes of this research, we adopted a five-point Likert scale, allowing a midpoint neutral option. The authors of this study agreed that consensus could be assumed where there was an interpolated median of at least 4 and disagreement was below 25%. An interpolated median (which takes account of the spread of data around the median) was adopted to provide a view of the center value of the level of agreement while also giving an indication of the weight of the data around the true median. The interpolated median is recommended for interpretation of results from an ordinal scale.
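By way of illustration, the short sketch below (our own, not part of the study's analysis pipeline) computes an interpolated median for a set of Likert ratings, treating each integer category as a class of unit width; the example distribution is illustrative rather than drawn from the survey data.

```python
from collections import Counter

def interpolated_median(ratings, width=1.0):
    """Interpolated median for ordinal (e.g. Likert) ratings.

    Each integer category is treated as a class of the given width; the
    median is interpolated within the class in which the n/2-th response falls.
    """
    counts = Counter(ratings)
    n = len(ratings)
    cumulative = 0
    for category in sorted(counts):
        f = counts[category]                 # responses in this class
        if cumulative + f >= n / 2:          # the median lies in this class
            lower = category - width / 2     # lower class boundary
            return lower + ((n / 2 - cumulative) / f) * width
        cumulative += f

# Illustrative distribution (not study data): 1 "disagree", 4 "neutral",
# 12 "agree", 11 "strongly agree"
example = [2] * 1 + [3] * 4 + [4] * 12 + [5] * 11
print(round(interpolated_median(example), 2))  # 4.25, above the plain median of 4
```

Because responses in the example skew toward agreement, the interpolated value sits above the plain median, which is exactly the extra information about the spread of responses that the measure is intended to convey.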

2.1.5 The Delphi process.

Following the identification of the panel of experts, the researchers established the willingness of the experts to participate. Of the 99 individuals invited, 34 agreed to respond within the suggested time frames. The researchers considered the relative homogeneity of the group of experts, and the aim of understanding the nuances or contours of trust in learning analytics and opted for a two-round Delphi study with the option of a third round should consensus not be achieved within two rounds (Skulmoski et al., 2007).

Thirty-one participants fully completed Round 1 via a Qualtrics survey. The responses were analyzed inductively and deductively (Elo and Kyngäs, 2008). The results of Round 1 informed the survey for Round 2 sent out on September 12, 2022. In line with the objectives of Delphi studies, the aim of Round 2 was to establish the extent to which there was consensus on the findings of Round 1. Thirty-one participants completed Round 2, of which three had not participated in the first round. In the final analysis, these responses were included as they did not significantly impact the results. This is supported by research from Boel et al. (2021) who found that the inclusion of new responders who had missed earlier rounds of a Delphi study can lead to an improved representation of opinions and can reduce the chances of false consensus, while also not unduly influencing the final outcomes.

2.2 The Delphi Round 1 survey

The literature review pointed to a number of specific issues in the context of trust in learning analytics, and the survey questionnaire was designed with these in mind. Round 1 of the Delphi process began with an exploration of participants’ own understandings of trust and of the different elements of trust.

As has been clear since the emergence of learning analytics as a unique research focus and practice, the collection, analysis and use of student data are in service of understanding and improving students’ learning and the different factors and circumstances that impact students’ learning, retention or dropping out (Gašević et al., 2015). There is a direct link between the choices made regarding data collection and our understanding of students’ learning and the different factors that impact students’ success. Therefore, the second part of the first survey (Round 1) explored our trust in how our institutions actually understand student learning.

Because data is central to learning analytics, it was natural to explore participants’ understanding of factors that might impact our trust in the data we collect. In addition, the literature review highlighted that trust in learning analytics can be impacted both by the design and planning for the implementation of learning analytics (e.g. stakeholder involvement, expertise, the tools [e.g. AI], digital infrastructures, etc.) and by the operationalization and uses of learning analytics in institutions (e.g. expertise, methods of analysis, dashboards, responsiveness, impact, etc.).

The final survey section in Round 1 explored ways in which trust in learning analytics might be increased for three stakeholder groups, namely, students; faculty and instructors; and broader society.

Following Hasson et al. (2000), contributors were given opportunities to add additional comments after each question.

Each question asked for three to five text responses and then asked for those answers to be ranked in terms of importance (e.g. “What three (3) to five (5) things could institutions do to increase students’ trust in learning analytics?” and “Rank these factors from most to least impact.”). The results from Round 1 were downloaded from Qualtrics as a series of user-generated text with associated rankings.

2.2.1 Initial analysis from Round 1.

Thirty-one participants completed Round 1 of the survey. Although respondents were given a broad range of options to select from in terms of their role and educational context, the responses showed that almost all identified as researchers within the learning analytics community (29), with the remaining two identifying as faculty members. Information regarding discipline was not collected. Similarly, 27 declared their context as further/HE; 3 as K-12/primary and secondary education and 1 as having no specific area. Twenty-one respondents stated that they had more than five years of experience working in the field of learning analytics, with the remaining 10 selecting less than five years. This does highlight a shortcoming of the Delphi design: by selecting known experts in the field based on their publications, those represented were likely to be researchers, rather than practitioners, in their primary role. Having said that, there is no simple way of identifying the “expert practitioner,” either in terms of their specific role (the individual who designs the algorithms, the analyst or the educator who uses these analyses to inform teaching?) or in terms of locating a representative sample of practitioners.

Two respondents worked at institutions where learning analytics was fully operationalized across the institution; 14 at institutions with partially operationalized learning analytics (used on some courses/by some instructors/faculty); 3 in institutions which were planning to operationalize shortly; 4 where the viability of learning analytics was under consideration; and 8 at institutions with a learning analytics research focus only.

An inductive approach was taken to the thematic analysis of the question responses. Any issues of understanding were often resolved with the help of the additional comments provided. For each question, text responses were collated, and clear themes sought. These were cross-checked between the authors until consensus was reached. Once the original responses were mapped onto the themes, it was possible to establish the frequency of each theme and its overall importance within the group of themes. Importance was determined by sorting according to the number of times a theme was chosen as the most important factor, then by the number of times it was chosen as the second most important and so on.
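To illustrate the ordering rule, the sketch below (our reconstruction, using purely hypothetical tallies rather than the study's actual counts) sorts themes by the number of first-place mentions, breaking ties by second-place mentions and so on.

```python
# Hypothetical tallies: counts[theme] = [times ranked 1st, 2nd, 3rd, 4th, 5th]
counts = {
    "Theme A": [5, 4, 2, 1, 1],
    "Theme B": [5, 2, 3, 1, 0],
    "Theme C": [3, 6, 1, 0, 0],
}

# Python compares the count lists position by position, so sorting in reverse
# order yields "most first-place mentions first", with ties broken by
# second-place mentions, then third, and so on.
ordered = sorted(counts, key=counts.get, reverse=True)
for rank, theme in enumerate(ordered, start=1):
    print(rank, theme)   # 1 Theme A, 2 Theme B, 3 Theme C
```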

2.3 The Delphi Round 2 survey

The findings from Round 1 of the study were used to produce the second survey which simply reflected back each of the questions with the most highly ranked themes. Participants were then asked to use a 5-point Likert scale to indicate their agreement with the themes raised and their rankings. The scale options were 1 = strongly disagree; 2 = disagree; 3 = neutral; 4 = agree; and 5 = strongly agree.

3. Analysis and findings

Based on all of the responses from Survey 2, Cronbach’s α was calculated as 0.93, indicating strong internal consistency of the survey instrument.
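For readers less familiar with the statistic, the sketch below shows the standard Cronbach's alpha calculation for a respondents-by-items matrix of scores; the function, variable names and toy data are illustrative and do not reproduce the study's actual analysis.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of survey items
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item across respondents
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Illustrative 4-respondent x 3-item matrix of Likert scores (not study data)
example = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2]]
print(round(cronbach_alpha(example), 2))  # about 0.94 for this toy matrix
```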

In Round 2, there were also 31 participants, three of whom had not completed Round 1. Twenty-eight complete surveys were received. In this round, participants had an opportunity to reflect on the profile of the group. One contributor remarked on the absence of “student voice, student support and management” in the survey. Another flagged that the predominance of researchers suggests “a potential disconnect between people who are having to *do* learning analytics […] and those studying them.” These were both valid points.

For each question, a summary of agreement against the Likert scale 1–5 was established, and an interpolated median was calculated to reflect the extent of agreement (or otherwise).

3.1 Defining trust

Based on an analysis of respondents’ descriptions in Round 1, the following description of trust was offered to respondents in Round 2:

Trust is subjective, psycho-social, relational and often asymmetrical and founded on the character/values/credibility and track record/consistency/expertise of the person/organization requiring our trust. The level of trust can be influenced by the transparency of the process/requirements and predictability of the envisaged outcomes, and a belief in fairness and benevolence (non-malfeasance).

Participants were asked to indicate their agreement with the description on a scale from 1 = strongly disagree to 5 = strongly agree. The interpolated median was 4.15, suggesting agreement with the provided description of trust. There was no disagreement (see Figure 1).

3.2 Elements of trust

In Round 1, participants were asked to provide a list of what they considered to be the main elements of trust with no specific context. The responses were inductively analyzed and coded into broader themes. In Round 2 of the Delphi process, participants were requested to indicate their agreement with the ranking of the themes.

Table 1 provides a summary of the most important elements of trust as determined by the participants. Reliability or consistency was most highly ranked (comments included: “we can trust that someone will do something because they always do it” and “confidence in the reliability of the person or organization”). The concept of beneficence (do no harm) was ranked as the second most important element (e.g. “to accept that the other party will behave in a way that results in an acceptable outcome” and “attention to integrity […]”).

The interpolated median for agreement for the ranking shown in Table 1 was 3.71. This is perhaps not surprising. Determining what trust means to each of us in terms of the relative importance of components of a definition is difficult. One participant remarked that “Honesty and transparency should come first,” while another stated “How is transparency an element of trust?” As many commented throughout the survey, context is key.

The remaining sections follow a similar format. The tables summarize the most important themes (generated from a coding exercise) based on their individual rankings and frequency and illustrate the percentages of those who agreed/disagreed with the rankings.

3.3 Trust in institutional understanding of factors that impact student learning, retention and dropout

An unexplored issue in researching trust in the context of learning analytics is the link between trust and how institutions understand student success. How HEIs understand student learning and the different factors that impact student success determines what data will be collected and how the data will be analyzed. Understanding the probability of student success only as a result of students’ efforts, rather than seeing student success as found in the nexus of a variety of often mutually constitutive factors involving students’ previous learning experiences, loci of control and self-efficacy, institutional and departmental efficiencies, timetables and pedagogical approaches, as well as macro societal factors impacting on both institutions and students, impacts on how far we can trust learning analytics (Archer and Prinsloo, 2020). As such, this question sought to clarify issues which affect our trust in how institutions understand factors that contribute to student outcomes and learning (see Table 2). The most important factor here was considered to be competency, that is, having a good understanding of relevant factors, of what the data is telling them (and what gaps there are) and of students’ lives. Transparency (e.g. regarding what data is collected) was frequently mentioned, as was the capability of the institution to contribute to successful student outcomes, whether through sufficiency of resources or of understanding. One respondent commented that:

It's very important that we forefront the question of what we do with this information – what resources are available to support students who are failing? We have to talk about that and plan for that before we have a functional system. The two go together.

Institutional understanding of relevant learning theory also ranked highly here. One participant talked about institutions being wedded to “certain teaching methods,” and another mentioned “assumptions considering learning (behaviorist, constructivist, […]).” Several others commented on the importance of privacy and ethics and on the need for a stronger understanding of the student and their social context. Broad agreement was reached for these themes (interpolated median = 3.88) with only 4% (one respondent) indicating disagreement.

3.4 Factors impacting on trustworthiness of data

This question elicited views on issues relating to the trustworthiness of the data used as part of learning analytics (Table 3). A frequently emerging theme was the need for a sufficiently complete and/or relevant data set (this included, e.g. the use of proxies to substitute for student behaviors). Data completeness in this context was usually taken to mean sufficiency, i.e. having enough information on which to make reasonable decisions. Data completeness was also ranked as the most important issue here, followed by the need for an ethical framework (including, e.g. issues around consent). Understandably, another key issue related to the competence of data analysts and learning analytics practitioners in making sense of data and outputs. Other important themes included data stewardship and concern around quantification, i.e. in taking care not to reduce complex behaviors (or indeed, students) to simple numbers. Agreement around the factors emerging from Round 1 had an interpolated median of 3.71. Those disagreeing totaled around 22%. The findings from Round 1 of the study for this particular theme were perhaps the most contentious. To aid our understanding of this, several respondents commented, with one stating “we trust LA IF it is possible to quantify behaviors to measurements. And we know that this is a reductionist approach by definition,” another that “Competence in data analysis is the most important element, understanding student success is also key,” and further that “the whole problem: ‘reducing’ […]. [it is] worrying.” Despite a lower level of agreement for this topic, no further additions were made to the themes already raised – rather, those selecting a neutral or disagree response were promoting an alternative rank order.

3.5 Trust in the planning and design of learning analytics in institutions

This question focused on the planning and design of learning analytics processes, considering, for example, stakeholder involvement, staff expertise, the tools and digital infrastructures (Table 4). Understandably, stakeholder engagement scored highly, both in terms of frequency of mention and importance. Many also felt that a thorough examination of purpose was crucial but often missing at the design stage, e.g. “I’m not seeing any real emphasis on the use case, or how the data will be used to benefit students.” The other key theme here was the need to have a strong understanding of the approach to learning analytics and the theory underlying the approach taken. Support for the themes was fairly strong, although several commented on the absence of considerations of privacy as a theme and the clear need to have an ethical framework at this early stage. The interpolated median agreement was 4.04, with disagreement with the key themes at around 11%.

3.6 Trust in the operationalization and use of learning analytics in institutions

With regard to trust in the operationalization of learning analytics, the capabilities of staff and students to use learning analytics were most frequently mentioned (Table 5). The two most important elements were transparency (regarding purpose, data capture and use) and understanding of the actual problem and/or of the context. Interestingly, the need to implement meaningful consent and the existence of a clear ethical/legal framework were also prominent here. Although it was considered important, practical concerns around implementing consent were apparent from the comments, e.g.:

(i) People don't know what they are consenting to, (ii) often consent is assumed from a checkbox or proceeding with a process, (iii) is it to be given once for everything? What is it precisely that is being consented to? So for operationalization I think it is easy to say but very hard to do

and:

I think consent (which is needed) is done in a very automatic fashion now, and then, it does not add a lot of trust (although of course, it is needed and a basic element).

Many participants flagged issues around whether their institution fully understands the benefits and the limitations of learning analytics, with one user commenting that institutions can have “inflated expectations of what learning analytics can offer.” Agreement for the ranked themes was high (interpolated median = 4.14).

The remaining questions examined practical actions that institutions might take to increase trust in learning analytics from a range of stakeholder perspectives. The first of these considered the student perspective.

3.7 What might institutions do to increase student trust in learning analytics

There was a high level of consensus here, with most respondents agreeing that involving students in the design and development process was the most important element of increasing student trust (Table 6). Transparency and active communication were also key. Others highlighted the difficulties in allowing students control over their own data, e.g. “While I agree in theory, it's unlikely (at least in my experience in the US) that students would be able to control their data or results.” Agreement around the themes had an interpolated median of 4.32.

3.8 What might institutions do to increase instructor and/or faculty trust in learning analytics

Given the composition of Delphi respondents, it is understandable that this section had mixed views and provided a variety of concerned comments alongside the issues raised (Table 7). Although the more frequently mentioned themes were broadly practical suggestions, the most important theme was considered to be the involvement of academic staff at the design, implementation and improvement stages. Also highly ranked was the suggestion that a better understanding of the pros and cons of using learning analytics would be helpful. Interestingly, a moratorium on using analytics to assess staff was also mentioned, although it did not rank highly. Comments reflecting the latter issue included: “We all know that if we let them surveil the students, it'll just turn into faculty surveillance, ever aimed at squeezing more out of us”; “[to] commit to moratorium would help to avoid conflict, but not to increase trust”; “Not sure optional adoption by instructors is fair on students. Is that ethical?” and “Moratorium on staff analytics? What is sauce for the goose is surely sauce for the gander?” Agreement for the themes had an interpolated median of 3.77.

3.9 What might institutions do to increase public trust in learning analytics

Finally, respondents were asked to review the themes emerging from a question asking what more could be done to increase public trust in learning analytics (Table 8). The most important themes here were to (better) communicate a clear purpose, to adopt an ethical approach and to be able to communicate the potential benefits of using learning analytics. Some respondents had significant concerns about involving public stakeholders, wanting to ensure demonstrable independence from third-party software vendors. Agreement around the important themes had an interpolated median of 4.14.

4. Discussion

All of the themes raised in Round 1 achieved broad consensus (an interpolated median of at least 4 and disagreement below 25%) in Round 2.

Significant, and as far as we could assess the first attempt to define trust in the context of learning analytics, is the consensus (86% of the experts agreed, with 14% being neutral) on a broad definition of trust in the context of learning analytics. The definition aligns well with the issues raised by de Fine Licht and Brülde (2021) and Marsh and Dibben (2005), but the 14% neutrality may point to the possibility that any definition does not completely erase some of the problems and paradoxes in understanding “trust” (Kimbrough, 2005).

While the literature points to the impact of the lack of theory in learning analytics (Gašević et al., 2019), what has not yet been explored is how institutions’ understanding of student success, and of the interrelationships of factors that may impact student success, shapes how trustworthy learning analytics is (Archer and Prinsloo, 2020). There is a danger that we assume the data will speak for itself and that institutions do not need to have a research-informed understanding of student success (Prinsloo and Slade, 2019). Currently, the dominant theory in learning analytics research is self-regulated learning (Khalil et al., 2022) and, while it falls outside the scope of this article to explore how learning analytics would play out should student learning be understood differently, Broughan and Prinsloo (2020) suggest that the type of data collected may provide a glimpse of how institutions understand student learning.

In breaking down issues relating to trust in learning analytics, perhaps the most contentious issue was that of factors impacting the trustworthiness of the data used (Table 3) – themes emerging here achieved the least consensus of all of the questions, whereas there was greater agreement around, for example, things institutions might do to improve student trust. Interestingly, while concerns about the platformization of HE (Komljenovic, 2021; Williamson et al., 2020) and the political economy of data (Prainsack, 2020; Sadowski, 2020) are well documented, the explicit outsourcing of learning analytics to commercial vendors did not emerge as a concern. What was mentioned, and what may have served as a proxy for concerns about the impact of platformization and the outsourcing of learning analytics to commercial vendors, were issues pertaining to ethics and data stewardship [as discussed by Klein et al. (2019b)].

As in previous research (see, e.g. Drachsler and Greller, 2016; Jones et al., 2020), themes which emerged throughout were around (clarity of) purpose; transparency (of process, of data used, etc.); data adequacy (sufficiency and appropriateness); involvement of stakeholders (at all stages); and the existence of an ethical framework. There is no assumption that improvements in these areas would be sufficient to improve trust in all contexts, but they might at least be considered primary themes to address when considering the implementation of learning analytics. With regard to ethical frameworks, for example, there is evidence (Joksimović et al., 2022) that few institutions have formally implemented policies which explicitly address such issues despite there being a reasonable number of exemplars from which to draw.

The large majority of participants in this study were academic researchers. In terms of trusting others with the adoption of learning analytics, there appeared to be some doubt around both the ability of the institution (and presumably, its staff) to understand the theoretical framework driving uses of learning analytics and the institutional capability to apply it effectively (e.g. “Whether management understands the complexities and interdependencies impacting on student success”). Concerns were also consistently expressed that key drivers were financial rather than pedagogical and that there was a reliance on focusing only on things that could be counted. These findings confirm the research by Tsai et al. (2021) and Klein et al. (2019b).

Respondents were given a final opportunity to share their thoughts at the end of the Delphi process. Many of these focused on the inherent untrustworthiness of learning analytics and expressed concern that this view is not widely shared by others. One respondent commented that she/he was:

Disappointed but not surprised that the respondents (presumably most of whom are LA practitioners) seem to assume that LA is trustworthy, so what do we need to do to convince stakeholders of that?

while another reflected that:

[…] a little bit more honesty would be welcome. The problem is that an awful lot of LA is not trustworthy, so getting stakeholders to trust it would be dishonest and unethical.

These sentiments explain the higher levels of disagreement in Table 3 (Factors impacting on trustworthiness of data), Table 4 (Trust in the planning and design of learning analytics implementation) and Table 7 (What might institutions do to increase instructor and/or faculty trust in learning analytics).

The unease expressed by this panel of experts brings to mind the opening quote of this article – “Never trust a species that grins all the time. It’s up to something” (Pratchett, 1989).

5. Conclusion

The tracks of a leopard are not made by a dog – African Proverb

At the start of the article, we explored the heuristic of trust provided by Marsh and Dibben (2005) – trust, mistrust, distrust and untrust – in an attempt to better understand the notion of a trust deficit in learning analytics. In the past, students trusted institutions without explicit controls or guidelines in place on which to base that trust. There is, at present, no evidence of widespread distrust or misplaced trust among students, although the increasing platformization of HE, dataism and the outsourcing of analytics services raise concerns among faculty and broader society (Jones et al., 2020). HE may be at the point where we should consider the impact of “untrust”, which is not the “opposite of trust, [but] can indicate some trust, but not enough to cooperate with you” (Marsh and Dibben, 2005).

With the platformization of education, commercial interests in student data and the increasing use of AI raising concerns (UNESCO, 2022), we should not assume that trust can be taken for granted. Calls for control, transparency and oversight are growing, with trust increasingly replaced by an apprehensiveness or “untrust.” As in the quotation above, when we surveil the “tracks” of learning analytics, there is some nervousness that they may be left by something more ominous and dangerous.

The summary definition of trust emerging from this study is that it is multifaceted, complex and context-dependent. Even within this (largely) homogeneous stakeholder group of academic researchers, there remain fundamental concerns about whether learning analytics may be considered trustworthy at all. Perhaps most worryingly, key factors related to trust in learning analytics emerging here demonstrate that things have changed little in the past decade or so. Perhaps one message here is that those influencing the field should take on a more active role in engaging with the gatekeepers of change.

Students have a right to know what data are collected, by whom, for what purpose and under what conditions their data will be shared (Slade et al., 2019). With the social contract between students, society and HE being reshaped by commercial interests and dataism (Williamson, 2021), there is a need for student data activism and “returning the gaze” (Thompson and Prinsloo, 2023). Faculty and broader society have to hold HEIs to account to uphold the social contract and, if need be, to renegotiate the social contract between society and HE (UNESCO, 2022).

Without trust, there can be no social contract between educational institutions, students, staff and broader society. There is much at stake.

6. Limitations

The Delphi technique is not without its limitations as the consensus reached in a Delphi study “may not be a true consensus; it may be a product of specious or manipulated consensus” (Yousuf, 2007, p. 85). Other limitations include that one cannot generalize from the findings and that extreme positions are eliminated in favor of a “middle-of-the-road” consensus (Barnes, 1987). Delphi studies:

[…] offer a snapshot of expert opinion, for that group, at a particular time, which can be used to inform thinking, practice or theory. As such, Delphi findings should be compared with other relevant evidence in the field and verified with further research to enable findings to be tested against observed data to enhance confidence. (Hasson and Keeney, 2011, p. 1701)

We acknowledge that selecting the expert panel on the basis of individuals who have published on trust and learning analytics provides a specific lens on the findings.

Figures

Figure 1. Levels of participant agreement with the suggested definition of trust

Table 1. Summary of the elements of trust identified in Round 1

Ranked as:
Rank Element Most important 2nd 3rd 4th Least important
1 Reliability/consistency 5 4 2 1 1
2 Beneficence 3 2 2 3 2
3 Transparency 3 1 3 4
4 Value alignment 3 1 1
5 Honesty 2 2 2 1
6 Equitability 1 1
7 Morality 1 1
8 Accuracy 1

Note: Round 2 % agreement for ranking of themes: Strongly disagree 4%, Disagree 14%, Neutral 29%, Agree 39%, Strongly agree 14%
Source: Table by authors

Table 2. Trust in institutional understanding of factors that impact student learning, retention and dropout

Ranked as:
Rank Element Most important 2nd 3rd 4th Least important
1 Competency 6 1 1 1 1
2 Transparency 3 6 3
3 Understanding of learning theory 3 3
4 Institutional values 2 4 3 1
5 Privacy and ethics 2 1 1 1
6 Data quality 1 1 1 1 3
7 Finance/staff resource 1 1 1 1
8 Student as individual 6 1 1 1 1

Note: % agreement for ranking of themes: Disagree 4%, Neutral 29%, Agree 46%, Strongly agree 21%
Source: Table by authors

Table 3. Factors impacting on trustworthiness of data

Ranked as:
Rank Element Most important 2nd 3rd 4th Least important
1 Data completeness/relevance 10 7 8 6 1
2 Ethical framework 5 2 1 1
3 Competence in data analysis 3 3 8 2
4 Data stewardship 3 2
5 Transparency (of methods, of data collected, etc.) 2 4 3 2 1
6 Quantification 2 2
7 Understanding context 1 2 1 1
8 Purpose 1 2 1 1

Note: % agreement for ranking of themes: Strongly disagree 3%, Disagree 19%, Neutral 19%, Agree 44%, Strongly agree 15%
Source: Table by authors

Table 4. Trust in the planning and design of learning analytics implementation

Ranked as:
Rank Element Most important 2nd 3rd 4th Least important
1 Stakeholder involvement 10 6 5
2 (Lack of) understanding of purpose 4 1 1
3 Appropriate methodology/theory base 2 3 3 2
4 Cohesive vision/strategy 2 2 1 1
5 Understanding of student success 2 1 1
6 Data availability/appropriateness 1 3 2 2
7 Ethical framework 1 2 1 1
8 Transparency 1 1 5 1 2

Note: % agreement for ranking of themes: Disagree 11%, Neutral 11%, Agree 52%, Strongly agree 26%
Source: Table by authors

Table 5. Trust in the operationalization and use of learning analytics in institutions

Ranked as:
Rank Element Most important 2nd 3rd 4th Least important
1 Transparency regarding purpose, data capture and use 7 1 5 1
2 Problem/contextual understanding 3 1
3 Consent 3
4 Staff/student capabilities/training 2 4 8 2
5 Understanding limitations and benefits of LA 2 4 3 2 1
6 Key stakeholder involvement 2 4 3
7 Organizational readiness 2 3 2 1
8 Clear ethical and legal framework 2 2

Note: % agreement for ranking of themes: Disagree 7%, Neutral 11%, Agree 48%, Strongly agree 33%
Source: Table by authors

Things institutions could do to increase student trust in learning analytics

Rank and element (counts ranked as: most important / 2nd / 3rd / 4th / least important):
1. Involve students in the design and development process: 9, 2, 1
2. Transparency: 4, 7, 1
3. Communicate purpose of LA: 4, 6, 1, 1
4. Demonstrable ethical practice and oversight: 4, 1, 2, 1, 2
5. Consider opt in, opt out position: 3, 2, 5, 2
6. Data transparency: 1, 2, 3, 2, 1
7. Student control over data/results: 1, 2, 3, 1, 1
8. Data stewardship: 1, 2, 1, 1

Note: % agreement for ranking of themes: Disagree, 4%; Neutral, 15%; Agree, 37%; Strongly agree, 44%

Source: Table by authors

Things institutions could do to increase instructor and/or faculty trust in learning analytics

Rank and element (counts ranked as: most important / 2nd / 3rd / 4th / least important):
1. Involvement at design/implementation/improvement stages: 6, 3, 3, 2, 1
2. Demonstration of impact (benefits, limitations, workload implications): 4, 7, 8, 2
3. Ethical framework: 4, 2, 2, 1
4. Staff training/support: 3, 3, 5, 3
5. Transparency: 3, 3, 1
6. Based on clear theory/pedagogy: 2, 1
7. Use relevant metrics/interventions: 1, 6, 1, 1
8. Optional adoption: 1, 1

Note: % agreement for ranking of themes: Disagree, 15%; Neutral, 22%; Agree, 52%; Strongly agree, 11%

Source: Table by authors

Things institutions could do to increase public trust in learning analytics

Rank and element (counts ranked as: most important / 2nd / 3rd / 4th / least important):
1. Communicate clear purpose: 5, 2, 1, 1
2. Adopt ethical approach: 4, 4, 2
3. Communicate potential benefits: 4, 3, 4, 1, 1
4. Transparency: 4, 2, 4, 1
5. Focus on most important issues: 3, 1, 1, 1, 1
6. Implement opt in/opt out choice: 2, 1, 3
7. Communicate re data used: 2, 1, 1
8. Demonstrate independence from vendors: 2, 1

Note: % agreement for ranking of themes: Disagree, 4%; Neutral, 11%; Agree, 52%; Strongly agree, 33%

Source: Table by authors

References

Al-Ababneh, M.M. (2020), “Linking ontology, epistemology and research methodology”, Science and Philosophy, Vol. 8 No. 1, pp. 75-91.

Archer, E. and Prinsloo, P. (2020), “Speaking the unspoken in learning analytics: troubling the defaults”, Assessment and Evaluation in Higher Education, Vol. 45 No. 6, pp. 888-900, doi: 10.1080/02602938.2019.1694863.

Bañeres, D., Guerrero-Roldán, A.E., Rodríguez-González, M.E. and Karadeniz, A. (2021), “A predictive analytics infrastructure to support a trustworthy early warning system”, Applied Sciences, Vol. 11 No. 13, p. 5781, doi: 10.3390/app11135781.

Barnes, J.L. (1987), “An international study of curricular organizers for the study of technology”, Unpublished doctoral dissertation, Virginia Polytechnic Institute and State University, Blacksburg, VA, available at: https://vtechworks.lib.vt.edu/handle/10919/37284

Boel, A., Navarro-Compán, V., Landewé, R. and van der Heijde, D. (2021), “Two different invitation approaches for consecutive rounds of a Delphi survey led to comparable final outcome”, Journal of Clinical Epidemiology, Vol. 129, pp. 31-39, doi: 10.1016/j.jclinepi.2020.09.034.

Broughan, C. and Prinsloo, P. (2020), “(Re)centring students in learning analytics: in conversation with Paulo Freire”, Assessment and Evaluation in Higher Education, Vol. 45 No. 4, pp. 617-628.

de Fine Licht, K. and Brülde, B. (2021), “On defining ‘reliance’ and ‘trust’: purposes, conditions of adequacy, and new definitions”, Philosophia, Vol. 49, pp. 1981-2001, doi: 10.1007/s11406-021-00339-1.

de Quincey, E., Briggs, C., Kyriacou, T. and Waller, R. (2019), “Student centred design of a learning analytics system”, In Hsiao, S. and Cunningham, J. (Eds), Proceedings of the 9th international conference on learning analytics and knowledge, New York, NY: ACM, pp. 353-362.

Drachsler, H. and Greller, W. (2016), “Privacy and analytics: it's a DELICATE issue. A checklist for trusted learning analytics”, In Gasevic, D. and Lynch, G. (Eds), Proceedings of the sixth international conference on learning analytics and knowledge, New York, NY: ACM, pp. 89-98.

Elo, S. and Kyngäs, H. (2008), “The qualitative content analysis process”, Journal of Advanced Nursing, Vol. 62 No. 1, pp. 107-115.

Ferguson, R. (2012), “Learning analytics: drivers, developments and challenges”, International Journal of Technology Enhanced Learning, Vol. 4 Nos 5/6, pp. 304-317.

Gašević, D., Dawson, S. and Siemens, G. (2015), “Let’s not forget: learning analytics are about learning”, TechTrends, Vol. 59 No. 1, pp. 64-71.

Gašević, D., Tsai, Y.S., Dawson, S. and Pardo, A. (2019), “How do we start? An approach to learning analytics adoption in higher education”, The International Journal of Information and Learning Technology, Vol. 36 No. 4, pp. 342-353.

Greller, W. and Drachsler, H. (2012), “Translating learning into numbers: a generic framework for learning analytics”, Journal of Educational Technology and Society, Vol. 15 No. 3, pp. 42-57.

Hasson, F. and Keeney, S. (2011), “Enhancing rigor in the Delphi technique research”, Technological Forecasting and Social Change, Vol. 78 No. 9, pp. 1695-1704.

Hasson, F., Keeney, S. and McKenna, H. (2000), “Research guidelines for the Delphi survey technique”, Journal of Advanced Nursing, Vol. 32 No. 4, pp. 1008-1015, doi: 10.1046/j.1365-2648.2000.t01-1-01567.x.

Hsu, C.C. and Sandford, B.A. (2007), “The Delphi technique: making sense of consensus”, Practical Assessment, Research, and Evaluation, Vol. 12 No. 1, pp. 1-8, doi: 10.7275/pdz9-th90.

Joksimović, S., Marshall, R., Rakotoarivelo, T., Ladjal, D., Zhan, C. and Pardo, A. (2022), “Privacy-driven learning analytics”, In McKay, E. (Ed.), Manage Your Own Learning Analytics. Smart Innovation, Systems and Technologies, Springer, Cham, Vol. 261.

Jones, K.M. and Afnan, T. (2019), “‘For the benefit of all students’: student trust in higher education learning analytics practices”, Proceedings of the Association for Information Science and Technology, Vol. 56 No. 1, pp. 682-683, doi: 10.1002/pra2.132.

Jones, K.M., Rubel, A. and LeClere, E. (2020), “A matter of trust: higher education institutions as information fiduciaries in an age of educational data mining and learning analytics”, Journal of the Association for Information Science and Technology, Vol. 71 No. 10, pp. 1227-1241.

Khalil, M., Prinsloo, P. and Slade, S. (2022), “The use and application of learning theory in learning analytics: a scoping review”, Journal of Computing in Higher Education, pp. 1-22.

Kimbrough, S.O. (2005), “Foraging for trust: exploring rationality and the Stag Hunt Game”, In Herrmann, P., Issarny, V. and Shiu, S. (Eds), International Conference on Trust Management, Springer, Berlin, Heidelberg, pp. 1-16.

Klein, C., Lester, J., Rangwala, H. and Johri, A. (2019a), “Technological barriers and incentives to learning analytics adoption in higher education: insights from users”, Journal of Computing in Higher Education, Vol. 31 No. 3, pp. 604-625.

Klein, C., Lester, J., Rangwala, H. and Johri, A. (2019b), “Learning analytics tools in higher education: adoption at the intersection of institutional commitment and individual action”, The Review of Higher Education, Vol. 42 No. 2, pp. 565-593.

Komljenovic, J. (2021), “The rise of education rentiers: digital platforms, digital data and rents”, Learning, Media and Technology, Vol. 46 No. 3, pp. 320-332.

Koshiyama, A., Kazim, E. and Treleaven, P. (2022), “Algorithm auditing: managing the legal, ethical, and technological risks of artificial intelligence, machine learning, and associated algorithms”, Computer, Vol. 55 No. 4, pp. 40-50.

Linstone, H.A. and Turoff, M. (1975), The Delphi Method: Techniques and Applications, Addison-Wesley, London, UK.

McPherson, S., Reese, C. and Wendler, M.C. (2018), “Methodology update: Delphi studies”, Nursing Research, Vol. 67 No. 5, pp. 404-410.

Marsh, S. and Dibben, M.R. (2005), “Trust, untrust, distrust and mistrust – an exploration of the dark(er) side”, In Herrmann, P., Issarny, V. and Shiu, S. (Eds), International Conference on Trust Management, Springer, Berlin, Heidelberg, pp. 17-33.

Nazaretsky, T., Cukurova, M. and Alexandron, G. (2022), “An instrument for measuring teachers’ trust in AI-based educational technology”, LAK22: 12th international learning analytics and knowledge conference, pp. 56-66.

Okoli, C. and Pawlowski, S.D. (2004), “The Delphi method as a research tool: an example, design considerations and applications”, Information and Management, Vol. 42 No. 1, pp. 15-29.

Prainsack, B. (2020), “The political economy of digital data: introduction to the special issue”, Policy Studies, Vol. 41 No. 5, pp. 439-446.

Pratchett, T. (1989), Pyramids: A Discworld Novel: 7, Gollancz, London, UK.

Prinsloo, P. and Slade, S. (2019), “Retracing the evolution of thinking ethically about student data”, New Directions for Institutional Research, Vol. 2019 No. 182, pp. 19-34.

Rodríguez-Triana, M.J., Prieto, L.P., Martínez-Monés, A., Asensio-Pérez, J.I. and Dimitriadis, Y. (2018), “The teacher in the loop: customizing multimodal learning analytics for blended learning”, In Pardo, A., Bartimote-Aufflick, K. and Lynch, G. (Eds), Proceedings of the 8th international conference on learning analytics and knowledge, New York, NY: Association for Computing Machinery (ACM), pp. 417-426.

Sadowski, J. (2020), “The internet of landlords: digital platforms and new mechanisms of rentier capitalism”, Antipode, Vol. 52 No. 2, pp. 562-580.

Skulmoski, G.J., Hartman, F.T. and Krahn, J. (2007), “The Delphi method for graduate research”, Journal of Information Technology Education: Research, Vol. 6 No. 1, pp. 1-21.

Slade, S., Prinsloo, P. and Khalil, M. (2019), “Learning analytics at the intersections of student trust, disclosure and benefit”, In Hsiao, S. and Cunningham, J. (Eds), Proceedings of the 9th international conference on learning analytics and knowledge, New York, NY: Association for Computing Machinery (ACM), pp. 235-244.

Thompson, T.L. and Prinsloo, P. (2023), “Returning the data gaze in higher education”, Learning, Media and Technology, Vol. 48 No. 1, pp. 153-165.

Tsai, Y.S., Whitelock-Wainwright, A. and Gašević, D. (2021), “More than figures on your laptop: (dis)trustful implementation of learning analytics”, Journal of Learning Analytics, Vol. 8 No. 3, pp. 81-100, doi: 10.18608/jla.2021.7379.

UNESCO (2022), “Reimagining our futures together. A new social contract for education. Report from the international commission on the futures of education”, available at: https://unesdoc.unesco.org/ark:/48223/pf0000379381

Veljanova, H., Barreiros, C., Gosch, N., Staudegger, E., Ebner, M. and Lindstaedt, S. (2022), “Towards trustworthy learning analytics applications: an interdisciplinary approach using the example of learning diaries”, In International Conference on Human-Computer Interaction, Springer, Cham, pp. 138-145.

Welsh, S. and McKinney, S. (2015), “Clearing the fog: a learning analytics code of practice”, available at: https://research.moodle.org/80/1/Welsh%20%282015%29%20Clearing%20the%20Fog-%20A%20Learning%20Analytics%20Code%20of%20Practice.pdf

Williamson, B. (2021), “Making markets through digital platforms: Pearson, edu-business, and the (e)valuation of higher education”, Critical Studies in Education, Vol. 62 No. 1, pp. 50-66.

Williamson, B., Bayne, S. and Shay, S. (2020), “The datafication of teaching in higher education: critical issues and perspectives”, Teaching in Higher Education, Vol. 25 No. 4, pp. 351-365.

Yousuf, M.I. (2007), “The Delphi technique”, Essays in Education, Vol. 20 No. 1, pp. 80-89.

Further reading

Esmeijer, J. and van der Plas, A. (2013), “The desirable future of learning analytics – a multistakeholder perspective”, In EDULEARN13 Proceedings, IATED, pp. 2746-2755.

Selwyn, N. (2015), “Data entry: towards the critical study of digital data and education”, Learning, Media and Technology, Vol. 40 No. 1, pp. 64-82.

Acknowledgements

The authors thank all those who took the time to fully engage with this study. The insight gained was invaluable and is very much appreciated. Ethical clearance for this research was provided by the University of South Africa [Ref: 2022_CRERC_060(FA)].

Corresponding author

Paul Prinsloo can be contacted at: prinsp@unisa.ac.za
