Search results

1 – 10 of over 1000
Article
Publication date: 6 June 2023

Jo Bates, Helen Kennedy, Itzelle Medina Perea, Susan Oman and Lulu Pinney

Abstract

Purpose

The purpose is to present proposals to foster what we call a socially meaningful transparency practice that aims to enhance public understanding of data-based systems through the production of accounts that are relevant and useful to diverse publics, and society more broadly.

Design/methodology/approach

The authors’ proposals emerge from reflections on challenges they experienced producing written and visual accounts of specific public sector data-based systems for research purposes. Following Ananny and Crawford's call to see limits to transparency practice as “openings”, the authors put their experience into dialogue with the literature to consider how a way might be charted through the challenges. Based on these reflections, the authors outline seven proposals for fostering socially meaningful transparency.

Findings

The authors identify three transparency challenges from their practice: information asymmetry, uncertainty and resourcing. The authors also present seven proposals related to reduction of information asymmetries between organisations and non-commercial external actors, enhanced legal rights to access information, shared decision making about what gets made transparent, making visible social impacts and uncertainties of data-systems, clear and accessible communication, timing of transparency practices and adequate resourcing.

Social implications

Socially meaningful transparency aims to enhance public understanding of data-based systems. It is therefore a necessary condition not only for informed use of data-based products, but crucially for democratic engagement in the development of datafied societies.

Originality/value

The paper contributes to existing debates on meaningful transparency by arguing for a more social, rather than individual, approach to imagining how to make transparency practice more meaningful. The authors do this empirically through their reflection on their experience of doing transparency, conceptually through their notion of socially meaningful transparency, and practically through their seven proposals.

Details

Journal of Documentation, vol. 80 no. 1
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 19 December 2023

Susan Gardner Archambault

Abstract

Purpose

Research shows that postsecondary students are largely unaware of the impact of algorithms on their everyday lives. Also, most noncomputer science students are not being taught about algorithms as part of the regular curriculum. This exploratory, qualitative study aims to explore subject-matter experts’ insights and perceptions of the knowledge components, coping behaviors and pedagogical considerations to aid faculty in teaching algorithmic literacy to postsecondary students.

Design/methodology/approach

Eleven semistructured interviews and one focus group were conducted with scholars and teachers of critical algorithm studies and related fields. A content analysis was manually performed on the transcripts using a mixture of deductive and inductive coding. Data analysis was aided by the coding software program Dedoose (2021) to determine frequency totals for occurrences of a code across all participants along with how many times specific participants mentioned a code. Then, findings were organized around the three themes of knowledge components, coping behaviors and pedagogy.
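The two frequency measures described above (total occurrences of a code across all participants, and how many distinct participants mentioned a code) can be sketched in a few lines. The participants, codes and excerpt pairs below are invented for illustration only and do not reflect Dedoose's actual export format or the study's data:

```python
from collections import Counter, defaultdict

# Hypothetical coded excerpts as (participant_id, code) pairs, standing in
# for the kind of tally a tool like Dedoose reports. All values invented.
coded_excerpts = [
    ("P01", "knowledge_components"), ("P01", "coping_behaviors"),
    ("P02", "knowledge_components"), ("P02", "knowledge_components"),
    ("P03", "pedagogy"), ("P03", "coping_behaviors"),
]

# Total occurrences of each code across all participants.
total_counts = Counter(code for _, code in coded_excerpts)

# How many distinct participants mentioned each code at least once.
participants_per_code = defaultdict(set)
for participant, code in coded_excerpts:
    participants_per_code[code].add(participant)

for code, total in total_counts.most_common():
    print(f"{code}: {total} occurrences, "
          f"{len(participants_per_code[code])} participants")
```

Separating the two tallies matters: a code mentioned many times by one participant is a different signal from a code mentioned once by many participants.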

Findings

The findings suggested a set of 10 knowledge components that would contribute to students’ algorithmic literacy along with seven behaviors that students could use to help them better cope with algorithmic systems. A set of five teaching strategies also surfaced to help improve students’ algorithmic literacy.

Originality/value

This study contributes to improved pedagogy surrounding algorithmic literacy and validates existing multi-faceted conceptualizations and measurements of algorithmic literacy.

Details

Information and Learning Sciences, vol. 125 no. 1/2
Type: Research Article
ISSN: 2398-5348

Article
Publication date: 23 September 2021

Donghee Shin, Azmat Rasul and Anestis Fotiadis

Abstract

Purpose

As algorithms permeate nearly every aspect of digital life, artificial intelligence (AI) systems exert a growing influence on human behavior in the digital milieu. Despite the popularity of these systems, little is known about the role and effects of algorithmic literacy (AL) in user acceptance. The purpose of this study is to contextualize AL in the AI environment by empirically examining the role of AL in developing users' information processing in algorithms. The authors analyze how users engage with over-the-top (OTT) platforms, what awareness users have of the algorithmic platform and how awareness of AL may affect their interaction with these systems.

Design/methodology/approach

This study employed multiple-group equivalence methods to test invariance across two groups and to examine the hypotheses concerning differences in the effects of AL. The method examined how AL helps users to envisage, understand and work with algorithms, depending on their understanding of how the information flow embedded within them is controlled.

Findings

The findings clarify what functions AL plays in the adoption of OTT platforms and how users experience algorithms, particularly in contexts where AI is used in OTT platforms to provide personalized recommendations. The results point to the heuristic functions of AL and its ties to trust and ensuing attitude and behavior. Heuristic processes using AL strongly affect the credibility of recommendations and the way users understand the accuracy and personalization of results. The authors argue that critical assessment of AL must be understood not only in terms of how it is used to evaluate trust in a service, but also in terms of how it figures performatively in the modeling of algorithmic personalization.

Research limitations/implications

The relation of AL to trust in an algorithm lends strategic direction for developing user-centered algorithms in OTT contexts. As the AI industry faces declining credibility, the role of user trust offers insights into credibility and trust in algorithms. To help users cultivate literacy around algorithm consumption, the AI industry could provide examples of what positive engagement with algorithmic platforms looks like.

Originality/value

User cognitive processes of AL provide conceptual frameworks for algorithm services and a practical guideline for the design of OTT services. Framing the cognitive process of AL in reference to trust makes a relevant contribution to the ongoing debate surrounding algorithms and literacy. While the topic of AL is widely recognized, empirical evidence on the effects of AL is relatively rare, particularly from the user's behavioral perspective. No formal theoretical model of algorithmic decision-making based on the dual-processing model has previously been researched.

Article
Publication date: 5 December 2023

Ting Deng, Chunyong Tang and Yanzhao Lai

Abstract

Purpose

How to improve platform workers' continuance commitment remains unclear to both platform managers and academic scholars. This study develops a configurational framework based on push-pull theory and proposes that continuance commitment among platform workers does not depend on a single condition but on interactions between push and pull factors.

Design/methodology/approach

The data from the sample of 431 full-time and 184 part-time platform workers in China were analyzed using fuzzy-set qualitative comparative analysis (FsQCA).
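FsQCA works by calibrating raw scores into fuzzy set memberships on [0, 1] and then checking how consistently membership in a condition is matched or exceeded by membership in the outcome. A minimal sketch of these two core operations, with invented anchor points and survey scores rather than the study's data:

```python
import math

def calibrate(value, full_non, crossover, full_in):
    """Direct calibration: map a raw score to fuzzy membership in [0, 1]
    via a logistic transform anchored at full non-membership, the
    crossover point (membership 0.5) and full membership."""
    # Scale so the full-membership anchors sit near log-odds of +/-3.
    if value >= crossover:
        scalar = 3.0 / (full_in - crossover)
    else:
        scalar = 3.0 / (crossover - full_non)
    return 1.0 / (1.0 + math.exp(-(value - crossover) * scalar))

def consistency(condition, outcome):
    """Consistency of 'condition is sufficient for outcome':
    sum of min(x, y) divided by sum of x."""
    return sum(min(x, y) for x, y in zip(condition, outcome)) / sum(condition)

# Invented raw scores on a 1-7 scale for one pull factor and the outcome.
raw_reputation = [6.5, 5.8, 3.2, 6.9, 4.1]
raw_commitment = [6.8, 5.5, 2.9, 7.0, 4.4]

rep = [calibrate(v, 2.0, 4.0, 6.0) for v in raw_reputation]
com = [calibrate(v, 2.0, 4.0, 6.0) for v in raw_commitment]

print(f"consistency = {consistency(rep, com):.3f}")
```

In applied FsQCA a configuration is conventionally treated as sufficient only when its consistency clears roughly 0.8; the anchors (2.0, 4.0, 6.0) here are assumptions for the sketch, not the thresholds the authors used.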

Findings

The results show that combining family motivation with two pull factors (worker reputation and algorithmic transparency) can achieve high continuance commitment for full-time platform workers, while combining job alternatives with two pull factors (worker reputation and job autonomy) can promote high continuance commitment for part-time platform workers. In particular, worker reputation emerged as a core condition reinforcing continuance commitment for both part-time and full-time platform workers.

Practical implications

The findings suggest that platforms should avoid a “one size fits all” strategy. Emphasizing the importance of family and improving worker reputation and algorithmic transparency are smart retention strategies for full-time platform workers, whereas for part-time platform workers it is equally important to reinforce continuance commitment by enhancing worker reputation and by maintaining and enhancing job autonomy.

Originality/value

This study expands the analytical context of commitment research and provides new insights for understanding the complex causality between antecedent conditions and continuance commitment for platform workers.

Details

Management Decision, vol. 62 no. 1
Type: Research Article
ISSN: 0025-1747

Article
Publication date: 16 July 2019

Donghee (Don) Shin, Anestis Fotiadis and Hongsik Yu

Abstract

Purpose

The purpose of this study is to offer a roadmap for work on the ethical and societal implications of algorithms and AI. Based on an analysis of the social, technical and regulatory challenges posed by algorithmic systems in Korea, this work conducts socioecological evaluations of the governance of algorithmic transparency and accountability.

Design/methodology/approach

This paper analyzes algorithm design and development from critical socioecological angles: social, technological, cultural and industrial phenomena that represent the strategic interaction among people, technology and society, touching on sensitive issues of a legal, a cultural and an ethical nature.

Findings

Algorithm technologies are part of a social ecosystem, and their development should be based on user interests and rights within a social and cultural milieu. An algorithm represents an interrelated, multilayered ecosystem of networks, protocols, applications, services, practices and users.

Practical implications

Value-sensitive algorithm design is proposed as a novel approach for designing algorithms. As algorithms have become a constitutive technology that shapes human life, it is essential to be aware of the value-ladenness of algorithm development. Human values and social issues can be reflected in an algorithm design.

Originality/value

The arguments in this study help ensure the legitimacy and effectiveness of algorithms. This study provides insight into the challenges and opportunities of algorithms through the lens of a socioecological analysis: political discourse, social dynamics and technological choices inherent in the development of algorithm-based ecology.

Details

Digital Policy, Regulation and Governance, vol. 21 no. 4
Type: Research Article
ISSN: 2398-5038

Article
Publication date: 26 June 2019

Mariella Bastian, Mykola Makhortykh and Tom Dobber

Abstract

Purpose

The purpose of this paper is to develop a conceptual framework for assessing what are the possibilities and pitfalls of using algorithmic systems of news personalization – i.e. the tailoring of individualized news feeds based on users’ information preferences – for constructive conflict coverage in the context of peace journalism, a journalistic paradigm calling for more diversified and creative war reporting.

Design/methodology/approach

The paper provides a critical review of existing research on peace journalism and algorithmic news personalization, and analyzes the intersections between the two concepts. Specifically, it identifies recurring pitfalls of peace journalism based on empirical research on constructive conflict coverage and then introduces a conceptual framework for analyzing to what degree these pitfalls can be mediated – or worsened – through algorithmic system design.

Findings

The findings suggest that AI-driven distribution technologies can facilitate constructive war reporting, in particular by countering the effects of journalists’ self-censorship and by diversifying conflict coverage. The implementation of these goals, however, depends on multiple system design solutions, thus resonating with current calls for more responsible and value-sensitive algorithmic design in the domain of news media. Additionally, the observations emphasize the importance of developing new algorithmic literacies among journalists, both to realize the positive potential of AI for promoting peace and to increase awareness of the possible negative impacts of new systems of content distribution.

Originality/value

This article is the first to provide a comprehensive conceptualization of the impact of new content distribution techniques on constructive conflict coverage in the context of peace journalism. It also offers a novel conceptual framing for assessing the impact of algorithmic news personalization on reporting traumatic and polarizing events, such as wars and violence.

Details

International Journal of Conflict Management, vol. 30 no. 3
Type: Research Article
ISSN: 1044-4068

Article
Publication date: 7 March 2023

Omoregie Charles Osifo

Abstract

Purpose

The purpose of this paper is to identify the key roles of transparency in making artificial intelligence (AI) greener (i.e. causing lower carbon dioxide emissions) during the design, development and manufacturing stages of AI technologies (e.g. apps, systems, agents, tools, artifacts), and to use the “explicability requirement” as an essential value within the framework of transparency to support arguments for realizing greener AI.

Design/methodology/approach

The approach of this paper is argumentative, which is supported by ideas from existing literature and documents.

Findings

This paper puts forward a relevant recommendation for achieving better and sustainable outcomes after reexamining the identified roles played by transparency within the AI technology context. The proposed recommendation is grounded in the roles and importance of two approaches (compliance and integrity) in ethics management and other areas of ethical studies.

Originality/value

The originality of this paper lies in filling the gap that exists in the literature on sustainable AI technology and the roles of transparency.

Details

Journal of Information, Communication and Ethics in Society, vol. 21 no. 2
Type: Research Article
ISSN: 1477-996X

Article
Publication date: 15 August 2023

Myojung Chung

Abstract

Purpose

While there has been a growing call for insights on algorithms given their impact on what people encounter on social media, it remains unknown how enhanced algorithmic knowledge serves as a countermeasure to problematic information flow. To fill this gap, this study aims to investigate how algorithmic knowledge predicts people's attitudes and behaviors regarding misinformation through the lens of the third-person effect.

Design/methodology/approach

Four national surveys in the USA (N = 1,415), the UK (N = 1,435), South Korea (N = 1,798) and Mexico (N = 784) were conducted between April and September 2021. The survey questionnaire measured algorithmic knowledge, perceived influence of misinformation on self and others, intention to take corrective actions, support for government regulation and content moderation. Collected data were analyzed using multigroup SEM.
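The third-person effect at the heart of this design hinges on the gap between presumed influence on others and presumed influence on oneself. A toy computation of that perceptual gap follows; the respondents and scale values are invented, and the study itself models these relations via multigroup SEM rather than raw difference scores:

```python
# Invented 7-point scale ratings of how much misinformation influences
# oneself versus other people. All values are illustrative only.
respondents = [
    {"influence_self": 2.1, "influence_others": 4.5},
    {"influence_self": 3.0, "influence_others": 3.8},
    {"influence_self": 1.8, "influence_others": 4.9},
]

# Per-respondent third-person perceptual gap: others minus self.
gaps = [r["influence_others"] - r["influence_self"] for r in respondents]
mean_gap = sum(gaps) / len(gaps)

# A positive mean gap indicates a third-person effect: others are presumed
# to be more influenced by misinformation than oneself.
print(f"mean third-person gap = {mean_gap:.2f}")
```

The study's finding that presumed influence on *others* predicts support for regulation, while presumed influence on *self* predicts corrective action, is exactly why the two ratings must be kept separate rather than collapsed into a single score.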

Findings

Results indicate that algorithmic knowledge was associated with presumed influence of misinformation on self and others to different degrees. Presumed media influence on self was a strong predictor of intention to take actions to correct misinformation, while presumed media influence on others was a strong predictor of support for government-led platform regulation and platform-led content moderation. There were nuanced but noteworthy differences in the link between presumed media influence and behavioral responses across the four countries studied.

Originality/value

These findings are relevant for grasping the role of algorithmic knowledge in countering rampant misinformation on social media, as well as for expanding US-centered extant literature by elucidating the distinctive views regarding social media algorithms and misinformation in four countries.

Details

Internet Research, vol. 33 no. 5
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 18 April 2022

Donghee Shin, Saifeddin Al-Imamy and Yujong Hwang

Abstract

Purpose

How does algorithmic information processing affect the thoughts and behavior of artificial intelligence (AI) users? In this study, the authors address this question by focusing on algorithm-based chatbots and examine the influence of culture on algorithms as a form of digital intermediation.

Design/methodology/approach

The authors conducted a study comparing the United States (US) and Japan to examine how users in the two countries perceive the features of chatbot services and how the perceived features affect user trust and emotion.

Findings

Clear differences emerged after comparing the algorithmic information processes involved in using and interacting with chatbots. Overall attitudes toward chatbots are similar between the two cultures, although the weights placed on particular qualities differ. Japanese users put more weight on the functional qualities of chatbots, while US users place greater emphasis on the non-functional qualities of algorithms in chatbots. US users also appear more likely than Japanese users to anthropomorphize chatbots and accept explanations of algorithmic features.

Research limitations/implications

Different patterns of chatbot news adoption reveal that the acceptance of chatbots involves a cultural dimension as the algorithms reflect the values and interests of their constituencies. How users perceive chatbots and how they consume and interact with the chatbots depends on the cultural context in which the experience is situated.

Originality/value

A comparative juxtaposition of cultural-algorithmic interactions offers a useful way to examine how cultural values influence user behaviors and identify factors that influence attitude and user acceptance. Results imply that chatbots can be a cultural artifact, and chatbot journalism (CJ) can be a socially contextualized practice that is driven by the user's input and behavior, which are reflections of cultural values and practices.

Details

Cross Cultural & Strategic Management, vol. 29 no. 3
Type: Research Article
ISSN: 2059-5794

Book part
Publication date: 25 May 2022

Igor Calzada

Abstract

This chapter develops a conceptual taxonomy of five emerging digital citizenship regimes: (1) the globalised and generalisable regime called pandemic citizenship that clarifies how post-COVID-19 datafication processes have amplified the emergence of four intertwined, non-mutually exclusive and non-generalisable new technopoliticalised and city-regionalised digital citizenship regimes in certain European nation-states’ urban areas; (2) algorithmic citizenship, which is driven by blockchain and has allowed the implementation of an e-Residency programme in Tallinn; (3) liquid citizenship, driven by dataism – the deterministic ideology of big data – and contested through claims for digital rights in Barcelona and Amsterdam; (4) metropolitan citizenship, as revindicated in reaction to Brexit and reshuffled through data co-operatives in Cardiff; and (5) stateless citizenship, driven by devolution and reinvigorated through data sovereignty in Barcelona, Glasgow and Bilbao. This chapter challenges the existing interpretation of how these emerging digital citizenship regimes together are ubiquitously rescaling the associated spaces/practices of European nation-states.
