Search results

1 – 10 of over 1000
Article
Publication date: 15 November 2019

Claude Draude, Goda Klumbyte, Phillip Lücking and Pat Treusch

Abstract

Purpose

The purpose of this paper is to propose that in order to tackle the question of bias in algorithms, a systemic, sociotechnical and holistic perspective is needed. With reference to the term “algorithmic culture,” the interconnectedness and mutual shaping of society and technology are postulated. A sociotechnical approach requires translational work between and across disciplines. This conceptual paper undertakes such translational work. It exemplifies how gender and diversity studies, by bringing in expertise on addressing bias and structural inequalities, provide a crucial source for analyzing and mitigating bias in algorithmic systems.

Design/methodology/approach

After introducing the sociotechnical context, an overview is provided regarding the contemporary discourse around bias in algorithms, debates around algorithmic culture, knowledge production and bias identification as well as common solutions. The key concepts of gender studies (situated knowledges and strong objectivity) and concrete examples of gender bias then serve as a backdrop for revisiting contemporary debates.

Findings

The key concepts reframe the discourse on bias and concepts such as algorithmic fairness and transparency by contextualizing and situating them. The paper includes specific suggestions for researchers and practitioners on how to account for social inequalities in the design of algorithmic systems.

Originality/value

A systemic, gender-informed approach for addressing the issue is provided, and a concrete, applicable methodology toward a situated understanding of algorithmic bias is laid out, providing an important contribution for an urgent multidisciplinary dialogue.

Details

Online Information Review, vol. 44 no. 2
Type: Research Article
ISSN: 1468-4527

Open Access
Article
Publication date: 21 June 2023

Sudhaman Parthasarathy and S.T. Padmapriya

Abstract

Purpose

Algorithm bias refers to repetitive computer program errors that give some users more weight than others. The aim of this article is to provide deeper insight into algorithmic bias in AI-enabled ERP software customization. Although algorithmic bias in machine learning models has uneven, unfair and unjust impacts, research on it is mostly anecdotal and scattered.

Design/methodology/approach

Guided by previous research (Akter et al., 2022), this study presents the possible design biases (model, data and method) one may encounter with an enterprise resource planning (ERP) software customization algorithm. The study then presents an artificial intelligence (AI) version of the ERP customization algorithm using the k-nearest neighbours algorithm.
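As a rough illustration of how a k-nearest-neighbours estimator of this kind might work, the sketch below averages the recorded effort of the most similar past customization requirements. The feature set, example data and distance metric are invented for illustration; this is not the paper's PRCE algorithm.

```python
# Hypothetical sketch: k-nearest-neighbours estimation of ERP customization
# effort. Features and figures are illustrative assumptions, not study data.
from math import dist

# Past requirements: (complexity, modules touched, integrations) -> person-days
history = [
    ((2.0, 1.0, 0.0), 3.0),
    ((5.0, 3.0, 1.0), 8.0),
    ((8.0, 4.0, 2.0), 15.0),
    ((3.0, 2.0, 1.0), 5.0),
    ((7.0, 5.0, 3.0), 14.0),
]

def knn_estimate(features, k=3):
    """Average the effort of the k most similar past requirements."""
    neighbours = sorted(history, key=lambda item: dist(item[0], features))
    return sum(effort for _, effort in neighbours[:k]) / k

estimate = knn_estimate((4.0, 2.0, 1.0))  # averages the 3 nearest records
```

One way data bias enters such a system is visible even in this toy: if the historical effort records are themselves skewed (say, effort under-recorded for some kinds of requirements), a nearest-neighbour estimate silently inherits that skew.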

Findings

This study illustrates the possible bias when the prioritized requirements customization estimation (PRCE) algorithm available in the ERP literature is executed without any AI. The authors then present their newly developed AI version of the PRCE algorithm, which uses ML techniques, and discuss the algorithmic bias that accompanies it with an illustration. Further, the authors draw a roadmap for managing algorithmic bias during ERP customization in practice.

Originality/value

To the best of the authors’ knowledge, no prior research has attempted to understand the algorithmic bias that occurs during the execution of the ERP customization algorithm (with or without AI).

Details

Journal of Ethics in Entrepreneurship and Technology, vol. 3 no. 2
Type: Research Article
ISSN: 2633-7436

Content available
Article
Publication date: 14 March 2023

Paula Hall and Debbie Ellis

Abstract

Purpose

Gender bias in artificial intelligence (AI) should be solved as a priority before AI algorithms become ubiquitous, perpetuating and accentuating the bias. While the problem has been identified as an established research and policy agenda, a cohesive review of existing research specifically addressing gender bias from a socio-technical viewpoint is lacking. Thus, the purpose of this study is to determine the social causes and consequences of, and proposed solutions to, gender bias in AI algorithms.

Design/methodology/approach

A comprehensive systematic review followed established protocols to ensure accurate and verifiable identification of suitable articles. The process revealed 177 articles in the socio-technical framework, with 64 articles selected for in-depth analysis.

Findings

Most previous research has focused on technical rather than social causes, consequences and solutions to AI bias. From a social perspective, gender bias in AI algorithms can be attributed equally to algorithmic design and training datasets. Social consequences are wide-ranging, with amplification of existing bias the most common at 28%. Social solutions were concentrated on algorithmic design, specifically improving diversity in AI development teams (30%), increasing awareness (23%), human-in-the-loop (23%) and integrating ethics into the design process (21%).

Originality/value

This systematic review is the first of its kind to focus on gender bias in AI algorithms from a social perspective within a socio-technical framework. Identification of key causes and consequences of bias and the breakdown of potential solutions provides direction for future research and policy within the growing field of AI ethics.

Peer review

The peer review history for this article is available at https://publons.com/publon/10.1108/OIR-08-2021-0452

Details

Online Information Review, vol. 47 no. 7
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 13 August 2018

Nicol Turner Lee

Abstract

Purpose

The online economy has not resolved the issue of racial bias in its applications. While algorithms are procedures, or sequences of unambiguous instructions, that facilitate automated decision-making, bias is a byproduct of these computations, bringing harm to historically disadvantaged populations. This paper argues that algorithmic biases explicitly and implicitly harm racial groups and lead to forms of discrimination. Relying upon sociological and technical research, the paper offers commentary on the need for more workplace diversity within high-tech industries and public policies that can detect or reduce the likelihood of racial bias in algorithmic design and execution.

Design/methodology/approach

The paper shares examples in the US where algorithmic biases have been reported and the strategies for explaining and addressing them.

Findings

The findings of the paper suggest that explicit racial bias in algorithms can be mitigated by existing laws, including those governing housing, employment, and the extension of credit. Implicit, or unconscious, biases are harder to redress without more diverse workplaces and public policies that have an approach to bias detection and mitigation.

Research limitations/implications

The major implication of this research is that further research is needed. Increasing scholarly research in this area will be a major contribution to understanding how emerging technologies are creating disparate and unfair treatment for certain populations.

Practical implications

The practical implications of the work point to areas within industry and government that can tackle the questions of algorithmic bias, fairness and accountability, especially as they affect African-Americans.

Social implications

The social implications are that emerging technologies are not devoid of societal influences that constantly define positions of power, values, and norms.

Originality/value

The paper adds to a scarce body of existing research, especially in the area that intersects race and algorithmic development.

Details

Journal of Information, Communication and Ethics in Society, vol. 16 no. 3
Type: Research Article
ISSN: 1477-996X

Article
Publication date: 19 December 2023

Susan Gardner Archambault

Abstract

Purpose

Research shows that postsecondary students are largely unaware of the impact of algorithms on their everyday lives. Also, most noncomputer science students are not being taught about algorithms as part of the regular curriculum. This exploratory, qualitative study aims to explore subject-matter experts’ insights and perceptions of the knowledge components, coping behaviors and pedagogical considerations to aid faculty in teaching algorithmic literacy to postsecondary students.

Design/methodology/approach

Eleven semistructured interviews and one focus group were conducted with scholars and teachers of critical algorithm studies and related fields. A content analysis was manually performed on the transcripts using a mixture of deductive and inductive coding. Data analysis was aided by the coding software program Dedoose (2021) to determine frequency totals for occurrences of a code across all participants along with how many times specific participants mentioned a code. Then, findings were organized around the three themes of knowledge components, coping behaviors and pedagogy.
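The frequency tallies described in this methodology can be sketched in a few lines of code; the codes and participant labels below are invented, and the study itself used Dedoose rather than custom scripts.

```python
# Illustrative sketch (assumed data, not the study's transcripts): tallying
# qualitative codes across participants, as the content analysis describes.
from collections import Counter, defaultdict

# Each entry: (participant, code applied to one transcript segment)
codings = [
    ("P1", "data_privacy"), ("P1", "filter_bubble"), ("P2", "data_privacy"),
    ("P2", "data_privacy"), ("P3", "filter_bubble"), ("P3", "data_privacy"),
]

# Frequency total for each code across all participants
totals = Counter(code for _, code in codings)

# How many times each specific participant mentioned each code
per_participant = defaultdict(Counter)
for participant, code in codings:
    per_participant[participant][code] += 1
```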

Findings

The findings suggested a set of 10 knowledge components that would contribute to students’ algorithmic literacy along with seven behaviors that students could use to help them better cope with algorithmic systems. A set of five teaching strategies also surfaced to help improve students’ algorithmic literacy.

Originality/value

This study contributes to improved pedagogy surrounding algorithmic literacy and validates existing multi-faceted conceptualizations and measurements of algorithmic literacy.

Details

Information and Learning Sciences, vol. 125 no. 1/2
Type: Research Article
ISSN: 2398-5348

Open Access
Article
Publication date: 28 June 2023

Blessing Mbalaka

Abstract

Purpose

The paper aims to expand on the work well documented by Joy Buolamwini and Ruha Benjamin by extending their critique to the African continent. The research assesses whether algorithmic biases are prevalent in DALL-E 2 and StarryAI, with the aim of informing better artificial intelligence (AI) systems for future use.

Design/methodology/approach

The paper utilised a desktop study of the literature and gathered data from OpenAI’s DALL-E 2 text-to-image generator and the StarryAI text-to-image generator.

Findings

DALL-E 2 significantly underperformed when tasked with generating images of “An African Family” as opposed to images of a “Family”: the pictures lacked any conceivable detail compared with those for the latter prompt. StarryAI significantly outperformed DALL-E 2 and rendered visible faces; however, the accuracy of the culture portrayed was poor.

Research limitations/implications

Because of the chosen research approach, the research results may lack generalisability. Therefore, researchers are encouraged to test the proposed propositions further. The implications, however, are that more inclusion is warranted to help address the issue of cultural inaccuracies noted in a few of the paper’s experiments.

Practical implications

The paper is useful for advocates of algorithmic equality and fairness, as it highlights evidence of the implications of systemically induced algorithmic bias.

Social implications

Reducing offensive, racist outputs makes socially appropriate AI a better product for commercialisation and general use. If AI is trained on diverse data, it can lead to better applications in contemporary society.

Originality/value

The paper’s use of DALL-E 2 and StarryAI addresses an under-researched area, and future studies on this matter are welcome.

Details

Digital Transformation and Society, vol. 2 no. 4
Type: Research Article
ISSN: 2755-0761

Article
Publication date: 11 February 2022

Brahim Zarouali, Sophie C. Boerman, Hilde A.M. Voorveld and Guda van Noort

Abstract

Purpose

The purpose of this study is to introduce a comprehensive and dynamic framework that focuses on the role of algorithms in persuasive communication: the algorithmic persuasion framework (APF).

Design/methodology/approach

In this increasingly data-driven media landscape, algorithms play an important role in the consumption of online content. This paper presents a novel conceptual framework to investigate algorithm-mediated persuasion processes and their effects on online communication.

Findings

The APF consists of five conceptual components: input, algorithm, persuasion attempt, persuasion process and persuasion effects. In short, it addresses how data variables are inputs for different algorithmic techniques and algorithmic objectives, which influence the manifestations of algorithm-mediated persuasion attempts, informing how such attempts are processed and their intended and unintended persuasive effects.

Originality/value

The paper guides future research by addressing key elements in the framework and the relationship between them, proposing a research agenda (with specific research questions and hypotheses) and discussing methodological challenges and opportunities for the future investigation of the framework.

Details

Internet Research, vol. 32 no. 4
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 21 December 2021

Luciana Monteiro-Krebs, Bieke Zaman, Sonia Elisa Caregnato, David Geerts, Vicente Grassi-Filho and Nyi-Nyi Htun

Abstract

Purpose

The use of recommender systems is increasing on academic social media (ASM). However, distinguishing the elements that may be influenced by and/or exert influence over content that is read and disseminated by researchers is difficult due to the opacity of the algorithms that filter information on ASM. The purpose of this paper is to investigate how algorithmic mediation through recommender systems in ResearchGate may uphold biases in scholarly communication.

Design/methodology/approach

The authors used a multi-method walkthrough approach including a patent analysis, an interface analysis and an inspection of the web page code.

Findings

The findings reveal how the audience influences the recommendations and demonstrate in practice the mutual shaping of the different elements interplaying within the platform (artefact, practices and arrangements). The authors show evidence of the mechanisms of selection, prioritization, datafication and profiling. They also substantiate how the algorithm reinforces the reputation of eminent researchers (a phenomenon called the Matthew effect). As part of defining a future agenda, the authors discuss the need for serendipity and algorithmic transparency.
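A toy simulation can make the Matthew effect concrete: when a recommender ranks purely by current popularity, an initial head start compounds. The author labels and read counts below are invented, and ResearchGate's actual ranking logic is opaque and certainly more complex.

```python
# Toy sketch of a rich-get-richer feedback loop in a popularity-ranked
# recommender (all names and numbers invented for illustration).
def run_recommender(reads, rounds):
    """Each round, recommend the currently most-read author; every
    recommendation yields one extra read for that author."""
    reads = dict(reads)  # copy so the caller's counts are untouched
    for _ in range(rounds):
        top = max(reads, key=reads.get)
        reads[top] += 1
    return reads

initial = {"eminent": 100, "early_career": 90}
final = run_recommender(initial, rounds=50)
# The head start compounds: only the already-eminent author gains reads.
```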

Research limitations/implications

Algorithms change constantly and are protected by commercial secrecy. Hence, this study was limited to the information that was accessible within a particular period. At the time of publication, the platform, its logic and its effects on the interface may have changed. Future studies might investigate other ASM using the same approach to distinguish potential patterns among platforms.

Originality/value

The paper contributes to reflection on algorithmic mediation and the biases in scholarly communication potentially afforded by recommender algorithms. To the best of the authors’ knowledge, this is the first empirical study on automated mediation and biases in ASM.

Details

Online Information Review, vol. 46 no. 5
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 2 August 2022

Merijke Coenraad

Abstract

Purpose

Computing technology is becoming ubiquitous within modern society and youth use technology regularly for school, entertainment and socializing. Yet, despite societal belief that computing technology is neutral, the technologies of today’s society are rife with biases that harm and oppress populations that experience marginalization. While previous research has explored children’s values and perceptions of computing technology, few studies have focused on youth conceptualizations of this technological bias and their understandings of how computing technology discriminates against them and their communities. This paper aims to examine youth conceptualizations of inequities in computing technology.

Design/methodology/approach

This study analyzes a series of codesign sessions and the resulting artifacts from a partnership with eight Black youth to learn about their conceptualizations of technology bias.

Findings

Without introduction, the youth demonstrated an awareness of visible negative impacts of technology and provided examples of this bias within their lives, but they did not have a formal vocabulary to discuss said bias or knowledge of biased technologies less visible to the naked eye. Once presented with common technological biases, the youth expanded their conceptualizations to include both visible and invisible biases.

Originality/value

This paper builds on the current body of literature around how youth view computing technology and provides a foundation to ground future pedagogical work around technological bias for youth.

Details

Information and Learning Sciences, vol. 123 no. 7/8
Type: Research Article
ISSN: 2398-5348

Article
Publication date: 14 September 2015

Florian Saurwein, Natascha Just and Michael Latzer

Abstract

Purpose

The purpose of this paper is to contribute to a better understanding of governance choice in the area of algorithmic selection. Algorithms on the Internet shape our daily lives and realities. They select information, automatically assign relevance to it and keep people from drowning in an information flood. The benefits of algorithms are accompanied by risks and governance challenges.

Design/methodology/approach

Based on empirical case analyses and a review of the literature, the paper chooses a risk-based governance approach. It identifies and categorizes applications of algorithmic selection and attendant risks. Then, it explores the range of institutional governance options and discusses applied and proposed governance measures for algorithmic selection and the limitations of governance options.

Findings

Analyses reveal that there are no one-size-fits-all solutions for the governance of algorithms. Attention has to shift to multi-dimensional solutions and combinations of governance measures that mutually enable and complement each other. Limited knowledge about the developments of markets, risks and the effects of governance interventions hampers the choice of an adequate governance mix. Uncertainties call for risk and technology assessment to strengthen the foundations for evidence-based governance.

Originality/value

The paper furthers the understanding of governance choice in the area of algorithmic selection with a structured synopsis of rationales, options and limitations for the governance of algorithms. It provides a functional typology of applications of algorithmic selection, a comprehensive overview of the risks of algorithmic selection and a systematic discussion of governance options and their limitations.

Details

info, vol. 17 no. 6
Type: Research Article
ISSN: 1463-6697
