Search results

1–10 of over 13,000
Article
Publication date: 24 October 2023

Hamid Reza Saeidnia

Abstract

Purpose

The purpose of this study is to raise awareness about the ethical implications of artificial intelligence (AI) in the library and information industry, specifically focusing on bias and discrimination. It aims to highlight the need for proactive measures to mitigate these issues and ensure that AI technology is developed and implemented in an ethical and unbiased manner.

Design/methodology/approach

This viewpoint paper presents a critical analysis of the ethical implications of bias and discrimination in the library and information industry with respect to AI. It explores current practices and challenges in AI implementation and proposes strategies to address bias and discrimination in AI systems.

Findings

The findings of this study reveal that bias and discrimination are significant concerns in AI systems used in the library and information industry. These biases can perpetuate existing inequalities, hinder access to information and reinforce discriminatory practices. This study identifies key strategies such as data collection and representation, algorithmic transparency and inclusive design to address these issues.

Originality/value

This study contributes to the existing literature by examining the specific challenges of bias and discrimination in AI implementation within the library and information industry. It provides valuable insights into the ethical implications of AI technology and offers practical recommendations for professionals to confront and mitigate bias and discrimination in AI systems, ensuring equitable access to information for all users.

Open Access
Article
Publication date: 28 June 2023

Blessing Mbalaka

Abstract

Purpose

The paper builds on the well-documented work of Joy Buolamwini and Ruha Benjamin by extending their critique to the African continent. The research assesses whether algorithmic biases are prevalent in DALL-E 2 and StarryAI, with the aim of informing better artificial intelligence (AI) systems for future use.

Design/methodology/approach

The paper utilised a desktop literature study and gathered data from OpenAI’s DALL-E 2 text-to-image generator and the StarryAI text-to-image generator.
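The core procedure here is simply submitting paired prompts to a text-to-image generator and comparing the outputs. Below is a minimal sketch of how the DALL-E 2 side of such a comparison could be reproduced, assuming the current OpenAI Python SDK and an API key in the environment; the prompt wording follows the abstract, but the generation settings are illustrative and the StarryAI side is omitted.

```python
# Sketch of the paired-prompt comparison; settings are illustrative, not the
# paper's exact configuration. Assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

def generate(prompt: str, n: int = 4) -> list[str]:
    """Request n DALL-E 2 images for a prompt and return their URLs."""
    resp = client.images.generate(model="dall-e-2", prompt=prompt, n=n, size="512x512")
    return [image.url for image in resp.data]

# The two conditions compared in the paper.
for prompt in ("A Family", "An African Family"):
    print(prompt, "->", generate(prompt))
```

Rendering several images per prompt and inspecting facial and cultural detail across the two conditions is the kind of side-by-side audit the paper describes.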

Findings

DALL-E 2 significantly underperformed when tasked with generating images of “An African Family” as opposed to images of a “Family”: the former images lacked any conceivable detail compared with the latter. StarryAI significantly outperformed DALL-E 2 and rendered visible faces, although the accuracy of the culture portrayed was poor.

Research limitations/implications

Because of the chosen research approach, the results may lack generalisability; researchers are therefore encouraged to test the proposed propositions further. The implication, however, is that greater inclusion is warranted to help address the cultural inaccuracies noted in a few of the paper’s experiments.

Practical implications

The paper is useful to advocates of algorithmic equality and fairness, as it highlights evidence of the implications of systemically induced algorithmic bias.

Social implications

Reducing offensive, racist outputs makes AI more socially appropriate and a better product for commercialisation and general use. AI trained on diverse data can lead to better applications in contemporary society.

Originality/value

The paper’s use of DALL-E 2 and StarryAI addresses an under-researched area, and future studies on this matter are welcome.

Details

Digital Transformation and Society, vol. 2 no. 4
Type: Research Article
ISSN: 2755-0761

Content available
Article
Publication date: 14 March 2023

Paula Hall and Debbie Ellis

Abstract

Purpose

Gender bias in artificial intelligence (AI) should be solved as a priority before AI algorithms become ubiquitous, perpetuating and accentuating the bias. While the problem has been identified as an established research and policy agenda, a cohesive review of existing research specifically addressing gender bias from a socio-technical viewpoint is lacking. Thus, the purpose of this study is to determine the social causes and consequences of, and proposed solutions to, gender bias in AI algorithms.

Design/methodology/approach

A comprehensive systematic review followed established protocols to ensure accurate and verifiable identification of suitable articles. The process revealed 177 articles in the socio-technical framework, with 64 articles selected for in-depth analysis.

Findings

Most previous research has focused on technical rather than social causes, consequences and solutions to AI bias. From a social perspective, gender bias in AI algorithms can be attributed equally to algorithmic design and training datasets. Social consequences are wide-ranging, with amplification of existing bias the most common at 28%. Social solutions were concentrated on algorithmic design, specifically improving diversity in AI development teams (30%), increasing awareness (23%), human-in-the-loop (23%) and integrating ethics into the design process (21%).

Originality/value

This systematic review is the first of its kind to focus on gender bias in AI algorithms from a social perspective within a socio-technical framework. Identification of key causes and consequences of bias and the breakdown of potential solutions provides direction for future research and policy within the growing field of AI ethics.

Peer review

The peer review history for this article is available at https://publons.com/publon/10.1108/OIR-08-2021-0452

Details

Online Information Review, vol. 47 no. 7
Type: Research Article
ISSN: 1468-4527

Case study
Publication date: 12 September 2023

Syeda Maseeha Qumer

Abstract

Learning outcomes

This case is designed to enable students to understand the role of women in artificial intelligence (AI); understand the importance of ethics and diversity in the AI field; discuss the ethical issues of AI; study the implications of unethical AI; examine the dark side of corporate-backed AI research and the difficult relationship between corporate interests and AI ethics research; understand the role played by Gebru in promoting diversity and ethics in AI; and explore how Gebru can attract more women researchers to AI and lead the movement toward inclusive and equitable technology.

Case overview/synopsis

The case discusses how Timnit Gebru, a prominent AI researcher and former co-lead of the Ethical AI research team at Google, is leading the way in promoting diversity, inclusion and ethics in AI. Gebru, one of the field’s most high-profile black women researchers, is an influential voice in the emerging field of ethical AI, which addresses issues of bias, fairness and responsibility. Gebru was fired from Google in December 2020 after the company asked her to retract a research paper she had co-authored on the pitfalls of large language models and the racial and gender bias embedded in AI. While Google maintained that Gebru had resigned, she said she had been fired after raising issues of discrimination in the workplace and drawing attention to bias in AI. In early December 2021, a year after being ousted from Google, Gebru launched an independent, community-driven AI research organization called Distributed Artificial Intelligence Research (DAIR) to develop ethical AI, counter the influence of Big Tech in AI research and development and increase the presence and inclusion of black researchers in the field. The case discusses Gebru’s journey in creating DAIR, the goals of the organization and some of the challenges she could face along the way. As Gebru seeks to increase diversity in the field of AI and reduce the negative impacts of bias in the training data used in AI models, her challenges will be to develop a sustainable revenue model for DAIR, influence AI policies and practices inside Big Tech companies from the outside, inspire and encourage more women to enter the AI field and build a decentralized base of AI expertise.

Complexity academic level

This case is meant for MBA students.

Social implications

Teaching Notes are available for educators only.

Subject code

CCS 11: Strategy

Details

The Case For Women
Type: Case Study
ISSN: 2732-4443

Article
Publication date: 2 October 2023

Mike Thelwall and Kayvan Kousha

Abstract

Purpose

Technology is sometimes used to support assessments of academic research, whether through automatically generated bibliometrics that reviewers consult during their evaluations or by replacing some or all human judgements. With artificial intelligence (AI), there is increasing scope to use technology to assist research assessment processes in new ways. Since transparency and fairness are widely considered important for research assessment and AI introduces new issues, this review investigates their implications.

Design/methodology/approach

This article reviews and briefly summarises transparency and fairness concerns in general terms and through the issues that they raise for various types of technology-assisted research assessment (TARA).

Findings

Whilst TARA can have varying levels of problems with both transparency and bias, in most contexts it is unclear whether it worsens the transparency and bias problems that are inherent in peer review.

Originality/value

This is the first analysis to focus on algorithmic bias and transparency issues in technology-assisted research assessment.

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 19 October 2023

Ace Vo and Miloslava Plachkinova

Abstract

Purpose

The purpose of this study is to examine public perceptions and attitudes toward using artificial intelligence (AI) in the US criminal justice system.

Design/methodology/approach

The authors took a quantitative approach and administered an online survey using the Amazon Mechanical Turk platform. The instrument was developed by integrating prior literature to create multiple scales for measuring public perceptions and attitudes.
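The abstract does not state how the new scales were validated, but a standard first step for multi-item survey scales is an internal-consistency check such as Cronbach’s alpha. The sketch below is one plausible illustration, not the authors’ procedure; the response data are hypothetical.

```python
# Internal-consistency check for a hypothetical four-item perception scale.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical Likert responses (rows: respondents, columns: items).
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")  # ~0.94 here; >0.7 is the usual convention
```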

Findings

The findings suggest that, despite various attempts, significant perceptions of sociodemographic bias in the criminal justice system persist and technology alone cannot alleviate them. However, AI can assist judges in making fairer and more objective decisions through triangulation – offering additional data points to offset individual biases.
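The triangulation mechanism the authors describe can be pictured as score averaging: each additional independent assessment dilutes any single assessor’s idiosyncratic bias. The numbers below are made up purely to illustrate the idea.

```python
# Hypothetical illustration of triangulation: a judge's score combined with
# scores from independent AI models pulls the result toward the ensemble.
judge_score = 7.5            # hypothetical severity assessment from one judge
ai_scores = [6.0, 6.8, 6.4]  # hypothetical assessments from independent models
combined = (judge_score + sum(ai_scores)) / (1 + len(ai_scores))
print(f"combined score: {combined:.2f}")  # 6.68 rather than 7.5 alone
```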

Social implications

Other scholars can build upon the findings and extend the work to shed more light on some problems of growing concern for society – bias and inequality in criminal sentencing. AI can be a valuable tool to assist judges in the decision-making process by offering diverse viewpoints. Furthermore, the authors bridge the gap between the fields of technology and criminal justice and demonstrate how the two can be successfully integrated for the benefit of society.

Originality/value

To the best of the authors’ knowledge, this is among the first studies to examine a complex societal problem like the introduction of technology in a high-stakes environment – the US criminal justice system. Understanding how AI is perceived by society is necessary to develop more transparent and unbiased algorithms for assisting judges in making fair and equitable sentencing decisions. In addition, the authors developed and validated a new scale that can be used to further examine this novel approach to criminal sentencing in the future.

Details

Journal of Information, Communication and Ethics in Society, vol. 21 no. 4
Type: Research Article
ISSN: 1477-996X

Open Access
Article
Publication date: 21 June 2023

Sudhaman Parthasarathy and S.T. Padmapriya

Abstract

Purpose

Algorithmic bias refers to repeatable computer program errors that give some users more weight than others. The aim of this article is to provide deeper insight into algorithmic bias in AI-enabled ERP software customization. Although algorithmic bias in machine learning models has uneven, unfair and unjust impacts, research on it remains mostly anecdotal and scattered.

Design/methodology/approach

Guided by previous research (Akter et al., 2022), this study presents the possible design biases (model, data and method) one may encounter with an enterprise resource planning (ERP) software customization algorithm. It then presents an artificial intelligence (AI) version of the ERP customization algorithm based on the k-nearest neighbours algorithm.
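As a rough picture of what an AI version of a customization algorithm might look like, the sketch below classifies incoming requirements with scikit-learn’s k-nearest neighbours. The features, labels and training data are hypothetical; the abstract does not disclose the actual inputs to the authors’ PRCE variant.

```python
# Hypothetical k-NN classifier for ERP customization decisions. Features:
# [business criticality, estimated effort, affected modules], each on 1-5.
from sklearn.neighbors import KNeighborsClassifier

X_train = [
    [5, 2, 1], [4, 3, 2], [2, 4, 3],
    [1, 5, 4], [3, 3, 2], [5, 1, 1],
]
y_train = ["customize", "customize", "defer", "defer", "customize", "customize"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

print(model.predict([[4, 2, 2]]))  # ['customize'] for this toy data
```

A classifier like this inherits whatever is encoded in its training set: if historical customization decisions favoured certain modules or user groups, the model reproduces that pattern, which is exactly the accompanying bias the paper sets out to manage.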

Findings

This study illustrates the possible bias when the prioritized requirements customization estimation (PRCE) algorithm from the ERP literature is executed without any AI. The authors then present their newly developed AI version of the PRCE algorithm, which uses machine learning techniques, and discuss the algorithmic bias that accompanies it with an illustration. Further, they draw a roadmap for managing algorithmic bias during ERP customization in practice.

Originality/value

To the best of the authors’ knowledge, no prior research has attempted to understand the algorithmic bias that occurs during the execution of the ERP customization algorithm (with or without AI).

Details

Journal of Ethics in Entrepreneurship and Technology, vol. 3 no. 2
Type: Research Article
ISSN: 2633-7436

Article
Publication date: 17 February 2021

Yinying Wang

Abstract

Purpose

Artificial intelligence (AI) refers to a class of algorithms or computerized systems that resemble human mental processes of decision-making. This position paper looks beyond the sensational hyperbole of AI in teaching and learning. Instead, it aims to explore the role of AI in educational leadership.

Design/methodology/approach

To explore the role of AI in educational leadership, I synthesize the literature that intersects AI, decision-making and educational leadership across multiple disciplines, including computer science, educational leadership, administrative science, judgment and decision-making and neuroscience. Grounded in the intellectual interrelationships between AI and educational leadership since the 1950s, the paper starts by conceptualizing decision-making, both individual and organizational, as the foundation of educational leadership. I then elaborate on the symbiotic role of human–AI decision-making.

Findings

With its efficiency in collecting, processing and analyzing data and providing real-time or near-real-time results, AI can bring analytical efficiency to assist educational leaders in making data-driven, evidence-informed decisions. However, AI-assisted, data-driven decision-making may run against value-based moral decision-making. Taken together, both leaders' individual decisions and organizational decisions are best handled by blending data-driven, evidence-informed decision-making with value-based moral decision-making. AI can function as an extended brain in making data-driven, evidence-informed decisions, while its shortcomings can be overcome by human judgment guided by moral values.

Practical implications

The paper concludes with two recommendations for educational leadership practitioners' decision-making and future scholarly inquiry: keeping a watchful eye on biases and guarding against ethically compromised decisions.

Originality/value

This paper brings together two fields, educational leadership and AI, that grew up together from the 1950s and then mostly grew apart until the late 2010s. To explore the role of AI in educational leadership, the paper starts with the foundation of leadership – decision-making, covering both leaders' individual decisions and collective organizational decisions. It then synthesizes the literature that intersects AI, decision-making and educational leadership from multiple disciplines to delineate the role of AI in educational leadership.

Details

Journal of Educational Administration, vol. 59 no. 3
Type: Research Article
ISSN: 0957-8234

Article
Publication date: 31 December 2019

Lynette Yarger, Fay Cobb Payton and Bikalpa Neupane

Abstract

Purpose

The purpose of this paper is to offer a critical analysis of talent acquisition software and its potential for fostering equity in the hiring process for underrepresented IT professionals. The under-representation of women, African-American and Latinx professionals in the IT workforce is a longstanding issue that contributes to and is impacted by algorithmic bias.

Design/methodology/approach

Sources of algorithmic bias in talent acquisition software are identified, and feminist design thinking is presented as a theoretical lens for mitigating that bias.

Findings

Data are just one tool for recruiters to use; human expertise is still necessary. Even well-intentioned algorithms are not neutral and should be audited for morally and legally unacceptable decisions. Feminist design thinking provides a theoretical framework for considering equity in the hiring decisions made by talent acquisition systems and their users.
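One concrete form such an audit can take is a selection-rate comparison across demographic groups, for example the four-fifths rule used in US employment-discrimination screening. The sketch below illustrates that heuristic, not a method from the paper; the outcome data are hypothetical.

```python
# Four-fifths-rule screen over hypothetical shortlisting outcomes: flag any
# group whose selection rate falls below 80% of the highest group's rate.
from collections import defaultdict

outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, shortlisted in outcomes:
    totals[group] += 1
    selected[group] += shortlisted

rates = {g: selected[g] / totals[g] for g in totals}
best = max(rates.values())
for g, rate in rates.items():
    status = "FLAG" if rate < 0.8 * best else "ok"
    print(f"{g}: selection rate {rate:.2f} ({status})")
```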

Social implications

This research implies that algorithms may serve to codify deep-seated biases, keeping IT work environments as homogeneous as they currently are. If bias exists in talent acquisition software, the potential for propagating inequity and harm is far more significant and widespread because of the homogeneity of the specialists creating artificial intelligence (AI) systems.

Originality/value

This work uses equity as a central concept for considering algorithmic bias in talent acquisition. Feminist design thinking provides a framework for fostering a richer understanding of what fairness means and evaluating how AI software might impact marginalized populations.

Details

Online Information Review, vol. 44 no. 2
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 22 January 2024

Dinesh Kumar and Nidhi Suthar

Abstract

Purpose

Artificial intelligence (AI) has sparked interest in various areas, including marketing. However, this enthusiasm is being tempered by growing concern about the moral and legal implications of using AI in marketing. Although previous research has revealed various ethical and legal issues, such as algorithmic discrimination and data privacy, definitive answers remain elusive. This paper aims to fill this gap by investigating the ethical and legal concerns of AI in marketing and suggesting feasible solutions.

Design/methodology/approach

The paper synthesises information from academic articles, industry reports, case studies and legal documents through a thematic literature review. A qualitative analysis approach categorises and interprets ethical and legal challenges and proposes potential solutions.

Findings

The findings of this paper raise concerns about ethical and legal challenges related to AI in the marketing area. Ethical concerns related to discrimination, bias, manipulation, job displacement, absence of social interaction, cybersecurity, unintended consequences, environmental impact, privacy and legal issues such as consumer security, responsibility, liability, brand protection, competition law, agreements, data protection, consumer protection and intellectual property rights are discussed in the paper, and their potential solutions are discussed.

Research limitations/implications

Notwithstanding the insights gathered from this investigation of the ethical and legal consequences of AI in marketing, it is important to recognise the limits of this research. First, the study is confined to a review of the most important ethical and legal issues pertaining to AI in marketing; additional repercussions, such as those associated with intellectual property, contracts and licencing, should be investigated more deeply in future studies. Second, although this study offers various answers and best practices for tackling the stated ethical and legal concerns, the viability and efficacy of these solutions may differ by context and industry, so more research and case studies are required to evaluate their applicability in other circumstances. Third, this research is based mostly on a literature review and may not represent the experiences or opinions of all stakeholders engaged in AI-powered marketing; further study might involve interviews or surveys with marketing professionals, customers and other key stakeholders to provide a fuller understanding of the practical difficulties and solutions. Finally, because of the rapid pace of technical progress, the ethical and regulatory ramifications of AI in marketing are continually evolving, so this work should serve as a springboard for further research and continuing conversation on the subject.

Practical implications

This study’s findings have several practical implications for marketing professionals.

Emphasising openness and explainability: marketing professionals should prioritise transparency in their use of AI, ensuring that customers are fully informed about data collection and utilisation for targeted advertising. By promoting openness and explainability, marketers can foster customer trust and avoid the negative consequences of a lack of transparency.

Establishing ethical guidelines: marketing professionals need to develop ethical rules for the creation and implementation of AI-powered marketing strategies. Adhering to ethical principles ensures compliance with legal norms and aligns with the organisation’s values and ideals.

Investing in bias detection tools and privacy-enhancing technology: to mitigate risks associated with AI in marketing, marketers should allocate resources to develop and implement bias detection tools and privacy-enhancing technology. These tools can identify and address biases in AI algorithms, safeguard consumer privacy and extract valuable insights from consumer data.
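On the privacy-enhancing side, one well-established technique that could back such an investment is the Laplace mechanism from differential privacy, which lets aggregate campaign statistics be released with calibrated noise. This is a sketch of the general technique, not a recommendation from the paper; the epsilon value and the query are illustrative.

```python
# Laplace mechanism: answer a count query with noise scaled to sensitivity/epsilon.
import numpy as np

rng = np.random.default_rng()

def dp_count(true_count: int, epsilon: float = 1.0, sensitivity: float = 1.0) -> float:
    """Differentially private count: smaller epsilon means stronger privacy."""
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# e.g. how many customers in a segment clicked an ad
print(dp_count(1234))  # true value obscured; repeated queries consume privacy budget
```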

Social implications

This study’s social implications emphasise the need for a comprehensive approach to address the ethical and legal challenges of AI in marketing. This includes adopting a responsible innovation framework, promoting ethical leadership, using ethical decision-making frameworks and conducting multidisciplinary research. By incorporating these approaches, marketers can navigate the complexities of AI in marketing responsibly, foster an ethical organisational culture, make informed ethical decisions and develop effective solutions. Such practices promote public trust, ensure equitable distribution of benefits and risk, and mitigate potential negative social consequences associated with AI in marketing.

Originality/value

To the best of the authors’ knowledge, this paper is among the first to explore potential solutions comprehensively. This paper provides a nuanced understanding of the challenges by using a multidisciplinary framework and synthesising various sources. It contributes valuable insights for academia and industry.

Details

Journal of Information, Communication and Ethics in Society, vol. 22 no. 1
Type: Research Article
ISSN: 1477-996X
