Search results
1 – 10 of over 9000
Abstract
A history of the intellectual origins of the debate over the astructural bias is presented. The chapter summarizes both the emergent bias thesis and the charge of an astructural bias. The major works within this debate are reviewed. It has been found that the astructural bias still exists within the work of contemporary interactionists. The conclusion is that if interactionists want their work to be taken seriously, then they must seriously confront the distinguishing concept in sociology: social structure.
Abstract
The purpose of this conceptual chapter is to analyze the current state of the astructural bias in symbolic interactionism as it relates to three interrelated processes over time: (1) the formalization of critiques of symbolic interactionism as an ahistorical, astructural, and acritical perspective; (2) an ahistorical understanding of early expressions of the disjuncture between symbolic interactionism and more widely accepted forms of sociological theorizing; and (3) persistent and widespread inattentiveness to past and present evidence-based arguments against the charge that symbolic interactionism is an astructural, ahistorical, and acritical sociological perspective. The argument frames the historical development of the astructural bias concept as historically and socially conditioned, tracing it from its emergence through its rejection to conclusions about the contemporary state of the astructural bias as evidenced in the symbolic interactionist literature of the last few decades. The analysis concludes that the contemporary result of these intertwined historical and social conditioning processes is that the myth of an astructural bias has been made real in practice, and that its reification has had the ruinous effect of virtually eradicating a vital tradition in the interactionist perspective, one that extends back to its earliest formulations. The conclusion therefore offers a handful of suggestions for reclaiming the unorthodox structuralism of symbolic interactionism and the related interactionist study of social organization.
Abstract
Purpose
Gender bias in artificial intelligence (AI) should be solved as a priority before AI algorithms become ubiquitous, perpetuating and accentuating the bias. While the problem has been identified as an established research and policy agenda, a cohesive review of existing research specifically addressing gender bias from a socio-technical viewpoint is lacking. Thus, the purpose of this study is to determine the social causes and consequences of, and proposed solutions to, gender bias in AI algorithms.
Design/methodology/approach
A comprehensive systematic review followed established protocols to ensure accurate and verifiable identification of suitable articles. The process revealed 177 articles in the socio-technical framework, with 64 articles selected for in-depth analysis.
Findings
Most previous research has focused on technical rather than social causes, consequences and solutions to AI bias. From a social perspective, gender bias in AI algorithms can be attributed equally to algorithmic design and training datasets. Social consequences are wide-ranging, with amplification of existing bias the most common at 28%. Social solutions were concentrated on algorithmic design, specifically improving diversity in AI development teams (30%), increasing awareness (23%), human-in-the-loop (23%) and integrating ethics into the design process (21%).
Originality/value
This systematic review is the first of its kind to focus on gender bias in AI algorithms from a social perspective within a socio-technical framework. Identification of key causes and consequences of bias and the breakdown of potential solutions provides direction for future research and policy within the growing field of AI ethics.
Peer review
The peer review history for this article is available at https://publons.com/publon/10.1108/OIR-08-2021-0452
FR. Oswald A. J. Mascarenhas, S.J.
Abstract
Executive Summary
Artificial intelligence (AI) is intelligence displayed by machines, in contrast with the natural intelligence (NI) displayed by humans and other animals. It is also known as machine intelligence (MI), a term used when a machine mimics the cognitive functions that humans associate with human ability, such as logical reasoning, learning, and problem-solving. From Facebook’s automatic tagging suggestions to driverless cars, AI is rapidly progressing, and the ethical and moral question, therefore, is no longer whether AI should exist. AI exists and is already helping to improve various aspects of life such as health, safety, convenience, and overall standard of living. AI can take over routine, mechanical, repetitive, and boring jobs, freeing human creative and innovative talent for big-thinking projects and for humanizing work and society. AI can provide digital assistance in routine day-to-day tasks, detect cancer, diagnose rare diseases, and even prevent car crashes. AI can replace jobs, but not human work. Work as duty, self-actualization, and destiny will always continue, if not on the shop or office floor or in boardrooms, then at home, in gardens, in places of prayer and worship, in labs of creativity and innovation, and in society and civilizations. While AI may indirectly free human talent for more meaningful and creative work, it can rarely participate in higher purposes such as creating bonding and belonging groups, creating forgiving and compassionate communities, drumming up small businesses, startups, and corporations, and harmonizing and humanizing this planet and cosmos for bliss or happiness. This chapter on AI, while investigating its market turbulence, goes beyond legal aspects to the ethical, moral, and spiritual dimensions and sacred opportunities of AI.
Abstract
Purpose
The purpose of this paper is to explain to readers how intelligent systems can fail and how artificial intelligence (AI) safety is different from cybersecurity. The goal of cybersecurity is to reduce the number of successful attacks on the system; the goal of AI Safety is to make sure zero attacks succeed in bypassing the safety mechanisms. Unfortunately, such a level of performance is unachievable. Every security system will eventually fail; there is no such thing as a 100 per cent secure system.
Design/methodology/approach
AI Safety can be improved based on ideas developed by cybersecurity experts. For narrow AI Safety, failures are at the same, moderate level of criticality as in cybersecurity; however, for general AI, failures have a fundamentally different impact. A single failure of a superintelligent system may cause a catastrophic event without a chance for recovery.
Findings
In this paper, the authors present and analyze reported failures of artificially intelligent systems and extrapolate their analysis to future AIs. The authors suggest that both the frequency and the seriousness of future AI failures will steadily increase.
Originality/value
This is a first attempt to assemble a public data set of AI failures and, as such, is extremely valuable to AI Safety researchers.
Abstract
Purpose
The purpose of this paper is to raise attention within the records management community about evolving demands for explanations that make it possible to understand the content of records, also when they reflect output from algorithms.
Design/methodology/approach
The methodological approach is a conceptual analysis based in records management theory and the philosophy of science. The concepts that are developed are thereafter applied to “the right to an explanation” and “an algorithmic ethics approach,” respectively, to further examine their viability.
Findings
Different forms of explanations, ranging from “certain” explanations to predictions, as well as varying degrees of control over the input data to algorithms, affect the nature of the explanations and what kinds of records the explanations may reside in.
Originality/value
This paper contributes to a conceptual frame for discussing where explanations to algorithms may be documented, within different kinds of records, emanating from different kinds of processes.
Abstract
Research methodology
The case is presented as descriptive in nature and primarily involves exploratory research.
Case overview/synopsis
Ashraf, a young graduate from Bangalore, India, started a chain of lifestyle shops, his family business, in Khartoum, Sudan. To modernize the shops, Ashraf approached a small finance bank for financial assistance. However, despite submitting the required documents and having a good credit score, he was denied a loan. The bank officials explained that the loan automation software had not approved the application and that they could do nothing further. Disappointed, Ashraf sought the help of his professor, John, to understand why the software had rejected his application. Professor John explained to Ashraf the advantages and disadvantages of automation. In the process, Ashraf came to understand the significance of, and the compelling need to address, “Algorithm Bias,” a situation in which specific attributes of an algorithm cause unfair outcomes. The case places students in Ashraf’s position to help them understand the advantages and issues of applying automation through artificial intelligence.
Complexity academic level
The case suits graduate-level courses like business analytics, financial analytics and business intelligence.
Learning objectives
Through the case, students will be able to: understand the role of algorithms in business and society; understand the causes, effects and methods of reducing algorithm bias; demonstrate the ability to detect algorithm bias; and define policies to mitigate algorithm bias.
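One common way to detect the kind of algorithm bias the case describes is to compare a model's approval rates across applicant groups. The sketch below is purely illustrative and is not drawn from the case itself: the data, group labels, and the "four-fifths rule" threshold of 0.8 are assumptions used only to show the idea of a disparate impact ratio.

```python
# Illustrative sketch of detecting algorithm bias via the
# "disparate impact" ratio: the approval rate of one group divided
# by the approval rate of another. A ratio well below 1.0 (a common
# rule of thumb flags values under 0.8) suggests a biased outcome.
# All data here is hypothetical.

def approval_rate(decisions):
    """Fraction of decisions that were approvals (True)."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of group A's approval rate to group B's."""
    return approval_rate(group_a) / approval_rate(group_b)

# Hypothetical loan decisions (True = approved) for two groups.
group_a = [True, False, False, False, True]   # 2 of 5 approved (40%)
group_b = [True, True, True, False, True]     # 4 of 5 approved (80%)

ratio = disparate_impact(group_a, group_b)    # 0.40 / 0.80 = 0.50
biased = ratio < 0.8                          # "four-fifths rule" check
print(f"Disparate impact ratio: {ratio:.2f}, flagged: {biased}")
```

In a classroom setting, students could apply the same ratio to the automation software's historical decisions to check whether applicants like Ashraf are approved at systematically lower rates.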
Brandy Pieper and Masha Krsmanovic
Abstract
Purpose
The purpose of this study is to examine whether implicit bias exists within the graduate admissions process at a large public research university in the Southeast United States. Additionally, this research sought to identify the type of strategies graduate faculty in the USA use to assess their implicit bias and the support they may need to better recognize and gauge implicit bias during the graduate application review process.
Design/methodology/approach
This study used a qualitative, phenomenological research design, conducting individual interviews with graduate faculty members who serve on admissions committees.
Findings
The findings revealed six themes in relation to the purpose of the study – bias recognition, faculty perceptions of their own bias, faculty perceptions on the bias of others, strategies for the application review process, admission committee safeguards and the need for implicit bias training.
Originality/value
The study outcomes are discussed in relation to the prior research and literature on this phenomenon. Additionally, the study presents research and practical implications, including actionable strategies for how its results can be practically applied.
Abstract
This chapter assesses the power focus in contemporary interactionist theory, and advances several premises about power based on recent research and theory. I first examine the main assumptions of the view of power that emerged in the wake of the astructural bias debate, which became an implicit standard for assessments of power in the tradition. Next, I explore the criticisms of the astructural bias thesis and related conceptualization. My argument is that while the debate correctly spotlighted the power deficit of interactionism, it had theoretical implications that distracted us from the task of fully conceptualizing power. In the second part of this chapter, I examine recent interactionist work in order to build general premises that can advance interactionist theory of power. Based on this analysis, I elaborate four premises that interactionists can use, regardless of theoretical orientation. Drawing on examples from my ethnographic research, I illustrate how researchers can benefit from the use of these premises.