Search results

1 – 10 of over 6000
Open Access
Article
Publication date: 1 November 2023

Dan Jin

The purpose of this study is to provide insights and guidance for practitioners in terms of ensuring rigorous ethical and moral conduct in artificial intelligence (AI) hiring and…

Abstract

Purpose

The purpose of this study is to provide insights and guidance for practitioners in terms of ensuring rigorous ethical and moral conduct in artificial intelligence (AI) hiring and implementation.

Design/methodology/approach

The research employed two experimental designs and one pilot study to investigate: the ethical and moral implications of different levels of AI implementation in the hospitality industry; the intersection of self-congruency and ethical considerations when AI replaces human service providers; and the impact of psychological distance associated with AI on individuals' ethical and moral considerations. These research methods included surveys and experimental manipulations to gather and analyze relevant data.

Findings

Findings provide valuable insights into the ethical and moral dimensions of AI implementation, the influence of self-congruency on ethical considerations and the role of psychological distance in individuals’ ethical evaluations. They contribute to the development of guidelines and practices for the responsible and ethical implementation of AI in various industries, including the hospitality sector.

Practical implications

The study highlights the importance of rigorous ethical and moral practices in AI hiring and implementation to ensure that AI principles are upheld and enforced in the restaurant industry. It provides practitioners with useful insights into how AI-robotization can improve ethical and moral standards.

Originality/value

The study contributes to the literature by providing insights into the ethical and moral implications of AI service robots in the hospitality industry. Additionally, the study explores the relationship between psychological distance and acceptance of AI-intervened service, which has not been extensively studied in the literature.

Details

International Hospitality Review, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2516-8142

Article
Publication date: 20 January 2022

Verma Prikshat, Parth Patel, Arup Varma and Alessio Ishizaka

This narrative review presents a multi-stakeholder ethical framework for AI-augmented HRM, based on extant research in the domains of ethical HRM and ethical AI. More…

Abstract

Purpose

This narrative review presents a multi-stakeholder ethical framework for AI-augmented HRM, based on extant research in the domains of ethical HRM and ethical AI. More specifically, the authors identify critical ethical issues pertaining to AI-augmented HRM functions and suggest ethical principles to address these issues by identifying the relevant stakeholders based on the responsibility ethics approach.

Design/methodology/approach

This paper follows a narrative review approach by first identifying the various ethical codes, issues and dilemmas discussed in HRM and AI. The authors next discuss ethical issues concerning AI-augmented HRM, drawing from recent literature. Finally, the authors propose ethical principles for AI-augmented HRM and identify the stakeholders responsible for managing those issues.

Findings

The paper summarises key findings of extant research in the ethical HRM and AI domain and provides a multi-stakeholder ethical framework for AI-augmented HRM functions.

Originality/value

This research's value lies in conceptualising a multi-stakeholder ethical framework for AI-augmented HRM functions comprising 11 ethical principles. The research also identifies the class of stakeholders responsible for each identified ethical principle and presents future research directions based on the proposed model.

Details

International Journal of Manpower, vol. 43 no. 1
Type: Research Article
ISSN: 0143-7720

Article
Publication date: 7 December 2021

Kumar Saurabh, Ridhi Arora, Neelam Rani, Debasisha Mishra and M. Ramkumar

Digital transformation (DT) leverages digital technologies to change current processes and introduce new processes in any organisation’s business model, customer/user experience…

Abstract

Purpose

Digital transformation (DT) leverages digital technologies to change current processes and introduce new processes in any organisation’s business model, customer/user experience and operational processes (DT pillars). Artificial intelligence (AI) plays a significant role in achieving DT. As DT touches every sphere of humanity, AI-led DT raises many fundamental questions about the systems being deployed: how they should behave, what risks they carry and what monitoring and evaluation controls are in hand. These issues call for integrating ethics into AI-led DT. The purpose of this study is to develop an "AI-led ethical digital transformation framework".

Design/methodology/approach

Based on a literature survey, various existing business ethics decision-making models were synthesised. The authors mapped essential characteristics of these ethics models, such as intensity and the individual, organisational and opportunity factors, onto the proposed AI-led ethical DT. The DT framework is evaluated using a thematic analysis of 23 expert interviews with relevant AI ethics personas from industry and society. The qualitative interview and opinion data were analysed using MAXQDA software.
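
A minimal sketch of the kind of bookkeeping behind such a thematic analysis is shown below. MAXQDA is a proprietary GUI tool, so this Python fragment is only an illustrative stand-in: the interview IDs and code assignments are hypothetical, while the deduced/induced split and the four induced theme names follow the abstract.

```python
# Hypothetical illustration of tallying deduced vs induced ethical value codes
# across coded interview segments (the actual study used MAXQDA for this step).
from collections import Counter

# Interview ID -> codes assigned to its segments (invented example data).
coded_segments = {
    "expert_01": ["transparency", "perceived value", "privacy"],
    "expert_02": ["corporate social responsibility", "transparency"],
    "expert_03": ["standard benchmarking", "learning willingness", "privacy"],
}

# Deduced codes come from existing ethics models; induced codes emerge from the
# data (the four induced themes named in the abstract's findings).
deduced_codes = {"transparency", "privacy"}
induced_codes = {
    "corporate social responsibility",
    "perceived value",
    "standard benchmarking",
    "learning willingness",
}

# Count how often each code appears across all interviews.
frequencies = Counter(code for codes in coded_segments.values() for code in codes)

for code, count in frequencies.most_common():
    origin = "deduced" if code in deduced_codes else "induced"
    print(f"{code:35s} {origin:8s} {count}")
```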

Findings

The authors have explored how AI can drive the ethical DT framework and have identified the core constituents of developing an AI-led ethical DT framework. Backed by established ethical theories, the paper shows how the DT pillars are related and sequenced with respect to ethical factors. This research makes it possible to examine theoretically sequenced ethical factors alongside practical DT pillars.

Originality/value

The study establishes deduced and induced ethical value codes based on thematic analysis to develop guidelines for the pursuit of ethical DT. The authors identify four unique induced themes, namely, corporate social responsibility, perceived value, standard benchmarking and learning willingness. The comprehensive findings of this research, supported by a robust theoretical background, have substantial implications for academic research and corporate applicability. The proposed AI-led ethical DT framework is unique and can be used for integrated social, technological and economic ethical research.

Details

Journal of Information, Communication and Ethics in Society, vol. 20 no. 2
Type: Research Article
ISSN: 1477-996X

Article
Publication date: 25 April 2024

Mojtaba Rezaei, Marco Pironti and Roberto Quaglia

This study aims to identify and assess the key ethical challenges associated with integrating artificial intelligence (AI) in knowledge-sharing (KS) practices and their…

Abstract

Purpose

This study aims to identify and assess the key ethical challenges associated with integrating artificial intelligence (AI) in knowledge-sharing (KS) practices and their implications for decision-making (DM) processes within organisations.

Design/methodology/approach

The study employs a mixed-methods approach, beginning with a comprehensive literature review to extract background information on AI and KS and to identify potential ethical challenges. Subsequently, a confirmatory factor analysis (CFA) is conducted using data collected from individuals employed in business settings to validate the challenges identified in the literature and assess their impact on DM processes.
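
As a rough illustration of the CFA step, the sketch below uses the Python package semopy with a lavaan-style model description. The latent constructs, indicator names and the survey.csv file are hypothetical placeholders; the authors' actual measurement model and data are not reported here.

```python
# Hedged sketch of a confirmatory factor analysis, not the authors' actual model.
# Requires: pip install semopy pandas
import pandas as pd
import semopy

# Lavaan-style description: each latent ethical challenge is measured by several
# survey items, and decision-making (DM) is regressed on the challenges.
# All construct and item names below are hypothetical.
model_desc = """
PRIVACY =~ priv1 + priv2 + priv3
BIAS    =~ bias1 + bias2 + bias3
TRANSP  =~ tran1 + tran2 + tran3
DM      =~ dm1 + dm2 + dm3
DM ~ PRIVACY + BIAS + TRANSP
"""

# survey.csv would hold item-level responses from employees in business settings
# (hypothetical file name and columns).
data = pd.read_csv("survey.csv")

model = semopy.Model(model_desc)
model.fit(data)

# Factor loadings and structural coefficients; large coefficients on PRIVACY,
# BIAS and TRANSP would mirror the kind of result reported in the findings.
print(model.inspect())
```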

Findings

The findings reveal that challenges related to privacy and data protection, bias and fairness and transparency and explainability are particularly significant in DM. Moreover, challenges related to accountability and responsibility and the impact of AI on employment also show relatively high coefficients, highlighting their importance in the DM process. In contrast, challenges such as intellectual property and ownership, algorithmic manipulation and global governance and regulation are found to be less central to the DM process.

Originality/value

This research contributes to the ongoing discourse on the ethical challenges of AI in knowledge management (KM) and DM within organisations. By providing insights and recommendations for researchers, managers and policymakers, the study emphasises the need for a holistic and collaborative approach to harness the benefits of AI technologies whilst mitigating their associated risks.

Details

Management Decision, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0025-1747

Open Access
Article
Publication date: 15 February 2024

Hina Naz and Muhammad Kashif

Artificial intelligence (AI) offers many benefits to improve predictive marketing practice. It raises ethical concerns regarding customer prioritization, market share…

Abstract

Purpose

Artificial intelligence (AI) offers many benefits to improve predictive marketing practice. However, it raises ethical concerns regarding customer prioritization, market share concentration and consumer manipulation. This paper explores these ethical concerns from a contemporary perspective, drawing on the experiences and perspectives of AI and predictive marketing professionals, and thereby aims to contribute a modern perspective on the ethical concerns of AI usage in predictive marketing.

Design/methodology/approach

The study conducted semi-structured interviews over six weeks with 14 participants experienced in AI-enabled systems for marketing, using purposive and snowball sampling techniques. Thematic analysis was used to explore themes emerging from the data.

Findings

Results reveal that using AI in marketing could lead to unintended consequences, such as perpetuating existing biases, violating customer privacy, limiting competition and manipulating consumer behavior.

Originality/value

The authors identify seven unique themes and benchmark them with Ashok’s model to provide a structured lens for interpreting the results. The framework presented by this research is unique and can be used to support ethical research spanning social, technological and economic aspects within the predictive marketing domain.

Article
Publication date: 22 January 2024

Dinesh Kumar and Nidhi Suthar

Artificial intelligence (AI) has sparked interest in various areas, including marketing. However, this exhilaration is being tempered by growing concerns about the moral and legal…

Abstract

Purpose

Artificial intelligence (AI) has sparked interest in various areas, including marketing. However, this exhilaration is being tempered by growing concerns about the moral and legal implications of using AI in marketing. Although previous research has revealed various ethical and legal issues, such as algorithmic discrimination and data privacy, there are no definitive answers. This paper aims to fill this gap by investigating AI’s ethical and legal concerns in marketing and suggesting feasible solutions.

Design/methodology/approach

The paper synthesises information from academic articles, industry reports, case studies and legal documents through a thematic literature review. A qualitative analysis approach categorises and interprets ethical and legal challenges and proposes potential solutions.

Findings

The findings of this paper raise concerns about ethical and legal challenges related to AI in marketing. The paper discusses ethical concerns related to discrimination, bias, manipulation, job displacement, absence of social interaction, cybersecurity, unintended consequences, environmental impact and privacy, as well as legal issues such as consumer security, responsibility, liability, brand protection, competition law, agreements, data protection, consumer protection and intellectual property rights, along with their potential solutions.

Research limitations/implications

Notwithstanding the insights gathered from this investigation of the ethical and legal consequences of AI in marketing, it is important to recognise the limits of this research. First, the study is confined to a review of the most important ethical and legal issues pertaining to AI in marketing; additional repercussions, such as those associated with intellectual property, contracts and licencing, should be investigated more deeply in future studies. Although this study offers various solutions and best practices for tackling the stated ethical and legal concerns, their viability and efficacy may differ depending on the context and industry, so further research and case studies are required to evaluate their applicability in other circumstances. This research is based largely on a literature review and may not represent the experiences or opinions of all stakeholders engaged in AI-powered marketing; further study might involve interviews or surveys with marketing professionals, customers and other key stakeholders to build a fuller understanding of the practical difficulties and solutions. Because of the rapid pace of technological progress, the ethical and regulatory ramifications of AI in marketing continue to evolve; consequently, this work should serve as a springboard for further research and continuing conversations on this subject.

Practical implications

This study’s findings have several practical implications for marketing professionals.

Emphasising openness and explainability: Marketing professionals should prioritise transparency in their use of AI, ensuring that customers are fully informed about data collection and utilisation for targeted advertising. By promoting openness and explainability, marketers can foster customer trust and avoid the negative consequences of a lack of transparency.

Establishing ethical guidelines: Marketing professionals need to develop ethical rules for the creation and implementation of AI-powered marketing strategies. Adhering to ethical principles ensures compliance with legal norms and aligns with the organisation’s values and ideals.

Investing in bias detection tools and privacy-enhancing technology: To mitigate risks associated with AI in marketing, marketers should allocate resources to develop and implement bias detection tools and privacy-enhancing technology. These tools can identify and address biases in AI algorithms, safeguard consumer privacy and extract valuable insights from consumer data.

Social implications

This study’s social implications emphasise the need for a comprehensive approach to address the ethical and legal challenges of AI in marketing. This includes adopting a responsible innovation framework, promoting ethical leadership, using ethical decision-making frameworks and conducting multidisciplinary research. By incorporating these approaches, marketers can navigate the complexities of AI in marketing responsibly, foster an ethical organisational culture, make informed ethical decisions and develop effective solutions. Such practices promote public trust, ensure equitable distribution of benefits and risk, and mitigate potential negative social consequences associated with AI in marketing.

Originality/value

To the best of the authors’ knowledge, this paper is among the first to explore potential solutions comprehensively. This paper provides a nuanced understanding of the challenges by using a multidisciplinary framework and synthesising various sources. It contributes valuable insights for academia and industry.

Details

Journal of Information, Communication and Ethics in Society, vol. 22 no. 1
Type: Research Article
ISSN: 1477-996X

Article
Publication date: 5 July 2023

Manoj Kumar Kamila and Sahil Singh Jasrotia

This study aims to analyse the ethical implications associated with the development of artificial intelligence (AI) technologies and to examine the potential ethical ramifications…

Abstract

Purpose

This study aims to analyse the ethical implications associated with the development of artificial intelligence (AI) technologies and to examine their potential ethical ramifications.

Design/methodology/approach

This study undertakes a thorough examination of the existing academic literature on the ethical considerations surrounding AI. Additionally, it conducts in-depth interviews with individuals to explore the potential benefits and drawbacks of AI technologies operating as autonomous ethical agents. A total of 20 semi-structured interviews were conducted, and the data were transcribed and analysed following grounded theory methodology.

Findings

The study asserts the importance of fostering an ethical environment in the progress of AI and suggests potential avenues for further investigation in the field of AI ethics. It identifies privacy and security, bias and fairness, trust and reliability, transparency and human–AI interactions as the major ethical concerns.

Research limitations/implications

The implications of the study are far-reaching, spanning policy development, the design of AI systems, the establishment of trust, education and training, public awareness and further research. Its limitations include the potential biases inherent in purposive sampling, the constantly evolving landscape of AI ethics and the challenge of extrapolating findings to all AI applications and contexts.

Originality/value

The novelty of the study is attributed to its comprehensive methodology, which encompasses a wide range of stakeholder perspectives on the ethical implications of AI in the corporate sector. The ultimate goal is to promote the development of AI systems that exhibit responsibility, transparency and accountability.

Details

International Journal of Ethics and Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9369

Open Access
Article
Publication date: 21 June 2022

Othmar Manfred Lehner, Kim Ittonen, Hanna Silvola, Eva Ström and Alena Wührleitner

This paper aims to identify ethical challenges of using artificial intelligence (AI)-based accounting systems for decision-making and discusses its findings based on Rest's…

Abstract

Purpose

This paper aims to identify ethical challenges of using artificial intelligence (AI)-based accounting systems for decision-making and discusses its findings based on Rest's four-component model of antecedents for ethical decision-making. This study derives implications for accounting and auditing scholars and practitioners.

Design/methodology/approach

This research is rooted in the hermeneutics tradition of interpretative accounting research, in which the reader and the texts engage in a form of dialogue. To substantiate this dialogue, the authors conduct a theoretically informed, narrative (semi-systematic) literature review spanning the years 2015–2020. The review's narrative is driven by the depicted contexts, and the accounting/auditing practices found in the selected articles, rather than their research aims or methods, serve as the sample.

Findings

In the thematic coding of the selected papers, the authors identify five major ethical challenges of AI-based decision-making in accounting: objectivity, privacy, transparency, accountability and trustworthiness. Using Rest's component model of antecedents for ethical decision-making as a stable structuring framework, the authors critically discuss these challenges and their relevance for a future human–machine collaboration with varying agency between humans and AI.

Originality/value

This paper contributes to the literature on accounting as a subjectivising as well as mediating practice in a socio-material context. It does so by providing a solid base of arguments that AI alone, despite its enabling and mediating role in accounting, cannot make ethical accounting decisions because it lacks the necessary preconditions in terms of Rest's model of antecedents. What is more, as AI is bound to pre-set goals and subjected to human-made conditions despite its autonomous learning and adaptive practices, it lacks true agency. As a consequence, accountability needs to be shared between humans and AI. The authors suggest that related governance as well as internal and external auditing processes need to be adapted in terms of skills and awareness to ensure ethical AI-based decision-making.

Details

Accounting, Auditing & Accountability Journal, vol. 35 no. 9
Type: Research Article
ISSN: 0951-3574

Article
Publication date: 11 March 2022

Aline Shakti Franzke

As Big Data and Artificial Intelligence (AI) proliferate, calls have emerged for ethical reflection. Ethics guidelines have played a central role in this respect. While…

Abstract

Purpose

As Big Data and Artificial Intelligence (AI) proliferate, calls have emerged for ethical reflection. Ethics guidelines have played a central role in this respect. While quantitative research on the ethics guidelines of AI/Big Data has been undertaken, there has been a dearth of systematic qualitative analyses of these documents.

Design/methodology/approach

Aiming to address this research gap, this paper analyses 70 international ethics guidelines documents from academia, NGOs and the corporate realm, published between 2017 and 2020.

Findings

The article presents four key findings: existing ethics guidelines (1) promote a broad spectrum of values; (2) focus principally on AI, followed by (Big) Data and algorithms; (3) do not adequately define the term “ethics” and related terms; and (4) have most frequent recourse to the values of “transparency,” “privacy,” and “security.” Based on these findings, the article argues that the guidelines corpus exhibits discernible utilitarian tendencies; guidelines would benefit from greater reflexivity with respect to their ethical framework; and virtue ethical approaches have a valuable contribution to make to the process of guidelines development.

Originality/value

The paper provides qualitative insights into the ethical discourse surrounding AI guidelines, as well as a concise overview of different types of operative translations of theoretical ethical concepts vis-à-vis the sphere of AI. These may prove beneficial for (applied) ethicists, developers and regulators who understand these guidelines as policy.

Details

Journal of Information, Communication and Ethics in Society, vol. 20 no. 4
Type: Research Article
ISSN: 1477-996X

Open Access
Article
Publication date: 13 March 2024

Abdolrasoul Habibipour

This study aims to investigate how living lab (LL) activities align with responsible research and innovation (RRI) principles, particularly in artificial intelligence (AI)-driven…

Abstract

Purpose

This study aims to investigate how living lab (LL) activities align with responsible research and innovation (RRI) principles, particularly in artificial intelligence (AI)-driven digital transformation (DT) processes. The study seeks to define a framework termed “responsible living lab” (RLL), emphasizing transparency, stakeholder engagement, ethics and sustainability. This emerging issue paper also proposes several directions for future researchers in the field.

Design/methodology/approach

The research methodology involved a literature review complemented by insights from a workshop on defining RLLs. The literature review followed a concept-centric approach, searching key journals and conferences and yielding 32 relevant articles; backward and forward citation analysis added 19 more. The workshop, conducted in the context of the UrbanTestbeds.JR and SynAir-G projects, used a reverse-brainstorming approach to explore potential ethics- and responsibility-related issues in LL activities. In total, 13 experts engaged in collaborative discussions, offering insights into AI’s role in promoting RRI within LL activities. The workshop facilitated knowledge sharing and a deeper understanding of RLLs, particularly in the context of DT and AI.
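
The backward/forward citation step can be pictured as a simple snowballing pass over a citation graph. The sketch below is a hypothetical, in-memory stand-in for a real bibliographic database or API; the article identifiers are invented.

```python
# Hypothetical illustration of one round of backward/forward citation snowballing.
seed_articles = {"LL-review-2019", "RRI-framework-2020"}  # invented IDs

# For each seed: the works it cites (backward) and the works that cite it (forward).
references = {
    "LL-review-2019": {"living-lab-2014", "open-innovation-2016"},
    "RRI-framework-2020": {"rri-principles-2013"},
}
cited_by = {
    "LL-review-2019": {"urban-lab-2022"},
    "RRI-framework-2020": {"ai-ethics-2023", "urban-lab-2022"},
}

def snowball(seeds, refs, citers):
    """Return new candidate articles found via backward and forward citations."""
    candidates = set()
    for article in seeds:
        candidates |= refs.get(article, set())    # backward: reference lists
        candidates |= citers.get(article, set())  # forward: citing papers
    return candidates - seeds  # only genuinely new items go on to screening

print(sorted(snowball(seed_articles, references, cited_by)))
```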

Findings

This emerging issue paper highlights ethical considerations in LL activities, emphasizing user voluntariness, user interests and unintended participation. AI in DT introduces challenges such as bias, transparency and the digital divide, necessitating responsible practices. Workshop insights underscore both challenges (AI bias, data privacy and transparency) and opportunities (inclusive decision-making and more efficient innovation). The synthesis defines RLLs as frameworks ensuring transparency, stakeholder engagement, ethical considerations and sustainability in AI-driven DT within LLs. RLLs aim to align DT with ethical values, fostering inclusivity, responsible resource use and human rights protection.

Originality/value

The proposed definition of RLL introduces a framework prioritizing transparency, stakeholder engagement, ethics and sustainability in LL activities, particularly those involving AI for DT. This definition aligns LL practices with RRI, addressing ethical implications of AI. The value of RLL lies in promoting inclusive and sustainable innovation, prioritizing stakeholder needs, fostering collaboration and ensuring environmental and social responsibility throughout LL activities. This concept serves as a foundational step toward a more responsible and sustainable LL approach in the era of AI-driven technologies.

Details

Journal of Information, Communication and Ethics in Society, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1477-996X
