Search results

1 – 10 of 89
Article
Publication date: 22 January 2024

Dinesh Kumar and Nidhi Suthar

Artificial intelligence (AI) has sparked interest in various areas, including marketing. However, this enthusiasm is being tempered by growing concerns about the moral and legal…

Abstract

Purpose

Artificial intelligence (AI) has sparked interest in various areas, including marketing. However, this enthusiasm is being tempered by growing concerns about the moral and legal implications of using AI in marketing. Although previous research has revealed various ethical and legal issues, such as algorithmic discrimination and data privacy, there are no definitive answers. This paper aims to fill this gap by investigating AI’s ethical and legal concerns in marketing and suggesting feasible solutions.

Design/methodology/approach

The paper synthesises information from academic articles, industry reports, case studies and legal documents through a thematic literature review. A qualitative analysis approach categorises and interprets ethical and legal challenges and proposes potential solutions.

Findings

The findings of this paper raise concerns about ethical and legal challenges related to AI in the marketing area. Ethical concerns relating to discrimination, bias, manipulation, job displacement, absence of social interaction, cybersecurity, unintended consequences, environmental impact and privacy, and legal issues such as consumer security, responsibility, liability, brand protection, competition law, agreements, data protection, consumer protection and intellectual property rights are discussed in the paper, along with their potential solutions.

Research limitations/implications

Notwithstanding the insights gathered from this investigation of the ethical and legal consequences of AI in marketing, it is important to recognise the limits of this research. First, the study is confined to a review of the most important ethical and legal issues pertaining to AI in marketing; additional repercussions, such as those associated with intellectual property, contracts and licensing, should be investigated more deeply in future studies. Although this study offers various solutions and best practices for tackling the stated ethical and legal concerns, their viability and efficacy may differ depending on the context and industry, so further research and case studies are required to evaluate their applicability in other circumstances. The research is based largely on a literature review and may not represent the experiences or opinions of all stakeholders engaged in AI-powered marketing; future work could involve interviews or surveys with marketing professionals, customers and other key stakeholders to provide a fuller understanding of the practical difficulties and solutions. Finally, because of the rapid pace of technical progress, AI’s ethical and regulatory ramifications in marketing are continually evolving, so this work should serve as a springboard for further research and continuing conversation on the subject.

Practical implications

This study’s findings have several practical implications for marketing professionals.

Emphasising openness and explainability: marketing professionals should prioritise transparency in their use of AI, ensuring that customers are fully informed about data collection and utilisation for targeted advertising. By promoting openness and explainability, marketers can foster customer trust and avoid the negative consequences of a lack of transparency.

Establishing ethical guidelines: marketing professionals need to develop ethical rules for the creation and implementation of AI-powered marketing strategies. Adhering to ethical principles ensures compliance with legal norms and aligns with the organisation’s values and ideals.

Investing in bias detection tools and privacy-enhancing technology: to mitigate risks associated with AI in marketing, marketers should allocate resources to develop and implement bias detection tools and privacy-enhancing technology. These tools can identify and address biases in AI algorithms, safeguard consumer privacy and extract valuable insights from consumer data.
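
As a concrete illustration of the bias detection tooling recommended above, the sketch below computes a simple demographic parity gap for an AI-driven targeting model. It is a minimal sketch under assumed inputs: the column names, the toy data and the pandas helper are illustrative placeholders and do not come from the paper.

```python
# Minimal bias check for an AI-driven targeting model: compares selection
# rates across groups defined by a protected attribute. All names and data
# here are illustrative placeholders, not the paper's.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Largest difference in selection rates between any two groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Toy example: did the model target the customer with a premium offer?
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "targeted": [1,   1,   0,   1,   0,   0,   0],
})

print(f"Selection-rate gap: {demographic_parity_gap(df, 'group', 'targeted'):.2f}")  # 0.42
```

A gap near zero suggests comparable targeting rates across groups; larger gaps flag the algorithm for closer review before deployment.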

Social implications

This study’s social implications emphasise the need for a comprehensive approach to address the ethical and legal challenges of AI in marketing. This includes adopting a responsible innovation framework, promoting ethical leadership, using ethical decision-making frameworks and conducting multidisciplinary research. By incorporating these approaches, marketers can navigate the complexities of AI in marketing responsibly, foster an ethical organisational culture, make informed ethical decisions and develop effective solutions. Such practices promote public trust, ensure equitable distribution of benefits and risk, and mitigate potential negative social consequences associated with AI in marketing.

Originality/value

To the best of the authors’ knowledge, this paper is among the first to explore potential solutions comprehensively. This paper provides a nuanced understanding of the challenges by using a multidisciplinary framework and synthesising various sources. It contributes valuable insights for academia and industry.

Details

Journal of Information, Communication and Ethics in Society, vol. 22 no. 1
Type: Research Article
ISSN: 1477-996X

Article
Publication date: 15 September 2023

Curtis C. Cain, Carlos D. Buskey and Gloria J. Washington

The purpose of this paper is to demonstrate the advancements in artificial intelligence (AI) and conversational agents, emphasizing their potential benefits while also…

Abstract

Purpose

The purpose of this paper is to demonstrate the advancements in artificial intelligence (AI) and conversational agents, emphasizing their potential benefits while also highlighting the need for vigilant monitoring to prevent unethical applications.

Design/methodology/approach

As AI becomes more prevalent in academia and research, it is crucial to explore ways to ensure ethical usage of the technology and to identify potentially unethical usage. This manuscript uses a popular AI chatbot to write the introduction and parts of the body of a manuscript discussing conversational agents, the ethical usage of chatbots and ethical concerns for academic researchers.
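
The manuscript does not name the chatbot or interface it used, so the following is only an illustrative sketch of how AI-generated manuscript text is commonly produced programmatically, here via the OpenAI Python client; the model name and prompts are assumptions, not details from the paper.

```python
# Illustrative sketch only: the paper does not disclose which chatbot or API
# was used. The OpenAI Python client is shown purely as one common example.
from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        {"role": "system", "content": "You are an academic writing assistant."},
        {
            "role": "user",
            "content": "Draft a short introduction on conversational agents "
                       "and the ethical use of chatbots in academic research.",
        },
    ],
)

draft = response.choices[0].message.content
print(draft)  # any AI-generated text used in a manuscript should be disclosed
```

The point of the sketch is that producing such text is trivial, which is precisely why the disclosure and monitoring the authors call for matter.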

Findings

The authors reveal which sections were written entirely by the AI using a conversational agent. This serves as a cautionary tale highlighting the importance of ethical considerations for researchers and students when using AI, and shows how educators must be prepared for the increasing prevalence of AI in the academy and in industry. Measures to mitigate potential unethical use of this evolving technology are also discussed in the manuscript.

Originality/value

As conversational agents and chatbots become more prevalent in society, it is crucial to understand how they will impact the community and how we can live with the technology rather than fight against it.

Details

Journal of Information, Communication and Ethics in Society, vol. 21 no. 4
Type: Research Article
ISSN: 1477-996X

Article
Publication date: 19 February 2024

Steven Alter

The lack of conceptual approaches for organizing and expressing capabilities, usage and impact of intelligent machines (IMs) in work settings is an obstacle to moving beyond…

Abstract

Purpose

The lack of conceptual approaches for organizing and expressing capabilities, usage and impact of intelligent machines (IMs) in work settings is an obstacle to moving beyond isolated case examples, domain-specific studies, 2 × 2 frameworks and expert opinion in discussions of IMs and work. This paper's purpose is to illuminate many issues that often are not addressed directly in research, practice or punditry related to IMs. It pursues that purpose by presenting an integrated approach for identifying and organizing important aspects of analysis and evaluation related to IMs in work settings. 

Design/methodology/approach

This paper integrates previously published ideas related to work systems (WSs), smart devices and systems, facets of work, roles and responsibilities of information systems, interactions between people and machines and a range of criteria for evaluating system performance.

Findings

Eight principles, grounded in the ideas integrated above, outline a straightforward and flexible approach for analyzing and evaluating IMs and the WSs that use them.

Originality/value

This paper provides a novel approach for identifying design choices for situated use of IMs. The breadth, depth and integration of this approach address a gap in existing literature, which rarely aspires to this paper’s thoroughness in combining ideas that support the description, analysis, design and evaluation of situated uses of IMs.

Details

Information Technology & People, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0959-3845

Article
Publication date: 15 April 2024

Gianluca Piero Maria Virgilio, Fausto Saavedra Hoyos and Carol Beatriz Bao Ratzemberg

The aim of this paper is to summarise the state-of-the-art debate on the impact of artificial intelligence on unemployment and to report up-to-date academic findings.

Abstract

Purpose

The aim of this paper is to summarise the state-of-the-art debate on the impact of artificial intelligence on unemployment and to report up-to-date academic findings.

Design/methodology/approach

The paper is designed as a review of the labour-versus-capital conundrum, the differences between industrial automation and artificial intelligence, the threat to employment, the difficulty of substitution, the role of soft skills and whether technology leads to the deskilling of human workers or favours increasing human capabilities.

Findings

Some authors praise the bright future developments of artificial intelligence, while others warn about mass unemployment. It is therefore paramount to present an up-to-date overview of the problem, to compare and contrast its features with what happened in past innovation waves and to contribute to the academic discussion about the pros and cons of current trends.

Originality/value

The main value of this paper is that it presents a balanced view of 100+ studies, the vast majority from the last five years. Reading this paper will allow readers to quickly grasp the main issues surrounding the thorny topic of artificial intelligence and unemployment.

Peer review

The peer review history for this article is available at: https://publons.com/publon/10.1108/IJSE-05-2023-0338

Details

International Journal of Social Economics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0306-8293

Article
Publication date: 29 November 2023

Cristian Morosan and Aslihan Dursun-Cengizci

Given the rapid development in artificial intelligence (AI), the hotel industry is deploying AI-based systems. In line with this important development, this study aims to examine…

Abstract

Purpose

Given the rapid development in artificial intelligence (AI), the hotel industry is deploying AI-based systems. In line with this important development, this study aims to examine the impact of trust in the hotel and AI-related performance ambiguity on consumers’ engagement with AI-based systems. This study ultimately examined the impact of engagement on consumers’ intentions to stay in hotels offering such systems, and intentions to tip.

Design/methodology/approach

This study developed a conceptual model based on the social cognition theory. The study used an online survey methodology and collected data from a nationwide sample of 400 hotel consumers from the USA. The data analysis was conducted with structural equation modeling.
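
For readers unfamiliar with the analysis, the sketch below shows how a structural equation model of this general shape (trust and performance ambiguity predicting engagement, which in turn predicts intentions) can be estimated with the open-source semopy package. It is a minimal sketch under assumed names: the latent constructs, indicator items and the hotel_survey.csv file are hypothetical placeholders, not the authors' instrument or data.

```python
# Hypothetical sketch of a structural equation model in the spirit of the study.
# All construct and item names are placeholders, not the authors' survey items.
import pandas as pd
import semopy

model_desc = """
# measurement model
Trust      =~ trust1 + trust2 + trust3
Ambiguity  =~ ambig1 + ambig2 + ambig3
Engagement =~ eng1 + eng2 + eng3
Intention  =~ intent1 + intent2

# structural model
Engagement ~ Trust + Ambiguity
Intention  ~ Engagement
"""

df = pd.read_csv("hotel_survey.csv")  # hypothetical file of survey responses

model = semopy.Model(model_desc)
model.fit(df)
print(model.inspect())  # path estimates, standard errors and p-values
```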

Findings

Consumers’ engagement is strongly influenced by their trust in the hotel but not by performance ambiguity associated with AI. In turn, engagement strongly influenced consumers’ intentions to stay in hotels that have such systems and their intentions to tip.

Originality/value

As AI systems capable of making decisions for consumers are becoming increasingly present in hotels, little is known about the way consumers engage with such systems and whether their engagement leads to economic impact. This is the first study that validated a model that explains intentions to stay and tip for services facilitated by autonomous AI-based systems that can make decisions for consumers.

Details

Journal of Hospitality and Tourism Technology, vol. 15 no. 1
Type: Research Article
ISSN: 1757-9880

Open Access
Article
Publication date: 4 April 2024

Bassem T. ElHassan and Alya A. Arabi

The purpose of this paper is to illuminate the ethical concerns associated with the use of artificial intelligence (AI) in the medical sector and to provide solutions that allow…

Abstract

Purpose

The purpose of this paper is to illuminate the ethical concerns associated with the use of artificial intelligence (AI) in the medical sector and to provide solutions that allow deriving maximum benefits from this technology without compromising ethical principles.

Design/methodology/approach

This paper provides a comprehensive overview of AI in medicine, exploring its technical capabilities, practical applications, and ethical implications. Based on our expertise, we offer insights from both technical and practical perspectives.

Findings

The study identifies several advantages of AI in medicine, including its ability to improve diagnostic accuracy, enhance surgical outcomes and optimize healthcare delivery. However, unresolved ethical issues remain, such as algorithmic bias, lack of transparency, data privacy concerns and the potential for AI to deskill healthcare professionals and erode humanistic values in patient care. It is therefore important to address these issues promptly to ensure that AI’s implementation delivers its benefits without serious drawbacks.

Originality/value

This paper gains its value from the combined practical experience of Professor Elhassan gained through his practice at top hospitals worldwide, and the theoretical expertise of Dr. Arabi acquired from international institutes. The shared experiences of the authors provide valuable insights that are beneficial for raising awareness and guiding action in addressing the ethical concerns associated with the integration of artificial intelligence in medicine.

Details

International Journal of Ethics and Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9369

Article
Publication date: 25 April 2024

Mojtaba Rezaei, Marco Pironti and Roberto Quaglia

This study aims to identify and assess the key ethical challenges associated with integrating artificial intelligence (AI) in knowledge-sharing (KS) practices and their…

Abstract

Purpose

This study aims to identify and assess the key ethical challenges associated with integrating artificial intelligence (AI) in knowledge-sharing (KS) practices and their implications for decision-making (DM) processes within organisations.

Design/methodology/approach

The study employs a mixed-methods approach, beginning with a comprehensive literature review to extract background information on AI and KS and to identify potential ethical challenges. Subsequently, a confirmatory factor analysis (CFA) is conducted using data collected from individuals employed in business settings to validate the challenges identified in the literature and assess their impact on DM processes.
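
The confirmatory factor analysis step could, for instance, be run with the same open-source semopy package shown earlier; the sketch below is only a rough illustration under assumed names (the constructs, item labels and ks_ai_survey.csv are hypothetical, not the authors' measures).

```python
# Hypothetical CFA sketch: each ethical-challenge construct is measured by
# three invented indicator items; the data file is a placeholder.
import pandas as pd
import semopy

cfa_desc = """
Privacy        =~ priv1 + priv2 + priv3
Bias           =~ bias1 + bias2 + bias3
Transparency   =~ transp1 + transp2 + transp3
Accountability =~ acct1 + acct2 + acct3
"""

df = pd.read_csv("ks_ai_survey.csv")  # hypothetical survey data

cfa = semopy.Model(cfa_desc)
cfa.fit(df)

print(cfa.inspect())           # factor loadings and their significance
print(semopy.calc_stats(cfa))  # fit indices such as CFI, TLI and RMSEA
```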

Findings

The findings reveal that challenges related to privacy and data protection, bias and fairness and transparency and explainability are particularly significant in DM. Moreover, challenges related to accountability and responsibility and the impact of AI on employment also show relatively high coefficients, highlighting their importance in the DM process. In contrast, challenges such as intellectual property and ownership, algorithmic manipulation and global governance and regulation are found to be less central to the DM process.

Originality/value

This research contributes to the ongoing discourse on the ethical challenges of AI in knowledge management (KM) and DM within organisations. By providing insights and recommendations for researchers, managers and policymakers, the study emphasises the need for a holistic and collaborative approach to harness the benefits of AI technologies whilst mitigating their associated risks.

Details

Management Decision, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0025-1747

Article
Publication date: 29 February 2024

Donghee Shin, Kulsawasd Jitkajornwanich, Joon Soo Lim and Anastasia Spyridou

This study examined how people assess health information from AI and improve their diagnostic ability to identify health misinformation. The proposed model was designed to test a…

Abstract

Purpose

This study examined how people assess health information from AI and improve their diagnostic ability to identify health misinformation. The proposed model was designed to test a cognitive heuristic theory in misinformation discernment.

Design/methodology/approach

We proposed the heuristic-systematic model to assess health misinformation processing in the algorithmic context. Using the Analysis of Moment Structures (AMOS) 26 software, we tested fairness/transparency/accountability (FAccT) as constructs that influence the heuristic evaluation and systematic discernment of misinformation by users. To test moderating and mediating effects, PROCESS Macro Model 4 was used.
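
PROCESS Model 4 estimates a simple mediation (X → M → Y). The study itself used AMOS and the SPSS PROCESS macro, but the same logic can be illustrated in a few lines of Python; the sketch below is a hypothetical stand-in in which the FAccT constructs are collapsed into a single predictor score and all file and column names are invented.

```python
# Hypothetical illustration of the simple-mediation logic behind PROCESS
# Model 4 (X -> M -> Y): here X = a combined FAccT score, M = heuristic
# processing, Y = perceived diagnosticity. All names are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("misinfo_survey.csv")  # hypothetical survey data

def indirect_effect(data: pd.DataFrame) -> float:
    a = smf.ols("heuristic ~ facct", data=data).fit().params["facct"]
    b = smf.ols("diagnosticity ~ heuristic + facct", data=data).fit().params["heuristic"]
    return a * b  # indirect effect = a * b

# Percentile bootstrap confidence interval for the indirect effect.
rng = np.random.default_rng(42)
boot = []
for _ in range(2000):
    resample = df.iloc[rng.integers(0, len(df), size=len(df))]
    boot.append(indirect_effect(resample))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(df):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```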

Findings

The effect of AI-generated misinformation on people’s perceptions of the veracity of health information may differ according to whether they process misinformation heuristically or systematically. Heuristic processing is significantly associated with the diagnosticity of misinformation. There is a greater chance that misinformation will be correctly diagnosed and checked if it aligns with users’ heuristics or is validated by the diagnosticity they perceive.

Research limitations/implications

When exposed to misinformation through algorithmic recommendations, users’ perceived diagnosticity of misinformation can be predicted accurately from their understanding of normative values. This perceived diagnosticity would then positively influence the accuracy and credibility of the misinformation.

Practical implications

Perceived diagnosticity plays a key role in fostering misinformation literacy, implying that improving people’s perceptions of misinformation and AI features is an efficient way to change their misinformation behavior.

Social implications

Although there is broad agreement on the need to control and combat health misinformation, the magnitude of this problem remains unknown. It is essential to understand both users’ cognitive processes when it comes to identifying health misinformation and the diffusion mechanism from which such misinformation is framed and subsequently spread.

Originality/value

The mechanisms through which users process and spread misinformation have remained open-ended questions. This study provides theoretical insights and relevant recommendations that can make users and firms/institutions alike more resilient in protecting themselves from the detrimental impact of misinformation.

Peer review

The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-04-2023-0167

Details

Online Information Review, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1468-4527

Details

The Impact of ChatGPT on Higher Education
Type: Book
ISBN: 978-1-83797-648-5

Article
Publication date: 9 February 2023

Alberto Lopez and Ricardo Garza

Will consumers accept artificial intelligence (AI) products that evaluate them? New consumer products offer AI evaluations. However, previous research has never investigated how…

Abstract

Purpose

Will consumers accept artificial intelligence (AI) products that evaluate them? New consumer products offer AI evaluations. However, previous research has never investigated how consumers feel about being evaluated by AI instead of by a human. Furthermore, why do consumers experience being evaluated by an AI algorithm or by a human differently? This research aims to offer answers to these questions.

Design/methodology/approach

Three laboratory experiments were conducted. Experiments 1 and 2 test the main effects of evaluator (AI vs human) and evaluation received (positive, neutral or negative) on fairness perception of the evaluation. Experiment 3 replicates the previous findings and tests the mediation effect.
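
As a concrete, entirely hypothetical illustration of how a 2 (evaluator: AI vs human) × 3 (evaluation: positive, neutral, negative) between-subjects design like Experiments 1 and 2 is typically analysed, the sketch below fits a two-way ANOVA on perceived fairness with statsmodels; the file and column names are invented, not the authors' data.

```python
# Hypothetical sketch of a 2 x 3 between-subjects ANOVA on fairness perception.
# 'experiment1.csv' and its columns (evaluator, valence, fairness) are invented.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("experiment1.csv")

model = smf.ols("fairness ~ C(evaluator) * C(valence)", data=df).fit()
print(anova_lm(model, typ=2))  # main effects plus the evaluator x valence interaction
```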

Findings

Building on previous research on consumer biases and lack-of-transparency anxiety, the authors present converging evidence that consumers who received positive evaluations reported no significant difference in perceived fairness of the evaluation regardless of the evaluator (human or AI). In contrast, consumers who received negative evaluations perceived the evaluation as less fair when it was given by AI. A further moderated mediation analysis showed that consumers who receive a negative evaluation from AI experience higher levels of lack-of-transparency anxiety, which is the underlying mechanism driving this effect.

Originality/value

To the best of the authors' knowledge, no previous research has investigated how consumers feel about being evaluated by AI instead of by a human. This consumer bias against AI evaluations is a phenomenon previously overlooked in the marketing literature, with many implications for the development and adoption of new AI products, as well as theoretical contributions to the nascent literature on consumer experience and AI.

Details

Journal of Research in Interactive Marketing, vol. 17 no. 6
Type: Research Article
ISSN: 2040-7122
