Search results

1 – 10 of 408
Article
Publication date: 25 April 2024

Mojtaba Rezaei, Marco Pironti and Roberto Quaglia

Abstract

Purpose

This study aims to identify and assess the key ethical challenges associated with integrating artificial intelligence (AI) in knowledge-sharing (KS) practices and their implications for decision-making (DM) processes within organisations.

Design/methodology/approach

The study employs a mixed-methods approach, beginning with a comprehensive literature review to extract background information on AI and KS and to identify potential ethical challenges. Subsequently, a confirmatory factor analysis (CFA) is conducted using data collected from individuals employed in business settings to validate the challenges identified in the literature and assess their impact on DM processes.

Findings

The findings reveal that challenges related to privacy and data protection, bias and fairness and transparency and explainability are particularly significant in DM. Moreover, challenges related to accountability and responsibility and the impact of AI on employment also show relatively high coefficients, highlighting their importance in the DM process. In contrast, challenges such as intellectual property and ownership, algorithmic manipulation and global governance and regulation are found to be less central to the DM process.

Originality/value

This research contributes to the ongoing discourse on the ethical challenges of AI in knowledge management (KM) and DM within organisations. By providing insights and recommendations for researchers, managers and policymakers, the study emphasises the need for a holistic and collaborative approach to harness the benefits of AI technologies whilst mitigating their associated risks.

Details

Management Decision, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0025-1747

Article
Publication date: 30 April 2024

Md. Rifat Mahmud

Abstract

Purpose

This paper aims to explore the opportunities and challenges associated with adopting artificial intelligence (AI) in libraries in Bangladesh and provide recommendations to guide the responsible integration of AI to enhance library services and accessibility.

Design/methodology/approach

The paper reviews relevant literature on the applications of AI in libraries, the current state of technology adoption in Bangladeshi libraries and the ethical considerations surrounding AI implementation. It analyzes the potential benefits of AI tools such as chatbots, intelligent search engines, text-to-speech and language translation for improving user services and inclusion. The challenges of infrastructure constraints, lack of resources and skills, data privacy issues and bias are also examined through the lens of the Bangladeshi context.

Findings

AI offers transformative opportunities to automate operations, strengthen user services through 24/7 virtual assistants and personalized recommendations and promote accessibility for diverse users in Bangladeshi libraries. However, significant challenges such as inadequate technology infrastructure, funding limitations, shortage of AI-skilled staff, data privacy risks and potential biases must be addressed. Strategically planning for sustainable implementation, building AI capacity, prioritizing ethical AI development and fostering collaborations are critical factors for successful AI adoption.

Originality/value

This paper provides an in-depth analysis of the prospects and obstacles in leveraging AI specifically for libraries in Bangladesh. It offers original insights and context-specific recommendations tailored to the needs and constraints of a developing nation working to harness AI’s potential to create dynamic, inclusive knowledge centers serving all communities.

Details

Library Hi Tech News, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0741-9058

Open Access
Article
Publication date: 13 March 2024

Abdolrasoul Habibipour

Abstract

Purpose

This study aims to investigate how living lab (LL) activities align with responsible research and innovation (RRI) principles, particularly in artificial intelligence (AI)-driven digital transformation (DT) processes. The study seeks to define a framework termed “responsible living lab” (RLL), emphasizing transparency, stakeholder engagement, ethics and sustainability. This emerging issue paper also proposes several directions for future researchers in the field.

Design/methodology/approach

The research methodology involved a literature review complemented by insights from a workshop on defining RLLs. The literature review followed a concept-centric approach, searching key journals and conferences, yielding 32 relevant articles. Backward and forward citation analysis added 19 more articles. The workshop, conducted in the context of UrbanTestbeds.JR and SynAir-G projects, used a reverse brainstorming approach to explore potential ethical and responsible issues in LL activities. In total, 13 experts engaged in collaborative discussions, highlighting insights into AI’s role in promoting RRI within LL activities. The workshop facilitated knowledge sharing and a deeper understanding of RLL, particularly in the context of DT and AI.

Findings

This emerging issue paper highlights ethical considerations in LL activities, emphasizing user voluntariness, user interests and unintended participation. AI in DT introduces challenges like bias, transparency and digital divide, necessitating responsible practices. Workshop insights underscore challenges: AI bias, data privacy and transparency; opportunities: inclusive decision-making and efficient innovation. The synthesis defines RLLs as frameworks ensuring transparency, stakeholder engagement, ethical considerations and sustainability in AI-driven DT within LLs. RLLs aim to align DT with ethical values, fostering inclusivity, responsible resource use and human rights protection.

Originality/value

The proposed definition of RLL introduces a framework prioritizing transparency, stakeholder engagement, ethics and sustainability in LL activities, particularly those involving AI for DT. This definition aligns LL practices with RRI, addressing ethical implications of AI. The value of RLL lies in promoting inclusive and sustainable innovation, prioritizing stakeholder needs, fostering collaboration and ensuring environmental and social responsibility throughout LL activities. This concept serves as a foundational step toward a more responsible and sustainable LL approach in the era of AI-driven technologies.

Details

Journal of Information, Communication and Ethics in Society, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1477-996X

Article
Publication date: 23 April 2024

Natalie Bidnick Andreas

Abstract

Purpose

The integration of artificial intelligence (AI) technologies like conversational AI and HR chatbots in international human resource development (HRD) presents both productivity benefits and ethical challenges. This study aims to examine the ethical dimensions of AI-driven HR chatbots, emphasizing the need for fairness, autonomy and nondiscrimination. It discusses inherent biases in AI systems and addresses linguistic, cultural and accessibility issues.

Design/methodology/approach

Systematic literature review.

Findings

The paper advocates for a comprehensive risk assessment approach to guide ethical integration, proposing a “risk management by design” framework.

Practical implications

By embracing ethical principles and robust risk management strategies, organizations can navigate AI-driven HR technologies while upholding fairness and equity in global workforce management.

Originality/value

This study explores the intricate ethical landscape surrounding AI-driven HR chatbots, spotlighting the imperatives of fairness, autonomy, and nondiscrimination. Uncovering biases inherent in AI systems, it addresses linguistic, cultural, and accessibility concerns. Proposing a pioneering “risk management by design” framework, the study advocates for a holistic approach to ethical integration, ensuring organizations navigate the complexities of AI-driven HR technologies while prioritizing fairness and equity in global workforce management.

Details

Strategic HR Review, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1475-4398

Article
Publication date: 7 May 2024

Julia Stefanie Roppelt, Nina Sophie Greimel, Dominik K. Kanbach, Stephan Stubner and Thomas K. Maran

Abstract

Purpose

The aim of this paper is to explore how multi-national corporations (MNCs) can effectively adopt artificial intelligence (AI) into their talent acquisition (TA) practices. While the potential of AI to address emerging challenges, such as talent shortages and applicant surges in specific regions, has been anecdotally highlighted, there is limited empirical evidence regarding its effective deployment and adoption in TA. As a result, this paper endeavors to develop a theoretical model that delineates the motives, barriers, procedural steps and critical factors that can aid in the effective adoption of AI in TA within MNCs.

Design/methodology/approach

Given the scant empirical literature on our research objective, we utilized a qualitative methodology, encompassing a multiple-case study (consisting of 19 cases across seven industries) and a grounded theory approach.

Findings

Our proposed framework, termed the Framework on Effective Adoption of AI in TA, contextualizes the motives, barriers, procedural steps and critical success factors essential for the effective adoption of AI in TA.

Research limitations/implications

This paper contributes to literature on effective adoption of AI in TA and adoption theory.

Practical implications

Additionally, it provides guidance to TA managers seeking effective AI implementation and adoption strategies, especially in the face of emerging challenges.

Originality/value

To the best of the authors' knowledge, this study is unparalleled, being both grounded in theory and based on an expansive dataset that spans firms from various regions and industries. The research delves deeply into corporations' underlying motives and processes concerning the effective adoption of AI in TA.

Details

Management Decision, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0025-1747

Article
Publication date: 29 February 2024

Donghee Shin, Kulsawasd Jitkajornwanich, Joon Soo Lim and Anastasia Spyridou

Abstract

Purpose

This study examined how people assess health information from AI and improve their diagnostic ability to identify health misinformation. The proposed model was designed to test a cognitive heuristic theory in misinformation discernment.

Design/methodology/approach

We proposed the heuristic-systematic model to assess health misinformation processing in the algorithmic context. Using the Analysis of Moment Structures (AMOS) 26 software, we tested fairness/transparency/accountability (FAccT) as constructs that influence the heuristic evaluation and systematic discernment of misinformation by users. To test moderating and mediating effects, PROCESS Macro Model 4 was used.

Findings

The effect of AI-generated misinformation on people’s perceptions of the veracity of health information may differ according to whether they process misinformation heuristically or systematically. Heuristic processing is significantly associated with the diagnosticity of misinformation. There is a greater chance that misinformation will be correctly diagnosed and checked, if misinformation aligns with users’ heuristics or is validated by the diagnosticity they perceive.

Research limitations/implications

When exposed to misinformation through algorithmic recommendations, users’ perceived diagnosticity of misinformation can be predicted accurately from their understanding of normative values. This perceived diagnosticity would then positively influence the accuracy and credibility of the misinformation.

Practical implications

Perceived diagnosticity exerts a key role in fostering misinformation literacy, implying that improving people’s perceptions of misinformation and AI features is an efficient way to change their misinformation behavior.

Social implications

Although there is broad agreement on the need to control and combat health misinformation, the magnitude of this problem remains unknown. It is essential to understand both users’ cognitive processes when it comes to identifying health misinformation and the diffusion mechanism from which such misinformation is framed and subsequently spread.

Originality/value

The mechanisms through which users process and spread misinformation have remained open-ended questions. This study provides theoretical insights and relevant recommendations that can make users and firms/institutions alike more resilient in protecting themselves from the detrimental impact of misinformation.

Peer review

The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-04-2023-0167

Details

Online Information Review, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 8 March 2024

Agostino Marengo, Alessandro Pagano, Jenny Pange and Kamal Ahmed Soomro

Abstract

Purpose

This paper aims to consolidate empirical studies between 2013 and 2022 to investigate the impact of artificial intelligence (AI) in higher education. It aims to examine published research characteristics and provide insights into the promises and challenges of AI integration in academia.

Design/methodology/approach

A systematic literature review was conducted, encompassing 44 empirical studies published as peer-reviewed journal papers. The review focused on identifying trends, categorizing research types and analysing the evidence-based applications of AI in higher education.

Findings

The review indicates a recent surge in publications concerning AI in higher education. However, a significant proportion of these publications primarily propose theoretical and conceptual AI interventions. Areas with empirical evidence supporting AI applications in academia are delineated.

Research limitations/implications

The prevalence of theoretical proposals may limit generalizability. Further research is encouraged to validate and expand upon the identified empirical applications of AI in higher education.

Practical implications

This review outlines imperative implications for future research and the implementation of evidence-based AI interventions in higher education, facilitating informed decision-making for academia and stakeholders.

Originality/value

This paper contributes a comprehensive synthesis of empirical studies, highlighting the evolving landscape of AI integration in higher education and emphasizing the need for evidence-based approaches.

Details

Interactive Technology and Smart Education, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1741-5659

Article
Publication date: 21 May 2024

Martin Sposato

Abstract

Purpose

The purpose of this study is to examine the multifaceted implications of AI on leadership dynamics and organizational practices. By synthesizing insights from behavioral theory, AI analytics, and ethical considerations, the study aims to equip leaders with the requisite knowledge, skills, and mindset to foster adaptive leadership, anticipate change, and cultivate innovation amidst AI-driven disruptions.

Design/methodology/approach

This article employs a qualitative research approach, integrating literature review and conceptual analysis to explore the intersection of leadership development and Artificial Intelligence (AI). Drawing insights from scholarly articles, theoretical frameworks and practice, the study elucidates the evolving landscape of leadership in the context of AI adoption. Practical action points are derived to guide organizational leaders and educators in navigating AI-induced transformations effectively.

Findings

The integration of AI into leadership dynamics necessitates a paradigm shift, emphasizing the fusion of technical proficiency with emotional intelligence. Behavioral theory coupled with AI analytics offers valuable insights into effective leadership behaviors, facilitating the design of tailored leadership development programs. Proactive leadership strategies, ethical considerations and talent management emerge as pivotal factors in navigating AI-induced transformations and fostering organizational resilience.

Originality/value

This article contributes to the literature by synthesizing diverse perspectives on AI leadership and offering practical action points for organizational leaders and educators. By highlighting the integration of behavioral theory, AI analytics, and ethical considerations, the study underscores the importance of interdisciplinary approaches in leadership research and education. The insights derived from this study inform organizational practices, curriculum development in higher education, and future research agendas, fostering ethical AI adoption and cultivating adaptive leadership cultures in the age of Artificial Intelligence.

Details

Development and Learning in Organizations: An International Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1477-7282

Article
Publication date: 16 May 2024

Tsung-Sheng Chang and Wei-Hung Hsiao

Abstract

Purpose

The rise of artificial intelligence (AI) applications has driven enterprises to provide many intelligent services to consumers. For instance, customers can use chatbots to make relevant inquiries and seek solutions to their problems. Despite the development of customer service chatbots years ago, they require significant improvements for market recognition. Many customers have reported negative experiences with customer service chatbots, contributing to resistance toward their use. Therefore, this study adopts the innovation resistance theory (IRT) perspective to understand customers’ resistance to using chatbots. It aims to integrate customers’ negative emotions into a predictive behavior model and examine users’ functional and psychological barriers.

Design/methodology/approach

In this study, we collected data from 419 valid individuals and used structural equation modeling to analyze the relationships between resistance factors and negative emotions.

Findings

The results confirmed that barrier factors affect negative emotions and amplify resistance to chatbots. We discovered that value and risk barriers directly influence consumer use. Moreover, both functional and psychological barriers positively impact negative emotions.

Originality/value

This study adopts the innovation resistance theory perspective to understand customer resistance to using chatbots, integrates customer negative emotions to construct a predictive behavior model and explores users’ functional and psychological barriers. It can help in developing online customer service chatbots for e-commerce.

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 26 April 2024

Moyosore Adegboye

Abstract

Purpose

This paper aims to explore the intricate relationship between artificial intelligence (AI) and health information literacy (HIL), examining the rise of AI in health care, the intersection of AI and HIL and the imperative for promoting AI literacy and integrating it with HIL. By fostering collaboration, education and innovation, stakeholders can navigate the evolving health-care ecosystem with confidence and agency, ultimately improving health-care delivery and outcomes for all.

Design/methodology/approach

This paper adopts a conceptual approach to explore the intricate relationship between AI and HIL, aiming to provide guidance for health-care professionals navigating the evolving landscape of AI-driven health-care delivery. The methodology used in this paper involves a synthesis of existing literature, theoretical analysis and conceptual modeling to develop insights and recommendations regarding the integration of AI literacy with HIL.

Findings

Impact of AI on health-care delivery: The integration of AI technologies in health care is reshaping the industry, offering unparalleled opportunities for improving patient care, optimizing clinical workflows and advancing medical research.

Significance of HIL: HIL, encompassing the ability to access, understand and critically evaluate health information, is crucial in the context of AI-driven health-care delivery. It empowers health-care professionals, patients and the broader community to make informed decisions about their health and well-being.

Intersection of AI and HIL: The convergence of AI and HIL represents a critical juncture, where technological innovation intersects with human cognition. AI technologies have the potential to revolutionize how health information is generated, disseminated and interpreted, necessitating a deeper understanding of their implications for HIL.

Challenges and opportunities: While AI holds tremendous promise for enhancing health-care outcomes, it also introduces new challenges and complexities for individuals navigating the vast landscape of health information. Issues such as algorithmic bias, transparency and accountability pose ethical dilemmas that impact individuals’ ability to critically evaluate and interpret AI-generated health information.

Recommendations for health-care professionals: Health-care professionals are encouraged to adopt strategies such as staying informed about developments in AI, continuous education and training in AI literacy, fostering interdisciplinary collaboration and advocating for policies that promote ethical AI practices.

Practical implications

To enhance AI literacy and integrate it with HIL, health-care professionals are encouraged to adopt several key strategies. First, staying abreast of developments in AI technologies and their applications in health care is essential. This entails actively engaging with conferences, workshops and publications focused on AI in health care and participating in professional networks dedicated to AI and health-care innovation. Second, continuous education and training are paramount for developing critical thinking skills and ethical awareness in evaluating AI-driven health information (Alowais et al., 2023). Health-care organizations should provide opportunities for ongoing professional development in AI literacy, including workshops, online courses and simulation exercises focused on AI applications in clinical practice and research.

Originality/value

The value of this paper lies in its exploration of the intersection between AI and HIL, offering insights into the evolving health-care landscape. It innovatively synthesizes existing literature, proposes strategies for integrating AI literacy with HIL and provides guidance for health-care professionals to navigate the complexities of AI-driven health-care delivery. By addressing the transformative potential of AI while emphasizing the importance of promoting critical thinking skills and ethical awareness, this paper contributes to advancing understanding in the field and promoting informed decision-making in an increasingly digital health-care environment.

Details

Library Hi Tech News, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0741-9058
