Search results
Abstract
Purpose
This study aims to investigate how living lab (LL) activities align with responsible research and innovation (RRI) principles, particularly in artificial intelligence (AI)-driven digital transformation (DT) processes. The study seeks to define a framework termed “responsible living lab” (RLL), emphasizing transparency, stakeholder engagement, ethics and sustainability. This emerging issue paper also proposes several directions for future researchers in the field.
Design/methodology/approach
The research methodology involved a literature review complemented by insights from a workshop on defining RLLs. The literature review followed a concept-centric approach, searching key journals and conferences, yielding 32 relevant articles. Backward and forward citation analysis added 19 more articles. The workshop, conducted in the context of UrbanTestbeds.JR and SynAir-G projects, used a reverse brainstorming approach to explore potential ethical and responsible issues in LL activities. In total, 13 experts engaged in collaborative discussions, highlighting insights into AI’s role in promoting RRI within LL activities. The workshop facilitated knowledge sharing and a deeper understanding of RLL, particularly in the context of DT and AI.
Findings
This emerging issue paper highlights ethical considerations in LL activities, emphasizing user voluntariness, user interests and unintended participation. AI in DT introduces challenges such as bias, transparency and the digital divide, necessitating responsible practices. Workshop insights underscore both challenges (AI bias, data privacy and transparency) and opportunities (inclusive decision-making and efficient innovation). The synthesis defines RLLs as frameworks ensuring transparency, stakeholder engagement, ethical considerations and sustainability in AI-driven DT within LLs. RLLs aim to align DT with ethical values, fostering inclusivity, responsible resource use and human rights protection.
Originality/value
The proposed definition of RLL introduces a framework prioritizing transparency, stakeholder engagement, ethics and sustainability in LL activities, particularly those involving AI for DT. This definition aligns LL practices with RRI, addressing ethical implications of AI. The value of RLL lies in promoting inclusive and sustainable innovation, prioritizing stakeholder needs, fostering collaboration and ensuring environmental and social responsibility throughout LL activities. This concept serves as a foundational step toward a more responsible and sustainable LL approach in the era of AI-driven technologies.
Werner H. Kunz and Jochen Wirtz
Abstract
Purpose
Despite all the recent achievements in the field of interactive marketing and artificial intelligence (AI), it is important to consider the ethical implications of these technologies. This paper explains the concept of corporate digital responsibility (CDR) and how it is affected by new advances in AI.
Design/methodology/approach
The authors build on the work of Wirtz et al. (2023) and derive several managerial implications for the challenges that AI poses to CDR. CDR refers to a service company's ethical and fair use of data and technology within its digital service ecosystem. It involves establishing standards, protecting customer privacy, conducting external audits and striving for an equitable power dynamic between service firms and their partners.
Findings
Despite the risks involved, many companies do not prioritize good CDR practices. Factors that can prevent them from doing so include the financial benefits of collecting and using consumer data, improved customer experience through AI-driven customization and personalization, cost reduction through service automation and trade-offs between organizational goals and CDR practices.
Originality/value
This is one of the first articles in the service domain to take the concept of CDR and apply it to recent developments in generative AI.
Research limitations/implications
The emergence of powerful AI tools presents opportunities and challenges. Research opportunities include responsible business restructuring, responsible service automation to ensure fairness and human oversight, addressing dehumanization of service delivery, responsible customer profiling to address privacy and discrimination concerns and preventing AI misuse.
Pamela Lirio and Pierrich Plusquellec
Abstract
Purpose
This paper aims to present affective computing or Emotion AI in the context of work and how organizational leaders such as managers and human resource (HR) professionals can implement this technology to foster an emotionally healthy workplace.
Design/methodology/approach
The authors provide a current overview of affective computing technology through definitions, examples and general use cases. This is in light of the current scrutiny on artificial intelligence (AI) use broadly across society. The authors address this from a research perspective and show how this advanced AI tool can be implemented in organizations for the benefit of employees.
Findings
Affective computing or Emotion AI is still relatively unknown, and yet, it is already part of our daily lives. Emotion AI platforms have the potential to be an essential part of HR tools. It is crucial, however, to use this technology in an ethical and responsible manner.
Originality/value
There is little awareness and understanding of use cases of affective computing tools in organizations, particularly for the well-being of the workforce. This paper provides HR leaders, managers and researchers with an overview of the origins of the field and major considerations for responsibly implementing Emotion AI to support employee mental health.
Abstract
Purpose
The purpose of this study is to raise awareness about the ethical implications of artificial intelligence (AI) in the library and information industry, specifically focusing on bias and discrimination. It aims to highlight the need for proactive measures to mitigate these issues and ensure that AI technology is developed and implemented in an ethical and unbiased manner.
Design/methodology/approach
This viewpoint paper presents a critical analysis of the ethical implications of bias and discrimination in the library and information industry with respect to AI. It explores current practices and challenges in AI implementation and proposes strategies to address bias and discrimination in AI systems.
Findings
The findings of this study reveal that bias and discrimination are significant concerns in AI systems used in the library and information industry. These biases can perpetuate existing inequalities, hinder access to information and reinforce discriminatory practices. This study identifies key strategies such as data collection and representation, algorithmic transparency and inclusive design to address these issues.
Originality/value
This study contributes to the existing literature by examining the specific challenges of bias and discrimination in AI implementation within the library and information industry. It provides valuable insights into the ethical implications of AI technology and offers practical recommendations for professionals to confront and mitigate bias and discrimination in AI systems, ensuring equitable access to information for all users.
Keywords
- Ethical artificial intelligence
- Bias
- Discrimination
- Library and information industry
- AI implementation
- Ethical implications
- Literature review
- Case studies
- Proactive measures
- Data collection
- Algorithmic transparency
- Inclusive design
- Equitable access
- Critical analysis
- Thought-provoking
- AI ethics
- Responsible implementation
- Policymakers
Mojtaba Rezaei, Marco Pironti and Roberto Quaglia
Abstract
Purpose
This study aims to identify and assess the key ethical challenges associated with integrating artificial intelligence (AI) in knowledge-sharing (KS) practices and their implications for decision-making (DM) processes within organisations.
Design/methodology/approach
The study employs a mixed-methods approach, beginning with a comprehensive literature review to extract background information on AI and KS and to identify potential ethical challenges. Subsequently, a confirmatory factor analysis (CFA) is conducted using data collected from individuals employed in business settings to validate the challenges identified in the literature and assess their impact on DM processes.
Findings
The findings reveal that challenges related to privacy and data protection, bias and fairness and transparency and explainability are particularly significant in DM. Moreover, challenges related to accountability and responsibility and the impact of AI on employment also show relatively high coefficients, highlighting their importance in the DM process. In contrast, challenges such as intellectual property and ownership, algorithmic manipulation and global governance and regulation are found to be less central to the DM process.
Originality/value
This research contributes to the ongoing discourse on the ethical challenges of AI in knowledge management (KM) and DM within organisations. By providing insights and recommendations for researchers, managers and policymakers, the study emphasises the need for a holistic and collaborative approach to harness the benefits of AI technologies whilst mitigating their associated risks.
Ananya Hadadi Raghavendra, Siddharth Gaurav Majhi, Arindam Mukherjee and Pradip Kumar Bala
Abstract
Purpose
This study aims to examine the current state of academic research on the role played by artificial intelligence (AI) in the achievement of a critical sustainable development goal (SDG) – poverty alleviation – and to describe the field's development by identifying themes, trends, roadblocks and promising areas for the future.
Design/methodology/approach
The authors analysed a corpus of 253 studies collected from the Scopus database to examine the current state of the academic literature using bibliometric methods.
Findings
This paper identifies and analyses key trends in the evolution of this domain. Further, the paper distils the extant literature to unpack the intermediary mechanisms through which AI and related technologies help tackle the critical global issue of poverty.
Research limitations/implications
The corpus of literature used for the analysis is limited to English language studies from the Scopus database. The paper contributes to the extant research on AI for social good, and more broadly to the research on the value of emerging technologies such as AI.
Practical implications
Policymakers and government agencies will get an understanding of how technological interventions such as AI can help achieve critical SDGs such as poverty alleviation (SDG-1).
Social implications
The primary focus of this paper is on the role of AI-related technological interventions to achieve a significant social objective – poverty alleviation.
Originality/value
To the best of the authors’ knowledge, this is the first study to conduct a comprehensive bibliometric analysis of a critical research domain such as AI and poverty alleviation.
Mahantesh Halagatti, Soumya Gadag, Shashidhar Mahantshetti, Chetan V. Hiremath, Dhanashree Tharkude and Vinayak Banakar
Abstract
Introduction: Numerous decision-making situations are faced in education where Artificial Intelligence may be prevalent as a decision-making support tool to capture streams of learners’ behaviours.
Purpose: The purpose of the present study is to understand the role of AI in student performance assessment and explore the future role of AI in educational performance assessment.
Scope: The study examines the adaptability of AI in the education sector for supporting educators in automating assessment, allowing them to concentrate on core teaching-learning activities.
Objectives: To understand AI adoption for educational assessment, the positives and negatives of confidential data collection and the challenges of implementation from the view of various stakeholders.
Methodology: The study is conceptual, and information has been collected from sources comprising expert interactions, research publications, surveys and industry reports.
Findings: The use of AI in student performance assessment has helped in making early predictions about the activities to be adopted by educators. The results of AI evaluations yield data that may be combined and interpreted to create visualisations.
Research Implications: AI-based analytics helps in fast decision-making and in adapting the teaching curriculum to fast-changing industry needs. Students’ abilities, such as participation and resilience, and qualities, such as confidence and drive, may be appraised using AI assessment systems.
Theoretical Implication: Artificial intelligence-based evaluation gives instructors, students and parents continuous feedback on how students learn, the help they require and their progress towards their learning objectives.
Bartlomiej Gladysz, Davide Matteri, Krzysztof Ejsmont, Donatella Corti, Andrea Bettoni and Rodolfo Haber Guerra
Abstract
Purpose
Manufacturing small and medium-sized enterprises (SMEs) have already noticed the tangible benefits offered by artificial intelligence (AI). Several approaches have been proposed with a view to supporting them in the processes entailed in this innovation path. These include multisided platforms created to enable connections between SMEs and AI developers, making it easier for them to network with each other. While such platforms are complex, they facilitate simultaneous interaction with several stakeholders and outreach to new potential users (both SMEs and AI developers) through collaboration with supporting ecosystems such as digital innovation hubs (DIHs).
Design/methodology/approach
Mixed methods were used. A literature review was performed to identify existing approaches within and outside the manufacturing domain. Computer-assisted telephone (in-depth) interviewing was conducted to include the perspectives of AI platform stakeholders and to collect primary data from various European countries.
Findings
Several challenges and barriers for AI platform stakeholders were identified alongside the corresponding best practices and guidelines on how to address them.
Originality/value
An effective approach was proposed to provide support to the industrial platform managers in this field, by developing guidelines and best practices on how a platform should build its services to support the ecosystem.
Gao Shang, Sui Pheng Low and Xin Ying Valen Lim
Abstract
Purpose
The rise of artificial intelligence (AI) and differing attitudes towards its adoption in the building and environment (B&E) industry affect whether companies can meet changing demand and remain relevant and competitive. The emergence of Industry 4.0 technologies, coupled with the repercussions of COVID-19, increases both the urgency and the opportunities to which companies must react, as disruptive technologies impact how project management (PM) professionals work and necessitate the acquisition of new skills. This paper attempts to identify the drivers of and barriers to AI adoption in PM, as well as the general perception and receptiveness of local PM professionals towards it, and thereby to propose potential strategies and recommendations to drive AI adoption in PM.
Design/methodology/approach
This study employs both quantitative and qualitative approaches to examine the findings gathered. A survey questionnaire was used as the primary method of gathering quantitative data from 60 local PM professionals. Statistical tests were performed to analyse the data. To substantiate and validate the findings, in-depth interviews with several experienced industry professionals were performed.
Findings
It is found that top drivers include support from top management and leadership, organisational readiness and the need for greater work productivity and efficiency. Top barriers were found to be the high cost of AI implementation and maintenance and the lack of top-down support and skilled employees trained in AI. These findings could be attributed to the present state of AI technologies being new and considerably underutilised in the industry. Hence, substantial top-down support with the right availability of resources and readiness, both in terms of cost and skilled employees, is paramount to kick-start AI implementation in PM.
Originality/value
Little research has been done on the use of AI in PM locally. AI's potential to improve the productivity and efficiency of PM processes in the B&E industry cannot be overlooked. An understanding of the drivers of, barriers to and attitudes towards AI adoption can facilitate more intentional and directed oversight of AI's strategic roll-out at both the governmental and corporate levels and thus mitigate potential challenges that may hinder the implementation process in the future.