Search results

1 – 10 of over 3000
Open Access
Article
Publication date: 13 October 2023

Bartlomiej Gladysz, Davide Matteri, Krzysztof Ejsmont, Donatella Corti, Andrea Bettoni and Rodolfo Haber Guerra

Abstract

Purpose

Manufacturing small and medium-sized enterprises (SMEs) have already noticed the tangible benefits offered by artificial intelligence (AI). Several approaches have been proposed to support them along this innovation path. These include multisided platforms created to connect SMEs with AI developers, making it easier for them to network with each other. While such platforms are complex, they facilitate simultaneous interaction with several stakeholders and make it possible to reach out to new potential users (both SMEs and AI developers) through collaboration with supporting ecosystems such as digital innovation hubs (DIHs).

Design/methodology/approach

Mixed methods were used. A literature review was performed to identify existing approaches within and outside the manufacturing domain. Computer-assisted telephone (in-depth) interviewing was conducted to capture the perspectives of AI platform stakeholders and to collect primary data from various European countries.

Findings

Several challenges and barriers for AI platform stakeholders were identified alongside the corresponding best practices and guidelines on how to address them.

Originality/value

An effective approach is proposed to support industrial platform managers in this field by developing guidelines and best practices on how a platform should build its services to support the ecosystem.

Details

Central European Management Journal, vol. 31 no. 4
Type: Research Article
ISSN: 2658-0845

Open Access
Article
Publication date: 9 June 2020

Mark Ryan and Bernd Carsten Stahl

Abstract

Purpose

There is a significant amount of research into the ethical consequences of artificial intelligence (AI). This is reflected by many outputs across academia, policy and the media. Many of these outputs aim to provide guidance to particular stakeholder groups. It has recently been shown that there is a large degree of convergence in terms of the principles upon which these guidance documents are based. Despite this convergence, it is not always clear how these principles are to be translated into practice. The purpose of this paper is to clearly illustrate this convergence and the prescriptive recommendations that such documents entail.

Design/methodology/approach

In this paper, the authors move beyond the high-level ethical principles that are common across the AI ethics guidance literature and provide a description of the normative content that is covered by these principles. The outcome is a comprehensive compilation of normative requirements arising from existing guidance documents. This is not only required for a deeper theoretical understanding of AI ethics discussions but also for the creation of practical and implementable guidance for developers and users of AI.

Findings

In this paper, the authors therefore provide a detailed explanation of the normative implications of existing AI ethics guidelines, directed towards developers and organisational users of AI. The authors believe that the paper provides the most comprehensive account of ethical requirements in AI currently available, which is of interest not only to the research and policy communities engaged in the topic but also to the user communities that require guidance when developing or deploying AI systems.

Originality/value

The authors believe they have compiled the most comprehensive document collecting existing guidance, which can guide practical action and will hopefully also support the consolidation of the guidelines landscape. The findings should also be of academic interest and inspire philosophical research on the consistency and justification of the various normative statements found in the literature.

Details

Journal of Information, Communication and Ethics in Society, vol. 19 no. 1
Type: Research Article
ISSN: 1477-996X

Article
Publication date: 1 April 2021

Jin-Young Kim and WanGyu Heo

Abstract

Purpose

In 2018, an artificial intelligence (AI) interview platform was introduced and adopted by companies in Korea. This study aims to explore the perspectives of applicants who have experienced an AI-based interview through this platform and examines the opinions of companies, a platform developer and academia.

Design/methodology/approach

This study uses a phenomenological approach. The participants, who had recent experience of AI video interviews, were recruited offline and online. Eighteen job applicants in their 20s, two companies that had adopted this interview platform, a software developer who created the platform and three professors participated in the study. To collect data, focus group interviews and in-depth interviews were conducted.

Findings

All participants believed that an AI-based interview was more efficient than a traditional one in terms of cost and time savings and was likely to be adopted by more companies in the future. They pointed to the possibility of data bias, which calls for improved AI accountability. Applicants perceived an AI-based interview to be better than traditional evaluation procedures in terms of procedural fairness, objectivity and algorithmic consistency. However, some applicants were dissatisfied with being assessed by AI. The digital divide and automated inequality were recurring themes in this study.

Originality/value

The study is important as it addresses a real application of AI in detail; a case study of smart hiring tools is valuable for identifying the practical and theoretical implications of such hiring in the fields of employment and AI.

Details

Information Technology & People, vol. 35 no. 3
Type: Research Article
ISSN: 0959-3845

Open Access
Article
Publication date: 14 July 2022

Alejandra Rojas and Aarni Tuomi

Abstract

Purpose

The emergence of artificial intelligence (AI) is leading to a job transformation within the service ecosystem in which issues related to AI governance principles may hinder the social sustainability of the sector. The relevance of AI startups in driving innovation has been recognized; thus, this paper aims to investigate whether and how AI startups may influence the sustainable social development (SSD) of the service sector.

Design/methodology/approach

An empirical study based on 24 in-depth interviews was conducted to qualitatively explore the perceptions of service-sector-facing AI policymakers, AI consultants and academics (n = 12), as well as AI startups (founders and AI developers; n = 12). An inductive coding approach was used to identify and analyze the data.

Findings

As part of a complex system, AI startups influence the SSD of the service sector in relation to other stakeholders’ contributions to the ethical deployment of AI. Four key factors influencing AI startups’ ability to contribute to the SSD of the service sector were identified: awareness of socioeconomic issues; fostering decent work; systematically applying ethics; and business model innovation.

Practical implications

This study proposes measures for service sector AI startups to promote collaborative efforts and implement managerial practices that adapt to their available resources.

Originality/value

This study develops original guidelines for startups that seek ethical development of beneficial AI in the service sector, building upon the Ethics as a Service approach.

Details

Journal of Ethics in Entrepreneurship and Technology, vol. 2 no. 1
Type: Research Article
ISSN: 2633-7436

Open Access
Article
Publication date: 2 May 2022

Samuli Laato, Miika Tiainen, A.K.M. Najmul Islam and Matti Mäntymäki

Abstract

Purpose

Inscrutable machine learning (ML) models are part of increasingly many information systems. Understanding how these models behave, and what their output is based on, is a challenge even for developers, let alone non-technical end users.

Design/methodology/approach

The authors investigate how AI systems and their decisions ought to be explained for end users through a systematic literature review.

Findings

The authors’ synthesis of the literature suggests that AI system communication for end users has five high-level goals: (1) understandability, (2) trustworthiness, (3) transparency, (4) controllability and (5) fairness. The authors identified several design recommendations, such as offering personalized and on-demand explanations and focusing on the explainability of key functionalities instead of aiming to explain the whole system. Multiple trade-offs exist in AI system explanations, and there is no single best solution that fits all cases.

Research limitations/implications

Based on the synthesis, the authors provide a design framework for explaining AI systems to end users. The study contributes to the work on AI governance by suggesting guidelines on how to make AI systems more understandable, fair, trustworthy, controllable and transparent.

Originality/value

This literature review brings together the literature on AI system communication and explainable AI (XAI) for end users. Building on previous academic literature on the topic, it provides synthesized insights, design recommendations and a future research agenda.

Open Access
Article
Publication date: 7 June 2023

Zohreh Pourzolfaghar, Marco Alfano and Markus Helfert

Abstract

Purpose

This paper aims to describe the results of applying ethical AI requirements to a healthcare use case. The purpose of this study is to investigate the effectiveness of using open educational resources for Trustworthy AI to provide recommendations for an AI solution within the healthcare domain.

Design/methodology/approach

This study utilizes the Hackathon method as its research methodology. Hackathons are short events in which participants work towards a common goal; here, the purpose was to determine the efficacy of the educational resources provided to the students. To achieve this objective, eight teams of students and faculty members participated in the Hackathon. The teams made suggestions for the healthcare use case based on the knowledge acquired from the educational resources. A research team based at the university hosting the Hackathon devised the use case. The healthcare research team participated in the Hackathon by presenting the use case and subsequently analysing and evaluating the utility of the outcomes.

Findings

The Hackathon produced a framework of proposed recommendations for the introduced healthcare use case, in accordance with the EU's requirements for Trustworthy AI.

Research limitations/implications

The educational resources have been applied to only one use case.

Originality/value

This is the first time that open educational resources for Trustworthy AI have been utilized in higher education, making this a novel study. The university hosting the Hackathon has been the coordinator for the Trustworthy AI Hackathon (as a partner in the Trustworthy AI project).

Details

American Journal of Business, vol. 38 no. 3
Type: Research Article
ISSN: 1935-5181

Open Access
Article
Publication date: 27 June 2023

Teemu Birkstedt, Matti Minkkinen, Anushree Tandon and Matti Mäntymäki

Abstract

Purpose

Following the surge of documents laying out organizations' ethical principles for their use of artificial intelligence (AI), there is a growing demand for translating ethical principles to practice through AI governance (AIG). AIG has emerged as a rapidly growing, yet fragmented, research area. This paper synthesizes the organizational AIG literature by outlining research themes and knowledge gaps as well as putting forward future agendas.

Design/methodology/approach

The authors undertake a systematic literature review on AIG, addressing the current state of its conceptualization and suggesting future directions for AIG scholarship and practice. The review protocol was developed following recommended guidelines for systematic reviews and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA).

Findings

The results of the authors’ review confirmed the assumption that AIG is an emerging research topic with few explicit definitions. Moreover, the review identified four themes in the AIG literature: technology; stakeholders and context; regulation; and processes. The central knowledge gaps revealed were the limited understanding of AIG implementation, lack of attention to the AIG context, uncertain effectiveness of ethical principles and regulation, and insufficient operationalization of AIG processes. To address these gaps, the authors present four future AIG agendas: technical; stakeholder and contextual; regulatory; and process.

Research limitations/implications

To address the identified knowledge gaps, the authors present the following working definition of AIG: AI governance is a system of rules, practices and processes employed to ensure that an organization's use of AI technologies aligns with its strategies, objectives and values, complies with legal requirements and meets ethical principles and the requirements set by stakeholders. Going forward, the authors propose focused empirical research on organizational AIG processes, the establishment of an AI oversight unit and collaborative governance as a research approach.

Practical implications

For practitioners, the authors highlight training and awareness, stakeholder management and the crucial role of organizational culture, including senior management commitment.

Social implications

For society, the authors’ review elucidates the multitude of stakeholders involved in AI governance activities and the complexities of balancing the needs of different stakeholders.

Originality/value

By delineating the AIG concept and the associated research themes, knowledge gaps and future agendas, the authors’ review builds a foundation for organizational AIG research, calling for broad contextual investigations and a deep understanding of AIG mechanisms.

Details

Internet Research, vol. 33 no. 7
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 25 October 2021

Florian Königstorfer and Stefan Thalmann

Abstract

Purpose

Artificial intelligence (AI) is currently one of the most disruptive technologies and can be applied in many different use cases. However, applying AI in regulated environments is challenging, as it is currently not clear how to achieve and assess the fairness, accountability and transparency (FAT) of AI. Documentation is one promising governance mechanism to ensure that AI is FAT when it is applied in practice. However, due to the nature of AI, documentation standards from software engineering are not suitable to collect the required evidence. Even though FAT AI is called for by lawmakers, academics and practitioners, suitable guidelines on how to document AI are not available. This interview study aims to investigate the requirements for AI documentation.

Design/methodology/approach

A total of 16 interviews were conducted with senior employees from companies in the banking and IT industry as well as with consultants. The interviews were then analyzed using an informed-inductive coding approach.

Findings

The authors found five requirements for AI documentation that take the specific nature of AI into account. The interviews show that documenting AI is not a purely technical task but also requires engineers to present, in an understandable way, how the AI is integrated into the business process.

Originality/value

This paper benefits from the unique insights of senior employees into the documentation of AI.

Details

Digital Policy, Regulation and Governance, vol. 23 no. 5
Type: Research Article
ISSN: 2398-5038

Expert briefing
Publication date: 6 May 2021

The proposal advances the Commission’s work over the last five years on developing the EU’s approach to AI. The EU seeks to become a global leader in responsibly developing and…

Details

DOI: 10.1108/OXAN-DB261321

ISSN: 2633-304X

Article
Publication date: 18 August 2021

Sheshadri Chatterjee, Sreenivasulu N.S. and Zahid Hussain

Abstract

Purpose

The applications of artificial intelligence (AI) in different sectors have become agendas for discussion in the highest circles of experts. AI applications can help society, but they can also harm it, even jeopardizing human rights. The purpose of this study is to examine the evolution of AI and its impacts on human rights from social and legal perspectives.

Design/methodology/approach

Drawing on the literature and on various other AI and human rights-related reports, this study attempts to provide a comprehensive and actionable framework for addressing the challenges expected to arise from the increasing use of AI applications in the context of human rights.

Findings

This study shows how different AI applications can both help and harm society. It also highlights the legal issues and associated complexities arising from the advancement of AI technology. Finally, the study provides recommendations to governments, private enterprises and non-governmental organizations on the use of AI applications in their organizations.

Research limitations/implications

This study mostly deals with the legal, social and business-related issues arising from the advancement of AI technology. It does not delve into the technological aspects and algorithms used in AI applications. Policymakers, government agencies, private entities and practitioners can use the recommendations provided in this study to formulate appropriate regulations to control the use of AI technology and its applications.

Originality/value

This study provides a comprehensive view of the emergence of AI technology and its implications for human rights. Only a few studies examine AI and related human rights issues from social, legal and business perspectives, which makes this study unique. It also provides valuable input to government agencies, policymakers and practitioners on the need to formulate comprehensive regulation to control the use of AI technology, which is another unique contribution of this study.

Details

International Journal of Law and Management, vol. 64 no. 2
Type: Research Article
ISSN: 1754-243X
