Search results

1 – 10 of 16
Open Access
Article
Publication date: 20 October 2022

Deborah Richards, Salma Banu Nazeer Khan, Paul Formosa and Sarah Bankins

To protect information and communication technology (ICT) infrastructure and resources against poor cyber hygiene behaviours, organisations commonly require internal users to…

Abstract

Purpose

To protect information and communication technology (ICT) infrastructure and resources against poor cyber hygiene behaviours, organisations commonly require internal users to confirm they will abide by an ICT Code of Conduct. Before commencing enrolment, university students sign ICT policies; however, individuals can ignore or act contrary to these policies. This study aims to evaluate whether students can apply ICT Codes of Conduct and explores viable approaches for ensuring that students understand how to act ethically and in accordance with such codes.

Design/methodology/approach

The authors designed a between-subjects experiment involving 260 students' responses to five scenario pairs, each involving a breach or non-breach of a university's ICT policy, following a priming intervention to heighten awareness of either the ICT policy or relevant ethical principles; a control group received no priming.

Findings

This study found a significant difference in students’ responses to the breach versus non-breach cases, indicating their ability to apply the ICT Code of Conduct. Qualitative comments revealed the priming materials influenced their reasoning.

Research limitations/implications

The authors' priming interventions did not improve breach recognition relative to the control group. More nuanced and targeted priming interventions are suggested for future studies.

Practical implications

Appropriate application of an ICT Code of Conduct can be measured by collecting student or employee responses to breach/non-breach scenario pairs that are based on the Code and embedded with ethical principles.

Social implications

Shared awareness and protection of ICT resources.

Originality/value

Compliance with ICT Codes of Conduct by students is under-investigated. This study shows that code-based scenarios can measure understanding and suggests that targeted priming might offer a non-resource-intensive training approach.

Details

Organizational Cybersecurity Journal: Practice, Process and People, vol. 2 no. 2
Type: Research Article
ISSN: 2635-0270

Open Access
Article
Publication date: 24 May 2023

Bakhtiar Sadeghi, Deborah Richards, Paul Formosa, Mitchell McEwan, Muhammad Hassan Ali Bajwa, Michael Hitchens and Malcolm Ryan

Cybersecurity vulnerabilities are often due to human users acting according to their own ethical priorities. With the goal of providing tailored training to cybersecurity…

Abstract

Purpose

Cybersecurity vulnerabilities are often due to human users acting according to their own ethical priorities. With the goal of providing tailored training to cybersecurity professionals, the authors conducted a study to uncover profiles of human factors that influence which ethical principles are valued highest following exposure to ethical dilemmas presented in a cybersecurity game.

Design/methodology/approach

The authors’ game first sensitises players (cybersecurity trainees) to five cybersecurity ethical principles (beneficence, non-maleficence, justice, autonomy and explicability) and then allows the player to explore their application in multiple cybersecurity scenarios. After playing the game, players rank the five ethical principles in terms of importance. A total of 250 first-year cybersecurity students played the game. To develop profiles, the authors collected players' demographics, knowledge about ethics, personality, moral stance and values.

Findings

The authors built models to predict the importance of each of the five ethical principles. The analyses show that, generally, the main driver influencing the priority given to specific ethical principles is cultural background, followed by the personality traits of extraversion and conscientiousness. The importance of the ingroup was also a prominent factor.

Originality/value

Cybersecurity professionals need to understand the impact of users' ethical choices. To provide ethics training, the profiles uncovered will be used to build artificially intelligent (AI) non-player characters (NPCs) that expose the player to multiple viewpoints. The NPCs will adapt their training according to the player's predicted viewpoint.

Details

Organizational Cybersecurity Journal: Practice, Process and People, vol. 3 no. 2
Type: Research Article
ISSN: 2635-0270

Article
Publication date: 11 May 2023

Shivangi Verma and Naval Garg

With the growth and profound influence of technology on our life, it is important to address the ethical issues inherent to the development and deployment of technology…

Abstract

Purpose

With the growth and profound influence of technology on our lives, it is important to address the ethical issues inherent to the development and deployment of technology. Researchers and practitioners point to the need to examine how technology and ethics interact, how ethical principles regulate technology and what the probable future course of action might be to effectively embed techno-ethical practices in a socio-technical discourse. To address these questions, the authors of the present study conducted exploratory research to understand the trend and relevance of technology ethics since its inception.

Design/methodology/approach

The study collected 679 documents for the period 1990–2022 from the Scopus database. A quantitative bibliometric analysis was conducted to study patterns of authorship, publications, citations, prominent journals and contributors in the subject area. VOSviewer software was used to visualize and map academic performance in techno-ethics.

Findings

The findings revealed that techno-ethics is an emerging field that requires more investigation to keep pace with ever-changing technology development. The data showed substantial growth in techno-ethics across the humanities, social science and management domains in the last two decades. Most of the frequently cited references and documents in the database cover the themes of artificial intelligence, big data, computer ethics, morality, decision-making, IT ethics, human rights, responsibility and privacy.

Originality/value

The article provides a comprehensive overview of scientific production and the main research trends in techno-ethics up to 2022. The study is among the first to map academic productivity and performance in embedding ethics in technology.

Details

Library Hi Tech, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-8831

Open Access
Article
Publication date: 9 June 2020

Mark Ryan and Bernd Carsten Stahl

The purpose of this paper is to clearly illustrate this convergence and the prescriptive recommendations that such documents entail. There is a significant amount of research into…

Abstract

Purpose

There is a significant amount of research into the ethical consequences of artificial intelligence (AI), reflected in many outputs across academia, policy and the media. Many of these outputs aim to provide guidance to particular stakeholder groups. It has recently been shown that there is a large degree of convergence in terms of the principles upon which these guidance documents are based. Despite this convergence, it is not always clear how these principles are to be translated into practice. The purpose of this paper is to clearly illustrate this convergence and the prescriptive recommendations that such documents entail.

Design/methodology/approach

In this paper, the authors move beyond the high-level ethical principles that are common across the AI ethics guidance literature and provide a description of the normative content that is covered by these principles. The outcome is a comprehensive compilation of normative requirements arising from existing guidance documents. This is not only required for a deeper theoretical understanding of AI ethics discussions but also for the creation of practical and implementable guidance for developers and users of AI.

Findings

In this paper, the authors provide a detailed explanation of the normative implications of existing AI ethics guidelines, directed towards developers and organisational users of AI. The authors believe that the paper provides the most comprehensive account of ethical requirements in AI currently available, which is of interest not only to the research and policy communities engaged in the topic but also to the user communities that require guidance when developing or deploying AI systems.

Originality/value

The authors believe they have compiled the most comprehensive collection of existing guidance, one that can guide practical action and will hopefully also support the consolidation of the guidelines landscape. The findings should also be of academic interest and inspire philosophical research on the consistency and justification of the various normative statements found in the literature.

Details

Journal of Information, Communication and Ethics in Society, vol. 19 no. 1
Type: Research Article
ISSN: 1477-996X

Open Access
Article
Publication date: 27 June 2023

Teemu Birkstedt, Matti Minkkinen, Anushree Tandon and Matti Mäntymäki

Following the surge of documents laying out organizations' ethical principles for their use of artificial intelligence (AI), there is a growing demand for translating ethical…

Abstract

Purpose

Following the surge of documents laying out organizations' ethical principles for their use of artificial intelligence (AI), there is a growing demand for translating ethical principles to practice through AI governance (AIG). AIG has emerged as a rapidly growing, yet fragmented, research area. This paper synthesizes the organizational AIG literature by outlining research themes and knowledge gaps as well as putting forward future agendas.

Design/methodology/approach

The authors undertake a systematic literature review on AIG, addressing the current state of its conceptualization and suggesting future directions for AIG scholarship and practice. The review protocol was developed following recommended guidelines for systematic reviews and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA).

Findings

The results of the authors’ review confirmed the assumption that AIG is an emerging research topic with few explicit definitions. Moreover, the authors’ review identified four themes in the AIG literature: technology, stakeholders and context, regulation and processes. The central knowledge gaps revealed were the limited understanding of AIG implementation, lack of attention to the AIG context, uncertain effectiveness of ethical principles and regulation, and insufficient operationalization of AIG processes. To address these gaps, the authors present four future AIG agendas: technical, stakeholder and contextual, regulatory, and process. Going forward, the authors propose focused empirical research on organizational AIG processes, the establishment of an AI oversight unit and collaborative governance as a research approach.

Research limitations/implications

To address the identified knowledge gaps, the authors present the following working definition of AIG: AI governance is a system of rules, practices and processes employed to ensure an organization's use of AI technologies aligns with its strategies, objectives and values, complete with legal requirements, ethical principles and the requirements set by stakeholders.

Practical implications

For practitioners, the authors highlight training and awareness, stakeholder management and the crucial role of organizational culture, including senior management commitment.

Social implications

For society, the authors' review elucidates the multitude of stakeholders involved in AI governance activities and the complexities related to balancing the needs of different stakeholders.

Originality/value

By delineating the AIG concept and the associated research themes, knowledge gaps and future agendas, the authors' review builds a foundation for organizational AIG research, calling for broad contextual investigations and a deep understanding of AIG mechanisms.

Details

Internet Research, vol. 33 no. 7
Type: Research Article
ISSN: 1066-2243

Open Access
Article
Publication date: 5 July 2021

Babak Abedin

Research into the interpretability and explainability of data analytics and artificial intelligence (AI) systems is on the rise. However, most recent studies either solely promote…

Abstract

Purpose

Research into the interpretability and explainability of data analytics and artificial intelligence (AI) systems is on the rise. However, most recent studies either solely promote the benefits of explainability or criticize it due to its counterproductive effects. This study addresses this polarized space; it aims to identify the opposing effects of AI explainability and the tensions between them, and proposes how to manage these tensions to optimize AI system performance and trustworthiness.

Design/methodology/approach

The author systematically reviews the literature and synthesizes it using a contingency theory lens to develop a framework for managing the opposing effects of AI explainability.

Findings

The author finds five opposing effects of explainability: comprehensibility, conduct, confidentiality, completeness and confidence in AI (5Cs). The author also proposes six perspectives on managing the tensions between the 5Cs: pragmatism in explanation, contextualization of the explanation, cohabitation of human agency and AI agency, metrics and standardization, regulatory and ethical principles, and other emerging solutions (i.e. AI enveloping, blockchain and AI fuzzy systems).

Research limitations/implications

As in other systematic literature review studies, the results are limited by the content of the selected papers.

Practical implications

The findings show how AI owners and developers can manage tensions between profitability, prediction accuracy and system performance via visibility, accountability and maintaining the “social goodness” of AI. The results guide practitioners in developing metrics and standards for AI explainability, with the context of AI operation as the focus.

Originality/value

This study addresses polarized beliefs amongst scholars and practitioners about the benefits of AI explainability versus its counterproductive effects. It posits that there is no single best way to maximize AI explainability. Instead, the co-existence of enabling and constraining effects must be managed.

Open Access
Article
Publication date: 1 November 2023

Dan Jin

The purpose of this study is to provide insights and guidance for practitioners in terms of ensuring rigorous ethical and moral conduct in artificial intelligence (AI) hiring and…

Abstract

Purpose

The purpose of this study is to provide insights and guidance for practitioners in terms of ensuring rigorous ethical and moral conduct in artificial intelligence (AI) hiring and implementation.

Design/methodology/approach

The research employed two experimental designs and one pilot study to investigate the ethical and moral implications of different levels of AI implementation in the hospitality industry, the intersection of self-congruency and ethical considerations when AI replaces human service providers and the impact of psychological distance associated with AI on individuals' ethical and moral considerations. These research methods included surveys and experimental manipulations to gather and analyze relevant data.

Findings

Findings provide valuable insights into the ethical and moral dimensions of AI implementation, the influence of self-congruency on ethical considerations and the role of psychological distance in individuals’ ethical evaluations. They contribute to the development of guidelines and practices for the responsible and ethical implementation of AI in various industries, including the hospitality sector.

Practical implications

The study highlights the importance of rigorous ethical and moral practices in AI hiring and implementation to uphold AI principles in the restaurant industry. It provides practitioners with useful insights into how AI robotization can improve ethical and moral standards.

Originality/value

The study contributes to the literature by providing insights into the ethical and moral implications of AI service robots in the hospitality industry. Additionally, the study explores the relationship between psychological distance and acceptance of AI-intervened service, which has not been extensively studied in the literature.

Details

International Hospitality Review, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2516-8142

Article
Publication date: 28 March 2019

Jean Paul Simon

This paper aims to clarify the notion of artificial intelligence (AI), reviewing the present scope of the phenomenon through its main applications. It aims at describing the…

Abstract

Purpose

This paper aims to clarify the notion of artificial intelligence (AI), reviewing the present scope of the phenomenon through its main applications. It describes the various applications while assessing the markets, highlighting some of the leading industrial sectors in the field. It also identifies pioneering companies and the geographical distribution of AI companies.

Design/methodology/approach

The paper builds upon an in-depth investigation of public initiatives, focusing mostly on the EU. It is based on desk research: a comprehensive review of the main grey and scientific literature in this field.

Findings

The paper notes that there is no real consensus on a definition for this umbrella term and that the definition fluctuates over time, but it highlights some of the main changes and advances that have taken place over the past 60 years. It stresses that, in spite of the hype, demand appears uncertain on both the business and consumer sides. The scope of the announced disruptions is not easy to assess; technological innovation associated with AI may be modest or take some time to be fully deployed. However, some companies and regions already lead in the field.

Research limitations/implications

The paper, based on desk research, does not consider expert opinions. In addition, the scientific literature on the phenomenon is still scarce (though the technical literature in the specific research sectors of AI is not). Most of the data come from consultancies or government publications, which may introduce some bias, although the paper gathers various, often conflicting, viewpoints.

Originality/value

The paper gives a thorough review of the available literature (consultancies, governments), stressing the limitations of the available research on economic and social aspects. It provides a comprehensive overview of the major trends in the field and a global overview of companies and regions.

Details

Digital Policy, Regulation and Governance, vol. 21 no. 3
Type: Research Article
ISSN: 2398-5038

Open Access
Article
Publication date: 10 February 2022

Anders Nordgren

The purpose of this paper is to pinpoint and analyse ethical issues raised by the dual role of artificial intelligence (AI) in relation to climate change, that is, AI as a…

Abstract

Purpose

The purpose of this paper is to pinpoint and analyse ethical issues raised by the dual role of artificial intelligence (AI) in relation to climate change, that is, AI as a contributor to climate change and AI as a contributor to fighting climate change.

Design/methodology/approach

This paper consists of three main parts. The first part provides a short background on AI and climate change respectively, followed by a presentation of empirical findings on the contribution of AI to climate change. The second part presents proposals by various AI researchers and commentators on how AI companies may contribute to fighting climate change by reducing greenhouse gas emissions from training and use of AI and by providing AI assistance to various mitigation and adaptation measures. The final part investigates ethical issues raised by some of the options presented in the second part.

Findings

AI applications may lead to substantial emissions but may also play an important role in mitigation and adaptation. Given this dual role of AI, ethical considerations by AI companies and governments are of vital importance.

Practical implications

This paper pinpoints practical ethical issues that AI companies and governments should take into account.

Social implications

Given the potential impact of AI on society, it is vital that AI companies and governments take seriously the ethical issues raised by the dual role of AI in relation to climate change.

Originality/value

AI has been the subject of substantial ethical investigation, and even more so has climate change. However, the relationship between AI and climate change has received only limited attention from an ethical perspective. This paper provides such considerations.

Details

Journal of Information, Communication and Ethics in Society, vol. 21 no. 1
Type: Research Article
ISSN: 1477-996X

Article
Publication date: 22 June 2022

Ugo Pagallo, Jacopo Ciani Sciolla and Massimo Durante

The paper aims to examine the environmental challenges of artificial intelligence (AI) in EU law that regard both illicit uses of the technology, i.e. overuse or misuse of AI and…

Abstract

Purpose

The paper aims to examine the environmental challenges of artificial intelligence (AI) in EU law, which concern both illicit uses of the technology, i.e. overuse or misuse of AI, and its possible underuse. The aim of the paper is to show how such regulatory efforts by legislators should be understood as a critical component of the EU institutions' Green Deal, that is, to save our planet from impoverishment, plunder and destruction.

Design/methodology/approach

To illustrate the different ways in which AI can represent a game-changer for our environmental challenges, attention is drawn to a multidisciplinary approach, which includes the analysis of the initiatives on the European Green Deal; the proposals for a new legal framework on data governance and AI; principles of environmental and constitutional law; the interaction of such principles and provisions of environmental and constitutional law with AI regulations; other sources of EU law and of its Member States.

Findings

Most recent initiatives on AI, including the European Commission's AI Act (AIA), have insisted on a human-centric approach, whereas it seems obvious that the challenges of environmental law, including those triggered by AI, should be addressed in accordance with an ontocentric, rather than anthropocentric, stance. The paper provides four recommendations addressing the legal consequences of this short-sighted view, including the lack of environmental concerns in the AIA.

Research limitations/implications

The environmental challenges of AI suggest complementing current regulatory efforts of EU lawmakers with a new generation of eco-impact assessments; duties of care and disclosure of non-financial information; clearer parameters for the implementation of the integration principle in EU constitutional law; special policies for the risk of underusing AI for environmental purposes. Further research should examine these policies in connection with the principle of sustainability and the EU plan for a circular economy, as another crucial ingredient of the Green Deal.

Practical implications

The paper provides a set of concrete measures to properly tackle both illicit uses of AI and the risk of its possible underuse for environmental purposes. Such measures do not only concern the “top down” efforts of legislators but also litigation and the role of courts. Current trends of climate change litigation and the transplant of class actions into several civil law jurisdictions shed new light on the ways in which we should address the environmental challenges of AI, even before a court.

Social implications

The analysis of the legal threats and opportunities brought forth by AI supports more robust protection of people's right to a high level of environmental protection and to improvement of the quality of the environment.

Originality/value

The paper explores a set of issues, often overlooked by scholars and institutions, that is nonetheless crucial for any Green Deal, such as the distinction between the human-centric approach of current proposals in the field of technological regulation and the traditional ontocentric stance of environmental law. The analysis considers for the first time the legal issues that follow this distinction in the field of AI regulation and how we should address them.

Details

Transforming Government: People, Process and Policy, vol. 16 no. 3
Type: Research Article
ISSN: 1750-6166
