Search results

1 – 10 of 41
Article
Publication date: 7 March 2023

Omoregie Charles Osifo

Abstract

Purpose

The purpose of this paper is to identify the key roles of transparency in making artificial intelligence (AI) greener (i.e. causing lower carbon dioxide emissions) during the design, development and manufacturing stages or processes of AI technologies (e.g. apps, systems, agents, tools, artifacts) and to use the "explicability requirement" as an essential value within the framework of transparency in supporting arguments for realizing greener AI.

Design/methodology/approach

The approach of this paper is argumentative, which is supported by ideas from existing literature and documents.

Findings

This paper puts forward a relevant recommendation for achieving better and sustainable outcomes after the reexamination of the identified roles played by transparency within the AI technology context. The proposed recommendation is based on scientific opinion, which is justified by the roles and importance of the two approaches (compliance and integrity) in ethics management and other areas of ethical studies.

Originality/value

The originality of this paper falls within the boundary of filling the gap that exists in sustainable AI technology and the roles of transparency.

Details

Journal of Information, Communication and Ethics in Society, vol. 21 no. 2
Type: Research Article
ISSN: 1477-996X

Open Access
Article
Publication date: 24 May 2023

Bakhtiar Sadeghi, Deborah Richards, Paul Formosa, Mitchell McEwan, Muhammad Hassan Ali Bajwa, Michael Hitchens and Malcolm Ryan


Abstract

Purpose

Cybersecurity vulnerabilities are often due to human users acting according to their own ethical priorities. With the goal of providing tailored training to cybersecurity professionals, the authors conducted a study to uncover profiles of human factors that influence which ethical principles are valued highest following exposure to ethical dilemmas presented in a cybersecurity game.

Design/methodology/approach

The authors’ game first sensitises players (cybersecurity trainees) to five cybersecurity ethical principles (beneficence, non-maleficence, justice, autonomy and explicability) and then allows the player to explore their application in multiple cybersecurity scenarios. After playing the game, players rank the five ethical principles in terms of importance. A total of 250 first-year cybersecurity students played the game. To develop profiles, the authors collected players' demographics, knowledge about ethics, personality, moral stance and values.

Findings

The authors built models to predict the importance of each of the five ethical principles. The analyses show that, generally, the main driver influencing the priority given to specific ethical principles is cultural background, followed by the personality traits of extraversion and conscientiousness. The importance of the ingroup was also a prominent factor.

Originality/value

Cybersecurity professionals need to understand the impact of users' ethical choices. To provide ethics training, the profiles uncovered will be used to build artificially intelligent (AI) non-player characters (NPCs) that expose the player to multiple viewpoints. The NPCs will adapt their training according to the player's predicted viewpoint.

Details

Organizational Cybersecurity Journal: Practice, Process and People, vol. 3 no. 2
Type: Research Article
ISSN: 2635-0270

Article
Publication date: 4 December 2020

Anton Saveliev and Denis Zhurenkov


Abstract

Purpose

The purpose of this paper is to review and analyze how the development and utilization of artificial intelligence (AI) technologies for social responsibility are defined in the national AI strategies of the USA, Russia and China.

Design/methodology/approach

The notion of responsibility concerning AI is currently not legally defined by any country in the world. The authors use a methodology based on Luciano Floridi's Unified framework of five principles for AI in society to determine how social responsibility is implemented in the AI strategies of the USA, Russia and China.

Findings

All three national AI strategies evaluated in the paper, those of the USA, Russia and China, contain components aimed at achieving public responsibility and the responsible use of AI. The Unified framework of five principles for AI in society, developed by L. Floridi, can be used as a viable assessment tool to determine, at least in general terms, how social responsibility is implied and implemented in national strategic documents in the field of AI. However, the authors call for further development of mutually recognizable ethical models for socially beneficial AI.

Practical implications

This study allows us to better understand the linkages, overlaps and differences between modern philosophy of information, AI-ethics, social responsibility and government regulation. The analysis provided in this paper can serve as a basic blueprint for future attempts to define how social responsibility is understood and implied by government decision-makers.

Originality/value

The analysis provided in the paper, however general and empirical it may be, is a first-time example of how the Unified framework of five principles for AI in society can be applied as an assessment tool to determine social responsibility in AI-related official documents.

Content available
Book part
Publication date: 27 June 2022

Details

The Emerald Handbook of Computer-Mediated Communication and Social Media
Type: Book
ISBN: 978-1-80071-598-1

Article
Publication date: 7 March 2016

Richard Grover


Abstract

Purpose

The purpose of this paper is to review the issues involved in the implementation of mass valuation systems and the conditions needed for doing so.

Design/methodology/approach

The method makes use of case studies of and fieldwork in countries that have either recently introduced mass valuations, brought about major changes in their systems or have been working towards introducing mass valuations.

Findings

Mass valuation depends upon a degree of development and transparency in property markets and an institutional structure capable of collecting and maintaining up-to-date price data and attributes of properties. Countries introducing mass valuation may need to undertake work on improving the institutional basis for this as a pre-condition for successful implementation of mass valuation.

Practical implications

Although much of the literature is concerned with how to improve the statistical modelling of market prices, there are significant issues concerned with the type and quality of the data used in mass valuation models and the requirements for successful use of mass valuations.

Originality/value

Much of the literature on mass valuation takes the form of the development of statistical models of value. There has been much less attention given to the issues involved in the implementation of mass valuation.

Details

Journal of Property Investment & Finance, vol. 34 no. 2
Type: Research Article
ISSN: 1463-578X

Article
Publication date: 27 June 2023

Stefano Calzati

Abstract

Purpose

The purpose of this paper is to explore the epistemological tensions embedded within big data and data-driven technologies to advance a socio-political reconsideration of the public dimension in the assessment of their implementation.

Design/methodology/approach

This paper builds upon (and revisits) the European Union’s (EU) normative understanding of artificial intelligence (AI) and data-driven technologies, blending reflections rooted in philosophy of technology with issues of democratic participation in tech-related matters.

Findings

This paper proposes the conceptual design of sectorial and/or local-level e-participation platforms to ignite an ongoing discussion – involving experts, private actors, as well as cognizant citizens – over the implementation of data-driven technologies, to avoid siloed, tech-solutionist decisions.

Originality/value

This paper inscribes the EU's normative approach to AI and data-driven technologies, as well as critical work on the governance of these technologies, into a broader political dimension, suggesting a way to democratically and epistocratically open up the decisional processes over the development and implementation of these technologies and to turn such processes into systemic civic involvement.

Details

Journal of Information, Communication and Ethics in Society, vol. 21 no. 3
Type: Research Article
ISSN: 1477-996X

Article
Publication date: 9 September 2022

Enrico Bracci


Abstract

Purpose

Governments are increasingly turning to artificial intelligence (AI) algorithmic systems to increase efficiency and effectiveness of public service delivery. While the diffusion of AI offers several desirable benefits, caution and attention should be posed to the accountability of AI algorithm decision-making systems in the public sector. The purpose of this paper is to establish the main challenges that an AI algorithm might bring about to public service accountability. In doing so, the paper also delineates future avenues of investigation for scholars.

Design/methodology/approach

This paper builds on previous literature and anecdotal cases of AI applications in public services, drawing on streams of literature from accounting, public administration and information technology ethics.

Findings

Based on previous literature, the paper highlights the accountability gaps that AI can bring about and the possible countermeasures. The introduction of AI algorithms in public services modifies the chain of responsibility. This distributed responsibility requires accountability governance, together with technical solutions, to meet multiple accountabilities and close the accountability gaps. The paper also delineates a research agenda for accounting scholars to make accountability more "intelligent".

Originality/value

The findings of the paper shed new light and perspective on how public service accountability in AI should be considered and addressed. The results developed in this paper will stimulate scholars to explore, also from an interdisciplinary perspective, the issues public service organizations are facing to make AI algorithms accountable.

Details

Accounting, Auditing & Accountability Journal, vol. 36 no. 2
Type: Research Article
ISSN: 0951-3574

Open Access
Article
Publication date: 20 October 2022

Deborah Richards, Salma Banu Nazeer Khan, Paul Formosa and Sarah Bankins

Abstract

Purpose

To protect information and communication technology (ICT) infrastructure and resources against poor cyber hygiene behaviours, organisations commonly require internal users to confirm they will abide by an ICT Code of Conduct. Before commencing enrolment, university students sign ICT policies; however, individuals can ignore or act contrary to these policies. This study aims to evaluate whether students can apply ICT Codes of Conduct and explores viable approaches for ensuring that students understand how to act ethically and in accordance with such codes.

Design/methodology/approach

The authors designed a between-subjects experiment involving 260 students’ responses to five scenario-pairs that involve breach/non-breach of a university’s ICT policy following a priming intervention to heighten awareness of ICT policy or relevant ethical principles, with a control group receiving no priming.

Findings

This study found a significant difference in students’ responses to the breach versus non-breach cases, indicating their ability to apply the ICT Code of Conduct. Qualitative comments revealed the priming materials influenced their reasoning.

Research limitations/implications

The authors’ priming interventions were inadequate for improving breach recognition compared to the control group. More nuanced and targeted priming interventions are suggested for future studies.

Practical implications

Appropriate application of an ICT Code of Conduct can be measured by collecting student/employee responses to breach/non-breach scenario pairs based on the code and embedded with ethical principles.

Social implications

Shared awareness and protection of ICT resources.

Originality/value

Compliance with ICT Codes of Conduct by students is under-investigated. This study shows that code-based scenarios can measure understanding and suggest that targeted priming might offer a non-resource intensive training approach.

Details

Organizational Cybersecurity Journal: Practice, Process and People, vol. 2 no. 2
Type: Research Article
ISSN: 2635-0270

Article
Publication date: 24 April 2020

Jenny Bunn


Abstract

Purpose

This paper introduces the topic of explainable artificial intelligence (XAI) and reports on the outcomes of an interdisciplinary workshop exploring it. It reflects on XAI through the frame and concerns of the recordkeeping profession.

Design/methodology/approach

This paper takes a reflective approach. The origins of XAI are outlined as a way of exploring how it can be viewed and how it is currently taking shape. The workshop and its outcomes are briefly described and reflections on the process of investigating and taking part in conversations about XAI are offered.

Findings

The article reinforces the value of undertaking interdisciplinary and exploratory conversations with others. It offers new perspectives on XAI and suggests ways in which recordkeeping can productively engage with it, as both a disruptive force on its thinking and a set of newly emerging record forms to be created and managed.

Originality/value

The value of this paper comes from the introduction it provides, which will allow recordkeepers to gain a sense of what XAI is and of the different ways in which they are already engaging, and can continue to engage, with it.

Details

Records Management Journal, vol. 30 no. 2
Type: Research Article
ISSN: 0956-5698
