Search results

1 – 10 of over 8000
Article
Publication date: 17 June 2021

Sheshadri Chatterjee and Sreenivasulu N.S.

Abstract

Purpose

The purpose of this study is to investigate the impact of artificial intelligence (AI) on human rights issues. The study also examines issues with AI for business, including civil and criminal liability, and provides inputs to policymakers and government authorities to help them overcome the different challenges.

Design/methodology/approach

This study analyses different international and Indian laws on human rights issues and the extent to which these laws protect the rights of individuals that could be under threat due to the advancement of AI technology. It uses a descriptive doctrinal legal research method to examine existing laws and regulations in India that protect human rights and to consider how these laws could be further developed under Indian jurisprudence to address the challenges posed by the rapid advancement of AI-related technology.

Findings

The study provides comprehensive insight into the influence of AI on human rights issues and into the existing laws in India. It also reviews the various policy initiatives taken by the Government of India to regulate AI.

Research limitations/implications

The study highlights key policy recommendations for regulating AI. Moreover, it provides inputs to regulatory authorities and the legal fraternity for drafting a much-needed comprehensive policy to regulate AI while protecting the human rights of citizens.

Originality/value

AI continues to pose entangled challenges to human rights, yet there has been no comprehensive study of the emergence of AI and its influence on human rights issues, especially from the Indian legal perspective; this constitutes a research gap. The study provides unique insight into the emergence of AI applications and their influence on human rights issues, and it offers inputs to help policymakers draft effective AI regulation that protects the human rights of Indian citizens. It therefore adds distinctive value to the overall literature.

Details

International Journal of Law and Management, vol. 64 no. 1
Type: Research Article
ISSN: 1754-243X

Keywords

Open Access
Article
Publication date: 25 January 2022

Yash Chawla, Fumio Shimpo and Maciej M. Sokołowski

Abstract

Purpose

India is a fast-growing economy with a majority share in the global information technology (IT) industry. Rapid urbanisation and modernisation in India have strained its energy sector, which is being reformed to cope. Despite being the global IT hub and having above-average research output in the field of artificial intelligence (AI), India has not yet managed to leverage these advantages to the full. This study aims to address the role of AI and information management (IM) in India's energy transition and to highlight the challenges and barriers to their development and use in the energy sector.

Design/methodology/approach

The study, through analysis of proposed strategies, current policies, available literature and reports, discusses the role of AI and IM in the energy transition in India, highlighting the current situation and challenges.

Findings

The results show dispersed research and development incentives for IT in the Indian energy sector; however, the needed holistic top-down approach is lacking and calls for due attention. Swift, adaptive action from policymakers on AI and IM is warranted in India.

Practical implications

The ongoing transition of the Indian energy sector with the integration of smart technologies would result in increased access to big data. Extracting the maximum benefits from this would require a comprehensive AI and IM policy.

Social implications

The revolution in AI and robotics must be carried out in line with the sustainable development goals, supporting climate action and taking privacy issues into account; both areas must be strengthened in India.

Originality/value

The paper offers an original discussion of applicable solutions concerning AI in the energy transition coming from the Global South, based on lessons learned from the Indian case studies presented in this study.

Details

Digital Policy, Regulation and Governance, vol. 24 no. 1
Type: Research Article
ISSN: 2398-5038

Keywords

Book part
Publication date: 15 July 2020

Gina Granados Palmer

Abstract

Harnessing the power and potential of Artificial Intelligence (AI) continues a centuries-old trajectory of the application of science and knowledge for the benefit of humanity. Such an endeavor has great promise, but also the possibility of creating conflict and disorder. This chapter draws upon the strengths of the previous chapters to provide readers with a purposeful assessment of the current AI security landscape, concluding with four key considerations for a globally secure future.

Details

Artificial Intelligence and Global Security
Type: Book
ISBN: 978-1-78973-812-4

Keywords

Article
Publication date: 11 March 2022

Aline Shakti Franzke

Abstract

Purpose

As Big Data and Artificial Intelligence (AI) proliferate, calls have emerged for ethical reflection. Ethics guidelines have played a central role in this respect. While quantitative research on the ethics guidelines of AI/Big Data has been undertaken, there has been a dearth of systematic qualitative analyses of these documents.

Design/methodology/approach

Aiming to address this research gap, this paper analyses 70 international ethics guidelines documents from academia, NGOs and the corporate realm, published between 2017 and 2020.

Findings

The article presents four key findings: existing ethics guidelines (1) promote a broad spectrum of values; (2) focus principally on AI, followed by (Big) Data and algorithms; (3) do not adequately define the term “ethics” and related terms; and (4) have most frequent recourse to the values of “transparency,” “privacy,” and “security.” Based on these findings, the article argues that the guidelines corpus exhibits discernible utilitarian tendencies; guidelines would benefit from greater reflexivity with respect to their ethical framework; and virtue ethical approaches have a valuable contribution to make to the process of guidelines development.

Originality/value

The paper provides qualitative insights into the ethical discourse surrounding AI guidelines, as well as a concise overview of different types of operative translations of theoretical ethical concepts vis-à-vis the sphere of AI. These may prove beneficial for (applied) ethicists, developers and regulators who understand these guidelines as policy.

Details

Journal of Information, Communication and Ethics in Society, vol. 20 no. 4
Type: Research Article
ISSN: 1477-996X

Keywords

Book part
Publication date: 16 September 2019

Lorien Pratt

Details

Link
Type: Book
ISBN: 978-1-78769-654-9

Article
Publication date: 15 July 2021

Nehemia Sugianto, Dian Tjondronegoro, Rosemary Stockdale and Elizabeth Irenne Yuwono

Abstract

Purpose

The paper proposes a privacy-preserving artificial intelligence-enabled video surveillance technology to monitor social distancing in public spaces.

Design/methodology/approach

The paper proposes a new Responsible Artificial Intelligence Implementation Framework to guide the proposed solution's design and development. It defines responsible artificial intelligence criteria that the solution needs to meet and provides checklists to enforce those criteria throughout the process. To preserve data privacy, the proposed system incorporates a federated learning approach, performing computation on edge devices to limit the movement of sensitive and identifiable data and to eliminate dependency on cloud computing at a central server.
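
As an illustration of the kind of edge-side computation described above, the sketch below shows one federated-averaging round in Python; the linear model, the per-camera data and the update rule are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def local_update(weights, features, labels, lr=0.1):
        # One gradient step computed on an edge device; the raw local data never leaves it.
        preds = features @ weights
        grad = features.T @ (preds - labels) / len(labels)
        return weights - lr * grad

    def federated_average(local_weights):
        # The central aggregator only ever sees model weights, not the underlying data.
        return np.mean(local_weights, axis=0)

    # Hypothetical example: three cameras refine a shared 4-parameter model on local data.
    rng = np.random.default_rng(0)
    global_weights = np.zeros(4)
    updates = [
        local_update(global_weights, rng.normal(size=(32, 4)), rng.normal(size=32))
        for _ in range(3)
    ]
    global_weights = federated_average(updates)
    print(global_weights)

In such a setup only the aggregated weights travel to the server, which is what allows identifiable footage-derived data to stay on the cameras.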

Findings

The proposed system is evaluated through a case study of monitoring social distancing at an airport. The results show how the system can fully address the case study's requirements in terms of reliability, usefulness when deployed on the airport's cameras, and compliance with responsible artificial intelligence.

Originality/value

The paper makes three contributions. First, it proposes a real-time social-distancing breach detection system on the edge that builds on a combination of cutting-edge people detection and tracking algorithms to achieve robust performance. Second, it proposes a design approach for developing responsible artificial intelligence in video surveillance contexts. Third, it presents results and discussion from a comprehensive evaluation in the context of an airport case study to demonstrate the proposed system's robust performance and practical usefulness.

Details

Information Technology & People, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0959-3845

Keywords

Book part
Publication date: 15 July 2020

John R. Shook, Tibor Solymosi and James Giordano

Abstract

Weapons systems and platforms guided by Artificial Intelligence can be designed for greater autonomous decision-making with less real-time human control. Their performance will depend upon independent assessments about the relative benefits, burdens, threats, and risks involved with possible action or inaction. An ethical dimension to autonomous Artificial Intelligence (aAI) is therefore inescapable. The actual performance of aAI can be morally evaluated, and the guiding heuristics to aAI decision-making could incorporate adherence to ethical norms. Who shall be rightly held responsible for what happens if and when aAI commits immoral or illegal actions? Faulting aAI after misdeeds occur is not the same as holding it morally responsible, but that does not mean that a measure of moral responsibility cannot be programmed. We propose that aAI include a “Cooperating System” for participating in the communal ethos within NSID/military organizations.

Details

Artificial Intelligence and Global Security
Type: Book
ISBN: 978-1-78973-812-4

Keywords

Article
Publication date: 20 February 2019

Antonio Vetrò, Antonio Santangelo, Elena Beretta and Juan Carlos De Martin

Abstract

Purpose

This paper aims to analyze the limitations of the mainstream definition of artificial intelligence (AI) as a rational agent, which currently drives the development of most AI systems. The authors advocate the need for a wider range of driving ethical principles for designing more socially responsible AI agents.

Design/methodology/approach

The authors follow an experience-based line of argument to identify the limitations of the mainstream definition of AI, which is based on the concept of rational agents that select, among their designed actions, those that produce the maximum expected utility in the environment in which they operate. The problem of bias in the data used by AI is taken as an example, and a small proof of concept with real datasets is provided.
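
For context, the rational-agent criterion referred to here is usually formalised as maximum expected utility: the agent selects, from its set of actions A, the action whose expected utility over possible outcome states is highest. This is the textbook formalisation, not a formula taken from the paper:

    a^{*} \;=\; \operatorname*{arg\,max}_{a \in A} \; \sum_{s} P(s \mid a)\, U(s)

where P(s | a) is the probability of reaching state s when action a is taken and U(s) is the utility assigned to that state.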

Findings

The authors observe that bias measurements on the datasets are sufficient to demonstrate potential risks of discrimination when those data are used by AI rational agents. Starting from this example, the authors discuss other open issues connected with AI rational agents and provide a few general ethical principles derived from the white paper AI at the service of the citizen, recently published by Agid, the agency of the Italian Government that designs and monitors the evolution of the IT systems of the Public Administration.
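
To make the idea of a bias measurement concrete, the following sketch computes two common dataset-level fairness indicators in Python, a demographic-parity gap and a disparate-impact ratio; the choice of metrics and the toy data are assumptions for illustration, not the measurements or datasets used by the authors.

    import numpy as np

    def demographic_parity_gap(outcomes, groups):
        # Absolute difference in favourable-outcome rates between the two groups.
        return abs(outcomes[groups == 0].mean() - outcomes[groups == 1].mean())

    def disparate_impact_ratio(outcomes, groups):
        # Ratio of favourable-outcome rates; values well below 1 flag potential discrimination.
        rate_a = outcomes[groups == 0].mean()
        rate_b = outcomes[groups == 1].mean()
        return min(rate_a, rate_b) / max(rate_a, rate_b)

    # Hypothetical historical decisions: 1 = favourable outcome; `groups` encodes a protected attribute.
    outcomes = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])
    groups = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
    print(demographic_parity_gap(outcomes, groups))  # ~0.6
    print(disparate_impact_ratio(outcomes, groups))  # ~0.25

Measurements of this kind, taken on the data alone, already indicate how a rational agent optimising expected utility on such data could reproduce the imbalance.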

Originality/value

The paper contributes to the scientific debate on the governance and the ethics of AI with a critical analysis of the mainstream definition of AI.

Details

Digital Policy, Regulation and Governance, vol. 21 no. 3
Type: Research Article
ISSN: 2398-5038

Keywords

Article
Publication date: 20 January 2022

Verma Prikshat, Parth Patel, Arup Varma and Alessio Ishizaka

Abstract

Purpose

This narrative review presents a multi-stakeholder ethical framework for AI-augmented HRM, based on extant research in the domains of ethical HRM and ethical AI. More specifically, the authors identify critical ethical issues pertaining to AI-augmented HRM functions and suggest ethical principles to address these issues by identifying the relevant stakeholders based on the responsibility ethics approach.

Design/methodology/approach

This paper follows a narrative review approach, first identifying the various ethical codes, issues and dilemmas discussed in HRM and AI. The authors then discuss ethical issues concerning AI-augmented HRM, drawing on recent literature. Finally, the authors propose ethical principles for AI-augmented HRM and identify the stakeholders responsible for managing those issues.

Findings

The paper summarises key findings of extant research in the ethical HRM and AI domain and provides a multi-stakeholder ethical framework for AI-augmented HRM functions.

Originality/value

The value of this research lies in conceptualising a multi-stakeholder ethical framework for AI-augmented HRM functions comprising 11 ethical principles. The research also identifies the classes of stakeholders responsible for the identified ethical principles and presents future research directions based on the proposed model.

Details

International Journal of Manpower, vol. 43 no. 1
Type: Research Article
ISSN: 0143-7720

Keywords

Article
Publication date: 4 December 2020

Anton Saveliev and Denis Zhurenkov

Abstract

Purpose

The purpose of this paper is to review and analyze how the development and utilization of artificial intelligence (AI) technologies for social responsibility are defined in the national AI strategies of the USA, Russia and China.

Design/methodology/approach

The notion of responsibility concerning AI is currently not legally defined by any country in the world. The authors use a methodology based on Luciano Floridi's Unified framework of five principles for AI in society to determine how social responsibility is implemented in the AI strategies of the USA, Russia and China.

Findings

All three strategies for the development of AI in the USA, Russia and China, as evaluated in the paper, contain components of one kind or another aimed at achieving public responsibility and the responsible use of AI. The Unified framework of five principles for AI in society, developed by L. Floridi, can be used as a viable assessment tool to determine, at least in general terms, how social responsibility is implied and implemented in national strategic documents in the field of AI. However, the authors call for further development in the field of mutually recognizable ethical models for socially beneficial AI.

Practical implications

This study allows us to better understand the linkages, overlaps and differences between the modern philosophy of information, AI ethics, social responsibility and government regulation. The analysis provided in this paper can serve as a basic blueprint for future attempts to define how social responsibility is understood and implied by government decision-makers.

Originality/value

The analysis provided in the paper, however general and empirical it may be, is a first-time example of how the Unified framework of five principles for AI in society can be applied as an assessment tool to determine social responsibility in AI-related official documents.
