Search results

1 – 10 of over 1000
Open Access
Article
Publication date: 15 January 2024

Christine Prince, Nessrine Omrani and Francesco Schiavone

Research on online user privacy shows that empirical evidence on how privacy literacy relates to users' information privacy empowerment is missing. To fill this gap, this paper…

Abstract

Purpose

Research on online user privacy shows that empirical evidence on how privacy literacy relates to users' information privacy empowerment is missing. To fill this gap, this paper investigated the respective influence of two primary dimensions of online privacy literacy – namely declarative and procedural knowledge – on online users' information privacy empowerment.

Design/methodology/approach

An empirical analysis is conducted on a dataset collected in Europe: a 2019 survey of 27,524 respondents representative of the European population.

Findings

The main results show that users' procedural knowledge is positively linked to their privacy empowerment, while the relationship between declarative knowledge and privacy empowerment is only partially supported. Greater awareness of firms' and organizations' data collection practices and conditions of further use was significantly associated with increased privacy empowerment; unexpectedly, however, awareness of the GDPR was negatively associated with privacy empowerment. The empirical findings also reveal that greater online privacy literacy is associated with heightened information privacy empowerment.

Originality/value

While a few studies have made systematic efforts to measure changes to websites since GDPR enforcement, it remains unclear how individuals perceive, understand and apply the GDPR's rights and guarantees, and how likely these are to strengthen users' control over their information privacy. This paper therefore contributes empirically to understanding how online users' privacy literacy, shaped by both declarative and procedural knowledge, affects their information privacy empowerment. The study empirically investigates the effectiveness of the GDPR in raising users' information privacy empowerment from a user-based perspective. Results stress the importance of greater transparency in the data tracking and processing decisions made by online businesses and services to strengthen users' control over information privacy. The findings also emphasize the crucial need for more educational efforts to raise users' awareness of the GDPR's data protection rights and guarantees. They further show that users who adopt self-protective approaches to reinforce personal data privacy are more likely to perceive greater control over their personal data. A broad implication for practitioners and e-businesses is the need to empower users with adequate privacy protection tools to ensure more confidential transactions.

Details

Information Technology & People, vol. 37 no. 8
Type: Research Article
ISSN: 0959-3845

Open Access
Book part
Publication date: 17 August 2021

Mike Hynes

Details

The Social, Cultural and Environmental Costs of Hyper-Connectivity: Sleeping Through the Revolution
Type: Book
ISBN: 978-1-83909-976-2

Open Access
Article
Publication date: 22 November 2023

Juliana Elisa Raffaghelli, Marc Romero Carbonell and Teresa Romeu-Fontanillas

It has been demonstrated that AI-powered, data-driven tools’ usage is not universal, but deeply linked to socio-cultural contexts. The purpose of this paper is to display the need…

Abstract

Purpose

It has been demonstrated that AI-powered, data-driven tools’ usage is not universal, but deeply linked to socio-cultural contexts. The purpose of this paper is to display the need of adopting situated lenses, relating to specific personal and professional learning about data protection and privacy.

Design/methodology/approach

The authors introduce the results of a case study based on a large educational intervention at a fully online university. The views of participants from degree programmes representing different knowledge areas and contexts of technology adoption (work, education and leisure) were explored after they analysed the terms and conditions of use concerning privacy and data usage. After consultation, 27 course instructors (CIs) integrated the activity and worked with 823 students (702 of whose responses were complete and valid for analysis).

Findings

The results of this study indicated that the intervention increased privacy-conscious online behaviour among most participants. Results were more contradictory regarding the tools' daily usage, with overall positive views of the tools as mostly needed or "indispensable".

Research limitations/implications

Though applicable only to the authors' case study and not generalisable, the results show both the complexity of privacy views and the presence of forms of renunciation in the trade-off between data protection and the need to use specific software in a personal and professional context.

Practical implications

This study provides an example of teaching and learning activities that support the development of data literacy, with a focus on data privacy. Therefore, beyond the research findings, any educator can build on the authors' proposal to produce materials and interventions aimed at developing awareness of data privacy issues.

Social implications

Developing awareness, understanding and skills relating to data privacy is crucial for living in a society where digital technologies are used in every area of personal and professional life. Well-informed citizens will be able to obscure, resist or assert their rights whenever a violation of their privacy takes place. They will also be able to support (through adoption) better-quality apps and platforms, instead of passively accepting whatever is obvious or easy to use.

Originality/value

The authors specifically highlight how students and educators, as part of a specific learning and cultural ecosystem, need tailored opportunities to keep reflecting on their degrees of freedom and their possibilities to act with regard to evolving data systems and their alternatives.

Details

Information and Learning Sciences, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2398-5348

Open Access
Book part
Publication date: 4 June 2021

Anne Cheung

Doxing refers to the intentional public release by a third party of personal data without consent, often with the intent to humiliate, intimidate, harass, or punish the individual…

Abstract

Doxing refers to the intentional public release by a third party of personal data without consent, often with the intent to humiliate, intimidate, harass, or punish the individual concerned. Intuitively, it is tempting to condemn doxing as a crude form of cyber violence that weaponizes personal data. When it is used as a strategy of resistance by the powerless to hold the powerful accountable, however, a more nuanced understanding is called for. This chapter focuses on the doxing phenomenon in Hong Kong, where doxing incidents against police officers and their family members have skyrocketed since 2019 (a 75-fold increase over 2018). It contends that doxing for political purposes is closely related to digital vigilantism, signifying a loss of confidence in the ruling authority and a yearning for an alternative form of justice. The chapter therefore argues that public interest should be recognized as a legal defense in doxing cases when those discharging or entrusted with public duty are the targets. Equally, it is important to confine the categories of personal data disclosed to information necessary to reveal the alleged wrongdoer or wrongdoing. Only in this way can a fair balance be struck between privacy, freedom of expression, and public interest.

Details

The Emerald International Handbook of Technology-Facilitated Violence and Abuse
Type: Book
ISBN: 978-1-83982-849-2

Open Access
Article
Publication date: 13 January 2023

Bianca Kronemann, Hatice Kizgin, Nripendra Rana and Yogesh K. Dwivedi

This paper aims to explore the overall research question “How can artificial intelligence (AI) influence consumer information disclosure?”. It considers how anthropomorphism of…

Abstract

Purpose

This paper aims to explore the overall research question “How can artificial intelligence (AI) influence consumer information disclosure?”. It considers how anthropomorphism of AI, personalisation and privacy concerns influence consumers’ attitudes and encourage disclosure of their private information.

Design/methodology/approach

This research draws upon the personalisation-privacy paradox (PPP) and privacy calculus theory (PCT) to address the research question and examine how AI can influence consumer information disclosure. It is proposed that anthropomorphism of AI and personalisation positively influence consumer attitudes and intentions to disclose personal information to a digital assistant, while privacy concerns negatively affect attitude and information disclosure.

Findings

This paper develops a conceptual model and presents seven research propositions (RPs) for future research.

Originality/value

Building upon PPP and PCT, this paper presents a view on the benefits and drawbacks of AI from a consumer perspective. It contributes to the literature by critically reflecting on how consumer information disclosure is influenced by AI. In addition, seven RPs and future research areas are outlined relating to privacy and consumer information disclosure in the context of AI.

Open Access
Article
Publication date: 19 June 2023

Jorge Xavier and Winnie Ng Picoto

Regulatory initiatives and related technological shifts have been imposing restrictions on data-driven marketing (DDM) practices. This paper aims to find the main restrictions for…

Abstract

Purpose

Regulatory initiatives and related technological shifts have been imposing restrictions on data-driven marketing (DDM) practices. This paper aims to find the main restrictions for DDM and the key management theories applied to investigate the consequences of these restrictions.

Design/methodology/approach

The authors conducted a unified bibliometric analysis with 104 publications retrieved from both Scopus and Web of Science, followed by a qualitative, in-depth systematic literature review to identify the management theories in literature and inform a research agenda.

Findings

The fragmentation of the research outcomes was overcome by identifying three main clusters and 11 management theories, which structured 18 questions for future research.

Originality/value

To the best of the authors' knowledge, this paper is the first to set a frontier between almost three decades in which DDM evolved without significant restrictions, grounded in innovation and market self-regulation, and an era in which data privacy, antitrust, competition and data sovereignty regulations converge to impose structural changes, requiring scholars and practitioners to rethink the role of data at the strategic level of the firm.

Details

International Journal of Law and Management, vol. 65 no. 5
Type: Research Article
ISSN: 1754-243X

Open Access
Article
Publication date: 12 April 2022

Dijana Peras and Renata Mekovec

The purpose of this paper is to improve the understanding of cloud service users’ privacy concerns, which are anticipated to considerably hinder cloud service market growth. The…

Abstract

Purpose

The purpose of this paper is to improve the understanding of cloud service users' privacy concerns, which are anticipated to considerably hinder cloud service market growth. The researchers explored privacy concerns along the dimensions identified as relevant in the cloud context.

Design/methodology/approach

Content analysis was used to identify the privacy problems most often raised in previous cloud research. Multidimensional developmental theory (MDT) was used to build a conceptual model of cloud privacy concerns. A literature review was conducted to identify the privacy-related constructs used to measure privacy concerns in previous cloud research.

Findings

The paper provides a systematization of recent cloud privacy research, proposes a conceptual model of cloud privacy concerns, identifies the instruments used to measure privacy concerns in previous cloud research and identifies categories of problems that need to be addressed in future cloud research.

Originality/value

To the best of the authors' knowledge, this paper is the first to identify the categories of privacy problems and dimensions not yet measured in the cloud context. Their simultaneous examination could clarify the effects of different dimensions on the privacy concerns of cloud users. The conceptual model of cloud privacy concerns will allow cloud service providers to focus on the key cloud problems affecting users' privacy concerns and to use the most appropriate approaches to privacy protection communication and preservation.

Details

Information & Computer Security, vol. 30 no. 5
Type: Research Article
ISSN: 2056-4961

Open Access
Article
Publication date: 19 July 2023

Magnus Söderlund

Service robots are expected to become increasingly common, but the ways in which they can move around in an environment with humans, collect and store data about humans and share…

Abstract

Purpose

Service robots are expected to become increasingly common, but the ways in which they can move around in an environment with humans, collect and store data about humans and share such data create a potential for privacy violations. In human-to-human contexts, such violations are transgressions of norms to which humans typically react negatively. This study examines whether similar reactions occur when the transgressor is a robot. The main dependent variable was the overall evaluation of the robot.

Design/methodology/approach

Service robot privacy violations were manipulated in a between-subjects experiment in which a human user interacted with an embodied humanoid robot in an office environment.

Findings

The results show that the robot's violations of human privacy attenuated the overall evaluation of the robot and that this effect was sequentially mediated by perceived robot morality and perceived robot humanness. Given that a similar reaction pattern would be expected when humans violate other humans' privacy, the present study offers evidence in support of the notion that humanlike non-humans can elicit responses similar to those elicited by real humans.

Practical implications

The results imply that designers of service robots, and managers in firms using such robots to provide service to employees, should be concerned with restricting the potential for privacy violations by robots if the goal is to increase the acceptance of service robots in human environments.

Originality/value

To date, few empirical studies have examined reactions to service robots that violate privacy norms.

Details

Journal of Service Theory and Practice, vol. 33 no. 7
Type: Research Article
ISSN: 2055-6225

Open Access
Article
Publication date: 8 February 2024

Leo Van Audenhove, Lotte Vermeire, Wendy Van den Broeck and Andy Demeulenaere

The purpose of this paper is to analyse data literacy in the new Digital Competence Framework for Citizens (DigComp 2.2). Mid-2022 the Joint Research Centre of the European…

Abstract

Purpose

The purpose of this paper is to analyse data literacy in the new Digital Competence Framework for Citizens (DigComp 2.2). In mid-2022, the Joint Research Centre of the European Commission published a new version of DigComp (EC, 2022). This new version focusses more on the datafication of society and on emerging technologies, such as artificial intelligence. This paper analyses how DigComp 2.2 defines data literacy and how the framework views it through a societal lens.

Design/methodology/approach

This study critically examines DigComp 2.2, using the data literacy competence model developed by the Knowledge Centre for Digital and Media Literacy Flanders-Belgium. The examples of knowledge, skills and attitudes focussing on data literacy (n = 84) are coded and mapped onto the data literacy competence model, which differentiates between using data and understanding data.

Findings

Data literacy is well-covered in the framework, but there is a stronger emphasis on understanding data rather than using data, for example, collecting data is only coded once. Thematically, DigComp 2.2 primarily focusses on security and privacy (31 codes), with less attention given to the societal impact of data, such as environmental impact or data fairness.

Originality/value

Given the datafication of society, data literacy has become increasingly important. DigComp is widely used across different disciplines and now integrates data literacy as a required competence for citizens. It is, thus, relevant to analyse its views on data literacy and emerging technologies, as it will have a strong impact on education in Europe.

Details

Information and Learning Sciences, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2398-5348

Open Access
Article
Publication date: 15 February 2024

Hina Naz and Muhammad Kashif

Artificial intelligence (AI) offers many benefits to improve predictive marketing practice. It raises ethical concerns regarding customer prioritization, market share…

Abstract

Purpose

Artificial intelligence (AI) offers many benefits for improving predictive marketing practice, but it raises ethical concerns regarding customer prioritization, market share concentration and consumer manipulation. This study aims to contribute to the field by exploring these ethical concerns from a contemporary perspective, drawing on the experiences and perspectives of AI and predictive marketing professionals.

Design/methodology/approach

The study conducted semi-structured interviews over six weeks with 14 participants experienced in AI-enabled marketing systems, recruited using purposive and snowball sampling techniques. Thematic analysis was used to explore themes emerging from the data.

Findings

Results reveal that using AI in marketing could lead to unintended consequences, such as perpetuating existing biases, violating customer privacy, limiting competition and manipulating consumer behavior.

Originality/value

The authors identify seven unique themes and benchmark them against Ashok's model to provide a structured lens for interpreting the results. The framework presented by this research is unique and can be used to support ethical research spanning the social, technological and economic aspects of the predictive marketing domain.

