Search results

1 – 10 of over 1000
Open Access
Article
Publication date: 15 January 2024

Christine Prince, Nessrine Omrani and Francesco Schiavone

Research on online user privacy shows that empirical evidence on how privacy literacy relates to users' information privacy empowerment is missing. To fill this gap, this paper…


Abstract

Purpose

Research on online user privacy shows that empirical evidence on how privacy literacy relates to users' information privacy empowerment is missing. To fill this gap, this paper investigated the respective influence of two primary dimensions of online privacy literacy – namely declarative and procedural knowledge – on online users' information privacy empowerment.

Design/methodology/approach

An empirical analysis is conducted using survey data collected in Europe in 2019 from 27,524 respondents representative of the European population.

Findings

The main results show that users' procedural knowledge is positively linked to users' privacy empowerment. The relationship between users' declarative knowledge and users' privacy empowerment is partially supported. While greater awareness of firms' and organizations' data collection practices and further-use conditions was found to be significantly associated with increased users' privacy empowerment, results unexpectedly revealed that awareness of the GDPR and users' privacy empowerment are negatively associated. The empirical findings also reveal that greater online privacy literacy is associated with heightened users' information privacy empowerment.

Originality/value

While a few studies have made systematic efforts to measure changes that occurred on websites since GDPR enforcement, it remains unclear how individuals perceive, understand and apply the GDPR rights/guarantees, and how likely these are to strengthen users' information privacy control. This paper therefore contributes empirically to understanding how online users' privacy literacy, shaped by both declarative and procedural knowledge, is likely to affect users' information privacy empowerment. The study empirically investigates the effectiveness of the GDPR in raising users' information privacy empowerment from a user-based perspective. Results stress the importance of greater transparency in the data tracking and processing decisions made by online businesses and services to strengthen users' control over information privacy. Study findings also emphasize the crucial need for more educational efforts to raise users' awareness of the GDPR rights/guarantees related to data protection. Empirical findings further show that users who adopt self-protective approaches to reinforce personal data privacy are more likely to perceive greater control over personal data. A broad implication of this finding for practitioners and e-businesses is the need to empower users with adequate privacy protection tools to ensure more confidential transactions.

Details

Information Technology & People, vol. 37 no. 8
Type: Research Article
ISSN: 0959-3845


Article
Publication date: 5 April 2024

Jawahitha Sarabdeen and Mohamed Mazahir Mohamed Ishak

The General Data Protection Regulation (GDPR) of the European Union (EU) was passed to protect data privacy. Though the GDPR intended to address issues related to data privacy in the…

Abstract

Purpose

The General Data Protection Regulation (GDPR) of the European Union (EU) was passed to protect data privacy. Though the GDPR intended to address issues related to data privacy in the EU, it created an extra-territorial effect through Articles 3, 45 and 46. Extra-territorial effect refers to the application or effect of local laws and regulations in another country. Lawmakers around the globe passed, or intensified their efforts to pass, laws covering personal data privacy so as to meet the adequacy requirement under Articles 45–46 of the GDPR while providing comprehensive legislation locally. This study aims to analyze Malaysian and Saudi Arabian legislation on health data privacy and its adequacy in meeting GDPR data privacy protection requirements.

Design/methodology/approach

The research used a systematic literature review, legal content analysis and comparative analysis to critically analyze health data protection in Malaysia and Saudi Arabia in comparison with the GDPR and to assess whether that protection is adequate to meet the EU's data transfer requirements.

Findings

The findings suggest that the private sector is better regulated in Malaysia than the public sector. Saudi Arabia has some general laws covering health data privacy in both public and private sector organizations until the newly passed data protection law is implemented in 2024. The findings also suggest that the Personal Data Protection Act 2010 of Malaysia and the Personal Data Protection Law 2022 of Saudi Arabia could be considered "adequate" under the GDPR.

Originality/value

The research identifies the key principles for assessing the adequacy of health data laws in Malaysia and Saudi Arabia, an area in which there is a dearth of literature. This will help to propose suggestions for improving laws concerning health data protection so that various stakeholders can benefit.

Details

International Journal of Law and Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1754-243X


Article
Publication date: 28 February 2024

Mustafa Saritepeci, Hatice Yildiz Durak, Gül Özüdoğru and Nilüfer Atman Uslu

Online privacy pertains to an individual’s capacity to regulate and oversee the gathering and distribution of online information. Online privacy concern (OPC), in turn, pertains…

Abstract

Purpose

Online privacy pertains to an individual’s capacity to regulate and oversee the gathering and distribution of online information. Online privacy concern (OPC), in turn, pertains to the protection of personal information, along with the worries or convictions concerning potential risks and unfavorable outcomes associated with its collection, utilization and distribution. Taking a holistic approach to these relationships, this study aims to model the relationships between digital literacy (DL), digital data security awareness (DDSA) and OPC, and how these relationships vary by gender.

Design/methodology/approach

The participants of this study are 2,835 university students. Data collection tools consist of a personal information form and three different scales. Partial least squares structural equation modeling (PLS-SEM) and multi-group analysis (MGA) were used to test the framework determined in the context of the research purpose and to validate the proposed hypotheses.

Findings

DL has a direct and positive effect on digital data security awareness (DDSA), and DDSA has a positive effect on OPC. According to the MGA results, the hypotheses were supported in both the male and female sub-samples. The effect of DDSA on OPC is higher for males.

Originality/value

This study highlights the positive role of DL and perception of data security on OPC. In addition, MGA findings by gender reveal some differences between men and women.

Peer review

The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-03-2023-0122

Details

Online Information Review, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1468-4527


Article
Publication date: 15 July 2021

Nehemia Sugianto, Dian Tjondronegoro, Rosemary Stockdale and Elizabeth Irenne Yuwono

The paper proposes a privacy-preserving artificial intelligence-enabled video surveillance technology to monitor social distancing in public spaces.

Abstract

Purpose

The paper proposes a privacy-preserving artificial intelligence-enabled video surveillance technology to monitor social distancing in public spaces.

Design/methodology/approach

The paper proposes a new Responsible Artificial Intelligence Implementation Framework to guide the proposed solution's design and development. It defines responsible artificial intelligence criteria that the solution needs to meet and provides checklists to enforce those criteria throughout the process. To preserve data privacy, the proposed system incorporates a federated learning approach, allowing computation to be performed on edge devices to limit the movement of sensitive and identifiable data and to eliminate dependency on cloud computing at a central server.

Findings

The proposed system is evaluated through a case study of monitoring social distancing at an airport. The results show how the system can fully address the case study's requirements in terms of its reliability, its usefulness when deployed to the airport's cameras, and its compliance with responsible artificial intelligence.

Originality/value

The paper makes three contributions. First, it proposes a real-time social distancing breach detection system on edge that builds on a combination of cutting-edge people detection and tracking algorithms to achieve robust performance. Second, it proposes a design approach to develop responsible artificial intelligence in video surveillance contexts. Third, it presents results and discussion from a comprehensive evaluation in the context of a case study at an airport to demonstrate the proposed system's robust performance and practical usefulness.

Details

Information Technology & People, vol. 37 no. 2
Type: Research Article
ISSN: 0959-3845


Article
Publication date: 2 February 2023

Lai-Wan Wong, Garry Wei-Han Tan, Keng-Boon Ooi and Yogesh Dwivedi

The deployment of artificial intelligence (AI) technologies in travel and tourism has received much attention in the wake of the pandemic. While societal adoption of AI has…


Abstract

Purpose

The deployment of artificial intelligence (AI) technologies in travel and tourism has received much attention in the wake of the pandemic. While societal adoption of AI has accelerated, it also raises trust challenges. Literature on trust in AI is scant, especially regarding the vulnerabilities faced by different stakeholders, which should inform policy and practice. This work proposes a framework to understand the use of AI technologies from the perspectives of the institution and the self, in order to understand how travelers form trust in the mandated use of AI-based technologies.

Design/methodology/approach

An empirical investigation using partial least squares structural equation modeling was employed on responses from 209 users. This paper considered factors related to the self (perceptions of self-threat, privacy empowerment, trust propensity) and the institution (regulatory protection, corporate privacy responsibility) to understand the formation of travelers' trust in AI use.

Findings

Results showed that self-threat, trust propensity and regulatory protection influence users' trust in AI use; privacy empowerment and corporate privacy responsibility do not.

Originality/value

Insights from past studies on AI in travel and tourism are limited. This study advances the current literature on affordance and reactance theories to provide a better understanding of what makes travelers trust the mandated use of AI technologies. This work also demonstrates the paradoxical effects of self and institution on technologies and their relationship to trust. For practice, this study offers insights for enhancing adoption by developing trust.

Details

Internet Research, vol. 34 no. 2
Type: Research Article
ISSN: 1066-2243


Article
Publication date: 9 April 2024

M A Shariful Amin, Vess L. Johnson, Victor Prybutok and Chang E. Koh

The purpose of this research is to propose and empirically validate a theoretical framework to investigate the willingness of the elderly to disclose personal health information…

Abstract

Purpose

The purpose of this research is to propose and empirically validate a theoretical framework to investigate the willingness of the elderly to disclose personal health information (PHI) to improve the operational efficiency of AI-integrated caregiver robots.

Design/methodology/approach

Drawing upon Privacy Calculus Theory (PCT) and the Technology Acceptance Model (TAM), 274 usable responses were collected through an online survey.

Findings

Empirical results reveal that trust, privacy concerns, and social isolation have a direct impact on the willingness to disclose PHI. Perceived ease of use (PEOU), perceived usefulness (PU), social isolation, and recognized benefits significantly influence user trust. Conversely, elderly individuals with pronounced privacy concerns are less inclined to disclose PHI when using AI-enabled caregiver robots.

Practical implications

Given the pressing need for AI-enabled caregiver robots due to the aging population and a decrease in professional human caregivers, understanding factors that influence the elderly's disclosure of PHI can guide design considerations and policymaking.

Originality/value

Considering the increased demand for accurate and comprehensive elder services, this is the first time that information disclosure and AI-enabled caregiver robot technologies have been combined in the field of healthcare management. This study bridges the gap between the necessity for technological improvement in caregiver robots and the importance of transparent operational information by examining the elderly's willingness to share PHI.

Article
Publication date: 18 January 2024

Yelena Smirnova and Victoriano Travieso-Morales

The General Data Protection Regulation (GDPR) was designed to address privacy challenges posed by globalisation and rapid technological advancements; however, its implementation…

Abstract

Purpose

The General Data Protection Regulation (GDPR) was designed to address privacy challenges posed by globalisation and rapid technological advancements; however, its implementation has also introduced new hurdles for companies. This study aims to analyse and synthesise the existing literature on the challenges of GDPR implementation in business enterprises, while also outlining directions for future research.

Design/methodology/approach

The methodology of this review follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. It uses an extensive search strategy across the Scopus and Web of Science databases, rigorously applying inclusion and exclusion criteria, yielding a detailed analysis of 16 selected studies that concentrate on GDPR implementation challenges in business organisations.

Findings

The findings indicate a predominant use of conceptual study methodologies in prior research, often limited to specific countries and technology-driven sectors. There is also an inclination towards exploring GDPR challenges within small and medium enterprises, while larger enterprises remain comparatively unexplored. Additionally, further investigation is needed to understand the implications of emerging technologies on GDPR compliance.

Research limitations/implications

This study’s limitations include the search strategy's reliance on two databases, the potential exclusion of relevant research, the limited existing literature on GDPR implementation challenges in a business context and the possible influence of the diverse methodologies and contexts of previous studies on the generalisability of the findings.

Originality/value

The originality of this review lies in its exclusive focus on analysing GDPR implementation challenges within the business context, coupled with a fresh categorisation of these challenges into technical, legal, organisational, and regulatory dimensions.

Details

International Journal of Law and Management, vol. 66 no. 3
Type: Research Article
ISSN: 1754-243X


Open Access
Article
Publication date: 8 February 2024

Leo Van Audenhove, Lotte Vermeire, Wendy Van den Broeck and Andy Demeulenaere

The purpose of this paper is to analyse data literacy in the new Digital Competence Framework for Citizens (DigComp 2.2). Mid-2022 the Joint Research Centre of the European…

Abstract

Purpose

The purpose of this paper is to analyse data literacy in the new Digital Competence Framework for Citizens (DigComp 2.2). In mid-2022, the Joint Research Centre of the European Commission published a new version of the DigComp (EC, 2022). This new version focusses more on the datafication of society and emerging technologies, such as artificial intelligence. This paper analyses how DigComp 2.2 defines data literacy and how the framework views it through a societal lens.

Design/methodology/approach

This study critically examines DigComp 2.2, using the data literacy competence model developed by the Knowledge Centre for Digital and Media Literacy Flanders-Belgium. The examples of knowledge, skills and attitudes focussing on data literacy (n = 84) are coded and mapped onto the data literacy competence model, which differentiates between using data and understanding data.

Findings

Data literacy is well covered in the framework, but there is a stronger emphasis on understanding data than on using data; for example, collecting data is coded only once. Thematically, DigComp 2.2 primarily focusses on security and privacy (31 codes), with less attention given to the societal impact of data, such as environmental impact or data fairness.

Originality/value

Given the datafication of society, data literacy has become increasingly important. DigComp is widely used across different disciplines and now integrates data literacy as a required competence for citizens. It is, thus, relevant to analyse its views on data literacy and emerging technologies, as it will have a strong impact on education in Europe.

Details

Information and Learning Sciences, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2398-5348


Open Access
Article
Publication date: 15 February 2024

Hina Naz and Muhammad Kashif

Artificial intelligence (AI) offers many benefits to improve predictive marketing practice. However, it raises ethical concerns regarding customer prioritization, market share…


Abstract

Purpose

Artificial intelligence (AI) offers many benefits to improve predictive marketing practice. However, it raises ethical concerns regarding customer prioritization, market share concentration and consumer manipulation. Drawing on the experiences and perspectives of AI and predictive marketing professionals, this paper explores these ethical concerns from a contemporary perspective, aiming to contribute a modern perspective on the ethical concerns of AI usage in predictive marketing.

Design/methodology/approach

The study conducted semi-structured interviews over six weeks with 14 participants experienced in AI-enabled systems for marketing, using purposive and snowball sampling techniques. Thematic analysis was used to explore themes emerging from the data.

Findings

Results reveal that using AI in marketing could lead to unintended consequences, such as perpetuating existing biases, violating customer privacy, limiting competition and manipulating consumer behavior.

Originality/value

The authors identify seven unique themes and benchmark them against Ashok’s model to provide a structured lens for interpreting the results. The framework presented by this research is unique and can be used to support ethical research spanning social, technological and economic aspects within the predictive marketing domain.


Article
Publication date: 27 November 2023

Natália Lemos, Cândida Sofia Machado and Cláudia Cardoso

The rapid advancement of technology has transformed the health-care industry and enabled the emergence of m-Health solutions such as health apps. The viability and success of…

Abstract

Purpose

The rapid advancement of technology has transformed the health-care industry and enabled the emergence of m-Health solutions such as health apps. The viability and success of these apps depend on defining a monetization model appropriate to their specificities. In this sense, the purpose of this paper is to study the monetization mechanisms of health apps and to establish how alternative revenues determine whether a health app is free or paid.

Design/methodology/approach

Probability models are used to identify the factors that explain whether a health app is free or paid.

Findings

Results show that the presence of alternative monetization mechanisms negatively impacts the likelihood of a health app being paid for. The use of personal data to customize advertising (the monetization of “privacy capital”) and the inclusion of ads in the app are alternative means of monetization that can decrease the likelihood of a health app being paid for. The possibility of in-app purchases has a smaller negative impact on the probability of a health app being paid for. The choice of platform on which to commercialize an app is also a strategic decision that influences the likelihood of an app being paid for.

Originality/value

This work stands out for bringing together the two largest platforms present in Portugal and for focusing on the revenue and monetization perspective of health apps rather than on downloads.

Details

International Journal of Pharmaceutical and Healthcare Marketing, vol. 18 no. 2
Type: Research Article
ISSN: 1750-6123

