Search results
1 – 10 of over 1,000
Mustafa Saritepeci, Hatice Yildiz Durak, Gül Özüdoğru and Nilüfer Atman Uslu
Abstract
Purpose
Online privacy pertains to an individual’s capacity to regulate and oversee the gathering and distribution of online information. Conversely, online privacy concern (OPC) pertains to the protection of personal information, along with the worries or convictions concerning potential risks and unfavorable outcomes associated with its collection, utilization and distribution. With a holistic approach to these relationships, this study aims to model the relationships between digital literacy (DL), digital data security awareness (DDSA) and OPC and how these relationships vary by gender.
Design/methodology/approach
The participants of this study are 2,835 university students. Data collection tools in the study consist of personal information form and three different scales. Partial least squares (PLS), structural equation modeling (SEM) and multi-group analysis (MGA) were used to test the framework determined in the context of the research purpose and to validate the proposed hypotheses.
Findings
DL has a direct and positive effect on DDSA, and DDSA has a positive effect on OPC. According to the MGA results, the proposed hypotheses were supported in both the male and female sub-samples. The effect of DDSA on OPC is higher for males.
Originality/value
This study highlights the positive role of DL and perception of data security on OPC. In addition, MGA findings by gender reveal some differences between men and women.
Peer review
The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-03-2023-0122
M A Shariful Amin, Vess L. Johnson, Victor Prybutok and Chang E. Koh
Abstract
Purpose
The purpose of this research is to propose and empirically validate a theoretical framework to investigate the willingness of the elderly to disclose personal health information (PHI) to improve the operational efficiency of AI-integrated caregiver robots.
Design/methodology/approach
Drawing upon Privacy Calculus Theory (PCT) and the Technology Acceptance Model (TAM), 274 usable responses were collected through an online survey.
Findings
Empirical results reveal that trust, privacy concerns, and social isolation have a direct impact on the willingness to disclose PHI. Perceived ease of use (PEOU), perceived usefulness (PU), social isolation, and recognized benefits significantly influence user trust. Conversely, elderly individuals with pronounced privacy concerns are less inclined to disclose PHI when using AI-enabled caregiver robots.
Practical implications
Given the pressing need for AI-enabled caregiver robots due to the aging population and a decrease in professional human caregivers, understanding factors that influence the elderly's disclosure of PHI can guide design considerations and policymaking.
Originality/value
Considering the increased demand for accurate and comprehensive elder services, this is the first time that information disclosure and AI-enabled caregiver robot technologies have been combined in the field of healthcare management. This study bridges the gap between the necessity for technological improvement in caregiver robots and the importance of transparent operational information by disclosing the elderly's willingness to share PHI.
Hamid Reza Nikkhah, Varun Grover and Rajiv Sabherwal
Abstract
Purpose
This study aims to argue that users’ continued use behavior is contingent upon two perceptions (i.e. of the app and of the provider). It examines the moderating effects of users’ perceptions of apps and providers on the effects of security and privacy concerns and investigates whether assurance mechanisms decrease such concerns.
Design/methodology/approach
This study conducts a scenario-based survey with 694 mobile cloud computing (MCC) app users to understand their perceptions and behaviors.
Findings
This study finds that while perceived value of data transfer to the cloud moderates the effects of security and privacy concerns on continued use behavior, trust only moderates the effect of privacy concerns. This study also finds that perceived effectiveness of security and privacy intervention impacts privacy concerns but does not decrease security concerns.
Originality/value
Prior mobile app studies mainly focused on the apps themselves and did not investigate perceptions of app providers alongside app features in the same study. Furthermore, International Organization for Standardization (ISO) 27018 certification and privacy policy notification are interventions that serve as data assurance mechanisms. However, it was unknown whether these interventions can decrease users’ security and privacy concerns after they use MCC apps.
Maosheng Yang, Lei Feng, Honghong Zhou, Shih-Chih Chen, Ming K. Lim and Ming-Lang Tseng
Abstract
Purpose
This study aims to empirically analyse the mechanism by which perceived interactivity in real estate APPs affects consumers' psychological well-being. With the growing application of human–machine interaction in real estate APPs, it is crucial to leverage such interaction to stimulate perceived interactivity between humans and machines, thereby positively impacting consumers' psychological well-being and the sustainable development of real estate APPs. However, it is unclear whether perceived interactivity improves consumers' psychological well-being.
Design/methodology/approach
This study proposes and examines a theoretical model grounded in the perceived interactivity theory, considers the relationship between perceived interactivity and consumers' psychological well-being and explores the mediating effect of perceived value and the moderating role of privacy concerns. It takes real estate APP as the research object, analyses the data of 568 consumer samples collected through questionnaires and then employs structural equation modelling to explore and examine the proposed theoretical model of this study.
Findings
The findings are that perceived interactivity (i.e. human–human interaction and human–information interaction) positively influences perceived value, which in turn affects psychological well-being, and that perceived value partially mediates the effect of perceived interactivity on psychological well-being. More importantly, privacy concerns not only negatively moderate the effect of human–information interaction on perceived value but also negatively moderate the indirect effect of human–information interaction on users' psychological well-being through perceived value.
Originality/value
This study expands research on perceived interactivity and psychological well-being to the context of real estate APPs, validating the mediating role and boundary conditions of perceived interactivity created by human–machine interaction on consumers' psychological well-being. It offers positive implications for practitioners exploring human–machine interaction technologies to improve perceived interactivity between humans and machines and thus enhance consumers' psychological well-being and support the sustainable development of real estate APPs.
Balakrishnan Unny R., Samik Shome, Amit Shankar and Saroj Kumar Pani
Abstract
Purpose
This study aims to provide a systematic review of consumer privacy literature in the context of smartphones and undertake a comprehensive analysis of academic research on this evolving research area.
Design/methodology/approach
This review synthesises antecedents, consequences and mediators reported in consumer privacy literature and presents these factors in a conceptual framework to demonstrate the consumer privacy phenomenon.
Findings
Based on the synthesis of constructs reported in the existing literature, a conceptual framework is proposed highlighting antecedents, mediators and outcomes of experiential marketing efforts. Finally, this study deciphers overlooked areas of consumer privacy in the context of smartphone research and provides insightful directions to advance research in this domain in terms of theory development, context, characteristics and methodology.
Originality/value
This study significantly contributes to consumer behaviour literature, specifically consumer privacy literature.
Haroon Iqbal Maseeh, Charles Jebarajakirthy, Achchuthan Sivapalan, Mitchell Ross and Mehak Rehman
Abstract
Purpose
Smartphone apps collect users' personal information, which triggers privacy concerns for app users. Consequently, app users restrict apps from accessing their personal information. This may impact the effectiveness of in-app advertising. However, research has not yet demonstrated what factors impact app users' decisions to use apps with restricted permissions. This study aims to bridge this gap.
Design/methodology/approach
Using a quantitative research method, the authors collected the data from 384 app users via a structured questionnaire. The data were analysed using AMOS and fuzzy-set qualitative comparative analysis (fsQCA).
Findings
The findings suggest that privacy concerns and risks have a significant positive effect on app usage with restricted permissions, whilst reputation, trust and perceived benefits have a significant negative impact on it. Some app-related factors, such as the number of apps installed and the type of app, also affect app usage with restricted permissions.
Practical implications
Based on the findings, the authors provide several implications for app stores, app developers and app marketers.
Originality/value
This study examines the factors that influence smartphone users' decisions to use apps with restricted permission requests. By doing this, the authors' study contributes to the consumer behaviour literature in the context of smartphone app usage. Also, by explaining the underlying mechanisms through which the principles of communication privacy management theory operate in smartphone app context, the authors' research contributes to the communication privacy management theory.
Andreas Skalkos, Aggeliki Tsohou, Maria Karyda and Spyros Kokolakis
Abstract
Purpose
Search engines, the most popular online services, are associated with several concerns. Users are concerned about the unauthorized processing of their personal data, as well as about search engines keeping track of their search preferences. Various search engines have been introduced to address these concerns, claiming that they protect users’ privacy. The authors call these search engines privacy-preserving search engines (PPSEs). This paper aims to investigate the factors that motivate search engine users to use PPSEs.
Design/methodology/approach
This study adopted protection motivation theory (PMT) and associated its constructs with subjective norms to build a comprehensive research model. The authors tested the research model using survey data from 830 search engine users worldwide.
Findings
The results confirm the interpretive power of PMT in privacy-related decision-making and show that users are more inclined to take protective measures when they consider data abuse a more severe risk and when they feel more vulnerable to data abuse. Furthermore, the results highlight the importance of subjective norms in predicting and determining PPSE use. Because subjective norms refer to perceived social influences from important others to engage in or refrain from protective behavior, the authors reveal that recommendations from people whom users consider important motivate them to take protective measures and use PPSEs.
Research limitations/implications
Despite its interesting results, this research also has some limitations. First, because the survey was conducted online, the study environment was less controlled; participants may have been disrupted or affected, for example, by the presence of others or background noise during the session. Second, some of the survey items may have been misinterpreted by respondents, as they did not have access to clarifications that a researcher could provide. Third, another limitation concerns the use of the Amazon Mechanical Turk tool: according to Paolacci and Chandler (2014), MTurk workers are more educated, younger and less religiously and politically diverse than the US population. Fourth, actual use of PPSEs was self-reported by the participants, which could introduce bias, as internet users’ statements may contrast with their actions in real life or in an experimental scenario (Berendt et al., 2005; Jensen et al., 2005). Moreover, some limitations emerge from the use of PMT as the background theory of the study. PMT identifies the main factors that affect protection motivation, but other environmental and cognitive factors can also play a significant role in shaping an individual’s attitude. As Rogers (1975) argued, PMT does not attempt to specify all of the possible factors in a fear appeal that may affect persuasion, but rather offers a systematic exposition of a limited set of components and cognitive mediational processes that may account for a significant portion of the variance in acceptance. In addition, as Tanner et al. (1991) argue, PMT’s assumption that subjects have not already developed a coping mechanism is one of its limitations. Finally, the sample does not include users from China, which is the second most populated country. Unfortunately, DuckDuckGo has been blocked in China, so it was not feasible to include users from China in this study.
Practical implications
The proposed model and, specifically, the subjective norms construct proved to be successful in predicting PPSE use. This study demonstrates the need for PPSE to exhibit and advertise the technology and measures they use to protect users’ privacy. This will contribute to the effort to persuade internet users to use these tools.
Social implications
This study sought to explore the privacy attitudes of search engine users through PMT and the association of its constructs with subjective norms. It used PMT to elucidate the perceptions that motivate users toward privacy-protective behavior, as well as how these perceptions influence the type of search engine they use. This research is a first step toward a better understanding of the processes that drive people’s motivation to protect, or not to protect, their privacy online by using PPSEs. At the same time, this study contributes to search engine vendors by revealing that users need to be persuaded not only of vendors' privacy policies but also through new diffusion strategies that could enhance the use of PPSEs.
Originality/value
This research is a first step toward gaining a better understanding of the processes that drive people’s motivation to, or not to, protect their privacy online by means of using PPSEs.
Abstract
Purpose
Based on the cognition–affect–conation pattern, this study explores the factors that affect the intention to use facial recognition services (FRS). The study adopts the driving factor perspective to examine how network externalities influence FRS use intention through the mediating role of satisfaction and the barrier factor perspective to analyze how perceived privacy risk affects FRS use intention through the mediating role of privacy cynicism.
Design/methodology/approach
The data collected from 478 Chinese FRS users are analyzed via partial least squares-based structural equation modeling (PLS-SEM).
Findings
The study produces the following results. (1) FRS use intention is motivated directly by the positive affective factor of satisfaction and the negative affective factor of privacy cynicism. (2) Satisfaction is affected by cognitive factors related to network externalities. Perceived complementarity and perceived compatibility, two indirect network externalities, positively affect satisfaction, whereas perceived critical mass, a direct network externality, does not significantly affect satisfaction. In addition, perceived privacy risk generates privacy cynicism. (3) Resistance to change positively moderates the relationship between privacy cynicism and intention to use FRS.
Originality/value
This study extends knowledge on people's use of FRS by exploring affect- and cognition-based factors and finding that the affect-based factors (satisfaction and privacy cynicism) fully mediate the relationship between the cognition-based factors and use intention. This study also expands the cognitive boundaries of FRS use by exploring the boundary condition between affect-based factors and use intention, that is, the moderating role of resistance to change.
Kristen L. Walker and George R. Milne
Abstract
Purpose
The authors argue that privacy is integral to the well-being of consumers and an essential component in not only corporate social responsibility (CSR) but what they term uniquely as social media responsibility (SMR). A conceptual framework is proposed that delineates the privacy issues companies should pay attention to in artificial intelligence (AI)-fueled social media environments.
Design/methodology/approach
The authors review literature on privacy issues in social media and AI in the academic and practitioner literatures. Based on the review, arguments focus on the need for an SMR framework, proposing responsible use of consumer data that is attentive to consumers' privacy concerns.
Findings
Implications from the framework are a path forward for social media companies to treat consumer data more fairly in this new environment. The framework has implications for companies to reduce potential harms to consumers and consider addressing their power and responsibility. With social media and AI transforming consumer behavior so profoundly, there are a variety of short- and long-term social implications.
Originality
Since AI tools are becoming integral to social media company activities, this research addresses the changing responsibilities social media companies have in securing consumers' data and giving consumers the agency to protect their privacy effectively. The authors propose an SMR framework based on CSR research and the AI tools employed by social media companies.
Eugene Cheng-Xi Aw, Lai-Ying Leong, Jun-Jie Hew, Nripendra P. Rana, Teck Ming Tan and Teck-Weng Jee
Abstract
Purpose
Under the pressure of dynamic business environments, firms in the banking and finance industry are gradually embracing Fintech, such as robo-advisors, as part of their digital transformation process. While robo-advisory services are expected to witness lucrative growth, challenges persist in the current landscape where most consumers are unready to adopt and even resist the new service. The study aims to investigate resistance to robo-advisors through the privacy and justice perspective. The human-like attributes are modeled as the antecedents to perceived justice, followed by the subsequent outcomes of privacy concerns, perceived intrusiveness and resistance.
Design/methodology/approach
An online survey was conducted to gather consumer responses about their perceptions of robo-advisors. Two hundred valid questionnaires were collected and analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM).
Findings
The results revealed that (1) perceived anthropomorphism and perceived autonomy are the positive determinants of perceived justice, (2) perceived justice negatively impacts privacy concerns and perceived intrusiveness and (3) privacy concerns and perceived intrusiveness positively influence resistance to robo-advisors.
Originality/value
The present study contributes to robo-advisory service research by applying a privacy and justice perspective to explain consumer resistance to robo-advisors, thereby complementing past studies that focused on the technology acceptance paradigm. The study also offers practical implications for mitigating resistance to robo-advisors.