Search results
1 – 10 of over 2,000
Dillip Kumar Rath and Ajit Kumar
Abstract
Purpose
In today’s digitized environment, information privacy has become a prime concern for everybody. The purpose of this paper is to provide an understanding of information privacy concern arising because of the application of computer-based information system in the various domains (E-Governance, E-Commerce, E-Health, E-Banking and E-Finance), and at different levels, i.e. individual, group, organizational and societal.
Design/methodology/approach
The authors performed an in-depth analysis of research articles related to information privacy concerns and the elements affecting those concerns at different levels of application. The primary sources of literature were articles retrieved from online databases. Various online journals and scholarly articles were searched in detail to locate information privacy-related articles.
Findings
The authors carried out a detailed literature review to identify the different levels at which privacy is a major challenge. This paper provides insights into whether information privacy concerns may obstruct the successful dispersal and adoption of applications in various domains. Consumers’ attitudes toward information privacy concerns have been examined and addressed at the individual level in numerous domains. Privacy concerns at the individual level, as suggested by this analysis, appear to have been sufficiently addressed. However, information privacy concerns at other levels – group, organizational and societal – still need the attention of researchers.
Originality/value
In this paper, the authors posit that this work will help researchers focus more on the group-level privacy perspective in the information privacy era.
David D’Acunto, Serena Volo and Raffaele Filieri
Abstract
Purpose
This study aims to explore US hotel guests’ privacy concerns with a twofold aim as follows: to investigate the privacy categories, themes and attributes most commonly discussed by guests in their reviews and to examine the influence of cultural proximity on privacy concerns.
Design/methodology/approach
This study combined automated text analytics with content analysis. The database consisted of 68,000 hotel reviews written by US guests lodged in different types of hotels in five European cities. Linguistic Inquiry Word Count, Leximancer and SPSS software were used for data analysis. Automated text analytics and a validated privacy dictionary were used to investigate the reviews by exploring the categories, themes and attributes of privacy concerns. Content analysis was used to analyze the narratives and select representative snippets.
Findings
The findings revealed various categories, themes and concepts related to privacy concerns. The two most commonly discussed categories were privacy restriction and outcome state. The main themes discussed in association with privacy were “room,” “hotel” and “breakfast,” and several concepts within each of these themes were identified. Furthermore, US guests showed the lowest levels of privacy concern when staying at American hotel chains as opposed to non-American chains or independent hotels, highlighting the role of cultural proximity in privacy concerns.
Practical implications
Hotel managers can benefit from the results by improving their understanding of the hotel and service attributes most associated with privacy concerns. Specific suggestions are provided to hoteliers on how to increase guests’ privacy and how to manage issues related to cultural distance with guests.
Originality/value
This study contributes to the hospitality literature by investigating a neglected issue: on-site hotel guests’ privacy concerns. Using an unobtrusive method of data collection and text analytics, this study offers valuable insights into the categories of privacy, the most recurrent themes in hotel guests’ reviews and the potential relationship between cultural proximity and privacy concerns.
Dijana Peras and Renata Mekovec
Abstract
Purpose
The purpose of this paper is to improve the understanding of cloud service users’ privacy concerns, which are anticipated to considerably hinder cloud service market growth. The researchers have explored privacy concerns from dimensions that were identified as relevant in the cloud context.
Design/methodology/approach
Content analysis was used to identify the privacy problems raised most often in previous cloud research. Multidimensional developmental theory (MDT) was used to build a conceptual model of cloud privacy concerns. A literature review was conducted to identify the privacy-related constructs used to measure privacy concerns in previous cloud research.
Findings
The paper provides systematization of recent cloud privacy research, proposal of a conceptual model of cloud privacy concerns, identification of measuring instruments that were used to measure privacy concerns in previous cloud research and identification of categories of problems that need to be addressed in future cloud research.
Originality/value
This paper has identified the categories of privacy problems and dimensions that have not yet been measured in the cloud context, to the best of the authors’ knowledge. Their simultaneous examination could clarify the effects of different dimensions on the privacy concerns of cloud users. The conceptual model of cloud privacy concerns will allow cloud service providers to focus on key cloud problems affecting users’ privacy concerns and use the most appropriate privacy protection communication and preservation approaches.
Abstract
Purpose
Service robots are expected to become increasingly common, but the ways in which they can move around in an environment with humans, collect and store data about humans and share such data produce a potential for privacy violations. In human-to-human contexts, such violations are transgression of norms to which humans typically react negatively. This study examines if similar reactions occur when the transgressor is a robot. The main dependent variable was the overall evaluation of the robot.
Design/methodology/approach
Service robot privacy violations were manipulated in a between-subjects experiment in which a human user interacted with an embodied humanoid robot in an office environment.
Findings
The results show that the robot's violations of human privacy attenuated the overall evaluation of the robot and that this effect was sequentially mediated by perceived robot morality and perceived robot humanness. Given that a similar reaction pattern would be expected when humans violate other humans' privacy, the present study offers evidence in support of the notion that humanlike non-humans can elicit responses similar to those elicited by real humans.
Practical implications
The results imply that designers of service robots and managers in firms using such robots for providing service to employees should be concerned with restricting the potential for robots' privacy violation activities if the goal is to increase the acceptance of service robots in the habitat of humans.
Originality/value
To date, few empirical studies have examined reactions to service robots that violate privacy norms.
Denise L. Anthony and Timothy Stablein
Abstract
Purpose
The purpose of this paper is to explore different health care professionals’ discourse about privacy – its definition and importance in health care, and its role in their day-to-day work. Professionals’ discourse about privacy reveals how new technologies and laws challenge existing practices of information control within and between professional groups in health care, with implications not only for patient privacy, but also for the role of information control in professions more generally.
Design/methodology/approach
The authors conducted in-depth, semi-structured interviews with n=83 doctors, nurses and health information professionals in two academic medical centers and one Veterans Administration hospital/clinic in the Northeastern USA. Interview responses were qualitatively coded for themes, and patterns across groups were identified.
Findings
The health care providers the authors studied actively sought to uphold the protection (and control) of patient information through professional ethics and practices, as well as through the use of technologies and compliance with legal regulations. They used discourses of professionalism, as well as of law and technology, sometimes to accept and sometimes to resist changes to practice required by the changing technological and legal context of health care. The authors found differences across professional groups; for some, protection of patient information is part of core professional ethics, while for others it is simply part of their occupational work, aligned with organizational interests.
Research limitations/implications
This qualitative study of physicians, nurses, and health information professionals revealed some differences in views and practices for protecting patient information in the changing technological and legal context of health care that suggest some professional groups (doctors) may be more likely to resist such changes and others (health information professionals) will actively adopt them.
Practical implications
New technologies and regulations are changing how information is used in health care delivery, challenging professional practices for the control of patient information that may change the value or meaning of medical records for different professional groups.
Originality/value
Qualitative findings suggest that professional groups in health care vary in the extent of information control they have, as well in how they view such control. Some groups may be more likely to (be able to) resist changes in the professional control of information that stem from new technologies or regulatory policies. Some professionals recognize that new IT systems and regulations challenge existing social control of information in health care, with the potential to undermine (or possibly bolster) professional self-control for some but not necessarily all occupational groups.
Kristen Thomasen and Suzie Dunn
Abstract
Perpetrators of technology-facilitated gender-based violence are taking advantage of increasingly automated and sophisticated privacy-invasive tools to carry out their abuse. Whether this be monitoring movements through stalkerware, using drones to nonconsensually film or harass, or manipulating and distributing intimate images online such as deepfakes and creepshots, invasions of privacy have become a significant form of gender-based violence. Accordingly, our normative and legal concepts of privacy must evolve to counter the harms arising from this misuse of new technology. Canada's Supreme Court recently addressed technology-facilitated violations of privacy in the context of voyeurism in R v Jarvis (2019) . The discussion of privacy in this decision appears to be a good first step toward a more equitable conceptualization of privacy protection. Building on existing privacy theories, this chapter examines what the reasoning in Jarvis might mean for “reasonable expectations of privacy” in other areas of law, and how this concept might be interpreted in response to gender-based technology-facilitated violence. The authors argue the courts in Canada and elsewhere must take the analysis in Jarvis further to fully realize a notion of privacy that protects the autonomy, dignity, and liberty of all.
Abstract
Advances in Big Data, artificial intelligence and data-driven innovation bring enormous benefits for society overall and for different sectors. By contrast, their misuse can lead to data workflows bypassing the intent of privacy and data protection law, as well as ethical mandates. This misuse may be referred to as the ‘creep factor’ of Big Data, and it needs to be tackled right away, especially considering that we are moving towards the ‘datafication’ of society, where devices to capture, collect, store and process data are becoming ever cheaper and faster, whilst computational power is continuously increasing. If using Big Data in truly anonymisable ways, within an ethically sound and societally focussed framework, can act as an enabler of sustainable development, using Big Data outside such a framework poses a number of threats, potential hurdles and multiple ethical challenges. Some examples are the impact on privacy caused by new surveillance tools and data-gathering techniques, including group privacy, high-tech profiling, automated decision making and discriminatory practices. In our society, everything can be given a score, and critical life-changing opportunities are increasingly determined by such scoring systems, often obtained through secret predictive algorithms applied to data to determine who has value. It is therefore essential to guarantee the fairness and accuracy of such scoring systems and to ensure that the decisions relying upon them are made in a legal and ethical manner, avoiding the risk of stigmatisation capable of affecting individuals’ opportunities. Likewise, it is necessary to prevent so-called ‘social cooling’. This represents the long-term negative side effects of data-driven innovation, in particular of such scoring systems and of the reputation economy.
It manifests, for instance, as self-censorship, risk aversion and a reluctance to exercise free speech, generated by increasingly intrusive Big Data practices that lack an ethical foundation. Another key ethics dimension pertains to human-data interaction in Internet of Things (IoT) environments, which is increasing the volume of data collected, the speed of the process and the variety of data sources. It is urgent to further investigate aspects like the ‘ownership’ of data and other hurdles, especially considering that the regulatory landscape is developing at a much slower pace than IoT and the evolution of Big Data technologies. These are only some examples of the issues and consequences that Big Data raises, which require adequate measures in response to the ‘data trust deficit’, moving not towards the prohibition of the collection of data but rather towards the identification and prohibition of its misuse and of unfair behaviours and treatments once governments and companies have such data. At the same time, the debate should further investigate ‘data altruism’, examining how the increasing amounts of data in our society can be concretely used for the public good and the best modalities for implementing this.
Renata Monteiro Martins, Sofia Batista Ferraz and André Francisco Alcântara Fagundes
Abstract
Purpose
This study aims to propose an innovative model that integrates variables and examines the influence of internet usage expertise, perceived risk and attitude toward information control on privacy concerns (PC) and, consequently, on consumers’ willingness to disclose personal information online. The authors also propose to test the mediating role of trust between PC and willingness to disclose information. Trust is not a predictor of PC but a causal mechanism, considering that the focus is to understand consumers’ attitudes and behavior regarding the virtual environment (not context-specific) (Martin, 2018).
Design/methodology/approach
The authors developed a survey questionnaire based on the constructs that compose the proposed model to collect data from 864 respondents. The survey questionnaire included the following scales: internet usage expertise from Ohanian (1990); perceived risk, attitude toward information control, trust and willingness to disclose personal information online from Malhotra et al. (2004); and PC from Castañeda and Montoro (2007). All items were measured on a seven-point Likert scale (1 = totally disagree; 7 = totally agree). To obtain Westin’s attitudinal categories toward privacy, respondents answered Westin’s three-item privacy index. For data analysis, the authors applied covariance-based structural equation modeling.
Findings
First, the proposed model explains the drivers of consumers’ disposition to provide personal information at a level that surpasses specific contexts (Martin, 2018), bringing the analysis to consumers’ level and considering their general perceptions toward data privacy. Second, the findings provide inputs to propose a better definition of Westin’s attitudinal categories toward privacy, which used to be defined only by individuals’ information privacy perception. Consumers’ perceptions about their abilities in using the internet, the risks, their beliefs toward information control and trust also help to delimitate and distinguish the fundamentalists, the pragmatics and the unconcerned.
Research limitations/implications
Some limitations temper the theoretical and practical implications of this study. The sample sizes of pragmatic and unconcerned respondents were substantially smaller than that of fundamentalists. This might be explained by the use of Westin’s self-report index to classify the groups according to their scores on PC. Most individuals claim to be greatly concerned about their data privacy but still provide information online in exchange for the benefits of personalization, a phenomenon known as the privacy paradox (Zeng et al., 2021). This leads to another limitation of this research: the lack of measures that classify respondents by their actual privacy behavior.
Practical implications
PC emerges as an important predictor of consumer trust and willingness to disclose their data online, and trust also influences this disposition. Managers need to implement actions that effectively reduce consumers’ concerns about privacy and increase their trust in the company – e.g. adopting a clear and transparent policy on how the data collected is stored, treated, protected and used to benefit the consumer. Regarding the perception of risk, if managers convince consumers that the data collected on the internet is protected, they tend to be less concerned about privacy.
Social implications
The results suggest different aspects influencing the willingness to disclose personal information online, including different responses considering consumers’ PCs. Through their policies and legislation, the authors understand that governments must be attentive to this aspect, establishing regulations that protect consumers’ data in the virtual environment. In addition to regulatory policies, education campaigns can be carried out for both consumers and managers to raise the discussion about privacy and the availability of information in the online environment, demonstrating the importance of protecting personal data to benefit the government, consumers and organizations.
Originality/value
Although there is increasing research on consumers’ privacy, studies have not considered their attitudinal classifications – high, moderate and low concern – as moderators of willingness to disclose information online. Researchers have also increased attention to the antecedents of PCs and disclosure of information but overlooked possible mechanisms that explain the relationship between them.
Abstract
Purpose
This article advocates that privacy literacy research and praxis mobilize people toward changing the technological and social conditions that discipline subjects toward advancing institutional, rather than community, goals.
Design/methodology/approach
This article analyzes theory and prior work on datafication, privacy, data literacy, privacy literacy and critical literacy to provide a vision for future privacy literacy research and praxis.
Findings
This article (1) explains why privacy is a valuable rallying point around which people can resist datafication, (2) locates privacy literacy within data literacy, (3) identifies three ways that current research and praxis have conceptualized privacy literacy (i.e. as knowledge, as a process of critical thinking and as a practice of enacting information flows) and offers a shared purpose to animate privacy literacy research and praxis toward social change and (4) explains how critical literacy can help privacy literacy scholars and practitioners orient their research and praxis toward changing the conditions that create privacy concerns.
Originality/value
This article uniquely synthesizes existing scholarship on data literacy, privacy literacy and critical literacy to provide a vision for how privacy literacy research and praxis can go beyond improving individual understanding and toward enacting social change.
Damian Gordon, Ioannis Stavrakakis, J. Paul Gibson, Brendan Tierney, Anna Becevel, Andrea Curley, Michael Collins, William O’Mahony and Dympna O’Sullivan
Abstract
Purpose
Computing ethics represents a long-established, yet rapidly evolving, discipline that grows in complexity and scope on a near-daily basis. Therefore, to help understand some of that scope, it is essential to incorporate a range of perspectives, from a range of stakeholders, on current and emerging ethical challenges associated with computer technology. To achieve this, a three-pronged stakeholder analysis of Computer Science academics, ICT industry professionals and citizen groups was undertaken to explore what they consider to be crucial computing ethics concerns. The overlaps between these stakeholder groups are explored, as well as whether their concerns are reflected in the existing literature.
Design/methodology/approach
Data collection was performed using focus groups, and the data was analysed using thematic analysis. The data was also analysed to determine whether there were overlaps between the literature and the stakeholders’ concerns and attitudes towards computing ethics.
Findings
The results of the focus group analysis show a mixture of overlapping concerns between the different groups, as well as some concerns that are unique to each of the specific groups. All groups stressed the importance of data as a key topic in computing ethics. This includes concerns around the accuracy, completeness and representativeness of data sets used to develop computing applications. Academics were concerned with the best ways to teach computing ethics to university students. Industry professionals believed that a lack of diversity in software teams resulted in important questions not being asked during design and development. Citizens discussed at length the negative and unexpected impacts of social media applications. These are all topics that have gained broad coverage in the literature.
Social implications
In recent years, the impact of ICT on society and the environment at large has grown tremendously. From this fast-paced growth, a myriad of ethical concerns have arisen. The analysis aims to shed light on what a diverse group of stakeholders consider the most important social impacts of technology and whether these concerns are reflected in the literature on computing ethics. The outcomes of this analysis will form the basis for new teaching content that will be developed in future to help illuminate and address these concerns.
Originality/value
The multi-stakeholder analysis provides individual and differing perspectives on the issues related to the rapidly evolving discipline of computing ethics.