Search results

1 – 10 of over 1000
Article
Publication date: 10 June 2022

Priya C. Kumar and Virginia L. Byrne

Abstract

Purpose

Existing privacy-related educational materials are not situated in privacy theory, making it hard to understand what specifically children learn about privacy. This article aims to offer learning objectives and guidance grounded in theories of privacy and learning to serve as a foundation for privacy literacy efforts.

Design/methodology/approach

This article reviews theories of privacy and literacy as social practices and uses these insights to contribute a set of learning objectives for privacy education called the 5Ds of privacy literacy.

Findings

This article connects the 5Ds of privacy literacy with existing curricular standards and offers guidance for using the 5Ds to create educational efforts for preteens grounded in theories of sociocultural learning.

Practical implications

Learning scientists, instructional designers and privacy educators can use the 5Ds of privacy literacy to develop educational programs that help children hone their ability to enact appropriate information flows.

Social implications

Current approaches to privacy education treat privacy as something people need to protect from the incursions of technology, but the authors believe the 5Ds of privacy literacy can redefine privacy – for children and adults alike – as something people experience with the help of technology.

Originality/value

This study uniquely integrates theories of privacy and learning into an educational framework to guide privacy literacy pedagogy.

Details

Information and Learning Sciences, vol. 123 no. 7/8
Type: Research Article
ISSN: 2398-5348

Article
Publication date: 13 July 2021

Andrew MacFarlane, Sondess Missaoui, Stephann Makri and Marisela Gutierrez Lopez

Abstract

Purpose

Belkin and Robertson (1976a) reflected on the ethical implications of theoretical research in information science and warned that there was potential for abuse of knowledge gained by undertaking such research and applying it to information systems. In particular, they identified advertising and political propaganda as domains that posed particular problems. The purpose of this literature review is to revisit these ideas in the light of recent events in global information systems that demonstrate that their fears were justified.

Design/methodology/approach

The authors revisit the theory in information science that Belkin and Robertson used to build their argument, together with the discussion on ethics that resulted from this work in the late 1970s and early 1980s. The authors then review recent literature in the field of information systems, specifically information retrieval, social media and recommendation systems that highlight the problems identified by Belkin and Robertson.

Findings

Information science theories have been used, in conjunction with empirical evidence gathered from user interactions, in ways that have been detrimental to both individuals and society. The paper argues that the information science and systems communities should find ways to return control to the user wherever possible, and it considers how this can be achieved.

Research limitations/implications

The ethical issues identified require a multidisciplinary approach, drawing on research in information science, computer science, information systems, business, sociology, psychology, journalism, government and politics, among other fields. This scope is too large for a single literature review, so the authors focus only on the design and implementation of information systems (Zimmer, 2008a) from an information science and information systems perspective.

Practical implications

The authors argue that information systems such as search technologies, social media applications and recommendation systems should be designed with the recipient of the information in mind (Paisley and Parker, 1965), not the sender of that information.

Social implications

Information systems designed ethically and with users in mind will go some way towards addressing the harms to individuals and society that are evident in global information systems.

Originality/value

The authors synthesize the evidence from the literature to provide potential technological solutions to the ethical issues identified, with a set of recommendations to information systems designers and implementers.

Details

Journal of Documentation, vol. 78 no. 2
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 6 May 2014

Tobias Matzner

Abstract

Purpose

Ubiquitous computing and "big data" have been widely recognized as requiring new concepts of privacy and new mechanisms to protect it. While improved concepts of privacy have been suggested, the paper argues that people acting in full conformity with those privacy norms can still infringe the privacy of others in the context of ubiquitous computing and "big data".

Design/methodology/approach

New threats to privacy are described. Helen Nissenbaum's concept of "privacy as contextual integrity" is reviewed with regard to its capability to grasp these problems. The argument is based on the assumption that the technologies work and that persons are fully informed and capable of deciding according to advanced privacy considerations.

Findings

Big data and ubiquitous computing enable privacy threats for persons whose data are only indirectly involved and even for persons about whom no data have been collected and processed. Those new problems are intrinsic to the functionality of these new technologies and need to be addressed on a social and political level. Furthermore, a concept of data minimization in terms of the quality of the data is proposed.

Originality/value

The use of personal data as a threat to the privacy of others is established. This new perspective is used to reassess and recontextualize Helen Nissenbaum's concept of privacy. Data minimization in terms of quality of data is proposed as a new concept.

Details

Journal of Information, Communication and Ethics in Society, vol. 12 no. 2
Type: Research Article
ISSN: 1477-996X

Article
Publication date: 27 September 2011

Jo Pierson and Rob Heyman

Abstract

Purpose

The advent of Web 2.0, or so-called social media, has enabled a new kind of communication, called mass self-communication. These tools and this new form of communication are believed to empower users in everyday life. The authors of this paper observe a paradox: if this positive potential is possible, so is a negative downside. This downside is often denied, and the denial is especially visible in social media at the level of privacy and dataveillance. The purpose of this paper is to illustrate this point through an analysis of cookies.

Design/methodology/approach

The paper illustrates how mass self-communication in social media enables a new form of vulnerability for privacy. This is best shown by redefining privacy as flows of personally identifiable information (PII) that are regulated by the informational norms of Nissenbaum's concept of contextual integrity. Instead of analysing these contexts on a general level, the paper operationalises them at the user level to illustrate the lack of user awareness regarding cookies. The results of the research were gathered through desk research and expert interviews.

Findings

The positive aspects of cookies, unobtrusiveness and ease of use, are also the main challenges for user privacy. This technology can be disempowering because users are often hardly aware of its existence. In that way cookies can obfuscate the perceived context of personal data exposure.
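
To make this unobtrusiveness concrete, the following is a minimal illustrative sketch, not taken from the article, of how a persistent tracking cookie can be attached to any HTTP response using Python's standard-library http.cookies module. The cookie name tracking_id and its attribute values are hypothetical examples, not anything the paper describes.

```python
# Illustrative sketch only: a site or embedded tracker can attach a persistent
# identifier to a response with a single Set-Cookie header, with no visible
# user interaction. The cookie name and attribute values are hypothetical.
from http.cookies import SimpleCookie
import uuid

cookie = SimpleCookie()
cookie["tracking_id"] = uuid.uuid4().hex               # opaque per-browser identifier
cookie["tracking_id"]["max-age"] = 60 * 60 * 24 * 365  # persists for roughly a year
cookie["tracking_id"]["path"] = "/"                    # replayed with every request to the site
cookie["tracking_id"]["httponly"] = True               # not even visible to page scripts

# The header the browser silently stores and sends back on subsequent requests:
print(cookie["tracking_id"].output())
# e.g. Set-Cookie: tracking_id=<hex>; HttpOnly; Max-Age=31536000; Path=/
```

Because the exchange happens entirely in headers, the context of personal data exposure that the authors describe never becomes visible to the user.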

Originality/value

The research shows how user disempowerment in social media is often overlooked because the beneficial potential of these media is overstressed.

Details

info, vol. 13 no. 6
Type: Research Article
ISSN: 1463-6697

Article
Publication date: 14 February 2019

David Lewis Coss and Gurpreet Dhillon

Abstract

Purpose

To effectively develop privacy policies and practices for cloud computing, organizations need to define a set of guiding privacy objectives that can be applied across their organization. It is argued that understanding individuals’ privacy values with respect to cloud computing is important for defining cloud privacy objectives.

Design/methodology/approach

For the purpose of this study, the authors adopted Keeney’s (1994) value-focused thinking approach to identify privacy objectives with respect to cloud computing.

Findings

The results of this study identified the following six fundamental cloud privacy objectives: to increase trust with the cloud provider, to maximize identity management controls, to maximize responsibility for information stewardship, to maximize individuals’ understanding of cloud service functionality, to maximize protection of rights to privacy, and to maintain the integrity of data.

Research limitations/implications

One limitation is the generalizability of the cloud privacy objectives; the second is research bias. As this study focused on cloud privacy, the authors felt that the research participants’ increased knowledge of technology usage, including cloud technology, was a benefit that outweighed the risks associated with not having a random selection of the general population. The newness and unique qualities of privacy issues in cloud computing are better suited to a qualitative study, where issues can emerge naturally through a holistic approach, as opposed to force-fitting an existing set of variables or constructs onto the context of privacy and cloud computing.

Practical implications

The findings of this research study can be used to assist management in formulating a cloud privacy policy and developing cloud privacy evaluation criteria, as well as to assist auditors in developing their privacy audit work plans.

Originality/value

Currently, there is little to no guidance in the literature or in practice as to what organizations need to do to ensure they protect their stakeholders’ privacy in a cloud computing environment. This study works toward closing this knowledge gap by identifying cloud privacy objectives.

Details

Information & Computer Security, vol. 27 no. 2
Type: Research Article
ISSN: 2056-4961

Article
Publication date: 9 May 2016

Amaya Noain-Sánchez

Abstract

Purpose

The purpose of this paper is to lay out an approach to addressing the problem of privacy protection in the global digital environment, based on the importance of information for improving users’ informational self-determination. Following this reasoning, this paper focuses on a suitable way to provide users with the amount of information they need to maintain a desirable degree of autonomy over their privacy protection and to decide whether or not to put their personal data on the internet.

Design/methodology/approach

The authors arrive at this point in their analysis through a qualitative discourse analysis of the most relevant scientific papers and dossiers relating to privacy protection.

Findings

The goal of this paper is twofold: first, to illustrate the importance of privacy by default and informed consent working together to protect information and communication technology (ICT) users’ privacy; and second, to develop a suitable way to administer such informed consent to users.

Originality/value

To fulfil this purpose, the authors present a new concept of informed consent: an active “informed consent”, or “opt-in” model by layers. “Opt-in” regimes have already been used with cookies but never with Web 2.0 applications such as social network sites (SNS).

Details

Journal of Information, Communication and Ethics in Society, vol. 14 no. 2
Type: Research Article
ISSN: 1477-996X

Article
Publication date: 8 January 2020

Yong Jin Park, Yoonmo Sang, Hoon Lee and S. Mo Jones-Jang

Abstract

Purpose

The digitization of life has brought complexities associated with addressing digital life after one’s death. This paper aims to investigate two related issues: the privacy and property of postlife digital assets.

Design/methodology/approach

The understanding of digital assets has not been fully unpacked, largely owing to the current policy complexities of accessing and obtaining digital assets at death. This paper calls critical attention to the importance of respecting user rights in digital environments that currently favor service providers’ interests.

Findings

It is argued that there are ethical blind spots in protecting users’ rights, given that there is no ontological difference between a person’s digital being and their physical existence. These blind spots derive from the restrictive and ambiguous corporate terms and conditions drafted by digital service providers.

Originality/value

Fundamentally, the transition to the big data era, in which the collection, use and dissemination of digital activities became an integral part of the ontology, poses new challenges to privacy and property rights after death.

Details

Digital Policy, Regulation and Governance, vol. 22 no. 1
Type: Research Article
ISSN: 2398-5038

Article
Publication date: 30 December 2021

Michael Zimmer and Sarah Logan

Abstract

Purpose

Existing algorithms for predicting suicide risk rely solely on data from electronic health records, but such models could be improved through the incorporation of publicly available socioeconomic data – such as financial, legal, life event and sociodemographic data. The purpose of this study is to understand the complex ethical and privacy implications of incorporating sociodemographic data within the health context. This paper presents results from a survey exploring the general public’s knowledge of and concerns about such publicly available data and the appropriateness of using it in suicide risk prediction algorithms.

Design/methodology/approach

A survey was developed to measure public opinion about privacy concerns around the use of socioeconomic data across different contexts. The survey presented respondents with multiple vignettes describing scenarios situated in medical, private business and social media contexts, and asked participants to rate their level of concern with each context and to indicate which factor contributed most to that concern. Specific to suicide prediction, respondents were presented with various data attributes that could potentially be used in a suicide risk algorithm and were asked to rate how concerned they would be if each attribute were used for this purpose.

Findings

The authors found considerable concern across the various contexts represented in the vignettes, with the greatest concern in vignettes that focused on the use of personal information within the medical context. Specific to the question of incorporating socioeconomic data within suicide risk prediction models, the results show clear concern among all participants about data attributes related to income, crime and court records, and assets. Data about one’s household were also of particular concern to respondents, suggesting that even if people might be comfortable with their own data being used for risk modeling, data about other household members is more problematic.

Originality/value

Previous studies of the privacy concerns that arise when integrating data pertaining to various contexts of people’s lives into algorithmic and related computational models have approached these questions from individual contexts. This study differs in that it captured the variation in privacy concerns across multiple contexts. It also specifically assessed the ethical concerns related to a suicide prediction model, determined people’s awareness of the publicness of select data attributes and identified which of these attributes generated the most concern in such a context. To the best of the authors’ knowledge, this is the first study to pursue this question.

Details

Journal of Information, Communication and Ethics in Society, vol. 20 no. 2
Type: Research Article
ISSN: 1477-996X

Article
Publication date: 22 November 2011

Christian Fuchs

Abstract

Purpose

There are many discussions about privacy in relation to contemporary communication systems (such as Facebook and other “social media” platforms), but discussions about privacy on the internet in most cases lack a profound understanding of the notion of privacy and of where this notion comes from. The purpose of this paper is to challenge the liberal notion of privacy and explore the foundations of an alternative privacy conception.

Design/methodology/approach

A typology of privacy definitions is elaborated based on Giddens' theory of structuration. The concept of privacy fetishism that is based on critical political economy is introduced. Limits of the liberal concept of privacy are discussed. This discussion is connected to the theories of Marx, Arendt and Habermas. Some foundations of an alternative privacy concept are outlined.

Findings

The notion of privacy fetishism is introduced for criticizing naturalistic accounts of privacy. Marx and Engels have advanced four elements of the critique of the liberal privacy concept that were partly taken up by Arendt and Habermas: privacy as atomism; the advancement of possessive individualism that harms the public good; the legitimization and reproduction of the capitalist class structure; and capitalist patriarchy.

Research limitations/implications

Given the criticisms advanced in this paper, the need for an alternative, socialist privacy concept is ascertained and it is argued that privacy rights should be differentiated according to the position individuals occupy in the power structure, so that surveillance makes wealth and income gaps and companies’ profits transparent, while privacy protects workers and consumers from capitalist domination.

Originality/value

The paper contributes to the establishment of a concept of privacy that is grounded in critical political economy. Owing to the liberal bias of the privacy concept, the theorization of privacy has thus far been largely ignored in critical political economy. The paper contributes to illuminating this blind spot.

Details

Journal of Information, Communication and Ethics in Society, vol. 9 no. 4
Type: Research Article
ISSN: 1477-996X

Article
Publication date: 3 June 2014

Rob Heyman, Ralf De Wolf and Jo Pierson

Abstract

Purpose

The purpose of this paper is to define two types of privacy, which are distinct but often reduced to each other. It also investigates which form of privacy is most prominent in the privacy settings of online social networks (OSNs). Privacy between users is different from privacy between a user and a third party. OSNs, and to a lesser extent researchers, often reduce the former to the latter, which misleads users and the public debate about privacy.

Design/methodology/approach

The authors define two types of privacy that account for the difference between interpersonal and third-party disclosure. The first definition draws on symbolic interactionist accounts of privacy, wherein users are performing dramaturgically for an intended audience. Third-party privacy is based on the data that represent the user in data mining and knowledge discovery processes, which ultimately manipulate users into audience commodities. This typology was applied to the privacy settings of Facebook, LinkedIn and Twitter. The results are presented as a flowchart.

Findings

The research indicates that users are granted more options in controlling their interpersonal information flow towards other users than third parties or service providers.

Research limitations/implications

This distinction needs to be furthered empirically by comparing users’ privacy expectations in both situations. On more theoretical grounds, this typology could also be linked to Habermas’ concepts of system and lifeworld.

Originality/value

A typology has been provided to compare the relative autonomy users receive over settings that drive revenue and settings that are independent of revenue.

Details

info, vol. 16 no. 4
Type: Research Article
ISSN: 1463-6697
