Search results

1 – 10 of 90
Article
Publication date: 1 March 2011

Yohko Orito

Abstract

Purpose

The purpose of this paper is to examine the social impacts of the “silent control” of individuals by means of the architecture of dataveillance systems. It addresses the question of whether individuals can, in reality, autonomously determine the kinds of information they acquire and convey in today's dataveillance environments. The paper argues that there is a risk of a “counter‐control revolution” that threatens to reverse the “control revolution” described by Shapiro.

Design/methodology/approach

Using relevant business cases, this paper describes the nature of dataveillance systems, then it examines situations in which the intellectual freedom of individuals is silently constrained by the architecture of such systems. This analysis leads to the conclusion that individuals in today's information society face the risk of a “counter‐control revolution” that can threaten their intellectual freedom. Given this troubling conclusion, the present paper addresses the challenges of establishing socially acceptable dataveillance systems.

Findings

Intentionally or unintentionally, the architecture of dataveillance systems determines what kinds of information an individual can access or receive. This means that social sorting occurs based upon the processing of personal information by dataveillance systems and, as a result, individuals' intellectual freedom can be constrained without their realising that it is happening. Under these circumstances, the ability of individuals to control the transmission and flow of information, recently made possible by the “control revolution”, has already been compromised by the business organisations that operate dataveillance systems. It is these organisations, and not the individuals themselves, that control the kinds of information individuals are able to acquire and transmit.

Originality/value

This paper provides an analysis of social risks caused by the architecture of dataveillance systems, and it introduces the concept of a “counter‐control revolution”. These contributions provide a good starting point to evaluate the social impacts of dataveillance systems and to establish better, more socially acceptable dataveillance systems.

Details

Journal of Information, Communication and Ethics in Society, vol. 9 no. 1
Type: Research Article
ISSN: 1477-996X

Article
Publication date: 8 May 2019

Claire Seungeun Lee

Downloads
3292

Abstract

Purpose

The purpose of this paper is twofold: first, to explore how China uses a social credit system as part of its “data-driven authoritarianism” policy; and second, to investigate how datafication, a method of legitimizing data collection, and dataveillance, continuous surveillance through the use of data, offer the Chinese state a legitimate method of monitoring, surveilling and controlling citizens, businesses and society. Taken together, these strands allow China’s social credit system to be analyzed as an integrated tool for datafication, dataveillance and data-driven authoritarianism.

Design/methodology/approach

This study combines the personal narratives of 22 Chinese citizens with policy analyses, online discussions and media reports. The stories were collected using a scenario-based story completion method to understand the participants’ perceptions of the recently introduced social credit system in China.

Findings

China’s new social credit system, which turns both online and offline behaviors into a credit score through smartphone apps, creates a “new normal” way of life for Chinese citizens. This data-driven authoritarianism uses data and technology to enhance citizen surveillance. Understanding the system as one that provides social goods through technology, while raising concerns about privacy, security and collectivity, brings into view the interactions between individuals, technologies and information. An integrated critical perspective that incorporates the concepts of datafication and dataveillance enhances a general understanding of how data-driven authoritarianism develops through the social credit system.
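The central mechanism the abstract describes, turning logged behaviors into a single credit score, can be made concrete with a toy sketch. Nothing below reflects the actual system's rules, which are not public; the behaviors, weights and base score are invented purely for illustration.

```python
# A toy illustration of the datafication step: behaviors logged by an
# app are folded into one numeric score. All values here are invented;
# the real system's scoring rules are not public.
BASE_SCORE = 1000

# Hypothetical weights attached to logged online/offline behaviors.
WEIGHTS = {
    "paid_bill_on_time": 5,
    "volunteer_work": 10,
    "traffic_violation": -50,
}

def credit_score(event_log: list[str]) -> int:
    """Reduce a stream of logged behaviors to a single score."""
    return BASE_SCORE + sum(WEIGHTS.get(event, 0) for event in event_log)

# One negative event outweighs several positive ones: 1000 + 5 + 10 - 50.
print(credit_score(["paid_bill_on_time", "volunteer_work", "traffic_violation"]))  # 965
```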

Originality/value

This study builds upon an ongoing debate and an emerging body of literature on datafication, dataveillance and digital sociology while filling empirical gaps in the study of the Global South. The Chinese social credit system has growing recognition and importance as both a governing tool and a part of everyday datafication and dataveillance processes. It therefore necessitates discussion of its consequences for, and applications by, the Chinese state and businesses, as well as of affected individuals’ efforts to adapt to the system.

Article
Publication date: 19 February 2021

Claire Seungeun Lee

Downloads
387

Abstract

Purpose

The first case of coronavirus disease 2019 (COVID-19) was documented in China, and the virus soon spread to the neighboring country of South Korea. South Korea, one of the earliest countries to initiate a national pandemic response to COVID-19 with fairly substantial measures at the individual, societal and governmental levels, is an interesting example of a rapid response in the Global South. The current study examines contact tracing mobile applications (hereafter, contact tracing apps) for those who were subject to self-quarantine through the lenses of dataveillance and datafication. This paper analyzes online/digital data from those who were mandatorily self-quarantined by the Korean government, largely because they had returned from overseas travel.

Design/methodology/approach

This study uses an Internet ethnography approach to collect and analyze data. Blog entries written by self-quarantined Korean individuals were collected and verified through a combination of crawling and manual checking, and a content analysis was performed using the codes and themes that emerged. In the COVID-19 pandemic era, this method is particularly useful for gaining access to those affected by the situation. This approach advances the author’s understanding of COVID-19 contact tracing apps and the experiences of the self-quarantined people who use them.
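As a rough illustration of the crawl-then-code workflow described above, the sketch below fetches one blog entry and runs a first-pass keyword coding. The URL and the codebook are hypothetical placeholders, not the study's actual sources or coding scheme, and in the study every automatic step was verified manually.

```python
# A minimal sketch of crawling blog entries and doing a first-pass
# keyword coding. URLs and codebook are hypothetical placeholders.
from collections import Counter

import requests
from bs4 import BeautifulSoup

SEED_URLS = [
    "https://example.com/quarantine-diary-day1",  # hypothetical blog entry
]

# Hypothetical codebook: code name -> indicator keywords.
CODEBOOK = {
    "dataveillance": ["tracking", "gps", "monitored"],
    "datafication": ["app", "report", "score"],
}

def fetch_entry(url: str) -> str:
    """Crawl one blog entry and return its visible text."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return BeautifulSoup(resp.text, "html.parser").get_text(" ", strip=True)

def code_entry(text: str) -> Counter:
    """Automatic first pass: count keyword hits per code.
    A human coder would then verify each hit manually."""
    lowered = text.lower()
    return Counter(
        code
        for code, keywords in CODEBOOK.items()
        for kw in keywords
        if kw in lowered
    )

if __name__ == "__main__":
    for url in SEED_URLS:
        print(url, dict(code_entry(fetch_entry(url))))
```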

Findings

The paper shows Korean citizens' understandings and views of the COVID-19 self-tracing application in South Korea by examining their experiences. The research argues that the application functions as a datafication tool that collects self-quarantined people's information and performs dataveillance on them. The research further offers insights into the agreements and disagreements among the different actors involved in COVID-19 contact tracing (i.e. the self-quarantined, their families, and contact tracers/government officials).

Originality/value

The study provides insights into the implications of information and technology for the datafication of, and dataveillance conducted on, the public. It engages an ongoing debate over the privacy implications of COVID-19 contact tracing methods and builds upon an emerging body of literature on datafication, dataveillance, social control and digital sociology.

Peer review

The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-08-2020-0377

Details

Online Information Review, vol. 45 no. 4
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 1 June 1994

Roger Clarke

Downloads
1470

Abstract

Computer matching is a mass surveillance technique involving the comparison of data about many people, which have been acquired from multiple sources. Its use offers potential benefits, particularly financial savings. It is also error‐prone, and its power results in threats to established patterns and values. The imperatives of efficiency and equity demand that computer matching be used, and the information privacy interest demands that it be used only where justified, and be subjected to effective controls. Provides background to this important technique, including its development and application in the USA and in Australia, and a detailed technical description. Contends that the technique, its use, and controls over its use are very important issues which demand research. Computing, telecommunications and robotics artefacts which have the capacity to change society radically need to be subjected to early and careful analysis, not only by sociologists, lawyers and philosophers, but also by information technologists themselves.
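A minimal sketch may help fix the technique Clarke describes: records about the same people, held by two different agencies, are joined on a shared identifier to generate leads for investigation. The registers and the exact-match rule below are invented for illustration; real matching programs use fuzzier rules (names, birthdates), which is where much of the error-proneness Clarke notes arises.

```python
# A toy sketch of computer matching: records from two separate sources
# are compared on a shared identifier. Both registers are invented.

welfare_register = [
    {"id": "A123", "name": "J. Smith", "receiving_benefit": True},
    {"id": "B456", "name": "M. Jones", "receiving_benefit": True},
]

payroll_register = [
    {"id": "A123", "name": "J Smith", "employer": "Acme Pty Ltd"},
]

def match(source_a, source_b, key="id"):
    """Return record pairs present in both sources, joined on `key`.
    Real systems use fuzzier rules, which is precisely where the
    error-proneness Clarke discusses comes in."""
    index = {rec[key]: rec for rec in source_b}
    return [(rec, index[rec[key]]) for rec in source_a if rec[key] in index]

for welfare_rec, payroll_rec in match(welfare_register, payroll_register):
    # A hit is only a lead, not proof of wrongdoing; it still requires
    # human verification before any action is taken.
    print("flag for review:", welfare_rec["id"], payroll_rec["employer"])
```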

Details

Information Technology & People, vol. 7 no. 2
Type: Research Article
ISSN: 0959-3845

Article
Publication date: 27 September 2011

Jo Pierson and Rob Heyman

Downloads
10166

Abstract

Purpose

The advent of Web 2.0, or so‐called social media, has enabled a new kind of communication, called mass self‐communication. These tools and this new form of communication are believed to empower users in everyday life. The authors of this paper observe a paradox: if this positive potential exists, so does its negative downside. That downside is often denied, and it is especially visible in social media at the level of privacy and dataveillance. The purpose of this paper is to illustrate this point through an analysis of cookies.

Design/methodology/approach

The paper illustrates how mass self‐communication in social media enables a new form of vulnerability for privacy. This is best shown by redefining privacy as flows of personally identifiable information (PII) that are regulated by the informational norms of Nissenbaum's concept of contextual integrity. Instead of analysing these contexts at a general level, the paper operationalises them at the user level to illustrate the lack of user awareness regarding cookies. The results were gathered through desk research and expert interviews.

Findings

The positive aspects of cookies, unobtrusiveness and ease of use, are also the main challenges for user privacy. The technology can be disempowering because users are often barely aware of its existence. In this way, cookies can obscure the perceived context of personal data exposure.
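The unobtrusiveness the authors point to is visible in a few lines: once a server sets a cookie, the client silently returns it on every later request, with no user action or notice. The sketch below uses httpbin.org purely as a convenient public echo service; it is not a site from the study.

```python
# A minimal sketch of why cookies are unobtrusive: the identifier set in
# the first response is re-sent automatically on every later request.
import requests

session = requests.Session()  # plays the role of the user's browser

# The server sets a cookie; the "user" sees nothing.
session.get("https://httpbin.org/cookies/set/tracker_id/abc123")

# Every subsequent request silently carries the identifier back, which
# is what lets a tracker link separate page visits into one profile.
resp = session.get("https://httpbin.org/cookies")
print(resp.json())  # expected: {'cookies': {'tracker_id': 'abc123'}}
```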

Originality/value

The research shows how user disempowerment in social media is often overlooked because the beneficial potential of these media is overstressed.

Details

info, vol. 13 no. 6
Type: Research Article
ISSN: 1463-6697

Book part
Publication date: 16 October 2018

Antonio Francesco Maturo and Veronica Moretti

Abstract

The chapter critically analyzes the concepts and practices of surveillance in modern and postmodern societies, along with their consequences. We show the changes in the systems used to monitor individuals and emphasize the transition toward soft surveillance systems, probably stimulated by digital technologies. This switch from top-down control to “lateral” monitoring encompasses surveillance practices with suggestive names like interveillance, synopticon, and dataveillance. The dark side of digital health has a bright start. According to Topol's (2016) vision of the future, we will soon be the “consumers”, the real protagonists, of the management of our health – thanks largely to the practically endless data about our bodies, behaviors, and lifestyles that we will be able to collect and analyze. We will share our health information in real time with the doctors whom we choose based on their scores in clinical rankings (here, too, quantification rears its head). Yet this simplified version of health makes it seem that there are always solutions, which the algorithm can supply as long as it has enough information. Moreover, in the United States, some health-insurance companies have started to offer a discount on premiums to members who agree to collect and share self-tracking data with them. Clearly, the discount is given only to workers who have healthy habits. At first sight, this can seem a win-win trade-off; however, what today is presented as an individual option can easily become a requirement tomorrow.

Details

Digital Health and the Gamification of Life: How Apps Can Promote a Positive Medicalization
Type: Book
ISBN: 978-1-78754-366-9

Article
Publication date: 15 October 2020

Ash Watson and Deborah Lupton

Abstract

Purpose

The purpose of this paper is to report on the findings from the Digital Privacy Story Completion Project, which investigated Australian participants' understandings of and responses to digital privacy scenarios using a novel method and theoretical approach.

Design/methodology/approach

The story completion method was brought together with De Certeau's concept of tactics and more-than-human theoretical perspectives. Participants were presented with four story stems on an online platform. Each story stem introduced a fictional character confronted with a digital privacy dilemma. Participants were asked to complete the stories by typing in open text boxes, responding to the prompts “How does the character feel? What does she/he do? What happens next?”. A total of 29 participants completed the stories, resulting in a corpus of 116 narratives for a theory-driven thematic analysis.

Findings

The stories vividly demonstrate the ways in which tactics are entangled with relational connections and affective intensities. They highlight the micropolitical dimensions of human–nonhuman affordances when people respond to third-party use of their personal information. The stories also identify the tactics people use and the boundaries they draw in making sense of what they define as appropriate and inappropriate uses of their data.

Originality/value

This paper demonstrates the value and insights of creatively attending to personal data privacy issues in ways that decentre the autonomous tactical and agential individual and instead consider the more-than-human relationality of privacy.

Peer review

The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-05-2020-0174

Article
Publication date: 10 August 2010

Anna Vartapetiance Salmasi and Lee Gillam

Abstract

Purpose

The purpose of this paper is to discuss the UK National DNA Database (NDNAD) and some of the controversies surrounding it with reference to legal and ethical issues, focusing particularly on privacy and human rights. Governance of this database involves specific exemptions from the Data Protection Act (DPA), and this gives rise to concerns regarding both the extent of surveillance of the UK population and the possibility of harm to all citizens. This is of wider importance since every current citizen, and everybody who visits the UK, could become a record in the DNA database. Principally, the paper seeks to explore whether these exemptions would also imply exemptions for software developers from the codes of practice and ethics of their professional societies as they relate to constructing or maintaining such data and the database.

Design/methodology/approach

The paper compares the principles of the DPA, which all other organizations handling personal data would need to follow, the responsibility-based codes of ethics of professional societies, and the current reality as reported in relation to the NDNAD and the exemptions offered through the DPA.

Findings

Primarily, if the NDNAD were not exempted from certain provisions of the DPA, the kinds of data leakages and other mishandlings reported could largely be avoided without the need for further considerations of so‐called “data minimization”. It can be seen how the lack of afforded protection allows for a wide range of issues relating, at least, to privacy.

Originality/value

The paper provides the first evaluation of the combination of law, codes of ethics and real-world activities as they relate to the NDNAD, with concomitant considerations for privacy, liberty and human rights. Originality is demonstrated through consideration of the implications of certain exemptions in the DPA relating to crime and taxation, and to national security, and in relating the expected protections for personal data to widely reported evidence that such protections may be variously lacking. In addition, the paper provides a broad overview of controversies over certain newer kinds of DNA analysis, and other relatively recent findings, that are generally absent from the vast majority of debates over this kind of analysis.

Details

Journal of Information, Communication and Ethics in Society, vol. 8 no. 3
Type: Research Article
ISSN: 1477-996X

Book part
Publication date: 24 September 2018

Chelsea Palmer and Rochelle Fairfield

Abstract

In June 2017, The Human Data Commons Foundation released its first annual Quantified Self Report Card. This project consisted of a qualitative review of the privacy policy documentation of 55 private sector companies in the self-tracking and biometric data industry. Two researchers recorded their ratings on concrete criteria for each company's website, as well as providing a blend of objective and subjective ratings on the overall ease of readability and navigability of each site's documentation. This chapter explains the unique context of user privacy rights within the Quantified Self tracking industry and summarises the overall results of the 2017 Quantified Self Report Card. The tension between user privacy and data sharing in commercial data-collection practices is explored, and the authors provide insight into possibilities for resolving these tensions. The self-as-instrument in research is touched on in an autoethnographic narrative confronting and interrogating the difficult process of immersive qualitative analytics in relation to such intensely complex and personal issues as privacy and ubiquitous dataveillance. Drawing upon excerpted reflections from the Report Card's co-author, a few concluding thoughts are shared on freedom and choice. Finally, goals for next year's Quantified Self Report Card are revealed, and a call is extended for public participation.

Content available
Book part
Publication date: 9 December 2021

Marina Da Bormida

Abstract

Advances in Big Data, artificial intelligence and data-driven innovation bring enormous benefits for society overall and for different sectors. By contrast, their misuse can lead to data workflows that bypass the intent of privacy and data protection law, as well as of ethical mandates. This may be referred to as the ‘creep factor’ of Big Data, and it needs to be tackled right away, especially considering that we are moving towards the ‘datafication’ of society, where devices to capture, collect, store and process data are becoming ever cheaper and faster, whilst computational power is continuously increasing. While using Big Data in truly anonymisable ways, within an ethically sound and societally focussed framework, can act as an enabler of sustainable development, using Big Data outside such a framework poses a number of threats, potential hurdles and multiple ethical challenges. Some examples are the impact on privacy caused by new surveillance tools and data-gathering techniques, including group privacy, high-tech profiling, automated decision making and discriminatory practices. In our society, everything can be given a score, and critical, life-changing opportunities are increasingly determined by such scoring systems, often obtained through secret predictive algorithms applied to data to determine who has value. It is therefore essential to guarantee the fairness and accuracy of such scoring systems, and that the decisions relying upon them are made in a legal and ethical manner, avoiding the risk of stigmatisation capable of affecting individuals’ opportunities. Likewise, it is necessary to prevent so-called ‘social cooling’: the long-term negative side effects of data-driven innovation, in particular of such scoring systems and of the reputation economy. It is reflected, for instance, in self-censorship, risk-aversion and a reluctance to exercise free speech, generated by increasingly intrusive Big Data practices lacking an ethical foundation. Another key ethical dimension pertains to human-data interaction in Internet of Things (IoT) environments, which are increasing the volume of data collected, the speed of the process and the variety of data sources. It is urgent to further investigate aspects like the ‘ownership’ of data and other hurdles, especially considering that the regulatory landscape is developing at a much slower pace than IoT and Big Data technologies. These are only some of the issues and consequences that Big Data raises, which require adequate measures in response to the ‘data trust deficit’: moving not towards the prohibition of data collection but rather towards the identification and prohibition of misuse, unfair behaviours and treatments, once governments and companies hold such data. At the same time, the debate should further investigate ‘data altruism’, deepening how the increasing amounts of data in our society can be concretely used for the public good, and the best modalities for implementing this.

Details

Ethical Issues in Covert, Security and Surveillance Research
Type: Book
ISBN: 978-1-80262-414-4
