Search results

1 – 10 of over 1000
Open Access
Book part
Publication date: 4 June 2021

Briony Anderson and Mark A. Wood


Abstract

This chapter examines the phenomenon of doxxing: the practice of publishing private, proprietary, or personally identifying information on the internet, usually with malicious intent. Undertaking a scoping review of research into doxxing, we develop a typology of this form of technology-facilitated violence (TFV) that expands understandings of doxxing, its forms and its harms, beyond a taciturn discussion of privacy and harassment online. Building on David M. Douglas's typology of doxxing, our typology considers two key dimensions of doxxing: the form of loss experienced by the victim and the perpetrator's motivation(s) for undertaking this form of TFV. Through examining the extant literature on doxxing, we identify seven mutually non-exclusive motivations for this form of TFV: extortion, silencing, retribution, controlling, reputation-building, unintentional, and doxxing in the public interest. We conclude by identifying future areas for interdisciplinary research into doxxing that brings criminology into conversation with the insights of media-focused disciplines.

Details

The Emerald International Handbook of Technology-Facilitated Violence and Abuse
Type: Book
ISBN: 978-1-83982-849-2


Article
Publication date: 18 October 2023

Langdon Holmes, Scott Crossley, Harshvardhan Sikka and Wesley Morris


Abstract

Purpose

This study aims to report on an automatic deidentification system for labeling and obfuscating personally identifiable information (PII) in student-generated text.

Design/methodology/approach

The authors evaluate the performance of their deidentification system on two data sets of student-generated text. Each data set was human-annotated for PII. The authors evaluate using two approaches: per-token PII classification accuracy and a simulated reidentification attack design. In the reidentification attack, two reviewers attempted to recover student identities from the data after PII was obfuscated by the authors’ system. In both cases, results are reported in terms of recall and precision.
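The per-token evaluation described above can be illustrated with a minimal sketch (hypothetical labels and function names, not the authors' code), scoring predicted PII tags against human annotations in terms of precision and recall:

```python
# Hypothetical illustration of per-token PII scoring; the labels and
# function name are invented for this sketch, not taken from the paper.
def token_prf(gold, predicted):
    """Precision/recall/F1 for per-token PII labels.

    gold, predicted: equal-length lists of "PII" / "O" labels, one per token.
    """
    tp = sum(1 for g, p in zip(gold, predicted) if g == "PII" and p == "PII")
    fp = sum(1 for g, p in zip(gold, predicted) if g == "O" and p == "PII")
    fn = sum(1 for g, p in zip(gold, predicted) if g == "PII" and p == "O")
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: one missed name token (false negative), one spurious tag.
gold = ["PII", "O", "PII", "PII", "O"]
pred = ["PII", "O", "O", "PII", "PII"]
p, r, f = token_prf(gold, pred)
```

In a reidentification-attack evaluation, by contrast, recall would be measured over whole identities recovered by the reviewers rather than over individual tokens.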

Findings

The authors’ deidentification system recalled 84% of student name tokens in their first data set (96% of full names). On the second data set, it achieved a recall of 74% for student name tokens (91% of full names) and 75% for all direct identifiers. After the second data set was obfuscated by the authors’ system, two reviewers attempted to recover the identities of students from the obfuscated data. They performed below chance, indicating that the obfuscated data presents a low identity disclosure risk.

Research limitations/implications

The two data sets used in this study are not representative of all forms of student-generated text, so further work is needed to evaluate performance on more data.

Practical implications

This paper presents an open-source and automatic deidentification system appropriate for student-generated text with technical explanations and evaluations of performance.

Originality/value

Previous studies of text deidentification have shown success in the medical domain. This paper builds on those approaches and applies them to text in the educational domain.

Details

Information and Learning Sciences, vol. 124 no. 9/10
Type: Research Article
ISSN: 2398-5348


Article
Publication date: 17 April 2020

Kevin Watson and Dinah M. Payne


Abstract

Purpose

The purpose of this paper is to review current practice in sharing and mining medical data revealing benefits, costs and ethical issues. Based on stakeholder perspectives and values, the authors create an ethical code to regulate the sharing and mining of medical information.

Design/methodology/approach

The framework is based on a review of academic, practitioner and legal research.

Findings

Owing to the inability of current safeguards to protect consumers from risks related to the disclosure of medical information, the authors develop a framework for the ethical sharing and mining of medical data, STRACQ, which espouses security, transparency, respect, accountability, community and quality as the basic tenets of ethical data sharing and mining practice.

Research limitations/implications

The STRACQ framework is an original, previously unpublished contribution that will require modification over time based on discussion and debate within and among the academy, medical community and public policymakers.

Social implications

The framework for sharing borrows from the Fair Credit Reporting Act, allowing the collection and dissemination of identified medical data but placing strict limitations on use. Following this framework, benefits of shared and mined medical data are freely available with appropriate safeguards for consumer privacy.

Originality/value

Mandates for adoption of electronic health-care records require an understanding of medical data mining. This paper presents a review of data mining techniques and reasons for engaging in the practice, identifying benefits, costs and ethical issues. The authors create an original framework, STRACQ, for ethical sharing and mining of medical information, allowing knowledge exploration while protecting consumer privacy.

Details

Journal of Information, Communication and Ethics in Society, vol. 19 no. 1
Type: Research Article
ISSN: 1477-996X


Article
Publication date: 13 February 2019

Darra Hofman, Victoria Louise Lemieux, Alysha Joo and Danielle Alves Batista


Abstract

Purpose

This paper aims to explore a paradoxical situation, asking whether it is possible to reconcile the immutable ledger known as blockchain with the requirements of the General Data Protection Regulations (GDPR), and more broadly privacy and data protection.

Design/methodology/approach

This paper combines doctrinal legal research examining the GDPR’s application and scope with case studies examining blockchain solutions from an archival theoretic perspective to answer several questions, including: What risks are blockchain solutions said to impose (or mitigate) for organizations dealing with data that is subject to the GDPR? What are the relationships between the GDPR principles and the principles of archival theory? How can these two sets of principles be aligned within a particular blockchain solution? How can archival principles be applied to blockchain solutions so that they support GDPR compliance?

Findings

This work will offer an initial exploration of the strengths and weaknesses of blockchain solutions for GDPR compliant information governance. It will present the disjunctures between GDPR requirements and some current blockchain solution designs and implementations, as well as discussing how solutions may be designed and implemented to support compliance. Immutability of information recorded on a blockchain is a differentiating positive feature of blockchain technology from the perspective of trusted exchanges of value (e.g. cryptocurrencies) but potentially places organizations at risk of non-compliance with GDPR if personally identifiable information cannot be removed. This work will aid understanding of how blockchain solutions should be designed to ensure compliance with GDPR, which could have significant practical implications for organizations looking to leverage the strengths of blockchain technology to meet their needs and strategic goals.
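One design pattern often discussed for reconciling on-chain immutability with erasure obligations, sketched here as a hypothetical illustration rather than a solution proposed in the paper, keeps personal data in mutable off-chain storage and records only a salted hash on the ledger; deleting the off-chain record and its salt leaves the on-chain commitment in place but unlinkable to the data subject:

```python
# Hypothetical "hash on-chain, data off-chain" sketch; all names are
# invented for illustration and simplified to in-memory structures.
import hashlib
import os

off_chain_store = {}   # mutable storage holding the actual personal data
chain = []             # append-only ledger holding only salted commitments

def record(personal_data: bytes) -> str:
    """Commit a salted hash of the data on-chain; keep the data off-chain."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + personal_data).hexdigest()
    off_chain_store[digest] = (salt, personal_data)  # erasable copy
    chain.append(digest)                             # immutable commitment
    return digest

def erase(digest: str) -> None:
    """Honor an erasure request: drop the off-chain record and its salt.

    The on-chain hash remains, but without the salt and plaintext it can
    no longer be linked back to the individual.
    """
    off_chain_store.pop(digest, None)

ref = record(b"alice@example.com")
erase(ref)
```

Whether a residual salted hash still counts as personal data under the GDPR is itself contested, which is part of the compliance uncertainty the paper examines.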

Research limitations/implications

Some aspects of the social layer of blockchain solutions, such as law and business procedures, are relatively well understood. Much less well understood is the data layer, and how it serves as an interface between the social and the technical in a sociotechnical system like blockchain. In addition to a need for more research about the data/records layer of blockchains and compliance, there is a need for more information governance professionals who can provide input on this layer, both to their organizations and other stakeholders.

Practical implications

Managing personal data will continue to be one of the most challenging, fraught issues for information governance moving forward; given the fairly broad scope of the GDPR, many organizations, including those outside of the EU, will have to manage personal data in compliance with the GDPR. Blockchain technology could play an important role in ensuring organizations have easily auditable, tamper-resistant, tamper-evident records to meet broader organizational needs and to comply with the GDPR.

Social implications

Because the GDPR professes to be technology-neutral, understanding its application to novel technologies such as blockchain provides an important window into the broader context of compliance in evolving information governance spaces.

Originality/value

The specific question of how GDPR will apply to blockchain information governance solutions is almost entirely novel. It has significance to the design and implementation of blockchain solutions for recordkeeping. It also provides insight into how well “technology-neutral” laws and regulations actually work when confronted with novel technologies and applications. This research will build upon significant bodies of work in both law and archival science to further understand information governance and compliance as we are shifting into the new GDPR world.

Details

Records Management Journal, vol. 29 no. 1/2
Type: Research Article
ISSN: 0956-5698


Article
Publication date: 16 November 2012

May O. Lwin, Anthony D. Miyazaki, Andrea J.S. Stanaland and Evonne Lee


Abstract

Purpose

This paper aims to examine motivations for young consumers' internet use, how these motivations relate to children's privacy concerns and, subsequently, children's willingness to disclose personally identifiable information.

Design/methodology/approach

The strengths of three common internet usage motives (information seeking, entertainment, and socializing) in predicting disclosure behavior are examined via survey research with a sample of children aged 10‐12.

Findings

Two of the motives – information seeking and socializing – are found to influence privacy concerns, which in turn, are shown to affect willingness to disclose information. Information‐seeking motivations were positively related to privacy concerns, while socializing motivations were negatively related to privacy concerns. Direct incentives are also found to increase disclosure.

Originality/value

The findings suggest that the uses and gratifications theory is useful for understanding children's privacy behaviors relating to information seeking and socializing motivations. Combining this with the varying levels of interactivity of websites that might satisfy various motives helps researchers begin to understand how particular motives may lead to increases or decreases in risky behavior; in this case, preteen disclosure of personal information.

Article
Publication date: 10 August 2015

Aimee van Wynsberghe and Jeroen van der Ham


Abstract

Purpose

The purpose of this paper is to develop a novel approach for the ethical analysis of data collected from an online file-sharing site known as The PirateBay. Since the creation of Napster back in the late 1990s for the sharing and distribution of MP3 files across the Internet, the entertainment industry has struggled to deal with the regulation of information sharing at large. Added to the ethical questions of censorship and distributive justice are questions related to the use of data collected from such file-sharing sites for research purposes.

Design/methodology/approach

The approach is based on previous work analysing the use of data from online social networking sites and involves value analysis of the collection of data throughout the data’s various life cycles.

Findings

This paper highlights the difficulties faced when attempting to apply a deontological or utilitarian approach to cases like the one used here. With this in mind, the authors point to a virtue ethics approach as a way to address ethical issues related to data sharing in the face of ever-changing data gathering and sharing practices.

Practical implications

This work is intended to provide a concrete approach for ethical data sharing practices in the domain of Internet security research.

Originality/value

The approach presented in this paper is a novel approach combining the insights from: the embedded values concept, value-sensitive design and the approach of the embedded ethicist.

Details

Journal of Information, Communication and Ethics in Society, vol. 13 no. 3/4
Type: Research Article
ISSN: 1477-996X


Book part
Publication date: 30 June 2017

Leslie P. Francis and John G. Francis


Abstract

Reusing existing data sets of health information for public health or medical research has much to recommend it. Much data repurposing in medical or public health research or practice involves information that has been stripped of individual identifiers but some does not. In some cases, there may have been consent to the reuse but in other cases consent may be absent and people may be entirely unaware of how the data about them are being used. Data sets are also being combined and may contain information with very different sources, consent histories, and individual identifiers. Much of the ethical and policy discussion about the permissibility of data reuse has centered on two questions: for identifiable data, the scope of the original consent and whether the reuse is permissible in light of that scope, and for de-identified data, whether there are unacceptable risks that the data will be reidentified in a manner that is harmful to any data subjects. Prioritizing these questions rests on a picture of the ethics of data use as primarily about respecting the choices of the data subject. We contend that this picture is mistaken; data repurposing, especially when data sets are combined, raises novel questions about the impacts of research on groups and their implications for individuals regarded as falling within these groups. These impacts suggest that the controversies about de-identification or reconsent for reuse are to some extent beside the point. Serious ethical questions are also raised by the inferences that may be drawn about individuals from the research and resulting risks of stigmatization. These risks may arise even when individuals were not part of the original data set being repurposed. Data reuse, repurposing, and recombination may have damaging effects on others not included within the original data sets. 
These issues of justice for individuals who might be regarded as indirect subjects of research are not even raised by approaches that consider only the implications for, or agreement of, the original data subject. This chapter argues that health information should be available for reuse, but in a way that does not yield unexpected surprises, produce direct harm to individuals, or violate warranted trust.

Details

Studies in Law, Politics, and Society
Type: Book
ISBN: 978-1-78714-811-6


Article
Publication date: 2 January 2023

Deepak Choudhary


Abstract

Purpose

As the number of devices connecting to the Internet of Things (IoT) has grown, privacy and security issues have arisen. Because IoT devices collect so much sensitive information, such as user names, locations, phone numbers and even typical patterns of energy use, it is very important to protect users' privacy and security. IoT technology will be hard to adopt on the client side because IoT-enabled devices do not offer clear privacy and security controls.

Design/methodology/approach

The goal of this research is to protect users' privacy in the IoT by using the oppositional artificial flora optimization algorithm to generate optimal keys for the ElGamal public key cryptosystem (EGPKC), a combination referred to as EGPKC-OAFA. The approach centres on the MAC layer defined by the IEEE 802.15.4 standard, whose MAC header contains the security field. The MAC header also incorporates EGPKC, which enables authentication keys to be generated as quickly as possible.
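For context, textbook ElGamal key generation and encryption can be sketched as follows (a didactic toy with tiny parameters; the paper's EGPKC-OAFA variant, which optimizes key selection with artificial flora search, is not reproduced here):

```python
# Textbook ElGamal over a small prime field - illustration only.
# Real deployments need primes of 2048 bits or more and vetted libraries.
import random

def elgamal_keygen(p, g):
    """Generate an ElGamal key pair for prime modulus p and base g."""
    x = random.randrange(2, p - 1)   # private key
    y = pow(g, x, p)                 # public component y = g^x mod p
    return x, (p, g, y)

def elgamal_encrypt(pub, m):
    p, g, y = pub
    k = random.randrange(2, p - 1)   # fresh ephemeral key per message
    return pow(g, k, p), (m * pow(y, k, p)) % p

def elgamal_decrypt(priv, pub, cipher):
    p, _, _ = pub
    c1, c2 = cipher
    s = pow(c1, priv, p)                 # shared secret c1^x mod p
    return (c2 * pow(s, p - 2, p)) % p   # divide by s via Fermat inverse

priv, pub = elgamal_keygen(467, 2)       # toy parameters
cipher = elgamal_encrypt(pub, 123)
plain = elgamal_decrypt(priv, pub, cipher)
```

The optimization problem the paper addresses sits in the choice of key parameters; the encryption and decryption steps themselves follow this standard scheme.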

Findings

With the proliferation of IoT devices, privacy and security have become major concerns, owing to the large amount of personally identifiable information acquired by IoT devices, such as names, locations, phone numbers and energy use. The OAFA strategy implements optimal EGPKC key selection by combining opposition-based learning (OBL) with the standard artificial flora algorithm (AFA). Across a number of simulations, the EGPKC-OAFA method was shown to perform effectively on a range of evaluation functions.

Originality/value

In light of the growing prevalence of the IoT, an increasing number of people are becoming anxious about the protection and confidentiality of the personal data they store online, since the IoT is capable of gathering personally identifiable information such as names, addresses and phone numbers, as well as the quantity of energy used. Worries about the security and privacy of user-generated data will make IoT technology challenging for customers to adopt. In this work, EGPKC is paired with oppositional artificial flora optimization, increasing the privacy protection that EGPKC provides for the IoT (EGPKC-OAFA). The EGPKC-OAFA protocol places a high focus on the MAC security field that is part of the IEEE 802.15.4 standard, with authentication key generation (EGPKCA) carried out in MAC headers. The OAFA technique, the combination of OBL and AFA, proved the most successful method for selecting EGPKCs, and a variety of simulations have shown that the EGPKC-OAFA technique is a very useful instrument for carrying out performance analysis.

Details

International Journal of Pervasive Computing and Communications, vol. 19 no. 5
Type: Research Article
ISSN: 1742-7371


Book part
Publication date: 8 March 2021

Aroon P. Manoharan and Tony Carrizales


Abstract

With the increasing use of the Internet and social media, governments worldwide are adopting digital technologies and innovative strategies to communicate and engage with their citizens. Public sector agencies, especially at the local level, have been adopting emerging technologies such as the Internet of Things, artificial intelligence, and blockchain, and they are increasingly leveraging big data analytics to improve their decision-making and organizational performance. These rapid innovations pose important questions about, and concerns for, the privacy and security of the citizens accessing government information and services online. This chapter explores these issues, discusses the role of privacy policies in addressing such concerns, and highlights the need for ethical privacy policies to restore the trust and confidence of citizen users of government websites.

Details

Corruption in the Public Sector: An International Perspective
Type: Book
ISBN: 978-1-83909-643-3


Article
Publication date: 5 May 2020

Rashmi Gupta, Martin Crane and Cathal Gurrin


Abstract

Purpose

The continuous advancements in wearable sensing technologies enable the easy collection and publishing of visual lifelog data. The widespread adoption of visual lifelog technologies has the potential to pose challenges for ensuring the personal privacy of subjects and bystanders in lifelog data. This paper presents preliminary findings from a study of lifeloggers with the aim of better understanding their concerns regarding privacy in lifelog data.

Design/methodology/approach

In this study, we have collected a visual dataset of 64,837 images from 25 lifelogging participants over a period of two days each, and we conducted an interactive session (face to face conversation) with each participant in order to capture their concerns when sharing the lifelog data across three specified categories (i.e. Private (Only for Me), Semi-Private (Family/Friends) and Public).

Findings

In general, we found that participants tend to err on the side of conservative privacy settings and that there is a noticeable difference in what different participants are willing to share. In summary, we found that the categories of images that the participants wished to keep private included personally identifiable information and professional information; categories of images that could be shared with family/friends included family moments or content related to daily routine and lifestyle; and other visual lifelog data could potentially be made public.

Originality/value

We analysed the potential differences in the willingness of 25 participants to share data. In addition, we analysed participants' reasons for volunteering to collect lifelog data and how the lifelogging device affected their lifestyle. Based on the findings of this study, we propose a set of challenges for the anonymisation of lifelog data that should be solved when supporting lifelog data sharing.

Details

Online Information Review, vol. 45 no. 2
Type: Research Article
ISSN: 1468-4527

