The Emerald International Handbook of Technology-Facilitated Violence and Abuse


Table of contents (47 chapters)
Abstract

While digital technologies have led to many important social and cultural advances worldwide, they also facilitate the perpetration of violence, abuse and harassment, known as technology-facilitated violence and abuse (TFVA). TFVA includes a spectrum of behaviors perpetrated online, offline, and through a range of technologies, including artificial intelligence, livestreaming, GPS tracking, and social media. This chapter provides an overview of TFVA, including a brief snapshot of existing quantitative and qualitative research relating to various forms of TFVA. It then discusses the aims and contributions of this book as a whole, before outlining five overarching themes arising from the contributions. The chapter concludes by mapping out the structure of the book.

Section 1 TFVA Across a Spectrum of Behaviors

Abstract

When discussing the term “technology-facilitated violence” (TFV), one question is often asked: “Is it actually violence?” While international human rights standards, such as the United Nations' Convention on the Elimination of All Forms of Discrimination against Women (United Nations General Assembly, 1979), have long recognized emotional and psychological abuse as forms of violence, including many forms of technology-facilitated abuse (United Nations, 2018), lawmakers and the general public continue to grapple with the question of whether certain harmful technology-facilitated behaviors are actually forms of violence. This chapter explores this question in two parts. First, it reviews three theoretical concepts of violence and examines how these concepts apply to technology-facilitated behaviors. In doing so, this chapter aims to demonstrate how some harmful technology-facilitated behaviors fit under the greater conceptual umbrella of violence. Second, it examines two recent cases, one from the British Columbia Court of Appeal (BCCA) in Canada and a Romanian case from the European Court of Human Rights (ECtHR), that received attention for their legal determinations on whether to define harmful technology-facilitated behaviors as forms of violence or not. This chapter concludes with observations on why we should conceptualize certain technology-facilitated behaviors as forms of violence.

Abstract

Online environments have become a central part of our social, private, and economic life. The term for this is “digital existence,” characterized as a new epoch in mediated experience. Over the last decade, there has been a growing interest in how online abuse impacts one's digital existence. Drawing on 15 interviews with women, this chapter demonstrates a type of labor—which I call “ontological labor”—that women exercise when processing their own experiences of online abuse, and when sharing their experiences with others. Ontological labor is the process of overcoming a denial of experience. In the case of online abuse, this denial stems partly from the treatment of online and offline life as separate and opposing. This division is known as digital dualism, which I argue is a discourse that denies women the space to have their experiences of online abuse recognized as such.

Abstract

Polyvictimization refers to the multiple victimizations of different kinds that one person has experienced. Virtually all of the work in this field focuses on the effects of childhood trauma and victimization on currently distressed children, and empirical and theoretical work on the intertwining of adult female offline and online abuse experiences is in short supply. Recently, however, some scholars are starting to fill these research gaps by generating data showing that technology-facilitated violence and abuse are part and parcel of women's polyvictimization experiences at institutions of higher education. This chapter provides an in-depth review of the extant social scientific literature on the role technology-facilitated violence and abuse plays in the polyvictimization of female college/university students. In addition to proposing new ways of knowing, we suggest progressive policies and practices aimed at preventing polyvictimization on the college campus.

Abstract

Incidents of violence perpetrated or facilitated through digital technology platforms have been reported, often in high-income countries. Very little scholarly attention has been given to the nature of technology-facilitated violence and abuse (TFVA) across sub-Saharan Africa (SSA) despite an explosion in the use of various technologies. We conducted a literature review to identify and harmonize available data relating to the types of TFVA taking place in SSA. This was followed by an online survey of young adults through the SHYad.NET forum to understand the nature of TFVA among young adults in SSA. Our literature review revealed various types of TFVA to be happening across SSA, including cyberbullying, cyberstalking, trolling, dating abuse, image-based sexual violence, sextortion, and revenge porn. The results of our online survey revealed that both young men and women experience TFVA, with the most commonly reported TFVA being receiving unwanted sexually explicit images, comments, emails, or text messages. Female respondents more often reported repeated and/or unwanted sexual requests online via email or text message, while male respondents more often reported experiencing violent threats. Respondents used various means to cope with TFVA, including blocking the abuser or deleting the targeted social media profile.

Abstract

The nature and extent of adults' engagement in diverse manifestations of technology-facilitated aggression is not yet well understood. Most research has focused on victimization. When explored, engagement in online aggression and abuse has centered on children and young people, particularly in school and higher education settings. Drawing on nationally representative data from New Zealand adults aged 18 and over, this chapter explores the overall prevalence of online aggression with a focus on gender and age. Our findings support the need to also understand adult aggressors' behaviors to better address the distress and harm caused to targets through digital communications. The chapter discusses the implications of the results for policy and practice and proposes some directions for future research.

Abstract

This article considers how digital technologies are informed by, and implicated in, the systematic and interlocking oppressions of colonialism, misogyny, and racism, all of which have been identified as root causes of the missing and murdered Indigenous women crisis in Canada. The authors consider how technology can facilitate multiple forms of violence against women including stalking and intimate partner violence, human trafficking, pornography and child abuse images, and online hate and harassment and note instances where Indigenous women and girls may be particularly vulnerable. The authors also explore some of the complexities related to police use of technology for investigatory purposes, touching on police use of social media and DNA technology. Without simplistically blaming technology, the authors argue that technology interacts with multiple factors in the complex historical, socio-cultural environment that incubates the national crisis of missing and murdered Indigenous women and girls. The article concludes with related questions that may be considered at the impending national inquiry.

Abstract

Broadly understood as repeated, intentional, and aggressive behaviors facilitated by digital technologies, cyberbullying has been identified as a significant public health concern in Australia. However, there have been critical debates about the theoretical and methodological assumptions of cyberbullying research. On the whole, this research has demonstrated an aversion to accounting for context, difference, and complexity. This insensitivity to difference is evident in the absence of nuanced accounts of Indigenous people's experiences of cyberbullying. In this chapter, we extend recent critiques of dominant approaches to cyberbullying research and argue for novel theoretical and methodological engagements with Indigenous people's experiences of cyberbullying. We review a range of literature that unpacks the many ways that social, cultural, and political life is different for Indigenous peoples. More specifically, we demonstrate there are good reasons to assume that online conflict is different for Indigenous peoples, due to diverse cultural practices and the broader political context of settler-colonialism. We argue that the standardization of scholarly approaches to cyberbullying limits the field's ability to attend to social difference in online conflict, and we join calls for more theoretically rigorous, targeted, difference-sensitive studies into bullying.

Section 2 Text-Based Harms

Abstract

While a growing body of literature reveals the prevalence of men's harassment and abuse of women online, scant research has been conducted into women's attacks on each other in digital networked environments. This chapter responds to this research gap by analyzing data obtained from qualitative interviews with Australian women who have received at times extremely savage cyberhate they know or strongly suspect was sent by other women. Drawing on scholarly literature on historical intra-feminist schisms – specifically what have been dubbed the “mommy wars” and the “sex wars” – this chapter argues that the conceptual lenses of internalized misogyny and lateral violence are useful in their framing of internecine conflict within marginalized groups as diagnostic of broader, systemic oppression rather than being solely the fault of individual actors. These lenses, however, require multiple caveats and have many limitations. In conclusion, I canvas the possibility that the pressure women may feel to present a united front in the interests of feminist politics could itself be considered an outcome of patriarchal oppression (even if performing solidarity is politically expedient and/or essential). As such, there might come a time when openly renouncing discourses of sisterhood and feeling free to disagree with, and even dislike, other women might be considered markers of liberation.

Abstract

Transgender people have received substantial attention in recent years, with gender identity being a focal point of online debate. Transgender identities are central to discussions relating to sex-segregated spaces and activities, such as public toilets, prisons, and sports participation. The introduction of “gender-neutral” spaces has received criticism because some argue that there is an increased risk of sexual violence against women and children. However, little is known about the implications that these constructions have for who is able to claim a “victim status.” In this chapter, I provide a critical analysis of the techniques used by individuals to align themselves with a “victim status.” These claims are presented and contextualized within varying notions of victimization, from being victims of political correctness to victims of a more aggressive minority community. This feeds into an inherently transphobic discourse that is difficult to challenge without facing accusations of perpetuating an individual's “victimhood.” Transphobic rhetoric is most commonly expressed through constructing transgender people as “unnatural,” “sinful,” or as experiencing a “mental health issue.” This chapter argues that the denial of transphobia and simultaneous claims of victimization made by the dominant, cisgender majority are intrinsically linked.

Abstract

This chapter examines the phenomenon of doxxing: the practice of publishing private, proprietary, or personally identifying information on the internet, usually with malicious intent. Undertaking a scoping review of research into doxxing, we develop a typology of this form of technology-facilitated violence (TFV) that expands understandings of doxxing, its forms and its harms, beyond a taciturn discussion of privacy and harassment online. Building on David M. Douglas's typology of doxxing, our typology considers two key dimensions of doxxing: the form of loss experienced by the victim and the perpetrator's motivation(s) for undertaking this form of TFV. Through examining the extant literature on doxxing, we identify seven mutually non-exclusive motivations for this form of TFV: extortion, silencing, retribution, controlling, reputation-building, unintentional, and doxxing in the public interest. We conclude by identifying future areas for interdisciplinary research into doxxing that brings criminology into conversation with the insights of media-focused disciplines.

Abstract

The growth of online communities and social media has led to a growing need for methods, concepts, and tools for researching online cultures. Particular attention should be paid to polarizing online discussion cultures and dynamics that increase inequality in online environments. Social media has enormous potential to create good, but in order to unlock its full potential, we also need to examine the mechanisms keeping these spaces monotonous, homogenous, and even hostile toward some groups. With this need in mind, I have developed the concept and theory of othering online discourse (OOD).

This chapter introduces and defines the concept of OOD and explains the key characteristics and different attributes of OOD in relation to other concepts that deal with disruptive and discriminatory behavior in online spaces. The attributes of OOD are demonstrated drawing on examples gathered from the Finnish Suomi24 (Finland24) forum.

Abstract

The ideal of an open, all-inclusive, and participatory internet has been undermined by the rise of gender-based and misogynistic abuse on social media platforms. Limited progress has been made at supranational and national levels in addressing this issue, and where steps have been taken to combat online violence against women (OVAW), they are typically limited to legislative developments addressing image-based sexual abuse. As such, harms associated with image-based abuse have gained recognition in law while harms caused by text-based abuse (TBA) have not been conceptualized in an equivalent manner.

This chapter critically outlines the lack of judicial consideration given to online harms in British courts, identifying a range of harms arising from TBAs which currently are not recognized by the criminal justice system. We refer to non-traditional harms recognized in cases heard before the British courts, assessing these in light of traditionally recognized harms in established legal authorities. This chapter emphasizes the connection between the harms suffered and the recognition of impact on the victims, demonstrated through specific case studies. Through this assessment, this chapter advocates for greater recognition of online harms within the legal system – especially those which take the forms of misogynistic and/or gendered TBA.

Section 3 Image-Based Harms

Abstract

Videos of police abuse are often spread through technology, raising questions around how perceptions of police are impacted by these images, especially for 18–24-year-olds who are constantly “logged on.” Limited research investigates the impact of social media on attitudes toward police accounting for age and race. The present study utilizes 19 in-depth interviews with a diverse sample of urban college students who regularly use social media in order to understand how they have been impacted by this content. The findings suggest the necessity of using an intersectional framework to understand the impact of tech-witnessed violence. While no gender differences were uncovered, racial differences did surface. White participants described being minimally influenced by videos of police misconduct, rationalizing it as a “few bad apples.” In contrast, participants of color, except those with family members in law enforcement, described being negatively impacted. Viral content contributed to negative opinions of police, emotional distress, and fears of victimization. Ultimately, videos of police brutality do not impact young populations equally. Instead, they are comparatively more harmful to young people of color who spend more time on social media, can envision themselves as the victims, and experience feelings of fear, despair, and anger after watching these videos.

Abstract

Mainstream pornography is popular, freely accessible, and infused with themes of male dominance, aggression, and female subservience. Through depicting sex in these ways, mainstream pornography has the potential to influence the further development of harmful sexual scripts that condone or endorse violence against women and girls. These concerns warrant the adoption of a harms-based perspective in critical examinations of pornography's influence on sexual experiences. This chapter reports on findings from interviews with 24 heterosexual emerging adults living in Aotearoa/New Zealand about how pornography has impacted their lives. Despite a shared awareness among participants of mainstream pornography's misogynistic tendencies, and the potential for harm from those displays, men's and women's experiences were profoundly gendered. Men's reported experiences were often associated with concerns about their own sexual behaviors, performances, and/or abilities. Conversely, women's experiences were often shaped by how pornography had affected the way that men related to them sexually. Their experiences included instances of sexual coercion and assault which were not reported by the men. These findings signal the need for a gendered lens, situated within a broader harms-based perspective, in examinations of pornography's influence.

Abstract

Media attention on nonconsensual intimate image dissemination has led to the relatively recent proliferation of academic research on the topic. This literature has focused on many areas including victimization and perpetration prevalence rates, coerced sexting, legal and/or criminal contexts, sexual violence in digital spaces, gendered constructions of blame and risk, and legal analysis of high-profile cases and legislation. Despite this research, several gaps exist, including a lack of empirical research with service providers. Informed by in-depth interviews with 10 sexual violence frontline professionals in Southern Ontario (Canada), this chapter focuses on their perspectives of the additive role of technology. With respect to nonconsensual intimate image dissemination, technology acts as a digital “layer” that operates in addition to the commission of physical acts of sexual violence, and compounds the harms experienced by the victim by adding a virtual – and indelible – “permanent remembering” of the violence. Nuancing the contours of consent in a digital age, this chapter concludes by considering what consent means in a technological context.

Section 4 Dating Applications

Abstract

In recent years, the use of dating and hook-up apps has become an increasingly socially acceptable and commonly used method of seeking romantic and sexual partners. This has seen a corresponding rise in media and crime reports of sexual harms facilitated through these services, including sexual harassment, unsolicited sexual imagery, and sexual assault. Emerging empirical research shows that experiences of sexual harms in this context are common and predominantly impact women and girls. The aim of this chapter is to examine the sociocultural and sexual norms that underpin online dating and which perpetuate a “rape culture” within which sexual harms become both possible and normalized. This chapter also considers how the discourses that minimize and legitimize sexual harms are encoded within the responses undertaken by dating and hook-up apps to sexual harms. It is argued that together these norms and discourses may act to facilitate and/or prevent sexual harms, and may normalize and excuse these harms when they occur.

Abstract

Rape culture, described as when “violence is seen as sexy and sexuality as violent” (Buchwald, Fletcher, & Roth, 1993, p. vii), exists online and offline (Henry & Powell, 2014). Much of the research on rape culture focuses on the experiences of heterosexual women, and few studies have explored rape culture in the context of dating apps. This chapter explores how men who have sex with men (MSM) understand and experience rape culture through their use of Grindr and similar dating apps. A thematic analysis of interviews with 25 MSM dating app users revealed problematic user behavior as well as unwanted sexual messages and images as common manifestations of rape culture on dating apps. Participants explained that rape culture extends beyond in-app interactions to in-person encounters, as evidenced by incidents of sexual violence that several participants had experienced and one participant had committed. Participants were unsure about the extent to which MSM dating apps facilitate rape culture but asserted that some apps enable rape culture more than others. This chapter demonstrates the importance of investigating sexual violence against people of diverse gender and sexual identities to ensure their experiences are not minimized, ignored, or rendered invisible.

Abstract

Mobile dating apps are widely used in the queer community. Whether for sexual exploration or dating, mobile and geosocial dating apps facilitate connection. But they also bring attendant privacy risks. This chapter is based on original research about the ways gay and bisexual men navigate their privacy on geosocial dating apps geared toward the LGBTQI community. It argues that, contrary to the conventional wisdom that people who share semi-nude or nude photos do not care about their privacy, gay and bisexual users of geosocial dating apps care very much about their privacy and engage in complex, overlapping privacy navigation techniques when sharing photos. They share semi-nude and nude photos for a variety of reasons, but generally do so only after building organic trust with another person. Because trust can easily break down without supportive institutions, this chapter argues that law and design must help individuals protect their privacy on geosocial dating apps.

Section 5 Intimate Partner Violence and Digital Coercive Control

Abstract

Technology increasingly features in intimate relationships and is used by domestic violence perpetrators to enact harm. In this chapter, we propose a theoretical and practical framework for technology-facilitated harms in heterosexual relationships which we characterize as digital coercive control. Here, we include behaviors which can be classified as abuse and stalking and also individualized tactics which are less easy to categorize, but evoke fear and restrict the freedoms of a particular woman. Drawing on their knowledge of a victim/survivor's experiences, and in the context of the patterns and dynamics of abuse, perpetrators personalize digital coercive control strategies, extending and exacerbating “real-world” violence.

Digital coercive control is unique because of its spacelessness and the ease, speed, and identity-shielding which technology affords. Victim/survivors describe how perpetrator use of technology creates a sense of omnipresence and omnipotence which can deter women from exiting violent relationships and weakens the (already tenuous) notion that abuse can be “escaped.” We contend that the ways that digital coercive control shifts temporal and geographic boundaries warrant attention. However, spatiality more broadly cannot be overlooked. The places in which victim/survivors and perpetrators reside will shape both experiences of and responses to violence. In this chapter, we explore these ideas, reporting on findings from a study on digital coercive control in regional, rural, and remote Australia. We adopt a feminist research methodology in regard to our ethos, research processes, analysis, and the outputs and outcomes of our project. Women's voices are foregrounded in this approach, and the emphasis is on how research can be used to inform, guide, and develop responses to domestic violence.

Abstract

Technology-facilitated violence against women (TFVW) is readily becoming a key site of analysis for feminist criminologists. The scholarship in this area has identified online sexual harassment, contact-based harassment, image-based abuse, and gender-based cyberhate – among others – as key manifestations of TFVW. It has also unpacked the legal strategies available to women seeking formal justice outcomes. However, much of the existing empirical scholarship has been produced within countries like the United States, United Kingdom, and Australia, and there has been limited research on this phenomenon within South East Asia. As such, this chapter maps how technology is shaping Singaporean women's experiences of gendered, sexual, and domestic violence. To do so, it draws upon findings from a research project which examined TFVW in Singapore by utilizing semistructured interviews with frontline workers in the fields of domestic and sexual violence and LGBT services. Drawing from Dragiewicz et al.’s (2018) work on technology-facilitated coercive control (TFCC), I argue that victims-survivors of dating, domestic, and family violence need to be provided with support that is TFCC informed and technically guided. I also suggest that further research is needed to fully understand the prevalence and nature of TFVW in the Singaporean context.

Abstract

Much of the research on intimate partner violence focuses on adults, and little of it emanates from the Global South. The study reported upon in this chapter is aimed at addressing these gaps. Adopting a Southern Feminist Framework, it discusses findings from interviews with Brazilian and Australian advocates working on prevention of youth IPV. Participants from both countries noted disturbing instances of digital coercive control among the youth with whom they work, as well as underlying factors, such as gender-based discrimination, that contribute both to the prevalence of such behaviors and to their normalization among young people. However, they also emphasized the positive role that technology can play in distributing educational programming that reaches young people where they are and circumvents conservative agendas that in some cases keep education about gender discrimination and healthy relationships out of schools.

Abstract

The rapid advancement of technology poses many social challenges including the emerging issue of technology-facilitated abuse (TFA) and violence. In Australia, women from culturally and linguistically diverse (CALD) backgrounds are found to be more vulnerable to domestic violence (DV) and abuse, including TFA. This chapter presents a snapshot of CALD women's technology-facilitated domestic abuse (TFDA) experiences in Melbourne through the eyes of a small group of DV practitioners. Findings show CALD women experience TFA similar to that of the mainstream, with tracking and monitoring through the use of smartphone and social media most common. Their migration and financial status, and language and digital literacy can increase their vulnerability to TFDA, making their experience more complicated. Appropriate digital services and resources together with face-to-face support services can be a way forward. Further research should focus on better understanding CALD women's perceptions of and responses to TFDA and explore ways to improve engagement with and use of community media channels/platforms.

Abstract

This chapter examines technology-facilitated violence from the perspective of international human rights law. It explores current research relating to technology-facilitated violence and then highlights the international human rights instruments that are triggered by the various forms of such violence. Ultimately, it focuses upon international human rights to privacy and to freedom from violence (especially gender-based violence) and the obligations on State and non-State actors to address violations of these rights. It argues that adoption of a human rights perspective on technology-facilitated violence better enables us to hold State and non-State actors to account in finding meaningful ways to address violence in all of its forms.

Abstract

Violence against women and girls is globally prevalent. Overcoming it is a prerequisite for attaining gender equality and achieving sustainable development. The United Nations' 2030 Agenda for Sustainable Development considers technology as a means to combat violence against women and girls, and there is ample evidence of the positive impact of technology in combating violence. At the same time, however, technology can promote and perpetuate new forms of violence. Research shows that more than 70% of women and girls online are exposed to forms of cyber violence. Most of these cases remain unreported.

This chapter argues that technology contributes to increasing cyber violence against women and girls which in turn leads to severe social and economic implications affecting them. It also argues that legislative and policy reforms can limit this type of violence while enabling women and girls to leverage technology for empowerment. It highlights cases of cyber violence in the Arab region and provides an overview of applicable legislative frameworks. The chapter concludes with recommended policy reforms and measures to strengthen and harmonize efforts to combat cyber violence against women and girls in the Arab region.

Abstract

Technology-facilitated violence and abuse including image-based sexual abuse (IBSA) is a phenomenon affecting women and girls around the world. Abusers misuse technology to attack victims and threaten their safety, privacy, and dignity. This abuse is gendered and a form of domestic and sexual violence. In this article, the authors compare criminal law approaches to tackling IBSA in Scotland and Malawi. We critically analyze the legislative landscape in both countries, with a view to assessing the potential for victims to seek and obtain redress for IBSA. We assess the role criminal law has to play in each jurisdiction while acknowledging the limits of criminal law alone in terms of providing redress.

Abstract

Canada criminalized the nonconsensual distribution of intimate images in 2014. Lawmakers and commentators noted that this new offense would fill a legislative gap in relation to “revenge pornography,” which entails individuals (typically men) sharing intimate images of their ex-partners (typically women) online in an attempt to seek revenge or cause them harm. Feminist writers and activists categorize revenge pornography as a symptom and consequence of “rape culture,” in which sexual violence is routinely trivialized and viewed as acceptable or entertaining, and women are blamed for their sexual victimization. In this chapter, I analyze Canada's burgeoning revenge pornography case law and find that these cases support an understanding of revenge pornography as a serious form of communal, gendered, intimate partner violence, which is extremely effective at harming victims because of broader rape culture. While Canadian judges are taking revenge pornography seriously, there is some indication from the case law that they are at risk of relying on gendered reasoning and assumptions previously observed by feminists in sexual assault jurisprudence, which may have the result of bolstering rape culture, rather than contesting it.

Abstract

Perpetrators of technology-facilitated gender-based violence are taking advantage of increasingly automated and sophisticated privacy-invasive tools to carry out their abuse. Whether this be monitoring movements through stalkerware, using drones to nonconsensually film or harass, or manipulating and distributing intimate images online such as deepfakes and creepshots, invasions of privacy have become a significant form of gender-based violence. Accordingly, our normative and legal concepts of privacy must evolve to counter the harms arising from this misuse of new technology. Canada's Supreme Court recently addressed technology-facilitated violations of privacy in the context of voyeurism in R v Jarvis (2019). The discussion of privacy in this decision appears to be a good first step toward a more equitable conceptualization of privacy protection. Building on existing privacy theories, this chapter examines what the reasoning in Jarvis might mean for “reasonable expectations of privacy” in other areas of law, and how this concept might be interpreted in response to gender-based technology-facilitated violence. The authors argue the courts in Canada and elsewhere must take the analysis in Jarvis further to fully realize a notion of privacy that protects the autonomy, dignity, and liberty of all.

Abstract

Doxing refers to the intentional public release by a third party of personal data without consent, often with the intent to humiliate, intimidate, harass, or punish the individual concerned. Intuitively, it is tempting to condemn doxing as a crude form of cyber violence that weaponizes personal data. When it is used as a strategy of resistance by the powerless to hold the powerful accountable, however, a more nuanced understanding is called for. This chapter focuses on the doxing phenomenon in Hong Kong, where doxing incidents against police officers and their family members have skyrocketed since 2019 (a 75-fold increase over 2018). It contends that doxing for political purposes is closely related to digital vigilantism, signifying a loss of confidence in the ruling authority and a yearning for an alternative form of justice. The chapter therefore argues that public interest should be recognized as a legal defense in doxing cases when those discharging or entrusted with public duty are the targets. Equally, it is important to confine the categories of personal data disclosed to information necessary to reveal the alleged wrongdoer or wrongdoing. Only in this way can a fair balance be struck between privacy, freedom of expression, and public interest.

Abstract

As the means and harms of technology-facilitated violence have become more evident, some governments have taken steps to create or empower centralized bodies with statutory mandates as part of an effort to combat it. This chapter argues that these bodies have the potential to meaningfully further a survivor-centered approach to combatting technology-facilitated violence against women – one that places their experiences, rights, wishes, and needs at its core. It further argues that governments should consider integrating them into a broader holistic response to this conduct.

An overview is provided of the operations of New Zealand's Netsafe, the eSafety Commissioner in Australia, Nova Scotia's Cyberscan Unit, and the Canadian Centre for Child Protection in Manitoba. These types of centralized bodies have demonstrated an ability to advance survivor-centered approaches to technology-facilitated violence against women through direct involvement in resolving instances of violence, as well as through education and research. However, these bodies are not a panacea. This chapter outlines critiques of their operations and the challenges they face in maximizing their effectiveness.

Notwithstanding these challenges and critiques, governments should consider creating such bodies or empowering existing bodies with a statutory mandate as one aspect of a broader response to combatting technology-facilitated violence against women. Some proposed best practices to maximize their effectiveness are identified.

Section 7 Responses Beyond Law

Abstract

While research on digital dangers has been growing, studies on their respective solutions and justice responses have not kept pace. The agathokakological nature of technology demands that we pay attention to not only harms associated with interconnectivity, but also the potential for technology to counter offenses and “do good.” This chapter discusses technology as both a weapon and a shield when it comes to violence against women and girls in public spaces and private places. First, we review the complex and varied manifestations of technological gender violence, ranging from the use of technology to exploit, harass, stalk, and otherwise harm women and girls in communal spaces, to offenses that occur behind closed doors. Second, we discuss justice-related responses, underscoring how women and girls have “flipped the script” when their needs are not met. By developing innovative ways to respond to the wrongs committed against them and creating alternate systems that offer a voice, victims/survivors have repurposed technology to redress harms and unite in solidarity with others in an ongoing quest for justice.

Abstract

The reality of domestic violence does not disappear when people enter the digital world, as abusers may use technology to stalk, exploit, and control their victims. In this chapter, we discuss three unique types of technological abuse: (1) financial abuse via banking websites and apps; (2) abuse via smart home devices (i.e., “Internet of Things” abuse); and (3) stalking via geo-location or GPS. We also argue that pregnancy and wellness apps provide an opportunity for meaningful intervention for pregnant victims of domestic violence.

While there is no way to ensure users' safety in all situations, we argue thoughtful considerations while designing and building digital products can result in meaningful contributions to victims' safety. This chapter concludes with PenzeyMoog's (2020) “Framework for Inclusive Safety,” which is a roadmap for building technology that increases the safety of domestic violence survivors. This framework includes three key points: (1) the importance of educating technologists about domestic violence; (2) the importance of identifying possible abuse situations and designing against them; and (3) identifying user interactions that might signal abuse and offering safe interventions.

Abstract

Technology-facilitated abuse, so-called “tech abuse,” through phones, trackers, and other emerging innovations, has a substantial impact on the nature of intimate partner violence (IPV). The current chapter examines the risks and harms posed to IPV victims/survivors from the burgeoning Internet of Things (IoT) environment. IoT systems are understood as “smart” devices such as conventional household appliances that are connected to the internet. Interdependencies between different products together with the devices' enhanced functionalities offer opportunities for coercion and control. Across the chapter, we use the example of IoT to showcase how and why tech abuse is a socio-technological issue and requires not only human-centered (i.e., societal) but also cybersecurity (i.e., technical) responses. We apply the method of “threat modeling,” which is a process used to investigate potential cybersecurity attacks, to shift the conventional technical focus from the risks to systems toward risks to people. Through the analysis of a smart lock, we highlight insufficiently designed IoT privacy and security features and uncover how seemingly neutral design decisions can constrain, shape, and facilitate coercive and controlling behaviors.

Abstract

This chapter examines the structure and sentiment of the Twitter response to Nathan Broad's naming as the originator of an image-based sexual abuse incident following the 2017 Australian Football League Grand Final. Employing Social Network Analysis to visualize the hierarchy of Twitter users responding to the incident and Applied Thematic Analysis to trace the diffusion of differing streams of sentiment within this hierarchy, we produced a representation of participatory social media engagement in the context of image-based sexual abuse. Following two streams of findings, a model of social media user engagement was established that hierarchized the interplay between institutional and personal Twitter users. In this model, it was observed that the Broad incident generated sympathetic and compassionate discourses among an articulated network of social media users. This sentiment gradually diffused to institutional Twitter users – or Reference accounts – through the process of intermedia agenda-setting, whereby the narrative of terrestrial media accounts was altered by personal Twitter users over time.

Abstract

Bystander apathy has been a source of debate for decades. In the past half-century, psychologists developed theoretical frameworks to understand bystander activity, commonly referred to as bystander intervention models (BIMs). More recently, BIMs have been modified to facilitate initiatives to prevent various forms of online victimization. This chapter begins with a review of BIMs and recent applications of bystander intervention research to online environments. We also present several future directions for research along with applications for reducing technology-facilitated violence, including programming recommendations and theoretical development.

Abstract

This chapter examines the phenomenon of internet users attempting to report and prevent online child sexual exploitation (CSE) and child sexual abuse material (CSAM) in the absence of adequate intervention by internet service providers, social media platforms, and government. The chapter discusses the history of online CSE, focusing on regulatory stances over time in which online risks to children have been cast as natural and inevitable by the hegemony of a “cyberlibertarian” ideology. We illustrate the success of this ideology, as well as its profound contradictions and ethical failures, by presenting key examples in which internet users have taken decisive action to prevent online CSE and promote the removal of CSAM. Rejecting simplistic characterizations of “vigilante justice,” we argue instead that the fact that internet users, often young people, report feeling forced to act against online CSE and CSAM undercuts libertarian claims that internet regulation is impossible, unworkable, and unwanted. Recent shifts toward a more progressive ethos of online harm minimization are promising; however, this ethos risks offering a new legitimizing ideology for online business models that will continue to put children at risk of abuse and exploitation. In conclusion, we suggest ways forward toward an internet built in the interests of children, rather than profit.

Abstract

The nonconsensual taking or sharing of nude or sexual images, also known as “image-based sexual abuse,” is a major social and legal problem in the digital age. In this chapter, we examine the problem of image-based sexual abuse in the context of digital platform governance. Specifically, we focus on two key governance issues: first, the governance of platforms, including the regulatory frameworks that apply to technology companies; and second, the governance by platforms, focusing on their policies, tools, and practices for responding to image-based sexual abuse. After analyzing the policies and practices of a range of digital platforms, we identify four overarching shortcomings: (1) inconsistent, reductionist, and ambiguous language; (2) a stark gap between the policy and practice of content regulation, including transparency deficits; (3) imperfect technology for detecting abuse; and (4) the responsibilization of users to report and prevent abuse. Drawing on a model of corporate social responsibility (CSR), we argue that until platforms better address these problems, they risk failing victim-survivors of image-based sexual abuse and are implicated in the perpetration of such abuse. We conclude by calling for reasonable and proportionate state-based regulation that can help to better align governance by platforms with CSR initiatives.

Abstract

The emergence of technology-facilitated violence and abuse (TFVA) has led to calls for increased collaboration across and among sectors. Growing recognition of the need for multistakeholder collaboration (MSC) between industry, civil society, government, and academia reflects the number of moving parts involved, the need for specialized knowledge and skills in relation to certain issues, and the importance of recognizing the ways in which interlocking systems of subordination can lead to very different experiences with and impressions of social justice issues (Crenshaw, 1991). Numerous financial, professional, and personal factors incentivize MSC. Notwithstanding growing opportunities and incentives for TFVA-related MSC, collaborative efforts bring with them their own set of challenges. This chapter integrates elements of the literature on MSC, particularly those focusing on risks, benefits, and ways forward, with excerpts from a dialogue between an academic and community organization leader who are collaborating on a research partnership encompassing TFVA against young Canadians.

Abstract

Technology-facilitated violence and abuse is a truly global problem. As the diverse perspectives and experiences featured in this book have shown, the deep entanglement between technologies, inequality, marginalization, abuse, and violence requires multi-faceted and collaborative responses that exist within and beyond the law. When this chapter was written, society was (and continues to be) facing an unprecedented challenge in COVID-19 – a global pandemic. At the same time, a renewed focus on racist police and civilian violence has occurred following the killings of George Floyd, Ahmaud Arbery, and Breonna Taylor in the United States. As we describe in this chapter, these two major moments are ongoing reminders of the profound social inequalities within our global communities, which are grounded in systemically discriminatory oppressions and their intersections. This chapter draws together some thoughts on technology-facilitated violence and abuse in an era of COVID-19 and antiracist protest. It explores these within the context of the book as a whole, highlighting the importance of improved understanding of, and responses to, technology-facilitated violence and abuse as part of a broader push for social justice.

DOI
10.1108/9781839828485
Publication date
2021-06-04
Book series
Emerald Studies In Digital Crime, Technology and Social Harms
Editors
Series copyright holder
Emerald Publishing Limited
ISBN
978-1-83982-849-2
eISBN
978-1-83982-848-5