This chapter is about online hate speech propagated via platforms operated by social media companies (SMCs). It examines the options open to states in forcing SMCs to take responsibility for the hateful content that appears on their sites, surveys the technological and legal context for imposing legal obligations on SMCs, and analyses initiatives in Germany, the United Kingdom, the European Union and elsewhere. It argues that while SMCs can play a role in controlling online hate speech, there are limitations to what they can achieve.
Purpose – This chapter demonstrates the power that Google, Apple, Facebook, Amazon and Microsoft (or the “GAFAM”) exercise over platforms within society, highlights the alt-right’s use of GAFAM sites and services as a platform for hate, and examines GAFAM’s establishment and use of hate content moderation apparatuses to de-platform alt-right users and delete hate content.
Approach – Drawing upon a political economy of communications approach, this chapter demonstrates GAFAM’s power in society. It also undertakes a reading of GAFAM “terms of service agreements” and “community guidelines” documents to identify GAFAM hate content moderation apparatuses.
Findings – GAFAM are among the most powerful platforms in the world, and their content moderation apparatuses are empowered by the US government’s cyber-libertarian approach to Internet law and regulation. GAFAM are defining hate speech, deciding what is to be done about it, and censoring it.
Value – This chapter probes GAFAM’s hate content moderation apparatuses for Internet platforms, and shows how GAFAM enable and constrain the alt-right’s hate speech on their platforms. It also reflexively assesses the politics of empowering GAFAM to de-platform the alt-right.
Purpose: The study makes use of the situational crime prevention framework to analyze online community reactions to the banning of deepfake pornographic content from Reddit.
Methodology/approach: Qualitative text analysis of user comments posted to Reddit’s rule-change announcement (N = 582) was carried out. Analysis relied on the original 25 techniques of situational crime prevention that were adapted into a table of activities and mechanisms meant specifically for use with online platforms.
Findings: Analysis indicates that Reddit users voiced several shortcomings in Reddit’s current platform management approach. In particular, users emphasized the lack of a consistent and transparent approach to community rule enforcement, as they believed the rule changes to be sudden and poorly reasoned. The generally reactionary nature of Reddit’s approach to moderating community-harming actions was also a point of emphasis, alongside the platform’s continued rigid stance on freedom of expression, even with regard to illegal and demeaning content. Regarding the new rules on involuntary pornography and the sexualization of minors, enforcement of sitewide policy appears contingent on external influences, such as attention from mainstream media or financial matters, rather than stemming from an inherent stance on decreasing community-harming activities.
Research limitations: The study only pertains to a specific rule change by Reddit and subsequent reactions from the platform’s community. Future research is needed to test the applicability of the adapted table of 25 techniques of situational crime prevention in the context of other online platforms.
Originality/value: First, the study applies the situational crime prevention approach in the context of moderating online platforms. Second, results from the study shed light on current practices in online content moderation from the perspective of criminological theory, as well as inform specific actions that can be taken to decrease the presence of community-harming phenomena and improve the enforcement of sitewide policy rules in general. Finally, by adapting the original 25 techniques of situational crime prevention to online content moderation, the study suggests a tentative roadmap for similar research in the future.
The nonconsensual taking or sharing of nude or sexual images, also known as “image-based sexual abuse,” is a major social and legal problem in the digital age. In this chapter, we examine the problem of image-based sexual abuse in the context of digital platform governance. Specifically, we focus on two key governance issues: first, the governance of platforms, including the regulatory frameworks that apply to technology companies; and second, governance by platforms, focusing on their policies, tools, and practices for responding to image-based sexual abuse. After analyzing the policies and practices of a range of digital platforms, we identify four overarching shortcomings: (1) inconsistent, reductionist, and ambiguous language; (2) a stark gap between the policy and practice of content regulation, including transparency deficits; (3) imperfect technology for detecting abuse; and (4) the responsibilization of users to report and prevent abuse. Drawing on a model of corporate social responsibility (CSR), we argue that until platforms better address these problems, they risk failing victim-survivors of image-based sexual abuse and are implicated in the perpetration of such abuse. We conclude by calling for reasonable and proportionate state-based regulation that can help to better align governance by platforms with CSR initiatives.
The purpose of this paper is to investigate how decisions of managers and administrators of online communities on norms and rules affect the sense of virtual community (SOVC), which is an important factor of the quality of online information.
The study followed a two-level research design based on 970 online community members, nested within 36 online communities. Data collection consisted of two stages: first, a web survey of a sample of online community members was conducted, followed by a web survey of administrators of the same online communities. A two-level hierarchical regression analysis was used to test the hypotheses.
The empirical results suggest that prominence of rules under the condition of members’ participation in their creation, presence of reputation mechanisms, and content moderation contribute significantly to the SOVC, while presence of lighter sanctions and interactive moderation do not.
Since this study is based on web forums, the validity of the proposed hypotheses for other types of online communities cannot be firmly established. Additional elements of online community management could be considered for a stronger system-level explanation of the SOVC.
The study demonstrates that online community administrators need to be deliberate in creating and enforcing norms, as their decisions have an impact on the SOVC and consequently on the quality of online information.
The literature considers many factors of the SOVC, but no previous study has considered how community management is associated with this phenomenon.
2019 was a big year. The Great Hack and the investigative journalism of Carole Cadwalladr exposed the machinations of Cambridge Analytica. The US Senate summoned Mark Zuckerberg to face an extended interrogation on the ways in which Facebook screens content. Greta Thunberg fomented a global ‘climate emergency’ movement with attacks on lying political leaders. If 2016 saw ‘post-truth’ rise to prominence as a concept, 2019 was characterised by myriad efforts to champion truth and counter misinformation. And then the COVID-19 crisis hit. The urgency we began to feel in 2019 to address the ills in our society and hunt for a cause and cure has intensified. We now daily ask at whose door we can lay the blame and, from there, what solutions we can implement. For now, we have drawn the battle lines between tech and society and looked to pit governments against technologies which have changed the face of media. But amidst this flurry of activity, we need to stop and ask ourselves: are we setting our sights on the right actors, and are we taking the right next steps?
Written in the midst of the COVID-19 pandemic, this contribution responds to the burning debate on how to overcome our current infodemic and immunise against future outbreaks. It offers an alternative narrative and argues for a much more radical course of action. It posits that we have misidentified the root cause of our current post-truth reality, arguing that we are in fact experiencing the extreme consequence of decades of poor education the world over. It champions a shift from drilling young people in so-called facts and figures to developing those deep levels of literacy in which critical thinking plays a fundamental part. This is not to exculpate the Facebooks and Twitters of our time – new tech has no doubt facilitated the dissemination of half-truths and untruths. But it is to insist upon contextualising our current, albeit horrifying, reality within a much more complex and longer-running societal challenge. In other words, this chapter makes a fresh clarion call for rethinking how we have got to where we are and where we might most meaningfully go next, as well as how, indeed, we might conceptualise the links between technology, government, media and education.
EUROPE: Online content moderation rules will toughen