Search results: 1–10 of over 14,000
On Inauguration Day 2017, Milo Yiannopoulos gave a talk sponsored by the University of Washington College Republicans entitled “Cyberbullying Isn’t Real.” This chapter is based on participant-observation conducted in the crowd outside the venue that night and analyzes the violence that occurs when the blurring of the boundaries between “free” and “hate” speech is enacted on the ground. This ethnographic examination rethinks relationships between law, bodies, and infrastructure as it considers debates over free speech on college campuses from the perspectives of legal and public policy, as well as those who supported and protested Yiannopoulos’s right to speak at the University of Washington. First, this analysis uses ethnographic research to critique the absolutist free speech argument presented by the legal scholars Erwin Chemerinsky and Howard Gillman. Second, this essay uses the theoretical work of Judith Butler and Sara Ahmed to make claims concerning relationships between speech, vulnerability, and violence. In so doing, this chapter argues that debates over free speech rights on college campuses need to be situated by processes of neoliberalization in higher education and reconsidered in light of the ways in which an absolutist position disproportionately protects certain people at the expense of certain others.
Purpose – This chapter explores the topic of free speech protections and social media use in academia through an examination of the current legal landscape as it applies to various stakeholders on university campuses in the United States. The authors focus this examination primarily on public universities. Methodology/Approach – Legal research methods were utilized, including an analysis of relevant United States federal and state laws, case law, and secondary sources such as law reviews. Non-legal sources, such as academic journals, were also reviewed, with particular emphasis on topics such as university policies, tenure protections, academic freedom, as well as current events. Findings – The law regarding personal social media communications in a university setting is a series of complex and interconnected legal questions. Courts are still fleshing out how free speech protections, personal social media use, and other relevant legal protections (e.g., employment law) may intersect in a university-related case. Outcomes of cases are highly fact driven, and legal precedent is still being established. Originality/Value – This chapter offers a comprehensive examination of the topic of free speech and social media use in United States academia by (1) examining legal protections as applied to various stakeholders on a college campus and (2) analyzing the current legal landscape of social media cases involving universities.
The article examines the issue of free speech in a law and economics perspective. The property rights approach is contrasted with the common law and constitutional standpoints. Consequentialist and market efficiency may not provide adequate criteria for judging limitations to freedom of speech. Constitutional instruments may then be required.
In this article, I trace the slow evolution of the contemporary idea of “academic freedom” through two court cases of the early twentieth century. Unfortunately for academics, this history does not end with a ringing endorsement of the right of academics to speak freely without fear of losing their teaching jobs. Rather, the courts have tended to agree that while faculty do have freedom of speech under the First Amendment, they do not necessarily have the right to keep their jobs no matter what they say. The article illustrates the courts’ early validation of punishing the “free speech” of employees if it promotes a “bad tendency” in Patterson v. Colorado in 1907, and concludes with Oliver Wendell Holmes’s 1919 opinion introducing the concept of the “marketplace of ideas” for evaluating speech, even though the defendants were convicted of espionage for exercising their “freedom of speech.” For educators, freedom of speech is essential to the academic freedom to pursue their discipline.
Purpose – This chapter has three general purposes: to trace Canada’s hate speech laws from their policy inception to their current state; to identify the importance that media and mass communication have played in the creation and development of Canada’s hate speech laws; and to demonstrate the critical relationship that media has had to significant legal cases on hate speech. Methodology/Approach – This chapter historically maps the policy development of and legal challenges to Canada’s hate speech laws. It takes directed notice of the relationship of media and mass communication to the development and implementation of those laws. It engages with libertarian and egalitarian arguments on free speech throughout the chapter testing these ideas through an examination of the legal cases cited. Findings – Canadian legislators and courts have long grappled with the balancing of rights with respect to the issue of “hate speech.” Advances in mass communication technology have added intricate challenges to that legal balancing. Awareness of media’s allure to hatemongers and racial extremists and of media’s protean characteristics make regulation of its hateful content a continuous legal challenge. Canada’s greatest challenge yet to the regulation of hate speech will be its adaptive response to the growing phenomenon of online hate. Originality/Value – This chapter highlights the little recognized prescient statements made by the Cohen Committee about the allure of media and the dangers of its technological advancements in Canadian free speech debates. Providing a comprehensive survey of Canada’s “hate speech” laws, it recognizes the importance that advancements in mass communication have played in the creation and development of Canada’s “hate speech” laws.
This chapter is about online hate speech propagated via platforms operated by social media companies (SMCs). It examines the options open to states in forcing SMCs to take responsibility for the hateful content that appears on their sites. It examines the technological and legal context for imposing legal obligations on SMCs, and analyses initiatives in Germany, the United Kingdom, the European Union and elsewhere. It argues that while SMCs can play a role in controlling online hate speech, there are limitations to what they can achieve.
Hate speech has in recent times become a troubling development. It has different meanings to different people in different cultures. The anonymity and ubiquity of social media provide a breeding ground for hate speech and make combating it seem like a lost battle. However, what may constitute hate speech in a culturally or religiously neutral society may not be perceived as such in a polarized multi-cultural and multi-religious society like Nigeria. Defining hate speech, therefore, may be contextual. Hate speech in Nigeria may be perceived along ethnic, religious and political boundaries. The purpose of this paper is to check for the presence of hate speech on social media platforms like Twitter and, where present, to determine to what degree it is permitted. It also intends to find out what monitoring mechanisms social media platforms like Facebook and Twitter have put in place to combat hate speech. “Lexalytics” is a term the authors coined from the words “lexical analytics” for the purpose of opinion mining unstructured texts like tweets.
This research developed a Python software package called the polarized opinions sentiment analyzer (POSA), adopting an ego social network analytics technique in which an individual’s behavior is mined and described. POSA uses a customized Python N-gram dictionary of local, context-based terms that may be considered hate terms. It then applies the Twitter API to stream tweets from popular and trending Nigerian Twitter handles in politics, ethnicity, religion, social activism, racism, etc., and filters the tweets against the custom dictionary, using unsupervised classification to label the texts as either positive or negative sentiments. The outcome is visualized using tables, pie charts and word clouds. A similar implementation was also carried out using R-Studio code; the two sets of results were compared, and a t-test was applied to determine whether there was a significant difference between them. The research methodology can be classified as both qualitative and quantitative: qualitative in its classification of the data, and quantitative in its computation of text into vectors scored as either negative or positive.
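The dictionary-based filtering the abstract describes can be sketched as follows. This is a minimal illustration only: the hate terms, tweets, and function names below are hypothetical stand-ins, since the actual POSA dictionary of local Nigerian terms and its Twitter API streaming layer are not reproduced in the abstract.

```python
# Minimal sketch of N-gram dictionary filtering in the style described
# for POSA. HATE_TERMS and the sample tweets are hypothetical; the real
# system streams live tweets via the Twitter API and matches against a
# custom dictionary of local context-based terms.

HATE_TERMS = {"slur_a", "slur_b", "attack word"}  # placeholder terms

def ngrams(tokens, n):
    """Return the list of n-grams (as tuples) in a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def classify(tweet, dictionary=HATE_TERMS, max_n=2):
    """Label a tweet 'negative' if any 1..max_n-gram matches a
    dictionary term, else 'positive' (unsupervised, rule-based)."""
    tokens = tweet.lower().split()
    for n in range(1, max_n + 1):
        for gram in ngrams(tokens, n):
            if " ".join(gram) in dictionary:
                return "negative"
    return "positive"

tweets = ["this is a normal comment", "you slur_a go away"]
labels = [classify(t) for t in tweets]
print(labels)  # ['positive', 'negative']
```

A real deployment would replace the hard-coded list with the streamed tweets and the custom dictionary, but the unsupervised positive/negative split works the same way.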
The findings from two sets of experiments on POSA and R are as follows. In the first experiment, the POSA software found that the Twitter handles analyzed contained between 33 and 55 percent hate content, while the R results show hate content ranging from 38 to 62 percent. Performing a t-test on both the positive and negative scores for POSA and R-Studio yields p-values of 0.389 and 0.289, respectively, at an α value of 0.05, implying that there is no significant difference between the results from POSA and R. From the second experiment, performed on 11 local handles with 1,207 tweets, the authors deduce the following: the percentage of hate content classified by POSA is 40 percent, while the percentage classified by R is 51 percent; the accuracy of hate speech classification predicted by POSA is 87 percent, and 86 percent for free speech; and the accuracy of hate speech classification predicted by R is 65 percent, and 74 percent for free speech. This study reveals that neither Twitter nor Facebook has an automated monitoring system for hate speech, and no benchmark is set to decide the level of hate content allowed in a text. The monitoring is instead done by humans, whose assessment is usually subjective and sometimes inconsistent.
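The POSA-versus-R comparison rests on a two-sample t-test, whose statistic can be sketched with the standard library alone. The score lists below are hypothetical placeholders within the reported 33–62 percent ranges, not the study's data; a full analysis would also derive the p-value from the t-distribution (e.g., via scipy.stats.ttest_ind).

```python
# Sketch of the Welch two-sample t statistic used to compare two sets
# of hate-content scores, as in the POSA-vs-R comparison. The score
# lists are hypothetical; the study reports p-values of 0.389 and
# 0.289 at alpha = 0.05, i.e., no significant difference.
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples
    (sample variances, unequal-variance form)."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

posa_scores = [33, 41, 47, 52, 55]  # hypothetical percent hate content
r_scores = [38, 45, 50, 58, 62]     # hypothetical percent hate content

t = welch_t(posa_scores, r_scores)
print(round(t, 3))
```

A |t| this small, well inside the usual critical region for five-element samples, is the shape of result the study reports: the two implementations do not differ significantly.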
This study establishes that hate speech is on the increase on social media. It also shows that hate mongers can be pinned down through the contents of their messages. The POSA system could be used as a plug-in by Twitter to detect and stop hate speech on its platform. The study was limited to public Twitter handles only. N-grams are effective features for word-sense disambiguation, but the N-gram feature vector can take on enormous proportions, which in turn increases its sparsity.
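The sparsity limitation noted above can be illustrated by counting distinct n-grams over a tiny, hypothetical corpus: the vocabulary of features grows with n, while each individual text contains only a handful of them, so most entries of any per-text feature vector are zero.

```python
# Illustrates why N-gram feature vectors grow large and sparse: the
# number of distinct n-gram features rises with n, while a single
# text of k tokens contributes at most k - n + 1 of them. The corpus
# is a hypothetical stand-in for a tweet collection.

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

corpus = [
    "free speech is not hate speech",
    "hate speech is not free speech",
    "speech on social media is free",
]

for n in (1, 2, 3):
    vocab = set()
    for text in corpus:
        vocab.update(ngrams(text.split(), n))
    per_text = max(len(t.split()) - n + 1 for t in corpus)
    print(f"n={n}: {len(vocab)} features, at most {per_text} per text")
```

Even on three six-word sentences the bigram vocabulary outgrows the unigram one; on a real tweet stream the feature space explodes while each tweet stays short, which is exactly the sparsity problem the abstract flags.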
The findings of this study show that if urgent measures are not taken to combat hate speech, there could be dire consequences, especially in highly polarized societies that are routinely inflamed along religious and ethnic lines. Tempers flare daily on social media over comments made by participants. This study has also demonstrated that it is possible to implement a technology that can track and terminate hate speech on a micro-blog like Twitter. This can also be extended to other social media platforms.
This study will help to promote a more positive society by ensuring that social media is used positively for the benefit of humankind.
The findings can be used by social media companies to monitor user behaviors, and pin hate crimes to specific persons. Governments and law enforcement bodies can also use the POSA application to track down hate peddlers.
It is the contention of this paper that the state of siege on the part of students against academics and administrators of the 1960s has been replaced by one led by university administrators who are now waging a war of “political correctness” against students and faculty. These administrators were in large part the radicals of the 1960s. The present paper attempts to analyze the effects of government involvement in this industry on this phenomenon and the role of free speech rights in the presence and absence of marketplace considerations.
Discusses political correctness and the economics of higher education.
The contention is that competition brings about a better product at a lower price, and that the educational sector is no exception to this general rule. If free enterprise were allowed to operate in this context, many of these difficulties would disappear.
The paper offers insights into today's higher education industry and how economic analysis can explain the current “state of siege” on university campuses.
Purpose – This chapter demonstrates the power that Google, Apple, Facebook, Amazon and Microsoft (or the “GAFAM”) exercise over platforms within society, highlights the alt-right’s use of GAFAM sites and services as a platform for hate, and examines GAFAM’s establishment and use of hate content moderation apparatuses to de-platform alt-right users and delete hate content. Approach – Drawing upon a political economy of communications approach, this chapter demonstrates GAFAM’s power in society. It also undertakes a reading of GAFAM “terms of service agreements” and “community guidelines” documents to identify GAFAM hate content moderation apparatuses. Findings – GAFAM are among the most powerful platforms in the world, and their content moderation apparatuses are empowered by the US government’s cyber-libertarian approach to Internet law and regulation. GAFAM are defining hate speech, deciding what’s to be done about it, and censoring it. Value – This chapter probes GAFAM’s hate content moderation apparatuses for Internet platforms, and shows how GAFAM enable and constrain the alt-right’s hate speech on their platforms. It also reflexively assesses the politics of empowering GAFAM to de-platform the alt-right.