Search results
1 – 10 of over 1000
Abstract
Purpose – Attitudinal inoculation has a long history of success in communication studies. A wealth of literature has shown it to be an effective strategy for preventing the assimilation of beliefs and attitudes in several domains, including healthcare, politics, and advertising. Despite its demonstrated efficacy, its utility as a means of preventing the adoption of beliefs and attitudes consistent with strategic messaging distributed by malicious actors has yet to be sufficiently evaluated. This chapter introduces attitudinal inoculation as a viable strategy for challenging online disinformation produced by violent extremist groups.
Methods – Through a systematic review of the literature on attitudinal inoculation and disinformation, this chapter represents an attempt to link broad themes of narrative persuasion with the field of counter-terrorism.
Findings – This chapter will offer specific guidance on the development of inoculation messages intended to mitigate the persuasive efficacy of online disinformation produced and distributed by violent extremist organizations.
Originality/Value – As one of the first attempts to demonstrate the utility of attitudinal inoculation in the field of terrorism and radicalization studies, this chapter presents a novel approach to understanding contemporary issues of political extremism.
Zulma Valedon Westney, Inkyoung Hur, Ling Wang and Junping Sun
Abstract
Purpose
Disinformation on social media is a serious issue. This study examines the effects of disinformation on COVID-19 vaccination decision-making to understand how social media users make healthcare decisions when disinformation is presented in their social media feeds. It examines trust in post owners as a moderator on the relationship between information types (i.e. disinformation and factual information) and vaccination decision-making.
Design/methodology/approach
This study conducts a scenario-based web survey experiment to collect extensive survey data from social media users.
Findings
This study reveals that information types differently affect social media users' COVID-19 vaccination decision-making and finds a moderating effect of trust in post owners on the relationship between information types and vaccination decision-making. For those with a high degree of trust in post owners, the effect of information types on vaccination decision-making is large. In contrast, information types do not affect the decision-making of those with a very low degree of trust in post owners. In addition, identification and compliance are found to affect trust in post owners.
Originality/value
This study contributes to the literature on online disinformation and individual healthcare decision-making by demonstrating the effect of disinformation on vaccination decision-making and providing empirical evidence on how trust in post owners impacts the effects of information types on vaccination decision-making. This study focuses on trust in post owners, unlike prior studies that focus on trust in information or social media platforms.
Abstract
Purpose
Disinformation, false information designed with the intention to mislead, can significantly damage organizational operation and reputation, interfering with communication and relationship management across a broad range of risk and crisis contexts. Modern digital platforms and emerging technologies, including artificial intelligence (AI), introduce novel risks in crisis management (Guthrie and Rich, 2022). Disinformation literature in security and computer science has assessed how previously introduced technologies have affected disinformation, demanding a systematic and coordinated approach to sustainable counter-disinformation efforts. However, there is a lack of theory-driven, evidence-based research and practice in public relations that advises how organizations can effectively and proactively manage risks and crises driven by AI (Guthrie and Rich, 2022).
Design/methodology/approach
As a first step in closing this research-practice gap, the authors synthesize theoretical and technical literature characterizing the effects of AI on disinformation. Building on this review, the authors propose a conceptual framework for disinformation response in the corporate sector that assesses (1) technologies affecting disinformation attacks and counterattacks and (2) how organizations can proactively prepare and equip communication teams to better protect businesses and stakeholders.
Findings
This research illustrates that future disinformation response efforts will not be able to rely solely on detection strategies, as AI-created content becomes increasingly convincing (and ultimately indistinguishable), and that future disinformation management efforts will need to rely on content influence rather than volume (due to emerging capabilities for automated production of disinformation). Built upon these fundamental, literature-driven characteristics, the framework provides organizations with actor-level and content-level perspectives on influence and discusses their implications for disinformation management.
Originality/value
This research provides a theoretical basis and practitioner insights by anticipating how AI technologies will impact corporate disinformation attacks and outlining how companies can respond. The proposed framework provides a theory-driven, practical approach for effective, proactive disinformation management systems with the capacity and agility to detect risks and mitigate crises driven by evolving AI technologies. Together, this framework and the discussed strategies offer great value to forward-looking disinformation management efforts. Subsequent research can build upon this framework as AI technologies are deployed in disinformation campaigns, and practitioners can leverage this framework in the development of counter-disinformation efforts.
Mitali Desai, Rupa G. Mehta and Dipti P. Rana
Abstract
Purpose
Scholarly communications, particularly questions and answers (Q&A) present on digital scholarly platforms, provide a new avenue to gain knowledge. However, several studies have raised a concern about content anomalies in these Q&A and suggested a proper validation before utilizing them in scholarly applications such as influence analysis and content-based recommendation systems. The content anomalies are referred to as disinformation in this research. The purpose of this research is, first, to assess scholarly communications in order to identify disinformation and, second, to help scholarly platforms determine the scholars who probably disseminate such disinformation. These scholars are referred to as the probable sources of disinformation.
Design/methodology/approach
To identify disinformation, the proposed model deduces (1) content redundancy and contextual redundancy in questions, (2) contextual non-relevance in answers with respect to the questions and (3) quality of answers with respect to the expertise of the answering scholars. The model then determines the probable sources of disinformation using statistical analysis.
Findings
The model is evaluated on ResearchGate (RG) data. Results suggest that the model efficiently identifies disinformation from scholarly communications and accurately detects the probable sources of disinformation.
Practical implications
Different platforms with communication portals can use this model as a regulatory mechanism to restrict the propagation of disinformation. Scholarly platforms can use this model to generate an accurate influence assessment mechanism and also relevant recommendations for their scholars.
Originality/value
Existing studies mainly deal with validating the answers using statistical measures. The proposed model focuses on questions as well as answers and performs a contextual analysis using an advanced word embedding technique.
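The kind of contextual analysis described above can be illustrated with a minimal sketch: represent a question and an answer as averaged word vectors and compare them by cosine similarity, flagging answers whose vectors sit far from the question's. The toy embeddings, threshold and function names below are hypothetical stand-ins for illustration only, not the paper's actual technique or data.

```python
import math

# Toy word vectors standing in for an advanced word-embedding model;
# all values here are hypothetical and chosen only for illustration.
EMBEDDINGS = {
    "gradient":  [0.9, 0.1, 0.0],
    "descent":   [0.8, 0.2, 0.1],
    "optimizer": [0.7, 0.3, 0.1],
    "recipe":    [0.0, 0.1, 0.9],
    "baking":    [0.1, 0.0, 0.8],
}

def sentence_vector(words):
    """Average the vectors of the known words in a sentence."""
    vecs = [EMBEDDINGS[w] for w in words if w in EMBEDDINGS]
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def is_relevant(question, answer, threshold=0.5):
    """Treat an answer as contextually relevant to a question when their
    averaged embeddings are close in cosine similarity."""
    return cosine(sentence_vector(question), sentence_vector(answer)) >= threshold

q = ["gradient", "descent"]
print(is_relevant(q, ["optimizer", "gradient"]))  # semantically close answer
print(is_relevant(q, ["recipe", "baking"]))       # off-topic answer
```

In practice the averaged vectors would come from a trained embedding model and the threshold would be tuned on labelled Q&A pairs; the structure of the comparison stays the same.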
YiShu Wu, Dandan Wang and Feicheng Ma
Abstract
Purpose
The purpose of this study is to explore the evolutionary path and stable strategy for the competitive dissemination between disinformation and knowledge on social media to provide effective solutions to curb the dissemination of disinformation and promote the spread of knowledge.
Design/methodology/approach
Based on social capital (SC) theory, the benefit matrix is constructed and an evolutionary game model is established in this paper. Through model solving and MATLAB simulation, the factors that influence disinformation-believing users (DUs) and knowledge-believing users (KUs) to choose different strategies are analyzed.
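The kind of evolutionary game described above can be sketched with replicator dynamics: the fraction of users choosing a strategy grows when that strategy's payoff beats the alternative. The payoff values and update rule below are hypothetical stand-ins for the paper's social-capital-based benefit matrix; they only illustrate how such a model is simulated.

```python
# Minimal replicator-dynamics sketch of competitive dissemination:
# x is the fraction of users choosing to spread (e.g. knowledge),
# and it evolves toward whichever strategy currently pays more.

def replicator_step(x, payoff_spread, payoff_silent, dt=0.1):
    """One Euler step of the replicator equation
    dx/dt = x * (1 - x) * (payoff_spread - payoff_silent)."""
    return x + dt * x * (1 - x) * (payoff_spread - payoff_silent)

def simulate(x0, steps=200):
    x = x0
    for _ in range(steps):
        # Hypothetical payoffs: spreading pays more the more users
        # already spread (a network / social-capital effect).
        payoff_spread = 0.4 + 0.6 * x
        payoff_silent = 0.5
        x = replicator_step(x, payoff_spread, payoff_silent)
    return x

# A high initial dissemination willingness converges toward "everyone
# spreads"; a low one dies out -- the bistable equilibria typical of
# such games, matching the role of initial willingness in the findings.
print(round(simulate(0.9), 3))
print(round(simulate(0.05), 3))
```

The interior equilibrium here sits where the two payoffs are equal (x = 1/6), and initial conditions on either side of it diverge to opposite outcomes, which is why initial dissemination willingness matters so much in models of this kind.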
Findings
The initial dissemination willingness, the disinformation infection probability, the knowledge infection probability and the knowledge penetration probability are shown to be crucial factors influencing the game equilibrium in the competitive dissemination process of disinformation and knowledge. Moreover, some countermeasures and recommendations for the governance of disinformation are proposed.
Originality/value
Most current research focuses on models of disinformation dissemination but ignores the interaction between disinformation and knowledge in the diffusion process. This study reveals the dynamic mechanism by which social media users disseminate disinformation and knowledge and is expected to promote a cleaner cyberspace.
Naresh Kumar Agarwal and Farraj Alsaeedi
Abstract
Purpose
This paper seeks to disambiguate the phenomenon of fake news by clarifying terms, highlighting current efforts (including the importance of critical thought and awareness) and proposing a test for genuine serendipity.
Design/methodology/approach
The authors review the literature, primarily from a library and information science perspective, and arrive at a theoretical framework and model.
Findings
The authors find various initiatives to fight fake news. Building upon Karlova and Fisher's (2013) model as well as research on critical thinking and serendipity, the primary contribution of the paper is a disinformation behavior framework and model. The model includes both the problem of disinformation from a creator and user perspective, as well as the solutions to fight it.
Research limitations/implications
The framework will guide practitioners and researchers in library and information science and beyond, as well as other stakeholders, in both understanding the phenomenon and leading the fight against it.
Originality/value
The spreading of false information has become an alarming phenomenon in the last few years, leading to the popularity of terms such as misinformation, disinformation, infodemic and fake news. While information professionals have been called upon to lead the fight against fake news, in the absence of a comprehensive understanding of the phenomenon, current efforts have been isolated and inadequate. Most models of information behavior deal with information, and not misinformation or disinformation per se.
Abstract
Purpose
Librarians have long been part of a group of professionals that took responsibility for the reliability of information and protected their users from the bad epistemic consequences caused by inaccurate information. Now users are acquiring information from the internet and using it to make important decisions. This method of acquisition is threatening the epistemological protection librarians have provided. The problem is one of verifiability: users do not have a way to verify whether information is accurate or inaccurate. Verification is even more difficult with disinformation. The purpose of this paper is to explore possible alternatives to this problem and recommend a new multi-literacy instructional method as the solution.
Design/methodology/approach
A review of current literature confirmed the problem of disinformation. This paper examines possible solutions for controlling disinformation and makes suggestions on how librarians can use instruction to protect internet users from the harmful effects of false information.
Findings
Research found that disinformation is a widespread problem and that its use has epistemic consequences harmful to internet users. The paper proposes a new method of instruction using a combination of learning paradigms to help users protect themselves from disinformation.
Originality/value
The paper presents a new instructional method that may help in identifying disinformation and help internet users avoid the bad epistemic consequences of using disinformation.
Abstract
Purpose
Taking the automatic detection of information, misinformation, and disinformation as its point of departure, the purpose of this paper is to examine and discuss various conceptions of information, misinformation, and disinformation within the philosophy of information.
Design/methodology/approach
The examinations are conducted within a Gricean framework in order to account for the communicative aspects of information, misinformation, and disinformation as well as the detection enterprise.
Findings
While there often is an exclusive focus on truth and falsity as that which distinguish information from misinformation and disinformation, this paper finds that the distinguishing features are actually intention/intentionality and non-misleadingness/misleadingness – with non-misleadingness/misleadingness as the primary feature. Further, the paper rehearses the argument in favor of a true variety of disinformation and extends this argument to include true misinformation.
Originality/value
The findings are novel and pose a challenge to the possibility of automatic detection of misinformation and disinformation. In particular, the notions of true disinformation and true misinformation, as varieties of disinformation and misinformation, force the true/false dichotomy between information and mis-/disinformation to collapse.
Abstract
Purpose
The study aims to investigate the predictors of users engaging in combat against the spread of misinformation and disinformation online, and of actively sharing disinformation. The study advances an understanding of users' active engagement with disinformation as political participation, especially as linked to violent activism, in alignment with the view of disinformation as a political weapon.
Design/methodology/approach
A survey of 502 Israeli internet users inquired into respondents' political participation, trust and orientation, definitions and perceptions of "Fake News," previous engagement in sharing misinformation and disinformation items, and combating or intending to combat the spread of disinformation.
Findings
In addition to identifying predictors for each practice, the findings indicate that sharing disinformation and combating it are closely linked. Both are also directly linked to political participation of various kinds. Most interestingly, working for a political party significantly correlates with knowingly sharing disinformation items, and participating in illegal or violent political activities significantly correlates with both knowingly sharing disinformation and actively combating its spread.
Originality/value
The spread of disinformation online and its implications have received much scholarly as well as public attention in recent years. However, the characteristics of individual users who share or combat the spread of disinformation online, as forms of political participation, have not been examined. This study fills this gap by inquiring into such practices and the behaviors, perceptions and demographics that predict them.