Search results
1 – 10 of over 2,000
This study empirically investigates how the COVID-infodemic manifests differently in different languages and in different countries. This paper focuses on the topical and…
Abstract
Purpose
This study empirically investigates how the COVID-infodemic manifests differently in different languages and in different countries. This paper focuses on the topical and temporal features of misinformation related to COVID-19 in five countries.
Design/methodology/approach
COVID-related misinformation was retrieved from 4,487 fact-checked articles. A novel approach to cross-lingual topic extraction was applied: the rectr algorithm, built on aligned word embeddings, was utilised. To examine how the COVID-infodemic interplays with the pandemic, a time series analysis was used to construct and compare their temporal development.
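The abstract does not detail how rectr aligns embedding spaces, but the core idea behind aligned word embeddings can be sketched with a toy orthogonal Procrustes alignment: given vectors for the same anchor words in two languages, find the rotation that maps one space onto the other. A minimal sketch, assuming NumPy; all data below is invented for illustration and is not from the study.

```python
import numpy as np

def procrustes_align(src, tgt):
    """Orthogonal map W minimising ||src @ W - tgt|| in Frobenius norm."""
    u, _, vt = np.linalg.svd(src.T @ tgt)
    return u @ vt

# Toy data: pretend `tgt` holds English anchor-word vectors and `src` holds
# the same words' vectors in another language's space, which here is simply
# an orthogonally rotated copy of the English space.
rng = np.random.default_rng(0)
tgt = rng.normal(size=(5, 3))
rot = np.linalg.qr(rng.normal(size=(3, 3)))[0]   # random orthogonal matrix
src = tgt @ rot

W = procrustes_align(src, tgt)
aligned = src @ W   # both spaces are now comparable for joint topic modelling
```

Once the spaces are aligned, documents in different languages can be represented with comparable vectors, which is what makes a single cross-lingual topic model feasible.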
Findings
The cross-lingual topic model findings reveal the topical characteristics of each country. On an aggregated level, health misinformation represents only a small portion of the COVID-infodemic. The time series results indicate that, for most countries, the infodemic curve fluctuates with the epidemic curve. In this study, this form of infodemic is referred to as “point-source infodemic”. The second type of infodemic is continuous infodemic, which is seen in India and the United States (US). In those two countries, the infodemic is predominantly caused by political misinformation; its temporal distribution appears to be largely unrelated to the epidemic development.
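The abstract does not specify the time-series technique used to compare the infodemic and epidemic curves; one simple way to check whether a "point-source" infodemic tracks the epidemic is lagged Pearson correlation. A minimal sketch, assuming NumPy; the daily counts below are invented toy data, not the study's series.

```python
import numpy as np

def best_lag(infodemic, epidemic, max_lag=7):
    """Return (lag, r): the lag in days at which the infodemic series
    correlates most strongly with the epidemic series, and that Pearson r.
    A positive lag means the infodemic trails the epidemic."""
    n = len(epidemic)
    scores = [(lag, np.corrcoef(infodemic[lag:], epidemic[:n - lag])[0, 1])
              for lag in range(max_lag + 1)]
    return max(scores, key=lambda t: t[1])

# Toy daily counts: the misinformation curve echoes the case curve 3 days later.
epidemic = np.array([1, 4, 9, 16, 9, 4, 1, 1, 1, 1], dtype=float)
infodemic = np.roll(epidemic, 3)
lag, r = best_lag(infodemic, epidemic)
```

A high correlation at a short lag is consistent with a point-source infodemic; a flat correlation profile across lags would match the continuous, politically driven pattern described for India and the US.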
Originality/value
Despite the growing attention given to misinformation research, existing scholarship is dominated by single-country or mono-lingual research. This study takes a cross-national and cross-lingual comparative approach to investigate the problem of online misinformation. This paper demonstrates how the technological barrier of cross-lingual topic analysis can be overcome with aligned word-embedding algorithms.
Peer review
The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-09-2020-0417
This paper aims to understand the popular themes of coronavirus disease 2019 (COVID-19)-related online misinformation in Bangladesh and to provide some suggestions to…
Abstract
Purpose
This paper aims to understand the popular themes of coronavirus disease 2019 (COVID-19)-related online misinformation in Bangladesh and to provide some suggestions to abate the problem.
Design/methodology/approach
This paper discusses online COVID-19-related misinformation in Bangladesh. Following thematic analyses, the paper discusses some dominant misinformation themes based on the data collected from three fact-checking websites of Bangladesh run by media professionals and scholars.
Findings
COVID-19-related online misinformation in Bangladesh has six popular themes: health, political, religious, crime, entertainment and miscellaneous. To curb misinformation, many initiatives have been taken so far that have produced little success. This paper briefly proposes the implementation of an experimental two-way misinformation prevention technique for a better result.
Originality/value
Acknowledging previous initiatives, this paper discusses the major themes and offers additional solutions to reduce online misinformation which would benefit academics as well as policymakers.
Informed by the third-person effects (TPE) theory, this study aims to analyze restrictive versus corrective actions in response to the perceived TPE of misinformation on…
Abstract
Purpose
Informed by the third-person effects (TPE) theory, this study aims to analyze restrictive versus corrective actions in response to the perceived TPE of misinformation on social media in the USA.
Design/methodology/approach
The authors conducted an online survey among 1,793 adults in the USA in early April. All participants were randomly enrolled in this research through a professional survey company. Structural equation modeling via Amos 20 was adopted for hypothesis testing.
Findings
Results indicated that individuals perceived others to be more influenced by misinformation about COVID-19 than they themselves were. Further, this perceptual gap was associated with public support for governmental restrictions and for corrective action. Negative affect toward health misinformation directly affected public support for governmental restrictions rather than corrective action. Support for governmental restrictions could further facilitate corrective action.
Originality/value
This study examined the applicability of TPE theory in the context of digital health misinformation during a unique global crisis. It explored the significant role of negative affect in influencing restrictive and corrective actions. Practically, this study offered implications for information and communication educators and practitioners.
Peer review
The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-08-2020-0386
This study focused on the impact of misinformation on social networking sites. Through theorizing and integrating literature from interdisciplinary fields such as…
Abstract
Purpose
This study focused on the impact of misinformation on social networking sites. Through theorizing and integrating literature from interdisciplinary fields such as information behavior, communication and relationship management, this study explored how misinformation on Facebook influences users' trust, distrust and intensity of Facebook use.
Design/methodology/approach
This study employed quantitative survey research and collected panel data via an online professional survey platform. A total of 661 participants in the USA completed this study, and structural equation modeling (SEM) was used to test the theoretical model using Amos 20.
Findings
Based on data from an online questionnaire (N = 661) in the USA, results showed that information trustworthiness and elaboration, users' self-efficacy of detecting misinformation and prescriptive expectancy of the social media platform significantly predicted both trust and distrust toward Facebook, which in turn jointly influenced users' intensity of using this information system.
Originality/value
This study contributes to the growing body of literature on information and relationship management and digital communication in several important respects. First, it disclosed the underlying cognitive, psychological and social processing of online misinformation and addressed strategies for future system design and behavioral intervention against misinformation. Second, it systematically examined trust and distrust as cognitive and affective dimensions of the human mindset, encompassed the different components of online information behavior and enriched our understanding of how misinformation affects the public's perceptions of the information system in which it appears. Last but not least, it advanced the relationship management literature by demonstrating that a trustful attitude exerted a stronger influence on the intensity of Facebook use than distrust did.
Peer review
The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-04-2020-0130
Misinformation can have lasting impacts in the management and control of a public emergency. The purpose of this paper is to demonstrate how misinformation flows and how…
Abstract
Purpose
Misinformation can have lasting impacts in the management and control of a public emergency. The purpose of this paper is to demonstrate how misinformation flows and how user characteristics can shape such flows in the context of a violent riot in Singapore.
Design/methodology/approach
The authors apply the two-step flow theory and use a mixed methods approach, involving wrangling Twitter data and descriptive analysis, to develop and analyse two corpora of misinformation related to the riot.
Findings
The findings are mostly consistent with the two-step flow theory, in that misinformation flows to the masses from opinion leaders (as indicated by measures such as online social influence and the followers/following ratio). In the presence of misinformation, tweets opposing such misinformation may not always come from opinion leaders.
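The followers/following ratio mentioned as an opinion-leader indicator is straightforward to operationalise. A minimal sketch in plain Python; the account names and counts below are invented for illustration, not drawn from the study's Twitter corpora.

```python
# Rank accounts by followers/following ratio, one proxy for opinion-leader
# status used in two-step flow analyses. All figures here are hypothetical.
accounts = {
    "news_outlet": {"followers": 500_000, "following": 200},
    "influencer":  {"followers": 80_000,  "following": 1_000},
    "local_user":  {"followers": 150,     "following": 300},
}

def ff_ratio(acct):
    # Guard against accounts that follow nobody.
    return acct["followers"] / max(acct["following"], 1)

ranked = sorted(accounts, key=lambda name: ff_ratio(accounts[name]), reverse=True)
# Accounts at the head of `ranked` are the likelier opinion leaders.
```

In practice such a ratio would be combined with other influence measures (retweet reach, centrality) rather than used alone.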
Practical implications
The authors' work furthers knowledge about how misinformation goes viral, which provides practical implications to help policymakers and scholars understand and manage the dynamics and pitfalls of misinformation during an emergency.
Originality/value
This paper tackles the problem of misinformation in public emergencies using a mixed methods approach and contributes to ongoing theoretical work on managing online misinformation especially in public emergencies and crises.
The World Wide Web is a potentially powerful channel for misinformation. Focusing upon scholarly misconduct as a source of misinformation, examines the potential impact of…
Abstract
The World Wide Web is a potentially powerful channel for misinformation. Focusing upon scholarly misconduct as a source of misinformation, this paper examines the potential impact of misinformation on the Web. Floridi has suggested three methods of countering misinformation on the Web: quality certification of information sources; limiting monopolies controlling information resources on the Web; and greater information literacy among Web users. Focus groups composed of LIS faculty and research students in Singapore discussed this topic. Members of the groups felt that there was sufficient motivation for trying to publish the results of scholarly misconduct on the Web. Group members agreed that greater information literacy was a good way to counter misinformation. They did not believe that quality certification would stop misinformation, and feared that a certifying group could become a censoring body. Focus group members said that greater plurality would decrease misinformation. Some argued that large and prestigious publishers should be welcomed on to the Web rather than opposed.
With the outset of automatic detection of information, misinformation, and disinformation, the purpose of this paper is to examine and discuss various conceptions of…
Abstract
Purpose
With the outset of automatic detection of information, misinformation, and disinformation, the purpose of this paper is to examine and discuss various conceptions of information, misinformation, and disinformation within philosophy of information.
Design/methodology/approach
The examinations are conducted within a Gricean framework in order to account for the communicative aspects of information, misinformation, and disinformation as well as the detection enterprise.
Findings
While there often is an exclusive focus on truth and falsity as that which distinguish information from misinformation and disinformation, this paper finds that the distinguishing features are actually intention/intentionality and non-misleadingness/misleadingness – with non-misleadingness/misleadingness as the primary feature. Further, the paper rehearses the argument in favor of a true variety of disinformation and extends this argument to include true misinformation.
Originality/value
The findings are novel and pose a challenge to the possibility of automatic detection of misinformation and disinformation. In particular, the notions of true disinformation and true misinformation, as varieties of disinformation and misinformation, force the true/false dichotomy between information and mis-/disinformation to collapse.
Lauren A. Monds, Helen M. Paterson and Keenan Whittle
Operational debriefing and psychological debriefing both involve groups of participants (typically from the emergency services) discussing a critical incident. Research on…
Abstract
Purpose
Operational debriefing and psychological debriefing both involve groups of participants (typically from the emergency services) discussing a critical incident. Research on post‐incident debriefing has previously raised concerns over the likelihood that this discussion may affect not only psychological responses, but also memory integrity. It is possible that discussion in this setting could increase susceptibility to the misinformation effect. This paper seeks to address these issues.
Design/methodology/approach
The aim of this study was to investigate whether including a warning to the debriefing instructions about the possibility of memory contamination could reduce the misinformation effect. Participants viewed a stressful film, and were assigned to one of three conditions: debriefing with standard instructions, debriefing with a memory warning, or an individual recall control condition. Free recall memory and distress for the film were assessed.
Findings
Results indicate that participants in both debriefing conditions reported significantly more misinformation than those who did not participate in a discussion. Additionally, it was found that the warning of memory contamination did not diminish the misinformation effect.
Originality/value
These findings are discussed with suggestions for the future of debriefing, with a particular focus on the emergency services.
Misinformation on the Web has the potential to distort the learning of higher education students. Research with faculty, research students and taught students showed that…
Abstract
Misinformation on the Web has the potential to distort the learning of higher education students. Research with faculty, research students and taught students showed that higher education students are naïve about the problem of misinformation. They believe they can identify it and do not make extra effort to check the sources of their information. Greater information literacy is advocated as one means of countering misinformation. Other suggested controls, such as certification of sources, and greater plurality of the ownership of information sources, receive less support. We do not know how much misinformation is on the Web, so determining the cost‐benefit of proposed solutions is still not possible.
The purpose of this paper is to treat disinformation and misinformation (intentionally deceptive and unintentionally inaccurate misleading information, respectively) as a…
Abstract
Purpose
The purpose of this paper is to treat disinformation and misinformation (intentionally deceptive and unintentionally inaccurate misleading information, respectively) as a socio-cultural technology-enabled epidemic in digital news, propagated via social media.
Design/methodology/approach
The proposed disinformation and misinformation triangle is a conceptual model that identifies the three minimal causal factors occurring simultaneously to facilitate the spread of the epidemic at the societal level.
Findings
Following the epidemiological disease triangle model, the three interacting causal factors are translated into the digital news context: the virulent pathogens are falsifications, clickbait, satirical “fakes” and other deceptive or misleading news content; the susceptible hosts are information-overloaded, time-pressed news readers lacking media literacy skills; and the conducive environments are polluted poorly regulated social media platforms that propagate and encourage the spread of various “fakes.”
Originality/value
The three types of interventions – automation, education and regulation – are proposed as a set of holistic measures to reveal, and potentially control, predict and prevent further proliferation of the epidemic. Partially automated solutions with natural language processing, machine learning and various automated detection techniques are currently available, as exemplified here briefly. Automated solutions assist (but do not replace) human judgments about whether news is truthful and credible. Information literacy efforts require further in-depth understanding of the phenomenon and interdisciplinary collaboration outside of traditional library and information science, incorporating media studies, journalism, interpersonal psychology and communication perspectives.